Category Archives: global climate change

Bradley excluded as Mann expert witness.

Some of you have already learned that CEI won summary judgment and will not be sued by Mann. The case against Simberg continues. The court also made a decision about expert witnesses, excluding 7 proposed by Mann’s side and 1 of the 2 proposed by the defendants. The reasons some were excluded made me laugh. But the exclusion I found most interesting is that of Raymond S. Bradley, D.Sc., the “B” in MBH. I’m going to mostly just post text from the ruling.

Dr. Bradley provides expert testimony about the findings from the MBH research, how global temperatures were calculated, and the use of proxy data in the hockey stick research. Bradley Rep. ¶¶ 12-39. The Court will, however, exclude this testimony because Dr. Bradley fails to put forth the scientific technique or methodology underlying his expert opinion. See Sacchetti v. Gallaudet Univ., 344 F. Supp. 3d 233, 250-51
The type of proxy data used and whether it was properly incorporated into the hockey stick research is a fact at issue in this case. Dr. Bradley, as a co-author of the MBH98 and MBH99 studies, has first-hand knowledge of the data used in the hockey stick research. In arriving at his conclusion that the “technique was properly incorporated into, and used appropriately in, the MBH98 and MBH99 studies,” Dr. Bradley’s testimony skips a significant step that is required of all expert testimony. See Bradley Rep. ¶¶ 36-39. Dr. Bradley only speaks to how the proxy data was chosen but fails to establish the principles and methodologies he used to arrive at his conclusion that the data and technique in the MBH98 and MBH99 studies were properly incorporated and used appropriately. See Arias v. DynCorp., 928 F. Supp. 2d 10, 15-16 (D.D.C 2013) (indicating that an expert’s opinion can be based on their experience, but in those instances the expert must explain “how the experience leads to the conclusions reached, why the experience is a sufficient basis for the opinion and how that experience is reliably applied to the facts.”). The Court must, therefore, exclude Dr. Bradley’s testimony.

I find this very striking because many of Steve McIntyre’s discussions of those criticizing the Hockey Stick research address how principles and methodologies are applied to the following questions:

  • How do we know any proxy is a valid temperature proxy?
  • How do we know whether using a proxy “upside down” is proper or improper application of “the” methodology?
  • Has the methodology described been shown to give correct answers in previous applications? And finally:
  • Was the methodology described actually used to come up with the results in the paper?

And these questions about proxies don’t even get us to “the trick”, which gets to the question of whether there is an accepted methodology for splicing different sorts of data together.

Often the response to this is pretty much “other people have gotten the same thing”. Or alternately, “You don’t know enough to be able to understand the nuances of making these choices.”

But of course, that’s not a satisfactory answer in court! Because, evidently, in court, the judge’s position is “Yep. But you supposedly do. So explain those nuances”. And “Tell us how trained people do these analyses.” Both certainly ought to be possible. And if science is systematic and based on principles, facts, observations and techniques, someone should be able to explain the nuances or tell others how to get the answer!

In fact, if you can’t describe or outline a proper method, it’s clearly silly to claim you can determine if the method used to get a result was proper or improper.

Yet it appears Bradley (the B in MBH) did not describe proper principles or techniques to select proxies. This, despite the fact that his proposed role was to tell the jury that proper principles and techniques were used. And, moreover, it appears that describing what these principles and techniques are is one of the things the law requires of an expert witness.

Other proposed expert witnesses were rejected for similar reasons. But to me, Bradley’s was the most interesting rejection of all.

Fuego Erupted: It’s a doozy!

As you may know, temperature dips are often attributed to volcanic eruptions. Fuego just erupted. The BBC reports ash flew as high as 6 km. This is evidently larger than the mid-70s eruption that caused the dip I discussed a while back; see below. The figure only shows forcings, but temperatures are known to be affected.

Those who think those swings “just happen”, or might be caused by El Nino or La Nina have probably never compared the big dips to the periods when major stratospheric volcanic eruptions occurred. These were discussed a while back and shown below:

You can scan up and down and begin to notice that, yes, as we have all heard, major stratospheric volcanic eruptions affect the global mean surface temperature. In fact, part of the proof that GCMs are able to predict some variations in GMST arising from variations in forcings is that they do predict the dip in temperature after eruptions like Pinatubo.

The phenomenology is understood: After a major stratospheric volcanic eruption, the stratosphere fills with aerosols. Radiative forcing decreases, and the global surface temperature drops for a while. Not surprisingly, 7 year trends with their start dates roughly 7 years or less before the eruption tend to dip.
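That last point about trailing trends is easy to see with a toy calculation. The sketch below is entirely synthetic: a flat annual series with an invented Pinatubo-style dip (size, timing, and recovery are made-up stand-ins, not real data), with 7-year OLS trends computed for different start years.

```python
import numpy as np

# Flat synthetic temperature series with a 3-year dip starting in 1991,
# standing in for a major stratospheric eruption. All values are invented.
years = np.arange(1980, 2001)
temps = np.zeros(len(years))
temps[years >= 1991] -= 0.3   # eruption-year cooling (assumed 0.3 C)
temps[years >= 1994] += 0.3   # recovery ~3 years later

def trend(y0):
    """OLS slope (C/yr) of the 7-year window starting at year y0."""
    mask = (years >= y0) & (years < y0 + 7)
    return np.polyfit(years[mask], temps[mask], 1)[0]

# Windows starting shortly before the eruption contain the dip and slope
# downward; windows well clear of the eruption are flat.
print(trend(1986) < trend(1980))  # True: the pre-eruption window dips
```

Nothing deep here, but it shows why 7-year trends with start dates up to roughly 7 years before an eruption tend to dip.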

I guess we’ll see what happens this year.

Tom Fuller: Lukewarmers’ way.

Guest Post by Tom Fuller.

Lucia was kind enough to invite me to submit a guest post promoting my new book, “The Lukewarmer’s Way: Climate Change for the Rest of Us.”

Now that I’m here I feel a little awkward. Many of the regulars here know more about climate science than I do–many of you contributed in a major way to my education on the subject. And, one, of, you, knows, more, about, the, proper, use, of commas, as, well.

My book isn’t about climate science per se. Nor is it about grammar. It is about explaining a third position in the climate debate, one that is neither alarmist nor skeptical. As Lukewarmers are getting a bit more press than previously and as a lot of the press is as wrong about us as it is about everything climate, I am hoping my book will serve as a flagpost that both identifies us and locates us accurately on the spectrum of climate opinions.

Of course, the simplest definition was given by Steve Mosher, who frequently comments on this blog: “Given an over/under on sensitivity of 3C, lukewarmers will take the under.”

Over at the blog Making Science Public, Brigitte Nerlich spent some time trying to figure out who Lukewarmers are, what we actually think, and how we’re different from skeptics and warmists. After a lot of discussion, it turns out that we agree with the science, that there is an A in AGW, but that we also think sensitivity is lower than warmists do. Not much of a revelation there.

One of the commenters on the thread is one of my favorite humans, Lucia Liljegren of The Blackboard. She pursues the topic in greater depth here, referring to Tamsin Edwards’ post in The Guardian, and was kind enough to mention me.

Matt Ridley has adopted the mantle of Lukewarmer and seemed to expect that we would all fall in line behind him. In comments at my blog he seemed to be a bit miffed that that didn’t happen. Similarly, Pat Michaels has taken on the label and is reportedly coming out with a book called ‘The Lukewarmer’s Manifesto,’ which we manifestly don’t have and probably don’t need.

So what is a Lukewarmer? Let’s start with how the opposition describes us. This is what ATTP writes: “The fundamental problem I have with the Lukewarmer position is that it appears to be based on the idea that everything could be fine, therefore let’s proceed as if it will be fine. That’s why Eli Rabett calls them Luckwarmers – we’ll be lucky if they’re correct.”

On the other side of the spectrum, Shub Niggurath chimes in: “The term ‘lukewarmer’ has some of its roots in some skeptics trying to distance themselves from the extreme portrayal that was directed at skeptics. These skeptics tried to hijack and smuggle away whatever acceptable facets there were, so they could be spared the smearing and tarring. Skeptics would then be left holding only greenhouse gas skepticism, contrails, US govt agency meddling etc. In reality lukewarmers are just skeptics, they are a timid bunch that’s all.”

Let’s contrast that with what Lucia says and what I say.

What I say in the Foreword to my book is, “Briefly, what I came to think was that climate change is real. It is probable that humans contributed to the dramatic warming of the last quarter of the 20th century. One of those contributions consists of greenhouse gases from the burning of fossil fuels. Burning of fossil fuels will accelerate rapidly in the 21st Century. This is likely to pose a problem for continued human development, and it would be wise to take steps to reduce the emissions and prepare for the impact of future warming. However, climate change is not solely caused by human actions. Fossil fuel emissions of CO2 are not the only human contribution to climate change and may not even constitute more than half (deforestation, conventional pollution, black soot and preparation of cement may combine to have a larger effect). Alarmist scenarios of very large temperature rises and sea level increase are not supported by mainstream science. Climate change should be considered a serious issue deserving of our attention, but it is not a ‘planet-buster’.”

This is the short version of what our hostess (who I consider the first and foremost of Lukewarmers) has to say:

“Do you believe in AGW with the emphasis on the ‘A’ or what?

Yes. Lukewarmers believe in ‘GW’ and a noticeable proportion is due to ‘A’. (Some might be natural variations — it’s not something like 99.99999999% natural. The cut-off isn’t entirely clear since this is a label.)”

I somehow don’t think that is going to settle the issue definitively, but let’s move on from labels.

Because Lukewarmers don’t have a manifesto (at least until Pat Michaels tells us what we’re supposed to believe), the Lukewarmers I have read all have different ideas on what it means.

Here’s what I wrote in my book:

“What we seem (so far) to have in common is an understanding that the basic underpinnings of climate science are understandable, well-grounded and not controversial, plus the growing realization that one of the key components of an extended theory of climate change has been pushed too far.

That component is the sensitivity of our atmosphere to a doubling of the concentrations of CO2. The activists who have tried to dominate the discussion of climate change for more than twenty years have insisted that this sensitivity is high, and will amplify the warming caused by CO2 by 3, 4 or even 10 times the 1C of warming provided by a doubling of CO2 alone. The Skeptics who oppose them dispute the models that have shaped Alarmist views of sensitivity and argue instead that sensitivity is only about 1C or even less.

There is a middle ground. It is my hope that this book and the efforts of other Lukewarmers will take the conversation far ahead of the extremists on both sides of this important issue and leave them to their increasingly irrelevant and increasingly arcane tossing of insults and ignoring reality.”

As I have done more than my share of tossing insults, it should be clear that that last paragraph is more aspirational than descriptive.

Here is why I’m not an alarmist:

“Climate models have projected more warming than has occurred through 2014. Although they do a good job at charting the broad sweep of climate over the years, they do not get the fine level of detail needed to inform planning.

  1. A pause (or slowdown) in temperature rises has occurred just at the time that human emissions of CO2 have exploded. Almost one third of all human emissions have taken place since 1998, but warming has slowed dramatically during that same time frame. This is an argument against a high sensitivity of the atmosphere.
  2. Recent calculations of atmospheric sensitivity to increased concentrations of CO2 in the atmosphere are based on observations and provide values for sensitivity that are much lower than previous versions that were based on models.
  3. Sea level rise has increased from 2mm a year to 3mm a year in the past two decades. However, sea level rise shows no sign of accelerating beyond that and some indications are that it is returning to the 2mm annual increase of prior years.
  4. The physics-based approach to calculating climate change leaves calculations vulnerable to large biological or chemical responses to warming. Vegetative cover on earth has increased by 7% recently—how much additional CO2 will this draw out of the atmosphere? Physicist Freeman Dyson is frankly dismissive of models’ ability to capture the interaction between the various inputs into models.
  5. Temperatures estimated from before the modern record do not seem reliable, although part of the problem may be due to inappropriate statistical treatment of the data.
  6. Advocacy of an active policy response does not seem to rely on a confident view of science. Rather it suggests that Alarmists rely more on dismissing the opposition as ‘deniers’ and exaggerating the modest findings of climate science, precisely because the results of science to date are not alarming.”

“Here are reasons why I am not skeptical of human-caused climate change:

  1. The physics underlying the basics of climate science are utterly uncontroversial, over a century old and broadly agree with observations. CO2 is a greenhouse gas, it does interact with infrared radiation at certain wavelengths and prevents heat from escaping the atmosphere.
  2. Temperatures clearly have risen since 1880 by as much as 0.8C.
  3. One of the key predictions of climate science, that the Arctic would warm much faster than the rest of the planet, has come true. Arctic temperatures have climbed by 2C.
  4. Sea level rise, almost all ‘steric’ (expansion of the water caused by heat) has increased from 2mm to 3mm per year.
  5. Human emissions have grown from 236 million metric tonnes of carbon in 1880 to 1,160 in 1945 to 9,167 in 2010.
  6. Concentrations of CO2 in the atmosphere have grown from 280 ppm in 1880 to 400 ppm in 2014.
  7. The rate of increase in CO2 concentrations is increasing. The concentration of CO2 in the atmosphere grew by 0.75 ppm annually in the 1950s. In the last decade it has increased by 2.1 ppm per year.
  8. Growth in energy consumption is skyrocketing with the development of Asia and Africa. My projections show that we may use six times as much energy in 2075 as we did in 2010.
  9. Other impacts of human civilization, such as deforestation and other changes to land use, pollution and black soot landing on Arctic snows also contribute to warming.
  10. Published plans for construction of renewable energy infrastructure and nuclear power plants fall far short of what is needed to appreciably reduce emissions.
  11. The two principal drivers of emissions are population and GDP growth. Both are projected to rise considerably over the course of this century.”

As I said up top, many of the readers here are well-versed in one or more aspects of the many disciplines that combine to cover climate change. If that describes you, my book will not further your education. What I’m aiming to do is describe the Lukewarmer position, explain why it is based on mainstream science, and discuss possible policy options based on the Lukewarmer stance on climate change.

I hope you’ll buy it.



Note Guest Post by Tom Fuller. Posted by Lucia while he is in Taiwan.

Volcanoes: Ridley et al. (Bleg)

I’m reading “Total volcanic stratospheric aerosol optical depths and implications for global climate change” by D. A. Ridley, S. Solomon, J. E. Barnes, V. D. Burlakov, T. Deshler, S. I. Dolgii, A. B. Herber, T. Nagai, R. R. Neely III, A. V. Nevzorov, C. Ritter, T. Sakai, B. D. Santer, M. Sato, A. Schmidt, O. Uchino, and J. P. Vernier.

It’s an interesting paper. I have a big question, and I’m hoping that some readers might be able to point me to resources. Briefly:

Is there a brief resource that lists the volcanic forcings used by the groups who contributed projections to the AR5?

Specifically, I want to know which groups ‘froze’ volcanic forcings after 2000 and which did not. For groups that did not freeze volcanic forcings in 2000, I’d be interested in knowing the year they did freeze them, and at what level those forcings were frozen.

The reason I wish to know: Ridley seems to suggest “many” froze forcings. I don’t know what fraction ‘many climate model studies to date’ constitutes, nor which models. Are the “many climate model[s]” the AOGCMs used to create projections in the AR5? Or are they EMICs? Etc. Of the AOGCM runs, did ‘many’ mean 20% ‘froze’, or was it more like 80%?

My impression (which could be incorrect) is that Model E did not freeze volcanic aerosols in 2000. NASA’s Model E page says,

“The simulations (using TCADI) denoted by p3001 are continuations from 1991 of the p3 simulations including further updates to the volcanic and solar forcings through to 2012. The volcanic forcings are from a more recent version of the Sato et al. data than used in the historicalExt runs described below.”

But perhaps I’m jumping to the conclusion that these were the runs used for “projections” in the AR5, and perhaps they did ‘freeze’ volcanic aerosols in 2000 and then used that to create projections. (Note: if they ‘froze’ in 2012, one might expect the ‘mean’ for their trends from 2012-2014 to be lower than the earth’s, because the volcanic aerosol cleared somewhat. Any discrepancy would be undetectable, as it would be swamped by ‘weather noise’.)

I would like to know the year other modeling groups who contributed to the AR5 (or AR4) ‘froze’ volcanic aerosols.

BTW: I’m pretty sure it’s true that, of the models that included volcanic aerosols in the AR4, ‘many’ did freeze in 2000, though once again, I believe Model E did not. I think they froze later, around 2003 or so. That said: many models in the AR4 did not include volcanic aerosols at all; that makes it very difficult to speculate how the variation in aerosols in 2000 mattered, as one ought to consider two off-setting factors: (a) those models did not include the tail end of the ‘recovery’ from Pinatubo, which might be small but still not zero, and (b) they also did not include the ‘cooling’ effect of newer, but much smaller, eruptions. Nevertheless, it would be interesting to have a listing.

I know someone is going to tell me I can hunt these down at PCMDI: I prefer not to slog away at PCMDI. What I want is for someone to tell me whether someone else has already gone through and at least has ‘yes/no froze in 2000’ information and/or ‘which year froze’. Or, if someone else wants to slog through PCMDI and find the year any particular group ‘froze’ volcanic aerosols, that would be lovely. This is not a task that requires high technical skills: just time. So if you are curious, and have time, you could look. 🙂

The effort is interesting because the degree to which one ‘ought’ to believe the discrepancy between either AR4 or AR5 models and observations can be ‘explained’ by projections based on ‘frozen in 2000’ aerosols depends on the fraction of modeling groups that froze volcanic aerosol forcings in their input files when creating the runs that actually contributed to the AR4 or AR5. I have no idea what this fraction is.

If “many” froze them means roughly 20% did, then the discrepancy between the model mean and observations is difficult to explain by the difference in ‘real’ and ‘modeled’ volcanic aerosols. In contrast, if 80% froze them, we can test that too. We can also dig through and see whether there is any systematic difference between post-2000 trends in models that did ‘freeze’ volcanic aerosols and those that did not. (Assuming some did and some did not, of course.)
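If someone does assemble that listing, the comparison itself is trivial bookkeeping. Here’s a minimal sketch; the trend numbers for the two groups are entirely invented placeholders (real values would come from slogging through PCMDI), purely to show the shape of the calculation:

```python
import numpy as np

# Hypothetical 2000-2014 trends (C/decade) for runs from models that froze
# volcanic forcing in 2000 vs. models that kept updating it. Made-up values.
froze_2000 = np.array([0.22, 0.25, 0.21, 0.24])
updated    = np.array([0.15, 0.18, 0.14, 0.17])

# Systematic offset between the two groups of runs
diff = froze_2000.mean() - updated.mean()
print(round(diff, 3))
```

If a real offset like this showed up, it would support the ‘frozen aerosols’ explanation; if the two groups looked alike, it would not.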

I have a few other comments that I could make on this paper. The main one that first comes to me is this:

When estimating the likely effect of forcings on trends in the 2000s, it strikes me as a bit ambitious to simply do everything relative to the volcanic forcing in 2000. The reason is that the atmosphere does have a response time. So, if one wants to find out the effect of incorporating forcings in the lower stratosphere into the recent trends, one ought to know what those were in past years before deciding the effect of any increase in Stratospheric Aerosol Optical Depth (SAOD) during the 2000s. It appears no AERONET data exist prior to 1995; I’ll take the researchers’ word that the data are tenuous prior to 2000. But this is a factor we should consider, and possibly one ought to widen the range of possible effects on temperature in that paper. (Perhaps doubling it.)

That said: it’s an interesting exercise, and worth being aware of. It could potentially explain the discrepancy between the model mean and the observed trend. (Which would mean that the discrepancy with the projected model mean trend of “about 0.2 C/decade” existed and still does, as opposed to ‘did not exist’.)

Temperature: Compare to AR4.

I haven’t been following climate so much. But I noticed ‘rumors’ on twitter that the hiatus was over. This motivated me to whip out my script and see how temperatures have progressed. As the AR5 is now “officially” out, I thought it might be useful to compare the temperatures to the previous prediction from the AR4, which is now so “yesterday”. Nevertheless, many of the AOGCMs used to form the basis of the AR5 are more-or-less the same as those used in the AR4, so I thought some of you might enjoy seeing the observations shown next to the multi-model mean from the AR4.
[Figure: temperatures since 2000 compared to the AR4 multi-model mean]

The above shows annual average temperature along with the IPCC projection of annual average temperatures.

Recall: In the AR4, the IPCC projected warming to proceed at about 0.2 C/decade for the first two (or was it three?) decades of this century. Whatever it was, we are quite a large fraction of the way there. Also, in the AR4, like it or not, the IPCC authors elected to communicate uncertainty using ±1σ error bars around a multi-model mean. To match them I show the observed temperatures on a similar graph. You can see the present temperature, while certainly ‘up’ relative to recent temperatures, is just brushing the lower -1σ boundary for temperature itself. If we use a ‘red noise’ model for the time series to compute uncertainty intervals for trends consistent with observations, the 0.2 C/decade warming trend is well outside the 2σ uncertainty intervals consistent with the observations.
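For anyone who wants to reproduce the flavor of that ‘red noise’ calculation, here is a minimal sketch using synthetic AR(1) data (the trend, noise level, and autocorrelation are all assumed for illustration; this is not my actual script). It fits an OLS trend, estimates the lag-1 autocorrelation of the residuals, and inflates the trend standard error by the usual sqrt((1+r)/(1-r)) effective-sample-size factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 180                          # 15 years of monthly anomalies
t = np.arange(n) / 120.0         # time in decades

# AR(1) ('red') noise around an assumed 0.1 C/decade trend
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.6 * eps[i - 1] + rng.normal(0, 0.1)
y = 0.1 * t + eps

slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)
r = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation

# White-noise standard error of the slope, then the red-noise inflation
se_white = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
se_red = se_white * np.sqrt((1 + r) / (1 - r))

print(se_red > se_white)   # the red-noise interval is wider
```

With positively autocorrelated residuals the interval widens, which is why a 0.2 C/decade projection can fall outside the red-noise 2σ band even when it sits inside a naive white-noise band.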

Of course, this is now old news: Even the IPCC has admitted a ‘hiatus’. Even if many won’t quite want to admit the models are somehow “wrong”, the peer reviewed literature is now permitting people to observe that the temperatures are not exactly following the mean trend. But it’s well worth comparing observations to projections now that those projections are (likely) to be deemed “old news”.

Anyway: I’m rather unconvinced ‘the hiatus’ is over. That said: it’s a bit difficult to say for sure because the definition of ‘hiatus’ is rather vague. It does seem to me we are going to need to see quite a bit of warming to overcome the doubts of those who think models are not well suited to predicting warming over periods as long as 20 or 30 years.

More on Estimating the Underlying Trend in Recent Warming

This post is an update on my earlier post on the same subject.  In my earlier post I regressed linearly detrended Hadley global temperature anomalies since 1950 against a combination of low pass filtered volcanic forcing, solar forcing, and ENSO influence.  For ENSO influence, I defined a low pass filtered function of the detrended Nino 3.4 index, which I called the Effective Nino Index (ENI).

My regression results suggested the rate of warming since 1997 has slowed considerably compared to the 1979 to 1996 period, contrary to the results for Foster & Rahmstorf (2011), which showed no change in the rate of warming since 1979.   There were several constructive (and some non-constructive) critiques of my post in comments made here at The Blackboard…. and elsewhere.  This post will address some of those critiques, and will examine in some detail how the choices made by F&R generated “no change in linear warming rate” results; results which are in fact not those best supported by the data.

Limitations of the Regression Models

It is important to understand what a regression, like those done by F&R or in my earlier post, can and can’t do.  If each of the coefficients that comes from the regression is physically reasonable/plausible, then the quality of the regression fit indicates:

1) whether the assumed form of the underlying secular trend is plausible, and

2) whether there are important controlling variables not included in the regression

The difference between the regression model and the data (that is, the model ‘residuals’) is an indication of how well the regression results describe reality: Is the form of the assumed secular trend plausible? Do the variables included in the regression plausibly control what happens in the real world?   However, if the coefficients from the regression output are not physically reasonable/plausible, then even a very good “fit” of the model to the data does not confirm that the shape of the assumed secular trend matches reality, and the regression results may have little or no meaning.

Some Substantive Critiques from My Last Post

The fundamental problem with trying to quantify and remove the influences of solar cycle, volcanic eruptions, and ENSO from the temperature record was pointed out by Paul_K, who noted that selection of a detrending function, which represents the influence of an unknown secular trend, is essentially circular logic.  The analyst assumes the form of the chosen secular function (how the secular function varies with time: linear, quadratic, sinusoidal, exponential, cubic, etc, or some combination) in fact represents the ‘true’ form of the secular trend.  The regression done using a pre-selected secular function form is then nothing more than finding the best combination of weightings of variables in the regression model which will confirm the form of the assumed secular trend is correct.

Hence, any conclusion that the regression results have “verified” the true form of an underlying trend is a bit circular… you can’t verify the shape of an underlying trend, you can only use the regression to evaluate if what you have assumed is a reasonable proxy for the true form of the secular trend.  In the case of F&R, the assumed shape of the secular trend was linear from 1979; in my post the assumed secular trend was linear from 1950.  Both suffer from the same circular logic.   F&R also allow both lag and sensitivity to radiative forcing to vary independently, which allowed their regression to specify non-physical lags and potentially non-physical responses to forcings, which together lead to the near perfect ‘confirmation’ of their assumed linear trend.   All of the regressions in this post, as well as in my original post, require that both solar and volcanic forcings use the same lag, though that lag is free to assume whatever value gives the best regression fit, even if the resulting lag appears physically implausible.

Nick Stokes suggested substituting a quadratic function (with the quadratic function parameters determined by the regression itself) and went on himself to compare the regression results for linear and quadratic functions for 1950 to 2012 and 1979 to 2012.   Like me, Nick used a single lag for both solar and volcanic influences.  Nick concluded that with a quadratic secular function, there is some (not a lot of) deviation from a linear trend post 1979, which varies depending on what temperature record is used.  Nick’s results are doubtful because simply choosing a quadratic secular function is just as circular as choosing a linear function.   Some of the lag constants Nick’s regression found for the 1975 to 2012 period (e.g., ~0.11) appear physically implausible (much too fast).

Tamino (AKA Grant Foster of F&R) made a constructive comment at his blog: a single lag constant for solar and volcanic influences (a “one box lag model”) was not the best representation of how the Earth is expected to react to rapid changes in forcing like those due to volcanoes, and that a two-box lag model with a much faster response to account for rapid warming of the land and atmosphere would be more realistic.  I have included this suggestion in my regressions.
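The one-box vs. two-box distinction is easy to illustrate. In the sketch below, each ‘box’ is a simple exponential smoother applied to a forcing pulse; the time constants and the fast/slow split are assumed purely for illustration (they are not Tamino’s or my fitted values):

```python
import numpy as np

def box_response(forcing, tau):
    """Exponential (one-box) lagged response, monthly time steps."""
    out = np.zeros_like(forcing, dtype=float)
    a = np.exp(-1.0 / tau)
    for i in range(1, len(forcing)):
        out[i] = a * out[i - 1] + (1 - a) * forcing[i]
    return out

forcing = np.zeros(120)
forcing[12:36] = -1.0   # Pinatubo-like 2-year negative pulse (assumed)

# One-box: single slow time constant (assumed 30 months)
one_box = box_response(forcing, tau=30.0)

# Two-box: 40% fast (land/atmosphere, ~3 months) + 60% slow (ocean) -- assumed split
two_box = 0.4 * box_response(forcing, tau=3.0) + 0.6 * box_response(forcing, tau=30.0)

# The two-box response reacts faster at the onset of the pulse
print(two_box[14] < one_box[14])
```

The point of the two-box form is exactly this faster initial reaction: rapid coolings from volcanic forcing show up in the land/atmosphere quickly, while the slow box carries the longer recovery.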

Commenter Sky claimed that basing ENI on tropical temperature responses was “a foolishness” (I strongly disagreed), but his comments prompted me to look for any significant correlation between the ENI and non-tropical temperatures at different lags, and I found that there is a very modest but statistically significant influence of the ENI on non-tropical temperatures at 7 months lag.  Incorporating both ENI and 7-month lagged ENI slightly improves the regression fit in all cases I looked at, and generates an estimated global response for ENI (not lagged 7 months) which is close to the expected value of half the response for the tropics.   (I will describe the (modest) modifications I made based on Tamino’s suggestion and on a 7-month lagged ENI contribution in a postscript to this post.)

Finally, Paul_K suggested that a way to avoid logical circularity was to try a series of polynomials in time, of increasing order, to describe the secular trend (with time=0 at the start of the regression model period) and with the polynomial constants determined by the regression itself.  The resulting regression fits can then be compared using a rational method like the AIC (Akaike Information Criterion) to determine the best choice of order for the polynomial (the minimum AIC value is the most parsimonious/best).  For a linear regression with n data points and M independent parameters, the AIC is given approximately by:

AIC = n * Ln(RSS) + 2*M

Where Ln is the natural log function and RSS is the Residual Sum of Squares from the regression (sometimes also called the “Error Sum of Squares”).  M includes each of the variables: solar, volcanic, ENI, lagged ENI, the secular function constants, and the constant (or offset) value.  Higher order polynomials should allow a better/more accurate description of secular trends of any nonlinear shape, but each added power in the polynomial increases the value of M, so a better fit (reduced RSS) is ‘penalized’ by an increase in M.   A modified AIC function, which accounts for a limited number of data points (called the AICc), is better when the ratio n/M is less than ~40, but this ratio was always >40 for the regressions done here.
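The AIC bookkeeping itself is simple. A minimal sketch on synthetic data (a mildly quadratic ‘secular trend’ plus white noise, with all magnitudes assumed), scoring polynomial orders with the approximate formula above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
t = np.linspace(0, 1, n)
# Assumed 'truth': a mildly quadratic secular trend plus white noise
y = 0.3 * t + 0.5 * t**2 + rng.normal(0, 0.05, n)

def aic_for_order(order):
    """AIC = n*Ln(RSS) + 2*M for a polynomial secular trend of given order."""
    coeffs = np.polyfit(t, y, order)
    rss = np.sum((y - np.polyval(coeffs, t))**2)
    m = order + 1          # polynomial coefficients including the constant
    return n * np.log(rss) + 2 * m

scores = {k: aic_for_order(k) for k in range(1, 6)}
print(scores[2] < scores[1])   # quadratic clearly beats linear here
```

The penalty term is what stops ever-higher orders from winning automatically: each extra power must reduce n·Ln(RSS) by more than 2 to pay for itself.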

AIC Scores for Polynomials of Different Order

The ‘best’ polynomial to use to describe the secular trend, based on the AIC, depends as well on whether or not you believe that the influences of volcanic forcing and solar forcing are fundamentally different on a watt/m^2 basis.  That is, if you believe that solar and volcanic forcings are ‘fungible’, then those forcings can be combined and the regression run on the combined forcing rather than the individual forcings.   In this case, the best fit post 1975 is quadratic.  Troy Masters (at his Troy’s Scratchpad blog, based on a suggestion from a commenter called Kevin C) has shown that summing the two forcings improves a regression model’s ability to detect a known change in the slope of secular warming in synthetic data.

If a regression is done starting in 1950 (as in my original post) with solar and volcanic forcings treated separately, then it appears the best, or at least most plausible polynomial secular trend is 4th order, which represents a ‘local minimum’ in AIC… lower than 5th order.  AIC scores of 6th order and above are smaller than 4th order,  but the regression constants for solar and volcanic influences do not “converge” on similar values for each higher order polynomial; they instead begin to oscillate, indicating that the higher order terms (which can simulate higher frequency variations) are beginning to ‘fit’ the influences of volcanoes and solar cycle, rather than a secular trend.  In any case, using  a 4th order polynomial for the regression starting in 1950 generates a much improved fit compared to an assumed linear secular trend.
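The regression structure described above amounts to one least-squares solve on a design matrix: polynomial time terms plus separate solar, volcanic, and ENI columns. The sketch below uses entirely synthetic inputs (invented forcing series and coefficients, and a 4th-order secular polynomial) purely to show the mechanics, not the real data or my actual fitted values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 756                                  # monthly, 1950-2012
t = np.arange(n) / 12.0                  # years since 1950

# Synthetic stand-ins for the explanatory series (all assumed)
solar = np.sin(2 * np.pi * t / 11.0)     # crude 11-year cycle
volcanic = np.zeros(n)
volcanic[500:530] = -2.0                 # one invented eruption pulse
eni = rng.normal(0, 1, n)

# Assumed 'true' secular trend and responses, plus noise
y = 1e-2 * t + 2e-4 * t**2 + 0.4 * solar + 0.15 * volcanic \
    + 0.08 * eni + rng.normal(0, 0.05, n)

# Design matrix: 1, t, t^2, t^3, t^4, solar, volcanic, ENI
X = np.column_stack([t**k for k in range(5)] + [solar, volcanic, eni])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The regression recovers the assumed solar and volcanic responses
print(round(beta[5], 1), round(beta[6], 2))
```

The caveat from the ‘Limitations’ section applies to this toy too: the fit recovers the coefficients because the assumed secular form matches the ‘truth’ that generated the data, which is exactly the circularity Paul_K pointed out.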

1975 to 2012

Figure 1 shows the AIC scores and lag constants for regressions from 1975 to 2012.

Figure1a 

The minimum AIC score is for a third order polynomial.  The corresponding regression coefficients (with +/- 2-sigma uncertainties) are shown below:

 

Volcanic                                0.14551 +/-0.0248

Solar Cycle                           0.47572 +/-0.2034

ENI                                         0.08253 +/-0.0143

7 Mo Lagged ENI                 0.03179 +/-0.0144

Linear Contribution             0.01335 +/-0.00897

Quadratic Contribution      0.0003753 +/- 0.000542

Cubic Contribution              (-8.655 +/-9.18)*10^(-6)

Constant                                0.02384 +/-0.0383

R^2                                         0.828

F Statistic                               306

Figure 2 shows the regression model overlaid with the secular component of the model.  The secular component is what is described by the above linear, quadratic, cubic contributions plus the regression constant.  Figure 3 shows the regression model overlaid with Hadley temperature anomalies.   The model matches the data quite well.  The residuals (Hadley minus model) are shown in Figure 4.  The residuals are reasonably uniform around zero, and show no obvious trends over time.

Figure2a

Figure3a

Figure4a

Figure 5 shows the individual contributions (ENSO, solar cycle, volcanic) along with their total.

Figure5a

Figure 6 shows the original and ‘adjusted’ Hadley data, where the influences for ENSO, solar cycle, and volcanoes have been subtracted from the original Hadley data.  I have included calculated slopes for 1979 to 1997 (inclusive) and 1998 to 2012 (inclusive).  The best (most probable) estimated trend for 1979 to 1997 is 0.0160 C/yr, while from 1998 to 2012 the best estimate for the trend is 0.0104 C/yr, corresponding to a modest (35%) reduction in the rate of recent warming.  (Edit: 0.0160 should have been 0.0171, and 0.0104 should have been 0.0108; the reduction is 37%.)
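The sub-period slopes quoted above are simple least-squares trends over each interval; here is a sketch of the calculation, with synthetic data standing in for the adjusted Hadley series:

```python
import numpy as np

def trend_per_year(years, temps):
    """Least-squares slope of temperature against time, in C/yr."""
    return np.polyfit(years, temps, 1)[0]

# Synthetic monthly series warming at exactly 0.015 C/yr, 1979-2012.
years = 1979.0 + np.arange(34 * 12) / 12.0
temps = 0.015 * (years - 1979.0)

early = (years >= 1979.0) & (years < 1998.0)   # 1979-1997 inclusive
late = (years >= 1998.0)                       # 1998-2012 inclusive
slope_early = trend_per_year(years[early], temps[early])
slope_late = trend_per_year(years[late], temps[late])
```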

Figure6a

 

1950 to 2012

Figure 7 shows the AIC scores and lag constants for regressions from 1950 to 2012.

Figure7a

The local minimum (best) AIC score is for a fourth order polynomial.   At orders 6 and above the AIC score continues to fall, but without convergence of solar and volcanic coefficients, which suggests to me that the higher order polynomials are beginning to interact excessively with the (higher frequency) non-secular variables we are trying to model, and the continued fall in AIC score is not indicative of a true improvement in accuracy of the higher order polynomials as a secular trend.  I adopted the fourth order polynomial as the most credible representation of the secular trend.  The corresponding regression coefficients (with +/- 2-sigma uncertainties) are shown below:

Volcanic                                0.1346 +/-0.0234

Solar Cycle                           0.3283 +/-0.170

ENI                                         0.0972 +/-0.0122

7 Mo Lagged ENI                 0.0245 +/-0.0121

Linear Contribution             0.00804 +/-0.00828

Quadratic Contribution       -0.000772 +/- 0.000542

Cubic Contribution              (2.97 +/-1.32)*10^(-5)

Quartic Contribution           (-2.7 +/-1.06)*10^(-7)

Constant                                -0.0137+/-0.0374

R^2                                         0.83691

F Statistic                               473

Figure 8 shows the regression model overlaid with the secular component of the model.  The secular component is what is described by the above linear, quadratic, cubic, and quartic contributions plus the regression constant.  Figure 9 shows the regression model overlaid with Hadley temperature anomalies.   The model matches the data quite well.  The residuals (Hadley minus model) are shown in Figure 10.

Figure8a

Figure9a

Figure10a

Figure 11 shows the original and adjusted Hadley data, where the influences for ENSO, solar cycle, and volcanoes have been subtracted from the original Hadley data.  I have included calculated slopes for 1979 to 1997 (inclusive) and 1998 to 2012 (inclusive).  The best estimate for the trend from 1979 to 1997 is 0.0164 C/yr, while from 1998 to 2012 the best estimate for the trend is 0.0084 C/yr, corresponding to a 49% reduction in recent warming.

 

Figure11a

 

But what if radiation is fungible?

The divergence between the regression diagnosed ‘sensitivity’ to changes in solar intensity and volcanic aerosols is both surprising and puzzling.   The divergence is reported (albeit to a smaller extent) in the 1950 to 2012 regressions as well as the 1975 to 2012 regressions. In each case, the regression reports a best estimate response to solar cycle forcing which is more than twice as high as volcanic response on a watts/M^2 basis.   Lots of people expect the solar cycle to contribute to total forcing in a normal (fungible) way.  Figure 12 (from GISS) shows that for climate modeling, the folks at GISS think there is nothing special about solar cycle driven forcing.

Figure12a

For the diagnosed divergence between solar and volcanic sensitivities to be correct, there must be an additional mechanism by which the solar cycle substantially influences Earth’s temperatures, beyond the measured change in solar intensity.  I think convincing evidence of such a mechanism (changes in clouds from cosmic rays, for example) is lacking, although I am open to being shown otherwise.  But if we assume for a moment that there really is no significant difference in the response of Earth to solar and other radiative forcings, which seems to me plausible, then the above regression models ought to be modified by combining solar and volcanic into a single radiative forcing.

When the regressions are repeated for 1975 to 2012 with a single combined forcing (the sum of individual solar and volcanic) the minimum AIC score is for a quadratic secular function (rather than cubic when the two are independent variables), but the big change is that the regression diagnoses a longer lag for radiative forcing and a stronger response to radiative forcing (which is of course dominated by volcanic forcing).  Figure 13 shows the AIC scores and lag constants for polynomials of different orders when solar and volcanic forcings are combined.  The minimum AIC score with the combined forcings (quadratic secular function, 649.2) is slightly higher than the minimum for separate forcings (cubic secular function, 648.2), which lends some support to higher sensitivity for solar forcing.

Figure13a

Figure 14 shows the regression model and Hadley data, and Figure 15 shows the Hadley data adjusted for ENSO and combined solar and volcanic forcing.

Figure14a

 

Figure15a

 

The 1998 to 2012 slope is 0.0072 C/yr, while the 1979 to 1997 slope is 0.0165 C/yr; the recent trend is 44% of the earlier trend.

Why did F&R get different results?

The following appear to be the principal issues:

1.  Allowing volcanic and solar lags to vary independently of the others.

2.  Accepting physically implausible lags.

Treating solar and volcanic forcings as independent, combined with number 1 above, seems to have some unexpected consequences.  Figure 16 shows the lagged and un-lagged volcanic forcing along with the un-lagged solar forcing.  The two major volcanic eruptions between 1979 and 2012 (El Chichon and Pinatubo) happen to occur shortly after the peak of a solar cycle.

 

Figure16a

The solar and volcanic signals are partially ‘aliased’ by this coincidence (that is, both acting in the same direction at about the same time), while the decline in solar intensity following the solar cycle peak in ~2001 did not coincide with a volcano.  Since there was a considerable drop in rate of warming starting at about same time as the most recent solar peak, and since the regression can “convert” some of the cooling that was due to volcanoes into exaggerated solar cooling due to aliasing, the drop in the rate of warming after ~2000 can be ‘explained’ by the declining solar cycle after ~2001.  In other words, aliasing of solar and volcanic cooling in the early part of the 1975-2012 period, combined with free adjustment of ‘sensitivity’ to the two forcings independently, gives the regression the flexibility to increase sensitivity to the solar cycle by reducing the sensitivity to volcanoes, so that the best fit to an assumed linear secular trend corresponds to a larger solar coefficient.  Allowing the solar and volcanic forcing to act with different lags further increases the ability of the regression to increase solar influence and diminish volcanic influence.  All of which contributes to the F&R conclusion of “no change in underlying warming trend.”

Of course, the same aliasing applies to the regression for 1950 to 2012, but since there are more solar cycles and more volcanoes in the longer analysis, and those do not alias each other well, the regressions for the longer period report a smaller difference in ‘sensitivity’ to solar and volcanic forcings.  For example, with an assumed linear secular trend (similar to F&R, but using one lag for both solar and volcanic forcings), the 1975 to 2012 regression coefficients are: volcanic = 0.1294, solar = 0.5445, while for the best fit regression from 1950 (4th order polynomial secular function) the coefficients are: volcanic = 0.1346, solar = 0.3283.

It will be interesting to see how global average temperature evolves over the next  6-7 years as the current solar cycle passes its peak and declines to a minimum.  If F&R are correct about the large, lag-free influence of the solar cycle, this should be evident in the temperature record…. unless a major volcano happens to erupt in the next few years!

Conclusions

There is ample evidence that once natural variation from ENSO, solar cycles and volcanic eruptions is reasonably accounted for, the underlying ‘secular trend’ in global average temperatures remains positive.   But it is also clear that the best estimate of that secular trend shows a considerable decline in warming compared to the 1979 to 1997 period.  The cause for that decline in the rate of underlying warming is not known, but factors other than ENSO, volcanic eruptions, and the solar cycle are almost certainly responsible.

 

Postscript

In my last post I showed how the low pass filtered Nino 3.4 index correlates very strongly with the average temperature between 30N and 30S, and can account for ~75% of the total tropical temperature variation.   To check for the possible influence of ENSO (ENI) on temperatures outside the tropics, I first calculated the “non-tropic history” from the Hadley global history and Hadley tropical history (non-tropics = 2* global – tropics).  I then checked for correlation between the non-tropics history and the ENI (which is calculated from the low pass filtered Nino 3.4 index) at each monthly lag from 0 to 12 months.  Significant correlation is present starting at ~4 months lag through ~11 months lag, with the maximum correlation at 7 months lag from the ENI.  I then incorporated this 7-month lagged ENI as a separate variable in the regressions discussed above, and found very statistically significant correlation in all regressions.  The coefficient was remarkably consistent in all regressions, independent of the assumed secular trend function, at ~1/3 that of the global influence of ENI itself.  (The ENI coefficient itself was also remarkably consistent.)  The 7-month lagged ENI influence was significant at >99.9% confidence in all regressions.  The increased variable count was included in the calculation of AIC.
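The lag scan described above can be sketched as follows. This is an illustrative version (the function and variable names are mine, not from the original analysis): correlate the non-tropics series against the ENI at each lag from 0 to 12 months and pick the strongest correlation:

```python
import numpy as np

def best_lag(series, driver, max_lag=12):
    """Correlate series(t) against driver(t - lag) for lag = 0..max_lag
    months; return the lag with the strongest correlation, plus all of
    the correlations."""
    corrs = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            a, b = series, driver
        else:
            a, b = series[lag:], driver[:-lag]
        corrs[lag] = float(np.corrcoef(a, b)[0, 1])
    top = max(corrs, key=lambda L: abs(corrs[L]))
    return top, corrs
```

Applied to the non-tropics series and the ENI, the text above reports the strongest correlation at a 7-month lag.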

I used a simple dual-lag response model (two-box model) instead of a single lag response because land areas (and the atmosphere) have much less heat capacity than the ocean, and so react much more quickly to applied forcing.  The reaction of land temperatures would be expected to be roughly an order of magnitude faster based on relative (short term) heat capacities, which on a monthly basis makes the land response essentially immediate (lag less than a few months).  If land and ocean areas were thermally isolated, we would expect the very fast response of land to represent ~30% of the total, and the slower response of the ocean to represent ~70%; that is, in proportion to the relative surface areas.  However, land and ocean are not thermally isolated, and the fast land response is reduced by the slower ocean response because heat is exchanged between land and ocean fairly quickly.  Modeling this interaction would appear to be a non-trivial task, but I guessed that a simple approximation is to reduce the relative weighting of land and increase that of the ocean, and assumed a 15% ‘immediate’ response and an 85% lagged response.  The lag constants optimized in the regression applied to only 85% of the forcing; 15% of the forcing was considered essentially immediate.  The above appears to improve R^2 values a bit in nearly all regressions I tried, but does not impact the conclusions: the underlying secular trend appears lower from 1998 to 2012 than from 1979 to 1997.
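A minimal sketch of this dual-lag weighting, assuming the 15%/85% split described above applied to the single-constant lag filter:

```python
def dual_lag_response(forcing, K, fast_frac=0.15):
    """Two-box response: fast_frac of the forcing acts essentially
    immediately (land/atmosphere), while the remainder responds through
    the single-constant low-pass lag filter (ocean)."""
    response = []
    ef = 0.0                       # lagged (ocean) component state
    for f in forcing:
        ef = ef * (1.0 - K) + f * K
        response.append(fast_frac * f + (1.0 - fast_frac) * ef)
    return response
```

For a step in forcing, the response jumps immediately to 15% of the final value plus the first month of lagged response, then relaxes toward the full value at a rate set by K.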

 

Estimating the Underlying Trend in Recent Warming

Introduction

Foster & Rahmstorf (1) used a multiple regression model based on solar variation, volcanic aerosols, and ENSO to estimate how those factors have influenced surface temperature since 1979; the paper is basically a rehash, with some changes, of earlier published work by others (see for example http://www.agci.org/docs/lean.pdf and references). F&R adjusted measured changes in Earth’s surface temperature based on the results of their regression model, and claimed that the apparent slowdown in warming over the past 10+ years is entirely the result of natural variation, and that there has been absolutely no change in the underlying (secular) rate of warming since 1979. Oh yes, they also concluded that it is critical for people to stop burning fossil fuels immediately…. though it is not immediately obvious how a multiple regression model on global temperatures leads to that conclusion.

Many people found the F&R paper to be technically weak, and its conclusions doubtful; my personal evaluation was that the paper was little better than a mindless curve-fit exercise. In spite of the coverage the paper got in some publications, I would normally prefer to ignore such things. But since the F&R paper seems to now have taken on the character of an urban legend, and is pointed to by warming-concerned folks whenever someone notes that warming has been much slower recently, I figured any reasoned critique of F&R is a useful endeavor.

F&R considered the influence of the solar cycle, a change of about 0.1% in solar intensity from peak to trough of the cycle, separately from the effects of stratospheric volcanic aerosols, even though both are expected to change the intensity of solar radiation reaching the Earth’s troposphere and surface. (Why should solar intensity change and volcanic aerosol forcing not be fungible?) Like some earlier publications, F&R also (strangely) limited their analysis to post 1979, even though data on volcanoes, solar cycles, and ENSO over longer periods is available. F&R concluded that variation in solar intensity has much greater influence on surface temperature, on a degree/watt/M^2 basis, than an equivalent change due to stratospheric volcanic aerosols, and further concluded that the response of surface temperature to solar intensity variation is essentially instantaneous (no lag!), while stratospheric aerosols influence surface temperature only with considerable lag. Odd, very odd.

Here I offer what I believe is a more robust regression analysis of the same three variables (volcanic aerosols, ENSO, and solar cycle) on temperature evolution since 1950. I will show:

1) An improved index for accounting for ENSO.

2) The best regression fit is found when volcanic aerosols and solar intensity variation are lagged considerably due to thermal inertia of the system. The estimates for the influence of both (on a degrees/watt/M^2 basis) are very similar, not dramatically different.

3) After taking ENSO, volcanic aerosols, and solar cycles into account, the best estimate rate of warming from 1997 to 2012 is less than 1/3 the rate of warming from 1979 to 1996.

 

 

I. A Slightly Improved Method for Estimating ENSO Influence on Temperature Trends

The Nino 3.4 index is the monthly average temperature anomaly, in Celsius degrees, for the roughly rectangular area of the Pacific ocean bounded by 120 and 170 degrees west, 5 degrees north and 5 degrees south. This region represents only about 2.5% of the surface area of the Earth’s tropics (~30 north to ~30 south), yet is known to be strongly correlated with the ENSO and with changes in average temperature in the tropics. (For a more complete description see: http://www.ncdc.noaa.gov/teleconnections/enso/indicators/sst.php). Some months ago in a comment at The Blackboard, Carrick showed that Nino 3.4 shows little or no correlation, at any lag period, with temperatures outside of the tropics.  That is, ENSO strongly influences tropical temperatures but does not influence temperatures outside the tropics very much.  I concluded that if one is going to “account for” the influence of ENSO on global average temperatures using Nino 3.4, then it would be best to estimate the influence based on the variation in temperature anomaly for the tropics only. Eliminating uncorrelated temperature data from higher latitudes ought to improve signal to noise ratio, and yield a more accurate estimate of ENSO driven temperature changes.

The Nino 3.4 index, lagged two or three months, correlates reasonably well with temperature variation in the tropics, and can account for ~65% – 70% of the measured variation in average temperature. But can Nino 3.4 actually provide more information than gleaned from the 2 or 3 month lag correlation?

The answer seems to be that there is a bit more information available. If we consider ENSO to be a cyclical redistribution of heat that accumulates in the tropical Pacific, then it becomes clear the response to a change in ocean surface temperature in the Nino 3.4 region can’t be immediate, nor is the influence going to be accurately described by a specific lagged monthly Nino 3.4 value. During La Nina, stronger trade winds push warm surface water westward toward the Pacific warm pool, and that water is replaced with cooler water which upwells, mainly in the eastern Pacific. When the trade winds drop, an El Nino begins, with warm water flowing eastward from the Pacific warm pool, while the rate of upwelling in the eastern Pacific drops, which leads to warming in the eastern Pacific. The temperature response of the tropics ought to be something other than a simple lag of the Nino 3.4 index, since it takes time for heat to be distributed throughout the tropics.

So how can we use a direct measure of the tropical Pacific temperature anomaly (Nino 3.4) to better estimate the response of global average tropical temperature to ENSO? I reasoned as follows: The temperature rise in the tropics that is associated with an increasing Nino 3.4 temperature takes time to be distributed over all of the tropics, so any response should be gradual. As the tropical temperature rises, heat loss to space increases, and the warming influence for any single month should decay gradually to nothing. The influence of an instantaneous change (e.g., a rise in the Nino 3.4 index from 0 C to 2 C for only one month, followed by a flat Nino 3.4 index of 0 C for many months) ought to show an exponential-like decay from an initially strong influence.  There is not a single monthly Nino 3.4 influence at an optimal lag time, but rather a continuously evolving influence over some extended period. A strong El Nino or La Nina continues to have influence on tropical temperatures even after the Nino 3.4 index has returned to a neutral state.

I modeled the evolution of Nino 3.4 influence by iteratively calculating a new monthly index I call the “Effective Nino Index” (ENI):

ENI(n) = k * ENI(n-1) + (1-k) * Nino34d(n-1)

where ENI(n) is the Effective Nino 3.4 Index
n is the current month
(n-1) is the previous month
Nino34d(n-1) is the detrended Nino 3.4 index for the previous month
k is a constant between zero and one

ENI(n) is essentially a low pass filtered representation of all earlier Nino 3.4 values.   I tested several values of k to see what value generates an ENI which best correlates with temperature evolution in the tropics. Since ~1997, the temperature trend in the tropics has been relatively flat and not influenced by major volcanic eruptions, so I ran a regression of ENI against the detrended tropical temperature anomaly for 1997 to present (I used the Hadley Hadcrut4 tropics history, downloaded from Wood For Trees).  The best correlation between ENI and average tropical temperature is at k = 0.703. In other words, in any single month, the running history (2 and more months past, represented by ENI(n-1)) contributes 70.3% of the ENSO influence on average tropical temperature, and the previous month’s Nino 3.4 index contributes 29.7% of the influence on average tropical temperature. Figure 1 shows how the relative influence of any single month of Nino 3.4 declines over time, with zero months lag meaning the current month.  (Click on any image to view at the original resolution.)
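The ENI recursion is straightforward to implement; here is a minimal sketch (assuming a neutral ENI starting state, which is my assumption rather than something specified above):

```python
def effective_nino_index(nino34d, k=0.703):
    """Effective Nino Index: ENI(n) = k*ENI(n-1) + (1-k)*Nino34d(n-1).
    A one-pole low-pass filter over the detrended Nino 3.4 history, so
    each month's index is a decaying weighted sum of all prior months."""
    eni = [0.0]                    # assume a neutral starting state
    for prev in nino34d[:-1]:
        eni.append(k * eni[-1] + (1.0 - k) * prev)
    return eni
```

The weight on a Nino 3.4 value m months back decays geometrically as (1-k)*k^(m-1), which is the declining influence plotted in Figure 1.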

Figure1

A comparison of Nino 3.4 with ENI is shown in Figure 2. The lagging effect of the low-pass filter function is clear.  Please note that ENI is not a temperature index per se, but an index that represents the weighted contribution of all past Nino 3.4 temperatures, with the relative influence of earlier Nino 3.4 values falling rapidly in importance the further back in time you look.

Figure2

Figure 3 shows the ENI and the detrended tropical temperature for 1997 to 2012 (Hadcrut4, downloaded from Wood for Trees) on the same graph, and Figure 4 shows the detrended tropical temperatures and ENI for 1950 to 2012.

Figure3

Figure4

There is very good correlation in the 1997 to 2012 period, where volcanic influences are minimal. You may note in Figure 4 that periods of significant deviation between the detrended tropical temperature anomaly and the ENI correspond to the aftermath of major volcanic eruptions, which is consistent with significant aerosol cooling. The “adjusted” tropical temperature model based on the ENI regression against tropical temperatures is:

Tadj = Torg – (0.1959 +/- 0.016) * ENI

Where Torg is the original Hadley temperature anomaly for the tropics. +/-0.016 is the two sigma uncertainty for the model coefficient.

For the 1997 to 2012 period, the model’s F statistic was 594 (very highly significant), and the R^2 value was 0.756, meaning 75.6% of the total variance in tropical temperatures is predicted by the ENI value. It is important to note that ‘predicted’ is a suitable word, since the ENI influence is due to the combination of all earlier months’ Nino 3.4 values, not the present Nino 3.4 value.  Since the ENI is based on the detrended Nino 3.4 index, there is no net contribution to ENI from any general warming of the ocean surface over time.

Figure 5 shows the above ENI adjustment applied to all the Hadcrut4 tropical temperature data since 1950. As we might expect, the influence of volcanic aerosols from Pinatubo shows up much more clearly than in the unadjusted temperature data.

Figure5

I will use the ENI in the combined regression analysis that includes volcanic and solar effects.

 

II. About Those Natural Forcings

NASA GISS provides data for their estimate of aerosol influences from 1850 to present (http://data.giss.nasa.gov/modelforce/strataer/). The data is in the form of Aerosol Optical Depth (AOD at 550 nm wavelength), which is converted into a net forcing value (watts/M^2) by multiplying the AOD by a constant of 23 (NASA’s value). The GISS volcanic aerosol forcing since 1950 is shown in Figure 6.

Figure 6

Direct measurements of solar intensity over the solar cycle are only available since 1979 (via satellites), but the correlation between sunspot number (SSN) and measured changes in solar intensity is good, so it is possible to estimate the historical variation in solar intensity based on SSN records. To estimate solar intensity variation, I used the following empirical equation, which comes from regressing measured solar intensity with sunspot number (data from a spreadsheet by Leif Svalgaard):

Solar intensity = 1365.45 + (0.006872 * SSN)  watts/M^2

Where solar intensity is measured above the Earth’s atmosphere, and SSN is the monthly sunspot number. The R^2 for this regression was 0.984.  The variation of solar intensity about an average value is then:

Variation = 0.006872 * (SSN – AvgSSN) watts/M^2

Where AvgSSN is the average number of sunspots over the period being studied (in this case from 1950 to 2012).

If we assume Earth’s albedo is 30%, and average over the entire surface (a factor of 4 compared to the cross-sectional area Earth presents to the Sun), the variation in solar energy reaching the Earth (including the troposphere) is:

Variation = (0.7/4) * 0.006872 * (SSN – AvgSSN) = 0.001203 * (SSN – AvgSSN) watts/M^2

Since the solar cycle is ~11 years long, we expect solar forcing to generate a temperature response with peaks separated by ~11 years. Figure 7 shows the calculated solar forcing since 1950.
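The SSN-to-forcing conversion above is a one-liner; here is a sketch using the constants from the text (the 0.30 albedo is the assumed value stated earlier):

```python
def solar_forcing(ssn, avg_ssn):
    """Surface-averaged solar forcing anomaly in watts/M^2 from the
    monthly sunspot number, using the empirical TSI regression above
    (0.006872 W/M^2 per sunspot) scaled by (1 - albedo)/4 with an
    assumed albedo of 0.30."""
    albedo = 0.30
    return (1.0 - albedo) / 4.0 * 0.006872 * (ssn - avg_ssn)
```

This reproduces the 0.001203 watts/M^2 per sunspot coefficient given above.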

 

Figure7

 

III. Regression Model

The regression model has three independent variables: the ENI, with nominal units of temperature (as described above), lagged volcanic forcing, and lagged solar cycle forcing (both with nominal units of watts/M^2). We do not expect an instantaneous temperature response to volcanic and solar forcing, since the thermal mass of the Earth’s atmosphere, land surface and ocean surface are expected to slow the response… that is, to introduce lag between the applied forcing and the response.

A very accurate estimate of the global temperature response to solar and volcanic forcing history would require an accurate model of ocean heat uptake at different latitudes over time, as well as an accurate model of heat transport between high and low latitudes, between land and ocean, and between Earth and space. Since this type of model arguably doesn’t exist, I am forced to use a much simpler lag-type model. The lag model is based on a single constant value with a repetitive monthly calculation that approximates a low pass filter function:

EF(n) = EF(n-1) * (1 – K) + F(n) * K

where:
EF(n) is the effective forcing for month n (solar or volcanic)
F(n) is the actual forcing for month n (solar or volcanic)
K is a decay constant

When K = 1, the effective forcing is identical to the actual current forcing. Smaller values of K introduce increasing lag in the response. This type of function is essentially equivalent to the expected response of a “slab” type ocean, or to Lucia’s ‘Lumpy’ model response.  Please note that the lag applies to both solar and volcanic aerosol forcings, since these are both radiative forcings.

Since I did not a priori know the best value of K, I tried different values of K and found the value which gave the best fit regression (that is, the highest R^2 value) for the three variables against the detrended monthly Hadley temperature series from 1950 to 2012. The best fit for K was 0.031. Figure 8 shows the “step response” of the lag function with K = 0.031, with F(n) starting at zero and then stepping to constant value of 1 at month 1.
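The lag filter and the search for K can be sketched as follows. The K search here uses synthetic data with a known lag constant rather than the real multi-variable regression, so it only illustrates the procedure:

```python
import numpy as np

def effective_forcing(forcing, K):
    """Low-pass lag filter: EF(n) = EF(n-1)*(1-K) + F(n)*K.
    K = 1 reproduces the input unchanged; smaller K means more lag."""
    ef = np.empty(len(forcing))
    ef[0] = forcing[0] * K          # assume zero effective forcing before start
    for n in range(1, len(forcing)):
        ef[n] = ef[n - 1] * (1.0 - K) + forcing[n] * K
    return ef

# Toy K search: generate a "response" from a known K, then find the K
# whose lagged forcing correlates best with that response.
rng = np.random.default_rng(1)
f = rng.normal(size=2400)
response = effective_forcing(f, 0.031) + rng.normal(scale=0.02, size=2400)

candidates = np.arange(0.01, 0.101, 0.001)
r2 = [np.corrcoef(effective_forcing(f, K), response)[0, 1] ** 2
      for K in candidates]
best_K = candidates[int(np.argmax(r2))]
```

The grid search recovers a K close to the one used to generate the synthetic response; the real analysis does the same scan against the detrended Hadley series.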

Figure8

Detrending of the temperature series was used prior to regression because the underlying long-term secular trend, whether due to GHG forcing alone or in combination with other long term influence(s), can’t be accurately modeled by the three variables in the regression, since these three variables are all expected to have relatively short term influence. Using the original temperature data (not detrended) distorts the regression fit by essentially forcing the regression to explain all the temperature change, including any slow secular trend, using the three short-influence variables, and so yields very poor (even physically nonsensical) results.

Figure 9 shows the original and lagged volcanic aerosol forcing, and Figure 10 shows the original and lagged solar forcing.

Figure9

Figure10

The best fit regression (with K = 0.031) yields the following constants:

ENI:         0.1099 +/-0.0118 (+/- 2-sigma uncertainty)
Volcanic: 0.2545 +/- 0.0277
Solar:       0.233 +/- 0.231

R^2 for the regression was 0.445 (44.5% of the variance was accounted for by the model).

The much greater uncertainty in the solar influence is due to the solar forcing being quite small compared to the other two. Still, it is encouraging that the regression shows the best estimates for response to both radiative forcing variables are very similar… just as one might expect, since radiation is fungible.

Figure 11 shows the temperature influence of the three variables and their combined influence based on the regression constants for each.

Figure11

Figure 12 shows an overlay of the detrended Hadley temperature series and the sum of the three adjustments (both offset to average zero, which makes visual comparison easier), and Figure 13 shows the adjusted and unadjusted Hadley global temperature series.

Figure12

Figure13

I have added the slope lines for the adjusted series from 1979 to 1996 (inclusive) and from 1997 to 2012 (inclusive). The slope since 1997 is less than 1/6 that from 1979 to 1996.

 

IV. Comments, Conclusions, Caveats, and Uncertainties

Warming has not stopped, but it has slowed considerably. This analysis can’t prove the cause for that change in rate of warming, but any suggestion that solar cycles, volcanic aerosols, and ENSO are completely responsible for the recent slower warming rate is not supported by the data. Some may suggest long term cyclical variation in the secular warming rate has caused the recent slow-down, but this analysis can’t support or refute that suggestion.

It is encouraging that the influence of the ENI on global temperatures (as calculated by the global regression analysis) is just slightly more than half the influence found for the tropics alone (30S to 30N): 0.1099 +/- 0.0118 global versus 0.1959 +/- 0.016 tropics. Since Carrick showed almost no correlation of ENSO with temperatures outside the tropics, and since 30S to 30N represents exactly half the Earth’s surface, we could reasonably expect the regression constant for the entire globe to be about half as large as for the tropics… and it is indeed very close to half (and within the calculated uncertainty limits).

The analysis indicates that global temperatures were significantly depressed between ~1964 and ~1999 compared to what they would have been in the absence of major volcanoes.

Here are a few caveats and uncertainties. First, the analysis is only as good as the data that went into it. Historical volcanic forcing from GISS is at best an estimate for all eruptions before Pinatubo; if the GISS volcanic forcing is wrong, then this could distort the regression results. The same is true for all other data, including the Hadley temperature series and the sunspot number model used to calculate solar forcing. While sunspot number is an excellent proxy for solar intensity over the last 3 solar cycles, that does not guarantee sunspot number has always been an equally excellent proxy for solar intensity.

Second, the single constant low-pass filter function used to calculate lagged solar and volcanic forcings is a fairly crude representation of reality. While the true lag function is almost certainly similar in shape, it will not be identical, and this too could distort the regression analysis to some extent. The reality is that there are a multitude of lag constants associated with heat transfer to/from different locations, especially different depths of the ocean.

Third, it is tempting to infer very low climate sensitivity from the regression constants for volcanic aerosols and solar-cycle forcing (these constants have units of degrees C per W/m^2, and the values correspond to a climate sensitivity of a little less than 1 C per doubling of CO2). This temptation should be resisted, because the model does not consider the influence of (slower) heat transfer between the surface and the deeper ocean. In other words, the calculated impact of solar and volcanic forcings would be larger (implying somewhat higher climate sensitivity) if a better model of heat uptake/release to/from the ocean were used.
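The back-of-envelope arithmetic behind "a little less than 1 C per doubling" is just the regression constant times the forcing from doubled CO2. The constant below is an illustrative value (the post does not state the fitted number); the ~3.7 W/m^2 forcing per doubling is the standard figure:

```python
regression_constant = 0.25   # C per W/m^2 -- hypothetical, for illustration only
forcing_2xCO2 = 3.7          # W/m^2 forcing from doubling CO2 (standard value)
sensitivity = regression_constant * forcing_2xCO2
print(sensitivity)           # a little less than 1 C per doubling
```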


Request for only constructive comments:  Skydragon slayers and rabid catastrophic warmers should not feel their comments are required or requested.


(1) Grant Foster and Stefan Rahmstorf 2011 Environ. Res. Lett. 6 044022

Observation vs Model – Bringing Heavy Armour into the War

As I have noted before, most of the AOGCMs exhibit a curvilinear  response in outgoing global flux with respect to average temperature change.  One of the consequences of this is that there is a well-reported apparent increase in the effective climate sensitivity with time and temperature in the models; in particular, the effective climate sensitivity required to match historical data over the instrument period in the GCMs is less than the climate sensitivity reported from long-duration GCM runs.   This is not a small effect, although it varies significantly between the different GCMs.   In the models I have tested, it accounts for about half of the total Equilibrium Climate Sensitivity (“ECS”) reported for those models.   (Equilibrium Climate Sensitivity is defined by the IPCC as the equilibrium temperature in degrees C after a doubling of CO2.)  In general, models which show a more pronounced curvature will have a larger ratio of reported ECS to the effective climate sensitivity required to match the model results over the instrument period, and vice versa.

Kyle Armour et al. have produced a paper, Armour 2012, which offers a simple, elegant and coherent explanation for this phenomenon. It comes down to geography.


Uncertainty: Let’s just call it an acquaintance!

Recently, I became aware of the oddly titled Uncertainty is Not Your Friend by Stephan Lewandowsky, posted at Planet3.0 back in June. I vaguely recalled reading this and thinking it a rather strange mixture of mangled text, graphics and math. Ben Pile commented on the blog post at the time. But back then I didn’t consider the pastiche of misinformation, mixed with some not-necessarily-wrong observations, worth commenting on. Today I’ll comment on a few items so as to clarify which statements in that specific article appear totally false, which are worded so strangely as to be impossible to parse, and which happen to be true.

I will start by saying this: Of course it’s true that uncertainty is “not your friend”. I will add that “a bird in the hand is worth two in the bush” and “better the devil you know”.

I’ll also say that, generally speaking, when one accounts for uncertainty in anything we might call X, it is often the case that the cost of our best estimate of X will be lower than our best estimate of the cost of X. This appears to be the case for climate change. (If we use the mean or expected value to represent our “best estimate”, the previous statement can be expressed using mathematical symbols: if $latex E[T] $ is the expected or mean value of the temperature rise $latex T $ and $latex C(T) $ is the cost associated with a temperature rise of $latex T $, then often $latex C(E[T]) < E[C(T)] $.) I don’t know whether the property $latex C(E[T]) < E[C(T)] $ means uncertainty is “our friend”. I would suggest uncertainty would not be our friend if $latex C(E[T]) > E[C(T)] $ either. I’ll defer discussing costs for now.
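The inequality above is just Jensen's inequality when the cost function is convex. Here is a minimal numerical illustration with a hypothetical convex damage function C(T) = T**2, chosen purely for illustration (the post defers discussion of actual cost functions):

```python
import random

rng = random.Random(1)
temps = [rng.gauss(3.0, 1.5) for _ in range(100_000)]  # uncertain warming, deg C

def cost(T):
    return T ** 2  # convex, so Jensen's inequality gives C(E[T]) <= E[C(T)]

mean_T = sum(temps) / len(temps)
cost_of_mean = cost(mean_T)                              # C(E[T]), near 3**2 = 9
mean_of_cost = sum(cost(T) for T in temps) / len(temps)  # E[C(T)], near 9 + 1.5**2
print(cost_of_mean < mean_of_cost)
```

For this convex cost the gap between the two is exactly the variance of T, which is one concrete sense in which more uncertainty raises expected cost.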

I want to focus on certain specific verbiage in Lewandowsky’s first post concerning the probability that temperature would change more or less than “anticipated”. Specifically, in the first of his three posts he writes things like

So uncertainty doesn’t just mean that things could be worse than anticipated—in the case of climate, chances are that things will be worse rather than better than anticipated.

Uncertainty in climate evolution means things are likely to be worse, rather than better, than anticipated.

In the first post, the “thing” he focuses on is the change in temperature. But I had to scratch my head for a while because the following question popped into my head: “Does that depend on what is ‘anticipated’?”

I continued reading and hoped the examples might reveal which properties of the uncertain temperature define the changes that are “anticipated”. Based on the example, it seemed that, possibly, the value assumed to be ‘anticipated’ (by someone) is the mean value of the temperature change. At least, to clarify his points, Lewandowsky used probability distributions for climate sensitivity that all shared the same mean temperature. To wit:

To make my point, I ensured that the mean of the four distributions is identical (around 3, with a tiny amount of deviation introduced by the simulation). However, the standard deviations (spread) of the distributions differ considerably, from .49 in the top left to 2.6 in the bottom right. The spread of each distribution characterizes the extent of uncertainty surrounding the mean estimate of 3 degrees.

Reading this, it seems that whatever Lewandowsky means by “anticipated”, he is comparing probabilities computed when we assume a) similar functional forms for the probability distribution function, b) identical mean values for the change in temperature, and c) an increase in variance about the mean.

So, I thought I’d just plow through the math, making changes where I thought suitable. I noticed Lewandowsky drew heavily from Roe, G. H. & Baker, M. B. (2007) (RB) but substituted his own probability density function for theirs, which is based on a normal distribution for the “climate feedback parameter” $latex f $. I thought I’d follow RB more closely, but introduced a tweak motivated by problems that arise in their probability distribution function as the dimensionless climate feedback parameter $latex f $ approaches or exceeds 1. (Some of these problems were identified by Zaliapin, I. & Ghil, M. 2010.) For the purposes of the discussion here, I think it is sufficient to resolve the problem by simply noting that empirical evidence suggests $latex f<1 $. To reflect this known requirement, I replaced the normal distribution for $latex 1-f $ used by RB with an Inverse Gaussian distribution with mean feedback $latex \overline{f}= E[f] $ and standard deviation $latex \sigma_f $. Then, to create my examples, I used the numerical values $latex \overline{f}= E[f] = 0.62 $ and $latex \sigma_f =0.13 $. These values originate from a “suite of GCM simulations” and correspond to trace “A” in RB2007 figure 3. The resulting probability density function for the temperature rise shares the skewness of those in RB and of the log-normal distribution used by Lewandowsky.
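A sketch of this construction (not the author's code): draw $latex 1-f $ from an Inverse Gaussian with the stated mean and spread, then set $latex T = \Delta T_0/(1-f) $. The no-feedback reference value $latex \Delta T_0 = 1.2 $ C is an assumption taken from Roe & Baker (2007), not stated in this post; the sampler uses the Michael et al. (1976) transformation so only the standard library is needed:

```python
import math, random

def sample_inv_gauss(mean, lam, rng):
    """One draw from Inverse Gaussian(mean, lam), Michael et al. (1976) method."""
    y = rng.gauss(0.0, 1.0) ** 2
    x = mean + mean * mean * y / (2 * lam) \
        - (mean / (2 * lam)) * math.sqrt(4 * mean * lam * y + (mean * y) ** 2)
    return x if rng.random() <= mean / (mean + x) else mean * mean / x

fbar, sigma_f = 0.62, 0.13      # mean feedback and spread (trace "A" of RB2007)
m = 1.0 - fbar                  # mean of g = 1 - f
lam = m ** 3 / sigma_f ** 2     # IG shape parameter: Var(g) = m**3 / lam

dT0 = 1.2                       # assumed no-feedback warming for 2xCO2, deg C
rng = random.Random(42)
T = sorted(dT0 / sample_inv_gauss(m, lam, rng) for _ in range(200_000))

mean_T = sum(T) / len(T)
lo, hi = T[len(T) // 20], T[-(len(T) // 20)]   # ~5th and ~95th percentiles
print(f"mean {mean_T:.2f} C, 90% range {lo:.2f} to {hi:.2f} C")
```

With these parameter values the sampled mean and 90% range come out close to the 3.53 C and 1.93 C to 5.76 C figures discussed next.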

However, if we are to use the “method of eyeball” to compare features like the mode, mean, median, it is more convenient to examine the cumulative distribution plot. Below, I show the cumulative distribution for the parameter values I picked:

Notice that I’ve highlighted the “mean value” in red. On the right-hand side, I’ve indicated the temperatures that bound 90% of the range we might anticipate. So, based on this distribution, we might say that we anticipate the temperature rise will likely fall between 1.93 C and 5.76 C. The expected value is 3.53 C. We think there is a 5% chance the temperature rise will fall below 1.93 C and a 5% chance it will rise above 5.76 C.

What if the uncertainty was higher?
Now, suppose that someone suggests that we should explore what happens to our probability estimates if we:

  1. Use the same functional form for the probability distribution function
  2. Hold the expected value of temperature constant (as Lewandowsky did) but
  3. Increase the uncertainty. That is: double the standard deviation about the expected value.

When we do that we obtain the following cumulative distribution function:

Recall that in the previous case it was decided that the “anticipated” temperature range was 1.93 C to 5.76 C; there was a 5% chance temperatures would fall below this range and a 5% chance they would fall above it. Examining the new distribution:

  1. The probability that temperature will fall outside the (1.93 C to 5.76 C) range is now estimated to be 31%, which is larger than 10%. This outcome is not surprising: this is essentially what greater uncertainty means.
  2. The probability the temperature will fall below the anticipated lower bounds has increased to 19.1%.
  3. The probability the temperature will fall above the anticipated upper bound is 12.1%.
  4. 19.1% > 12.1%, so the probability that temperatures will fall below the currently anticipated range has risen more than the probability they will rise above it.
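The asymmetry in the list above can be reproduced numerically. One caveat: the post does not spell out exactly how the spread was doubled while the mean was held fixed, so the sketch below moment-matches a new Inverse Gaussian for $latex g = 1-f $ such that $latex E[T] $ is unchanged and the standard deviation of $latex T $ doubles (with the same assumed $latex \Delta T_0 = 1.2 $ C as before). The exact tail percentages depend on that choice, but the direction of the asymmetry does not:

```python
import math, random

def sample_inv_gauss(mean, lam, rng):
    """One draw from Inverse Gaussian(mean, lam), Michael et al. (1976) method."""
    y = rng.gauss(0.0, 1.0) ** 2
    x = mean + mean * mean * y / (2 * lam) \
        - (mean / (2 * lam)) * math.sqrt(4 * mean * lam * y + (mean * y) ** 2)
    return x if rng.random() <= mean / (mean + x) else mean * mean / x

dT0, fbar, sigma_f = 1.2, 0.62, 0.13
m0 = 1.0 - fbar
lam0 = m0 ** 3 / sigma_f ** 2
# For g ~ IG(m, lam): E[1/g] = 1/m + 1/lam and Var(1/g) = 1/(m*lam) + 2/lam**2.
S = 1.0 / m0 + 1.0 / lam0                        # fixed E[T]/dT0
V = 4.0 * (1.0 / (m0 * lam0) + 2.0 / lam0 ** 2)  # quadrupled Var(T)/dT0**2
b = (-S + math.sqrt(S * S + 4.0 * V)) / 2.0      # solves b**2 + S*b - V = 0
m, lam = 1.0 / (S - b), 1.0 / b                  # new moment-matched parameters

rng = random.Random(7)
T = [dT0 / sample_inv_gauss(m, lam, rng) for _ in range(200_000)]
p_below = sum(t < 1.93 for t in T) / len(T)  # below old "anticipated" range
p_above = sum(t > 5.76 for t in T) / len(T)  # above old "anticipated" range
print(p_below, p_above)                      # the below-range tail grows more
```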

Based on this, I would suggest that, given what people might understand by the words “anticipated” and “likely” (meaning more probable), and given the model for the probability distribution I described, the following claim appears to be backwards:

Uncertainty in climate evolution means things are likely to be worse, rather than better, than anticipated.

The figure above shows that if, as we gain understanding, the mean value of the predicted temperature rise remains constant but our uncertainty in predicting temperature increases, then the probability that temperatures will fall below the currently estimated lower bound increases more than the probability that temperatures will exceed the currently estimated upper bound.

For those who might think that the probability of lower temperature rises increasing more rapidly than the probability of higher ones means “uncertainty is our friend”: that’s not the case. Uncertainty is uncertainty. It’s really never a “friend”. We can better understand why uncertainty is not a friendly thing when we turn to discussing costs. At that time we can discuss whether its unfriendly nature means what Lewandowsky thinks it means.

References
Roe, G. H. & Baker, M. B. (2007). Why Is Climate Sensitivity So Unpredictable? Science, 318, 629-632.

Zaliapin, I. & Ghil, M. (2010). Another look at climate sensitivity. Nonlinear Processes in Geophysics, 17, 113-122.