This post is an update on my earlier post on the same subject. In my earlier post I regressed linearly detrended Hadley global temperature anomalies since 1950 against a combination of low pass filtered volcanic forcing, solar forcing, and ENSO influence. For ENSO influence, I defined a low pass filtered function of the detrended Nino 3.4 index, which I called the Effective Nino Index (ENI).
My regression results suggested the rate of warming since 1997 has slowed considerably compared to the 1979 to 1996 period, contrary to the results of Foster & Rahmstorf (2011), which showed no change in the rate of warming since 1979. There were several constructive (and some non-constructive) critiques of my post in comments made here at The Blackboard…. and elsewhere. This post will address some of those critiques, and will examine in some detail how the choices made by F&R generated “no change in linear warming rate” results; results which are in fact not those best supported by the data.
Limitations of the Regression Models
It is important to understand what a regression like those done by F&R or in my earlier post can and can’t do. If each of the coefficients which comes from the regression is physically reasonable/plausible, then the quality of the regression fit indicates:
1) whether the assumed form of the underlying secular trend is plausible, and
2) whether any important controlling variables have been left out of the regression
The difference between the regression model and the data (that is, the model ‘residuals’) is an indication of how well the regression results describe reality: Is the form of the assumed secular trend plausible? Do the variables included in the regression plausibly control what happens in the real world?  However, if the coefficients from the regression output are not physically reasonable/plausible, then even a very good “fit” of the model to the data does not confirm that the shape of the assumed secular trend matches reality, and the regression results may have little or no meaning.
Some Substantive Critiques from My Last Post
The fundamental problem with trying to quantify and remove the influences of solar cycle, volcanic eruptions, and ENSO from the temperature record was pointed out by Paul_K, who noted that selection of a detrending function, which represents the influence of an unknown secular trend, is essentially circular logic. The analyst assumes the form of the chosen secular function (how the secular function varies with time: linear, quadratic, sinusoidal, exponential, cubic, etc, or some combination) in fact represents the ‘true’ form of the secular trend. The regression done using a pre-selected secular function form is then nothing more than finding the best combination of weightings of variables in the regression model which will confirm the form of the assumed secular trend is correct.
Hence, any conclusion that the regression results have “verified” the true form of an underlying trend is a bit circular… you can’t verify the shape of an underlying trend, you can only use the regression to evaluate whether what you have assumed is a reasonable proxy for the true form of the secular trend. In the case of F&R, the assumed shape of the secular trend was linear from 1979; in my post the assumed secular trend was linear from 1950. Both suffer from the same circular logic. F&R also allow both lag and sensitivity to radiative forcing to vary independently, which allowed their regression to specify non-physical lags and potentially non-physical responses to forcings, which together lead to the near perfect ‘confirmation’ of their assumed linear trend. All of the regressions in this post, as well as in my original post, require that both solar and volcanic forcings use the same lag, though that lag is free to assume whatever value gives the best regression fit, even if the resulting lag appears physically implausible.
Nick Stokes suggested substituting a quadratic function (with the quadratic function parameters determined by the regression itself) and went on to compare the regression results for linear and quadratic functions for 1950 to 2012 and 1979 to 2012. Like me, Nick used a single lag for both solar and volcanic influences. Nick concluded that with a quadratic secular function, there is some (not a lot of) deviation from a linear trend post 1979, which varies depending on what temperature record is used. Nick’s results are doubtful because simply choosing a quadratic secular function is just as circular as choosing a linear function. Some of the lag constants Nick’s regression found for the 1975 to 2012 period (eg. ~0.11) appear physically implausible (much too fast).
Tamino (AKA Grant Foster of F&R) made a constructive comment at his blog: a single lag constant for solar and volcanic influences (a “one box lag model”) was not the best representation of how the Earth is expected to react to rapid changes in forcing like those due to volcanoes, and that a two-box lag model with a much faster response to account for rapid warming of the land and atmosphere would be more realistic. I have included this suggestion in my regressions.
Commenter Sky claimed that basing ENI on tropical temperature responses was “a foolishness” (I strongly disagreed), but his comments prompted me to look for any significant correlation between the ENI and non-tropical temperatures at different lags, and I found that there is a very modest but statistically significant influence of the ENI on non-tropical temperatures at 7 months lag. Incorporating both ENI and 7-month lagged ENI slightly improves the regression fit in all cases I looked at, and generates an estimated global response for ENI (not lagged 7 months) which is close to the expected value of half the response for the tropics. (I will describe the (modest) modifications I made based on Tamino’s suggestion and on a 7-month lagged ENI contribution in a postscript to this post.)
Finally, Paul_K suggested that a way to avoid logical circularity was to try a series of polynomials in time, of increasing order, to describe the secular trend (with time=0 at the start of the regression model period) and with the polynomial constants determined by the regression itself. The resulting regression fits can then be compared using a rational method like the AIC (Akaike Information Criterion) to determine the best choice of order for the polynomial (the minimum AIC value is the most parsimonious/best). For a linear regression with n data points and M independent parameters, the AIC is given approximately by:
AIC = n * Ln(RSS) + 2*M
where Ln is the natural log function and RSS is the Residual Sum of Squares from the regression (sometimes also called the “Error Sum of Squares”). M includes each of the variables: solar, volcanic, ENI, Lagged ENI, secular function constants, and the constant (or offset) value. Higher order polynomials should allow a better/more accurate description of secular trends of any nonlinear shape, but each added power in the polynomial increases the value of M, so a better fit (reduced RSS) is ‘penalized’ by an increase in M. A modified AIC function, which accounts for a limited number of data points (called the AICc), is better when the ratio n/M is less than ~40, but this ratio was always >40 for the regressions done here.
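For concreteness, here is a minimal sketch (in Python, with made-up illustrative numbers rather than values from my regressions) of the AIC and AICc bookkeeping:

```python
import numpy as np

def aic(rss, n, m):
    """Approximate AIC: n * ln(RSS) + 2 * M."""
    return n * np.log(rss) + 2 * m

def aicc(rss, n, m):
    """Small-sample corrected AIC; preferred when n/M is below ~40."""
    return aic(rss, n, m) + 2 * m * (m + 1) / (n - m - 1)

# Illustrative comparison of two fits to the same 456 monthly points:
# the extra polynomial term must buy enough RSS reduction to pay its
# 2-point penalty, otherwise the simpler model wins (lower AIC).
print(aic(rss=2.10, n=456, m=7))   # e.g. cubic secular trend
print(aic(rss=2.08, n=456, m=8))   # e.g. quartic: better RSS, one more parameter
```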
AIC Scores for Polynomials of Different Order
The ‘best’ polynomial to use to describe the secular trend, based on the AIC, depends as well on whether or not you believe that the influences of volcanic forcing and solar forcing are fundamentally different on a watts/M^2 basis. That is, if you believe that solar and volcanic forcings are ‘fungible’, then those forcings can be combined and the regression run on the combined forcing rather than the individual forcings. In this case, the best fit post 1975 is quadratic. Troy Masters (Troy’s Scratchpad blog, based on a suggestion from a commenter called Kevin C) has shown that summing the two forcings improves a regression model’s ability to detect a known change in the slope of secular warming in synthetic data.
If a regression is done starting in 1950 (as in my original post) with solar and volcanic forcings treated separately, then it appears the best, or at least most plausible, polynomial secular trend is 4th order, which represents a ‘local minimum’ in AIC… lower than 5th order. AIC scores for 6th order and above are smaller than for 4th order, but the regression constants for solar and volcanic influences do not “converge” on similar values for each higher order polynomial; they instead begin to oscillate, indicating that the higher order terms (which can simulate higher frequency variations) are beginning to ‘fit’ the influences of volcanoes and solar cycle, rather than a secular trend. In any case, using a 4th order polynomial for the regression starting in 1950 generates a much improved fit compared to an assumed linear secular trend.
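The order scan itself is mechanical; a skeleton of it is sketched below, with random stand-ins for the lagged forcing and ENI series (the real regressions also optimize the lag constant, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: in the real analysis these are the lagged volcanic, solar,
# ENI, and 7-month lagged ENI series (monthly values).
n = 756                              # months, 1950-2012
t = np.arange(n) / 12.0              # years since start of period
forcings = rng.normal(size=(n, 4))
y = 0.012 * t + forcings @ np.array([0.13, 0.33, 0.10, 0.02]) \
    + rng.normal(0.0, 0.1, n)

def aic_for_order(order):
    """OLS of y on the exogenous columns plus a polynomial secular trend."""
    poly = np.vander(t, order + 1)   # columns t^order ... t, constant
    X = np.hstack([forcings, poly])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    m = X.shape[1]                   # parameter count, constant included
    return n * np.log(rss) + 2 * m

for order in range(1, 7):
    print(order, round(aic_for_order(order), 1))
```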
1975 to 2012
Figure 1 shows the AIC scores and lag constants for regressions from 1975 to 2012.
The minimum AIC score is for a third order polynomial. The corresponding regression coefficients (with +/- 2-sigma uncertainties) are shown below:
Volcanic                  0.14551 +/- 0.0248
Solar Cycle               0.47572 +/- 0.2034
ENI                       0.08253 +/- 0.0143
7 Mo Lagged ENI           0.03179 +/- 0.0144
Linear Contribution       0.01335 +/- 0.00897
Quadratic Contribution    0.0003753 +/- 0.000542
Cubic Contribution        (-8.655 +/- 9.18)*10^(-6)
Constant                  0.02384 +/- 0.0383
R^2                       0.828
F Statistic               306
Figure 2 shows the regression model overlaid with the secular component of the model. The secular component is what is described by the above linear, quadratic, cubic contributions plus the regression constant. Figure 3 shows the regression model overlaid with Hadley temperature anomalies.   The model matches the data quite well. The residuals (Hadley minus model) are shown in Figure 4. The residuals are reasonably uniform around zero, and show no obvious trends over time.
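For reference, the secular component plotted in Figure 2 is just the polynomial assembled from the tabled coefficients. A sketch, assuming time is measured in years from the start of the regression period (the tables above do not state the time unit):

```python
import numpy as np

# Coefficients from the 1975-2012 cubic fit tabled above; t is assumed
# to be years since the start of the regression period.
def secular(t):
    return 0.02384 + 0.01335 * t + 0.0003753 * t**2 - 8.655e-6 * t**3

t = np.arange(0, 38, 1 / 12)   # monthly steps spanning 1975-2012
trend = secular(t)             # the secular component of Figure 2
```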
Figure 5 shows the individual contributions (ENSO, solar cycle, volcanic) along with their total.
Figure 6 shows the original and ‘adjusted’ Hadley data, where the influences for ENSO, solar cycle, and volcanoes have been subtracted from the original Hadley data. I have included calculated slopes for 1979 to 1997 (inclusive) and 1998 to 2012 (inclusive). The best (most probable) estimated trend for 1979 to 1997 is 0.0160 C/yr, while from 1998 to 2012 the best estimate for the trend is 0.0104 C/yr, corresponding to a modest (35%) reduction in the rate of recent warming. (edit: 0.0160 should have been 0.0171, and 0.0104 should have been 0.0108; the reduction is 37%)
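The subperiod slopes quoted above are ordinary least-squares trends fit to the adjusted series. A sketch, using a synthetic stand-in (built with roughly the slopes quoted above) in place of the actual adjusted Hadley data:

```python
import numpy as np

rng = np.random.default_rng(1)
years = 1979 + np.arange(34 * 12) / 12.0   # monthly, 1979-2012

# Synthetic stand-in for the adjusted anomalies: ~0.016 C/yr to 1998,
# ~0.010 C/yr after, plus noise. The real analysis uses the adjusted
# Hadley series of Figure 6.
adjusted = np.where(years < 1998,
                    0.016 * (years - 1979),
                    0.016 * 19 + 0.010 * (years - 1998))
adjusted = adjusted + rng.normal(0.0, 0.05, years.size)

def slope(lo, hi):
    """OLS trend (C/yr) over calendar years lo..hi inclusive."""
    mask = (years >= lo) & (years < hi + 1)
    return np.polyfit(years[mask], adjusted[mask], 1)[0]

early, late = slope(1979, 1997), slope(1998, 2012)
print(f"reduction: {100 * (1 - late / early):.0f}%")
```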
1950 to 2012
Figure 7 shows the AIC scores and lag constants for regressions from 1950 to 2012.
The local minimum (best) AIC score is for a fourth order polynomial.  At orders 6 and above the AIC score continues to fall, but without convergence of solar and volcanic coefficients, which suggests to me that the higher order polynomials are beginning to interact excessively with the (higher frequency) non-secular variables we are trying to model, and the continued fall in AIC score is not indicative of a true improvement in accuracy of the higher order polynomials as a secular trend.  I adopted the fourth order polynomial as the most credible representation of the secular trend. The corresponding regression coefficients (with +/- 2-sigma uncertainties) are shown below:
Volcanic                  0.1346 +/- 0.0234
Solar Cycle               0.3283 +/- 0.170
ENI                       0.0972 +/- 0.0122
7 Mo Lagged ENI           0.0245 +/- 0.0121
Linear Contribution       0.00804 +/- 0.00828
Quadratic Contribution    -0.000772 +/- 0.000542
Cubic Contribution        (2.97 +/- 1.32)*10^(-5)
Quartic Contribution      (-2.7 +/- 1.06)*10^(-7)
Constant                  -0.0137 +/- 0.0374
R^2                       0.83691
F Statistic               473
Figure 8 shows the regression model overlaid with the secular component of the model. The secular component is what is described by the above linear, quadratic, cubic, and quartic contributions plus the regression constant. Figure 9 shows the regression model overlaid with Hadley temperature anomalies.  The model matches the data quite well. The residuals (Hadley minus model) are shown in Figure 10.
Figure 11 shows the original and adjusted Hadley data, where the influences for ENSO, solar cycle, and volcanoes have been subtracted from the original Hadley data. I have included calculated slopes for 1979 to 1997 (inclusive) and 1998 to 2012 (inclusive). The best estimate for the trend from 1979 to 1997 is 0.0164 C/yr, while from 1998 to 2012 the best estimate for the trend is 0.0084 C/yr, corresponding to a 49% reduction in the rate of recent warming.
But what if radiation is fungible?
The divergence between the regression-diagnosed ‘sensitivities’ to changes in solar intensity and volcanic aerosols is both surprising and puzzling. The divergence is present (albeit to a smaller extent) in the 1950 to 2012 regressions as well as the 1975 to 2012 regressions. In each case, the regression reports a best estimate response to solar cycle forcing which is more than twice as high as the volcanic response on a watts/M^2 basis. Lots of people expect the solar cycle to contribute to total forcing in a normal (fungible) way. Figure 12 (from GISS) shows that for climate modeling, the folks at GISS think there is nothing special about solar cycle driven forcing.
For the diagnosed divergence between solar and volcanic sensitivities to be correct, there must be an additional mechanism by which the solar cycle substantially influences Earth’s temperatures, beyond the measured change in solar intensity. I think convincing evidence of such a mechanism (changes in clouds from cosmic rays, for example) is lacking, although I am open to being shown otherwise. But if we assume for a moment that there really is no significant difference in the response of Earth to solar and other radiative forcings, which seems to me plausible, then the above regression models ought to be modified by combining solar and volcanic into a single radiative forcing.
When the regressions are repeated for 1975 to 2012 with a single combined forcing (the sum of individual solar and volcanic) the minimum AIC score is for a quadratic secular function (rather than cubic when the two are independent variables), but the big change is that the regression diagnoses a longer lag for radiative forcing and a stronger response to radiative forcing (which is of course dominated by volcanic forcing). Figure 13 shows the AIC scores and lag constants for polynomials of different orders when solar and volcanic forcings are combined.  The minimum AIC score with the combined forcings (quadratic secular function, 649.2) is slightly higher than the minimum for separate forcings (cubic secular function, 648.2), which lends some support to higher sensitivity for solar forcing.
Figure 14 shows the regression model and Hadley data, and Figure 15 shows the Hadley data adjusted for ENSO and combined solar and volcanic forcing.
The 1998 to 2012 slope is 0.0072 C/yr, while the 1979 to 1997 slope is 0.0165 C/yr; the recent trend is 44% of the earlier trend.
Why did F&R get different results?
The following appear to be the principal issues:
1. Allowing volcanic and solar lags to vary independently of each other.
2. Accepting physically implausible lags.
Treating solar and volcanic forcing as independent, combined with number 1 above, seems to have some unexpected consequences. Figure 16 shows the lagged and un-lagged volcanic forcing along with the un-lagged solar forcing. The two major volcanic eruptions between 1979 and 2012 (El Chichon and Pinatubo) happen to occur shortly after the peak of a solar cycle.
The solar and volcanic signals are partially ‘aliased’ by this coincidence (that is, both act in the same direction at about the same time), while the decline in solar intensity following the solar cycle peak in ~2001 did not coincide with a volcano. Since there was a considerable drop in the rate of warming starting at about the same time as the most recent solar peak, and since the regression can “convert” some of the cooling that was due to volcanoes into exaggerated solar cooling due to aliasing, the drop in the rate of warming after ~2000 can be ‘explained’ by the declining solar cycle after ~2001. In other words, aliasing of solar and volcanic cooling in the early part of the 1975-2012 period, combined with free adjustment of ‘sensitivity’ to the two forcings independently, gives the regression the flexibility to increase sensitivity to the solar cycle by reducing the sensitivity to volcanoes, so that the best fit to an assumed linear secular trend corresponds to a larger solar coefficient. Allowing the solar and volcanic forcings to act with different lags further increases the ability of the regression to increase solar influence and diminish volcanic influence. All of which contributes to the F&R conclusion of “no change in underlying warming trend.”
Of course, the same aliasing applies to the regression for 1950 to 2012, but since there are more solar cycles and more volcanoes in the longer analysis, and those do not alias each other well, the regressions for the longer period report a smaller difference in ‘sensitivity’ to solar and volcanic forcings. For example, with an assumed linear secular trend (similar to F&R, but using one lag for both solar and volcanic forcings), the 1975 to 2012 regression coefficients are: volcanic = 0.1294, solar = 0.5445, while for the best fit regression from 1950 (4th order polynomial secular function) the coefficients are: volcanic = 0.1346, solar = 0.3283.
It will be interesting to see how global average temperature evolves over the next 6-7 years as the current solar cycle passes its peak and declines to a minimum. If F&R are correct about the large, lag-free influence of the solar cycle, this should be evident in the temperature record…. unless a major volcano happens to erupt in the next few years!
Conclusions
There is ample evidence that once natural variation from ENSO, solar cycles, and volcanic eruptions is reasonably accounted for, the underlying ‘secular trend’ in global average temperatures remains positive. But it is also clear that the best estimate of that secular trend shows a considerable decline in the rate of warming since 1998 compared to the 1979 to 1997 period. The cause for that decline in the rate of underlying warming is not known, but factors other than ENSO, volcanic eruptions, and the solar cycle are almost certainly responsible.
Postscript
In my last post I showed how the low pass filtered Nino 3.4 index correlates very strongly with the average temperature between 30N and 30S, and can account for ~75% of the total tropical temperature variation.  To check for the possible influence of ENSO (ENI) on temperatures outside the tropics, I first calculated the “non-tropic history” from the Hadley global history and Hadley tropical history (non-tropics = 2* global – tropics). I then checked for correlation between the non-tropics history and the ENI (which is calculated from the low pass filtered Nino 3.4 index) at each monthly lag from 0 to 12 months. Significant correlation is present starting at ~4 months lag through ~11 months lag, with the maximum correlation at 7 months lag from the ENI. I then incorporated this 7-month lagged ENI as a separate variable in the regressions discussed above, and found very statistically significant correlation in all regressions. The coefficient was remarkably consistent in all regressions, independent of the assumed secular trend function, at ~1/3 that of the global influence of ENI itself. (The ENI coefficient itself was also remarkably consistent.) The 7-month lagged ENI influence was significant at >99.9% confidence in all regressions. The increased variable count was included in the calculation of AIC.
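The lag scan is a simple cross-correlation at each monthly offset. A sketch with synthetic stand-ins for the ENI and non-tropics series (the stand-in is built with a 7-month lagged response so the scan peaks where expected):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 756                          # monthly values
eni = rng.normal(size=n)         # stand-in for the low pass filtered index
# Stand-in non-tropics series: a weak response to ENI, 7 months later.
nontropics = 0.3 * np.roll(eni, 7) + rng.normal(0.0, 1.0, n)

for lag in range(13):
    r = np.corrcoef(eni[:n - lag], nontropics[lag:])[0, 1]
    print(f"lag {lag:2d} months: r = {r:+.2f}")
```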
I used a simple dual-lag response model (two-box model) instead of a single lag response because land areas (and the atmosphere) have much less heat capacity than the ocean, and so react much more quickly to applied forcing than the ocean. The reaction of land temperatures would be expected to be in the range of an order of magnitude faster based on relative (short term) heat capacities, which on a monthly basis makes the land response essentially immediate (lag less than a few months). If land and ocean areas were thermally isolated, we would expect the very fast response of land to represent ~30% of the total, and the slower response of the ocean to represent ~70%; that is, in proportion to the relative surface areas. However, land and ocean are not thermally isolated, and the fast land response is reduced by the slower ocean response because heat exchanges between land and ocean fairly quickly. Modeling this interaction would appear to be a non-trivial task, but I guessed that a simple approximation is to reduce the relative weighting of land and increase that of the ocean, and assumed a 15% ‘immediate’ response and an 85% lagged response. The lag constants optimized in the regression applied to only 85% of the forcing; 15% of the forcing was considered essentially immediate. The above appears to improve R^2 values a bit in nearly all regressions I tried, but does not impact the conclusions: the underlying secular trend appears lower from 1998 to 2012 than from 1979 to 1997.
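My reading of that dual-lag scheme, as a sketch (the lag constant here is illustrative; in the regressions it is optimized for best fit, and it applies only to the 85% slow fraction):

```python
import numpy as np

def two_box_response(forcing, lag_const, fast_frac=0.15):
    """15% 'immediate' response plus 85% one-box exponential lag.

    lag_const is the monthly relaxation constant the regression
    optimizes; tau in months is roughly 1/lag_const."""
    slow = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        # discrete one-box relaxation toward the current forcing
        slow[i] = slow[i - 1] + lag_const * (forcing[i] - slow[i - 1])
    return fast_frac * forcing + (1 - fast_frac) * slow

# Example: a two-year Pinatubo-like negative forcing pulse; the response
# rises quickly (land/atmosphere) and then relaxes slowly (ocean).
f = np.zeros(120)
f[12:36] = -3.0
r = two_box_response(f, lag_const=0.065)
```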
How about testing the residuals for autocorrelation?
DeWitt,
For sure there will be some autocorrelation.
SteveF, I think there is some wrong thinking with lumping Solar in as just another radiant forcing. Atmospheric forcing is a response to surface energy. It can only retain energy. Solar actually forces energy into the system and at different depths. Since Solar can be transported internal to the oceans in currents or transferred through the atmosphere as latent, each Wm-2 transported to the poles gets a bigger bang for the buck. Transported to land as latent it gets even a bigger bang for the buck because of specific heat capacity differences.
Just consider the tropics which are least sensitive to atmospheric forcing and most sensitive to solar.
Dallas,
Well, F&R consider them separately, and they get a zero-lag solar response. If much solar energy is absorbed by the ocean well below the surface (and clearly it is), then a zero-lag response to solar intensity variation makes even less sense…. How does all that heat deposited well below the ocean surface not have to warm the thermally massive well mixed layer?
SteveF, with all the sloshing around I don’t think you can pick out a 0.05 C impact in the tropics that can have an impact of nearly 0.1 C close to the poles. ENSO itself has an underlying trend and has shifted north.
https://lh4.googleusercontent.com/-uEdwPFb9kVk/UcYRVG8RSzI/AAAAAAAAIwE/qbUuVIn3mMg/s997/seasonal%2520cycle%2520shift.png
I think I showed you that: if you pick a different baseline to remove the seasonal signal, you lose or gain about the magnitude of the impact. If you look closely at 25N to 45N, there is what looks like a standing wave sandwiched between two waves in near synchronization.
Steve,
“The analyst assumes the form of the chosen secular function (how the secular function varies with time: linear, quadratic, sinusoidal, exponential, cubic, etc, or some combination) in fact represents the ‘true’ form of the secular trend. The regression done using a pre-selected secular function form is then nothing more than finding the best combination of weightings of variables in the regression model which will confirm the form of the assumed secular trend is correct.”
I don’t agree with that. In the regression, you are identifying the exogenous components and subtracting them out. You don’t impose a form on what is left.
What you do need is that the regression correctly separates those components – and doesn’t try and get their combination to fit secular processes. The other functions used, whether as detrendors or extra regressors, have to span the alternative space. That’s why linear post-1950 was not good; the exogenous variables had to try to fit the dip in the middle, and whatever they could do to achieve that was then subtracted. Including a quadratic reduces the distortion. It isn’t an assertion that the behaviour is actually quadratic.
I don’t think too much should be made of the lags the regression identifies. It just means that the fit isn’t very sensitive to the amount of lag. Put another way, it’s a good fit over a range of lags.
I think the solar story can be overstated too. It’s a small component and the regression just doesn’t pin it down very well.
what if the volcanic forcing data is just way wrong (except maybe for pinatubo, but it treats the others like they were the same?)
Nick,
“You don’t impose a form on what is left.”
Any assumed trend (linear, quadratic, or whatever) most certainly does impose the form of the secular trend (or, as you say, the lower frequency trend which is left after you subtract the ‘exogenous components’). Surely you can see that a secular function (or a selected detrend) does that.
Nick,
“I don’t think too much should be made of the lags the regression identifies.”
I think you are mistaken about that. The aliasing of volcanic and solar forcings, combined with unconstrained lags causes the regression to assign the high solar sensitivity. To be more clear than I was in the post itself: if the regression can assign 1) high and instantaneous response to solar forcing and 2) adjust volcanic lag to a lower than realistic level, the response of the temperature in the early part of 1975 to 2012 can be very well explained by a combination of very high sensitivity to solar forcing, very rapid solar response, combined with very low volcanic sensitivity and somewhat faster than realistic volcanic lag. It is why F&R get perfect confirmation of their presumed linear trend.
Nick,
Suppose an analyst, let’s call him George Stokes, believes that the true underlying trend in a data set is a straight line, but that the trend is distorted by several other higher frequency influences, influences which he suspects he has identified and measured (for which he has data). George decides to linearly detrend the data, and then regresses the detrended data (that is, residual after detrending) against the data for the several suspected high frequency influences. The regression does one and only one thing: it finds the best coefficients for the several high frequency influences to minimize (in an RMS sense) deviation from zero in the final model. IOW, George sets up the regression to see if it can confirm his belief that the underlying trend is a straight line.
Steve,
What your process does is take the signal and divide it into three parts. One it identifies as secular, according to your selection. One is identified as exogenous and the rest is residuals. The secular is then put back with the residuals.
Suppose you should have chosen a different secular, or secular set. Then the difference is split between the three classes. It’s the component that goes into the exogenous that causes error. But the exogenous are not well-shaped to fit a smooth difference, so they will probably only take up a small share.
Anyway, what counts here are the CIs of the various coefficients. I see in your post-1975 table that zero is well within the CIs for quadratic and cubic. The regression does not rule out linear. I see too that you had a 7-month lag for ENI; I had a 9-month lag overall. Not so different.
I’m doubtful about the CI’s for higher order polynomial terms. They are far from orthogonal, and there will be a covariance matrix with big off-diagonal terms. I’d recommend using Legendre polynomials, or some other orthogonal set.
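For concreteness, a sketch of that suggestion: map the regression period onto [-1, 1] and regress on Legendre polynomials, whose near-diagonal Gram matrix avoids the big off-diagonal covariance terms of plain powers of t:

```python
import numpy as np
from numpy.polynomial import legendre

n = 456                           # monthly points, 1975-2012
t = np.linspace(-1.0, 1.0, n)     # regression period mapped to [-1, 1]
L = legendre.legvander(t, 3)      # columns P0(t) ... P3(t)

# The Gram matrix is near-diagonal, so the coefficient estimates are far
# less entangled than with 1, t, t^2, t^3.
print(np.round(L.T @ L / n, 3))
```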
Nick (#117153) –
Why do you describe ENSO as exogenous? It strikes me that ENSO is no more exogenous than, say, PDO or AMO.
Nick,
Yes, I agree with your description of the “partitioning”.
.
“The regression does not rule out linear. I see too that you had a 7 month lag for ENI; I had a 9-month lag overall. Not so different.”
.
Of course the regression can’t rule out linear! But the bigger point is that linear is not the most likely secular trend. The F&R claim that the underlying rate of warming has not changed (equivalent to saying the secular trend function is linear) is less likely true than false.
.
WRT the 7-month lagged ENI: just to be clear, the ENI represents how measured temperatures in the tropics trail the measured Nino 3.4 index via low pass filtration. The 7-month lagged ENI is a 7-month offset (not a low pass filter/exponential decay) and seems to represent the influence of tropical temperature variation, driven by ENSO, on non-tropical temperatures. I think the 7-month delay is an indication of heat transport between tropical and non-tropical regions. It does not tell us much about how Earth responds to forcing from solar variation or volcanic aerosols.
.
For me the bigger issue is that the short response times are contrary to plausible response profiles from volcanoes. For example, Wigley et al (2005) relate the exponential decay constant to climate sensitivity (decay is slower for higher climate sensitivity); they suggest a decay constant in the range of 30 months, even for very low assumed climate sensitivity.
.
The flexibility of setting different sensitivities for solar and volcanic forcings allows the regression to assign those sensitivities which will give the best regression fit to the assumed secular trend. If that assumed trend is linear, the regression reports a solar sensitivity four times larger than volcanic. This strikes me as improbable at best.
So the surface is not warming, but the >700 meter depth is, according to these latest claims.
What is the mechanism whereby water heated at the surface moves that heat down to >700 meters, bypassing the ~2000 feet of water in between the surface and 700 meters?
hunter,
The conclusion of the post was that warming has most likely slowed in the last 14 years compared to the 1979 to 1997 period. What does your comment have to do with that?
Hunter, “What is the mechanism whereby water heated at the surface moves that heat down to >700 meters, bypassing the ~2000 feet of water in between the surface and 700 meters?”
I believe it is called diapycnal mixing: a fluid sliding across a density boundary layer, like making a layered cocktail. In the NH, the North Atlantic especially, deep water formation is more dependent on temperature than salinity, since it is already more saline. Since there is not “a” thermocline, but a variety of isothermal density layers, the deep water forming can slide in above or below any of the established layers. Changes in diapycnal mixing would be related to surface wind variations near the polar convergence zones, sea ice extent, bottom topography, etc.
SteveF: So if it has slowed, what are the chances of it being part of a normal natural cycle rather than needing other explanations?
RichardLH,
“So if it has slowed, what are the chances of it being part of a normal natural cycle rather than needing other explanations?”
.
Hard to say. Some will claim aerosol emissions from increasing coal burning in China are responsible, some will say that there are natural cyclical processes which caused both “higher than normal” warming before 1998 and “lower than normal” warming since 1998. Some will suggest increasing heat accumulation in the ocean below 2000 meters. My post only addresses the question of whether a change in the underlying secular trend is more likely or less likely than an unchanged linear secular trend (I conclude a recent reduction in underlying trend is more likely). If pressed to make my personal best guess, I would suggest the influence of natural cyclical processes (changes in thermo-haline circulation, etc.) is the most likely explanation for the recent slowing in warming rate.
.
Steve F,
My question is valid whether the alleged heating has slowed, stayed the same, or increased: what mechanism transports heat from the surface to the sub-700 meter level and leaves the intervening ~2000 feet of ‘boundary’ water unchanged?
Dallas,
Diapycnal mixing seems to be too poorly understood to be a satisfactory mechanism for heat transport to get the amount of credit it is receiving with regard to my question. But it is a fascinating topic in its own right. The diapycnal mechanism may offer some important insights on the El Nino/La Nina oscillations, which of course are global weather changing movements of warm and cool water.
Hunter,
Whether or not your question is ‘valid’ in some absolute sense, it seems to me it has nothing to do with my post. You could as easily have asked why the Red Sox collapsed in September, and then claim the question is ‘valid’. I am more than happy to answer relevant questions… those actually related to the content of the post.
.
If you want to discuss ocean warming profiles, I suggest you find a thread related to that subject.
Hunter, diapycnal mixing is one of those million dollar prize problems. The rate of change in deep ocean uptake is around 0.8 C per 400 years, but it is not a driver of ENSO, more coupled with the AMO/SO. Most of the main stream Climate Scare types like to avoid the longer term oscillations, but there are recurrences in the 150, 400, 1000, 1200 and 1700 years ranges that appear to be related to ocean mixing efficiency rather than “Solar” orbital cycles.
http://upload.wikimedia.org/wikipedia/commons/6/63/Pack_ice_slow.gif
That is the main pump; the where determines the temperature. Since there is about a 10 C temperature gradient over a degree or two of latitude in the SH Antarctic convergence zone, it doesn’t take much of a shift to produce an impact, which has a delayed impact on the Arctic and vice versa.
Toggweiler and Brierley are a couple of good names to Google on the basic ocean stuff.
Judith Curry is supposed to have a post on diapycnal mixing, which also impacts gas sequestration. It is extremely complicated, so I prefer the layered cocktail analogy.
SteveF, I believe Hunter was responding to a comment, not the post.
dallas,
Please point out which comment you think hunter was responding to. I have read them all, and I see nothing remotely relevant.
“For the diagnosed divergence between solar and volcanic sensitivities to be correct, there must be an additional mechanism by which the solar cycle substantially influences Earth’s temperatures, beyond the measured change in solar intensity”
Not only does the total power output of the sun change, but the spectrum changes too. The amount of uv changes a lot more than the total heat flux
http://www.sciencemag.org/content/244/4901/197.abstract
The sun’s total irradiance decreased from 1980 to mid-1985, remained approximately constant until mid-1987, and has recently begun to increase. This time interval covered the decrease in solar activity from the maximum of solar cycle 21 to solar minimum and the onset of cycle 22. The sun’s ultraviolet irradiance also decreased during the descending phase of cycle 21 and, like the total irradiance, is now increasing concurrently with the increase in cycle 22 activity. Although only 1 percent of the sun’s energy is emitted at ultraviolet wavelengths between 200 and 300 nanometers, the decrease in this radiation from 1 July 1981 to 30 June 1985 accounted for 19 percent of the decrease in the total irradiance over the same period.
DocMartyn,
OK, the solar spectrum clearly changes over the solar cycle, especially in the UV part of the spectrum. Please explain or provide references to how that spectral shift relates to a large zero-lag response in temperature. What is the proposed mechanism by which the shift in spectrum generates a substantial solar influence with little or no lag?
SteveF (Comment #117131)
June 25th, 2013 at 4:38 pm
Dallas,
Well, F&R consider them separately, and they get a zero-lag solar response. If much solar energy is absorbed by the ocean well below the surface (and clearly it is), then a zero-lag response to solar intensity variation makes even less sense…. How does all that heat deposited well below the ocean surface not have to warm the thermally massive well mixed layer?
Dallas,
I still see virtually nothing related to my post.
SteveF, neither do I, that is the only thing that I could find that he was responding to, “How does all that heat deposited well below the ocean surface not have to warm the thermally massive well mixed layer?”
Let’s consider some facts about the underlying trend. Is the Arctic warming? Yes, like everywhere else, with a long-term trend for 500 years rising out of the Little Ice Age at the rate of about half a degree per century, due to turn to cooling at least within 200 years. But is there a hockey stick? No.
http://img854.imageshack.us/img854/2865/xkbx.jpg
In fact the Arctic is no hotter than it was in the late 1930’s and early 1940’s.
http://img843.imageshack.us/img843/5030/pso0.jpg
Is there a super-imposed 60 year natural cycle that caused all the alarm during the 30 years of rising prior to 1998? Yes.
http://img27.imageshack.us/img27/2496/otc3.png
But it’s all natural – every bit of it. And it’s nothing whatsoever to do with carbon dioxide, radiative forcing, back radiation, greenhouse effects or any such travesties of physics.
SteveF,
I apologize for the tangential question.
Your topic is plenty substantive and needs no distractions.
dallas,
Thank you for your patience and insights.
SteveF (Comment #117189)
June 26th, 2013 at 2:24 pm
RichardLH,
“So if it has slowed, what are the chances of it being part of a normal natural cycle rather than needing other explanations?”
“Hard to say.”
I believe that there are short term 1-15 year natural cycles already recorded in the data that will drive the immediate future and will have a larger impact on the resultant figures.
This is in addition to any longer term warming from CO2.
I note that 4 years is the true solar year and recommend that any ‘Normals’ created for anomaly analysis should be 1461 days long rather than 365.
Using a 4 year ‘Normal’ means that you are now truly ‘sun referenced’ in time terms, rather than floating/jittering as with a 365 day reference. This is just matching viewed outputs ‘in sync’ with the overall solar input really.
Easy to see why that could modulate temperatures in a 4 year pattern that is otherwise destroyed (with one year ‘Normals’).
We used to do this sort of waterfall pattern analysis all the time in the past when looking for low level signals in the presence of large amounts of noise.
When things ‘line up’ the underlying patterns jump out at you.
Think of it like treating the data as a byte stream rather than just a bit stream 🙂
RichardL, Pardon the butt in, but I have checked the standard deviation of quite a few paleo records and “natural” variability is on the order of +/- 0.8 C.
A paleo paper by Brierley et al, “On the Relative Importance of Meridional and Zonal Energy Flux,” estimated that meridional flux had roughly a 3.2 C impact and zonal about a 0.6 C impact. Since the average NH SST is 3 C warmer than the SH SST, that should be an indication of the meridional impact range. The zonal, ENSO on steroids, is less obvious, so this is my new favorite chart,
https://lh5.googleusercontent.com/-yCVnY6nXIiQ/UZmVEhGt-oI/AAAAAAAAIJs/EozQSkgn614/s817/IPWP%2520spliced%2520with%2520cru4%2520shifted%2520anomaly%2520from%25200ad.png
Instead of a hockey stick, it is more of a pooper scooper.
dallas (Comment #117213)
June 27th, 2013 at 4:10 am
“RichardL, Pardon the butt in, but I have checked the standard deviation of quite a few paleo records and “natural” variability is on the order of +/- 0.8 C.”
I have not looked at anything beyond the thermometer and satellite records so far. I very much doubt that we will find any proxy records that have the 1-15 year cycle resolution that I would be looking for in any case.
Thinker,
I agree there is evidence of pseudo-cyclical behavior at several time scales. And thanks for injecting some humor into the thread by showing us a hint of your profound understanding of radiative physics.
The details of the maths of this pair of posts are beyond my modest training… no Dunning-Kruger for me, in this instance. Notwithstanding, I do see some take-aways in the high-level comments.
With Feynman-like plain English, SteveF laid out his understanding of the limits of these types of regression analysis: while the r^2 value achieved can help evaluate the plausibility of the analysis’ underlying assumptions, it can’t serve as proof of those assumptions, no matter how close to 1.
More controversially, SteveF claims that every term used in a regression analysis should be scrutinized with respect to its reasonableness and plausibility. For climate modeling, this must refer to the underlying physical processes.
If one accepts these two notions, then the runs in these two posts help illuminate one of the most troublesome and open-ended questions in climate modeling, the impact of aerosols on surface temperature. While SteveF’s topic is volcanic aerosols, his reasoning would apply equally to anthropogenic ones. Tightening the boundaries of the plausible magnitude of aerosols’ effects on temperature would allow for more critical (i.e. better) evaluations of competing climate model hindcasts and forecasts.
However, it seems that many or most Climate Scientists do not agree with SteveF that regression coefficients should be closely evaluated for their reasonableness, plausibility, and consistency, as a matter of course. It would be a sign of real progress if these Scientists felt obliged to explicitly defend their stance, rather than letting it pass as an unstated background assumption. That might be the post’s most constructive contribution to the field.
AMac,
“The details of the maths of this pair of posts are beyond my modest training…”
.
I think you sell yourself short. The math is not difficult, there is just a fair amount of it.
.
With regard to reasonableness/plausibility/consistency, that is absolutely the point. As I wrote in my first post, the implausibility of the diagnosed lags in F&R, combined with widespread fawning acceptance of the F&R results, is what motivated me to address the subject. Evaluating the plausibility of any result is always needed. The acceptance of implausible behaviors as ‘real’ is just very bad science, independent of field. It falls in the general category of ‘no reality check’.
Steve, I think this moves our understanding forward and again, excellent work.
My biggest reservation remains understanding the effect on uncertainty of this type of regression against quantities that contain signal components which are incoherent with the temperature signal you are regressing against.
We’re back to the comment I’ve made previously: simply because you’ve removed variance from the original signal doesn’t mean you’ve reduced absolute uncertainty.
@AMac: If you read papers and postings by any skilled statistician, economist, etc, you will see that they do in fact look carefully at how reasonable their coefficients are. The lack of similar rigor in Climate literature continues to shock me.
Reality checks work in both directions. If you read a skilled economist, like Andrew Gelman, you’ll find that they actually advise keeping variables in a model even if the coefficients are not statistically significant, IF the variables are meaningful and motivated by theory. Conversely, they advise you to reconsider your model if coefficients have an unexpected sign or are unusual in some other way, even if they are statistically significant.
Most Climate literature strikes me as being written by undergraduate seniors who are very clever. They’re very sure of themselves and they are quite skilled in certain techniques, but they simply don’t get the Real World.
Carrick,
Thanks.
I agree that assigning uncertainties is not simple (at least beyond the stated uncertainties for the regression coefficients). Of course, I was trying mainly to show that the data are more parsimoniously described by a non-linear secular trend than by a linear secular trend. I think it is also pretty clear that the aliasing of volcano and solar forcings is leading to some pretty strange regression results (I have not searched the literature, and others may have long ago recognized this). I am not sure how to address that, but maybe it would be informative to run the regression with solar withheld and see what is diagnosed for volcanoes alone, then withhold the volcanoes and see what is diagnosed for solar alone.
SteveF,
The Lower-Stratospheric Response to 11-Yr Solar Forcing:
Coupling to the Troposphere–Ocean Response
LON L. HOOD AND BORIS E. SOUKHAREV
ftp://ftp.lpl.arizona.edu/pub/lpl/lon/stratosphere/hood12jas.pdf
You might like this quote:-
“For several reasons, it is unlikely that the decadal variation of low-latitude column ozone shown in Fig. 1a could be a fortuitous consequence of the two major volcanic eruptions (El Chichón and Pinatubo), which both occurred following solar maxima in this time period.”
Hood looks at ozone depletion and UV penetrance, as a function of flux.
Great minds think alike, as they say.
Meanwhile, over at Jeff Condon’s blog:
SteveF
The time constant is a fundamental property of the climate system’s thermal capacity and sensitivity. This is not just Wigley; this is built into the base equation all this linear feedback stuff comes from. If the regression produces values significantly different from accepted values of the time constant, it is a good indication that the regression is spurious.
This is what I suggested Nick should try. It’s interesting that the AIC test clearly favours 3rd order. This is the lowest order polynomial that can capture that segment of the 60 year cycle plus a multicentury trend. You could try removing the quadratic term from the p3; that would further reduce your parameter count and probably fit just as well.
Calling that a “local minimum” is laughable; AIC is showing that whole “locality” to be unsuitable, don’t use it. You are correct that higher orders will be interacting; this tells you something about how poorly the other regression variables are able to fit the data. This tells you to reject the whole regression of that particular formulation, not to ignore what AIC tells you.
…and since the regression can “convert” some of the cooling that was due to volcanoes into exaggerated solar cooling due to aliasing, the drop in the rate of warming after ~2000 can be ‘explained’ by the declining solar cycle after ~2001.
Your comment reveals you are trying to impose a preconceived idea on what you see. You say “exaggerated solar cooling” when the regression shows you something. You put ‘explained’ in quotes as if to cast doubt on it. Your bias apparently prevents you from considering that the other result may be showing “exaggerated volcanic cooling”.
You object to having a different sensitivity to solar but the solar “forcing” is the number of sun spots, it is a proxy for solar activity, it certainly is not fungible with estimated volcanic radiative forcing.
It seems obvious that the change in TSI is too small to have much effect. Allowing a different sensitivity allows you to investigate some other effect linked to solar activity.
Why don’t you report the tau time constant? That was one of your regression variables wasn’t it? It is also the one that allows a check on the sanity of the result. Perhaps you could add it to the post.
I like your “Limitations of the Regression Models”. This is an essential starting point. I don’t think any such regression can be meaningful until it has something that can account for the 9 year cycle that BEST/Curry just published on. You can get away with not having 60y if you use a 3rd order detrend on the 1950-present data. But the 9 y cycle is too big to be ignored.
By means of comparison look at how AMO (linear detrended index) compares to 9+22+60 year cycles put through the linear feedback model:
http://climategrog.wordpress.com/?attachment_id=403
That may explain the “exaggerated” volcanic forcing since it produces very similar variation without using either volcanoes or solar.
Steve Ta (Comment #117246)
Thanks. “Thinker” does use an Australian IP. That’s another Australian IP in automoderation.
LON L. HOOD AND BORIS E. SOUKHAREV
ftp://ftp.lpl.arizona.edu/pub/…..d12jas.pdf
OMG, yet another paper that does not understand that you cannot regress the input of a linear feedback model against its output.
Otherwise an interesting paper, but some of these guys need to drop into some first year maths lectures or something.
” I am not sure how to address that, but maybe it would be informative to run the regression with solar withheld and see what is diagnosed for volcanoes alone, then withhold the volcanoes and see what is diagnosed for solar alone.”
Worth trying, I think you’ll find ENI will be strongly compensating volcanics. IMO El Nino cycle (which as someone commented is not truly exogenous) is in fact the non-linear negative feedback that causes tropical auto regulation. At least in part.
In order to model this non linear tropical climate response while trying to retain everything as linear it is necessary to ‘invent’ a false exogenous ‘forcing’.
It’s a bit like using fictitious forces like centrifugal and Coriolis forces.
I think it is well past the stage where this needs to be recognised explicitly as what is being done. Otherwise why is an ‘internal’ climate cycle exogenous?
Actually, it’s not that simple. The trade wind data shows strong lunar and solar cycles, so to that extent it is truly exogenous as well.
who said any of this was as simple as regressing CO2 and global average SST?
Trade wind spectral analysis:
http://climategrog.wordpress.com/?attachment_id=283
“El Nino cycle (which as someone commented is not truly exogenous) is in fact the non-linear negative feedback that causes tropical auto regulation. ”
Clarification: the exogenous (cyclic) part is the luni-solar influence; there is also a manifestation of the non linear response of the tropics:
http://climategrog.wordpress.com/?attachment_id=310
If we can account for some of the tropical response as a presumed linear negative f/b, that leaves a residual part, negatively correlated to things like volcanism, to explain the remainder of the non linear response.
This will appear as a pseudo forcing that anti-correlates with the true exogenous forcing.
dallas (Comment #117213)
What’s the data source for your pooper scooper?
I find it useful to put that information on the x-axis label; there are too many unverifiable graphs floating around. I don’t believe anything unless I can get the data (look at the provider’s caveats/metadata) and plot it myself.
A prime example was WHT’s Samoan CO2. He said it showed one thing and called it “raw”. It was far from raw and showed the opposite.
Plot looks interesting. Source please.
Greg, the splice is described here.
http://redneckphysics.blogspot.com/2013/05/how-to-splice-instrumental-data-to.html
There is no “standard” method for splicing instrumental to Paleo, so I did my own.
Steve TA,
Very funny! I am happy you found that. The moderation requests come to me (and Lucia) on this thread…. I’ll leave any similar comments for Lucia to decide.
Greg Goodman,
Seems you have a lot more time to write comments than I do, so I will respond only to a couple of your points.
.
WRT withholding solar and volcanic separately, this is something I plan to try.
WRT the tau values, those are the inverse of the lag constants in the above graphs of AIC and lag constant: eg. 0.04 means 25 months for tau.
WRT my prejudices, I simply see the trade-offs the regression is making to improve fit when it reports a very large sensitivity to solar forcing.
WRT most of the rest, it seems that you reject as useless the entire effort to account for historical temperature variation using ENSO, volcanic forcing, and solar cycle forcing. That is OK with me. I think you are mistaken, of course, but have neither the time nor inclination to try to convince you otherwise.
If I thought it was useless, I would not suggest ways it could be improved or comment on what parts of it may show. I said it needs something that can reflect 9 and 60 year cycles. I said you may get away without 60 if you use p3 on post-1950 data. That still leaves 9.
Taking your result for best AIC gives a ‘lag’ of 0.065, so if I understand you, that corresponds to a time constant of 15 years for the linear feedback response.
Since accepted values are in the range 3.5-5 years, that’s rather wide of the mark.
One interpretation of that is that the regression is fitting something, but the answers do not match known climate response times. The most obvious conclusion is that the input ‘forcings’ being used in the regression are not those that drive climate, or that they are incomplete.
If that can be established it is a result , hence not useless.
Ah, no 15 _months_ , beg your pardon.
Still under half what is expected.
Greg Goodman, when the post 1975 regressions were repeated with solar and volcanic forcings combined, the diagnosed tau was ~23 months, not so far from published values, especially if low climate sensitivity is assumed. Wigley et al put it a bit under 30 months for low sensitivity. There is not a single lag constant but many. Short of an accurate ocean heat uptake model, it is hard to say what single tau value best represents reality. I do note that the model used by Wigley et al was found to be very consistent with the ocean heat uptake calculated by GCM’s, and those calculated uptakes are known to be high relative to ARGO measurements, so a tau somewhat below those in Wigley et al (and elsewhere) doesn’t seem surprising to me.
Linear Contribution 0.01335 +/-0.00897
Quadratic Contribution 0.0003753 +/- 0.000542
Cubic Contribution (-8.655 +/-9.18)*10^(-6)
Assuming that is kelvin and year (not months, please give units)
converting that to per century units:
1.34 +/- 0.9 K/ca
3.7 +/-5 K/ca^2
-8.6 +/-9 K/ca^3
So the detrend which is being fitted is a long term upward trend and what I would regard as being due to the 60y cycle, here approximated by the non-linear parts of the detrend fn fitted to just over half a cycle.
http://i43.tinypic.com/nv5lsl.png
Rather a short segment to correctly fit the curve. How does it change if you run from 1950 as you did before?
Doc Martyn, #117240
The stratosphere in general appears to be the red-haired stepchild of radiant physics problems for some odd reason. I modified an atmosphere profile like I did a few years ago for Lucia, when I was trying to explain the importance of Frame of Reference selection in thermodynamics.
https://picasaweb.google.com/118214947668992946731/Layers#5894520892279220850
You might find it interesting.
You need a lot more than 2012-1950 = 62 years to argue that you need to include a 60-year cycle. It will eat any excess variance that happens to be in phase with any “real” 60 year cycle, so it’s a very noisy basis function to be “forced” to include. And we don’t know that it is exactly “60” years, so really we have to fit that too, and that adds noise.
It might be useful to show what happens if you include an arbitrarily picked frequency like 60 years (instead of e.g. 56 years) out of a hat. In fact, it may even be plausible that such an oscillation is really present in the data (but I believe we lack a long enough period of reliable temperature measurements to establish this unambiguously).
But it certainly in no way is useless or pointless if you don’t include that in your regression, since the inclusion of that frequency component is itself really a bit of an arbitrary thing to do in this case.
No less arbitrary than assuming a linear trend, or a quadratic, or a cubic.

That’s the problem with all this linear detrending game. It’s so much a part of standard working practice in climate now, a lot of people seem to think it does not need justifying. Linear detrending is just part of normal data processing.

Yes, 60y is a bit arbitrary and it may be 58 or whatever. But it does fit rather well.

http://climategrog.wordpress.com/?attachment_id=403

Now Steve’s AIC showed p3 was ‘best’ poly detrend and I’ve shown that this is similar to lin+60y.
Looking at Steve’s figs 2 and 3, the detrend is not taking out the final down turn. It may do if he uses a longer period so that the cubic fit is allowed to better catch the long term pattern. It will do if he uses a lin+60.
As Steve’s pre-amble explained the choice of detrend somewhat arbitrary and the result self-referential.
” the diagnosed tau was ~23 months, not so far from published values, especially if low climate sensitivity is assumed. Wigley et al put it a bit under 30 months for low sensitivity.”
OK, so you’re in the right ballpark.
Linking solar directly as TSI is close to taking it out completely. That may be informative, since I don’t think it is direct TSI anyway. (You will note that my 9,22,60 test data does not have a circa 11 year term and it still fits quite well.)
It is interesting to note that 1/(1/9 − 1/10.8) = 54, so the circa 60 may reflect both SSN and the missing 9.
So if the fit is providing credible time constants for low sensitivity models when direct SSN is effectively taken out, then someone ought to be having a serious look for the origin of the 9 year periodicity.
SteveF
I suspect you noticed it did not get approved quickly. That was not merely because Jim is home and so we had a bunch of house tasks. (Clean out the basement. The presence of both of us is useful for making sure we keep things one of us wants. We have plenty of stuff neither of us wants that was accumulated long ago, dumped here by my Mom, whose ‘dream’ seems to be that everything she no longer wants will be ‘inherited’ by her kids and kept as ‘heirlooms’. I could describe these “heirlooms”, but needless to say, they take up space. Some are even quite nice and don’t take up too much space. After making enquiries to assure myself they liked it, I gave the pretty blue china place setting for 12, plus a full complement of serving stuff, to my nephew and his bride. It’s lovely, but I don’t care to decorate in blue… They love it! I have pink/red colored china. A color I love! And let’s face it: one set of fine china is enough! Some stuff is in a state of disrepair, bulky, worn out etc. Often things are nice, but the wrong color for my taste.)
I think the issue wrt solar and volcanic is one of accidental overlap, which is an issue pointed out by Steve.
It happens by chance that the 1982 and 1991 eruptions occurred approximately in cycle with the solar forcing, but the effect of this on the relative independence of the basis functions associated with SSN and volcanic forcings is profound, especially if you use a shorter time period like 1979-2012.
To see what effect having two basis functions that are nearly parallel has on a regression analysis, let’s look at the model:
$latex x_n = a f_n + b g_n + \varepsilon_n$,
Here $latex x_n, f_n, g_n, \varepsilon_n$ are respectively the data, the two functional forms (e.g., SSN and volcanic forcings) and the residual of the fit and $latex n = 1, 2, \cdots, N$.
This has the well known solution for OLS of
$latex \left[\matrix{a\cr b}\right] = {\cal G} \left[\matrix{\vec f\cdot \vec x\cr \vec g\cdot \vec x}\right]$,
where
$latex {\cal G} = {\cal H}^{-1} = \left[\matrix{ \vec f\cdot \vec f & \vec f \cdot \vec g \cr \vec g\cdot \vec f & \vec g \cdot \vec g \cr }\right]^{-1}$
where dot product implies sum over indices, e.g., $latex \vec f \cdot \vec g = \sum_{n=1}^N f_n g_n$.
Let’s assume the two vectors f and g are normalized but not orthogonal, so $latex \vec f \cdot \vec f = \vec g \cdot \vec g = 1$ but $latex \vec f \cdot \vec g= \cos\theta$. Then, we can show,
$latex {\cal H} = \left[\matrix{1 & \cos\theta\cr \cos\theta & 1}\right]$, $latex {\cal G} = {1\over \sin^2 \theta}\left[\matrix{1 & -\cos\theta\cr -\cos\theta & 1}\right]$,
and it’s easy to verify $latex {\cal G} {\cal H} = {\cal I}$, where $latex {\cal I}$ is the identity matrix.
If we can assume the $latex \varepsilon_n$ obey a Gaussian white noise distribution, then we can relate the uncertainties in the regression parameters $latex a, b$, which I write as $latex \sigma_a, \sigma_b$, to
$latex \sigma_n^2 = {1\over N}\,\vec\varepsilon \cdot \vec\varepsilon = {1\over N} \sum_{n=1}^N \varepsilon_n^2$,
via
$latex \left[\matrix{\sigma_a^2 & \rho_{ab} \sigma_a \sigma_b \cr \rho_{ab} \sigma_a \sigma_b & \sigma_b^2}\right] = {\cal G}\, \sigma_n^2$.
So we find $latex \sigma_a = \sigma_b = \sigma_n/|\sin\theta|$ and $latex \rho_{ab} = -\cos\theta$.
(This simplified form comes from the normalization of the two basis vectors. I recommend standardizing the basis functions, by which I mean normalizing and requiring their components also sum to zero, since this tends to improve the accuracy of the inversion process.)
Anyway, you can see the consequences clearly in this simple example of having highly correlated (nearly parallel) vectors in the regression analysis: you get a “noise amplification”.
You can get around this problem in a more formal way by replacing the inverse with singular value decomposition. It’s actually considered good practice to use SVD rather than ordinary inversion when performing regression analyses, exactly for this reason.
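To make the amplification concrete, here is a minimal numpy sketch (hypothetical basis functions and noise level, standardized as recommended below) that reproduces the $latex \sigma_a = \sigma_n/|\sin\theta|$ result by Monte Carlo, using an SVD-based solver:

import numpy as np

rng = np.random.default_rng(0)
N = 400
t = np.linspace(0.0, 1.0, N)

def standardize(v):
    # zero-sum components and unit norm
    v = v - v.mean()
    return v / np.linalg.norm(v)

f = standardize(np.sin(2 * np.pi * t))
g = standardize(np.sin(2 * np.pi * t + 0.1))   # nearly parallel to f
cos_theta = f @ g                              # ~0.995
sin_theta = np.sqrt(1.0 - cos_theta**2)

a_true, b_true, sigma_n = 1.0, 2.0, 0.1
X = np.column_stack([f, g])
fits = []
for _ in range(2000):
    x = a_true * f + b_true * g + sigma_n * rng.standard_normal(N)
    coef, *_ = np.linalg.lstsq(X, x, rcond=None)   # SVD-based least squares
    fits.append(coef)

print(np.std(fits, axis=0))   # Monte Carlo sigma_a, sigma_b
print(sigma_n / sin_theta)    # predicted amplification, ~10x the noise level

As the two basis functions approach parallel, sin θ goes to zero and both parameter uncertainties blow up, which is exactly the kitchen-sink-regression hazard being discussed.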
Couldn’t get the matrix formulas to work out on that post, so I deliberately forced it to display the gibberish LaTeX. Here’s a PDF version.
Greg:
No, it’s much more arbitrary, because when we assume the linear trend, we expect it to end after a while, whereas when we posit an oscillatory component, we expect it to last indefinitely.
The usual rule of thumb for oscillatory components is observing three to four periods.
Fitting a low-frequency oscillatory component when you have a nonzero trend can be very noisy, in the sense of “noise amplification”, in exactly the way I described above.
I gave a hand-waving description of the origins of the noise amplification for this case, but as with the example I gave above, this can be formalized.
Nick Stokes and I had a bit of discussion about the perils of fitting really low frequency signals (signals with a period on the order of the fitting interval) in this thread on his blog.
The coincidence of major eruptions with the SSN cycle is not limited to the last two. Most are on falling SSN or at the bottom.
http://climategrog.wordpress.com/?attachment_id=315
Stacking all six and taking the average SSN:
http://climategrog.wordpress.com/?attachment_id=317
If SST is similarly stacked, we see that the extra-tropics show a dip that could be attributable to volcanism, but also a downward trend that was already underway before the eruption:
This will certainly lead to exaggerated volcanic cooling by a regression that does not have a variable that corresponds to that prior cooling.
http://climategrog.wordpress.com/?attachment_id=277
In the tropics there is no discernible dip post eruption (!), other than that of a pattern that exists before and after.
Carrick (Comment #117271),
Yes. If you naively try to regress the Nino 3.4 index, displaced over a range of different lags (say 1 to 12 months), against temperature from 1998 to 2012, to try to generate an empirical relationship between Nino 3.4 and global temperature, you get nonsensical wild oscillation in the sign and magnitude of the regression coefficients for alternating months… an SVD would probably give quite reasonable coefficients.
Greg, I don’t see this.
Figure.
For the period 1980-2000, it looks like you have volcanic forcing with a period that approaches that of SSN, but phase shifted. It’s easy to imagine, when you start allowing the individual forcings to have their own lags, how you can end up with a singular regression matrix.
For the period 1960-1980, volcanic forcing has half the period of SSN, so these two basis functions have little overlap for that interval.
Carrick, I don’t think those little ones count for much. I took the six largest stratospheric eruptions. Agung was in the SSN trough; the other major two, El Chichon and Mt Pinatubo, fall on the steepest part of the decline. Krakatoa was similarly aligned. That’s why the stacked average has such an SSN peak a few years prior and a trough a few years after.
Averaging all major events, in an attempt to remove any fortuitous alignments with other variations, shows the tropics actually maintain the degree.day integral across major events. Only the extra-tropics take a hit.
Since land SAT varies at twice the rate of SST, I think using a global land+sea temp index for the regression blurs the relationships even further. I would recommend trying the same process against SST.
Oceans are where the major feedbacks happen and represent 70% of surface area. In view of what I’ve pointed out about the tropics it would probably make sense to also try the regressions against tropics and extra-tropical regions separately.
There are so many different climate responses getting binned together in global land+sea, it’s really muddying the waters unnecessarily when looking for correlations.
Carrick: “It’s easy to imagine, when you start allowing the individual forcings to have their own lags, how you can end up with a singular regression matrix.”
Sorry, I was rather missing your point. Yes, over 1980-2000 that would be true, but no one is doing that, I think. Over 1979-2012 it would not be singular, but it would still be close enough to raise the possibility of false attribution.
Greg:
This isn’t quite my concern, which has to do with the effects of noise amplification when you regress against signal components that aren’t orthogonal. It’s an issue you have to be concerned about when you start using these “kitchen sink” models, and something that definitely needs testing.
SteveF, is there any way with your code to convert it to SVD?
Carrick,
Sorry, no. It was all done on spreadsheets. I think there are some commercial Macros available for purchase that will do SVD, but these would no doubt have something of a learning curve. Were I not so busy, I would try to learn R; but R has its own steep and high learning curve. 🙂
I was just looking at this graph for something else and noted that I’d found a 0.7 K/ca average rate of change in SST over the last 60 years.
http://climategrog.wordpress.com/?attachment_id=233
Now that’s almost half the 1.34 K/ca from the linear component in Steve’s p3 detrending function fitted to land+sea.
However complicated you make it, I think trying to detect the underlying trend by looking at the rising half of an oscillation is going to be fraught with false signals.
Steve, how about running the same p3 that gave a clear AIC best but using 1950-present. My guess is the linear coeff will drop to between 0.9 and 1.0 K/ca (0.01 in your units).
I’ve already noted dT/dt for land is about twice that of SST. Approximating the land/sea area ratio as 1:2:
1/3*(0.7*2)+2/3*0.7=0.93 K/ca
Testable hypothesis: run your p3 regression 1950-present on your land+sea data and you’ll get something close to that value.
I’m guessing that the cubic fit will come out as something roughly like one sine oscillation. It may be nearer to 1.0, since AGW gets a little “correction” help in both hadSST and CRUT.
I removed the local SST influence on the seasonal cycle of the Mauna Loa data, and then found the residual by subtracting a power-law trend from the profile. Then I applied the BERN impulse response of CO2 sequestering to the carbon emissions data from the Carbon Dioxide Information Analysis Center. I compared the two and while I was at it, I also used the CO2 data from American Samoa. I used the Samoa data as is because there was no apparent seasonal data to the profile.
http://imageshack.us/a/img69/7626/hyc.gif
Note that Mauna Loa and Samoa agree and the yearly CO2 residual matches scale-wise and temporally.
As usual, I followed the mainstream climate science, and used only parameters that were agreed on elsewhere (adjustment time, pre-industrial baseline CO2). Everything else is data. This would beat any other model using AIC and BIC as a metric.
Yes, the proportional land/sea warming model suggests that the ocean surface temperature is rising at half the rate of land:
Proportional Land/Sea Global Warming Model
The ocean heat content model suggests that the ocean depths are also retaining half the effective thermal forcing generated by the radiative imbalance at the surface:
Ocean Heat Content Model
These two observations are not a coincidence. Whatever heat is gained as an uptake by the ocean depths (i.e. heat sinking) is not reflected by an ocean surface temperature increase. The land has no heat sink to speak of, so it reflects the radiative imbalance directly. This is straightforward heat capacity bookkeeping.
(1) T_ocean = 1/2 * T_land
(2) T_global = spatial mean of T_ocean and T_land ~ TCR
(3) T_land ~ ECS
(4) TCR ~ 2/3 * ECS = 0.7*T_ocean + 0.3*T_land = 0.65 * T_land
The last is based on the 70/30 split between ocean and land masses.
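As a quick check of that bookkeeping (normalizing the land warming to 1; the snippet just restates relations (1)-(4) above):

T_land = 1.0                              # normalize the land warming
T_ocean = 0.5 * T_land                    # relation (1)
T_global = 0.7 * T_ocean + 0.3 * T_land   # 70/30 areal weighting
print(T_global)                           # 0.65, i.e. TCR ~ 0.65 * T_land ~ 2/3 * ECS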
Isaac Held is getting close to pursuing this approach based on his latest blog post:
“38. NH-SH differential warming and TCR « Isaac Held’s Blog.”
This is all supporting the latest views of mainstream climate science, with the result that ECS is still close to 3C for a doubling of atmospheric CO2, while TCR is 2/3 of this value. The heat uptake of the ocean is obscuring the long-term trending to the eventual ECS, but this can be seen easily in the land temperature data (using CRUTEM or BEST).
Webster, right. Since the specific heat of land is about 1.8 J/(g·K) versus 3.9 J/(g·K) for salt water, land warms at about twice the rate of the oceans.
http://redneckphysics.blogspot.com/2013/05/the-elusive-global-surface-temperature.html
To figure out what is causing what, you need absolute temperatures IMHO, not anomalies. Then you can see the change in the hemisphere imbalance in degrees C. That is why I reference Brierley, Toggweiler etc.: internal heat distribution matters.
Then you can compare the ocean land temperature differentials by hemisphere.
http://redneckphysics.blogspot.com/2013/05/ocean-land-temperature-differential.html
Rock solid in the SH, drifting lower in the NH. Note that the NH drifts below the mean ~1985, when the “global” diurnal temperature trend shifts. Hmmm? What would that do to NH and SH paleo proxies?
Which brings us back to this,
https://lh5.googleusercontent.com/-yCVnY6nXIiQ/UZmVEhGt-oI/AAAAAAAAIJs/EozQSkgn614/s817/IPWP%2520spliced%2520with%2520cru4%2520shifted%2520anomaly%2520from%25200ad.png
According to Oppo2009, the Indo-Pacific Warm Pool is back to the millennial scale “normal”.
If you like you can use Lake Tanganyika,
https://lh3.googleusercontent.com/-rRs69Ekl9Zc/T_7kMjPiejI/AAAAAAAAChY/baz0GHWEGbI/s917/60000%2520years%2520of%2520climate%2520change%2520plus%2520or%2520minus%25201.25%2520degrees.png
It appears to be back to the millennial scale “normal” as well.
You can even use Nick Stokes’ revision of Marcott (with somewhat more realistic confidence intervals): same thing. Though if Nick split the Marcott proxies by hemisphere, it would be more entertaining.
That’s wrong, Cappy D, and I think you know it. The specific heat is only a part of the comparative analysis; far more important is the diffusivity, and the diffusion coefficient of the earth is orders of magnitude lower than the vertical eddy diffusivity of 1 cm^2/sec for the ocean.
The total heat capacity is proportional to the volume, and to how efficiently the heat can access the complete volume. The earth’s effective volume is very shallow, since the low diffusivity (or thermal conductivity) prevents heat from penetrating far underground. Obviously, the high diffusivity of the ocean allows a surface impulse of heat to travel hundreds of meters over the course of time. That is what the OHC experiments are showing.
WHT:
You still don’t get it, do you? I corrected you on this the last time you posted your ‘half the heat is disappearing’ hypothesis, and you totally ignored it rather than either accept or refute it.
You are totally ignoring SPECIFIC HEAT CAPACITY; temperature is not heat.
“Proportional Land/Sea Global Warming Model”:
That has nothing to do with ‘kriging’; it is because hadCrut IS 70% hadSST plus 30% CRUT. LOL
The only reason it does not overlay perfectly is because you did not even read the metadata and you used the wrong version of hadSST.
Get a grip. That kind of silliness really is a waste of time.
” the vertical eddy diffusivity of 1 cm^2/sec for the ocean.”
Eddy diffusivity rather depends upon having eddies. That explains the heat being fairly well connected in the mixed layer. What’s the eddy diffusivity at 100m or 200m?
Yeah, odd that, isn’t it.
So you say half the heat must be going into the ocean, and Trenberth says it’s a travesty that we can’t find it.
It must be some of that ‘black heat’. Like dark matter, we know it must be there because our equations say so. We just can’t find it because it does not interact with matter.
Still, once they correct the data a bit more it won’t seem so bad.
WHT ” I also used the CO2 data from American Samoa. I used the Samoa data as is because there was no apparent seasonal data to the profile.”
There wasn’t in the data you linked to because, again, you had not read the metadata and did not realise it had been smoothed with a 20y spline.
If you get the data from Scripps as I said, you find it does, though only about 2 ppm pk-pk rather than 5 ppm at MLO. It also has a strong 6m component, typical of temperature variations in the tropics.
You seem quite good at some aspects of the maths, but you should take more time to find out what data you are using.
Webster, in the long run the specific heat capacities will tell the tale. Ocean is ~4 J/(g·K), land ~2 J/(g·K) and atmosphere ~1 J/(g·K). Think about it.
That is why I also use deep ocean and upper atmosphere references instead of playing with all the “surface” noise. With the average temperature of the deep oceans at 4C (334.5 Wm-2), the average temperature of the stratopause at 0C (316 Wm-2) and the average temperature of the turbopause at -89.2C (65 Wm-2), there are three solid frames of reference with very little variability. Think about those numbers. We live on a water world; water temperatures tend to dominate. That is why more attention is being paid to water vapor greenhouse versus dry gas greenhouse by the real modelers. Unless CO2 can change the freezing point of water, it has limited forcing potential.
Goodman, OK, so you have captured some sort of rules that I am violating. Why not work together and make the analysis better? It is clear that you are moving in the direction that I have been pursuing, since you make the same claim that I am making in suggesting that the land temperature rise is 2× the SST rise.
So you say that my application of the HadCrut data is inconsistent. My point still is that one can reconstruct the global temperature accurately by flipping the land temperatures as 2X the ocean temperatures and the ocean temperatures as 1/2 the land temperatures, and then compositionally adding these for each year in the data records. This is a fundamental basis that you are pointing out and I am just confirming. I realize that this is more difficult since 0.71 is close to being 2 * 0.29, but as long as the land/ocean area ratio is not 0.666/0.333 we can try to squeeze the information out. (I have to assume that your “LOL” is partially self-directed.)
And then you have issues with the diffusivity I used in the OHC model. Hansen and others have long assumed these values.
In addition, you say the Samoa data has a 20 year spline filter. That is hard to believe, given that I showed the data as a set of rather noisy points.
I have no problem with you taking my arguments apart. I am also trying to look for holes in the work of the climate scientists. My simplification approaches are attempts at looking for inconsistency. So far I haven’t found much to write home about. It all seems to make sense.
Cappy said:
You can’t continue to just “think about these numbers” as if this will magically provide a comprehensive analysis. At some point, you will have to pick up a pen and paper and actually do the math and create some comparative models.
You may have the biggest brain in the world in being able to compute it all in your head, but since it only resides in your head, there it will remain. I can’t and have never been able to figure out what you are talking about based on the game of “20 Questions” that you seem to prefer, so am of little help.
OK, Mr. Telescope, stop addressing me as “Goodman” and we’ll see if we can work together. We have complementary skills yet opposing views about the result; that could be a productive mix.
BTW, it would be more appropriate to take this to your blog rather than squatting on this thread, but you don’t allow me to just post under my own name and it dumbly refuses to accept my wordpress handle.
Please configure your blog so that I can post without going through an online ID check and a body scanner, if you want to work together on this.
I presume we are now agreed that plotting hadCRUT4 against 0.7*hadSST3+0.3*CRUTem4 and fitting the slope of the scatter plot is just checking that 0.7+0.3 still equals unity. Note that the fitted slope is slightly less than unity.
The next graph, where you plot SST against CRUT, is more interesting. (My estimate of 2x was just a quick eyeball scaling of the derivatives.)
Now here you make the classic OLS error, data processing mistake #3 in the climatology top ten all-time greats:
OLS regression is based on minimising the y errors and assumes the x-error is negligible by comparison. If you do this on a scatter plot with significant error/uncertainty in each variable, OLS regression will systematically under-estimate the slope. How much depends upon the nature of the data.
Unfortunately, there is no quick answer to get the right result; however, a first step is to flip the axes and do the OLS the other way around. This gives an under-estimation on the other side and bounds the range of the correct answer. By the look of it the range is not huge. One non-rigorous estimate is to take the average of the two slopes, as in the sketch below.
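A minimal numpy sketch of this two-way OLS bounding (synthetic data: the noise levels and the true slope of 2.0 are made up for illustration):

import numpy as np

rng = np.random.default_rng(2)
N = 1000
true_slope = 2.0
sst_true = rng.standard_normal(N)
land = true_slope * sst_true + 0.5 * rng.standard_normal(N)  # y has error
sst = sst_true + 0.5 * rng.standard_normal(N)                # x has error too

slope_y_on_x = np.polyfit(sst, land, 1)[0]        # biased low
slope_x_on_y = 1.0 / np.polyfit(land, sst, 1)[0]  # biased high
print(slope_y_on_x, slope_x_on_y)                 # true 2.0 lies in between
print(0.5 * (slope_y_on_x + slope_x_on_y))        # crude average, as suggested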
I also note that the positive quadrant of this plot has a lot less spread. This will be the recent data with fewer “issues”. That may provide a more reliable slope. It is clearly less than that of the whole dataset. That difference may be climate information, but I suspect it tells us more about sampling bias, “bias correction” bias and UHI.
Eyeball guess: that quadrant will give the two OLS slopes as close to 0.5 and 2.0.
This will have an impact on the residual errors and the ensuing quadratic that you solve. It will be interesting to see whether the new quadratic error correction changes the form of the ocean/land ratio plot.
That I find to be the most interesting graph so far. But before trying to interpret what it ‘means’ you need to see whether it changes when you correct the OLS error.
Webster, I have already shown you the comparisons of OHC to surface air temperatures. There is no lag to speak of, and the response is different in the NH and SH. That is not consistent with WMGHG forcing. With the 4, 2 and 1 J/(g·K) SHC proportions plus the “active” land/ocean area ratio, you are locked into a 2:1 regardless of cause. When you take the noisiest data to “prove” the point, you are just digging a deeper hole.
I don’t have a definitive attribution split for the same reason no one else has a definitive attribution split, too much noise and instrumentation uncertainty. Curry’s 1/3:1/3:1/3 is about as good a guess as any.
What I do have is a “sensitivity” estimate based on more stable references. 0.8C is based on the “average” ocean temperature (4C, 334.5 Wm-2), the effective black body cavity, and the ~334 DWLR response of the atmospheric radiant “shells”. That would be doubled by the land mass/ocean SHC ratios; and since SH land includes the Antarctic, which has an anti-phase response to radiant forcing, that offers an explanation for the differences between the hemispheres: NH land amplification.
WebHubTelescope (Comment #117292),
I read Isaac’s N/S analysis, and I liked it very much, since it does the kind of ‘reality check’ that I think is always needed. My conclusion was that he found the most likely transient response to be ~1.3C per doubling. I asked him if my understanding was correct, and he confirmed it was, although he also noted that his analysis could not exclude somewhat higher values. If ECS is 1.5 times the transient response, and that seems reasonable (though a bit higher than I would guess), then that puts Held’s estimate of ECS at ~1.95C, not far from my best guess (~1.8C).
.
With regard to the “latest views of climate science”: paradigms in many fields change only slowly, even in the face of obviously conflicting data. You can see this in the flurry of papers being written to explain the recently slower rate of warming (F&R being one of these) and to explain the glaring divergence between GCM projections of warming and reality, while maintaining ~3C as the ECS value. In the case of climate science, I suspect the tendency to cling to “about 3C per doubling” is very strong, for lots of reasons, the biggest of which is the enormous intellectual/professional/financial investment in GCMs and their projections. Consider for a moment the number of high profile papers that have been written based on GCMs; if the GCMs turn out to be way off in their projections (say, by almost 50%), then hundreds (thousands?) of papers, and much of the work that went into those papers, will be shown to be either wrong or highly doubtful. That is a hard thing for many people to accept. (Yes, even climate scientists are people. 😉 )
.
Some in the field are beginning to revise their “most probable” estimates of ECS downward; Isaac Held appears to be one of these. There are also a number of papers being published that show empirical (not model based) estimates of ECS with most probable estimates that are well below 3C. So I think the field will (very gradually) move toward lower ECS estimates, but modelers are likely to be the last to revise their estimates downward. There is a lot of inertia involved.
“…. for lots of reasons, the biggest of which is the enormous intellectual/professional/financial investment in GCMs and their projections. ”
AND political. There seem to be an awful lot of these guys who let their personal environmental concerns (which I probably share) taint their work to the point that they lose all objectivity.
Many still seem entrenched in a cause or crusade and would like the world to commit to reducing CO2 “anyway”. Those will try to string this out long enough for the post-Copenhagen process to set up a huge international green “slush” fund with no legal oversight, before they admit that, “in the light of new data…”, it is not as bad as we feared.
Obama’s recent speech about controlling CO2 “like other toxins” is not encouraging.
Dallas: “With the 4, 2 and 1 J/g SHC proportions plus the ‘active’ land/ocean area ratio, you are locked into a 2:1 regardless of cause.”
I’m suspicious of this 2:1. It seems too easy to define an “effective” land SHC based upon average moisture. It’s too much like a climate model fudge factor for my liking, though it must be roughly about that value, being somewhere between water and dry rock.
I’m also curious that it seems to fit inter-annual changes just as well as the 120 year change.
Your use of the word active is probably the key: it is that stable because of feedbacks, not passive SHC ratios. If average land moisture does put it at around 2, the same as ice, we should probably be regarding Arctic ice as ‘land’, and your active land/ocean ratio could be relevant.
SteveF: What do you think of this presentational view of the UAH data?
Original from DR R Spencer (http://www.drroyspencer.com)
http://i1291.photobucket.com/albums/b550/RichardLH/UAH_LT_1979_thru_May_2013_v5_5_zps42d9d2a9.png
Additional cascaded low pass filters of 12-37 months added
http://i1291.photobucket.com/albums/b550/RichardLH/uahtrendsinflectionfuture_zps7451ccf9.png
Comparison between individual filter stage outputs (IPCC CO2 temperature rate, the final stage, can be set as required)
http://i1291.photobucket.com/albums/b550/RichardLH/UAH-Comparisonofcascadedlowpassfilteroutputmonthsrunningaverage_zpsf1883e44.png
Isaac Held NH/SH etc:
I like the intentions, but assuming multi-decadal variability is internal, symmetrical and has no effect is a rather big ‘as little as possible’.
Right on. The day we start looking at the tropical/ex-tropical divide too, we may start making progress.
Greg: “If average land moisture does put it at around 2, the same as ice, we should probably be regarding Arctic ice as ‘land’, and your active land/ocean ratio could be relevant.”
Exactly. Moist or ice covered “land” above about -20C is forced into the 2 range. Between -20 and -40C you get into the anti-phase response range. So Arctic ice would be like land; Antarctic ice would be “inactive” or counter-active with the SHC relationship.
To cut out the noise, 60S-60N provides a better ocean/land relationship.
ERSST3 has 30 degree bands for ocean, land and combined that I use since I don’t want to get too involved with novel data sets.
https://lh3.googleusercontent.com/-iS0fQarNeAU/UcXH3jzHHDI/AAAAAAAAIvo/dV2diJCpaOQ/s708/sopol%2520surf%2520ls.png
That is Amundsen-Scott South Pole versus UAH SoExt lower stratosphere. You can see the kind of noise that complicates the “global” surface temperature correlations.
RichardLH,
I am not sure what you are trying to say with those low pass filtered curves. Can you spell it out?
SteveF: Sure. By cascading low pass filters in this way you can easily ‘visualise’ where the ‘energy’ is in time terms. Using them as a bandpass filter (by comparing the differences between stages) allows easy visualisation of any periodic structure longer than 1 year.
And out pop 3 year (I think it is probably 37 months, but that is for later), 4 year, and 12 year (4*3) cycles.
That is just by re-examining the UAH data and treating it as though it were any other signal source and looking for patterns.
The CET result comes from seeing the 4 year pattern in the UAH data and creating ‘normals’ 4 years long, not 1. That is the true solar year cycle of 1461 days (4 × 365.25).
Then out pops a pattern down the whole history of the CET where some days in the 4 year cycle have consistently been colder or warmer than in other years.
+0.4C to -0.3C warmer or colder than an ‘average’ year.
That is almost certainly the strong 3 year pattern from the UAH beating with 4 years, and if it really is 37 months then the harmonics stretch way out in time.
Oh, and by the way: if it is mainly natural cycles of these forms and periods, then the future is slightly less hard to see ;-)
RichardLH,
Matching data to sine waves may be entertaining, but I think there are two real problems: 1) you have to accept that obvious causal changes (cooling due to volcanoes, for example) are insignificant, and 2) you have to accept that there are unspecified cyclical mechanisms which are acting to generate all observed variation.
.
Sorry, I just don’t accept either of these propositions; they are for me not at all credible.
This is not matching anything to anything. It is just adding more averages with longer spans to the same data and then presenting them all on one graph.
No external sine waves have been added, even though it may look as though they have.
In the UAH graphs, the data is unchanged from Roy’s site. Likewise for the CET series from the Met Office.
I am just extracting what cycles have been recorded by using low pass filters and comparing them. The data says that these cycles occurred. Why and what causes them I am not sure yet. But I do have thoughts, if those two frequencies of 37 months and 48 months are correct.
Take any column of temperature data in a spreadsheet you wish to examine.
In the next column place a centered running average of a given span.
Drag down over the whole input data.
Repeat for additional columns to provide the number of filter poles you wish to add, but limit the input of subsequent stages to what comes from the previous stages.
I use the series below in cascade to remove digital sampling artifacts.
Running average time spans (each next span X2 = X1 * 1.3371):
X1    X2 = X1 * 1.3371
3     4.0113
4     5.3484
5     6.6855
7     9.3597
9     12.0339
12    16.0452
16    21.3936
21    28.0791
28    37.4388
37    49.4727
49    65.5179
66    88.2486
88    …
Grab the whole data and plot on a scatter graph.
Lo and behold, a reasonable visualisation of where the periodic structure is in the record.
C.f. the UAH series with the periods mentioned.
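One possible reading of that recipe, as a minimal pandas sketch (the data, spans and stage choices here are illustrative only; successive spans follow the ~1.3371 ratio from the table):

import numpy as np
import pandas as pd

def cascade(series, spans):
    # each stage low-pass filters the output of the previous stage
    for span in spans:
        series = series.rolling(span, center=True).mean()
    return series

rng = np.random.default_rng(1)
months = np.arange(480)
data = pd.Series(np.sin(2 * np.pi * months / 37)     # a "37 month" cycle
                 + np.sin(2 * np.pi * months / 48)   # a "48 month" cycle
                 + 0.5 * rng.standard_normal(480))

stage1 = cascade(data, (12, 16, 21))     # one low-pass stage of the cascade
stage2 = cascade(stage1, (28, 37, 49))   # next stage filters stage1's output
band = stage1 - stage2                   # "bandpass" by differencing stages

The difference of two adjacent stages passes the 3-4 year structure while suppressing both the sub-annual noise and the slow trend, which is the visualisation being described.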
RichardLH,
Sorry, I misunderstood what you were doing. The dominant influence in the satellite data over periods of up to a few years is ENSO, which is responsible for most of the temperature changes. The lower troposphere has a larger reaction to ENSO than does the surface. There may be cyclical behavior in the long CET record, but I have never looked at that record. I do know that there is a pretty clear tendency for the northern hemisphere above 30N to alternate between periods dominated by brief, large warm and cold anomalies, with a period (warm to warm or cold to cold) near 21 months. I do not know what causes this behavior.
The interesting thing about this is that it is all cyclic. Must be directly tied to orbital mechanics.
With 37 and 48 months as candidates, the list of modulating factors on global temperature is going to be small.
This is in addition to (or part of) any aperiodic things such as ENSO.
Just my take.
And have you looked at where the next black diamond is? That is predicting the UAH series for the next 18 months!
Richard, glad to see you’ve understood the negative lobe problem. That puts you a step ahead of most climatologists already. When I first read that you’re using running means I went “oh, no!”.
However, I don’t really understand this as a string of slides. I’m interested in looking, but I don’t have time to puzzle over what you did, why, and what you think it shows. It needs writing up clearly.
I’m intrigued by this one.
http://s1291.photobucket.com/user/RichardLH/media/CET-Recent4YearAnomaliestoMay201328daylowpassfiltered_zps4182d753.png.html
The heavy plot on the left is very similar in form to my volcano stack except it seems to be just one year long.
http://climategrog.wordpress.com/?attachment_id=278
If I could see what you’re doing, it may tell me something about that plot too. You said these running means were centred; why is the result on the left?
And in my boldness (and probable ignorance) I would suggest a 12 year pattern may underlie ENSO.
OK, let me try and explain better.
For the UAH series I took the data from Roy and applied the methodology described.
That allowed me to predict the 18 month future for UAH (and hopefully increased my betting chances).
The repeating pattern was just too good to miss.
Then I configured it to a bandpass by comparing two adjacent stages.
That drops the frequencies out without the tuning that otherwise may produce oddities.
So in order
Fig 1 – Original UAH Global data from Dr R Spencer (http://www.drroyspencer.com)
http://i1291.photobucket.com/albums/b550/RichardLH/UAH_LT_1979_thru_May_2013_v5_5_zps42d9d2a9.png
Fig 2 – Additional cascaded low pass filters of 12-37 months added
http://i1291.photobucket.com/albums/b550/RichardLH/uahtrendsinflectionfuture_zps7451ccf9.png
Fig 3 – Comparison between individual filter stage outputs (IPCC CO2 temperature rate, the final stage, can be set as required)
http://i1291.photobucket.com/albums/b550/RichardLH/UAH-Comparisonofcascadedlowpassfilteroutputmonthsrunningaverage_zpsf1883e44.png
“This is in addition to (or part of) any aperiodic things such as ENSO.”
Who told you ENSO was aperiodic? 😉
It’s a lunar based periodicity, split into two by modulation with a longer period close to 30 years. This produces something like 5.2 and 3.8 year cycles from 4.43 years (the side-band periods 1/(1/4.43 ∓ 1/30)). Since no one seems to get beyond staring at the temperature time series to try to understand climate, it looks “aperiodic”, alternatively described as having a period of “between 3 and 5 years”.
You also need to break things down. Just looking at global averages muddies everything up and it gets called chaotic.
The real chaos is in climate science, not in climate, which is surprisingly ordered when you look properly.
If you want all the physics that goes to make this valid (and a wild guess as to the reasons), that will have to wait until after the weekend, I’m afraid, but I am fairly confident it will stand analysis.
Greg Goodman (Comment #117328)
June 29th, 2013 at 2:50 pm
“This is in addition to (or part of) any aperiodic things such as ENSO.”
“Who told you ENSO was aperiodic?”
I did suggest that there may be a 12 year pattern to ENSO, but that is well outside my knowledge area. It just comes from the 3*4 which I do see.
Greg: “It’s a lunar based periodicity, split into two by modulation with a longer period close to 30 years. This produces something like 5.2 and 3.8 year cycles from 4.43 years. Since no one seems to get beyond staring at the temperature time series to try to understand climate, it looks ‘aperiodic’, alternatively described as having a period of ‘between 3 and 5 years’.”
There is also a difference in the poleward propagation rate because of ocean asymmetry. The tidal influences both on the oceans and atmosphere seem to be completely overlooked.
Greg Goodman (Comment #117328)
June 29th, 2013 at 2:50 pm
“It’s a lunar based periodicity”.
Question: what lunar based phenomenon is 37 months long? Clue: think solar 4 years.
Oh, go on then – before I head off into the wilderness for the rest of the weekend.
If 37 and 48 months are important, then take these diagrams
http://en.wikipedia.org/wiki/File:Field_tidal.png
http://en.wikipedia.org/wiki/File:Jetcrosssection.jpg
and consider if 54 degrees lunar and 45 degrees solar could bear any influence on the polar jet stream characteristics?
Damn fingers – that is 54 degrees angle to the direction of…
This is a more straightforward interpretation of Land vs Ocean temperatures.
http://img854.imageshack.us/img854/2439/usfe.gif
Over the ocean, half the thermal forcing goes to diffusional mixing uptake.
Over land, all of the thermal forcing goes into increasing the surface temperature (no heat sinking to speak of).
This means that land temperature increases at 2X that of ocean surface.
The effective forcing is about 1.55 W/m^2, giving a temperature anomaly over the land of about 1.2C. Recent measurements by Balmaseda and Levitus show that ocean uptake is around 0.7 W/m^2.
The surface temperature and OHC measurements are self-consistent.
WHT, glad to see you took up (most of) my suggestions on OLS. But you need to do that plot of the various periods BOTH ways around. The recent segment is tidy enough that it will be very close in both plots.
However, the other periods, especially the earlier one with the huge spread and the outliers, will give very different results. The current spread of slopes (0.5, 0.95, 2.0) will be exaggerated by the OLS misfit but will probably still be present. You need to at least do the same processing with both axis orientations and then work out how to best estimate the true slope. The earliest period, pre-1900, will be grossly under-estimated here.
Also try to identify the outliers that are in the upper left quadrant.
Once you have a less biased estimation from the two orientations I think you will still see a long term change in the slope and then we can consider what this may tell us about climate. It may indeed reflect a change in ocean uptake/diffusion which could be informative. But I’ll wait to see the numbers before speculating what it means.
Dallas
That’s exactly what I’m currently working on. There is a strong 9y period in extra-tropical zones; this is basically what Curry/BEST just reported on in land temps. I suspect this is a long term oceanic tidal effect that changes the circulation of tropical heat N/S and accounts for the polar see-saw.
This also manifests as circa 4.5 years in many tropical datasets which would correspond to whatever it is passing the equator twice per cycle. That is a strong indication it’s of celestial origin.
Scafetta also identified a 9.1+/-0.1 year cycle that he showed to be of lunar origin. IIRC Curry reports 9.1 as well.
I suspect 9.1 is 9.065 and comes from a failure to resolve 8.85 and 9.3.
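For what it’s worth, the record length needed to separate those two periods follows from the Rayleigh criterion; a quick check (periods as given above, everything else illustrative):

# Rayleigh criterion: two periods are resolvable only when the record
# spans at least one full cycle of their beat period.
p1, p2 = 8.85, 9.3                          # years, the two candidate periods
beat = 1.0 / (1.0 / p1 - 1.0 / p2)          # ~183 years of data needed
mean_period = 2.0 / (1.0 / p1 + 1.0 / p2)   # ~9.07, close to the reported 9.1
print(beat, mean_period)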
Again, this is the problem of not separating tropical and ex-tropical. There is often a frequency doubling at the equator, and it all becomes an unfathomable mess in the ever popular global averages.
But then those who want to assume it’s all “stochastic” internal variation that averages out and can be ignored, so that they can fit AGW to the rising phase of a cycle, will find global averages attractive.
WHT, I would also suggest working with dT/dt rather than T(t). This mitigates the effect of any bias offsets, which become glitches rather than permanent offsets that skew the OLS result.
dT/dt is noisier, so there will be more spread between the OLS slopes from the two orientations, but any periods for which the two give notably different answers can be looked at in more detail for the presence of uncorrected sampling bias or spurious bias corrections.
Since the derivative is orthogonal to the time series it can be said to be looking at the data ‘from another angle’.
If you do that I think it will produce a fairly solid picture of a progression in the ocean/land ratio. We can then look at where it may be coming from.
Duh, I’ve just realised: you’re doing all this with anomalies based on the later period; that’s why it looks much cleaner. This also means the earlier results BOTH contain an offset, and that is why the ratios are smaller.
You need to do all this with dT/dt. (The later period should agree with dT/dt).
Alternatively, recalculate the ‘anomalies’ for each period. This will considerably reduce the noise in the earlier data, much of which comes from the fact that the annual ‘climatology’ was different in earlier periods, so subtracting the 1960-1990 climatology leaves a lot of annual scale noise (and it is now noise because it no longer matches the annual cycle in the data).
All this ‘anomaly’ crap is yet another problem with mainstream climatology data processing.
Jeez, if you want to remove the annual cycle you do it with a filter that will treat all periods equally, not subtract the variability of one period from the rest of the record and thereby introduce noise into the signal.
I would suggest a much better way to do this OLS fit is to find some REAL temperatures not ‘anomalies’, filter out the annual signal, then work in dT/dt.
Use either a 3-sig gaussian of sigma=6 months or a 12,9,7m triple running mean as filter.
My quick look at this showed a roughly constant x2 across the board. What I find significant is that the long term changes seem to conform to about the same ratio as the inter-annual changes.
http://climategrog.wordpress.com/?attachment_id=219
Looking more closely at this, BEST seems to show a different ratio in the post-war cooling period (closer to unity), so breaking things into the four periods as you did above will be informative.
If the cooling period shows a different ratio this will be very useful for looking at your diffusion arguments.
Logically there must be some diffusion beyond a one slab ocean model so this may substantiate what you’re saying.
One caveat is that Hadley actually uses comparison to air temps to justify the need to ‘bucket correct’ SST, so there is a certain amount of circularity going on if we then use their data to look at the ratio.
First step is REAL temps; 12m filter; dT/dt
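As a minimal sketch of that pipeline (a fake monthly series stands in for real absolute temperatures; the 12,9,7 spans are the triple running mean mentioned above):

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
months = np.arange(600)
temps = pd.Series(15.0 + 0.002 * months                     # slow trend
                  + 4.0 * np.sin(2 * np.pi * months / 12)   # annual cycle
                  + 0.2 * rng.standard_normal(600))         # noise

smooth = temps.copy()
for span in (12, 9, 7):            # triple running mean kills the annual cycle
    smooth = smooth.rolling(span, center=True).mean()

dT_dt = smooth.diff()   # K/month; step offsets become single glitches here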
This graph from Keeling and Whorf shows maximum long term lunar tides B C C’ D etc. that coincide closely with major eruption dates.
http://www.pnas.org/content/94/16/8321/F7.expansion.html
These are exactly the events that I identified as lining up in my volcano stack analysis, noting that the cooling starts before the eruptions.
The tidal effects on the oceans also affect the ‘solid’ earth, and the link with volcanism may not be so coincidental. I don’t completely understand Keeling’s graph, but if the phase of the lunar effect is in the right direction to produce a cooling, the potential for false attribution is significant.
If the tropics are largely immune to variations in radiative input, as the degree.day integral shows, then much of the attribution to volcanism is spurious.
I say much, rather than all, since I also showed there is a temporary effect in extra-tropical regions even when the underlying cooling is accounted for. So I’m saying the effect of volcanism is being confounded with lunar driven effects and thus greatly over-estimated.
Any regression, such as that featured in this article, that does not have a variable able to reflect the 9 year cycles will necessarily falsely attribute the coincident events in that period to volcanism.
WHT, I’m interested in how the following plot changes when you take into account the 2.0 value found in the recent data:
http://img442.imageshack.us/img442/4509/n5t.gif
I suspect that your quadratic correction will now be very small and the graph may end up quite similar, but it needs checking. I thought the quadratic correction was neat, but it would be even better if it was not needed.
There is an intriguing similarity to some work I’ve done on changes in the Arctic.
http://climategrog.wordpress.com/?attachment_id=226
If you redo that graph, how about running it a bit earlier, to pre-’89 at least, to see whether this similarity bears out.
WHT,
The current estimate for GHG forcing is about 3.1 watts/M^2. The only way your 1.55 watts/M^2 figure is correct is if aerosols are reducing GHG forcing by ~50%. This is unlikely. The AR5 SOD aerosol group suggests total aerosol effects are closer to 0.5 – 0.7 watt/M^2. (Yes, I know it is only a draft.) Please also remember that there is rapid heat transfer between land and ocean, it is not plausible for the two to be acting in a fully independent way as your calculation suggests. The divergence between land and ocean temperatures is more complicated than assuming they are represented by two isolated heat balances. The greatest contribution to land warming has been in the northern hemisphere at high latitudes. Multiple effects could help to explain the divergence, like changes in boundary layer mixing and changes in wintertime cloud cover, among others.
Greg, “That’s exactly what I’m currently working on. There is a strong 9y period in extra tropical zones, this is basically what Curry/BEST just reported on in land temps. I suspect this is a long term oceanic tidal effect that changes the circulation of tropical heat N/S and accounts for the polar see-saw.”
Right, and the question is over what time period(s) the internal sloshing around averages to zero. I have suggested that, since CO2 forcing is about the only thing we do know, we detrend using ~0.8C per doubling for “global” temperature. That 0.8C is based on the average ocean energy (~334.5 Wm-2 at 4C), which matches the DWLR estimates and the stratopause estimates. That gives you a legitimate reason to remove a portion of the long term secular trend. Then you could focus on synchronizing perturbations, 1915 and 1976 for example, as baseline periods to estimate the regional propagation. There is just too much noise, in my opinion, to decipher the shorter term oscillations in any meaningful way until the 90 to 150 year recurrences are dealt with.
You also have another problem with “surface” temperatures. (Tmax+Tmin)/2 is a poor mate for SST; Tmin is a good match for SST. In fact, the global average absolute Tmin is close to 4C (334.5 Wm-2), which matches the deep ocean average and the 316 Wm-2 stratopause after adjusting for areal differences.
With those references you can use the tropical ocean paleo like Oppo2009 with a more solid physical mechanism than Barycenter fluctuation and novel fusion reaction modulation.
BEST should have their land and ocean data online pretty soon, hopefully with the land separable into similar bands as the oceans. Then things should make more sense using Tmin versus Tmax instead of averages.
From what I understand, a CO2 doubling resulting in a 3°C rise corresponds to a value of λ of 0.8 K/(W/m^2).
So a value of 1.55 W/m^2 would result in a ΔT = 0.8*1.55 = 1.2°C,
which is essentially the land temperature rise we have seen over the modern era.
If what you are saying is that the 1.55 W/m^2 should actually be 3.1 W/m^2, then λ should be half of 0.8, or 0.4 K/(W/m^2). Or else, if we consider that the aerosol effects compensate the 3.1 and knock it down to 1.55 W/m^2, then removing the aerosols would cause the land temperatures to rise to 2.4°C from current CO2 values.
I always thought the aerosol effects were invoked to explain why the ECS was not 4.5 and instead closer to the observed value of 3.
Webster, “From what I understand, a CO2 doubling resulting in a 3°C rise corresponds to a value of λ of 0.8 K/(W/m^2).”
No. A CO2 doubling is supposed to produce ~3.6 Wm-2 of atmospheric forcing. That 3.6 Wm-2 would raise the temperature of a surface at ~255K by 1-1.5C. The 3C per doubling is an average of the Hansen and Manabe estimates, which includes water vapor feedbacks. An average of assumptions is a poor starting point if you are looking for precision.
The ~255K is based on an “effective” radiant layer, resting on the assumption that clouds are fixed. More averaging and assumptions.
To get past all the assumptions, you can select different frames of reference. Basically multiple models that should converge close to reality.
http://redneckphysics.blogspot.com/2013/06/atmospheric-energy-profile-or-energy-urn.html
Hence the atmospheric urn.
WHT,
Man-made aerosols have always been assumed to have “offset” some (or even most) of the forcing from increases in GHGs. Look at the combined aerosol influences (direct, indirect and black carbon) in the GISS forcing graphic I showed in my post; they sum to almost exactly half the total GHG forcing over the entire period…. the correlation between total GHG forcing and total aerosol offset is >0.9 IIRC.
.
Man-made aerosols have nothing to do with ECS, which is the equilibrium response of average surface temperatures to a doubling of CO2 (equal to ~3.7 watts/M^2 radiative forcing). Modeling groups (like GISS) have always assumed a fairly large aerosol “offset”, and it is pretty clear that the size of the aerosol offsets each group uses is just about inversely proportional to the diagnosed sensitivity value for that group’s model. IOW, the assumed aerosol off-set is adjusted for each model so that historical temperatures can be more-or-less accurately hind-cast. That does not mean that the assumed historical off-sets and the model are accurate, only that they are consistent with each other. The unfortunate reality is that aerosol influences have always been the least certain of the important climate influences, so they have become essentially a kludge which allows GCMs to match past temperatures. But the best current estimates for aerosol offsets are considerably lower than in the past…. which means that the most probable sensitivity is lower than the GCMs have diagnosed (using larger, custom-selected aerosol offsets). Lucia’s regularly documented divergence between GCM temperature projections and reality since the early 2000’s (the models are running much too warm) is consistent with models that are too sensitive to GHG forcing combined with assumed aerosol influences which are too high.
.
As I noted in an earlier comment, there is a lot of inertia to big changes if those changes challenge the validity of a large body of work. Thomas Kuhn recognized this a long time ago.
Excellent, let’s see what we can do with that info.
Consider the increase of CO2 from a baseline of 290 ppm:
RF= 3.6*ln(395/290)/ln(2) = 1.6 W/m^2
This is close to the value of RF=1.55 W/m^2 that I mentioned earlier. Then with a feedback factor λ of 0.8 K/(W/m^2)
ΔT = λ * RF = 0.8 * 1.6 = 1.28°C
I guess it all comes down to what the value of λ is, because that contains all the extra feedback factors.
If I have that wrong, please correct because I don’t have direct dealings with climate scientists, and learn everything based on what I can glean from the literature.
Webster, “I guess it all comes down to what the value of λ is, because that contains all the extra feedback factors.”
And the “surface”. As I said, the 255K assumes a fixed albedo. Clouds and aerosols are not fixed; the ocean surface albedo is close to fixed. The average temperature of the oceans, the source of energy for the atmosphere, is 4C (334.5 Wm-2). From that more stable reference surface, you get 0.8C per doubling.
From the DWLR estimated lower atmosphere “surface”, you get about 0.8C per doubling. From the ~4C average land Tmin you get about 0.8C per doubling. From the stratopause at ~316 Wm-2 you get about 0.8C per doubling. You can get lower estimates from dozens of reference surfaces; but making the same failing assumptions that produced the 0.8 K/(W/m^2), you can get bigger numbers and lots of climate paradoxes.
WebHubTelescope,
You have it wrong because CO2 is not the only greenhouse gas. GHG forcing is currently about 3 W/m^2 higher than preindustrial, according to the numbers being fed into AR5’s models.
Re: WebHubTelescope (Jun 30 08:57),
You’re only looking at CO2. As SteveF pointed out above, there are other forcings, like chlorofluorocarbons, that increase the total present forcing to 3.1 W/m² from the CO2-only forcing of 1.6 W/m². Now some of that has been offset by Planck feedback from the surface temperature increase, and some by aerosols reducing solar input. That then leaves the imbalance at the top of the atmosphere. The rate of increase of OHC puts an upper bound on the TOA imbalance of ~0.7 W/m².
If there’s significant positive feedback, then the effective forcing is increased above 3.1 W/m² as the temperature is increased (or the effective Planck feedback is reduced). To sustain a high TCS or ECS, one must assume that aerosols are offsetting a large portion of that forcing to keep the TOA imbalance where it is. But aerosol forcing appears to have been overestimated. That puts an upper bound on positive feedback and thus TCS and ECS estimates will be lower.
Re: WebHubTelescope (Jun 30 08:57),
I don’t think you’re looking at the feedback factor correctly either. Feedback is a function of temperature, not forcing. You can have a very large forcing, but until the temperature changes, you don’t get any feedback. Sure, you can invert the feedback factor and multiply it by the forcing to get a temperature, but that doesn’t make it correct. So if we say instead that the feedback is 1.2 W/(m² K), then the surface temperature increase of 0.8 K has increased the forcing by ~1 W/m², for a total of ~4 W/m², which then has to be offset by increased aerosols to explain why the current TOA imbalance isn’t higher.
SteveF (#117352):
“the correlation between total GHG forcing and total aerosol offset is >0.9 IIRC.” Part of the reason for that is that the aerosol forcing is arbitrarily fixed as a fraction of GHG in the GISS data which you cited. From the caption to Figure 1 of Hansen et al. 2011, which is the source of your figure 12: “…the tropospheric aerosol forcing after 1990 is approximated as -0.5 times the GHG forcing.” The GHG forcing is calculated in a systematic way from measured GHG concentrations; but aerosol forcing seems disconnected from direct measurements. As previously remarked, recent aerosol forcing estimates (e.g. AR5 draft) are considerably smaller in magnitude than GISS’s.
Re: HaroldW (Jun 30 10:24),
Making aerosol forcing equal to some fixed fraction of ghg forcing is exactly equivalent to a negative feedback. So, to all intents and purposes, the modelers know that the total positive feedback in the models is too high.
What I think you ought to do is change all the Wikipedia and other references which explain the forcing on a CO2 basis (3.7 W/m^2 for doubling), with the temperature then resulting from this and from the other feedbacks which reinforce that number.
The equation was needed to concisely explain how much the temperature changed due to a change in CO2. Then one can determine an ECS by calculating λ*3.7*ln(CO2/290)/ln(2), where λ = 0.8.
This is not my problem; if it is indeed a problem, then it is one of definition. You will have to redefine what λ means.
http://en.wikipedia.org/wiki/Climate_sensitivity#Calculations_of_CO2_sensitivity_from_observational_data
WHT,
Wikipedia does a good job with lots of subjects that are not politically contentious. Climate science is not one of those. The information in the article you linked to has several factual errors and/or “tilted” facts. People who try to correct those errors end up being prohibited from editing. Which is why I stopped sending money to Wikipedia several years ago… too much political influence in most of their articles related to global warming.
.
Some people here (like DeWitt, an experienced physical chemist who knows quite a lot of relevant science) are trying to lay out clear technical explanations of the basics for you, but it seems to me you are reluctant to try to understand those explanations. Too bad. You have some pretty obvious misconceptions about factual information which is really not in dispute. I think you would do well to learn enough about the physical processes involved to better understand and more critically evaluate the technical arguments. Nobody can do that for you.
Re: WebHubTelescope (Jun 30 13:09),
Why change what is correct? The forcing for doubling CO2 is ~3.7 W/m². It’s just not the only forcing, as can easily be determined by consulting the WGI reports from all four IPCC assessments. The problem began with the United Nations Framework Convention on Climate Change (UNFCCC). That was the origin of the focus solely on CO2. OTOH, CO2 will begin to dominate all other forcings in the not too distant future.
Specifically the forcings and their values in W/m² with uncertainty from the AR4 as of 2005:
CO2: 1.66 [1.49 – 1.83]
CH4: 0.48 [0.43 – 0.53]
N2O: 0.16 [0.14 – 0.18]
Halocarbons: 0.34 [0.31 – 0.37]
stratospheric ozone: -0.05 [-0.15 – 0.05]
tropospheric ozone: 0.35 [0.25 – 0.65]
stratospheric water vapor from CH4: 0.07 [0.02 – 0.12]
land use: -0.2 [-0.4 – 0.0]
black carbon on snow: 0.1 [0.0 – 0.2]
total aerosol effects consisting of:
aerosol direct: -0.5 [-0.9 – -0.1]
cloud albedo: -0.7 [-1.8 – -0.3]
linear contrails: 0.01 [0.003 – 0.03]
solar irradiance: 0.12 [0.06 – 0.30]
total anthropogenic: 1.6 [0.6 – 2.4]
The net forcing in 2005 is nearly equal to the forcing from CO2 alone. But look at the uncertainty. The upper end on the total is four times the lower end. And it’s almost all due to aerosols. You can find experts who will argue that the aerosol cloud albedo effect, otherwise known as the aerosol indirect effect, is, in fact, either insignificant or completely non-existent. But if you minimize aerosols, your climate sensitivity moves toward the low end of the range, and the priority of carbon emission control drops. Can’t have that.
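For anyone wanting to verify, a straight sum of the best estimates in the table above takes a few lines (a sketch, not AR4’s own combination method):

```python
# Straight sum of the anthropogenic best estimates listed above (W/m^2).
forcings = {
    "CO2": 1.66, "CH4": 0.48, "N2O": 0.16, "halocarbons": 0.34,
    "stratospheric ozone": -0.05, "tropospheric ozone": 0.35,
    "stratospheric H2O from CH4": 0.07, "land use": -0.2,
    "black carbon on snow": 0.1, "aerosol direct": -0.5,
    "cloud albedo": -0.7, "linear contrails": 0.01,
}
print(f"Simple sum: {sum(forcings.values()):.2f} W/m^2")  # ~1.72
# AR4 quotes 1.6 [0.6 - 2.4], slightly below the simple sum, likely because
# it combined the skewed uncertainty distributions probabilistically rather
# than just adding best estimates.
```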
OK so you have 1.6 W/m^2 and I said 1.55 W/m^2 in comment #117335.
If we use 1.6 W/m^2 and plug in λ of 0.8 K/(W/m^2) then we get 1.28°C .
The land temperature has increased by 1.2°C, so evidently all we are arguing about is uncertainty based on +/- compensation by aerosols.
Tell me what to write and I will edit the Wikipedia article. I haven’t been banned yet, so I will see how it goes.
Is the plan to explain why λ is a value much less than 0.8 K/(W/m^2)?
If that doesn’t work, I will at least know what the correct argument is, and for that I would be grateful.
Webster, “is the plan to explain why lambda is much less than 0.8K/Wm-2?”
http://rankexploits.com/musings/2011/a-simple-analysis-of-equilibrium-climate-sensitivity/
0.8K/Wm-2 would require perfection. That means an ideal radiant “shell” with zero advection/convection changes. The advection, hemisphere imbalance if you prefer, impacts the efficiency of the CO2-equivalent “shell” and water vapor distribution.
Exhibit A: http://eesc.columbia.edu/courses/w4937/Readings/Brierley%20and%20Fedorov.2010.pdf
“The meridional SST gradient changes also appear to dominate global changes in average climate properties. For example, the shift to the modern meridional SST gradient in the model leads to a global mean temperature reduction of ∼3.2°C versus ∼0.6°C in the case of the zonal changes. This is mainly due to a much greater reduction in the atmospheric water vapor content (which is a potent greenhouse gas) in the former case. These results indicate that reproducing the correct SST distribution appears to be critical for a model to account not only for the ice sheet inception and changes in precipitation but also to reproduce proper changes in the key elements of the global radiative forcing, such as the distribution of water vapor and cloud cover (both high and low clouds) in the atmosphere.”
Re: WebHubTelescope (Jun 30 20:19),
That’s actually quite a large issue. If we assume the low end of the forcing range and a ΔT of 1.2 K, then with your λ (actually 1/λ in standard notation), ΔT = 0.5 K. At the other end of the range ΔT = 2.0 K. Or conversely, we could choose a different λ, i.e. ECS, that gives the same result, which is precisely what the IPCC does. So for 0.6 W/m² and an equilibrium ΔT = 1.2 K, λ = 2.0 and for 2.4 W/m² λ = 0.5. ECS, btw, equals 4 * λ in your notation. For an ECS of 3 K/doubling λ = 0.75. So ECS then ranges from 2 to 8. I doubt, however, that anyone really believes that total anthropogenic forcing in 2005 was anywhere close to the low end of the uncertainty range. If we assume the IPCC ECS range of 2 – 4.5, λ is then 0.5 – 1.125. The forcing uncertainty range is then 2.4 – 1.1 W/m². Therefore, you’re begging the question by assuming your conclusion that the land surface temperature has equilibrated when you pick your value of λ.
In fact, it’s just as likely, if not more so IMO, that the land surface temperature is above the equilibrium value for the current forcing.
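A compact restatement of that bookkeeping (a sketch using only the numbers in the comment above; the factor of 4 is the ~4 W/m² per doubling shorthand used there, and the equilibrated 1.2 K warming is the assumption being questioned):

```python
# Implied lambda and ECS across the AR4 forcing range, assuming (as
# questioned above) that 1.2 K of surface warming is an equilibrium response.
dT = 1.2  # assumed equilibrated warming, K
for F in (0.6, 1.6, 2.4):  # low end, best estimate, high end of forcing (W/m^2)
    lam = dT / F           # K/(W/m^2)
    print(f"F = {F:.1f} -> lambda = {lam:.2f}, ECS ~ {4.0 * lam:.1f} K/doubling")
# Prints ECS ~ 8, 3 and 2 K/doubling, matching the 2-to-8 range in the comment.
```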
WHT
http://theoilconundrum.blogspot.fr/2013/05/proportional-landsea-global-warming.html
So once you put in the correct value of 2.0 how does this affect your results? What does the land/sea ratio plot look like now?
http://img442.imageshack.us/img442/4509/n5t.gif
WHT
http://theoilconundrum.blogspot.fr/2013/05/proportional-landsea-global-warming.html
In fact you need to recalculate most of the page now, since the OLS results were wrong on two counts. The most important is that you cannot take the ratio of “anomaly” data outside the reference period; it should have been done on differences (i.e. dT/dt). The orientation of the regression will probably also mean you need to estimate the slope of the earlier, noisier data from the OLS slopes of the two orientations.
SteveF,
Congratulations on a very well written article and a fascinating thread.
One of the things that is interesting here is that if the time series is decomposed in the frequency domain, the “true” very low frequency gradient works out to be very close to 0.1 deg C/decade over the last 25 years or so. Z. Wu et al. (http://link.springer.com/article/10.1007/s00382-011-1128-8#page-1) found 0.096 deg C/decade. My own decomposition gave an almost identical result. (http://rankexploits.com/musings/2012/more-blue-suede-shoes-and-the-definitive-source-of-the-unit-root/)
I don’t think it is too difficult to reconcile these results with your analyses, if we recognise that your secular trend represents the sum of the quasi “60-year” cycle together with the very low frequency trajectory.
Your polynomial fit represents a sort of low-pass filter. What is interesting is that while it can capture the character of the quasi 60-year cycle in the data, it does not filter the next lowest frequency in the data; this is a “22 year” cycle. Visual inspection of your residuals shows this cycle pretty clearly. I am sure that a Fourier transform would expose it as the dominant low frequency in the residuals. So evidently the solar variation that you have as input does not explain this cycle.
I still don’t know what this means for the analysis, but it does seem to suggest that there is a missing mid-range frequency that needs to be accounted for before your higher frequency components can be separated out cleanly.
I am now wondering whether there is some value in fitting your high frequency components to the residuals of the aggregate low frequency components (i.e. the temperature predicted using just the frequency components with periodicity of 22 years and above) rather than to a polynomial curve. The latter clearly cannot capture the 22 year cycle unless you go to very high order.
Paul_K,
Thanks Paul.
I will pass the residuals through an 11 year centered average filter, then through a 22 year centered average filter. The difference should show a 22 year cycle pretty clearly if one is there.
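For what it’s worth, a minimal sketch of that difference-of-centered-averages idea on synthetic data (my own illustration, not SteveF’s code; windows assumed to be in months):

```python
import numpy as np

def centered_mean(x, window):
    # Centered running mean; the edges are distorted, so ignore them.
    return np.convolve(x, np.ones(window) / window, mode="same")

t = np.arange(1200)  # 100 years of monthly samples
x = np.sin(2 * np.pi * t / (22 * 12)) + 0.5 * np.sin(2 * np.pi * t / (5 * 12))
band = centered_mean(x, 11 * 12) - centered_mean(x, 22 * 12)
# 'band' keeps most of the 22-year sine (the 22y window zeroes it while the
# 11y window passes ~64% of it) but largely averages away the 5-year term.
```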
.
I agree that the overall trend is consistent with a cyclical contribution in the 60 yr range changing the warming rate from a more linear trend, although a good mechanistic explanation seems lacking. A ‘true’ GHG driven trend near 0.1C per decade would be consistent with a climate sensitivity about half of the GCM average (or a bit more). If someone could just convince the modelers to revise their cloud parameters and reduce their assumed aerosol offsets, it would be time for that adult conversation you have called for. 🙂
Paul_K ” The latter clearly cannot capture the 22 year cycle unless you go to very high order.”
And one of his formulations showed the best AIC results for p9 (as far as he went). As I commented earlier, this indicates that the regression variables in that particular attempt don’t fit. (That was the regression where SSN and volcanoes were allowed different time constants.)
I was going to request that Steve post the residual data so that I could look at the spectrum, since I can see the missing 9 year is likely present. I agree that, by eye, it looks like there is a 22y as well.
All this ties in with my trivial 9,22,60 test data which seems to provide quite a good synthesis of the temperature record, without the need for volcanoes, convolved with an exponential impulse response with tau=3.5 years.
http://climategrog.wordpress.com/?attachment_id=403
It seems clear that the Hale cycle is the relevant climate forcing, not the shorter Schwabe 11y cycle. That means polarity does matter, and someone needs to come up with a way of ‘unfolding’ the SSN data, which seems to reflect the _amplitude_ of what we need to be regressing.
In fact I think the 11y cycle is part of the 60 signal, along with the missing 9y variable.
9.1y (Curry/BEST/Scafetta) and 10.8y SSN will produce an amplitude modulation close to 60 years.
As I have pointed out on a number of forums, the apparent lack of correlation to SSN is due to the presence of the so far largely ignored 9y cycle.
Steve has been quite thorough and looked into a number of variations on the model being fitted; the residuals are significant whatever is done and have obvious cyclic elements remaining. What needs to be concluded from all this is that the current set of variables is incomplete.
“I will pass the residuals through an 11 year centered average filter”
Please make sure that ‘filter’ is not a simple running mean that will inject a false 8 year signal (11/1.3371).
If you would like to post the residuals I’ll do a frequency analysis that will give a much better idea than splashing around with filters.
Paul, SteveF:
Using the sort of low pass filter that I have described above, you can clearly see which frequency bands contain cycles of interest.
Globally I see 3, 4 & 12 and part of 60+ years. For ENSO similar analysis can be used to find similar(?) cycles.
Greg Goodman (Comment #117392)
July 1st, 2013 at 4:57 am
“Please make sure that ‘filter’ is not a simple running mean that will inject a false 8 year signal (11/1.3371)”
That is why you use a minimum 3-pole low pass filter/running mean to remove the ‘square wave’ sampling beats with the incoming signal that otherwise get through a single stage.
That is what I use: a 3-pole filter with the sequence next = round(previous * 1.3371), and thus I get very few residual errors.
As I said above, 9.1y (Curry/BEST/Scafetta) and 10.8y SSN will produce an amplitude modulation close to 60 years:
http://climategrog.wordpress.com/?attachment_id=405
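The envelope arithmetic behind that claim is easy to verify (a sketch; the two periods are the ones quoted in the comment):

```python
# Beat (amplitude-modulation envelope) period of two nearby cycles:
# 1 / (1/p1 - 1/p2) for p1 = 9.1 y and p2 = 10.8 y.
p1, p2 = 9.1, 10.8
beat = 1.0 / (1.0 / p1 - 1.0 / p2)
print(f"Envelope period: {beat:.1f} years")  # ~57.8, i.e. "close to 60"
```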
RichardH:”Globally I see 3, 4 & 12 and part of 60+ years. For ENSO similar analysis can be used to find similar(?) cycles.”
Can you tell with that kind of method whether “4” is not 4.45 or 3.8, both of which occur in a wide variety of datasets?
If you look at my low pass frequency analysis mentioned above you will see that there are ‘nodes’ in the outputs at those frequencies.
These fall at 37 months, 4 years and 12 years as best I can tell, given that statistics on 10-ish or 3 samples do not ‘prove’ anything.
Nyquist limits it to <15 years for now.
See http://i1291.photobucket.com/albums/b550/RichardLH/UAH-Comparisonofcascadedlowpassfilteroutputmonthsrunningaverage_zpsf1883e44.png and tell me what you see.
“Nyquist limits it to <15 years for now.”
No, Nyquist is half the sampling frequency, so 2mth period for monthly data.
The satellite record is 34 years long. Nyquist says 17 years, but you would have to be mad to expect that in a noisy signal!
The Nyquist constraint is because of the length of the record, not the monthly sampling frequency, when looking for the longest periods, don’t forget.
“That is why you use a minimum 3 pole low pass filter/running mean to remove the ‘square wave’ sampling beats with the incomming signal that otherwise get through a single stage.”
No Richard, it’s not a case of anything just “getting past” a simple running mean filter; that would not be too bad. All filters have leakage. The problem is that it also inverts it, thus creating a total aberration of the signal. Anyway, it’s good to see someone aware of the need to use three passes with different periods.
In fact you are rather working backwards, so you will find that your starting point of 4 years is not properly filtered, and neither is the next one up. That’s why you get a residual at 4 years, and guess what: 4y = 48m; 48/1.3371 = 35.9m. Could that be something to do with your 37m?? 😉
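For readers following this exchange, here is a minimal sketch of the cascaded running mean under discussion (not Greg’s or Richard’s actual code; the 1.3371 ratio is as quoted in the thread):

```python
import numpy as np

def triple_running_mean(x, window=48, ratio=1.3371):
    # Three passes with successively shorter windows (e.g. 48, 36, 27 months).
    # Each stage's window lands on the previous stage's first negative side
    # lobe, suppressing the inverted leakage a single running mean lets through.
    for w in (window, round(window / ratio), round(window / ratio**2)):
        x = np.convolve(x, np.ones(w) / w, mode="same")
    return x

# Demonstration: a 36-month sine passes *inverted* (~0.21 amplitude) through a
# single 48-month running mean, but the cascade's 36-month stage removes it.
t = np.arange(600)
sig = np.sin(2 * np.pi * t / 36.0)
single = np.convolve(sig, np.ones(48) / 48, mode="same")
print(np.abs(single[100:500]).max())                    # ~0.21, inverted leakage
print(np.abs(triple_running_mean(sig)[100:500]).max())  # near zero
```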
Hmm. This is just a digital implementation of a classical analogue circuit.
Rigged as bandpass it provides a decomposition of the input signal into low frequency bands.
Sort of thing you see everywhere else in science and engineering.
This is applying that methodology to temperature data.
Something that no-one seems to have done before.
Figure 4 from my basin-by-basin frequency analysis shows that the 9 year period is present in most of the major ocean basins:
http://climategrog.files.wordpress.com/2013/03/icoads_pds_9_grp.png
http://climategrog.wordpress.com/2013/03/01/61/
Any attempt at this kind of regression that does not have a variable with that frequency will produce spurious correlations to the one or two events in other regressed variables that happen to coincide.
Indeed it has one big advantage: the sum of all of the outputs must, by definition, be the same as the input signal.
Internal verification of correct analysis 🙂
Greg Goodman (Comment #117401)
July 1st, 2013 at 6:19 am
“The problem is that it also inverts it.”
And you think that the 1.3371 multiplier between stages has nothing to do with exactly this problem?
“Something that no-one seems to have done before.”
What you’re doing is a crude breakdown like you’d see on an audio “graphic equaliser” output display. If “no-one” 😉 was going to do a frequency analysis, there are many methods that give a more or less continuous, detailed spectrum that is more informative.
Having said that, you could still do your kind of frequency band analysis (as long as you do the 2nd and 3rd passes going beyond the highest frequency you want to look at, i.e. add 36 and 27 month filters).
But if there is significant signal between 6.7 and 9.3, I’d need to know whether it’s 7.5, 8.85, 9.1 or 9.3 before looking for a cause.
I really don’t think it’s detailed enough to be much use. Though correct as far as it goes.
Does its reciprocal, 0.7478…, help?
Greg Goodman (Comment #117406)
July 1st, 2013 at 6:45 am
“Something that no-one seems to have done before.”
“What you’re doing is a crude breakdown like you’d see on an audio “graphic equaliser” output display.”
Indeed. But it makes no assumptions about internal cycle distributions, because of its unweighted sampling, so it is fairly detuned in that regard.
Does its reciprocal, 0.7478…, help? Does it help you see why you find circa 37 months?
“And you think that the 1.3371 multiplier between stages has nothing to do with exactly this problem?”
You read all my posts on this subject again and then post back the answer. Along with a revised analysis with the two missing stages of your filter stack and tell me if you still get 4y and 37m.
Well if there IS a 37 month and a 4 year signal in the data the candidate list for what COULD be modulating Global temperature is fairly short!
That is what I see. Time alone will tell if what I see is really correct. Me, I’m just calling that there is a pattern in the data that needs looking at.
Greg Goodman (Comment #117409)
July 1st, 2013 at 6:51 am
“You read all my posts on this subject again and then post back the answer. Along with a revised analysis with the two missing stages of your filter stack and tell me if you still get 4y and 37m.”
There are no missing stages. Correct use as a digital implementation requires the numbers used.
So what better method would you propose for finding all the natural cycles longer than a year that have occurred, as measured by the UAH series?
Fig 1.
http://i1291.photobucket.com/albums/b550/RichardLH/UAH_LT_1979_thru_May_2013_v5_5_zps42d9d2a9.png
Fig 2.
http://i1291.photobucket.com/albums/b550/RichardLH/uahtrendsinflectionfuture_zps7451ccf9.png
Fig 3.
http://i1291.photobucket.com/albums/b550/RichardLH/UAH-Comparisonofcascadedlowpassfilteroutputmonthsrunningaverage_zpsf1883e44.png
“There are no missing stages.”
Your “first” stage is apparently 4 years. A 4y RM filter will have a negative lobe at c.36 months.
Those are your missing stages. Your first _complete_ filter will then be the 4y stage. You use it to dump out anything 4y and shorter, then do your band pass analysis on what remains.
No point in talking about a 4y component, since you supposedly fully removed it. The 37m is an artefact due to your missing stages.
Try to understand what is being said rather than just being dismissive without any better argument than insisting you are correct.
No, I think you are missing the point. I understand what this shows and its limitations. Of course you cannot go hunting precise periods with the bandpass display. It does show what range to look in, though.
Nodal analysis (i.e. local zero crossings) DOES show that, though, which is how the 37 months and 4 years came out.
In most other disciplines it is the zero crossing that is used for cycles, not the peaks.
If you want the maths: you use a cross multiplier between all the stages to get the nodes out (or use hand techniques, as I did here).
My current theory. Lateral orbital tides modulating the Polar/Ferrel Cell boundary and thermohaline response describes most behaviour seen.
Ice ages occur when conditions allow 2 cell rather than a 3 cell atmosphere.
http://en.wikipedia.org/wiki/File:Field_tidal.png
http://en.wikipedia.org/wiki/File:Jetcrosssection.jpg
RichardLH:
Actually, the Nyquist frequency supplies the highest frequency you can sample at before you get aliasing of frequencies. There is no Nyquist constraint associated with the duration of the record–You can sample to zero frequency (infinite period) even for a fairly short duration signal.
The length of the record just affects the resolvability of signals. For a rectangularly-windowed signal and using an FFT, your resolution is exactly 1 over the duration of the signal.
That’s just for that algorithm though, and isn’t an absolute limit.
For nonlinear algorithms, for example, the resolvability of two closely-spaced-in-frequency signals also depends on the signal-to-noise ratio of the signals you are looking for.
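A small sketch of the distinction Carrick is drawing, for monthly data of the satellite era (my own illustration, not his):

```python
import numpy as np

# Monthly sampling: Nyquist limits the *shortest* resolvable period to 2 months.
# Record length limits *resolution*: rectangular-window FFT bins are 1/duration apart.
years, per_year = 34, 12
n = years * per_year
freqs = np.fft.rfftfreq(n, d=1.0 / per_year)  # cycles per year
print(f"Nyquist: {per_year / 2} cyc/yr (2-month period)")
print(f"Bin spacing: {freqs[1]:.4f} cyc/yr, i.e. 1/{years} (a ~{1/freqs[1]:.0f}-yr period)")
```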
Carrick (Comment #117417)
July 1st, 2013 at 8:03 am
You are probably right. My point was that the longest resolvable period that can be obtained by this methodology is half the record length. I thought that Nyquist said that also but ….
OK. I claim a reverse Nyquist as 1/Period = Frequency 🙂
“No I think you are missing the point.”
No, you are missing the point but I’m getting really tired of repeating what you refuse to comment on.
To filter 4y you need 48, 36, 27 m stages. So far, unless you have incorrectly reported what you are doing, you stopped (or rather started, because you’re working backwards) at 48m.
Are you Richard Holle or another RichardH?
Greg Goodman (Comment #117420)
July 1st, 2013 at 8:10 am
“No, you are missing the point but I’m getting really tired of repeating what you refuse to comment on.”
I am sorry. I am not trying to misunderstand what you are saying. I am pointing out that there are what I see as 37 month, 4 year and 12 year patterns in the data.
I have found it from two reputable sources.
I have performed standard signal analysis of those records to decompose them into frequency bands.
I have located the local Nodes and translated those into a Periodic structure.
I have offered you the observation that those cycles exist.
I have pinned my stake firmly in the ground to predict what this says about the 18 month future for the UAH series and how climate is driven in the short term.
Time alone will tell if the patterns I see are just noise and imagination or real.
Greg Goodman (Comment #117420)
July 1st, 2013 at 8:10 am
“Are you Richard Holle or another RichardH?”
No that’s not me.
RichardLH, “My current theory. Lateral orbital tides modulating the Polar/Ferrel Cell boundary and thermohaline response describes most behaviour seen.”
A correct theory should explain everything. The problem is when a theory explains “most” and there is a trace gas that can be manipulated to explain the rest.
https://lh6.googleusercontent.com/-8Po1gw_Tg9s/UdGMmW7jEqI/AAAAAAAAI3o/XBzZ3tbWXw4/s909/TIM%252041km%2520Aqua.png
That is the TOA TSI with the 41km Stratosphere from AQUA. Short period, but may be useful.
https://lh3.googleusercontent.com/-wiNVt6Vi4r0/UdGMmikTeeI/AAAAAAAAI3s/_TRIPM_QDsY/s703/TIM%252041km%2520Aqua%2520Wm-2%2520Estimate.png
That is the range of sensitivity of the 41km stratosphere layer to solar forcing: 5.4 Wm-2/K or 0.185 K/Wm-2. The stratosphere would be a rapid response region and more of a “global” reference than “global” average surface temperature.
I would think that for periods of less than 15 years, that would be a better place to apply your method.
As a note, above that 41km layer you start getting into the ionosphere which would respond to geomagnetic potential changes. Lindzen has already done a good bit of work on atmospheric tides, so there is some ground work already done.
Perhaps a more top down approach would help tell you if you are on the right track or not?
dallas (Comment #117423)
July 1st, 2013 at 8:25 am
RichardLH, “My current theory. Lateral orbital tides modulating the Polar/Ferrel Cell boundary and thermohaline response describes most behaviour seen.”
“A correct theory should explain everything. The problem is when a theory explains “most†and there is a trace gas that can be manipulated to explain the rest.”
Please note that the IPCC CO2-related temperature ‘wedge’ is displayed (I think correctly) on the decomposition graph.
“I am sorry. I am not trying to missunderstand what you are saying.”
You seem intent on not listening. You seem to have the 1.3371 filter trick back to front, like you got Nyquist back to front.
You should have started with the LONGEST period, then progressively removed the negative lobe at each stage, stopping TWO stages BEYOND the highest frequency you want in your bands, which seems to be 4 years from what you posted.
That means if your highest freq band is >4y you need the last three filters to be 48m, 36m, 27m, otherwise you will have the filter defects for that band.
Now, please. Is there any part of that you disagree with?
You can use the series in either direction, it all depends on where you start.
You can start from the lowest and multiply or from the longest and divide.
Greg Goodman (Comment #117391)
July 1st, 2013 at 4:52 am
Greg, you wrote:
If it does form part of the 60 year variation, then I suspect it must be only a small part. Because these frequencies are high relative to the 60-year period, the effect of amplitude modulation is to increase (decrease) variance around the mean as the waveforms come into (go out of) phase over the 60 year period. This is not the 60-year oscillation evident in the temperature record. The 60 year cycles in the temperature record show a change in mean rather than variance.
I certainly don’t rule out harmonics, but these high frequency cycles don’t have the power to explain the observations in my view.
There is NO 37m periodicity in that data, in either hemisphere.
http://i44.tinypic.com/v5d73n.png
3.7 years is 44.4m
Greg Goodman (Comment #117425)
July 1st, 2013 at 8:32 am
“You seem to have the 1.3771 filter trick back to front like you got Nyquist back to front.”
Hmm. 1/period = frequency, and multiply up rather than divide down.
Greg Goodman (Comment #117428)
July 1st, 2013 at 8:39 am
“There is NO 37m periodicity in that data, in either hemisphere.
http://i44.tinypic.com/v5d73n.png
3.7 years is 44.4m”
Highly tuned filters suffer from assuming that you know the internal data distribution.
Square wave sampling removes any such assumptions.
And for the record length we have, fairly pointless I would suspect.
Greg: I suspect you don’t agree with my reasoning.
I am not looking for a fight.
I am just suggesting that the periodic structures exist.
So you tell me – do they or don’t they?
“There is NO 37m periodicity in that data, in either hemisphere.
http://i44.tinypic.com/v5d73n.png
3.7 years is 44.4m”
Sorry – missed the change of cards. The data I was examining was the Global set. I have yet to do the work to decompose this further.
That means if your highest freq band is >4y you need the last three filters to be 48m, 36m, 27m, otherwise you will have the filter defects for that band.
Now, please. Is there any part of that you disagree with?
OK. So you plot whatever data summary of whatever length you wish on top of the data in the same manner as I have shown above and demonstrate what structures you find, if any.
Greg Goodman (Comment #117434)
July 1st, 2013 at 9:08 am
That means if your highest freq band is >4y you need the last three filters to be 48m, 36m, 27m, otherwise you will have the filter defects for that band.
“Now, please. Is there any part of that you disagree with?”
Nothing at all, except I can start from x and multiply to y or y and divide to x.
You see one direction, I observe two.
The series chosen is one that includes 12. This is very convenient as it means you can use the same filter set on normalised and non-normalised data.
I suppose it is the inevitable series if you take the integer requirement into account, as it starts 3, 4, 5, 7, 9, 12, 16…
All other series will devolve to this at the root because of the 3, 4, 5 start.
“You see one direction, I observe two.”
Jeez, what does it take? You got it backwards.
It’s not an option, you got it backwards.
The filter does not inject spurious signals in both directions so you need to understand what it DOES in order to correct it.
Running mean distorts higher frequencies (shorter periods if you prefer). So you need to run a 37m filter to correct the 48m filter. Then you need to run 27m to correct what the 37m did. By that stage the residual is confetti and can be ignored.
You can not “choose” to see this in another “direction” and try to fix it by doing 6 year and 9year instead.
RichardLH, “Please note that the IPCC CO2 related temperature ‘wedge’ is displayed (I think correctly) on the decompostion graph.”
I would not get too excited about matching the IPCC. The stratosphere, btw, is a good way to get “above” the noise for gut checks. Since it responds quickly to changes in ocean heat capacity on a global scale, I am a bit surprised that it has been ignored for so long; since it is a radiant physics problem after all, it makes a much better radiant shell than the turbulent middle troposphere.
https://lh3.googleusercontent.com/-nVs_9UFiZ5U/UcHYjCYF7GI/AAAAAAAAIsM/zoskauefMmY/s945/squiggly%2520mess%2520with%2520curve.png
When a radiant shell varies in altitude or shape, it varies the whole system response. The “IPCC” literature assumes that is not an issue and bulldozes ahead with various “adjustments” in order to maintain the “range of comfort”.
If you are looking for a gravitational influence, why not start where it would be most obvious?
Greg: OK, we are not going to agree about how to derive data summaries.
All I ask is that you come up with a better graph of UAH data that shows the periodic structure.
And for that matter, a better monthly filter series, preferably including 12, to look at the data with.
dallas (Comment #117440)
July 1st, 2013 at 9:26 am
RichardLH, “Please note that the IPCC CO2 related temperature ‘wedge’ is displayed (I think correctly) on the decomposition graph.”
“I would not get too excited about matching the IPCC.”
I rather obviously need to put more smileys on the posts. Did you look at the relative size of the >88 month output? Compared to the CO2 wedge? Do you need to consider how little it would take for even longer term modulation on that 60+ year signal to make that disappear?
Oh, and by the way – good luck with scaring anyone with those graphs out there.
Greg: My monthly series is 3,4,5,7,9,12,16,21,28,37,49,66,88 for the UAH data. Sampling length stops it there. I don’t always use all the steps.
By pure chance the 37 and 49 month spans almost exactly match the other underlying cycles, so allowing them to show through because we are on the ‘nulls’, as well as exactly removing the 12 month signal.
Now this all could be noise and imagination – but I see a pattern.
“Greg: OK. we are not going to agree about how to derive data summaries.”
I’m not saying anything about your “data summaries” or your zero crossing nodes, nor am I anything but in favour of recognising the periodic structure in the data.
But if you do it, you have to get the maths right. This is a very limited discussion that you seem intent on taking somewhere else or pushing off with “only time will tell”.
No. Maths is a science. If there is a difference of opinion it is settled by the maths. I must have explained your error at least ten times now. You are obviously one of those people who will argue the arse end off a donkey rather than admit you made a simple mistake and correct it.
In order to do a 4y filter you need to do 48m, 36m, 27m. You did only 48m.
Whatever you do after that will reflect the error you made at this stage. So fess up and address that issue. If you don’t think a 48m RM produces spurious results at 36m or you think this is corrected by the other longer period filters, then you need to come up with the maths on which you base that conclusion.
That can be concluded in one post, we don’t need to wait ten years to see what climate does.
Greg: I am sorry if that came across as confrontational.
I propose that we use the monthly series above for looking at temperature data when using central output running averages.
That is, the series that includes 12 months as one pole.
Divide below 12, multiply above, but you will end up with 3, 4, 5, 7…
I don’t see you as being confrontational.
” I propose that we use ..”
yet again you AVOID “confronting” the issue and try to take it somewhere else.
In order to do a 4y filter you need to do 48m, 36m, 27m. You did only 48m.
Greg: I have my 12 month series. Both above and below. What other series do you suggest?
Greg: The maths sets the values. I am not choosing them. The series is a ratio. Works either way from a single pole. I chose 12 months for obvious reasons and then went up and down from there.
And given the integer 3, 4, 5 start to this series, I suspect that it is the ‘central’ value for all such running average filters.
Exactly as you would if you were making an analogue filter and nulling out the main component first 🙂
In order to do a 4y filter you need to do 48m, 36m, 27m. You did only 48m.
Greg Goodman (Comment #117455)
July 1st, 2013 at 10:55 am
“In order to do a 4y filter you need to do 48m, 36m, 27m. You did only 48m.”
But I am not doing a 4 year filter, I am doing a one year one as that is the main component. The rest just falls out.
So did you start at 4y and work upwards, as you stated, or did you start at 12m, as you now state?
The equivalent of using one year normals but don’t get me started on that 🙂
I have always stated that the series chosen included 12 months as the main pole.
All right this merry-go round has gone far enough. I asked initially that you write this up so as to clearly describe what you did. You just respond with yet more incomplete snippets in posts here.
Your result is bogus because there is no 37m signal in that data. Quite how you led yourself astray is your business, since you refuse to communicate your method.
I regret having polluted this otherwise interesting discussion by trying to discuss this with you.
Apologies to Lucia and others for the noise.
I am sorry you think it is noise. I think it is pattern. We will see.
Greg Goodman (Comment #117460)
July 1st, 2013 at 11:28 am
“All right this merry-go round has gone far enough. I asked initially that you write this up so as to clearly describe what you did. You just respond with yet more incomplete snippets in posts here.”
Slightly unfair. I have done my best to explain how and why I chose the values I did.
I gave complete descriptions of how I processed the data.
I drew conclusions based on that summary.
I asked for opinion.
I got suggestions that I did not understand digital signal processing of short sample data.
There was a time when trying to find signals in noise from some well-over-the-horizon short wave radio source was exactly what this sort of (then analogue) circuit was used for.
Lucia: I apologise if this discussion has been a problem. I think it is worth pursuing.
Paul_K :
That is a good point … unless the variance can have an effect on the mean.
This is where I think Bob Tisdale, despite his lack of formal training, has hit on a key observation. The processes of capture of solar energy during colder conditions and transfer from ocean to atmosphere during warmer SST conditions are NOT symmetrical processes.
The first is additional input to the system; the second is an internal transfer which will indirectly lead to a proportion being lost to the system. But this is not a simple linear oscillation like a mass on a spring or a pendulum.
This is the fundamental error of mainstream climatology : the arbitrary assumption that any “internal” variation (that is probably far from being internal anyway) will just average out and can be ignored as stochastic “noise”.
This is a corollary of Eschenbach’s “governor”, which is a fundamentally non linear negative f/b. As I indicated above, I think the Nino signal correlates because it represents the non linearity of the tropics. Though that is just one component of El Nino. There is also an externally driven cyclic component, that I am working to quantify.
Larger El Nino/Nina variability has the means to capture more of the available solar and hence cause warming. When the variability drops below the norm it will result in cooling.
I thank you once again for pertinent criticism that has helped clarify my ideas.
SteveF,
Why did you pick 1998 as the breakpoint for the comparison? Is that how F&R did it? The reason I ask is that it seems likely someone will accuse you of cherry-picking.
It’s not a problem.
I agree that is open to question. I don’t like any of this “linear trend” game, which assumes a fundamentally wrong model to be fitted to this kind of data.
If there is a different ‘trend’ then the exercise has failed in its objective to separate the short term variability from the long term signal. The downward turn is the prominent 60y cycle. Unless that, or something that can simulate it, is included in the detrending function, it will still be there at the end.
The presence of a difference is worth noting in that sense: that the regression effort failed. In that context the date is not that critical.
Greg: The 88+ period seems close to 60 years but….
Greg: and if you haven’t figured it out from looking at the graphs, then it is possible to get any final trend (including none). All it does is add or subtract from the previous stage, in this case the 88+ cycles.
Greg: If you wish to add a two stage pre-filter of 7 and 9 to the 12 then please do.
I did not think it was required as I was looking for periods > 21 months.
Lucia: Any comments on the series used, the methodology and the conclusions?
Me said: “Larger El Nino/Nina variability has the means to capture more of the available solar and hence cause warming. When the variability drops below the norm it will result in cooling”
I then popped in to look at WUWT and found a new paper about ENSO variability. How on cue is that? And the headline graph shows variability has a steady underlying increase since the LIA.
http://wattsupwiththat.com/2013/07/01/claim-recent-el-nino-behavior-is-largely-beyond-natural-variability/#comment-1351874
Of course the authors manage to ignore that inconvenient truth and spin it into the usual “footprint” of AGW. But the chronology looks like it may contain useful information.
Bill_W (Comment #117466),
Just following F&R. As far as I can tell, there is no reason to pick that break point other than it is what jumps out as a change point when you look at the data.
Prologue:
I did not spot this when I read the article. This may be why F&R’s separate time constant for solar regresses to a different value. The extra degree of freedom allows one of the variables to capture some part of the ‘faster’ land component of the combined dataset.
The fact that such a regression produces a markedly different tau for two radiative forcings clearly indicates a fundamental incompatibility between the model and the data. Grant “Tamino” Foster must be well aware of that but sweeps it under the carpet because he gets the result he wanted.
To do this kind of regression I think it is necessary to restrict it to SST. As discussion with WHT above showed, there is quite a clear 2:1 land/ocean ratio in changes in temperature. Using a combined land+sea record is simply muddying the waters. Also if the solar linkage is to be assumed to be radiative, it must have the same tau as volcanic forcing.
I think the strongest result you have here is that, using a common tau and taking the 1950-on period, AIC shows a high order polynomial produces a better fit. It would be worth looking at that solution in detail rather than rejecting it. Clearly the p9 will be doing a lot of the matching, as you wrote, so what is the magnitude of the fitted variables in that case? Does it look physically credible? What does that polynomial look like?
I think you should give more attention to that result because it shows what happens if you don’t load the dice by cherry-picking a subset of the time series and allowing physically illogical degrees of freedom.
I think what you have shown, in quite a thorough fashion, with that regression is that the input variables are insufficient to explain the variability in the data. That is why the AIC runs out to p9.
I think a 9.1 year cosine peaking at 2005 would make this regression work. There is ample evidence of a 9y signal even if its physical origin remains obscure.
You have provided a formal proof that omitting it does not work; perhaps someone should attempt a regression that includes it.
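In that spirit, a minimal sketch (synthetic data, my own illustration, not SteveF’s regression code) of adding a 9.1-year cosine peaking at 2005 as one more OLS regressor:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1950, 2013, 1.0 / 12)              # monthly time axis, in years
cos9 = np.cos(2 * np.pi * (t - 2005.0) / 9.1)    # peaks at 2005 by construction
# Synthetic 'temperature': a linear trend plus the 9.1y cycle plus noise.
y = 0.01 * (t - 1950) + 0.1 * cos9 + rng.normal(0.0, 0.1, t.size)
# Design matrix: intercept, trend, cosine. In the real case the solar,
# volcanic and ENSO regressors would be additional columns.
X = np.column_stack([np.ones_like(t), t - 1950, cos9])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # should recover roughly [0, 0.01, 0.1]
```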
Just found this, which I did a year or two ago while trying to detect volcanic impacts on OHC:
http://i44.tinypic.com/xe3gnk.png
(there’s a typo in the legend; it’s two sines – no sinc function 😉 )
It is clear that the lack of a circa-22y component opens the likelihood of confounding with an arbitrarily spread volcanic forcing signal. It is the 22y cycle that would account for the recent downturn.
I’m going to take another look at this and the residual, since 3.7 is also a recurrent and strongly present periodicity in trade wind data and in the rate of change of CO2 in both the MLO and Samoa records.
And a dominant signal in UAH TLT, NH, SH and tropics:
http://climategrog.wordpress.com/?attachment_id=407
Again there is clear indication of a climatic rebound following an eruption that is quite distinct from the steady, longer term rise that is usually attributed to AGW.
Any suggestions on the origin of 3.7 years would be welcome. 😉
Greg Goodman (Comment #117489)
July 2nd, 2013 at 4:07 am
“It is the 22y cycle that would account for the recent downturn.”
From my work I would have said that a 24 year cycle was more likely, given that is the easy multiple of 12.
You can imagine that the 88+ month output is made of much more than one cyclic component, stretching out to whatever IPCC CO2 ‘wedge’ you wish.
At least one is 60 years.
If you subtract the trend portion that would give from the 88+ month output, then you can reduce some of what is left to a 24 year sine wave, which would make sense. I can almost ‘see’ it in the 88+ month output now, but…
Of course, all speculative because we do not have a long enough record for now.
SteveF
Thanks for your excellent discussion and exploration of regression and circularity.
Re: “an additional mechanism by which the solar cycle substantially influences Earth’s temperatures, beyond the measured change in solar intensity. I think convincing evidence of such a mechanism (changes in clouds from cosmic rays, for example) is lacking, although I am open to be shown otherwise.”
Whatever the cause, may I suggest considering:
Eastman, Ryan, Stephen G. Warren, 2013: A 39-Yr Survey of Cloud Changes from Land Stations Worldwide 1971–2009: Long-Term Trends, Relation to Aerosols, and Expansion of the Tropical Belt. J. Climate, 26, 1286–1303.
doi: http://dx.doi.org/10.1175/JCLI-D-12-00280.1
See also: Arctic Cloud Changes from Surface and Satellite Observations 2010
David L. Hagen (Comment #117501),
Thanks for the two references. I don’t see any obvious causal link with solar cycles in the cloud cover data, but maybe I am not looking at it right. Figure 5 in the first reference shows an increase in daytime cloud cover (not a decrease) over ocean, which seems to correlate with the 1970’s to late 1990’s warming, followed by a drop in daytime cloud cover over ocean since ~2000. That would seem to suggest that daytime ocean cloud cover increases with warming… higher temperature, higher vapor pressure, more clouds (but it is unlikely to be that simple).
“which seems to correlate with the 1970’s to late 1990’s warming followed by a drop in daytime cloud cover over ocean since ~2000. That would seem to suggest that daytime ocean cloud cover increases with warming”
Negative feedback in the tropics is what keeps them so stable; even with perturbations like -20% for a major volcano, the number of degree-days (growing days to a farmer) is maintained:
http://climategrog.wordpress.com/?attachment_id=310
Tropical storms are the mechanism. Less cloud since 2000 means some radiative input is dropping back.
Greg Goodman,
The forcing associated with a major volcano is nothing like 20% of average solar intensity. The measured attenuation at Earth’s surface (e.g. the GISS optical density data) tells you how much of total solar intensity was scattered. But most of that scattering of sunlight is in a forward direction (at angles not far from the original direction). Most scattered light still makes its way to the surface as “diffuse light”. It is only the relatively small portion which is back scattered (and so lost to space) which adds to albedo and reduces net energy flux from the sun.
Thanks. But surely there must be more backscatter than forward deflection. Like under light cloud you get some payback from indirect light, but it’s nowhere near what you lose on direct insolation.
Greg Goodman,
No, there is always much, much more forward scatter than back scatter. Consider driving a car toward the setting Sun. It is difficult to see because particles in the air (and tiny imperfections on the surface of the car’s windshield) scatter sunlight at low forward angles… and into your eyes. Even if you block the direct light from the sun, you are still blinded by scattered light arriving from angles near the sun’s angle. Scattering in air only becomes more-or-less uniform in angular intensity when the particles doing the scattering are extremely small compared to the wavelength of light (say below 20 nm). Air molecules (a few angstroms) scatter light pretty uniformly… so a clear sky is uniformly blue everywhere except near the horizon. Atmospheric particulates (natural dust, sea salt, volcanic aerosols, or man-made aerosols) are much larger, and scatter mainly at low forward angles. Mie theory provides a complete solution for scattering by spherical particles, and is worth learning about if you want to understand why the particulate scattering in the atmosphere is mainly forward. A good text is “Absorption and Scattering of Light by Small Particles” by Bohren and Huffman… a classic.
Greg Goodman,
Clouds consist of very large droplets/crystals (10 microns or more) and scatter strongly at forward angles. However, a cloud has such a high concentration of scattering particles that light entering the cloud is multiply scattered. With each scattering event the light becomes more randomized in direction. After being scattered many times, (within a hundred meters or less in relatively thick clouds), nearly all information about the original beam direction is lost… the light becomes almost perfectly uniform in intensity in all directions. That is why thick clouds are such good reflectors in the visible spectrum; the probability is much greater that multiply scattered light will escape from the top of the cloud where it came in than escape from the bottom (which is much further away). Thin wispy clouds do allow some sunlight to pass through without being scattered at all (you can continue to ‘see through’ the thin cloud). Atmospheric aerosols are a bit like very thin wispy clouds… they scatter quite a lot of light, but don’t have much multiple scattering, so the light remains headed mainly in the forward direction.
Greg,
If someone contradicts you, you really should do some research to see if perhaps you don’t really understand what you’re talking about.
So once again it’s time to trot out the references:
An excellent textbook on Atmospheric Radiative Transfer that is inexpensive is Grant Petty, A First Course in Atmospheric Radiation, $36 direct from the publisher. If you’re too cheap to spring for that, there’s always R. Caballero’s online Lecture Notes. Scattering is covered in Chapter 5 starting on page 105.
Steve, thanks for the explanations, that has improved my understanding. My figure of 20% was obviously OTT.
However, that does not negate the point I was making: negative feedback in the tropics is what keeps them so stable; even with perturbations like -20% for a major volcano, the number of degree-days (growing days to a farmer) is maintained. http://climategrog.wordpress.com/?attachment_id=310
Greg Goodman,
The influence of ENSO on tropical temperatures needs to be accounted for before you can see the influence of volcanoes more clearly. There very well may be negative feedbacks related to reflective cloud cover (people have been saying this for a long time, since at least Lindzen’s papers on the iris effect), but I think a better argument can be made that the state of the tropical Pacific (ENSO) responds to strong volcanic forcing with a change to el Nino conditions, partially offsetting the volcanic influence in the tropics, so that would be a negative feedback as well, and a strong one.
Greg, I recommend the stratosphere data when you are trying to figure out the impacts of forcings and lags.
https://lh3.googleusercontent.com/-nVs_9UFiZ5U/UcHYjCYF7GI/AAAAAAAAIsM/zoskauefMmY/s945/squiggly%2520mess%2520with%2520curve.png
That is UAH NH compared with the NH deep ocean temperatures. The deep ocean resolution/uncertainty is not very good, but it gives you an idea of what is going on.
Thanks Dallas, there does seem to be about a 2y lag on that heavily smoothed OHC-derived data w.r.t. TLS. For changes of that kind of frequency, that lag seems about right.
This is in good agreement with my volcano stack.
http://climategrog.wordpress.com/?attachment_id=310
Remembering that it is the downward slope in cumulative integral that corresponds to colder SST, two years hits it on the head. That’s a useful corroboration of my result from independent data.
What is unclear from that kind of plot is the tropical recovery that has profound implications for the sensitivity to radiative changes.
Greg, “What is unclear from that kind of plot is the tropical recovery that has profound implications for the sensitivity to radiative changes.”
The tropics have a great deal of thermal inertia and because of solar incidence angle, less impact from aerosol forcing. When you get into the lower angles, aerosols can reflect more radiation. Everything is doing pretty much what it should do. The problem is when the internal lags are long enough or out of sequence enough that you get partial synchronization, like tropical SST, not to be confused with ENSO. Since surface heat transfer north is greater and at a different rate than south, you get a confused “thermal” wave action. Then some “sets” lead you to believe you have found the key, until they drift out of that sequence.
I posted NH in reply to yours, but SH is essentially the same; the tropics did not average out as cleanly, but the same topology.
http://climategrog.wordpress.com/?attachment_id=312
Here’s the unintegrated plot for extra tropics:
http://climategrog.wordpress.com/?attachment_id=277
Now if I follow what you’re saying, it’s basically that scattering is forward scattering in the tropics, so don’t expect any noticeable hit.
However, even in extra-tropics regions we see only a temporary dip in temp. There may be a detectable final offset, but it’s of the order of -0.025K.
The key thing is the recovery. This is too fast for it to be AGW filling the gap, so is your comment about inertia saying this is mixing with deeper water, averaging it out to the level of insignificance?
In which case I come back to your plot (derived from OHC, apparently) to note that 0-100m seems to recover rapidly. The stratosphere also witnesses a compensatory reaction, showing a positive excursion for several years after both events.
That TLS record means less heat getting out. The energy budget actively adjusts for a period and allows more energy capture.
Whether this is purging of the stratosphere, ozone, or a lower level cloud reduction is a secondary issue. It shows there are compensatory mechanisms at work.
The problem with the exercise Steve does here, and many others have attempted, is that it does not take account of other natural variations and risks exaggerating the volcanic component by spurious correlation. That was the aim of my volcano stack: to try to average out non-synchronous variability. In fact the tropical section of the result shows marked synchronous variations as well.
Again, unless measures are taken to account for this, spurious regression results will occur.
http://climategrog.wordpress.com/?attachment_id=278
Steve, do you have any objection to making the residuals in figure 10 available? I like to see what is left after all the detrending and fitting.
They are of substantial magnitude and don’t seem to be white noise. I think a quick power spectrum would be informative.
Greg, “The key thing is the recovery. This is too fast for it to be AGW filling the gap, so is your comment about inertia saying this is mixing with deeper water, averaging it out to the level of insignificance?”
Just inertia. The tropical oceans are close to 29C, which is 6.25 Wm-2/K, with a specific heat capacity of ~4 J/g. The tropical thermocline is ~20C. The average “surface” air temperature in the middle-latitude land areas is about 9C with an SHC of about 1 J/g. The oceans are a big buffer. Now if the oceans are in a cooling phase when an eruption forcing peaks, there would be more impact.
“In which case I come back to your plot (derived from OHC, apparently) to note that 0-100m seems to recover rapidly. The stratosphere also witnesses a compensatory reaction, showing a positive excursion for several years after both events.”
Vertical temperature anomaly, not OHC. You could take the Reynolds Oiv2 data for a region to compare the forcing timing with the seasonal cycling, but there are so many short term fluctuations I doubt it would be very useful.
“Now if the oceans are in a cooling phase when an eruption forcing peaks, there would be more impact.”
Could you clarify? You mean there will be greater cooling for some physical reason, or that there will be false attribution?
Greg Goodman,
“Steve, do you have any objection to making the residuals in figure 10 available? I like to see what is left after all the detrending and fitting.”
Honestly, I do have objections. I have seen the kind of wild-eyed analysis you think is correct (this thread on ‘cycles’; plus your back-and-forth with Ferdinand at WUWT about CO2… which is almost beyond belief… frightening even), so I am reluctant to give you, or anyone who seems so disconnected from physical reality, more fodder for the mill. Were I confident that you would think carefully about the data, I would have no hesitation.
.
I know that is a harsh assessment, but it is an honest one. You can of course fly into a rage, but I do hope you will think carefully about what I have said. I am old. I do not care much one way or the other about your lack of understanding of relevant technical subjects, but I will not support your distortions of reality by providing you with data.
Greg, “could you clarify? You mean there will be greater cooling for some physical reason or that there will be false attribution?”
There would be greater cooling. If they are cooling to begin with aerosols would reduce the sink temperature making the cooling more efficient. If they are warming, aerosols would slow the rate of warming, but not necessarily reverse the warming.
Re: SteveF (Jul 3 17:19),
IMO, Greg is just as bad in his own way as Doug Cotton was originally. He has his own idea of physical reality and nobody is going to convince him otherwise.
DeWitt,
Agreed. Greg strikes me as a mind meld of Doug Cotton and Nick Stokes… More than a little crazy, but with math skills.
That has to be the best reason for refusal since Phil Jones’ “why should I give you the data, you only want to find something wrong with it”. The complete antithesis of scientific investigation and verification.
You can see yourself that the residual is not just random noise, yet you state that it would be supporting a “distortion of reality” if I evaluated what signal remains unfitted.
Steve, the data is the reality, as is the residual the reality of what you have failed to account for in your regression attempt. That is part of your results.
The broad range of ways you attempted the regression was informative, in particular the AIC test showing p9 fitted the data better than the input variables being used. You also exposed the problems of the physically unreal degree of freedom in allowing a separate tau for solar. This was clearly a worthwhile reassessment of F&R’s rather trivial regression exercise.
I was curious as to why you had not commented on those observations when I made them. I now understand. You are yet another person pretending to do science while refusing any independent verification of what you have done.
You are the one with a preconceived idea of “reality” that you are not willing to see put to the test of validation.
I thank you for honestly stating your position, rather than remaining without comment. It allows everyone to see your lack of scientific integrity and to assess your opinions and work in that light.
I would like to be surprised at such an attitude but sadly this refusal of the basic tenets of science now seems to be endemic.
Thanks for making your position clear
Greg.
SteveF (Comment #117540)
So, having chosen not to even counter my observations about what your analysis shows, its weaknesses and how it could be taken further, you now have the nerve to question my state of mind and resort to insults.
Your previous comment openly states your lack of scientific integrity. That last one shows you have no integrity of any sort.
Thanks for being honest, at least we know where you stand now.
Greg Goodman,
I try to avoid commenting on nonsense, which is why I did not comment on all the “cycles” stuff.
.
I provided you with all the information needed to generate your own residuals. Nick Stokes managed to do a similar analysis (in R) in a few days. To repeat: Hadley global and tropical temps, Nino 3.4 from NOAA, GISS volcanic forcing, international sunspot numbers. Have at it.