NOAA’s May temperature anomaly is in. I’ll be darned but it was down relative to April. Must be that dying sun. 🙂
The monthly observations since Jan 2000 are plotted along with the multi-model mean based on the A1B SRES, trend fits based on least squares, and an ARMA noise model chosen because it creates the widest uncertainty intervals among the subset of noise models tested:
Some readers will notice I’ve picked a new color scheme. Let me know if these choices seem ‘visible’ or ‘difficult to see’.
- As usual, the graph contains the trend fit using ordinary least squares and the associated uncertainty intervals computed under the assumption that the ‘weather + measurement error’ process is AR(1). These are shown in gold. Under this assumption, the multi-model mean trend (dashed black: 0.205 C/dec) as a point value falls outside the 2σ uncertainty for the trend of Earth’s temperatures as reported by NOAA/NCDC, which puts the upper 2σ value for trends at 0.164 C/decade. (A minimal sketch of this sort of AR(1) adjustment appears just after this list.)
- The trend fit using ARIMA(4,0,0) is also shown (dark green). This fit was selected in a rather odd way. It is not the best-fit ARIMA for the data shown: it is the ARIMA(p,0,q) with p or q up to 4 that results in the largest uncertainty intervals. I pumped up the uncertainty intervals from that fit by taking the pooled average of the reported uncertainty intervals and the standard deviation of all trends from all ARIMAs tested. The 2σ uncertainty intervals for the trend are illustrated with straight dashed lines in dark green. Note the trend based on the multi-model mean also falls outside the upper 2σ value for the trends on this basis.
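For readers who want to poke at this sort of calculation, here is a minimal sketch in R of an OLS trend whose standard error is inflated for AR(1) ‘red’ noise using the usual effective-sample-size correction. This is an illustration under stated assumptions – `anom` is a made-up anomaly series, not the NOAA/NCDC data, and this is not the exact script behind the figure:

```r
# Minimal sketch: OLS trend with AR(1)-corrected uncertainty (illustration only).
set.seed(1)
time <- (1:137) / 120                         # monthly steps, in decades
anom <- 0.2 * time + arima.sim(list(ar = 0.5), n = 137, sd = 0.1)

fit <- lm(anom ~ time)                        # ordinary least squares
rho <- acf(resid(fit), plot = FALSE)$acf[2]   # lag-1 autocorrelation of residuals

se.ols <- summary(fit)$coefficients["time", "Std. Error"]
se.ar1 <- se.ols * sqrt((1 + rho) / (1 - rho))  # inflate SE for AR(1) noise

cat(sprintf("trend = %.3f C/decade, 2 sigma = +/- %.3f\n",
            coef(fit)["time"], 2 * se.ar1))
```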
Statistical tests to determine whether the multi-model mean is consistent with the NOAA/NCDC trend require using the pooled variance of trends from all models in the multi-model mean and the estimated variance of the observed trend. The difference between the multi-model mean and observed trends, normalized by the pooled standard error, gives d* = 2.09; this is larger than the critical t value for the 95% confidence interval with 135 degrees of freedom, which is 1.98. Being conservative and assuming I need to reduce the number of degrees of freedom for the t-test, and estimating degrees of freedom based on the red-noise correction, the critical t is 2.02. In both cases d* exceeds the critical t. Based on this comparison, the trend for the multi-model mean is both higher than and inconsistent with the observations. Contingent on the assumptions for estimating the uncertainty intervals, the multi-model mean is rejected relative to the trend in NOAA/NCDC.
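A hedged sketch of the d* comparison itself; every number below is a placeholder I made up for illustration (it does not reproduce the post’s inputs), and the pooling shown is one common way to combine the two variances:

```r
# Hypothetical inputs (C/decade): observed trend and its SE, plus model trends.
set.seed(2)
b.obs  <- 0.10
se.obs <- 0.045
b.mods <- rnorm(50, mean = 0.205, sd = 0.08)   # placeholder model trends

b.mm  <- mean(b.mods)                          # multi-model mean trend
se.mm <- sd(b.mods) / sqrt(length(b.mods))     # SE of that mean

d.star <- (b.mm - b.obs) / sqrt(se.obs^2 + se.mm^2)  # pooled normalization
t.crit <- qt(0.975, df = 135)                  # ~1.98, as in the post
cat(sprintf("d* = %.2f vs critical t = %.2f\n", d.star, t.crit))
```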
Cherry picking notes:
Bear in mind that if one wished to cherry pick a start date to get a result one “likes”, start dates during relative minimums will give higher trends; start dates during relative maximums will give lower estimates. Jan 2000 is during a La Nina and gives higher observed trends than a start date of Jan 2001. So, the choice of 2000 tends to be more favorable to models than the choice of 2001. (I still prefer 2001 for testing owing to the date when the SRES were published.) Also: this is NOAA/NCDC only. Full results should involve HadCrut and GISTemp. I’ll discuss those when those agencies report.
- Having noticed that the observations have tended to fall below the multi-model mean over the entire decade, I decided to test both the constant and the trend for a least squares fit. The appropriate place to test whether the constant for the fit through observations and a forecast is correct is the center of the data set tested. So, for 137 evenly spaced points, we test at the center of the record. The easiest way to do this is to normalize the time series to [-68, 68] instead of [1, 137] and perform the fit to that time series. Fit routines then return an uncertainty in the intercept which corresponds to the appropriate value to test. (A sketch of this centering appears just below.)
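A minimal sketch of the centering trick in R, with `anom` again a made-up series standing in for the observations:

```r
# Center the time index so the intercept (and its SE) refer to the
# middle of the record rather than the first point.
set.seed(3)
anom <- 0.1 + 0.0017 * (1:137) + rnorm(137, sd = 0.1)  # hypothetical data

t.ctr <- (1:137) - 69               # maps 1..137 onto -68..68
fit   <- lm(anom ~ t.ctr)

# Estimate and uncertainty of the mean level at the record's center:
summary(fit)$coefficients["(Intercept)", c("Estimate", "Std. Error")]
```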
(For those wondering: Despite framing this as a parameter in a model fit, this test amounts to testing whether the mean temperature over 137 points differs for observations and the multi-model mean. Also, to some extent, the differences in outcome for this test and the previous one will be affected by whether the trend for the model mean exceeded the observed trend during the baseline period.)
Owing to baselining, comparisons of constants associated with fits can only be done using data outside the baseline period. IPCC projections are stated relative to the average for Jan 1980–Dec 1999, which sets the period I use to rebaseline models and data. So, comparisons of the constants for fits are restricted to fits with data beginning in 2000. This is the main reason today’s post shows data beginning in 2000 rather than my usual choice of 2001.
Also owing to baselining, the uncertainty in the estimate of the observed mean relative to the multi-model mean must include the uncertainty in determining the value of ‘0’ from finite data during the baseline. I estimated this using the ARIMA fit to the baseline period that gave the largest uncertainty estimate during the baseline period.
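One plausible way to put a number on that, sketched in R: fit an ARMA model to a hypothetical baseline series and read off the standard error that `arima()` reports for its mean (‘intercept’) term. The order below is illustrative, not necessarily the one that maximized the uncertainty:

```r
# Hypothetical Jan 1980 - Dec 1999 baseline anomalies (240 months).
set.seed(4)
base <- arima.sim(list(ar = 0.6), n = 240, sd = 0.1)

fit   <- arima(base, order = c(1, 0, 1))               # ARMA(1,1) with a mean
mu    <- coef(fit)["intercept"]
se.mu <- sqrt(fit$var.coef["intercept", "intercept"])  # SE of the baseline mean
cat(sprintf("baseline 'zero' = %.3f +/- %.3f (1 sigma)\n", mu, se.mu))
```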
To let readers visualize the uncertainty in the constant, I placed a vertical line at the appropriate time point. A thin vertical dark green line appears just after 2005 in the figure above. The top and bottom of the line touch the curved dashed dark green lines. Note that the top of this line falls below the multi-model mean at that point in time and just grazes the lower 1σ uncertainty for the multi-model mean (heavy grey line just outside the heavy black line). This suggests the constant value for the observations runs low relative to the multi-model mean.
Of course, I also applied a test to determine whether the constant value for the multi-model mean and observations match, using an estimate of uncertainty that includes that for the spread of model means. The result is d* = 2.88, which indicates the difference between the 137-month mean for observations and for the multi-model mean is statistically significant. The diagnosis: contingent on choice of start date, observational set and method of estimating the uncertainty intervals, the multi-model mean is warmer than the data and the difference is statistically significant at the 95% confidence level.
Cherry picking note: Of course we should do the test with all agencies’ reports of observations. Also, for this test, picking a start date during a relative minimum has the opposite ‘cherry picking’ effect relative to that for comparing trends. That is: starting during a low temperature will result in larger values of d*. So, tests starting in 2001 will result in lower d* than starting in 2000. When all three land series have reported for May, I’ll report all results in a table, and also show results starting in 2001.
- Believe it or not… rumor has it that theory tells us the errors in the estimated trend and in the estimated constant value for a linear fit are uncorrelated. (And, yes, I’ve checked the rumor, at least when the errors are white. I have not fully tested other noise models, but I suspect I’ll confirm it more generally.)
This suggests that I can create a more powerful metric based on the pooled values of the two d*. That is: if I argue that I would ‘reject’ models as showing too much warming when the combination of both d*’s suggests {the data ran too cold relative to the models over the 137 month period and the observed trend is also too low}, and I would reject as showing too little warming if the opposite occurred, but otherwise I would accept the models as on track, then I can create a statistic to monitor this by computing the pooled d*: the square root of the average of the squares of the two d*’s.
This results in d*_pooled = 3.51, which is well outside the 95% confidence intervals, contingent on accepting that the ARIMA I chose gives an upper bound on the uncertainty for the 137-month trends and 137-month means.
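For concreteness, a small sketch of the pooled statistic as defined above, along with the normal-theory cutoff one might compare it to if the two d*’s really are independent and unit-normal under the null (my illustration, not necessarily the exact critical value used for the post):

```r
# Pooled d*: square root of the average of the squared d*'s.
pooled.d <- function(d1, d2) sqrt((d1^2 + d2^2) / 2)

# If each d* ~ N(0,1) and they are uncorrelated, 2 * pooled.d^2 ~ chisq(2),
# so a 95% cutoff for the pooled statistic would be:
sqrt(qchisq(0.95, df = 2) / 2)   # about 1.73
```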
Cherry picking note: Recall that people testing trends who wish to cherry pick a start year to get a result they “like” would make opposite choices for comparisons of trends and for comparisons of means. So this pooled metric is fairly robust to choice of start year and more difficult to cherry pick. It is also more statistically powerful because it is based on data from 1980–2011 rather than data from 2000–2011 alone and includes information from both deviations in the mean and the trends. (Note to those wondering: picking a more powerful statistical test is not ‘cherry picking’; it’s called ‘good practice’. If one becomes aware of a more powerful statistical test to apply to a predefined data set and the new method doesn’t introduce adverse features like bias, one should pick the more powerful test. Always.)
For those wondering about the main message: Based on NOAA/NCDC, the multi-model mean is running hot, and the difference is statistically significant. You’ll be seeing more pooled d* results when GISTemp and HadCrut report.

Watch SO2 from Nabro
http://sacs.aeronomie.be/nrt/index.php?&Region=000&InstruGOME2=1&InstruOMI=2&InstruSCIA=3&InstruIASI=4&InstruAIRS=5&obsVCD=1&obsAAI=2&modeADD=1&horaireIASI=1&horaireAIRS=1&NRT=1
I’m stumped. What do I click to watch SO2 from Nabro?
Ok… I had to click at least one instrument (you can select all 5.) I picked SO2. Then picked June 15 and clicked submit; maps on the right show the SO2 all over the place. I can also see a dot on the 14th.
Of course this didn’t affect May temperatures. But that’s a pretty cool site.
Has that gone stratospheric? (I found 2 news stories which didn’t say, and I haven’t figured that out from that site.)
I like the new colours – but then I didn’t have problems distinguishing the old colours. I think the new ones are just aesthetically nicer 🙂
I think the uncorrelated nature of errors is a definition of a kind of error rather than something theory predicts – but maybe I misunderstood your point.
Lucia, my apologies for the cryptic link, the Nabro Volcano in Eritrea erupted on June 13th, so yes of course it didn’t affect May temperatures, but perhaps it can have influence on global temperatures in the coming month(s).
I believe that the global diurnal temperature is about 16 degrees C. The peak to trough between 2008 and 2010 is about 0.2 degrees; so in energy terms there was a swing of 1/80 the global diurnal temperature.
I can’t see how you can detect an addition of 3.7 W/m2 on top of 237 W/m2 (about 1/64) over the course of years if your noise is of the same order of magnitude.
So Lucia, where did all the energy go from 2006 to 2008; where did all the extra energy come from between 2008 and 2010 and where is all the energy going now?
I’m sure there are better ways to word it. But the thing is: the math says the errors will be uncorrelated. And, if I run a Monte Carlo with synthetic data where I know the true trend and true intercept, and compute a bunch of trends and intercepts, I do, indeed, confirm the errors in the trends and intercepts are uncorrelated.
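A minimal sketch of that sort of Monte Carlo in R (not Lucia’s promised script; note the time index is centered on zero, which matters, as the exchange below makes clear):

```r
# Errors in OLS trend vs. intercept for synthetic data with known truth.
set.seed(5)
n  <- 137
tt <- (1:n) - mean(1:n)        # centered time index
a  <- 0.2; b <- 0.02           # hypothetical true intercept and trend

est <- replicate(5000, {
  y <- a + b * tt + rnorm(n, sd = 0.1)   # white-noise synthetic data
  coef(lm(y ~ tt))                       # (intercept, slope) estimates
})

cor(est[1, ] - a, est[2, ] - b)  # ~0: the two errors are uncorrelated
```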
DocMartyn
Well… it could radiate off into space. But actually given the nature of the system and measurements, there doesn’t have to be accumulation of heat in joules for surface temperature to vary. The surface has no volume. So, oddly enough, if the temperature is not uniform, the average surface temperature can change quickly as a result of redistribution of the ‘hot’ and ‘cold’ bits.
This sounds like I’m making it up. But imagine a solar pond with salty warm water below and fresher, cooler water on top. (Solar ponds often have dark bottoms. Rain falls on top. It could be in this state at 8 in the morning.)
Imagine further that you’ve put all your thermometers on floats right at the surface. So you get a “surface temperature” by averaging over them.
Imagine that owing to heat accumulation the hot salty water warms further until it is less dense than the water on top. Suddenly, some hot salty water rises, which induces more flow and mixing. The whole pond mixes. Now everything is at a uniform temperature.
Suppose in the meantime, you just plot surface temperature. Guess what: The ‘surface temperature’ will suddenly rise. No heat was created in this process.
(Of course, you would also imagine that afterwards the pond will lose more heat to the air because its surface is warm. But at least on a short time scale ‘surface temperatures’ can rise because of sudden mixing. The same thing can happen on earth: warmer or cooler water can upwell because of ENSO and so forth. So, some of the fluctuations don’t represent any long-term trend.)
GISS is out. Also down – from 0.55 to 0.42
Lucia: I suspect your lukewarming will follow the graphs. I predict that by the end of 2012 you will not be a lukewarmer anymore – but just joking. BTW, Mann stated that the solar minimum will only cause a 0.1C loss. I think he forgot that solar minima are related to increased tectonic effects, thus all the volcanic activity we have seen this year and will see in coming ones. This could bring it down much, much more. Re: it’s all nonsense – it’s extremely coincidental that we are living in a time when a solar minimum is coming.
Andrea–Please stick to one name. Ok?
I assume that if the present trend continues it will be outside the lower IPCC SD trend, and then we can conclude that it’s all natural variation by the end of 2011? BTW, sea surface temps are currently trending down (not up, as expected due to the end of La Nina) – see AMSU SST.
Andrea no problem
The La Nina was around -1.6C from September to January. The ups and downs of the past few months are still a reaction to this period. Probably on the way back up now.
Sven–
Yep! But GISS tweaked their format slightly. I fixed the bit of my code that reads it in, and then hubby got home. We went out to dinner. I drank… 2… margaritas. No way I can post until tomorrow! (Hic!)
Lucia,
“the multi-model mean is running hot, and the difference is statistically significant”
.
Yes. The question is why. Alternative #1: The models have it very wrong, and the true sensitivity is well below the IPCC best estimate (and likely below 2C per doubling). Alternative #2: Rapid growth in emission of fossil fuel aerosols by China, India, and other developing countries has essentially offset all recent increases in GHG forcing. Alternative #3: Most warming has been offset by the start of the cooling phase of the 60–70 year cycle that is evident in the temperature history. Alternative #4: Some combination of the above.
.
While the jury is still out, it is not looking good for the models. Expect some rapid backpedaling very soon from the more sane of the AGW troops (like Gavin). The true loons (Hansen, Trenberth and many others) will never budge, in spite of what the data say. No surprise there, since it was never really anything except politics for them from the get-go.
I rest my case, it’s over… solar and the IPCC story etc. and real data. It’s completely falling apart now. You can keep modeling forever. My last post here:
http://wattsupwiththat.com/2011/06/17/the-wit-and-wisdom-of-real-climate-scientist-dr-ray-pierrehumbert/
SteveF– Yep. The question is why. I think a number of people will, in public, at least stick with the notion that if the temperature is within the spread of the models, then somehow we “can’t” notice that the temperature is in the lower range, below the mean, and that this observation is statistically significant and/or somehow this “doesn’t matter”.
But the fact is: Whether the temperature rise is in the lower or upper range of projections has policy implications. It is worth knowing when people like Tobis want to say we need to decide things based on the “risk” of the high side. Well.. maybe. But in that case, it’s important to know how likely following the high trajectories actually is.
Yes.
It’s all about policies.
What I love about this site is that you evaluate the statistical evidence, unemotionally. (Yes, I’ve seen all the stuff about ‘if you need statistics’ and experimental design – but there’s only one Earth…)
The evidence doesn’t seem to support some advocated policy solutions …… Or am I misreading it?
Correction.
When I said ‘advocated’ I think I meant ‘of the more extreme’.
SteveF (Comment #77518)
.
[“Yes. The question is why”]
.
Alternative #1 gets my vote… 🙂
SteveF (Comment #77518)
June 17th, 2011 at 9:05 pm
“Yes. The question is why.”
Here’s a hockeystick for you.
http://isccp.giss.nasa.gov/zD2CLOUDTYPES/B38glbp.anomdevs.jpg
Change in upwelling SW:-
http://isccp.giss.nasa.gov/zFD/an9090_SWup_toa.gif
Change in albedo:-
http://isccp.giss.nasa.gov/zFD/an9090_ALB_toa.gif
The ISCCP writes:
“The overall slow decrease of upwelling SW flux from the mid-1980’s until the end of the 1990’s and subsequent increase from 2000 onwards appear to [be] caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.
…The overall slight rise (relative heating) of global total net flux at TOA between the 1980’s and 1990’s is confirmed in the tropics by the ERBS measurements and exceeds the estimated climate forcing changes (greenhouse gases and aerosols) for this period. The most obvious explanation is the associated changes in cloudiness during this period.”
High energy GCR anyone?
Paul
Re: Paul_K (Jun 19 18:06),
I get 404 errors for all three links.
Hi Lucia,
I am a bit suspicious of your assertion that you can form a pooled statistic to make a more powerful test:-
“This suggests that I can create a more powerful metric based on the pooled values of the two d*. That is: if I argue that I would ‘reject’ models as showing too much warming when the combination of both d*’s suggests {the data ran too cold relative to the models over the 137 month period and the observed trend is also too low}, and I would reject as showing too little warming if the opposite occurred, but otherwise I would accept the models as on track, then I can create a statistic to monitor this by computing the pooled d*: the square root of the average of the squares of the two d*’s.”
If you wish to generate confidence intervals which account for uncertainty in estimation of both constant and trend, why not use the more common approach for doing this? Confidence intervals at a varying level of the dependent variable are easily calculated. The formula is included in this Wiki article in the section headed “Normality Assumption”. (I don’t normally reference Wiki, but in this case, it is correct.)
http://en.wikipedia.org/wiki/Simple_linear_regression
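(For reference, the interval Paul_K points to – the confidence band for the fitted mean response in simple linear regression, under the normality assumption – is

$$\hat{y}(x^*) \;\pm\; t^{*}_{n-2}\, s\,\sqrt{\frac{1}{n} + \frac{(x^* - \bar{x})^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}}$$

where $s$ is the residual standard error and $t^{*}_{n-2}$ is the critical value at the chosen confidence level.)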
The variance of the sum of the two estimators (trend and intercept) can be calculated by analytic pooling, since the analytic expressions are known, and hence one can develop a t-test, but I don’t think you get the same answer (I haven’t tried it) to a hypothesis test of goodness-of-fit as applying the CIs calculated using the above formula. Am I missing something?
DeWitt,
They still work for me. Have another go.
If that fails, the full suite of images and text are here:
http://isccp.giss.nasa.gov/projects/browse_fc.html
If you double click on the images you will get full-size.
Paul
DeWitt,
I forgot to mention that to access the first “hockeystick” plot I included which was the change in IR High-level Clouds, you will need to click on the link in the text “changes in cloudiness”. Hope you find it.
Paul
PaulK–I see the confidence intervals on
* The trend (they use β), which I discuss above,
* The constant value (intercept), which I discuss above, and
* The ‘y’ values– which I don’t care about.
By “Confidence intervals at a varying level of the dependent variable are easily calculated” do you mean the confidence intervals on ‘y’? That’s not what I’m trying to test. Or do you mean something else I should look at?
I’m not sure what you mean by “goodness of fit” in your sentence above.
Do you mean you don’t think the sum of the two d*’s will have a standard deviation of 1/sqrt(2)? Do you mean you aren’t sure about the ‘t’ values? These are easily tested for white noise– or any noise process one wants to test for. (Obviously, I plan to post the tests with white noise and AR1. )
If you mean something else, let me know so I can do the test.
Lucia, those colours are so much clearer! Thanks.
Re: Andrea (Jun 17 18:00),
Ah no. “Natural variation” is not an explanation. It’s the absence of an explanation. The IPCC trend is 2C per century. As lukewarmers we see that as high.
A smarter question is what do we have to see in observations to conclude that the warming for doubling is BELOW 1.5C?
Perhaps instead of the multimodel mean, Lucia should do the lukewarmer
mean… the mean of all models with a sensitivity less than 3.
Stephen Mosher:
No, I can’t agree with that. Natural fluctuations (e.g. through coupled atmosphere-ocean oscillations) are an explanation of sorts. We can observe those in the historic data, and they can be used to set limits on future expected variability in natural fluctuations.
In the absence of a (statistical) model, I would agree with you. But we have information on the “natural variation”, so positing it as an explanation for observed deviations from the slow secular temperature drift associated with GHG forcings really is an explanation. IMHO.
steven mosher (Comment #77627)
June 19th, 2011 at 11:23 pm
“A smarter question is what do we have to see in observations to conclude that the warming for doubling is BELOW 1.5C? ”
———————
If you take out the natural variation from the ocean cycles, that is exactly what we see – around 1.5C by 2100.
1965 to 2040 – IPCC Forecasts versus Natural Variability-removed Hadcrut3.
http://img196.imageshack.us/img196/9984/ipccipccprednatvarremov.png
Re: Paul_K (Jun 19 19:29),
I had to go to the main page.
Why high level clouds? As I remember, the cosmic ray hypothesis was related to low level clouds.
This is my view also. Given enough historic data we should be able to set a limit on the amount of variability in natural fluctuations. Of course, we see lots of arguments on that. For example, I don’t think d=1 in ARIMA can form the basis for creating an estimate for the probability distribution of 1 year, 10 year, 100 year or 1,000 year trends. But arguing about that is different from saying that it’s impossible to bound the variability due to natural fluctuations.
Hi Lucia,
I wrote my comment in the early hours of the morning and it probably wasn’t the most lucid thing I’ve ever written.
You stated that
“Believe it or not… rumor has it that theory tells us the errors in the estimated trend and in the estimated constant value for a linear fit are uncorrelated. (And, yes, I’ve checked the rumor, at least when the errors are white. I have not fully tested other noise models, but I suspect I’ll confirm it more generally.)”
I should have started by saying that, as I understand this comment above, I don’t believe it is true, but we may be talking about different things. I note above that you state that you have tested this, so I am puzzled. I tested the same thing and found that the errors are highly (negatively) correlated. To make sure we are talking about the same thing, say there is a joint distribution f(x,y) where the x’s and y’s are correlated in this (exact) form:
Yi = a + bXi + epsi where the epsi are normally distributed random errors about zero.
Let’s say that you now choose, say, n samples in the form of (x,y) pairs from this distribution and use them to estimate a and b. Call the estimators alpha and beta. Let’s also say that you do this k times, so that you have k values of alpha and beta. If you now cross-plot the errors in the estimates of a (i.e. a-alpha) against the errors in the estimates of b (i.e. b-beta), you should find that they are strongly negatively correlated. This can be deduced directly from first principles from the relationship between alpha and beta, so I am unclear what you mean when you say that “the errors in the estimated trend and in the estimated constant value … are uncorrelated”.
Hence my first point is that, if you wish to calculate the sample variance of the function (alpha + beta), you cannot do so by a simple summation of the variances, since they are not independent. You need to account for the covariance. However, you CAN estimate the variance of this function by analytic pooling (since their relationship is well defined for the assumptions here). My question then was – why would you want to do this?
If you want to test whether an observed joint distribution (in this case your (temp, t) from the CMIP models) is compatible with an independently derived (linear) relationship, taking into account the uncertainty in both trend and intercept, then the Wiki formula I pointed you to allows you to do so. (This was what I was referring to as “goodness-of-fit” – not a Chi squared test.) I suspect, but don’t know, that testing whether the relationship falls within the bounds of the CIs (yes, I was talking about the CIs on the Y’s), accounting for joint uncertainty, would give a different result from computing the variance of your (alpha plus beta) function and doing a t-test.
I hope this clarifies. I suspect we are talking at cross purposes probably because I don’t understand what you are doing.
Paul
DeWitt Payne (Comment #77639)
June 20th, 2011 at 7:47 am
“Why high-level clouds?”
Hi DeWitt,
Great question. I have no idea! The low-level cloud data shows a fairly continuous decline even after 2000. There is a major reversal in high-level cloud which seems to ape the reversal of trend in outgoing SW/albedo change. I put it up because the change in cloudiness seems to be a major component in the answer to SteveF’s question. I was also very interested in the second ISCCP comment that suggests that the change in cloudiness explains a change in net flux which “exceeds the estimated climate forcing changes (greenhouse gases and aerosols) for this period”. That suggests an unaccounted-for internal forcing or an (unaccounted-for) exogenous forcing on cloud behaviour and hence albedo.
Paul_K
Yes. For now let’s stick with white noise errors for the epsi.
If I generate synthetic data for the ‘epsi’ with a=0 and b=0, and do the OLS fit to get ‘a’ and ‘b’, I find ‘a’ is uncorrelated with ‘b’. I’ll show this and post my code– if I made an error, I want to know.
I suspect I know the problem. Your X’s are probably numbers like:
(1,2,3,4,5….. N)
(For now, let’s assume N is even to make the comment easier.)
But to get uncorrelated errors, the fit needs to be to (-N/2, -(N/2-1), …, +N/2-1, N/2). You need to center the data– this is essential. (Ok. You can do other things and fix up after getting the intercept at x=0, but if you don’t center the ‘x’s on zero, you do have to fix up afterwards.)
But what’s really uncorrelated is the mean of the errors in Y and the errors in the trend. See the discussion near my point (3) above.
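(A quick illustration of the point at issue, sketched in R – uncentered x’s give Paul_K’s strong negative correlation; centered x’s give the zero correlation Lucia describes:)

```r
# Correlation of intercept and slope errors, uncentered vs. centered x.
set.seed(6)
n <- 100
x.raw <- 1:n                  # x's like (1, 2, 3, ..., N)
x.ctr <- x.raw - mean(x.raw)  # the same x's centered on zero

sim <- function(x) replicate(2000, coef(lm(rnorm(n) ~ x)))

raw <- sim(x.raw); ctr <- sim(x.ctr)
cor(raw[1, ], raw[2, ])  # strongly negative, as Paul_K found
cor(ctr[1, ], ctr[2, ])  # ~0, as Lucia says
```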
Lucia,
OK, I’m beginning to get an inkling of what you mean, but beware! With a non-zero value of b (slope), the degree of correlation in the error terms for the parameter estimators is a function of the degree of correlation between the x’s and y’s, as I recall. Your tests with a=0, b=0 are starting out by definition with a zero correlation between the x’s and y’s. (The distribution of the y’s is the same as your error term, epsi, with no dependence on x.) This perhaps explains why you see no correlation in the error terms.
Mean-centering the data won’t solve this problem – it won’t make the correlation go away if it exists in the original (x, y) dataset.
If you start off with a null hypothesis that says “no correlation between X and Y”, then you get a free pass from me. Otherwise, you need to tread carefully. I’ll wait on your final description since I’m still not sure I’ve got what you are trying to do.
Paul
Paul–
I wrote a post to clarify. I stopped after demonstrating with trendless data, figuring that as I show each thing we can find areas where we agree or disagree, and clarify those, before moving on to more difficult things and getting confused.
My claim is only that the error in the trend is uncorrelated with the error in the mean value. It’s not going to matter that the ‘signal’ in y is correlated with x. But we can turn to my script for that, or we can do the algebra when the trend is not zero. I think it works out. (And… well… I did do the Monte Carlo… So, I have more confidence than if I’d just thought it up and claimed it!)
Carrick:
I agree that natural variation may dominate and man’s influence gets crushed and obliterated in the great climate machine. That is one possibility.
The problem with natural variation is that it is very poorly understood.
What skeptics and warmers tend to forget is that natural cooling variation may mask a strong warming signal from high sensitivity AGW. For example, it is possible that post 1998 is a strong natural cooling cycle that shows no change in temperature metrics because of AGW (GHG+all the other stuff).
This is why I think Mosher is dead wrong about natural variability.
Natural variability and feedbacks go hand in hand in my conceptual model. Until these aspects of climate are understood reasonably well (right now, there is no certainty of the sign, let alone the order of magnitude) no predictions or risk assessments can be relied upon.
I think we are in total agreement here, Howard.
The models have to get the physics of natural variability right, if they are to have any hope in predicting environmental climate sensitivity to anthropogenic forcings.
The one thing that we do have in our favor is that natural variation is oscillatory and therefore bounded in how much variability you can get from it. If you wait long enough, you should always be able to pick out a climate signal from AGW forcing.
The question is just “how long do you need to go”? I think that’s the direction the art needs to be moving towards… how to reduce the “waiting period” before we can eliminate natural variability as a candidate causative agent for an observed change in climate (temperature, precipitation, etc.)
This is why I think Mosher is dead wrong about natural variability.
The problem with natural variation is that it is very poorly understood.
##########
and you don’t see the irony in claiming that I am dead wrong about something poorly understood?
“natural variation” is nothing more and nothing less than the observation that we do not understand something.
When the temperature drops after a volcano we do not shrug our shoulders and say ” gosh that dip is nothing unprecedented” we do not shrug our shoulders and say “see! natural variability EXPLAINS THAT”. No. we do neither of those. we explain that the dip has a cause. changes in the constituents of the atmosphere. we build a model. it explains, more or less, the dip we see. Thus the dip which looked like “natural variability” is explained by an appeal to forcings.
When we look at the record we see ups and downs. knowing nothing about what causes these we call it “natural variation”
That OBSERVATION explains nothing. It merely points to the fact of observation: “this thing goes up and down.” That is where the science BEGINS. it is never where the science ends. It is not an explanation. It is the thing to be explained. To be sure, after we start our accounting of forcings there will always and forever be a residual.
Mosher:
Not sure I agree with that. If, for example, the best we can do with sub-atomic particle behavior is a probabilistic model, is that an admission of ignorance or just the way it is? Is there always a deterministic element at the root of everything? Always a mathematical pony underneath every messy pile of apparent randomness?
Maybe if we stopped saying “forcings” and instead say something like “natural suggestions” we would be less psychologically predisposed to incurring disappointment in our lack of certainty.
Mosher:
Gang up on Steven time….I am sure that I don’t agree with this.
That’s not even close to the definition of “natural variation”.
If the solar forcing increases, is measured, and its effect on global mean temperature accurately modeled, that would be an example of a natural variation that is both measurable as an instance of natural variation, as well as something that is understood.
Natural variation is simply not a rubric for “things we don’t understand.”
Steven/Carrick
I think we need three separate concepts:
1) Natural variability. The opposite is anthropogenic.
2) Unforced variability. The opposite is forced.
3) Unexplained variability. The opposite is variability for which we have an explanation.
Variations arising from volcanic eruptions are natural and forced.
Variations arising from man-made GHGs are anthropogenic and forced.
Variations arising from El Nino are natural and unforced.
Whether or not something is ‘explained’ may depend on context.
Variations arising from ‘cosmic rays/ leprechauns/ gremlins/ rise in post office rates’ are totally unexplained in almost any context. Variations arising from El Nino may be ‘explained’ in a curve fitting context where we use MEI, or they might be ‘unexplained’ if we don’t use MEI. But variations due to an oscillation like ENSO remain natural and unforced even if ‘explained’. Meanwhile variations due to volcanic eruptions remain forced and natural– and may or may not be explained in a particular curve fitting or modeling context.
One difficulty is that no matter how carefully you try to use these words, someone reading will decide your “unforced” must translate to “natural” or “unexplained” or whatever. But they aren’t really the same thing.
Re: lucia (Jun 22 12:19), yup
Re: Carrick (Jun 22 11:50), I think lucia is probably saying better what I mean.
Carrick:
Lucia has outlined a very useful framework to help focus discussion of this issue. I agree with her that ENSO is not a forcing. My guess is it is part of a forcing storage, transmission and feedback mechanism.
I *mostly* agree with you “that natural variation is oscillatory and therefore bounded in how much variability you can get from it. If you wait long enough, you should always be able to pick out a climate signal from AGW forcing.” Some natural variation is random, like impact crap flying in from the Oort Cloud or mega/multi-volcanism.
This is why I think paleo-climatology is so important and why it is frustrating that it appears to be dominated by the contrived Hockey Stick theory.
My geologic hunch is that there are relatively low frequency, high amplitude variations that may be due to explained or unexplained forcings. Also, my hunch is that not all of these paleo-climate forcings are natural as I am rather fond of the work of Pielke Sr. and others.
My guess is that low freq high amplitude unforced natural variations do not occur (unless Tom Vonk can explain how they might, ha ha).
“Variations arising from volcanic eruptions are natural and forced.
Variations arising form man made GHG’s are anthropognic and forced.
Variations arising from El Nino are natural and unforced.”
Very clear.
A pendulum is swinging. We can record the extent to which the bob of the pendulum deviates from its mean position. The swings are unforced [something set it swinging some time ago but that is irrelevant – currently it just swings].
Something jolts the pendulum. This is a forcing.
A person shortens or lengthens the cord of the pendulum. This is an anthropogenic forcing.
Re: Howard (Jun 22 13:23),
I think that translates into you don’t believe that the climate exhibits long term persistence. A power density spectrum in the frequency domain would show a maximum at some moderate frequency like 0.01-0.001/year and then decline to zero at zero frequency. The problem with that is things like Dansgaard-Oeschger and Heinrich events. The frequency of D-O events is ~0.0007/year and ~0.0001 for Heinrich events. Then there’s glacial/interglacial transitions at ~0.00001/year. We can no more predict these events than we can predict ENSO or the AMO. They appear periodic until they’re not. Their magnitude is only loosely connected to the proposed forcing. That’s exactly what you would expect for a system exhibiting long term persistence where the power density spectrum continues to increase as the frequency decreases (1/f^x, where x is greater than zero).
Re: Nyq Only (Jun 22 14:04),
A double (or more) pendulum, rather than a single pendulum, with a large angle of oscillation. In other words, chaotic.
Lucia, thanks…. that was a very nice exposition.
DeWitt:
Problem I have with these arguments is we don’t have enough trustworthy data to make any real inferences over these time scales.
Re: Carrick (Jun 22 14:57),
I think we have enough data to reject the hypothesis that noise power density declines at frequencies less than 0.03/year (thirty years is climate).
Re: Howard (Jun 22 13:23),
“Lucia has outlined a very useful framework to help focus discussion of this issue. I agree with her that ENSO is not a forcing. My guess is it is part of a forcing storage, transmission and feedback mechanism”
sounds better
ha!
submitted
ftp://cran.r-project.org/incoming
Now I just have to wait to see if it passes their tests. gulp.
DeWitt:
Agreed. I’d set the limit at around 100-year periods. Fluctuations on the scale of 1000-year periods are definitely pushing the limits of our understanding.
Let me just go on record as saying I suspect the power-spectral density plateau is probably somewhere beyond 1000-years. My suspicion is for internal, unforced fluctuations the limit isn’t much beyond that, though.
[Internal versus external could be added to Lucia’s breakdown, as would be feedback versus forcing.]
DeWitt Payne (Comment #77756),
Yes, that is right. There is a lot of evidence that climate varies significantly on all time scales. During the Holocene there appears to have been pretty much continuous change (at least if you believe ice core and other corroborating data) over a range of at least +/- 0.5C. Which is not to say these natural variations are uncaused… there probably exist reasonable explanations, even if we don’t know what they are. But they are in any case ‘natural’ variations.
Carrick–Is solar external, either because of orbital mechanics or variations in the sun’s intensity? I’d put those in forced and natural. They are external. I’d call volcanism external in the sense that I think volcanic eruptions are probably not triggered by changes in the thermohaline circulation, surface temperature, ENSO etc. They are internal if someone calls everything on earth internal.
Lucia, I’d definitely consider volcanic external, unless we think climate change affects volcanism (nobody can be that crazy, right???)
Carrick– Tom Chalco http://rankexploits.com/musings/2008/odd-theories-for-or-about-global-warming/
DeWitt:
Thanks for the great points and links.
My gut instinct agrees with Carrick putting a limit of 1K-years on persistence of unforced variation within an interglacial period.
However, during glacial periods, 10K-years for a persistent unforced variation seems plausible because the climate system seems so unbalanced. Just imagine those massive ice sheets: when a large hunk of ice gets cut loose, it would take quite some time to work its way all the way back through the system.
lucia,
I ran across this paper on fractional differencing to remove long term persistence before OLS analysis of linear trends. Or at least I think that’s what it’s about. My math fu leaves a lot to be desired. Unfortunately, in the Q&A after the paper, the author admits not having looked at periodic trends. I suspect her time series are a lot longer too.
Steven:
I’m glad you approve 😉 However, even unforced variation could be mistaken for or masking anthro-forcings. IMO, this is a very big deal when considering estimates of climate sensitivity.
At this point, we don’t know if the MWP and LIA were forced or unforced, internal or external. I can’t see how climate sensitivity can be reasonably estimated without knowing what caused these significant climatic events.
There is no long-term trend, up or down, in the ENSO. It is flat as far back as we can go.
But if you take any 10 year period, it can have a positive or a negative trend.
Given it seems to have about a +0.3C to a -0.2C influence (there have been no Super La Ninas), it will leave a small multi-decadal signal in the temperature trend, particularly if the end-point is a major La Nina or a major El Nino.
And it directly affects every major climate index there is – all of them – (it even influences the annual GHG forcing change if that makes one feel better and even the speed of the rotation of the Earth).
Now, assume it is a quasi-periodic ocean cycle (which it is). How many more are there in all those vast and deep oceans? How many of those have multi-decadal signals rather than 18-month signals like the ENSO? How many have multi-century oscillations?
The ocean doesn’t just sit there. It is constantly moving around and sinking and rising. Even a few warm decades in the Arctic could resurface as slightly warmer than normal ocean water in the Pacific 800 years later.
If we could track every cubic metre of the ocean, we might have a better understanding of all these potential cycles. But we never will.
Re: Howard (Jun 22 19:00),
Well, you mention the LIA and MWP. Those events do not constrain the sensitivity estimate very well. That is why I’ll claim that they are a distraction and that Mann’s work isn’t scientifically interesting.
Hansen’s work on Paleo
http://www.youtube.com/watch?v=5EV3zKjwC9Y&feature=player_embedded
Is another matter since he considers much longer periods
That’ll start a fight.
Oh, the package got accepted. It’s on CRAN… Mac binaries will have to wait but you can always install from source.
RghcnV3
steven mosher (Comment #77772)
June 22nd, 2011 at 10:27 pm
It’s not that it’s not interesting, it’s just that it’s only a small part of a big package. Various people seem to have vastly overestimated its importance in the case for AGW, and the ability to disprove AGW by ‘breaking’ it. (Continued research just reinforces the basic claims of the ‘hockey stick’, btw.)
DeWitt Payne (Comment #77751) June 22nd, 2011 at 2:31 pm
“A double (or more) pendulum, rather than a single pendulum, with a large angle of oscillation. In other words, chaotic.”
Yes, much better. That also encompasses the weather/climate distinction, with the position or current trajectory of the bob being “weather” and the area swept out by the bob over time being climate. The analogy overstates the extent to which we could identify anthropogenic forcing (where that was analogous to changing the length of a cord), but still.
Steve Mosher,
Thanks for the link to Hansen’s presentation. I couldn’t listen to the whole thing (he is too tedious in his presentation), but got through about 75%. It is interesting that he recognizes the problem Argo data poses for climate models, and even “tested” three different ocean response curves (the actual GISS Model E plus two faster responses). All duplicate the temperature history equally well with suitable proportional adjustment in man-made aerosol forcing. Only the fastest ocean response matches the Argo heat accumulation data. Of course, he also said he thinks Argo is wrong, which kind of shows his mindset. Argo will continue to partially constrain climate models. Now if there were only some decent aerosol data to fully constrain them… too bad, but that satellite (Glory) didn’t reach orbit.
@steven mosher (Comment #77772)
If we knew what _caused_ the MWP and LIA (that’s what Howard is talking about) that would help attribution in the MWP, wouldn’t it?
One might like Hansen’s presentation if you haven’t gone through all the data and recognized how much of it is distorted in the presentation.
I note the part where he has the deep ocean temperatures today (or in the ice ages) at about -5C. The chart is also cut off, so one might not know that the deep ocean temperatures in the Cretaceous were +20C on the same scale. Or that the albedo effect of all the glaciers, sea ice and desert in the ice ages was only -3.5 W/m2 (while other studies put it at -24 W/m2 in just the northern hemisphere). I could keep going.
Cool! Congrats.
Bill Illis,
I think Hansen has always been extremely careful to tilt/selectively use data, and to discount conflicting data/publications, so that very high climate sensitivity remains always plausible. It seems maintaining that plausibility is his top priority. He will probably be long gone before he is proven wrong.
@steven mosher (Comment #77772)
What I meant to say was that if we knew the causes of the MWP and LIA it would help us attribute the modern warming to natural vs anthropogenic causes.
Re: Niels A Nielsen (Jun 23 06:15),
LIA is pretty well understood.
1. changes in TSI
2. Increased volcanic activity.
3. Knock-on effects to ozone/circulation patterns in the NH, especially Europe.
MWP.. not so much
Steven Mosher, not that I regard Wikipedia as an authority, but here it is on causes of LIA:
.
“Scientists have tentatively identified these possible causes of the Little Ice Age: orbital cycles [sic ??], decreased solar activity [only the possible effect of missing sun spots is considered – no mention of decreased TSI], increased volcanic activity, altered ocean current flows, the inherent variability of global climate, and reforestation following decreases in the human population.”
.
Is the LIA “pretty well understood” or are “possible causes only tentatively identified”? The Wiki article is admittedly unconvincing but what makes you think our understanding of the LIA has moved beyond “possible causes tentatively identified”? According to Leif Svalgaard and his work on TSI reconstructions the Wikipedia article is right to neglect TSI variation as a cause of the LIA.
@ Lucia “Must be that dying sun.”
Not at all!! CO2 trumps the sun every time don’t you know? The sun could be quite dead but we will die a fiery death due to our evil cars, planes, cows and hot chilli spices.
It’s just that there are limits on fudging, even by NOAA.
Richard–
I don’t think NOAA is fudging. I thought I was making a joke about monthly variability during a month when we are all discussing the recent story about the possibility of a repeat of the Maunder minimum.