I promised I would check the status of the comparison between observed temperature trends and the IPCC AR4 projection of 2C/century each month and report the results. The NOAA data came in last week, and I had a chance to update my spreadsheet with all data series.
The main result is: despite an uptick in the temperature, the IPCC projection of 2C/century still falsifies at p=95% (α=5%).
This result is illustrated in the figure below. The mean trend for the temperature, calculated based on the average of all five data sets, is shown in orange. The uncertainty intervals are illustrated with fuzzy orange lines. The IPCC short term projection is shown in brown and lies outside the uncertainty intervals for the trends that are consistent with the data. Additional details are discussed later in this post.
The rest of this post shows:
- A table summarizing results of tests using individual instruments.
- A very brief discussion of beta error. The “beta” or “type 2” error is important when interpreting the meaning of “failed to falsify” results.
- Links to other bloggers discussing March temperatures.
Table of Individual Results
This month, the IPCC 2C/century projection was falsified using the averaged data and by all main data reporting services except GISS. So, GISS failed to falsify. The results are summarized below. I have also added a column quantifying the “beta” (β) or “type 2” error that would be associated with any fail-to-falsify result based on the average of all 5 instrument sets. (The beta is higher for individual instruments.)
| Data series | Best Fit Trend <m> (C/century) | Reject 2.0 C/century to confidence of p=95% (α=5%)? | β relative to 0 C/century (denialists' hypothesis) | β relative to 1.5 C/century (TAR hypothesis) |
|---|---|---|---|---|
| Average all, fit T vs time | -0.7 ± 2.0 | IPCC Projection Rejected | 49% | 95% |
| Average all, fit T vs (time, MEI) (see Note 1) | 0.1 ± 1.7 | IPCC Projection Rejected | | |
| Fit T vs time, then average | -0.7 ± 2.1 | See Note 2 | | |
| Individual Instruments | | | | |
| GISS | 0.2 ± 2.1 | Fail to reject | | |
| HadCrut | -1.3 ± 1.8 | IPCC Projection Rejected | | |
| NOAA | 0.0 ± 1.6 | IPCC Projection Rejected | | |
| RSS | -1.5 ± 2.2 | IPCC Projection Rejected | | |
| UAH | -0.9 ± 2.8 | IPCC Projection Rejected | | |

Note 1: There are known problems associated with using MEI in any correlation including lagged variables, because the MEI includes time. However, this is included for now because people are interested in an estimate of the effect. I’m looking for better ways to do this, but have been swamped with real work over the past two weeks.

Note 2: Calculating the trend for the 5 individual instruments and averaging afterwards is shown for illustrative purposes. However, as I noted in an earlier post, this method is poor: the uncertainty intervals include only the variation due to measurement uncertainty, which is expected to be uncorrelated between instruments, but the method also treats weather variability as uncorrelated.

Note 3: “Beta” (β or “type 2”) error is the probability that a test will “fail to falsify” even though the null hypothesis is false. If we treat type 1 and type 2 error similarly, “failed to falsify” means “confirmed” only when β is less than or equal to the chosen α, which is 5% for our tests.
Graphs of β error relative to alternatives.
What does the “Beta” (β) mean?
The “Beta” (β) error describes the probability that the result of a hypothesis test would be “failed to falsify the null hypothesis” when the null hypothesis is false.
Given the formalism of statistics, this type of error is very frequent when data contain noise or random components of any type and limited quantities of data are available. In fact, when one has not yet taken any data, the β error equals 100%.
To a large extent, the idea that it takes years of data to formally test hypotheses about climate variables arises from the fact that many hypotheses in climate are proven using “failed to falsify” as a criterion. “Failed to falsify” only means “confirm” when β error is low, and it turns out that getting empirical confirmation of AGW in the first place took years. I discussed this generally here.
For now: Suffice it to note that the amount of data for my current test of 2C/century is such that:
- If the out-and-out denialists are correct and warming is really 0C/century, we would expect our test of 2C/century to result in “failed to falsify” 49% of the time. (That is to say, we’d make the mistake of “failing to falsify” nearly 50% of the time, while incorrectly falsifying only 5% of the time.)
- If the IPCC TAR projection of 1.5 C/century is correct, we would expect the 2C/century result to “fail to falsify” 95% of the time. (We really need a lot of data to distinguish between these two values.)
Those who doubt the previous falsifications are now going to think “Hmmm… she wants to have her cake and eat it too!” After all, I’m telling you fail to falsify means practically nothing, but falsification does mean something!
Well, … this is just the way it is.
The reason it is this way is that we initially gave 2C/century preferred status. We structured the hypothesis test so its result is “fail to falsify” before we have even one iota of data. We also set the bar for “falsify” high: we must show that, if 2C/century were true, the data we got would occur only 1 time in 20 by pure random chance. So, we only say “falsify” when there is strong evidence that 2C/century must be wrong.
If the evidence against 2C/century exists but is “medium” rather than “strong”, we say “fail to falsify”. It is this preference given to 2C/century that results in “fail to falsify” not actually meaning confirm. What “fail to falsify” means is, “There is no strong evidence, and I refuse to believe it’s wrong unless the evidence is strong!”
But, once a falsification is logged, we really can’t just forget the evidence existed (and it materialized rather quickly in this particular case).
What are we likely to see going forward?
Even though 2C/century has been shown likely false, we will probably see some transitions from ‘falsify’ to ‘fail to falsify’ for quite some time. I think this because:
- I think it’s nearly impossible that the climate trend is 0C/century (or below). The only way we’d get a permanent string of “falsify” this soon after decreeing the “start” date of 2001 is if the trend were much lower than 0C/century. So, that won’t happen.
- We got a falsify the first time I ran the test. (I was expecting to get “failed to falsify”, but… well… numbers are numbers.)
- We know we are in the region where β error is high for all underlying trends I think remotely possible. This means we should get “fail to falsify” quite often, even though 2C/century is probably false.
So, how will readers know if the “falsifications” we are currently getting are incorrect? One way is if we see “fail to falsify” with β<5%. The other way is if we get a result, starting with data from 2001, that falsifies 2C/century, but on the high side.
In the meantime, “failing to falsify” means rather little, as we expect that result rather often even if 2C/century is flat out wrong.
Other People Discussing March Data
Since this is just the regularly scheduled comparison, I thought I’d bring readers links to what other people are saying about the recent temperature fluctuations:
- David Stockwell compares Northern and Southern Hemisphere temperatures.
- Anthony Watts discusses a German who claims HadCrut data is converging to GISS data, but Anthony finds that claim suspect.
- Chris Colose notes GISS temps are up in March.
- Roger Pielke Sr. discusses some reasons why Northern and Southern Hemisphere temperatures may diverge.
- Craig James compares this March to previous Marches on record.
- Hall of Record talks about recent “coolth”.
- Redstate.com discusses why 10 years is not long enough to be a trend.
- Yid with a Lid also discusses the temperatures.
As a closing note, I observe that most of the bloggers talking about the recent temperatures are on the AGW-skeptic end. For the most part, I found these links on Technorati, so presumably I would have found AGW-believer blogs discussing the temperatures if they were.
I’ll try to remember to repeat this in a few months when El Nino returns just to see the results. 🙂
Lucia,
Does your analysis presume that the datasets are independent? For example, all of the surface measurements use some of the same thermometers and the satellite measurements use the same satellite data.
Raven,
That depends. 🙂
Look down 3 rows, and find the analysis with the note “See note 2.” That analysis involves assuming the 5 data sets are independent, and that applies to both the weather noise and the measurement uncertainty. It’s done by first fitting Temp vs. Time for each set and calculating the slope, and then averaging. Then, I calculate the uncertainty based on the standard deviation around that mean slope, divide by the square root of (N-1), and apply the t-test for 4 degrees of freedom.
In that analysis, the assumption that the uncertainties are independent is clearly false, and that’s why I don’t show any conclusion based on it. However, I post the answer just so people can see. (It also lets me explain that that is not what I do for my main method, should they ask. 🙂 )
In contrast, the “main” method, which is listed in the top result row, doesn’t really assume anything about dependence or independence. The calculation is done in this order:
The average temperature for the month is the simple average of the five sets. This calculation involves no computation of uncertainty, because a) we don’t know the uncertainty for any individual temperature and b) we don’t need that information at this point.
Had we needed it, then I would have had to make an assumption about independence. But I don’t need to make any assumption, so I don’t.
After averaging, I apply the linear regression to that set. The uncertainty is whatever it is. (If they were truly independent, we would have certain expectations about how that dropped.)
Then, I compute the uncertainty in the fit according to the standard method for uncertainties in fits. That also makes no assumption about dependence or independence.
So, the only case that makes an assumption is listed in row three. That’s noted in “note 2”, and I don’t use it!
The place dependence or independence does come in is indirect. If the uncertainties due to measurement in the five sets were independent, we’d see the uncertainty obtained by averaging be lower than those obtained with each instrument. In contrast, if they were all totally dependent with each other, we’d see no reduction by averaging.
But whatever happened, it would fall out naturally– making no assumption.
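If it helps to see the order of operations concretely, here is a minimal Python sketch of the main method; the array names are illustrative, not my actual spreadsheet:

```python
# Minimal sketch of the "main" method: average the five series first,
# then fit, then take the slope uncertainty from the fit itself.
# `series` and `months` are illustrative names, not the real data files.
import numpy as np

def slope_and_stderr(t, y):
    """OLS slope and its standard error. No independence assumption is
    needed: whatever correlation exists between the instruments is
    already baked into the averaged series."""
    X = np.column_stack([np.ones_like(t, dtype=float), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = np.sum(resid**2) / (len(t) - 2)           # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean())**2))   # stderr of the slope
    return beta[1], se

# series: a (5, n_months) array of the five instruments' anomalies
# avg = series.mean(axis=0)                  # step 1: simple average
# slope, se = slope_and_stderr(months, avg)  # steps 2-3: fit, then stderr
```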
I hope this answers without being too confusing.
Every so often, the IPCC issues a graph showing the doom that will come upon us if we do not repent of our sinful ways, but those graphs all seem to have vanished. It would be interesting to dig out one of these graphs issued around 2001 and plot the recent satellite data on top of it. Can anyone point me to one such graph of doom, or recollect a website that carried them back in 2001?
James,
The Third Assessment Report (TAR) was issued in 2001. You can flip through the pages here:
http://www.ipcc.ch/ipccreports/tar/vol4/english/080.htm
Political blogs should not write about global warming. Yikes.
I agree Boris, they shouldn’t but they do, myself included.
This would include all sides of the political spectrum.
Lucia, I got a little bit lost trying to understand beta. Is there a formula you used to run the test? I think I can get it if I have something to plug numbers into and then chug away. (That’s how I learned calculus… plug and chug.)
Terry,
There isn’t a formula so much as a procedure. I describe it here:
http://rankexploits.com/musings/2008/ipcc-projections-continue-to-falsify/#comments
What you need to do is:
Figure out how to do the hypothesis test for 2C/century.
In the process, figure out the values of the slope that would “falsify” 2C/century. For the period of time since 2001, these happen to be a bit below 0 on the low side and a bit above 4 C/century on the high side for our current tests. The calculation of these values uses the standard error for the slope, which is calculated based on the magnitude of the “weather noise” we have actually experienced. Let’s call these “low bound” = 0C/century and “high bound” = 4C/century. (I’m rounding to make the explanation easier to type– but the real values aren’t exactly 0 or 4.)
So, you know you would falsify 2C/century if the trend m < mlow = 0C/century or if m > mhigh = 4C/century.
So, given you will say falsify for these cases, here is how you figure out the likelihood you would falsify if the trend were really 0C/century.
1) For the purpose of calculation, assume the trend is 0C/century.
2) Assume that the weather variability is still what we have experienced. So, you can just take the standard errors on the slope you got from the previous calculation.
3) Recall you will falsify 2C/century if m < mlow = 0C/century or if m > mhigh = 4C/century.
4) Calculate two things, assuming the true mean is 0C/century.
a) Probability you will falsify because m is less than the lower bound for 2C/century.
This is the probability that the weather would exhibit a slope less than mlow (0C/century). You use the Gaussian distribution for this. (It’s programmed in Excel.) Note that in this example, 0C/century happens to equal the assumed true mean, so if the true mean were 0C/century, we would determine there is a 50% chance that the weather would have a trend with a slope less than 0C/century. (The other half of the time, the slope would be greater than 0C/century. 🙂 )
This image may help (Though it’s backwards for the current discussion):
b) Probability you will falsify because m is greater than the upper bound for 2C/century. This is the probability the weather would exhibit a slope greater than mhigh (4C/century). (This ends up being really, really small for our case.)
Add the probabilities from (a) and (b). You’ll get something just above 50%. That’s the total probability you will falsify 2 C/century when the real value is 0 C/century. This is called “The Power”.
So, the probability you fail to falsify is 1-Power= β. This is “beta” error contingent on m=0C/century, because in this hypothetical, we assumed m is really 0C/century.
Obviously, this can all be repeated for any alternate hypothesis. In the table, I used 0C/century and 1.5C/century, because those seemed like reasonable ones. The first value is the one the denialists insist on. The second one would be the TAR; so that would interest us if we wanted to figure out if the AR4 values are any better than the TAR values.
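For the plug-and-chug minded, here is a minimal sketch of the whole procedure. The bounds are the rounded illustrative values from above (mlow=0, mhigh=4 C/century), and the standard error of 1.0 C/century is an assumption for illustration, not the actual spreadsheet value:

```python
# Sketch of the beta (type 2) error calculation described above.
from scipy.stats import norm

def beta_error(true_trend, m_low, m_high, se_slope):
    """beta = probability of 'fail to falsify' when the true trend is
    true_trend; power = 1 - beta."""
    p_low = norm.cdf(m_low, loc=true_trend, scale=se_slope)   # case (a)
    p_high = norm.sf(m_high, loc=true_trend, scale=se_slope)  # case (b)
    power = p_low + p_high     # total probability of falsifying
    return 1.0 - power

# If the true trend were 0 C/century (illustrative se_slope = 1.0):
print(beta_error(0.0, m_low=0.0, m_high=4.0, se_slope=1.0))
# p_low is 50% and p_high is tiny, so the power is just above 50% and
# beta is just under 50% -- consistent with the ~49% in the table.
```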
lucia,
I put together my own spreadsheet of temperature trends, and I’m happy to say that our results agree (within 0.1K/century) on all trends. The small differences are probably due to different values of rho in the Cochrane-Orcutt regression. It’s pretty sensitive.
The C-O trends are still more negative than the OLS trends, but the gap is closing. I looked at different time periods and the C-O trends are at times more positive than the OLS trends. That gives me confidence that C-O is not inherently biased towards negative trends. (I couldn’t see how it would be, but it never hurts to check). In fact, C-O seems like a sensible technique IMHO.
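For anyone who wants to check the sensitivity to rho themselves, here’s a rough sketch of the iterated Cochrane-Orcutt procedure as I understand it from the textbook definition (assuming AR(1) errors; this is my sketch, not lucia’s actual implementation):

```python
import numpy as np

def cochrane_orcutt(t, y, n_iter=10):
    """Iterated Cochrane-Orcutt: OLS fit, estimate the lag-1
    autocorrelation rho of the residuals, quasi-difference, refit."""
    X = np.column_stack([np.ones_like(t, dtype=float), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rho = 0.0
    for _ in range(n_iter):
        resid = y - X @ beta
        rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
        # quasi-difference; the constant column becomes (1 - rho), so
        # the fitted coefficients stay on the original scale
        y_star = y[1:] - rho * y[:-1]
        X_star = X[1:] - rho * X[:-1]
        beta, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta[1], rho  # slope and the final rho estimate

# e.g. slope, rho = cochrane_orcutt(np.arange(len(temps), dtype=float), temps)
```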
Basically, I agree that the 7-year trend from Jan2001 to Mar2008 is not consistent with a century-long average trend of 2.0K/century.
So, why is the current short-term trend different than the longer-term trend? Why is it basically flat since 2001? You have discussed ENSO, and your correlation with MEI makes sense to me. As you’ve shown, it’s not enough to explain the trend.
You have hypothesized that a PDO switch in 2001 would explain the change. In comments, you have also expressed some interest in the solar cycle vs temperature correlations from WattsUpWithThat.com. I’m surprised you haven’t yet looked at the influence of the current solar cycle on the short-term temperature trend.
The correlation between solar cycle and temperature is not well defined. At the low end, Tamino has shown that the correlation is weak if it exists at all. At the high end, Camp and Tung 2007 show an effect of ~0.16K between solar min and solar max. In the middle, other empirical analyses have found ~0.1K and model simulations have predicted ~0.06K.
In Jan2001 we were near solar maximum. In early 2008 we are at solar minimum. If we assume a short lag time between solar cycle and temperature (consistent with the 2 month lag between MEI and temperature), then we can expect the temperature signal to be near full amplitude.
Using rough numbers, the trend from the solar cycle over the last 7 years is between 0.06K/7years (0.9K/century) and 0.16K/7years (2.4K/century). It could actually be 50% higher or it could be zero using the published 95% confidence limits.
Adding the range of solar cycle trends to your MEI-adjusted trend:
MEI-Adjusted + Small Solar Trend: 1.0 ± 1.7 K/century
MEI-Adjusted + Large Solar Trend: 2.4 ± 1.7 K/century
These trends definitely fail to falsify 2.0K/century. The same is true of the average trends that are not MEI-adjusted. Have I missed something obvious?
Hi JohnV,
I think it’s worth while for everyone to look at various possible reasons. But, I’m not going to look at solar partly because others are. Either Anthony and Basil will find a smoking gun there, or they won’t. I’m currently looking try to find how much ‘energy’ people believe exists in other long cycles– like the PDO and the AMO.
In those cases, when I find literature, I’ll try to do a back of the envelope calculation to see if those explain the slow down, and also, to see what that same explanation says about the initial empirical support for warming during the 1970-now run up.
I’m also trying to learn better statistics to deal with ENSO. (Because the method I’m using isn’t quite up to snuff.)
ok, thanks Lucia, I think I have it now.
I’m disappointed that you’re not going to look at the solar cycle. Next to the seasons and ENSO it’s the clearest cycle in the climate system. Its timing from max to min matches your 7-year trend very well. Even without correcting for ENSO, the solar cycle amplitude need be only ~0.02K to de-falsify (is that a word?) the “2C/century” trend. With your ENSO compensation, a solar cycle amplitude of ~0.11K would be sufficient to falsify the “no-warming” hypothesis.
From my back-of-envelope calculations, it is clear that even a modest solar cycle effect on temperature can make a substantial difference to a 7-year trend. It would also have very little effect on the 38-year trend preceding 2008 (the “1970-now run up”).
None of this necessarily means that PDO and AMO are not factors — but surely an explanation as simple (and seemingly complete) as the solar cycle is worthy of investigation.
John,
Are you saying that if I account for the decline in temperature expected due to a drop in the solar intensity from 2001 to now, then we would see the AGW signal?
lucia,
If I understand your question, then my answer is yes. Allow me to clarify that I’m answering the right question:
The trend from Jan2001 to Mar2008 can be written as:
(1)
T = A + E + S + O + W
where
A is the AGW trend,
E is the ENSO trend,
S is the solar-cycle trend,
O is the trend from other sources (AMO, PDO, etc)
W is weather noise
The IPCC trend is basically just A. You have shown that E is fairly small and have attempted to correct for it. Your ENSO-corrected trend can be written as:
(2)
Te = A + S + O + W
W is the remaining error bars, which you have estimated as plus or minus 1.7K/century.
That leaves S and O (plus the W noise). I can’t say anything intelligent about O, but S has been estimated. If the solar cycle temperature amplitude is between 0.06K and 0.16K (references that I found from a quick search), then S is between -0.9K/century and -2.3K/century for the last 7 years. To keep it simple, I’ll define S = -1.5K/century (a little less than the average of the range).
Re-writing (2) to solve for A:
(3)
A = Te – S – O – W
Substituting your computed trend of 0.1K/century:
(4)
A = 0.1 + 1.5 – O – W
Neglecting O and expanding W as error bars at plus or minus 1.7K/century:
(5)
-0.1 K/century < A < 3.3K/century
or,
(6)
A = 1.6 K/century plus or minus 1.7K/century
This is very close to the IPCC trend of 2.0K/century. The results are similar using the temperature trends without ENSO correction.
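To make the arithmetic transparent, here is the same calculation as a trivial script (numbers as quoted above; O neglected):

```python
# Arithmetic of equations (3)-(6), all values in K/century.
Te = 0.1    # ENSO-corrected observed trend
S = -1.5    # assumed solar-cycle contribution over this 7-year window
W = 1.7     # half-width of the error bars (weather noise)

A = Te - S  # underlying AGW trend, neglecting O
print(f"A = {A:.1f} +/- {W:.1f} K/century")            # 1.6 +/- 1.7
print(f"range: {A - W:.1f} to {A + W:.1f} K/century")  # -0.1 to 3.3
```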
JohnV–
Hmmm…Do you have references on the magnitude of those effects?
I’ll look a bit through the SRES, but the normal 11 year solar cycle, being entirely predictable, should in principle already be included in the IPCC projections and incorporated into the mean trend. (Unlike ENSO, PDO, and AMO, it shouldn’t just be something buried in the error bars.) But I’ll need to read a bit on the forcings used in the IPCC document.
So, basically, if the 2C/century was projected with full accounting for the 11 year solar cycle at the strength the modelers think it has, a correction to give credit for that would be inappropriate.
With respect to the IPCC ‘falsification’ it matters who says that effect exists. Because, if I understand those at Real Climate, NASA etc. at this point, the solar cycle is supposed to be buried in the noise. So, in a sense, if their projection is only ‘saved’ by the impact of the solar cycle, then their models are…well… wrong!
Still, if you can say who suggests those numbers, it might still be interesting to examine.
I am no statistician, but I will comment that I think the window you are taking data from is way too small to make the claims you are making, lucia. At least include a full solar cycle, preferably two. You aren’t capturing the full amount of noise in the system when you only sample from solar maximum to solar minimum, and so your error bars on your trend line are not nearly wide enough, IMO.
I’d have a look at what the variance is in the temperature record going back to the introduction of satellite records, I think you’ll find that 7 year trends vary a lot more than you are estimating.
The IPCC never issued a forecast for what the trend would be 7 years out, IINM, for good reason.
The 11-year solar cycle averages out so does not need to be considered in a multi-decade trend. That is, it has little effect on the *mean* trend just as ENSO has little effect on the *mean* trend. It only affects the trend over short time scales. I doubt there is any discussion of the solar cycle in SRES.
As for the IPCC error bars, I don’t believe you are using them. Instead, you have been using the IPCC mean trend and deriving the error bars from observations. That’s a valid choice but means that the size of the IPCC error bars is irrelevant to your analysis.
Camp&Tung (2007) is the most recent study that I know of (link below). They found 0.16K (0.06K to 0.26K) for the solar cycle. At the low end, they reference Stevens & North (1996) which finds a model-predicted cycle of 0.06K. (My results for the underlying AGW trends for 0.06K and 0.16K are in comment #2046).
http://www.amath.washington.edu/~cdcamp/Pub/Camp_Tung_GRL_2007b.pdf
Using NASA’s numbers from one of your earlier posts (link below), the solar cycle has a peak-peak amplitude of ~0.3W/m2. Coincidentally, the GHG forcing is increasing at roughly 0.3W/m2/decade or ~0.2W/m2 in 7 years. The net change in forcing from GHG and solar effects over the last 7 years is thus about -0.1W/m2.
http://rankexploits.com/musings/2008/what-does-nasa-mean-by-solar-variations-dont-matter/
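As a back-of-envelope check of that net-forcing estimate (rounded numbers as quoted above):

```python
# Rough forcing comparison: ~0.3 W/m2 solar peak-to-peak (max to min),
# ~0.3 W/m2/decade GHG increase, over the 7 years from 2001 to 2008.
solar_change = -0.3        # W/m2, solar max (2001) to solar min (2008)
ghg_change = 0.3 * 0.7     # W/m2, 0.3 W/m2/decade over 0.7 decade
print(f"net: {solar_change + ghg_change:+.2f} W/m2")  # about -0.1 W/m2
```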
Ironically, those who are most convinced that the IPCC is wrong tend to argue for a strong solar cycle effect on temperature (and vice-versa). In this case, the two arguments negate each other. If the solar effect is weak, the IPCC trend is falsified. If the solar effect is strong, the IPCC trend for AGW is validated. Nobody can have it both ways.
JohnV–
In the first post, I showed the IPCC error bars for the mean trend. The IPCC doesn’t communicate these in words, or even very well. So, unfortunately, it’s a bit difficult to talk about them. Here’s the graph:
There are other graphs for other SRES, but they are all similar in the short term projection.
What I found then is, given the data at that point, the climate trends predicted were not consistent with the trends in the data. The range of trends consistent with the data is based on the data. The climate trends the IPCC projects were those discussed in the report.
But one must always compare trends to trends, and that’s what I do.
But yes, at this point, in my table, I’m focusing on talking about the central tendency for the trend, and seeing if that is consistent with the data. The central tendency is important in and of itself. From time to time, I’ll show the full uncertainty intervals, but admittedly not in the table.
With respect to the solar cycle: When I said I needed to read, that’s what I meant. 🙂 Either they included those when running the “simple tuned models” used to project or they didn’t. I’m reading table 10.1 on page 756 of the WG1 part of the AR4, and solar is listed as “C”. Reading sideways, they seem to let it vary in AOGCMs but treat it as a constant or annually cyclic in scenario integrations. This would mean you are correct and they don’t include the 11 year cycle in scenario integrations– which I think is what they ultimately use for their projections. I think they do this because they consider the variation over the 11 year cycle weak. So…
Yes, I agree that it is ironic that those who disagree with the IPCC modeling efforts suggest the strong effect, while those who think the IPCC models are great think the solar effect is weak. (In fact, evidently sufficiently weak as to neglect it when projecting trends!? If we are both reading correctly?)
Still, it seems to me that if the only way to “redeem” the IPCC models is the solar cycle… well… (But I think we agree on this?)
Ben:
If you read the IPCC AR4, it’s not clear what the time scale for their projections is. They never discuss how one would verify or falsify, on what time scales, etc. That’s one of the reasons Roger Pielke Jr. recently advised they should state these things more clearly.
The issue of needing several solar cycles is puzzling since those who make the projections seem to insist the solar cycle doesn’t matter. So, in a sense, if it matters, the basis for their projections is somewhat undermined! (You can see that’s why John V and I are trying to puzzle this out.)
I’d be willing to agree on the need for more data if anyone brought up a relevant phenomenon with a large time scale and sufficient “energy” in terms of temperature variation. But so far, no one has. They just decree things in terms of numbers of years. When analyzing data, there are actually quantifiable things one can state, and for some reason, no one will suggest particular known cycles.
So, in the meanwhile, I’m trying to read the literature to see if estimates of the amount of variability due to the PDO or AMO matter. (If they matter a lot, that has the potential to affect the issue of empirical proofs of AGW in the first place, because these cycles are L-O-N-G. Long enough to encompass the full recent run up! But, based on most of my reading, the estimates are that they are not strong enough to impact that assessment. If so, they aren’t strong enough to affect this one. But, I admit, I’m still not sure– as you can see from my comment above.)
John V– Oops… I didn’t read everything.
First: Thanks for the numbers and papers.
The problem with the amplitude of the forcing from my previous post is it doesn’t tell us the amplitude of the response. One of the difficulties in the theory is the “in the pipeline” issue. Based on that, about 1C/century (roughly) of the projected run up was due to the GHG’s already being too high, and the planet being low relative to equilibrium for that current level of GHG’s. The second +1C/century was sort of due to the increase after 2001.
So, you can’t just say: we expected GHG’s to go up an amount ‘A’ resulting in a temperature rise of ‘dT’. So, if solar goes up ‘B’, but we neglected that, we would expect the temperature to rise an amount (1 + B/A) dT.
So, I’m a bit puzzled as to how to estimate that. But the numbers in the paper you suggest should help me do order of magnitude calculations (after I read them! 🙂 )
Ah yes, I remember that graph now.
Care must be used when stating that the solar cycle “doesn’t matter”. It’s true that it has little effect on long-term trends. It’s true that even an extended solar minimum would make little difference on a multi-decade trend. But it does have the potential to *substantially* impact a 7-year trend that starts at solar max and ends at solar min.
Just for fun, I did OLS and C-O fits going back to the last solar minimum (June 1996). I used the average temp from Atmoz’s data file. The trends with 95% confidence intervals are:
OLS: +1.1K/century (+0.5 to +1.8 K/century)
C-O: +0.9K/century (-0.9 to +2.7 K/century)
I made no attempt to correct for ENSO. I realize that I cherry-picked the starting point so I’m not going to make any claims about these trends.
Although the Jan2001 starting date for the IPCC falsification was justified by publication dates, it starts at a solar maximum and ends at a solar minimum. Is an un-intentional cherry-pick still a cherry-pick? 🙂
=====
Re-reading my previous post, I see that I left the impression that the negative trend in forcing would give a negative trend in temperature. That was a mistake. You are right to say that there is warming “in the pipeline”.
My gut feeling is that the “pipeline” warming is mostly in the oceans. The atmosphere should respond quickly to any forcing. The atmosphere’s response is damped by the slower ocean response.
It was pretty obvious from the beginning that the 7 years coincided with the falling edge of the solar cycle. I am pretty surprised that it took so long for someone like John V. to point that out. It seems like many in the warmer camp are afraid of admitting that the sun does have an observable influence on climate. I suspect it is because they know that a strong cooling effect on the trailing edge also means a strong warming effect on the leading edge.
I used the solar data here: http://solarscience.auditblogs.com/2008/04/22/ken-tapping-the-current-solar-minimum/
And I eyeballed the temperature trends for the falling edge of the last three solar cycles using the graphs here: http://junkscience.com/MSU_Temps/Warming_Look.html
It is clear that there is a roughly 0.2 degC drop for the surface and satellite datasets, and this drop is consistent with the drop observed since 2001. This is also consistent with the conclusions made in the Camp and Tung paper.
However, it seems to me that removing a 0.2 degC oscillation from the temperature dataset over the last 30 years would also result in a trend much lower than what the IPCC claims. For example, eyeballing from the 1982 max to the 2002 max gives a trend around 0.05 degC/decade. Similarly, eyeballing from the 1987 min to the 2008 min gives a trend of 0.05 degC/decade too.
A net trend of 0.05 degC/decade over 20 years would invalidate many of the predictions made by the IPCC.
I also eyeballed the MEI index here: http://www.cdc.noaa.gov/people/klaus.wolter/MEI/
I noticed that every solar min coincides with a La Nina, and the rising edge of the solar cycle always corresponds to an El Nino.
This observation could imply that ENSO and the solar effect are one and the same.
John V says:
“My gut feeling is that the ‘pipeline’ warming is mostly in the oceans. The atmosphere should respond quickly to any forcing. The atmosphere’s response is damped by the slower ocean response.”
I don’t buy it. The oceans may have damped the initial response; however, enough time has passed that we should be seeing some of the warming coming out of the pipe. A more likely explanation is that the CO2 sensitivity estimates are wrong.
Raven,
I’m also surprised nobody else has brought up the solar cycle.
I strongly doubt your conclusions about its effect. Since it is *cyclic* it has no effect on the overall trend (by definition). Eyeballing and subtracting is not the right way to calculate trends. The OLS trends for the dates you picked are:
1982 to 2002: 1.9 degC/century
1987 to 2008: 1.8 degC/century
Raven — you are a conclusion-making machine! 🙂
John V says:
I realize that eyeballing is not the right way to do things, but I don’t think that an OLS trend that includes all of the data is the right way to do it either. If it were, then it should have given something close to the peak-to-peak and min-to-min estimates. I suspect the difference is due to the volcanoes, which pull down the OLS trend in the 80s and 90s. Eyeballing a 1-year average around the max/min removes the volcanic effect even if it lacks precision.
lucia,
Section 9.2.2.1 of IPCC AR4 WG1 discusses the effect of the solar cycle on temperature:
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9.pdf
(I tried to include the entire pertinent paragraph — I apologize in advance if there is missing text that contradicts the excerpt above).
Using IPCC AR4’s value of 0.1C from peak to trough, the expected trend from the solar cycle alone is approximately -1.4C/century (0.1C / 7years). Referring back to comment 2057 the ENSO- and solar-corrected underlying AGW trend using Cochrane-Orcutt is:
A = 1.5 K/century plus or minus 1.7K/century (-0.2K to 3.2K/century)
Note that I am not extrapolating from the solar forcing but instead looking directly at the empirical temperature response. I never should have mentioned the net forcing — it was an unnecessary complication.
The confidence limits are actually wider because the solar effect is not known with much accuracy. Nevertheless, the 7-year trend does seem to be consistent with the IPCC. I think it’s de-falsified.
Actually, James Hansen mentioned it.
Oh, BTW, a stronger solar response would also support a higher sensitivity to CO2–even given the different nature of the forcings.
Boris says:
I mentioned it a number of times to warmers who were complaining about Lucia’s falsification. I had thought that the IPCC did not take the solar cycle into account, so its projections were still falsified even if there was an explanation. Unfortunately, the CYA statements in the IPCC doc that John V found mean that the solar cycle must be accounted for before one can claim falsification.
Boris says:
Right. More have your cake and eat it too logic. If the trailing edge of the solar cycle causes cooling that cancels the CO2 effect then at least half of the warming during the leading edge must be due to the solar effect. Any other outcome implies the solar cycle does not average out over time.
Raven– John’s convinced me to look at it. That said, I don’t know what I’ll finally make of it. I’m going to discuss AMO first.
The things I’d note about Boris:
1) He says Hansen said something, but not what, and did not provide a link.
2) He claims something (though heaven knows what) about the sun mattering would mean sensitivity is higher, but provides no other details.
So, for now, I will take those statements for what they are: not only unsupported, but so vague as to be meaningless. Should Boris be moved to add details, I might read them. But in the meantime, what he says is not worth wasting time thinking about.
Yes. Obviously, if the down trend of the solar cycle explains non-warming, the up-trend could explain heating for about a decade. That’s actually important. The difficulty is quantifying. The other difficulty is that if there is thermal mass– and so a time constant– then there is a phase lag in heating and cooling. So, the cooling and heating would be out of phase with the solar forcing. This is why it’s going to take me a bit of time to look at this and figure out what I think.
In my view, everyone gets to figure things out in their own time, and have their own opinions. So, my not immediately agreeing with John V (or him with me) causes me no distress. In the long run, we each try to puzzle together what we think and why. (If the IPCC would be a bit more quantitative about info in their document– or even RC in their blog, I would be grateful. But alas, that is not the case. So, in the meantime I look at things and post what I think and say why.)
lucia, I’m glad I’ve convinced you to look at it.
I’m only writing comments in blogs, so I have not looked at it from every angle yet. My question from my original comment in this thread is still important — have I missed anything obvious?
Rather than going at it from first principles, in my opinion you should start with observations of the solar cycle temperature signal. Doing so avoids all the complications of modelling time lags, etc. I look forward to your analysis.
JohnV–
I’ll try to see if there are any observations of the temperature signal. The difficulty is… are there any? I’m not sure there are, as they may be buried under volcanoes etc. But, I’ll look at it.
Well, the solar forcing is known in terms of W/m2, so the difference in the analyses of how the known variation in the solar cycle corresponds to temperatures will be the difference in strength of feedbacks. Feedbacks apply no matter what the forcing is–though there can be some differences (I’d imagine ice albedo feedback is faster with solar than with GHGs–but I don’t have a source for that, sorry.) If you want strong feedbacks for solar, you have to give strong feedbacks to CO2 as well. You cannot have your cake–oh, Raven already said that.
lucia,
Camp & Tung (2007) derive their estimate of 0.16K for the solar cycle from observations only. I have only skimmed the paper and it’s fairly new so I can’t vouch for how well it will stand up to scrutiny. I believe the references in the IPCC AR4 quote above are also based on observations.
I don’t have time to look into it because it’s Hockey Night in Canada!
Hi Lucia,
de-lurking for a moment…
Like your blog, but I’m curious to know why global average temperature is calculated as the mean.
It seems to me that in terms of heat content at least, median would be more physically meaningful. Any chance you could do a post on this – the pros and cons of median vs. mean?
Thanks in advance.
back to lurking… 😉
Lucia, you’d better watch how you choose your end points and scale factors because Tamino the Cherry Picker will haul you off for perjury:
http://www.climateaudit.org/phpBB3/viewtopic.php?f=3&t=253
I saw that title. I thought “Accusations of Perjury? On a blog with the self proclaimed name of ‘Open Mind’?”
Wouldn’t someone with an open mind be more likely to consider other possibilities, like a) incompetence, b) mistakes, c) a different point of view, or d) other?
Anyway, what’s with all the legal theories flying around the web? Some blogger accused Krysten Byrnes of libel against Hansen? Does that blogger not even watch movies to learn the definition of libel in the US??!
JohnV, perhaps a note to Gavin would help. In the “projections,” do they model a change in solar forcing, i.e., the 11 year cycle?
FWIW, you can get ModelE results for 1880 to present for the response of GMST to solar forcing alone.
Steve- Do you have a link to those Model E results?
steven mosher, in the context of multi-decade trends I don’t think it matters if the IPCC models include or exclude the solar cycle. The solar cycle is important for this analysis only because it is short *and* runs from a solar max to a solar min. It’s the worst case for determining underlying trends in the presence of a cyclic forcing.
JohnV–
I concur about the multidecadal. Certainly after 22 years (2 cycles) we would expect this to mostly wash out (though, oddly, it would still be measurable if it’s large!)
But, in the context of any verification/validation done to date, it would matter. So, for example, Rahmstorf et al. using “slide and eyeball” to assess agreement uses only 17 years of data. If there are relatively large 11 year cycles in 17 years of data, then one might want to at least know whether one started at the bottom and ended at the top or vice versa when both sliding and eyeballing!
So, if this does matter, there is a problem with the IPCC’s method of communicating and with the climate scientists method of communicating etc.
In my opinion (and it is that) if solar does make a difference of 0.18K peak to peak (which one reading of Camp and Tung suggests for surface temperatures(!)) or 0.1K, or even 0.05K, then, since the 11 year cycle is more predictable than the increase in GHG’s, the IPCC should be including those either in error bars or in their nice smooth trend lines. (There is no reason they can’t have wiggles and mention those are due to the solar cycle.)
But, where I totally agree with you: If they leave it out, we need to see if the decision to leave it out is what “falsifies” in the short term.
In which case, the message to the IPCC might be: Why don’t you add this into your graphs? Then the “stalls” in warming would be communicated as not just mere random events but predictable anticipated events, and we’d even know when to expect them. Equally importantly, we would better assess whether a sudden run up is due to the sudden increase in solar!
And even more importantly– knowing whether the models correctly predict the response to solar really would give people more confidence. (And it would ally lots of skeptics who are going nuts over the solar cycle.)
So… right now, I’m trying to read various discussions of this and trying to figure out my opinion on a) what’s the possible range of solar influence based on the literature (I think JohnV may have that pegged), b) should there be a time lag, and c) how does this jibe with the AOGCMs, etc.
Here are Leif’s comments on Camp and Tung: http://www.climateaudit.org/?p=2983#comment-233060
Here are a few choice quotes from the paper:
Leif notes:
Leif tried to get Gavin to tell whether the seasonal signal was modelled by the GCMs. I don’t know if he got an answer.
Apologies if this question has already been asked and hopefully answered. What does the data say about the AR1-3 projections for 2001-2008?
Reference: Do you mean the TAR (Third Assessment Report)? That was about 1.5C/century for the central tendency. Right now, the fit using all five instruments, averaged, says -0.7 ± 2.0 C/century. So, the upper trend that is good to 95% confidence is 1.3C/century.
So, the 1.5 C/century central tendency is currently “out”.
Mind you, John and I are trying to see if we can figure out if this could be due to neglecting the effect of solar cycles (and if yes, what that would mean.)
lucia #2119:
I mostly agree with your points. Rahmstorf et al should have considered the solar cycle. (I have not read it so I can not say for sure if the solar cycle is described).
As far as IPCC communication goes, to my knowledge the IPCC authors have been consistent in saying that 7 years is too short for validation or falsification. The effect of the solar cycle is included in the IPCC AR4 WG1 text. Wiggles on the IPCC trend lines might be useful but would give the impression of better accuracy than could be justified (and that would open up a whole new can of complaints). The wiggles would also quickly devolve into noise because of uncertainties in solar cycle lengths and amplitudes.
For determining long-term trends (which IMO is the goal of the IPCC) the solar cycle is just not that important. It’s only an issue in this case because of the unfortunate timing.
Anyways, enough IPCC politics — let’s get back to figuring out the correct answer. That’s way more interesting.
JohnV,
Rahmstorf et al. 2007 is a 1 page paper. I don’t see “solar” mentioned and certainly, they don’t wait “several decades” as suggested in the paragraph you quoted from the AR4! “Several” generally implies more than 2 decades, so 16 years is, well… not “several”.
Political or not, what the IPCC said they meant by their documents is important with respect to the discussion of how we are to interpret what they told people. It’s true that in Chapter 8 of the WG1 technical basis of the AR4, the need to wait several decades to average out solar is mentioned. But it is also true that the authors themselves are somewhat, shall we say, “flexible” in deciding how seriously to take the paragraph you quoted– they publish comparisons without waiting “several” decades. Also, it’s worth noting that Chapter 10 itself, where the projections are discussed, gives no particular guidelines for verifying the projections or their “uncertainty” in the portion discussing the projections themselves.
Now… on the “what might be the effect of solar” front: I’m trying to do some reading. I read the Camp and Tung results, and they do predict a sizeable influence, and evidently in phase with the cycle. (I still haven’t downloaded the solar intensities though. So… I need to get those.)
It turns out that when I reread table 10.1 on page 756 of the WG1 part of the AR4, listing things like “C” and “Y” telling us what’s included in the projections, some of the IPCC model projections appear to explicitly include solar in the SRES integrations. Specifically, Model E does.
Steve Mosher sent me a link to what Model E would predict for the solar influence and I plotted it:

We think this is supposed to be the models prediction of response to solar forcing. If your reading is different, let us know.
You can replicate here:
http://data.giss.nasa.gov/modelE/transient/Rc_jt.1.06.html
The path to that page is: http://data.giss.nasa.gov/modelE/transient/climsim.html
That page also explains what the data supposedly tell us.
To get the response to solar only, look at table 1 under “miscellaneous”, pick forcing and “lat time”.
I downloaded the 1 month data. (It turns out 2001 temps were predicted as relatively low.)
Unfortunately for our discussion, the calculations at the link Steve provides don’t go past the end of 2003. But as you can see, 2001 was at a relative low point as far as their predictions go. (If you go to the GISS site, you can see GISS was hindcasting 2001 as low.) So, this would suggest that starting in 2001, at least GISS would have been expecting the response to solar to amplify the temperature response.
On the other hand, the magnitude of the peak to peaks could be in the range suggested by Camp and Tung. But now I need to see raw solar data they used.
Out of curiosity– do you have solar intensity data based on measurements? That would help me. (If you have any idea what Model E might have projected for solar after Dec 2003, that would be interesting too.)
JohnV, well the models include the changes in solar forcing due to eccentricity.
For hindcasts they use the historical TSI (from Lean, I think), so the question is: when they do a projection, do they assume a constant 11 year cycle, do they Monte Carlo that forcing, or do they just ignore the changes in solar forcing? Same with volcanoes. Since it’s unpredictable, do they use an AVERAGE volcanic forcing? Or do they assume no volcanic forcing because it’s unpredictable?
JohnV,
I think I should probably get clarification from Gavin on this. When I asked him for hindcast data he sent me to this page. And I’m just taking that data at face value, so when the field says “solar irradiance forcing” I’m assuming that is TSI from Lean and no other forcings. So the response would be the response to solar only.
Thanks for looking at that John. Yeah, with clean data, you expect 11 years. When I first looked, I saw all that scatter, and yet for some reason assumed it was ensemble averaged results. But… I guess the range we are seeing is weather. (That by itself is interesting actually. If that’s weather, we can compare weather noise in the model to real weather noise. But… I better read to see if that’s any sort of ensemble average!)
I found some Total Solar Irradiation data. Oddly, while 2001 is near the peak, there happens to be a local ‘dip’ for about a year. (Not enough to make it a ‘low’ year– but you’ll see when I show the plot. )
I also need to check how Camp and Tung treated the data– monthly average? Yearly? (It’s in there, I just don’t remember.) Tamino has a theory about there being some problem with Camp and Tung which he doesn’t reveal. (He also describes problems he does reveal. For example, he doesn’t like that a short-ish time frame is used, and he doesn’t like the complication, but he said there is something else he doesn’t like; he evidently wanted to email Camp and Tung about it. So… who knows?)
Leif Svalgaard at least seems skeptical over at CA. He seems to think it’s a bit dubious to find the 11 year signal without finding the annual signal. (I’d tend to agree, particularly if the 11 year signal has no phase lag, suggesting a near instantaneous response. That said, I haven’t run the numbers.)
I read RayPierre’s comment on the paper after a presentation at AGU. He seemed intrigued by it and was planning to do something with an NCAR model– but I haven’t seen if he reassessed later.
(I left this question:
Raypierre:
These ideas could be tested by more complete diagnostics of heat burial in the NCAR model, and solar-cycle response runs with a two-layer mixed layer model in which the upper layer is shallow. I think I’ll give it a go, if I can find the time.
Did you ever happen to do this? Or do you know anyone who has done anything along these lines?
)
So, maybe we’ll learn more. Anyway, it would be interesting to hear what Raypierre currently thinks of Camp & Tung.
But, on the issue of the magnitude: If the solar causes that much and if it hits “just right” then, yes, that would be big enough to cause issues with testing projections that neglected the existence of this response. (But, I also think if climatologists working on projections thought there was a systematic response to the nearly deterministic external solar forcing that big, they surely ought to have included that quite explicitly in projections, and repeated it in fairly obvious ways in things like the “guide to policy makers”.)
I’m short on time so can’t dig very deeply.
I had a quick look at the Model E results. I agree with your interpretation of what they mean. An FFT analysis shows a strong peak at 11 years, as expected. The amplitude of the solar cycle is only about 0.0075 degC (0.015 degC peak-to-peak) — lower than other results I’ve seen. I do not have time to check the phase of the temperature response.
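In case anyone wants to reproduce the check, here’s a sketch of the FFT peak-hunt. It assumes the Model E solar-only response has been loaded as a 1-D monthly array; the file-reading step is not shown:

```python
import numpy as np

def dominant_period_years(temps, samples_per_year=12):
    """Return the period (in years) of the strongest spectral peak."""
    x = np.asarray(temps, dtype=float)
    x = x - x.mean()                      # remove the mean first
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / samples_per_year)  # cycles/yr
    k = np.argmax(power[1:]) + 1          # skip the zero-frequency bin
    return 1.0 / freqs[k]

# A strong solar-cycle signal should give a value near 11 years.
```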
lucia,
I saw Tamino’s post about the Camp and Tung results. I’m definitely not advocating for the C&T results since they are so different than the accepted value of ~0.1C.
There is a difference between modelling the solar cycle response and observing the solar cycle response. I believe Leif Svalgaard’s comments were about modelling the annual variation. We would need to move from anomalies to actual temperatures to look for that cycle. As interesting as that may be, it’s not related to the 11-year solar cycle and is not important in terms of falsification of IPCC trends.
Must do real work…
lucia, your comment 2124:
Thank you for your near real time reply. Yes, I was trying to refer to the Third, Second and First Assessment reports. So both FAR and TAR are based on unreliable projections. Perhaps it’s not worth the effort but there would be some closure and completeness to knowing if the Second and First Assessments had any skill.
Is it clear how these projections were arrived at? If not then looking for the source of the error will be a hopeless task.
lucia,
I know I’m repeating myself, but the solar response is only “that big” when looking at trends over less than a single solar cycle. Its effect is critical for your falsification because of the timing from solar max to solar min. All it takes is -0.03degC from the solar cycle to undo the IPCC falsification (for the average trend).
If the IPCC made 7-year predictions then it would be necessary to prominently discuss the solar cycle — for their multi-decade predictions it is basically irrelevant.
John– Of course there is a difference between observing and modeling. 🙂
But I’m pretty sure Leif is talking about observing. The discussion of phenomenology is there because he’s explaining that it should be easier to observe this in the annual record than in the 11 year record because physical arguments should suggest the effect is much stronger in the annual record. (Plus, of course we also know statistically there are 11 years in an eleven year cycle, so that should help on the statistics.)
But no one reports seeing the solar signal in the annual record. So, that doesn’t have to do with modeling. (Maybe they just don’t look?) Still, if they’ve looked and haven’t found it, that’s a question. And the fact is, the believability of the results in these papers does relate to the modeling.
I have a question for you: Are you sure mentioning results and what someone found in a paper means the idea is “accepted”? That’s not necessarily true in what is essentially a literature survey.
Papers discussing the solar cycle and the 0.1C estimate are mentioned, but that doesn’t always mean the results are “accepted.” The IPCC document as a whole and methodology for developing projections reads as if the solar cycle is not considered important– so I’m trying to gauge that in making up my mind about what the solar stuff means.
(I know you’ve already decided what you think– but that’s because you’d thought about it before you asked me. So for me, I started thinking about it after you thought about it and later asked.)
Still, certainly, it’s mentioned. So, I’m not waving this off– it’s just that I want to read what the papers say, and also find out what different people think of these various findings/ claims.
I’m also looking through the papers to see if anyone says anything about phase lag, or to see if the 0.1C is mentioned specifically, or to see if they mention other features we should be seeing. But that’s going to take a bit of time. In H Gleisner, P Thejll – Geophys. Res. Lett, 2003, for example, they seem to have no phase lag. So your method of identifying the peak/trough would make sense according to what they say.
But according to the H Gleisner, P Thejll analysis, we should be seeing really big swings at 200 hPa-300 hPa. So, if what they suggest is real, we should have seen the temperatures aloft, and especially in the equatorial to mid latitudes, take a big, big nose dive. They supposedly drop 3-4 times as much as the surface during a solar min! Do we see these? If I’m going to think about the solar effect, I think it would be interesting to look. (I haven’t.)
All I have from Van Loon and Shea is the abstract. They seem to have looked at July-August only, and say “At the earth’s surface an 11-year solar signal is not obvious in the zonally averaged temperatures and pressures in July-August.” This would suggest that we see nothing at the surface. (They see stuff between the mid tropopause and 10 hPa.)
JohnV–
I agree the uncertainty due to solar forcing is only large at small times.
We have a difference of opinion about the likely interpretations of those graphs by readers. Those documents and those graphs are communicated to the widespread public and policy makers, and the sorts of caveats you are finding are buried in small paragraphs amounting to literature surveys, separated from the projections by pages and pages of text, graphs, etc. (The caveats you find are in Chapter 8. The projections are in Chapter 10 and repeated in the summaries– with no ‘reminder’ of what’s in Chapter 8.)
You can think smooth graphs starting with projections at time “0”, surrounded by small uncertainty intervals, clearly communicate “Don’t test for decades (unless you are Rahmstorf, in which case, start in 16 years)”, but I think they mislead.
So, I think the IPCC is wrong not to either
a) increase the uncertainty intervals at small time periods so as to accommodate any uncertainty in the response to solar forcing, or
b) incorporate supposedly “known” -0.1C per 6 year excursions in the central tendency for the trend due to response to nearly deterministic variations in solar forcing.
You are not required to agree with me on this, but it’s what I think. -0.1C in six years is huge compared to what they are communicating to people. But no matter what the result, if, as this line of argument seems to suggest, that -0.1C was accepted, anticipated, and fully explained, then that excursion belonged on the graph illustrating the projections, because it overwhelms the projections actually communicated to the public.
Ya lucia, I think part of the problem is that in the short term, let’s say 1-10 years, the uncertainty is dominated by short cycles (like solar, ENSO, etc.) and noise, so the modellers are basically reading tea leaves and chicken bones. Beyond 10 years the short-cycle variability is assumed to cancel out and the trajectory is dominated by GHG forcings, that is, the various guesses at GHG emissions, the SRES. The question is how to present such a quizzical beast. In the short term emissions don’t matter, but what matters is stuff we can’t model; in the long run the stuff we can’t model evens out, and the stuff we guess at (emissions) matters.
So for me: GHG go up, temp go up. I’ll guess 1.7C per century, SRES be damned. That’s my naive forecast.
It seems like you’re saying an annual cycle in temperature has not been observed. Let me know if I misunderstood.
There is a clear annual pattern in observed temperatures. Since we’re discussing climate we tend to work with monthly anomalies. The baseline for the anomalies removes the annual cycle — that’s the purpose.
I’m not certain of the amplitude of the solar cycle temperature response (as I’ve said in previous comments). I am confident that the response is large enough to affect a 7 year trend, and I am glad that you’re looking into it.
This discussion is slipping from collaboration to argument. I really don’t care if the IPCC should have stamped a solar cycle warning on every page in big red letters. The goal is supposed to be testing the validity of the trend — not analyzing the prominence of certain words.
JohnV no argument here, just a suggestion about how better to frame the claims.
Lukewarmers.
John V observes that ‘If the IPCC made 7-year predictions then it would be necessary to prominently discuss the solar cycle.’ It’s surprising therefore that some leading members of the IPCC milieu join in making 7-year and even annual predictions without discussing the solar cycle (so far as I know). Specifically, Phil Jones of the CRU at the University of East Anglia was a coordinating lead author of Chapter 3 of AR4 and David Parker of the UK Met Office (Hadley Centre) was a lead author of the same chapter. The organisations with which these scientists are affiliated have produced a joint forecast of the coming year’s global mean temperature anomaly every year since (I think) 1999. I’ve seen claims (at Climate Audit, if I remember correctly) that these estimates have consistently overpredicted the following year’s temperature. And in 2007 the CRU/Hadley Centre produced for the first time a prediction for a longer time span: 2014 would be 0.3 C warmer than 2004 (an error range was provided).
Even more puzzlingly, in December 2003 Phil Jones told the BBC that “Globally, I expect the five years from 2006 to 2010 will be about a tenth of a degree warmer than 2001 to 2005.” This was a brave prediction, and it now seems almost certain to have been too high. The average HADCRUT3 global mean temperature anomaly for the first 27 months of the 2006-10 quinquennium (Jan. 2006-Mar.2008) was BELOW the average for the 2001-05 quinquennium (Jan.2001-Dec.2005), and the average global mean temperature anomaly for the rest of the 2006-10 period (Apr.2008-Dec.2010) that would be required to reach Jones’s forecast is now 0.3 C higher than the Jan.2006-Mar.2008 average. Yet the Hadley Centre/CRU predicts a relatively cool 2008 and also, I think, 2009.
If there are indeed ‘0.1C per 6 year excursions in the central tendency for the trend due to response to nearly deterministic variations in solar forcing’, it is puzzling that Dr. Jones did not mention this. This is a comment on the impression communicated by the institutions concerned, not on the quality of their research.
John V says:
The earth’s orbit is elliptical, which means the TSI at the winter solstice is 90 w/m2 higher than the TSI at the summer solstice. This variation is 100 times the purported effect of the TSI change over the solar cycle. If the small change in TSI over a solar cycle is enough to cause a 0.10 degC change in temp, then we should see a 10 degC difference between the GMST at the solstices. The data is always reported in anomalies, so it is not easy to check for this signal. If this 10 degC difference does not exist, then the 0.1 degC variation over a solar cycle must be treated with skepticism.
I don’t know about you, but a 10 degC difference in GMST between the solstices sounds awfully large.
Leif seems to be saying that. I actually have no idea. You can’t get it out of the anomalies which is the only thing I’ve looked at.
But, presumably, it would make sense for these various papers to compare what they discover about the annual effect to what they find for the 11-year cycle, since the two analyses should confirm each other. But the papers I’ve read so far are silent on any confirmation. That is odd. (Odd doesn’t mean wrong. It just means odd.)
Do you know of any papers where they have linked the results from both? If so, I’d like to see them and I’m guessing Leif would too.
I know you don’t care. However, I strongly suspect the low prominence of the +0.1C exists because the authors, collectively, mostly believed there is no +0.1C. However, if the opposite is true and they truly believe there is a +0.1C effect, I think the low prominence of the discussion is a poor choice.
The issue of what they believed about the effect of solar on the trend even over 6 years is important with regard to testing the validity of the projected trend.
FWIW: I’m checking papers and numbers. But I’m also going to check the things that seem important to me and discuss them. One of those is: does it appear the authors or climatologists believe there is a 0.1C, or 0.2C, or 0.01C effect?
Lucia,
I realise that I am late to this discussion, and I may have missed something in past threads. But I note that the diagram in AR4, Fig 10.26, that you use for your falsification did not actually predict a trend. It predicted a range of temperatures. You are falsifying your own deduced trend.
Now, the trend you have chosen seems a reasonable best guess. But it would have an error range consistent with the AR4 error range, and by eyeball it looks as if that might be very large, given the few years in the range. Shouldn’t you be falsifying relative to the error range of the IPCC model data, rather than the variance of your residuals?
To take this a bit further, all the IPCC has really projected is that the temperature will lie between certain bands. Can you falsify that?
pliny, in the text the IPCC notes that the trend from 2001 for the next two decades will be .2C per decade, regardless of emissions scenario: warming in the pipe. After 2011 they provide ranges that go from .21C and up, depending upon the emission scenario. Clearly they were free to say “we make no projection for the next 10 years; however, from 2011 on, warming will fall within these ranges,” but they didn’t take this conservative, uncertain view of the first decade of the century. Alternatively, one can say that the observed trend after 6 years is what it is, ± 3 sigma. Short period, big error bars, but some trends are ruled out as unlikely; a trend of .2C per decade, for example, is ruled out for now. This will change as more data comes in.
Hi Pliny,
The AR4 document described the central tendency of 2C/century both in words and in the graph. They predict an underlying climate trend with a central tendency of 2C/century for the beginning of the century.
It appears you are suggesting I should compare the trend experienced on earth to the IPCC model weather.
Falsifications are traditionally done by comparing one underlying ‘noise free’ trend to a different underlying ‘noise free’ trend. One does not traditionally compare trends to weather. So, I find the range of trends consistent with the weather we experienced and compare that range of climate trends to the IPCC projected climate trends.
If I had the IPCC weather data, I could find the range of trends consistent with the model weather and still compare trends to trends. But, since the IPCC already did that step and published the trend with the uncertainty on the trends, I just compare the trends for the weather to the trends they published.
The IPCC did not predict the temperature would be within certain bands. If you examine the plots, they are “relative to 2001”. However, the value in 2001 is never stated. This may seem a subtle point, but I discuss the difficulty earlier.
However, if you are asking whether the weather data for GMST can be made to consistently stay inside the uncertainty bands shown, the answer is: I haven’t exactly checked, but probably not. A test for the TAR would fail– everyone has seen that. The weather almost never stays inside plots like that. Those bands aren’t meant to encompass weather noise. In fact, generally when people do “slide and eyeball”, even the first few data points are outside the bands.
No one considers individual weather data falling outside those bands meaningful, because those bands aren’t intended to encompass the weather noise– they are intended to describe trends.
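To make the trend-to-trend comparison concrete, here is a minimal sketch in Python of this style of test. It uses plain OLS for simplicity (the posts themselves use Cochrane-Orcutt because the monthly residuals are autocorrelated), and the anomaly array `anoms` is a hypothetical placeholder rather than any particular data set.

```python
import numpy as np
from scipy import stats

def test_trend(anoms, projected=2.0, alpha=0.05):
    """Two-sided test of an OLS trend against a projected trend (C/century).

    anoms: monthly temperature anomalies in deg C (hypothetical input).
    Assumes independent residuals; the posts use Cochrane-Orcutt instead
    because the monthly residuals are autocorrelated.
    """
    t = np.arange(len(anoms)) / 12.0                # time in years
    slope, intercept = np.polyfit(t, anoms, 1)
    resid = anoms - (slope * t + intercept)
    dof = len(anoms) - 2
    # standard error of the OLS slope
    se = np.sqrt(np.sum(resid**2) / dof / np.sum((t - t.mean())**2))
    t_crit = stats.t.ppf(1 - alpha / 2, dof)        # TINV-style multiplier
    half_width = t_crit * se * 100.0                # C/yr -> C/century
    rejected = abs(slope * 100.0 - projected) > half_width
    return slope * 100.0, half_width, rejected
```

Applied to the numbers in the table above, this is just the statement that 2.0 C/century lies outside -0.7 ± 2.0 C/century.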
lucia,
I’m trying to get a feel for just how large 0.1C really is. My gut feeling is that its closest analog is ENSO (in terms of frequency and amplitude, although I realize ENSO is internal). For comparison, do you recall the size of your adjustments for ENSO? I am also curious about the prominence of ENSO and its effect on trends in the IPCC reports.
—
Raven,
I really like this kind of comparison. I believe there are some complicating factors to consider. Give me a little time to think about it…
John–
The “slope” with MEI is 0.061, and the standard deviation of MEI is 0.588. So it’s about 0.035C as far as “correlation” goes. So, 0.1 is strong.
The positive empirical points in favor of the “sun” correction are:
* If I apply a cosine amplitude of 0.05C with an 11 year cycle to the fit, the 1979-now correlation improves a bit. (The slope during that period goes up also– but so far this is OLS.)
* At least one paper actually directly claims this amount. (That’s out of the two cited that I read.) I’m still not sure it’s “accepted” in the sense that everyone believes it, but presumably it’s not “totally crackpot”.
So… I’m going to do this, and run the C-O. Then, I’ll add it to the cycle. It may piss everyone off, since on the one hand, it will (likely) redeem the IPCC projections, but on the other hand, it does it by considering the solar effect important.
On the other hand, we’ll see what happens in a few years.
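For what it’s worth, the adjustment being described can be sketched in a few lines. This is a minimal illustration under assumed values: the 0.05C amplitude, 11 year period, and zero phase are the assumptions being tried out, and `anoms` is again a hypothetical array, not the spreadsheet itself.

```python
import numpy as np

def subtract_solar_cosine(anoms, t_years, amplitude=0.05, period=11.0, phase=0.0):
    """Remove an assumed sinusoidal solar-cycle signal before fitting a trend.

    amplitude: assumed response in deg C (0.05C here, i.e. 0.1C peak to
    trough); period and phase in years.  All three are assumptions.
    """
    solar = amplitude * np.cos(2.0 * np.pi * (t_years - phase) / period)
    return anoms - solar

# e.g. for monthly data starting January 1979 (anoms is hypothetical):
# t = 1979.0 + np.arange(len(anoms)) / 12.0
# adjusted = subtract_solar_cosine(anoms, t)
```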
John V,
The Camp and Tung approach appears to extract a signal from the temperature data by looking for correlations. This means it would find a signal even if the signal is not caused by TSI (e.g. GCR, magnetic fields, ENSO interactions, etc.). This could explain why they found a significant signal even if the annual signal due to TSI variations is very small. OTOH, the IPCC completely discounts the possibility of non-TSI related forcings, nor do we know whether any non-TSI related forcing would really cancel out over time.
lucia,
Just to confirm, is the “slope” in units of degC/MEI? I don’t understand how you got 0.035C — please explain.
I downloaded the MEI data you linked in your “Accounting for ENSO” post (link below) and got very different results. Using the MEI range from -2.2 to +3.1 I get a full-scale ENSO effect of:
(3.1 + 2.2) * 0.061 = 0.32degC
Or using the standard deviation (0.97) I get 95% confidence limits (peak-to-peak) of roughly:
(4*0.97) * 0.061 = 0.24degC
Where did I go wrong?
http://www.cdc.noaa.gov/people/klaus.wolter/MEI/table.html
You aren’t doing anything wrong. I wasn’t looking at the min/max. I was looking at the effect for 1 standard deviation during the post 2001 period. That’s 0.558 MEI units for the period in which we are testing.
There are ‘weird’ statistical issues with doing the MEI correction with Cochrane-Orcutt, because the MEI index is partly based on temperature. So, that calc sort of accounts for ENSO, but really, I need to learn full ARMA to do that calc correctly. I’m carrying it along with the results for now, but I need to improve that.
For this week, I’m going to have the solar entirely separate from the ENSO, because I’d like to do as little with that MEI index until I can do it better.
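To spell out where the two numbers come from: the ~0.32C full-range figure and the ~0.035C one-standard-deviation figure are the same 0.061 C-per-MEI-unit slope scaled two different ways. A minimal sketch:

```python
# Same regression slope against MEI, scaled two different ways.
slope = 0.061                     # deg C per MEI unit, from the fit

full_range = (3.1 + 2.2) * slope  # min-to-max MEI swing: ~0.32 C
one_sigma = 0.558 * slope         # 1 sd of MEI over the post-2001 test
                                  # period (as quoted above): ~0.034 C
print(full_range, one_sigma)
```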
lucia,
In the sense that 0.05degC for the solar cycle will annoy people on both “sides”, I think it’s a good choice. 🙂 On a log scale it splits the difference between the Model E and Camp&Tung results.
—
Raven,
I’m not arguing that the Camp & Tung result is correct. It’s never a good idea to accept a single analysis that stands out from the majority. Time will tell but right now I think it best to follow the IPCC AR4: “The peak-to-trough amplitude of the response to the solar cycle globally is estimated to be approximately 0.1°C near the surface.”
—
Raven,
I believe your comparison of the annual TSI cycle to the 11-year solar cycle (“Schwabe Cycle”) is valid only if the global atmospheric temperature response is instantaneous and directly proportional to the forcing. Neither of those conditions is met. The response is further complicated by slower feedback mechanisms but I won’t even attempt to get into those (since I’d be way over my head).
First, in the annual cycle we know there is a lag between maximum TSI and maximum temperature. Results I’ve seen vary from 1-3 months. This attenuates the global response to the annual cycle but has little effect on the Schwabe cycle.
Second, the longer Schwabe cycle will have a proportionally larger effect on ocean temperatures. The total energy in the Schwabe cycle is roughly 10% of the energy in the annual cycle (11x longer, 1% as strong). In the most extreme case, the Schwabe cycle response could be as large as 10% of the annual cycle response (ignoring feedbacks).
The big question is the amplitude of the global temperature response to the annual solar cycle. That’s a tough number to track down. The best I could find was “Temperature response of Earth to the annual solar irradiance cycle”, Douglass, Blackman, and Knox, 2004:
http://arxiv.org/ftp/astro-ph/papers/0403/0403271.pdf
Eyeballing Figure 2 from this paper I get temperature responses (NH winter to summer) of:
+15degC for 30N to 60N
+4degC for EQU to 30N
-4degC for EQU to 30S
-5degC for 30S to 60S
No results are given for the polar regions. Assuming the polar regions have the same response as the mid-latitude regions and integrating over the globe, the global temperature response is *roughly* 2degC.
It should be noted that Douglass, Blackman, and Knox explicitly refer to negative feedbacks in the annual cycle and state that the sensitivity to the annual cycle is much less than the sensitivity to the Schwabe cycle. There is also a corrigendum that is not available online without purchasing.
This is all very “back-of-envelope” and could easily have large errors.
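As a rough check on that integration, here is a minimal sketch of the area weighting, under the same assumptions stated above: the band values are the eyeballed numbers from Figure 2, and the polar bands are assumed equal to the adjacent mid-latitude bands. The weight of each band is proportional to the difference of sin(latitude) at its edges.

```python
import numpy as np

# Band responses (NH winter-to-summer, deg C), eyeballed from Figure 2 of
# Douglass, Blackman & Knox (2004), as quoted above.  Polar bands are an
# assumption (set equal to the adjacent mid-latitude bands).
bands = [              # (lat_lo, lat_hi, response)
    (60, 90, 15.0),    # assumed same as 30N-60N
    (30, 60, 15.0),
    (0, 30, 4.0),
    (-30, 0, -4.0),
    (-60, -30, -5.0),
    (-90, -60, -5.0),  # assumed same as 30S-60S
]

def area_weight(lat_lo, lat_hi):
    """Fraction of the sphere between two latitudes."""
    s = np.sin(np.radians([lat_lo, lat_hi]))
    return abs(s[1] - s[0]) / 2.0

global_response = sum(area_weight(lo, hi) * r for lo, hi, r in bands)
print(global_response)  # ~2.5 degC with these assumptions, in the
                        # ballpark of the "roughly 2degC" above
```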
For grins, figure that out.
Heh. Yes– assuming we are interpreting the Model E results properly. Maybe we will discover otherwise.
Steve Mosher, who was aware of the runs, also told me he found this:
http://data.giss.nasa.gov/modelE/transient/climsim.html
So, the Model E runs are evidently generally averages of 5 runs. That should reduce ‘weather noise’ by √5. This is useful for comparing the ‘weather’ noise in a series to the ‘weather noise’ in real life. (I did a quick comparison. The recent period is far from volcanic eruptions. So, I compared the ‘noise’ to the solar-only ‘noise’. They are comparable– but I need to look further.)
John V says:
I don’t believe that paper correctly takes the earth’s orbital eccentricity into account. If it did, the surface-area-weighted sum of the insolation at the winter solstice should be 90 w/m2 higher than the insolation at the summer solstice. The fact that the GMST is *higher* at the summer solstice supports this (my oversimplified equations suggest the GMST should be 10 degC *lower* at the summer solstice).
This statement led me to believe that they assume the incoming TSI to be constant over the year:
Re 2173
Ice age rebound followed by industrial aerosols followed by clean(er) air? :-^
Raven,
You are right that the GMST is out-of-phase with the annual cycle.
I believe they do account for eccentricity. You quoted the first sentence of a paragraph. Here’s the whole thing:
It seems that the annual cycle due to eccentricity is buried by the larger seasonal response of the NH (perhaps because there is much less ocean in the NH).
John V says:
Something does not compute. Those TSI curves are mirror images of each other. That can only mean that:
1) The correction was not applied to the data presented in those graphs,
2) The correction is insignificant.
It is also worth noting that a 350 w/m2 change in TSI produces a 10 degC change in the NH. This would put an upper bound on the change due to eccentricity (90/350 × 10 = 2.5 degC). This supports the notion that the eccentricity effect is insignificant.
The fact that a lag of only 1-2 months is observed also suggests that the effect of a 90 w/m2 change would be observed if it were significant (i.e. the climate appears to respond fast enough to show an annual signal).
(previous comment evaporated, trying again but shorter)
1. Lucia, you should e-mail Nir Shaviv for info on lags, etc. I sent his e-mail address through the Contact Lucia link. (Shaviv notes a .1C variation that lags solar activity 2 years here.)
2. Raven’s thought on a relationship between ocean and solar cycles is very plausible. The ocean is heated primarily by solar incidence (search Pielke Sr. for the reference; sorry, I’m at work and have RS/chronic pain problems). AGHG theory leads to reduced cooling throughout the atmosphere, mostly where it is dry and pressure is low; it doesn’t explain a large increase in heat in the oceans. The CRF/cloud effect would affect the oceans primarily (from Ilya Usoskin).
Lucia – it looks like the conversation here has ceased. However, I am very confused by your claims, if I understand what your “+-” numbers are supposed to be. I’m no statistician, but the standard scientific approach I’m familiar with makes the “+-” error quantity equal to 1 standard deviation in the data. For a normal distribution, 1 standard deviation is only the 68% confidence level; you need to go to 2 standard deviations to get to 95%. And in all cases, the IPCC number is well within twice your “+-” number of the trend you find.
So am I misunderstanding what “+-” means here, or did you goof on this?
I see you derived an estimate for standard deviation over a decade of 1.1 degrees C/century here:
http://rankexploits.com/musings/2008/can-ipcc-projections-be-falsified-sample-calculation/
However, I can only conclude that that number must be an underestimate, if you are now getting standard deviations of 1.6 to 2.8 C/century from the actual trend calculations. There’s something fundamentally wrong if the numbers derived from the actual data don’t agree to that extent… at the least, I find it hard to believe you can claim “falsification” from that sort of messy data. Maybe systematic issues like the solar cycle or issues with some of the instruments used are behind the apparently larger standard deviation in the trends than what you estimate based on the older long-term trend. But your claims of falsification don’t make sense just looking at the actual numbers from 2001 to now.
Arthur–
I post 95% confidence intervals, and have already done the math to obtain this value from the standard errors. So, there is no reason to multiply by 2 (even if that were correct in this instance.)
Standard error (1-sigma) confidence bars are often used in science; they are quick and easy to calculate. Similarly, many use 2-sigma confidence bars to estimate the 95% confidence intervals. The 95% confidence intervals are widely used in many scientific fields. For example, Hadley reports them:
see http://hadobs.metoffice.com/hadat/uncertainty.html
This is sufficiently common that those who get statistics only in one or two undergraduate laboratory courses believe 2 sigma is the 95% confidence interval under any and all circumstances. In fact, this is not so.
The correct multiplier to determine the 95% confidence interval depends on a variety of things. Before you can begin to get the correct value, you need to answer:
Do you want a 1 sided or 2 sided confidence interval? (Which you want will depend on the question posed in a hypothesis test.)
Do you want the confidence interval for selecting a single sample out of a population with known mean and variance?
Do you want the confidence interval for an experimentally determined quantity?
Is the population of the uncertainties for an infinite number of samples assumed normally distributed?
It happens that if you want the 2 sided confidence interval to describe a measurable feature of a single sample out of a population with known mean and variance, and that population is normally distributed, then the multiplier is 1.96– which is the value you can look up for the Gaussian distribution. 2.5% of samples will fall above and 2.5% will fall below the true mean ± 1.96 standard deviations.
When people are doing math in their heads, this is generally rounded to “2”. This is reasonable, as one often only cares about 1 significant figure for these sorts of estimates.
Based on the wording of your question, this back of the envelope method of estimation is the one you are likely accustomed to.
However, the method must be modified if one is doing something else: and I am. I am trying to find the uncertainty in the estimate of the mean based on experimental data and perform a hypothesis test. In this case, the multiplier isn’t “2”, and sometimes it isn’t even approximately so. Moreover, for small amounts of data, “2” is too small, and results in error bars that are too small. For example: if you do an experiment to determine the average height of Army recruits, select 2 at random, and calculate the average and standard deviation, the uncertainty in your estimate of the mean will not be (1.96 × the standard error); it will be (12.7 × the standard error). The value of “12.7” is obtained by determining something called a “T” value. The “T” value used here is for 2 sided uncertainty bounds, and for 1 degree of freedom. (One got used up to calculate the sample mean.)
So, as you can see, if you simply use “2” in that case, your estimate of the uncertainty intervals will be much too small.
So, when sample sizes are small, instead of using a shortcut and rounding to “2”, I account for the sample size and estimate the “T” value. Lucky for me, this function, and t-tests, are so widely used that it is coded into Excel as the “TINV” function.
You can find further discussion of hypothesis testing at NIST (or any undergraduate text book on experimental methods. These are typically covered sophomore year.)
NIST has a nice discussion of hypothesis testing here: http://www.itl.nist.gov/div898/handbook/eda/section3/eda352.htm
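For anyone who wants to check those multipliers without Excel, scipy’s `t.ppf` plays the role of TINV. A minimal sketch reproducing the 12.7 and 1.96 values discussed above:

```python
from scipy import stats

# Critical t values for two-sided 95% confidence intervals (alpha = 0.05)
print(stats.t.ppf(1 - 0.05 / 2, 1))   # 12.71 -- two data points, 1 dof
print(stats.t.ppf(1 - 0.05 / 2, 30))  # 2.04  -- already close to 2
print(stats.norm.ppf(1 - 0.05 / 2))   # 1.96  -- the infinite-sample limit
```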
The reasons the numbers in this article and the earlier one you found are different are:
1) The earlier one tests a hypothesis discussed as an inequality, so 1 sided tests were used. The current uncertainty intervals are 2 sided.
2) The earlier one was done in “experimental design” mode, where I was trying to estimate what sorts of trends would need to occur to falsify a hypothesis. So,
a) many values were assumed based on data from a period with volcanic eruptions etc. This means one must select a standard deviation that reflects all possible futures. (One cannot know in 2000 whether there might be an eruption in 2005. However, once 2008 arrives, one knows whether it occurred. So, one may now estimate the standard deviation for the scenario that actually occurred.)
b) The time period for the two articles is different. The earlier article was a prospective “what if” computation, and used 10 years. The current test uses 7 years of data. The number of data points always affects the standard error in the estimate of any mean, and this is particularly true for the standard error in the experimental determination of a mean slope– as discussed in the article you linked.
3) The earlier article used annual averaged data; the newer one uses monthly data.
So, the numerical values are different because the question is different, the time periods are different, the numbers of samples are different, and the types of data used are different. As for the numerical values, it turns out that most of the functions and methods I use are coded into Excel. Many are now programmed into handheld programmable calculators. Undergraduates find these very handy when doing laboratory experiments.
Hi Lucia – thanks for clarifying. I was a theorist, so did very little experimental work beyond my undergraduate efforts (and that was in physics which probably cares less about statistics than most fields) – so as I said, no expertise in statistical analysis.
But to be clear, the numbers quoted in this post, for example the -0.7 ± 2.0 at the top, these are based on a time series of monthly average numbers from Jan 2001 to March 2008, where you have used Excel’s TREND and TINV functions to determine the two numbers (-0.7 and 2.0)?
Are you willing to post your spreadsheet this is based on?
Arthur–
When I use Ordinary Least Squares, I use Excel’s LINEST. That’s a matrix function and returns the slope, intercept, standard error in the slope, uncertainty in the intercept, etc. (If you fit T = m·t + b, it returns two columns and 5 rows.) Of course, these values all assume OLS works.
The standard error in the slope is a standard error. So, I multiply that by the value I get from TINV.
However, for the more recent posts, I use a method called Cochrane-Orcutt. The reason is that if you examine the residuals for OLS, they are autocorrelated. That violates an assumption for OLS. So… since I’d always been careful to design experiments to avoid this sort of wretched outcome, I had to hunt down methods.
I found Cochrane-Orcutt works for red noise. So, I figured that out (it’s easy) and applied it. It’s not programmed into Excel– but it’s in many statistical packages, as it’s a classic method in econometrics. (The derivation from OLS requires a) stating the residuals are “red” with an autocorrelation, b) multiplication, then c) subtraction. DONE.)
When doing C-O, you define new parameters Y_i and X_i as Y_i = T_i − ρ·T_(i−1) and X_i = Time_i − ρ·Time_(i−1), and use LINEST on those. Iteration is required to get the value of ρ.
However, C-O only works if the residuals are “red”. So, of course, I checked those after doing the test. I had to look up how to do that too. 🙂 But it turns out that *currently* the residuals for the monthly data since 2001 appear red. So, for all practical purposes, the method appears suitable. (Caveat: the noise doesn’t look red during periods with large volcanic eruptions, though. Also, the first month I did this, the first lagged residual was just a hair over the level to decree “red”. So, I need to learn ARMA– as I’ve said since the first post on this subject.)
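For anyone who wants to replicate the procedure outside Excel, here is a minimal sketch of the Cochrane-Orcutt iteration just described: OLS fit, estimate ρ from the lag-1 autocorrelation of the residuals, transform, refit, repeat. It assumes AR(1) “red” residuals, which, as noted, should be checked afterwards.

```python
import numpy as np

def cochrane_orcutt(t, y, tol=1e-6, max_iter=100):
    """Cochrane-Orcutt fit of y = m*t + b assuming AR(1) ('red') residuals.

    t, y: numpy arrays (time and temperature anomaly).  A sketch only:
    verify afterwards that the residuals really are AR(1).
    Returns (slope, intercept, rho).
    """
    m, b = np.polyfit(t, y, 1)          # initial OLS fit
    rho = 0.0
    for _ in range(max_iter):
        resid = y - (m * t + b)
        # lag-1 autocorrelation of the residuals
        rho_new = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
        # transform: Y_i = y_i - rho*y_(i-1), X_i = t_i - rho*t_(i-1)
        Y = y[1:] - rho_new * y[:-1]
        X = t[1:] - rho_new * t[:-1]
        m, b_star = np.polyfit(X, Y, 1)
        b = b_star / (1.0 - rho_new)    # recover the untransformed intercept
        if abs(rho_new - rho) < tol:
            break
        rho = rho_new
    return m, b, rho
```

The slope of the transformed regression is the C-O trend; multiplying its standard error by the TINV multiplier gives uncertainty intervals like the ones in the table above.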
Am I willing to post spreadsheets? Sure. I have before! In fact, my plan is to upload one for each post with new computations, but I often forget. (Forgetting is a bad thing, since I nearly always continue using the same file and modifying it….)
Here is the file I have on my desktop. It’s been slightly tweaked since I wrote the post you are currently reading, but it has raw data, various charts, etc. You can see the method generally and find sheets for other calculations. (Caveat: I haven’t proof-read any changes since I most recently posted using an old version of this file.)
Spreadsheet for Arthur
Lucia – thanks, much appreciated, I’ll probably have questions on this later, but it makes sense so far.
Wow, falsification at 95%CL on noisy data. That’s hard.
But your data is short (only since 2001), and your analyses appear to presume all sources of variability are well-represented within your dataset. However, there are longer-term sources of potential temperature variability which are _known_ not to be representable in such a short dataset: things like sunspots and the PDO.
Sorry to argue against you and a finding I find personally attractive. But fair is fair. I will not play the zealots game.
— Robert, AGW denier in Houston
Robert– No worries. The very first post on this discussed the PDO caveat. My theory is the downturn is related to the PDO shift, which is a statistically infrequent event.
The modelers from the IPCC don’t include these large variations when they publish their uncertainty intervals. So, they provide very narrow ones.
We are all trying to look for other “explanations” for the statistical outlier, and we are all trying to find estimates of how much peak-to-trough temperature variation is thought to be related to the PDO. If we knew that, we could do a back-of-the-envelope estimate to see whether missing the temperature variations in that large-time-scale feature could be “the reason”.
It seems to be a difficult number to locate. 🙂
But yes. Falsification with noisy data. It’s sort of amazing. I was expecting to get “failed to falsify” because the uncertainty bars are so large. (Look at the betas compared to the alternate hypotheses. There is so little data, we wouldn’t have even expected to falsify 0C/century, and I, for one, am almost certain that’s wrong!)
Lucia
Just as a follow up to my last comment on this subject (and while you sort your anomalies out) can I ask you this.
Why is it that your spreadsheet shows a negative trend for individual series (like GISTemp) when no one else can see that? (For an example, see the 8 year trends at http://scienceblogs.com/deltoid/2008/05/no_global_warming_didnt_stop_i.php)
Your spreadsheet shows substantial declines rather than the rises everyone else sees. I know you use C-O, which you contend to be more accurate than Ordinary Least Squares, but I have serious problems with the fact that OLS gives the opposite result to C-O. C-O is just least squares with an assumption of an autocorrelated error term.
It strikes me as very unusual that an error term would so dramatically reverse the conclusion of a standard procedure like OLS.
So why is it happening?
Thanks for the prompt reply.
For an [upper] estimate of the PDO magnitude, why not look at historical data (hopefully ~100 yrs) from the most affected weather stations: Hawaii, Galapagos, Japan, the Philippines, and maybe selected coastal ones (Portland, OR). The PDO effect will be lesser elsewhere — the universe only contains stable or slow-changing things. The fast changers, do.
“The 11-year solar cycle averages out so does not need to be considered in a multi-decade trend. That is, it has little effect on the *mean* trend just as ENSO has little effect on the *mean* trend. It only affects the trend over short time scales. I doubt there is any discussion of the solar cycle in SRES.”
That is likely to be wrong, JohnV, because the solar cycle will affect the amount of accumulated joules in the oceans, which will in turn affect the amount of water vapour in the air, which will thereby affect the average temperatures, not just during that cycle but likely for a long time after, depending on the way energy is building up or dropping. We aren’t talking about any instantaneous equilibrium here. In fact you might expect the temperatures to correlate better with the prior solar cycle than with the current one.
“My gut feeling is that the ‘pipeline’ warming is mostly in the oceans. The atmosphere should respond quickly to any forcing. The atmosphere’s response is damped by the slower ocean response.”
It ought to be the other way around. It’s the embedded joules in the ocean that ought to react immediately. For example, you would expect the oceans to accumulate and release heaps of joules every year in line with when the earth is closer to and further away from the sun. The oceans should accumulate joules immediately in response to Forbush events or extended solar brightness.
But when and how this energy is released into the troposphere is another matter and subject to delay. Since the amount of joules in the troposphere is just tiny compared to the ocean, it’s the oceanic joules that ought to react straight away and drive whatever is destined to happen in the troposphere. Hence it is that solar/oceanic connection that ought to be gotten a handle on first….. and thereafter one would suss out how that played out down the track in the troposphere as an afterthought.
JM:
Cochrane-Orcutt gives different results than OLS. That’s why it’s happening. If the results were the same it would not be a different method. The OLS and C-O slopes are not all that different — they just happen to fall on either side of zero.
—
Graeme Bird:
The atmospheric temperature responds quickly because it has a low heat capacity. The ocean can absorb a lot of energy (expressed in Joules, ergs, or whatever you like) without a large change in temperature. Therefore, for a given forcing the atmosphere will respond more quickly than the ocean. If the forcing is sustained the ocean will eventually catch up.
A physical system with a response that “correlates better with the prior solar cycle than with the current one” would be very surprising to me. I think you’re saying that the temperature will respond slowly. This would damp the response, particularly if the lag is nearly as large as a complete cycle.
It wouldn’t work that way. While the atmosphere will react quickly to what’s happening with the sun now, that atmospheric temperature will be pushed around by whatever is happening in the ocean now as well. The atmosphere ought to be considered as an extension of the ocean by other means. The ocean is not a backdrop to the atmosphere or a heat sink for it, because the first thing that happens is the solar energy gets punched deep into the ocean.
The watts-per-square-metre guys have gotten used to using the ocean as a sort of fudge factor, and that’s simply because their models don’t work. But the ocean HEAT BUDGET (not its temperature) ought to react directly and immediately to any extra energy punched into it from the sun. You ought to be able to see a yearly inhaling and exhaling of energy from the oceans due to the way the earth gets closer to and further from the sun. You ought to see an immediate reaction in the heat budget of the ocean with Forbush events, or in solar maximums. But that extra energy will get doled out to the troposphere over time. The solar energy passes through the atmosphere and yes, it might warm it up straight away. But the energy will penetrate into the ocean and will not be given up to the troposphere and to space all at once if there is a more than usual amount. It will be given up over time, step-wise or perhaps rhythmically. But not all at once.
So it’s the oceans you want to go to to measure immediate buildup of energy due to the sun.
“A physical system with a response that ‘correlates better with the prior solar cycle than with the current one’ would be very surprising to me.”
That’s because you are looking at the wrong metric. Suppose you were a dolphin scientist diving halfway down the photic zone a lot of the time. Then as a dolphin scientist you wouldn’t be biased toward temperatures at sea level or 50 metres above or anything like that. If you were considering the oceanic heat budget as the main metric you were tracking, and the average air temperature as only a sideline, then that sideline could very easily correlate as well, if not better, with the prior solar cycle. For three reasons:
1. The air temperature through this cycle will be largely dependent on the beginning heat budget of the oceans.
2. If the Gulf Stream, and the ocean conveyor more generally, has built up great momentum as a result of a strong prior cycle, that too will tend to support stronger average air temperatures in the subsequent cycle via the Stefan-Boltzmann law.
3. The oceanic system is likely to slip into a certain rhythm along with the pulsing of the sun.
Hence, from the perspective of the oceanic heat budget being the main metric, it’s very easy to see how air temperatures could correlate as well, almost as well, or perhaps even better with the prior solar cycle than with this one.
John V “Cochrane-Orcutt gives different results than OLS. That’s why it’s happening. If the results were the same it would not be a different method. The OLS and C-O slopes are not all that different — they just happen to fall on either side of zero.”
OLS gives about 1.8C/century warming; C-O gives about -1.1C/century cooling. That’s not falling on either side of zero, that’s just plain ridiculous. Most of the C-O result is identical to OLS, so the error term is responsible for a turnaround like that? I have my doubts.
Lucia? Any thoughts?
JM, how about reading:
http://rankexploits.com/musings/2008/ols-with-pumped-up-error-bars-is-crude-the-ipcc-2-ccentury-still-falsified/
You know, the fruit never falls far from the tree.
JM (comment #2593) – the source of your trouble is that that picture looks at 8-year trends; Lucia’s looking at a 7-year trend (from January 2001)! 🙂 That’s why I recommended a Bayesian analysis of priors here, but apparently that’s not appropriate for some reason…
Arthur–
Bayesian would be fine if you can illustrate what you intend. Otherwise, I think just saying “Bayesian” is rather vague, and I have no idea what one might do.
As for the graph, if you examine it, you will see that *every* downturn is associated with a major volcanic eruption. This is known to affect the temperature trend– sufficiently so that the qualitative predictive ability of models is illustrated by their ability to explain downtrends after volcanic eruptions. From a statistics point of view, this means that 5 or 8 year periods that end (or begin) with volcanic eruptions are drawn from a different population than those with no volcanic eruptions.
The consequence is that you can’t use that graph to estimate the rate of 5 (or 8) year downturns during times with no volcanic eruptions. (I say 5 or 8 because Gavin made the 8 year graph and WC made the 5 year graph.)
“That’s why I recommended a Bayesian analysis of priors here, but apparently that’s not appropriate for some reason…”
Come on, Arthur. Who are your priors? A UN racket for your priors? Of course Annan’s Bayesian analysis is inappropriate here. What is significant is that when he ran it, it always seemed to move closer to zero. It moved in one direction from the prior estimates. So it left Annan looking like a Scotsman you are trying to borrow money off. You ask him for a loan of 50 bucks; he says what do you mean you want 30 bucks, I ain’t got 20 bucks to lend you, even if I wanted to lend you 10 bucks I simply don’t have the 5 bucks that you are after.
Well, the number kept dropping. It hasn’t dropped far enough. But you cannot use Bayesian analysis when you are starting off with a UN scandal as your beginning point.
Dover
I’ve read it. The criticism still remains. Why does C-O invert the result? In case you’re not aware, C-O = OLS + autocorrelated error.
When the error term contains the majority of the result, there is something wrong (and in this case I think it might be the implementation, not the method).
Lucia?
“OLS gives about 1.8C/century warming; C-O gives about -1.1C/century cooling. That’s not falling on either side of zero, that’s just plain ridiculous. Most of the C-O result is identical to OLS, so the error term is responsible for a turnaround like that? I have my doubts.”
What’s ridiculous about it, JM? Or what do you think is ridiculous about it?
The projections ought to be heavily biased towards cooling, not warming. That’s if you go with evidence and not make-believe. So what is it that you think is ridiculous? Why do you try it on, JM? You don’t like the results of the exercise, so you say it’s ridiculous? Is that it?
No one ought to fool themselves as to what JM is up to here. JM is a radical leftist, and he’s quite meticulously trying to tarnish Lucia with this drip-drip-drip nonsensical criticism. He isn’t even talking to her. Lucia will know that they both know that his objections are just idiotic. The whole point is to sow doubt in third parties who do not have a background in statistics.
I ask you all: has even ONE of his objections made sense? They might sound like they make sense, but only to people who aren’t working with statistics every day. They sound OK if you are not yourself a quantitative analyst.
It’s not a conversation he’s having. It’s a tactic he’s exercising. He won’t even reveal his identity.
Lucia, you ought to think pretty hard about whether you want this guy to just slowly drag your image down. He might seem too ridiculous and dishonest to be harmful right away. But what he is doing will have an effect over time.
Lucia
You have a sign error. (Referring to this spreadsheet revised by you on 24 April 2008 http://rankexploits.com/musing…..solar2.xls)
Eyeball the sheets ‘Chart_Main_Solar’ and ‘Chart_Main’. Notice how the slopes in the first are positive, but negative in the second (which is the chart you display here)?
Then check cells Z11 and AD11 on the sheet ‘Raw_Temperature’. Those are the slopes of your ‘Average’ orange line for those two charts.
Notice how one is positive and one is negative?
Check your work. Then adjust the anomalies before you use them.
None of your conclusions to date can be trusted until you correct those errors (at the very least).
Now, that’s a 25-page spreadsheet you have there, and if you haven’t noticed these obvious bugs, I expect there’ll be a lot more.
(Graeme, go away. Lucia can handle this without your help)
Oh, and can I make a suggestion once you’ve done all that?
Backtest against other 7 year periods in the record to make sure your spreadsheet model actually works.
I’d suggest 1975-82, 1982-89, 1989-2006. You’ll probably pick up a few more bugs that way.
JM & Graeme:
Could you two stop sniping at each other? If you don’t, I’m going to have to give you both long time outs.
JM.
The results for the solar calculation and the ones in this post have been discussed and compared previously. The solar result is positive because it’s a back-of-the-envelope calculation to find out what happens if we assume that the 11 year solar period has a 1C peak-to-trough variability that is perfectly sinusoidal and is positioned “just right” to “explain” the downtrend.
You’ll need to read the previous post to understand the idea behind that calculation.
I have back-checked against longer time periods and previous ones.
FWIW, I’ve been reading your comments, but I’m mostly letting others answer you. As you are aware, JohnV believes in warming, and I find him fairly trustworthy at checking things. He also has relevant ideas, which is why I did the solar calculation. Arthur is also looking at my spreadsheet. The three of us don’t always agree on the implications of calculations, but I think Arthur, JohnV and I can at least agree that 1+1=2, whether or not that supports our particular hunch about what “truth” likely is.
Lucia: “You’ll need to read the previous post to understand the idea for that calculation.
I have back checked against longer time periods and previous ones.”
Can you post those results as a spreadsheet, please? Because it’s not just your average that inverts; regressions on the base 5 series (GISS, HadCRU, etc.) invert as well.
It looks very much like a simple bug.
JM–
I’m not hunting through my past spreadsheets to find the precise one where I first checked the past trends. I am also not doing calculations simply to indulge you. If your intuition tells you that these results don’t hold up over time, feel free to download the data and fill it into the cells yourself. The formulas are all in there.
Afterwards, set up a blog, post your results and spreadsheet there. And feel free to point to your analysis in comments here.
I pursue readers’ suggestions when a) I understand precisely what they are suggesting, b) they seem meritorious, c) they show signs of being willing to do some calculations themselves, and d) I have time and inclination for that particular calculation. Otherwise, I leave it to you to pursue your own wild hares.
Lucia
This is a process somewhat like “peer review”. Although I am not your peer in statistical analysis, I have posed a number of simple questions that put your conclusions in doubt (IMHO).
In this process, it would behoove you to respond to my concerns if you want to retain credibility.
Maybe (probably) I am wrong in my concerns, but it would be a good look for you to address them rather than refuse unless you “have time and inclination for that particular calculation”. If I am wrong, I am willing to learn.
You say I should “read the previous post to understand the idea for that calculation”
Which previous post? Could you give me the URL?
JM:
Here is the link:
http://rankexploits.com/musings/2008/what-about-the-solar-cycle-yes-john-v-that-could-explain-the-falsification/
Your notion that your comments are like peer review is naive. Also, it is in your power to do these calculations yourself and post them. If you see a wild hare, feel free to pursue it. You could easily download the data from the links in the spreadsheet you already have, and add it to the spreadsheet as it stands.
I have absolutely no fear that my other readers will think not pursuing your ideas reflects poorly on my judgment or objectivity, or that they will think badly of me for not jumping every time you bark an order that I spend hours doing calculations you could easily do yourself.
You will note that I take comments from many people seriously. In particular, you can easily see that, even when I disagree with Arthur, pliny, Phil, SteveUK, Atmoz, Martin, or JohnV on interpretation, I take them seriously. Why? Because they:
a) describe their reasons in their own words (not by simply linking to other posts),
b) run calculations on their own,
c) don’t bark out orders at me demanding I spend all day doing things they could perfectly well do themselves, and
d) show some signs of understanding phenomenology and/or statistics in their comments.
(Sorry if I left other readers who leave substantive comments out of the list. I’m trying to highlight long time commenters who post actual analyses they did themselves. There are quite a few, and I know I left some out.)
With half the “data” now available for April, it’s almost time for an update!
UAH MSU April 2008: +0.02 °C (decline of 0.07 °C from March)
RSS MSU April 2008: +0.08 °C (no change from March)
The excitement builds …
Dear All:
Lucia’s analysis examined recent global temperature data in an attempt to assess the statistical falsifiability of historical projections of global temperature. It made no assumptions about causes.
However, some of the above postings have attempted to explain Lucia’s findings on the basis of effects of climate cycles, especially solar cycles. Indeed, John V says:
“Ironically, those who are most convinced that the IPCC is wrong tend to argue for a strong solar cycle effect on temperature (and vice-versa). In this case, the two arguments negate each other. If the solar effect is weak, the IPCC trend is falsified. If the solar effect is strong, the IPCC trend for AGW is validated. Nobody can have it both ways.”
Well, sorry, but I do “have it both ways” because observation suggests there are several natural global temperature cycles – many with unknown causes – that are overlaid on each other; and any man-made global temperature effect must be overlaid on them. And I want investigation of the causes of all the apparent climate cycles, not only the solar cycles.
One apparent cycle length is ~1500 years and since the time of Christ it has given us globally
the Roman Warm Period, then
the Dark Age Cool Period, then
the Medieval Climate Optimum, then
the Little Ice Age, then
the Present Warm Period.
Another apparent cycle length is ~60 years so globally there was
cooling to ~1910, then
warming to ~1940, then
cooling to ~1970, then
warming to 1998, followed by
slight cooling (i.e. near stasis).
Is anthropogenic warming preventing the 30 years of global cooling that the 60-year cycle could be expected to provide from ~2000?
Or
Has the 1500 year cycle reached its peak so another long cooling trend is about to start?
Or
Is the apparent existence of the cycles an effect of randomness or of something else?
Or ….
Possible answers to these and similar questions deserve serious investigation.
Importantly, the slight global cooling – near temperature stasis – since 1998 is because warming of the northern hemisphere has been more than cancelled by cooling of the southern hemisphere. Global warming was supposed to be global, not hemispheric.
What one can say is that the basis of man-made global warming theory is challenged by the existing trends. AGW promoters have repeatedly suggested there would be a global warming trend with a variable rate: none of them suggested there would be global cooling for a decade while the atmospheric carbon dioxide concentration increased by ~5% (as has happened).
Lucia’s analysis supports the need for investigation of the above and similar questions.
All the best
Richard