Nicola Scafetta has posted a comment on Benestad and Schmidt (BS09) over at Roger Pielke Sr.’s blog. It’s a fun read. I found the discussion of the difficulties with BS09’s treatment of end points particularly interesting. (The reason I find this interesting arises from the Rahmstorf kerfuffle, which deals with RC author Stefan Rahmstorf’s suggestion that TAR models were under-predicting warming, a suggestion based on his misunderstanding of the effect of his filtering on the information content of a smoothed curve near its end points.)
In fact, the reason why Benestad and Schmidt did not succeed in repeating our calculation is because they have misapplied the wavelet decomposition algorithm known as the maximum overlap discrete wavelet transform (MODWT). This is crystal clear in their figure 4, where it is evident that they applied the MODWT decomposition in a cyclical periodic mode. In other words, they are implicitly imposing that the temperature in 2001 is equal to the temperature in 1900, the temperature in 2002 is equal to the temperature in 1901, and so on. This is evident in their figure 4, where the decomposed blue and pink component curves in 2000 just continue in 1900 in an uninterrupted cyclical periodic mode, as shown in the figure below, which is obtained by plotting their figure 4 side by side with itself:
Any person expert in time series processing can teach Benestad and Schmidt that it is not appropriate to impose a cyclical periodic mode on a non-stationary time series such as the temperature or TSI records, which present clear upward trends from 1900 to 2000. By applying a cyclical periodic mode, Benestad and Schmidt are artificially introducing two large and opposite discontinuities in the records in 1900 and 2000, as the above figure shows in 2000. These large and artificial discontinuities at the two extremes of the time sequence completely disrupt the decomposition and force the algorithm to produce very large cycles in proximity to the two borders, as is clear in their figure 4. This severe error is responsible for the fact that Benestad and Schmidt find unrealistic values for Z22y and Z11y that differ from ours by a factor of three. In their paragraph 50 they found Z22y = 0.58 K/(W m-2), which is not realistic, as they also realize later, while we found Z22y = 0.17 K/(W m-2), which is more realistic.
This same error in data processing also causes the reconstructed solar signature in their figures 5 and 7 to show a descending trend toward a minimum in 2000, while the Sun was approaching one of its largest maxima. Compare their figure 4a (reproduced above), and figures 5 and 7, with their figure 6, and compare them also with our figure 3 in SW06a and in SW08! See the figure below, where I compare Benestad and Schmidt’s figures 6 and 7 and show that the results depicted in their figure 7 are non-physical.
Because of the severe and naïve error in applying the wavelet decomposition, Benestad and Schmidt’s calculations are “robustly” flawed. I cannot but encourage Benestad and Schmidt to carefully study a book about wavelet decomposition, such as the excellent work by Percival and Walden [2000], before attempting to use a complex and powerful algorithm such as the Maximum Overlap Discrete Wavelet Transform (MODWT) by just loading a pre-compiled R package.
There are several other gratuitous claims and errors in Benestad and Schmidt’s paper. However, the above is sufficient for this fast reply. I just wonder why the referees of that paper did not check Benestad and Schmidt’s numerous misleading statements and errors. It would be sad if the reason is that somebody is mistaking a scientific theory such as the “anthropogenic global warming theory” for an ideology that should be defended at all costs.
Nicola Scafetta, Physics Department, Duke University
I encourage you to visit Roger Sr.’s blog and read the rest of Nicola’s reaction to BS09.
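For readers who want to poke at the boundary issue themselves, here is a minimal sketch. It is a toy example of mine, not Scafetta’s or B&S’s actual code, and it assumes the waveslim R package, whose modwt() exposes the two boundary treatments at issue:

```r
# Toy illustration of periodic vs. reflection boundaries in the MODWT.
# Assumes the waveslim package; the data here are synthetic.
library(waveslim)

n <- 1024
t <- seq(0, 1, length.out = n)
x <- 0.6 * t + 0.1 * sin(2 * pi * 11 * t)   # trending series, like T or TSI

# boundary = "periodic" implicitly glues the last point back onto the
# first, so a trending record acquires a large artificial jump there.
w_per <- modwt(x, wf = "la8", n.levels = 6, boundary = "periodic")

# boundary = "reflection" extends the record by its mirror image instead,
# which avoids the spurious discontinuity.
w_ref <- modwt(x, wf = "la8", n.levels = 6, boundary = "reflection")

# waveslim can also flag the coefficients contaminated by the boundary
# assumption outright:
w_nab <- brick.wall(w_per, wf = "la8", method = "modwt")
sum(is.na(w_nab$d6))   # number of boundary-affected level-6 coefficients
```

Comparing the detail coefficients near the ends of the record under the two settings shows the kind of large, spurious border cycles Scafetta describes.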


Yet another of the many “You can’t prove the models are wrong that way either” papers from Drs. ABC, DEF, XYZ and Gavin.
The format is always pretty much the same: 1) generate a “simulated” data set using a GISS model, 2) apply the analysis method (or some poorly implemented version of it) used by the other author to the model simulated “data set”, 3) note that the results based on the model output are either unrealistic or too noisy to be meaningful, 4) declare the other author’s method and/or data are either wrong or not “robust” because they can’t be duplicated with data from the GISS model, and 5) if all that is not convincing enough, offer a red herring alternative explanation for the observed data, and insist the model is still correct in spite of iron-clad conflicting data.
The model is always declared right when it doesn’t agree with real data; or in other words, the model is right because the model is right.
Absurd.
“Scaffetta” is spelled wrong in the title.
Over at WUWT, Leif thinks the TSI considerations of both are wrong. No opinion myself. However, if one goes to poke a hornet’s nest, best do it correctly. So, I don’t care in one sense if the Scafetta paper is wrong; it will stand or fall on its own worth. But I do care that if you are going to attack someone else’s work, you gotta get it right.
John–
I agree with you. I doubt the sun is the explanation for recent warming, and I don’t care if Scafetta’s paper turns out wrong. But, it appears BS09’s criticism may be seriously flawed. It’s also worth noting the sorts of mistakes made in these climate papers. The mistake Scafetta identifies in BS is a rather rudimentary error.
Lucia, I agree. When you are talking about methodology, the data series is a given, so it’s a bit of a red herring to focus on whether the data is ‘right’ or not.
The error identified by Scafetta is like setting a moving average to wrap around the ends, like setting the circular option in R’s filter command to TRUE. Rudimentary indeed.
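To see the wrap-around concretely (toy numbers, base R):

```r
x  <- 1:10                         # a simple trending "series"
ma <- rep(1/3, 3)                  # 3-point moving average
filter(x, ma, circular = FALSE)    # NAs at the ends: honest about missing data
filter(x, ma, circular = TRUE)     # the first smoothed value now averages
                                   # x[10], x[1] and x[2], i.e. wrap-around
```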
David–
Yes. If Leif thinks he can show Scafetta is wrong for a different reason, that’s fine. That wouldn’t fix this flaw in BS09. If they made the error, BS09 failed to show what they think they showed because they did not quite understand what their math did to the data they were processing.
To make an analogy to recent papers: If you manage to show that SOI does explain a huge amount of variability in the observed tropospheric temperatures, MdF&C will still have problems. They failed to show what they think they showed by not quite understanding what their math did to the data they were processing.
The specific details of the mistakes differ, but there are qualitative similarities.
Yes, whether the object of discussion is the mathematics or reality gets confused sometimes. I am submitting a defence of McLean’s SOI, and I get the specific error you mention out of the way in the first sentence. But I still think that SOI as an explanation of recent warming has legs, by way of accumulation of natural variations, so their assertions still could be correct in ‘reality’. In mathematics, small changes in solar insolation could force larger changes in absorption of solar radiation during persistent ENSOs. In ‘reality’ there are lots of uncertainties around the hypothesis.
This is definitely an error of the same caliber as that made by MdF&C. Since being snarky is so much work and I am definitely a low energy snarker, just consider all of the comments made by the Taminoites about MdF&C to be officially made by me here about BS09.
Artifex–
Should we attribute all the Taminoites’ comments about MdF&C to you? Or just the ones that are about the substance of the paper, omitting the ad homs, conspiracy theories, etc.?
The mistake Scafetta identifies in BS is a rather rudimentary error.
For sure.
Astoundingly so, if it’s true. The graphs presented sure look like it though.
A few points:
1. We don’t know that B&S made this periodic error. S&W deduce that they did, based on circumstantial evidence (which looks plausible). I would imagine that Gavin will say something about this fairly soon.
2. As Lucia says, there’s a focus on the endpoint here because of recent discussion of Rahmstorf. But there’s a difference. In R’s paper, the endpoint was very important, because his paper was concerned with where things might be heading. But S&W and B&S are trying to identify a pattern in the whole 100yr profile. It’s bad to make endpoint errors, but they aren’t so critical.
3. You can’t draw the moral that B&S erred (vs S&W) by trying to smooth right to the endpoint. S&W did the same. And it isn’t clear (to me, at present) how they did it.
4. The wavelet issues being more complicated, I looked into S&W’s indignation about the sine amplitude misunderstandings. And whatever it is, it is a misunderstanding. There’s no problem with what they meant to say. But if you look at their 2005 paper, they do simply talk about the amplitudes of sinusoids when they mean peak-to-trough. This is not conventional usage, and they don’t seem to say (in words) anywhere that they are deviating. You are left to work it out from that factor 1/2 appearing in one of their equations.
Nick–
1) Sure. Maybe B&S didn’t make the error. But, it does look like it, since it would be quite a coincidence for the end points to match up if they didn’t. Also, presumably Scafetta has done it the other way and knows that this unlikely coincidence doesn’t happen to occur in this instance. I guess we’ll see.
2) It’s true the issues surrounding the specific end points mistakes are different. We’ll know the magnitude of the difference if someone does the analysis both ways.
3) I didn’t draw that moral. I just note that in both cases, the difficulties have to do with mistakes at end points. It’s pretty common for people to think about how techniques or theories apply in ‘infinite’ mediums– or data with no end points. Then, methods that would work if we just never had to worry about end points are applied, and problems ensue. This looks like it may have been one of those cases.
I’m certainly willing to believe S&W also made some tenuous or downright mistaken analytical steps. That’s something we would need to examine to decide if they supported their conclusions. But that would not wipe any mistakes that might have been made in BS09’s criticism off the slate.
4) When I first skimmed BS09, my thought was that the paper was a bit confusing because they were trying to comment on too many diverse papers in one fell swoop. But, I was mostly just skimming, so that might have been due to my lack of familiarity with the papers BS09 were addressing and the fact that I just glanced at BS09. (BTW: I tend not to believe the solar influence is important. My reason for believing this is that so many of the papers attributing lots of climate change to solar cycles seem to do a heck of a lot of massaging to find the signal in the noise. If the effect of solar was very strong, heavy massaging would not be required. It would just pop out.)
On the 1/2 issue: You know perfectly well that conventions vary from field to field. Also, sometimes texts are less than perfectly clear. Many papers end up forcing you to figure out whose convention they are using by locating the factors of 1/2 or π or √π in the equations. The 1/2 is in the paper. If BS09’s criticism hinges on something being off by a factor of two, but Scafetta’s values are consistent with his equations, then B&S should have paused and re-read the equation to make sure the difficulty was not their misunderstanding (even if that misunderstanding was due to confusing wording by S&W).
If the effect of solar was very strong, heavy massaging would not be required. It would just pop out.
That’s so true. I had a chance to do a project for a company in Chi town that puts coatings on optics. They were experiencing a variety of odd failures over time for which they couldn’t identify any cause.
The president presented me with some graphs and asked if I would do a statistical analysis to pull out the signal from the noise. My response to what truly were shotgun plots in every sense was: typically, when you sit down to do a statistical analysis, you can already see the answer in the data and are simply dotting the i’s. I didn’t take the project but instead recommended a series of new experiments to detect the problem.
If you torture data it will confess.
Nick: As regards your point 4. I looked at the article and the offending sentence reads: “The amplitude A of an oscillating signal, f(t) = (1/2) A sin (2 pi t), is related to the signal variance …”
There’s no excuse based on convention. S&W are stating what they are computing and showing how they compute it. There was nothing to work out.
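A quick numerical check in R bears this reading out: with f(t) = (1/2) A sin(2 pi t), the variance works out to A^2/8, so the A in S&W’s equation can only be the peak-to-trough swing of f.

```r
A <- 2
t <- seq(0, 1, length.out = 1e5)
f <- 0.5 * A * sin(2 * pi * t)
max(f) - min(f)   # ~2 = A: the peak-to-trough swing of f
8 * var(f)        # ~4 = A^2, consistent with var(f) = A^2 / 8
```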
The mistake by B&S should have been caught in review (assuming there was a review) and removed.
Joe/Nick,
I admit I would usually call the product “1/2 A” the amplitude. However, Wikipedia suggests the convention varies from field to field. (No surprise to me.)
In particular, they suggest “Peak-to-peak amplitude is the measure of the change between peak and trough. Peak-to-peak amplitudes can be measured by meters with appropriate circuitry, or by viewing the waveform on an oscilloscope. Peak-to-peak is a straightforward measurement to make on an oscilloscope, the peaks of the waveform being easily identified and measured against the graticule. It remains a common way of specifying amplitude but sometimes other measures of amplitude are more appropriate.”
That is to say, Scafetta is using the terminology those at Wikipedia appear to consider the “most common” usage.
Wikipedia does also admit that “Some scientists[3] use “amplitude” or “peak amplitude” to mean semi-amplitude, that is, half the peak-to-peak amplitude.[2]”
That is: some use amplitude the way I do and the way Scafetta suggests BS09 used the term.
Assuming Scafetta didn’t quickly edit Wikipedia, it appears that some group at Wikipedia thinks his usage is the most conventional.
Lucia: I agree that amplitude is perfectly fine for peak to trough. For example, that would be the amplitude if one were doing Fourier analysis with complex sinusoids instead of real sine and cosine.
My point is that the convention is immaterial. Scafetta is very clear about what he is computing. B&S should have caught their false criticism in the first draft. Failing that, the reviewers of B&S should have caught the false criticism. Failing that, the editors of JGR should have glanced over B&S, knowing that there was a public criticism of another’s work being made, and found the error. Failing all that, Benestad, Schmidt and the editors of J. Geophys. Res. deserve the small embarrassment they have earned by publishing a silly mistake.
Joe–
We agree.
Nick said “they do simply talk about the amplitudes of sinusoids when they mean peak-to-trough. This is not conventional usage”.
According to Wikipedia, Nick is wrong. Scafetta’s use is conventional. I also agree with you that even if their use was not conventional, the reader could easily have figured out what they meant.
Right now, Scafetta’s criticism of BS09 looks plausible. To some extent, we’ll have to wait for the guys at RC to respond to see if they have some explanation for the end points meeting, or their values of the amplitude, etc. Presumably, Gavin will respond.
Well, I am feeling particularly indolent, and ad homs / conspiracy theories are a definite sign of intellectual laziness, so I think I have to go with this one.
The only disappointment is that I was hoping that one of the realclimatati would show up to agree that the math was bad, so I could then be outraged and claim that their heart wasn’t in the defense.
Alas, Nick came along and spoiled my symmetry. Whereas we couldn’t find a lukewarmer to defend the poor mathematical technique in MdF&C, Nick appears like clockwork to defend the indefensible. I must confess to being a bit disappointed.
Is Nick really suggesting that a difference in a scaling constant in a transform is the same class of error as a fundamental misunderstanding of the effects of periodicity within transforms?
Scafetta pretty convincingly argues that the BS09 procedure maps directly to something that is mathematically incorrect. No, we don’t know for sure “exactly” what B&S did, but the fact that their solution maps to something that is demonstrably incorrect does not bode well for their methodology.
The sadist in me hopes Gavin and company defend this paper, but the realist in me guesses that they will just ignore it and hope it goes away (which seems to be SOP when they are clearly incorrect).
Lucia,
I don’t think that Wiki article helps you. If you go down a bit to “formal representation” they say
x = A sin(t − K) + b, where A is the amplitude of the wave.
And regarding that “some scientists” exception, well, look at their figure. They do it themselves.
But OK, let’s trot out the usual resources. Mathworld
Dictionary.com: Physics. the absolute value of the maximum displacement from a zero value during one period of an oscillation.
Or, follow one of the Wiki links to Berkeley, which spells it out very explicitly
Now I explicitly didn’t criticize the S&W paper here. They have been consistent. But they used a word in an unusual way (outside econometrics) with little explanation. Their indignation that B&S might have assumed the conventional meaning is unjustified.
And Artifex, I’m not passing any judgment on the enormity of any errors – I’m suggesting that we should hear what Gavin has to say. In fact, I think what’s happening may be something like this. A Fourier series analysis would assume periodicity, and have trouble because a trend is represented as a sawtooth wave. The real issue is that the model you want – a steadily rising trend – just isn’t in the set of basis functions. I think that is true of wavelet analysis too. So something has to be done. Do you know what S&W did? Detrending before analysis is the usual remedy.
Huh? Isn’t part of the whole reason for using MODWT to look for trends in the first place? I would assume that S&W used the standard mirror symmetrization to avoid the periodicity problem, but I guess they “could” have zero-filled and used some other trickery.
I too will be interested to see what Gavin has to say. For this particular problem it’s pretty hard to say anything but “Oops!”.
Also to the point, the rest of S&W’s piece is pretty damning as well; it is just that this oops is a pretty basic error for someone who has actually used this tool, and it is compounded by the tone of Gavin’s review of S&W’s paper. If you are going to write a snarky review of someone else’s work, it is best to get the very basics correct. It’s a good thing this didn’t make it through peer review …. oh wait! Gosh, how did that happen?
BS09 do we need to say more? (joke)
Artifex,
Subtracting off a trend is fine, as long as you account for it. It is just a way of adding a line to your set of basis functions. It makes whatever happens at the ends less critical. If the “detrended” data still has a trend, according to the analysis, you just add them.
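In R terms, the remedy Nick describes might look something like this (a minimal sketch, assuming a simple linear trend; the annual series here is synthetic):

```r
yr <- 1900:2000
y  <- 0.005 * (yr - 1900) + 0.2 * sin(2 * pi * (yr - 1900) / 22) +
      rnorm(length(yr), sd = 0.05)             # synthetic annual series

fit  <- lm(y ~ yr)       # account for the trend explicitly...
y_dt <- residuals(fit)   # ...analyse the detrended residuals...
# (the band-pass or wavelet analysis of y_dt would go here)
y_rb <- fitted(fit) + y_dt                     # ...then add the line back
all.equal(as.numeric(y), as.numeric(y_rb))     # TRUE: nothing was discarded
```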
Nick:
Eh . . . econometrics? Physics, maybe. And only unusual in your mind. Electricians use peak-to-peak amplitude all the time; so do electrical engineers. Oceanographers use peak-to-peak – sea state is defined as peak-to-peak of the highest 1/3? 1/2? (I forget exactly) of the waves. For irregular oscillations (i.e., not pure sinusoids), peak-to-trough amplitude is used all the time even in physics. One might say that it is a more “general” usage of the word.
And when the “unusual” usage is accompanied by a crystal-clear equation, I think the indignation is more than justified.
Seriously, dude.
Ryan,
Sure people use peak-to-peak amplitude. And they call it that, or some equivalent term. I’m talking about the term “amplitude” of a sinusoid. I’ve given several references – there are lots more, e.g. Physics World, Answers.com. Can you quote any authority for an alternative standard usage?
Nick
I use amplitude the way you & BS09 did. But it’s nonsense to say the Wiki article doesn’t say what it says about the convention. They explicitly say peak to peak is the most common. Maybe they are wrong–but they developed that impression somewhere. So, clearly, people need to check the convention used when interpreting whether or not papers are reporting correct results.
Indignation aside: If BS09 does, indeed, contain this mistake, B&S did not read carefully, and then found their numbers differed from those reported in SW by a factor of 2. Finding the discrepancy, they didn’t re-read the equation to notice that it contained a 1/2. Instead, that part of their criticism of the paper is based on their own misinterpretation, not the contents of the paper.
Publishing that sort of criticism does not contribute to scholarship, to the advancement of science, or to convincing those who disagree with B&S that they carefully check what they believe or say. That fault would lie on B&S’s side, not Scafetta’s.
There can be no doubt that B&S did make the mistake that Scafetta says they did, in the application of the wavelet transform. Otherwise it would be an astonishing coincidence that the graphs match up perfectly in their magnitude and their slope!
Also, the long discussion of the meaning of ‘A’ here is completely irrelevant. As pointed out by Scafetta, and Joe, and even by B&S themselves, the meaning of A is quite clear. There is an obvious factor-of-2 error in B&S, between para 48 and para 50. (It’s worse than you say, Lucia; they don’t even seem to have read their own paper.)
Looks like the RC folks are preparing a response: http://www.realclimate.org/index.php/archives/2009/08/still-not-convincing/comment-page-2/#comment-132824
The amplitude issue seems on its face a fairly straightforward mistake, though it will be interesting to see the arguments on that and the other points raised in this back-and-forth.
Lucia,
Wiki doesn’t explicitly say peak to peak is the most common meaning of “amplitude”. In fact it doesn’t equate the meanings at all.
Up to March, 2008, they were quite explicit and correct (see diagram). After that someone made the top bit mushy. As I pointed out, later in the article they revert to correct usage.
Nick,
Amplitude is not well specified in the way that radius and diameter are specified. Anyone using amplitude in a specific sense would do well to define their use of the term, which I would regard as interchangeable with magnitude. In an electrical engineering sense, peak, or peak to peak, or rms are all possible and should not be assumed. Should I care that you think the word has a specific meaning?
Nice to see that so many people are interested in our paper!
I’ll make a few points here since the discussion seems a little more focused than in other venues….
First off, B&S09 clearly stated that we had not been able to fully emulate Scafetta and West’s methodology and so statements that we did something different to them were to be expected. The issue with the periodic vs. reflection boundary conditions in the wavelet decomposition does make a difference – but what they used was never stated in any of their papers (look it up!). The amplitude issue is also valid, but actually has no implication for the calculation since it is the ratio of two amplitudes that goes into the empirical model that S&W use. The factor of 2 cancels out.
Secondly, this will all end up being beside the point. The real issue that we were trying to address is that many methods used to calculate attributions – like S&W or multi-linear regressions – are very sensitive to co-linear forcings (i.e. increases in both CO2 and solar) and unforced noise (particular in frequency bands that are being specially selected for). In the test cases we looked at (different GCM simulations using different sets of forcings) we knew what the answer should have been, and these methods were not able to correctly identify the actual solar trends.
This is true for our emulation of S&W (even including a switch to a different boundary treatment for the wavelets) and very likely true of their version as well. (Note we still don’t have a perfect emulation, so perhaps you guys could agitate for some ‘code freeing’ to help out 😉 ). (NB. I don’t think that wavelet analysis really helps very much for this problem – more standard band-pass filtering gives similar results. We only used it because S&W did).
Third, a couple of people commented on the issue of which solar reconstructions should be used. I am actually working with a number of groups to update these estimates for the AR5 models, but the reason why we used Lean (2000) in the paper was simply because that was what the model simulations had been run with back in 2004 (and which were what we had available). I have no reason to prefer that over any more up-to-date reconstruction, and GISS (at least) will be using one of them in the AR5 runs. To a large extent the point is moot in this paper, since the idea was to detect something we knew was there – because if a method doesn’t work when you know the right answer, it is unlikely to work in the real world where you don’t.
Finally, in case anyone was in any doubt, I plead guilty to not being perfect.
“Finally, in case anyone was in any doubt, I plead guilty to not being perfect.”
No worries Gav– we know you aren’t.
Hi Gavin,
First, to clarify: You did use the different boundary condition on the wavelet analysis? If yes, are you going to publish the results when you use the proper boundary conditions on the wavelet analysis? Either as a corrigendum or at RC? You’re kinda’ sorta going to have to, because otherwise doubters aren’t going to believe your main claims about the solar regressions’ lack of robustness still being demonstrated when you use the correct BCs. (I actually do believe your main results hold up, because I believed them not only before I read your paper but before you even wrote the paper.)
I have some sympathy for you here. Failing to completely specify all features of a method is common enough in many papers. In fact, the ambiguity of many texts is often discussed on blogs requesting that codes be published along with papers.
My sympathy is somewhat moderated by the fact that when the method used is ambiguous, it is best to assume the authors used the correct method (non-periodic boundary conditions) and replicate that. If that doesn’t work, try another set. Then, later, if all else fails, ask the authors, or… ask them for their code. (Then if they don’t give it to you, cry bloody murder at your blog.)
Well… that all depends on what the question is, right?
If the point is: “Does BS09 as written correctly show precisely what is wrong with that particular S&W paper (and other solar papers)?”, then specific errors you may have made would never be beside the point. If the point is, “Are the uncertainties associated with regressions in those papers much higher than represented in the various solar papers?”, then some specific errors you made might be beside the point in the sense that you got the correct answer (which is probably “Yes. Their uncertainties are whoppers!”)
I know that you think the second point is more important in the big picture. But the first point matters too. I think you know that, and given some things you post at your blog, I think you know why it matters.
I had skimmed your paper, read that this was your main point and thought, “Yeah. Colinear independent variables in a multiple regression. Classic trouble area.”
The colinearity along with the large range of possible filtering methods is why I generally don’t like these solar papers trying to tease out effects that seem buried.
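A toy sketch of that classic trouble area (my own construction in base R, not B&S’s actual test): give a regression two nearly co-linear “forcings” and a known answer, and watch the attribution wobble.

```r
set.seed(1)
n     <- 100
trend <- seq(0, 1, length.out = n)
co2   <- trend + rnorm(n, sd = 0.02)    # two "forcings" sharing the same
sol   <- trend + rnorm(n, sd = 0.02)    # upward trend: co-linear by design
temp  <- 1.0 * co2 + 0.0 * sol + rnorm(n, sd = 0.1)   # truth: all CO2

cor(co2, sol)                  # ~0.99: the regressors are nearly identical
summary(lm(temp ~ co2 + sol))  # the coefficient standard errors balloon, so
                               # the solar "attribution" can land far from 0
```

With the regressors correlated at ~0.99, lm() cannot apportion the shared trend between them, which is the co-linearity problem in miniature.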
On the point about the robustness of the results of analyses to identify the strength of the solar variations on temperature increases: I tend to suspect the uncertainties are whoppers. I think the idea of showing the statistical methods would not work based on models is kinda sort of ok, but not definitive. (But it’s interesting and reportable even if these sorts of things don’t, in my mind, put the nail in the coffin.)
Yep. This is where “code freein’” would help out. Have you asked Scafetta for his code? You may not need anyone to agitate for you. 🙂
I already asked this, but are you saying you have now repeated the analysis using the boundary conditions Scafetta says you ought to use?
Understood. Sometimes, the reason one picks a specific technique is it’s the one someone else used. If nothing else, this shows that you weren’t out hunting for the worst method out of many to support the claim that all possible methods were bad.
Sleeper–
I think this is a good time to try to be nice.
Reading Gavin’s comments above, one wonders why on earth they bothered to write a paper which was unable to fully replicate the original methodology, couldn’t correctly identify the solar trends, and will obviously be ignored in the updating for AR5.
Well, I’m not a scientist but this whole process seems to give off an almighty stink to me!
Gavin,
A nice reply; it shows character. I would be happy to request code from Dr. Scafetta on his work, although Dr. Steig was never forthcoming. Can you be specific as to which paper you require it for, since your result goes after so many works?
Leif’s response to all this was to state that the solar series requires corrections which nearly completely remove any trend from the long term forcing. Are you familiar with his work?
Lucia- re:(Comment#17444)
Sorry, point taken. I was smiling when I wrote it tho. I’m sure Gavin posts here out of respect for you, and I wouldn’t want to damage that. Thanks.
(Shuffle off stage left, hat in hand)
Lucia, I don’t think you can characterise the SW vs BS boundary conditions as correct vs non-correct. Remember, we’re talking about the band-pass components D8 (centered on 22yr) and D7 (11yr). The non-stationary part has already been taken out with a smooth (in BS, a fifth order polynomial approximation). I gather from Gavin that the alternative that SW would have used is reflection – i.e. preserving continuity, but with a sudden reversal of gradient. In time series terms, it’s minimum slope. Neither is “correct”. Of course, the SW method should have been used for consistency, had it been stated.
Gavin:”(Note we still don’t have a perfect emulation, so perhaps you guys could agitate for some ‘code freeing’ to help out ). ”
ROFL
While we’re doing that, perhaps you can too with your peers?
Start with MBH98 and work up from there.
Nick–
Come on now. Really.
Verify the calculations; all calculations. That activity is underway on a couple of other calculations.
” . . . so perhaps you guys could agitate for some ‘code freeing’ to help out”
I’ll trade you some code-freeing agitation for some coding and methods and calculation Verifications. Verification, not model Validation. But we do really need to get to that, too.
OK, Lucia, do you know what is “correct”? And why?
Allow me to translate from Nickese to something a bit more understandable:
“I gather from Gavin that after he realized that he had made a very naive mistake with a piece of unfamiliar software, he quickly did just a little bit of on-line search and realized that the standard way this is handled is with mirror symmetrization as pointed out to me by Artifex in his above comment.”
“Of course there is still some error in this approach due to finite sample size just like in Gavin’s naive approach. The fact that there is some error in the carefully considered approach (which has been examined by many people who fully comprehended the nature of the MODWT) renders it equivalent to the approach of a tyro who blunders across the problem without being aware of its existence.”
“Really, they are both in error, pay no attention to the magnitude or nature of the error. Just the fact that there is error in both shows that S&W were really just as clueless about this as B&S”.
My friend, you have a future in broadcast journalism !!
Artifex, I think the brevity was lost in translation 🙁
It figures that Nick would try and argue semantics as a way out of a losing argument. Just more sophistry from him unfortunately, and this after running away when pushed to defend his own imprecise terminology.
There isn’t any set definition of “amplitude” in science, period.
That’s why we use terms like peak amplitude (max(abs(y)) in an interval), rms amplitude (sqrt(avg(y^2)) in an interval), peak-to-trough aka peak-to-peak amplitude (max(y) – min(y) in an interval), half peak-to-peak amplitude ((max(y) – min(y))/2), or finally instantaneous amplitude y itself.
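Spelled out in plain R for a concrete signal, just to pin those definitions down:

```r
t <- seq(0, 1, length.out = 1e4)
y <- 3 * sin(2 * pi * 5 * t)    # five full cycles with peak value 3

max(abs(y))            # peak amplitude:                      3
sqrt(mean(y^2))        # rms amplitude:                       3/sqrt(2) ~ 2.12
max(y) - min(y)        # peak-to-peak (peak-to-trough):       6
(max(y) - min(y)) / 2  # half peak-to-peak (semi-amplitude):  3
```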
Before Nick Stokes dips into his sophistic toolbox, I’ll simply point out you can find all of these variations in terms if you search scholar.google.com to verify that they are in common usage.
If there were a unique meaning to “amplitude” there would a) be no reason for qualifiers and b) we would use a word other than “amplitude” to describe the thing peak, peak-to-peak, etc are qualifying.
These statements are true based on simple linguistic analysis and the only reason to argue them is either for purposes of obfuscation or simply because you don’t understand science well enough to be able to perform the linguistic analysis yourself.
Sorry if this is a bit harsh, but sophistry gets me a little riled up.
Nick,
I think the point is that everyone makes mistakes, even really stupid mistakes. If there were a hell for mistakes in basic arithmetic, I would certainly be banished to the 9th circle. (As they say, “never trust a mathematician with basic arithmetic”.) Heck, sometimes I even make huge conceptual mistakes. When I make mistakes, however, I admit to them and wonder how I could do such a silly thing. I try to be objective and honest about my own foibles. With this in mind I also tend to have a sense of mercy for those who are trying desperately to comprehend but are not getting the point, as I am often in their shoes.
That being said, I tend not to have much patience with dodge and weave tactics. I am used to seeing them in design reviews and thesis defenses where the weaker side has nothing left to bolster their argument so they obfuscate and divert. I must confess a tendency to quickly decide that the technical argument must have already been decided when I see these tactics in play even if such an assessment is too early on my part.
I would also point out that as a general sophistic tactic, this fails as well. Not only do you not convince any third party watching the debate on a specific subject, you convince the third party that you are arguing from a position of weakness. If you are known to admit clear errors, your history follows you and you are more convincing in the future.
There are good reasons that I consider everything that Lucia, Steve M or Leif Svalgaard says even if I don’t agree with them. Whenever Gavin says something I immediately wonder what he is not telling me and if in fact he is “bluffing his way through the problem”. In both cases, this has little to do with the “correctness” of the arguments and more to do with the way those arguments are presented, and whether I feel I am going to get an honest argument or just spin.
I would argue that you would be best served by not beating this dead horse. In this particular case Gavin shot an airball – it happens. Let’s not try and paint it as a glorious play by a superstar that should be excused by the fact the others miss the basket too. The rest of us would be best served by holding ourselves to a higher standard and showing Gavin a bit more mercy than he shows others. Gavin did botch it some, but it sure seems to me that at least a few of his arguments are intact.
That longwinded tirade being over, Gavin does make one very, very good point. We should definitely hold S&W to the same standards that we attempt for the Mann papers or Steig’s latest. If there is confusion as to the methods in S&W, this should promptly be remedied.
Oddly enough I got into the ‘amplitude’ discussion twice this year outside of blogland. I was incorrect the first time on this and right the second. It’s an easy one to mess up after you’ve worked in one field for long enough.
Artifex (Comment#17467) August 6th, 2009 at 7:12 pm
“I think the point is that everyone makes mistakes, even really stupid mistakes.”
Hear, hear!
Thanks for a very thoughtful post. I hope Gavin reads it.
Gavin
(Comment#17440)
Thanks for dropping by to explain. I stopped by Roger Sr.’s site today and was disappointed to find out that Scafetta had not released code. Putting other scientists in the position of having to try to emulate a method is inexcusable. Nothing good results from it.
I also don’t like that Scafetta posts on a blog where people like me don’t have the opportunity to take him to task for not sharing code. I don’t mind tilting against windmills. Lucia, would you like to join me in a request to Dr. Scafetta for his code? Maybe the two Jeffs and Ryan and others would like to chime in as well.
Lukewarmer: free the data; free the code; free the debate
stevenm–
I’d be happy to ask Scafetta for the code. However, before we all descend on Nicola, I’d like Gavin to request the code. Then, if Scafetta refuses it, we should all impress upon him that we think handing out code is a good thing.
If Gavin says he asked and Scafetta refused, we’ll ask. (I’ll email Gavin.)
stevenm– I emailed Gavin to find out if he requested and was refused. JeffId already volunteered to ask for the code for Gavin.
I quite like the way the Blackboard is becoming a neutral zone. Please leave all sidearms at the door as you enter.
Hi Lucia,
I see that my comments on the BS09 paper have attracted some attention. I would just like to add a few comments.
The problem is not the code, but how one uses the code.
The code BS09 used should be right (I do not know; they just loaded an R package without understanding it). It is the way they used the code that is wrong, because they do not know how wavelets work and how they must be used in order to work.
One needs to study the book that I reference in my paper SW05: Percival, D. B., and A. T. Walden (2000), Wavelet Methods for Time Series Analysis, Cambridge Univ. Press, New York. The codes are here:
http://faculty.washington.edu/dbp/wmtsa.html
Moreover, it is not true that I do not explain how I did the calculation. The paper is sufficiently clear if somebody spends some time reading it and if s/he understands some basic math.
The problem I saw in BS09 is not just the evident mathematical errors it contains but the arrogance of the tone. They never contacted me to ask for a clarification, for example.
If somebody does not understand a paper by another author, he/she should not write a paper claiming to disprove that author when what he is really criticizing is his own misunderstandings.
What people do in such a situation is to study those papers better and become more familiar with the methodology. It may need time and study. I believe this is called “intellectual correctness”.
Unfortunately, I did not find this “intellectual correctness” in BS09 nor, indeed, in Benestad’s articles on realclimate that are full of gratuitous errors as well, as I prove in my rebuttals there.
That is not the behavior of somebody who would like to learn, but a very different behavior, indeed. Let us hope for the future. People can change after all.
In any case, the entire logic of the BS09 paper is poor, contrary to what Gavin claims. It is not possible to disprove an empirical analysis of real data by just using synthetic data produced by a model (GISS modelE in this case) without first proving that the model correctly reproduces the data patterns that are to be tested. This is basic philosophy: from a false statement (the modelE simulations) one can deduce or disprove whatever one wishes, so no definitive proof can be deduced from it.
The proof that GISS modelE correctly reproduces the patterns is still missing; or rather, it is proven that GISS modelE does NOT reproduce the data patterns (read carefully Lean and Rind (2008), which has also been misinterpreted in BS09). In particular, GISS modelE predicts too small a solar signature relative to what is actually seen in the empirical studies.
There is another important issue that many people are still missing. That is, what the sun did. Did it increase from 1980 to 2000 as ACRIM says it did, or did it not, as PMOD claims?
I understand that this is a difficult issue. Many people claim that the sun did not increase. I cannot blame them. After all, it was since the times of Aristotle that people believed in a “constant sun”! This Aristotelian belief has not vanished yet!
The issue must be tested in a scientific way: that is, by measuring what the sun really did. To do this it is necessary to do experiments that are specifically designed to measure TSI. These experiments exist and are the satellite TSI measurements. What these measurements tell us is that TSI more or less did what ACRIM claims; that is, TSI increased from 1980 to 2000 and now it is going down.
PMOD is not very different from ACRIM since 1992. Before 1992, PMOD alters the published satellite data using TSI proxy models that do not show any trend, such as Lean’s. So, it is not a true satellite composite.
Now, people believe PMOD and not ACRIM. Why? Because the proxy TSI models do not show the trend from 1980 to 2000. This is a form of circular reasoning, isn’t it?
What is important to understand is that the proxy models are built on data taken from surface observations which were NEVER intended to measure TSI; they just measure a very few frequency lines of the entire spectrum! So, it is not scientific to use them to test the satellite observations that were specifically designed to measure TSI!
The right scientific logic goes the other way around: one uses the better experiments to test the lesser experiments, not the other way around! This is basic scientific logic that BS09 and many others do not yet grasp.
However, somebody may state that before 1992 there is some disagreement among the satellite teams. Fine! Let us see what happened AFTER 1992, then.
The fact is that all satellite experimental teams agree, I repeat, ALL SATELLITE EXPERIMENTAL TEAMS AGREE, by using the most advanced machines designed explicitly for measuring TSI, that the TSI minimum in 1996 was significantly higher than the minimum in 2008.
Now, what do the proxy models such as Lean’s predict?
Answer: they do not show this decrease. More precisely, Lean’s proxy model predicts that the TSI in 2007/8 is higher than in 1996!
Thus, from a purely scientific point of view there is no evidence that PMOD is correct, because it is based on the assumption that Lean’s proxy is fine, which has just been disproven. Also, the sun presents more variability than these proxy models show.
In conclusion, the TSI proxy models give us only an approximate TSI behavior. They should not be blindly trusted for every detail. And we need to work to find a better solar model.
In any case, right now it is not “scientific” to dismiss ACRIM as BS09 do, just because PMOD and Lean’s proxy model fit their AGW theory better.
Dr Scafetta, are you familiar with the work of Leif Svalgaard? He believes that there is no evidence for the long term “background” trend in older TSI data sets (i.e. the proxies for TSI before the measurements) that was assumed by several. He reckons that the solar minima centuries ago were the same as now and not significantly dimmer as some of the older data suggest. Would using such different, lower variability reconstructions affect the various results from your methodology much?
http://www.leif.org/research/TSI%20(Reconstructions).xls
If Gavin was unclear about how Scafetta handled the boundaries, the solution is simple – ask him!
I did exactly this a few days ago (because it wasn’t clear to me on reading S&W05) and he replied within 24 hours. [The answer is that he takes the solar data up to a maximum (2002) and then uses reflection to extend it as far as needed for the wavelet transform.]
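For concreteness, that sort of mirror extension can be sketched in a couple of lines of R (a toy illustration; the exact padding convention S&W used is not documented in the paper):

```r
x     <- c(0.1, 0.4, 0.3, 0.6, 0.8)  # hypothetical record ending at a maximum
x_ext <- c(x, rev(x))                # reflect the record about its end point
x_ext  # 0.1 0.4 0.3 0.6 0.8 0.8 0.6 0.3 0.4 0.1 -- continuity is preserved,
       # but the gradient reverses at the join, as Nick noted above
```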
I think the blame here lies primarily with the journal JGR. Gavin’s paper refers to S&W many times quite critically, even in the abstract. Obviously the journal editor should have used S or W as a referee for the paper – then many of these questions could have been sorted out before publication and much aggravation could have been avoided. But it looks like that did not happen.
Anyway, it is good to see both parties discussing the issue here in a civilised way.
(Comment#17482)
Hi, Dr Scafetta,
When we hounded Gavin and Hansen about GISTEMP we asked for and received:
1. A copy of the data–AS USED
2. A copy of the source–AS USED
3. instructions required to compile or run that source code.
Here are some sample resources to get an idea of the concept behind this request. The Wavelab piece is nice:
sepwww.stanford.edu/public/docs/sep67/jon2.pdf
www-stat.stanford.edu/~wavelab/Wavelab_850/wavelab.pdf
http://www.stanford.edu/~vcs/papers/LFRSR12012008.pdf
Basically what is needed is a turnkey replication package.
So, thanks for stopping by. If you are lucky, Lucia will make you brownies. Trust me, they are delicious. Ha, there’s an idea, Lucia. I bet Gavin would like some as well.
Dr. Scafetta,
Thank you for taking the time to reply. There are several questions I have and a few points which are worth repeating. First, Leif claimed the data series still matches satellite trends; my own opinion on this is simple – I don’t know. He wasn’t able to provide the data for this claim yet, as he said it isn’t in a sharable state. Can you provide a link, email, or some direction as to where we can find data to verify the differences between the satellite and other sets?
You make this point:
The problem I saw in BS09 is not just the evident mathematical errors it contains but the arrogance of the tone.
We are all very familiar with Dr. Schmidt’s personality disorder. It’s hard to call it anything else. Don’t feel too singled out on this point.
They never contacted me to ask a clarification, for example.
Dr. Schmidt claims that he needs help getting code for clarification, yet it seems he hasn’t attempted to contact you? I think perhaps a little handholding can smooth this out.
I think many of us agree completely that the wrapping of the wavelet function was a silly mistake which shouldn’t have required contacting anyone. I’m not just jabbing you, Gavin; it’s true. Claims that this particular aspect wasn’t explained are weak.
[self snip] – Venting a bit.
If Gavin has been denied certain code as he appeared to claim above, I would support his request. There’s no valid reason to slow people down in replicating someone’s work. It’s science after all and good work will stand on its own. Did someone refuse Dr. Schmidt access to computer code or was this just a distraction?
RE: nicola scafetta (Comment#17482) August 6th, 2009 at 10:48 pm
My emphasis.
While I can’t follow all the nitty-gritty stats stuff, on this point alone I can dismiss the paper.
This is so absolutely fundamental, and it continues to be ignored by those responsible for producing all the AOLGCM calculations. It’s Validation.
In no other applications of models that I am aware of is it allowed to be ignored.
Steve Mosher, you say:
As the point person on that engagement, I disagree with this characterization of events. I asked for source code and was point-blank refused. The source code was provided only after the “Y2K event” received massive publicity during the middle of a NASA space launch. As a result, non-climate reporters covering NASA got interested in the matter.
Even after the Y2K event, Hansen refused to disclose code. From all appearances, this was one occasion where his superiors managed to make him do something. The code was placed online with an extremely sour comment that true scientists would wait for the streamlined code that would be forthcoming in a few weeks, but which, of course, never arrived.
Having said that, once they decided to make the code available, it’s been made fully available, warts and all.
But there were pretty unusual circumstances and this remains a highly exceptional case. The code for Mann et al 2008 was published – again an exceptional case.
Failure to provide code has left some issues unresolved for years – the calculation of MBH99 confidence intervals remains a mystery after all these years, as does the retention procedure for MBH principal components. (Mann’s archived source code provided to the House Energy and Commerce Committee was incomplete.)
Solving a few old mysteries like this would be something that would be much appreciated.
Nicola, you say:
You’re totally misunderstanding what we like to see in these situations. “Code” in this context is the script in R or Matlab that is used to generate results. I routinely post these sorts of scripts at Climate Audit. I typically include print lines for important statistics and paste the statistics into the script as comments.
Such a script will include the plots.
I’ve modified my style over time to make these scripts turnkey with public materials so that the script will call and download any required data from public directories. If necessary, I’ll place data or data collations online.
This is the “code” that people are seeking, not the insides of the R-package.
This is the “code” that people are seeking, not the insides of the R-package.
Like the hundreds of lines of unreleased Matlab code for Steig et al. Lately I’ve become interested in Comiso’s data processing for the sat data; it cannot have been easy to make the NSIDC dataset look that good, and the difficulties Ryan has had with Comiso make me think there’s some magic in there too.
The methods in code are difficult to fully explain and often a small difference in setup code can make a substantial difference in the understanding of the processes.
Congrats Lucia on the best comment thread of BS09!
Drs. Schmidt and Scafetta both posting their points of view and reasoning. Lucia scooped them all this time. [My son was right, there isn’t anything that brownies can’t solve.]
Nicola
First, thanks for stopping by. There are several issues related to BS09 getting hashed around:
1) The tone of BS09. I agree with you that its tone was inappropriate for a journal article. The peer reviewers should have suggested toning the thing down. (As a general comment, I think Gavin’s tone, and the tone of RC, does as much harm as good to his goal of disseminating his notions about what is or is not proven, certain, or tenuous in research on climate change. I suspect he has a different opinion, but, then, I don’t think he understands the minds of those who are put off by his tone.)
2) The issue of TSI. I don’t follow that much, so I know very little about it. My readers are interested in various points of view, and if you like, it would be interesting to read a blog post discussing the various points you and Leif argue over. (I could set you up as an author here. I should warn that getting involved in comments can be time consuming.)
3) The issue of how to test the robustness of a statistical method. I’m about 1/2 way between you and Gavin on this one, and I’m trying to get my mind around how to explain the general issue of colinearity.
Basically, I agree with your points that, if simulations from a model like ModelE do not correctly reproduce the effects of the sun, and the variability of “weather noise”, then you can’t properly test the “robustness” of the statistical method against that simulation. On the other hand, I think that the robustness of statistical methods should be tested against some simulated data. The question is “what type of simulated data”? I may have to think about what simulated data I would test your model against.
However, on the balance, my gut has always told me that the issue of colinearity in GHGs and TSI, plus all the noise due to volcanic eruptions, makes it very difficult to tease out the solar signal. So, I have always tended to suspect there is lots of uncertainty in the empirical determination of the solar signal. (That said, I prefer to have empirical determinations published. I don’t think the fact that they disagree with models is any reason to disbelieve empirical determinations! That’s nuts.)
4) The “free the code” issue. When people discuss this, what they mean is that you should provide the R script you ran to get your results. I’m not as hardcore on this as some others. However, had Gavin asked you for the code, and you refused any code at all, I would suggest you were in the wrong. However, based on your report, it appears he did not request the code.
So far, I can’t fault you for not providing a code Gavin never requested. (I suspect a few other people will now be asking you for your scripts to play with.)
5) There is an issue associated with whether or not the narrative describing your method is clear. Gavin thinks a portion of your narrative was unclear as to details. You think it’s clear. I personally don’t know because I did not read S&W05 and certainly did not try to reproduce the results.
However, many of us know that when we try to reproduce the results of any paper, portions of narrative can be fairly clear, but they rarely read like instruction manuals. (If papers read like instruction manuals, they would not pass peer review.)
So, when, in Gavin’s judgment, a portion of the narrative in S&W was unclear to him and he wished to reproduce it, it was his responsibility to contact you to ask for clarification.
Gavin is a big boy and he should know this. So should Rasmus.
Instead the two invested lots of effort in testing a method you did not use for “robustness”, and documented that. That was foolish on their part.
So, in this regard, I can see no fault on your side. Gavin should have contacted you for clarification. Period.
6) You didn’t bring this up, but Gavin did. He says, in comments here, that your results will remain “not robust” when applied to his ModelE simulations even if he applies the boundary conditions you use. Notwithstanding my reservations about precisely what can be proven about a statistical method by testing against ModelE simulations, that information is worth knowing. Since the BS09 paper is now “in play”, and the error B&S made publicised, I think this claim is something Gavin and Rasmus will need to demonstrate, and they will need to document the demonstration. (I told Gavin so above. He hasn’t responded, so I don’t know what he thinks of my advice.)
Gavin Schmidt stated:
I’m trying to locate where BS09 “clearly stated that we had not been able to fully emulate Scafetta and West’s methodology and so statements that we did something different to them were to be expected.”
On their page 8, they say: “We repeated the analysis in SW06a and SW06b, and tested the sensitivity of the conclusions to a number of arbitrary choices. The methods of SW06a and SW06b were used (1) …. We reproduced the SW06a study…”. I did not see any caveats attached to these statements. Rather than “clearly stating” that they had not been able to fully emulate SW, these statements seem to state the opposite. The only qualifications to this statement were very limited.
BS09 said:
This was the only use of the word “emulate” or “emulation” that I was able to locate. The only other caveat that I located was:
Perhaps there’s a “clear statement” elsewhere in the article that I missed. If so, perhaps someone can bring it to my attention.
SteveMc
(Comment#17490)
I don’t disagree one bit with your characterization. With Gavin and Nicola here willing to discuss things, I didn’t want to clutter up that discussion with recounting all the details of your efforts to free the code, but rather wanted just to let Nicola know what my expectations were and to let Gavin know that my requests for this kind of stuff are even-handed, while not always even-tempered.
One more qualification in BS09. They said: “Scafetta and Willson [2009] claim that the ACRIM composite is more realistic. However, the latter paper did not provide any detailed description of the method used to derive their results.” However, Scafetta and Willson 2009 is not the same paper as Scafetta and West, and a caveat regarding Scafetta and Willson 2009 obviously is not a “clear statement” in respect to Scafetta and West.
>Congrats Lucia on the best comment thread of BS09! Drs. Schmidt and Scafetta both posting their points of view and reasoning.
Yes, this is good. Some competing threads include one at ClimateAudit about Rahmstorf smoothing where Grinstead and Moore both replied.
RealClimate had a good one about Steig ‘on overfitting’, but then they closed it down before the principals could make their case.
This may be a completely uninformed comment since I haven’t looked at any of the details on this… however, having read the above discussion it seems to me the central issue is how robust the correlation claimed by S&W is. Gavin Schmidt’s approach to assessing that was to run (almost) the same analysis with simulated data and see how it does in picking up known causative relations – in principle that seems like a good idea, even if there may have been some issues with the way they did it. However, Scafetta here seems to object to using simulated data at all – so how else would one go about it?
Other possibilities would be simply to change the parameters on the fit – use different wavelets, expand or contract the time period looked at, try out different TSI models. Has this been tried? Any other ideas for assessing robustness and error bars in a case like this?
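One low-tech way to act on that suggestion, sketched under the same illustrative assumptions as above (toy series, waveslim, arbitrary filter and window choices): rerun the decomposition while varying the wavelet and the analysis window, and watch how the near-22-year detail variance moves.

```r
library(waveslim)

set.seed(1)
yr <- 1900:2000
x  <- 0.007 * (yr - 1900) + 0.05 * sin(2 * pi * (yr - 1900) / 22) +
      rnorm(length(yr), sd = 0.1)

for (wf in c("haar", "d4", "la8")) {       # different wavelet filters
  for (start in c(1900, 1910, 1920)) {     # expand/contract the window
    idx <- yr >= start
    d   <- mra(x[idx], wf = wf, J = 4, method = "modwt",
               boundary = "reflection")
    # D4 spans scales of roughly 16-32 years at annual sampling
    cat(sprintf("wf=%-4s start=%d var(D4)=%.5f\n", wf, start, var(d$D4)))
  }
}
```

If the band variance swings wildly across rows, the estimate is fragile to these arbitrary choices; if it barely moves, that is at least weak evidence of robustness.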
I agree with this point. It’s the same sort of issue that I raise all the time with bristlecones – the corresponding problem with bristlecones doesn’t seem to bother anyone in the “Community.” Does it bother you, Arthur?
Arthur:
That is the most important scientific issue in the longer term. Obviously, there is also a pissing match going on. These things happen in research. . .
My view is “on the one hand/ on the other hand” here.
The idea of testing statistical methods against simulated data is entirely sound.
But it’s also very important to ask: Which simulated data is appropriate for testing a statistical method? Clearly, if the simulated data bears no resemblance to the real data, it would be inappropriate to use for testing the statistical method to determine whether or not it’s robust.
I’m not sure he objects to using it at all. I think, to some extent, there is a difference of opinion over whether or not GISS ModelE has been demonstrated to be sufficiently faithful to real earth’s weather response to use to test the statistical method.
I’m not sure it has been shown to be sufficiently faithful to test the statistical method. But, that doesn’t make what Gavin did useless.
To my mind, using ModelE data to test the uncertainty associated with application of Scafetta’s statistical method is interesting, useful, and worth publishing. However, finding that S&W’s results are not robust on that basis does not by itself put any nail in the coffin of S&W. But, since no single paper answers every possible question, the BS09 analysis, if modified to use S&W’s exact method, would give us information we could draw on to assess our level of confidence in the S&W05 method.
lucia rankexploits.com (Comment#17509)
“I think, to some extent, there is a difference of opinion over whether or not GISS ModelE has been demonstrated to be sufficiently faithful to real earth’s weather response to use to test the statistical method.”
I believe that is something of an understatement. Lots of people have very serious doubts about Model E and other GCM’s. I think Nicola Scafetta has it about right:
“It is not possible to disprove an empirical analysis of real data by just using synthetic data produced by a model (GISS modelE in this case) without first proving that the model correctly reproduce the data patterns that are to be tested.”
The burden of proof lies heavily on whoever wants to use simulated data from a model to disprove something. Nobody objects to using well established theoretical constructs to question real data or the analysis of real data (thermodynamics to question claims of perpetual motion, for example), but global circulation models seem to me very far away from a well established theoretical construct. Heck, they can’t even make a reasonably accurate prediction of average temperature over 10 years!
SteveF–
I halfway agree with you. However, the bit you are missing is that there are two things that need to be proven:
1) Scafetta proposed an ad hoc (i.e. special case) analytical method to dig a signal out of noisy data. That method is based on some sound physical principles, but it has not been demonstrated to be able to accurately pick a signal out of noisy data. We don’t know how to compute the uncertainty associated with the method’s best estimate, and we don’t know the bias in the method. (In contrast, least squares regressions can pick out the best-fit line for data, and we know how to estimate the uncertainty under certain circumstances.)
2) Gavin set out to test whether or not Scafetta’s method (and several others) can pick a signal out of noisy data. He decided to pick ModelE simulations. Because he has many simulations, he knows what the signal looks like for his data. Gavin found that, at least applied to ModelE, Scafetta’s method does not pick out the signal from his noisy data.
For now, let’s skip the issue of Gavin not entirely reproducing Scafetta’s method. (Though that matters.) Let’s just assume that when he does apply Scafetta’s code to ModelE simulations, Gavin still gets the same general result.
As you note, Gavin’s “proof” is incomplete. It may be that Scafetta’s method failed with Gavin’s data because there is something about models that fails to reproduce the true effect of the sun. So, maybe the sun’s effect was too faint in models. Or maybe model weather noise is too strong, or has peculiar spectral distributions or something.
Maybe, despite the fact that the method does not work with ModelE, it would work on data with real earth properties.
However, we now return to Scafetta’s burden of proof: In his paper, he does not do anything to show that the method would work for earth. He basically advances a plausible-sounding idea, and applies it. We do not know the uncertainties associated with his method when applied “in the wild” (that is, with data that are expected to exhibit some sort of oscillations even if the sun was as steady as a rock and where, in addition, other forcings may exhibit oscillations with periods near 11 and 22 years.)
FWIW: At this particular moment, I am looking at the uncertainty estimates in equations (5) and (6) in Scafetta and West, and I can’t guess how they computed them.
Lucia,
OK, but even a simple least squares regression analysis against historical temperature data shows a reasonably clear solar signal if you include in the regression AMO index, Nino3.4 (which together appear to account for most of the short term noise), and a reasonable estimate for radiative forcing. The solar signal is pretty clear, and amounts to about 0.045 C peak to valley. Were S&W proposing a much bigger number?
In their 11 year bin, they get A=0.1K. Based on all the discussion, that’s peak to valley, because 1/2 A is the midline to peak. For their 22 year bin, they get 0.17K, and once again, peak to valley. Both are larger than your 0.045C.
BTW: I don’t think BS09 are suggesting there is no solar signal.
SteveF, you said: “OK, but even a simple least squares regression analysis against historical temperature data shows a reasonably clear solar signal if you include in the regression AMO index, Nino3.4”.
I think that Drs. Scafetta and Schmidt would be in disagreement at this point, since the ability of models to reproduce the AMO and other indices reliably is contested.
Not to put words in Dr. Scafetta’s mouth, I think that he would consider Model E not a “neutral court.”
I think the question of synthetic data to do comparisons will be problematic for either side at this time.
If S&W say peak to valley is 0.1K, then they do need to show why their analysis produces a 2X larger number than a simple linear regression.
On the other hand, raising questions about S&W based on an analysis done on model E simulated data only confuses things by introducing the contested fidelity of the model to reality, and by antagonizing those who have already seen far too many (dismissive and arrogant) Model E refutations of data and data treatments.
Why not just add or subtract an 11+/- year sine wave temperature signal of known small amplitude (say 0.02 or 0.03K) to the real data, aligned with the known peaks and valleys of the solar cycle and see how the S&W treatment does recovering it. Is the recovered signal too big? In addition, you could add known random noise of small amplitude to the historical data and see how this changes the S&W results. These kinds of simple tests seem to me a lot cleaner than fooling around with simulated Model E data.
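SteveF’s test is easy to prototype. A minimal sketch, with a random stand-in where the real temperature record would go and a generic MODWT band-pass standing in for S&W’s actual machinery (so it tests the idea, not their exact code):

```r
library(waveslim)

set.seed(1)
yr   <- 1900:2000
base <- 0.007 * (yr - 1900) + rnorm(length(yr), sd = 0.1)  # stand-in "data"

A_true <- 0.03                                  # known injected amplitude, K
inject <- A_true * sin(2 * pi * (yr - 1900) / 11)

amp11 <- function(z) {
  d <- mra(z, wf = "la8", J = 4, method = "modwt", boundary = "reflection")
  sqrt(2 * var(d$D3))  # D3 covers ~8-16 yr; a pure sine with variance v
}                      # has amplitude sqrt(2 * v)

amp11(base + inject) - amp11(base)  # how much of the 0.03 K comes back?
```

Repeating this over many noise realizations (and over phases, per Lucia’s caveat below) would give a crude error bar on the recovered amplitude.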
SteveF–
I’m trying to think along the lines you are. Your method sounds simpler than mine. But it won’t resolve at least two issues that concerned B&S in BS09. One is whether or not internal noise might “confuse” Scafetta’s method; the other is collinearity in forcings.
I think one of Gavin’s claims is that there is significant “weather noise” energy content at periodicities of 22 and/or 11 years even when the sun is included in ModelE. So, that introduces the possibility that some of the energy S&W pick up has nothing to do with the sun. However, S&W would attribute that to the sun.
Of course, it’s possible ModelE weather variability puts “weather noise” at 22 years when there is none, but you can see the problem, right?
Also, the other difficulty is that if we look at 100 years’ worth of volcanic/aerosol etc. forcing excluding the sun, and filtered it, we would find some energy near 11 or 22 years. So, even at that, some of the response at 11 or 22 years may be due to non-solar causes. (This is the collinearity issue. In S&W05, all the response at those time scales is attributed to the sun. Maybe it’s not due to that.)
This isn’t, of course, to say that Scafetta is wrong. Certainly, BS09 shouldn’t have read quite so snotty. But, qualitatively, the issues Gavin raised are worth thinking about.
this discussion is very interesting
just two short points:
the 0.1K estimate for the 11-year solar cycle.
This is not just “my” estimate. This is what a lot of people have found using a lot of alternative data analysis techniques. This estimate is acknowledged also by the IPCC 2007 report, where you read:
***********
A number of independent analyses have identified tropospheric changes that appear to be associated with the solar cycle (van Loon and Shea, 2000; Gleisner and Thejll, 2003; Haigh, 2003; White et al., 2003; Coughlin and Tung, 2004; Labitzke, 2004; Crooks and Gray, 2005), suggesting an overall warmer and moister troposphere during solar maximum. The peak-to-trough amplitude of the response to the solar cycle globally is estimated to be approximately 0.1 °C near the surface. (From the IPCC Fourth Assessment Report, Working Group I Report “The Physical Science Basis,” Chapter 9: Understanding and Attributing Climate Change, p. 674.)
****
the models, such as GISS ModelE and others, get about 0.02-0.04 K, roughly three times smaller.
Lucia:
For this case, I’m not sure trying to separate solar forcing from “weather noise” is a distinction with a difference.
After all, solar forcing is eventually responsible for the weather noise we see on earth, and it could reasonably be argued that modulation of weather is one mechanism for a net amplification of the effect of solar fluctuations on global mean temperature.
Carrick–
I’ll give some examples of the sort of thing I think worries Gavin using non-contentious fluid dynamics examples on Monday. They won’t be *the same* as the climate examples, but for some people the analogy will help.
Some may “get it” if I simply say: Vortex Shedding. Happens even in steady flows.
Lucia, I hate to reiterate, but I wanted to make sure we’re on the same wavelength.
If you turn off solar forcing, what weather noise would be present?
I would contend any weather noise that has a frequency peak at 11 or 22 years is necessarily connected to changes in solar forcing, even if it’s not coherently related.
Lucia,
I guess there are really two separate issues here:
1. Is the method itself robust? Expand robust to mean: Does the methodology accurately capture a known added (or subtracted) periodic signal? This should not be difficult to prove one way or the other. Does an added 11 year signal show up with the right size, or is it magnified? Does some of an 11 year signal become “transformed” into a 22 year signal or vice versa? Is the method tolerant of random added noise, or do the results jump all over the place when challenged with a slightly more noisy data set?
2. Assuming that the above simple tests prove the method itself is robust, is attribution of the measured signal to the sun realistic? This is an “angels on the head of a pin” question, since we only have one real temperature data set, and we would need several to resolve this question. S&W (and many others, including me) are not likely going to accept that a simulated Model E data set is a fair test without some very solid proof, while Gavin and company are never going to provide such proof, nor even agree that it is needed. While they probably would agree that Model E is not perfect, their actions suggest that they think it is reasonably close to the Lord’s word writ in silicon. I don’t see a way to resolve this one.
Carrick–
By solar forcing, I will assume you mean variations around the mean, not turning off the sun utterly and completely. Right?
In climate models even if you hold forcing constant — the earth would still probably have “weather noise”. I am not a super computer capable of solving the transport equations for the earth, but I am confident weather noise would still be non-zero.
It happens that AOGCMs do get non-zero weather noise if they apply constant forcing. But, that’s not why I believe it would be so. I believe it based on things that happen in very simple fluid dynamics applications. Example: place a cylinder in cross flow. Keep the oncoming fluid velocity absolutely steady. You will see vortex shedding. If I were to adopt climatologists’ language, that would be “velocity noise”.
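For readers who want to see fluctuations without fluctuating forcing in a dozen lines, the stock Lorenz-63 system makes the same point: parameters held perfectly constant, output that never settles. A sketch using the deSolve package; nothing here is a climate model, it is just the classic demonstration:

```r
library(deSolve)

# Lorenz-63: constant parameters ("forcing"), perpetually varying state.
lorenz <- function(t, state, p) {
  with(as.list(c(state, p)), {
    list(c(sigma * (y - x),
           x * (rho - z) - y,
           x * y - beta * z))
  })
}

out <- ode(y = c(x = 1, y = 1, z = 1),
           times = seq(0, 50, by = 0.01),
           func = lorenz,
           parms = c(sigma = 10, rho = 28, beta = 8 / 3))

plot(out[, "time"], out[, "x"], type = "l",
     xlab = "time", ylab = "x", main = "steady parameters, unsteady output")
```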
SteveF–
On your
1) I am fairly confident that if you added a known oscillating signal to noise, Nicola’s method would detect that. I’m not sure… but I’m pretty confident of that. No one is accusing Nicola’s method of not doing that.
2) Attribution may be difficult, but it is more important than “angels dancing on the head of a pin”.
Un-numbered) I agree with some of your complaints that the tone of some of Gavin’s communications suggests that ModelE is close to the Lord’s word. I suspect he does not actually think this, but when he writes, he often gives that impression.
Lucia, if you had constant forcings, you would still end up with weather fluctuations. I was going to the extreme of turning the forcing off entirely. Of course in that extreme, you just get an oversized snowball.
Without solar forcings there is no weather/weather noise.
To clear things up, do you think if you had constant solar forcings you would end up with an 11-year and 22-year peak in weather noise? (Yes/No)
If your answer is “no” you don’t have to worry about weather noise masking solar forcing, or being mistaken for it. Weather “noise”, after all, is a nonlinear response of the climate system to solar forcings, and peaks at 11 and 22 years should be a response of weather to changes in solar forcing.
Secondly, I suspect that global climate models, while they do generate weather/weather noise, would underestimate the weather noise because as I understand the models they don’t fully account for nonlinear couplings that give rise to phenomena like vortex shedding that may account for some of the climate sensitivity to fluctuations in solar activity.
All this talk about atmospheric vortices reminded me of this photo of cloud vortices.
Lucia,
Just to be clear, I suggest adding very small known periodic signals and random noise to the historical data to test the method, not a test with only a periodic signal mixed with some noise.
Have good weekend.
If detection of collinearity is the goal, I don’t see what role ModelE or any potentially predicted data has to do with the analysis.
Collinearity is a problem with the independent variables. So if solar forcings and CO2 concentrations are collinear that can be diagnosed without reference to ModelE, any other GCM or even measured temperature data.
It is true that one approach to detecting collinearity is by doing a regression and then examining the coefficients (that appears to be the approach taken by B&S) but a more robust approach involves using only the design matrix since this prevents random chance in the measurement noise from confusing the issue.
For example, one approach to detecting collinearity is to measure the condition number of the design matrix. Another is to regress independent variables against each other. Another is to simply examine a scatter plot of the independent variables.
There are better approaches. The point is that there is no need to do a full regression to detect collinearity. Hence, there’s no need to rely on data not everyone accepts to settle the issue.
A book on collinearity diagnostics I’ve relied on for many years is Conditioning Diagnostics: Collinearity and Weak Data in Regression by David Belsley
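To make that concrete, here is a sketch of design-matrix-only diagnostics in base R; the two “forcing” columns are synthetic stand-ins, not real series:

```r
set.seed(2)
n     <- 100
solar <- sin(2 * pi * (1:n) / 11) + 0.01 * (1:n)  # toy "solar" column
lnco2 <- 0.012 * (1:n) + rnorm(n, sd = 0.05)      # toy "ln CO2" column
M     <- cbind(1, solar, lnco2)                   # design matrix, no Y needed

kappa(M, exact = TRUE)                # condition number of the design
cor(solar, lnco2)                     # pairwise correlation check
summary(lm(lnco2 ~ solar))$r.squared  # regress one predictor on the other
```

Nothing playing the role of a temperature record appears anywhere above, which is Joe’s point.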
Carrick (Comment#17533)
Nice picture of downstream vortices… the northeast winds at the islands must be pretty strong.
Joe Triscari:
I agree with your comments here… However, I’d like to point out that even with CO2 and solar forcing there isn’t complete independence. Stronger solar forcing = warmer sea surface temperature = higher CO2 levels.
Anthropogenic CO2 emissions would be another independent variable.
Anthropogenic sulfate emissions a third.
Volcanic forcings (CO2 + particulate emissions) a fourth…
Outside of a full model like Gavin’s, I’m not sure how you sort out these various effects.
And I’ll remind you that this round of comments was set off by Lucia’s paraphrase of a concern of Gavin’s that “there is significant ‘weather noise’ energy content at periodicities of 22 and/or 11 years even when the sun is included in Model E.”
Maybe Lucia and Gavin weren’t thinking of peaks in noise at those frequencies, but broadband noise (= a smooth, monotonically varying noise spectrum) doesn’t usually affect the robustness of a particular statistical method that much, past the usual S/N issues of course…
MBH98 (See Figure 7) attempted to use multiple linear regression to allocate temperature change between solar and GHG. It was cited and relied upon by IPCC TAR as an antidote to solar theory. Unsurprisingly, little of it held up. See http://www.climateaudit.org/?p=1079 also 690, 689, 685
Carrick–
I have no idea. But there doesn’t need to be a “peak” for Gavin’s concerns to matter qualitatively. There only needs to be energy there. (Admittedly, different things will affect different analyses. But with respect to SW05, no peak is required to increase uncertainty.)
I agree they don’t fully account for the nonlinear couplings. But as far as I can tell, they do account for the couplings at inertial (large) scales. The difficulty is small scales. So… they could well have too much weather noise! Who knows.
If your general point is “models might be quantitatively inaccurate in ways that matter when examining problem ‘x’. “, I totally agree. I agree for nearly any problem ‘x’.
So, at least hypothetically, I agree they could get the response to solar forcing wrong for some reason. That reason may be an unknown unknown.
SteveF–
Yes. I suspect if we added a known periodic signal to the weather noise, Scafetta’s method would pick up that elevation in the periodic signal… unless… you screwed up the phase and it canceled the existing signal…
Joe Triscari–
Yes. But climate scientists can’t set up a design matrix and then run their experiment. One has historic data. The sun did what it did. Volcanoes did what they did. CO2 did what it did. Neither Gavin nor Nicola can go back and impose a design matrix on history.
I agree that there are ways other than testing whether Scafetta’s method works with ModelE to test whether collinearity might cause problems. I haven’t looked into the issue enough to know if they are practical under the circumstances.
Carrick
A(?) particular method? Which? Which class?
It is entirely possible that something that doesn’t affect some or even many classical methods could still screw up a special-case method developed by an individual analyst. I don’t know whether broadband noise would or would not affect Scafetta’s method, and I also don’t know whether it would matter for all possible ratios of energy in the broadband signal to any specific amount of energy in the window around 22 years.
I agree that if there is a large, distinct peak at 22 years standing out of a broadband signal, you are going to see the peak and the contribution from the broadband isn’t going to matter. But is the 22 (or 11) year peak a huge, huge peak? Plus, if I understand correctly, Scafetta’s method is trying to find energy “near” 22 years. So, in a bin that includes other frequencies. Say it picks up everything between 18 and 33 years and then attributes that to being caused by the solar cycle. What fraction of the energy in that band is due to the sun?
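For what it’s worth, the band question can at least be given a number for any candidate series. A sketch with a red-noise stand-in (swap in a real record or a model run for it to mean anything):

```r
set.seed(3)
x <- ts(cumsum(rnorm(101)) * 0.02, start = 1900, frequency = 1)

sp   <- spec.pgram(x, taper = 0, plot = FALSE)  # raw periodogram (detrended)
band <- sp$freq >= 1/33 & sp$freq <= 1/18       # periods of 18-33 years

sum(sp$spec[band]) / sum(sp$spec)  # share of variance in the 18-33 yr band
```

Comparing that fraction across observed data, model runs, and forcing series would put numbers on how much of the “near 22 year” bin could plausibly be non-solar.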
Even though BS09 had a snotty tone, and even though its conclusions may go further than one ought to take them, these are worthwhile questions. If the conversation could be kept civil, I’m sure Nicola and Gavin could have a fruitful discussion.
Regarding collinearity, there was a study done once, but they used PMOD, Nino 3.4, the Sato index, and a trend; they did not find a problem with collinearity (Santer had expressed concern about it WRT one of their papers):
http://www.pas.rochester.edu/~douglass/papers/CR%20paper%20of%20Douglass%20et%20al..pdf
Lucia:
A particular algorithm, such as that of Scafetta.
I’d say all of it.
If it weren’t for variations in solar forcings, there is no reason for the weather-related noise floor to have a peak there.
Solar-forcing driven weather is part of the solar forcing signal, even if its not part of the direct radiative forcing.
Carrick– These questions are real, not rhetorical. (I avoid rhetorical questions and mostly ask questions in a straightforward way.)
How do you know broadband energy in the earth’s “climate weather noise” wouldn’t affect Scafetta’s method?
How do you know there would be no reason for weather related noise to have a peak at 22 years? I don’t know that. I know that in some fluid dynamical problems, you can get peaks at or around some particular frequency. While I know of no specific reason to expect one at 22 years, is it generally known we can exclude a peak at that frequency? So, basically, have you fully considered how weather noise has arisen and decided that, for some reason, no natural peaks can arise near 22 (or 11) years?
I agree that solar-driven forced weather would be part of the solar forcing signal. But how do you know the energy at 22 years is solar driven and not something else? Gavin’s opinion is that there could be. He could be mistaken, but do you know he is mistaken? I don’t know that for sure.
The furthest I can go is this: I would not throw away Scafetta’s results on the basis of Gavin finding sufficient energy at 22 years in ModelE to screw up Scafetta’s method. This is because ModelE’s weather noise could be totally wrong.
However I also would bear in mind that at least one model does find sufficient energy near 22 years to cast some doubt on Scaffetta’s result. At a minimum, there is a possibility that the model is at least qualitatively correct, and there is some bulge of weather noise with a periodicity of 22 years.
Is it a strong possibility? I don’t know. (It would be nice to at least see if most models have similar bulges, etc. How about some sort of mean spectral distribution of weather noise? Is there a bulge there? Beats me.)
This leaves us…where? As far as I can see, it leaves us in “we don’t really know” land.
I tend to like empirical evidence, so I would be reluctant to throw out Scafetta’s method simply because it doesn’t work on ModelE. But I would still want to know that it doesn’t work there.
Lucia:
To be clear, I didn’t mean any of what I said rhetorically.
It would if the dominant weather-related component at e.g. 22 years was due to the broadband contribution and not to a peaked component, something we both appear to stipulate to.
There is no mechanism that just happens to give weather noise peaks coincident with the 11-year (and 22-year) solar cycles.
As Einstein once said, one needs to keep an open mind, but not so open your brain falls out!
Again provide a model that just coincidentally gives the same periods as the solar cycle and I’ll accept that. Sans that, it’s a very marginal possibility at best, especially since we already know that the solar forcing affects weather!
Also, I don’t recall Gavin ever stating the opinion that the weather noise peaks were not generated by variations in the solar forcing.
Lucia: By design matrix I mean the matrix used for the regression.
I apologize if the term is used differently in this community and I don’t want to be pedantic, but to be clear: I mean if you are attempting to estimate the coefficients a_0, a_1, …, a_n in the equation a_0 + a_1*x_{1,k} + … + a_n*x_{n,k} = y_k, k = 1, …, N, the design matrix is the matrix, M, with rows (1, x_{1,k}, …, x_{n,k}). Then, if Y is the column vector (y_1, …, y_N), the least squares estimate for the coefficients is inv(M’*M)*M’*Y (M’ is M transpose).
In other words, if you’re doing a regression, you have the design matrix (at least how I’m defining it). In other, other words, if climate scientists are doing regressions to test for collinearity, they can test for collinearity without the predicted data (the Y vector).
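Run literally on made-up numbers, Joe’s formula looks like this, and it matches lm():

```r
set.seed(4)
x1 <- rnorm(50)
x2 <- rnorm(50)
Y  <- 1 + 2 * x1 - 0.5 * x2 + rnorm(50, sd = 0.1)

M <- cbind(1, x1, x2)            # rows are (1, x_{1,k}, x_{2,k})

solve(t(M) %*% M) %*% t(M) %*% Y # inv(M'*M) * M' * Y
coef(lm(Y ~ x1 + x2))            # same coefficients from lm()
```

Note that Y enters only at the final multiplication; everything about the conditioning of the problem is already fixed by M alone.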
Carrick: When I am talking about collinearity, I don’t mean that there is or isn’t necessarily a physical connection between the variables. If the data is collinear then one of the columns of the design matrix (as I use the term) can be written (nearly) as a linear combination of the other columns. That can occur because of a physical connection or by chance (for example, if we throw a huge number of unrelated variables at a regression). Conversely, there can be a deterministic physical connection between the variables without collinearity.
All that being said, my only point is that if you’re trying to decide if solar forcings and (log) CO2 concentration are collinear, thereby making it difficult to use them both in a multiple regression, you don’t need to regress those variables against ModelE or anything else.
Joe Triscari, to be clear: I was quibbling with the use of the term “independent variables”, and in particular with the point that solar forcing and log CO2 concentration have a causal link.
In the sense I use the word “independent” variable, they can’t be independent if there is a causal link between the two of them.
Nicola said “this discussion is very interesting. just two short points: the 0.1K estimate for the 11-year solar cycle. This is not just ‘my’ estimate. This is what a lot of people have found using a lot of alternative data analysis techniques.” and “It is not possible to disprove an empirical analysis of real data by just using synthetic data produced by a model (GISS modelE in this case) without first proving that the model correctly reproduce the data patterns that are to be tested… This is basic philosophy: from a false statement (ModelE simulations) one can deduce or disprove whatever one wishes, so no definitive prove can be deduced from it. The prove that GISS modelE is correctly reproducing the patterns is missing yet, or better it is proven that GISS modelE does NOT reproduce the data patterns (read carefully Lean and Rind (2008), which has also been misinterpreted in BS09). In particular GISS modelE predicts a too small solar signature relative to what is actually seen by the empirical studies. There is another important issue that many people are still missing. That is, what the sun did.”
Nicola, I wonder if you could weigh in on the conclusion I have drawn from your statements. The factor of three between your paper and ModelE is not a scalar, but an intrinsic difference in the approach. Using ModelE cannot answer the question you are asking. ModelE answers the question incorrectly within your paper’s framework. Is this a correct understanding on my part?
Carrick
How do you know this? Do you know and understand why El Nino has its periodicity? Or the AMO? Or the PDO?
It would be a coincidence if one of the earth’s major oscillations coincided with the sun. But how do you know there is no such peak?
Gavin suspects its existence based on ModelE. His suspicion could be wrong, but I don’t see how we could know he is wrong unless we found zero peak of any size at 11 or 22 years. But if we found a zero peak, Nicola would be reporting that the sun’s 11 or 22 year oscillations have no effect.
So, as far as I can tell, Gavin has not proven natural variability is messing up Nicola’s method. All he’s shown is that we can’t discount the possibility. The second is useful information (but should not have warranted the tone Gavin assumed).
Joe–
Sorry. My background is actually lab work. So, I thought you meant taking the data in a way that avoids collinearity in the first place. For engine maps, we used something like this. If Z is a function of both x and y, measure over (x,y) pairs like this:
(-2,-2)    .     (0,-2)    .    (2,-2)
    .      .     (0,-1)    .       .
(-2, 0) (-1, 0)  (0, 0) (1, 0)  (2, 0)
    .      .     (0, 1)    .       .
(-2, 2)    .     (0, 2)    .    (2, 2)
Other plans were used for other experiments. But basically you never want to test with only (-2,2), (0,0) and (2,-2) because in that case there is no way to distinguish between the effects of x and y.
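In numbers, that warning looks like this: on the anti-diagonal-only plan the x and y columns are exact negatives of each other, so the design is singular, while a fuller grid (a plain 5x5 factorial here, standing in for the plan above) is well conditioned.

```r
bad <- cbind(1, x = c(-2, 0, 2), y = c(2, 0, -2))  # x = -y at every point
kappa(bad, exact = TRUE)                           # Inf: effects inseparable

grid <- expand.grid(x = c(-2, -1, 0, 1, 2), y = c(-2, -1, 0, 1, 2))
good <- cbind(1, grid$x, grid$y)
kappa(good, exact = TRUE)                          # small: x and y separable
```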
Unfortunately for climate scientists, their historic data is what it is. If x and y are collinear, they are stuck.
Now, as for what they can do: it’s true that climate scientists can test for collinearity in the x and y in some regressions and say in advance that they can’t distinguish the effect of x or y on Z.
But, I’m not sure there is any way to test whether the 22 year period in the earth’s temperatures (i.e., Z) is due to non-solar-driven internal variability as opposed to the sun. Gavin wants to try to use models. Nicola rejects that approach. Which leaves us… where? I think it leaves us at:
a) Nicola shows what the empirical record shows. On the assumption that the 22 year period is due to the sun he makes a computation. This is a contribution and is worth knowing.
b) Gavin shows that his model has internal energy at 22 years. His model at least attempts to mimic the earth’s behavior. To the extent that his model succeeds, Nicola’s computation may not be robust.
So… oddly, I agree with Nicola that Gavin hasn’t proven his method doesn’t work. But I think Gavin’s results were worth reporting. (It would have been better if Gavin learned to use a “nice” filter when writing these sorts of papers. However, since I don’t have that talent, I may just be the pot calling the kettle black on that one.)
Lucia:
The best you can do is a) develop a model, b) compare it with data and c) determine whether the model is consistent with the data. If it’s not and the data are reliable, the model has been falsified.
For an alternative hypothesis to be seriously considered, it has to be sufficiently developed that it can similarly be falsified.
What we have right now for an alternative theory doesn’t even amount to rigorous handwaving as far as I can tell…
The discussions have focused on the mathematics of diagnosing collinearity.
Joe Triscari (Comment#17551) notes “climate scientists are doing regressions to test for collinearity, they can test for collinearity without the predicted data (the Y vector)”.
Carrick (Comment#17537) notes interactions between CO2 and solar forcing.
lucia (Comment#17557) notes experimental methods to avoid collinearity.
Could the issue of collinearity be addressed by non-repetitive signals, by time or phase delays, and by distinguishing multiple drivers for CO2?
Roy Spencer notes the key issue of whether temperature changes modulate clouds or cloud variations modulate temperature.
Scafetta & West (2003)
Ch. 5.2 Irradiance, p. 227, Climate Change Reconsidered, 2009 report by the Nongovernmental International Panel on Climate Change.
See also: Scafetta, N. and West, B.J. 2006a. Phenomenological solar contribution to the 1900-2000 global surface warming. Geophysical Research Letters 33: 10.1029/2005GL025539.
Svensmark (2009) shows correlations between Forbush decreases and subsequent reduction in cloud moisture and aerosol. See
Cosmic ray decreases affect atmospheric aerosols and clouds (full text PDF). Henrik Svensmark, Torsten Bondo, and Jacob Svensmark (submitted to Geophysical Research Letters).
Could the following be used to resolve the collinearity issue between TSI and ln(CO2)?
1) Differing TSI – Tropospheric temperature phase vs TSI-Sea Surface temperature phase.
2) Serial (time) correlation between TSI-Tropospheric temperature phase and atmospheric CO2 – probably low.
3) Serial (time) correlation between TSI-Sea Surface Temperature phase vs atmospheric CO2. (Probably high).
4) Differing lags between absolute humidity vs sea surface temperature, versus between absolute humidity and tropospheric temperature.
5) Day/night magnitudes and phases in variations in TSI, temperature and CO2.
(Maybe these issues should be in a new post.)
Carrick–
I agree with what you say in 17560. But… that just leaves us in the state of “not knowing” whether or not there could be any sort of peak in weather noise near 22 years. You said:
“There is no mechanism that just happens to give weather noise peaks coincident with the 11-year (and 22-year) solar cycles. ”
That statement would seem to convey a lot of certainty about our understanding of what mechanisms exist. Wouldn’t it be fairer to say something like “Outside of models results, we have no particular reason to expect that unforced weather noise would, by coincidence, happen to have peaks that matched the solar cycle.”?
I’d agree with this sort of weaker statement.
Obviously, if the model is wrong, then its results cannot be used to prove the S&W results are spurious. The model results can only suggest a problem, and that suggestion is provisional on the degree of accuracy of the model. The more tests the model fails, the less seriously we take any criticisms of S&W based on the model.
That said, there is a finding in BS09 that has nothing to do with ModelE. It’s not a nail in the coffin of SW, but it’s interesting to discuss. I’m going to post on that.
Let me amend my statement to say what I meant to say “There is no [proposed model] that just happens to give weather noise peaks coincident with the 11-year (and 22-year) solar cycles”.
Obviously I can’t exclude that some mechanism could exist, just that nobody has proposed one to date.
Sorry for the careless language.
Carrick– Ok. Then we pretty much agree. I was just wondering about the definitive statement and wondered if you know something I did not. (Lots of people know things I don’t know!)
lucia rankexploits.com (Comment#17555)
Do you know and understand why El Nino has its periodicity? Or the AMO? Or the PDO?
Interference. This is well understood; see, e.g., Michael Ghil:
“Many physical and biological systems exhibit interference effects due to competing periodicities. One such effect is mode locking, which is due to nonlinear interaction between an ‘internal’ frequency ω_i of the system and an ‘external’ frequency ω_e. In the ENSO case, the external periodicity is the seasonal cycle. A simple model for systems with two competing periodicities is the well-known Arnol’d family of circle maps.”
V.I. Arnol’d, Geometrical Methods in the Theory of Differential Equations, Springer, 1983, p. 334.
M.J. Feigenbaum, L.P. Kadanoff, S.J. Shenker, Quasiperiodicity in dissipative systems: A renormalization group analysis, Physica D 5 (1982) 370–386.
P. Bak, R. Bruinsma, One-dimensional Ising model and the complete devil’s staircase, Phys. Rev. Lett. 49 (1982) 249–251.
P. Bak, The devil’s staircase, Phys. Today 39 (12) (1986) 38–45.
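To see the mode locking Ghil describes, here is a minimal sketch of the Arnol’d (sine) circle map with illustrative parameters; for K below 1, the winding number locks onto rational plateaus as Omega varies, tracing the “devil’s staircase” of the Bak references above:

```r
# Winding number of theta -> theta + Omega + (K/2*pi) * sin(2*pi*theta)
winding <- function(Omega, K, n = 4000) {
  theta <- 0
  for (i in 1:n) {
    theta <- theta + Omega + (K / (2 * pi)) * sin(2 * pi * theta)
  }
  theta / n  # mean rotation per step, computed on the lift (no mod)
}

Om <- seq(0, 1, length.out = 201)
W  <- sapply(Om, winding, K = 0.9)
plot(Om, W, type = "s", xlab = "Omega", ylab = "winding number")
```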