In my most recent post, I noted that the current empirical estimate of the time constant given by Schwartz, by Scafetta, and in my earlier blog post is roughly 8 ±2 years. However, I also noted that Schwartz admitted uncertainties in the method itself. We all discussed a range of difficulties with the approximate methods. I’m now embarking on incremental improvements to the “lucia” method. (This method is similar to Schwartz’s method as described in his response to comments. However, I do a formal least squares regression to get my time constant.)
As my readers and I are all brainstorming ways to improve the estimate, I’m simply going to report the results from my first step in improving my estimate. So this is sort of a ‘status report’: I am going to present “back of the envelope” fixes for two of the issues. These are:
- The inherent bias associated with estimating the correlation in the residuals and
- Estimating the “true” uncertainties.
I am deferring addressing the other issues as a matter of expediency; the order in which I address issues should not be taken to imply that other issues are of lesser importance. Also, for now, I am using annual average data, and running tests with the coarsest possible resolution; my intention is to thereby identify which issues are most important before refining the method of estimating the time constant.
🙂
For those who don’t wish to read further: The main result is that, after identifying a method to correct for bias, my current estimates of the time constant for the climate range from 8.6 years based on Land/Ocean annual average data to 15.5 years based on GISS Met data. Both estimates carry a great deal of uncertainty; so much, in fact, that it’s not worth estimating formally. The uncertainty should drop dramatically when I ultimately repeat the analysis with monthly data.
For the remaining discussion, I will be assuming people are somewhat familiar with Schwartz (2007), and with my previous analysis based on his method, with my extension to include measurement imprecision in the GMST data. (My previous analysis is discussed here.)
What I did.
Step 1:
I created 10 independent series of synthetic data that consisted of AR(1) noise with a time constant of 12 years, plus white noise, u. The AR(1) noise is thought to represent the “true temperature” of a simple climate system; the white noise is thought to represent measurement uncertainty. The ratio of white noise to AR(1) noise was set such that the ratio of the measurement noise to the “true temperature” was 1. This corresponds roughly to the amount of measurement noise admitted by the measurement groups.
Data were created at monthly intervals for a period of 125 years. I then averaged the monthly data to create annual average data. I computed the autocorrelation as a function of lag time for each string.
The autocorrelation as a function of lag time was compared to the known underlying autocorrelation of the true process; see Figure 1:

In this figure, I multiplied the bias by (-1) and plotted the computed bias as a function of the known autocorrelation. I also computed standard errors for the autocorrelation at each point, calculated a “t” value for 10 data points and created 95% confidence intervals. (These may not be correct as I have not checked to see if the errors are normally distributed.)
I then fit a straight line to the bias as a function of lag time for later use. Obviously, the line doesn’t fit splendidly. However, it does reveal that qualitatively, the bias tends to be negative, and the absolute magnitude increases as the autocorrelation decreases.
Should this method show promise, in later analysis I will be running a larger number of synthetic data strings and using monthly data. This should reduce the uncertainty in the calculated bias.
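For readers who want to see the mechanics of Step 1, here is a minimal sketch in Python (using numpy) of the kind of calculation described above: generate several synthetic monthly series of AR(1) “temperature” plus white “measurement” noise, average them to annual values, compute the sample autocorrelation, and fit a straight line to the bias. The amplitudes, seed, lag range, and the choice to fit the bias against the known autocorrelation are my own illustrative choices, not the exact values or code behind Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)

TAU = 12.0        # assumed "true" time constant, years
DT = 1.0 / 12.0   # monthly time step, years
N_YEARS = 125
N_SERIES = 10     # number of independent synthetic series
MAX_LAG = 10      # annual lags at which to look at the autocorrelation

def synthetic_annual_series():
    """One 125-year annual series: monthly AR(1) 'truth' plus white measurement noise."""
    n = N_YEARS * 12
    phi = np.exp(-DT / TAU)              # monthly AR(1) coefficient
    innov_sd = np.sqrt(1.0 - phi ** 2)   # gives the AR(1) process unit variance
    t = np.empty(n)
    t[0] = rng.normal()
    for k in range(n - 1):
        t[k + 1] = phi * t[k] + rng.normal(0.0, innov_sd)
    measured = t + rng.normal(0.0, 1.0, n)   # noise-to-signal ratio of 1 (my reading of the post)
    return measured.reshape(N_YEARS, 12).mean(axis=1)

def sample_acf(x, max_lag):
    """Sample autocorrelation at lags 1..max_lag, normalised by the lag-0 variance."""
    x = x - x.mean()
    return np.array([np.dot(x[:-k], x[k:]) / np.dot(x, x) for k in range(1, max_lag + 1)])

# known autocorrelation of the true AR(1) process at annual lags, exp(-lag/tau);
# annual averaging modifies this slightly, so treat the comparison as approximate
lags = np.arange(1, MAX_LAG + 1)
true_acf = np.exp(-lags / TAU)

est_acf = np.mean([sample_acf(synthetic_annual_series(), MAX_LAG) for _ in range(N_SERIES)], axis=0)
bias = est_acf - true_acf    # tends to come out negative, as noted in the post

# straight-line fit to the bias (here against the known autocorrelation) for later use
slope, intercept = np.polyfit(true_acf, bias, 1)
print("bias at each annual lag:", np.round(bias, 3))
print("linear bias model: bias ~ %.3f * r + %.3f" % (slope, intercept))
```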
Note that, in principle, this method can also be used to estimate the uncertainty in the estimate of the time constant using 125 years of data. However, what I’ve basically found is that using 125 years of annual average data results in standard errors in the estimated value of the time constant on the order of ±4 to 5 years. This uncertainty is two or three times larger than one would estimate based on the uncertainty in the best fit line to the natural log of correlation as a function of lag time. So, I clearly can’t use that idea as a method to estimate the uncertainty.
In addition, even the ±4-5 years represents the lack of repeatability under the assumption that the time constant really is 12 years and the noise is at the level I used in the synthetic data. I need to think about these uncertainties a bit more to decide what the real uncertainty intervals should be. However, it should also be noted that the standard error should diminish, possibly dramatically, when I apply the method to monthly data. So, I should ultimately be able to obtain relatively decent estimates of the time scale (contingent on acceptance of the method as valid).
Step 2:
After estimating the bias in the autocorrelation as a function of lag time, I calculated the autocorrelation of the temperature anomalies reported by GISS for both the Met and Land/Ocean data using a 125 year string of data. I then corrected the computed autocorrelation using the best fit estimate from the linear regression I obtained in the previous analysis. (Yes, that would be the regression on the very uncertain, highly scattered data.) In all cases, this correction increased the autocorrelation estimated from the data.
I then plotted the natural log of the autocorrelation as a function of lag time, limiting the analysis to those cases that had positive values of the autocorrelation. I used EXCEL’s LINEST to obtain the slope and intercept, and computed the time constant as the negative inverse of the slope. The results for both corrected and uncorrected cases are shown below.

Figure 2: The natural log of the autocorrelation of temperature is illustrated as a function of lag time. Autocorrelations were corrected by estimating the bias associated with a 12 year time constant and the noise value that matches the data. (This was done iteratively; it required 1 iteration.)
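To make Step 2 concrete, here is a hedged sketch of the regression itself, a stand-in for EXCEL’s LINEST rather than the actual spreadsheet: correct the sample autocorrelations with a linear bias model, keep only the positive values, regress the natural log against lag time, and take the time constant as the negative inverse of the slope. The bias_slope and bias_intercept arguments and the example autocorrelations are placeholders, not the fitted values or GISS numbers reported in this post.

```python
import numpy as np

def time_constant(acf, dt=1.0, bias_slope=0.0, bias_intercept=0.0):
    """Estimate the time constant from sample autocorrelations r_1, r_2, ... at spacing dt.

    The optional linear bias model (bias ~ a*r + b, estimated from synthetic data as in
    Step 1) is subtracted before taking logs; with the defaults, no correction is applied.
    """
    r = np.asarray(acf, dtype=float)
    r = r - (bias_slope * r + bias_intercept)   # the bias is typically negative, so this raises r
    lags = dt * np.arange(1, len(r) + 1)
    keep = r > 0                                # only positive correlations have a logarithm
    slope, _intercept = np.polyfit(lags[keep], np.log(r[keep]), 1)
    return -1.0 / slope

# purely illustrative autocorrelations and bias coefficients, not GISS data
example = [0.80, 0.66, 0.55, 0.47, 0.40, 0.33]
print("uncorrected tau:", round(time_constant(example), 1), "years")
print("corrected tau:  ", round(time_constant(example, bias_slope=-0.05, bias_intercept=-0.05), 1), "years")
```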
Using a bias correction that assumed the true time scale is 12 years results in an experimental estimate of the time constant of roughly 15 years when computed using the Met data and roughly 8 years when computed using the Land/Ocean data.
Provisional conclusions
For now, this “extended method of Schwartz” is resulting in roughly 12 year time constants. However, the calculation is crude, uses annual average data, and I’ve only corrected for bias using a coarse test.
I’ll be refining this after we explore other issues like detrending, and the possibility of “spikes” of noise in the actual spectrum of the GMST. (Arthur Smith is looking at this.)
Nevertheless: For now, it appears that correcting for the bias does elevate the time constant noticeably, and ought to be done.
Lucia,
One thing has been bothering me since reading your December post. You use regression, while others just form a ratio, and estimate a mean. I think regression is fine, but the regression line should be constrained to pass through the origin.
Here’s why. Your autocorrelation is assumed to be an exponential, and you are estimating the coefficient. Now the autocorrelation is always 1 for zero lag. You should be scanning the range of eligible functions to get the best fit coefficient. Instead, you are allowing ineligible functions. Straight lines not passing through the origin correspond to autocorrelations that are not 1 at zero lag. SS and Tamino don’t do this, so your results won’t correspond to theirs. And I think they are right.
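For concreteness, a minimal numpy sketch of the distinction Nick is drawing: an unconstrained fit of ln(r) against lag allows a nonzero intercept (an autocorrelation other than 1 at zero lag), while a fit constrained through the origin does not. The autocorrelations below are made up purely to show that the two slopes differ.

```python
import numpy as np

lags = np.arange(1, 7, dtype=float)
log_r = np.log([0.80, 0.66, 0.55, 0.47, 0.40, 0.33])   # made-up autocorrelations

# unconstrained: ln(r) = slope*lag + intercept (intercept free, so r(0) need not equal 1)
slope_free, intercept_free = np.polyfit(lags, log_r, 1)

# constrained through the origin: ln(r) = slope*lag, so slope = sum(x*y)/sum(x*x)
slope_origin = np.dot(lags, log_r) / np.dot(lags, lags)

print("free fit:       tau = %.1f yr, intercept = %.3f" % (-1.0 / slope_free, intercept_free))
print("through origin: tau = %.1f yr" % (-1.0 / slope_origin))
```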
Lucia,
I’ve re-read your December white-noise justification for using general regression, and I now see how it works. So I agree that your regression is OK, noting that it includes an estimate of the measurement error, which adds to its variance. It also depends on the measurement error being a white noise process, with successive errors independent.
So for me your analysis is now looking very good. But as you say, the uncertainty of the relaxation time estimate is huge. And we’ve now gone from SS’s original 5 years to 8.6 years for one dataset, 15.5 years for the other. Hopefully the monthly breakdown will help.
Annan actually mentioned you today and gave you lots of linkies, but he was rather condescending.
Do these discussions of time constant and CO2 sensitivity depend on the assumption that natural factors (other than volcanos) are insignificant?
For example, let’s say someone establishes a solar climate link and demonstrates that 50% of the warming over the last 50 years was due to this solar effect. Would the time constant change, or only the estimated CO2 sensitivity derived from the time constant?
Raven–
Actually, the Schwartz method requires volcanos and other factors to matter.
Steve–
I went to Annan’s blog to comment, but I have to get a blogger/google identity. I think I have one… somewhere… but sheesh! (I hate hurdles in front of commenting. That said: his blog, his policy.)
I guess I can respond later here. But.. erhm… no. I’m not going to apologize for saying two of their criticisms are totally wrong. They are totally wrong. I was thinking of not discussing that particular JGR comment “above the fold”, but I guess maybe I should. Oh well….
You know, it’s always interesting to watch referrers when someone links me. My stats says those links sent me 27 readers. (But I think the way WP stats counts, that might mean 9 readers each clicked the three links. Or, maybe they are all Annan checking the links as he writes his post. We’ll see if we get the flood that some other bloggers can send! 🙂
Lucia says: Actually, the Schwartz method requires volcanos and other factors to matter.
The issue is how one takes a time constant and uses it to estimate the CO2 sensitivity. Schwartz says that a time constant of 8-12 years is within the range of the IPCC estimate of 3.0 +/- 1.5. I don’t see how such a claim could be made without making assumptions about the magnitude of the other factors at work. If these assumptions are built into the estimation process then this approach is as flawed as GCMs because it has no way to separate the effect of CO2 from the effect of any unknown unknowns.
Raven– Schwartz’s 2007 paper discusses three things: 1) time constant, 2) heat capacity and 3) the sensitivity. The estimates for the first two are used to estimate the third.
There have been fewer comments on his estimate for heat capacity.
Lucia,
I could estimate capacitance of a black box electric circuit by applying a known change in voltage to the circuit and measuring the response over time. I could also estimate the voltage change if I have some way to calculate the capacitance. However, neither estimate tells me much about the relative contribution of different unknown voltage sources. That is why I think that using the heat capacity and time constant to estimate CO2 sensitivity is only possible if one assumes that all of the other climate forcings are known.
Am I misunderstanding something fundamental about Schwartz’s method?
Being a civil engineer, and often deferring to a geologist, I am hoping you might do some runs with geological time scales. A thousand years is a hiccup!
I think it not only illustrative, but prudent. There has to be some “time constants ?scales?” in that record.
Maybe I am wrong, but a post for discussion would be cool!
The short term (800 yr) scale findings are warranted, but, IMHO, so are the long term (800K) cycles.
Raven,
May I try to give an electronic analog of what I think the model is.
Imagine a current source in parallel with a resistor and a parallel capacitor. In this version the voltage across the resistor and capacitor represents the surface temperature, the current source represents heat flow /m2, the capacitor represents the heat capacity of the earth /m2 and the resistor is the climate sensitivity in ºC/W/m2
It is fairly easy to show that the equation in electrical terms is the same as the one that Lucia has been describing.
Call the current flowing from the current source I, the current into the capacitor I1 and the current through the resistor I2. Clearly I = I1 + I2 but I1 = C dV/dt and I2 = V/R so I = CdV/dt + V/R. If we divide all terms by C we get the result:
I/C = dV/dt + V/(RC), or dV/dt = -V/(RC) + I/C. This is the same form as the equation from Lucia: dT/dt = -T/τ + αF.
One can clearly see that RC is the tau and 1/C is the alpha. So if RC is calculated and C is assumed, it is possible to find R as well. In terms of the real world, this is very simplified, but I think we can say that we are assuming a perturbation of the W/m2 arriving at the earth's surface, and that some is captured by the heat capacity and some escapes from the surface by an amount proportional to the rise in surface temperature.
What is not clear to me is how perturbations in the W/m2 really come about, and I have some doubts about the climate sensitivity being constant. I have to admit I don't really understand how the time constant can be derived from simply observing the time series of temperature or, in the electrical case, the voltage. Obviously it can, or Lucia would not be doing it!!! In any event, I don't think this exercise provides any useful contribution to the CO2 attribution debate. The W/m2 could come from anywhere, and it seems likely that things such as altered cloud cover could change the value of R.
I quite like this electrical model as it reveals the exact assumptions and gives us a chance to modify it in ways that maintain a physical meaning. It also has the advantage that we can use something like SPICE to do the sums for us. 🙂
Raven,
I really like your idea of an electrical analogy. I would put it like this – you have a resistor R and capacitor C in parallel, earthed at one end. To the other end you apply a current source I and measure the voltage V. In the analogy, C=(known)heat capacity, V=temperature, I=(GHG) heat flux, and R is the (unknown) sensitivity. RC is the time constant. I say R is the sensitivity because if you apply a constant I, eventually R=V/I.
We need to infer RC from the dynamic response, and we can’t just apply a pulse. People often try to use something like a volcano as a “pulse”, but the response is noisy. So the approach here is to suppose that the V that we see is driven by a white noise process I, which acquires autocorrelation because of the R-C smoothing. So RC is inferred from the autocorrelation.
Then everything depends on the assumption of a white noise driver, which seems to me a stretch. That may be why it’s so hard to get a stable answer.
(Jorge, I see we have the same idea. Yours appeared just before I posted. I’m leaving it there because I’ve referred to it on another thread).
Pliny/Jorge/Raven–
The electrical engineering analog is precisely correct. And yes, it needs to be driven by white noise to result in an AR(1) response.
For the electrical engineers, this problem appears in loads of text books on Kalman filtering.
The thing about this particular problem is it reappears everywhere and is in some sense, the simplified models for lots and lots of stuff. Other places you’ll find the model:
1) Brownian motion. In this problem, small particles with mass M are driven by white noise due to molecular collisions. The time constant term arises because, for very small particles moving slowly, the resistance to motion is linear with velocity “V”. So we get M dV/dt = -A V + F. The time constant ends up being M/A. In this problem the “F” is really white.
2) Behavior of thermocouples. Thermocouples are small and essentially have a uniform temperature T. Then, if we linearize the heat transfer rate, we get C dT/dt = -A T + F. If we put the thermocouple in a temperature field of “white noise” we would get the AR(1) response. (One never does this. In reality, one uses the thermocouple to measure the properties of the force F. So, generally, you want the time constant so small that the thermocouple responds to those properties. The temperature measured by the thermocouple will NOT be AR(1).)
So, yes. For the Schwartz method to work, the external forcing “F” must be white. And yes, if the equation looks like one you recognize, it is!
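A quick numerical check of this point, offered as a sketch rather than anyone’s actual code: discretize dT/dt = -T/τ + F with a white F, and the resulting series has an autocorrelation close to exp(-lag/τ), which is the AR(1) behaviour the Schwartz-type estimate inverts. The time step, τ, and record length are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

tau = 5.0        # relaxation time, years (illustrative)
dt = 1.0 / 12.0  # monthly step
n = 200 * 12     # a long record, just to keep sampling noise down

# forward-Euler march of dT/dt = -T/tau + F, with F drawn as white noise each step
T = np.zeros(n)
for k in range(n - 1):
    F = rng.normal()
    T[k + 1] = T[k] + dt * (-T[k] / tau + F)

x = T - T.mean()
for lag_years in (1, 2, 3):
    lag = lag_years * 12
    r = np.dot(x[:-lag], x[lag:]) / np.dot(x, x)
    print(lag_years, "yr lag: sample r = %.3f   exp(-lag/tau) = %.3f" % (r, np.exp(-lag_years / tau)))
```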
Lucia,
So do you agree that the “whiteness” of F is the only property we are using to determine τ, and so the sensitivity? It seems to me that then there are two difficulties:
1. We don’t really have any evidence that it’s true. Normally you assume noise is white because you don’t have any better assumption. That may do for error analysis etc, but it seems a weak basis for making assertions about the numerical value of sensitivity, and
2. We can’t find a value of τ which exactly corresponds to a “white” driver. In fact, I suspect we don’t really come close (and it should be tested). That so, it seems that a big range of values of τ will give almost optimal “whiteness”. A big error range.
pliny,
No problem.
Thanks for explaining how the RC smoothing gives autocorrelation when driven with white noise. I think that was the part I have been missing from the beginning.
My difficulty with CO2 attribution is that an increase in W/m2 can be inferred from radiation models, but only if the atmospheric profile is assumed to be unaltered apart from the addition of extra CO2. In terms of the model, we have to assume that R is independent of V. In reality R is quite complicated as it depends on convection and moisture content as well as radiation effects. If R were to actually get smaller with increasing V, the resulting change of V with respect to a change of I would also be smaller.
This would be a negative feedback in IPCC terms and shows that the standard assumption of positive feedback actually implies that R increases with the applied V. It would be a nuisance if R were non-linear as it means the time constant and sensitivity would vary with the size of the forcing. It would also add frequency components to the observations that were not present in the forcing, white noise or other.
Now that we seem to have discovered cycles in the temperature observations I think it is even harder to tell a trend caused by CO2 from the upswing of one of these cycles. It reminds me of Gordon Brown, the UK Chancellor, saying he would balance the budget over the economic cycle. After changing the start and end points he still could not achieve it, so he redefined how the budget was calculated. 🙂
Jorge,
The first thing to say is that I don’t think this model of the atmosphere should be taken too far. I think everyone agrees that there are really many different timescales, and that the earth is not a simple capacitor.
In electrical terms, it’s really a linearised model, dealing with increments of I and V. So the R and C’s are more like the sort of transient impedances that you would get in analysing a transistor (hfe etc). So that is another issue – more CO2 could feed back in, but it is more complicated.
One should also say that the IPCC do investigate a huge range of feedbacks. There’s a whole chapter in the AR3 here.
The various cycles (ENSO, PDO) are one indication that the assumption of a white noise driver is shaky.
Pliny–
Using the method Schwartz proposed, whiteness is required to estimate τ. Schwartz doesn’t use this assumption to estimate the heat capacity. But, yes, using this method, you assume “F” is white. Schwartz’s paper does not provide much discussion of why this might be so. The only thing that was really done was to try to see if the temperatures looked “Red”. To the extent that the temperatures look “Red”, the result is “not inconsistent” with the assumption of white noise forcing.
(See how splendid claiming to “prove” by saying something is “not inconsistent” can be? Those are the terms thrown at us to “prove” GCM’s are good too! Obviously, “not inconsistent” is not the same as “proven”. Equally obviously, everyone knows this when they don’t buy into a particular theory or assumption. 🙂 )
White noise is certainly an assumption. I’m going to be discussing why it’s not entirely implausible later, but I haven’t yet. (Note once again: “not entirely implausible” 🙂 )
Also, it appears that we will have sizable uncertainty intervals. I think it’s still worth seeing how far this can be taken. In my opinion, which others need not share, knowing the value most consistent with an empirical approach is useful. Of course, any empirical value should be assigned appropriate uncertainty intervals. So, we are seeing what we get. I’m going to be giving it a try because I’m interested in doing that. But of course, people aren’t required to take the exercise or the final result seriously if they wish not to.
FWIW, in January, I took another tack, and fit the
dT/dt = -T/τ + α F
to forcing data and temperature data from GISS. In that case, the “F” values are smoothed, with no white noise. That method is not affected by the white noise assumption. In that case, the quantity ατ represents a sensitivity, and we get an estimate of τ. I sort of stopped that because I got interested in several other things going on at climate blogs, but I intend to get back to it.
In principle, if a) the forcing does have a sizeable white component and b) the simple ODE is a decent approximation, we should get similar values for τ both ways.
Lucia,
I hope I did not sound discouraging of your efforts to get an empirical handle on the time constant and sensitivity. It is important to try things that are based on simple concepts. They have the advantage that people can argue intelligently about them and can be made a bit more complicated if it is found necessary. Even if it turns out that the uncertainty is quite large it can still be valuable information.
I am reminded of the analogy made by Sir Arthur Eddington about making maps of the scientific territory. He talks of a system of inference where you are not wrong more than 1/q times. The maps start where you choose q equals infinity. This map is entirely correct but unfortunately is entirely blank. At the other extreme, where q =1 the map contains the most minute detail of which only an infinitesimal proportion is correct.
He then says “What a philosopher is to make of these maps I will not venture to say; but the scientist affirms that some of the intermediate maps (say between q=5 and q=20) can be of considerable assistance to a sojourner in the universe who has to find his way about.”
You have clearly been following some of the fallout from the Douglass paper and I hope you would agree that the climate modellers are making maps with too low a q. 🙂
I don't know if you have seen the other Scafetta & West paper about the link between TSI and surface temperature but it seems to use a similar relaxation model.
http://www.acrim.com/Reference%20Files/Scafetta%20&%20West_2007JD008437.pdf
Jorge: I’m not the slightest bit discouraged. I enjoy the conversation. I actually like people to point out flaws and issues. Discussion both helps me discover things I didn’t know and also helps me organize discussions to help others better understand which features of reality are captured by this simple model.
I’m working on providing a discussion of the random components in the external forcing; I have found 2 figures on line and am creating one. This is important to help people understand
a) why white noise, while imperfect, might be a decent first approximation when estimating the time constant and
b) explain why the GCM model results can’t possibly be used to “prove” this method is poor. (Foster, Annan, Schmidt and Mann suggested you could show the method doesn’t work by comparing to models, so this matters in the larger scientific conversation.)
When I post the discussion, you will see it answers some questions, but it does not “prove” what noise is ok. But, I think it likely that many of the readers (who seem a smart bunch) may make suggestions for improvements to the “simplest possible” model.
My philosophy is to always complete the simple one first and then move on. So, it may seem I ignore some of the points, but that is not so. It is simply the case that I can’t always deal with every complication immediately.
Lucia,
It’s rather funny: when you start with a simple model and get a sense of the system, everyone asks you to complicate the model and add more physical realism. Which of course you do. And then you end up with this complicated model with output all over the place. And then somebody suggests that you build a model to fit the responses of the detailed model. A model of the model.
And then somebody points out: hey, this model that fits the model was the simple model we started with!
Sometimes the pursuit of precision leads to ambiguity.
Lucia,
My problem with the white noise assumption is this. We’re very used to assuming white noise for all sorts of error estimation etc. There we know that even if the noise isn’t white, it won’t matter very much. But here it is critical. The extent of deviation from whiteness is actually the measure used to estimate sensitivity.
And it seems a very dodgy assumption. We know all kinds of major influences on the temperature history which contradict it: ENSO, PDO, varying aerosols, solar cycles. These all introduce autocorrelation, and here this is added directly into the inferred sensitivity. Even the GHG effect, which one hoped was eliminated by subtracting a trend, will leave an effect, because we know that it isn’t linear, and the deviation will also be attributed to the relaxation time and sensitivity.
Nick Stokes
Pliny/Nick–
White noise is a tremendous approximation. No doubt about it. Still, I think it’s worth doing the calculation that way to see what we get. Of course everyone is permitted to look askance at it.
But, if you don’t like the white noise assumption, you may prefer “lumpy”. 🙂
In January, I decided to give this a try:
1) Use the simple model dT/dt = -T/τ + α F
2) Get the F’s from GISS pages. (They are available at monthly values.)
3) Solve for α and τ by minimizing the residuals.
No white noise assumption. Rather, we are assuming F is the more-or-less smoothly varying values from GISS.
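As a rough sketch of what this sort of fit can look like (not the actual spreadsheet, solver, or GISS series), one can march the simple model forward for trial values of τ and α and keep the pair that minimizes the sum of squared residuals against the observed anomalies. The forcing and observed arrays below are made up, and the brute-force grid search is just the most transparent way to show the idea.

```python
import numpy as np

def model_temperature(forcing, tau, alpha, dt=1.0, T0=0.0):
    """March dT/dt = -T/tau + alpha*F forward with a simple explicit step."""
    T = np.empty(len(forcing))
    T[0] = T0
    for k in range(len(forcing) - 1):
        T[k + 1] = T[k] + dt * (-T[k] / tau + alpha * forcing[k])
    return T

def fit_lumpy(forcing, observed, taus, alphas, dt=1.0):
    """Brute-force least squares over a grid of (tau, alpha); crude but transparent."""
    best = (None, None, np.inf)
    for tau in taus:
        for alpha in alphas:
            resid = observed - model_temperature(forcing, tau, alpha, dt)
            ss = np.dot(resid, resid)
            if ss < best[2]:
                best = (tau, alpha, ss)
    return best

# made-up stand-ins for an annual forcing series (W/m^2) and temperature anomalies (C)
rng = np.random.default_rng(2)
years = np.arange(1880, 2005)
forcing = 0.01 * (years - years[0]) + 0.2 * rng.normal(size=len(years))
observed = model_temperature(forcing, tau=8.0, alpha=0.1) + 0.05 * rng.normal(size=len(years))

tau_hat, alpha_hat, _ = fit_lumpy(forcing, observed,
                                  taus=np.arange(2.0, 20.5, 0.5),
                                  alphas=np.arange(0.02, 0.30, 0.01))
print("fitted tau =", tau_hat, "years;  fitted alpha =", alpha_hat,
      ";  implied sensitivity alpha*tau =", round(tau_hat * alpha_hat, 2), "C per W/m^2")
```

With this sign convention the equilibrium response to a steady forcing is T = ατF, which is why the product ατ plays the role of the sensitivity (R = τ·α in the electrical analogy discussed above).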
I’ll be talking about this again soon, and also about the Schwartz method. I figure it’s always best to have more than one method of estimating.
As it happens, I prefer the “Lumpy” method for a variety of reasons. (It still has the oversimplified physics. Now that I’ve learned more, I think I can deal with statistical issues I didn’t understand before. So, I may be able to get a better number out of it. I also need uncertainty bounds. I think I’m going to get those after I figure out how to do it for the white noise case.)
Lucia,
I think I have got the hang of what you are doing with the model. In one case you calculate the time constant using the autocorrelation. Here you do not need to know the magnitude of the forcing as long as it is white enough. Also, by this method, you cannot find the sensitivity without another way to establish the heat capacity.
The lumpy approach is rather different as it assumes you do know the forcing and it is a matter of finding a time constant and sensitivity that will best reproduce the observed temperature fluctuations. With this approach, both the heat capacity and the sensitivity are found.
It certainly looks like an independent method of measuring/estimating the heat capacity would be beneficial in both cases. In the first you need it to calculate sensitivity and in the second it would give a fairly good check of the results.
It occurs to me that another possibility is that if both heat capacity and sensitivity are known/estimated/calculated, it is possible to work backwards to calculate what the forcing must have been to produce the observed temperature series. In my electronic model I would simply apply the temperature/voltage to the parallel RC and measure the resulting forcing/current. I am just guessing, but if the noise in the observed temperature is AR1, it might result in white noise in the forcing after this reverse transform.
I have just realised why my maths skills are so poor. I have never needed to write or solve differential equations in most of my career in electronics because, with a dual beam scope, I can observe the equations solving themselves. There I was, thinking I was just putting the components together to make a circuit, when in reality I was building analog computers to solve complex equations. 🙂
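A hedged sketch of the “reverse transform” Jorge describes, with a made-up temperature series standing in for real data: given T and assumed τ and α, the implied forcing follows from rearranging the model as F = (dT/dt + T/τ)/α, with the derivative approximated by a finite difference.

```python
import numpy as np

def implied_forcing(T, tau, alpha, dt=1.0):
    """Back out F from dT/dt = -T/tau + alpha*F, using a forward difference for dT/dt."""
    dTdt = np.diff(T) / dt
    return (dTdt + T[:-1] / tau) / alpha

# a made-up temperature series, just to exercise the function
rng = np.random.default_rng(3)
T = 0.05 * np.cumsum(rng.normal(size=50))
print(np.round(implied_forcing(T, tau=8.0, alpha=0.1)[:5], 2))
```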
Pliny–
Great eye balls!
Jorge–
You have the general idea. The open question is, of course, can the simplified lumped parameter model work at all.
I don’t think those commenting on Schwartz demonstrated the simple lumped parameter model can’t work. But, make no mistake, that doesn’t mean the simplified equation actually works. Certainly, in Schwartz’s paper he missed a lot of stuff. But, it seems to me, it would be useful to get a sensitivity and time constant out of that data (if at all possible). So, I’m trying this a bunch of ways, and I’ll get more complicated if necessary.
(The “simple models” in the IPCC are lumped parameter type models. They just have more “lumps”.)
Hi all – yes I’m still thinking about the periodic forcings etc, need to spend a bit more time trying to understand Schwartz’s method before saying more on that.
But on the heat capacity issue – there’s actually a standard measure used on other planets and asteroids, but apparently not on Earth for some reason, called the “thermal inertia”. Some explanation of it here:
http://nathaniel.putzig.com/research/ti_primer.html
– it has unusual units that include a product of square roots of seconds – this is because diffusive processes like thermal conduction tend to progress at a rate proportional to the square root of the time, rather than linearly in time. In essence, the effective heat capacity grows as the square root of the time period you are looking at, because with more time you get deeper layers below the surface involved.
Thermal inertia is certainly relevant for the spread of heat from the surface to interior on Earth’s land regions, but I’m not sure how applicable it is to the spread of heat through liquids or gases where convective motion is available. It certainly should provide a lower bound on the relevant “heat capacity” effect though.
So I guess my question is – if you include a “heat capacity” that is actually a function of the frequency under discussion, increasing as frequency to the minus 1/2 power, what impact does that have on such a “lumped parameter” approach?
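To put rough numbers on the frequency dependence Arthur describes: for periodic heating of a semi-infinite conducting medium, the heat only penetrates to a skin depth of order sqrt(2κ/ω), so the effective areal heat capacity scales like the thermal inertia sqrt(kρc) divided by sqrt(ω), up to a factor of order one. The sketch below uses rough, textbook-style soil values chosen purely for illustration, not vetted numbers for the Earth.

```python
import numpy as np

def effective_heat_capacity(k, rho, c, period_years):
    """Rough areal heat capacity (J m^-2 K^-1) felt over one forcing period.

    Uses C_eff ~ rho*c*delta with skin depth delta = sqrt(2*kappa/omega); equivalently
    C_eff ~ sqrt(2/omega) * I, where I = sqrt(k*rho*c) is the thermal inertia. Good to
    a factor of order one at best.
    """
    omega = 2.0 * np.pi / (period_years * 3.156e7)   # forcing frequency, rad/s
    kappa = k / (rho * c)                            # thermal diffusivity, m^2/s
    delta = np.sqrt(2.0 * kappa / omega)             # skin depth, m
    return rho * c * delta

# rough illustrative values for dry soil: k ~ 1 W/m/K, rho ~ 1600 kg/m^3, c ~ 800 J/kg/K
for years in (1, 10, 100):
    print("%5d yr period:  C_eff ~ %.1e J m^-2 K^-1" % (years, effective_heat_capacity(1.0, 1600.0, 800.0, years)))
```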
Arthur,
That’s an excellent reference!
First: I am sure the simple model eventually needs to be fixed up, abandoned etc. I’m just trying to continue thinking about it to see if there is a way to extract a correct time constant.
But as for your more general question, the impact on the lumped parameter approach is to emphasize that using
dT/dt = -T/τ + F
is an approximation, and it could, of course, be a very bad one. 🙂
One problem is the isothermal assumption (a single, spatially uniform T).
Another problem is the single time constant issue. (And it would be, even if the earth were a copper lump with no atmosphere.)
Both Schwartz and Scafetta basically abandoned the single time constant in their responses; Scafetta in one way, Schwartz in another. (I’m sticking with it until I find a more satisfying method of ‘complicating’ the model, and find out precisely what it gets right or wrong.)
On the physics part:
In engineering, the “isothermal” assumption is good for a homogeneous object when a parameter called the Biot number is small.
The Biot number = hD/K. It is formed from the convective heat transfer coefficient h, the characteristic dimension D of the object, and the conductivity K of the object.
This ‘analogy’ translates oddly to the earth, because the “K” would be some effective turbulent diffusivity for the atmosphere/ocean etc. The ‘convective heat transfer coefficient’ is the constant of proportionality between heat loss from the surface of the planet and the earth’s temperature.
Obviously, we won’t get a very small Biot number because we do see temperature gradients on earth. Also, there is the problem that the rate of heat loss is through the radiatively participating media of the semi-transparent atmosphere.
If the earth were a spherical lump of copper with no atmosphere, we could probably fix the analysis for finite Biot number. (I wouldn’t be surprised if that’s what the atmospheric physicists are doing.)
We could probably fix it up for a planet with a nearly opaque atmosphere. But the earth is in between.
The fix up would involve figuring out the correction to the simple equation that happens because heat diffuses into the planet at a finite rate. So, the words on that web page read right.
But, I don’t know if we can figure out the correct fix for the earth. The earth is harder. (Also, we want to know things with greater precision, which is a big problem.)
Anyway: I think we are at the same place: Can dT/dt = -T/τ + F possibly apply to the earth, even approximately?
BTW: It gives odd results when I apply the fit to GISS model data with the GISS forcings. (It fits grrr8. But the fitting parameters aren’t what GISS gets for the full model. The method applied to Model E results also doesn’t get the same parameters as for the earth — but I don’t know what that means.)
Well, I think my point was, it would change the equation a little, but maybe it wouldn’t be that much harder to solve.
dT/dt = -T/tau + F(t)
can be Fourier transformed:
T(t) = Sum_omega T_omega exp(i omega t)
=> i omega T_omega = -T_omega/tau + F_omega
The modification I’m suggesting would change the T_omega/tau term to T_omega sqrt(omega)/tau’ so that for long time periods (omega heading to zero) the effective value of tau = tau’/sqrt(omega) becomes large. That way you have exactly the same number of model parameters, but a (perhaps) more realistic description of the time dependence of effective heat capacity…
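As a sketch of what that modification does to the response (using my own illustrative parameter values, and the damping sign convention used elsewhere in the thread), one can compare the gains 1/(iω + 1/τ) and 1/(iω + sqrt(ω)/τ') at a few forcing periods: the sqrt(ω) version damps long periods less, i.e. the effective time constant grows at low frequency.

```python
import numpy as np

tau = 8.0         # fixed time constant, years (illustrative)
tau_prime = 8.0   # tau' in the sqrt(omega) variant (illustrative; its units differ from tau)

periods = np.array([1.0, 5.0, 20.0, 100.0])   # forcing periods, years
omega = 2.0 * np.pi / periods

# gain of dT/dt = -T/tau + F:  T_w = F_w / (i*w + 1/tau)
gain_fixed = np.abs(1.0 / (1j * omega + 1.0 / tau))

# gain with the damping term replaced by sqrt(w)/tau', i.e. effective tau = tau'/sqrt(w)
gain_sqrt = np.abs(1.0 / (1j * omega + np.sqrt(omega) / tau_prime))

for p, g1, g2 in zip(periods, gain_fixed, gain_sqrt):
    print("period %6.1f yr:  fixed-tau gain %6.2f,  sqrt(omega) gain %6.2f" % (p, g1, g2))
```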
Arthur– That would be cool! I’ll give it a try a bit later. I’m afraid I’m diverted into coming up with a simple analogy that will help people understand what’s an “apple”, and “orange” and “the average of a bunch of fruit”. It may be called: “Are Men from Kerala Short? Why IPCC models really do falsify. “
Arthur,
I don’t think the thermal inertia concept will work. Its model is an infinite medium with finite conductivity responding to periodic heating. But estimating sensitivity requires a longterm equilibrium state, which for that model is zero.
Generalising Lumpy a bit, we have dT/dt = H(T,F,t) where H is a function with some parameters. We want to estimate the parameters from transient behaviour so that, in the long term equilibrium, we can solve
H(T,F,.)=0. I’ve put in a . to say that the dependence of H on t will have to fade away. This relation gives T in terms of F, and the derivative is the sensitivity.
For Lumpy, H is linear in T and F and independent of t. To introduce diffusion, you’d have to track the temperature profile in the Earth.
Nick Stokes
Arthur,
I am not sure whether it would help much to complicate the heat capacity in terms of layers because, like you, I suspect that convection may be at least as important as conduction. However if I wanted to try, I would simply make my capacitor into a series of smaller ones separated by resistances. The capacitors would be the heat capacity per layer and the resistances would represent the conduction between the layers. At high frequency, only the surface capacitor would have much effect and at low frequency they would behave as though they were all in parallel.
I would think that we would not need an infinite number of these, as at some depth we can assume that no temperature changes occur at frequencies of interest. No doubt you can find some kind of transmission line type of equation to describe this behaviour, treating the layers as zero thickness but that is beyond my capability.
This more complicated version of my model capacitor would not alter the value of R as that represents the heat flow moving from the surface when the surface temperature deviates from some equilibrium value.
I don't think this more detailed version of the capacitor really adds more free parameters to the model as the ratio between the heat capacity of a layer and the conduction between layers is a physical property.
As I mentioned to Lucia, I am used to solving these kinds of equation with a signal generator and a scope. My ability to do it with pencil and paper is rather limited. 🙂
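As a rough sketch of Jorge’s layered-capacitor idea (component values are arbitrary, and this is a crude stand-in for a SPICE run, not a calibrated model of the Earth): a chain of capacitors separated by resistances looks like a single small capacitance at high frequency and like the sum of all the capacitances at low frequency.

```python
import numpy as np

def ladder_effective_capacitance(omega, caps, resistors):
    """Crude effective capacitance C_eff(omega) = 1/(omega*|Z|) of a C-R-C-R-... ladder.

    caps[i] is the capacitance of layer i (to ground); resistors[i] connects layer i to
    layer i+1. The ladder is driven at layer 0 and open below the deepest layer. Using
    the impedance magnitude is a rough, magnitude-only definition of "effective".
    """
    s = 1j * omega
    z = 1.0 / (s * caps[-1])                       # deepest layer: just its own capacitor
    for c, r in zip(caps[-2::-1], resistors[::-1]):
        z_branch = r + z                           # path down through the connecting resistor
        z_cap = 1.0 / (s * c)                      # this layer's own capacitor
        z = z_cap * z_branch / (z_cap + z_branch)  # parallel combination
    return 1.0 / (omega * abs(z))

caps = [1.0, 1.0, 1.0, 1.0]   # four equal layers (arbitrary units)
resistors = [1.0, 1.0, 1.0]   # conduction between adjacent layers

for omega in (10.0, 1.0, 0.1, 0.01):
    print("omega = %5.2f   C_eff = %.2f" % (omega, ladder_effective_capacitance(omega, caps, resistors)))
```

At the high-frequency end the result is roughly the single surface capacitance; at the low-frequency end it approaches the sum of all four, which is the qualitative behaviour described above.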
Ok I’m confused about something, or maybe doing something wrong. What I started from was trying to reproduce Schwartz’s approach (2007 paper), which is outlined in his figure 5.
First pass I got a quite different auto-correlation curve (dips negative at lag of 13 years), so I was trying to track back the source of the trouble. And I realized his fig 5a that is supposed to be the “Original time series, GISS, [Hansen et al., 1996]”, doesn’t agree with the GISTEMP January-December global (land-ocean) mean temperature numbers, from the start. In particular, it comes close to a 0.0 anomaly in 1883 or so, and again in 1890. But if I look at the GISTEMP series here:
http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt
which was also in Lucia’s spreadsheet, column N of the raw GISS data, that doesn’t come close to 0 until 1926, and doesn’t go above 0 until 1937.
The closest match I could find is the monthly data for February (column C). But that has much higher ups and downs than is shown in Schwartz’s figure 5a – in particular there’s a deep dip in February 1951 to -0.39, which isn’t there at all. And when you detrend, the root mean square residual is almost twice as large for the monthly series as for the annual one, so a lot of extra noise in there that probably isn’t useful for this purpose.
So what data set is he actually using there? Or has something changed with GISTEMP since he made those figures? This seems very strange…
Arthur—
Unfortunately, one of the frustrating things about the GISS temperature record is that it changes. You would think the past temperatures wouldn’t change, but that’s not so. The method used to interpolate and correct for stations is such that corrections can propagate into the past, and quite far.
Climate audit has discussed this, and Steve has been archiving past versions of historic GISS data. It changes. . .
To find out the exact data set Schwartz used, you might need to contact Schwartz!
I don’t know if the Hadley temperatures change backwards.
Lucia – I’d heard of 0.02 degree or so changes in GISS, but this would be a major 0.2 degree change – the current GISS data is all below -0.2 C (GMST) from 1880 to 1895 except for a -0.15 reading in 1889. That’s a major, major change in the early record, if Schwartz really got it from GISS. You really think that’s the cause? But yes, an email would be useful to clarify…
No. I wouldn’t expect the numbers to change 0.2C in the record. During the period when I’ve been looking at falsification, I’ve seen small changes, and mostly recent. Steve Mc has been peering at that data for a LONG time, and so sees more changes.
So, if Schwartz’s data differs from the data by that much, I can’t really guess why. But, I do know someone who posted at Climate Audit has been downloading data regularly. When they first started, it created a bit of a brou-ha-ha. In the end, he got lots of data.
So, if Schwartz just made a mistake that’s an issue.
I’m going to be going back to looking at this now that I got together a response post. 🙂
But, one of the things I’m going to post may illustrate a different reason not to trust the single lump model! ( I fit it to model E, which we know was forced with that forcing file. )
Lucia – I contacted Stephen Schwartz and he responded: it seems he used the GISS Met Station data series, not the GISS Land + Ocean series, for his time constant analysis (figure 5 in the 2007 paper) – even though he used the Land + Ocean in his figure 2 there…
That can’t be the right thing to do though, can it? Maybe it doesn’t matter much, but I would have thought if you were arguing about the ocean heat capacity and how it may or may not be causing autocorrelation in temperatures, you ought to include the ocean surface temperature data in your analysis!
Hi Arthur– I would think Land/Ocean data make much more sense in terms of this model! If there is one time constant, you would want the closest thing to an “average” temperature for the “climate” you could get.
Looks like Schwartz definitely made a mistake – an interesting one though.
I’ve re-done his calculation using the data he used (GISS MET station annual data through 2004), and then compared it with the same calculation using the GISS Land/Ocean series (annual data through 2007). The Land/Ocean numbers are definitely more auto-correlated, and give you a consistently higher time constant, at least a year more (i.e. 6 years +, instead of 5, for the original Schwartz approach):
http://inlinethumb32.webshots.com/40863/2327358600101763211S425x425Q85.jpg
http://inlinethumb02.webshots.com/41089/2191934400101763211S425x425Q85.jpg
The first thing this suggests is, as with your analysis Lucia, and Scafetta’s comment, that additional short-term variability (measurement noise or something else – the MET station data has a 30% higher rms residual from the trend than the land/ocean data) biases Schwartz’s method to give you too low a time constant. Or maybe it’s just that the time constant appropriate for land is shorter than for oceans?
Arthur– I think it’s possible for the time constant for the land to be different from the oceans. We know that in terms of literal fact, there isn’t “just one”. In principle, as we get “deeper” into the planet, the time constant should be longer. That’s a problem for the models.
Because water has a large thermal mass and half way decent mixing of the upper layers, the world may act more “massive” at that point.
But… I could be full of it on this because obviously, dirt also has lots of mass etc.
Do you know whether the peak of summer is nearer the solstice over the ocean in the mid latitudes? Or does it lag the peak over land?
The higher thermal conductivity of water plus convection, which you don’t have on land, plus the high heat capacity of water, definitely makes the water temperature slower to respond. You can tell just by dipping your toes in ocean water: around 40 degrees latitude where I’ve done it, the water is cold until past the summer solstice, and then stays warm to at least the fall equinox. But that’s a month-to-month cycle of response; how does this correspond with the multi-year time constant in question? Perhaps instead of a single time-constant, a frequency dependent time constant as I suggested elsewhere would be a better match for reality.
By the way, I was trying to embed the above graphic links in the page here – how do I do that?
Ok, trying again on images. First is the autocorrelation: blue is for the GISS Land/Ocean temperature series, while red is for the GISS Met station temperatures; the red curve matches the autocorrelation curve in figure 5 in Schwartz’s 2007 paper, while the blue curve is significantly more autocorrelated for lag times of a few years:
This next curve is the time constant tau calculated by Schwartz’s method, again blue curve is the GISS Land/Ocean temperature series, and red is the Met station which Schwartz apparently used. Note that the blue curve shows time constants of at least a year longer in the early period:
Arthur,
This link gives some info about thermal conductivity and capacity for land. It turns out that moisture content can have a huge effect.
http://www.geo4va.vt.edu/A1/A1.htm
Arthur– Just paste the links into the comment. I’ll add the image html!
First image I’ve been trying to post is here (autocorrelation for GISS land/ocean – blue, and MET station – red)
http://inlinethumb45.webshots.com/16876/2327358600101763211S600x600Q85.jpg
Second one is here (time constant):
http://inlinethumb34.webshots.com/41697/2191934400101763211S600x600Q85.jpg
Thanks if you can embed them. How did John V do it?
Arthur, I changed the html, but the images don’t load. I get a forbidden message when I click. I’ve been fixing John’s html. That’s what makes me think WordPress is “protecting” me! (I’ll sort this out at some point. I may write a plugin to only let people with “trusted” emails post images automatically. I don’t want to risk porn. . . Not that it’s likely, but well.. yah know…)
Hey, it’s working now?
Schwartz confirmed with me that he did use the Met station data instead of Land+Ocean; he’s contacted the journal to get some sort of erratum/correction in.
For fun, I’ve repeated the analysis with the HadCRUT3 annual temperature series downloaded a couple of days ago from http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/annual (using 1850 to 2007 data, so the linear trend is fit to a different time period here). Comparing with the GISS numbers, the autocorrelation of the Hadley data is clearly stronger:
and the time constant by Schwartz’ method is clearly larger:
at least 8 years. Any ideas why the difference?
Lucia, if you could embed the images, thanks!
Arthur–
One of those is a thumbnail. I increased the size to see but then it’s blurry! But I’m assuming the higher green line is the Hadley?
I never looked at the Hadley and compared to GISS. I’d figured out the GISS Land/Ocean went from 5-8 years by adding the white noise and doing it my way. I’ve also run enough “Monte Carlo” type runs to see that the results are variable both under “my way” (with white noise) and the original way (as we already knew from FASM).
So, the difference could just be the portion of the variability due to the measurement error. Or not– I’ll know better when I get back to checking how much variability you get when the “weather” part stays the same but the white noise changes.
If this turns out to be within the scatter for the measurement uncertainty, averaging the two might help. (In which case, it will be useful to figure out whether we should average the temperature first, then do the correlation, or run the correlations and then average!)
( Yep, I’m going to check this even if, at his blog, James Annan posts that he’s “amused” that I’m checking how the bias he pointed out as affecting Schwartz’s method translates to my method. FWIW, the properties of the bias look different when you add white noise. For example, based on small amounts of fiddling, the bias for the first lag one term tends to give correlations that are too high compared to the average values “expected” for a process that is the sum of red noise and white noise. The later lags do have correlations that are too low. )
Ok– Now I’m going to have to see what my method gives for the Hadley. My method gave higher values than Schwartz based on Land/Ocean and Met, so it will likely do so on Hadley. (It’s going to take a little while though. I’m dealing with JohnV’s uncertainty intervals as that seems to be the major current climate blog war kerfuffle.)