In a recent post, Tamino fit time series data for global temperature to applied forcings using a statistical model which he described as being based on a “two-box model”. Tamino characterized the specific two-box model underlying his statistical model in these words:
Hence in addition to simple regression models, I’ll also use a two-box model which allows for both a “prompt” response to climate forcing and a long-term response. This can be thought of as a rough mimicry of an atmosphere-ocean model, where the atmosphere responds quickly while the ocean takes much longer. I’ll allow the atmosphere to respond very quickly (in a single year) while for the oceans I’ll use a timescale of 30 years.
I found Tamino’s recent post a bit puzzling; in particular I thought there might be a disconnect between the statistical model underlying his regression and the set of two-box models that might correspond to a physically realistic approximation for the earth’s climate. I asked questions. Evidently they were “wrong” questions.
After some discussion I remain puzzled.
Why am I puzzled? Well, it seems that if the two boxes are supposed to represent the earth’s atmosphere and ocean respectively, Tamino’s choice of time constants used for regression may represent a violation of the 2nd law of thermodynamics.
Let me explain. I’ll begin by discussing the physical process reflected in a general two box model.
Cartoon of simple two box model
Given Tamino’s description of his box model, I think the cartoon below may represent the physics of the “two-box model”. In this cartoon, the top box represents “the atmosphere” or “surface” of the earth with a temperature Ts and the lower box represents “the ocean” with a temperature To.
Once we define what each box contains, we can proceed to the energy balance. That is: application of “The First Law of Thermodynamics.”
Energy balance for box “s” (i.e. the ‘surface’ box):
In the two box model shown, with some simplification for the radiative heat loss from the box, conservation of energy for box s can be expressed:

(1) Cs dTs/dt = Fs − (Cs/τs) Ts − β (Ts − To)
In this equation “Cs” represents the heat capacity for whatever portion of the climate system we consider to include “the atmosphere” (i.e. surface). The quantity Fs represents external forcing for the “surface”, and corresponds to the anomalous heat added to the “surface” box from outside the earth’s climate system; the term containing “β” represents the heat transfer between the “atmosphere” and “ocean” boxes.
The quantity τs has dimensions of time and represents a type of time constant for the surface box. I say “a time constant” and not “the time constant for the atmosphere” because τs does not necessarily correspond to Tamino’s time constant of 1 year. (It is worth noting that the ratio Cs/β also has dimensions of time and represents a second time constant.)
Before proceeding to the second box, it’s worth noting some constraints on the magnitude of some of the parameters in the box model. Heat capacity, Cs, is a physical property and is positive by definition. To make physical sense, neither the time constant τs nor the heat transfer coefficient β may be negative. Negative values of τs or β would imply heat flowing from a cold body toward a hot body and so violate The 2nd Law of Thermodynamics.
A similar equation describes the energy balance for box ‘o’ (i.e. the ‘deep ocean’ box):

(2) Co dTo/dt = Fo − (Co/τo) To − β (To − Ts)
In this equation Co, β and τo must also be positive; negative values for any of them would result in a box model that violates the 2nd law of thermodynamics. (For those counting: so far, the two equations contain three linearly independent time constants.)
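For readers who like to see the structure in code, here is a minimal sketch of equations (1) and (2), integrated with forward Euler. Every numerical value is an assumption chosen only for illustration; this is my sketch, not Tamino’s model:

```python
# Minimal sketch of equations (1) and (2) with assumed, positive parameters.
Cs, Co = 1.3e7, 4.0e8        # heat capacities, J/(m^2 K) -- assumed values
tau_s, tau_o = 2.9e6, 3.0e9  # time constants, s -- both must be positive
beta = 1.0                   # box-to-box transfer coefficient, W/(m^2 K), >= 0
Fs, Fo = 2.0, 0.0            # step forcing applied to the surface box, W/m^2

dt = 8.64e4                  # one day, in seconds
Ts = To = 0.0                # temperature anomalies, K

for _ in range(40 * 365):    # march forward forty years
    dTs = (Fs - (Cs / tau_s) * Ts - beta * (Ts - To)) / Cs
    dTo = (Fo - (Co / tau_o) * To - beta * (To - Ts)) / Co
    Ts, To = Ts + dt * dTs, To + dt * dTo

print(Ts, To)  # the surface box equilibrates in months; the ocean box lags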
Additional constraints
The constraints on the parameters discussed so far arose from restrictions imposed by “The Second Law of Thermodynamics”.
However, if we hope to create a two-box model that represents a realistic approximation for the earth, there are additional constraints on the magnitudes of the various parameters appearing in (1) & (2). I’ll discuss two of these, providing back-of-the-envelope estimates of their magnitudes.
Back of the envelope estimate for τo
If we think about the earth’s geometry, we are likely to notice that the atmosphere lies between the ocean and outer space. Moreover, at least on cloudy days, the top of the atmosphere is not entirely transparent, so we might imagine that very few of the photons radiating from the surface of the ocean reach outer space without being intercepted by the atmosphere. If we went so far as to speculate that all photons radiated from the ocean are intercepted by the atmosphere, we would set the heat radiated from the ocean to the universe to zero, and cross that heat loss from our cartoon as shown below:

Likewise, given the definition of forcing, one might also speculate that the external forcing should be applied to the surface box, and cross that term out as well.
If one thought these assumptions were justifiable, equations (1) and (2) would become:
(1) Cs dTs/dt = Fs − (Cs/τs) Ts − β (Ts − To)   [unchanged]
(2a) Co dTo/dt = −β (To − Ts)   [with the terms Fo and (Co/τo) To struck out]

Now, under this assumption, recognizing that the heat capacity of the deep ocean is not zero, by striking out the term (Co/τo) To we are effectively making the assumption that τo = ∞.
Are we going to go so far as to assume the time constant of the ocean is infinite? No. But I will go so far as to suggest that τo in equation (2) is positive and “large”.
Back of the envelope estimate for τs
If we examine equation (1), we might recognize that the term (Cs/τs) Ts originates from the linearized heat transfer relation for radiative heat transfer about a baseline value of Ts,o. The temperature Ts is, in fact, an anomaly, or difference between the actual temperature and Ts,o. Expanding the Stefan-Boltzmann law about the baseline Ts,o results in:

σ (Ts,o + Ts)⁴ ≈ σ Ts,o⁴ + 4 σ Ts,o³ Ts

The second term, 4 σ Ts,o³ Ts, represents a simplification for the rate of radiative heat loss from the atmosphere to space; it is what the term (Cs/τs) Ts stands in for.
The quantity Cs represents the heat capacity of the “surface box”; NOAA estimates its magnitude as follows:
The heat capacity of the global atmosphere corresponds to that of only a 3.2 m layer of the ocean.
If the heat capacity of the global atmosphere is equal to a 3.2 m layer of ocean water, its heat capacity based on the properties of water is approximately

Cs ≈ (1000 kg/m³)(4186 J/kg·K)(3.2 m) A ≈ 1.3 × 10⁷ A J/K

where A is the surface area of the earth.

Assuming the heat loss to the universe is represented by the linearized term 4 σ Ts,o³ Ts, around some equilibrium temperature for the earth (say Ts,o = 273 K), we estimate the rate of heat loss per degree of temperature anomaly over the area A to be

4 σ Ts,o³ A ≈ 4.6 A W/K
Taking the ratio of these two terms, we find τs ~ 0.09 years. This is an approximation, but clearly different from the value of 1 year Tamino used to capture the variations in the atmosphere in his regression.
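For anyone who wants to reproduce the arithmetic, a short script (3.2 m is NOAA’s figure quoted above; 273 K is the baseline assumed above):

```python
# Back-of-the-envelope check of tau_s.
sigma = 5.67e-8                         # Stefan-Boltzmann constant, W/(m^2 K^4)
T_base = 273.0                          # assumed baseline temperature Ts,o in K
Cs = 3.2 * 1000.0 * 4186.0              # J/(m^2 K): 3.2 m of water (NOAA's figure)
tau_s = Cs / (4.0 * sigma * T_base**3)  # linearized radiative time constant, s
print(tau_s / 3.156e7)                  # -> about 0.09 years
```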
Recall that I said Tamino’s choice of 1 year for the time constant he associated with “the atmosphere” does not necessarily correspond to the value of τs as it appears in equation (1); τs is a time constant, not the time constant. So the difference between 0.09 years and 1 year is not necessarily a problem.
Our next step is to discover the relationship between the time constants Tamino mentioned in his narrative (1 year and 30 years), and those in equations (1) and (2).
Rearranging the generic two box model
Let us return to the two box model and general solutions for this set of ODEs. When doing this, it is useful to re-arrange the equations into the following form:

(1c) dTs/dt = Fs/Cs − (αs + γs) Ts + γs To
(2c) dTo/dt = Fo/Co − (αo + γo) To + γo Ts
where αo = 1/τo; αs = 1/τs; γs = β/Cs and γo = β/Co are all inverse time constants.
General solutions to the ODEs
When considering possible solutions to equations (1c) and (2c), one generally begins by examining general solutions to the homogeneous equations: that is, solutions when the forcings F are equal to zero. It happens that we can solve that set of equations by making the following substitution:

(3) Ts = As e^(λt),  To = Ao e^(λt)
where the λ are constants corresponding to solutions; these constants are called “eigenvalues”. The eigenvalues have dimensions “inverse time” (or units 1/seconds, 1/year etc).
If I understand Tamino’s narrative (which I may not), when performing his regression, Tamino assumed that, whatever the values of the parameters in (1) and (2) might be, the combination results in a set of equations with these two eigenvalues:
−1/λ+ = τ+ = 30 years and −1/λ− = τ− = 1 year.
His regression was then based on a statistical model that assumed any solution for the global surface temperature is a sum of two solutions of the form (3) with these two specific eigenvalues.
Let us now explore how the magnitudes of the eigenvalues (λ+ and λ−) relate to the time constants and other parameters in the two box model. Substituting (3) into the homogeneous equations corresponding to (1c) and (2c) we get:

λ As = −(αs + γs) As + γs Ao
λ Ao = γo As − (αo + γo) Ao
For reasons that involve the word “eigenvalue” it’s possible to show that the following equation provides two specific values of λ (i.e. λ+ and λ−) which correspond to two general solutions to the system of ODEs represented by (1c) and (2c):

(4) λ± = ( −b ± √(b² − 4c) ) / 2

where

b = (αs + γs) + (αo + γo)
c = (αo αs) + (αs γo + αo γs)
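A quick numerical sanity check that (4) reproduces a direct eigenvalue computation; the four inverse time constants below are arbitrary positive stand-ins, not estimates for the earth:

```python
# Check that the quadratic-formula eigenvalues (4) match numpy's.
import numpy as np

alpha_s, alpha_o = 11.0, 0.01   # 1/tau_s, 1/tau_o in 1/years -- assumed
gamma_s, gamma_o = 0.5, 0.02    # beta/Cs, beta/Co in 1/years -- assumed

A = np.array([[-(alpha_s + gamma_s), gamma_s],
              [gamma_o, -(alpha_o + gamma_o)]])
print(np.linalg.eigvals(A))     # direct eigenvalue computation

b = (alpha_s + gamma_s) + (alpha_o + gamma_o)
c = alpha_o * alpha_s + (alpha_s * gamma_o + alpha_o * gamma_s)
disc = np.sqrt(b**2 - 4.0 * c)
print((-b + disc) / 2.0, (-b - disc) / 2.0)  # the same two eigenvalues
```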
If we had actually specified our box model, all the greek symbols in the equations above would take on numerical values, corresponding to the units we chose to solve our problem. We would be able to compute the magnitude of the eigenvalues for any specific box model and compare to Tamino’s choices for his eigenvalues. Because Tamino didn’t actually specify the box model he used, we can’t quite do that.
Now, on to the 2nd law of thermodynamics!
But we can ask this:
Recall, I said to avoid violating the 2nd law of thermodynamics, the values of β, τo, and τs in (1) & (2) must be positive. Moreover, τo is “large” and τs ~ 0.09 years.
Now, let’s do a little high school algebra. Return to equation (4) and take the sum of the two solutions to the quadratic equation:

(5) λ+ + λ− = −b = −[(αs + γs) + (αo + γo)]

Inserting definitions to express (5) in terms of the time constants τs and τo and the heat transfer coefficient β in the box model equations (and recalling τ± = −1/λ±), we find:

−1/τ+ − 1/τ− = −(1/τs + β/Cs + 1/τo + β/Co)

Rearranging:

(6) 1/τ+ + 1/τ− = 1/τs + 1/τo + β/Cs + β/Co
Tamino did not specify a sufficient number of parameters to test the equality; but that doesn’t mean we must stop here. To avoid a violation, the following inequality must hold true:

(7) 1/τs ≤ 1/τ+ + 1/τ−

This follows because 0 ≤ 1/τo, 0 ≤ β/Cs and 0 ≤ β/Co: every term dropped from the right hand side of (6) is non-negative.
Recall I estimated τs ~ 0.09 year. Recall Tamino used τ+ = 30 years and τ− = 1 year.
Stuffing these into (7) we find:

11.1 ≤ 1.03

where both numbers have units years⁻¹ (11.1 ≈ 1/0.09; 1.03 ≈ 1/30 + 1/1).

This statement is false.
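The same arithmetic as a script:

```python
# Check of inequality (7): 1/tau_s <= 1/tau_plus + 1/tau_minus
tau_s = 0.09                      # years, estimated above
tau_plus, tau_minus = 30.0, 1.0   # years, Tamino's choices
print(1.0 / tau_s)                              # ~11.1 per year
print(1.0 / tau_plus + 1.0 / tau_minus)         # ~1.03 per year
print(1.0 / tau_s <= 1.0 / tau_plus + 1.0 / tau_minus)  # False: (7) is violated
```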
So, it would seem that if we think the two boxes in Tamino’s box model represent the earth’s “atmosphere” and “ocean” respectively, as his narrative suggests, his choice of time constants τ+ = 30 years and τ− = 1 year results in a two-box model that violates The 2nd Law of Thermodynamics.
If they represent something else, well, who knows?
So, does Tamino’s choice of parameters violate the 2nd law of thermodynamics?
Well… this back-of-the-envelope test does suggest that if the two boxes in the two-box model consist of “the earth’s atmosphere” and “the earth’s ocean” respectively, Tamino’s choice of time constants (30 years and 1 year) results in a box model that violates the second law of thermo. But, in answer to my questions about whether or not his statistical model mapped into a two-box model that was physically realistic, Tamino said he checked. So it’s possible I simply do not understand what each of the two boxes in his model is supposed to represent. Here are possibilities that might turn out to be ok:
- Maybe NOAA is incorrect when they suggest that the heat capacity of the atmosphere is equivalent to 3.2 m of sea water.
- Maybe Tamino meant us to understand the top box includes the atmosphere plus the top 30-40 meters or more of ocean. If so, it’s not clear that the global surface temperature measured 2 m above the surface would be the appropriate temperature to represent the average temperature of that box. This makes it difficult to justify such a regression. (One should regress on a temperature that captures the dynamics of the average temperature of that box.)
- Maybe I screwed up my algebra. (If so, please let me know.) Nevertheless, my questions about the 2nd law of thermo have been in good faith.
- Maybe… something I haven’t thought of or prefer not to suggest.
One might ask whether modifying the top box to include 30-40 m of water might result in a physically realistic box model.
My answer is, “I don’t know”.
What that modification would do is cure the specific problem associated with the requirement imposed by equation (7) above. However, (7) represents a necessary but insufficient condition for a physically realistic box model. There are other requirements that arise from subtracting the two solutions for the eigenvalues, and later from examining the results for the sensitivity and the magnitudes of the coefficients multiplying the two independent solutions of the form (3).
Unfortunately, until someone (like Tamino) provides a more complete description of what his two boxes contain, I can’t apply further tests to figure out whether some other two-box model with an alternative distribution of air and water between two boxes can map into Tamino’s two-parameter statistical model without violating the second law of thermodynamics. Also, until we know whether the upper box is supposed to contain 30-40 m of water, we can’t evaluate whether or not it makes sense to use the surface temperature measured 2 m above the ground to represent the temperature of that box. We also can’t do further evaluations of the general or particular solution Tamino provided.
So, unless and until Tamino sees fit to describe how his two parameters relate to some real two-box model and becomes willing to discuss his two-box model (which is supposed to be a phenomenological model based on the first and second laws of thermodynamics) in terms of both the first and second laws of thermodynamics, it’s not at all clear that one can learn any more about the earth’s climate using Tamino’s two-box-model flavored curve fit than one might learn using a one-box-model flavored curve fit. Absent further explanation of his two box model on Tamino’s part, all a person who understands thermodynamics can do is scratch their head and ask questions.
For those of you wondering: SteveM’s rule against discussing The Second Law of Thermodynamics at Climate Audit (which appears to apply at Tamino’s blog as well) does not apply at The Blackboard. But be warned: there are plenty of engineers here who apply the 2nd law on a regular basis; if you suggest the entire theory of AGW violates the 2nd law, we will explain that it does not. (Also, should you wish to bring that up, that discussion may be migrated to a new thread where it will not get interlaced with this specific application.)
Update: More detailed solution for Eigenvalues.
Tamino responded using slightly different notation and suggested that my solution for the eigenvalues is incorrect. I think he is mistaken. For this response, I will use his notation.
The eigenvalues are found by subtracting the eigenvalue from the diagonal of the matrix and setting the determinant to zero. So, we find the values of λ that make this determinant equal to zero:

| −(βs + ωs) − λ        βs              |
| βo                    −(βo + ωo) − λ  | = 0

The determinant corresponds to this equation:

(9) (λ + βs + ωs)(λ + βo + ωo) − βs βo = 0

This is a quadratic equation for λ of the form

(10) a λ² + b λ + c = 0

The solution is

(11) λ = (1/2a)(−b ± √(b² − 4ac))

Comparing (9) and (10), we see:

(12) b = (βs + ωs) + (βo + ωo)

not b = −(ωs + ωo) as suggested in Tamino’s response.
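For those who want to verify the comparison symbolically, a short sympy check; the matrix layout here is my reconstruction of the missing image, not a copy from Tamino’s post:

```python
# A sympy check of the coefficient b, in the notation of this update.
import sympy as sp

lam, bs, bo, ws, wo = sp.symbols('lambda beta_s beta_o omega_s omega_o')
M = sp.Matrix([[-(bs + ws), bs],
               [bo, -(bo + wo)]])   # assumed layout of the two-box matrix
p = M.charpoly(lam).as_expr()       # characteristic polynomial in lambda
print(sp.expand(p).coeff(lam, 1))
# -> beta_o + beta_s + omega_o + omega_s
# The coefficient b retains the coupling terms; it is not -(omega_s + omega_o).
```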
Update 2: A reader said his display did not show the units “years” in equation 7 and the result looked contradictory. I edited to remove the units and clarify after the equation.
Update 3: 10:33 pm.
I revisited Tamino’s blog. Since the time I posted he revised his solution for the eigenvalues:

Note that my λs correspond to his −ωs. This has to do with the sign convention, nothing more. Now that he has corrected it, our eigenvalues match.
He is still confused about the problem with his model and potential additional problems that I have not discussed in detail.



Interesting. I was surprised that your analysis doesn’t include any parameters to account for the difference between the ocean surface area and the atmosphere (global surface) area. Would accounting for an ocean surface area substantially influence the results, or does that factor cancel out in the various time constants?
Crashex–
Those areas could be discussed and accounted for. The details don’t matter for this back-of-the-envelope estimate because a) the atmosphere is thin relative to the diameter of the earth so the effect is small, b) the relation in (7), roughly 11 ≤ 1, isn’t even close and c) if the “cure” to the problem with using the time constants Tamino used is to include 30-40 m of ocean in the top box, that would have an effect on the specific numerical values. Also, if that “cure” is suggested, there are other things to check and they matter more than the ratio of those areas.
Nice work. At least Tamino can’t edit your contribution here.
I’m not claiming that this is what Tamino did, but this is one way to enable a ~30 year time constant ocean, and matches his statement “It seems to me that a reasonable interpretation is that the fast box is the atmosphere and the slow box is the upper ocean. I suspect the time scale for the deep ocean is much longer than 30 years”:
Assume the two boxes are the atmosphere and the ocean mixed layer. Assume the mixed layer is about 100m deep (eg, about 30 times the heat capacity of the atmosphere). Assume the characteristic time constant of each box is the time constant derived from applying an external step function forcing to the box _not including_ box to box coupling: therefore, for the mixed layer it is dependent on the heat capacity of the mixed layer and whatever coupling constant you use for the connection to the deep ocean (which is assumed to be an infinite heat sink on the timescales in question), and for the atmosphere it depends on the heat capacity of the atmosphere and the radiation to space (which can be Stefan-Boltzmann).
Marcus–
Are you suggesting one box is the atmosphere and the other is the ocean mixed layer?
If so, this does not fix the problem because a) it is the fact that one box contains atmosphere only that causes the violation of requirement (7) and (b) if neither box includes the deep ocean, you need to permit heat transfer between the mixed layer and the deep ocean. This means you need a third box.
These are the time constants τs and τo in the equations (1) and (2).
This defines the third box. We can do an analysis to see if this rescues Tamino’s choice of time constants. But in this case, there are going to be three time constants and all must be used in any solution. (Unless you can show that the eigenvectors make them fall out of some particular fit, in which case, that must be discussed.)
If your general point is that there may be some sort of box model that includes time constants of 1 year and 30 years (and which may include even more) then, sure. But, if so, Tamino needs to explain what he really means and then we can figure out if those time constants and his solution map into the set of physically realistic “n” box models where “n” is greater than 2. (And, if 2
Very interesting analysis. I’m impressed you could think of it and put it all together so quickly.
However, have you considered the possibility that heat in his two box model is transferred between ocean and atmosphere via TELECONNECTION, rather than the more normal physical processes ?
No, I’m saying rather than making the deep ocean a third box, assume that it is an infinite heat sink: eg, it never changes temperature, it just sucks heat out of the mixed layer.
Much like you don’t need a box for “the universe” that is receiving heat from the atmosphere.
(and yes, the deep ocean is much smaller than “the universe” and in the long run this would be a bad approximation, but on the century timescale it should be fine)
Marcus–
To clarify, are you suggesting a system with
a) An atmosphere box corresponding to “s”
b) A mixed layer box corresponding to “o”
c) a heat sink below “o”, which has a constant temperature?
or are you suggesting
a) The top box is “atmosphere+mixed layer” and
b) the lower box is “heat sink”.
I can discuss either, but it’s difficult to try to answer both in one comment. Each have different issues.
Something is missing in the sentence “But Tamino said he checked that his model.”
Also, this model only applies in an infinitely-heating world where there is always more energy flow from the atmosphere to the ocean. There is no energy flow back from the ocean to the atmosphere, so the model is limited to conditions which are probably violated during cooling events in the “time-series data”.
The first system: atmosphere = s, mixed layer = o, heat sink below o. Alternatively, you can just take your Figure 2, uncross-out the blue arrow, replace the word “universe” with “deep ocean”, and go from there.
Marcus– That doesn’t solve the problem with equation (7). The effective time constant for the lower box remains positive, and the one for the top box is still too small to permit the time constants Tamino used.
Quote from an old GISS ModelE page (http://www.giss.nasa.gov/tools/modelE/modelE.html#part6_2):
“Generally this model takes 20 to 30 years to come into thermal equilibrium with any change in forcing, but this can be changed by specifying the maximum mixed layer depth (smaller implies a faster equilibration).”
The page suggests that the deep ocean can be explicitly represented as additional layers, but if I read between the lines that implies that the deep ocean is usually not explicitly included at all: eg, infinite heat sink.
How do these models relate to a simple EBM like “Grumpy” here:
http://www.climateaudit.org/phpBB3/viewtopic.php?f=3&t=713
Or Lumpy, which I believe is similar?
Marcus–
I’m not saying that one can’t model the ocean as a heat sink for some purposes. I’m saying that if you set up a two box model the way you describe in Comment#18290, Tamino’s choice of τ+ = 30 years and τ− = 1 year, combined with the realistic value of τs (for the atmosphere box) and any non-negative value for the ocean box, corresponds to a box model that violates the second law of thermodynamics.

Model E is not a two box model and does not assume τ+ = 30 years and τ− = 1 year when marching forward in time. Their simplification could be perfectly fine.
Why not just set tau-o and tau-s to 30 and 1 year?
(and yes, an argument can be made that tau-s should be smaller than 1: perhaps it includes the top 1 m of land surface as well as the atmosphere, but I believe Tamino stated that the exact value of tau-s didn’t matter as long as it was “small”)
Andrew– Lumpy and Grumpy are one box models. It’s possible to fit the same time series/forcing data Tamino used to those simpler models and get regressions with high correlation coefficients.
On the one hand, Lumpy and Grumpy are both even more oversimplified than Tamino’s model and can be criticized on those grounds. On the other hand, they have fewer fitting parameters (i.e. tuning knobs).

To avoid violation of the 2nd law, Lumpy and Grumpy need a positive time constant and nothing else. So, the analysis to check for violations is simpler.
Marcus
Lots of reasons. If you solve for the eigenvectors for that choice, that choice for the two boxes coupled with specific choices linked to Tamino’s choice of eigenvalues results in two decoupled boxes. Tamino’s solution would not apply.
I am not going to comment on the work of Tamino as a model. I think the AGW issue does not suffer from a lack of models. Tamino’s behavior, on the other hand, indicates he is aware of significant issues with the 2 box model, and simply wants to silence any discussion about it.
Tamino has responded to this post: http://tamino.wordpress.com/2009/08/22/constant/#more-1821
hunter–
There are alternative explanations. Given Tamino’s background, he may not be in the habit of checking solutions to see if they might violate the 2nd law of thermodynamics. In contrast, mechanical engineers, chemical engineers, chemists, physicists, metallurgists and the like generally cannot graduate without having been forced to check for many such problems.
After all: if we did not apply the 2nd law of thermo, how would we know that Newton’s solution for the speed of sound, while not too far off, was wrong? How would we know that air cannot leave the nozzle of a spray can at an infinite velocity? (Ok… I’m asking all the ME/Aero questions. There are plenty in other fields.)
Tamino decided to try to guide his statistical fit using a model that is an application of the first and second laws of thermodynamics, and then got upset when I mentioned the second law of thermo. I suspect he honestly thinks that questions about the second law are always some sort of FUD and never real questions about the phenomenology underpinning the physical model he applied.
Tamino has just put a new article on his blog arguing against the points Lucia is making here. I have just tried to post this over there, but it did not pass moderation:
“So you think she’s wrong – fine. Why not let her post? It can’t be because you don’t want to waste time refuting her arguments – you’ve just spent an entire blog article doing just that.
Seems a bit unreasonable not to let her post. ‘Open mind’ and all that.”
I’ve no idea who is right and who is wrong on this, but I know who’s got the more open mind.
Lucia,
Tamino’s done a whole post. He seems to agree with you but of course says you can’t post any more with some crap about useful contribution. As though it’s somehow reasonable for you to sit through his edit fits and assertions of falsification just to contribute something ‘useful’ to him.
Wow he’s banned Lucia for pointing out his choice of params violated the 2nd law of thermodynamics.
Also came back with the very common climate scientist response: “it doesn’t matter”.
I think he’s moved on.
Layman–
Thank you for alerting me to that. I have added an update showing the explicit solution for the eigenvalues and comparing those to the ones Tamino suggests. I think my response is more than adequate, but I will repeat it here:
The eigenvalues are found by subtracting the eigenvalue from the diagonal of the matrix and setting the determinant to zero. So, we find the values of λ that make this determinant equal to zero:

| −(βs + ωs) − λ        βs              |
| βo                    −(βo + ωo) − λ  | = 0

The determinant corresponds to this equation:

(9) (λ + βs + ωs)(λ + βo + ωo) − βs βo = 0

This is a quadratic equation for λ of the form

(10) a λ² + b λ + c = 0

The solution is

(11) λ = (1/2a)(−b ± √(b² − 4ac))

Comparing (9) and (10), we see:

(12) b = (βs + ωs) + (βo + ωo)

not b = −(ωs + ωo) as suggested in Tamino’s response.
JeffId–
I asked Tamino questions in his comments. My intention was to obtain information.
FWIW: I think his analysis contains a math error, see above.
You clearly broke protocol Lucia, when you forgot to genuflect at the beginning and end of each post. Ladbury is a good example for future reference.
He pretended that your questions were an insult and distraction, to the enjoyment of the jester court. Now he pretends that somehow a system of two differential equations is a ‘contribution’ to science and you are banned for making a post that suggested he was in error. Quite mildly I might add and with a lot more class than he has shown.
Instead of a reply or admission that perhaps you have a point, he gave credit to ‘another commenter’ and said fine you can’t play. Standard fare for Open Mind.
Good stuff BTW.
jeff id (Comment#18308) “You clearly broke protocol Lucia, when you forgot to genuflect at the beginning and end of each post.”
Prince Tammie! Fabulous he!
Tammie Ababwa
Genuflect, show some respect
Down on one knee!
Now, try your best to stay calm
Brush up your Sunday salaam
Then come and meet his spectacular coterie!
Jeff–
I didn’t read the comments on that post. I just read up to Tamino’s solution for the eigenvalues and then skimmed the narrative based on that solution. Of course the narrative would make sense if the solution for the eigenvalues were correct.
Did Ray point out Tamino’s algebraic error?
Lucia,
I posted to Tamino that he was scoring an own goal by banning you. Of course, it didn’t get through but the own goal still stands!
It is interesting that Tamino and Hansen use the very successful Stefan-Boltzmann equations to derive all the basic parameters of the climate including the calculated 33C greenhouse effect.
But these equations are almost totally abandoned after this point and then just climate models / climate boxes are used to derive the sensitivities etc. (and using numbers that are completely different than what the Stefan Boltzmann equations would point to).
Like the formula TempC increase = 5.35*Ln(CO2)*0.75C is nothing like what the Stefan Boltzmann equations would point to. In fact, the 0.75C should directly vary with the temperature itself and today it should only be 0.18C for each extra watt for example.
http://img524.imageshack.us/img524/6840/sbearthsurfacetemp.png
http://img43.imageshack.us/img43/2608/sbtempcperwatt.png
In trying to understand how the 33C greenhouse effect operates, I think they “boxed” themselves into these climate model “boxes” and have gotten way off track.
I wasn’t able to follow all of the details in your post, but it seems that your conclusions about the violation of 2nd law of thermodynamics hinges upon your assumptions of no external forcings into or out of the slow box.
What about the 10% or so of longwave radiation that makes it from the surface to outer space? Or the non-trivial solar heating of the ocean?
Your assumption of no external forcings for the slower box would make sense for the ocean below some 10’s of meters of depth, but it doesn’t seem appropriate if the split between the boxes is the ocean surface.
As usual, I’m confused but able to be educated.
Charlie
Keep up the good work Lucia. Of course if the second law of thermodynamics gets in the way of climate models it will have to be proved wrong!
No, the result does not depend on that. That assumption was not necessary as I didn’t need to go that far in the analysis.
If Tamino’s result had not broken down at equation (7), I would have applied a second test based on subtracting the eigenvalues. Then, if it had passed that, I would have had to try to replicate what he actually did, see what values I obtained for some constants, and test them for consistency with the possible range for the eigenvectors.
So, I discussed the issue of the forcing being set to zero in the lower box just in case it turned out that I do have to proceed further for some other alternative distribution of atmosphere/ocean in the boxes.
EdBhoy–
Notwithstanding the tenor of Tamino’s post with the statistical fit, which made it appear that his use of a 2-box model somehow is connected with the fidelity of other climate models, the fact that his statistical model has problems says absolutely nothing about AOGCMs, or simpler models used by climate scientists.

This issue is specific to the two box model and how it interacts with Tamino’s regression. That’s all it is.
Unless I did my sums wrong, the heat capacity of the atmosphere assuming only dry air is equivalent to 24.3 m of water (101,325 kg air/m2, 1000 kg/m3 water, heat capacity of dry air 1004 J/kgK and water 4187 J/kg/K). Unless you assume constant relative humidity, water vapor doesn’t have much effect on the heat capacity of air. Tamino’s model may also assume an optically thick atmosphere with a temperature gradient so the exchange of radiation between the surface and the bottom of the atmosphere has a higher flux density than radiation to space. I think you can neglect direct forcing of the ocean box from the sun not because sunlight doesn’t penetrate but because that contribution doesn’t change with a change in greenhouse forcing. What you do is assume equilibrium at a constant energy input, possibly using some variation on Kiehl & Trenberth then reduce radiation from the atmosphere to space by some small amount (forcing) and see how long it takes to reach equilibrium again.
Let’s say the forcing is 2 W/m2. Then the heating rate is 2E-08 K/sec or 0.6 K/year initially. If we assume a blackbody atmosphere radiating 235 W/m2 at a brightness temperature of 253.7 K then 233 W/m2 is a brightness temperature of 253.2 K. My guess without actually working out the proper solution is that the half time to equilibrium will in fact be on the order of a year as the forcing decreases with increasing temperature. Certainly it will be a lot longer than 0.09 years. Then for the ocean box, assume a mixed layer depth to give 30 times the heat capacity of the atmosphere, or about 750 m assuming water covers the entire planet, and put it at radiative equilibrium with the atmosphere initially. That will slow down the increase in temperature of the atmosphere because some of the excess heat will be transferred to the ocean as the air temperature increases.
Fire away.
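For reference, DeWitt’s initial heating rate checks out under his stated 24.3 m water-equivalent assumption (an assumption he revisits below):

```python
# Quick check of the initial heating rate under DeWitt's assumptions.
C = 24.3 * 1000.0 * 4186.0   # J/(m^2 K): 24.3 m water equivalent (his figure)
F = 2.0                      # assumed forcing, W/m^2
rate = F / C                 # K/s
print(rate, rate * 3.156e7)  # ~2e-8 K/s, ~0.6 K/year
```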
Dewitt–
I suspect NOAA is close to right. Atmospheric pressure is 10^5 Pa, which is equivalent to 10 m of water (i.e. ρ g h ~ 1000 kg/m^3 × 10 m/s^2 × 10 m = 10^5 Pa).

The specific heat of air at STP is about 1/4 that of water. So, that would get us down to 2.5 meters.

The difference between 2.5 m and NOAA’s 3.2 m is likely due to the temperature of the air.
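The arithmetic behind the 2.5 m figure, using standard textbook values:

```python
# Water-equivalent depth of the atmosphere's heat capacity.
p = 1.013e5              # atmospheric pressure, Pa
rho_w, g = 1000.0, 9.8   # water density (kg/m^3), gravity (m/s^2)
h = p / (rho_w * g)      # water column of equal mass: ~10.3 m
cp_air, cp_w = 1004.0, 4186.0   # specific heats, J/(kg K)
print(h * cp_air / cp_w) # ~2.5 m of water, by heat capacity
```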
Bill Illis (Comment#18312)- Stefan-Boltzmann is useful in the former case for simply looking at the actual impact of CO2 etc. at a constant concentration. If you change it, you get temperature change, which will cause the other greenhouse elements etc. to change in some way also. Think of it this way: the sensitivities they get arise because the emissivity and albedo terms in the equations are assumed to be dependent on the temperature to some extent and are not thought to be “constants”, so this means that the “sensitivity” will not be your boring old gray body value!
For instance apart from perhaps quibbling with the Cosmic Ray parts, most of this analysis is non-controversial:
http://www.sciencebits.com/OnClimateSensitivity
Especially note after he calculates the gray body value:
“This sensitivity implicitly assumes that by changing the global temperature we don’t change the albedo nor the emissivity (aka, no “feedbacks”). We assumed so because when we carried out the differentiation, we didn’t differentiate alpha nor epsilon.”
That’s fine when dealing with a situation in which temperature doesn’t change (the 33 K “GHE” figure) but not when trying to calculate how much temperature change will ultimately change if you change the CO2 concentration, because the emissivity and albedo will probably change as a result of the CO2 caused change, and changing alpha and epsilon will change the temperature.
That was dumb. I forgot to convert force to mass. Maybe assuming constant relative humidity would slow things down some. That’s a lot more complicated to calculate, though. Back of the envelope would be 4 kg additional water/m2 K to even get in the same ballpark as the heat capacity of the dry air column and over 30 kg to get the time constant up to a year or so. Even with water vapor pressure going up exponentially with temperature, I don’t think that’s reasonable as the total precipitable water in the 1976 atmosphere is only about 16 kg/m2.
Listen, Falsifier, this post is only a flesh wound to the invincible Tamino two-box model. Let’s call it a draw.
The Black Knight–
You forgot the youtube video
Andrew_FL,
I could be persuaded that CO2 increases can affect Albedo but at this part of the equation, the impact would be very small – it is not going to even come close to 0.75C.
Consider the last ice age, using Hansen’s 5.35 ln (280/180) = -2.4 watts, at this part of the Stefan Boltzmann equation, the temp impact is less than 0.5C which is not going to affect Albedo very much. Furthermore, Hansen’s -3.4 watts due to ice sheet albedo feedback only changes the Earth’s albedo from 0.298 today to about 0.307 so a smaller CO2 impact is only going to change the albedo by 0.005 or so or another 0.3C.
What causes the ice age, is the ice-albedo feedback which is much, much higher than Hansen’s number. Basically, he just fit the ice age watt/metre^2 impacts to his 0.75C estimate. None of the numbers add up.
Let’s try 5.35*ln(4500/280)*.75C = +11.1C ; this was the CO2 level 440 million years ago when there was a major ice age and the temperature was a little lower than today. The math doesn’t work.
The emissivity number is so close to 1, I don’t think it is worthwhile building it into the formula.
The paper you linked to is calculating the lamda sensitivity of 0.3C per 1 watt increase at 240 watts/metre^2 (the solar forcing level only) but there is another 150 watts (greenhouse effect) to consider at the Earth’s surface which moves you farther along down the logarithmic function. Inserting cloud feedbacks into the mix makes no sense since the models can’t simulate clouds to begin with.
Bill Illis (Comment#18322): “I could be persuaded that CO2 increases can affect Albedo” No, it’s the temperature, not the CO2, which would affect the albedo and emissivity. I’m looking into this Hansen business though and appreciate your comments about his “nailed” sensitivity.
Lucia
Scafetta published two time scale model:
N. Scafetta, “Comment on “Heat capacity, time constant, and sensitivity of Earth’s climate system’ by Schwartz.” J. Geophys. Res., 113, D15104, doi:10.1029/2007JD009586. (2008). PDF
He recently expanded this to a three time scale model:
Scafetta: New paper on TSI, surface temperature, and modeling
Empirical analysis of the solar contribution to global mean air surface temperature change.
You may wish to start a separate thread analyzing his 2 vs 3 time scale models.
David– Maybe next week. I have a few things on my plate.
Lucia,
A couple of small items:
1. The blackbody temperature of the earth is ~255K (on average, though varies a lot), not 273K. Radiative loss at 273K would be about 31% higher than the solar energy reaching the Earth’s surface, which is about 239 watts/sq meter.
2. I believe that Tamino’s boxes represent a) the atmosphere plus the “well mixed” surface layer of the oceans, which is about 50-60 meters deep on average, and b) the ocean thermocline, which extends from the base of the well-mixed layer to about 1.5 km (also varies somewhat).
The Scafetta paper using autocorrelation analysis of historical ocean temperature data (following up on Schwartz’s similar analysis) appeared consistent with a “two-box model” where the fast box had an exponential approach toward equilibrium with a constant of ~0.4 year, and the slow box had a constant of ~8.7 years.
In your matrix and in the equation (8) below it, the equations Tamino shows in his latest post require a value of plus lambda, not minus lambda. That is why he gets different eigenvalues, which, as he points out, make sense physically in the case that beta equals zero. In second-order equations, e.g. harmonic vibration, one gets a minus lambda, but not for first-order differential equations (when assuming a solution of the form exp(lambda*t)).
The other disagreement between you seems somewhat semantic to me. From Tamino’s POV, the two-box model is physically correct for some pair of materials which give the unperturbed time constants which he assumed, and those constants are not physically impossible a priori, but those time constants may or may not be the good ones to use for Earth’s atmosphere and ocean. They were rough guesses which Tamino used to draft a model.

You have pointed out that his guesses were not in fact good ones based on the material properties of Earth’s air and water. From his POV, the model is based on valid physics, but simply has the wrong material properties, so he objects to your characterization of the model as violating physical law. (He says the model can also be fit as well to empirical data with more realistic time constants and produces the same conclusions.)

I can see some validity to both POV’s, and could, in hindsight, suggest ways both parties could have re-phrased their comments so as not to cause offense, but it is probably too late at this point.
SteveF–
a) Yes. The radiating temperature is lower than the temperature at the surface. It is not sufficiently low to rescue Tamino’s model. It would need to be about 150 K to rescue Tamino’s numbers.
b) Why do you believe Tamino’s ‘atmosphere’ includes 50-60 m of ocean?
Were the more specific contents of the box revealed sometime after I posted this post, which states that the model violates the 2nd law unless it contains at least 30-40 m of ocean? Was it revealed after I showed Tamino’s eigenvalues are wrong?
Because if it has now been revealed that his ‘atmosphere’ includes 50-60 m of ocean, I have new questions to pose:
1) How does it make sense to represent the average temperature of a box containing 50-60 meters of ocean water with surface temperature measurements 2 m above the surface? And,
2) if that is the composition of the top box, have you performed the remaining 3 or so checks to verify that this more specific model does not violate the second law of thermodynamics?
Jim V–
The choice of +λ or −λ depends on whether the substitution is exp(−λt) or exp(+λt). My textbook, and most online references, suggest the substitution exp(+λt). If I misunderstood which choice Tamino made, then the equations should include +λ. In that case, just change the sign and solve. Sorry if my failure to notice which convention he used caused confusion.
No. His eigenvalues are wrong. The change from -λ to +λ results in a sign change. Nothing more.
The change in sign of λ will not magically cause the βs to disappear from the solution for b.
I’ll discuss this Monday. 🙂
What is the point of fitting temperatures measured 2 m above the earth’s surface to a statistical model with parameters that apply to Krypton or Jupiter? Note the number of times I use the word earth in the post above.
His model may be fine for Jupiter or Krypton. I have no idea.
For what planet? Krypton? When analyzing earth data, a model that violates the 2nd law of thermodynamics when applied using realistic properties for the earth is not valid physics.
Look, there may be some POV that makes Tamino’s choice of parameters valid for some two box model of some planet or set of objects somewhere. There may even be some partitioning of matter on the earth that makes his model ok. That’s why I only said the model appears to violate the 2nd law if the boxes represent the earth’s “atmosphere” and “ocean”.
I tried to ask questions at Tamino’s where fewer people would read them. I tried to make him understand what I was driving at. He became offended, and banned me.
My readers asked me to elaborate here and I did. If Tamino wants to explain how he thought matter was divided in his model and do the remaining analyses to show it does not violate the 2nd law of thermodynamics, I would welcome that demonstration.
As he elaborates, we can also ask other questions about whether it makes sense to perform the fit using surface temperature rather than some mix of surface temperature and ocean heat content etc. But, as long as he is averse to questions asked in good faith, those of us who are not mind readers can only guess what sort of box model he envisioned. Consequently, it appears his particular “two-box model” fit has no particular advantage over a 1 box model or even a “zero time constant” model.
OH dear.
Call me a baffled, bewildered and antique physicist but I know a thing or two.
Of course you can set up a thought experiment with two boxes, Maxwell did with his daemon.
But here we have two boxes and I observe that they do not represent the real world. The two boxes do not exist in isolation from each other, rather there is a constant interchange of energy between the two, so first of all you need an equilibrium equation to deal with that.
As I understand it, and I may be wrong, the only method of energy transfer envisaged between the boxes themselves and the universe is by radiation. Not true as we can and do observe.
Of course it is assumed the model is true in respect of the total radiation received by and reradiated by the earth.
But I repeat it is not true of the interchange of energy between the two boxes.
As a result elaborate analysis shows that the model may disobey the second law of thermodynamics according to how you adjust the variables.
But the fact that the model can be made to disobey the second law by selecting variables tells you that the model does not and cannot represent the real world, even if it might seem to do so given the right selection of variables.
So it is not and cannot be a true model of the climate system or indeed of the universe in which we live.
Because you can disprove it simply by selecting the variables and
seeing whether it then obeys the second law of thermodynamics.
Which it doesn’t and indeed from a priori couldn’t. Unless there is some magical mechanism by which energy can only flow from the atmosphere into the oceans.
So why bother with it any further?
Kindest Regards.
Hi a jones,
Good questions, but let me clarify,
If the boxes represent atmosphere and ocean, heat can transfer between the boxes by convection and conduction. This is likely the main mechanism. (This also applies if the top box contains 30 m of ocean.)
If you examine equations (1) and (2), you will see the terms containing “β” are equal and opposite to one another. When 0 < β, heat travels from whichever box is hotter to whichever is cooler. I hope this answers your questions. As for your last question:
This might best be asked of Tamino. He’s the one fitting data to this. (But, quite honestly, the general idea might be salvaged with different time constants. I don’t know how he picked these two. If they represent the best fit values for a two-box model, what it teaches us is the two-box model is not sufficiently detailed to describe the earth’s climate system. If they are just numbers he likes, it might be interesting to learn which values really do best fit the time series data.)
I appreciate your points, and yes I did notice Beta could be positive or negative.
But that does not answer the basic question about the model.
Of course elementary statistical analysis can be used to validate models; after all, that is exactly what Max Boltz is about.
But before you can validate a model in that way you have to have the precise mechanism in order to reduce it to a statistical form.
Statistical data may give a clue as to mechanism but is not of itself proof of anything.
By selecting variables to plug into your model you can probably show almost anything.
The test of the model is to select an extreme range of variables and see what the results are.
If they produce a result which does not obey the basic laws of physics then your model must be wrong.
So throw it away and devise a better one.
I blame it all on Matlab myself, what was once so hard to do can be done with the click of a button: and so you can correlate anything to anything. With pretty graphs.
Don’t mistake me, it’s a wonderful tool and no doubt we shall learn how to use it properly.
But at the moment it is toytime.
As for the above model, and the choice of time constants and indeed the depths of the ocean I am bemused.
To say that they are arbitrary is an understatement. And they do not accord with any known observation of the real world.
But as I have learned in so called climate science the real world is just an inconvenience.
Not to say that there are not serious scientific researchers trying to find out what is actually happening, there are, and we shall learn much from them in time.
Kindest Regards.
a jones–
Yes. Tamino fit temperature data to a statistical model. He wishes us to believe that his statistical model is better than other models because his statistical model, supposedly, is based on a two box model. Because it is supposed to enforce the first and 2nd laws of thermodynamics, in principle, using the two-box model to constrain the statistical model would be a step forward relative to just picking any old statistical model.
Unfortunately, he picked parameters that constrain his model to violate the 2nd law of thermodynamics. It appears he still likes the model.
I agree with you. It might be wiser to throw away a model that enforces violations of the 2nd law of thermo rather than use it.
Just for fun, here is a paper on a two box model that does appear to have no 2nd law issues.
http://ams.allenpress.com/archive/1520-0442/14/19/pdf/i1520-0442-14-19-3944.pdf
The author realizes the limits, and in the process made a dig at RC author Pierrehumbert:
“The applicability of these box models is somewhat limited, however, because of their extreme simplifications. For example, Pierrehumbert (1995), hereinafter referred to as P95, neglected cloud-radiative effects and specified the potential temperature difference between the upper and lower branches of the tropical circulation.”
Anthony–
Those seem to be side-by-side models for a somewhat different application.
I happen to like good simple models, particularly when they are just complicated enough to explain something very specific about a phenomenon of interest.
Did you notice Tamino updated his solution for eigenvalues? I wanted to check the cache to refresh my memory on what it said before he modified… but … it’s… not.. there…
Well, I did say it was “just for fun”. I didn’t say it matched what Tam was doing.
Simple models are good for insight builders. From my perspective, some atmospheric modeling might be better served by the application of electrical circuit equations. I see capacitors, resistors, inductors, diodes, field effect transistors, and heatsinks when I look at earth’s atmospheric and oceanic processes.
Like a mechanical orrery (an analog device) can model the motions of the planets, so could an analog electrical circuit model earth’s heat balance and air/ocean interactions. Someday I may make one. The neat thing about an electrical circuit is that it can be tweaked and run without all that messy FORTRAN coding 😉 though experience with a soldering iron is helpful.
As for cache etc. and fleeting eigenvalues there is always this:
http://www.webcitation.org/
But it must be applied upon first recognition of puzzlement.
“The author realizes the limits, and in the process made a dig on RC author Pierrehumbert”
You are confusing a discussion on the pros and cons of that simple model with your own modus operandi.
I don’t understand the idea expressed by some here that the deep ocean can be regarded as an infinite heat sink. As a matter of fact it is a small heat *source*. All surface water that sinks into the deep ocean ultimately returns to the surface, (very) slightly heated by geothermal energy. Short term the heat exchange between the surface layer and the deep ocean probably oscillates between positive and negative since the temperature of the water that sinks into the deep ocean varies slightly (if the water is unusually salty it will sink at a slightly higher temperature), but in the long run heat must be transferred from the deep ocean to the surface (no other way that geothermal energy can be lost to the universe). I also can’t understand Tamino’s choice of time constant. Thirty years is almost certainly too long for the ocean surface only, and vastly too short for the oceans as a whole (which is probably on the order of 10^3 years).
Lucia, I’ve been puzzled by this controversy, because it seems that while Tamino’s description of what he has done was not very clear, it could be a lot less than your analysis specifies. His process seems to be inspired by a system like you describe in eqs 1 and 2, and which he talks about in his “two boxes” post. But what he actually does, as far as I can determine from his second response to you, is to convolve the forcing with two separate exponential-type functions (1-exp(-t/τ)), τ=1 and 30, and to get the best fitting linear combination. That isn’t fitting the coupled equations, and doesn’t seem to involve any 2LOT issues.
Maybe I’m missing something. The reference to SOI suggests that there is some direct link to a coupling coefficient. But I can’t see it.
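For concreteness, here is a minimal sketch of the fit as Nick reads it; the method and the stand-in data are assumptions, not Tamino’s actual code or data:

```python
# Convolve the forcing with fixed exponential impulse responses (the
# impulse response corresponding to Nick's (1 - exp(-t/tau)) step
# response) and least-squares fit the weights.
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal(100).cumsum() * 0.02   # stand-in forcing series
T_obs = rng.standard_normal(100) * 0.1         # stand-in temperature series

def response(forcing, tau):
    t = np.arange(forcing.size)
    kernel = np.exp(-t / tau) / tau            # (1/tau) * exp(-t/tau)
    return np.convolve(forcing, kernel)[: forcing.size]

X = np.column_stack([response(F, 1.0), response(F, 30.0), np.ones(F.size)])
weights, *_ = np.linalg.lstsq(X, T_obs, rcond=None)
print(weights)  # best-fit weights on the 1-year and 30-year responses
```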
Anthony–
Webcitation is pretty cool! Of course, it wouldn’t have occurred to me to immortalize Tamino’s first draft of that post. Though, I wish I had. I exploded in laughter when I read his phenomenological interpretation of the totally incorrect eigenvalues.
I also note that the revision does not warn readers that substantial changes have been made.
As I did not go back, I was unaware that by the time JimV answered, Tamino had changed his eigenvalues! So, when JimV posted, the only difference between my eigenvalues and those at Tamino’s site was the sign convention. But my post shows Tamino’s first try at solving the determinant using the difficult equation (ad-cb)=0 and then solving the resulting quadratic equation (as one learns … when… freshman year in high school?)
Now that I know Tamino corrected his post, if his readers ask me questions about the equations here, I can simply point out Tamino changed his.
Now, in place of this bullet in the post speculating why Tamino thought he “checked” that his parameters did not violate the 2nd law:
“Maybe… something I haven’t thought of or prefer not to suggest.”
I should add: It appears Tamino may have checked, but committed arithmetic errors when he applied the simple formula for a determinant and/or solved a quadratic equation. Both are mistakes commonly made by freshmen working their math homework.
Nick–
First, there would be very little controversy if Tamino had answered the first through about 4th questions I asked. I tried to elicit information on what Tamino does so I could evaluate it.
Eventually, as I tried to elicit more, he banned me. My readers asked me about the issue related to possible violation of the 2nd law of thermodynamics that can arise if the modeler picks his time constants out of his a** rather than carefully considering the implications of the laws governing the physical universe. I answered.
That’s the controversy as I understand it.
Second: Are you suggesting that Tamino’s fit is almost unrelated to any two box model, and so does not violate physics because it never implements it? That all he did was arbitrarily select two time constants and fit the temperature series to the forcing and need never have mentioned the two-boxes?
If so, I agree.
There would be little to criticize had Tamino’s first post not included verbiage like this
We would say: “Oh. Two exponential decays and arbitrarily chosen time constants works about as good as 1 exponential decay. Who’d a thunk? Did you compute the Schwarz information criterion to learn whether the extra fiddle factors would impress any statisticians other than you?”
There would have been no talk of phenomenology, the first law of thermo or the second.
But, the fact is, Tamino presented his curve fit as being an improvement over other curve fits which he deems unphysical. He linked to a post discussing how his curve fit is an improvement over the one Stephen Schwartz used– which, as you know, is a physics based statistical model that, if fit to the forcings Tamino used, gives an answer he doesn’t like for the time constant.
Unfortunately, to represent an improvement over unphysical curvefits, his model should not, itself, be unphysical. This sort of improvement can be claimed for my “Lumpy” (one exponential decay) model relative to models with zero time constant.
Even more unfortunately for Tamino, if one evaluates it in terms of the physical model he claimed formed the basis of his statistical model, the statistical model he selected was physically unrealistic if applied to the earth and a temperature series for the atmosphere of the earth, which is the planet whose data he is regressing.
As for your comment on the Tamino’s reference to the SOI: I have no idea why, in addition to pulling time constants out of his a** to create his “physics” based statistical fit he also added the SOI to his forcings. I suspect he did that to achieve a correlation coefficient that is not embarrassingly low.
However, given the more fundamental problem with his curve fit, criticizing him for boosting its apparent explanatory power by throwing in the SOI would have seemed a bit like piling on… Don’t ‘cha think?
Lucia, as I say, I may be missing something. But from what I have read, Tamino’s description is consistent with fitting a two-decay (prescribed) constant model using convolution, and yes, if so, the connection to the linked full two-box model could be more clearly made.
” I exploded in laughter when I read his phenomenological interpretation of the totally incorrect eigenvalues.”
A good enough reason to ban you. You have just described the actions of a troll.
Lucia
great stuff. I just love watching the team squirm. You also remind me that once upon a time I used to be quite clever. I studied Physics to masters level (and passed, of course) and therefore understood everything you have just written. Unfortunately, I don’t now. Old, decrepit and retired, but I love your work.
One point for “The Team” though. When I wrote and published publically I always sought the best physicists that I knew to review and correct my work before it went public. Not the team, though. They demontrate such incredible arrogance that they totally believe that their work is absolutely on the nail. Incroyable
Come on bugs,
Even you have to be able to see the humour in this:
The Fantastic Bombastic preaches from the pulpit of his revival tent, presenting a simple model in an attempt to show how ignorant and dishonest those dang skeptics are. In a meek voice from the back, our heroine points out that the model may not accurately represent “real physics” if certain parameters are chosen, due to thermo considerations. “What parameters did you choose?”
“Blasphemy!” yells the Fantastic Bombastic. “Thou shalt not invoke the second law here! Only heretics speak of the second law! Off with her head! How dare you question the great Bombastic about parameters!” Our heroine is promptly escorted out of the tent. This is not the type of thing that gets one excluded from the discussion in civilized circles, but in Wonderland, with the denizens therein, this sort of thing sometimes happens.
Our heroine then (as someone with a scientific bent) continues to puzzle out what characteristics the parameters would have to obey. It then occurs to her that the basic math of the Fantastic Bombastic is utterly wrong at the very genesis of the problem. The humor here comes from the fact that while the Fantastic Bombastic exiled her from the tent over thermo considerations, and was utterly over the top in doing so (with his usual smears and assignments of motive), he flubbed his undergraduate math.
The humor here comes from “Irony” (you can look that one up on the internet, bugs). When someone is utterly self-righteous and convinced of their own brilliance and is then dropped back to earth by a very, very basic math error, I would think anyone would find humor in this.
Another amusement is that it seems the play is not over yet. Right on cue, the Cheshire cat arrives and claims mystification about all the fuss. What’s the issue? We shouldn’t lose sight of the fact that while the Fantastic Bombastic’s handling of eigensystems is somewhat “unique” and humorous, it is really not the main issue here.
Our heroine asks a very simple and relevant question. Yes Nick, the models hang together mathematically. Our heroine’s simple point is that the math shown in no way represents the real world if certain characteristics for the input parameters are not followed. She was curious about “what” those parameters were. I need to think further on a jones’ statement about general disproof, but that may be true as well.
One of the most important things for any true scientist to understand is where/how any model of the real world breaks down. It seems Lucia was right on top of this one and a questioning mind is never welcome in the revival tent of the Fantastic Bombastic.
Is it irony or just bathos if a troll calls you a troll?
Hi Lucia,
“b) Why do you believe Tamino’s ‘atmosphere’ includes 50-60 m of ocean?
Were the more specific contents of the box revealed sometime after I posted this post, which states that the model violates the 2nd law unless it contains at least 30-40 m of ocean? Was it revealed after I showed Tamino’s eigenvalues are wrong?”
I do not know what Tamino revealed (or did not reveal). I suggested 50-60 meters of ocean in the “fast” box because that is the only thing which makes any physical sense. Most of the solar energy that reaches the Earth’s surface is deposited within this layer (not at the surface), so it is not possible to heat only the atmosphere… mostly the sun heats the top 50-60 meters of ocean, not the air.
“1) How does it make sense to represent the average temperature of that box containing 50-60 meters of ocean water with surface temperature measurements 2 m above the surface? And,
2) if that is the composition of the top box, have you performed the remaining 3 or so checks to verify that this more specific model does not violate the second law of thermodynamics?”
It makes no sense to represent the “fast box” with anything other than the heat content of the ocean; readings 2 m above the surface are just a stand-in (a poor one!) for the heat content. Any realistic evaluation of global warming would be based on ocean heat content, not air temperature 2 m above the surface.
I have not performed the other checks, and I’m not sure it is worthwhile to do so. As a jones pointed out above, the transfer between the two boxes is not so simple, and the “slow box” is really a range of slow boxes with unknown approach constants. In addition, the “slow box” isn’t really a closed box at all, since cold water from below the thermocline slowly displaces warmer water above it as even colder water sinks to the abyss at high latitudes. Tamino’s model is used (always with unrealistic lag constants and unrealistic aerosol “cancellations”) to defend crazy-high model sensitivities: ~0.8 C per watt per square meter. The coupled ocean-atmosphere models use these same unrealistic lag constants; without them, the GCM’s would yield much lower climate sensitivities. The data clearly show much lower sensitivities (as you have pointed out many times), in conflict with the GCM’s.
No surprise that you were banned at Tamino’s site; you should consider it an honor. Like most alarmists, Tamino doesn’t really want any discussion of what is technically correct, since he has already decided what is “right”.
lucia (Comment#18333) August 22nd, 2009 at 9:00 pm
Latent heat transfer from the surface to the atmosphere is much larger than sensible (convective) heat transfer: 78 W/m2 compared to 24 W/m2 in the Kiehl and Trenberth budget. OTOH, if the heat capacity of the fast box is mostly ocean, then convective transfer between the upper and lower ocean would dominate. In fact, we know that the ocean and atmosphere are closely coupled with a short time constant; otherwise ENSO would have little effect on atmospheric temperature.
Nick Stokes, if you allow that Tamino isn’t doing what he said he’s doing, then I suppose we can let him off the hook and not worry whether it is physically meaningful or not (but that tosses out the baby with the bath water from Tamino’s perspective).
If we take him at his word, we are forced to examine his work in the context of a two-box model, because that is how he framed the work himself. If the connection to the two-box model should be made clearer, that is Tamino’s job, since he as author is responsible for explaining it.
Why not take this issue up with him, rather than Lucia, who is simply trying to replicate his results?
Nick–
The problem is simple:
a) If one connects Tamino’s two-decay model to a real box model consisting of box1 = atmosphere and box2 = ocean, the time constants he used result in a box model that violates the 2nd law of thermo, and
b) If one imagines his top box is atmosphere + ocean, and the lower box is deep ocean, it does not violate the 2nd law, but then fitting the time series for the “top box” to temperatures measured 2 m off the surface of the earth doesn’t make any sense.
To create a box model that maps into physical reality, he needed to give some more thought to his parameters. Basically: he does not understand how a two-box model relates to his statistical fit.
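(For anyone who wants to see the arithmetic rather than take my word for it, here is a minimal sketch. Every parameter value is my own illustrative assumption, not Tamino’s: an atmosphere-only top box, an ocean box, and an air-sea coupling of roughly the observed order. The point is what the eigenvalues do.)

```python
# Sketch: eigen-time-constants of a linear two-box model.  All numbers
# below are illustrative assumptions for an "atmosphere-only" top box.
import numpy as np

Cs   = 1.0e7   # J m^-2 K^-1, top box ~ atmosphere alone (~2.4 m water equivalent)
Co   = 1.0e9   # J m^-2 K^-1, ocean box (~240 m water equivalent)
beta = 20.0    # W m^-2 K^-1, air-sea exchange coefficient (order-of-magnitude guess)
lam  = 2.0     # W m^-2 K^-1, radiative damping of the top box

# Homogeneous part of d/dt [Ts, To] = A [Ts, To] + forcing terms
A = np.array([[-(lam + beta) / Cs,  beta / Cs],
              [  beta / Co,        -beta / Co]])

taus = -1.0 / np.linalg.eigvals(A) / 86400.0   # decay times in days
print(sorted(taus))   # fast mode ~ days, slow mode ~ a couple of decades
```

With an atmosphere-only Cs of that order, the fast mode comes out at around five days. To drag it out to a full year, the total damping (lam + beta) would have to fall to roughly Cs/(1 yr), about 0.3 W/m^2 per kelvin, which means driving β toward zero or below. That is the 2nd law problem in one line.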
bugs
1) I had already been banned. It was his first draft of ‘rebuttal’ that was sidesplitting.
2) The laughter occurred inside the confines of my home. Under the circumstances, I could not help laughing. Moreover, I can make no apologies if milk shoots out my nose when I discover Tamino explaining the “physics” underpinning an equation he obtained by committing high school math errors.
Artifex–
High school, actually. Most people learn Cramer’s rule for a 2×2 determinant and quadratic equations in high school.
That said: applying Cramer’s rule when the determinant includes symbols instead of numbers is an undergraduate topic.
SteveF
I think we agree. If we try to infuse sense into Tamino’s box model, given what he discussed, the closest thing to a sensible interpretation is that his top box includes some ocean. However, in that case, it’s difficult to justify a fit to temperatures measured 2 m off the surface.
In my opinion, had Tamino been willing to engage my initial questions, we might have been able to tease out what idea he had in his head, and found a way to guide him to coming up with a two-time-constant fit that mapped onto the planet Earth rather than the planet “Krypton”. I tried to do that by posing a number of questions before I let the “2nd law bomb” drop.
Maybe, in the future, when using thermodynamics to guide his statistical curve fitting, Tamino will consider paying attention to the 2nd law. Or maybe he will continue to believe that applying the 2nd law of thermodynamics to thermodynamics problems is a denialist talking point.
Dewitt– You are correct about latent heat. I guess my thought was: after water changes phase, it still convects away. But you are right. Also, rain falls from clouds into the ocean. I don’t know if that’s considered convection in any classical sense!
I recall even Homer Simpson stating (twice), “In this house, we obey the laws of thermodynamics!” – once in response to Lisa inventing a perpetual motion machine, and another time in response to something a haunted house was doing. And Homer is a buffoon of epic proportions.
I think Tamino’s approach is more of a one-box model…an enormous box to fit over the heads of all of his readers so that they are not aware of his errors. Unfortunately, the value of “n” representing the number of his readers is positive and not negative (although the overall sentiment of his site is strongly negative).
Thank you for the reply. I was a bit confused about the eigenvalue discrepancy, but now I see that you are saying that Tamino had revised his post before I compared your and his formulations. If so, I am very disappointed, not that he made an error – as my favorite Einstein quote goes, “All mathematicians make mistakes; good mathematicians find them.” (I like to apply that to other professions also) – but that he did not mark it as an update.
For the other part, I will try to clarify how I see the two POV’s a bit more. Say I have a vibration problem with some equipment, and start to investigate it with a simple two-mass/spring model. I set up my equations using Hooke’s Law and Newton’s second law, and check the resulting equations for Conservation of Energy – but using named constants and variables (e.g., F = kx for Hooke’s Law). I know that for the actual equipment, modeled as two masses, one mass will be large and the other small, so I guesstimate m1 = 1 and m2 = 30, then do a run-through of the model applying known forcings to it, and get a particular shape of response. I then say, look, that is the type of response we are getting, and if I make the spring stiffer we don’t get that response, so that may be how to fix the problem.
A colleague looks at my model and says, how do you know it obeys Hooke’s Law and Conservation of Energy? I say, because those are the ways I derived the equations. Later he comes back and says, aha, I have weighed the equipment and m1 is not 1, it is 0.1. Therefore your model does not obey Hooke’s Law and Conservation of Energy, because m1·x″ + k·x does not equal the actual force on the actual mass 1, using your value of m1.
I would be a little annoyed, and wish that instead he had asked, how do you know your masses and spring stiffness are correct, and is the response sensitive to changes in them (so long as m2 is still much greater than m1)?
The latter is a valid question, and would have to be addressed. The former sounds like it is asking whether I know how to use Hooke’s Law – although perhaps with more patience I could have elicited an objection which we both understood in the same way.
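(For what it’s worth, the sensitivity question is easy to check directly. A minimal sketch, with made-up unit stiffnesses, comparing the natural frequencies of a wall–k1–m1–k2–m2 chain for the guessed and the measured m1:)

```python
# Sketch: natural frequencies of a two-mass/spring chain (wall-k1-m1-k2-m2).
# All stiffness and mass values are made up for illustration.
import numpy as np
from scipy.linalg import eigh

def natural_frequencies(m1, m2, k1=1.0, k2=1.0):
    M = np.diag([m1, m2])                 # mass matrix
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])        # stiffness matrix
    w2 = eigh(K, M, eigvals_only=True)    # generalized eigenproblem K v = w^2 M v
    return np.sqrt(w2)

print(natural_frequencies(1.0, 30.0))     # the guessed m1
print(natural_frequencies(0.1, 30.0))     # the measured m1
```

If the slow mode barely moves while the fast mode shifts, the guessed m1 was harmless for the slow response, and that is the discussion my colleague and I should have had.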
Lucia,
I took the time to read over the Tamino post and subsequent comments. As far as I can tell, Tamino produced no accurate description of what assumptions went into his curve fit model. However, from his graphs it appears that he simply duplicated the assumed forcings used by the IPCC (including a large and uncertain aerosol “cancellation” of much of the GHG forcing), and added a 30-year ocean “slow box” delay (supported by nothing) to further boost the net sensitivity. You can easily improve on the R^2 fit to historical data of his “model” by assuming little or no cancellation of greenhouse gases by aerosols and assuming a “two-box” ocean with Scafetta’s 0.4 yr and 8.7 yr time constants, along with Scafetta’s estimated box capacities of 58% fast and 42% slow. Better fit to the data, and a climate sensitivity of about 0.28 C/watt forcing.
Tamino’s efforts don’t prove anything; if you assume (like the GISS model) that much of the GHG forcing has been “canceled” by man-made aerosols, and that very long ocean lags are delaying the appearance of the “true” warming, then of course you must conclude that the climate is very sensitive to forcings. Tamino’s post dodges the real questions: are the assumed effects of man-made aerosols and long ocean lags supported by reliable data? I think the answer is clearly no. The ARGO ocean heat content trend over the last 5+ years is clearly contrary to the very long lags, and the widely reported “global brightening” from the early 1990’s until at least 2005 suggests that aerosol “cancellation” has already declined dramatically.
What I do not understand is why Tamino feels he must be abusive to anyone who disagrees with him. If he treats people the same way in his in-person dealings, then he runs a terrible risk of being punched in the nose.
JimV–
Did you even read what I actually asked Tamino? Including the initial questions?
In the beginning, I did precisely what you suggest. I asked Tamino how he came up with his time constants. I tried to rephrase, and asked him what range of parameters he envisioned.
In my third attempt, I tried to explain that some choices of time constants (possibly his) might not map into physical reality, that this might include a violation of the 2nd law of thermo, and that one should test for these violations.
Tamino’s answer was
I predict that Tamino will post further on this. He has already responded to a query from TCO about correcting his eigenvalue solution. FWIW, from my perspective, he will have to go further with this to maintain any credibility. He made some mistakes and shot down reasonable comments which would have helped him work toward the proper solution – or at least reconcile the discrepancies – which Lucia was obviously trying to do. He must put this right or risk exposing himself as untrustworthy in front of many who have been following both blogs.
SteveF, “What I do not understand is why Tamino feels he must be abusive to anyone who disagrees with him.”
That likely falls under the aegis of the ‘Greater Internet ****wad Theory’. Slightly NSFW.
http://www.penny-arcade.com/comic/2004/03/19/
BANNED!
Hey, Lucia, I see you are banned from Tamino! Like me and quite a few others.
Congratulations, wear it with pride. Your banning is more distinguished than mine, yours was publicly announced on the blog. I only became aware of mine because my posts stopped appearing.
We should start up an honor roll. It’s a compliment to be banned from Tamino. Yours is a bit unusual. Mine was preceded by insults to my parenting abilities, and I was compared to Judas Iscariot. I don’t recall any wild personal insinuations being directed at you; you missed out on that. Oh well, maybe the gang will make up for that now that you cannot reply.
What a hoot those guys are!
Layman Lurker–
Oddly, I don’t think Tamino’s behavior indicates he is deceptive. I think he is prone to the normal human failing of not wanting to believe there can be anything wrong with something he posted. I suspect that, like BPL, he has fallen into the trap of believing that because some discussions of the 2nd law are nothing but the confused ramblings of people who do not know what that law can be used for, all discussions of it must be the same.
Tamino may well be able to rescue this and come up with something sensible.
I’ve discussed these two-box issues in comments somewhere here. I seem to recall someone asking me why I don’t give it a try.
I’ve long been of the opinion that if one is going to do a statistical fit to the two-box problem, they should use two temperature series. Unless measurement error swamps the fit, or the forcings are way off, using two temperature series will, by itself, help protect against a curve fit producing best-fit results that might possibly violate known physics. (It will do this because heat really does not flow from cold to hot on earth, and so actually having the two time series should reflect that fact.)
So, if possible, they should use ocean heat content and surface temperatures. Ideally, the series should be long. Also, ideally, one should do the fit so the forcings are partitioned in some reasonable way between boxes, etc.
One can either constrain before or check afterwards. The order doesn’t matter. Doing it right is tedious. It might be worth doing.
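(A rough sketch of what I mean, with synthetic stand-in data, since the real OHC series is the hard part. The model, parameter values, and noise levels here are all illustrative assumptions; the point is that both series enter the residual and the physical constraints go in as bounds before fitting.)

```python
# Sketch: joint fit of a two-box model to surface temperature AND ocean heat
# content, with non-negativity imposed as bounds.  All numbers illustrative.
import numpy as np
from scipy.optimize import least_squares

def simulate(params, forcing, dt=1.0):
    """Forward-Euler two-box energy balance, annual steps."""
    Cs, Co, beta, lam = params
    Ts = To = 0.0
    out = np.empty((len(forcing), 2))
    for i, F in enumerate(forcing):
        dTs = (F - lam * Ts - beta * (Ts - To)) / Cs
        dTo = beta * (Ts - To) / Co
        Ts, To = Ts + dt * dTs, To + dt * dTo
        out[i] = Ts, Co * To          # surface temperature; ocean heat content
    return out

rng = np.random.default_rng(0)
forcing = np.cumsum(rng.normal(0.02, 0.05, 150))     # stand-in forcing history
true_params = [5.0, 100.0, 1.0, 1.2]                 # Cs, Co, beta, lambda
data = simulate(true_params, forcing) + rng.normal(0.0, 0.02, (150, 2))

def residuals(p):
    sim = simulate(p, forcing)
    return np.concatenate([sim[:, 0] - data[:, 0],
                           (sim[:, 1] - data[:, 1]) / 100.0])  # crude weighting

fit = least_squares(residuals, x0=[1.0, 50.0, 0.5, 1.0], bounds=(1e-3, np.inf))
print(fit.x)   # constrained to stay physical: no negative beta, no negative C’s
```

Drop the OHC rows from the residual and the same problem becomes much more weakly constrained, which is the point of insisting on two series.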
If Tamino does it, good for him.
Michel–
Oh. There were some personal insinuations in the original thread. There were also misrepresentations of the past etc. Overall, it appears that I was banned because I suggested something was wrong with Tamino’s model and I was right.
Zer0th (Comment#18365),
You may be right about the ‘Greater Internet ****wad Theory’.
Reminds me of an observation made by one of my business partners about inhibition and character: you can better tell someone’s character when they are a little drunk; if they become aggressive/abusive rather than friendly and talkative, then that is their natural tendency.
SteveF,
Tamino probably doesn’t deal like that with real people. He’s more likely an “internet tough guy.”
What I see is a guy who has created an online life in a fantasy world. Instead of playing “World of Warcraft,” he has his own little world to control on the internet. I assume he’s a fan of Mozart’s “the Magic Flute” and took the name of Tamino from there. Of course, the character Tamino is a hero and a handsome prince that has women fall in love with him at the drop of a hat. Freud might have had a field day with the likes of him.
When I briefly posted there years ago (until I was banned, of course), he made no secret of the fact that it was his site and that he and his kind were not going to hold back when it came to ad homs and hostility. But if you were a skeptic/septic/denialist, you had best be on your tippy-toes.
And even if you were pleasant, you would still have posts deleted or edited if you presented evidence of an error by Tamino or his ilk, had a substantive point that didn’t fit in with Tamino’s beliefs, etc. Lucia’s run-ins with him are a perfect example.
There is another comical example detailed here http://gustofhotair.blogspot.com/2008/10/if-it-doesnt-fit-bill-just-pretend-it.html
Long ago I looked at the hadcru series for land and for ocean. It was quite interesting; you could definitely see the lag in the ocean, then an overshoot in the land temps. Argg, lost it somewhere. I didn’t try to fit it to a two box (beyond me… a man, even tamino, has to know his limits).
Michael– I love this
Yes. I was a bit surprised at what was snipped in the final comment of mine that appeared. I don’t remember precisely what I said, but I don’t think it was offensive.
Not that Tamino is likely to care, but I doubt I will post there again.
What numerical values do T_s and T_o approach as time becomes very large?
Do the numerical values seem reasonable?
Is it likely that the Earth’s systems will ever attain states analogous to that indicated by the very-large-time-scale behavior of these equations?
Thanks
There is a comment published by the Journal of Geophysical Research that could contain the same errors as Tamino’s two boxes – note the lead author.
http://www.jamstec.go.jp/frsgc/research/d5/jdannan/comment_on_schwartz.pdf
Dan– With any two-box model, the answers to your questions would always depend on the applied forcing.
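(For the special case of a forcing held constant, though, the large-time limit is easy: set the time derivatives to zero and solve the linear system. A sketch, using the general form from the top of the post, with arbitrary positive test numbers of my own choosing:)

```python
# Sketch: steady state of the two-box model under a constant forcing F applied
# to the surface box.  Parameter values are arbitrary positive test numbers.
import numpy as np

Cs, Co, beta, tau_s, tau_o, F = 1.0, 30.0, 2.0, 1.0, 30.0, 3.7

# 0 = F/Cs - Ts/tau_s - (beta/Cs)(Ts - To)
# 0 =      - To/tau_o + (beta/Co)(Ts - To)
A = np.array([[1/tau_s + beta/Cs, -beta/Cs],
              [-beta/Co,           1/tau_o + beta/Co]])
Ts_inf, To_inf = np.linalg.solve(A, np.array([F / Cs, 0.0]))
print(Ts_inf, To_inf)   # finite limits, with To_inf below Ts_inf
```

For any time-varying forcing there is no single number to quote; whether the limits look “reasonable” depends entirely on what you feed in.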
Bill–
I thought very little in that paper was worth beans– but I don’t see the error Tamino just made in that paper.
Michael Jankowski (Comment#18371)-I had forgotten about Lowe’s blog. His analysis of temperatures in Australia is always fascinating, if a bit overwhelming. Naturally, anyone who challenges Tamino and is that well equipped to dispatch him must be eliminated. I loved reading how Tamino squirmed trying to prove that rainfall in Australia is “decreasing”. I seem to recall a rather large fuss when Roy Spencer fit a polynomial to the UAH data, but Tammy thinks that it works for Australian rainfall. He criticizes anyone who draws a trend in GMST since 1998 – they should do it since 1975, he admonishes – but then he makes a fuss when someone says that showing all the rainfall data makes it hard to see any trend; to get one you have to play Tammy-style games like smoothing, which is bad bad BAD:
http://wmbriggs.com/blog/?p=195
or Polynomial fits! Which is bad:
http://deepclimate.org/2009/04/09/the-alberta-oil-boys-network-spins-global-warming-into-cooling/
Well, okay, it’s sometimes bad. But not when Tammy does it. See the logic (and yes, Deep is a Tammy fan. I suspect they are related personally, given the similar ego-maniacal tendencies and penchant for hysterical, overreaching nonsense.)
re: Lucia #18367
Lucia, I commend your restraint. You may well be right about Tamino being overzealous rather than dishonest. However, his response to your questions and comments, saying you were “trying to confuse the ignorati”, was wrong whether you attribute it to zeal or spin. His quiet correction of the eigenvalue math in his second post – without a proper update and recognition of the source – was weaselly IMO.
Lucia,
You are far too generous. Based on Tamino’s action here and Lowe’s eye-opening narrative, Tamino appears to be deeply dishonest.
“2) The laughter occurred inside the confines of my home. Under the circumstances, I could not help laughing. Moreover, I can make no apologies if milk shoots out my nose when I discover Tamino explaining the “physics” underpinning an equation he obtained by committing high school math errors.”
In the confines of your own home, you post it on the internet as a public pillorying of Tamino. Those who developed the scientific method realised long ago that such immature behaviour was anti-scientific.
re: Michael Jankowski (Comment#18371)
…There is another comical example detailed here http://gustofhotair.blogspot.c…..nd-it.html
Missed that one, and all I can say is… Wow. Since we all know Tammy is reading this thread and would delete (or better yet, edit) this comment if I made it at HIS HOUSE, I’d like to offer the following observation wrt the last rant on that thread. (Lucia, feel free to Zamboni this whole thing if it’s veering off the intended topic):
Tammy: When you create a blog post and link to someone’s blog, and then go on to deride him (before immediately contradicting yourself…) on said blog entry… with comments… You most certainly DID invite him in.
bugs–
Nonsense.
Re bugs #18389:
Would you describe Tamino’s last response on the thread linked below scientific, or anti-scientific?
http://tamino.wordpress.com/2008/05/30/drought-in-australia/
If you have problems finding it, it starts out with the phrase “Who the hell do you think you are?”
Bugs just made yogurt squirt out of my nose.
Me too. The interesting part is that I haven’t had any yogurt in weeks.
More bilge in Tamino’s canoe?
http://www.climateaudit.org/?p=2869
Lucia:
I second Layman Lurker’s comment about your admirable restraint.
bugs:
The evidence for nominative determinism is mounting.
Well, this was the treatment Tamino gave to me:
http://tamino.wordpress.com/2008/03/06/pca-part-4-non-centered-hockey-sticks/#comment-14278
… and then I was censored:
http://www.climateaudit.org/?p=609#comment-222107
Half a year later Ian Jolliffe himself confirmed what I said:
http://www.climateaudit.org/?p=3601
Still waiting for an apology … which reminds me, has DeepClimate been around here lately?
http://tamino.wordpress.com/2009/06/22/key-messages/#comment-32570
” bugs (Comment#18389)
August 23rd, 2009 at 8:43 pm
“2) The laughter occurred inside the confines of my home. Under the circumstances, I could not help laughing. Moreover, I can make no apologies if milk shoots out my nose when I discover Tamino explaining the “physics” underpinning an equation he obtained by committing high school math errors.”
In the confines of your own home, you post it on the internet as a public pillorying of Tamino. Those who developed the scientific method realised long ago that such immature behaviour was anti-scientific.”
Bugs, you should keep in mind that you are allowed to post here and not be “banned” the way RealClimate, Tamino, and Climate Progress ban posters who disagree with them. I have had the honor of being banned from RealClimate and Climate Progress because I posted documented facts that were contrary to their AGW alarmism. You will find few, if any, ad hom attacks here, unlike those sites where such attacks are omnipresent and encouraged.
I don’t think bugs realizes quite how hilarious the fact of Tamino’s incorrectly posted eigenvalues was.
First: the incorrect eigenvalues appeared on his blog in a post specifically criticizing my post. My post contained the correct expressions for the eigenvalues. Somehow, though his equations looked different from mine, it did not occur to him to say “Hey. Wait. Why do the two expressions appear different?”
Had he done so, he would have discovered his algebraic error before posting.
Had he made the error under other circumstances, I would not have laughed. But come on! He had the correct result right there in front of him and he still posted the wrong result.
BTW: He has explained how he came to goof up.
So, basically, he did not back-substitute. The result was an equation that
a) made no sense,
b) for some large values of β could yield a negative time constant (which would correspond to one solution that increases exponentially forever), and
c) came with a narrative discussing the “physics” that made no sense at all.
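(The check that catches this sort of slip takes a few lines of sympy: compute the symbolic eigenvalues of the general two-box matrix, then back-substitute each into the characteristic polynomial with arbitrary positive test numbers. The symbols and test values are mine; any correct pair of roots passes, and a pair with a dropped term does not.)

```python
# Sketch of the back-substitution check: symbolic eigenvalues of the general
# two-box system, verified numerically with arbitrary positive test values.
import sympy as sp

Cs, Co, beta, ts, to = sp.symbols('C_s C_o beta tau_s tau_o', positive=True)
A = sp.Matrix([[-(1/ts + beta/Cs),  beta/Cs],
               [  beta/Co,         -(1/to + beta/Co)]])

test = {Cs: 1.0, Co: 30.0, beta: 2.0, ts: 1.0, to: 30.0}
for lam in A.eigenvals():
    # Each eigenvalue must zero out the characteristic polynomial...
    residual = (A - lam * sp.eye(2)).det().subs(test)
    assert abs(complex(residual.evalf())) < 1e-9
    # ...and, with all parameters positive, both roots must come out negative.
    print(float(lam.subs(test)))
```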
lucia (Comment#18404)-“Somehow, though his equations looked different from mine, it did not occur to him to say ‘Hey. Wait. Why do the two expressions appear different?’
Had he done so, he would have discovered his algebraic error before posting.”
That’s because he has an attitude that he is always right and everyone else is wrong. I guarantee he noticed the difference. He just thought “Well, obviously her version is wrong” without checking.
When error and ego collide, the results are often catastrophic, but in this case spectacularly humorous.
Bugs:
You’re full of it.
Mockery of badly done physics is a staple of the science. We even have a journal for it, the “Journal of Irreproducible Results”. Tamino’s original erroneous excoriation of Lucia would be a candidate for that journal, as might his 2-box model fix that violates the 2nd Law of Thermodynamics.
On the other hand, modifying your journal entry after the fact to correct your errors, without leaving the original available is borderline fraudulent behavior. If I found a student of mine doing this, he’d be looking for a new advisor.
2 things Lucia
1)
I reacted like you. Only it was Perrier that I spilled, coughed, and snorted all over my computer room. It is worth the best of Monty Python. Here “ROFL” and “Pwned” are relevant.
Btw I am banned now too and none of my comments passed.
Not that I will miss this place, which always reminded me more of “The Damned” of Visconti than of any sort of human universe.
As for the “mind” bit, it mostly reminds me of the IQ of oysters.
When they are open they are also rotten and smell badly.
2)
I checked something. The 2 box model with these constants works for Triton (with only some adaptation for the atmosphere).
Lucia, I am disappointed in your attitude here. It may get lots of cheers from Watts and friends, but you know what, science is a cooperative endeavor, not mud-wrestling. You had many choices in how to respond to Tamino’s post. In the very first response he made to your comment, the very first comment on his original “not computer models” post, he wrote:
In the original article he wrote:
I.e. his assumption of 1 year as the “short” time constant was intended to be a “very quick”, “prompt” response. What you have (quite nicely, other than the vindictive tone of the commentary) shown here is that, assuming the physical interpretation Tamino gave (atmosphere vs ocean), the “prompt” response has to be considerably faster than 1 year – in fact on the order of 1 month or less. That’s fine – it has to be even quicker than his “very quick”, “prompt”, “short” assumption.
Does that change the result of doing a two-exponential statistical or phenomenological model of the response to forcings? Not significantly, simply because you really can’t use month-to-month temperatures in that modeling, so the data are annual, and 0 or 1 years is about all the choices you have – and it probably doesn’t make much difference (Tamino asserts as much elsewhere).
Furthermore, from the start of that post, Tamino was attempting to distinguish clearly between mathematical, statistical or phenomenological models on the one hand, and “computer” or physical models on the other. The whole title of the post was “not computer models” – by which he meant, not models that are based on the actual properties of the physical systems (atmosphere and ocean) responding to change. So of course, despite linking to his older “two-box” post, there was no physical underlying model involved in this case – obviously he could have been quite a bit clearer about how he did the fit, but he was clear enough from the start that it was not based on an underlying physical model.
In other words:
what you have proved agrees with Tamino’s original assertions – the atmospheric response is quick (quicker than he assumed).
Tamino answered your questions completely from the first, and your assertions otherwise indicate either some deep lack of understanding of what he’s talking about, or, well, something else.
The second law of thermodynamics has nothing to do with this. Nobody ever tries to violate it – it’s a constraint on the real world. In this case, the constraint simply forces the “short”, “very quick”, “prompt” time constant to be on the order of 1 month, rather than 1 year, for an atmosphere/ocean-based two-box. As Tamino pointed out, you could have easily offered that conclusion, though David B. Benson already pointed that out pretty quickly:
I’m sure it’s fun to trade insults, but I can see how frustrating it can be when they are as undeserved as in this case. Maybe you’ll learn something from being banned by Tamino, but I don’t see much sign of it in this thread yet.
Arthur Smith,
It is interesting, to say the least, that critics of AGW orthodoxy are pilloried for any sort of perceived slight against AGW promoters. Yet no rebukes are in store for the endless streams of abuse and misleading parsing that AGW promoters depend on as par for the course. You belittle our hostess, while ignoring Tamino’s much more offensive behavior. Rather typical of AGW believers, I cannot fail to notice.
The disappointment is that you attack our hostess, while ignoring the extended pattern of boorish and deceptive behavior from the likes of Tamino. He has rewritten his post to cover his errors, and pretends he did not. While that is an accepted standard for the AGW community, in the real world it is not.
The AGW community asserts the ‘science is settled’ with regard to their AGW predictions. The behavior of the AGW community proves that is simply a misleading claim.
Arthur–
I am very, very sadly disappointed by your comment.
You seem to misunderstand that I tried repeatedly, in comments at Tamino’s, to explain that his specific choice of parameters needed to be checked to verify they matched physics, and I asked him if he had checked. He would not check.
It is well and good for you to quote Tamino saying “it hardly matters to the final answer as long as it’s kept short”, but I think you should note that this statement is utterly vague. It hardly matters to which final answer, for what?
It certainly matters to the answers: What are the time constants? Is Schwartz’s time constant right?
If Tamino simply assumes values for both, he is assuming the conclusion at the outset.
Nonsense. Of course you can use month-to-month temperatures in a two-box model. You can use them in a 1-box model and an n-box model.
If you believe you cannot use shorter time constants, you clearly misunderstand something. If Tamino believes this, he suffers from the same misconception.
Arthur Smith (Comment#18410) August 24th, 2009 at 9:59 am
You aren’t being serious, are you!! It’s a badge of honor that many proudly wear.
I predict more yogurt / coffee through more noses.
Arthur–
Also, I find your claim that Tamino did not intend his model to be physically realistic to be absurd based on the wording of his post. I also find it absurd based on the wording of his rebuttal of mine.
If Tamino wishes to say that he never meant his statistical model to reflect physical reality, he should say so. But that’s not how anything he has posted reads.
Lucia,
A quick question. I am trying to understand the point that Tamino was trying to make before he botched the math and went off his Lithium. Am I seeing the same argument that he is trying to present?
1. Those nasty skeptics don’t believe computer models out of sheer spite.
2. If they don’t believe computer models, I will construct a simple analytical model that demonstrates the same thing.
3. Of course the simple toy model is “consistent” with what he is trying to show … math errors and all.
How are two overly simplistic and potentially incorrect models better than one?
Doesn’t this actually make the point of the skeptic position quite handily? Using any sort of mathematical model of the real world, one must be extremely careful of confirmation bias, have a high degree of self-honesty, and have the capability to be truly critical of your own work. Otherwise, you end up inadvertently tuning your model to get the results you expect via confirmation bias. You can’t just stop when you get the results you desire.
It is not Tamino’s model that I think is truly at issue here. I actually feel there is no better tool than simple models for developing intuition and understanding for more complex systems. However, all models break down at some point and are no longer a good predictor of the real world. I see absolutely no sign that Tamino and his ilk understand this.
Even Tamino’s broken model has some lessons to teach, and not the least of them is to always look at any model with a healthy dose of salt. You should always be asking yourself, “Am I really looking at what I think I am looking at?” It is not that I distrust the models per se that makes me a skeptic (everyone uses models at some level). It is the fact that I distrust the objectivity and self-criticality of the model builders that makes me a skeptic.
Given this current demonstration, it would seem to me that my skepticism is highly justified. Surely there has to be at least one technically competent warmer out there somewhere capable of handling a scientific conversation in a polite, informative tone. Anybody? Is there a blog where I would find such a viewpoint?
Lucia, you write:
Ummm. Obviously, “what are the time constants” is not the question he was trying to address in his post, because, as you point out, he “assumes values for both”. The analysis was not intended to ferret out the time-constants – feel free to re-do it with other time-constants you feel more appropriate. The analysis was simply a mathematical demonstration of how to extract information about the sensitivity of Earth to forcings, using observational data instead of going through physical models of the entire planet.
In that, Tamino’s original post was brilliant, and elegant. The reason for introducing the two-box model was explained very clearly:
You really think “what are the time constants” was a question he was trying to answer in that post? Seriously?
Arthur,
I am a bit confused. You make the argument that Tamino’s model is not meant to have a basis in physical reality. You then go on to claim that Tamino’s claim that the atmospheric response is quicker is correct.
So you are using a model that has no basis in physical reality to make a claim about a physical property? How does that work? Am I misunderstanding you?
Arthur–
Do you really think he’s not trying to suggest that the time constant for the earth’s climate is longer than suggested in Schwartz’s model? His link back to his previous post introducing the two-box model suggests that he is fiddling with these precisely to justify a time constant longer than Schwartz’s.
As a method of extracting sensitivity, I think Tamino’s model is an utter failure, period. I think he has quite simply failed to extract that. Part of the reason he can’t extract that is that he simply set the time constant. Part of the reason is that’s not enough information to extract them.
I think Arthur Smith’s comments should be even more verbose and whinier.
Because I Really haven’t had enough of these kinds of stylings reading Really Real Climate Blogs the last 5 or 10 years.
Andrew
Arthur Smith (Comment#18416) August 24th, 2009 at 10:40 am
More yogurt / coffee.
Box ‘models’ are a dime a dozen. Have been for decades.
Artifex,
“Surely there has to be at least one technically competent warmer out there somewhere capable of handling a scientific conversation in a polite informative tone. Anybody ? Is there a blog where I would find such a viewpoint ?”
From time to time, Gavin seems willing to communicate in a relatively civilized manner, but that is about it. Mann and the rest at Real Climate have pretty much a “take no prisoners” mindset, and snip any comment/question they do not like. Unfortunately, even Gavin does not appear to object to endless insults and ad homs in the comments. When Gavin has commented here, Lucia has cautioned other commenters to treat him decently. Too bad that can’t be reciprocated at Real Climate. All the rest of the AGW blogs I have read seem much worse than Real Climate.
Arthur Smith (Comment#18410),
I read the entire thread at Tamino’s blog. It is absurd to suggest Lucia’s questions justified his aggressive responses and ultimate banning. Tamino’s original post really was very sketchy about what assumptions he made in doing his curve fit, and certainly nobody could have gone through the same process without additional information. Lucia’s questions appeared to me to be completely sincere, and certainly did not contain any insults or personal attacks.
I suggest you consider if Tamino would ever allow a comment similar to your #18410 to post at his blog. Tamino’s endless rants, insults, and profanities remind me more of the behavior of an emotionally troubled teen than that of a normal adult. Do you not agree that Tamino’s behavior is less than constructive?
Andrew_KY,
Come on, none of that. It solves nothing.
I would point out that we are discussing not only Tamino’s math failure, but whether or not his model echoes the real world reliably. For someone who does not believe in thermal physics because it has not been “demonstrated”, I would think that you don’t have a lot of wiggle room here.
Pot meet Kettle.
Artifex,
OK. I will accommodate your request to desist.
I did get a pleasant tingle when I hit ‘submit’ on that one, though. 😉
Andrew
Lucia asks:
Lucia – if you really were sincerely thinking this was what Tamino was doing, why didn’t you ask if that was what he was doing at his blog from the start? Instead we got questions about what he was using as the atmosphere-ocean heat coupling constant, which he very clearly explained was irrelevant and not part of what he was asking questions about.
No, Tamino was emphatically not trying to “justify a time constant longer than Schwartz”. Schwartz is irrelevant to that discussion, not mentioned anywhere in the post (or in the comments other than once very peripherally). As indicated in the followup discussion, the real physical system has time constants all the way up to (at least) hundreds of years – there is an entire broad spectrum of responses. Tamino’s “two-box” here was just the simplest example of a case where you might find two time constants, but if you want to model it with more or different time constants, go right ahead!
In fact that would be absolutely the most productive thing to come out of this. Lucia, you already have “lumpy” which is a single-time-constant fit of temperatures to forcings. Why not try extending that to two time constants, and pick whatever time constants you feel are best, or try a variety, or even try with three or four, and see what kind of sensitivity you get that way. My guess is whatever time constants you pick, as long as the fit is actually reasonable, you’ll find an overall sensitivity exactly in the expected IPCC range, between 0.5 and 1.2 deg.C/(W/m^2).
That is the question Tamino was actually looking at in his post. The question of relevance here is whether anything you have asserted on this has any effect on that sensitivity number. Do you really think it does?
[edited to correct the lower range of sensitivity expected]
Arthur–
Why would I ask something I do not even consider to be a question?
It was clear to me that Tamino was suggesting that his statistical model is based on a two-box model and, as such, is motivated by physics. In this context, the rational question is to try to discover how in the heck his parameters relate to the two-box model.
My question was relevant whether or not the purpose of Tamino’s post relates to Schwartz. I simply point that out because you are inferring something about “the purpose”.
I have discussed this in comments, either with you or others, before. The answer to “why not” is that I think it is probably a waste of time to try to do this unless we have access to a large amount of ocean heat content information going back a long way.
I do not believe an analysis based on two arbitrarily chosen time constants fit to existing surface data can be used to determine the climate sensitivity any better than a one-time-constant model.
This is doubly true if the time constants do not map into those for the planet.
Arthur Smith is quite unintentionally hilarious, in particular his comment: “Maybe you’ll learn something from being banned by Tamino”.
The only thing any intelligent person would learn from this is that Tamino is both a jerk and dishonest. Does Mr. Smith really think that anyone cares about being banned by Tamino so he can keep his little echo chamber going? Oh, and if Mr. Smith posted something on either RC or Open Mind as rude to the proprietors as his posting here, he would be banned in the proverbial heartbeat. Arthur, perhaps you can learn something.
Arthur
Why would I ask something that is not even a question in my mind?
Reading Tamino’s post, it appeared (and still appears) his intention was to suggest that his statistical model was constrained by physics, and in particular the box model. In this light, my questions relate to trying to figure out how in the world his statistical model relates to the physical reality of a two-box model.
If, as you claim, it is clear to you that Tamino never meant his statistical model to have anything to do with physical reality, and everyone was supposed to understand that it had nothing to do with physical reality and/or a two box model, I can see why a) you wouldn’t ask the same question and b) you would not understand why anyone would ask that question.
The extension of Lumpy to two or more lumps has been discussed many times. The reason I do not believe the extension to a two-box model is promising is that I think the existing surface data are insufficient to permit us to obtain reasonable values for the time constants. I think this fit must be supplemented with ocean heat content data. Otherwise, either a) there are too many fiddle factors to tweak or b) someone can do precisely what Tamino did: arbitrarily constrain to get whatever the heck answer they like.
Historic ocean heat content data is too sparse.
Arthur Smith (Comment#18424)
“My guess is whatever time constants you pick, as long as the fit is actually reasonable, you’ll find an overall sensitivity exactly in the expected IPCC range, between 0.8 and 1.2 deg.C/(W/m^2).”
Really?
Tamino got 0.69 from his model, below the range you note, and I assume you find his fit actually reasonable. He of course suggests that it is not high enough because the real ocean lags are actually much longer than the 30 years he used, based on…. I don’t know what. Maybe the desire for a higher sensitivity value?
If you assume all the IPCC forcings are correct and assume a long ocean lag, then the apparent climate sensitivity MUST be very high (or in other words, GIGO). This observation is very far from “brilliant”. There is in fact no new insight at all in Tamino’s post (ok, maybe new to him); the result of his exercise is quite obvious to anyone who gives it a bit of thought.
Arthur Smith (Comment#18410) August 24th, 2009 at 9:59 am
I would like to respond by saying that the time constants DO matter. In fact, the outcomes of this thought experiment seem to fit neatly into three boxes:
A) The time constants (and/or appropriate number of boxes) can be clearly determined from the observational record, and the n-box model turns out to be a pretty good (at least conceptual) model for the climate system despite the massive (and openly admitted) oversimplification.
B) The time constants (and/or appropriate number of boxes) cannot be unambiguously determined from the data, in which case there is (currently) nothing to recommend this simplified model over a score of others. (In this case, warmists and deniers are equally able to use this model to justify whatever conclusions they like).
C) No reasonable, consistent constants can be determined for this model which fit with the data, in which case we should probably drop this model altogether.
Given his text, I would have to interpret “not computer models” as those which can be neatly written in a textbook format, using simplified analytical expressions and possibly some hand waving, and which do not require running a numerical simulation which can be somewhat opaque (since many “deniers” and in fact some reasonable people seem wary of this step).
All these n-layer simplified greenhouse warming models or 1-d RC blah blah could be seen as “not computer models.” A purely autoregressive model with no physics could be a computer model, but so could a coupled AOGCM. He could have just as easily chosen to write about “not physical models” vs. the rest, but that isn’t what he wrote about.
Of course there is a physical underlying model. The two box model as Tamino does it assumes heat transfer between a couple of boxes, each having some relative heat capacity, possibly only one of which is receiving a “solar” term, and with some interconnecting transfer coefficients; the setup cannot be separated from the physical model.
The gist of all this analysis is that you can’t just choose a bunch of time constants based on fitting to the observed data. Second law considerations (essentially the Fourier heat law here) set constraints on the relative time constants given the heat capacities (relatively well-known) and the transfer coefficients between the boxes (slightly less well known).
Oliver
P.S.: As Lucia points out, good luck fitting historic ocean heat content data to rule out the alternatives.
Arthur Smith:
Pffft.
Please stop the sanctimonious preaching.
There is no “spirit of cooperation” here because Tamino and his ilk, meaning you in this case, want to tell everybody else how it is going to be, what the rules are for debate, what types of questions we’re allowed to raise and so forth.
The take home message here is that your crowd consists of a bunch of intellectual lightweights who can dish out insults and ad hominems by the truckload but get all nervous and excited when one of your own gets a bit of well-deserved pummeling.
Screw him and screw you. This is science and nobody is going to pull punches when we see wrong crap being promoted and disseminated like this latest dreck from Tamino.
You get bloodied up in science sometimes. It’s part of the art.
OMG you are full of it.
It has everything to do with “this”. The model fails to meet important physical constraints, and that means its results can’t be interpreted physically. That’s completely damning because being interpreted physically was the point of Tamino’s exercise.
It was a lot of fluff with a simple little model that violates physical law in it.
All you’ve proven here is you have no standards.
Hunter:
I’ll expect a comment from Arthur Smith on that just a moment after hell freezes over.
OT but some very sad news for the warmistas today from their own pals LOL
http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/current.365.jpg
Lucia:
I agree.
That was very clearly the point of the modeling, and it is obvious if you look at how Tamino goes on to physically interpret the results of the modeling that this was his “author’s intention”.
As I commented just a minute ago, it’s a truism that a model that fails to meet physical constraints can’t be interpreted physically.
Carrick–
I would also note that Arthur is suggesting an interpretation Tamino did not suggest in his “rebuttal”. Moreover, Tamino’s “rebuttal” includes a discussion of the phenomenological meaning of his time constants.
So, while Arthur may wish to believe Tamino does not mean to suggest that his curve fit has something to do with physics, Tamino’s first post and rebuttal suggest that, to the contrary, he does want people to believe his curve fitting exercise has something to do with physics.
SteveF (Comment#18429) – you probably started your comment before I had corrected the 0.8 in my comment to 0.5 (as noted at the end of the comment). That was a translation from the IPCC equilibrium sensitivity numbers of 2.0 to 4.5 degrees C per CO2 doubling. Tamino’s number is obviously within the range, I just briefly goofed in dividing numbers.
Simple errors like that (mine first, then yours) happen. They get corrected. Harping on them as if they actually mattered is stupid – if they clearly don’t matter, as in this case.
From Lucia’s comment above, she apparently doesn’t know whether her criticism of Tamino’s time constant choice matters, and doesn’t care to prove it one way or the other. Maybe somebody else will have a try – obviously both Tamino and I are quite confident on the point that whatever time constants (or other completely different model) you use, if you can get a good fit to the past century of warming data using the standard forcing estimates, you’ll get a sensitivity very close to the IPCC’s numbers.
By the way, the entirety of Lucia’s post here concerns the magnitude of the short time constant, which Tamino declared from the first to be the “very quick”, “prompt” one, so it is entirely compatible with being shorter than his choice; yet according to her recent comments it was actually the long time constant that was of real concern. But we’ve had no discussion of any way in which that might be unphysical!
The diurnal variation of SST should give you a handle on the depth of the very well mixed ocean surface layer. IIRC, the diurnal variation is only a few tenths of a degree. Going back to my spreadsheet on that sort of thing, the water column depth on heat capacity alone is way too deep, greater than 100 meters. I need a term for latent heat transfer and the diurnal air temperature range. It’s starting to look too much like work.
You really do need a lot more heat capacity than just the atmosphere for the surface layer or the diurnal temperature range would be enormous. With the sun directly overhead at noon, the diurnal range of temperature of a single box model with instantaneous response time and no greenhouse effect (emissivity=1, albedo = 0.31) would be 45 K (247-292) for a heat capacity equivalent to 20 m of water. Once again, the water liquid/vapor thermodynamics must play a large role that I’m not taking into account.
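(If anyone wants to fiddle along, here is a quick sketch of the kind of back-of-envelope I mean: integrate the single-box balance through the diurnal cycle at the equator, emissivity 1, albedo 0.31, and vary the water-equivalent depth. The numbers are mine and rough; the knob to turn is the depth, and it need not reproduce my figures above.)

```python
# Sketch: diurnal temperature range of a single-box energy balance at the
# equator (equinox, emissivity 1, albedo 0.31).  Rough numbers throughout.
import numpy as np

S0, albedo, sigma = 1361.0, 0.31, 5.67e-8
depth = 20.0                            # metres of water equivalent; try 0.2 too
C = depth * 4.18e6                      # J m^-2 K^-1

dt, T, temps = 60.0, 270.0, []
for step in range(int(30 * 86400 / dt)):                      # run a month to settle
    sun = max(0.0, np.cos(2 * np.pi * step * dt / 86400.0))   # zero at night
    T += dt * (S0 * (1 - albedo) * sun - sigma * T**4) / C
    temps.append(T)

last_day = np.array(temps[-int(86400 / dt):])
print(last_day.max() - last_day.min())  # the range shrinks as depth grows
```

Latent heat is still missing from this, of course, which is where it starts looking like work.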
Tamino’s trying to move on now and has put more posts up. Shove the ball of hair back under the couch.
I’m glad to see the sanctimonious ___ screw up. Like Gavin, he needs to be reminded occasionally that there is no monopoly on smart or right in this world. His reaction to questions was childish and his insistence on the wonders of the two box model for verifying computer models was overreaching. Now he’s looking for a broom.
What are you talking about? Matters to what? If you don’t say to what, no one can say whether or not “it matters”.
Plus, why do I have to show “it matters” to “whatever unstated thing you think it doesn’t affect”?
You and Tamino haven’t shown the choice of time constants “doesn’t matter” to “anything and everything that might interest people”. Neither of you has shown it “doesn’t matter” to anything you have even given a name!
Great! Then show it, and show that your choices make physical sense.
But now answer this: Why doesn’t Schwartz’s model count as already proving you wrong? Why not Lumpy? Either would, presumably, be “another model”. The best-fit time constant and sensitivity don’t get you where you want to go.
As for any additional basis of your confidence, all I can say is this:
As far as I can tell, Tamino’s two recent posts tell the world absolutely nothing about the time constant or sensitivity of the earth. If he or you have some analysis on scratch pads in your homes, and it shows something, great. If this makes you confident, that’s great too.
But whatever is on your scratchpads at home, those of us who are not psychic can hardly be convinced by your telling us that you and Tamino are confident you will someday be able to come up with something that shows the IPCC range in some new way.
That said: color me mystified. I thought the IPCC range was already supported by empirical data. So… your goal with this two-box model, or future figment-of-your-imagination model, is… what?
Arthur,
Why would Tammy say “yes, I checked” (that the model doesn’t violate the second law of thermodynamics) if (as you state) “the second law of thermodynamics has nothing to do with this”?
You and Tammy need to communicate better in order to keep your stories straight.
Arthur Smith (Comment#18436)
I cut and pasted your comment without re-reading your post for corrections.
Anybody can put together a curve fit like Tamino’s in an afternoon (I already have). Quality of fit in the model depends on the capacities assigned to the two boxes. The fit is best with a relatively short lag on the “slow” box, but the fit doesn’t become much worse until the lag is past 50 years. The apparent sensitivity rises with increasing lag by about 50% at 50 years lag. No surprises there. So fiddling with the ocean lag gives you a considerable range of “diagnosed sensitivities”; you can choose pretty much any lag you want if there is no constraint from real data, which is what Tamino does with his 30 year lag.
The more important point is that the fiddle room from ocean lag is multiplied by the fiddle room in assumed climate forcings, especially the aerosol “cancellations” which are extremely uncertain, and IMO, just plain wrong. So if I say the ocean lag is relatively short, and the aerosol cancellation is near zero (which is within the IPCC uncertainty range) then the diagnosed sensitivity is very low… far below the IPCC range of sensitivities.
Tamino simply assumes parameters which give him the sensitivity numbers he likes. To suggest that this somehow “proves” the actual sensitivity lies within the IPCC range is simply not correct. What it proves is that Tamino can select parameter values that give him his expected sensitivity. This is most certainly not “brilliant”.
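(Here is roughly what my afternoon version looks like, sketched with stand-in series of my own; swap in the GISS forcings and a real temperature record to play. Convolve the forcing with two prescribed exponential responses, regress for the amplitudes, and watch the diagnosed sensitivity move as you fiddle the slow lag.)

```python
# Sketch of the afternoon exercise: two prescribed exponential responses,
# linear regression for the amplitudes.  The series below are stand-ins.
import numpy as np

def response(forcing, tau):
    """Convolve with a normalized exponential decay (annual steps)."""
    kernel = np.exp(-np.arange(len(forcing)) / tau)
    kernel /= kernel.sum()
    return np.convolve(forcing, kernel)[:len(forcing)]

rng = np.random.default_rng(1)
forcing = np.cumsum(rng.normal(0.02, 0.05, 130))            # stand-in forcings
temps = (0.5 * response(forcing, 1.0) + 0.3 * response(forcing, 8.0)
         + rng.normal(0.0, 0.1, 130))                       # stand-in "data"

for slow_lag in (5.0, 15.0, 30.0, 50.0):
    X = np.column_stack([response(forcing, 1.0), response(forcing, slow_lag)])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
    print(slow_lag, round(coef.sum(), 2))   # sum = diagnosed K per W/m^2
```

In runs like this, the quality of fit barely budges across that range of lags while the diagnosed number drifts, which is the whole fiddle-room point.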
The intemperate comments on deniers and the 2nd law may be referring to:
Gerlich, G., and Tscheuschner, R.D., “Falsification of the Atmospheric CO2 Greenhouse Effects Within the Frame of Physics”, Version 3.0 (September 9, 2007), Section 3.9, “The Laws of Thermodynamics”, pp. 75-79. arXiv:0707.1161v3 [physics.ao-ph], 11 Sep 2007.
They take to task those claiming to push heat flow uphill from colder to warmer regions.
David–
It is true that G&T’s argument based on the 2nd law is nonsense.
Of course, that doesn’t mean that engineers and physicists are required to forget that the 2nd law exists and constrains physics. If Tamino wants to make that some sort of law at his blog… I guess we can just say
“Open Mind: Believe in universe with no 2nd law of thermodynamics!”
Woe betide the hapless greenhouse alarmist when German physicists enter the fray! As above, Kramm and Zelger take A.P. Smith to task for managing to both mangle the math and violate the 2nd law.
Comments on the “Proof of the atmospheric greenhouse effect” by Arthur P. Smith
Gerhard Kramm, Ralph Dlugi, and Michael Zelger
Bjerknes, V., 1904. Das Problem der Wettervorhersage, betrachtet vom Standpunkte der Mechanik und der Physik. Meteorologische Z., 1-7 (in German; an English translation can be found at http://www.history.noaa.gov/stories_tales/bjerknes.html).
Gerlich, G., and Tscheuschner, R.D., 2009. Falsification of the atmospheric CO2 greenhouse effects within the frame of physics. International Journal of Modern Physics B 23 (3), 275-364.
Bjerknes (1904) – a handy reference to cite on 2nd law violations.
Arthur Smith: Tamino’s bull dog???????
Wow. See, if the sensible people of the world could understand the work and effort that goes into proving what’s wrong is wrong to those who are unwilling to budge, perhaps they would realize that perception is reality. Not reality being reality. And the reality is, none of what Tamino hysterically claims will doom the planet…will ultimately come true. Only in his open mind will it. Or closed mind in this case.
Maybe I should forget the heat capacity of the water layer and just set its temperature as a constant. Then any heat deposited in the surface layer by sunlight is transferred to the air layer by sensible and latent heat transfer and the converse when the sun is down. Then assume that the well mixed planetary atmospheric boundary layer can be up to 2 km thick at maximum surface air temperature so it can accommodate as much water as possible. Then specify a relative humidity for maximum temperature, or at least some temperature. I’ll use Tropical Atmosphere conditions from MODTRAN as a first guess. An additional constraint, of course, is that relative humidity cannot exceed 100%. Then maybe I can get a handle on how much water is moving back and forth during a 24 hour period and the effective heat capacity. In the real world, those conditions would likely result in a highly unstable atmosphere with local thunderstorms a near certainty, but that’s way too complicated.
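For what it is worth, here is a back-of-the-envelope sketch of the sort of estimate DeWitt describes; the Magnus-type saturation formula and all the constants are my own assumptions, not his:
# Rough sketch (assumed constants): water vapour mass in a well-mixed 2 km
# boundary layer at a given relative humidity, and the latent heat implied
# by cycling part of that water over a day.
e_sat <- function(TC) 610.94 * exp(17.625*TC/(TC + 243.04)) # saturation vapour pressure, Pa
vap_density <- function(TC, RH) RH * e_sat(TC) / (461.5 * (TC + 273.15)) # kg/m^3
col_water <- function(TC, RH, depth=2000) vap_density(TC, RH) * depth # kg/m^2
dm <- col_water(30, 0.8) - col_water(24, 0.8) # water cycled between a 30 C day and 24 C night
c(water_kg_m2 = dm, latent_MJ_m2 = dm * 2.45) # L_v ~ 2.45 MJ/kg
For these made-up numbers the diurnal cycling is roughly 14 kg/m^2 of water, or about 34 MJ/m^2 as latent heat.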
Arthur Smith
Look forward to your rebuttal to Kramm & Zelger to see if my superficial scanning of the abstract & conclusion has any validity.
You can always show that a toy model isn’t a perfect representation of the real world. That the map is not the territory is a given. So what. Using a superconductive sphere is always going to be problematical when compared to a body with essentially zero thermal conductivity like the moon. But it’s also fairly easy to construct a model with zero thermal conductivity tangent to the surface and low heat capacity that ends up looking a whole lot like the moon. It also doesn’t take all that much thermal conductivity to make the temperature distribution close enough to a superconductor that the average temperature is only slightly different than a superconductor model. Emission to space from the Earth’s atmosphere as function of latitude is much flatter than the absorbed radiative flux from the sun as a function of latitude.
The graph of the lunar brightness temperature is laughable as well. Radiant energy from the visible face of the moon is affected by emission from the Earth as well as sunlight reflected from the earth. It’s not going to get anywhere near as cold as the far side of the moon at full moon.
Lucia:
This is exactly what I was referring to with respect to “authors intent”. It’s very obvious to anybody who can read that Tamino meant the model to be interpreted physically, since he does so himself.
Arthur Smith is simply in danger of making himself look like a silly guinea for continuing to push this point, and for basically attacking you because Tamino doesn’t know how to behave like an adult.
Lucia and I’ve had our back-and-forths in the past, and for myself, I can admit I enjoyed it. There’s nothing wrong at all with somebody asking critical questions, especially if they are in a different direction than the original author intended.
To try to excuse Tamino’s explosive and unwarranted response with “well, you have to know how he is” sorts of excuses puts the onus on the wrong person for how they behave.
DeWitt Payne (Comment#18449), why constrain relative humidity? Let it range from -200% to +200% as part of your sensitivity analysis. Don’t worry about physical impossibilities. Brilliant!
Lucia, you ask:
No, we haven’t shown the choice of time constants “don’t matter”. I have personally asserted it, and my reading of Tamino’s post was that it was implied. Perhaps I’m wrong – prove me wrong – do the fits.
As to what it is that the question is all about, I’ve repeated that several times now, as Tamino does in his original post, but to be very specific, the question at hand is one of the central ones of the IPCC reports: what is the sensitivity of Earth’s global mean surface temperature to a change in radiative forcing?
After every one of the fitted graphs in Tamino’s original post, he addresses the question of what is the sensitivity that this fit implies. An “instantaneous” model (zero time-constant) implies too low a sensitivity when including all forcings (and yes, there’s some uncertainty in forcings – that’s a completely separate issue that certainly is relevant to the level of certainty in the final sensitivity value). Your or Schwartz’s single-time-constant models I believe show slightly higher sensitivities than the “instantaneous” model, but they just have 1 time-constant, not two, and are still on the low side compared with IPCC – and how well do they actually fit the past century’s temperatures? And then the one example he’s done of a two time-constant model shows a higher sensitivity that matches IPCC and climate models rather well – and it also fits the data very well, especially with ENSO variability included.
So sensitivity is clearly the central question in Tamino’s post, and my comments here.
Do Lucia’s criticisms affect the sensitivity estimate one gets from Tamino’s approach? Shortening the short time constant from 1 year to 1 month, or even zero, seems unlikely to change anything. We’ll see if somebody actually runs with it.
But maybe Lucia’s right, there wouldn’t be much point. Because, when you get down to fiddling with the time constants, you’re starting to try to physically model the planet – and that’s what climate models are for. Might as well just return to them.
Now where is that low-sensitivity “lukewarmer” physical climate model anyway…?
Arthur Smith (Comment#18454) August 24th, 2009 at 3:42 pm
Tamino wrote in his first response to the post Not Computer Models:
“For the slow response I assumed 30 years (didn’t try other possibilities) because the GISS modelE has an equilibration time scale of about 30 years.”
Could this suggest any possibility of circular reasoning?
jorge,
chihuahua comes to mind, actually.
http://cdn-www.dailypuppy.com/media/dogs/anonymous/chihuahua.jpg_w450.jpg
Lucia – by the way, according to this post of yours and the Feb 2008 update at the end, “lumpy” gives a sensitivity (which was the primary concern of your post anyway) of 2 degrees C per doubling of CO2, or around 0.5 C per W/m^2. At the low end of the IPCC range, but within what I just claimed you would get from any reasonable-fit two-time-constant model.
And, as we discussed here Scafetta worked out a model with two time constants that ended up with a sensitivity of 1.7 to 2.7 C/doubling of CO2, or 0.43 to 0.68 in these units. Schwartz also seems to have ended up very close to 2 C/doubling with his one-parameter model, after revision.
So I think my assertion is on pretty solid ground given these precedents, even if they aren’t precisely doing what Tamino’s two-box fit did. I.e. fiddling with the time constants in the two-box model *doesn’t matter* substantially to climate sensitivity, as long as you get a good fit to temperatures.
There is experimental proof of the existence of the greenhouse effect that does not require sophisticated mathematics. Go anywhere on the planet and aim an infra-red spectrophotometer that covers the wavelength range from 4 to at least 25 micrometers at the cloud free night sky. You will observe radiation and the spectrum will show features of water vapor and CO2. If you use a cryogenically cooled bolometer to measure the total incident radiation, the flux density will be in the hundreds of watts/m2 range. Lower at high latitudes in the winter and higher in the tropics. If you still think that doesn’t make the surface temperature at any given point higher than it would be if the brightness temperature of the sky were 2.7 K with an incident flux density of 3 microwatts/m2 then you truly deserve the epithet “denier”.
Arthur–
First– a) Your sentences don’t say what it doesn’t matter to. b) No one is required to disprove assertions that no one has even attempted to demonstrate. I can simply point out that neither of you has even attempted to demonstrate your assertion.
I could assert that Leprechauns cause global warming, stamp my little foot and insist you disprove it. Your failure to do so would not provide magical evidence that my assertion is true. So, stop your nonsensical attempts to pretend that my not beavering away at disproving your vague assertion somehow proves it. (Whatever the assertion actually is.)
Tamino has not shown that his method can be used to obtain any sensitivities at all.
Arthur–
First: I don’t know why you think “any model” will give you what you want just because Lumpy comes in on the very low end. This is particularly true if we were to start overlaying uncertainty in the forcings. If the forcings applying to the earth are lower than those applied to Lumpy, her sensitivity is going to go down; if higher, it’s higher. So, based on that analysis, which gives a sensitivity on the very low end of the IPCC range, I don’t have “confidence” that all possible models with all possible time constants will do the same.
I don’t have confidence that all models can even give the right sensitivity at all– so why would I have confidence they will reproduce the IPCC range? If you have such confidence in all models– including unphysical ones– I marvel at your confidence.
On the last bit: At least you are beginning to firm up “doesn’t matter”, which now seems to translate into “varying the two time constants over an enormous range (including unphysical ones) won’t result in a spread of sensitivities that is larger than the spread in the IPCC reports.”
Maybe it will; maybe it won’t. This has not been shown. Many things have not been shown in that utterly vague post.
Arthur Smith (Comment#18454)
I agree that the point of Tamino’s original post was reasonably clear (at least to me) he wanted to show that the high sensitivity produced by climate models could be produced by a simpler two-box model.
That is the only thing I agree with. Tamino’s post most clearly does assume:
1) That the IPCC forcings are correct, and
2) That the ocean lag constant is very long
So everything he concludes is based on exactly the same physical fiddling with the constants that you seem to suggest should not be done.
“But maybe Lucia’s right, there wouldn’t be much point. Because, when you get down to fiddling with the time constants, you’re starting to try to physically model the planet – and that’s what climate models are for. Might as well just return to them.”
I will say again: If you assume the IPCC forcings are correct and you assume a long ocean lag constant (which both the coupled GCMs and Tamino’s model assume), then it is obvious that the calculated sensitivity will be high. The real issue is the accuracy of these assumed parameters, which I honestly believe to be very far from reality.
You appear to be suggesting (and I have no idea why) that only fiddling with the parameters in such a way that they yield high sensitivity (as Tamino did) is allowed. It is straightforward to plug in a short ocean lag and very low aerosol “cancellation” of radiative forcing and calculate a sensitivity far lower than the GCMs’. I am sure Tamino could do this in a few minutes, but I am equally sure he will not.
Finally, I ask you again: do you believe that Tamino’s behavior in his exchange with Lucia was reasonable and constructive? Do you believe that insults, abuse, and profanity are suitable behavior in a blog exchange, or anywhere else? You could gain yourself a great deal of credibility if you could condemn some absolutely horrid behavior.
Arthur: By the way, I think the difference between a climate sensitivity for doubling of CO2 of 2.6 C or 4 C “matters”, and in a very real sense. So, if you are saying something “doesn’t matter”, you should define what you think “matters” instead of leaving it all loosey-goosey.
SteveF (Comment#18461) August 24th, 2009 at 4:25 pm
Arthur Smith (Comment#18454)
Yes, one can add a second box and assert that some unobserved portion of the heating went there, “into the pipeline.” This is interesting, since one can immediately hypothesize much higher sensitivities to match one’s tastes.
Of course what this model immediately suggests is we should be looking at that second box carefully. For example, if the second box has a relatively long characteristic time scale, then what sort of natural variation in the first box would fit with a few years’ leveling-off in the second box?
Arthur, you have said so many incorrect things… can you perhaps make an itemized list of the claims you’ve made so they are easier for me to look at one at a time and rebut? I mean really, saying so many wrong things so often in so many different posts, all before some of us have a chance to catch you in error, is really unfair. I don’t even know where to start.
Arthur, I am disappointed in your consistency here. It may get lots of cheers from dhogaza and friends, but you know what, debate is a plenary endeavor, not an exercise in cherry-picking. You had many chances to respond to many posts. Answer the tough questions too, young Padawan.
Maybe you’ll learn something from being schooled by Lucia et al, but I don’t see much sign of it in this thread yet.
oliver (Comment#18463),
“For example, if the second box has a relatively long characteristic time scale, then what sort of natural variation in the first box would fit with a few years’ leveling-off in the second box?”
Good question. Answering would take a careful analysis of Argo temperature profile data over some extended period. My guess is that very long lags could only be correct if there were a clear net migration of heat from shallower waters to deeper waters, even while the whole heat content in the top 750 meters remained constant; in other words, the average temperature profile should become less steep over some range of depths, with water near the surface cooling while deeper water warmed. I do not know if the Argo data is of sufficient quality to demonstrate this, but it would sure make an interesting project for someone like Josh Willis.
SteveF (Comment#18466)-That would also be contrary to the assumed time constants, since the idea is that the deep ocean takes quite a while to heat up. If the heat is sinking instead of accumulating in the upper ocean, something is very wrong with somebody’s assumptions.
My, my, there’s a lot of concern about people’s feelings getting hurt here. For the record, I think Tamino was tactically wrong in banning Lucia – a better choice would have been to let her continue to post but not worry about personal responses to everything, if that was what was ticking him off. But I can also entirely sympathize with his level of anger at the misrepresentations – whenever anybody accuses me of not understanding the second law of thermodynamics or basic calculus I get ticked off too. I try to remain somewhat polite though, and certainly wouldn’t have gone as far as what T. said in this case. Tut-tut on the bad language.
Nevertheless, the insults traded don’t make Lucia right. Once again she’s descending into semantics rather than responding to my points on the scientific issues. I think that’s pretty obvious reading this thread.
It’s unfortunate I don’t see much of a cheering section for Tamino’s side here any more – Lucia’s blog seems to be descending into Watts/McIntyre/Pielke territory rather than being the relatively neutral ground it was for quite some time. Sad to see.
Andrew_FL (Comment#18467),
Well, the temperature would of course continue to decline from near the surface to the deep ocean, but there could be a change in the slope of the curve through the first 750 meters, such that the total heat content of a certain range (say 200 to 250 meters) increased significantly, even while the net heat of the entire 750 meters was basically flat. This would indicate that heat was distributed (on average) a little deeper in the ocean, even though the total heat was relatively flat. If this were to happen, then a future increase in thermal input to the surface waters would (in theory) not lead to as much loss of heat to the deeper layers, and so warm the surface more for the same net heat input. I guess all that I am saying is that any very long lag constant would require the profile of temperature vs depth to change significantly during a period of relatively constant total heat.
I do not understand the maths involved, so cannot comment one way or the other.
I’m assuming that Tamino’s sensitivity estimate of 0.69C/watt/m2 points to an exponential sensitivity for GHGs.
For example, the initial 150 watts of greenhouse effect produces 33C of warming or an average of 0.22C/watt/m2.
The next 2 watts of anthropogenic warming/GHGs, however, produces 1.4C within 30 years or 0.69C/watt/m2.
So it also runs against the Stefan-Boltzmann relation, under which temperature rises only as the fourth root of total forcing, so the response per watt shrinks as forcing increases rather than growing.
Let’s take the Sun, for example, which produced about 63 million watts/m2 at its surface. With Tamino’s sensitivity, the surface of the Sun would be about 43.5 million degrees, or only about 170 times hotter than the hottest star surface found.
Either the 0.69C (0.75C) is wrong or the 2 watt estimate is wrong. (probably both).
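To pin down the Stefan-Boltzmann arithmetic behind this point: for F = sigma*T^4 the marginal response is dT/dF = 1/(4*sigma*T^3) = T/(4F), which shrinks rapidly as temperature rises. A minimal check, with my own illustrative temperatures:
# Marginal Stefan-Boltzmann response, K per W/m^2, at absolute temperature TK
sb_response <- function(TK) 1 / (4 * 5.67e-8 * TK^3)
sb_response(288)  # earth-like surface: about 0.18
sb_response(5778) # solar photosphere: about 2.3e-5
This is why a single linear sensitivity cannot sensibly be extrapolated from the 150 W/m^2 greenhouse effect all the way to the solar surface.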
Arthur, You seem immune to new information.
AGW believers are completely uninhibited in their insults, attacks and threats on skeptics.
That you seem unable to hold the one who actually was incorrect, insulting, and petty at least a wee bit accountable only reveals things about you. And they are not positive revelations.
Arthur Smith (Comment#18468)
“It’s unfortunate I don’t see much of a cheering section for Tamino’s side here any more – Lucia’s blog seems to be descending into Watts/McIntyre/Pielke territory rather than being the relatively neutral ground it was for quite some time. Sad to see.”
Not quite sure what to make of this comment. I am pleased that you seem to agree Tamino’s behaviour was not the most productive option.
What happened here? Tamino posted a pretty obvious result from a simple analysis and suggested that it represented some kind of insight about climate sensitivity, made a silly math error along the way, did not address the key issues of the parameter values and how they influence the outcome of his model, became furious and abusive when Lucia pointed out the math error, and finally proceeded to cover it up. What is there to cheer for?
If Tamino wants some cheers, have him post on the real issues of ocean lag periods and assumed climate forcings, and how those assumed parameters change the calculated sensitivity. A simple admission that everything that comes out (of GCMs or his two-box model) depends on what parameters are fed in would be a good start. A courageous post by Tamino would be an honest examination of how these assumed parameters control the perceived climate sensitivity, with tests of his two-box model over a broad range of parameters showing which parameter values lead to low and to high calculated climate sensitivity. I would go over to his blog and cheer that.
Arthur Smith (Comment#18468) said “Lucia’s blog seems to be descending into … territory rather than being the relatively neutral ground it was for quite some time.”
Arthur, not only is this an attack, you seem to be avoiding Lucia’s questions and comments. As someone who read both threads, here and at Tamino’s, it is apparent where the problem is. You have a double standard. Lucia asked a question. Tamino’s remarks do not support your hypothesis that he was not trying to relate his fit to a physically true (within limits) model. You have insulted our hostess with this negative comparison (in at least your eyes). You have yet to competently support your hypothesis of what does or does not matter, and now you take the hostess to task with an implied ad hom. At this point you have simply devolved into a troll, IMO. You err again. Lucia did not accuse Tamino of not understanding 2LOT; she asked a question. And look how many erroneous statements and questions you have made, while getting back no more than what appear to be exasperated attempts to pin down a poorly worded question. Tamino did not even do that. You have admitted in this post the problems that Tamino brought to the discussion with “I try to remain somewhat polite though, and certainly wouldn’t have gone as far as what T. said in this case. Tut-tut on the bad language. Nevertheless, the insults traded don’t make Lucia right.”
You remind me of a story my wife, a school teacher, tells. One day she is watching “Bad” Tammy and Good Lucia. “Bad” Tammy hauls off and punches Lucia. Good Lucia punches back and does a better job. “Bad” Tammy goes running up to the teacher, crying “Did you see that! Good Lucia just punched me!” Upon which the teacher replies, “But Tammy, I saw you punch Lucia for no reason!” To which “Bad” Tammy says, “BUT, she hit me back, FIRST!”
You appear to agree with “Bad” Tammy’s reasoning and cannot see past the fact that Lucia asked questions, not threw verbal punches. But “Bad” Tammy did.
I seem to be a long way down a very long thread but perhaps someone will read this.
I think there is a very good two box model that has timescales at every scale. That is, if you perturb it from 0 to T at time 0, it relaxes with a time constant (as defined by T/(dT/dt)) that increases with increasing t.
All you need to do is introduce a diffusive ocean.
I do not know why this is so generally overlooked. It does introduce one new variable (ocean diffusivity), but this does not introduce a new degree of freedom if you add the constraint that OHC must be accounted for.
It may be because the OHC constraint gives quite a low value for ocean diffusivity (but not one that I think oceanographers would be too unhappy with). That low diffusivity means a thermally light ocean, which in turn means low values for both climate sensitivity and “warming in the pipeline”.
Alternatively it might be because it is believed that having a second box that is diffusive makes things all too difficult. Well, it shouldn’t.
Given an SST time series, it is quite trivial to calculate the heat uptake by the ocean in each time step for a single-box diffusive ocean heated from its surface. You can then use climate sensitivity to calculate the outgoing flux from the oceans.
The second box is the land, and from an LST time series you can get the outgoing flux from the same climate sensitivity.
The diffusive box will supply you with more time constants than you can shake a stick at, and you can justify the addition of the box by way of explaining OHC as well as T.
While I am adding boxes, there is an obvious third one, the well-mixed layer. But does the WML have a corresponding constraint? Yes it has: the seasonal phase lag over the ocean.
Flux into the diffusive layer tends to produce a 45 degree lag all by itself, which does not leave much for the WML to add if you are to meet the constraint of the seasonal phase lag. It turns out that from a thermal perspective a three-box model needs a WML thinner than the recognised value. But there are some problems with that recognised value, in that it is defined not by known mixing but by the layer that is within some deltaT of the surface temperature. Such a layer would exist and thicken during winter even if there were no mixing.
You can introduce a fourth box, the atmosphere, and a further constraint, that of the diurnal phase lag, principally that over land.
Enough said.
I am always disappointed that some pretty obvious constraints get overlooked and that people seldom come to grips with having a diffusive box.
If anyone would like to do diffusive box modelling please let me know. Providing that you stick to the possible, the mathematics are not complicated. And you get to add OHC to your repertoire of explained time series.
Finally, using the diffusive box you can throw the whole argument from a temperature one to a flux one. This is a noisy environment, but one in which you have accounted for the oceanic time constants and are free just to look at the forcings that might explain that flux.
I do hope someone interested reads this.
It is neat, and it’s fun.
Alexander
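For anyone tempted to take Alexander up on this, below is a minimal explicit finite-difference sketch of a single diffusive box heated at its surface; every parameter value is an illustrative assumption, not a claim about the real ocean:
# Sketch: diffusive ocean column, anomalous flux applied to the top cell,
# insulated bottom. The effective time constant T/(dT/dt) of the surface
# response grows with time, which is Alexander's point.
diffusive_box <- function(flux, kappa=1e-4, depth=1000, nz=50,
                          dt=7*86400, rho=1025, cp=3990) {
  dz <- depth/nz
  stopifnot(kappa*dt/dz^2 < 0.5)  # stability limit of the explicit scheme
  Tz <- numeric(nz)               # temperature anomaly profile, K
  Ts <- numeric(length(flux))     # surface anomaly history
  for (i in seq_along(flux)) {
    up <- c(Tz[1], Tz[-nz]); dn <- c(Tz[-1], Tz[nz])  # zero-flux ghost cells
    Tz <- Tz + kappa*dt/dz^2 * (up - 2*Tz + dn)
    Tz[1] <- Tz[1] + flux[i]*dt/(rho*cp*dz)           # surface forcing, W/m^2
    Ts[i] <- Tz[1]
  }
  Ts
}
Ts <- diffusive_box(rep(1, 52*100))  # 1 W/m^2 step held for roughly 100 years
Plotting Ts against time shows a fast early rise followed by an ever-slowing approach, i.e. a continuum of timescales rather than one or two.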
Arthur Smith:
Need a shovel for that bullshit?
Seriously man, do you really think we are that gullible or stupid?
Your colleague Tamino went off the deep end, and you appear to have some terrible need to defend him after he reacted irrationally to reasonable questions (how many times this year does that make? I’ve lost count).
That’s not our problem. Nor is your silly need to try and explain to scientists and engineers how science is supposed to be practiced.
Fix thy self then get back to us on that.
Arthur Smith:
The “only” reason we aren’t cheering old Tamino on here is he’s for the nth time behaved like a total ass, he’s wrong substantively in his paper and his follow up, and he’s been intellectually dishonest about both why he reacted so strongly to Lucia as well as in hiding his simple math errors.
You really are a dishonest hack and you should be ashamed of yourself.
Alexander Harvey (Comment#18475),
Quite a lot of information there. I will need to think about it a bit.
One comment: Argo float profile data shows very clearly that the WML is quite deep and stable over much of the tropics. It is thin in areas of cold water upwelling, but reaches more than 150 meters where there is no upwelling or strong current; this is a fair fraction of 25S to 25N. (In reality, the WML in the tropics looks like the depth of ocean to which a significant amount of sunlight can penetrate.) Any realistic ocean model would have to account for a substantial WML.
“rather than being the relatively neutral ground it was for quite some time”
I suppose Lucia could start banning people to truly show her relative neutrality?
I’m sure someone could come up with a reconstruction of The Blackboard’s Neutrality History, using old Russian keyboards as a proxy and teleconnect or something like that.
Andrew
Arthur
1) At Tamino’s I did not accuse him of not understanding the 2nd law.
2) I am not arguing semantics. You are however, trying to resort to vagueness by suggesting that Tamino’s analysis should somehow be seen as ok because something “doesn’t matter” to “some named thing” for “some unnamed reason.”
You can firm up your claims or not. My observing that you are not doing so is not “semantics”.
As for “Tamino’s side”, what the heck is that? Are we supposed to take sides? Are we supposed to decide what analyses are right based on “sides”?
“rather than being the relatively neutral ground it was for quite some time”
Well at least you aren’t getting censored or banned, right? Why is it that the sites you so detest – Pielke/McIntyre/Watts/and now Lucia – just so happen to be sites that allow for free discussion with minimal-to-no snipping, editing, or banning?
It seems you’d prefer a totalitarian website – just so long as the moderator ruled as you liked. That way, you can all pat someone on the back for a “brilliant” post that contains simple and fundamental errors while accomplishing basically nothing, and you can block-out any criticism that might make other readers aware of the problem.
It’s that same sort of closed-mindedness that keeps folks like Michael “I am not a statistician” Mann from consulting with statisticians so that he doesn’t make statistical errors.
JeanS–
I think Tamino tends to ban people who present arguments that sound right. He may not know they are right, but he suspects they very well may be. If the point being supported cuts against a claim Tamino is making, he just bans them. From Tamino’s POV, the problem is solved.
Arthur,
” I try to remain somewhat polite though…”
You have a strange definition of “somewhat”. Do yourself a favor, take a little break and then read what you have written here. I am sure you will be embarrassed by your boorish behavior, gratuitous insults and general snark, largely aimed at this site and the host. You, unfortunately, seem to be descending into Tamino territory.
Arthur,
Proof by assertion may work fine and dandy in Tamino’s world, but it has little convincing power here. What I actually believe is irrelevant, your math and model either hold together or they don’t. Fortunately for you, I need to have no beliefs about either your calculus ability or your understanding of the 2nd law. You can demonstrate your proficiency directly and I need make no guesses about your capabilities.
Show me the math, buddy. Choose some parameters, do the spot checks to demonstrate that you are consistent with the known properties of the physical universe. Then make some effort to really demonstrate exactly what variances “don’t matter”. Really, I would like to see you do here exactly the same thing that Lucia tried to do over there and got herself banned for.
You can even be as rude as you like as long as you’ve got the goods. If you prefer to just spout spin and verbiage and dodge the math, maybe your posts would be better over at Tamino’s where accuracy and a clearly defined problem are less valued.
brid–
I think the difficulty is that Arthur is focused on the details and not the big picture. The big picture is that if the parameters in Tamino’s curve fit are to be interpreted as having physical meaning, they must map back into parameters for a model that has some physical meaning. That model may be simple– but it should not be obviously unphysical.
Neither Tamino nor Arthur has shown any symptom of trying to go back and see what group of two-box model parameters corresponds to any time constant or solution Tamino obtained. If they took this step, I’m pretty sure they would discover other weirdnesses besides the problem with the 2nd law.
I used the 2nd law as my example because, given Tamino’s reluctance to discuss the magnitude of any parameters other than his two time constants, the 2nd law made it easy to show that the pair of time constants he used could not constitute a realistic two-box model corresponding to the earth with the top box being the atmosphere. They just couldn’t.
If Tamino (or Arthur) provided additional information it would be possible to apply a bunch of other tests to their solution. But that information is concealed, and Arthur seems to play semantic games by making gushy unquantitative statements about whether things “matter”. Then, he wants to accuse me of playing semantic games when I point out that the sentences he utters have nearly no meaning.
Alexander Harvey – yes the diffusive behavior of real surfaces would be why a single time-constant is in reality physically meaningless, and at best a modeling exercise. I did a bit of looking at the issue when I was working on my Proof of the Atmospheric Greenhouse Effect paper – the thermal inertia coefficient there comes from the diffusion distance during one rotation period and the associated heat capacities, but of course when you’re talking about rotation the time period at issue is fixed.
So yes, I would definitely be interested in looking at modeling with a diffusive box – send a pointer, I’d love to read up some more on it.
Yes Alex–
Feel free to provide a link/ explanation etc.
Lucia,
Thanks, that makes sense to me.
I also think you are exactly right when you speculate why Tamino bans people. It is quite ironic for Arthur to say Tamino was “tactically wrong” to ban you. In fact, it appears to be Tamino’s core tactic to ban people who can challenge him. This thread alone indicates multiple PhD types who were banned after besting Tamino. It is his modus operandi; his way of maintaining his credibility.
The pleasing part is that this latest Tamino action appears to be backfiring completely. This thread is already approaching 200 comments and growing. It has provided a forum for multiple people to narrate their less savory experiences with Tamino. This thread may well stand as the definitive commentary on Tamino and his methods.
It’s unfortunate I don’t see much of a cheering section for Tamino’s side here any more – Lucia’s blog seems to be descending into Watts/McIntyre/Pielke territory rather than being the relatively neutral ground it was for quite some time. Sad to see.
Sad to see what? That you’re being called on your BS, Arthur? The question has been posed already, but I feel the need to ask again: Given that we’re dealing with math here, not “sides,” and the math is either correct, or incorrect – why should “sides” matter?
Argue the math, not the bloke or bird doing it. Please. In that vein, bonus points for you posting on Tamino’s blog that you’re “disappointed with his attitude, because science is about cooperation” regarding his behaving like an ass (again) when someone asks a valid question.
Last, you may want to abandon the argument from authority, since you’ve continually proven that you have none. It just makes you look worse every time you try. Just trying to help.
Lucia:
Let me quantify further, then. Sometimes what is obvious to one person may not be to another.
Define the variable ‘s’ as the best estimate of the medium-term (century-scale) sensitivity of Earth’s global mean surface temperature to radiative forcings. There are many ways we can try to find ‘s’ – the behavior of all the feedbacks in sophisticated physics-based climate models is one way, estimates of paleoclimate parameters another, and comparison of the modern temperature record with believed forcings would be a third. Tamino’s modeling exercise was in regards to that third method for estimating sensitivity.
Tamino’s result for instantaneous response indicated that ‘s’ was roughly 0.25 C per W/m^2, but with a poor fit to the temperature data. His result for a two-box model with the particular pair of time constants he chose was 0.73 C per W/m^2, or 0.69 when ENSO variability was also included.
“Lumpy”, with a single time-constant, finds ‘s’ to be about 0.5 C per W/m^2. Schwartz’s corrected result with a single time-constant (the 8.5 year number) is also very close to 0.5. Scafetta’s two-time-constant model produced a range of 0.43 to 0.68.
My quantitative assertion, then, is simply this: no matter what pair of “short” and “long” time constants you choose in a two-box model (in the sense of Tamino’s post), as long as you get a fit to the modern temperature data with GISS forcings at least as good as Tamino’s “two-box” fit, you will (at least 95% of the time) find:
0.5 <= s <= 1.2 C per W/m^2
That is, sensitivities below (or above) the IPCC range are unlikely to be the result of such a phenomenological model. The precise choice of time constants will have some effect on the sensitivity, but not enough to move it out of the IPCC range.
Brid:
Which of course explains much of the rancor towards him. As far as I’m aware I haven’t been banned on his site, but then I haven’t seriously posted there in a while nor do I ever plan on either reading his blog or posting there again.
It’s pointless to try and bring up issues with somebody who ironically is as closed-minded as he or his sycophants such as Arthur Smith can be.
Arthur–
As I noted before, later on in the thread you finally did firm up your range of “not mattering”. However, I think the difference between 0.5 and 1.2 “matters”.
When I said so, you suddenly decided to decree that I am playing semantic tricks. It is not a semantic trick to say that the difference between 0.5 and 1.2 “matters”.
As for your assertions: I encourage you to do whatever you have to do to show your assertion about what we would find for the box model. Knock yourself out. Have at it. Create whatever simple or complicated model you like. Then show that sensitivity you obtained is based on a two-box model with parameters in the range that makes sense for a two-box model of the earth.
So far, it has not been shown that the pair of time constants Tamino used can be mapped back to parameters that would correspond to a two-box model for the earth. That has been my point.
Arthur Smith (Comment#18424) August 24th, 2009 at 11:45 am
Instead we got questions about what he was using as the atmosphere…
Arthur, I’ll throw you a few easy ones. What do you mean by “we got questions?” What questions did “you” get? Or, is there a committee you’re on for the immensely popular Open Mind blog? Did you and Tamino jointly create the post? Did you help answer the other questions or were consulted in the Banning Of Lucia From The Open Mind? Or by “we” are you saying this is just another part of the “sides” thing, and that you’re just a hanger-on, determined to defend your “side,” damn the facts, whatever they may be? But it’s all good because “we” need to educate the ignorati, since “we’re” right even when “we’re” wrong, and it doesn’t matter anyway wrt our conclusion.
Arthur,
Are you trying to defend the correct answer obtained with the wrong method? It doesn’t matter if a mathematical construct gives you the right result if it violates physical reality. End of story.
Can you correct this model such that it doesn’t violate any physical laws? Go for it. And show your work. Has Tamino done either of these things? No.
I don’t think anyone here really cares if such a model could be constructed or not. Will it be included in AR5? No. Will it change modeller’s assumptions? No. Will it increase our understanding of anything? No. What we have is the right result (according to some) with the wrong method, plain and simple. If the method can be corrected then prove it. Arm waving won’t cut it.
tammy has a crush on lucia.
http://www.youtube.com/watch?v=oYymDZtJvgs
sorry. Arthur lighten up
arthur,
what’s that growth on your belly?
http://www.youtube.com/watch?v=xO1kKemcwYk
Modelling Put To Good Use
When Zombies Attack!: Mathematical Modelling Of An Outbreak Of Zombie Infection
http://www.mathstat.uottawa.ca/~rsmith/Zombies.pdf
Yes, it’s even been peer-reviewed.
I said above in #18343 that I believed that what Tamino had actually done was much less ambitious than what is attributed, and criticised, here. If there is a misapprehension there, it is largely due to Tamino, who didn’t set out at all clearly what he has done in his post.
However, I do believe that he is just exponentially smoothing the forcing series in two ways, and fitting the best linear combination to surface temperature. My reason for thinking that is mainly this quote in response to Lucia:
Now the integrals in the linked post are just convolutions with exponentials, and when applied to a time series, that amounts to just exponential smoothing. So temperature is fitted with:
a1*S(F,30)+a2*S(F,1)
where F is the forcing and S(F,n) means F exponentially smoothed with a decay exp(-1/n).
Anyway, I tried that in R, using just the total GISS forcing, and annually averaged Gistemp, which seems to be what T used. I padded the years before 1880 with zeroes. Here is my emulation of the plot which Tamino introduces with the phrase “In fact the two-box model using all climate forcings does a good job fitting the observed data”:
It seems the same. I didn’t go on to the SOI fit, because I’m not sure how that was done.
So it seems that Tamino is just fitting the smoothed functions which might be expected from a two-box solution. As such, there is no 2LOT issue. However, it could be said that it is not really a full two-box model.
I also did a run with the shorter time constant = 0.1 years. The difference was imperceptible.
I think Tamino generally views anyone who mentions the second law in his presence as part of the angry mob that shouts at town hall physics meetings:
http://www.theonion.com/content/files/images/onion_news706.jpg
Alexander Harvey (Comment#18475) August 24th, 2009 at 7:01 pm
What sort of diffusivity are you envisioning, and how does the OHC constrain this?
Nick,
In comments Tamino said that he checked his model against the 2LOT and that it passed. Now you’re making excuses for why he shouldn’t have to check at all.
If this wasn’t a requirement of his model then he should have said this.
As I have said many times before on this and other blogs, Tamino, Stoat etc. are a godsend to the skeptics. Please do not ever ban them; the longer they are allowed to perform and express their views, the better. This particular exchange has been a godsend to the anti-AGW lobby.
Joel, I’m not making excuses for anyone. I’ve just figured out what Tamino has actually done, and reproduced it to check. I think that is a useful contribution to a speculative discussion.
But Tamino didn’t exactly say what you claim. In response to Lucia’s query he said “Yes, I checked”. Exactly what he checked is obscured by the fact that much of her question was edited out, but there was a section that said:
Have you checked? I know that it is possible for those curve fits to experimental data to map into something unphysical, that violate the 2nd law of thermo, or does other things that would not make any sense.
I suspect his response was made in a spirit of impatience rather than lucidity, but it is actually true that it doesn’t do those things. Ultimately, it’s just fitting exponential smoothing.
Nick, yes I think your replication is very useful in this discussion. I suppose the more important point is whether Tamino’s contribution is useful? With glorified curve fitting, infantile rage, and a simple math goof interspersed through sanctimonious drivel, you would have to call this an own goal.
Nick,
It is most certainly not obscure what Tamino says he checked. Let’s review the comments.
Lucia said: “Basically: have you done anything to check that your specific solution for the values of the two time constants maps into a space that does not violate the second law of thermodynamics?”
Tamino replied non responsively with an ad hominem attack.
Lucia responded: “Look. Have you checked?”
Tamino’s charming rebuttal: “Yes I checked. You didn’t. I guess doing the work to find out before shooting your mouth off would get in the way of your modus operandi: FUD. Barton is right, you just mentioned the 2nd law of thermodynamics because it’s such a popular way to confuse the ignorati.”
Nick, it is quite disingenuous of you to claim a lack of clarity for something that is crystal clear. Notwithstanding your protests to the contrary, you are making excuses for Tamino.
Note also that Tamino said “So it’s a *real* two-box model” which directly contradicts your statement above.
brid, I dealt with that issue. I don’t think it was a good answer by Tamino. But the fact is that what he seems to have done simply can’t violate the 2LOT, because it is just a linear fit of two smoothed versions of the forcing functions. How could it? There’s really nothing to check, which would have been a better answer.
Nick,
If you say you dealt with the issue, why did you claim “Exactly what he checked is obscured”? Do you now accept there is no obscurity on this point?
Do you also think Tamino was wrong when he said: “So it’s a *real* two-box model” (which seems to be a major point of confusion)?
Nick (18499) – thanks. Though your fit is precisely what it looked like Tamino had done anyway (did I ever call it ambitious? I think I said “elegant” – it’s a simple technique, and yet shows what’s going on pretty powerfully). Could you post your R script and let people fiddle with the time constants themselves, and see if my assertion holds or not?
Typically when I run these analyses, nobody reads them (witness the tropical hot spot discussion from months back) – so I think it’s very important that those here who are asserting that Tamino made some egregious error back it up with real evidence, rather than the mountains being made out of sand grains here.
On the question of physicality of Tamino’s choices for time constants – given the diffusive nature of the real system, as Alex Harvey mentioned, *any* choice of time constants is reasonable, it just corresponds to different ways of splitting the system into pieces. Maybe Tamino’s choice corresponds to “atmosphere + land” vs. “ocean”, rather than just “atmosphere” vs. “ocean” as he asserted. Or “atmosphere + a few meters of ocean” vs “deeper ocean” as others have suggested.
All Lucia has shown here is that the time constant for the atmosphere alone must be very short, shorter than Tamino’s 1-year “very quick”, “prompt” choice. As David Benson also pointed out in comments at T’s blog. That doesn’t make 1 year “unphysical” for some other arrangement of the components.
This is simply because the two-box model is not a physical model, and Tamino never presented it as such, despite his notes that you could think of it as atmosphere vs ocean – it’s a phenomenological model that can be used to very roughly capture the actual many-time-constant (and oscillatory and other response) behavior of the real system.
Yes, Tamino made a mistake in asserting that 1 year was an appropriate time constant for the atmosphere. It’s also clear that that choice doesn’t matter in the precise sense I described above.
brid, I was covering the possibility that there is something that was asked that is more specific that we can’t see (what “obscured” means). I then went on to respond on the assumption that T’s terse response was to the question of Lucia’s that was quoted. And as I say, the more correct answer would be that there is nothing to check.
Or is there? Assuming he did what I did to reproduce the result, what specific thing do you think he should have checked and failed to do so?
Arthur, yes, here’s the R script. The gissforc.txt file is just as downloaded from Tamino’s reference. The gistemp was edited by removing the headings that are inserted every 20 years.
#define exponential smoothing function
expsmooth<-function(u,v,d){ # u is the smoothed v with decay exp(-1/d)
n=1:length(ss)
e=(n-1)/d
e=exp(-rev(e))
e=e/sum(e)
u=convolve(v,e,type="open")[n]
}
#read GISS forcing data
s <- matrix(scan("gissforc.txt", 0, skip=4), ncol=11, byrow=TRUE)
vv=rowSums(s[,2:11])
#read GISS temp data
v <- matrix(scan("gistemp.txt", 0, skip=9, nlines=124), ncol=20, byrow=TRUE)
ss=v[,14]/100.
expsmooth(w1,vv,30.)
expsmooth(w2,vv,0.1)
# fit regression
h<-lm(ss ~ w1 + w2)
g=h$fitted.values
#plotting
t=1880:2003
k=21:124
jpeg(width=500, height=500)
plot(t[k],g[k],type="l",xlab="Year",ylab="Gistemp anomaly",axes=FALSE,asp=50.0)
lines(t[k],ss[k],col="red")
q=0:5*0.2-0.4
axis(2,q)
q=0:10*10+1900
axis(1,q)
dev.off()
*** edited to change arg of expsmooth from ss to vv
Come on Nick, it is perfectly clear that there was nothing obscured. Tamino’s response which directly references the second law is the final nail on this. You were quick to call out Joel as being wrong (“But Tamino didn’t exactly say what you claim”) when it is clear that Joel was completely correct on this point.
As regards your question, “Assuming he did what I did to reproduce the result, what specific thing do you think he should have checked and failed to do so?”, I think it is best directed towards Tamino. If you are correct, he is seriously confused. How do you reconcile your analysis with his “I guess doing the work to find out before shooting your mouth off would get in the way of your modus operandi”. What work is he referring to?
Arthur, apology – that script relied on something I had in the workspace. I think this is stand-alone:
#define exponential smoothing function
expsmooth<-function(v,d){ # returns v exponentially smoothed with decay exp(-1/d)
n=1:length(v) # (was length(ss); length(v) keeps the function self-contained)
e=(n-1)/d
e=exp(-rev(e)) # one-sided exponential kernel
e=e/sum(e) # normalised to unit weight
u=convolve(v,e,type="open")[n]
u
}
#read GISS forcing data
s <- matrix(scan("gissforc.txt", 0, skip=4), ncol=11, byrow=TRUE)
vv=rowSums(s[,2:11])
#read GISS temp data
v <- matrix(scan("gistemp.txt", 0, skip=9, nlines=124), ncol=20, byrow=TRUE)
ss=v[,14]/100.
w1=expsmooth(vv,30.)
w2=expsmooth(vv,0.1)
# fit regression
h<-lm(ss ~ w1 + w2)
g=h$fitted.values
#plotting
t=1880:2003
k=21:124
#jpeg(width=500, height=500)
plot(t[k],g[k],type="l",xlab="Year",ylab="Gistemp anomaly",axes=FALSE,asp=50.0)
lines(t[k],ss[k],col="red")
q=0:5*0.2-0.4
axis(2,q)
q=0:10*10+1900
axis(1,q)
#dev.off()
and brid, I've already said that I don't think Tamino gave a good answer. But there's nothing substantive there to dig into. What he did can't violate the 2LOT. I think Lucia agreed with that above.
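Nick’s stand-alone script above makes the disputed experiment cheap to run. A sketch of the sweep, assuming that script has just been executed (so expsmooth, vv and ss exist):
# Vary the slow time constant and watch the fitted sensitivity (a1 + a2)
# and the residual sd move together.
for (d in c(5, 10, 20, 30, 50, 100)) {
  w1 <- expsmooth(vv, d)
  w2 <- expsmooth(vv, 0.1)
  h  <- lm(ss ~ w1 + w2)
  cat(sprintf("tau = %5.1f  sensitivity = %.3f  resid sd = %.4f\n",
              d, sum(coef(h)[2:3]), sd(resid(h))))
}
Whether the sensitivities that come with acceptable fits all land inside Arthur’s 0.5 to 1.2 band is exactly the check being argued about; the loop leaves that answer to the reader rather than asserting it.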
The most revealing thing here is not that Tamino made mistakes. Extremists often do.
The most amazing thing is that A Smith keeps not only dissembling about the mistakes, but pretending that Tamino is the reasonable person. A close second is that at heart so many members of the AGW belief circle still see those who question their cult as at best people to be barely tolerated, and only if they are properly ignorant and/or humble.
Nick Stokes (Comment#18511),
It is pretty clear that the combination of a) assigning the GISS assumed ocean delay to Tamino’s slow box and b) using the GISS assumed net forcings will lead to estimated climate sensitivities similar to what the GISS model estimates. How could the result be any different?
My primary objection to Tamino’s post is that he appears to suggest that his result represents some kind of confirmation of the accuracy of the climate sensitivity generated by the GISS model. Both the GISS model and Tamino’s two-box model generate climate sensitivities that depend primarily on the assumed values for these two key parameters. If the assumed ocean lag is shorter (a la Schwartz, Scafetta, or even shorter), combined with the GISS assumed net forcing, then the reported sensitivity will be in the low end of the IPCC range. If the assumed ocean lag is short and the assumed net radiative forcing is substantially higher than the GISS values (that is, if the aerosol ‘cancellation’ of greenhouse gas forcing is assumed to be at the low end of the IPCC uncertainty range… near zero) then the reported sensitivity from Tamino’s model (or from the GISS model for that matter) will be well below the IPCC range.
The large uncertainties in these two key parameters go to the heart of the issue, or maybe more accurately, they ARE the issue. Which is why I think climate modelers either do not address conflicting data (like Argo ocean heat content data suggesting short ocean lags), or discount them as ‘probably in error’ and/or of too short a term to be important. I think it would be more constructive if Tamino spent his time showing why he thinks the GISS assumed values for ocean lag and net radiative forcing are correct rather than confirming the obvious.
It would also be better if Tamino could bring himself to behave like a normal person instead of a raging lunatic. (Unless he really is a raging lunatic, in which case it would be best to just avoid him.)
Nick,
I’m a little slow this morning (some might say all the time) and not very familiar with R. The form for an exponential smooth of a time series that I am familiar with is S(0) = X(0), S(t) = a*X(t) + (1-a)*S(t-1), the smoothing factor a is a number between 0 and 1. It’s somewhat like a moving average but with the most weight assigned to the last data point. Is your exponential smooth equivalent to this? If so, how does your d convert to a? In moving average terms a is related to the number of points in the average by a = 2/(n+1) but that can’t be what you use because you can’t have n less than 1. If I understand what you’re doing, which may well not be the case, then the second smooth with a small n would have little effect as all the high frequency information in the time series is already gone.
Nick–
1) If what he did can’t violate the 2nd law, then the curve fitting parameters have no meaning and can’t provide estimates of the climate sensitivity. Tamino provides those.
2) Good job reproducing the temperature series. Now, I challenge you to also go through, figure out the range of parameters that make sense in the context of the earth, imposing the 2nd law of thermodynamics to be sure heat transfer doesn’t go from cold to hot, and do the algebra to determine the eigenvectors for your problem. Since you know your constants for the surface temperature, throw that into R and show us the ocean temperature. (Or just look at the figure above.)
DeWitt,
Yes they are equivalent. You can quickly check that the solution to that recurrence is
S(t) = a*(X(t) + (1-a)*X(t-1)+(1-a)^2*X(t-2)+…(1-a)^n*X(t-n)+…)
which is a convolution. (1-a) is the decay factor, equivalent to exp(-1/n).
But my script actually does the convolution explicitly, because R makes it easy.
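A quick numerical check of that equivalence (my own sketch; stats::filter with method="recursive" implements the recurrence, started from zero):
# Recurrence S(t) = a*X(t) + (1-a)*S(t-1) versus explicit convolution
# with kernel a*(1-a)^k, taking a = 1 - exp(-1/d).
set.seed(1)
d <- 30
a <- 1 - exp(-1/d)
x <- rnorm(200)
s_rec  <- as.numeric(stats::filter(a*x, 1-a, method="recursive"))
k <- a*(1-a)^(0:(length(x)-1))
s_conv <- sapply(seq_along(x), function(t) sum(k[1:t]*rev(x[1:t])))
max(abs(s_rec - s_conv)) # differs only at machine precision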
Lucia,
The model gives an equilibrium climate sensitivity in the same way that, say, Lumpy did. With the fitted a1 and a2, you can assume a steady increment in future forcing and produce a sum of exponentials with a limit.
There are no eigenvalues or transfer functions associated with the fitting of exponentially smoothed forcings. That comes out of associated two-box talk. And yes, Tamino started that.
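As a sketch of that limit, with purely illustrative a1 and a2 rather than the fitted values: the response to a unit step of forcing is a sum of saturating exponentials whose asymptote is a1 + a2, the sensitivity:
# Unit-step response of the two-smoothed-forcings fit (illustrative a1, a2)
step_response <- function(t, a1=0.6, a2=0.1, d1=30, d2=1)
  a1*(1 - exp(-t/d1)) + a2*(1 - exp(-t/d2))
step_response(c(1, 10, 30, 100, Inf)) # tends to a1 + a2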
“(did I ever call it ambitious? I think I said “elegant” – it’s a simple technique, and yet shows what’s going on pretty powerfully).”
No, you didn’t say it was “ambitious,” but you added another hyperbole in addition to ‘elegant’… – “In that, Tamino’s original post was brilliant, and elegant”
SteveF,
I was going to respond to your #18461 and got sidetracked with R stuff. The GISS forcings are what they are. T’s analysis is yet another in the series that works out some time-adjusted ratio between past observed temperatures and forcings to give an equilibrium sensitivity. The function of this is to allow you, in the future, to say what temperature rise might result from future forcings. Of course, you can’t get away from uncertainty associated with those forcings, but you need an estimate of that number.
I don’t think Tamino is seeking to validate anything about GISS models – it’s an independent first-principles way of getting the sensitivity. Similar to what Schwartz or Lucia (Lumpy) did. You try to get an empirical fit, with the aid of model ideas, which you can apply to a future forcing scenario. There’s an expectation that, the better the fit, the more reliable is the sensitivity.
I guess one could pick any ocean box lag time you want and come up with a different C/watt/m2 response.
Put a 100 year lag into the model – what do you get. Put a 1,500 year lag (the estimated response time of the deep ocean) into the model – what do you get.
Put a very small lag into the model (consistent with the recent Ocean Heat Content data) – what do you get.
Hansen is now pushing this Climate Response Function/Lag Response – the response climbs to about 40% within a few years, 60% within 100 years, and then finally close to 100% within 1,500 years.
http://img291.imageshack.us/img291/1200/image002.png
Using the estimated forcings from GISS Model E …
http://data.giss.nasa.gov/modelforce/NetF.txt
… and using Hansen’s lag function, we are only up to about 0.7 watts right now, which would imply about 1.18 C/watt/m2.
The chosen lag time determines your result.
Nick–
Look, I agree with you that a) We can create a fit the way you describe b) We can find a fit that ain’t too bad.
That said, Tamino’s post is still muddled because he did (contrary to your protestations) tell people the coefficients from his curve fit have physical meaning. Specifically: he thinks he can estimate climate sensitivity from his curve fit parameters, and told people that he had done so.
However, I would like to remind you that before I was banned, I asked this at Tamino’s:
I realize that you may think the fact that Tamino edited my later question makes his response ambiguous.
But you seem to be forgetting that I was the one who asked the question. It is quite clear that Tamino intended to communicate the notion that his fit is more than a curve fit. He was intending to communicate the notion that his fit is based on phenomenology and the constants obtained from the fit have physical meaning.
Until Tamino says he does not believe his constants have physical meaning, I will remain under the impression that Tamino thinks they have physical meaning.
While Arthur is vigorously defending his notion that the curve fit parameters can be used to estimate climate sensitivity, I will believe Arthur is treating the curve fit as having physical meaning.
I will admit that you have not been suggesting that the parameters you obtain from “R” have physical meaning. No one is accusing you of having suggested the curve fitting parameters have physical meaning.
But the fact that you don’t, and now wish to imagine that Tamino did not make that suggestion, doesn’t magically erase the fact that Tamino did interpret his fits as having physical meaning. Specifically, he estimated climate sensitivity from those parameters.
This can only be done if the fit has physical meaning, and in that case, it is perfectly reasonable to check whether the physical model that inspired the fit is realistic.
I forgot to make that R code print out the sensitivity. Adding these lines after the regression fit will do it:
a=h$coefficients            # fitted intercept plus the w1 and w2 weights
sensitivity=a[2]+a[3]       # long-term response is the sum of the two weights
sensitivity                 # print it
I got 0.712 C per W/m2 – Tamino quoted 0.73.
And yes, Bill, if you go to a 100 year lag, the sensitivity goes to 1.181. But the fit isn’t very good.
Nick Stokes (Comment#18523) August 25th, 2009 at 7:34 am
“The GISS forcings are what they are. T’s analysis is yet another in the series that works out some time-adjusted ratio between past observed temperatures and forcings to give an equilibrium sensitivity.”
Of course. But you can’t just wave your arms and say the GISS forcings are correct. Do you not agree that the GISS net forcing and ocean lags could easily be quite far from correct? The IPCC uncertainty limits on aerosols alone cover an enormous range, so the uncertainty in net forcing covers an equally enormous range. And who really knows the correct profile of ocean lags? If you assume different past forcings and a different ocean lag and fit them to the same temperature record, then you get a completely different estimate of sensitivity and a completely different projection of future temperatures with any assumed future forcing.
Surely you can agree that “T’s analysis” yields a sensitivity which depends completely on the inputs of assumed historical forcing and assumed ocean lag. It is these parameters which need verification, not Tamino’s simple model, which is pretty much a curve-fit exercise, similar to what has already been done multiple times.
Nick Stokes (Comment#18523),
And I forgot to note, the quality of fit can be made very good (at least as good as Tamino’s) for a very wide range of estimated sensitivities. It is not like Tamino found the only combination that fits the historical data.
Nick, thanks for your replication of Tamino’s analysis. It is a valued contribution to this thread.
Now that we know this is just a curve fit based on circular assumptions, what exactly was Tamino trying to show? I agree with Lucia that the more useful exercise would have been to define the solution space within the physical constraints.
Layman Lurker–
For any set of parameters of the box model, it is possible to find the eigenvalues corresponding to Tamino’s two time constants. This can be done both for physically realistic and unrealistic parameters. 🙂
If Tamino was willing to say what his boxes might contain, we could estimate the range of parameters and plot out the ocean time series corresponding to his surface time series. We could also talk about a few other things.
At this point, I’m pretty sure that over the range of parameters with values for αs, αo, βs, βo, the “ocean” time series corresponding to Tamino’s (or Nick’s) surface time series don’t make any sense.
If, as Nick suggests the fit has no physical meaning, then it might not matter that the ocean does bizarre things. However, in that case, we can’t compute a climate sensitivity from the fit. It’s just a fitting parameter.
People who like that model can’t have it both ways: Either it is based on physics, and the fitting parameters have physical meaning, or it is not based on physics, and they don’t.
Agreed Lucia. And I think you just put your finger on why Tamino’s commentary was so ambiguous.
lucia (Comment#18530)
Good morning Lucia!
I don’t think you’re going to get any information about the content of the boxes from Tamino. What he did is only very tenuously connected to physical reality. His objective was to “independently prove” the GISS sensitivity value is “correct”, not present a physically accurate representation of the Earth.
If the originally posted value of a “1 yr” constant for the fast box is assumed, then it for sure yields physically impossible results (as you correctly note many times) unless the fast box includes both the atmosphere and a surface layer of ocean (say 40 meters or more). The atmosphere alone would require a constant much lower than 1 year – probably in the range of 0.05 yr or less – to avoid nutty results. But even this would not make any physical sense, since most solar energy is absorbed by the top 100 meters of water, not in the atmosphere.
You could assume the fast box includes 60 meters of ocean, avoiding the crazy results and establishing some grounding in physical reality, but then the question of how the land air temperature 1.5 m above the surface connects to the model “fast box” temperature (which includes a lot of non-surface ocean water) enters the picture. Tamino (in his infinite wisdom) chooses to avoid such issues by using a fast box that is non-physical…. but then of course wants to declare a confirmation of a physical parameter (climate sensitivity).
Had Tamino the good sense to explicitly lay out all the assumptions, simplifications, and much less than perfect approximations to physical reality, then everyone would probably cut him some slack and just accept his efforts for what they are: a curve fit exercise. But what the heck, we are talking about a raging lunatic here; he isn’t going to go back and rationally discuss all the problems/simplifications/approximations associated with his “model”.
If I were you, I would just declare victory and move on.
Andrew #18498
The results of the study you reference are robust. A zombie attack seems much more likely to me over the next 100 years than the CO2-induced global warming currently predicted by GCMs, and the body count is much higher than what you get from a few meters of sea level rise. You can outrun sea level rising 1 cm per year, but you cannot outrun a throng of hungry zombies. We need to reallocate our computer modeling resources away from the preoccupation with CO2 to get a better handle on this new, more likely phenomenon.
Thanks
William
William (Comment#18533),
I note that the world’s fastest man, Usain Bolt, is from Jamaica, where belief in the occult (including zombies!) seems more common than in most other places. He could for certain outrun any zombie; could there be a connection? Just askin’…
SteveF (Comment#18478)
I am not going to get into a big debate over the WML, but I will say that it may not have much of a thermal effect. I believe the mixing largely occurs when the temperature gradient has collapsed for thermal reasons (e.g. winter); mixing water that is already thermally rather homogeneous will not produce much of a thermal effect. If there were truly a WML of say 50-100 metres over much of the ocean it would tend to shift the seasons a lot (around three months), but this is not the case. If you try to take into account phase lags, starting with a diffusive ocean giving about a 45 day shift on its own, there is very little wiggle room for much of a WML. Yes it is there, you can measure it, but it may not have the thermal impact that one would imagine. Most importantly, I am talking about models, not the real world, and a three box model (land, deep ocean, and WML), when constrained, especially by phase requirements, tends to favour a “thermally light” WML. At least that is what I have found.
oliver (Comment#18501)
The diffusive ocean box is just a model of an ocean that is heated from the top (or at least the top layer). All you need to parameterise this box is the area of the deep ocean (known) and the overall ocean diffusivity (to be constrained by the OHC time series). If you leave it as a free variable you are just adding an additional degree of freedom that you can use to suggest any value of climate sensitivity you like (we have all seen this done).
In terms of the model, given an SST record (and you need a long one) you can calculate the flux into the ocean that such a model would produce. Having done so, you integrate the flux and arrive at an OHC time series which you compare to existing OHC records. You will only get tolerable matches between model and historic record for a constrained range of ocean diffusivity values.
Having done this you can have a quick check on what the oceanographers think is the correct value.
Using OHC in this way locks in all types of goodies. It gives the ocean a thermal weight (I believe that most/all high end climate sensitivity models rely on a thermally very heavy ocean; I tend to get a thermally light ocean, which favours low end climate sensitivity values). It also gives you a defined filter that yields the amplitude and, most significantly, the phase lag of periodic forcings as a function of their frequency (i.e. the different attenuations and lags of the 22 yr compared to the 11 yr cycles directly, without having to add numerous ad hoc time constants).
It prevents the addition of an extra degree of freedom but also goes on to predict other effects.
It is just a box model, but a very useful one.
I do not know why its use is not standard practice. The flux calculation does require an integral (summation) over the SST time series from its start to the current step in order to determine the flux for the current step (it is the nature of diffusion to have this “memory” feature), but that is no big deal (about 5 columns in Excel). It is this integration that necessitates a long SST time series; you simply cannot work out the flux to any degree of accuracy unless you have enough history to provide the thermal memory that the diffusion calculation requires.
Alternatively it might not be used because people make a naive assumption that diffusive flux has to be multibox modelled (e.g. you have to produce a multibox model of the ocean, layer by layer, to contain the memory of past SSTs in individual boxes), but you don’t: the “memory” can be derived directly from the preceding SST time elements (it is, after all, this that the ocean box is remembering); there is an integral (summation) that will do this quite readily, as sketched below.
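Alexander’s Excel columns aren’t shown here, so the following is only a sketch of the kind of memory summation he describes, under the textbook assumption of diffusion into a semi-infinite half-space; the function name, the half-step offset and the default parameter values are mine, not his.
# flux into a diffusive deep ocean from the full SST history:
# q(t) = rho*c*sqrt(kappa/pi) * sum of past SST increments / sqrt(elapsed time)
diffusive_flux <- function(sst, kappa=0.4e-4, rhoc=4.2e6, dt=3.156e7) {
  n  <- length(sst)                     # kappa m^2/s, rhoc J/m^3/K, dt s/yr
  dT <- c(sst[1], diff(sst))            # annual SST increments (K)
  q  <- numeric(n)
  for (i in 1:n)
    q[i] <- sum(dT[1:i] / sqrt(((i:1) - 0.5)*dt))  # older increments fade as 1/sqrt(t)
  sqrt(kappa/pi)*rhoc*q                 # W/m^2 into the deep ocean
}
Integrating this flux over time then gives the model OHC series to compare against the observed record, which is how the diffusivity gets constrained.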
While I am on it, diffusive boxes have one very interesting property: the phase angle between the SST and the flux is always 45 degrees, so you could think of it as being the square root of a purely capacitative box (e.g. slab ocean). Actually this is quite a good way of looking at it; taking square roots features quite heavily in all diffusion mathematics. A diffusion box provides quite a gentle filter, again like half of a capacitative filter. Also, it is in the nature of such a filter that you can only define a time constant once you have determined the interval/period of interest.
Alexander
Alexander Harvey (Comment#18536)
“Most importantly, I am talking about models, not the real world, and a three box model (land, deep ocean, and WML), when constrained, especially by phase requirements, tends to favour a “thermally light” WML. At least that is what I have found.”
I agree that the WML is thermally lighter than it would appear based on its depth in the tropics. The constraint of seasonal lags on the average WML depth is more complicated though, because the depth of the WML declines as you move away from the tropics, and approaches zero at most high latitude locations. So in regions where there is a substantial seasonal change in temperature, the local WML is much thinner and presents less thermal inertia. The wide range of thickness of the WML allows both the retention of a lot of heat in the tropics and relatively small seasonal lags far from the tropics. The substantial heat content in the tropical WML is real, and its magnitude is evident when it is redistributed (ENSO) and in the relatively small response of the climate to sudden forcings like volcanic aerosols.
If you haven’t already done so, take a look at the Argo temperature profiles at: http://dapper.pmel.noaa.gov/dchart/.
Cheers
Hunter:
I make mistakes too and I bet Lucia and everybody else here does too, but I am not an extremist (just hard-headed). Extremists don’t admit when they are confused or make errors; that’s probably the big difference.
The thing that confounds me a bit is how some people are so unwilling to admit Tamino is capable of error (or that, if he made an error, it’s so trivial it isn’t worth talking about).
What category do they fit into?
Nick,
Thanks for the explanation. I have another question. It looks like you are smoothing the forcing with a 30 year time constant and also with a 0.1 or whatever time constant and then adding them together. If you put in a step function forcing of 1 W/m2, say, at time zero, don’t you end up with a total forcing of 2 W/m2 at 150 years?
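A quick way to check this in R (a sketch; it assumes an expsmooth(v, d) helper like the one in Nick’s script). Each smoothed copy of a sustained 1 W/m2 step individually approaches 1 W/m2, so the plain sum of the two approaches 2 W/m2; it is the fitted regression weights that scale the pieces back down.
step <- c(rep(0, 10), rep(1, 140))   # 1 W/m2 switched on after "year" 10
s_slow <- expsmooth(step, 30.)       # slow smooth, 30-yr time constant
s_fast <- expsmooth(step, 0.1)       # fast smooth, nearly instantaneous
tail(s_slow + s_fast, 1)             # -> close to 2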
Carrick (Comment#18540)
“What category do they fit into?”
Only the lunatic category never admits a mistake and has 100% certainty of being correct.
There is a reason why Tamino will not come over here and defend his work and explain what he meant or thought. There is a reason why others have to come to explain what “the General meant to say.” Please notice how the discussion turns from what Tamino meant to what Arthur or Nick think Tamino meant. Or how the discussion turns (is distracted) from banning to discussions about similarities between rank exploits and other blogs. Arthur and Nick perform a high troll role. A low troll just diverts a conversation: they come in, make outrageous comments, and the discussion goes over the cliff. A high troll, on the other hand, is smart (Nick is way smart); their role is to provide a credible defense (giving up a couple points here and there, like Tamino wasn’t clear) and to steer the conversation away from the main points. If their defense fails, then Tamino still survives.
There is only one way to put a high troll (please, Arthur and Nick, take this with a sense of humor) to the test. Send them back to Tamino and have them post there. Have them post there that they think Tamino was wrong for banning Lucia or editing posts or being unclear. When a high troll does that, then you have a person you can have a rational debate with. You will find very few of us (from the AGW side) who will say that anybody on their side makes a mistake.
So, kinda related, I’ve been on Watts and said I was a believer in AGW, on Open Mind and said this, on Lucia’s and said this. Guess where I am not allowed to post? You got it, RC.
WRT Nick’s comments on Tammy (here I go violating my rule): of course you can characterize Tamino’s actions as just fitting the curve. But as engineers, when we see somebody whip out a two box model we are thinking “physical interpretation”, and that effects model must pass a physical-law sanity test.
Nick, thanks the script works perfectly. My first real foray into R!
Note (for those trying this) – you have to remove a line of dashes at the end of the gissforc.txt file, and also edit out the ‘**’ entries in the gistemp.txt file (in addition to the headers every 20 lines), for R to actually run the script. Also note this gissforc.txt only goes through 2003, which may be another issue to be concerned about.
So, I’ve plugged in some different time constants, and indeed it’s true that the sensitivity generally increases as you increase the “long” time constant. However, as far as I can tell, there’s essentially no change between a “short” time constant of 1 year and one of 0.05 years or less, so Tamino’s choice of 1 year (and Lucia’s criticism of it here) was certainly something that “didn’t matter” to the central sensitivity question.
I’ll try to get up a graph showing the total sensitivity and R^2 values for different choices of the long time constant in this model some time soon. Basically, once you have a long time-constant less than 12 years, you go below the IPCC range I claimed you would stick to, and for a time-constant over 100 years you go higher than the IPCC range. But the fit does get worse (R^2 drops) for shorter and longer time-constant assumptions. I have yet to find a better fit than Tamino’s 30-year choice, but I’ll run a more thorough analysis when I get the chance.
I was somewhat surprised by the fact that for almost every fit the long-time-constant response (w1 in Nick’s script) is much larger than the short one (w2) – somewhere around 10 times as big. Not sure what that implies. Getting a response w2 that is negative (which happens if you go to really short time constants – 5 years or less) I believe does give you a result that would be unphysical.
Nick,
You maintain that Tamino, notwithstanding all his comments that rather strongly imply the contrary, does not believe his model is physically realistic. You also seem confused as to what he meant by “checking” for a second law violation. So here’s a thought: you’re not banned at Open Mind, so instead of trying to be a mind reader, why don’t you just ask him?
(Just saw that Mosh basically posted the same comment – sorry for the duplication).
Arthur Smith (Comment#18544),
The best fit ocean lag depends on the assumed radiative forcing. If you scale the GISS forcing values uniformly up or down (that is, multiply or divide by a constant value), you will probably find that the best fit ocean lag value moves in the opposite direction.
Arthur, for apples to apples – shouldn’t the SOI data be fed into the model first, before you start playing around with the time constants?
Well,
I knew it was just a matter of time before I found my old post.
And what a surprise, in that post I give Tamino a compliment, even handed fellow that I am. Anyways, it appeared to me that if you had one lag in the land and another lag in the ocean, then one way to get at this was to difference the land and ocean. Can’t remember for the life of me why I thought that would tell you anything!! Anyways, I’ll rethink that. Here’s the link; see the chart a couple comments later.
http://www.climateaudit.org/?p=2123#comment-145230
Arthur–
When you get your fits, be sure to report the magnitude of all the parameters in the fit, with units. Or do the algebra to find the eigenvectors, compute the constants for the ocean temperatures and plot those. 🙂
Let me put it another way and see if I get a response. It looks to me like Nick’s two time constant fit has the unstated assumption that half the forcing goes into the fast box and half into the slow box. If that is indeed correct, then what is the justification for that particular split? I guess I ought to boot up R and see for myself what it actually does.
Alexander Harvey (Comment#18536) August 25th, 2009 at 10:01 am
I’m not sure I understand this statement. We have lots of profiles showing mixed layers ranging from a few tens of meters to more than 400 m all over the world oceans.
You need, IMHO, a long SST and air temp/wind/wave state record to do a credible job of this.
What sort of diffusivities do you arrive at? That was my main question.
Oliver
Lucia:
Huh? There is no “ocean temperature” involved here – a fit based just on time constants and weights like this does not provide enough information to recreate a physical two-box model (which is in itself unrealistic, as the ocean has many temperatures, not just one, for example).
Perhaps you can explain how you came up with your graph in comment # 18515? There is no T_s and T_o in Nick’s R program.
Arthur Smith:
Are you an idiot?
w1=expsmooth(vv,30.)
w2=expsmooth(vv,0.1)
Re: SteveF (Comment#18534)
G. Romero, “Dawn of the Dead,” MKR Group, 126, SV10005, is considered to definitively model Zombie behavior. Multiple runs with very different volume and brightness inputs show me that fitness does not map well to good outcomes. Try it yourself. Other attributes may be involved.
A grant to upgrade my hardware, sadly still VHS, to a DVD player would be very useful. Low resolution and inability to fast forward at high speeds has kept me from being as productive as I could be. It is a universal problem isn’t it?
The Bolt anecdote is interesting enough that it should be looked into. Next time you run into him ask him what role Zombies played in his record.
Arthur–
The second box exists. The method does provide enough information to recreate the physical two-box model. It does provide enough information to create the temperature series for the second box.
When you do your curve fit, tell me the magnitude of your parameters.
To get the curve, I assumed some values for the parameters in the two boxes, solved for the eigenvalues, assumed ICs and marched forward.
After minor adjustments to the script (I edited out the headers and had to change the number of lines to skip) I find that w1+w2-vv is indeed greater than zero from 1943 on, reaching a maximum of 0.73 in 2003. Now to create a step function forcing test.
Can I ask my question again?
Dan Hughes (Comment#18376) August 23rd, 2009 at 3:01 pm
What numerical values do T_s and T_o approach as time becomes very large?
Do the numerical values seem reasonable?
Is it likely that the Earth’s systems will ever attain states analogous to that indicated by the very-large-time-scale behavior of these equations?
Thanks
Dan–
Who are you asking? And for what forcing are you asking for solutions for T_s and T_o? Etc?
I don’t know the answers to your questions because I don’t know what you are asking or who you are asking.
Carrick (Comment#18553)
Sure wish you had left out the ‘idiot’ part. Don’t you want to see Arthur’s constants?
SteveF–
I want to see the constants. 🙂
SteveF:
M… You’re right, that was out of line.
But d**m… I was taught to keep my mouth shut when I didn’t know what I was talking about.
It’s frustrating enough trying to work out science issues without people creating noise for the sake of noise.
A step function forcing, 0 from 1880 to 1890 and 1 thereafter, produced a w1 + w2 forcing of 1.993333 in 2003.
Arthur Smith,
Of course if Tamino was BIG enough he could just have come here and acknowledged he had made a mistake (like we all do on occasion)
Carrick (Comment#18553)
re “Are you an idiot?”
Why undermine your argument by resorting to ad hominem ad personam?
Even though I asked it in the form of a question…. as I said David, it was out of line. Sorry, I’ll tone it down.
By the way, simply removing the short time constant doesn’t have much of an effect on the fit.
Given that the short-time constant term is just fitting high frequency noise, I don’t see how summing the coefficients has the interpretation of climate sensitivity in any sense of that term.
Carrick (#18553)
vv is the forcing, not temperature. w1 is an exponential smoothing of the forcing with a long time-constant, while w2 is basically the instantaneous forcing (very short time-constant). I don’t see a T_s and T_o, not sure why you think it’s obvious?
Arthur–
I don’t read R. For the fit, you say you can find the sensitivity.
Nick wrote this:
a=h$coefficients
sensitivity=a[2]+a[3]
sensitivity
What are the coefficients a individually? (All of them, not just 1 & 2)
lucia,
Nick’s script makes no sense. Considering that his plot looks exactly like Tamino’s, the calculation of the eigenvalues may have been a minor mistake. I say it makes no sense because the fitting coefficients for what should be equivalent forcings are wildly different. w1, the slow box, has a coefficient of 0.713 and a value in 2003 of 0.73 W/m2. Meanwhile the fast box, w2, has a coefficient of 0.0589 and a value of 1.190 in 2003. It must be that the + in the fitting equation means ‘and’, not sum, and each is fitted separately. There can be no physical reason for the two forcings to have different sensitivities, and summing the coefficients is uninformative to say the least.
Arthur, you are right it isn’t obvious.
Here are my coefficients from Nick Stokes program for anybody that want them
Lucia’s parameters (tau1 = 30, tau2 = 0.09)
(Intercept) w1 w2
-0.02688692 0.76410491 0.02539218
For Tamino’s choice (tau1= 30, tau2=1)
(Intercept) w1 w2
-0.02849619 0.73941021 0.03799963
For a 1-box model (tau1 = 30),
(Intercept) w1
-0.02296441 0.81546515
Lucia (#18555):
The curve fit gives 3 parameters: the zero-intercept, the weighting of the w1 time series, and the weighting of the w2 time series. The zero-intercept is meaningless (determined by the zero-choice of the temperature anomalies) so there are only 2 useful parameters that could relate to an underlying physical system (plus the two time constants). However, the physical two-box model has at least *3* parameters beyond the two time constants: Cs, Co, and β. As far as I can tell, a given fit could be compatible with a wide variety of different values of β.
Unless I’m missing something? I could well be – I certainly don’t follow what you’re asking for here.
Arthur–
I have to do some gross algebra. Then I will explain.
DeWitt Payne (#18569) – yes, the definition of ‘lm’ in R is to fit one curve to a linear combination of those in the list separated by ‘+’ signs – look up the R manual, it’s pretty straightforward (today’s the first time I even looked at it). The ‘+’ does not mean the two forcings are added as they are; finding the weights used to add them is what this is all about.
Given the coefficients, if you have a forcing that is constant over a long period of time, then the predicted temperature response is the sum of both the long-time-constant coefficient and the short-time-constant coefficient multiplied by that forcing, since w1 and w2 become equal to vv in that case. That’s why “a[2] + a[3]” gives you the long-term response. For forcings evident over only shorter time periods the response will be smaller.
One part of the brilliance of Tamino’s post was the way he showed how this plays out with the volcano peaks. If you fit just to the instantaneous (short time-constant) piece, the volcano peaks are over-emphasized and the fit is poor. The same thing happens when you let the long time-constant here get too short. But there is *some* response to volcanoes, so you need a short-time-constant piece as well – or, obviously, a more sophisticated model than this two-box approximation.
Carrick, you said:
“Carrick (Comment#18540)
August 25th, 2009 at 10:38 am
Hunter:
The most revealing thing here is not that Tamino made mistakes. Extremists often do.
I make mistakes too and I bet Lucia and everybody else here does too, but I am not an extremist (just hard-headed). Extremists don’t admit when they are confused or make errors; that’s probably the big difference.
The thing that confounds me a bit is how some people are so unwilling to admit Tamino is capable of error (or that, if he made an error, it’s so trivial it isn’t worth talking about).
What category do they fit into?
First, I wish I had clarified what I was trying to say.
*ALL* of us can and do make mistakes.
The extremists deny them, or deflect responsibility or accountability for those mistakes.
IOW, I agree with you, and thank you for the opportunity to clarify my statement.
Arthur,
A forcing is a forcing is a forcing. There can be no physical reason why the weights of the two forcings are different, especially different by an order of magnitude. It isn’t brilliant. It’s at least as dumb as my forgetting to convert force to mass above. At least I admitted I made a stupid mistake. The proper way to do this, and it’s still almost completely empirical, is to split the forcing between the two boxes, apply the exponential smoothing, sum the forcings and then fit to temperature. Then you have three adjustable parameters: t1, t2 and beta, the splitting factor. Then the slope is indeed the climate sensitivity. But I doubt that you can relate the parameters to a true physical model, or at least not easily.
Dave Andrews (Comment#18563)
See the video I posted of Arthur with Tamino on his belly. Tamino isn’t big enough to come in by himself; he must be carried. Basically, Tamino doesn’t have the requisite moral character to either come in and explain himself OR take questioning on his own blog. Arthur and Nick are carrying Tammy’s water. That’s fine. That DOESN’T go to their arguments. They are good soldiers. They should submit a comment over at Open Mind and request A) that Tamino come over here, or B) that Tamino invite Lucia to post there, or C) that Lucia be able to comment there UNEDITED, provided she follow rules of decorum (no politics, no religion, no CO2 anti-radiative-theory lunacy, no funny videos, no ad homs).
She’s a girl, for Christ’s sake; what’s Tammy scared of?
DeWitt–
It’s ok that the weights for the two solutions are different. That always happens when you want to apply boundary conditions in the eigenvalue problem.
I need to sort through getting the parameters out of this. I’m sure I can do it… but it also requires thought. Which sometimes makes my head hurt.
The confusing thing for people is that there is another box. The curve fit minimized residuals for one box only, but permitted the temperature in the other box to do whatever the heck it wanted to do.
Anybody know how to do the equivalent of Excel Solver in R? I want to minimize the sum of the squares of the residuals by varying t1, t2 and beta. My fit already looks better than Tamino’s using t1=2, t2=30 and beta=0.5 and I’ve just been modifying t1 so far. The F statistic is 420 and the adjusted R2 is 0.77. t1 does make a difference to the fit now. A small t1 overemphasizes the volcano forcing.
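Base R’s optim() can stand in for Excel Solver here. A sketch under the same setup as Nick’s script (vv, ss and expsmooth() assumed to be in the workspace; the penalty for unphysical parameter values is my own addition):
rss <- function(p) {                  # residual sum of squares for t1, t2, beta
  t1 <- p[1]; t2 <- p[2]; beta <- p[3]
  if (t1 <= 0 || t2 <= 0 || beta < 0 || beta > 1) return(1e10)
  w <- expsmooth(beta*vv, t1) + expsmooth((1 - beta)*vv, t2)
  sum(resid(lm(ss ~ w))^2)
}
fit <- optim(c(2, 30, 0.5), rss)      # Nelder-Mead from t1=2, t2=30, beta=0.5
fit$par                               # best-fit t1, t2, beta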
lucia,
There are in fact three boxes as far as I can tell. There’s the fast box, the slow box and the surface layer where the temperatures are measured. The forcings filtered through the fast box and slow box cause the change in temperature in the surface box, whose response is instantaneous. Or at least that’s the way I look at it. Those forcings must have equal weights, so I summed them before the temperature fit. I have no idea what the temperatures are in the fast and slow box. Since this is an empirical fit, I’m not sure I even care.
DeWitt–
Three boxes in your fit? Or Taminos? Tamino’s is two boxes.
DeWitt Payne (#18575) – there is only one forcing, ‘vv’. The “two-box” model here assumes that the temperature response is the sum of two partial-temperature-responses that are each equal to the forcing ‘vv’, after you do an exponential smooth, with two different time-constants. w1 is ‘vv’ with a long-time-constant smooth, w2 is ‘vv’ with a short one. The fitted temperature series is the sum of the first partial-response coefficient multiplied by w1, plus the second partial-response coefficient multiplied by w2.
DeWitt #18550 and #18451 It isn’t assumed that the two time constant parts make equal contributions. That is determined by the fitting coefficients that emerge. From memory, with 30 yr and 1 yr, the relative contributions are 0.66 and 0.05, summing to 0.71 C per W/m2. You can tell that the short time scale contributes less, because the fitted curve is quite smooth.
Incidentally, this is why there isn’t much point in going to even shorter time constants, like say 0.09. With annual averaged data, this is sub grid scale. It’s equivalent to no smoothing at all.
The Earth system is in fact made up of a continuous string of “boxes” from the top of the atmosphere to the bottom of the oceans and even into the lithosphere and so on. Now, once you get down to boxes that are only as thick as atoms, getting any smaller doesn’t help much. Heat doesn’t just move from the atmosphere to the ocean or vice versa; it flows within them, up and/or down. But is just treating them as two “boxes” good enough? I don’t know. I guess it depends what you want…
I shouldn’t be saying all this because it makes Grumpy sound really terrible. But there is something to be said for simplicity. Never right, but often a good approximation. The iterations down the line (2 box, 3 box, … n box) will allow you the possibility of getting closer to being right (though there is far from a guarantee, as this incident shows), but the effort-to-gain ratio is substantially worse with each box you add.
DeWitt Payne
” At least I admitted I made a stupid mistake”
And do you know what? It only ever seems to be people who might be described as questioning or sceptical who admit on blogs that they made a mistake.
Such things never happen at RC or Open Mind. ‘Mistake’ is not in their lexicon.
Nick Stokes (Comment#18583)-“Incidentally, this is why there isn’t much point in going to even shorter time constants, like say 0.09. With annual averaged data, this is sub grid scale. It’s equivalent to no smoothing at all.”
Yes and the fact that GCMs can’t resolve any aspects of everyday weather is not problematic at all. If it’s smaller than we can deal with, it doesn’t matter, or it can be “parameterized”. This is silly. Why not just interpolate the forcing data like I did for Grumpy to get monthly time scales? It took forever but I did it. And you know, the monthly data reveals important details about the data which you would not see in the annual data.
Dave Andrews–
Not quite true. Gavin admitted he mis-interpreted what Scaffetta did with his fit. It’s true he also added that he thought Scaffetta’s paper was not entirely clear, but he admitted he misinterpreted.
I make mistakes all the time. I plan to make quite a few over the next few days. That’s why it’s going to take me a while to go from having fitting parameters from Nick’s code to providing the corresponding time series for the ‘ocean box’. (And even then, I’ll only provide those if I am not mistaken in my notion they can be obtained. However, I’m pretty sure it can be done.)
Andrew_FL–
By the way, I disagree with Nick that there is no point to using a time constant less than 1/12 years. It’s harder to do the comparison; however, it can be done. If the ‘real’ time constant for the box were very fast, I think it would be worth showing the result even if temperatures are only reported every month.
lucia and Arthur,
I’ve given up trying to read Tamino’s mind and thought I would play with the idea and come up with something that made some physical sense, to me at least. That means three boxes. Two boxes store energy with different time constants and release it to a third box with an instantaneous response. The energy that goes into one of the boxes cannot go into the other, or you could violate the First Law. In fact, a sum of weighting factors greater than 0.5 coming from Nick Stokes’s script does violate the First Law. My step function experiment demonstrates this: at long enough times, the same forcing into each box produces a sum of twice the original forcing, so a sum of coefficients greater than 0.5 gives a total forcing greater than the original. That looks like a First Law violation to me. So that meant I had to add a splitting parameter to the mix and sum the filtered forcings rather than weight them differently. I remain unconvinced that different weightings for forcings are physical. I could actually split the forcing into three parts with one part unfiltered, but from the playing I’ve done, the instantaneous splitting factor would probably get very small. Something to try next. Right now my best fit is t1=10.5, t2=106 and beta=0.5, sensitivity 0.767. That gives an F statistic of 565 and adjusted R2 of 0.82.
It’s not a bad fit, but it puts something like 1.1 W/m2 “in the pipeline”. I really think somebody would have found that by now. Even the obviously badly spliced OHC data with the huge jump from 2002 to 2004 only has a slope equivalent to 0.4 W/m2.
lucia,
I think I’ve asked this before, but why don’t edits show up right away? For anyone reading before the edit shows up that’s t1=10.5 not 105.
Andrew_FL (Comment#18584) August 25th, 2009 at 4:31 pm
It’s no secret that the climate system doesn’t consist of n discrete boxes… it’s an attempt to construct a very simplified model that can explain some of the variation we see in the time series and tie it to CO2 forcing or whatever else you think is driving this bus.
Going to a continuous approximation might even make things easier: if you’re trying to model radiative transfer in certain ways, for example. Here, our model just allows for some (unspecified) kind of net heat diffusion between bodies of fluid.
Physical considerations might well dictate some reasonable borders for your boxes, for example: where large discontinuities in heat capacity occur, or between parts of your system with limited heat transfer. If 1 box is plain terrible, but there are reasons to suspect that 2, 3, or 4 boxes make much better sense, then the gain/effort ratio might go way up with a bit more work. Too many boxes, and of course you start wondering whether a 1-d formulation is really appropriate (why do we believe that 30 layers exist and are perfectly uniform around the globe?).
lucia (Comment#18588)
If intraannual variations are important, then I think a globally-averaged surface temperature and 1-d model are definitely too coarse.
DeWitt Payne (Comment#18580) August 25th, 2009 at 3:35 pm
DeWitt Payne (Comment#18589) August 25th, 2009 at 4:49 pm
Could you explain how the boxes in your model are physically specified and interconnected? It sounds from the description as if the surface box actually has no heat capacity at all; is its temperature a linear combination of the other two boxes?
Also, if inputs in the current system are distributed unevenly between a given set of boxes, then why would it be unphysical to have weighted distribution of additional forcings?
Oliver,
I’m taking the approach pioneered by Nick Stokes where the boxes only have time constants. As far as I’m concerned, it’s totally empirical. However, energy must still be conserved, so the sum of all forcings cannot add up to more than the original specified forcing at any time. The third box has effectively zero heat capacity and responds instantaneously to a change in forcing. I tried adding a third split of unfiltered forcing directly to what amounts to the third box. It improved the fit a little (the F statistic increased from 565 to 582), but the splitting coefficient for the unfiltered forcing is only 0.04, and I haven’t run an analysis of variance to see if it’s actually statistically significant, ignoring things like autocorrelation. I’m sure it would be relatively easy to do in R if I knew what I was doing, which I don’t.
DeWitt–
The edits don’t show up right away because the edit plugin doesn’t tell the supercache plugin that the page is updated.
oliver (Comment#18591)-“It’s no secret that the climate system doesn’t consist of n discrete boxes… it’s an attempt to construct a very simplified model that can explain some of the variation we see in the time series and tie it to CO2 forcing or whatever else you think is driving this bus.”
I agree.
“Going to a continuous approximation might even make things easier: if you’re trying to model radiative transfer in certain ways, for example. Here, our model just allows for some (unspecified) kind of net heat diffusion between bodies of fluid.”
Well, what I’m thinking of is the time to compute/calculate the results and the complexity and construction of the model itself. Climate models don’t capture the tiny turbulent eddies for good reason: a grid scale that small would mean that a “forecast” out 10 years would take 10^20 years to actually run! Now, I can imagine that as you say:
“Physical considerations might well dictate some reasonable borders for your boxes, for example: where large discontinuities in heat capacity occur; or between parts of your system with limited heat transfer. If 1 box is plain terrible, but there are reasons to suspect that 2, 3, or 4 boxes make much better sense, then the gain/effort ratio go might way up with a bit more work.”
The effort to create a two box model is greater than for a one box model (even I can do the latter!) but it will often, though not always, be better. If you do a bad job constructing your theoretically better model, there is no saving it, and your effort-to-gain ratio will be pretty sorry. So let’s assume that the models get better as you go up – they can, but the potential goes up less and less with each box. Meanwhile, even if the effort to add one more box is the same no matter how many boxes you already had, your ratio is getting worse and worse. So the question is: is there an “optimal” number? I don’t know. I think it depends on how good your model design skills are in the first place, but I’m not sure there is a hard line.
Of course, there is then the issue as you said:
“Too many boxes, and of course you start wondering whether a 1-d formulation is really appropriate (why do we believe that 30 layers exist and are perfectly uniform around the globe?).”
I believe this is the motivation behind GCMs. But as we all know, this is itself problematic, because at this point the odds of a more sophisticated model being more accurate are even less than they are for box models. The reason is that there are so many more ways to screw up in the design. So while the potential for a good model increases substantially, the reality is much more disappointing.
That being said I have to be optimistic on these counts. With the potential there, models may be substantially improved over time. And in the indeterminate long run, I’d bet they will get better.
In the mean time, we can all spew vitriol against each other about what the “better models” will do.
Andrew_FL (Comment#18595) August 25th, 2009 at 5:33 pm
I agree with your points.
I wonder if it might be possible to cobble together a reasonable several-box model that uses some of the boxes to model a crude latitudinal dependence and circulation rather than just more vertical layers.
Oliver
P.S.: I guess another problem with too-many-n is that the modeling effort is supposed to be tractable as a “not-a-computer model.”
Andrew_FL #18586
There’s no use interpolating forcing in order to go to finer time scales with shorter decays. The point of the finer timescale is to pick up short-term temperature response to sudden changes in forcing. When you construct with interpolation, there aren’t any on that finer scale.
oliver–
If the goal is to fit a simple model to data, many boxes automatically means many fitting parameters. The two-box method as discussed by Tamino may appear to have only two parameters, but it’s not true. Using 2 arbitrarily chosen parameters, Nick’s solution finds three coefficients in the “a” matrix.
I need to sit down to do some algebra, but I’m pretty darn sure I’ll get an ocean temperature series that corresponds to the surface temperature series. I’m pretty sure the result will show the choice of time constants “matters” even if, as Arthur suggests, the computed sensitivity is not strongly influenced by this. What I think we’ll see is that for some sensitivities, the ocean temperatures look nuts.
The question I have been trying to drive at is: do any combinations of parameters in the solution look “not nuts”? If even one combination is “not nuts”, then the two-box method will be sufficiently useful to learn something. But if all the ocean series look nuts, it won’t be.
Nick–
If we knew the eigenvalue, I don’t see any advantage to substituting a solution based on an incorrect eigenvalue.
Maybe it would come out ok… or not. But what’s the point? It’s easy enough to just do the problem with the known eigenvalue.
DeWitt #18593
As you’ve probably gathered, you can just add extra variables to the lm() function. You could, for example, write lm( ss ~ w1 + w2 + vv) to put in the unsmoothed data (which won’t help much, since it is close to w2). I think to replicate Tamino’s SOI treatment, you just have to add in a SOI index at this point.
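A hypothetical sketch of that last step; ‘soi’ here stands for an annual SOI series aligned with ss, which is not in the original script and would have to be read in separately:
h2 <- lm(ss ~ w1 + w2 + soi)   # SOI enters as one more regressor
summary(h2)                    # its coefficient absorbs ENSO variability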
Ok, results are posted here:
* Case 1: changes in the long time-constant, with the short time-constant held fixed at 0.05 years:
R-squared values:

http://arthur.shumwaysmith.com/life/sites/default/files/two_box/Rsq_vs_long_time.png
Best fit is for close to 20 years, but 15 or 30 are almost as good. Fit gets worse as you go to longer or shorter “long” time-constants.
Corresponding coefficient values are here:

http://arthur.shumwaysmith.com/life/sites/default/files/two_box/two_box_vs_time.png
(green = short-time response, red = long-time response, black = total response/sensitivity)
For the best fit time-constant of 20 years, the values are 0.5548995 for the long-time coefficient and 0.04699564 for the short one, with a total of 0.60189514 in C per W/m^2, or about 2.4 C per doubling of CO2.
* Case 2: changing the short time-constant, with long time-constant held fixed at 30 years:
http://arthur.shumwaysmith.com/life/sites/default/files/two_box/Rsq_vs_short_time.png

Best fit is for somewhere between 0.1 and 0.15 years. Note the funny dip at 1.5…
Corresponding coefficient values:

http://arthur.shumwaysmith.com/life/sites/default/files/two_box/two_box_short_time.png
and there’s an interesting problem. First the total response value is almost independent of the short time constant as I was asserting earlier (“doesn’t matter”) – in fact it’s highest for the shortest values, so Tamino would have actually found a slightly higher sensitivity (as Nick and I have here) if he’d gone with something less than 1 year.
But second – the two coefficients in the fit diverge to plus and minus infinity at a little under 1.5 years – and before that the long-time-constant coefficient goes negative (at about 0.85 years). And in particular, it’s definitely negative at 1 year. I’m pretty sure that’s not a physically correct situation at all. If that’s what Tamino actually was using in his fit, it might look good in the fit and the sensitivity numbers, but those individual coefficients cannot be right.
Unless, again, I got something wrong. Apologies for the crudeness of the graphs; I’m very new to “R”. The exact R scripts used to generate the above curves can be downloaded from the same URLs if the ‘.png’ extension for the figure is replaced by ‘.R’.
Arthur,
Even though you don’t think it matters, could you provide the magnitude of the third constant? I’m pretty sure I need it to reconstitute the ocean temperatures.
I know you don’t think that “matters” but I do.
Dewitt, Lucia, I found that the entry under the “Recent Comments” column for the post you’re editing seems to have the edits in it even though the main post doesn’t.
Nick Stokes (Comment#18597)-the monthly temperature data contain many more points for the “regression”. More information is usually not considered a bad thing.
I would also add something nobody seems to be noticing: the forcing data Tamino is using ends in 2003 or thereabouts. Now, I don’t think his regression would be altered very much, but since the forcing went up and the temperature “down”, it would reduce the sensitivity that gets a good fit ever so slightly…
There’s a First Law violation on the low side of the unequal weight method too. The weights must always sum to exactly 0.5 or energy is not conserved. A negative weight would be a Second Law violation because it would represent an energy flow in the wrong direction from cold to hot. I think. Maybe. Too bad Tamino didn’t publish the full fitting summary and analysis of variance.
Here are all the coefficients from my model:
t1=10.5
t2=106
alpha=0.48
beta=0.48
gamma=0.04
intercept= -0.06342401
slope=0.73345237
alpha, beta and gamma are not free variables; they must be in the range 0 to 1 and must sum to exactly 1. t1 and t2 are always positive.
and the script, which I hope is turnkey if you have edited the forcing file and the temperature file to remove gaps and headers.
#define exponential smoothing function
expsmooth<-function(v,d){ # u is the smoothed v with decay exp(-1/d)
n=1:length(v) # index over the input series
e=(n-1)/d
e=exp(-rev(e)) # weights rise toward the most recent point
e=e/sum(e) # normalise the weights to sum to 1
u=convolve(v,e,type="open")[n] # explicit convolution, truncated to length(v)
u
}
#read GISS forcing data
s <- matrix(scan("gissforc.txt", 0, skip=1), ncol=11, byrow=TRUE)
vv=rowSums(s[,2:11])
#read GISS temp data
v <- matrix(scan("gistemp.txt", 0, skip=1, nlines=124), ncol=20, byrow=TRUE)
ss=v[,14]/100.
w1=expsmooth(0.48*vv,106) # slow box: 48% of the forcing, 106-yr constant
w2=expsmooth(0.48*vv,10.5) # fast box: 48% of the forcing, 10.5-yr constant
w3=w1+w2+0.04*vv # plus 4% unfiltered; the splits sum to 1
# fit regression
h<-lm(ss ~ w3)
summary(h<-lm(ss ~ w3))
anova(h<-lm(ss ~ w3))
g=h$fitted.values
#plotting
t=1880:2003
k=21:124
#jpeg(width=500, height=500)
plot(t[k],g[k],type="l",xlab="Year",ylab="Gistemp anomaly",axes=FALSE,asp=50.0)
lines(t[k],ss[k],col="red")
q=0:5*0.2-0.4
axis(2,q)
q=0:10*10+1900
axis(1,q)
#dev.off()
Arthur Smith (Comment#18601),
Thank you for posting your results; they are a real contribution to the thread.
I would suggest only one additional test: could you determine the best fit sensitivity value (as indicated by your first and second graphs combined) with the GISS forcing scaled either 15% higher and/or 15% lower (all values multiplied by either 1.15 or 0.85)? This would indicate how sensitive the model is to the accuracy of the assumed forcing.
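A sketch of that test, bolted onto Nick’s script (the loop and the grid of time constants are mine): rescale the GISS forcing, rescan the long time constant, and watch how the fit quality and the implied sensitivity move together.
for (scale in c(0.85, 1.00, 1.15)) {
  for (tau in c(10, 15, 20, 30, 50, 100)) {
    w1s  <- expsmooth(scale*vv, tau)     # slow box, candidate time constant
    w2s  <- expsmooth(scale*vv, 0.05)    # fast box, held very short
    hfit <- lm(ss ~ w1s + w2s)
    cat(scale, tau, summary(hfit)$r.squared,
        sum(hfit$coefficients[2:3]), "\n")   # R^2 and implied sensitivity
  }
}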
Arthur Smith (Comment#18601)-Pretty sure? You have GOT to be kidding me, right? 😆 Negative time violates the laws of thermodynamics, simply because thermodynamics is subject to the iron law of the arrow of time. I’m sure there is a more sophisticated argument (I think that’s what this whole thing is about), but it is pretty obvious that the climate CAN’T respond to forcings that it has not yet been subjected to!!!
Forget the 0.5 thing. I’m confusing the parameters of the linear fit of forcing to temperature with energy flow. Failure to split the energy between the boxes still violates the First Law because there’s more energy out than in at equilibrium. A negative weight is still a Second Law violation.
By the way, I’ve posted the images with a bit of discussion up on my home blog. Comments welcome there if you think I did something wrong.
Lucia – installing R, downloading and editing the GISS files as we’ve described and running Nick’s script would take you very little time – probably less than half an hour. I already had R installed but hadn’t used it, but I think it took me just 10 minutes to replicate Nick’s result.
Anyway, for time constants of 0.05 year and 20 years for which I mentioned the 2 relevant coefficients, here’s the summary of the result in full (you really think the intercept is relevant?):
summary(h)
Call:
lm(formula = ss ~ w1 + w2)
Residuals:
Min 1Q Median 3Q Max
-0.224478 -0.066225 -0.002484 0.069505 0.217838
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.074175 0.008683 -8.543 4.63e-14 ***
w1 0.554900 0.038891 14.268 < 2e-16 ***
w2 0.046996 0.016424 2.861 0.00497 **
—
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.09359 on 121 degrees of freedom
Multiple R-squared: 0.8234, Adjusted R-squared: 0.8205
F-statistic: 282.2 on 2 and 121 DF, p-value: < 2.2e-16
Lucia #18588 Andrew_FL #18604
My point about short decays is that the point of this analysis is to analyse the association between information in the forcing and the temperature at the various timescales. In the short term – yes, we have temperature information, but not forcing (and interpolation can’t give any new info). So we can’t get a meaningful coefficient that expresses the association.
Nick Stokes (Comment#18611)-Ideally we would have both a monthly resolution forcing dataset and temperature data set. But you think that if we haven’t got a monthly data set for both, we should at least try to exploit the information in the monthly data set we do have?
@Carrick
Sir.
I lack your acquaintance and hope you might forgive my boldness. May I have your permission to quote “Carrick” as having said
Screw him and screw you. This is science and nobody is going to pull punches when we see wrong crap being promoted and disseminated like this latest dreck from (name removed).
You get bloodied up in science sometimes. It’s part of the art.
The children, sir, think what an example you could be to them.
Oliver,
I’d draw a cartoon if I knew how, but I don’t, so I’ll have to give a verbal description. Three boxes: 1, 2, and 3. An input forcing F. Only box 3 radiates to space. Boxes 1 and 2 transfer energy only to box 3. Boxes 1 and 2 have time constants t1 and t2. The time constant of box 3 is infinitesimal, so all heat flowing in is immediately radiated to space. Energy flows as F1 into box 1, F2 into box 2 and F3 into box 3; F1+F2+F3 is exactly equal to F. The temperature time series of box 3 is represented by the GISTEMP global land plus sea anomalies. The temperatures and heat capacities of boxes 1 and 2 are unknown, at least to me. It’s unrealistic, but easy to program. Besides, it’s all backwards: what really happens is that flow in doesn’t change, but the net flow of heat to space from box 3 is decreased by F. That causes the temperature of box 3 to go up and reduces the heat transfer from boxes 1 and 2 to box 3. Boxes 1 and 2 then warm up because more heat is coming in than going out. I’m not at all sure, unlike most of the warmers, that lowering F out is equivalent to raising F in.
Oops!!!
I did make a booboo. The short time-constant in my R script was not in the range 0.05 to 2.5 as in the above figure, but rather was using the index value, from 1 to 50!
Oops.
So the mess that I thought was around 1.5 was actually at exactly the point where both short and long time-constants were the same (30.0). No wonder they diverged (and switched places after that – it’s kind of symmetrical).
Anyway, corrected plots for varying the short time-constant with a long time-constant fixed at 30 years are as follows.
R-squared: (inline plot not preserved)
(the R-squared peak is actually at a short time-constant of 2 years, not less than 1).
Coefficients: (inline plot not preserved)
The total response is almost constant over this range of short time-constants, again dropping steadily as you increase the choice for the short time. The short-time weight rises slowly as the long one falls, keeping the total roughly the same.
So, in the end, Tamino’s choice seems to have been just fine, though it didn’t give quite the best fit. If his “short” time constant had been over 17 years though he would have been in real trouble.
Arthur–
I’m not trying to replicate Nick’s results. I wanted the 3rd coefficient. Carrick sent me all three.
Nick–
I agree that if the true eigenvalue is associated with a time scale less than a month, we won’t be able to get a meaningful association. But that just means our results will be poor.
My impression is that someone had been advocating that we could substitute a solution with a larger time scale– which is different from admitting we wouldn’t be able to get a meaningful association.
Sorry if there was confusion on that point.
Arthur–
In my opinion, you have not shown Tamino’s choice is fine because you have not taken the step of finding the parameters associated with those solutions to see if they are physically realistic. In my opinion, you are addressing something utterly irrelevant to my criticism.
I realize you don’t get what I’m saying, and you likely won’t until I have a chance to sit down, do some algebra, describe the various parameters and show graphs of the temperature in the other box.
oliver (Comment#18551)
It is the “truly” in my phrase “truly WML”.
By this I mean a layer that behaves as if it were a homogeneous thermal slab throughout the year. If the top 100 metres of the ocean behaved as a thermal slab it would imply the following time constant:
100 m * 4200000 J m^-3 K^-1 / CS
the second factor being the volumetric heat capacity of water and CS being the climate sensitivity parameter in W m^-2 K^-1. For CS near 1 W m^-2 K^-1 that is about 4.2e8 seconds, i.e. more than a decade. This I feel would give excessive phase lags on the seasonal scale. It would also give too small a seasonal range. I am not saying that there is not a WML that you can go out and measure. When it comes to producing a box model of the ocean, a truly WML of that depth throughout the year and across the globe is incompatible with observations. It is only a single box; it has to deal with averages, and it seems that on average the thermal effects of the WML are best modelled as a thinner layer than the one produced by measuring the depth to a temperature horizon.
As to your second point, you have to remember that it is a diffusive box model, not a wave state model. Do not knock something for not doing what it is not intended to do. The uptake of heat from the surface layer into the deep ocean is a function of that layer’s temperature. How that temperature is arrived at is a different matter. The objective of such a box model is to do as much as one can with the data available, that data being the SST and OHC time series.
The diffusivity that best matches the OHC is ~0.4 cm^2/sec, which is low compared to a more acceptable 1.3 cm^2/sec, but in a sense even getting close is reassuring. This value is going to represent the lower limit: it is calculated using the total ocean surface area, as I do not have a figure for just the deep ocean, and I cannot put a figure on the proportion which does not participate, e.g. regions that have little or no temperature gradient from top down to the abyss. If only 60% of the ocean participated, the figure would rise to ~1.3 cm^2/sec.
SteveF (Comment#18539)
I think I completely agree; it is not I that is simple, it is the box model that is a little retarded. I think that is the point of simple models: they are simple, but still give hints as to the way the real world might behave.
When it comes to dealing with seasonal phase and amplitude correctly, I am, I think, in very good company. Unless they have got a lot better very recently, thereby rendering the spot checks I performed obsolete, the GCMs also make a bit of a hash of it.
This is one of my hobbyhorses: do the climate models reproduce climate (as opposed to weather)? I have had a crack at converting model output into gridded climatologies on a spot-check basis and comparing them to the HadCRU gridded climatology, and I was not impressed. I cannot remember much about it now, but I think it showed up as a too-heavy ocean, e.g. not enough amplitude and too much lag. My thought being: if they cannot match the Earth’s climatology, then I have little faith in their climate (but not weather) credentials.
Also of importance is that SST warming and OHC increase differ a lot around the globe. Unfortunately a single box neither can nor is intended to cope with all that. It must do its best with global averages, and from a thermal point of view averaging out the WML (which I think gives around 100 m) just does not cut it if the model hopes to reproduce OHC (or at least some approximation to it) from SST.
To all,
Getting back to the main strengths of the diffusive box: its ability to say something about future OHC uptake, its ability to produce time constants that scale with the period of interest, and its capability to describe the attenuation and phase of the response to forcings at differing time scales, doing all of this while adding only a single new degree of freedom that can be constrained. Now it may not be perfect, it may not even be right, but it is not the ad hoc approach that Tamino chose to illustrate his point. If you want a multiplicity of time constants, I think the first place to look would be in a single box that is packed with them, not in individual boxes. It is not as though this is not well-trodden ground; it was, as I recall, a diffusive box model that did so much to give us the infamous and incorrectly termed “pipeline”. It is well recognised for allowing large short-term fluctuations yet still providing a long tail. It was a massive improvement on the “slab” ocean box and I am baffled why its use is not more prevalent. It is not perfect, but its main competitors are, in my view, dire.
If I can take an example (one that has been recently dealt with elsewhere), there is the question of how to scale the long-term effects of solar brightening when all you have are the short cycles to scale by. Now the diffusive box model may not give the right answer, but it does give an answer, because it has the required property of predicting the amount by which short-period (years, not days) fluctuations are attenuated and how their phase should lag.
In raw terms it says that the oceanic admittance is a function of the square root of the reciprocal of the period. Given a value for the diffusivity and another for climate sensitivity, you have all the information you need to predict the attenuation at the differing timescales. I suspect it might just give the guys the break that they need to back up statements regarding 11-yr, 22-yr and century-long time constants without what might seem the arbitrary selection of different time constants. FWIW the diffusive box would have us believe that the amplitude of the 11-yr cycle is attenuated by one half, suggesting that one should multiply the naive value of the forcing (calculated just using the observed temperature signal and climate sensitivity) by two to get the real forcing. Now that is not much, but it would make some folks happy.
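As a rough check on that arithmetic, here is a minimal R sketch. The diffusivity and sensitivity values are the illustrative ones from this thread, and the computed attenuation is quite sensitive to both:

kappa <- 0.4e-4                   # m^2/s, i.e. the ~0.4 cm^2/s quoted above
rho_c <- 4.2e6                    # J/(m^3 K), volumetric heat capacity of seawater
CS    <- 0.8                      # K per (W/m^2), assumed sensitivity
omega <- 2 * pi / (11 * 3.156e7)  # 11-year cycle, in rad/s
# semi-infinite diffusive slab: |Y| = rho_c * sqrt(kappa * omega),
# so the admittance scales as 1/sqrt(period)
Y <- rho_c * sqrt(kappa * omega)  # W/(m^2 K)
# one simple way to fold that into an attenuation relative to equilibrium
# (the exp(1i*pi/4) factor also carries the phase lag):
1 / Mod(1 + CS * Y * exp(1i * pi / 4))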
For fun I have just asked a variant of the same question: using the best fit of the “non-background increase” Lean data to find the strength of the solar effect, how much effect would the complete data with background have had on the post-1900 record? The answer I got from a diffusion box model was about 0.19 C/century out of 0.68 C/century for the data I have. Which I guess is about twice the orthodox view, and the same factor as the attenuation of the 11-year solar cycle. It’s a funny old world.
Alexander
I think one has to forget trying to check the sensitivity by regressing the GISS forcing numbers onto the temperature trend to date.
For one thing, the forcing estimates, i.e. the 5.35 ln(CO2current/CO2orig) formula (and all the others), have been constructed based on a 3.0C sensitivity number.
The formulae are not empirically derived through experimentation, or even from the MODTRAN data; they come from climate model estimations (mostly the climate model reconstructions of the last glacial maximum) which find a 3.0C-per-doubling result using 0.75C/watt/m^2.
If anything, using a 30-year ocean lag and the GISS forcing numbers, you are guaranteed to get a result between 0.3C/watt/m^2 and 0.75C/watt/m^2, more often closer to 0.75.
One has to match up the temperature trend with independent-data-based-Coefficient*ln(CO2/GHGs) only.
Where did the 5.35*ln(CO2) come from anyway? The early 1980s climate models.
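For concreteness, the quoted expression itself (whatever its provenance) is a one-liner in R; the 280 ppm baseline and the 0.75 C/(W/m^2) figure below are the illustrative values already mentioned in this thread:

co2_forcing <- function(C, C0 = 280) 5.35 * log(C / C0)  # W/m^2
co2_forcing(560)         # doubling: ~3.71 W/m^2
0.75 * co2_forcing(560)  # ~2.8 C equilibrium warming at 0.75 C/(W/m^2)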
Note that I have corrected the horizontal scale in the second two sets of graphs in my comment #18601 above; the surrounding discussion reflects my mistaken belief at the time that I had actually given the code a set of time scales 20 times shorter, which is no longer reflected in the graphs.
Arthur–
Do you understand that I consider those graphs irrelevant to my criticism of the method used to get the sensitivities? That result has nothing to do with “the problem”, and so does not show anything useful about Tamino’s method.
SteveF (#18607) – multiplying the GISS forcings by 1.15 or 0.85 would have the absolutely straightforward effect of dividing the fitted total response by the same factor, because this is a completely linear modeling problem. So a fitted sensitivity of 0.6 would drop to 0.52 C/(W/m^2) if GISS forcings should be multiplied by 1.15, or would increase to 0.71 C/(W/m^2) if by 0.85.
Lucia (#18622) – I’m sure I don’t understand what you consider relevant or irrelevant. However, by my definition of whether the precise choice of two-box time-constants “matters”, it seems pretty clear the precise value of the short time constant is irrelevant (makes very little difference to the fitted sensitivity).
On the other hand, the choice of the long time-constant does matter more than I had expected. Tamino would have gotten a better fit with a slightly shorter long-time-constant choice, and that would have given a somewhat lower sensitivity (though still in the IPCC range).
But you evidently have quite different criteria of relevance – which you don’t seem to have spelled out completely, or at least I certainly haven’t understood them. So I await your quantification of that issue in regard to this sort of fitting problem.
Arthur–
You are trying to see if the choice of the short time constant affects the magnitude of the computed sensitivity. I am concerned that the underlying parameters do not match those for the earth.
I know that a few people here understand my criteria, and I can tell that you and Nick do not. I am doing some algebra so I can do some examples to clarify the issue for those who do not understand.
To do the example to clarify, I needed all three constants a[1], a[2] and a[3]. You responded by telling me I don’t need a[1]. The person who understands what I am planning to do emailed me all three.
When I have quiet time, I will have to carefully write down 11 equations with 11 unknowns, avoid mistakes, and solve them. Then I will explain further. I do not know the final result, but the thing that concerns me means I need to check.
This is the analysis to check whether the parameters corresponding to the solution are “physically realistic”. Finding that the magnitude of the sensitivity is insensitive to the choice of time constant will be useful information only if the parameters α1, α2, β1, etc. are reasonable for the earth’s climate system divided into two boxes.
So, what you are finding could potentially be useful, but it has nothing to do with my concern.
Arthur Smith (Comment#18601)
Nice. It would be nice to see your first figure as a family of curves for various short time constants.
Re My (Comment#18619):
Correction, it should have read:
If the top 100 metres of the ocean behaved as a thermal slab it would imply the following time constant:
100 * 4200000 * CS
the second term being the specific heat of water and CS being the climate sensitivity. It equates to more than a decade (for CS=0.8).
Alexander
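A worked version of that corrected expression in R, taking CS = 0.8 K per (W/m^2) from the parenthetical above:

depth <- 100                 # m, the "truly" well-mixed layer
rho_c <- 4.2e6               # J/(m^3 K), volumetric heat capacity of water
CS    <- 0.8                 # K per (W/m^2)
tau   <- depth * rho_c * CS  # 3.36e8 seconds
tau / 3.156e7                # ~10.6 years, i.e. "more than a decade"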
One would like to point out to some of you that the following could be true. Tamino could be wrong. Mann could be wrong. Jones could be wrong. CRU ought to release the data and the code.
And CO2 emissions could still be warming the earth dangerously.
Now, do you want to be part of the problem, defending to the last breath idiots who are obviously wrong, and thus destroying the credibility of the AGW thesis?
Or do you want to be part of the solution, which is to purge the movement of its fanatical idiots and get to the truth of the matter?
Follow up question:
What knowledge has been gained or lost in the postings and comments of this thread?
Andrew
michel,
If Mann, Tamino, etc. are wrong, there is no AGW crisis.
AGW’s claims are built on what Mann, Hansen, Tamino, Schmidt, etc. have sold.
Their defenders at some level must know this, and they will defend AGW to their last breath. The attraction of a simple and wrong solution to big difficult questions is nearly irresistible. When your simple and wrong solution can offer an apocalypse to boot, it is darn near perfect.
AGW has been a massive waste of resources- time, treasure and talent.
AGW has dominated climate science to the detriment of climate science. This happened with evolutionary science in the early 20th century, as well. The science recovered, eventually.
What is it with you engineer types, lucia? All that stuff about accounting for energy inputs and outputs… you are missing the sheer elegance of the project.
Note that while there are reputable climate scientists who disagree about the correct figure for sensitivity over a very wide range, while surface temp measurements themselves are probably suspect (see Pielke et al.), and while quantifying non-GHG forcings is still difficult, and although he did not include any data for the last 5 years, Tamino was nevertheless able to arrive at a figure within 1.5% of the IPCC’s sensitivity estimate.
For him, this confirms the sheer clarity of his vision. For lesser minds it may simply confirm the rather obvious power of presupposition and immutable working assumptions. Two-boxes, three-boxes, whatever … this looks like the same obsessive curve-fitting exercise that Tamino has done a thousand times before. Elegantly.
So rather than let tacky engineering notions confound elegance (2LOT? puhleese–only creationists and denialists care about that stuff) you should have just put the lotion in the basket like all other visitors to Open Mind. Troublemaker.
Arthur Smith (Comment#18623),
“So a fitted sensitivity of 0.6 would drop to 0.52 C/(W/m^2) if GISS forcings should be multiplied by 1.15, or would increase to 0.71 C/(W/m^2) if by 0.85.”
Just as it should. The GISS forcing history is a combination of long term and short term forcing changes, with short term variation driven by things like solar cycles and volcanic aerosols, and long term being due mainly to greenhouse gases and aerosols. The best fit for the short lag constant is controlled mostly by the short term variations, while the best fit for the long term lag constant is set mainly by the long term changes. When you scale the entire GISS forcing history up or down, both the fast and slow forcings are proportionally changed, and so the ratio of the best fit lag values should not change, even though the diagnosed sensitivity does change inversely with the scaling applied.
However, uncertainty in the net human contribution due to greenhouse gases and aerosols does not correspond to a uniform scaling up or down of the GISS forcing history, since uncertainty in human contributions does not alter the short term variations. Uncertainty in the net human contributions corresponds more closely to uncertainty in the overall (positive) slope of the GISS forcing history, not a uniform scale change up or down. If the slope of the forcing history increases or decreases, then the long/short ratio of the best fit time constants should change significantly, with a steeper forcing slope decreasing the ratio and a more shallow slope increasing the ratio.
This means that uncertainty in human forcings should have significantly more effect on diagnosed sensitivity than a simple scaling of the overall forcing history. Without actually setting up R and doing the tests, I am not sure how much more effect there will be on diagnosed sensitivity, but based on other forcing/temperature curve fits I have done, my guess is that it will be a pretty big effect.
I guess I should invest the time to set up R.
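For anyone who does set up R, the two experiments distinguished above might be sketched like this; forc, temp and fit_two_box are placeholders for the forcing series, the temperature series and a fitting routine like the sketch earlier in the thread:

n        <- length(forc)
f_scaled <- 1.15 * forc  # uniform scaling: fast and slow parts change together
ramp     <- seq(0, 1, length.out = n)
# tilt: change only the long-term slope (here by 15% of the net forcing
# change), leaving the volcanic and solar wiggles untouched
f_tilted <- forc + 0.15 * (forc[n] - forc[1]) * ramp
fit_two_box(temp, f_scaled, 1, 30)  # should simply rescale the sensitivity
fit_two_box(temp, f_tilted, 1, 30)  # may shift the best-fit time constants too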
steven mosher (#18627) – here you go:
Varying short time constant with long held fixed at 30 years:
http://arthur.shumwaysmith.com/life/sites/default/files/two_box/short_time_comp.png
– observed temperatures in black and the fitted curves in red (0.1 year), brown (0.5 year), green (1.0 year), blue (2.0 year) and violet (2.5 year). Blue curve here was the best fit but they’re all about the same.
Varying long time constant with short held fixed at 1 year:
http://arthur.shumwaysmith.com/life/sites/default/files/two_box/long_time_comp.png
– observed temperatures in black and fitted curves in red (5 year), brown (10 year), green (20 year), blue (30 year) and violet (60 year). Green curve here was best fit; they are all reasonably close to one another for recent temperatures (except perhaps the 5-year curve) but diverge considerably in the mid-20th-century and earlier.
The R scripts to generate these are available again by just changing the ‘.png’ to ‘.R’:
short time-constant R script and long time-constant R script.
Arthur Smith
Please provide Lucia (per #18625) with ALL your constants:
We would like to see what she can do with them.
David–
Don’t worry about it.
Carrick and DeWitt have emailed me all three constants for some cases. It’s not pressing. It’s going to take a few posts to explain; I’m proofreading.
Arthur, perhaps you could also show how the Pinatubo eruption impacted your model and the results.
Temperatures generally declined by about 0.35C during the period, while the GISS solar forcing reduction was as high as -2.8 watts/m^2 (or just 0.125C/watt/m^2, which is quite a bit less than 0.75C/watt/m^2)
(and technically, the optical depth data indicate solar forcing at the surface declined by between 3% and 10% during this period, or -7 watts/m^2 to -24 watts/m^2, versus the recalculated/readjusted GISS number of -2.8 watts/m^2).
Interesting how climate models always play down anything that is not a GHG or an effect that does not directly lead to 3.0C per doubling.
You know, the discussion which has arisen from this Tamino model debacle has inspired me to make a climate analysis blog. And my first post will be: Does Tamino understand criticisms of Models?
The answer being No. Here’s a preview just for you guys:
1. The first problem is that Tamino erroneously concludes, right out of the hangar, that climate models are criticized on the grounds of being too complex. He therefore concludes that by showing he gets the same answer with a simpler model, this criticism is worthless. But this is rather silly, and in fact misses one of the strongest criticisms of models: that they can never be complex enough!
2. Tamino has made numerous questionable assumptions in fitting his model to the data in the first place.
and 3. Oddly enough he is using circular reasoning to argue that a “non-computer” model gets the same results as a computer model. That’s obvious because he admits to choosing the long time constant based on the GISS model.
4. It doesn’t “matter”. (Just kidding. 😆 )
Bill Illis (#18637) – see the graphs I linked to in comment #18634. Or Tamino’s original. Pinatubo in the observational record is the strong dip in temperatures around 1992. It’s a strong constraint on all the fits – and the good ones all get it pretty much perfectly.
Re: DeWitt Payne (Comment#18614)
Thanks for your effort in explaining this.
Arthur Smith (Comment#18634)
You misunderstood.
See your figure where you held the short time constant at 0.05 and varied the long Tc, plotting the R-squared. That graph.
What would be interesting is that graph plotted as a family of curves, i.e. short time = 0.05, 0.25, 0.5, 0.75, 1.0; that is, slices through the volume of R-squared = F(Tc1, Tc2).
Probably academic.
Or I can get off my lazy ass and load R.
If it matters.
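The requested family of curves would only be a few more lines, assuming a routine like the fit_two_box sketch earlier in the thread (temp and forc again being the annual series):

short_tcs <- c(0.05, 0.25, 0.5, 0.75, 1.0)
long_tcs  <- seq(5, 60, by = 1)
r2 <- sapply(short_tcs, function(ts)
  sapply(long_tcs, function(tl) fit_two_box(temp, forc, ts, tl)["r.squared"]))
# one R-squared curve per short time constant: slices through R^2 = F(Tc1, Tc2)
matplot(long_tcs, r2, type = "l", lty = 1,
        xlab = "long time constant (years)", ylab = "R-squared")
legend("bottomright", legend = paste("short =", short_tcs), col = 1:5, lty = 1)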
SST, when measured from buoys, samples the top 1 meter of the ocean.
I’m gonna preface this by saying that Nick did a great job putting together the framework for analyzing this problem, and this isn’t intended as a “gotcha”.
However, I am pretty sure there is an issue with how the version of code that Nick Stokes originally supplied is performing the exponential smoothing.
What he did is appropriate for periodic or probably even statistically stationary input. However, this data is far from it, because there is essentially no driving at or before t=1880 and very large driving as we approach t=2010. For this case, it is very hard for frequency-domain convolutions such as that implemented by R to not generate significant end effects.
Here is a comparison of doing it the way I consider “correct” and Nick’s original method:
2-Box Model Fit Results: [figure]
The red line is from Nick’s version and the blue is mine. The “bop-down” around 1880 is real; that is what the forcings really do. Here is a comparison of the GISS forcings (green), my filtered forcings (blue) and the R-convoluted filter (red):
GISS Forcings and 30-year filter: [figure]
Note that the two filters agree very well after around 1930 (e.g. 30 years after the start of the time series). Before that there is a significant divergence between the two curves, which is reflected in the above 2-box reconstruction results.
About the only major difference, other than the exponential filter itself, is that I’m fitting to the monthly data while Nick is using the annual version. I don’t have 1-month forcing data, so I interpolated the forcing curves for that.
The exponential smoothing is really just the numerical solution to T'(t) + T(t)/tau = F(t).
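A time-domain (recursive) version of that smoothing is only a few lines of R; this is a sketch of the approach being described, not Carrick’s actual code:

# Direct numerical solution of T'(t) + T(t)/tau = F(t). Starting from zero
# builds in the "no driving before the first data point" assumption, which
# is exactly what a frequency-domain convolution gets wrong at the ends.
exp_filter <- function(forc, tau, dt = 1) {
  a <- exp(-dt / tau)
  y <- numeric(length(forc))
  for (i in seq_along(forc)) {
    prev <- if (i == 1) 0 else y[i - 1]
    # exact update when F is held constant over a step; steady state is y = tau * F
    y[i] <- a * prev + tau * (1 - a) * forc[i]
  }
  y
}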
After people have had a chance to comment on this, and if we sign off that this is a better approach, I’ll post my code and my data files. (I’ll provide them to anybody who wants them in any case; I’m just trying not to spam this website.)
Physically, the basic problem with the convolution method (and I think this is “well known” in the sense that a few people know it) is that for non-stationary problems it does not give an accurate solution of the original ODE over the entire domain of the solution space.
I’m going to close comments on this thread, and duplicate the final two on the other thread. This thread is way too long.
The new thread is here.