Arthur's Case 2
In comments, Arthur requested I post my graph of the ocean and air temperatures obtained by integrating his “Case 2” so we could compare our graphs. To permit comparison, he also posted his graphs of ocean and air temperature obtained by integrating his “Case 2” data using a different method. (In that post Arthur made a few statements I think are not quite right; I’ll address those later.) For now, here is my graph of ocean and air temperatures corresponding to Arthur’s case 2 parameter values:

In the graph shown above, the atmosphere temperature shown in blue is a bit more responsive to short-term fluctuations than the measured temperatures shown in yellow.
The ocean temperature is shown in red; as you can see it soars. This seems unreasonable behavior for a two-box model that is supposed to represent a simplification of the earth’s climate system.
The “mixed” temperature, computed as a linear combination of the atmosphere (9.1%) and ocean (90.9%) temperatures, tracks the ocean temperature closely. As you can see, this mixed temperature does not match the measured temperatures.
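Written out, that linear combination is simply

$$ T_{\text{mix}} = y\,T_s + (1-y)\,T_o \approx 0.091\,T_s + 0.909\,T_o , $$

with y the thermometer weight given below.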
This should clue us in that something has gone wrong because this solution is supposed to be a case where the “mixed temperature” closely matches the measured temperatures. In contrast, Arthur provides plots where the mixed temperature tracks the measurements well.
Why should there be a difference?
I don’t know precisely why there is a difference. However, I can explain how I obtained my graph.
How I computed my time series
My two temperature time series were computed directly from the two box model using the parameters, as expressed in equations (1c) and (2c) from my first post on this subject:
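As a sketch of what I take (1c) and (2c) to be, consistent with the parameter definitions and the steady-state values quoted further down (the forcing terms need a seconds-per-year conversion when the rate constants are in years^-1):

$$ \frac{dT_s}{dt} = -\alpha_s T_s - \gamma_s\,(T_s - T_o) + \frac{x\,F_T}{C_s} $$

$$ \frac{dT_o}{dt} = -\alpha_o T_o - \gamma_o\,(T_o - T_s) + \frac{(1-x)\,F_T}{C_o} $$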
When integrating, I selected the appropriate values from Arthur’s post:
(τ+ = 30 years,
τ– = 1 year,
αs = 1.00E+00 years^-1,
αo = 3.31E-02 years^-1,
γs = 1.00E-03 years^-1,
γo = γs × (Cs/Co) = 1.26E-04 years^-1,
Cs = 1.34×10^7 J/(K m^2),
Co = 1.06×10^8 J/(K m^2))
I also used x=0.177 to partition the applied forcing between the ocean and atmosphere and y=0.0912 to weight the atmosphere and ocean temperature to obtain the fit for the measured temperatures. So, Fo= (1-x) FT and Fs= x FT.
Before integrating, I computed the steady state temperature for each box by setting the time derivatives to zero, corresponding to an applied forcing of F = 1 W/m^2; I found these were Ts = 0.42 K and To = 7.35 K. I then wrote an integration routine and checked:

So, I think I used Arthur’s case 2 values, my integration routine seems to work. However, using the values I think Arthur reports for case 2, my mixed time series does not match the data– which it is supposed to do.
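For readers who want to reproduce the check, here is a minimal sketch of the sort of routine described above (not the actual code used for the post); it assumes the two-box form sketched earlier, uses simple forward-Euler steps, and verifies that a constant F = 1 W/m^2 asymptotes to roughly the steady-state values quoted:

```python
# Minimal sketch of a two-box integration check; not the actual routine used here.
# Assumes the linearized two-box form with rate constants in 1/yr, heat capacities
# in J/(K m^2), and forcing terms converted from K/s to K/yr.

SEC_PER_YEAR = 3.16e7

# Arthur's "case 2" values from the list above
alpha_s, alpha_o = 1.00, 3.31e-2        # 1/yr
gamma_s = 1.00e-3                       # 1/yr
C_s, C_o = 1.34e7, 1.06e8               # J/(K m^2)
gamma_o = gamma_s * C_s / C_o           # 1/yr, so gamma_s*C_s = gamma_o*C_o
x = 0.177                               # fraction of the total forcing into the fast box

def integrate(forcing, t_end=300.0, dt=0.01):
    """Forward-Euler integration; forcing(t) returns F_T in W/m^2 at time t (years)."""
    Ts = To = 0.0
    t = 0.0
    while t < t_end:
        F = forcing(t)
        dTs = -alpha_s * Ts - gamma_s * (Ts - To) + x * F / C_s * SEC_PER_YEAR
        dTo = -alpha_o * To - gamma_o * (To - Ts) + (1 - x) * F / C_o * SEC_PER_YEAR
        Ts, To = Ts + dTs * dt, To + dTo * dt
        t += dt
    return Ts, To

# Steady-state check with a constant F = 1 W/m^2:
Ts_ss, To_ss = integrate(lambda t: 1.0)
print(Ts_ss, To_ss)  # lands close to the Ts = 0.42 K, To = 7.35 K quoted above (up to rounding of the inputs)
```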
What did Arthur do?
If I understand Arthur’s method correctly, he obtained his temperature series by integrating the eigenvalue function solutions using weights (w+, w–, r+, r–). These weights were computed from (τ+ = 30 years, τ– = 1 year, αs = 1.00E+00 years^-1, αo = 3.31E-02 years^-1, γs = 1.00E-03 years^-1, γo = 1.26E-04 years^-1, Cs = 1.34×10^7 J/(K m^2), Co = 1.06×10^8 J/(K m^2)) based on a set of equations obtained through a series of algebraic manipulations.
In particular, the algebraic manipulation begins with 5 equations and 5 unknowns. The final values of weights for the solution for the atmosphere (w+ and w–) and ratio of weights for the ocean to atmosphere (r+ and r–) should be such that all five equations balance when we insert the values of the solution.
Here are two of the equations that are supposed to balance:
(Using my notation.)
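As a sketch, reconstructed from the numerical checks further below (with τ± expressed in seconds), these two conditions are:

$$ \text{(1N)}\qquad \Big(\frac{w_{+s}\,r_+}{\tau_+} + \frac{w_{-s}\,r_-}{\tau_-}\Big)\,C_o = 1 - x $$

$$ \text{(2N)}\qquad \Big(\frac{w_{+s}}{\tau_+} + \frac{w_{-s}}{\tau_-}\Big)\,C_s = x $$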
These equations agree with Arthur’s equation 14, but his “w±” and mine differ by a factor of τ+ and his w±o = w±s·r±.
For case 2, Arthur provided x=0.177. He also reported w+s = 8.42×10^-4 K/(W/m^2) and w–s = 0.417 K/(W/m^2). If I understand correctly, these correspond to τ+ = 30 years and τ– = 1 year respectively. (I’ve misunderstood the notation in the past, so this may still be the case.)
Interpreting Arthur’s values the way I think they are intended, if we insert numerical values into equation 2, using Arthur’s notation, we obtain the result x = 0.177 that Arthur reported:
{ 8.42×10^-4 K/(W/m^2) / (30 year × 3.16E+07 sec/year)
+ 0.417 K/(W/m^2) / (1 year × 3.16E+07 sec/year) } × Cs = 0.177,
where Cs = 1.34×10^7 J/(K m^2) (the corresponding check of equation 1 below uses Co = 1.06×10^8 J/(K m^2)).
This is encouraging since Arthur’s Case 2 solution does correspond to x=0.177.
However, there seems to be a problem with inserting values into equation 1:
{ w+s·r+/(τ+ × 3.16E+07 sec/year) + w–s·r–/(τ– × 3.16E+07 sec/year) } × Co = 0.091
So, we have obtained 1-x = 0.091. But Arthur’s solution for case 2 is supposed to correspond to x = 0.177. Taking the sum, (1-x) + x = 0.27 rather than 1, so Arthur’s solution for case 2 appears not to solve the required equations. If so, then his time series do not correspond to the two-box model with the parameters he thinks correspond to his solution.
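To make the arithmetic easy to re-run, here is a small sketch of the equation 2 check (the weights are Arthur's reported case 2 numbers as I read them; my reading of his notation could still be off):

```python
# Quick arithmetic check of equation 2N using Arthur's reported case-2 weights.
SEC_PER_YEAR = 3.16e7
C_s = 1.34e7                              # J/(K m^2)
tau_plus, tau_minus = 30.0, 1.0           # years
w_plus_s, w_minus_s = 8.42e-4, 0.417      # K/(W/m^2), as I read Arthur's post

x_check = (w_plus_s / (tau_plus * SEC_PER_YEAR)
           + w_minus_s / (tau_minus * SEC_PER_YEAR)) * C_s
print(x_check)   # about 0.177, matching the x Arthur reported

# The analogous check of equation 1N multiplies the ocean weights w_s*r by
# C_o = 1.06e8 instead; with the r+/r- values in Arthur's post it came out to
# about 0.091 rather than the required 1 - x = 0.823.
```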
So where are we? Either I have misunderstood Arthur’s notation, misread values in his post, there is a typo in the solution in his post, or something.
However, a mismatch in the values of “x” corresponding to Arthur’s solution for the weights would suggest the presence of an algebra error somewhere. Once the weights are derived incorrectly, the temperature profiles integrated from the solutions based on those weights would be incorrect even if they appear to match the data.
I’m still a bit puzzled by this, but I haven’t quite been able to track down whether the error is mine or Arthur’s, or where that error might be. For now: my graphs of his solution do not look realistic for the earth. Arthur’s do, but it appears those graphs likely do not correspond to the physical parameters he intends to attach to his two-box model.
Update: Wednesday
Arthur fixed the algebraic error. He’s confirmed that he now gets the values of (x, y) I mentioned in comments, and has requested I post the figures showing temperature time series with the ocean heat capacity 10 times what he previously used and the heat transfer 300 times what he previously used. The temperature profiles look much more reasonable.
Since these don’t look insane, we can probably now move on to judging whether the parameters are, or are not, realistic for a two-box model for the earth. (I have no idea yet. I’ll be examining issues like whether the parameters would correspond to a Venus- or Mars-like earth, etc.)
Update again
Dewitt asked for “impulse response” graphs. This is for one case. The next one is coming.

How did you choose an applied forcing of F = 1 W/m^2? Did you fit the model to just the surface temperature, or to both sea surface temperatures and land surface temperatures? What did you use as your forcing series?
In 1998 there was a large spike due to El Nino. The interesting part was the rapid drop in temperature after the peak. There is a lot of information contained in this drop that you don’t have in a corresponding rise.
If we accept CO2 causes warming, we can’t say that CO2 has an impact on the drop, apart from making it smaller.
So the rate of drop tells us something about the speed at which the atmosphere responds to whatever is shifting the temperature around.
So when you talk about responsiveness of the atmosphere, the drops and their rates are interesting bits of information.
John Creighton–
Figure 2 is just to check that my integration routine is not screwed up. It doesn’t mean anything physically. The way to interpret that is: if my routine is not screwed up, the “ocean” trace will asymptote to the correct steady value for the temperature for whatever forcing I apply. The algebra says that value is To = 7.35 K for F = 1 W/m^2. The atmosphere also asymptotes to the correct value.
I could use any value of F to check my math, but using “1” is easiest.
John–
I realize I forgot to answer part of your question.
For Figure 1, I used the GISSTemp forcings that Gavin originally posted as used for GISS Model E simulations. (Figure 2 is just a test of the integration routine to see if it might be screwed up. The integration routine passed that test.)
So looking at figure one again: if the ocean temperature is more responsive than the land temperature, then that means the ocean temperature is driven by forcing rather than coupling. Also keep in mind that the longer the time constant, the greater the gain. My guess is the time constant of the ocean is too large because not enough energy dissipation from the ocean is added into the model. In the last thread evaporation was suggested as one method of heat dissipation from the ocean. Another point is that not all the forcing will make it into the ocean. Some of it will be reflected off the water directly into space. This lost energy due to reflection should also decrease the gain in ocean temperatures to low frequency temperature signals.
John Creighton–
The graph corresponds to the two-box model Arthur created as possibly giving a decent representation of the earth’s climate. That model has:
1) a roughly 20 m deep ocean which is much shallower than the full ocean on earth.
2) very little heat transfer between the “ocean” and “atmosphere” boxes relative to what I think is reflected in the literature.
3) Very low radiative heat transfer to the universe for the baseline relative to what we might expect if the Stefan-Boltzmann law applies
4) Tamino’s time constants of 1 year and 30 years for the eigenvalue solution. (Some people call these “atmosphere” and “ocean”, but that’s not correct.)
5) An atmosphere with roughly the amount of air we actually have on earth.
I used these because Arthur chose them and for no other reason. That selection has two solutions for the parameters “x” and “y”. Here x describes how the applied forcing is split between the two boxes. Arthur’s post provided solutions for “Case 1” with one combination of “x” and “y” and “Case 2” with a second combination.
Last week, I showed Arthur’s “case 1” solution, which looked unrealistic compared to the earth. That solution had x=0.0179, which meant 1.8% of the heat went into the atmosphere and the rest went into the ocean.
Arthur asked me why I didn’t plot case 2; that is what’s shown above. For this case, x=17.7%, so 82.3% of the heat goes into Arthur’s shallow ocean box.
Reflection could be accounted for in the model– but I suspect Gavin may have already accounted for that in his “albedo” forcing. (These are all anomalies. So unless reflection changes, the issue is subtracted out.)
lucia,
So y is the fraction of thermometers in the fast box? If that were reversed, the combined temperature wouldn’t look so bad, but the slow box temperature is still completely unreasonable.
Yes, y is the fraction of thermometers in the fast box.
I have reason to believe that Arthur made an algebra error in his computation of “y”. I get these for the two cases:
(x,y) = ( 0.9167, 0.0177 ) and ( 0.0177,0.9167)
I think their being reversed for the two cases is mostly a coincidence, but related to the low value selected for heat transfer between the boxes.
Here are the graphs I get for one of the two combinations in the comment above:

lucia,
Can you pick any three of the parameters as your free parameters and solve for the rest? If so, I’d pick x=y=0.18 for two of them. I’m not sure about the third, probably one of the box heat capacities.
Dewitt–
In principle I can pick any three free parameters. In practice if I pick three specific ones, I need to do algebra to organize the equations to solve for the other ones.
If I’d bought Mathematica, I could just use “solve” with any three, but I’ve cheaped out and been doing this by hand.
I organized the equations to pick Cs, Co and γs in order to parallel what Arthur did. I don’t really want to reorganize every possible way.
Why do you specifically like x=y=0.18?
Lucia, on case 1 you explained the hot ocean quite nicely (and I agree it gives a hot ocean) by the fact that the fraction of the forcing going into the “ocean” vs. that going into the “atmosphere” is much greater than their heat capacity ratios. However, that’s not true in “case 2” – here the fraction x = 0.17 is quite close to the heat capacity ratio Cs/Co = 0.13. I can’t see any physical explanation for why your integration here is heating up so much – unless it’s another problem with numerical precision (remember, I posted spreadsheets with 8-digit precision, use those numbers if you must).
By the way, I’ve posted my analysis in full detail, all algebra, the R-code is online with URL’s I’ve provided, as well as that spreadsheet. Lucia, you haven’t provided full documentation of what you’ve done – why not post it?
DeWitt and Lucia – I think Nick’s approach is essentially equivalent to ours but with the free parameters being ‘x’, ‘y’ (which he sets to 1) and the heat-capacity ratio (R in Nick’s notation).
Hi Arthur–
Hmmm something is weird with my comments! The servers ate one of mine.
Here’s the physics:
If we do an impulse response problem, then for x ≤ 0.12 with your shallow ocean, we will see the initial temperature gradient for the ocean exceed that for the atmosphere at time t=0. Case 2, with x = 0.17, sits just above that threshold. You can see this below:
However, for your choice of parameters, radiation from the ocean was low relative to that from the top box. So, heat cannot escape. As time goes on, the temperature of the ocean exceeds that of the atmosphere. For an impulse response problem, this happens fairly quickly as you can see in the figure above.
With the forcing profile we have, over the time period of more than a century, the ocean has warmed more than the atmosphere. However, if you compared to Case 1, you would discover this ocean is cooler than in Case 1. It’s just still warm compared to the atmosphere.
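For what it’s worth, a threshold near x ≈ 0.12 drops out of comparing the initial rates of change when a forcing switches on (a sketch, using the case 2 heat capacities):

$$ \left.\frac{dT_o}{dt}\right|_{t=0} = \frac{(1-x)F}{C_o} \;>\; \left.\frac{dT_s}{dt}\right|_{t=0} = \frac{x F}{C_s} \quad\Longleftrightarrow\quad x < \frac{C_s}{C_s + C_o} \approx 0.11 . $$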
Out of curiosity, why didn’t you consider using the Stefan-Boltzmann law to bound the magnitude of the radiation heat loss from either box? I suggested that in my very first post when I discussed Tamino’s choice of parameters– and it was important to doing that bounding analysis. As far as I could tell, you have consistently run cases with radiative heat loss coefficients which, if they were true for the baseline case, would result in a much, much, much hotter earth. Not quite Venus-like, but toasty.
Arthur–
Yes. I’ve been partly following Nick’s stuff too. He noticed an error in one of my results in an early post and I fixed that bug.
I didn’t find any new ones since then, but then, I’ve been diverted to this issue with flexible “ys”. (I too had initially started with y=0 and y=1 cases, permitting instead some ocean to creep into the top box. So, I had introduced a “z”. I like that approach better, but I wanted to show what I get for your cases, so I spent a little time arranging to solve the quadratic for “y”, which you use.)
Lucia – “Stefan-Boltzmann’s law” does not provide a “bound on the magnitude of radiation heat loss from either box” because (A) we don’t know what real temperature each box corresponds to (Ts and To are anomalies only, and we haven’t specified anything about the underlying temperature) and (B) neither box is likely to radiate directly into space – there are strong intervening atmospheric effects (convection and latent heat transport and the greenhouse effect!) which change the heat loss level. The time constants and heat capacities define the terms of the equation; if you have an additional “Stefan-Boltzmann” constraint of some sort, it’s equivalent to a constraint on time constants and heat capacities.
I see your point on the effect of response to a constant forcing with those parameters – somehow I’d thought it wouldn’t be an issue since over time forcings go up and down, but on average it’s definitely been up which should give that strong response. It does look like there’s something I got wrong somewhere – your Eq. 1N and 2N above are identical to my Eq. 11 and 14 here which are definitely supposed to be satisfied. Hmmm…
I ran my latest spreadsheet with Lucia’s version here of Arthur’s parameters, although I couldn’t use a fractional y. I got a reasonable temp plot, which is here. The solution met the physicality test. My coefficients were different – in yr^-1:

α_s 0.3461
α_o 0.0150
γ_s 0.2702
γ_o 0.0342
Nick – could well be – however, your gamma’s are about 100 times larger than mine – I’ve definitely done something wrong. Checking the algebra now… At least we’re checking one another here…
Arthur
We haven’t done so for the box model but we certainly have some notion about the absolute temperature on the surface of the earth and in the top layers of the ocean. So we do have some notion about how much energy is radiated from the boxes to regions outside the boxes.
Wow. So, somehow we are supposed to forget that climatologists actually have some remote estimates for the optical thickness of the atmosphere and how much energy is radiated from the climate system!
Look, we can make 1-lump parameter models where we estimate the amount of energy radiated from the lump. We can make a many, many, many lump GCM, where people like Gavin can estimate these things. It’s silly to suggest that somehow, we can’t estimate the amount of heat radiating from each box to space. There may be some slop, but you are using values that would correspond to a very hot baseline “planet”.
Yes. Notice I said they are the same equations. Hmmm….
Nick–
I’m confused. Arthur’s parameters for case 2 are the ones listed in this post. Yours are totally different. (αs = 0.3461 vs. αs = 1.00E+00.)
Do you mean you ran something for the same Heat capacities and time constants?
I haven’t explored the full space. The graphs here are just those corresponding to the value Arthur found.
(I do get the same α’s and γ’s he does for his choice of heat capacities and time constants. It’s only when we hit “y” that we diverge.)
There is a good discussion here about the energy balance:
http://www.climateaudit.org/?p=2581#comments
In particular it mentions the percentage of the radiation which escapes directly into space. A long long time ago, I tried to do a multi shell model but failed for numeric reasons:
http://www.climateaudit.org/phpBB3/viewtopic.php?f=4&t=53&p=917&hilit=s243a#p917
http://www.climateaudit.org/phpBB3/viewtopic.php?f=4&t=44&p=364&hilit=s243a#p364
Lucia,

I’ve put an image of the spreadsheet here. As Arthur says, I have a small set of input (“free”) parameters – the four brownish numbers top left. The rest are derived. α_s is output – I can’t specify it.
Ok–
I see your αs also corresponds to low values for radiation heat loss if we use Stefan-Boltzmann (as in books like “A Climate Modeling Primer”).
Lucia, α_s is not particularly low. The Earth radiates about 235 W/m2 as IR; if you multiply by 4/287 to get S-B rad/degree, it’s about 3.2. Divide by 1.3E+7 J/m2/K for the atmosphere heat capacity, and mult by 3.15E+7 sec to convert to yr^-1, and you get about 8. But if you allow more mass in the effective surface layer, it comes down. In fact, the implied C_s from the analysis is 7.8E+7, and if you use that, it comes down to about 1.3. Which is, indeed, larger than my 0.34, but not hugely.
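Nick’s arithmetic is easy to reproduce; a quick sketch (the 235 W/m^2, 287 K and heat capacities are the numbers quoted in the comment above, not independently derived):

```python
# Sketch reproducing the back-of-envelope alpha_s estimate from the comment above;
# all inputs are the numbers quoted there.
OLR = 235.0            # W/m^2 of outgoing IR
T_SURFACE = 287.0      # K, so 4*OLR/T approximates the Stefan-Boltzmann W/m^2 per degree
SEC_PER_YEAR = 3.15e7

sb_per_degree = 4.0 * OLR / T_SURFACE            # about 3.3 W/m^2/K

for C_s in (1.3e7, 7.8e7):                       # J/(K m^2): thin vs. implied surface layer
    alpha_s = sb_per_degree / C_s * SEC_PER_YEAR # convert to 1/yr
    print(C_s, round(alpha_s, 2))                # roughly 8 and 1.3, as in the comment
```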
Nick, your comment “The Earth radiates about 235 W/m2 as IR;” got me to thinking. Shouldn’t it be a simple matter to track the declining IR emissions? That should provide good evidence of the greenhouse effect and provide a temperature proxy too.
Ok, found the problem (I hope!) – Eq. 26 here was missing some important terms (now corrected on that page, but not correct on the subsequent posts yet), which affect the solution for y given the other values. I’ll be doing a bunch of revising… we’ll see how it goes!
Genghis,
It isn’t simple – it’s the subject of the NASA ERBE (Earth Radiation Budget Experiment). The radiation isn’t expected to decline long-term – it has to match solar radiation absorbed, for energy balance. But the AGW theory is that it will drop short-term, as the Earth warms. The warming restores the outward IR to 235 W/m2. The temporary reduction in IR could be less than 1 W/m2.
Nick–
I guess at some point I’m going to have to explain why, notwithstanding your valiant attempt to rescale, that rate of radiation heat loss is, indeed, low.
But for now, I’d rather at least wait to see how we all converge on the algebra!
Lucia, (#19522) – coincidentally or not, I now get
(x,y) = ( 0.9167, 0.0177 ) and ( 0.0177,0.9167)
also. Working on fixing up the original post referenced here. It really would have been helpful if you’d made some of these detailed comments earlier – the problems with Case 2 were far more obvious than case 1!
Ok, corrected cases 1 and 2, and new cases 3 and 4 with 10%-ocean Co and higher γ are now posted here.
I agree with Lucia that my choice of “slow” heat capacity originally was considerably too low – it was motivated by a constraint I thought I’d derived from that Eq. 26 that turned out to be in error, and the actual constraint is quite different. I still need to look at some of the pictures on this to get a better feel for it.
On the choice of heat transfer rate – it really shouldn’t make much difference to the solutions because it’s sort of folded into the value of ‘x’ – i.e. the “fast” body follows the forcing F(t) pretty closely, so that the heat transfer term from fast to slow is almost the same as a shift of input forcing from fast to slow.
Also, my discussion of constraints on heat transfer rate still holds – the error in my analysis had no bearing on the relation between γs and αs, αo, so those conditions that limit the heat transfer rate γs for a given choice of fitted time constants still stand. The graphs in that post do need to be updated for the changed relationship between x, y and the other parameters though.
Is it really a sensible constraint that the energy is balanced by this type of simple model?
I mean: the model only has two boxes and an intuitive choice would be box1 = “atmosphere” and box2 = “mixed layer”. That means we are not modelling the deep ocean. Changes in the exchange between the mixed layer and the deep would not be taken into account. Consider how a reduction of the thermohaline circulation would affect the energy balance of the mixed layer box.
Aslak, there’s an energy constraint, but no enforcement of a general balance. The γ’s express the flow between regions, and that has to balance, so γ_s C_s = γ_o C_o. But the α’s express losses to space, the deep etc, and the only requirement is that they have the right sign.
Arthur-
Gosh Arthur. You seem to be assuming I knew enough to make those specific comments earlier. Why do you even think that?
When I posted my blog post showing case 2, all I had done was plot the cases. I just used your numbers and checked that my integration was right. I knew Case 2 looked just as bad or worse– but I thought there might just be a typo in, say “y”.
In comments on that post you asked me why I didn’t plot case 2. I told you it looked just as bad and I didn’t want to pile on.
We had some conversation, where you said your plots looked good.
I responded this:
My basis for saying this was that our graphs did not match. PERIOD. That’s it. If I’d known more, I would have said more.
That was Friday– the day before the Labor Day weekend holiday. I continued to shampoo my carpets, clean the bathrooms etc. before a series of parties.
It turns out that you had miscalculated your x’s and y’s. But, despite my suggesting this, you did not check your calculations and posted. I, of course, had already told you I would respond to that post on Tuesday (after the Labor Day weekend holiday).
I began my series of Labor Day weekend BBQ get together.
Tuesday morning, I triple checked my integration routine to make sure I hadn’t bungled and posted. Later Tuesday morning, I redid the algebra and posted the comment with the specific values.
While it might have been helpful for me to provide detailed information before I knew the information, the fact is, I can’t post things I do not yet know. If you don’t even pay attention to my suggestion that, if our graphs do not match, you might have an error in your calculations, I don’t know that there is much I can do to help you.
Aslak–
The difficulty is that to account for changes in heat flux into the bottom of the box from the deep ocean would make it a three box model and Arthur is trying to show Tamino’s two box model would work.
I tend to agree that there is a problem with creating a two-box model with a very shallow ocean. When I first wrote the equations down for the two-box model, I assumed that
a) we have two boxes only, so
b) all forcing must go into one box or the other.
I hadn’t anticipated someone might propose that the ocean could be very very shallow. If it is too shallow, maybe, some forcing might, realistically, shoot through the shallow ocean and penetrate below the box. (This would, of course, mean we need a third box!)
I tend to think we will not find any two-box model with earth-like parameters and/or behavior that maps into Tamino’s regression parameters. (That is, after all, what is motivating this. Arthur thinks he can show that Tamino’s model can be related to some physically meaningful two-box model.)
Ordinarily, not finding a model would just mean we say that Tamino’s regression is an interesting regression, but there is no reason to think of it as having greater physical basis than a 1 box, or even zero box model. This would hardly be big news, since people have ideas that turn out to not be so brilliant all the time.
lucia,
My choice of x and y is related to where the thermometers on the real earth are located and what they actually measure. That is, some of them are on land and some on sea and they measure the surface temperature, more or less. So I would call the fast box land rather than atmosphere but the slow box would still be ocean or sea. That would imply y=0.3. But if you look at the land only , sea only and combined temperatures, the factor x in x*land+(1-x)*sea=combined is about 0.18. Maybe the influence of the ocean on coastal regions or regions near other large bodies of water is sufficient to reduce the effective land area, or maybe it’s just an artifact. The forcings are also applied to the surface, so it seems logical to me that x and y should be the same.
DeWitt–
I think your thinking is sound on the physics and “makes sense” part of the problem.
However, due to the algebra, it’s difficult to just specify any old parameters. (It can be done, but since I don’t have Mathematica, I have to do a whole bunch of algebra, rearrange, and triple check that I coded all the values in right. As you can see, Arthur had a bit of a snafu, and I know I have done the same! If I had Mathematica, I’d just tell it to solve given any constraint I liked. I am tempted to buy Mathematica…)
My plan was always to set up the equations one particular way, pick constraints, and find the subset of “possible” answers using the loose constraints Nick is applying. (That is: just don’t violate the 2nd law, but might otherwise still be unrealistic.)
Then, after finding the cases that don’t violate the 2nd law, look at the combinations of parameters afterward to see if they are ‘realistic’.
So, for example, if someone proposes “the” solution that Tamino’s regression maps into has y=99% while the ocean is 3000 meters, I would suggest that this combination did not “make sense”. After all, the thermometers are on the surface, so it can’t make much sense that they are somehow really measuring the temperature of a well mixed box of “deep ocean”.
But, right now it’s very time consuming to pick and chose x and y and just run those.
Lucia:
But you didn’t post your graph of case 2, which would have been at least somewhat helpful to me. When I plotted my case 1 graph it looked like what you’d plotted. I had no quantitative way of telling what your “did not match” meant. And I had some 40 equations to review by that point – of course I rechecked a few of them, and I didn’t catch the error then (I verified that all the equations involving the alpha’s and gammas and r+- values were right, for instance).
You really do like to keep stuff up your sleeve. I’m sure it’s fun to catch people out when they goof up, as I did in this instance, but it’s hardly a positive relationship-building technique. But enough on that.
There are now a “case 3” and “case 4” posted with much higher Co and gamma_s values – I would appreciate your using your approach to plot the temperatures and verify they look ok. Thanks.
Arthur–
Gosh Arthur! You certainly are demanding of my time and good favor all the while casting aspersions on my motives!
As for your insinuation of my motives:
Look at the time stamp on your request for the helpful graph:
September 4th, 2009 at 9:52 pm.
That’s on Friday night on a three day weekend. I don’t know about you, but I had spent the afternoon shampooing carpets and cleaning bathrooms and knocked off for happy hour and dinner. I wasn’t going to post a graph that second.
I was hosting parties on both Saturday and Sunday.
I agree it might have been more helpful to you for me to drop everything and beaver away to find your specific error and prevent you from posting a rather embarrassingly wrong post on Saturday. But honestly, there is nothing you can say to make me feel one bit of guilt for not cancelling my Labor Day weekend bash to help you find an error in your 40 equations before you hurried to post something horribly wrong.
Even given the fact that it was very late on a holiday night, I took time to tell you I had reason to believe the graphs came out differently– and that case 2 looked as bad as case 1. But this, evidently, is “holding something up my sleeve”!
Has it occurred to you that at 10 pm on a holiday evening, my husband and I might be interested in doing something other than posting a graph for your benefit?!
Given what you knew, you could have:
* coded up an integration scheme to see the graph yourself. (You can still do this.)
* partied on Labor Day (as normal people do) and waited until Tuesday for me to post a graph.
You seem to have some amazing notions about how promptly I should respond to your requests!
Lucia – thou doth protest too much. My complaint was only that you didn’t choose to post both case 1 and case 2 graphs in your original, or more of your reasons for doubting – or posting at least your code and algebra in full as I have done. My Saturday post on heat transfer issues was actually *not* embarrassingly wrong – the error of eq 26 affects a small part of the discussion regarding Co, but otherwise the relationship between heat transfer and the alpha values in this post was entirely correct (that was the stuff I verified in some detail after your Friday comments). The only “embarrassingly wrong” post was this one from Thursday which was what you’ve been commenting on all along here…
Arthur–
Gosh again!
I answered my reason for this long ago. You asked me why I didn’t post it while simultaneously accusing me of cherry picking. I told you I didn’t post it because I didn’t want to pile on.
I didn’t see any point in posting: “Oh. And look at this second hilariously wrong graph that Arthur didn’t think to check!!!!”
Your post about heat transfer has some incorrect statements about heat transfer and box models which I have not discussed– I deferred engaging those points because
a) we have just come off the holiday weekend
b) it is not useful to sort those out until we have your math errors sorted out and
c) some of the incorrect statements “don’t matter” to any substantive point about the curve fit.
The post also does make some statements that are correct– some of which “matter” and some of which “don’t matter” to any substantive point. I’m obviously going to avoid getting into arguments about wrong points that “don’t matter”, even if they are totally silly.
That post contained graphs of time series that were simply wrong (though it appears you are revising them.) If you do not find that embarrassing, why are you belly aching about my supposed plots to “catch you out” on errors?
Lucia, whatever your reasons, or non-reasons, my point is simply that complete openness is better because it gets things corrected faster.
Arthur–
You have been insinuating that I have not been open, and suggesting that I have some bizarre motivations for this lack of openness. Oddly, you have both insinuated my initial motive for not showing Case 2 was “cherry picking” to make you look bad because case 2 looked good. You later insinuated my motive was to make you look bad because case 2 looked so horribly bad. (Never mind that I told you it looked bad. So, evidently, my motive was to read your mind, know you would think I was lying, and so you would proceed on the assumption I was lying.)
Then, amazingly enough, when it turns out I was telling the truth, you accuse me of lack of openness because somehow, I did not tell the truth in a way that would cause you to consider the possibility that you had an error somewhere in your algebra!
Sheesh!
The truth is, I have been completely open. I’ve posted things when I knew them and provided information as I knew it.
I suspect you are grumpy because you are embarrassed to have posted plots that were totally wrong. Stop hallucinating that that is in any way my fault.
lucia,
I’m in the dark again (not an unusual situation). With a fixed forcing, doesn’t To have to equal Ts at infinite time, or at least 99.6% of Ts at 150 years with a slow tau of 30 years? It doesn’t look like that’s happening when you apply your step forcing. I’m probably missing something here but if I remember correctly, the initial condition assumed at equilibrium Ts=To.
With the math this complicated, why anyone is getting defensive about making mistakes is beyond me.
DeWitt–
No. Given the baseline, with fixed forcing, To=Ts=0 when the fixed forcing is F=0 for a long, long time. At other values of fixed forcing, the atmosphere can heat more or less than the ocean, depending on the parameters.
On the mistakes– I think there is no shame in making mistakes. Unfortunately, Arthur seems to be suggesting that I am somehow supposed to know precisely what information he needs to fix whatever mistake it turns out he had made and post that.
How was I supposed to know what would help him find his problem?
I don’t know why he thinks I know things long days before I post them!
Arthur–
The figures you requested are now posted as an update to the post.
They don’t look bad. So, the arguments will now be over whether those values are realistic.
But at least now the values of the heat transfer coefficient and the ocean heat capacity aren’t ridiculous for a two-box model of the earth.
Quite honestly, I’m still amazed that you ever suggested that a 20 m ocean, and heat transfer coefficients that correspond to an earth with ocean and atmosphere separated by Owens Corning pink insulation, amounted to a box model that is “realistic” for the planet earth. I would never have called those parameters “realistic” for the earth even if they had corresponded to temperature time series that did not look wildly unreasonable.
This sort of thing makes it very difficult to believe that you are objectively assessing what is or is not “physically realistic” and creates the impression that you just want to call anything realistic if it will save a theory or preconceived notion of what you would like to be the truth.
Looking at Arthur’s new graph, what are those strange dips in the surface temperature?
John–
The graph with Arthur’s new parameters has:
1) Blue: Atmosphere temperature.
2) Red: Ocean temperature
3) Green: “Surface” temperature which is a linear combination of atmosphere and ocean. (It’s dominated by the ocean for this solution. )
4) Yellow: I show HadCrut. (Why? It was handy. But the regression was to GISSTemp, so it would look better compared to GISSTemp.)
So, the dips in the blue trace are supposed to be for some “well mixed atmosphere”. In principle, if we had temperature data for the full atmosphere, we could compare. In practice, the closest thing might be satellite data, so you could integrate vertically and compare over that time period. Prior to that, you would need balloon data.
The ocean is for a “well mixed” top 200 meters or so. So, one could hunt down ocean heat content data to see if that looks realistic.
There are other possible tests.
Bear in mind, there is going to be a neighborhood of solutions near this. These are just two examples.
I think these specific values of αs and αo correspond to a snowball earth. (Oddly … I scrawled some algebra and changed my mind about believing it corresponded to a hot earth– which is what I said above. If I don’t discover algebra mistakes, or don’t change my mind about what I thought, I’ll explain more later.)
But even if these correspond to “snowball earth”, it may be possible to find a set that works out ok. These graphs are at least close enough not to call “loonie toons!”, the heat transfer coefficient is no longer several orders of magnitude too low, etc.
I think the dips in the second case mean that there is too much gain associated with the fast eigenstate. Therefore, when the input is in the right frequency range you see a huge oscillation.
lucia (Comment#19629) I think what you want is sea surface temperature data. I mentioned before you can decouple your equations in terms of a difference and a sum (roughly speaking). This way you can fit each eigenstate separately.
lucia,
I have a hard time wrapping my head around treating radiation to space as an anomaly. The reason being that an increase in solar radiation should cause a positive radiation to space anomaly at equilibrium but a ghg forcing will have a zero anomaly at equilibrium.
I can see how the choice of parameters could have Ts not equal to To with a non-zero forcing, but I’m not at all sure that those parameters will be physically realistic, especially wrt radiation to space from each box. If it’s not too much trouble, would you please post the impulse forcing response graphs for Arthur’s corrected parameters.
DeWitt–
Both increased solar and increased ghg forcing will result in a positive radiation anomaly at equilibrium. This happens after the temperature rises above the baseline.
Since I’d checked my entries, making the impulse response graphs was easy. They are posted in an update.
John,
If you are asking why there are dips in the temperature, it’s because there were large volcanic eruptions causing a large negative forcing from reflective stratospheric aerosols in the forcing file. Among the largest: Krakatoa 1883, Tarawera 1886, Santa Maria and Soufriere 1902 ,Katmai 1912, Agung 1963, El Chichon 1982, Pinatubo 1991. Krakatoa produced a forcing of -3.3 W/m2 in 1884.
Those are step responses and not impulse responses. Anyway, you can see some features from the step response. If you look at the plot for the surface temperature, you can see that the fast eigenstate dies out by the time the graph reaches about 0.4 and the slow eigenstate continues afterward to approach equilibrium at about 0.6. This makes sense, since the fast eigenstate should dominate the response of the surface temperature.
Doing a frequency or impulse response would also be interesting.
DeWitt Payne (Comment#19638) those dips seem too large compared to the temperature data. Perhaps the forcing values used for volcanoes are too large.
Lucia – thanks for posting the new graphs (any reason they’re so small on the page though?)
I know I’m defensive and embarrassed – but I’m also very glad we sorted this out. I’m human and make occasional mistakes, and I have plenty of real-life concerns too. The nuances of real personal human communication are generally lost in this medium, and I apologize that I took offense at the tone of your posts and comments towards me. And I’m sure almost anything else I say on this could be taken wrong similarly by you as has happened all too frequently up to now, so I apologize for my apparent poor skill at communicating my appreciation for your help on this (and my simple wish that that help had come a bit sooner).
lucia,
Thanks for the impulse response graphs. I’m guessing that the difference in Ts and To may be a function of the difference between x and y, but it’s only a guess.
I understand that the two box only model produces a positive radiation to space anomaly from any forcing because you only have one temperature per box, but in the real world looking from deep space, a greenhouse step change reduces emission to space which then recovers over time to zero anomaly as the surface temperature responds while a step change in solar radiation produces an equivalent change in emission at equilibrium. That may not be significant for simple modeling purposes, but it still bothers me.
John Creighton (#19640) – the sensitivity of an instantaneous or fast-box fit to the volcanic forcing was an important factor in Tamino’s original discussion of this – it’s expected that you should see bigger temperature dips in the fast box than are seen in measurements (see his first figure with the simple regression model). Though maybe some components of the satellite measurements should show this exaggerated fast response?
John,
I haven’t looked at the IPCC tables, but my guess would be that stratospheric aerosol forcing is nearly as well understood as ghg forcing because we have, or at least should have, good data from El Chichon and Pinatubo. If I understand what’s going on (always a question), the size of the response in temperature is a function of Cs, the heat capacity of the fast box; x, the fraction of the forcing going into the fast box; and y, the fraction of thermometers in the fast box. So a too large response suggests the heat capacity of the fast box may be too low, x is too large or y is too small, or a combination of all of them.
I just skimmed it. Tamino says the response to Volcanic forcing is wrong as it looks wrong on the graphs. I don’t think we should say the model is right even though it gives the wrong answer for volcanic forcing.
In a sense he is right, though, in that if the fast state is trying to fit too much of the signal which is low frequency, say a period of about 5 years, then the fitted gain for this state will end up being too high, and when you get an impulsive signal like a volcanic eruption the response will be way too energetic.
The edit feature doesn’t work. I meant to delete my last comment. I’m not sure now why the volcanic response is so energetic.
Is the volcanic forcing split between the two boxes the same way the solar forcing is? If the volcanic forcing is applied more to the surface box I could see the response being more energetic.
Arthur–
I chose the thumbnail option for updates since they weren’t the main subject of the post. Like all my figures you can click and see it full size.
It might be worth checking. That’s what I mean by “checking” afterwards.
Once we’ve narrowed down to cases that correspond to the regression and aren’t obviously incorrect, we can identify what we need to look up to “check” answers.
The box model has an “ocean” temperature, an “atmosphere” temperature, and is fit to a surface temperature. I think you pointed out that the better the surface temperature fit to the regression, the more confidence we have that there might be physics there. By the same token, the closer the “ocean” and “atmosphere” temperatures are to available measurements, the more confidence we have. Moreover, the closer the parameters themselves are to realistic values, the more confidence we have.
Right now:
The surface looks good (it always did unless there were errors in the regression, but Nick’s routine works so that hasn’t been a problem.)
For this combination, the ocean and surface don’t look insane (unlike the previous tries.)
As far as I can determine, the heat transfer coefficient between the boxes is not several orders of magnitude off from realistic values. Unless someone else comes up with some reason why that value is way too low… well, I’m not looking at that first. My sense is that value is within the level of uncertainty of knowledge and … well… for a box model.
I think the α’s aren’t quite realistic for earth, but I haven’t explained that, and anyway, I need to check if I’ve even convinced myself. Maybe the chicken scratchings I’ve looked at are wrong. If my chicken scratchings are right, this could be a problem because even with a box model, we actually have decent information about how much energy is radiated at equilibrium. But, as I said, my chicken scratchings could be wrong.
I think we need to compare the ocean and atmosphere temps to anomaly values for the top 200 or so meters of ocean and the full atmosphere. (So… Levitus and satellites. )
We might want to ask ourselves if we believe that the ocean would heat more than the atmosphere in response to a step increase in forcing. That’s a sort of “all in all, do the parameters collectively seem ok” type question.
Dewitt
Yes. But the question to ask is: Is the response of the fast box too fast for what it is? It’s not the surface temperature. It’s the average temperature of something one might call “the whole atmosphere”. So… maybe from the surface to the top of the troposphere.
If that’s what the fast box is, the thing to do is get the temperature data from the satellite, integrate from the ground to the top of the atmosphere and look at it. For all I know it looks like the blue curve. Or not. Checking is “the empirical method”!
The same holds for the ocean box. Does average temperature of the top “M” meters of ocean look like that? If yes, then the box is looking “ok”. If no, then the box is not looking “ok”.
John C.
This model is simple: All forcings are treated the same using the “x” factor.
The surface is also responding quickly to solar. It’s just that solar isn’t as wild.
Arthur-
Apology accepted.
Yes. I appreciate this, and even sympathize that the help did not come sooner. I wish I could have helped sooner. However, it was not in my power to have done so as I was just as much in the dark as to what was wrong as you.
On reflection, I guess maybe it might not be so obvious to you that the reason I was not supplying graphs in answer to your requests was the fact that I was busy with Labor Day weekend partying, and so you assumed I was withholding the information. So, I apologize for getting pretty angry about your even thinking I was intentionally withholding information. (But… I wasn’t. It was just.. you know, a long weekend. Lots of wine. Fun. etc. I hope you had fun too.)
lucia (Comment#19654) I think each type of forcing should be dealt with separately. I’m skeptical of the series used for volcanic forcing, and I’m not sure that Tamino knows how many watts of forcing a given concentration of volcanic dust corresponds to.
John Creighton (Comment#19656) September 9th, 2009 at 4:15 pm
Maybe so, but “radiative forcings” are meant to represent comparable quantities, i.e., equivalent to TOA perturbation of x W/m^2, so to use GISS forcings I think we will need to take them as they are meant.
John–
In a GCM, I think there is no doubt each type of forcing should be treated differently. I’m sure if you ask Gavin what he does, he does, indeed, treat each forcing differently. (In fact, I seem to recall he said as much in some email conversation back around 2007 . . .)
But clearly, if Tamino’s regression is supposed to “just work”, then either someone needs to clarify why forcing should be applied in some specific way to the boxes or we need to pick something simple. So, for identifying whether or not that regression maps into some box model, I have picked the same “x” for everything.
oliver (Comment#19658) well, looking at the results. There are only two possibilities with regards to volcanic forcing. The first is the model gives the wrong result. The second is the volcanic forcing series is wrong or at least not equivalent to the solar series from the perspective of the model.
John Creighton– Why do you think that there are only two possibilities? The other possibility is that, if you look up temperatures for the full atmosphere, they really are that variable.
Let me at least compare to UAH and RSS lower troposphere.
Arthur Smith (Comment#19537) September 8th, 2009 at 2:59 pm
DeWitt and Lucia – I think Nick’s approach is essentially equivalent to ours…
Arthur,
You refer mostly to my/mine, but occasionally to us/ours. Is yours a solo or group effort? I just find it weird that you switch between the two, and am curious who else constitutes “us” and “our.” When I’m working on something solo, I would never think to use collective terms like that for my work. Just seems weird, I guess.
Terry–
Arthur was addressing me. When he said “ours”, I think Arthur meant “Arthur’s and Lucia’s”. In reality, all three of us are doing similar things.
Lucia,
A definite and logical possibility, as I haven’t read this thread closely yet, thanks. One of the earlier threads (that I asked about) seemed to not include you, and then I saw it again tonight. I’ll have to go reread some things. Thanks!
There is a new spreadsheet at the CA BB. It has been improved in some minor ways. It shows Lucia’s coefficients, does a better calc for very short time constants. But more significantly, I’ve added a second sheet which does the same calculations and plot, but with SOI as an added regression variable.
I recommend keeping x not too far from the critical value (D15). Very small values lead to implied low-mass boxes (C20), which can lead to large ocean temperatures.
I have put my 2 box spreadsheet model results up at:
http://moderateclimate.blogspot.com/
I think it has realistic parameters, conserves energy (except for the deep ocean heat sink), and fits the data pretty well.
Please let me know what you think (here at The Blackboard).
The spreadsheet itself is at:
http://spreadsheets.google.com/ccc?key=0Als9xXZMCAXsdHlZSkI1UWhuZVJZUmFhUVNBLUtxc0E&hl=en
There have been a few points about volcanic forcing and the fast response. Effectively, volcanoes only have about one-third of the temperature impact that would be expected based on the decline in solar energy produced by the sulfate aerosols.
The climate models deal with this by building in an “efficacy of climate forcing” factor into all the different forcings. I guess one could think of this as a fudge factor since there should be no reason to expect a different response to each watt of change. A watt should be a watt. [I would rewrite the equations using the Stefan-Boltzmann equations so that a watt is a watt, but I guess that is for another day].
One can read a little more about this at this link but I can’t explain the rationale fully.
http://data.giss.nasa.gov/efficacy/
Bill Illis (Comment#19674) September 9th, 2009 at 10:51 pm
Actually, the basic concept makes sense since radiative forcing is a sort of “one-box” approximation (net irradiance change at the tropopause), while a GCM is able to model different surface temperature responses to forcings which act at different levels or couple with other forcings in different ways. The efficacy is exactly the fudge factor when you still want to describe climate change in terms of forcings but also want to modify the discussion with the results from the GCM.
SteveR–
That looks good. What are the x and y parameters? Co, Cs etc?
SteveR,
I can’t read column K or S in your spreadsheet. There is insufficient contrast between the yellow text and the white background. The ocean heat capacity is completely lost as it’s yellow on yellow. Yellow is fine against a dark background, not so good on a light one.
Lucia: “What are the x and y parameters? Co, Cs etc?”
Since my model uses a different technique, not all parameters may correspond, but this is what I use for heat capacities:
Co=400 MJ/(K-m^2) Heat capacity of ocean box (~100m well mixed depth)
Ca= 50 MJ/(K-m^2) (Comparable to your Cs?) Heat capacity of atmosphere box (~3x higher than dry air, due to heat required to maintain constant RH?).
Most of the other fitting parameters are the heat conductivities between the boxes and outside (in W/(K-m^2)):
Gao = 3.0 atmosphere to ocean
Godo=0.22 ocean to deep ocean
Gsa =1.78 space to atmosphere (this corresponds to 2.25K climate sensitivity if CO2 doubling causes 4 W/m^2 forcing).
Other assumptions are:
Both space and deep ocean are heat sinks at -0.1 K from the GISS baseline
All forcing is applied to the atmosphere box.
The temperature to compare to GISS land-ocean data is a weighted average (Ta+2To)/3 based on area.
There are some additional comments in the spreadsheet itself.
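Reading the parameters above, here is a minimal sketch of the model as I understand Steve’s description (my paraphrase of the spreadsheet, not his actual formulas):

```python
# Sketch of the two-box model SteveR describes above; a paraphrase, not his spreadsheet.
# Units: heat capacities in J/(K m^2), conductances in W/(K m^2), temperatures are
# anomalies in K relative to the GISS baseline.

C_o, C_a = 400e6, 50e6                 # ocean and atmosphere box heat capacities
G_ao, G_odo, G_sa = 3.0, 0.22, 1.78    # atm-ocean, ocean-deep ocean, space-atm conductances
T_SINK = -0.1                          # both space and deep ocean held at -0.1 K

def step(Ta, To, F, dt_seconds):
    """Advance one explicit step; all forcing F (W/m^2) is applied to the atmosphere box."""
    dTa = (F - G_sa * (Ta - T_SINK) - G_ao * (Ta - To)) / C_a
    dTo = (G_ao * (Ta - To) - G_odo * (To - T_SINK)) / C_o
    return Ta + dTa * dt_seconds, To + dTo * dt_seconds

# The series compared against GISS land-ocean data would then be (Ta + 2*To)/3.
```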
>There is insufficient contrast between the yellow text and the white background.
Sorry, that is fixed now.
Google docs had changed my green to yellow.
Steve–
My first thought was: Why the high heat capacity for the air?
Then I remembered Dewitt was trying to explain to me about the heat capacity thing. I thought he was just talking about the heat transfer part linking to the ocean. But somehow, today that finally registered.
Now I get why you might do that. (D’oh!)
Are your values related to the regression parameters from Tamino’s fit in any way? Or are they just physical values you think make a decent box model?
Do I understand that the atmosphere radiates to space, the ocean doesn’t, but you have heat transfer to an ocean sink?
Are your values related to the regression parameters from Tamino’s fit in any way?
No.
Or are they just physical values you think make a decent box model?
Yes.
Do I understand that the atmosphere radiates to space, the ocean doesn’t, but you have heat transfer to an ocean sink?
Yes. For simplicity, only the atmosphere communicates with space, and only the ocean (well mixed layer) communicates with the deep ocean (which is just a heat sink in the 2 box model).
I am planning on making the deep ocean a 3rd box in the next version. I think it will be a straightforward extension (which may not show much other than the deep ocean temperature not changing much on century time scales).
Steve–
Ok. Well… believe it or not, our two box model doesn’t really distinguish between “radiation” from the lower box and “heat transfer to the deep ocean”. The αo could, in principle, be the other box.
The reason I was wondering is that your choices for Co, Cs and heat transfer coefficients “broke” the regression parameters Arthur had posted when fitting a regression using Tamino’s time constants. (But Arthur also found a small typo, and I didn’t check with the new values.)
By “breaking”, I mean going from the regression parameters to determining the remaining time constants involved taking the square root of a negative number. This doesn’t mean there is anything wrong with your numbers– it just means they probably aren’t consistent with eigenvalues of 1 year and 30 years picked by Tamino. (Arthur has provided some evidence that a long time constant of 20 years provides a better fit anyway. So….. it’s entirely possible some set of “good” values you pick won’t match something with a time constant of 30 years.)
lucia (Comment#19701)
It could distinguish between the two types of heat dissipation, though, because in your linear model you need to define an equilibrium temperature (temperature in the absence of forcing). This temperature could be different for dissipation into space than for dissipation into the deep ocean.
lucia: “So….. it’s entirely possible some set of “good†values you pick won’t match something with a time constant of 30 years.”
I think (from looking at response times) my time constants are roughly 15 years for well mixed ocean and 2 months for atmosphere.
John Creighton: “This temperature could be different for dissipation into space than for dissipation into the deep ocean.”
Although ideally, _if_ there was a preindustrial climate steady state, then the deep ocean reference temperature should be the same as the space reference temperature.
John–
If we assume we begin at “equilibrium”, both the deep ocean and space have to be at the same “anomaly” temperature. The alternative is to assume things are at a disequilibrium at the beginning of the time series.
If we assume dis-equilibrium, then, technically, we need to pick three anomalies and a baseline. (The convenient baseline is for space to be at T=0, which specifies the fourth.)
So, in that case, we have now introduced a third box for the deep ocean. (And really, it doesn’t make sense to keep the deep ocean at dis-equilibrium. So, we should really set it to T=0 like space or really, really add the third box.)
Steve–
Shorter “long” time constants worked well for the 1 time constant model. So, I guess I’m not surprised it still worked well with the 2 time constant model. (That said, this exercise with a 2 time constant model is showing just how weird 2 time constant models can look.)
The volcano responses are still wildly unrealistic. I’m not aware of anyone claiming to have actually succeeded in finding a deep ocean response to Pinatubo (no clear dip in OHC associated with it, AFAIK), and with the other temps the blip down is either too weak and brief or far too big and long-lasting. That seems to be a very tricky feature to get right. Some GCMs seem to do okay, but EBMs to my knowledge fit Pinatubo best when the sensitivity is low and the response times short.
I think Robert Knox has worked on a problem which is very similar to this:
http://www.rochester.edu/college/rtc/Knox.html
“If we assume dis-equilibrium, then, technically, we need to pick three anomalies and a baseline. (The convenient baseline is for space to be at T=0, which specifies the fourth.)”
Let me see if I have this right. For a dis-equilibrium model, the surface temperature and the sea surface temperature need to be linearized about some reference point. The difference from this linearization point is the anomaly, right? There is a constant forcing from the deep ocean which impacts only the ocean surface temperature. There is a constant forcing from the sun which impacts both the atmosphere and the sea surface temperature. Then there is, on top of that, the anomaly part of the forcings. I'm not sure why we need a temperature for space or the deep ocean, except to calculate the constant term of the forcings.
John,
No matter what, we linearize about an assumed “equilibrium” value for some value of forcing. For convenience, it’s useful to pick the forcing in, say, 1880. (You could pick anything.) For the two-box model equations to be useful approximations, the temperature of the ocean and atmosphere must not get “too far” from this equilibrium level.
Yes.
At equilibrium, that heat flux from the deep ocean should be zero in both "anomaly" and "not-anomaly" versions. Otherwise, the ocean is not, technically, at equilibrium. If the deep ocean exchanges heat with the upper ocean, then the two are, technically, not at equilibrium.
In the non-anomaly space, the forcing from the sun is assumed to have been frozen at 1880 values forever, and ever, and ever. Then the climate system equilibrates at some values of ocean temperature, atmosphere temperature and if you like, deep ocean temperature.
In the anomaly problem, the sun’s constant forcing and the equilibrium temperatures are subtracted. The remaining values are the anomalies.
So, if the forcing stays constant, the anomalous forcing is “zero” and the temperature anomalies of the atmosphere, ocean and deep ocean must stay at “zero”, in this baseline.
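To make that bookkeeping concrete, here is a minimal sketch of the decomposition (the "eq" values are just labels for the equilibrium solution at the frozen 1880 forcing, which I'm calling F^1880):

$$T_s = T_s^{eq} + T_s', \qquad T_o = T_o^{eq} + T_o', \qquad F = F^{1880} + F'$$

Because the box equations are linear, subtracting the equilibrium balance leaves equations of exactly the same form in the primed (anomaly) variables. So if F' = 0 forever, the anomalies T_s' and T_o' stay at zero forever, which is the statement above.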
You don't need a temperature for space. (Or, at least, it's most convenient to set it to zero. You could do other things, but it makes the equations weird, adds no value, and you have to shift the temperature of everything to deal with not making the temperature of space zero.)
However, if you have a deep ocean, and you are permitting heat transfer to the deep ocean, strictly speaking you need a temperature for the deep ocean. Unlike heat from the sun or the universe, if you permit heat transfer from the mid ocean to the deep ocean, the deep ocean could, at least in principle, warm or cool. This is because the deep ocean is not, technically, an infinite sink.
You might be able to approximate it as an infinite sink, but that is an assumption that ought to be tested when you are done. Also, if you think the deep ocean is so huge it might be a big sink, you need to seriously ponder whether you thought it was at equilibrium in 1880. If its time constant is, say, a billion years, and it got out of equilibrium for any reason, then it would act as a heat source or sink for a long, long time. You would need to account for this.
So, once you make the ocean shallow, you do need to go back and at least take a look at the heat transfer to the deep ocean when you are done and see whether or not treating it as an infinite heat sink made sense. If it “should” have warmed, you need to think about that.
lucia (Comment#19713), although it is usually the case that people linearize systems about equilibrium points, it isn't strictly necessary. The only difference is you end up with a constant drift that can be represented as a forcing.
and then there are other roughly equivalent ways of linearizing about a non-equilibrium point, as I mention here:
http://www.physicsforums.com/showthread.php?t=325690
John–
I agree we may linearize about a non-equilibrium point and that if we do, and forcing was constant, we would drift toward equilibrium.
I guess I'm explaining it as linearizing about equilibrium because the regression method has been assuming that the system was at equilibrium in 1880. If it was not, then I think, regardless of how we think of the point about which we linearized, we would end up having to either a) say we know the deviations from equilibrium at time t=1880 or b) change the regression to find the deviation from equilibrium.
The expression of the equations would look a bit different. But, it can be done.
Lucia: "I guess I'm explaining it as linearizing about equilibrium because the regression method has been assuming that the system was at equilibrium in 1880."
Lots of good points, but one quibble:
I think T=0 is the GISS baseline, the average from 1951 to 1980, not the 1880 temperature which is about -0.2K.
My model fits best with the equilibrium set to about -0.1K. Maybe 1880 was still recovering from the little ice age.
Steve Reynolds (Comment#19720) September 10th, 2009 at 4:46 pm
Ts is anomaly wrt 1951-1980, but NetF is 0.000 in 1880.
“Ts is anomaly wrt 1951-1980, but NetF is 0.000 in 1880.”
Yes, but even if net forcing is defined as zero in 1880, that does not mean the temperature in 1880 was in equilibrium.
I have extended my 2 box model to 3 boxes:
http://spreadsheets.google.com/ccc?key=0Als9xXZMCAXsdENKbUc1UnVtaVluNXJEYjNMNlZvd0E&hl=en
There is not much change in deep ocean temperature (about 0.05K from 1880 to now). However, it changes another 0.05K in just ten more years if forcing stays at the last (2003) value provided by GISS.
Steve Reynolds (Comment#19726) September 10th, 2009 at 8:16 pm
It’s an assumption implicit in the way “net forcing” and “equilibrium” are being defined.
SteveR,
0.05 K is a huge change for the deep ocean, if I did my sums correctly. A change of that magnitude over 10 years is equivalent to about 2 W/m2 heat transfer, which is about the same as the total forcing.
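Roughly, and assuming a deep layer of about 3,000 m of seawater with ρ ≈ 1025 kg/m³ and cp ≈ 4000 J/kg·K (the exact depth only matters for the order of magnitude):

$$\frac{Q}{\Delta t} \approx \frac{\rho\, c_p\, d\, \Delta T}{\Delta t} \approx \frac{(1025)(4000)(3000\ \mathrm{m})(0.05\ \mathrm{K})}{10\ \mathrm{yr} \times 3.15\times10^{7}\ \mathrm{s/yr}} \approx 2\ \mathrm{W/m^2}$$

per square meter of ocean surface.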
If you look at the last 50 Myrs, the deep ocean has been steadily losing heat, but the size of the leak is pretty small on an annual basis. Over a century or two, a good approximation should be that the deep ocean isn’t a heat sink or source and remains at constant temperature. Current measurements seem to support this.
I posted a comment about that a while back. In short, heat transfer down is matched by upwelling cold water. The cold water comes from the poles and the heat is lost to space there. So in fact, the deep ocean is just a different route to space for the upper ocean.
If you want to model, you could assume that the proportion of heat lost directly to space by the surface in the Kiehl and Trenberth paper applies to any additional heat. But then you have to assume that some of the forcing is applied directly to the slow box rather than just to the fast box.
I still think it's better to think of the two boxes as land and ocean. The atmosphere above each surface can be thought of as directly coupled to the surface in terms of heat capacity and heat transfer to space. That also means that x=y~0.3. I think it may also mean that the heat transfer coefficient to space must be the same for both too. The problem for this model is quantifying heat transfer from land to ocean. It's not obvious to me how you would do this. What do you use for area, for example? I don't think you can ignore it either. In fact, it's probably quite large.
I think the equilibrium issue goes like this. Equilibrium just means no change; dT/dt=0 in Lucia’s 1c and 2c. That gives an equilibrium relation between T and F. Pre industrialisation, F=0 by definition, so F=0 in 1880 is close. But T=0 is rather arbitrary (1951-80), so there’s no guarantee that T=F=0 is in equilibrium. In fact, there was an intercept in the original regression, which was small, but might sometimes be important.
Nick
My only claim is that a baseline exists where T=F=0 occurs. That’s “equilibrium” in the baseline we chose to linearize. That baseline does not have to be the GISSTemp baseline, and it is very unlikely to be the GISSTemp baseline. So, the intercept in your fit gives the difference between the baseline for the measurements and the most convenient baseline for the particular box model we are using.
In the box model baseline T=0 when F=0 for a long time because that’s how we set up the baseline for the box model. In the GISSTemp baseline, T=intercept when F=0 for a long time. We could come up with entirely different baselines for either F or T. But it’s most convenient to have discussions about “equilibrium”, in terms of the baseline for T and F where all T’s=0 when F=0 a long time. Obviously, it’s still confusing because GISSTemp’s ordinary baseline does not correspond to this baseline!
Steve–
The three box model is cool!
Dewitt–
I think we still use the word "heat sink" for a constant temperature body provided heat can transfer there. If heat diffuses and water upwells, this makes the deep ocean a "heat sink" (provided the water above is being warmed). If it's so big the deep ocean stays at a constant temperature, it is a "constant temperature heat sink" or "infinite heat sink".
However, unlike the sun, whose temperature is unaffected by heat accumulation in the earth, the deep ocean is not, strictly speaking, infinite. It will be "infinite" for some analyses at some time scales and "finite" at others.
What would not be a heat sink is a zero heat flux barrier (as in "separated from the climate system by a perfectly insulated surface"). That would stay at constant temperature too, not because of its mass but because it is perfectly insulated.
Nick Stokes, preindustrial you still had solar and volcanic forcings, and of course both of these change over time. Historically, periods with more intense volcanic activity showed up as colder climate, as did periods with reduced solar activity.
Carrick–
If I recall correctly, Nick recognized that the assumption of equilibrium was just that: an assumption.
I can think of only two alternatives to the assumption of equilibrium. They are a) assuming non-equilibrium of a known amount or b) altering the regression to include non-equilibrium in 1880 and adding regression parameters.
The GCM modelers essentially assume equilibrium back in the late 1800s; they start the 20th century from a "spinup" that is supposed to put the models in an equilibrium state. This assumption may be correct or incorrect. But…. it's difficult to test!
lucia,
An infinite heat sink has an infinite heat capacity and heat transferred into it is effectively lost from the system. That’s not what I mean at all. I’m postulating that you can treat the thermocline as a barrier with no net heat transfer. In the real Earth the diffusion of heat into the lower ocean from the warmer upper ocean appears to be exactly balanced by upwelling cold water. That cold water is from the thermohaline circulation. So effectively, the heat that diffuses into the deep ocean is lost to space at the poles, the deep ocean remains at the same temperature and the thermocline remains at the same depth with the same temperature gradient.
If you force the deep ocean box to radiate to space at the same rate as heat is transferred from the upper ocean, then the deep ocean isn’t a heat sink or source, dH/dt=0. In which case you can ignore its existence and assume the upper ocean box is radiating directly to space.
Dewitt
At equilibrium, heat transfer up will always balance heat transfer down. This is true even if the heat transfer rates are very high, and for any mechanism you think up. So, the fact that in the real earth the heat transfer currently appears to be in near balance tells us very little other than that the upper ocean has not yet warmed enough for heat transfer by diffusion downward to exceed heat transfer upward by upwelling.
That said: There is nothing wrong with postulating zero heat transfer initially. Everyone does that– you have to.
But if you create a mathematical model based on assumptions, you need to double-check your results to see whether they are consistent with the assumption. You do this both before and again after you solve the equations.
After you obtain your solution based on the regression, you can test it. The test would include checking the literature for estimates of the diffusion rate downward, and seeing whether, given those estimates of the diffusion rate, and the assumption that the deep ocean managed to stay cold, the net heat transfer rate into the deep ocean remained zero.
You basically do a computation on the hypothetical and say:
1) I did this calculation based on the assumption heat transfer into the deep ocean was zero.
2) Based on that, I assumed the deep ocean temperature is constant.
3) This is the estimate for heat transport by upwelling, Qupwelling. (It's constant, because the deep ocean temperature remained constant.)
4) This is the estimate for heat transport by diffusion: Qdiffusion = Diff*(Tupper ocean – Tdeep), where "Diff" is a diffusion parameter you found in the literature. It increases because the upper ocean heated.
5) At the beginning of the computation, Qdiffusion = Qupwelling. But at the end, Tupper ocean has changed. So, now you check how much energy would be going into the deep ocean.
If that heat flux is significant compared to the other heat fluxes involved in the problem, you have to realize that your assumption, which looked true in 1880 might have broken down sufficiently to introduce error into your regression. If it’s small, you find your assumption held up.
You also need to check whether, based on the solution you got, the deep ocean stayed at constant temperature. If you find it warmed, then your mathematical solution broke down and you have little reason to trust that model in the application where you used it. (It might be fine in another application.)
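A minimal sketch of that post-hoc check is below. Everything in it — the diffusion coefficient, the upper-ocean temperature series, the flux you compare against — is a placeholder you would replace with your own solution and with literature values; it's only meant to show the bookkeeping.

```python
import numpy as np

# Post-hoc check of the "deep ocean stays cold" assumption.
# All numbers here are placeholders, not results from any actual regression.
years = np.arange(1880, 2004)
T_upper = np.linspace(0.0, 0.6, years.size)   # upper-ocean anomaly from your solution (K)
T_deep = 0.0                                  # deep-ocean anomaly held fixed by assumption (K)
Diff = 0.5                                    # hypothetical diffusion parameter (W/m^2 per K)

Q_diffusion = Diff * (T_upper - T_deep)       # downward flux implied by the solution
Q_upwelling = Q_diffusion[0]                  # constant, because T_deep never changed

deep_uptake = Q_diffusion - Q_upwelling       # net heat the deep ocean would actually absorb
other_fluxes = 1.5                            # rough scale of the other fluxes in the problem (W/m^2)

frac = deep_uptake[-1] / other_fluxes
print(f"Implied deep-ocean uptake at the end of the run: {deep_uptake[-1]:.2f} W/m^2 "
      f"({100 * frac:.0f}% of the other fluxes)")
# If that fraction is large, the infinite-sink assumption broke down and the
# regression that relied on it needs to be revisited.
```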
Oddly enough, this sort of analysis is done when designing heat pumps for residential applications. When initially designing the heat pump, you count on drawing heat from the earth, which you assume is an infinite heat source at ground temperature. You design the upper part of the system.
But after you design that part of the system, you need to go back and look at how much heat you drew out of the part of the earth where you have placed your piping network. You need to analyze to figure out whether, in fact, based on how much heat you intend to draw out, the earth will not have cooled excessively by February. (In the heat pump design, if you find you would have cooled that pocket of earth too much, you spread out your piping network or find someplace where you think ground water might flow thus increasing transport with the rest of the earth. But this is always an issue. It’s an issue with any model where you assume something outside the system is at constant temperature or provides zero heat flux, or make any assumption at all.)
A couple of issues: First, Hansen reports from the GISS coupled model experiments about 10-15% of the radiative imbalance is leaking off below 700 meters.
Next, I think we need to compare our lower box to the real world ocean heat data. Over the last 50 years, Levitus reports that the upper 700 meter integral of the ocean has accumulated roughly 1.0X10^23 Joules. This has expressed itself as a 0.4C temperature increase near the surface. This rate of heating decreases as a function of depth below the surface. If my math is correct, to heat the entire upper 200 meter volume integral 0.4C, it would require around 1.5X10^23 Joules. The observed heat content anomaly (TOA net radiative imbalance) over this period would therefore only be able to heat the 200 meter integral around 0.26C, assuming none of this heat is leaking off below 200 meters. The no deep ocean heat uptake assumption surely cannot be valid, since the warming "signal" certainly penetrates down to 700 meters over multi-decadal periods. If dT/dt>0 at 200 meters, then there is heat uptake by the deeper ocean.
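A rough cross-check of that figure, taking an ocean area of about 3.6×10^14 m² and a volumetric heat capacity of about 4×10^6 J m^-3 K^-1:

$$Q \approx \rho c_p\, A\, d\, \Delta T \approx (4\times10^{6})(3.6\times10^{14}\ \mathrm{m^2})(200\ \mathrm{m})(0.4\ \mathrm{K}) \approx 1.2\times10^{23}\ \mathrm{J},$$

so the order of magnitude holds up, with the exact value depending on the area and heat capacity assumed.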
Finally, I seriously doubt the real ocean temperature can actually rise more than the troposphere in response to a step forcing change. The ocean accumulates anomalous heat in response to GHG forcing mainly from reductions in the efficiency with which it cools due to net increased downwelling longwave (back radiation) + decreased sensible losses. This is to a lesser extent offset by increased evaporative heat losses. However, unless evaporative losses from the ocean (the largest source of heat loss from the ocean surface) were to somehow decrease drastically along with the increasing net downwelling longwave, then I cannot imagine a mechanism for the ocean temperature to rise more than the troposphere in response to a GHG forcing change.
Brian S–
I tend to agree with you on pretty much all points. Although the updated figures at the bottom of the post look less unrealistic than the ones in the main post, I find it difficult to believe that the upper ocean would heat more than the atmosphere in response to a step function. Among other things, I think the lack of a formal 3rd (or 4th or 5th) box may doom the notion of regressing surface temperature against a "two-box" model to get decent estimates of the sensitivity to "not useful model" status.
It looks like an interesting idea. But then, Schwartz's idea of using a 1 box model was an interesting idea. It's just not clear that 2 boxes gives much of an advantage over 1 box.
More boxes, or a more complicated model might work– but the proof of that would be in the pudding, would it not?
Lucia,
I think there just isn't enough information to support anything beyond a two time-constant model. We're constrained at the low frequency end by the 120 yr data period. Tamino's 30 years is at the upper end of what can be resolved. At the lower end we're limited by the annual frequency of forcing data, which can't be improved by interpolation. So Tamino's 1-yr short time constant is actually beyond what can be accurately resolved. That's not a problem in his calc – it just means you can't discriminate between 1 or 2 yr, say.
In math terms, exponentials are far from orthogonal. So you can't hope to reliably break the data into, say, 1 yr, 10 yr and 20 yr bins. The convolving functions are too similar. Actually, I've tried it, but you get negative coefficients, which is a common sign of ill-conditioning.
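A toy illustration of that non-orthogonality (this is not the R program, just a quick sketch with made-up time constants):

```python
import numpy as np

# How similar are exponential response kernels with nearby time constants?
t = np.arange(120)                                   # roughly the 120 yr data period, annual
basis = np.column_stack([np.exp(-t / tau) for tau in (1.0, 10.0, 20.0)])

r = np.corrcoef(basis[:, 1], basis[:, 2])[0, 1]      # 10 yr kernel vs 20 yr kernel
print(f"corr(10 yr, 20 yr) = {r:.2f}")               # close to 1
print(f"condition number of the basis = {np.linalg.cond(basis):.0f}")
# Convolving each kernel with the same forcing series gives regressors that are
# typically at least this similar, which is why an unconstrained least-squares
# fit can hand back negative (unphysical) weights.
```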
The same can be said for suggestions that, say, a diffusive model should be used. You can actually modify the R program I wrote, which does explicit convolution, to use a diffusive function. But again there just isn't enough information in the data to discriminate. Any reasonable function with a mean lag of about 20 years will give much the same result.
As far as identifying the box parameters goes, we seem to be running out of steam. I've been surprised at how far we have got. There are actually no real boxes in nature – uniform (well-stirred) regions separated by distinct resistive layers. It's like trying to X-ray soft tissue. You get fuzzy responses which could represent, say, a thermocline, and give a rough timescale for the decay.
That doesn't mean that the initial regression analysis was pointless. It does give an indication of sensitivity which is fairly robust. You can vary Tamino's 1 and 30 yrs to, say, 0 yrs (which amounts to feeding in unsmoothed GISTEMP) and 20 yrs, and it won't make a lot of difference. And going from one to two time constants does make a noticeable difference, as Steve Schwartz recognised.
Nick–
Sure. I agree. But not having more information to support something more doesn’t mean the two time-constant model will give reliable results.
FWIW– The issue of two box models came up in the past at this blog. I'd always thought that regressing on the two-box model was not going to be a magic bullet. We can't regress on more realistic models for lack of data.
I never said the initial regression was pointless.
I assume by “robust” you mean “gives the same answer over a wide range”? Sure. Does that mean it’s “right”? Maybe. Or not.
But "is it robust?" was never the question I was asking. Of course this is "robust". The answer isn't much different from the 1 box "Lumpy" answer I got long ago– I just assumed the data is "weather" + "measurement noise". That gets you exactly where Schwartz's stuff gets you.
So the question is: Is two boxes something that gives anyone any more confidence than 1 box?
If the regression happened to map into a “good” two box model, that would be very interesting. Otherwise, the two box model is sort of an interesting math idea and/or a tweak on models like “Lumpy” discussed long ago here or “Grumpy” discussed at Climate Audit’s forum.
You add a parameter to Lumpy, you improve the correlation coefficient. Not a big surprise.
That is not to say it’s pointless. I think the idea of the two-lump extension of the one lump is interesting. If I didn’t, I wouldn’t have looked into it in such detail. But, unfortunately, it turns out to not seem to have great advantages over the 1 lump model.
That’s not an astonishing thing. After all: If 1 lump isn’t enough, why would 2 be magic?
Even as a "blog thing" the issue probably never would have been such a big deal if Tamino hadn't had a fit over the mere idea that one might want to see if a model he advanced as physics-based really panned out when tested as a physical model. The thing might have panned out on testing… or it might not have. Even though he had a fit, Arthur, you and I were still interested in testing.
As far as I can tell, after testing, the two-box model doesn’t seem to map into physical space. There are a few loose ends I didn’t blog yet, but I think the support as a physical model looks pretty hollow. That doesn’t mean you can’t use it as a more complicated regression. Just check it using statistical information criteria or something– see if it’s better on that basis. Call it a fancier statistical model. Not a problem.
lucia (Comment#19747) “I find it difficult to believe that the upper ocean would heat more than the atmosphere in response to a step function.”
Good point Lucia, maybe more consideration of this should be given when choosing the parameters. How close do we expect the steady state step responses to be for various types of forcing? In the tropics, I suspect the ocean to be cooler than the surface, but I expect it to be warmer than the surface in the Arctic. How should this average out over the whole earth?
John–
These are anomalies. So….
But either way, the ultimate test is compared to data. I’m planning to look at ocean heat content data, but haven’t done it yet!
Lucia says
That is not to say it’s pointless. I think the idea of the two-lump extension of the one lump is interesting. If I didn’t, I wouldn’t have looked into it in such detail. But, unfortunately, it turns out to not seem to have great advantages over the 1 lump model.
The whole point is this:
Many of you have seen the graphs (included in the IPCC reports) showing that using computer models, we can reproduce temperature history if we include human factors, but not if we omit them. That’s such powerful evidence that we’re the cause of global warming, it’s no wonder denialists have tried so hard to slander computer models and to insist that without them there’s no solid evidence of man-made global warming. The truth is that you don’t need computer models to show this. Even with very simple mathematical models (and these models are indeed simple) the result is the same. Without human causation, there’s no explanation for the global warming we’ve already observed. With human causation, there’s no explanation for a lack of global warming.
That point still stands.
bugs, those are awfully subjective statements, and if I recall correctly the intent of this blog was to productively look at climate issues rather than bicker about interpretation. For instance:
“Many of you have seen the graphs (included in the IPCC reports) showing that using computer models, we can reproduce temperature history if we include human factors, but not if we omit them. ”
First, what are human factors? How do we objectively determine if we reproduced the temperature record and how do we show we’ve exhausted all reasonable alternatives?
With regards to the simplified model, there is still much to do with testing how realistic it is, let alone how robust the model is with regards to estimating CO2 sensitivity. Generally such strong statements are made for media consumption rather than for progressing our scientific understanding.
WRT the "IPCC graphs", it looks to me like the models miss 0.3-0.4 degrees worth of heating from ~1910 to ~1945.
That ain’t hay.
http://img42.imageshack.us/img42/5007/ipccmodelmatch.jpg
Simply drawing a big fat smoothed red line to split the difference between a minimum and maximum and calling it “good enough” isn’t very impressive.
bugs–
Uhhmmm…. so? Are you responding to anything I’ve said about Tamino’s model? Or anything I’ve said at all?
I’ve never disputed that global warming is true and that ghg’s have in large measure caused the recent warming. I’ve often advocated for explanations based on simple models, including EBMs.
What does my preference for good useful simple models have to do with Tamino's apparently poor, almost certainly not-useful model? The model he concocted based on his regression seems to act as an example of a model that is not useful and cannot explain the earth's climate without simultaneously "explaining" wildly unrealistic behavior.
While it is true that some models are wrong but useful, it is equally true that an even larger number of models are wrong and useless.
It is important to discover which are which.
Tamino does a disservice to people who accept the utility of simple models used to predict some features of climate by bringing this slipshod one forward without testing it.
Lucia said Tamino does a disservice to people who accept the utility of simple models used to predict some features of climate by bringing this slipshod one forward without testing it.
Tamino wasn’t predicting anything with his model. He was merely analysing existing temperature records, and pulling the components apart.
Bryan S (Comment#19746) September 11th, 2009 at 1:54 pm
The GISS-AOM also produces an MOC with twice the observed flow (40-50 Sv vs. ~18 Sv, 1 Sv = 10^6 m^3/s), so I’m not sure how much you can trust that estimate of heat flux into the deep ocean.
I get numbers like 1.3*10^23 J and 0.30 C for the upper 200 m modeled using a 60 m mixed layer and an exponential decay below that, so I think your figures are very reasonable.
I agree that it is pretty well established that an accounting of heat accumulation on decadal scales will need to include the upper ocean down to 700 or 1000 m.
Bugs,
“Tamino wasn’t predicting anything with his model. He was merely analysing existing temperature records, and pulling the components apart.”
I think you need to work on your reading comprehension.
Tamino was trying to “predict”, or “project”, or “estimate” climate sensitivity. Yes he “pulled the components apart” and mashed them back together with assumptions to fit his BELIEFS!!
Please pay attention to what St. Lucia of Rank Exploits is trying to teach us. It has been quite instructive to this point.
Kunkaht said Tamino was trying to "predict", or "project", or "estimate" climate sensitivity. Yes he "pulled the components apart" and mashed them back together with assumptions to fit his BELIEFS!!
Your reading comprehension is the problem.
http://www.chambersharrap.co.uk/chambers/features/chref/chref.py/main?query=project&title=21st&sourceid=Mozilla-search
“5 to forecast something from present trends and other known data; to extrapolate.”
Projecting and analysing existing records are two different things.
Tamino was able to explain the existing temperature records using the known forcings. I don’t know what assumptions he made that were based on his BELIEFS (sic). Perhaps you could explain them. I see a simple model using data provided by other sources. Nothing to do with what he believes.
bugs (Comment#19768) September 11th, 2009 at 10:38 pm
Forcings “known” from what?
I hope you are appreciating the irony here.
Forcings "known" from what?
I hope you are appreciating the irony here.
You can look them up if you really want. http://data.giss.nasa.gov/modelforce
I am appreciating the irony. It is one of constantly avoiding the point.
bugs (Comment#19770) September 12th, 2009 at 12:32 am
*Shrug* I have them already.
By the way, why do you suppose the page is called “modelforce”?
Bugs talks about ‘known’ forcings. One of the forcing components is aerosols. I’m sure he knows who said that “the IPCC aerosol estimate … was pretty much pulled out of a hat.”
I’m sure they were not pulled out of the hat to fit in with the temperature record and the catastrophic AGW hypothesis.
*Shrug* I have them already.
By the way, why do you suppose the page is called "modelforce"?
Just keep shifting those goal posts….
If you want to directly calculate sensitivity from forcings and temperature, you need some data source. Lucia used the GISS data, as did Tamino and Scafetta. There’s nothing better.
And bugs is right. The time series analysis that relates forcing to GMST via exponential lags does not require special assumptions about the physics of the systems.
Bugs thinks some point of Tamino's still stands.
Is it this one: “we can reproduce temperature history if we include human factors, but not if we omit them. That’s such powerful evidence that we’re the cause of global warming..”?
Doesn’t Tamino think we can pull eg aerosol estimates ‘out of a hat’ that will translate into very different sensitivity estimates? Is he sure that we have not missed important natural factors in the calculations? What if the large observed cloud albedo changes on decadal scales have natural causes? (E. Pallé). If we rely on those data they point to an extremely important climate factor not reproduced by current climate models.
Tamino thinks those IPCC graphs are “such powerful evidence that we’re the cause of global warming”. I guess I’m kind of a lukewarmer but the reasoning those graphs represent leaves me stonecold.
What point of Tamino's still stands?
Doesn’t Tamino think we can pull eg aerosol estimates ‘out of a hat’ that will translate into very different sensitivity estimates? Is he sure that we have not missed important natural factors in the calculations? What if the large observed cloud albedo changes on decadal scales have natural causes? (E. Pallé). If we rely on those data they point to an extremely important climate factor not reproduced by current climate models.
His model seems to demonstrate we have a reasonable grasp about what is going on.
Bugs’ faith in modeling assumptions is quite touching. It begs the question being debated here in its entirety and reflects a rather non-scientific credulity but it’s touching nevertheless.
bugs-
You are right. Tamino didn’t predict. I should have worded my sentence this way:
Tamino does a disservice to people who accept the utility of simple models used to estimate (not "predict") some features of climate by bringing this slipshod one forward without testing it.
"Bugs' faith in modeling assumptions is quite touching. It begs the question being debated here in its entirety and reflects a rather non-scientific credulity but it's touching nevertheless."
I thought Tamino’s point was that it was demonstration, not an act of faith.
Nick
Could you explain the difference between “special” assumptions and “assumptions”?
If one claims the fit has something to do with physics, any fit requires assumptions. Tamino claimed his fit was based on the specific assumption that the earth’s climate could be modeled as a two-box model where one box is “atmosphere” and the other “ocean”. Tamino picked two specific time constants 1 year and 30 years.
Maybe these aren’t a “special” assumptions, but they are assumptions and rather specific.
It turns out the specific values for heat capacity, heat transfer coefficients etc. that map into the specific two-box model Tamino picked do not appear to act like the earth's atmosphere and ocean.
Of course, if we forget about any claims that the regression has anything to do with phenomenology, then we don't need to make any assumptions about the physics of the system at all. We just say: "Look. I've decided to regress using exponential decays. Economists might do this, why not climatologists?"
bugs–
You are exhibiting the act of faith. I’m sure Tamino thought it was a demonstration of the things he claimed when he posted and made his claims. However, he did not check his model for internal consistency.
What he demonstrated was this: Very poor models can be useless.
lucia “As far as I can tell, after testing, the two-box model doesn’t seem to map into physical space. There are a few loose ends I didn’t blog yet, but I think the support as a physical model looks pretty hollow.”
Are you saying this is true of any 2 box model?
While my 2 box model certainly has limitations, I think it can fit (using fairly realistic parameters) the data pretty well given the GISS forcing assumptions.
SteveR–
I mean the two box models that correspond to Tamino’s regression parameter with time constants 1 year and 30 years. I’ve looked at shorter versions of the short time constant too.
Yes. The (general) assumption behind the analysis of Tamino (and Lumpy) is that the present temperature is a linear function of past forcings, and that linear function would be the same looking back from other points in time. That means that you need to identify that impulse response (or transfer, or Green’s) function. Convolution with that is then the linear operator that you need. If you have it, then you can apply it to a notional constant forcing and get the sensitivity.
To identify the impulse response function, you create some math function of what you expect to be about the right kind, with parameters, and then vary the parameters so that when you do convolve, the result best fits the observed temperature. The function should be positive, decaying, but otherwise, pretty much anything you like – the test is the fit. Based on box model thinking, Tamino chose a linear combination of two exponential decays, with those time constants. The fit was fairly good – better if you also take account of SOI variations.
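Here's a toy sketch of that procedure in Python (not the R program I mentioned earlier); every number in it is invented, and it only shows the convolution step and the sensitivity-as-integral step:

```python
import numpy as np

# Toy two-exponential impulse response; all parameter values are placeholders.
dt = 0.1                                        # yr, fine grid so the integrals behave
t = np.arange(0.0, 300.0, dt)
tau_fast, tau_slow = 1.0, 30.0                  # time constants (yr)
w_fast, w_slow = 0.3, 0.2                       # weights, K per (W/m^2)
G = (w_fast / tau_fast) * np.exp(-t / tau_fast) \
  + (w_slow / tau_slow) * np.exp(-t / tau_slow)

# Modeled temperature = convolution of the forcing history with G.
F = np.interp(t, np.arange(130), np.linspace(0.0, 1.8, 130))   # invented forcing ramp (W/m^2)
T = np.convolve(F, G)[: t.size] * dt                           # response (K)
print(f"toy temperature at the end of the ramp: {T[-1]:.2f} K")

# Equilibrium response to a sustained 1 W/m^2 forcing = the integral of G.
print(f"sensitivity = {G.sum() * dt:.2f} K per W/m^2")         # here, about w_fast + w_slow
```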
That part – identification of the impulse response function – got us to see if we can say more about the box idea that led to it. My current answer is: not much. As Steve R says, there are a wide range of box scenarios with "reasonable" parameters consistent with the correlation. But there's no clear best, and there are various ways in which they might not correspond to expectations. Mainly, the second box has to correspond to a rather arbitrary ocean layer of modest depth.
So as I see it, the box idea gave a reasonable structure in which we could vary parameters and approximate the impulse response. But the result isn’t unique – we could have got a good approximation in other ways. So we can’t very well reason back to make the regression tell us more about air/ocean properties.
So the “special” assumption that Tamino didn’t need to make was that the box model is “true”. It’s a basis for finding a structure for approximating the transfer function. He could have used many other things. The test is the regression, not the properties of the boxes.
Nick–
In other words, if you look at it as a math problem devoid of physics, then you don’t need to make any special assumption about physics. Of course not. You don’t need to make any at all because the fit is not seen as representing any physics.
Of course he didn't need to make this special assumption to do the math. However, Tamino got pretty hoppin' about the implication that his regression might be an interesting math problem, but might not have anything to do with physics.
Had Tamino never mentioned physics, claimed physics, criticized others for doing fits unguided by physics, or had a tantrum when I suggested that his regression might have nothing to do with physics, then no: There would be no need for any assumption about physics.
So, we agree that if we think of this regression as having practically nothing to do with physics, then we don’t need to worry our pretty lil heads about assumptions about the physics. But… has anyone ever suggested otherwise?
Since we can’t reason back to physical properties, it seems highly unlikely we should be able to reason back to climate sensitivity as Tamino tried to do. What we get are regression parameters of dubious physical meaning.
No, that’s the point! You can, exactly as you did with Lumpy. The transfer function is not just a math abstraction. It expresses the observed relation between historical forcing and temperature, and implies a sensitivity (its integral). And none of this involves the physics of the boxes.
Nick–
Who says Lumpy gets the correct climate sensitivity? I gave up trying to relate Lumpy to very much physics long ago.
Lumpy might give an estimate of the climate sensitivity if we thought Lumpy was a sufficiently useful physical model. As a 1 parameter model, it’s difficult to test Lumpy for consistency. The only temperature you have is the one you fit. So, I didn’t try too hard to test it.
Moreover, there has never been any big disagreement that Lumpy-type models may not be sufficiently realistic to return the correct value of climate sensitivity. We get a number as a regression variable– do we have confidence that value is a climate sensitivity? Only if Lumpy is thought sufficiently realistic.
But if Lumpy is not a sufficiently useful physical model to get the climate sensitivity, then that value is a regression coefficient and we can have little confidence it gives us a good value of the climate sensitivity.
Has anyone ever disagreed about this?
Now it happens that Tamino's model is just complicated enough that we can show it does not map into realistic physics. So, we have more concrete reasons to doubt the value of the climate sensitivity from that model than we did for Lumpy, which was nearly impossible to test to see if it remained internally consistent.
With Tamino's model we can test to see if it's internally consistent. It fails. We have little reason to call any parameter "climate sensitivity". Of course, it might say similar things about Lumpy. But… so? Who said Lumpy is perfect.
Lumpy is a heuristic. It reflects conservation of energy, but it may not be sufficiently realistic to be useful. Lumpy's existence cannot transform Tamino's model into a good model.
Lucia – So the "special" assumption that Tamino didn't need to make was that the box model is "true".
Of course he didn't need to make this special assumption to do the math. However, Tamino got pretty hoppin' about the implication that his regression might be an interesting math problem, but might not have anything to do with physics.
Had Tamino never mentioned physics, claimed physics, criticized others for doing fits unguided by physics, or had a tantrum when I suggested that his regression might have nothing to do with physics, then no: There would be no need for any assumption about physics.
Most of the physics done is unphysical, it does not take relativity into account. That does not stop it providing us with useful answers.
bugs– I assume you haven’t taken courses like physics, thermodynamics, heat transfer etc. Right?
Nick, thanks for the lucid explanation of the linearization point, that’s a nice way to look at it. However, it’s not entirely true that the resulting sensitivity is “robust” against the different choices – longer “slow” time constants definitely give larger sensitivities in the two-box model, so it might be a good thing to look at more realistic Green’s functions for the problem (I was starting to look at the ideal diffusive problem but do you have something specific in mind there?)
The continuing discussion here has long reminded me of David Mermin's recent warning:
[quotation omitted] – from the May 2009 Physics Today.
Lucia’s persistent (and Tamino’s original) attempt to claim reality for the two components of this very general model is, in my view, another example. Trying to extract real properties of the world from such models requires care not to inappropriately reify what ought to remain abstract, because its actual meaning could be vastly different than our initial assumptions.
Arthur–
Huh? When have I been trying to claim reality for the two components?
I’m under the impression the regression to the two-body model does not map into anything physically realistic for the earth. Because it does not, the regression parameters also don’t.
So…. are you suggesting one should not try to suggest the regression parameters tell us anything real about the climate sensitivity?
Because I've always been fine with the idea that Tamino's regression may be nothing more than a mathematical regression that tells us nothing real. That is, unless one can show the regression does map into something real, it's best to assume the regression parameters have no physical meaning.
If you've come around to believing the regression parameters don't tell us much that is "real" about climate sensitivity, I should think you have come around to my way of thinking.
Otherwise, if you think the regression parameters tell us something about the climate sensitivity even though the two-box model is utterly disconnected from physical reality… well, you are just going to have to explain why that might be so. It looks like you've got some fiddle factors that happen to fit data. Just like any proposed non-phenomenological regression can be made to fit data. Big whip.
For a reality check on the precision and accuracy of forcings other than long lived ghg’s, I strongly suggest a read of AR4 WG-1’s full report, not the executive summary. The error bars make you wonder how Hansen et al had the nerve to use 3 and 4 significant figures in the forcing file. For the short version, go to page 220 and look at Figure 2.20.
A Smith –
Lucia’s persistent (and Tamino’s original) attempt to claim reality for the two components of this very general model is, in my view, another example. Trying to extract real properties of the world from such models requires care not to inappropriately reify what ought to remain abstract, because its actual meaning could be vastly different than our initial assumptions.
I thought Tamino was trying to demonstrate a simple idea. That a very simple model of the climate is still good enough to produce some reasonable results. I doubt he thought it was any more than that.
It was a response to the continual denigration of the far more complex GCMs. Even his ultra simple model (for all its faults) was still a good enough approximation to demonstrate that climate models are a valid tool for understanding climate.
bugs
Yes. But what he showed is a simple model gives unreasonable results.
Well, then you are going to have to duke this out with Arthur and Nick now. Because they are trying to “save” the honor of Tamino’s “model” by saying it’s not a physical model. If so, it can’t tell us anything about the validity of any physical model (like GCMs.)
However, if Tamino’s model is a physical model– as you suggest– then what it tells us is that some physical models are useless and don’t help us understand climate very much. (Of course, people already know some models are useless. Creating good physical models is hard. Tamino appears to have failed. )
Lucia –
bugs– I assume you haven’t taken courses like physics, thermodynamics, heat transfer etc. Right?
I have, many years ago, but I don't know what that has got to do with it. Engineers will often make a quick and dirty test harness for an idea. That's all Tamino's model was, and he was able to demonstrate it could do the job, despite its extreme simplicity.
The conclusion from it was that even such a simple model could demonstrate a good approximation of the climate’s behaviour. People go on about the impossibility of modeling such a complex and chaotic system, but it is apparently quite possible.
Arthur Smith (Comment#19806) “, so it might be a good thing to look at more realistic Green’s functions for the problem (I was starting to look at the ideal diffusive problem but do you have something specific in mind there?)”
I was thinking of decaying exponentials when I wrote this:
http://earthcubed.wordpress.com/2009/09/01/lagrangian-mechanics-and-the-heat-equation/
Of course one could also take as one of the functions a constant multiple of the actual current temperature profile of the ocean. I haven’t looked at it much further though because I want to better understand the atmospheric processes first like radiative feedback.
Besides, I think that a multi-box model like the one proposed by lucia is a sufficient way to solve the heat equation and is consistent with standard numerical methods like the Crank–Nicolson method
http://www.physicsforums.com/showthread.php?t=334206
bug–
Your insertion of the utterly irrelevant issue about relativity seemed odd for someone who'd taken any courses in areas relevant to discussing this two box model. That's why I thought you probably hadn't taken any.
What you aren't getting is that it doesn't "do the job". It falls apart. That's why Tamino has been posting frenetically to get that post to roll off the top page, and why Nick and Arthur are backing down to saying it's somehow just a math thing. (But trying to suggest that somehow, despite its failure as a physical model, we can get just this one physical parameter out of it.)
JohnC–
Diffusive models would probably be better than box models. However, Tamino made claims about a box model. I was focusing on testing that claim. To do that, you stick with the box model. You don't switch to the diffusive model even if the diffusive model seems more inherently sensible and intelligent.
What I mean is– when testing the two box model, you don’t switch to the diffusive model even if it’s more inherently sensible and intelligent. That’s because to test “X” you must test “X”. You can’t test “Y” even if “Y” seems more likely to pass the test.
lucia (Comment#19821) I agree that if you are going to test a two box model then you test a two box model, and if you are going to test a diffusive model you test a diffusive model. However, the two box model is a diffusive model approximated with exactly two step-wise basis functions (or are they also called Green's functions?)
John Creighton
Sure, the two box model is what you say it is. But…. that's what Tamino proposed to use and it's what he regressed. He didn't use something else.
It might be that greater flexibility would result in a regression that also maps into physics.
Here's Wikipedia on "Green's functions": http://en.wikipedia.org/wiki/Green%27s_function Note the feature that they are convolution integrals.
Arthur #19806
The Green’s function for diffusion into a uniform very thick slab looks like 1/sqrt(t). It doesn’t have an integral (to infinity), so you can’t use it to get a sensitivity. But you can think of it as the sort of function you might be trying to approximate with two exponentials. One to try to track the steep slope near zero, the other to track the medium term decay.
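Explicitly, for a surface heat-flux impulse into a uniform semi-infinite slab with diffusivity κ, the surface temperature response goes like

$$G(t) \propto \frac{1}{\sqrt{\pi \kappa t}}, \qquad \int_{1}^{\infty} \frac{dt}{\sqrt{t}} = \infty,$$

so the sensitivity integral diverges; the response to any finite forcing history stays finite, but there's no finite equilibrium value to quote.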
bugs,
Tamino didn't model the climate. A climate model gives you scads and scads of output that describe the climate; surface temp is but one. What Tamino did was take the INPUTS (some of them) to climate models and use those inputs and a proposed physical model to reproduce the observed temperatures. That demonstration is not without merit. For example, I'd use that simple model to play simple "what if" games about the future. "What if" future forcing looks like this or that. These back-of-the-envelope, quick and dirty assessments are useful to get a handle on the envelope of outcomes given various forcing profiles.
1. They don’t substitute for a GCM
2. They don’t address the criticisms over GCMs
Lucia's very narrow focus is this (speaking for her):
A. Is the proposed model actually physical – earth-like?
B. Did Tamino CHECK THIS, as he claimed to have.
With regard to “B” I think it’s safe to say ( Tamino can prove me wrong) that Tamino did not check this with very much if any rigor. WRT A, I have to digest all the posts.
Lucia –
Tamino has been posting frenetically to get that post to roll off the top page and why Nick and Arthur are backing down to saying it's somehow just a math thing.
Are you a mind reader?
Lucia “Nick and Arthur are backing down to saying it’s somehow just a math thing”
I haven’t changed my view here. I stated it in my first comment #18343 on this topic. And I’ve maintained it.
Nick Stokes,
If you would just bite the bullet and start your own blog, then I wouldn’t have to search so hard to find your perspicacious comments.
Nick–
That’s right– you always said it’s just a math thing.
If it’s just a math thing, it’s just a math thing. One can’t have it both ways and say they’ve diagnosed physics (climate sensitivity) while only doing a math thing.
Tamino presented it as a physics thing. That’s the source of the controversy. Bugs seems to be insisting that Tamino actually demonstrated something useful having to do with physics.
What I don't see is why you have ever been puzzled by the controversy. If it's only a math thing, but someone claims it's physics, people are going to disagree with them and tell them they are wrong. When discussing climate, showing a model is physical is more important than doing mathematical tricks.
bugs–
Yes, sometimes.
Lucia –
If it’s just a math thing, it’s just a math thing. One can’t have it both ways and say they’ve diagnosed physics (climate sensitivity) while only doing a math thing.
Tamino presented it as a physics thing. That’s the source of the controversy. Bugs seems to be insisting that Tamino actually demonstrated something useful having to do with physics.
I don’t know if it was meant to be useful. It was just a demonstration that a very simple model could come up with the right answers.
Bugs said:
“It was just a demonstration that a very simple model could come up with the right answers.”
Yes, I agree – but with the caveat that he showed that the model could be “very simple”, but not too simple. A one-box model could not “come up with the right answers”.
Deep–
1) A one box model comes up with answers in the same ballpark as the two box model.
2) A zero box model would come up with answers in the same ballpark as the two-box model. This is more or less guaranteed by the change in temperature and the change in forcing over the time period (see the rough numbers after this list). That's why the answers are somewhat "robust" (meaning not horrifically sensitive to choice of time constant.)
3) On what basis do you diagnose that the answer is “right”? By that basis, how do you diagnose the two-box answer as “more right”?
4) Given that the zero and one box solutions give almost the same answer, and the two-box model maps into unrealistic space, what is the value of the two-box model? (Answer: Very little.)
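Rough numbers for point 2, assuming something like ΔT ≈ 0.8 K and a net forcing change ΔF ≈ 1.8 W/m² over the record (the precise values don't matter for the point):

$$\lambda \approx \frac{\Delta T}{\Delta F} \approx \frac{0.8\ \mathrm{K}}{1.8\ \mathrm{W/m^2}} \approx 0.45\ \mathrm{K\ per\ W/m^2} \approx 1.6\ \mathrm{C\ per\ doubling},$$

using roughly 3.7 W/m² per doubling of CO2. You land in that ballpark almost no matter how many boxes you wrap around the calculation.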
I’m absolutely disgusted by people saying that the answers they are getting are the “right” ones. Deep and bugs have just revealed their confirmation bias, and the Emperor does not look good naked…
Andrew_FL
Obviously, if we “knew” the right answer for climate sensitivity, no one (including Tamino) would have any reason to propose a method to determine the right answer empirically. This would, to a large extent, make Tamino’s “demonstration” valueless.
If we don’t know the right answer, you can’t know for sure whether any proposed empirical method (including Taminos) results in the correct answer for the climate sensitivity.
The reason one tests a proposed empirical method for some sort of consistency is to get a handle on how much confidence we should have in the answer.
At least with Tamino’s chosen time constants, it appears unlikely we should place a lot of confidence in empirical methods based on a two box model with his time constants.
If they thought they could still salvage this thing, advocates for the method's accuracy (who now appear to include deep) could pursue this further, download data and do more precise tests to see how the ocean heat content matches the ocean temperatures etc. They could show that those who are dubious are wrong and that, in fact, comparisons show the ocean temperatures did do what the ocean box says, and the total atmosphere does what the total atmosphere says.
It might be tedious– but if someone wants to advocate for this method, it’s what they need to do. Because right now, based on what we see, it doesn’t look good.
Andrew_FL, Lucia:
Yikes! I was quoting bugs. What I presume he meant by the “right answer” was the fit to observations engendered by Tamino’s two-box regression. At least that’s how I understood it and meant the term.
The one-box models I’ve seen don’t appear to come close to that fit, especially for the most recent period. But maybe I haven’t seen the right ones. I can’t say I’m really all that interested in studying this in detail, but if you can show a quick link to one, great.
I suppose another definition of “right” would be predictive power, the estimation of which would involve a verification of the regression model on split periods. Again, not something I’m particularly interested in, but I would think you and Arthur S and Steve R might be.
Deep
An early version of Lumpy has a higher correlation than Tamino's two box model.
http://rankexploits.com/musings/2008/how-large-is-global-climate-sensitivity-to-doubled-co2-this-model-says-17-c/
At the time I wrote that, I hadn't fully thought through tests of physical reality. In any case, as I said, based on what I now know, I know it's not as easy to test the one-lump model for internal consistency because you fit what you fit. You can do a few tests, but not so many.
You'll see that at the time, my observation about our confidence in this model was:
It has a higher correlation coefficient than Tamino's model. That's in its favor, but at best it suggests something might be ok. But, all in all, I now think additional tests might be required. (For example, we might want to see whether the amount of heat that must be being lost to a "deep ocean" makes any sense.)
lucia (Comment#19848)-The model has GISS forcings “projected after 2003”-which looks like a simple extrapolation. As far as GHG’s go, there are data to extend it to the present:
http://www.esrl.noaa.gov/gmd/aggi/
(incidentally, I believe the SRES scenarios assume increasing CH4 out to 2050, after which they mysteriously stop increasing. Apart from a truly odd blip at the most recent part of the Methane record, the rate of increase has dramatically slowed)
It occurred to me that it might be useful to compare the two-lump fit Tamino thinks "excellent", which used forcings & SOI, to a one-lump fit that uses forcings only, no SOI.
Here's "monthly Lumpy": [figure omitted]
Here is annual Lumpy: [figure omitted]
Here is Tamino's fit, which not only has a second time constant but then explains part of the variability with SOI:
Ordinarily, one would expect using an extra fiddle factor plus explaining part of the variability with SOI should result in a much better fit for Tamino's model compared to Lumpy. But… actually, based on the eyeball test, Lumpy appears to fit better than Tammy Two-Lump. (I didn't see correlation coefficients mentioned in Tamino's post.)
lucia (Comment#19855) "Ordinarily, one would expect using an extra fiddle factor plus explaining part of the variability with SOI should result in a much better fit for Tamino's model compared to Lumpy. But… actually, based on the eyeball test, Lumpy appears to fit better than Tammy Two-Lump. (I didn't see correlation coefficients mentioned in Tamino's post.)"
My impression is very few of the parameters have been fit. For instance we took the time constants as given instead of trying to find the best fit.
John Creighton
Agreed. By “Tammy Two-Lump” I mean the one with his time constants which he picked, evidently based on what Model E gives for a long time constant.
He didn’t explore the range of time constants, and I haven’t either. It’s entirely possible that time constants that disagree with Model E would work well.
Note that the time constant that makes “Lumpy” fit well is shorter than the long time constant Tamino picked.
Lucia, I have not changed my position on this either, although our delving into the details has certainly been, I feel, informative about the possibly limited range of the model.
Physics and mathematics are intricately linked, but Mermin’s article that I referenced (which focused mostly on quantum and particle-theory examples) pointed out many cases where thinking of a mathematical representation of reality too concretely can lead you astray.
What we have here is the question of how the Earth system as a whole responds to “forcings”, with the response in question here simply the measured global mean surface temperature. Details of the forcings do matter with regard to regional and altitudinal temperature changes, but the general view is that the GMST response is roughly independent of the type of forcing. So, input = forcings over time, output = GMST over time, and the question is can that specific relationship be modeled in a mathematical way that has both some explanatory and predictive power, in a fashion similar to the many approximate models that physicists use every day (quasiparticles in semiconductors, the spectrum of different kinds of excitations in materials, mesons, liquid drop models of the nucleus, etc.)
The standard first assumption is one of linearity – and that should certainly be true as long as we’re not precisely at some sort of singular point of the system and the perturbation (forcings in this case) is relatively small. As Nick pointed out above, under those assumptions (independence of GMST response to type of forcing and linearity) the response of the actual Earth system should be given by a simple convolution of the forcing with some particular fixed response function (a type of Green’s function) that represents the way Earth as a whole reacts to increased heating.
So if all of those assumptions are reasonable, and they generally should be if the forcings are relatively small, the mathematical question is what specific form that response function takes. Instantaneous response functions are one possibility, but as Tamino’s post noted, they respond too strongly to events like Pinatubo. One-parameter “lumpy” models are another possible parametrization of the response, but they may respond too slowly in such cases. The two-parameter model is another version of the mathematical question.
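For concreteness, here is a minimal numerical sketch of that convolution picture; every number in it is invented for illustration and it is not anyone’s actual fit:

```python
import numpy as np

# GMST anomaly as the convolution of a forcing history with a fixed response
# (Green's) function. All values below are made up purely for illustration.
dt = 1.0 / 12.0                        # monthly steps, in years
t = np.arange(0.0, 130.0, dt)          # ~1880 to ~2010

forcing = 0.02 * t                     # a slow ramp, W/m^2 (invented)
forcing[(t > 100) & (t < 101)] -= 3.0  # plus a brief volcanic-style dip

tau_fast, tau_slow = 1.0, 30.0         # the two time constants under discussion
w_fast, w_slow = 0.10, 0.02            # weights (arbitrary here)
G = w_fast * np.exp(-t / tau_fast) + w_slow * np.exp(-t / tau_slow)

T = np.convolve(forcing, G)[:len(t)] * dt   # temperature response

# The long-run sensitivity is the area under G:
print(np.trapz(G, t))                  # ~ w_fast*tau_fast + w_slow*tau_slow = 0.7
```

Swap in any other positive response function and the same convolution machinery applies; only the shape of G changes.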
Both one-parameter and two-parameter models can be physically justified as representing components of the real Earth. Schwartz identified his one time-constant with ocean heating, and Tamino suggested his two boxes could represent “atmosphere” and “ocean”, and you have generally followed that division from that point. But representation in physical form in these cases may or may not be useful – certainly there is no well-defined division of the planet into two boxes as we’ve been discussing for quite some time here now. If the model succeeds in capturing the response of Earth’s system as a whole but its underlying components don’t correspond to the specific subsets of Earth that we’ve picked out, well, that may simply be because what the model is capturing is something different from what we expected, and initially justified it with.
In other words, this sort of simple model certainly can be used to get values for the sensitivity of Earth as a whole, because that’s what it’s doing. But asserting that the individual subcomponents of the model have their own physical reality may be going too far. Or maybe in this case we’ll find they’re not that far off after all (we seem to be pretty close here, actually).
Arthur–
Or, it may be because you have limited amounts of data. It may be spurious correlation. It may be data snooping in this case (since the GISS estimates of forcings back to 1880 might have been influenced somewhat by knowledge of the temperatures). Or it may be because …
Given how many things it “may be”, it might be better to avoid believing that the overall regression provides the “climate sensitivity”, until after you discover what that other, different, thing might be.
Could the parameter turn out to be “climate sensitivity”? Maybe. Could it turn out to be something else? Yes. Could it turn out to be a very inaccurate estimate of climate sensitivity? Abso-friggin’-lutely.
Certainly can?!
That’s a bit of a wishful-thinking-inspired stretch. It would be rather more reasonable to say that one might at some point discover what process the model represents and that, when one does, one might know what the regression coefficients associated with the two subcomponents represent. It might be reasonable to say that the analysis is interesting as a heuristic. But saying the method can be used to get climate sensitivities? With a decent level of accuracy, precision and uncertainty? And with a very small snippet of data? And possibly poor estimates of forcings?
Or do accuracy, precision and uncertainty not matter in this epistemological treatise?
Huh? You have just pretty much suggested that the two components might have physical reality– just not the ones we originally thought. That’s not the same as saying they have no physical reality.
I am asserting that if you don’t have any clue what physical thing the subcomponents represent and you don’t know what the mathematical model means physically, then you don’t know what your regression coefficients mean physically.
Saying you can identify the physical meaning of regression parameters for subcomponents whose physical meaning you have not yet identified is peculiar. Saying you certainly can provide a physical interpretation of a regression coefficient when you have yet to discover any physical model that might underlie your math is going way, way too far.
The fact that there can always be an as yet undiscovered physical model connected to a mathematical model does not mean you get to interpret the mathematical model in terms of a physical model that clearly does not match the mathematical model!
With Tamino’s time constants, they look pretty far off to me. With other time constants, that might not be the case.
Arthur–
To give a concrete example of what we don’t know:
In the process of seeking some physical connection between the math and elements of the climate system, you suggested one of the ‘boxes’ contains a shallow ocean. If so, we know that the shallow ocean can ‘lose’ energy either (a) to the atmosphere through heat transfer, (b) to the universe by radiating photons that manage to get through a fairly transparent atmosphere, or (c) by transferring heat to the deeper ocean.
Quantity (a) would be captured by the parameter we call γo. But either (b) or (c) would contribute to the parameter we call αo. To the extent that αo represents photons lost to the universe, it might tell us something about ‘climate sensitivity’. To the extent that αo represents heat transfer to anything on earth, it does not tell us anything about climate sensitivity.
So, although the parameters might be connected to climate sensitivity, unless we can identify a physical model that is not self-inconsistent, we can’t have any confidence in the numerical values of climate sensitivity obtained this way.
(FWIW– the argument set forth might suggest the values obtained are a lower bound for climate sensitivity. However, that’s not quite so, because it also points to the strong possibility that we need a third term to deal with the lower ocean.)
Of course, if we don’t care what these numerical values are, you can note that we can always compute something with the correct dimensions for climate sensitivity. (That is, it has dimensions of temperature / (power/area), i.e. K per W/m^2.) But the fact that the item is dimensionally OK doesn’t mean the numerical estimate is within a factor of 2 of correct. (If the forcings are mis-estimated, we could be off by a factor of… who knows… 10? Depends how badly off they might be.)
Lucia, I think we’re talking past one another here on mathematical vs. physical realities.
To the contrary. Newton had no underlying physical model for his inverse square law, he just observed it worked. There are many possible underlying physical models for an inverse square law – it’s a natural law for spreading in three dimensions from a point source. Coulomb’s law has the same form. Bohr’s original model for the atom was fundamentally wrong – electrons don’t follow precise orbits in any way resembling the planets – and yet it allowed physicists to estimate spectral properties of hydrogen and other atoms before a more correct quantum theory was developed. And so on.
That is, the natural world presents relationships between quantities (mass and charge vs. force and energy levels, etc., and in our case GMST vs. radiative forcings) for which we can model that relationship with some mathematical approximation. We have to base that model on observed data to the best of our ability, and models that don’t match observed data clearly don’t fit. But pretending that that mathematical model actually represents the underlying reality may or may not be a useful further step. Sometimes it is helpful, but sometimes it’s just wrong and leads nowhere. That doesn’t invalidate the fact that you have in some way approximated a real physical relationship in the world by a specific mathematical function, which in itself has properties relative to the observational data.
From Newton’s law and our observations of planetary motions and Earth-bound gravitation we extract a universal gravitational constant that can be then used to weigh planets and stars and moons and do lots of other useful stuff, well before we have any underlying model of what gravity actually is (and you can pick Einstein’s GR or string theory – I don’t think we even now have a clear picture on the underlying reality).
From the observed Earth GMST vs. forcings relationship we also can extract a mathematical relationship which has specific properties *relevant to the GMST vs forcings question* that have real meaning. One of those is the total sensitivity.
But the individual components within such a model may well be meaningless – i.e. cannot be mapped to realistic subsystems of our planet. Or maybe they can. Either way, it doesn’t affect the validity of the GMST vs. forcings relationship that you have already mathematically modeled.
Arthur–
Look, this won’t do.
I could grok what you are saying if you had a collection of data that collapsed onto a curve and were trying to discover a law of the universe or rule to explain that. But you have not observed that this method “works” or “explains observations”. You haven’t shown the method works for even one known observation.
You seem to be saying we should take for granted that the totally untested method “works” and can be used to estimate the empirical value of climate sensitivity (that we do not know) because… because… I’m not sure why. As far as I can tell, the reason is that you want it to work.
No. You hope you can do this. Not only do you merely hope you can do it, but you are hoping you can do it with two time constants, forcing estimates that may contain significant errors from GISS, and about a century’s worth of measurements that may be very noisy both due to weather and due to measurement errors. Plus, it’s never been demonstrated that this method works even if you were provided perfect data from other planets with deep oceans and thin atmospheres (or any planet, for that matter).
Given all these difficulties, and given that you cannot connect it to a self-consistent physical model, your argument is tenuous.
One of the reasons your argument is tenuous is that, unlike Newton, Coulomb and many empiricists, you do not have a bunch of empirical data that you can show obeys a particular mathematical rule. In fact, you do not have one single example where you used this method and were able to reproduce the known value of climate sensitivity with any degree of accuracy.
Mind you, you might be able to put together a case that the two-box model method has some empirical validity by collecting together a whole bunch of GCM data from PCMDI and showing the method works for all the AR4 models. Then, you’d have something closer to your analogy with Newton and Coulomb. But right now, you just don’t. (And even then, for the real earth, you’d need to explore the level of uncertainty in the answer based on the amount of measurement error or forcing uncertainty in the fit. But at least you’d have something.)
But right now, you are simply claiming that a single regression that spits out a single number that you cannot compare to any known value must be accepted as providing an estimate of the thing you hope to estimate.
Look, if you could show this method had worked over and over and over for many, many planets despite the failure to map into meaningful individual components, I might agree.
But you have zero examples of this method working. Ever.
So, what you have is a newly proposed method with zero empirical support for working and which appears to give internally inconsistent results when mapped into the physical model that it’s supposed to rest on.
On the one hand, I know we can’t get the scads of empirical data Newton, Coulomb or even Latin builders of aqueducts had. But the fact that we can’t get it doesn’t mean that we must simply believe in the method the way we believe in Leprechauns.
Since we can’t get it, I would be happy to settle for a method that resulted in answers that were internally consistent. That’s where the internal consistency becomes important.
The fact is you need something other than wishful thinking to make the numerical value for the regression parameter leap from “regression parameter which we hope is somehow related to the climate sensitivity” to “an estimate of the climate sensitivity someone could begin to believe falls within an order of magnitude of the actual climate sensitivity of the earth.”
Arthur Smith (Comment#19861) September 13th, 2009 at 1:59 pm
We’re not talking about counterintuitive quantum systems, but a climate system where we expect such things as Newton’s laws and the Second Law of Thermodynamics to more-or-less hold — then we build from there. In our model world, it’s enough for waves to actually be waves to generate plenty of confusion.
A forcing “efficacy” ~1.5 means the forcing responses are not roughly independent of their causes. When you’re extracting climate sensitivities, these little coefficients that we like to throw away in dimensional analysis begin to matter.
Over the next few years I’m willing to give this argument some weight. Projected out decades to centuries, over several degrees of warming, it needs to be demonstrated, not asserted.
If our linearized model projects a warming of 3 deg C over the next century and a “singular point” exists between 0 and 2.5 deg C, then what meaning does the projection have at all?
The forcings themselves are not known to this precision. We have enough trouble measuring the radiative imbalance at the top of the atmosphere in the satellite era; what makes the number for direct aerosol forcing in 1910 trustworthy?
In a worst-case scenario, the model could turn out to be a fancy mathematical apparatus for curve fitting with no connection to physics whatsoever.
No. You hope you can do this. Not only do you merely hope you can do it, but you are hoping you can do it with two time constants, forcing estimates that may contain significant errors from GISS, and about a century’s worth of measurements that may be very noisy both due to weather and due to measurement errors. Plus, it’s never been demonstrated that this method works even if you were provided perfect data from other planets with deep oceans and thin atmospheres (or any planet, for that matter).
Given all these difficulties, and given that you cannot connect it to a self-consistent physical model, your argument is tenuous.
One of the reasons your argument is tenuous is that, unlike Newton, Coulomb and many empiricists, you do not have a bunch of empirical data that you can show obeys a particular mathematical rule. In fact, you do not have one single example where you used this method and were able to reproduce the known value of climate sensitivity with any degree of accuracy.
Tamino has not tried to do that, but you insist that he should. He has merely demonstrated that a very simple model of the climate gives a good representation of its behaviour with the tests he has applied to it, without even having to do everything you demand a ‘real’ model should. The model he has created is so simple he has used only ‘two boxes’ to represent the highly complex earth climate system. His point is that people like you demand it should be more complex and meet a laundry list of requirements to get the right answers, but he has shown that you don’t need to do that. The odds of a simple model doing so through sheer chance are as good as my odds of winning the lottery.
I’m absolutely disgusted by people saying that the answers they are getting are the “right” ones. Deep and bugs have just revealed their confirmation bias, and the Emperor does not look good naked…
And a very nice day it is too.
Lucia:
Let me be clearer then. It can be used to estimate the empirical value of climate sensitivity because that number is a property of the mathematical function that represents Earth’s GMST response to increased forcing (sensitivity is simply the ratio of the long-term value of the response function to a steady-state forcing). That mathematical function exists as an arbitrarily good approximation to Earth’s real response in the linear (small-forcing) regime assuming we’re not at some singular point in the system, and therefore that long-term sensitivity number really exists as well.
Any empirical model like lumpy or the two-box (two time-constant) model is essentially providing an estimate of that response function based on the observational data. The closer you can get to Earth’s real linear response function, the closer the resulting long-term sensitivity number will be to the real one. Now as you have noted up above, because we have limited observational data the really long-term components of Earth’s response are not well-specified by the data we have.
The two-box model effectively zeroes out any response longer than the long time-constant. The one-time-constant model effectively zeroes out any response on time scales longer than that single time constant. If that zeroing is a bad assumption, then (given that all components of the response should be positive, as we’ve been discussing, to get physically reasonable behavior of the subcomponents – however you may try to divide them) the full response function will have a *larger* long-term component than these functions give it credit for. Therefore, mathematically, we actually only get a lower bound on sensitivity from this sort of analysis.
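To put a rough number on that last point, here is a toy sketch; the extra slow component and all the coefficients below are invented, not fit to anything:

```python
import numpy as np

t = np.arange(0.0, 3000.0, 0.1)      # years; long enough to capture a slow tail

def sensitivity(G):
    return np.trapz(G, t)            # area under the impulse response

# A two-box-style response of the kind a century of data might pin down:
G_fit = 0.10 * np.exp(-t / 1.0) + 0.02 * np.exp(-t / 30.0)

# A hypothetical "true" response with an extra very slow (deep-ocean-like) term
# that the short record cannot constrain:
G_true = G_fit + 0.002 * np.exp(-t / 500.0)

print(sensitivity(G_fit))            # ~0.7
print(sensitivity(G_true))           # ~1.7 -- the fit, if anything, bounds this from below
```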
By the way, although Kepler had tons of empirical data to work with to figure out his laws of planetary motion, Newton had extremely limited empirical data (5 planets and the moon) – and essentially nothing new for around a century after his statement of the inverse square law. Einstein had only the precession of Mercury as evidence for his General Relativity – a single number. Bohr had just a few spectral features of hydrogen to fit. Here we actually have a plethora of data in the time-series – but also a much more complex relationship to try to understand. But the difference is only one of degree, not of kind.
Arthur–
In other words– we must believe we can get the correct answer because you postulate we can get the right answer and know the names “Kepler” and “Newton”.
And now you proceed to explain that maybe we can’t get the right answer, but in that case the mistakes are in the directions you prefer? The two-box model doesn’t zero out the long response; the short time span of data does. But that would only matter if there is a longer response. Equally, the one-month minimum resolution prevents us from picking up shorter scales. Once again, that only matters if there is a shorter-time response. In both cases, the errors can screw up the fit.
Moreover, in either case, the method could fail for other reasons. No one knows the net direction of errors.
You should sit down and decide whether you think we get the right answer or not rather than insisting we can, but then speculating the errors are in the direction you prefer.
Wow, Godwin would be impressed: Einstein, Newton, Kepler, Bohr, all dead and all invoked as justification in the space of 10 comments.
You should sit down and decide whether you think we get the right answer or not rather than insisting we can, but then speculating the errors are in the direction you prefer.
Tamino could, and he did.
Wow, Godwin would be impressed: Einstein, Newton, Kepler, Bohr, all dead and all invoked as justification in the space of 10 comments.
Perhaps you could address his argument rather than just ignoring it. He is talking about the sparse information that breakthroughs in science are often made on. There seems to be a culture clash here, engineering vs science.
bugs (Comment#19873)-“There seems to be a culture clash here, engineering vs science.”
Straight out of Rand-it’s the looters versus the men of the mind, the prime movers. Engineers are Atlas. Your edifice rests on our shoulders. Don’t make us shrug.
bugs–
Arthur is trying to insist we must believe a purely mathematical, newly proposed construct that is based on no empirical data. He’s using words like “certainly” to suggest the mathematical construct must be true when there is nada, no, zilch, zero empirical verification of the method.
This insistence that we must believe a mathematical construct supported by no data is quite different from believing a theory based on sparse information.
In any case, even theories that were initially based on sparse information only become reputable after people acquire more information. Whether Newton had a little or a lot of data, back in the time of Newton no one was required to believe his theories based on no data. Newton’s contemporaries wouldn’t have even been required to believe them based on little data. Those things he proposed that were true are now believed because, over time, more data was collected and those particular theories held up.
For what it’s worth: Newton also advanced some notions that were proven false. Those who believed those theories without testing them against empirical data turned out to be mistaken.
Lucia,
It’s nonsense to say the estimate of climate sensitivity is based on no empirical data. It is based on exactly the data that it should be based on – the forcing and the temperature response. It represents the steady-state ratio.
An electrical analogy – a circuit of resistors. You watch a varying voltage (which you don’t control) applied between a node and earth and measure the current. They are proportional and the ratio is the resistance. Perfectly empirical – just what Ohm did (tho he did have control).
Now what if there are capacitors in the circuit as well. Then the current is no longer proportional to the applied voltage at a point in time, though it would be if you could hold the voltage constant long enough. But you don’t have to give up this fundamental measuring method. If you observe the changes for a while, you can figure out the pattern of delay (an impulse response). That doesn’t mean you have to be able to infer the complete circuit. Once you have estimated the impulse response, you can derive the steady-state resistance.
Now you’ll probably say that climate is different, Earth is complicated. Someone might harrumph about chaos. But though there might be criticisms there of the actual notion of climate sensitivity, it doesn’t apply to the method of estimation. For climate or circuits, it’s just a matter of estimating a linear response with time delay.
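To make that concrete, a small numerical sketch of the circuit version; the component values and the wandering voltage series are invented:

```python
import numpy as np

# The measured current is the convolution of the applied voltage with an
# (unknown) admittance impulse response; the steady-state (DC) resistance is
# 1 / (area under that impulse response).
rng = np.random.default_rng(0)
dt, n, lags = 0.05, 4000, 200
t = np.arange(lags) * dt

R_true, tau = 5.0, 2.0                      # "true" DC resistance and relaxation time
h = np.exp(-t / tau) / (R_true * tau)       # impulse response; its integral is 1/R_true

v = np.cumsum(rng.normal(size=n)) * 0.05    # a wandering applied voltage we don't control
i = np.convolve(v, h)[:n] * dt              # the current we measure (noise-free to keep it short)

# Estimate the impulse response by regressing current on lagged voltage,
# then integrate it to recover the steady-state resistance.
X = np.column_stack([np.roll(v, k) for k in range(lags)])[lags:]
h_est, *_ = np.linalg.lstsq(X, i[lags:], rcond=None)
print(1.0 / h_est.sum())                    # comes out close to R_true = 5
```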
Andrew FL
Straight out of Rand-it’s the looters versus the men of the mind, the prime movers. Engineers are Atlas. Your edifice rests on our shoulders. Don’t make us shrug.
:facepalm:
For what it’s worth: Newton also advanced some notions that were proven false. Those who believed those theories without testing them against empirical data turned out to be mistaken.
He did test against empirical data. Right here.
http://tamino.files.wordpress.com/2009/08/2boxall.jpg?w=500&h=325
Wow, the conceit of those whose answer to criticism is to compare themselves to Einstein and Newton.
bugs (Comment#19873) September 13th, 2009 at 8:13 pm
Who’s supposed to be the scientist and who’s the engineer in this scenario?
Nick Stokes (Comment#19876) September 13th, 2009 at 9:06 pm
I ask again since there seem to be some posters missing a major point here:
How did we “empirically” get the forcing since 1880?
Oliver,
Of course, the forcings have to be calculated from other empirical data, like volcano info and GHG concentrations. But they always do, whether in 1880 or 2009. I repeat that sensitivity is the ratio between forcing and temperature response. The need to calculate the forcing is part of the deal.
Nick Stokes (Comment#19884) September 14th, 2009 at 1:11 am
Good grief, surely you cannot believe that all our inputs since 1880 have been equally “empirical.”
Sorry, I don’t know what that means.
Nick
It’s Arthur’s claim that this method of estimating the climate sensitivity from empirical data for forcing and temperature “must work” that is based on no empirical data. Unlike Newton showing his equations predict trajectories, no one has ever shown this method “just works” and returns the correct climate sensitivity.
They can’t because… we don’t know the climate sensitivity.
So… who varied the climate sensitivity, then took GMST and forcing data and showed you can get climate sensitivity with any accuracy this way?
No. I’m saying you have no empirical data showing that the methodology Arthur says must certainly work actually works. This has nothing to do with the climate being complicated. It has to do with “zero empirical data showing the methodology works” being easy to detect.
Nick
It means that, to some extent, the estimate of the forcing is influenced by our knowledge of the temperature response. If we go back far enough, the knowledge of the temperature response may be one of the main things dictating someone’s estimate of the forcing.
To the extent that the forcings were or may have been estimated based on the temperature response, this makes any estimate of the climate sensitivity based on forcings a bit circular.
The interesting notion in Schwartz was to not use forcings, but simply assume that, overall, they were “white noise”. That may or may not have worked, but it was a way to get around the potential circularity associated with the potential forcings.
Turns out that if we use no forcings (like Schwartz) the answer isn’t much different from if we use forcings. That said: his sensitivity was on the low side. Even Schwartz’s response to criticisms still did not use forcings.
As for the general notion that anything we do with forcing and temperatures returns the climate sensitivity: Why not just regress the estimated change in temperature over the 20th century against the forcings, and call that your estimate of “climate sensitivity”? If someone criticizes that method as unproven and possibly implausible-looking, why shouldn’t I just insist that method “just works” and drop names like Newton? Why shouldn’t I suggest he had very little data so you should just believe this method? Heck, like Arthur, I can point out the method is “linear”.
The “fit a straight line” to temperature-forcing data has just as much empirical support as the Tamino-Arthur method. (That is: it has none.)
The fact is that any explanation of what might be wrong with fitting the straight line to temperatures and forcing is similar in kind to my discussion of what might be wrong with the Tamino-Arthur method.
So, to get anyone to believe this method of estimating climate sensitivity from forcing/temperature data is somehow better than other methods, you need to show some evidence that it’s better. When a method is not self-consistent, that’s evidence that the method may be very, very poor. (We won’t know why it’s poor. It could just be bad data. But still… this can’t make us confident in the accuracy of the estimate.)
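For clarity about what the “fit a straight line” alternative above amounts to, here is a sketch; the forcing and temperature series below are invented stand-ins, not GISS data:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1880, 2009)
# Invented stand-ins for a forcing history (W/m^2) and a noisy temperature record (K):
forcing = 0.02 * (years - 1880) + 0.3 * np.sin((years - 1880) / 11.0)
temps = 0.5 * forcing + 0.1 * rng.normal(size=years.size)

# Ordinary least squares of temperature on forcing; the slope is the number one
# would then be tempted to call "climate sensitivity".
slope, intercept = np.polyfit(forcing, temps, 1)
print(slope)   # K per (W/m^2): dimensionally fine, but that alone proves nothing
```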
lucia (Comment#19896)-A simple temperature-forcing regression wouldn’t work because that would imply instantaneous response. This doesn’t make the other methods better, but it does mean that such a method is itself wrong a priori, since the response time is certainly >0.
Sigh, a server error ate my response to bugs, who apparently believes that my irreverence for the unproductive class is crass.
Andrew_FL–
Yes. And if the response time happened to be short, that method would work fine.
But in the case of Tamino’s model the question is: Why should exactly two response times be “just right”? And why the specific values he picked?
If we go back to Arthur’s “Newton” argument, and Nick’s claims about empirical evidence:
1) If the simple temperature-forcing regression had been shown to work over and over and over and over again, resulting in a known correct value for climate sensitivity, then, like Newton, we could simply observe that the data said that method worked. We could say this even if we didn’t understand it.
This is, essentially, Arthur’s “Newton” argument. It is that we don’t need to understand why a method works to accept that it works. Well… sure. But with regard to the linear regression method, we would note that:
2) The difficulties are
a) we have zero evidence the simple linear regression method just works. After all: we don’t know the climate sensitivity, so we can’t test the method,
b) based on our understanding, we have reason to believe the method is likely to give poor answers. (Specifically, we recognize deficiencies in the assumption about the very short time constant.)
Now returning to the Tamino/Arthur method, just as with the “linear regression” method:
a) We have zero evidence the method just works and
b) if the method is internally inconsistent, then, based on our understanding of phenomenology, we have reason to believe the method is likely to give poor answers.
So basically, the arguments against enthusiastically embracing the linear regression method apply to the “two-lump” method. It’s true the resulting climate sensitivity doesn’t have to be wrong. Maybe it’s right. But it very well might be so far off the correct numerical value as to be useless information.
So, what we have is a regression coefficient that might be close to the climate sensitivity. But Arthur’s saying things like “certainly” is certainly ridiculous.
Nick (#19826) – yes, you get 1/sqrt(t) for the response to Dirichlet boundary conditions (i.e. fixing the temperature at the surface relative to the bulk) but the boundary conditions here are different – we have a given flux (the forcing) and a response that diminishes the net heat inflow (the linear-in-T terms). The net heat going into the bulk drops as the surface temperature rises to “match” the incoming flux, and so the basic diffusive response law under these conditions should decay considerably faster than 1/sqrt(t). That does suggest maybe looking at power laws would be a good choice – after all power laws are pretty ubiquitous in complex systems… Hmmm…
On the Newton/Einstein/Bohr etc. analogies – I use them only because they should be widely familiar. From my own background in physics it might have been more natural for me to talk about Kohn-Sham density functionals, exchange correlation potentials, pseudopotentials, and that whole family of approximate mathematical methods in the theory of electronic structure – but I doubt anybody here has ever run into those at all. The point is that physics *is* applied mathematics to a very large degree, and when you get into mathematical relationships you can talk about real proof, truth, and certainty that you don’t have in the observational realm. There are always practical limits in applying mathematical treatments to reality, but if the underlying assumptions are valid, then within their realm of applicability such treatments are very powerful.
Arthur
Arthur, fluid mechanics “is” applied mathematics to the same degree that physics is applied mathematics. But, you know perfectly well that there are also differences. One of the differences is the importance of empirical support when claiming a result is physically realistic. (Mathematicians, btw, generally understand this difference. So, I’m a bit surprised when a physicist does not.)
We are debating whether or not this particular mathematical treatment can be applied to describe something physically real. Not whether it is some sort of beautiful mathematical thing. (In any case, it is hardly a splendiferous mathematical thing. It is a run-of-the-mill, mundane mathematical treatment commonly used to model simplified engineered systems, particularly back in the 60s and 70s when computing power was not as widely available.)
It’s true you can prove mathematical theorems. It is also true that there can be practical limits to applying some theorems to reality. In fact, the practical limits are such that any particular mathematical treatment may have absolutely no connection to reality. So, if you want to make a physical claim– like “this is the climate sensitivity”– you need to show that your method has not run afoul of the practical limitations.
So, the fact that many mathematical treatments are, or can be, very powerful, cannot support any sort of claim that this particular one has any utility at all.
Your argument about math is sort of like someone arguing that because Shaquille O’Neal, a man, can dunk basketballs, we know that men can be powerful, and can sometimes dunk basketballs, and do other awesome basketball marvels.
Then, you observe that Stephen W. Hawking is also a man. So you tell us we should assume he is also able to dunk basketballs and perform all sorts of basketball marvels. Then, when someone points out that this particular man, Stephen, is in a wheelchair, so your earlier claim that he can dunk basketballs seems implausible, you repeat “But Men can be very, very powerful and athletic!!!! And look at the whole NBA. They are also very powerful and athletic!!! Sure… there are practical limits in thinking all men are this powerful… but power, power, power!”
No one is questioning the notion that mathematical treatments can be powerful. The question is: Does it appear likely that this particular treatment will give a decent estimate for the climate sensitivity?
Yikes, all the way from Kepler and Newton’s mechanics…to Shaq and Steven Hawking’s wheelchair!
Arthur Smith writes
This is the point that Gerald Browning has made on multiple ClimateAudit posts in which he describes his and others’ mathematical proofs that current GCMs have no bearing on reality. He has indicated that these issues will not be resolved with greater computing power or greater resolution in the calculations. The climate science types responding discount his proofs, saying that the approximations that they use are physically reasonable and that their results bear this out.
Arthur #19902
No, it is ~ 1/sqrt(t). The impulse 1D temperature response to a unit pulse of heat supplied to an otherwise adiabatic surface (you can use reflection bc) is the Gaussian T=exp(x^2/(4Dt))*sqrt(D/(4πt))/k, D=diffusivity, k=thermal conductivity. Put x=0, and you have it.
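For anyone who wants to check that scaling numerically, a quick sketch (with D = k = 1 and the exponent written with its negative sign):

```python
import numpy as np

D, k, x = 1.0, 1.0, 0.0
t = np.array([1.0, 4.0, 16.0, 64.0])
T = np.exp(-x**2 / (4 * D * t)) * np.sqrt(D / (4 * np.pi * t)) / k
print(T * np.sqrt(t))   # constant: the surface response decays like 1/sqrt(t)
```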
Nick Stokes (Comment#19884)-“Of course, the forcings have to be calculated from other empirical data, like volcano info and GHG concentrations. But they always do, whether in 1880 or 2009. I repeat that sensitivity is the ratio between forcing and temperature response. The need to calculate the forcing is part of the deal.”
First of all, this statement would be better if phrased more precisely:
“sensitivity is the ratio between forcing and temperature response”
Equilibrium sensitivity=Se
Temperature at time of equilibrium=Te
FΔ=forcing
T0=temperature before forcing applied
So Se=(Te-T0)/FΔ.
This illustrates, to some degree, the difficulty of calculating sensitivity: even if we knew the temperature change precisely (there is some uncertainty) and the forcing precisely (lots more uncertainty), we would still need to know a priori what the response time was. But that would make the exercise pointless, since if you know the response time, you basically know the sensitivity!
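(For a concrete example with round numbers: if FΔ were 3.7 W/m^2, roughly a CO2 doubling, and Te - T0 came out at 3 C, then Se would be about 0.8 C per W/m^2; the catch is knowing when “equilibrium” has actually been reached.)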
Now:
“Of course, the forcings have to be calculated from other empirical data, like volcano info and GHG concentrations. But they always do, whether in 1880 or 2009.”
Nick, I’m sorry but this is bullshit. Volcanic forcings are guesstimates before Pinatubo. There are essentially no measures of the aerosol effects before 1970ish and even those have been inadequate to resolve the various issues. TSI is also highly speculative before, and even after to an extent, 1979. Then there’s land use and black carbon, etc. Frankly GHG’s are just about the only thing we have good confidence in as far as forcings go. Now, you can say, “yeah, there is uncertainty, but…”-there is no “but” Nick. The uncertainty needs to be taken into account when doing games (and they are really games). If we are going to put forcings into an EBM, we have to see what happens if we put in an aerosol direct forcing of -.1 or -.9 or anything in between, along with an indirect effect of -1.8 or -.3…etc.
And then you need to explain why the models never get the rate of warming from 1911 to 1941 anywhere near correct, and why it was identical to the rate from 1978 to 2008. And then of course there is the matter of how all models, regardless of their sensitivity can “fit” the surface record. Oh wait, Kiehl already revealed that.
Jeffrey T. Kiehl, 2007. Twentieth century climate model response and climate sensitivity. GEOPHYSICAL RESEARCH LETTERS, VOL. 34, L22710, doi:10.1029/2007GL031383, 2007
The forcings used are chosen to match the models to the data.
Nick Stokes,
“For climate or circuits, it’s just a matter of estimating a linear response with time delay.”
And your proof?
Nick Stokes (Comment#19937) September 14th, 2009 at 3:59 pm
We have discrete boxes and no Gaussian kernel in this model.
If you want a circuit analogy, it’s exactly the step forcing response problem for a parallel RC circuit.
I’ve followed your scheme for the Andrews after noticing another Terry at tAV and CA. Anyway – 2 points:
1) I’m surprised you used Shaq instead of Jordan (only because he’s Chicago’s patron saint of BBall).
2) Are Arthur and Tammy drawing straws or playing Paper Rock Scissors to see who has/gets to be Steven Hawking?
KK, no proof is needed. That’s how sensitivity is defined.
Andrew – yes, there are uncertainties. The Kiehl paper you cite puts them in perspective, and does not say the forcings are chosen to match… (they aren’t).
You need to bear in mind how the sensitivity is actually to be used. The key question posed is, What if we doubled CO2 (and held it there)? How much would T increase?
So, in this simple method, which is not claimed to replace GCMs, we say – OK, we have a record of forcing we compute, based on observations including CO2. Can we work out a model of how that affected temperature (time series analysis)? So if we compute forcing in the future on the same basis, what will that model do to temperature?
So although forcing is estimated with error, quite a lot of that tends to cancel out, as long as the calculation of forcing is done consistently.
It’s not true generally that “if you know the response time, you basically know the sensitivity” – only true for one-box models. What you need is the integral under the impulse response curve.
I don’t “need to explain why the models never get the rate of warming…”. That’s a different kind of model.
And for anyone trying to follow #19902, yes, the exponent should have a negative sign.
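For the two-time-constant form in the notation used earlier in the thread, that integral has a simple closed form. A quick symbolic check (sympy, with plain names standing in for w± and τ±):

```python
import sympy as sp

t, w_plus, w_minus, tau_plus, tau_minus = sp.symbols(
    't w_plus w_minus tau_plus tau_minus', positive=True)

# Impulse response as a sum of two decaying exponentials, then its area:
G = w_plus * sp.exp(-t / tau_plus) + w_minus * sp.exp(-t / tau_minus)
print(sp.integrate(G, (t, 0, sp.oo)))   # -> tau_plus*w_plus + tau_minus*w_minus
```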
“The Kiehl paper you cite puts them in perspective, and does not say the forcings are chosen to match… (they aren’t).”
He doesn’t say it, he proves it.
This doesn’t happen by accident:
http://www.climateaudit.org/wp-content/uploads/2007/12/kiehl35.gif
Yes, there are some constraints on the forcing empirically BUT when choosing within the range of possibilities, modelers clearly keep their model’s sensitivity and the observed temps in mind.
The forcing formulas were designed around “if GHG doubling results in 3.0C per doubling, and if the sensitivity is 0.75C per watt/metre^2, then the forcing formula should be …”
There is no empirical derivation of the forcing estimates. Frankly, they do not make mathematical sense if you extend them to the extremes, and they do not make sense when compared to the 33C greenhouse effect caused by 150 watts of GHG (plus water vapour feedback) forcing. The sensitivity of 0.75C per watt/metre^2 is more than 3 times higher than the Stefan-Boltzmann equation says it should be.
Some shortcuts were taken early in the development of climate science theory and they just decided to stick with them.
Where did the 5.35 * ln(CO2 now / CO2 orig) come from, anyway? The first time it is mentioned is in this paper (from 1998).
http://www.climateaudit.org/pdf/others/myhre.1998.pdf
In fact, from the paper:
“Note that the range in total anthropogenic forcing is slightly over a factor of 2, which is the same order as the uncertainty in climate sensitivity. These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.” Emphasis added.
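(For what it’s worth, that expression gives 5.35 × ln(2) ≈ 3.7 W/m^2 for a doubling of CO2, and 3.7 × 0.75 ≈ 2.8 C, which is essentially the arithmetic behind the roughly 3 C per doubling figure mentioned above.)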
Yeah.. but I don’t watch basketball and Shaq currently does more commercials.
Nick Stokes (Comment#19944) September 14th, 2009 at 6:39 pm
“Andrew – yes, there are uncertainties. The Kiehl paper you cite puts them in perspective, and does not say the forcings are chosen to match… (they aren’t).”
If you really believe that, then you have already drunk the Koolaid. To imagine that the modelers’ estimates of forcings are not greatly influenced by their expectations of climate sensitivity is hopelessly naive.
Andrew #19946 no, that’s a perverse misinterpretation of what Kiehl is saying. Yes, models with high forcing yield low sensitivity. This simple model would too. That’s exactly a consequence of the role of sensitivity as the ratio between forcing and temperature. If a higher forcing gives the same temperature, then of course your model will say the sensitivity is lower. The sensitivity is the derived quantity, and it responds. It certainly doesn’t imply people have been selecting the forcing data.
Nick, nick nick,
Isn’t the model’s sensitivity to doubled CO2 determined by doubling CO2 and discovering the temperature change? So, this is a property of the model. It is unaffected by the forcings the modelers chose to apply to hindcast the 20th century. The value of sensitivity is known for individual AOGCMs.
Kiehl then found that when “hindcasting” the 20th century, modelers whose AOGCMs have low sensitivity tend to drive the models with higher forcings for the 20th century. The modelers whose models have high sensitivity drive their models with lower forcings.
Thus, despite models having different sensitivities, the 20th century results tend to track rather well. Had they all been driven with identical forcings, the high-sensitivity AOGCMs would tend to hindcast more warming; the low-sensitivity ones would hindcast less warming.
In fact, that’s what happens in the CO2 doubling test: The models with high sensitivity predict more warming for doubling of CO2.
But when hindcasting they tend to agree on the temperature rise during the 20th century. Because the modelers whose models have low climate sensitivity tend to pick forcing on the high end, and modelers whose models have high climate sensitivity tend to pick forcing on the low end.
Kiehl doesn’t say these choices are made consciously. But he shows that the correlation exists. One is left to speculate about the psychological workings of modelers that might lead them to select forcings that just happen to make the models agree on the metric that gets the greatest amount of attention in IPCC and other public reports. But, no one is saying a modeler actually thinks “Oh. My model got a low climate sensitivity during a CO2 doubling run. I better pick larger forcings to compensate.” Something…. just… happens. ( Confirmation bias is known to affect people. )
Nick Stokes (Comment#19951)-“Perverse”? Really? What’s perverse is that you always seem to think you can talk down to me. But more to the point, you are greatly mistaken. The temperature change is fixed. The model’s sensitivity depends on its parameters, which are predetermined. What do you think it means when models like that ALL can fit the observed temperatures? They didn’t vary the forcings and let the sensitivity be determined. The model’s sensitivity is pre-existing. The modelers chose sets of forcings that worked for their model. Nick, I don’t get you. I quote from papers, and you have this notion that only you are allowed to determine what they say. You were flat out wrong about the Arctic Amplification paper, you are flat out wrong now!
Nick,
Expectation bias is everywhere in science (and engineering, and art, and…..)! How can you not have seen this?
Re: “Perverse” 😉
Frank Costanza: Let me understand, you got the hen, the chicken and the rooster. The rooster goes with the chicken. So, who’s having sex with the hen?
George Costanza: Why don’t we talk about it another time.
Frank Costanza: But you see my point here? You only hear of a hen, a rooster and a chicken. Something’s missing!
Mrs. Ross: Something’s missing all right.
Mr. Ross: They’re all chickens. The rooster has sex with all of them.
Frank Costanza: That’s perverse!
Andrew
Re: “Perverse” 😉
Frank Costanza: Let me understand, you got the hen, the chicken and the rooster. The rooster goes with the chicken. So, who’s having sex with the hen?
George Costanza: Why don’t we talk about it another time.
Frank Costanza: But you see my point here? You only hear of a hen, a rooster and a chicken. Something’s missing!
Mrs. Ross: Something’s missing all right.
Mr. Ross: They’re all chickens. The rooster has sex with all of them.
Frank Costanza: That’s perverse!
Andrew
Did you say he talks down to you? Hmmm…….
bugs–
Andrew_FL and Andrew_KY are different people.
But, no one is saying a modeler actually thinks “Oh. My model got a low climate sensitivity during a CO2 doubling run. I better pick larger forcings to compensate.” Something…. just… happens. ( Confirmation bias is known to affect people. )
From this topic
The forcing formulas were designed around “if GHG doubling results in 3.0C per doubling, and if the sensitivity is 0.75C per watt/metre^2, then the forcing formula should be …”
There are a lot more on CA.
Nice gotcha bugs, you really got something there. Not.
Lucia, Andrew,
Your real objection is presumably that the models can work off different 20C forcings and still get temperatures that reasonably match 20C values. That then arithmetically leads to a reciprocal relation between sensitivity and forcing.
Well, it’s no secret that some parameters in models need to be, and are, tuned to fit observed historic behaviour. Kiehl identifies the aerosol component of forcing as the main variable factor. Aerosol modelling is acknowledged to be a very uncertain matter, as is cloud feedback. If you want to contend that the relation between sensitivity and forcing involves some artifice, it’s much more likely to be in the tuning of model parameters than in the selection of input data.
And OK, “perverse” was a bit over the top. Sorry. But I stopped short of saying that it was bullshit 🙂
Nick–
Yes. This is what Andrew_FL said. And you told him it was a perverse interpretation of Kiehl.
And his point (not objection) is that the forcings are not well constrained by the data, so any choice used in any model (or to fit the two-lump regression) contains a lot of uncertainty.
So, consequently, any empirical fit to estimate sensitivity using those forcings will be uncertain. This is true regardless of whether the method would work given perfectly accurate, certain magnitudes of forcing.
Good thing. Otherwise you’d have a dozen eggs on your face instead of just a half dozen.
Lucia: This is what Andrew_FL said.
It isn’t. What he said, with emphasis, was:
The forcings used are chosen to match the models to the data.
And no evidence at all has been put to back that up. None.
Nick–
They are. It may not be intentional, but low forcings are chosen when the model climate sensitivity is high, and high forcings are chosen when the model climate sensitivity is low. The result is the model hindcasts match the 20th century data.
Whether intentional or not, the forcings are chosen such that the model hindcasts match the data. That’s what Kiehl showed.
Lucia,
Kiehl doesn’t say that. He gives what he thinks are the reasons for variation on forcing in his sec [16]:
There’s no suggestion there that forcings are being selected to get the right output. And he comes close to what I said about 20C being used to tune parameters, and hence sensitivity, here:
Bill #19947
That Myhre paper is the source of the log expression that you quoted. He derived it from an LBL and two other radiative transfer programs. It was an update of an estimate in Hansen’s original prediction paper (1988), which was more complicated.
Nick–
The correlation shows those with models with low sensitivity do select higher forcings and those with higher sensitivities do select lower forcings.
What don’t you get about this?
In that sentence, Kiehl is saying the modelers have run (i.e. benchmarked) models using forcings within the range of uncertainty, with the goal of finding the forcings that give the correct result at the end of the 20th century.
That is: forcings are tested to discover which give a good hindcast.
Using an analogy, he says that climate modelers want to have a good match at the end of the 20th century so that they can better predict the future (i.e. the 21st).
Translation: having benchmarked the models, and wishing to have a good starting point for predictions of the future, the modelers then force their models with those forcings that result in a good match for the 20th century. This gives them a good initial condition for kicking off predictions for the future (i.e. the 21st century).
That is: within the large range of uncertainty, the modelers using a particular AOGCM first benchmark to find the forcings that give a good match for the 20th century and then they use those forcings for that model.
So, in other words: Kiehl is saying they do choose forcings to create good hindcasts. That is precisely what Andrew_FL said!
Kiehl not only says they do this: he explains their motive, which is to have a good starting point to predict the 21st century, and the process (benchmarking followed by using the forcings that give good matches).
SteveF (Comment#19950)
I don’t think that’s the accusation implied here. The sensitivities are varying a lot. The charge is presumably that they are jiggling the forcing to get the temperature right. Parameters in the model can be adjusted, but I doubt that the forcings are.
Kiehl not only says they do this: he explains their motive, which is to have a good starting point to predict the 21st century, and the process (benchmarking followed by using the forcings that give good matches).
I thought their motive was to bring the capitalist system crashing down around them. At least, that’s what I read on the blogs.
Lucia: Kiehl is saying they do choose forcings to create good hindcasts.
He didn’t say that at all. Where do you see “create good hindcasts”? Or, for that matter, “choose forcings”? He said they were benchmarked to get a good initial state. The objective is, as you say, to get a good match – i.e. tune the structure (parameters etc). They “assimilate information”.
He says that the objective is to improve the model’s predictive capabilities. How could rigging past forcing to give a good hindcast achieve that?
Nick
Kiehl is saying modelers apply forcings that result in decent matches during the 20th century for their model. 20th century runs are hindcasts. We already know the temperatures. So, in other words, they are selecting the 20th century forcings to create good hindcasts.
Yes. Just as weather forecasters “assimilate current weather data” to get a good starting point, the climate modelers “assimilate” “which forcings return good matches for the 20th century”. This puts the model temperature anomalies at the correct level to make forecasts into the 21st century.
You can call it tuning; Andrew can call it “selecting the forcings to match temperatures”; either way, they are “assimilating” information about how their models respond to forcing and then using that information to get their models to predict the overall change in temperature during the 20th century.
He explains how rigging the past forcing to get a good hindcast would achieve that:
In other words: by constraining the present state (end of the 20th century) to be close to the correct temperature, they have a shot at getting good predictions. In contrast, if the temperature at the end of the 20th century were woefully off, they would not hope to get good predictions of the future.
They can be more sophisticated than that. You can use less well understood inputs to the model to hindcast the first half of the temperature record, and then see how it goes with the rest of the temperature record.
Nick Stokes (Comment#19971) September 14th, 2009 at 10:59 pm
Precisely! It can’t. That in a nutshell is why a whole lot of people have even less faith in the predictive ability of GCM’s than they do in a ten day weather forecast.
Lucia,
The reference to “forced within a range of uncertainty” does likely imply that they tried different values within that range. But the criterion referred to is always “the present state”, not the hindcast. And if it’s to improve prediction, there has to be more to it than just getting the present temperature right.
There’s no point in just rigging past inputs to match past outputs. That won’t in itself improve the model for the future. The only way to make progress is to use the correct inputs, as best you can estimate, to try to match the correct outputs, modifying the model parameters if needed so that the model is a better predictor.
This is the basic difference between “tuning” and changing the input. One actually changes the model, the other doesn’t.
Nick,
Nobody is going to report the results from their model which doesn’t do at least a passable job of reproducing 20th century warming. Given a choice of inputs (within a factor of 2 according to Kiehl) all the models nevertheless report a result within 25% of “correct.” Whether the tuning is done by changing internal parameters or by choice of forcing data set is somewhat academic– the end result is a bunch of models with vastly different sensitivities and different inputs yet which manage to converge on the desired result.
If the goal is to support one number or another for “climate sensitivity,” then factors of 2 really do matter.
So Lucia, this tuning/forcing/parameterizing thing, does it go like this?
At a real simple level the math expression of a climate model is something like:
A·Y·t + T0 = T1
Where T1 is the temp in some year we are trying to predict/forecast/project/estimate
T0 is the temp in the starting year, t is the difference between the start year and the estimated year, Y is the forcing (like the level of CO2 or the amount of water vapour or sulphates) and A is the climate’s sensitivity to the forcing. So if we use T0 as the GMT in 1880, then to get the GMT in 1980 (T1) we add the contribution of the forcing times 100 (i.e. A·Y·t = A·Y·100). We know T0 and T1 fairly well (ignoring other issues and corrections) but we aren’t very sure of either A or Y. Modeller 1 starts with a bottom-end estimate of Y and so to get T1 must use a high-end estimate of A, whilst Modeller 2 starts with a top-end estimate of Y and so to get T1 must use a low-end estimate of A … both get the GMT in 1980 right and think they have the right values for A and Y, and so use those to estimate the GMT in 2100. Of course there is more than one forcing and so more than one sensitivity, so it is more like:
A1·Y1·t + A2·Y2·t + A3·Y3·t + … + T0 = T1
So because there are many forcings and many levels of forcing and many levels of sensitivity there can be many models. Of course the forcing levels and the sensitivity levels need to match possible real world levels (i.e. no good starting with a water vapour level of 1ppm because that is unlikely to ever happen), and sensitivity doesn’t need to be linear but could be log or something. So the less certainty we have of a forcing level or sensitivity, the more we need to tune the level used in the model to ensure we match a known GMT, and of course we would try to do this for all known GMTs.
Is this a rough picture of what is being debated?
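A toy numerical version of that picture, with every number invented, would look like this:

```python
# Two "modelers" match the same observed 20th-century warming of ~0.7 K with
# different combinations of sensitivity A (K per W/m^2) and forcing Y (W/m^2),
# then diverge when projecting.
A1, Y1 = 1.0, 0.7     # modeler 1: low forcing estimate, so needs high sensitivity
A2, Y2 = 0.5, 1.4     # modeler 2: high forcing estimate, gets away with low sensitivity

print(A1 * Y1, A2 * Y2)               # both print 0.7: the hindcast can't tell them apart

Y_future = 4.0                        # some hypothetical future forcing
print(A1 * Y_future, A2 * Y_future)   # 4.0 vs 2.0: the "tuned" models now disagree badly
```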
Whether the tuning is done by changing internal parameters or by choice of forcing data set is somewhat academic– the end result is a bunch of models with vastly different sensitivities and different inputs yet which manage to converge on the desired result.
They model the physics of the climate to the best degree possible. They also work at making models that work. Anyone would think you can just come up with a model for climate from first principles and expect it to work. Development of complex software just doesn’t work like that.
Can I just get the argument straight here. Any attempt to make a working model is doomed to failure, because it will only implement a subconscious desire to prove that CO2 is the principal component of AGW? It is not possible for scientists to actually make a model that works?
Nick–
You know they don’t just start at “the present state”. They predict the period up to the present state with the goal of hitting the correct present state. That’s creating a hindcast.
You may be correct that that’s the only way to make progress. But you are incorrect about what Kiehl says. He says they adjust the forcings within the uncertainty. The forcings they use are correct within the huge range of uncertainty. They pick the forcing to do what you appear to call “match the correct outputs” as best they can.
I understand the difference between both operations. They appear to do both.
They appear to do this even if you think there is no point to it. Kiehl showed the evidence they do it.
Of course, the correlation may just be an accident. It may just be a total coincidence that the modelers whose models have low climate sensitivity use higher values of forcing and the ones with high climate sensitivity use lower values, even though they all made their choices without having the slightest clue how their climate sensitivity compared to the range for models or how higher or lower forcings might affect the agreement between their models and the 20th century temperatures. After all… just because they run the CO2 doubling experiments doesn’t mean they actually know the results. Just because they run sensitivity tests doesn’t mean they know how their model responds to forcing…. right?
bugs,
“Any attempt to make a working model is doomed to failure, because it will only implement a subconscious desire to prove that CO2 is the principal component of AGW?”
Not at all. The problem is making a model that ASSUMES CO2 is the principal component of AGW or that ASSUMES AGW even exists to begin with. Neither assumption is justified.
Andrew
bugs–
Huh? Andrew_FL said Kiehl says something about modelers’ choices of forcing. Nick says Kiehl didn’t say that. Nick is wrong. Kiehl said what Andrew_FL claims.
It is theoretically possible for modelers to create models that work. It is also difficult. They also tune their models — this is a generally admitted fact. Climate modelers, like modelers everywhere, also tune their models to match known data. This is both good and bad. But the argument here is do individual modelers with individual models pick forcings that help them match known temperature data. The answer is that
a) it appears they do
b) Kiehl showed they did
c) Kiehl explained their motive (and even says it’s more or less reasonable for them to do this) and
d) I bet the modelers will continue to pick the forcing data that makes their runs match temperatures until such time as forcing values are better constrained by data. The 20th century forcings will never be better constrained by data, so they will be able to hindcast the 20th century using forcings that create a good match forever.
However, there is no point in Nick deluding himself that modelers don’t do it or worse, that Kiehl didn’t say and show they do it. Calling it “tuning” doesn’t make it a different thing. All it means is “tuning” includes “picking the values of forcing that create a good match between run output and already known temperatures”. That is: create hindcasts that show decent agreement.
Andrew Kennett-
Yes. Roughly, that is what is being debated. Nick says they don’t do it and also says Kiehl says they don’t do it.
But Kiehl says they pretty much do that. Yes.
Tuning is wrong. It’s not tuning, it’s curve fitting, and you can’t extrapolate from a curve fit and get a meaningful answer.
1. Models need to be based on physical processes.
2. Models should only include inputs that have been shown to have been statistically significant.
i.e. If you include the input, it should be shown that it’s statistically significant to include it. If not, it comes out.
The first deals with the question of causality. The second is basically Occam’s Razor.
Nickle
Not at all. The problem is making a model that ASSUMES CO2 is the principal component of AGW or that ASSUMES AGW even exists to begin with. Neither assumption is justified.
Show me where they do that.
1. Models need to be based on physical processes.
They are, as much as possible.
However, there is no point in Nick deluding himself that modelers don’t do it or worse, that Kiehl didn’t say and show they do it. Calling it “tuning” doesn’t make it a different thing. All it means is “tuning” includes “picking the values of forcing that create a good match between run output and already known temperatures”. That is: create hindcasts that show decent agreement.
You can ‘tune’ them against half the existing record, then run them against the rest of the existing record. This lets you get the hindcast right, and test the ability to forecast.
bugs–
Uhmm…. that would be the parts where they
a) assume a level of CO2 for a particular model run and
b) compute the forcing based on the level of CO2.
They do assume CO2 has increased and increased CO2 results in forcing. The question is not “Do they assume this?” The question is “Is the assumption reasonable”?
I happen to think this assumption is reasonable. Evidence does indicate CO2 increased over the 20th century and evidence indicates CO2 has radiative properties that result in positive radiative forcing.
But all models assume things. For example AOGCMs are also based on the assumption that mass, momentum and energy are conserved. These are good assumptions too.
Making good assumptions is a necessary requirement for a respectable physical model. AOGCMs are at least respectable. But, unfortunately, “respectable” does not suffice to ensure models can predict the future with any degree of accuracy.
bugs–
This would not overcome the difficulty that modelers who run models with high climate sensitivity pick low forcings and those with low sensitivities pick high forcings. The climate sensitivity is known from the CO2 doubling experiments and the relative response to forcing is known.
Look, the modelers run lots of cases. They are intimately aware of what their model does. After setting all their parameters etc. they are left with a forcing knob with quite a bit of play in it. They use that knob.
Uhmm…. that would be the parts where they
a) assume a level of CO2 for a particular model run and
b) compute the forcing based on the level of CO2.
That is why they have to have scenarios; you can’t predict human behaviour, but the trend of CO2 increase has been pretty steady. So a) is not a problem, you tell people what will happen for the amount of CO2 they produce.
The CO2 forcing can be calculated from physical principles. It’s not a matter of picking a value you like.
The CO2 sensitivity can’t be, because it’s the result of all the forcings and responses to the forcings.
Particles are a different matter, as are clouds. New generation hardware will allow clouds to be modelled more realistically, with the smaller cell sizes that will be available.
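For the well-mixed CO2 part, the statement that the forcing “can be calculated from physical principles” is often summarized with a simple logarithmic fit to detailed radiative-transfer calculations (Myhre et al. 1998). A minimal sketch of that one step only, not of how a GCM actually computes it:

```python
import math

# Simplified expression for CO2 radiative forcing relative to a reference
# concentration (a fit to detailed radiative-transfer results):
#   dF = 5.35 * ln(C / C0)   [W/m^2]
# The full calculation is a line-by-line radiative transfer computation that
# also depends on the temperature and humidity profiles.

def co2_forcing(c_ppm, c0_ppm=280.0):
    """CO2 forcing in W/m^2 relative to the reference concentration c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"385 ppm vs 280 ppm: {co2_forcing(385.0):.2f} W/m^2")   # about 1.7
print(f"doubled CO2:        {co2_forcing(560.0):.2f} W/m^2")   # about 3.7
```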
A side note on Kepler, Newton and the inverse square law: Feynman in the “lost” lecture (available as a book and CD combination, not for the math averse) presents a proof using only plane geometry that Kepler’s laws of planetary motion require a force that varies as the inverse square of the distance. It was a revision of Newton’s original proof, which Feynman admitted he couldn’t follow.
bugs,
“Show me where they do that.”
I can’t. But likewise it’s up to the modelers to show me why I should take their models seriously. You know, they should explain or demonstrate the models’ significance/relevance to real life.
Andrew
Lucia
Making good assumptions is a necessary requirement for a respectable physical model. AOGCMs are at least respectable. But, unfortunately, “respectable” does not suffice to ensure models can predict the future with any degree of accuracy.
you just disagreed with Andrew_KY.
He said
Not at all. The problem is making a model that ASSUMES CO2 is the principal component of AGW or that ASSUMES AGW even exists to begin with. Neither assumption is justified.
They do not assume AGW exists. They feed in the known forcings.
They do not assume CO2 is the principal component. They feed in, once again, the known forcings.
Andrew_KY –
I can’t. But likewise it’s up to the modelers to show me why I should take their models seriously. You know, they should explain or demonstrate the models’ significance/relevance to real life.
Then don’t say they do. They do discuss the models, in the usual way all scientists discuss their work, which is the formal scientific way. The IPCC report has a whole section on models, which is the simplified version for public consumption.
bugs,
CO2 forcing at any point can be calculated if you know the atmospheric temperature, humidity and concentration profile with altitude. Do the GCM’s calculate those correctly? There’s a lot of doubt there starting with how the models implement the Navier-Stokes turbulent flow equations. But for the moment we’ll accept the IPCC assertion that this is known to +/- 10%. Aerosol forcings are a completely different matter. The IPCC says the scientific understanding of aerosol forcings is low. The error bars on the estimates of forcing are almost twice as large as the forcings themselves. The aerosol forcings in the 20th century are almost as large as the ghg forcings. Going into the future, the SRES scenarios are little better than guesses. Even if the physics in the models were perfect, the uncertainties in the inputs mean very large uncertainty in the output, or GIGO.
bugs,
Pointing to the IPCC for guidance is an appeal to authority. I would like to be shown why climate models have any value at all. Can you show me, bugs?
Andrew
bugs–
The aerosol forcings are not well constrained by any calculations from “first principles” because of uncertainty in measured levels of aerosol loadings etc. So, the modelers can “just pick” the forcings they want within a wide range. Kiehl showed they do tend to make picks that make the hindcasts for the 20th century look better than if all modelers had picked the same values.
As a rule, modelers have to make assumptions. Whether you like it or not, they assume they know the forcings and use that as an input to models. Making assumptions is inescapable when modeling. The questions are: a) are the assumptions reasonable and b) do we get accurate/ precise results?
With respect to language use, if you were to make a list of assumptions it would include things like:
* Air acts as an ideal gas.
* The composition of air is x% nitrogen, y% oxygen and so on.
* The ocean consists of salt water with a concentration of…
And so on. These are all assumptions. The fact that they are good assumptions nearly everyone accepts does not turn them into “not assumptions”. The fact that the assumptions have good empirical support does not make them “not assumptions”. These are all called assumptions, because that’s what the word assumption means when modeling.
The modelers do assume CO2 has risen over the 20th. The modelers do assume certain radiative properties for CO2. The modelers don’t include the magical effect of Leprechauns firing up their underground steam ovens in 1980 and they don’t include cosmic rays. These are all “assumptions”.
CO2 forcing at any point can be calculated if you know the atmospheric temperature, humidity and concentration profile with altitude. Do the GCM’s calculate those correctly? There’s a lot of doubt there starting with how the models implement the Navier-Stokes turbulent flow equations. But for the moment we’ll accept the IPCC assertion that this is known to +/- 10%.
I don’t know what the flow equations have to do with CO2 forcings. CO2 is mostly well mixed. It has been seen that particles have been more significant than CO2 in the past. Particles probably aren’t going to change all that much from what they have been in the past, while CO2 is on the way to doubling.
All of which ignores Tamino’s point. His very simple model, which didn’t have a Navier-Stokes equation in sight, still gave a good hindcast. The odds of such a simple model being able to do this with only a few parameters to adjust seem very long to me if you think it’s only a matter of chance he got the result that he did.
Arthur Smith came up with a simple model for the greenhouse effect. http://arxiv.org/PS_cache/arxiv/pdf/0802/0802.4324v1.pdf This was done for the benefit of the deniers who don’t even believe there is a greenhouse effect.
Lucia
The modelers do assume CO2 has risen over the 20th. The modelers do assume certain radiative properties for CO2.
I don’t get this at all. The levels of CO2 can be measured accurately from the Mauna Loa record, and before that ice cores. This is well understood and accepted. There is no assumption about it at all.
The radiative properties of CO2 can be deduced from first principles, and can be verified empirically. No assumptions at all. The enhanced greenhouse effects can also be attributed to physical factors. No assumptions needed.
That is why it is one of the ‘well understood’ forcings, in contrast to the less well understood forcings such as aerosols.
Pointing to the IPCC for guidance is an appeal to authority. I would like to be shown why climate models have any value at all. Can you show me, bugs?
I can assure you, they know a lot more about the models than I do. You should check with them first.
Bugs–
Tamino’s “model” is a curve fit. Lumpy, a similar curve fit based on only one parameter, fits even better than Tamino’s model.
I posted that over a year ago. See http://rankexploits.com/musings/2008/how-large-is-global-climate-sensitivity-to-doubled-co2-this-model-says-17-c/
The odds of this fitting almost this well by using a second forcing and adding SOI as a forcing weren’t “long”. It’s amazing his fit didn’t do better. (I’m guessing, but I suspect the reason his fit isn’t better than a 1-parameter fit is probably because he picked time constants he liked instead of finding the time constants that best represent the data. I would speculate the reason he didn’t pick the two time constants that minimized the error in the fit is he doesn’t like the answer they tell him. So, he decided to use SOI to make it look not-so-bad compared to even simpler models.)
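To make “a one-parameter curve fit” concrete, here is a generic single-box sketch in the same spirit: one fixed response time, with the sensitivity as the only fitted parameter. This is not the actual Lumpy code from the linked post; the forcing and “observed” temperature series are synthetic stand-ins, and in a real exercise you would substitute a forcing history and a temperature record.

```python
import numpy as np

# One-box response: dT/dt = (s*F(t) - T) / tau.  For fixed tau the response is
# linear in the sensitivity s, so s can be fit by ordinary least squares.

dt = 1.0                                   # time step (years)
tau = 15.0                                 # assumed fixed response time (years)
years = np.arange(1880, 2009)
forcing = 0.02 * (years - years[0])        # synthetic forcing ramp (W/m^2)
rng = np.random.default_rng(0)
obs = 0.5 * forcing + rng.normal(0.0, 0.1, years.size)   # synthetic "observations" (K)

def unit_response(forcing, tau, dt=1.0):
    """Temperature response for sensitivity = 1 K/(W/m^2), forward Euler."""
    T = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        T[i] = T[i - 1] + dt * (forcing[i - 1] - T[i - 1]) / tau
    return T

R = unit_response(forcing, tau, dt)
sensitivity = np.dot(R, obs) / np.dot(R, R)   # least-squares fit of the one parameter
residual = obs - sensitivity * R
print(f"fitted sensitivity: {sensitivity:.2f} K/(W/m^2), "
      f"RMS residual: {np.sqrt(np.mean(residual**2)):.3f} K")
```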
Yikes! Being in a completely different time zone than everyone else lets the conversations get away from you.
Let me just say that I think Nick is confused that I was insinuating that modelers make their decisions out of sinister intent. Nonsense. They do it out of nothing more or less than an attempt to reconcile what they “know” with reality. The modelers build the most realistic models they can, and then they want to show that such models are reasonably realistic, so they use a set of forcings and run the model, and it matches the twentieth century. They choose the set of forcings that work best because the results have to be internally consistent-otherwise, what was the point? Frankly the modelers would be daft not to do so.
Let me just say that I think Nick is confused that I was insinuating that modelers make their decisions out of sinister intent. Nonsense. They do it out of nothing more or less than an attempt to reconcile what they “know” with reality. The modelers build the most realistic models they can, and then they want to show that such models are reasonably realistic, so they use a set of forcings and run the model, and it matches the twentieth century. They choose the set of forcings that work best because the results have to be internally consistent-otherwise, what was the point? Frankly the modelers would be daft not to do so.
You would be daft to do it. That is the point.
Let’s take examples from finance. The reason why this is a good area to look at to see how to go about building a model is that it shares lots of features of climate modeling: noisy data, known possible forcings such as interest rates. The advantage is there are lots of stocks or futures, whereas with climate we just have one.
A typical approach is to throw everything in, as the climate modellers do. Pick a training period, make sure the model works for this period. Then test against a known period after the training period to see if the model is good. However, here there is what is called a selection bias. You only pick the models that work in both periods, and that is just another form of only picking models that fit the entire known period (see the sketch below).
Climate modelers are introducing selection biases all the time.
Similarly, the “put everything in” approach means you are more likely to produce a model that is, deep down, just curve fitting. Each parameter introduced will need some constant or function to scale it appropriately to the size of its effect. The more such scaling factors are introduced, the more your model is going to be a curve fitting exercise. Restricting to those that have clear physical characteristics (which they do) is one way of trimming it down. Next is to use measurable constants from the lab and not scale the effects by a fudge factor. Lastly and most importantly, if the forcing does not introduce a statistically significant improvement in the model, then it isn’t in the model.
People who model financially have had lots more data and these mistakes have been very common, and in practice these have been shown to be mistakes.
Only statistically significant forcings should be included.
Revisions to data means that the model is invalidated until it is re-run.
A priori predictions need to be tested; the use of a training period and a validation period invariably leads to a bias, and a model that doesn’t work
e.g. It’s pretty clear with the last 10 years of climate data that the solar effect has been seriously underestimated by the IPCC.
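A toy sketch of the train/validate idea raised above, using synthetic data (a noisy trend standing in for any record). The numbers are meaningless; the point is the procedure, and how a heavily parameterized fit can look better in-sample yet worse out-of-sample than a simple one.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)                    # normalized time axis
series = t + rng.normal(0.0, 0.15, t.size)        # synthetic noisy trend

train, valid = slice(0, 50), slice(50, 100)       # first half / second half

for degree in (1, 8):
    # Fit only on the training period, then evaluate on both halves.
    coeffs = np.polyfit(t[train], series[train], degree)
    pred = np.polyval(coeffs, t)
    rms_train = np.sqrt(np.mean((series[train] - pred[train]) ** 2))
    rms_valid = np.sqrt(np.mean((series[valid] - pred[valid]) ** 2))
    print(f"degree {degree}: in-sample RMS {rms_train:.3f}, "
          f"out-of-sample RMS {rms_valid:.3f}")

# Selection bias creeps in if you try many candidate models and keep only the
# ones that also happen to do well on the validation half: at that point the
# validation data has effectively been used for fitting too.
```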
bugs,
“I can assure you, they know a lot more about the models than I do. You should check with them first.”
This is eventually what happens every time I have a reasonably extensive discussion with a Warmer: an admission that they don’t really have any knowledge, and that the IPCC is the gateway to belief in the models. Sigh.
Andrew
Andrew_FL
Precisely. Modelers would be daft to use a combination of forcing/parameterizations that results in incorrect hindcasts for the known 20th century data. At a minimum, they know the combination of the two can’t be correct. So, they favor combinations that give good results over the 20th century.
There is nothing nefarious about this. It’s garden variety modeling done in all fields everywhere. It’s even required by the scientific method since it violates the principle of empiricism to suggest that a combination of forcings/parameterizations — both chosen within uncertainty ranges– that gives a poor match to temperature is better than one that gives a good match to temperature.
People who have modeled anything know it, and they understand that this is why predicting as yet unknown future data is more impressive than post-dicting known data.
I want to make one other observation about creating models. For sure climate modelers are mainly honorable people who are doing the best they know how to create a model which accurately represents the behavior of the Earth’s climate. However, modelers are human beings, just like the rest of us, and clearly have expectations about the sensitivity of Earth’s climate to GHG forcings. These expectations can reasonably be suspected to influence the many choices they must make in creating and optimizing their models (even if that influence is not “willful”). We should remember that climate modeling is not a double blind effort, so the modelers’ expectations are almost certainly bound up in the work. There is good reason for insisting on double blind tests in evaluating the performance of medical treatments; too bad that standard can’t be applied to climate models. The best we can do is insist that the models make accurate *predictions* rather than accurate hindcasts. (And there is clear evidence that at least on average the models have not made accurate predictions over the last 8 years, as Lucia has many times pointed out.)
I note that James Hansen’s estimates of climate sensitivity, and the estimates of climate sensitivity from the NASA GISS climate modeling group, have hardly changed over 25+ years, in spite of many millions of dollars expended in modeling efforts, enormously more and more accurate climate data, and 25 years of “scientific progress” in understanding the Earth’s climate. It could of course be that Hansen was just remarkably insightful, using a minimum of data and without the benefits of several generations of climate models, but I suggest that it is prudent to at least consider that expectations are playing a part in the remarkable constancy of the GISS estimated sensitivity.
Lucia
I found the constants that minimised the error in the fit, as I reported here. The answers were (1.058, 21.50) years, or (0.985, 20.18) with SOI. It may be a local minimum, but the results weren’t radically different.
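For what “finding the time constants that minimize the error” can look like in practice, here is a generic sketch: a two-exponential response whose time constants are grid-searched while the two weights are fit by linear least squares at each candidate pair. This is not Nick’s or Tamino’s actual code, and the forcing and temperature series below are synthetic stand-ins.

```python
import numpy as np

def unit_response(forcing, tau, dt=1.0):
    """Response of dT/dt = (F - T)/tau for unit sensitivity (forward Euler)."""
    T = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        T[i] = T[i - 1] + dt * (forcing[i - 1] - T[i - 1]) / tau
    return T

rng = np.random.default_rng(2)
years = np.arange(1880, 2009)
forcing = 0.02 * (years - years[0])                       # synthetic forcing (W/m^2)
obs = (0.4 * unit_response(forcing, 1.0)
       + 0.3 * unit_response(forcing, 20.0)
       + rng.normal(0.0, 0.05, years.size))               # synthetic "observations" (K)

best = None
for tau_short in np.arange(0.5, 5.01, 0.5):
    for tau_long in np.arange(10.0, 40.01, 2.0):
        # For this pair of time constants, the two weights enter linearly.
        X = np.column_stack([unit_response(forcing, tau_short),
                             unit_response(forcing, tau_long)])
        weights, *_ = np.linalg.lstsq(X, obs, rcond=None)
        rms = np.sqrt(np.mean((obs - X @ weights) ** 2))
        if best is None or rms < best[0]:
            best = (rms, tau_short, tau_long, weights)

rms, tau_s, tau_l, w = best
print(f"best fit: tau = ({tau_s:.1f}, {tau_l:.1f}) years, "
      f"weights = {np.round(w, 2)}, RMS = {rms:.3f} K")
```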
lucia (Comment#19875) September 13th, 2009 at 8:30 pm
Might be interesting to do a Taylor diagram for Lumpy versus TTB
( Tammy Two Box — which sounds like a funky dance or a side show porn star )
Nick Stokes,
“I found the constants that minimised the error in the fit, as I reported here. The answers were (1.058, 21.50) years, or (0.985, 20.18) with SOI. It may be a local minimum, but the results weren’t radically different.”
I recall that Arthur optimized using what appeared to be the same as Tamino’s method, and found best fit values of about 2 years and 17.5 years, with a corresponding sensitivity significantly lower (e.g. 15%-20%) than the GISS sensitivity. It was probably this result that Lucia suspects Tamino “did not like”.
bugs (Comment#19978) September 15th, 2009 at 4:38 am
“They model the physics of the climate to the best degree possible. They also work at making models that work. Anyone would think you can just come up with a model for climate from first principles and expect it to work. Development of complex software just doesn’t work like that.”
Huh? Look, bugs, any good physics model does start with first principles. When we build a model, for example, of radar performance or radar cross section, we start this complex software with first principles. Like so.
http://en.wikipedia.org/wiki/Radar_cross-section
So for example when the F117 was designed the prediction codes could only handle flat plate geometries. One of the breakthroughs at Northrop was a method to handle the prediction of curved surfaces (at first only gaussian curves… go look at the B2). That complex software was entirely based on first-principles physics. Now let me tell you how funny it is to sit in meetings and see physics and math classified as Top Secret SAR.
Very simply: the physics models were based on first principles. The challenge is finding numerical methods and computational resources to simulate these processes. So we start from first principles. Sometimes we can only solve for special cases. This improves over time.
Lucia said: “Confirmation bias is known to affect people.”
All people. I haven’t met one person who doesn’t ‘suffer’ from confirmation bias. The art is to find out what your confirmation bias is and thus know thyself.
I’ve been following this discussion. The science is above me, but there’s still plenty that can be understood about the participants. Lucia, if I may ask, what do you think your confirmation bias might be? And why that particular confirmation bias?
bugs,
That statement shows how little you know about how the enhanced greenhouse effect actually works. Ghg forcing varies with latitude and longitude (Hansen, et al, 2005). I suggest a massive education effort starting with basic physical meteorology, atmospheric radiative transfer and basic climate modeling. Until then you should take to heart the adage: It is better to remain silent and be thought a fool rather than open one’s mouth and remove all doubt.
OTOH, if you actually want to learn something you ask what are probably stupid questions, put forth probably foolish hypotheses, etc. Then you turn off your pride and learn by your mistakes. But you don’t appear to be doing that either.
*plonk*
This is eventually what happens every time I have a reasonably extensive discussion with a Warmer: an admission that they don’t really have any knowledge, and that the IPCC is the gateway to belief in the models. Sigh.
It’s an admission that I know my limitations and you don’t. The IPCC report is the central case for AGW, by the experts, using the scientific method. The ‘belief model’ is the scientific method. Not a bad one. I feel infinitely more comfortable with it compared to the old “Roman Catholic” belief model, which I found at its core could explain nothing and was based on nothing. However, once again, an interesting topic is digressing into other topics. I much prefer to see what Arthur and Nick have to say.
That statement shows how little you know about how the enhanced greenhouse effect actually works. Ghg forcing varies with latitude and longitude (Hansen, et al, 2005). I suggest a massive education effort starting with basic physical meteorology, atmospheric radiative transfer and basic climate modeling. Until then you should take to heart the adage: It is better to remain silent and be thought a fool rather than open one’s mouth and remove all doubt.
OTOH, if you actually want to learn something you ask what are probably stupid questions, put forth probably foolish hypotheses, etc. Then you turn off your pride and learn by your mistakes. But you don’t appear to be doing that either.
*plonk*
I said I don’t know because I didn’t know what you were inferring. The enhanced greenhouse effect, as I understand it, could be modeled without Navier-Stokes equations. They are explaining the local behaviour in a cell.
It involves secondary effects and enhanced CO2 forcing.
Changes in albedo. No NS.
Releases of other GHGs. No NS.
Higher levels of CO2 blocking more radiation than expected. http://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument/ No NS.
A definition of the EGE.
Enhanced greenhouse effect
“The majority of scientists believe the enhanced greenhouse effect is caused by increased concentrations in the atmosphere of the following greenhouse gases: carbon dioxide, methane, nitrous oxide, perfluorocarbons, hydrofluorocarbons and sulphur hexafluoride.
Many human activities, such as the burning of fossil fuels (including coal, oil and gas), deforestation (or land-use change), and some manufacturing processes upset the equilibrium of the carbon cycle (see below). These activities contribute to the enhanced greenhouse effect.”
I could be wrong.
SteveF
I recall that Arthur optimized using what appeared to be the same as Tamino’s method, and found best fit values of about 2 years and 17.5 years, with a corresponding sensitivity significantly lower (e.g. 15%-20%) than the GISS sensitivity. It was probably this result that Lucia suspects Tamino “did not like”.
If I could come up with such a close match to the GISS estimate with such a simple model, I would be feeling very pleased with myself, and feeling I had made my point quite well. The endless diversions into the usual debate on the more complex models ignore the aim of Tamino, to demonstrate you could achieve a good fit to the temperature record with a model that ignored all those complexities. The ‘skeptics’ are having an argument about something that is not what they think it is.
Bugs-changes in albedo means (generally) changes in clouds. Last time I checked, water vapor is a fluid, whether in concentrated, cloud form, or finely dispersed vapor.
bugs (Comment#20021)
“I said I don’t know because I didn’t know what you were inferring. The enhanced greenhouse effect, as I understand it, could be modeled without Navier-Stokes equations. They are explaining the local behaviour in a cell.
It involves secondary effects and enhanced CO2 forcing.”
I’m not sure it is really a good idea to model the greenhouse effect without Navier-Stokes, or at least some consequence of it like the adiabatic lapse rate. In my post:
Tropospheric Feedback
http://www.climateaudit.org/phpBB3/viewtopic.php?f=4&t=776
I show that because of the non-zero lapse rate even frequencies which have very short optical paths are not completely blocked by the atmosphere. I’m not sure if this energy flow is significant but it is radiative and non-zero.
Lucia –
This would not overcome the difficulty that modelers who run models with high climate sensitivity pick low forcings and those with low sensitivities pick high forcings. The climate sensitivity is known from the CO2 doubling experiments and the relative response to forcing is known.
Look, the modelers run lots of cases. They are intimately aware of what their model does. After setting all their parameters etc. they are left with a forcing knob with quite a bit of play in it. They use that knob.
The ability to run a tuned model over the existing temperature record would remove a lot of that error. You think that it destroys their predictive value because they will quite obviously head off in different directions if they pick the aerosol forcing incorrectly. But there is enough of a temperature record to see if they do that or not.
And the answer doesn’t have to be that accurate. We can’t predict the future, and we don’t need to know the exact climate sensitivity (I don’t think we can); we need to know whether the range of warming is low, in which case we don’t need to worry, or high, in which case we do. The amount of potential warming already in the system won’t play out for another 20 years or so, and we are steadily increasing the CO2 content. The models are, to a large degree, irrelevant, and are certainly not the whole case for AGW. The IPCC knew that models alone are not a case, and created a comprehensive document that looks at many ways to determine if AGW is happening or not. These all support each other.
Bugs, whether “AGW exists” or not is not the issue at all. Of course AGW “exists”; all our activities certainly have some effect. The question is, what are the effects and how big are they? The case in that regard is much more slippery, and in fact is much more dependent (although not entirely) on models. The distinctions may be too subtle for you to grasp, but that doesn’t mean they aren’t important.
Andrew_FL,
“Of course AGW exists”
I have to disagree with this “of course” stuff, my excellently-monikered fellow traveller. Where does AGW exist, that I may go experience see/touch/taste/hear/smell it?
Andrew
bugs (Comment#20025) September 15th, 2009 at 4:29 pm
bugs,
Kiehl showed that the aerosol forcing and the tuning go hand in hand. The only way that model A with 2x the forcing of model B can produce almost the same result is if model A has much less sensitivity.
Saying that the models agree and therefore the science is good to go is like saying that
F = ma and 2F = (2m)a both give the same answer for a… which might be fine for some uses but not fine for inferring what m is.
The entire heat content issue is exactly about how much, if any, warming is “already in the system” or whether it’s a case of “Hey, man, just whatcha see.”
They’re all “models” of one sort or another. We’re trying to understand why this particular one is any good or not.
DeWitt –
Precisely! It can’t. That in a nutshell is why a whole lot of people have even less faith in the predictive ability of GCM’s than they do in a ten day weather forecast.
They are modeling two different things, as Tamino demonstrated. You could never come up with a weather forecast with such a simple model.
My also well named pal: We have different epistemological views on what it takes to show that something is real. I think that there is some AGW because that is what the physics leads me to conclude. Can I prove it through direct, personal observation? No, and I fully admit that. Whether that should be enough or not could be debated, but I’m not too keen on that conversation.
Let me put it this way: My Coherence Network agrees with me. 😉
Andrew_KY (Comment#20027) September 15th, 2009 at 5:00 pm
Exhale much? 😉
bugs (Comment#20029) September 15th, 2009 at 5:10 pm
Not really, and you’d be surprised.
Oliver
Not really, and you’d be surprised.
Where I live, you’d be surprised. It is only in relatively recent years that they have been able to come up with forecasts that are reliable. “Four seasons in one day”.
bugs,
Please let us know when the AOGCMs start doing forecasts that are reliable to some similar standard.
oliver,
Incessantly. So my bad breath is AGW? Seriously? 😉
Andrew
Oliver –
bugs,
Please let us know when the AOGCMs start doing forecasts that are reliable to some similar standard.
You have to test them for what they are claimed to be able to do. People often test them for other things, and reject them because they can’t do what they aren’t designed to be able to do, or what current technology can’t support. They can, as far as I can tell, give us an indication as to whether we are headed for serious problems or major problems.
But Tamino’s simple model already demonstrated that we can get a reasonable guess at that without even going into the complexities.
bugs:
LOL.
Who’s ignoring what the science tells us now?
Look man, if the best you can do isn’t good enough (which is what you just said), don’t expect to win friends and influence people by reeling it back to even more tenuous grounds.
That’s the sad state of AGW science right now. People throw out the science in the first step.
Bugs,
“If I could come up withe such a close match to the GISS estimate with such a simple model, I would be feeling very pleased with myself, and feeling I had made my point quite well.”
What? The point is that he did not optimize. He used the GISS forcings and the GISS ocean approach constant. The optimum fit to the temperature data with two boxes (as Arthur showed) was with a short time constant that was longer than what Tamino used and a long time constant that was shorter than what Tamino used. That using the GISS long ocean lag gave a number he liked is not so surprising; it is actually hard to see how it could come out differently.
Bugs,
“The amount of potential warming already in the system won’t play out for another 20 years or so..”
Based on what, the non-accumulation of ocean heat over the last 6 years? Please explain where the potential warming already in the system is hiding.
Bugs
Are you from Melbourne?
“They can, as far as I can tell, give us an indication as to whether we are headed for serious problems or major problems.”
The subjectivity sneaks in….
Also, why are they not good enough to tell us that there is no problem? Never mind if they do or not, if they did, would they be good enough to be believed? I suspect not! 😉
LOL.
Who’s ignoring what the science tells us now?
Look man, if the best you can do isn’t good enough (which is what you just said), don’t expect to win friends and influence people by reeling it back to even more tenuous grounds.
That’s the sad state of AGW science right now. People throw out the science in the first step.
I haven’t thrown anything out. You demand something of models that they can’t give, and reject what they can. I’m quite happy to accept the level of understanding the models can give us. It’s all we’ve got. As Tamino demonstrated, it’s not a bad level of understanding. Scientific research is not about perfect understanding and prediction, it’s about pushing the boundaries of the things we don’t understand.
To repeat what has already been said.
To the contrary. Newton had no underlying physical model for his inverse square law, he just observed it worked. There are many possible underlying physical models for an inverse square law – it’s a natural law for spreading in three dimensions from a point source. Coulomb’s law has the same form. Bohr’s original model for the atom was fundamentally wrong – electrons don’t follow precise orbits in any way resembling the planets – and yet it allowed physicists to estimate spectral properties of hydrogen and other atoms before a more correct quantum theory was developed. And so on.
Hey bugs when is your companion volume to Newton’s treatise on Alchemy and Solomonic gold being published? And that one you wrote with Einstein about the steady state universe?
SteveF –
Based on what, the non-accumulation of ocean heat over the last 6 years? Please explain where the potential warming already in the system is hiding.
It’s not hiding anywhere. The time lag for the current amount of CO2 to cause the changes it is capable of causing in terms of feedbacks from albedo and release of methane is about 20 years, IIRC. That is, if we didn’t add any more CO2 to the atmosphere at all from this day on, we would still see the effects of warming causing changes.
The models do indicate that the warming will not be evident in a continuous, smooth curve, but one with the dips and peaks evident in the past temperature record.
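As a rough sketch of what “warming already in the system” means in a single-time-constant picture: with forcing frozen, temperature relaxes toward its equilibrium value over the response time. The lag and the size of the unrealized warming are exactly the quantities in dispute in this thread; the numbers below are placeholders, not estimates.

```python
import math

T_now = 0.7   # assumed warming realized so far (K) -- placeholder
T_eq = 1.0    # assumed equilibrium warming for today's forcing (K) -- placeholder
tau = 20.0    # assumed response time (years) -- placeholder

# With forcing frozen from today, a one-box model relaxes toward equilibrium:
#   T(t) = T_eq + (T_now - T_eq) * exp(-t / tau)
for yrs in (0, 10, 20, 50):
    T = T_eq + (T_now - T_eq) * math.exp(-yrs / tau)
    print(f"after {yrs:2d} years of frozen forcing: {T:.2f} K "
          f"({T_eq - T:.2f} K still unrealized)")
```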
Come on people, Please stop feeding the troll.
Also, why are they not good enough to tell us that there is no problem? Never mind if they do or not, if they did, would they be good enough to be believed? I suspect not! 😉
They were good enough to tell us that the satellite record from UAH was wrong and they were right.
If there was no problem, it would be highly surprising, given that there is a physical basis for AGW. The models have that physical basis built into them.
Bugs,
“Scientific research is not about perfect understanding and prediction, it’s about pushing the boundaries of the things we don’t understand.”
A lot of science really is about (almost) perfect understanding and (almost perfect) predictions; this is the pretty normal stuff that is used routinely every day in hundreds of different fields.
It’s not like climate models push the limit of our understanding of the fundamental nature of matter or its interactions, which really would limit their predictive accuracy. The models are based on a bunch of pretty well known physical processes, which happen to have complex and poorly defined interactions, and which are further complicated by substantial uncertainties in key measured values.
What is missing is enough accurate data to better constrain and define the models, so that they do make accurate predictions. The fact that multiple “state of the art” models make substantially different predictions fairly well screams that there are serious deficiencies which need to be addressed, and that climate models, singly or jointly, are at present in no way representative of reality. To propose substantial and very costly world-wide changes in energy production and use based mainly on the predictions of climate models strikes me as pure folly.
bugs (Comment#20045) September 15th, 2009 at 6:44 pm
Everyone trained in the field plays with models like these; the point is always to understand what the model is able to capture and what it does not. Many distinguished experts in the climate community are more than happy to discuss these shortcomings and pitfalls — that’s why they became scientists, after all. In the end, the discussion usually teaches a lot more than the model itself.
What they don’t do is write up a fairly superficial analysis — rife with hidden assumptions — and then declare victory and a moratorium on questions.
The “not…bad level of understanding” is fine and all, but still a bit premature for celebration.
All of this is just another repetitive digression from what was an interesting joust between Lucia on one side, and Nick and David on the other. Lucia seems to be convinced she has demonstrated the ‘two box’ model to be useless, Nick and David seem to be convinced it does what it claims to do. No resolution has been reached on the matter. What did seem to happen is that rather than address the point, Lucia started meandering into various other topics.
“They were good enough to tell us that the satellite record from UAH was wrong and they were right.”
This is a moronic statement. They did not tell us the UAH record was wrong. They told us “gosh, that’s not what we say should have happened!” And well, Gosh darn it if the errors didn’t fail to make the difference go away!
“If there was no problem, it would be highly surprising, given that there is a physical basis for AGW. The models have that physical basis built into them.”
So, what you are saying is that you believe the very fact that AGW has a physical basis-implies an AGW “problem”? Okay, I’m done, there is no dealing with someone that thick.
“It’s not hiding anywhere. The time lag for the current amount of CO2 to cause the changes it is capable of causing in terms of feedbacks from albedo and release of methane is about 20 years, IIRC. That is, if we didn’t add any more CO2 to the atmosphere at all from this day on, we would still see the effects of warming causing changes.”
Uh, excuse me but that is in fact the very point which is being debated. The response time is not known, so that “fact” of twenty years worth of warming “in the pipeline” or whatever is in fact a part of Tammy’s model-except he used thirty years. Now, you don’t seem to get what is being referred to. Why exactly has Ocean Heat Content stopped rising?
http://pielkeclimatesci.wordpress.com/2009/05/18/comments-on-a-new-paper-global-ocean-heat-content-1955%E2%80%932008-in-light-of-recently-revealed-instrumentation-problems-by-levitus-et-al-2009/
OHC is supposed to increase fairly steadily.
Bugs,
“That is, if we didn’t add any more CO2 to the atmosphere at all from this day on, we would still see the effects of warming causing changes.”
Sure, if the ocean lags are very long, aerosols have “canceled” much of the forcing by GHG’s, and the lack of ocean heat accumulation over 6 years is just a short-term fluke.
OTOH, if the ocean lags are in fact relatively short and the aerosol effects are small relative to total GHG forcings, then the current temperature is not at all far away from the equilibrium temperature. (Which, BTW, is consistent with flat to slightly falling ocean heat content at the same time as fairly constant mean surface temperature.)
A wide range of climate sensitivities and ocean lags are reasonably consistent with the temperature history over the last 130 years… if you can adjust the aerosol forcing history to suit your personal tastes, which is essentially what Tamino did by adopting the GISS forcing history. With the uncertainties involved, surely you can see that there is a huge potential here for GIGO results.
bugs:
Er… where have I done this?
I agree with this completely.
But how do you reconcile that statement with:
They can’t both be true can they? (The irrelevant part.)
If what you were trying to say is that it is a body of observational data from which we draw the inference that humans are modifying the Earth’s temperature, then yes that is true, and it is indeed largely irrelevant whether we can model the Earth’s temperature over time with CFD-based models at all.
Going from there to inferring climate sensitivities and predicting the consequences of human generated CO2 for example requires a full model, however.
And that of course is the critical part— knowing from science what is possible to happen to our climate—since it is that which drives climate remediation policy.
They can’t both be true can they? (The irrelevant part.)
If what you were trying to say is that it is a body of observational data from which we draw the inference that humans are modifying the Earth’s temperature, then yes that is true, and it is indeed largely irrelevant whether we can model the Earth’s temperature over time with CFD-based models at all.
Perhaps redundant is a better word.
Bugs,
“They were good enough to tell us that the satellite record from UAH was wrong and they were right.”
It would be better to make statements that accurately reflect reality. Yes there was an error in the calculation of tropospheric temperature by UAH, which was long ago corrected. But the lack of expected warming in the tropical troposphere (predicted by the GCM’s based on an expected increase in tropospheric moisture content over the tropical oceans) remains largely missing, and the models HAVE NOT been corrected to be consistent with this rather inconvenient reality.
I have no idea why modelers have not addressed this issue except for a pitiful little paper pointing out that considerable uncertainty in the balloon data and variability in the models means that we can’t be >95% sure the models are wrong about the tropical troposphere…. only something like 90% sure they are wrong!
Nick Stokes,
“KK, no proof is needed. That’s how sensitivity is defined.”
Then it should be easy for you to point me to documentation where the “experts” agreed that Climate Sensitivity is linear, and, hopefully, their reasons for this guess.
It would be better to make statements that accurately reflect reality. Yes there was an error in the calculation of tropospheric temperature by UAH, which was long ago corrected. But the lack of expected warming in the tropical troposphere (predicted by the GCM’s based on an expected increase in tropospheric moisture content over the tropical oceans) remains largely missing, and the models HAVE NOT been corrected to be consistent with this rather inconvenient reality.
You moved the goal posts. I pointed out an example of the models making people fix their measurements. The tropical troposphere is one part of the large climate system. Apparently there is no debate about the rest of it, only this one section?
Bugs,
the models do well with precipitation patterns and amounts, not to mention cloud and other convection systems right??
Yes, I am being sarcastic.
Bugs,
“You moved the goal posts. I pointed out an example of the models making people fix their measurements.”
Yup, you are right. The models have made UAH, ARGO and others adjust their measurements. In each case the adjustment has been to make the measurement closer to the models.
Now, can you explain to us why the models have not been able to show the relatively consistent overstatement of surface temps in China and other areas??
A wide range of climate sensitivities and ocean lags are reasonably consistent with the temperature history over the last 130 years… if you can adjust the aerosol forcing history to suit your personal tastes, which is essentially what Tamino did by adopting the GISS forcing history. With the uncertainties involved, surely you can see that there is a huge potential here for GIGO results.
He ran it through some sanity checks which seemed to give it a pass.
KK #20060
Again, I say, that is how it is defined. You can read all this in the TAR Sec 9.2. It defines the familiar “equilibrium” CS, relating to doubled CO2, and the “effective” CS, which is what we are talking about here, and has the Ohm’s law form that I have been talking about:
1/e = T / (F – dHo/dt) = T / (F – Fo)
The AR3 then goes on to talk about the warming commitment, which bugs was referring to in #20047.
Kuhnkat
Bugs,
the models do well with precipitation patterns and amounts, not to mention cloud and other convection systems right??
Yes, I am being sarcastic.
The grid sizes mean clouds are a problem. New technology should improve that.
KK #20060 Me #20066
I missed a line in pasting that AR3 formula. The 1/e (actually 1/α_e) is the sensitivity we’ve been calculating, in K/(W/m2); their “effective CS” is obtained by normalising with F_2x, the forcing calculated (usually by formula) to result from a CO2 doubling.
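A small sketch of the bookkeeping in the formula Nick quotes above, with placeholder inputs (the temperature change, forcing and ocean heat uptake values below are not measurements, just numbers to show the arithmetic):

```python
import math

T = 0.7        # temperature change over the period (K) -- placeholder
F = 1.8        # total forcing over the period (W/m^2) -- placeholder
dHo_dt = 0.5   # ocean heat uptake expressed as a flux (W/m^2) -- placeholder
F_2x = 5.35 * math.log(2.0)   # forcing for doubled CO2, about 3.7 W/m^2

inv_alpha_e = T / (F - dHo_dt)        # 1/alpha_e, in K per W/m^2
effective_cs = inv_alpha_e * F_2x     # "effective CS", in K per CO2 doubling
print(f"1/alpha_e = {inv_alpha_e:.2f} K/(W/m^2), effective CS = {effective_cs:.2f} K")
```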
Bugs,
“You moved the goal posts. I pointed out an example of the models making people fix their measurements. The tropical troposphere is one part of the large climate system. Apparently there is no debate about the rest of it, only this one section?”
I did not move anything. I pointed out that your suggestion the climate model predictions of tropospheric warming are more accurate than the measurements from UAH and Remote Sensing Systems is not correct. There was a correction in the calculations made by UAH, but even after this correction, the discrepancy in tropospheric warming you refer to remains substantial. There has been no adjustment in the models (at least none I have heard of) to account for a glaring discrepancy in the expected warming profile. The whole of the climate system (atmosphere, oceans, and more) is of course important, but a large moisture driven amplification of warming in the tropics is a basic prediction of virtually all GCM’s that is not supported by measurements, including measurements by weather balloons. All that I am trying to point out is that the models are far from an accurate representation of the climate system, so we would be foolish to accept their predictions at face value; those predictions have very large uncertainties.
Bugs,
“He ran it through some sanity checks which seemed to give it a pass.”
I have no idea why you think applying a set of assumed forcings (which are the most uncertain of all the climate inputs) represents any kind of sanity check. Most everyone agrees that if the GISS historical forcings are right, then the Earth’s climate would have to be very sensitive to forcings to be consistent with the temperature history.
The accuracy of these historical forcings is the entire issue. Many people (including me) do not believe these forcings are anywhere close to right, and that aerosol effects (volcanic and man-made) are grossly overstated, while other forcings are understated. As I said before, if you use a different assumed set of forcings in Tamino’s exercise, you get an equally good (or better) match to the historical temperature record with a much lower estimate of sensitivity to forcing.
SteveF-
I did not move anything. I pointed out that your suggestion the climate model predictions of tropospheric warming are more accurate than the measurements from UAH and Remote Sensing Systems is not correct. There was a correction in the calculations made by UAH, but even after this correction, the discrepancy in tropospheric warming you refer to remains substantial. There has been no adjustment in the models (at least none I have heard of) to account for a glaring discrepancy in the expected warming profile.
I pointed out that the UAH claimed that they were right about the whole troposphere and the models were wrong. Correct.
The dispute is now about the upper tropical troposphere, a vastly reduced scope. ‘Glaring discrepancy’ is sounding alarmist given the initial scope of the dispute.
SteveF –
“The accuracy of these historical forcings is the entire issue. Many people (including me) do not believe these forcings are anywhere close to right, and that aerosol effects (volcanic and man-made) are grossly overstated, while other forcings are understated. As I said before, if you use a different assumed set of forcings in Tamino’s exercise, you get an equally good (or better) match to the historical temperature record with a much lower estimate of sensitivity to forcing.”
So you accept that the ‘two box’ model is capable of modeling the climate response to forcings with remarkable accuracy? I think that was Tamino’s main point. For all the talk of validation and chaotic behaviour and exponential responses for the GCMs, the attempts to over complicate the debate ignore what can be reduced to something that is quite simple.
Your only argument is about the forcings used that I can see. Even then, what are the odds that the forcings used could come up with the goods?
bugs (Comment#20075) September 16th, 2009 at 7:18 am
Pretty good if the GCM isn’t capturing anything widely different from linear behavior, and the forcings are the same ones that make the GCM fit.
“I pointed out an example of the models making people fix their measurements. The tropical troposphere is one part of the large climate system. Apparently there is no debate about the rest of it, only this one section?”
Your “pointing out” was moronic. MODELS DID NOT AND SHOULD NEVER MAKE PEOPLE “FIX” ANYTHING! Get the history right for heaven’s sake!
http://www.uah.edu/News/climatebackground.php
“The tropical troposphere is one part of the large climate system. Apparently their is no debate about the rest of it, only this one section?”
The tropics are not just “one part”. They are a HUGE part of the system (which is not “large” anyway). And anyway, the discrepancy is global:
http://www.climatesci.org/publications/pdf/R-345.pdf
The extent of errors there would have to be in the model’s parameters to fail to get the vertical profile of temperature change correctly must be profound.
“So you accept that the ‘two box’ model is capable of modeling the climate response to forcings with remarkable accuracy?”
A two box model maybe, but not that model certainly. And please define “remarkable accuracy”. Is it really “remarkable” that you can use Hansen’s hand picked forcings, Hansen’s model’s response time, and get a fit to Hansen’s temperature data that is as good as Hansen’s model? No! It would be “remarkable” if Tamino failed to get a good fit!
“Your only argument is about the forcings used that I can see. Even then, what are the odds that the forcings used could come up with the goods?”
Pretty good when the forcings can be picked to maximize the fit.
Jeez you are thick.
Bugs:”The dispute is now about the upper tropical troposphere, a vastly reduced scope. ‘Glaring discrepancy’ is sounding alarmist given the initial scope of the dispute.”
What are you talking about? The amplification should show up in the _tropical_ upper troposphere – not somewhere else. How was the scope reduced?
Andrew_FL, when you have a discrepancy between experiment and theory, the problem can come in from either side. Till you identify the source of the discrepancy, you can never be sure which (or how much of each in many cases) is responsible for it.
Of course, there are certainly known problems with the realism of the global climate models that they are only now beginning to address, so that remains a strong candidate anytime you find a discrepancy.
But in this case, we already know there were problems with the satellite data, and addressing these problems has, as bugs said, reduced the level of disagreement between model and data.
The level of remaining discrepancy is in my opinion understated because of the flawed statistical manner in which the ensemble of models were treated (short version they neglected vertical correlation in individual models when lumping them together), but that is another issue.
I don’t think there is a ghost of a chance that current models could adjust their parameters to explain this discrepancy, while being constrained to satisfy other physical constraints. The answer lies either in “improved physics” in the simulations or further undetected glitches in the data, or of course both.
Let’s be real!
The discrepancy between the models and the measurements of the tropical upper troposphere is about 0.5ºC. That would change projected GMST trends from about 2.0ºC/century to about 1.5ºC/century. That would mean that AGW is descending upon us at only a ‘slightly’ slower pace than with IPCC’s business as usual.
And how much the ocean ‘hides’ the full impact of the heat it’s absorbed, depends on presently unknown magnitudes of even SMALL possible increases in how far down heat is being transported by tropical mixing associated with the episodic downwelling and upwelling from ENSO-like (PDO, etc.) cycles; slightly deeper, the longer the lag.
Don’t look for too much in simple models.
Len Ornstein
Carrick-true, however, a discrepancy between models and data should not be what prompts a correction of the data. That was exactly what bugs claimed happened, but that is NOT historically accurate. Specific real errors were identified, and the result happened to decrease the discrepancy.
I agree, Andrew. A discrepancy should prompt a reexamination, but one should not just look for changes that make the discrepancy smaller.
That way lies confirmation bias.
bugs wrote:
“It’s an admission that I know my limitations and you don’t. The IPCC report is the central case for AGW, by the experts, using the scientific method.”
The IPCC report is NOT the central case. The IPCC report is just that.
A REPORT, a compilation of expert opinion. If you would take the time to read through some of the reviewers’ comments (which we had to FOI to get public) you would be shocked at the shoddy approach. NONE OF THIS, however, makes AGW false. Here is the difficulty I have with AGW. I believe in the science, but the behavior by certain bodies and certain scientists is shoddy and shocking. It’s a situation that is RIPE for a ‘Piltdown man’-like episode.
I believe in the science, the basic physics of GHGs. I believe in that because that science is well documented and well supported and MANY THINGS would have to change for that physical theory to be wrong. I believe in it because I worked with prediction codes, like lowtran and modtran, etc. I believe the planet is warming over the instrumented period. While the observation record is flawed and needs more scrutiny, there are parallel paths of support for this observation. I believe that added GHGs are the proximate cause of this and that man is the proximate cause of this increase. Call this “mere AGW” (with a hat tip to CS Lewis).
What I don’t believe in.
1. I don’t believe the “science” is settled. it is never settled.
2. I don’t believe in defending people who hide their data or their methods. I reject their findings out of hand. The IPCC report relies on these studies, so I reject it as a source of authority.
It is NOT the scientific method in practice.
3. I don’t believe in defending Tamino, or Steig, or Parker, or Jones, or Mann when they are clearly wrong. Michael Mann is wrong and AGW is right. I see no contradiction in this. What I see is believers unwilling to criticize their own. You see the same thing in politics and religion.
4. I don’t believe the IPCC report is a scientific document and people that point to it as a source of authority need to understand the difference between primary and secondary research and they need to read some of the backstory on that report.
Andrew_FL (Comment#20030) September 15th, 2009 at 5:11 pm
Your coherence network and mine. Andrew_KY has an incoherent network.
steven mosher,
“Andrew_KY has an incoherent network.”
Was someone’s meatloaf touching their mashed potatoes at lunch today? My deepest apologies. 😉
Andrew
steven mosher (Comment#20085)-It might be interesting to compare our levels of belief, as I am a self proclaimed denier and you a “lukewarmer”.
I believe that: There is some AGW which is due to a number of factors, including CO2. I believe that this has contributed significantly to the 1978-2008 warming, although that was very similar to the 1911-1941 warming. I also believe that no one has adequately accounted for all the various factors, natural and anthropogenic.
I outright deny: That there is any coming disaster due to the small AGW effect that appears to be supported so far. The evidence linking AGW to various instances of unpleasant weather, even persistent bad weather, is weak at best.
Bugs,
“Your only argument is about the forcings used that I can see. Even then, what are the odds that the forcings used could come up with the goods?”
As I noted before, it is quite simple to make curve fit models like Tamino’s using a broad range of forcings, all of which hind-cast the temperature record quite well (at least as well as Tamino did). So the odds are really quite good that the forcings and ocean lags selected by Tamino would make reasonable hindcasts. How could they not? What the heck, all the different GCM’s make decent hind casts, even though they use a range of assumed forcing histories and diagnose a range of climate sensitivities. So in this case, the chance of making accurate hind casts looks pretty close to 100%. The assumed forcings and the assumed ocean lags are part-and-parcel of the GISS model, just as other combinations of ocean lags and forcing are part-and-parcel of other GCM’s. So yes, the inputs of assumed forcing and ocean lags really are the crux of the argument. GIGO.
When virtually all the models do not make accurate predictions (like the expected amplification of warming in the tropical troposphere), this suggests that all of the models suffer common errors/deficiencies. This ought to be addressed, but is apparently being ignored. Could this be because a low level of tropospheric amplification in the tropics also implies a substantially lower climate sensitivity and demands a different set of assumed forcings and ocean lags? I can’t be certain, but I suspect so, because peoples’ expectations really do affect their analyses.
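To put a number on the GIGO point, here is a toy sketch (Python, with a synthetic forcing series and an assumed simple response form; nothing here is Tamino’s or GISS’s actual code or data): scale an assumed forcing history down and the fitted sensitivity up by the same factor and the hindcast is unchanged, so hindcast skill by itself cannot separate the forcing assumption from the diagnosed sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2004)
# toy forcing history (W/m^2): a ramp plus noise, purely illustrative
forcing = 0.02 * (years - 1880) + 0.3 * rng.standard_normal(years.size)

def hindcast(F, sensitivity, tau=30.0):
    """Sensitivity times the forcing convolved with a normalized exponential
    response (time constant tau, in years), a common simple-model form."""
    kernel = np.exp(-np.arange(F.size) / tau)
    return sensitivity * np.convolve(F, kernel / kernel.sum())[:F.size]

T1 = hindcast(forcing, sensitivity=0.6)              # one forcing/sensitivity pair
T2 = hindcast(0.7 * forcing, sensitivity=0.6 / 0.7)  # weaker forcing, higher sensitivity
print(np.allclose(T1, T2))                           # True: identical hindcasts
```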
Question from a layman.
Has anyone tried the matching by withholding some of the data and using the matching as a forecast tool?
Does this question make any sense?
In answer to the comments about Newton and the efficacy of his laws, one should consider Ptolemy and his epicycles. With epicycles, one can get exact predictions of planetary positions. If there are errors, just add more epicycles.
Newton’s Laws were preferred to the Ptolemaic model for reasons other than their match to planetary orbits. Empirical fits that cannot be generalized do not provide useful theories.
TAG (Comment#20090)-Well, unfortunately that wouldn’t work. Well, I should actually say that it wouldn’t be very meaningful, since as we’ve been discussing, the “forecast” of the other half of the data makes use of data which is not independent of either the model or the rest of the data.
HOWEVER!
I suspect that any fit of the model to data from the start of the data to 1941 will over predict the rest of the data IF you let the temperature data determine your sensitivity. Why? Look at Tamino’s model:
http://tamino.files.wordpress.com/2009/08/2boxallsoi.jpg?w=500&h=325
The 1911-1941 warming is underestimated by the model!
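For what it’s worth, TAG’s hold-out idea is easy to sketch. Here is a hedged toy version (synthetic forcing and “observed” series, not the real GISS forcings or temperature record): fit a lagged-forcing regression to the years through 1941 only, then score it on the withheld years.

```python
import numpy as np

def lagged_response(F, tau):
    """Forcing convolved with a normalized exponential response, time constant tau (years)."""
    k = np.exp(-np.arange(F.size) / tau)
    return np.convolve(F, k / k.sum())[:F.size]

rng = np.random.default_rng(1)
years = np.arange(1880, 2004)
forcing = 0.015 * (years - 1880) + 0.2 * rng.standard_normal(years.size)   # toy W/m^2
observed = 0.5 * lagged_response(forcing, 30.0) + 0.1 * rng.standard_normal(years.size)

train = years <= 1941
X = np.column_stack([lagged_response(forcing, 1.0),    # "fast" regressor
                     lagged_response(forcing, 30.0),   # "slow" regressor
                     np.ones_like(forcing)])           # intercept / baseline
coef, *_ = np.linalg.lstsq(X[train], observed[train], rcond=None)

rmse_out = np.sqrt(np.mean((X[~train] @ coef - observed[~train]) ** 2))
print("out-of-sample RMSE:", round(float(rmse_out), 3))
```

Andrew’s caveat still applies: with the real record, the withheld half is not independent of the forcing estimates or the model tuning, so passing this check is weaker evidence than a genuine forecast.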
I’m just wondering where Lucia is. Nick and Arthur stand by their claims. She disappeared after a lot of hand waving.
When virtually all the models do not make accurate predictions (like the expected amplification of warming in the tropical troposphere), this suggests that all of the models suffer common errors/deficiencies. This ought to be addressed, but is apparently being ignored.
All the models suffer from errors and deficiencies. Talk to the people who create the models and they will tell you that. One of the biggest problems is clouds, they hope that the new generation of computers will allow grid sizes to get small enough to deal with that. To what extent does it matter? The results from the existing models, despite the deficiencies, seem to be reasonable.
Well bugs (#20101) has given us the definition of a troll
bugs:
“Seem reasonable”?
Is that a statistical analysis result that I am unaware of? LOL
Otherwise, all you’ve said is that, to some people, the disagreement between models and data subjectively isn’t all that bad.
Big deal.
bugs,
“Nick and Arthur stand by their claims”
Then Nick and Arthur need to recalibrate. 😉
Andrew
Okay, I’m amused. Bugs has been reduced to saying models are “close enough for government work”-more or less! 😆
Bugs,
“The grid sizes mean clouds are a problem. New technology should improve that.”
This admission means the models are not useful for projecting climate, much less predicting it. Why? Because a very small change in cloud coverage balances ALL other forcings. Your granularity is too large to capture these changes even assuming the rest is correct. The longer the projection, the larger the likely variance.
This may be the worst problem, but, definitely not the only one. Yet you BELIEVE the models are useful…
As I noted before, it is quite simple to make curve fit models like Tamino’s using a broad range of forcings, all of which hind-cast the temperature record quite well (at least as well as Tamino did).
I thought the only forcing that was a significant problem was aerosols. The rest are not adapted to the hindcasting. “all of which” is a stretch.
Kuhnkat
This admission means the models are not useful for projecting climate, much less predicting it. Why? Because a very small change in cloud coverage balances ALL other forcings. Your granularity is too large to capture these changes even assuming the rest is correct. The longer the projection, the larger the likely variance.
It’s not any sort of an ‘admission’. Modelers openly discuss the pro’s and con’s of their work. Tamino’s model gave a good result without any attempt to model clouds, or much else of the components of a climate.
Okay, I’m amused. Bugs has been reduced to saying models are “close enough for government work”-more or less!
I’m not ‘reduced’ to saying anything. The variations between models and the errors expected from them are well publicised.
Well bugs (#20101) has given us the definition of a troll
By asking for a return to the topic? That’s a new one. Lucia has yet to resolve the differences between her stance on the matter and Arthur and Nick.
bugs:
You are saying absolutely nothing.
There are problems in the comparison between model and data, they aren’t well understood or people wouldn’t still be working on them.
There are problems in the comparison between model and data, they aren’t well understood or people wouldn’t still be working on them.
People aren’t working on improving the models and the problems with them such as clouds? News to me.
There are problems in the comparison between model and data, they aren’t well understood or people wouldn’t still be working on them.
There are problems such as grid size. A cloud is smaller than the current grid sizes, so it is hard to represent them within the size of a grid. New generation supercomputers offer the ability to make grid sizes much smaller, and hence clouds become much more realistic. It’s not a matter of not understanding, just the limitations of technology.
bugs:
I think he was saying that people are working on them.
Having a relative lack of observations and lack of fine enough models to play with is a good approximation of “not understanding” in my book.
Resolution will fix a lot of problems, but you have to understand that parameterizations do not typically scale without readjustment when you simply reduce the grid size.
Reedz moar carefulleez bugz.
That comment indeed says they are working on them because the problems aren’t well understood.
Last I checked, they didn’t get all of the physics of clouds & precipitation (in particular) correct, even when they tried doing mesoscale models. So just saying “It’s not a matter of not understanding” is showing oddly a lack of understanding on your part of the scope of the problem.
I think this applies to you (who doesn’t know that you don’t know) as well as to the climate and weather modelers, who at least know that they don’t know.
Really though, have you hijacked this thread long enough yet? LOL.
Carrick (Comment#20125) September 17th, 2009 at 1:20 am
…bugz…Really though, have you hijacked this thread long enough yet? LOL.
I hold to the notion that any good faith discussion is productive, but it would be nice to finally break out of the unit circle on one of these roundabouts. 😉
Kuhnkat
” Because a very small change in cloud coverage balances ALL other forcings. Your granularity is too large to capture these changes even assuming the rest is correct. The longer the projection, the larger the likely variance.”
Can you quantify how much cloud cover matters?
It does look like you just declared that it was critically important, but with no justification. Can you quantify how important the cloud cover is? If you can’t quantify it how do you know?
Carrick
“Last I checked, they didn’t get all of the physics of clouds & precipitation (in particular) correct, even when they tried doing mesoscale models.”
Can you quantify the problem?
Can you give a measure of how important this is?
bugs 20102 –
“The results from the existing models, despite the deficiencies, seem to be reasonable.”
I’m not so sure. There seems to be fairly clear evidence that many of the models are very poor at hindcasting. Try running your eyeballs over these comparisons:
http://www.climateaudit.org/data/models/models_vs_hadcru.pdf
Steve plotted the Hadley temperature data together with a Rahmstorf type smoothing of the data. Each model output was also smoothed in the same way. The observed and modelled curves were lined up to agree in 1990.
This is not a very sensible statistical way to do the comparisons but Rahmstorf seems to approve of this method to show how good the match is in the post 1990 period.
It is interesting that a GISS model does a rather better job at hindcasting than many of the others. This could be because they have a better understanding of the physics in their model as well as better assumptions about the forcings. Equally, it could mean they spent more time tuning the model and forcings to match a known temperature history.
Either way, it is not surprising that a simplified version of the GISS model using their forcing history and a distillation of their physics can give a decent match.
Really though, have you hijacked this thread long enough yet? LOL.
I’d like to see Lucia return to the debate with Nick and Arthur. She had claimed victory and they seemed to be wondering what she was talking about.
UAH vs the Models. The models win.
http://www.realclimate.org/index.php/archives/2005/08/et-tu-lt/
In previous posts we have stressed that discrepancies between models and observations force scientists to re-examine the foundations of both the modelling and the interpretation of the data. So it has been for the apparent discrepancies between the Microwave Sounding Unit (MSU) lower tropospheric temperature records (MSU 2LT), radiosonde records and the climate models that try to simulate the climate of the last few decades. Three papers this week in Science Express, Mears et al, Santer et al (on which I’m a co-author) and Sherwood et al show that the discrepancy has been mostly resolved – in favour of the models.
bugs–
Yes. Nick, Arthur and I seem to have gone quiet. Here’s my version of the synopsis.
I went on vacation to Wisconsin. Before I left, Arthur switched to saying that it doesn’t matter if the projections for the individual boxes or the magnitudes of the individual parameters don’t seem reasonable; we can still assume the magnitude of the one parameter of interest is somehow reasonable. He also seemed to be buttressing this argument by comparing himself & Tamino to Newton in ways that made no sense.
Nick, evidently, never thought it mattered whether or not the projections for the individual boxes look reasonable.
What I’ve maintained– even in the early comment that got me banned at Taminos– is that if these don’t map into something physically reasonable for the earth’s climate system, then what we have is an interesting math problem or curve fit. Maybe it gives right answers; maybe not. The thing about curvefits is… they tend to track the data they were fit to. That’s the way curvefits work.
Obviously, Arthur at one time took the notion that the method reproducing two boxes that are physically reasonable for the earth is sufficiently meaningful to beaver away and try to show the thing mapped into something physically realistic for the earth’s climate system. So far, he has been unable to find and post such a mapping. I may be mistaken, but the notion that it might not matter whether the individual model parameters map into something physically realistic for the earth seemed to materialize after he realized that he probably wasn’t going to find a mapping that is physically realistic for the earth.
Right now, no one seems to be defending the claim that the solution using Tamino’s time constants can map into a two-box model that is realistic for the earth. Steve Reynolds (I think) has identified a two-box model that might be reasonable for the earth and fits the data well, but it has different time constants and climate sensitivities than Tamino’s. Nick showed long ago that Tamino did not pick the best fit time constants and that the best fit short time constant is shorter than the one Tamino chose. (The long time constants Steve found that fit well are shorter than the one Tamino chose.)
Everyone agrees that modifying the time constants doesn’t make a huge difference to the estimate of the climate sensitivity. (No one ever disputed this.) However, it does make some difference, and that difference is meaningful in terms of the range of uncertainty in the climate sensitivity that existed before this model was posted at Tamino’s.
So, to those who care whether or not the method is any better or worse than other methods, the effect of the choice of time constant is meaningful. To those who just want to say “It’s an estimate; let’s not talk about accuracy, precision or uncertainty.”, the method is referred to as “robust”. (Meaning, if they apply it twice to the same data, they pretty much get the same answer. 🙂 )
In more recent comments, Arthur and Nick seemed to be discussing diffusive models. It may be that Arthur has given up on Tamino’s two-box model and changed to a different type of model that doesn’t suffer from the problems of a two-box model.
Maybe you will get your wish and Arthur or Nick will come back and explain their version of events.
What I’ve maintained– even in the early comment that got me banned at Taminos– is that if these don’t map into something physically reasonable for the earth’s climate system, then what we have is an interesting math problem or curve fit. Maybe it gives right answers; maybe not. The thing about curvefits is… they tend to track the data they were fit to. That’s the way curvefits work.
I agree. That’s why when people increase the number of parameters and adjust the percentage effect of each, they have added two degrees of freedom: one for the parameter, one for the percentage.
They only include the parameter if it improves the fit. It’s a selection, and it may be a chance correlation. Add in the parameter and it’s the same.
i.e. If the model has X tuneable parameters, and it is the same as an X-term polynomial curve fit, it’s not a good model at all.
Nick
Lucia,
Did you enjoy Wisconsin cheese during your travels?
bugs (Comment#20135)-Ah, vintage dishonest spinning from RC! Thank you for bringing this crap up again. You know, Spencer had some comments on this when those papers came out:
http://tcsdaily.com/article.aspx?id=081105RS
However, at this point it is extremely disingenuous for us to look back at some papers, at least one of which was BS, and say “AH HA! This was resolved years ago, so shut up!”
What is the situation presently? Using Santer’s own tests, the Models do not pass against UAH in the tropical troposphere. This is AFTER many corrections have already been implemented.
http://arxiv.org/pdf/0905.0445
And, for good measure:
http://icecap.us/images/uploads/EPA_ChristyJR_Response_2.pdf
Andrew_FL –
It sounds like Spencer is agreeing with them,
The third paper (Santer et al, 2005) takes a more thorough look at the theoretical expectation that surface warming should be amplified with height in the troposphere. The authors restate what had already been known: that the UAH satellite warming estimates were at odds with theoretical expectations (as had been some radiosonde measures). Now, the convergence of these newly reported satellite and radiosonde estimates toward the surface warming estimates, if taken at face value, provides better agreement with climate models’ explanation of how the climate system behaves.
It was UAH that said the models were wrong and the UAH was right in their paper. Suddenly, that paper has been forgotten.
My point still stands, there is an example of the models being right and the data wrong, and people having to correct data to fit the models. And the data that was wrong was the UAH satellite data.
SteveF–
Actually, I ate no wisconsin cheese during this trip. I go to Mauthe Lake every fall bearing homegrown basil, garlic, Italian parmesan, pinenuts, a pasta roller and home mixed dough for fettuccini . Then I make fettuccini with Pesto. We drink Chianti.
Then, I leave the guys to let them “fish”.
bugs–
It may turn out both were wrong. . .
“My point still stands, there is an example of the models being right and the data wrong, and people having to correct data to fit the models. And the data that was wrong was the UAH satellite data.”
This point doesn’t stand at all. UAH still doesn’t agree with the models. So you can’t say “the models were right and the data wrong!”. The data were wrong, but that wasn’t enough to make the models right!!! And one more time, UAH was NOT corrected to “fit models”; it was corrected because of REAL WORLD problems! God, you’re a dumbass.
Nathan:
It’s on the same order of magnitude as total greenhouse gas forcings. According to this reference water is about 75 W/m2 and CO2 is around 32 W/m2. NOAA gives total change in forcings from anthropogenic activity since 1750 to be around 2.7 W/m2. The wikipedia article on cloud forcings list forcings from reflection of solar radiation by clouds at 44 W/m2 and the radiative absorption of LW radiation at 31 W/m2.
The question is the effect of anthropogenic activity on warming. In very broad strokes, thin (cirrus) clouds tend to reflect less solar radiation compared to the LW radiation they absorb, so on aggregate they heat the earth, whereas heavier clouds (it is thought) introduce a net cooling.
One plausible explanation for the tropical upper tropospheric anomaly as you probably know is the iris effect in which it is posited that increasing greenhouse forcings has the effect of reducing cirrus cloud formation in the tropics.
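A quick back-of-the-envelope comparison of the magnitudes quoted above (all treated as rough, assumed values) shows why small fractional changes in cloud properties matter:

```python
cloud_sw_reflection = 44.0   # W/m^2, solar radiation reflected by clouds (figure quoted above)
cloud_lw_trapping = 31.0     # W/m^2, longwave absorbed by clouds (figure quoted above)
anthropogenic = 2.7          # W/m^2, NOAA change-in-forcing-since-1750 figure quoted above

# fractional change in the shortwave cloud term alone that equals all anthropogenic forcing
print(round(anthropogenic / cloud_sw_reflection, 2))                        # ~0.06, about 6%
# relative to the net cloud effect (reflection minus trapping) it is larger
print(round(anthropogenic / (cloud_sw_reflection - cloud_lw_trapping), 2))  # ~0.21
```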
I can’t really quantify it, but I can enumerate some of the issues.
There are papers discussing various problems.
One is they can’t predict reliably when condensation occurs. Another is instabilities.
The effect of aerosols on cloud formation is also not well understood. It may be that anthropogenic aerosol emissions increase cloud coverage, which again could produce a net regional cooling effect.
Given the magnitude of the effect of cloud forcings, it is important to quantify the effect of anthropogenic forcings on them.
Maybe somebody could link a review of state of the art on cloud modeling if they know of a good reference.
“My point still stands, there is an example of the models being right and the data wrong, and people having to correct data to fit the models.”
Bugs, I’m pretty sure that UAH didn’t correct their data by fitting to models.
One notable anthropogenic effect is jet contrails.
Apologies for my lack of participation, but I had a major work deadline Tuesday and followup since then. Also kids are in school, lots of driving around, and I have much less time at the moment – maybe next week to get back to things…
Lucia says:
Um, sorry, but what again was wrong with my case 3 example that you posted in the “Update: Wednesday” above? There you say: “Since these don’t look insane, we can probably now move on to judging whether the parameter are, or are not, realistic for a two box model for the earth. ”
In fact, the criterion that the two long-term response numbers for “slow” and “fast” boxes be close to one another (or whatever ratio you like) simply imposes one more criterion to cut through our 3-dimensional space of two-box solutions. There is still a two-dimensional infinity of possible solutions, of which case 3 above is perhaps one example (or if you like a different long-term ratio of temperature responses, we can find one with that). One example out of infinitely many doesn’t tell us a whole lot.
Huh? Probably wasn’t going to find a mapping physically realistic? I thought I had, and I was moving on, inspired by some of Nick’s thoughts here and elsewhere. Weird psychological deductions there Lucia.
In fact, my view on two-box is not precisely the way you characterize it here. From the very beginning I have been careful to phrase my commentary on the two boxes to indicate I had no particular preconceived notions what they might correlate with in Earth’s physical system – I have been consistently calling them “slow” and “fast” rather than “ocean” and “atmosphere” or “surface”, despite my adoption of something close to Lucia’s notation. Perhaps it’s surface vs. deeper ocean. Or perhaps land vs. ocean. Or perhaps the “fast” box correlates best with the polar regions, and the “slow” one the tropics. Or some odd combination of all of the above, because each portion has “fast” and “slow” responses of various sorts (which a diffusive model would capture in a perhaps more uniform way). Or perhaps you can find many different two-box systems that roughly match Earth’s climate with parameters corresponding to all of the above divisions and more… We certainly seem to have found one that allows Tamino’s fit to work with heat capacity terms that seem to match atmosphere and a few hundred m of ocean, at least.
From my very first comment here (#18410) I wrote:
No change in my position on that.
Regarding Santer, Christy’s comment points out:
So even with Santer’s paper, the conclusion still stands that a real discrepancy is present. The tropical hot spot is not there in any model that is constrained to reproduce the surface temperature record.
No amount of RC brand airbrushing fixes that.
Arthur Smith, I think Tamino would disagree with you on your interpretation of his model as non-physical.
You certainly can interpolate the GISTemp data record using any nonphysical interpolating function you would like. I’m not sure what you’d learn from it though, if you insist it shouldn’t have a physical interpretation.
Last word to you, since this is too much like a puppy dog chasing its tail for my tastes.
And the data that was wrong was the UAH satellite data.
There was nothing wrong with the satellite data, the problem was (and still may be) in the model used by UAH to convert the microwave data into temperature (including arithmetic and orbit drift errors).
Regarding the ‘two box model’ it seems that the original model based on the atmosphere and ocean failed the physical tests applied by Lucia (including 2nd Law applied correctly). Arthur et al. then attempted to find if a fit could be made which did not fail those tests. The current status from Arthur appears to be that a reasonable fit can be made but we don’t know what the two boxes represent (just what they aren’t).
Does that about sum it up?
Here’s my view of where we ended up. I think what Tamino actually did was to fit a model of forcing with delay to the GMST, from which he deduced a climate sensitivity. That’s fairly straightforward, and it seems to be done correctly. If you really wanted to find something to criticise (and it seems some do), I would advise focussing on the quality of the fit. It looks good, but whether it’s good enough to establish some kind of uniqueness for the deductions made could be tested.
That was my understanding from the beginning, but I thought it just might be interesting to see if the physical box model, which could be used to suggest delay constants, implied anything further. The fit obtained does imply some constraints on the heat capacity that you associate with the boxes, to meet physicality expectations. If you try to involve the heat capacity of the entire ocean, contradictions emerge. But that’s not surprising – we know that you can’t heat the whole ocean on this time scale. There was a relation between how the flux is divided between the boxes and the heat capacities that you could think about. If you were prepared to assume the top box represented the atmosphere augmented by a surface layer of liquid (and solid) say, and the second box represented a hundred metres of ocean, say, then you could find reasonable parameter ranges that satisfied physicality.
So I don’t think you can uniquely identify boxes that correspond to the model, but there are ranges of box properties that work, so the fitting model is not invalidated.
The attempt to optimise time constants suggested a long time constant of about 20 years, vs Tamino’s 30. I wouldn’t put much weight on the difference. The error is fairly flat in that range, and the sensitivity drops from about 0.67 to about 0.6. And there isn’t a single correct criterion. One thing we don’t know is what years Tamino actually included in the least squares count. I used the whole range, 1880-2003, but there’s a strong case for downweighting or dropping the early years, since the delay means that a lot of the data needed to estimate them properly is missing.
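For readers who want to try the kind of fit Nick describes, here is a minimal sketch (assumed model form, synthetic toy data, not Nick’s actual code): regress temperature on two lag-decayed copies of the forcing and, optionally, downweight the early years whose delayed response depends on pre-1880 forcing we don’t have.

```python
import numpy as np

def lag_decay(F, tau):
    """Forcing convolved with a normalized exponential response (time constant tau, years)."""
    k = np.exp(-np.arange(F.size) / tau)
    return np.convolve(F, k / k.sum())[:F.size]

rng = np.random.default_rng(2)
years = np.arange(1880, 2004)
forcing = 0.015 * (years - 1880) + 0.2 * rng.standard_normal(years.size)  # toy series
temp = 0.6 * lag_decay(forcing, 20.0) + 0.1 * rng.standard_normal(years.size)

def fit(tau_slow, downweight_before=1900, w_early=0.2):
    X = np.column_stack([lag_decay(forcing, 1.0), lag_decay(forcing, tau_slow),
                         np.ones_like(forcing)])
    sw = np.sqrt(np.where(years < downweight_before, w_early, 1.0))  # weighted least squares
    coef, *_ = np.linalg.lstsq(X * sw[:, None], temp * sw, rcond=None)
    return coef, float(np.sum((sw * (temp - X @ coef)) ** 2))

for tau in (10, 20, 30, 50):
    coef, sse = fit(tau)
    print(tau, round(sse, 2), coef.round(2))  # see how flat the error is across tau
```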
Lucia,
“Actually, I ate no wisconsin cheese during this trip. I go to Mauthe Lake every fall bearing homegrown basil, garlic, Italian parmesan, pinenuts, a pasta roller and home mixed dough for fettuccini . Then I make fettuccini with Pesto. We drink Chianti.”
Sounds delicious, but I´d prefer a good Tuscan.
Arthur–
They don’t look insane as plots. But the parameters look inconsistent with the earth’s surface temperature being near 287 K to me. But, presumably, you are looking at that, right? I’ve been doing other things, so I figured I’d give you time to look at your parameters.
I thought you hadn’t found them, and were going to get back to it when you found the time. But so far, you haven’t checked your parameter values to see if they are realistic.
A lot of effort by several different people went into this series of threads. In the end, it seems fair to say that Tamino’s (and Arthur’s and Nick’s) fits are reasonable, and suggest a sensitivity that is in the range of the GISS model sensitivity. It is also clear that there is no simple way to relate the model’s two boxes to specific physical parts of the Earth and its atmosphere.
I think it is also fair to say that, as with any curve fit, hindcasting is easy; it’s forecasting that takes skill. Whether or not Tamino’s simple model or the GISS GCM can make skillful predictions remains to be seen.
Finally, it is crucial to note that any curve fit model depends on the assumed inputs, and in this case using the GISS forcing estimates means that any result other than a sensitivity similar to the GISS GCM estimate would be surprising. The accuracy of the assumed forcings is the weakest link in the chain, regardless of the details of the model method or how/if the model corresponds to identifiable physical components.
Arthur
In which case, he should not have been the slightest bit upset when I asked if he’d checked to see if the box mapped into physically realistic space. He should have said “No, it doesn’t matter”. He should also have decided that violating the 2nd law of thermo didn’t matter. Instead, he said he had tested if it was physically realistic.
He also criticized other people’s fits as not physically realistic, etc.
So, you can interpret what he said however you wish. When Tamino posts something saying he never intended anyone to think his box had anything to do with physics, then I might believe he never claimed so.
In the meantime, anyone who wants to bin it into “just another curve fit territory” is welcome to do so with no fear of contradiction from me.
Phil
That appears to be the case. Except he hasn’t shown that the boxes map to the earth’s atmosphere and ocean, or to anything in particular. He’s just managed to get graphs that don’t look pathological like the first ones. He hasn’t checked to see whether those might map into the top “N” meters of ocean, the full atmosphere etc. He hasn’t checked to see if the values of α etc. would correspond to a world that has a temperature near 287K etc.
I have reason to believe the values he posted fail the second criterion, but I may be wrong. In any case, he has reacted so negatively to me showing that his choices fail simple checks that I figure I’ll give him time to do the checks.
From my point of view, until he shows that his cases actually do map into something physically realistic, he hasn’t shown it. (Of course, he’s not obligated to do so. But merely showing a case with plots that are not insane is hardly enough to claim he has shown it’s realistic.)
Nick
And he made claims about physicality including: a) discussions of energy flow, b) links to a physical model, c) criticized others for using unrealistic models when performing regressions, d) said he checked to see whether his model violates the 2LOT and/or is physically realistic.
The claim of physicality is what the argument has been about.
If you want to focus on this as a purely statistical model, fine. But, people do get to criticize Tamino for making physical claims for his model even if you prefer to pretend we should ignore such claims. (Meanwhile, people like bugs appear and laud all the physics.)
In the meantime, anyone who wants to bin it into “just another curve fit territory” is welcome to do so with no fear of contradiction from me.
So, I’ve a question on this.
Are there any statistical tests that can tell whether or not the model is any better than an equivalent curve fit?
Hence the question about a polynomial versus model comparison. Pick the degree of the polynomial to match the number of weightings (degrees of freedom) of the model, and see which is best.
If the polynomial gives a better fit, then its hard to claim that the model is better, since the polynomial is clearly going to lack predictive power.
Nickle
Nickle (Comment#20162) September 17th, 2009 at 11:37 am
Polynomials are not the only basis for the curve fit. In this case we’re dealing with one example where you can take a given time series and essentially fit to an output using it and a lag-decay version of it.
Give me enough degrees of freedom and I believe I can fit white noise to the surface temperature record.
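oliver’s point is easy to demonstrate on pure noise (a toy calculation, nothing to do with the actual temperature data): keep adding polynomial degrees of freedom and the apparent fit to white noise keeps improving.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 124)          # same length as 1880-2003, for flavor
noise = rng.standard_normal(t.size)     # pure white noise "data"

for degree in (1, 3, 6, 12, 24):
    p = np.polynomial.Polynomial.fit(t, noise, degree)   # least-squares polynomial fit
    ss_res = np.sum((noise - p(t)) ** 2)
    ss_tot = np.sum((noise - noise.mean()) ** 2)
    print(degree, round(1.0 - ss_res / ss_tot, 3))       # R^2 creeps upward with degree
```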
Lucia – please explain what 287 K has to do with the linear ODE’s we’ve been discussing for the two-box model, because I don’t see it, sorry. All the physicality criteria you’ve specified up to now that we’ve been able to turn into real relationships on the two-box model parameters have been met. That was the reason I was interested in the problem, because your initial claim was that there was no solution (or none without negative beta values or something). But maybe we’ve missed another important criterion from our list, so please state it clearly enough that it can be put in the terms we’ve been using, and we’ll take a look at what solutions from our 3-dimensional or 2-dimensional infinity may still remain at that point.
Arthur–
What are you talking about? I have always said the two-box model needs to map into parameters that are physically realistic. I have mentioned the steady state solution before. Surely, with a Ph.D. in physics, you understand that anything expressed as anomalies has an underlying non-anomaly construct, and that the anomaly problem is nothing more than a linearization of the non-anomaly problem. The non-anomaly problem has a steady state solution, and it should not be “insane” either.
Why are you trying to make this be nothing more than a “linear ODE”? As a Ph.D., why are you trying to turn this into a sophomore level math assignment where you get to ignore things that aren’t provided to you in the form of a homework assignment?
If you understand the physics in this model, or even read my first post, you should be able to figure out what the underlying non-anomaly model is, and test the parameters you found to discover the surface temperature that corresponds to your values of α.
Nickle
Correlation coefficients and hypothesis tests are used to describe whether a model gives a good purely statistical fit.
There are “information criteria” tests to determine whether adding a parameter improved the fit enough to warrant adding the extra parameter.
Also, people from the sciences (and often other fields) do prefer models that are based on physics rather than just a curve fit where the parameters were obtained by fitting to data. Curve fits whose mathematical form is motivated by physics fall between purely physical models and pure curve fits (like a linear trend.)
That’s why this argument has gotten so involved.
Tamino represented his model as being motivated by physics. Nick’s point seems to be that we should ignore that obviously nonsensical claim and then criticize the fit as if it’s nothing more than a pure curve fit. So, he seems to be advising us to look at the correlation coefficients. (However, Nick’s approach would leave Tamino supporters like bugs and/or Arthur at times free to then imply or insist that the curve fit really is based on physics, thereby giving it more “credit” than it deserves.)
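For the information-criterion idea mentioned above, here is a hedged sketch (Gaussian-error AIC on synthetic toy data, not a claim about what Tamino’s actual fit would show): AIC rewards a lower residual sum of squares but charges two per extra parameter, so a second “box” only wins if it buys enough improvement.

```python
import numpy as np

def lag_decay(F, tau):
    k = np.exp(-np.arange(F.size) / tau)
    return np.convolve(F, k / k.sum())[:F.size]

def aic(y, X):
    """AIC for an ordinary least-squares fit, assuming Gaussian errors."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = np.sum((y - X @ coef) ** 2)
    n, k = X.shape
    return n * np.log(sse / n) + 2 * k

rng = np.random.default_rng(4)
years = np.arange(1880, 2004)
forcing = 0.015 * (years - 1880) + 0.2 * rng.standard_normal(years.size)   # toy series
temp = 0.6 * lag_decay(forcing, 20.0) + 0.1 * rng.standard_normal(years.size)

one_lump = np.column_stack([lag_decay(forcing, 20.0), np.ones_like(forcing)])
two_box = np.column_stack([lag_decay(forcing, 1.0), lag_decay(forcing, 30.0),
                           np.ones_like(forcing)])
print("1-lump AIC:", round(float(aic(temp, one_lump)), 1))
print("2-box  AIC:", round(float(aic(temp, two_box)), 1))   # lower is preferred
```

Note that this only ranks statistical fits; it says nothing about whether the parameters map onto anything physical, which is the separate issue being argued here.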
Nickle (Comment#20162)
This model has 3 fitted parameters (with intercept), 4 with SOI, and there’s some basis for saying that the time constants are also chosen with fitting in mind, so maybe 5/6. I’m pretty sure you won’t get as good a fit with a polynomial of that order.
The fact that you can get a good fit of temp based on forcing in this way (few parameters) does point to an association between them. You’re extracting real information about temp from the forcing. That is the basis for thinking that the fit is telling you something – in this case the sensitivity – which may have predictive uses.
However, the strength of that does depend on how good the fit really is, which I suggested above as something critically disposed people might want to examine.
Lucia –
Tamino represented his model as being motivated by physics. Nick’s point seems to be that we should ignore that obviously nonsensical claim and then criticize the fit as if it’s nothing more than a pure curve fit. So, he seems to be advising us to look at the correlation coefficients. (However, Nick’s approach would leave Tamino supporters like bugs and/or Arthur at times free to then imply or insist that the curve fit really is based on physics, thereby giving it more “credit” than it deserves.)
Tamino said
But such models and their results are not the topic of this post; I’d like to take a look at some simple models which are not computer models. They’re simple mathematical models of changes in global temperature, and although I’ve used a computer to do the arithmetic, I could have done so without a computer and they are most decidedly not computer models.
……
This can be thought of as a rough mimicry of an atmosphere-ocean model, where the atmosphere responds quickly while the ocean takes much longer. I’ll allow the atmosphere to respond very quickly (in a single year) while for the oceans I’ll use a timescale of 30 years.
I think Nick is right. It’s two boxes, one of which is fast, one of which is slow. He is providing these two boxes with their fast and slow characteristics because they are the two main features of the climate, but they are not meant to be good or literal representations of these physical places. Tamino’s point, once again, as far as I can tell, is that all it takes is a very rough approximation, done as a mathematical exercise, to produce a good model of some features of the earth’s climate. He then did some simple tests on that model, and it worked.
Despite the naysayers here, the forcings are not all picked to produce a curve fit; the paper about the uncertainty of forcings says the only one that is a real problem is aerosols. To think that you can manipulate that one variable to produce such a good curve fit is not reasonable, IMHO.
Lurker-
Bugs, I’m pretty sure that UAH didn’t correct their data by fitting to models.
OK, I could have phrased that better. In a dispute between the models and the observations, the observations were found to be in error. After the error was fixed, the match between the models and the observations was much closer.
There is still a disputed part, the upper tropical troposphere section of the atmosphere. The radiosonde data has been problematic as well, and the proximity of the stratosphere, which has been cooling, to the upper troposphere, could be part of the reason. The dispute to date has not been resolved.
Nick
The basis for thinking the fit is telling you something also relies on whether or not you think the fit is consistent or inconsistent with our knowledge of physics. Which is why I suggested those critically disposed might want to examine the physics.
Of course, one can also examine statistical features. But this is not an either/or issue. The statistical features don’t look bad. But they already looked good with a “1 lump” parameter, so it’s hardly surprising it would continue to look good with 2.
If Nick wants to look at things like various statistical information criterion to see whether the second parameter delivered sufficiently more information than 1 lump, I think he should go ahead and do it.
Obviously, those won’t address the issue of whether or not the fit has the added benefit of being based on physics, as was represented by Tamino in his post.
Bugs
In the quote you post, Tamino is describing a physical approximation of a system with “physical” features. The definition of a “physical” vs “non-physical” model has nothing to do with the word “computer”. The computer is not the aspect of a “model” that makes it a “physical” model.
All “computer” means is that the model is computationally intensive and requires a computer to perform the calculations. Computer models can be physical, mathematical, astrological, leprechaunian or what have you. The feature they all share is the computations are performed using a computer.
Despite the nay sayers here, the forcings are not all picked to produce a curve fit,
Well, how do you show that this is not the case?
i.e. If selecting a forcing ‘improves’ the model, it will be added.
However, that is true both for a valid inclusion and for including it merely because it produces a better fit.
Nick says this
This model has 3 fitted parameters (with intercept), 4 with SOI, and there’s some basis for saying that the time constants are also chosen with fitting in mind, so maybe 5/6. I’m pretty sure you won’t get as good a fit with a polynomial of that order.
The problem is that it’s not a 5/6 degree polynomial; it’s got more dimensions than that. Each parameter multiplies a whole time series.
Nickle-
Nick’s program spits out 3 parameters. But the user picks two time constants. So, that’s 5 parameters. One only defines a baseline– but that parameter still exists because we don’t “just know” the temperature baseline that creates T=0 at F=0 at equilibrium.
If one wished to, one could expand the two boxes to having 6 parameters. The extra one would define the level of dis-equilibrium for the surface temperature.
When hunting for “physics”, to relate to the 6 parameters, you have some more flexibility. ( For example, Arthur described the surface temperature of being a linear combination of the temperature of the two boxes. That gives you a parameter we called “y”.)
So whoever wants to think about applying any information criterion has plenty to work with. Since Nick thinks this is the more useful thing to do, I says, “Knock yer’self out, Nick!”
In the quote you post, Tamino is describing a physical approximation of a system with “physical” features. The definition of a “physical” vs “non-physical” model has nothing to do with the word “computer”. The computer is not the aspect of a “model” that makes it a “physical” model.
He says it is first and foremost “They’re simple mathematical models of changes in global temperature”. That is what they are.
He uses a “two box” model as a “rough” approximation. They don’t directly correspond to the ocean and atmosphere. The discussion here has decided already that you can’t do that and have a mathematical model that works. They just simulate the major features of the earth’s climate: a significant part that has a fast response, and a significant part with a slow response. He could have called them ‘x’ and ‘y’, but for obvious reasons of convenience, he called them the ‘atmosphere’ and ‘ocean’. They do not directly correspond to the atmosphere and ocean, though; this is just a rough approximation. With this mathematical model, what comes out? A good match.
bugs–
You go tell Tamino his model doesn’t have anything to do with physics and it’s just an empirical curve fit. 🙂
Bugs,
“They don’t directly correspond to the ocean and atmosphere…”
Then how do you prove the validity of the extracted value?
In other words, I could have a reasonably smart mathematician take the recorded temps and develop a 2 box model that gives an extremely low sensitivity. What makes that model better or worse than Tamino’s??
Kuhnkat–
All you need to do is
a) dig up the upper range of plausible forcings and
b) assume a shorter response time.
Here’s what Arthur found for the dependence of sensitivity on assumed long response time using the forcings Tamino used:
(Black is the total sensitivity based on Arthur’s interpretation of the meaning of the regression parameters.)
http://arthur.shumwaysmith.com/life/sites/default/files/two_box/two_box_vs_time.png
FWIW, Tamino picked a response time on the high side of what the general notion of the curve fit would suggest.
That’s interesting, lucia. Your graphs suggest that time constants less than 30 years produce sensitivities between 0.4 and 0.7. I think that is a fairly significant range of sensitivities.
John —
Those are Arthur’s graphs. But yes, given a set of forcings and temperatures, lower time constants produce lower sensitivities. This can actually be explained by the way responses built from negative exponentials in time behave.
Note that the sensitivity seems to increase by a factor of 3 when the long time constant increases from 10 to 100 years. A factor of 3 is not a trivial amount relative to the range for climate sensitivity provided in the IPCC reports.
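The mechanism lucia describes can be seen with a toy one-box calculation (assumed ramp forcing; the real forcing history and the second box change the exact numbers): for the same observed warming, a longer time constant means the system is further from equilibrium, so the inferred equilibrium sensitivity has to be larger.

```python
import numpy as np

t_end = 130.0                 # years of roughly ramp-like forcing, for illustration
for tau in (10.0, 30.0, 100.0):
    # fraction of the equilibrium response realized at t_end for a linear ramp forcing
    realized = (t_end - tau * (1.0 - np.exp(-t_end / tau))) / t_end
    print(tau, round(realized, 2), "-> fitted sensitivity scales like", round(1.0 / realized, 2))
```

This caricature gives roughly a factor of 2 from tau = 10 to tau = 100 rather than the factor of about 3 in Arthur's plot, which is unsurprising for a ramp-plus-one-box sketch, but the direction is the same.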
Nick Stokes (Comment#20197)
“The fact that you can get a good fit of temp based on forcing in this way (few parameters) does point to an association between them.”
Not if they aren’t independent to begin with. And they aren’t.
The superiority of the fit to a simple polynomial is mostly due to SOI (which contains information that is not independent of the temperature itself!!!) and the volcano forcing.
By the way, it’s flat out wrong to say that, as bugs does:
“the paper by about the uncertainty of forcings says the only one that is a real problem is aerosols.”
Because, as I tried to communicate to you before, modelers have been using erroneous total solar irradiance recons for years to better fit the early twentieth century warming (Even with LEAN, GISS E can’t get that warming right!). Additionally, subjective choices are made about estimating forcing by volcanoes before Pinatubo. And aerosols represent not one adjustable parameter but no less than four! That’s direct (magnitude and time history) and indirect (magnitude and time history). So the objection:
“To think that you can manipulate that one variable to produce such a good curve fit is not reasonable, IMHO.”
While invalid to begin with (since even a single adjustable parameter will always allow you to get the magnitude correct), it is also baseless, because frankly it’s flat out wrong to think that aerosols represent a single variable.
I’m sorry I really don’t have the patience to continue this endless run-around anymore. If you want to firmly believe that these EBMs tell you anything meaningful, I can’t stop you.
kuhnkat (Comment#20211)
An even better scientific formulation-how do you disprove it? In the words of Wolfgang Pauli:
Das ist nicht nur nicht richtig, es ist nicht einmal falsch! (“That is not only not right, it is not even wrong!”)
You go tell Tamino his model doesn’t have anything to do with physics and it’s just an empirical curve fit
He took the known forcings, a reasonable time constant, and out came the answer. You have implied he ‘chose’ his forcings. Once again, you are mindreading.
While invalid to begin with (since even a single adjustable parameter will always allow you to get the magnitude correct), it is also baseless, because frankly it’s flat out wrong to think that aerosols represent a single variable.
He’s matched a lot more than just a magnitude, he’s got a very good match to over a century of the temperature record.
bugs–
Known forcings? He used GISS estimates of forcings. Other modeling groups use other estimates. “The” answer? Is it “the right answer”?
One-parameter “lumpy” got a better fit than his two-parameter model! It’s actually rather amazing he added a fiddle factor and then did worse!
I’m looking at the climate sensitivity for Arthur’s model and it ranges from 0.4 to 1.2. I checked out lumpy and it gave a climate sensitivity of 1.7.
http://rankexploits.com/musings/2008/how-large-is-global-climate-sensitivity-to-doubled-co2-this-model-says-17-c/
It seems that lumpy would give a higher estimate for CO2 sensitivity which is what I think Tamino was looking for.
Hey bugs, since you are such a lover of Tamino’s model, can you tell me what the paired t-test p-value is for the model vs the GMT record? And are you going to offer it to Dr Annan in his bet with the Ruskies? Better yet, are you going to ante up yourself using Tamino’s model?
http://www.guardian.co.uk/environment/2005/aug/19/climatechange.climatechangeenvironment
One-parameter “lumpy” got a better fit than his two-parameter model! It’s actually rather amazing he added a fiddle factor and then did worse!
All you are doing are proving him right. According to you, even a one parameter model is all it takes to come up with a reasonable match to the temperature record.
bugs,
Repeating the same assertions over and over again isn’t proving anybody’s point, least of all Tamino’s. It would be more productive if you would actually address specific points or else refrain from adding to the noise.
John–
Lumpy won’t give the time constant Tamino likes. Oddly… I think this may “matter” to him. (“Some people” wigged out over Schwartz’s low time constant and wrote a comment to the journal.)
bugs–
No. That Lumpy is better than Tamino’s fit does not prove him right. Go back and read what I wrote about Lumpy: We can’t be confident that sensitivity is correct.
JohnC–
Also, be careful about units. Arthur switches back and forth between C per doubling of CO2 and C/(W/m^2).
The conversion requires an estimate of the forcing due to a doubling of CO2, so the two sets of numbers differ by that factor.
Arthur’s graph doesn’t indicate the units, but they seem to be C/(W/m^2). Lumpy is in C/doubling of CO2.
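The arithmetic for the conversion, using the commonly quoted ~3.7 W/m^2 for a doubling of CO2 (an assumed standard value, not a number stated by Arthur or Tamino here):

```python
f_2xco2 = 3.7                        # W/m^2 per doubling of CO2 (assumed standard value)

print(round(1.7 / f_2xco2, 2))       # Lumpy's 1.7 C/doubling is about 0.46 C per (W/m^2)
print(round(0.4 * f_2xco2, 1), round(0.7 * f_2xco2, 1))  # 0.4-0.7 C/(W/m^2) -> ~1.5-2.6 C/doubling
```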
“oliver (Comment#20239) September 18th, 2009 at 1:30 am
bugs,
Repeating the same assertions over and over again isn’t proving anybody’s point, least of all Tamino’s. It would be more productive if you would actually address specific points or else refrain from adding to the noise.”
Oliver: Hear, hear!
Bugs:
Oliver is right Bugs, you keep saying things like “known forcings”, and “reasonable time constant”, when these are anything but known and reasonable. For goodness sakes Bugs, the best fit using the GISS forcing history with Tamino’s model doesn’t even come from the lag constants he selected. The best fit lag values are DIRECTLY controlled by the assumed forcing history, just as they must be. Change the assumed forcing history and you get different best fit lag values and different diagnosed climate sensitivity. What part of this do you not understand?
Lucia, you claim:
But this is simply untrue on several levels. First, the only reason you can treat large portions of the planet as a “box” is because you are looking at anomalies. The non-anomaly temperature within each box is very far from being a single value – it varies significantly by day/night, by season, by latitude, by specific location from surface to tropopause and beyond, etc. Even if we make the assumption that the “fast” box is indeed the atmosphere, the average surface temperature (the 287 K you have mentioned) is quite different from the actual average temperature of the atmosphere as a whole.
Secondly, the ODE’s in question derive from conservation of energy, and the alpha terms correspond to the *change in outgoing energy flux* corresponding to a change in temperature anomaly value for that box. But that outgoing energy flux change is the energy actually leaving our atmosphere box (through the tropopause, say), which is after it’s run the gamut of GHG’s trying to keep it in. Unless you actually want to delve into the complex radiative transport issues I don’t see any simple relation that requires a certain value for alpha. But maybe I’ve missed something, I’d love to hear your further explanation of it.
Arthur–
Untrue. Either you can treat the large portions as a box or you can’t. There is nothing magical about the anomaly method that transforms “not well mixed” into “well mixed”.
Sure. And climate text books treat the entire planet using simplified treatments all the time. See, for example “A climate modeling primer” by Henderson-Sellers and McGuffie. These are applied in non-anomaly space all the time.
These are simplified treatments, but there is nothing about the “anomaly” transformation that magically erases any problems with treating the system with a simplified model.
Yes. And they correspond to exactly the same thing in the underlying non-anomaly problem.
There is, in short, no reason to think you magically erase the underlying, supporting non-anomaly problem.
In fact, if the two-box simplification doesn’t work because it doesn’t apply to the non-anomaly problem, then we would expect it to fail for the anomaly problem. (This is probably why AOGCM’s don’t run on anomalies….)
Andrew Kennett (Comment#20233)- LOL I thought only my mom ever said “Ruskies” anymore!
Why, just the other day, she said to me (and I was there) “That Ruskie at Wendy’s trying to not give me my change!” And I said, doing my best Yakov Smirnof “In Soviet Russia, we redistribute your change!”.
Lucia:
That is certainly true, but the two-box (anomaly) method does not require the boxes to be “well-mixed”. There is no way to divide Earth into even just two portions that are “well-mixed” in the sense of being realistically treatable as at a single average temperature. And given that, there will certainly be no simple relationship between the delta-flux/delta-T slope at a given temperature and the overall flux.
Moreover, remember forcings are defined at the tropopause, where temperatures are much lower than at the surface. For changes in incoming sunlight as a forcing it doesn’t matter where you put that boundary, but for GHG’s if you put the boundary too high, there’s no change in incoming flux at all (GHG’s don’t touch incoming sunlight), and if the surface is warming that means there’s a reduction in outgoing flux, not an increase, even though delta-T is positive. If you put the boundary too low both the incoming flux change (forcing) and the outgoing flux change will be too high. So the outgoing flux change in question has to be measured at the same point we’re setting incoming flux, i.e. the tropopause. But if that implies a necessary dependence on changes in surface temperature, I don’t see it.
Arthur–
1) What assumption do you think the non-anomaly portion of the problem requires that gets to be relaxed for the anomaly portion of the problem? That anomaly analyses are supported by the non-anomaly problem is pretty standard.
2)
Of course. What does any of what you say after that have to do with the fact that anomaly problems always rest on the non-anomaly problem?
Arthur Smith (Comment#20280) September 18th, 2009 at 1:13 pm
Then why do we talk about GMST at all in climate science, let alone stuff like effective radiating temperature?
Oliver asks:
– because we can measure it. It’s great as an observational output to compare with output measures from modeling etc. But the average, as a single absolute temperature number, is close to worthless as input to any real physics. Absolute effective radiating temperature is slightly more meaningful but again the reality is more complex. It’s useful for simple models of course.
But when you’re talking about anomaly temperatures, the story is a little different – to first order the natural response of any body to warming is a uniform temperature increase. That, multiplied by an appropriate average heat capacity, has to match the net energy input, and there should be consequent linear effects for that uniform temperature change as an input to prediction. Of course the total average temperature also increases by the same amount when you add a uniform increment, but the anomaly increment is useful as an input, while the absolute value of the average is just not.
Arthur Smith (Comment#20285) September 18th, 2009 at 2:33 pm
We use GMST for models ranging from simple demonstrations of the “greenhouse effect” to 1-d radiative convective models to inputs/verifications for more complex models. Many of them are intended to be (if not completely successful at being) realistic treatments.
Could you explain this again? (Maybe some simple equations would help to clarify).
Arthur,
If I understand correctly, I think lucia’s comment about your choice of parameters possibly being inconsistent with a surface temperature of 287 or 288 K is that, while the rate of change of total heat loss to space with surface temperature can be approximated by a linear function over a narrow range of temperature, the value of the slope is still a function of the absolute surface temperature cubed. You can’t get around that with anomalies. You can estimate the value of the slope for the Earth at any given temperature by plugging surface temperature numbers into MODTRAN using the 1976 standard atmosphere and looking at the change in emitted radiation at 100 km looking down. I think that’s one of the alphas, or a combination, depending on whether just one or both boxes radiate directly to space. At 288.2 K surface temperature MODTRAN says that the rate of change in emitted power should be 3.3 W m^-2 K^-1.
Dewitt– Precisely. Arthur can’t erase the absolute balance by defining anomalies. The absolute balance exists. The terms in the anomaly equation are linearized about a point on the non-anomaly curve. So, the magnitudes of things like α and γ can’t just be any arbitrary positive values and still describe a planet like earth.
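The slope DeWitt mentions is easy to check for the pure blackbody case (a sanity check only; MODTRAN’s 3.3 W m^-2 K^-1 quoted above folds in the real atmospheric structure):

```python
sigma = 5.670e-8                     # W m^-2 K^-4, Stefan-Boltzmann constant

for T in (288.2, 255.0):             # surface temperature vs. effective radiating temperature
    print(T, "K ->", round(4 * sigma * T**3, 2), "W m^-2 K^-1")   # slope of sigma*T^4 is 4*sigma*T^3
# ~5.4 at 288.2 K, ~3.8 at 255 K: the linearized slope is pinned to an absolute
# temperature, which is the point about alpha not being an arbitrary number.
```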
lucia (Comment#20288) September 18th, 2009 at 3:26 pm
Lucia,
I thought you had made this clear in the very first blog post about the model equations, where you mentioned the linearization for radiative loss:
dj* = 4*sigma*T(0)^3 * dT + … (can’t remember your exact notation).
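As a quick numerical check of the magnitudes in play (my sketch, not from the thread; T(0) is taken as the 288.2 K surface temperature DeWitt fed to MODTRAN):

sigma <- 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
T0    <- 288.2              # assumed reference surface temperature, K
4 * sigma * T0^3            # ~5.4 W m^-2 K^-1: the bare surface Stefan-Boltzmann slope
# MODTRAN's ~3.3 W K^-1 m^-2 is the slope of the flux at 100 km looking down, i.e. with
# the atmosphere in between, which is why the two numbers differ.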
Oliver–
I thought I had made it clear in exactly the same blog post. I have no idea what Arthur is trying to explain.
Oliver–I should add that there are some tweaks based on the later introduction of the notion that the surface temperature is y Ta+ (1-y) To, but … The general idea is that the anomaly model “rests” on the non-anomaly model.
Writing conservation of energy and doing math on the anomaly model doesn’t magically make the non-anomaly physics or model go away. They are linked– always.
lucia,
Does that also mean that my contention of some time back is correct, that the heat capacity of the fast box has to be constrained within a range that gives a reasonable diurnal temperature range for the thermometers in the fast box when forced with the diurnal variation of insolation?
DeWitt– To some extent. Yes. But the difficulty (if I recall correctly) was that you did an estimate where you set the heat transfer to the lower box to zero, did the calculation, and concluded that the heat capacity doesn’t work. You can’t do that. Anyway, that’s what I understood you to be saying.
To look at the diurnal stuff, you need to do the response of the full two box model with both boxes and heat transfer not set to zero– which, if you give me values for forcings varying over time, I can actually do for Arthur’s Case 3 & 4 in Excel. (If we go to faster time constants, I could still do it in Excel, though I would need to code it differently; it can still be done. Of course, something other than Excel might seem more impressive, but … well… this is a two box model. Fancy is not required.)
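For anyone who wants to reproduce that kind of run outside a spreadsheet, here is a minimal forward-Euler sketch of a generic two-box model of this form (my code, not lucia’s; the parameter names follow the thread, but the default values are illustrative placeholders and the exact coupling terms lucia uses are in her earlier posts):

# Generic two-box energy-balance model integrated by forward Euler.
# Default parameter values are illustrative placeholders only.
two_box <- function(Ft,                          # forcing series, W/m^2, one value per step
                    dt = 1/365,                  # time step in years (daily here)
                    Cs = 1e7, Co = 1e8,          # fast/slow heat capacities, J K^-1 m^-2
                    alpha_s = 1, alpha_o = 0.03, # radiative loss rates, 1/yr
                    gamma_s = 1e-3,              # fast-to-slow exchange rate, 1/yr
                    x = 0.2) {                   # fraction of forcing applied to the fast box
  secs_per_yr <- 3.156e7
  gamma_o <- gamma_s * Cs / Co                   # exchange rate as seen by the slow box
  n  <- length(Ft)
  Ts <- To <- numeric(n)
  for (i in 2:n) {
    dTs <- x       * Ft[i-1] * secs_per_yr / Cs - alpha_s * Ts[i-1] - gamma_s * (Ts[i-1] - To[i-1])
    dTo <- (1 - x) * Ft[i-1] * secs_per_yr / Co - alpha_o * To[i-1] - gamma_o * (To[i-1] - Ts[i-1])
    Ts[i] <- Ts[i-1] + dt * dTs
    To[i] <- To[i-1] + dt * dTo
  }
  data.frame(Ts = Ts, To = To)
}
# Example: hold a 1 W/m^2 forcing for 100 years of daily steps.
out <- two_box(rep(1, 100 * 365))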
On another point though: Steve Reynolds posted, and I finally got what you meant about latent heat adding to the effective heat capacity of the atmosphere box. I thought you were just discussing heat transfer as the mechanism. But I now “get” the notion that to capture dH/dt, where H is enthalpy, we need to include the increase in water associated with the increase in absolute humidity.
DeWitt,
It’s an interesting question, but also keep in mind that the diurnal forcing is huge compared to the (annual) radiative forcing numbers being discussed in the climate context.
I’m also not sure that it can be addressed one way or another by this model, since the day and night side temperatures move around the world daily, but the total heat content in the “box” (integrating over both sides) stays relatively constant.
Oliver– Yes. Sorry, I was thinking annual cycle.
Dewitt– Sorry– as I said, I was thinking annual for some reason. The box can’t quite do diurnal because, no matter what, the temperature is integrated over the globe. There is no “night” side or “day” side. So, unless there is some known forcing variation over 24 hours averaged over the full globe, all we do is average forcing over the full sphere, and average temperatures over the full sphere.
But, in principle, the annual cycle might be picked up if you can express the forcing averaged over the full globe in some proper way.
lucia,
I realized shortly after I posted that the full system has to be forced because there will be heat transfer to and from the slow box that will affect the temperature response of the fast box. I think you can actually see the effect of this if you fit the land only and ocean only components of the global temperature with an exponential decay model. Looking at the land only, the best fit is a single time constant of about five years and the sea only is one time constant of about twenty years. But when you use the land plus ocean combined temperature anomaly, you get two time constants of about a year and twenty years. Then again, maybe it’s just an artifact.
I guess the non-linearity, particularly of the radiation to space, would bite you if you tried to use the system as a zero(?) dimensional model forced with a 24 hour solar cycle. It works pretty well for something like the moon, though, because you can use the S-B equation directly.
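As an aside, the moon case really is a one-line Stefan-Boltzmann calculation; a rough sketch (the albedo and solar constant values here are my assumptions):

sigma  <- 5.67e-8                    # Stefan-Boltzmann constant, W m^-2 K^-4
S0     <- 1361                       # solar constant at 1 AU, W/m^2 (assumed)
albedo <- 0.11                       # approximate lunar albedo (assumed)
((1 - albedo) * S0 / sigma)^0.25     # ~382 K, in the ballpark of the measured dayside maximum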
lucia,
The Earth’s orbit is eccentric enough that the range of the insolation is from 1320 to 1420 W/m2, or 25 W/m2 when averaged over the sphere, from perihelion in early January to aphelion in early July. At the surface, correcting for albedo, it would be about 17-18 W/m2. I found data for monthly variation in global temperature here: http://www.sjsu.edu/faculty/watkins/monthlytemp.htm . He doesn’t seem to understand the lag, but I think heat capacity would explain it. It’s just like the daily peak temperature happens in late afternoon, not at noon, and the low just before dawn.
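The arithmetic behind those numbers, for anyone checking along (my sketch; the 0.31 albedo is the figure used later in the thread):

swing_toa    <- 1420 - 1320          # ~100 W/m^2 perihelion-to-aphelion swing at normal incidence
swing_sphere <- swing_toa / 4        # ~25 W/m^2 when averaged over the sphere
swing_sphere * (1 - 0.31)            # ~17 W/m^2 absorbed, matching the 17-18 W/m^2 quoted above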
Dewitt– I think we also need the effect of things like inclination etc. on the total forcing. Albedo etc. matters. But… we could always run it.
lucia,
The difference between land area in the NH and SH could be a problem. But I would be interested in seeing what a run would look like without bothering with all that. I think you may need to run multiple iterations (or years) until it’s stable. That’s what I had to do with my lunar model for any finite heat capacity. A twenty year time constant for the ocean is going to cause a phase shift in the temperature response. Whether it’s six months, though, is another question.
Dewitt–
Running a lot of years isn’t a problem. I’m just doing something else right now!
Hmm, I guess Lucia did originally imply that the outgoing radiation (slope dflux/dT) should be given by the bare Stefan-Boltzmann result. However, this is simply wrong.
In fact, it’s a circular argument, as far as I can tell. What we’re trying to get out of this is the (steady-state) sensitivity of the system to forcing changes. This is given by the increase in temperature at which the increment in outgoing flux matches the forcing. The slope dflux/dT is then by definition simply the inverse of the sensitivity (assuming forcing is small). If you think there’s a physical constraint that forces dflux/dT to be a certain value, then you’re actually constraining the resulting sensitivity to be precisely that inverse you specified up front. Not exactly a useful constraint for a model that’s trying to dig that number out of the data in the first place!
Arthur–
The notion that there is an underlying non-anomaly system is independent of the specific constitutive relation for the outgoing flux relative to the surface temperature. The non-anomaly system always exists.
Henderson-Sellers and McGuffie use that relation for the connection between surface temperatures and outgoing flux. But if you have some other methods of estimating, that’s fine. Suggest them.
I suspect the way I think the two have to match isn’t precisely the way you seem to assume they have to match. I think we can get a range of sensitivities for the type of matching I’m thinking of.
lucia,
The simple approach doesn’t do the job. Heat capacity gives a phase shift, but only about 40 days with a homogeneous planet with no axial tilt. Axial tilt and the difference in land area in the two hemispheres probably dominate. Land has a bigger seasonal temperature swing than ocean, so the NH seasonal temperature range should be larger than the SH. Assuming the two have approximately sinusoidal responses 180 degrees out of phase, the sum will have a peak temperature in the NH summer. It looks like you have to treat the NH and SH separately and sum them at the end of the calculation.
If I did my math right, it may be the case that a lot of this might not matter. I calculated that with a CO2 sensitivity of 4 degrees for doubling CO2 (which is the upper bound of the IPCC estimates), the change in the equilibrium temperature over 100 years would only be 1.2 degrees.
http://earthcubed.wordpress.com/2009/09/19/logco2-and-scary-graphs/
DeWitt –
“Heat capacity gives a phase shift but only about 40 days with a homogeneous planet with no axial tilt.”
I am not sure what model you are using here but it sounds a bit like a single time constant from your reference to heat capacity. This could only ever produce a 90º (3 month) phase lag. It is theoretically possible to achieve 180º with two time constants but then there would be a huge attenuation of the annual signal.
At a guess you would need a minimum of three time constants to achieve 180º and a realistic reduction in the signal. Even then the three time constants would need to be fairly close in value to get the phase shift without too much attenuation.
Oh! I have just looked again at your excellent link and realised that the attenuation is really very small – circa 20%. That is going to be quite hard to achieve without adding more time constants or adding some amplification!
It would be interesting to see the annual temperature variation expressed as a function of latitude. I suspect the tilt is an essential part of the solution because I can’t see how any combination of time constants can produce so much lag with so little attenuation.
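To put numbers on that trade-off (my sketch; the ~48-day time constant is an assumption chosen to give roughly the 40-day lag DeWitt quotes):

tau <- 48                            # assumed time constant, days
w   <- 2 * pi / 365.25               # annual angular frequency, radians/day
1 / sqrt(1 + (w * tau)^2)            # gain ~0.77, i.e. ~23% attenuation of the annual signal
atan(w * tau) / w                    # lag ~40 days; a single time constant can never exceed ~91 days (90 degrees)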
Jorge,
I agree. I’m pretty sure that the annual variation in global temperature is caused by axial tilt and the uneven distribution of land mass between the NH and SH. The change in insolation due to orbital eccentricity would be expected to reduce the annual range. It is coincidental that the current range is approximately equal to that expected for eccentricity only.
I still think there’s information there that can be used to test two box model parameters, but it will be more difficult than I had hoped. Now I have to go looking for the annual temperature ranges of the NH and SH.
This graph appears to confirm that the annual temperature range of the planet is due primarily to the difference in thermal properties of the NH and SH. Unfortunately it’s lower troposphere not surface, but the principle should be the same. By eyeball, it looks like the eccentricity contribution may be distorting the shape of the curves a little. There’s a bump in late January in the SH and a flattening of the NH curve at the same time. The resulting global curve appears to be a little flatter than might be expected in January and July.
DeWitt Payne (Comment#20303) September 18th, 2009 at 5:51 pm ,
I think the ocean heat content data (Willis et al, 2008, and several others) shows the peak in the annual cycle (top 750 meters of ocean) consistently in March, which seems just about right: a couple of months after the peak in TOA solar intensity from orbital eccentricity. I suggested this to Willis via email last year, and he told me that he thought it more likely due to much more ocean surface area in the southern hemisphere (with very low albedo versus land area with higher albedo in the north). Maybe the ocean heat variation is driven by both.
Anyway, ocean heat content may be a much better metric than average surface temperature, which weights the northern summer land temperature too heavily and yields the crazy-appearing 180 degree lag from the TOA intensity.
DeWitt-
That is a nice graph. What seems a bit odd to me is that the NH peak to peak swing of about 10º is around twice that of the SH (5º). In the simplest terms this would imply the time constant for the SH is double that of the NH but it should also mean that there is a bigger phase lag in the SH.
This does not seem very obvious from the graph. This could be because the actual time constants are quite large compared to a year and we are in the region where attenuation is proportional to time constant but phase changes are flattening out. On the other hand, it could be that the phase of the tilt forcing is working with the phase of the eccentricity forcing to modify the phases of the responses.
Presumably the next step would be to calculate the annual change in insolation for both NH and SH, taking tilt and eccentricity into account. In principle we could then match the forcings to the phase and amplitude of the responses to get the equilibrium sensitivities and thermal capacities for both NH and SH lumps – assuming both are unconnected single time constant models.
You have a lot of good ideas and I just wish I could be more helpful.
Nathan,
sorry for the delay in response about clouds;
I am not able to quantify the cloud/precipitation issue to a reasonable level. Here is what Hansen thinks of the GISS model.
http://pubs.giss.nasa.gov/docs/2007/2007_Hansen_etal_3.pdf
Check section 2.4 “Principal model deficiencies”
They start with “Model shortcomings include ~25% regional deficiency of summer stratus cloud cover off the west coast of the continents with resulting excessive absorption of solar radiation by as much as 50 W/m2, deficiency in absorbed solar radiation and net radiation over other tropical regions by typically 20 W/m2,…”
As you can read from the “Horse’s Mouth”, the known deficiencies are rather large.
To wrap up my part of the calcs here, I did a direct computation of the (smoothed) transfer function (TF) which takes the forcing series into the GMST series. The graph and R code are here. This doesn’t involve any box-like assumptions. It is a direct deconvolution. It shows where info is lacking, at the longer delays, and is affected by noise. Against it I’ve plotted the approximation made up of Tamino’s two-box exponentials with the regression coefficients. My aim is to show that the result is approximating the TF, and it is just one of many ways you could do that. You can use box-like assumptions to help choose the form, but you don’t have to.
Again, this is not just numeric play. You deconvolve GMST with forcings, and get a transfer function that you can use for prediction. In particular, you can estimate a sensitivity. It’s not the most sophisticated, but it is simple enough to see what is happening.
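For readers who want the flavor of such a deconvolution without Nick’s actual code, here is one way it could be set up (my sketch; f and temp are assumed equal-length annual forcing and temperature-anomaly vectors, and the roughness weighting mirrors the c(1,-2,1) fragment Nick quotes further down):

deconv_tf <- function(f, temp, nlag = 30, rough = 50) {
  n <- length(temp)
  # Design matrix: column j holds the forcing lagged by j-1 years (zero-padded at the start).
  X <- sapply(seq_len(nlag), function(j) c(rep(0, j - 1), f)[1:n])
  # Second-difference roughness penalty rows, weighted like c(1,-2,1)*rough.
  D <- matrix(0, nlag - 2, nlag)
  for (i in 1:(nlag - 2)) D[i, i:(i + 2)] <- c(1, -2, 1) * rough
  # Penalized least squares: stack the penalty rows (with zero targets) under the data rows.
  y <- c(temp, rep(0, nrow(D)))
  coef(lm(y ~ rbind(X, D) - 1))      # the coefficients approximate the smoothed transfer function
}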
I’ve edited the link in the previous post, which was wrong, but in case it doesn’t show, the correct link is here.
Nick–
1) On the math/physics issue: Tamino claimed physics.
The question is though– Is your sensitivity estimate at all reliable? What gives us confidence it is reliable?
These are always important questions, and you seem to often simply focus on “has this been done correctly as a math problem?” and seem to suggest we should all be both mute and deaf when it comes to asking the other question: “overall, is the choice appropriate, based on physics etc.?”
1) On the math/physics issue: Tamino claimed physics.
And gave you a quote that claimed mathematics.
Nah, this is just cherry picking, Bugs. Learn 2 Read and Reason.
You gave a selective quote that said “mathematical model.”
This is another way of saying “analytical model”, as opposed to, e.g., a conceptual model. That doesn’t address the author’s intent, [You did learn about that concept in college didn’t you? It is central to interpretation of any technical literature.] since e.g. any computer-based global climate model or any computer model of a physical system is a “mathematical model” in that sense.
Tamino established he thought it was physically relevant by not only defending it on those grounds, but by asserting he had previously tested his model and verified that it was physical.
E.g.:
There’s really nothing to argue about if you know how to read and interpret language. Some things aren’t subject to the reader’s interpretation, or there would be no point to language as a means to communicate information.
Or this exchange:
No way to interpret that exchange in any other way than that Tamino intended it to be a physically based model and not just some pointless mathematical exercise, further that he claims to have verified that the model was physical, and that he thought it was easy enough to check that he was “surprised” that Lucia hadn’t already tested and confirmed it.
Hundreds of pages of algebra later from three different people, I think it behooves Tamino to publish his own “proof” or risk looking like as big a fool as people who can’t read and analyze simple passages of technical literature.
Nick (#20372) – great minds think alike! Just last night I posted this analysis of exactly the same issue (my “response function” is your “transfer function”). My approach is somewhat different from yours, but philosophically the same idea. And I see the same sort of wiggles – could they be real?
I.e. do we know with certainty that the response/transfer function has to be monotonically decreasing? I don’t believe it does – in fact at t=0 strictly speaking the response function should be zero and increase for a while, because the imbalance associated with the forcing has to take some time to accumulate.
bugs (Comment#20143) September 17th, 2009 at 8:37 am
UAH isn’t data.
UAH has sensors. These sensors are physical algorithms. The output of the sensor is a voltage or a bit. Not the thing being sensed. This output is then PROCESSED by algorithms, models if you like. After processing you don’t have observations. You have a model of observations. Simply, if a sensor of this type is pointed at a source of this temperature it will result in an output like this XYZ. There are many pitfalls in this kind of instrumentation. Bottom line, when UAH doesn’t match a GCM, you know one thing is certain:
1. a computer program used to produce the temperature series of UAH got something wrong OR
2. a computer program (GCM) used to produce a temperature series got something wrong.
the same thing goes for the land surface temps. they are NOT observations. they are not data. They are the output of a computer program. nothing more and nothing less. And without access to the data and the code I would put them on no higher standing than a GCM or an anecdote.
Can you find the common thread? I knew you could. Now perhaps people will understand Why I beat the same damn drum. free the data, free the code.
Carrick (Comment#20382) September 20th, 2009 at 11:50 am
Tamino’s last theorem.
he did prove the physicality of his model but the proof was too large to include in the margin of his blog
Semantics don’t get around the problem. Transfer (or response) functions only provide correct output when they correctly represent the response of the system they are supposed to represent. Otherwise, they don’t provide correct output.
If this transfer function does not represent the physics of the system, then the methodology is not based on physics. It is based on math and we can’t know whether the claimed climate sensitivity is likely to be accurate.
We know it’s the output of a mathematical treatment and has the correct dimension. But other than that, you can use whatever semantic terms you like. If the method does not translate into some sort of reasonably realistic physical system, there is no a priori reason to prefer this curve fit to any other curve fit.
Lucia:
… and no reason to expect that the fitted parameters of the model have a physical interpretation.
Arthur(#20385)
Those plots look great. Yes, there is some similarity in the wiggles, although mine start later. I’m using more smoothing. In the R code I posted, there’s a fragment
c(1,-2,1)*50 #weighted roughness rows
Varying the 50 varies the smoothing. I reduced the smoothing and the wiggles look more like yours, though they don’t quite match up.
The basis for expecting monotonicity is that the heat transfer is dissipative. So you expect that a forcing will have less effect 20 years later than 10, since the heat will be more widely distributed. There may be el Nino type effects which combine the transfer with a wave motion which counters this to some extent.
I don’t agree with your argument that the response function should be initially zero. The temperature responds gradually because the response is integrated over an initially small time, not because the response function itself is zero. In fact, for uniform conductivity diffusion with forcing heat flux, the transfer function goes to infinity like 1/sqrt(t).
Steven Mosher,
“he did prove the physicality of his model but the proof was too large to include in the margin of his blog”
are you being sarcastic, or, is this what Tamino is now telling the unwashed mass??
Did he tell them what tools and methods he used to prove the physicality??
Sorry to bother you, but, I only go to blogs that confirm my Bias directly!!!
Nick, of course you’re right, my argument that the response should go to zero was forgetting that time factor. However, a portion of the response could be zero initially and higher later if what was heated was different than what was measured – that portion of the heat input just naturally takes some time to reach our measuring instruments and so will peak after some delay. But given that we’re in principle measuring Earth’s surface and essentially all the heat input in the system is also at the surface, that may be hard to argue realistically. Is there something on the planet that could be heated directly by the sun and then take on the order of 5, or 20, years to affect surface temperatures? Those peaks really do seem hard to understand…
kuhnkat (Comment#20397) September 20th, 2009 at 5:09 pm
I’m being sarcastic. It should be obvious that he didn’t check it. Even Arthur and nick would agree to that.
Andrew_FL (Comment#20087) September 16th, 2009 at 2:14 pm
when I first started looking at this, like you, I noted the 1911-41 trend. WRT some kind of disaster coming. I’m undecided.
1. I’d like to see a competent, definitive land/sea record created.
There are UHI problems in the record and methodology concerns.
2. I’d like to see an engineering-quality model evaluation for GCMs. There are too many models and the spread that results from this diminishes the power of hypothesis testing.
If those two things get addressed, then I think we’d be in a position to make a considered judgment.
Steve Mosher:
I dunna.
I haven’t seen them stipulate to one thing derogatory towards Tamino. They act like they are his paid lawyers or something. Indeed at least one of them blames Lucia for Tamino’s ridiculous tantrum.
yes carrick I was just giving Arthur and nick the opportunity to say something. It really would go toward their credibility if they did so. There is absolutely no harm in saying that it’s clear that Tamino did not check his model for physicality. If he did, they would have simply asked him for his work. they didn’t.
they didn’t because they knew or suspected that Tammy was blowing smoke up people’s skirts when he made the claim. But, they lack the gumption to state the simple truth. AGW, in my mind, remains the best explanation regardless of the personality traits of the people involved in the debate. I’m probably being a bit harsh on arthur and Nick, but I have seen some defend meaningless mistakes to the bitter end. That’s telling. Personally, I’ve congratulated tammy on several occasions. Most notably when he did his VERY FIRST post on the two box approach. As a fan of Lumpy from way back I think these simple models are very useful. Not as useful as some think, however.
Steve #20427 I hadn’t taken that opportunity, because I think it was thrashed to death back in the first thread. My contribution starts at #18499, and a similar grilling from brid ensued. My guess is that Tamino did not spend much time checking, so his statement is probably wrong, but I can’t follow that much further, because I can’t see what he actually could have checked. I asked that in that thread, and didn’t get a response. It’s an unresolved part of the long subsequent discussion.
Nick–
The way your first comment read, it appears that you are suggesting that people should not be criticizing Tamino on the basis of physicality. In fact, he made the claim. So, what you have been doing is studiously avoiding admitting that he did, in fact, claim that a) his model is physical and b) he checked.
I’m puzzled by what you think you asked that was not answered. Do you mean you can’t see how he could have checked whether his answer was physical? Do you mean you asked at Tamino’s and he didn’t answer? Because that has certainly been discussed over and over here. One check is, in fact, the topic of the first post on which you commented! Other checks have been discussed in other posts, and in comments here.
Lucia, we’ve been over this many times. Tamino talked somewhat loosely about the boxes, and can be criticised for that. He should have made clearer that what he actually did was the approximation of the response function by exponentials, and there isn’t a physicality issue there that you can check. The rest is extension, talk of which Tamino should have dampened, but since it doesn’t relate to anything he actually did, there’s nothing that can be checked.
Nick–
Why would Tamino have dampened the talk of physicality? Tamino very specifically claimed physicality.
Lucia, I’d like to see a quote of that “very specific claim”. In his first post, I can see nothing like that – he said of his boxes “This can be thought of as a rough mimicry of an atmosphere-ocean model, where the atmosphere responds quickly while the ocean takes much longer.” , which is a very subdued claim of physicality.
I’m not sure why Nick and Arthur want to ignore talk about physicality. It is true that the fits may approximate the response function somewhat, but what Arthur and Nick have both shown is that only the first five years of the response function is approximated well, and that there is insufficient information to determine the rest of the response function with any reasonable confidence.
This is important because the step response (whose asymptote is the new equilibrium from a step change in forcing) is the integral of the response function. Modes that have small residues will not be statistically significant in the fit but could contribute significantly to this integral. The more information there is to constrain the solution space the better that one can estimate the tail of the response function which will be important in determining the climate sensitivity.
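Spelling that last point out (my notation, not the commenter’s): if the response function is written as a sum of decaying exponentials, R(t) = \sum_i A_i e^{-t/\tau_i}, then the step-response asymptote is \int_0^\infty R(t)\,dt = \sum_i A_i \tau_i. A mode whose amplitude A_i is too small to be statistically significant in the fit can still dominate that sum if its \tau_i is long enough.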
I won’t ascribe motives to Nick since they are unobservables. It would be a simple matter for nick to clarify what tamino meant. he could write a post on his blog and ask him. Why did he ban Lucia, did he check for physicality? what did he check? and a variety of other questions. But, Nick won’t. And he will have a very good excuse of why he won’t.
Steven, #20440 – you could ask those questions too. They are your questions, not mine. I know what Tamino actually calculated, and clarifying his words is not a priority for me.
Nick Stokes (Comment#20431) September 21st, 2009 at 5:16 pm
There is absolutely a physicality issue here, because what Tamino has done is hypothesize a two box model which extends Schwartz’ one-box version and presumably has greater explanatory power by allowing separate responses from an “atmosphere” and an “ocean.” His entire heading was that even a simple, “non-computer” model clearly captures the basic behavior of the climate system and even validates the GISS sensitivity numbers — so long as you let it have two boxes instead of one.
Sans the hypothesis and the trial form for the solution, all your analysis leaves us with is a cross-spectrum of one dubious input and one output of only 120 years from which you are trying to compute a transfer function which you then filter… to back out the physics, allegedly without any other assumptions? If so, let’s start with this question: What makes your arbitrary “smoothness” weighting any less of an assumption than the diffusion-form e^-(kt) decay? (And we’re not talking about the “diffusive” space-time heat kernel, but simply the response between two capacitive branches in a circuit).
And then there’s Arthur’s curiosity about “unexplained” peaks? Surely the two of you have not forgotten, after all this numeric play, that we are dealing with a climate system which is well known to be (likely strongly) nonlinear?
#20446 Well, Oliver, I’ll put that question back to you. In the fitting of two exponential decays, what should Tamino have checked for physicality? And what is the test?
Nick Stokes:
I think you need to clarify this statement, because on its face it’s clearly false unless you were standing over his shoulder while he was calculating.
Are you getting paid to be his lawyer? LOL.
Nick:
There are plenty of blog entries at this point on exactly that topic, and it’s pretty hard to argue that Tamino did any of this.
You’re spinning faster than a tornado and moving the target faster than a whack-a-mole right now.
Carrick
This is just armwaving. What test for physicality should Tamino have checked? And what test does it fail?
I haven’t moved the target. Steven’s questions in #20440: did he check for physicality? what did he check?
I know what T calculated, because I reproduced his results based on what he said, using the data he specified.
No, Nick, this isn’t just armwaving. From the beginning this was about asking for some physical support for (implied) hypotheses, not merely numeric play. If there’s some underlying physics suggested by these analyses then we’d all like to know about them because it really is interesting stuff, but otherwise we have to assume it’s just a curve fitting game.
Nick:
Thanks for your clarification.
You reproduced some of his reported results; you didn’t reproduce the physical tests he claimed to have made, because he never published what those tests were, of course. Since these were the subject of the conversation at the time, talking about the other results is a bit of a diversion.
What is arm waving by the way? Pointing to the mass of attempts at physical checks, or the mass of physical checks themselves? How are either of these arm waving?
Here again is the retort by Tamino:
Tamino claims to have checked the physicality of his model and excoriated Lucia for failing to do so before asking him whether he had done it. (What her doing the checks before asking him whether he had performed them himself has to do with anything is a bit beyond me, but oh well.)
That’s the original target, that’s the issue that Steven raised. Turning that into “what” he should have checked for is definitely moving the target.
Whack-a-Mole anyone? How many quatloos do I get if I score 1000?
Lucia put the question of violations of the 2nd law on the table.
The question of physicality related to that at least. It’s very simple.
“I believe that AGW is true and I believe that tamino did not check his model for physicality (such as 2nd law violations) when he said he did.”
I hold those two beliefs and my head does not explode.
I hold those two beliefs, and the second belief about tamino has nothing to do with the truth of the first.
So, I ask myself, why do some people contort themselves so about Tamino?
I can’t and won’t make anything out of this. But I think it’s fair to give nick the opportunity to clarify.
Not actually stating a “physical” check. Not even actually pointing to one! Finger-pointing would be better than armwaving.
OK. 2nd Law of Thermodynamics, whether Tamino’s 2-box model violates it.
Let’s start by reminding each other that Tamino says that he checked that it does not violate the 2nd Law.
Since you apparently want to be very clear and specific in the language all of a sudden, let’s start with you stipulating to that fact.
Carrick, I’m always in favor of being clear and specific. But this won’t do. Tamino’s brief response was consistent with either his having checked a specific possibility of a 2LOT violation, or his having checked that there was no such possibility. I believe the latter situation is true for his calculation. If you are aiming to be specific (I hope so), please say how such a violation could arise, and how it could be checked.
Nick–
What someone calculates and the significance one attaches to a calculation are separate things. The argument is over the significance Tamino claimed for his results.
Had he called his parameters “regression parameters”, and attached no physical significance, there would be no argument.
You can reproduce the calculation all you want. It’s attaching physical significance that is the issue.
Nick:
If you won’t accept such basic facts that are obvious to any casual reader, then any further dialog on this is seriously pointless.
What he said was unambiguous, it’s stunning that you still try and maintain otherwise.
Lucia:
Well I obviously agree with this, though I would go further and say we wouldn’t call the regression parameters “climate sensitivities” if there weren’t any physical significance to them.
Tamino’s lawyers want it both ways, they want the model to be unphysical and physically relevant at the same time.
Carrick
Agreed.
Precisely. It’s not physically based if we want to test to see whether or not the physics are realistic, but it is physically based when interpreting the results.
This is a neat trick. Maybe we can apply it to Monckton’s method of reinterpreting the IPCC projections.
Carrick (Comment#20464)
“Tamino’s lawyers want it both ways, they want the model to be unphysical and physically relevant at the same time.”
It is physically relevant but not with regards to the climate sensitivity. As I previously mentioned, it can approximate well the start of the impulse response function but there is not sufficient information to say anything about the tail of the impulse response function without knowing something more about the physics.
Without knowing something about the tail of the impulse response we cannot say anything about long term climate sensitivity, because low frequency poles with small residues could contribute significantly to the integral of the impulse response function, and that integral is related to the long term climate sensitivity.
Thanks Carrick,
I was going to pose the direct question about the second law to Nick, but thought better of it. I think it should be pretty clear that nick will use any form of obfuscation, misdirection, ambiguity, or lack of knowledge to avoid admitting what is plainly clear to the rest of us.
Lucia warned about possible violations of the 2nd law. Tamino claimed that he checked and she didn’t. I’ll leave it at this. Nick is good at math. Beyond that I reserve judgement. So, perhaps it’s best to go back to the math. One day I told my students that they could not convince me that the sky was blue.
Try as they might, they could not. In the end I asked them what they would make of a person who denied the obvious. Their commentary on that would apply here. So, the consensus is that Tamino did not check. Nick is a denier. Somebody go fetch Danny Bloom to comment on Nick’s willful ignorance.
Good comment, John.
In a linear system, if you knew the climate forcing associated with different inputs (big if already) and if the forcing were sufficiently broad band and measured over a sufficiently long period of time, one could simply take the cross-correlation between the input and output to obtain the associated impulse-response function. Once you have the impulse response function, the integral of that from t=0 to infinity is of course the step-function response of the system, and hence gives you the steady-state climate sensitivity.
What is clear from your comment is why this might not work: The existence of poles too close to f=0 means that you have to measure for a very long time to capture the “wiggles” in the correlation function associated with these poles.
That said, if we write down the model:
T(t) = a_S F_S(t) + a_L F_L(t)
where F_S(t) = ExponentialFilter(F(t), tau_S)
and F_L(t) = ExponentialFilter(F(t), tau_L),
with T(t) the average surface temperature, F(t) the total forcing, and e.g. tau_S = 0.1 years and tau_L = 30 years.
I’m not really sure how useful a_S and a_L will be in computing that impulse response function. Yeah, it could be related to it… the transfer function is just the Fourier transform of the impulse function of course, and since you know the frequency response of the exponential filter, you can compute the expected values of a_S and a_L from that.
In the end, it appears to me to be a contorted way of getting at the same physics that you arrive at directly from the cross-correlation function… That is to say physically connected, but not very physically meaningful.
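A sketch of what that regression looks like in practice (my code, not Carrick’s; forcing and gmst are placeholder names for assumed equal-length annual series, and the two time constants are the ones quoted above):

exp_filter <- function(x, tau, dt = 1) {
  a <- exp(-dt / tau)                               # discrete one-pole exponential smoother, unit DC gain
  as.numeric(stats::filter(x * (1 - a), a, method = "recursive"))
}
F_S <- exp_filter(forcing, tau = 0.1)               # fast-filtered forcing, tau_S = 0.1 yr
F_L <- exp_filter(forcing, tau = 30)                # slow-filtered forcing, tau_L = 30 yr
fit <- lm(gmst ~ F_S + F_L)                         # the fitted coefficients play the role of a_S and a_L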
SM –
I won’t ascribe motives to Nick since they are unobservables. It would be a simple matter for nick to clarify what tamino meant. he could write a post on his blog and ask him. Why did he ban Lucia, did he check for physicality? what did he check? and a variety of other questions. But, Nick won’t. And he will have a very good excuse of why he won’t.
You should have a word with Steve McIntyre, because he is continually ascribing motives to individuals and groups. Without doing that, his web site would not be nearly as popular.
This http://www.climateaudit.org/?p=6830 is a perfect example.
Steig and the International Man of Mystery
bugs:
In making this statement you are “ascribing motives to individuals and groups”, since without knowing their motivation for going to his website you couldn’t possibly predict whether his website would be “nearly as popular” without the current journalistic style that Steve uses.
Yummy… logical pretzels.
Bugs,
“You should have a word with Steve McIntyre, because he is continually ascribing motives to individuals and groups.”
Your reading comprehension has not improved. Claiming that S. McIntyre ascribes motives is bordering on, if not actually, slanderous.
That you actually interpret the information he presents on situations as ascribing motives tells the rest of us how thoroughly he presents the data and how biased you are.
Nick,
“Not actually stating a “physical” check. Not even actually pointing to one!”
considering that Arthur Smith has complained a couple of times that Lucia keeps ADDING physicality checks, you might want to ask him what they were!!
HAHAHAHAHAHAHAHAHAHAHAHA
Are you having the same difficulty with Reading Comprehension that Bugs is??
For anyone who isn’t already bored to tears with this: The fitting process is significantly improved if the initial temperature is zero. Since we’re dealing with anomalies, there’s no reason that I can think of to not rescale the temperatures to an initial value of zero.
I found this out playing with annual temperature change in the NH and SH using the UAH lower troposphere average temperatures and calculated hemispheric TOA insolation. What surprised me about the fit is that a single time constant of about 50 days (50.5 SH and 46.5 NH) gives the best fit. That’s on the order of the time constant of the atmosphere alone. Even though the SH receives slightly more total insolation (341.9 W/m2 vs 341.3 W/m2), the average temperature is lower in the SH (267.7 vs 270.0 K, LT not surface). I expected to see a larger difference in annual insolation between hemispheres, but perihelion in the SH summer is apparently balanced by the slightly higher orbital velocity at perihelion.
I’m thinking right now that the lower average temperature and the lower annual temperature range in the SH may be more related to albedo than land/sea area difference. Does the SH have greater cloud cover? Anybody know where to find the equivalent NH and SH average daily surface temperature?
DeWitt–
There isn’t any reason to not rescale the anomalies. But shouldn’t the constant get absorbed into the third constant in the fitting algorithm?
I don’t know where to find the daily NH or SH surface temperatures. Sorry.
lucia,
It does get absorbed, but the initial error in the fit is large if the initial temperature is not very close to zero. For the annual temperature fit, rather than starting from January 1, I started from day 300 for the NH and day 312 for the SH. It probably has to do with the details of the fitting process. I vaguely remember something about zero padding in the convolution process. There is likely a way around it, but I don’t know what that is. Nick, Arthur?
DeWitt #20511
Yes, that is right. The temperature is modelled as the convolution of the forcing with the response function, and that must start out as zero at time zero (whatever the forcing).
Put another way, I padded initially with zeroes pre-1880. This makes an artificial jump in that year, which creates perturbations in the initial decades. It is better to pad with the 1880 value, or some average in that region.
However, the initial years will never be fitted well by response function estimation – the fit depends on pre-1880 values, and a better guess is no substitute for data.
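In code terms, the choice Nick describes is just which values get prepended ahead of the first data year; a minimal sketch (forcing and nlag are placeholder names):

pad_zero  <- c(rep(0, nlag - 1), forcing)            # zero padding: creates an artificial jump at the start
pad_level <- c(rep(forcing[1], nlag - 1), forcing)   # padding with the first-year value, the better guess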
I’ve found that Tamino’s method seems to be well established in the literature. Jarvis has written several papers, including this rather mathematical one. Jarvis and Li had a quite interesting paper on the kind of thing attempted in these threads – trying to tease physical inferences out of a (different) simple model.
Ian Enting (see 2.5a) has been publishing response function models for many years.
In all this work, the two-exponential form seems to be the usual one chosen.
Nick,
In the 1880 to 2003 case there is little difference. The anomalies must be close enough to zero to not make a difference. I tried adding 0.25 degrees to the temperature data and all it did was shift the intercept. The fit statistics were unchanged and the graph was displaced a constant amount. The Krakatoa forcing still produces too large a temperature change.
I’m becoming uncomfortable, though, with trying to fit global temperature with global forcings. Maybe it all comes out in the wash on an annual average basis, but the daily global average temperature would be 180 degrees out of phase with the average NH + SH TOA forcing, which has a range of 20 W/m2. At least on the short term, all forcings are not equal.
DeWitt, yes, in the lm() versions it makes no difference, because there is a fitted intercept. In my later response function versions it does make a difference in the early years.
Nick–
It’s hardly surprising some have used this method for curve fitting. It seems obvious enough. (I thought of the general idea on reading Schwartz, and I never imagined I would have been the first to think of it.)
The paper by Jarvis and Li does look interesting. I like that they are testing their method against AOGCM data. In comments, that was one of the avenues I’d mentioned as providing some way of showing the method “just works”, which, absent any clear connection to the physics of a two-box model, might give us some confidence the method can tease out meaningful physical parameters.
bugs (Comment#20486) September 22nd, 2009 at 3:40 pm
1. You are changing the topic.
2. I regularly post on CA and admonish people about motive hunting.
3. Steve Mac also admonishes people about this.
4. Sometimes I slip up on this.
5. Sometimes steve slips up on this.
6. If either of us slip up, please point out the exact post and make a comment.
7. SteveMc and I exchange emails and have met in person. I can report that he avoids motive hunting even in person.
I’ll look at the post in question. If I’ve made a comment that is motive hunting I will admit my error. If Steve has hunted for motives in my judgement I will post that here and write him a mail.
You see NONE of that has anything to do with my beliefs about the science. So it is easy for me to criticize myself and SteveMc.
And if Steve is wrong I know he will not have an issue admitting it. So, why do you have a problem admitting Mann’s errors, Rahmstorf’s errors, Kaufman’s errors, and the latest Loso error?
Why are you in denial about the errors of climate science?
Seriously, when I see anti-AGW people in denial it doesn’t bother me. they don’t know the basic science and are just ignorant. When I see a climate scientist or someone with Nick’s intelligence defend the indefensible, THAT is a real problem. It’s the precondition for a Piltdown episode in which valid science is almost taken down by overzealous advocates.
Perhaps I don’t understand the Jarvis and Li paper.
As I read it, they are assuming that the large scale circulation will carry heat from the surface to the deepest parts of the ocean. But this seems to me to be a bit crazed, since the upwelling at low latitudes and sinking at high latitudes is driven by the increasing density of the ocean as it cools. The sinking water can only sink because it is very cold (having cooled on its way from low to high latitude) and so is more dense than the water it displaces; the sinking (very cold) water is what drives the overall circulation and the upwelling. Jarvis and Li seem to be suggesting that somehow warmer (less dense) water will displace colder, and then assign a variable Pi to describe the total fraction of solar energy that is “lost to the deep ocean” by this mechanism. Their proposed mechanism makes absolutely no sense to me, and the authors do not appear to offer any explanation for it.
A more reasonable expectation would be for deep circulation to slow compared to today if the total ocean surface area (at high latitudes) where water cools sufficiently to sink to the abyss declines due to surface warming. It seems to me that gradual heating of the deep ocean would have to be mainly a response to slowed circulation (that is, reduced upwelling velocity). The rate of heating of the deep ocean would then be related to 1) the reduction in deep convection, 2) the rate of diffusive/eddy heat transport down the thermocline, and 3) the immediate increase of the ocean surface temperature due to forcing.
The current average rate of upwelling (as Jarvis & Li use) is about 4 meters per year, which takes 16-18 watts per square meter to heat from ~2C to ~18C (or about 7% of the total solar energy reaching the ocean). Jarvis and Li appear to suggest this is the rate of down-mixing of surface heat. But I think this greatly overstates the rate of down-mixing, because the average rate of upwelling (4 meters per year) includes areas where there is quite rapid upwelling (like off the west coast of northern South America) as well as areas where there is very little upwelling. In areas of rapid upwelling, there is little or no down-mixing of heat to the deep ocean. If the volume fraction of total deep circulation that appears in rapidly upwelling areas is significant (and I think it is), then the actual rate of down-mixing of surface heat must be substantially less than estimated from the global average rate of upwelling.
The deep ocean’s time constant must of course be very long, but the fraction of surface heat actually diverted to warm the deep ocean is likely very small, and so the large majority of any surface temperature response to added forcing should be apparent quite quickly (on the order of the time required to warm only the well mixed layer plus a bit more, a la Schwartz). It seems to me that Jarvis and Li are looking for an ocean model that allows a large diversion of surface heat to the deep ocean, since this is required to have a lot of future warming already “in the pipeline”. I do not believe there is any observational evidence to support their model.
Many of the objections to Schwartz 2007 were related to the “nearly immediate” response to forcing his analysis suggested. Even after he revised the analysis to include two time constants as suggested by Scafetta in 2008, which increased his diagnosed range of climate sensitivity to overlap the low end of the IPCC range, this did not significantly change the result that warming due to GHG forcing should be very prompt.
I found the Hadley data for the gridded absolute temperature for the 1961-90 average and converted to NH and SH. Unsurprisingly, it looks a lot like the UAH data, only higher. I’m not going to bother to do a fit as the time constants will be very similar to the UAH data. As far as I can tell, the fast time constant must be about 50 days or 0.14 years. That makes the fast box the atmosphere only. There is no evidence for an additional time constant on the order of 1 year. Any time constant longer than that wouldn’t show up in this data. No combination of time constants shorter than 1 year gives a better fit than a single time constant. I think this puts rather severe constraints on the physics of a two box model.
SteveF–
Yes. Many objected to the result. Maybe they would have also objected to the method had it given another answer… but they objected to the result.
Note that if Tamino had used the best fit time constants, his modeled climate response would be more prompt than it is using the time constants he selected. The temperature fit would be superior and the climate sensitivity would be lower.
I would still have questions about connecting to physical reality. But really, just what is the point of applying the method and then just choosing whatever time constants give you an answer you “like better”?
SteveF,
My understanding of the THC is that there is much more than thermo involved. According to Wunsch, if I read him correctly, ocean circulation is mainly driven by the trade winds, which aren’t going away any time soon. So concerns about rising temperature shutting down the THC are overblown. It’s my conclusion also that there is little net heat flow into the deep ocean from changing ghg forcing. On the geologic time scale, the deep ocean continues to lose heat to space.
DeWitt,
“According to Wunsch, if I read him correctly, ocean circulation is mainly driven by the trade winds.”
For certain the surface currents are wind driven, and probabaly do provide most of the energy for deeper currents. But as Wunsch says:
“Both in models and the real ocean, surface buoyancy boundary conditions strongly influence the transport of heat and salt, because the fluid must become dense enough to sink, but these boundary conditions do not actually drive the circulation.”
So even if circulation is mainly wind driven, there still has to be sufficient cooling at high latitudes to overcome buoyancy. The deep ocean would have to warm to allow warmer (but still cold!) water to sink.
Lucia,
“I would still have questions about connecting to physical reality. But really, just what is the point of applying the method and then just choosing whatever time constants give you an answer you ‘like better’?”
Tamino just wanted a result near the GISS Model E result, so selected the long ocean lag of the GISS model, used the GISS forcings, and (presto!) got the result he wanted. Arthur showed that Tamino’s result was not the best fit (it was 17.5 years and ~2 years I think) with a diagnosed sensitivity below the GISS model. All of which just proves GIGO. The issue of potential inaccuracy in the GISS historical forcings further erodes confidence in the accuracy of even Arthur’s optimized fit. Take away some of the long term aerosol “cancellation” of GHG forcing (about 30%-35% cancellation is assumed by GISS), and the result is substantially lowered diagnosed sensitivity. Maybe Schwartz’s revised sensitivity range is about right.
SteveF,
But it’s always, well for millennia at least, going to be much colder at the poles than the equator and salinity is going to be increased by evaporation along the way. I don’t see a few degrees warming changing it that much. It took ~50 million years to get us to this point from the Eocene peak and a few hundred year spike in carbon emissions won’t get us back to those conditions. Even the PETM didn’t last very long and temperatures rapidly, on the geologic time scale, returned to their previous values and trend.
The explanation of why the SH is cooler than the NH and why its annual temperature range is about half is glaringly obvious once I thought about it enough. Land area in the NH is about twice that of the SH. To a first approximation, the ocean doesn’t change temperature on an annual basis but the land surface does. So if you divide the NH and SH temperature anomalies by the fraction of land area in each hemisphere, you get about the same temperature range. So doing a fit based on land only and correcting for an average albedo of 0.31, the sensitivity of the land surface temperature is 0.26 degrees m2/W. The time constant is also larger for the surface compared to the lower troposphere: 2.15 months in the NH and 2.6 months in the SH.
That actually fits fairly well with the land temperature sensitivity. The temperature range of the land surface in each hemisphere is about 30 degrees, but since the land in the SH is 180 degrees out of phase with the NH and about half the area, the global land range is cut about in half to ~15 degrees. That’s at least in the ballpark of the 11.5 degree range in the NOAA 1901-2000 average. The sea surface temperature does change some on an annual basis, but there is apparently some coupling with the land temperature because the sea surface temperature still peaks in NH summer, which I wouldn’t expect if the ocean were independent of the land because there’s more ocean in the SH.
sorry bugs I read that post you linked to and find no attribution of motives as you claimed. I think you owe Steve an apology. I also read the three comments he made. No attribution of motives. maybe I am missing something that you can point out. In my book ascribing motives means speculating about WHY somebody does something.
I have to agree with steven. SteveM very rarely speculates about motives. He tries to keep others from doing it.
I on the other hand, sometimes do speculate about motives. But I try not to make it too much of a habit because my psychic powers for knowing other people’s motives are episodic. (For example, I do believe that Tamino’s sudden, frenzied, daily, sometimes twice daily posting that caused his “violation of the 2nd law” post to scroll off the screen was motivated by a desire to cause that post to scroll off the screen. I said so. But, I don’t know his motive for the current animated gif showing what appears to be a 5 year average of the central england annual cycle moving around. Maybe his goal is to prove that 5 years is too short a period to define the annual cycle due to “weather noise”? Who knows.)
DeWitt,
“The sea surface temperature does change some on an annual basis, but there is apparently some coupling with the land temperature because the sea surface temperature still peaks in NH summer, which I wouldn’t expect if the ocean were independent of the land because there’s more ocean in the SH.”
But the ocean heat content clearly peaks in early March, just about right for a two month lag behind the orbit driven solar intensity change. A 16 W/m^2 near-sinusoidal oscillation in solar energy with a period of 12 months just has to show up somewhere in the climate system.
SteveF,
Interesting.
I found this reference: ftp://ftp.nodc.noaa.gov/pub/data.nodc/woa/PUBLICATIONS/grlheat04.pdf Climatological annual cycle of ocean heat content by Antonov, Levitus and Boyer. Their claim is that heat content of the first 250 m peaks in March because there’s more ocean in the SH so the range of the heat content cycle is larger in the SH (Figure 4) and the global average will be dominated by the SH cycle. That would also imply that sea surface temperature is not a good measure of OHC, which is not at all surprising to me. It’s also another piece of evidence that looking at global averages can be misleading.
Ok, here’s my post on adding the constraint on x values – for some choices of heat capacities it greatly limits the space of allowed solutions, but for others essentially every solution (from one of the (+) or (-) choices for y) is allowed. In other words, we still have a 3-dimensional infinity of physical underlying models meeting all reasonable constraints even under Tamino’s original 1-year and 30-year time constant choices.
Arthur–
So are you ever going to try to relate any of those parameters to the range of properties that make sense for the earth? Or does the importance of the “for the earth” part of the idea simply not register with you?
DeWitt,
“Climatological annual cycle of ocean heat content by Antonov, Levitus and Boyer. Their claim is that heat content of the first 250 m peaks in March because there’s more ocean in the SH so the range of the heat content cycle is larger in the SH ”
Thanks for the link to this paper. Josh Willis told me much the same last year in an email. However, I am surprised that Antonov et al do not at least consider the contribution of the orbital driven variation, since this ought to contribute to the size of the southern hemisphere cycle. If the world were completely covered with water, then there would still have to be an annual variation in ocean heat from orbital forcing.
SteveF,
I think the orbital contribution is overwhelmed by other factors like higher albedo during summer in the SH. I’ve been playing with the Hadley CRU 1960-1991 average gridded absolute temperatures. The SH averages about 1.2 C cooler than the NH. The land surface has an annual temperature range of over 30 C, but land surface in the SH averages 11 C cooler than the NH using a simplistic method based on percent area in each hemisphere and nearly constant ocean temperature. Having an ice covered continent sitting on the South Pole makes a big difference.
Lucia, you ask:
“So are you ever going to try to relate any of those parameters to the range of properties that make sense for the earth?”
but I really don't know what you're asking for. Read some of the papers Nick cited up above – for example the one that goes into Laplace transforms. We know the Earth is not a two-box system; it's much more complex. The question is how well such models may approximate the Earth's real behavior. The Laplace transform argument explains, in a very general way, why you might expect the real Earth response function to correspond to a sum of separate exponential responses – but, like any eigenvalue problem, the underlying components of the Earth associated with those real exponential responses could be very complex linear combinations of subcomponents of the real Earth. I know of no a priori reason to restrict the subcomponents otherwise.
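In symbols, the picture is a response function that is a sum of exponentials convolved with the forcing – in the τ± notation used here, and up to how the weights are normalized, something like

T_s(t) \;=\; \int_0^{t} \left[ w_{+}\, e^{-(t-t')/\tau_{+}} \;+\; w_{-}\, e^{-(t-t')/\tau_{-}} \right] F(t')\, dt'

where the w± and τ± are eigen-quantities of the linear system, so the components they correspond to need not map onto simple physical pieces of the Earth.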
Arthur–
I mean connecting the possible values to realistic parameters for the two-box system.
Of course we know the earth is not a two-box system. While the papers discuss how difficult it is to make the physical connection, there is nothing in those papers that permits you to claim the regression coefficients tell us the “climate sensitivity” if you don't find the physical connection.
DeWitt,
“Having an ice covered continent sitting on the South Pole makes a big difference.”
Certainly this is a factor, but maybe not too big a factor. The difference between the area of Antarctica (~14 million sq km) and the sum of the summer average north polar sea ice area plus the Greenland ice cap (about 10 million sq km) is only 4 million sq km. The south–north difference in total high-albedo surface is well under 1% of the Earth's total surface area, and that 1% is all at very high latitude, where net solar intensity is relatively low even in summer. The average measured surface temperature in each hemisphere is surely strongly influenced by land area vs. ocean area, but the strong seasonal cycle in ocean heat content in the southern hemisphere ought to have significant contributions from both the land/ocean area ratios and the orbital variation in solar intensity.
SteveF,
TOA average daily solar intensity at the poles at the summer solstices is higher than anywhere else on the planet at any time. It peaks at 520 W/m^2 at the NP in June and 562 W/m^2 at the SP in December. The sun being above the horizon for 24 hours more than makes up for the low sun angle.
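Those solstice figures can be checked from the daily-mean insolation formula: at the pole the sun never sets at the solstice, so the daily mean is just S(a/r)^2 sin(δ). A quick sketch, assuming TSI = 1361 W/m^2 and δ = 23.44° (my assumed constants):

```python
import math

# Daily-mean TOA insolation at the pole during polar day: the solar zenith
# angle is constant over 24 h, so the mean is S * (a/r)^2 * sin(declination).
TSI = 1361.0                 # W/m^2 at 1 AU
DECL = math.radians(23.44)   # solar declination at the solstices
e = 0.0167                   # orbital eccentricity

# June solstice is near aphelion (r ~ a(1+e)); December is near perihelion.
np_june = TSI / (1.0 + e) ** 2 * math.sin(DECL)
sp_december = TSI / (1.0 - e) ** 2 * math.sin(DECL)
print(f"North Pole, June solstice:     {np_june:.0f} W/m^2")      # ~524
print(f"South Pole, December solstice: {sp_december:.0f} W/m^2")  # ~560
```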
I looked at the annual OHC cycle for the top 250 meters. The phase shift is so close to 90 degrees that it's impossible to determine the time constant from annual data, other than to say it's a lot longer than a year and probably longer than ten years. Exactly matching the time scale is also critical. I don't know if the OHC data are for the first of the month or the average for the month. I'm calculating forcing as the average for the month, so if the data are for the first of the month, I'm shifting the phase by half a month.
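The reason the near-90-degree phase shift is so uninformative: for a single box forced sinusoidally at ω = 2π per year, the phase lag is arctan(ωτ), which saturates as τ grows. A quick sketch:

```python
import math

# Phase lag of a single-box (first-order) response to an annual sinusoidal
# forcing: lag = arctan(omega * tau), with omega = 2*pi per year.
omega = 2.0 * math.pi  # radians per year

for tau in (0.25, 1.0, 3.0, 10.0, 30.0):  # time constants in years
    lag_deg = math.degrees(math.atan(omega * tau))
    print(f"tau = {tau:5.2f} yr -> phase lag = {lag_deg:5.1f} deg")
# Anything from ~10 yr upward gives a lag within about a degree of 90,
# which is why annual data alone can't pin the time constant down.
```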
There's an interesting post at Pielke, Sr.'s site on the relationship of surface temperature to radiative forcing, otherwise known as climate sensitivity. It includes this paragraph from an NRC report that seems particularly relevant to this thread:
SteveF,
The summer (July, August, September) 1979–2000 CT average Arctic ice area is 6,000,000 km^2; adding Greenland gives ~8,000,000 km^2 of total area, not 10,000,000 km^2. Antarctica is 14,000,000 km^2, and adding a summer average sea ice area of 2,600,000 km^2 means the Antarctic high-albedo area is ~8,000,000 km^2 larger than the Arctic's, or a factor of two at least. That's a lot in my book.
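Spelling that arithmetic out (with the Greenland ice sheet taken as roughly 1.7 million km^2 – my number; the rest are the figures above):

```python
# Summer high-albedo area bookkeeping, in millions of km^2.
# Greenland ice sheet area is an assumed round figure (~1.7); the rest are
# the numbers quoted above.
arctic_summer_sea_ice = 6.0
greenland = 1.7
antarctica = 14.0
antarctic_summer_sea_ice = 2.6

arctic_total = arctic_summer_sea_ice + greenland
antarctic_total = antarctica + antarctic_summer_sea_ice
print(f"Arctic total:    {arctic_total:.1f} million km^2")    # ~7.7
print(f"Antarctic total: {antarctic_total:.1f} million km^2")  # 16.6
print(f"difference:      {antarctic_total - arctic_total:.1f} million km^2, "
      f"ratio {antarctic_total / arctic_total:.1f}x")
```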
I found something interesting online:
“On average, the ocean surface is about 0.8°C warmer than the air above it. Direct heat transfer (transfer of sensible heat) therefore occurs usually from water to air and constitutes a heat loss. Heat transfer in that direction is achieved much more easily than in the opposite direction for two reasons:”
http://www.es.flinders.edu.au/~mattom/IntroOc/lecture04.html
So any temperature difference of less than 0.8 degrees is reasonable. As for what temperature differences above 0.8 degrees are reasonable, I'm not sure.
Out of curiosity, the 0.8°C is measured at what height?
John Creighton (Comment#20892) September 30th, 2009 at 9:20 am,
Latent heat transfer from a water surface to air should be much higher than sensible heat transfer. If the air temperature is much higher than the water temperature, water will evaporate (unless the air is already saturated with water), cooling the water and the air, but the net result will be an increase in enthalpy of the air because of the higher latent heat content of air with higher specific humidity.
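One way to see why the latent term wins: moist-air enthalpy is roughly h = cp*T + L*q, so each gram per kilogram of added water vapor carries about as much enthalpy as 2.5 K of warming. A rough sketch with round numbers, for illustration:

```python
# Compare the enthalpy change of air from warming vs. from added water vapor.
# Approximate moist-air enthalpy per kg of dry air: h = cp*T + L*q.
CP = 1005.0      # J/(kg K), specific heat of dry air
L = 2.5e6        # J/kg, latent heat of vaporization (round number)

delta_T = 1.0         # warming of 1 K
delta_q = 1.0e-3      # 1 g/kg increase in specific humidity

sensible = CP * delta_T   # ~1.0 kJ/kg
latent = L * delta_q      # ~2.5 kJ/kg
print(f"1 K of warming:        {sensible/1000:.1f} kJ/kg")
print(f"1 g/kg more moisture:  {latent/1000:.1f} kJ/kg")
# So even if evaporation cools the air slightly, the added vapor can leave
# the air's total enthalpy higher.
```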