Ocean Heat Uptake Efficiency, Chicken-laying Eggs and Infinite Silliness

In a recent post here, I discussed the nonlinearity of the Earth’s radiative flux response to temperature change exhibited in most of the GCM results, and the application of an incompatible linearised model to those results for the purpose of analyzing climate feedbacks.

The conversation in the comments section, perhaps inevitably, got onto the subject of “ocean heat uptake efficiency”, and more specifically onto the 2008 paper by Gregory and Forster (“GF08”).  The assumption underpinning ocean heat uptake efficiency is that you can characterize the net flux imbalance as a simple linear function of temperature change, with a constant of proportionality, κ.  I expressed the view in the comments section that the concept was inelegant, ultimately unnecessary and founded on a mathematical fallacy.  With Lucia’s permission, I would like to try to defend this comment, and take the opportunity to address some of the questions which arose.

 

Commenter Oliver  defended the concept on utilitarian grounds:-

“But since you might like to compare the effective “sensitivities” in AOGCMs with each other or with what you get using mixed-layer oceans, the ocean “efficiency” can still be a useful concept. It’s just one simple way to get a “metric” for comparing models which have similar equilibrium sensitivity but different transient behaviors or vice versa.”

I have some sympathy with the above statement.  However, I would argue that it is simply not necessary to make the very dubious assumption of a linear relationship between net flux imbalance and temperature change in order to establish comparative metrics between models.  This assumption has limited applicability within a strictly limited set of circumstances; specifically, it requires a linearly increasing forcing with time and an ocean with infinite heat capacity.  A more direct measure for inter-model comparison would be, say, the integral of net flux over time at the point of doubling of CO2 – or perhaps the same thing divided by the cumulative forcing actually used by the particular GCM.  These alternative metrics would provide a measure of the ocean heat gain free of any assumptions about the functional form of ocean heat flux vs temperature, and hence would remain valid for comparisons made outwith the boundary condition assumptions of a strictly linear increase in forcing with time and an infinite-acting ocean.
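For concreteness, here is a minimal sketch of how such a metric could be computed from any model’s diagnostic time series.  The 70-year ramp and the 0.55 flux fraction below are hypothetical stand-ins of my own, not diagnostics from any actual GCM:

```python
import numpy as np

# Alternative metric: the time-integral of net flux up to CO2 doubling,
# optionally normalised by the time-integrated forcing.
# Hypothetical 1%/yr run: forcing ramps linearly to 3.7 W/m^2 at year 70,
# and the net TOA imbalance is taken as a stand-in 0.55 of the forcing.
years = np.arange(0.0, 71.0)
F = 3.7 * years / 70.0                 # cumulative forcing, W/m^2
N = 0.55 * F                           # stand-in net flux imbalance, W/m^2

def time_integral(y, x):
    """Trapezoidal integral of the sampled series y(x)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

heat_gain = time_integral(N, years)                 # ocean heat gain, W yr/m^2
normalised = heat_gain / time_integral(F, years)    # fraction of cumulative forcing
print(heat_gain, normalised)
```

No assumption about the functional form of net flux vs temperature enters here; the metric is just an energy balance over the run.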

Fred Moolten makes a slightly different  point later:-

“…what comes across (to me at least) is that the division of energy flow into ocean heat uptake and escape to space makes it possible for models to describe each by a parameter (κ and α respectively), consider a steadily increasing forcing scenario such as for a TCR in which both parameters might be considered constant, combine them into a new constant parameter ρ that accounts for all energy flow, and which therefore has the enormous virtue that it can now be used to relate forcing to temperature change directly as F = ρΔT, without a need to measure the individual values of κ and α, or to estimate associated feedbacks. In other words, as long as those appropriate conditions hold, ΔT is a linear function of forcing, and all that needs to be done to estimate TCR from historical data is get a good estimate of the forcings (GHGs, solar, aerosol, etc.) and of the temperature change, compute the value of ρ from those data, apply it to CO2 forcing, and arrive at a TCR value.”

I will show that you do not need the concept of ocean heat uptake efficiency to conclude that, if forcing is linearly increasing with time, then a plot of Forcing vs Transient Temperature Change should be approximately a straight line.   It is easily demonstrated  that this should be true over a wide range of ocean models, ranging from finite capacity to infinite ocean models.  It is not necessary to assume an infinite-acting ocean.  In other words, one of the main conclusions of the GF08 paper – that TCR can be directly estimated by extrapolation if the forcings and temperature are known  – may well be sound,  despite the poor logic on which the conclusion is grounded in the paper.

So, if I accept the GF08 conclusions, why do I dislike the concept so much?  The big problem is that  various authors are using it to draw questionable or even completely erroneous conclusions,  by disconnecting the concept from the restrictive assumptions which go into its derivation.  (This long pre-dates GF08 incidentally.  It started back in the late 90’s as far as I can tell, and has spread like a virus since then.)

There are two areas of application where we can find these questionable conclusions.  The first is in the use of the concept to partition an instantaneous flux (forcing)  – which is actually varying substantially over time  – into the bit that goes back into space and the bit that goes to warm the ocean.  This is really  quite a silly thing to do, if you think about it just a little.   It would not be silly to partition the integral form (in time) to draw conclusions, since this is then founded on a heat balance – energy in and energy out –  and does not require the restrictive assumptions on the functional form of the input forcings and the various flux responses.   So why do some climate scientists  tie themselves into a set of unnecessary and highly restrictive assumptions to analyse results, with a high likelihood of drawing erroneous conclusions, when 10 minutes on a spreadsheet would give them a more rigorous answer?  I honestly don’t know the answer to this question.

The second (and more important) area of application where we see erroneous conclusions is in the use of ocean heat uptake efficiency to explain nonlinearity in the relationship between Earth’s radiative response and temperature change.  I want to re-emphasise the critical observation that the Earth’s radiative response to a climate state (temperature) controls the net flux going into the oceans.  The net flux going into the oceans does not control the Earth’s radiative response to a given climate state (temperature).   It is here that we find an egg laying a chicken.

I am going to try to explain where the fallacy starts.

 

Energy Balance Modeling – The Deep Ocean Flux term

I will start with a climate feedback model, where the Earth’s radiative response is assumed to be a (univariate) linear function of temperature, since this underpins the GF08 conceptual model.  (This is not a necessary assumption for my conclusions regarding the conceptual problems, but it is needed for the GF08 conclusions.)   The instantaneous flux balance,  expressed  in the form of radiative fluxes at TOA responding to a small flux forcing which is allowed to vary over time, is given by:-

Net flux (imbalance)  at time t =  Cumulative Forcing  – Earth’s radiative response to the changing temperature

N = F(t) – α*Ts                                   Equation (1)

Ts here represents the change in seasonally adjusted average surface temperature from some quasi steady-state at time t=0.

I am going to now consider an ocean model of two layers   – the mixed layer and a deeper capacity – connected by a diffusive term.  This is not necessarily a good model of the real world, as SteveF keeps reminding us, but again it forms a good starting point for examination of the GF08 conceptual model, as we will see.  This ocean model is:-

Net heat flux going into the ocean, N = C1 dTs/dt + κ(Ts – T2)                      Equation (B1)

C2 dT2/dt = κ(Ts – T2)                                                                                                  Equation (B2)

Where C1 is the heat capacity of the mixed layer, and C2, T2 describe the heat capacity and temperature change, respectively, of the deeper layer.  The heat flux across the boundary between the two layers is controlled by the diffusive constant κ, which has units of Watts/m2/deg K.  Equation (B2) is a statement of instantaneous flux balance for the deeper layer.

If we substitute B1 into Equation 1, we obtain:

C1 dTs/dt + κ (Ts – T2) =  F(t) – α*Ts                                           Equation (B3)

If we “solve” Equation (B2) for the deep ocean temperature, T2, we find :

T2(t) = (κ/C2) exp(−κt/C2) ∫₀ᵗ Ts(t′) exp(κt′/C2) dt′                  Equation (B4)

If the system is allowed to equilibrate  (i.e. cumulative forcing eventually goes to some  constant value), then we see from Equation B4 that, as t becomes large,   Ts  approaches a constant, T2 approaches Ts and the flux term κ (Ts – T2) goes to zero.    Equations (B3) and (B4) therefore provide a coherent,  closed form solution in terms of energy balance.

Although very simple, Equations (B3) and (B4)  offer sufficient versatility to be able to  duplicate the aggregate behaviour of a GCM with good fidelity,  given the appropriate choice of parameter values.
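As a sanity check, the toy model can be integrated directly.  The sketch below uses a simple forward-Euler scheme and illustrative parameter values of my own choosing (not GF08’s fitted values): α = 1.3 and κ = 0.7 W/m2/K, a mixed-layer capacity C1 = 8 W yr/m2/K and a deep capacity C2 = 5×C1.

```python
import numpy as np

# Two-box energy balance model (Equations 1, B1/B3, B2) under a linear
# forcing ramp (~1%/yr CO2, doubling at year 70).  Parameter values are
# illustrative assumptions, not GF08's fits.
alpha, kappa = 1.3, 0.7          # W m-2 K-1: radiative response, diffusive constant
C1, C2 = 8.0, 40.0               # W yr m-2 K-1: mixed-layer and deep capacities
F2x, years = 3.7, 70.0           # forcing at CO2 doubling, ramp duration

dt = 0.01
t = np.arange(0.0, years + dt, dt)
F = F2x * t / years              # linearly increasing forcing
Ts = np.zeros_like(t)            # surface (mixed-layer) temperature change
T2 = np.zeros_like(t)            # deep-layer temperature change

for i in range(len(t) - 1):      # forward-Euler integration of (B3) and (B2)
    N = F[i] - alpha * Ts[i]                                    # Equation (1)
    Ts[i+1] = Ts[i] + dt * (N - kappa * (Ts[i] - T2[i])) / C1   # Equation (B3)
    T2[i+1] = T2[i] + dt * kappa * (Ts[i] - T2[i]) / C2         # Equation (B2)

N = F - alpha * Ts
rho_hat   = np.polyfit(Ts, F, 1)[0]      # OLS gradient of F   vs Ts ("rho")
kappa_hat = np.polyfit(Ts, N, 1)[0]      # OLS gradient of N   vs Ts ("kappa")
alpha_hat = np.polyfit(Ts, F - N, 1)[0]  # OLS gradient of F-N vs Ts ("alpha")
print(rho_hat, kappa_hat, alpha_hat, Ts[-1])
```

Note that since N is constructed as F – α·Ts, the OLS gradient of (F – N) vs Ts recovers α exactly, and the fitted ρ equals the fitted α plus the fitted κ by construction; only the κ and ρ estimates depend on the ocean parameters.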

The figure below shows the reproduction of the “average” behaviour of the GCMs tested in GF08 for the 1% per year doubling of CO2 – which approximates to a linear increase in forcing with time.

Figure 1.

The linear gradients of the various fluxes with temperature (F vs Ts, N vs Ts and F–N vs Ts) obtained from OLS correspond to the average values reported by GF08 for ρ, κ and α respectively.  This “average GCM” has a TCR of 1.8 °C at the point of doubling of CO2.

This second figure shows the temperature response in time, as well as the behaviour of the deep ocean flux term.

Figure 2.

So, we observe from this:-

- The GCMs are capable of producing near-linear behaviour for net flux against temperature under conditions of linearly increasing forcing.  This is actually the source of the observation made by GF08, and previous authors, that the net flux N can be approximated by a term κ*Ts.

- The GCMs can only produce this near-linear behaviour in net flux vs temperature with an ocean model which approximates an “infinite-acting” ocean heat capacity, i.e. with a deep ocean capacity set large enough that changes in deep temperature over the period of interest are small relative to changes in surface temperature, Ts.  For the emulated results above, the deep ocean layer capacity is set to about 5 times the mixed layer capacity.

 

If the deep ocean heat capacity is instead set to a small(er) finite capacity, the net flux term ceases to be near-linear as it is in the case above;  instead it asymptotes to a constant value for a linearly increasing forcing case.  An OLS fit to the net flux data will  of course still yield a positive value of ocean heat uptake efficiency, κ,  under this scenario  – even though it is inappropriate for this case.

However, for the small finite capacity case, a plot of forcing against surface temperature will (still) rapidly converge to a straight line relationship.   Put another way, the existence of a near-linear relationship between forcing and surface temperature does not provide a diagnostic for determining whether the deep ocean capacity is finite or infinite-acting, since both ocean models will reproduce this behaviour under the boundary condition of  a linear increase in forcing with time.   Given this, I was seriously surprised that GF08 did not examine directly the shape and statistics of the relationship  between  net flux and temperature for the historical (20th century) data;  this would have provided  a more appropriate diagnostic for validation of the (critical) ocean model assumption.

For Fred Moolten: from numerical experiments on these “semi-infinite” cases, the value of ρ as defined by GF08 (the gradient of Forcing vs Surface Temperature obtained by OLS) is not invariant with forcing history; specifically it tends to get larger if forcing is increased at a slower linear rate in time.  This is a numerically-based observation, but it seems quite compatible with what GF08 observed when they compared the “historical” values of ρ from the 20th century GCM runs with the values obtained from the future scenarios, where the change of forcing with time was higher.  So ocean heat uptake efficiency apparently exhibits some dependence on forcing history.

Let me be clear that in the mathematical model I used to generate these results there is no dependence of Earth’s radiative response on forcing history; it is solely dependent on temperature.  Moreover, in the temperature domain, once the same cumulative forcing has been achieved between cases, there is no dependence of net flux on forcing history.  None.  Zilch.  And yet we see some differences appearing in estimates of ρ and κ between different forcing cases from the same mathematical model.

These differences arise solely because the ocean heat uptake efficiency assumes an infinite-acting ocean and (therefore) tries to fit a straight line to the net flux vs temperature relationship.  In reality, because the mathematical model here is semi-infinite rather than infinite, the estimate of κ is obtained by fitting this straight-line function to a slightly concave curve; hence the values change slightly between different forcing cases.  This feature of the simple model here is presumably a feature of the GCMs and explains some of the GF08 observations.  The inference that this proves some net flux dependence on forcing history is spurious.  I hope that this addresses Fred’s question.
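The flavour of this numerical experiment can be sketched with the two-box model (rather than a semi-infinite diffusive ocean) and illustrative parameters; the only point being made is that the OLS-fitted gradient of F vs Ts shifts with the ramp rate, even though the model’s radiative response depends on temperature alone:

```python
import numpy as np

# Check that the OLS estimate of rho (gradient of F vs Ts) is not invariant
# with forcing history in a finite-capacity two-box model.
# Parameter values are illustrative assumptions only.
def ramp_run(beta, years, alpha=1.3, kappa=0.7, C1=8.0, C2=40.0, dt=0.01):
    """Integrate Equations (B3) and (B2) for F(t) = beta*t; return OLS rho."""
    t = np.arange(0.0, years + dt, dt)
    F = beta * t
    Ts = np.zeros_like(t)
    T2 = np.zeros_like(t)
    for i in range(len(t) - 1):
        N = F[i] - alpha * Ts[i]
        Ts[i+1] = Ts[i] + dt * (N - kappa * (Ts[i] - T2[i])) / C1
        T2[i+1] = T2[i] + dt * kappa * (Ts[i] - T2[i]) / C2
    return np.polyfit(Ts[1:], F[1:], 1)[0]   # fitted gradient of F vs Ts

rho_fast = ramp_run(3.7 / 70.0, 70.0)    # CO2 doubling reached in 70 years
rho_slow = ramp_run(3.7 / 140.0, 140.0)  # same doubling, half the ramp rate
print(rho_fast, rho_slow)
```

The two fitted values differ because the straight line is being fitted to a slightly concave curve, and the curvature sampled depends on how long the deep layer has had to warm; the size (and even the sign) of the shift depends on the ocean parameters chosen.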

 

C – The GF08 Model

I am now going to “build” the GF08 model and I am going to show that the model is founded on the assumption of an ocean of infinite heat capacity.  I will show that this gives rise to a paradox.  And I will show how a failure to understand the paradox leads to a fallacious argument that ocean heat uptake controls the shape of the Earth’s radiative response to climate.

Starting with Equation (B3), the GF08 model assumes that the variation in the temperature in the deeper capacity is very small compared to the variation in Ts.

So κ (Ts – T2) ≈  κTs                                           Equation (C1)

Equation (B3) can now be written as:-

C1 dTs/dt + κ Ts  =  F(t) – α*Ts                      Equation (C2)

Rearranging and writing  ρ = α + κ, we obtain :-

C1 dTs/dt =  F(t) – ρ *Ts                                  Equation (C3)

Equation (C3) is a  simple linear ODE, so we can solve it analytically when F(t) is an analytic function.

Let us first consider the specific case of a linear increase of forcing with time:-

F(t) = β t                                                               Equation (C4)

The solution for this boundary condition is

Ts = (t- C1 (1 – exp(-ρ t/ C1))/ ρ) * β / ρ   Equation (C5)

dTs/dt =  (1 – exp(-ρ t/ C1)) * β / ρ            Equation (C6)

From the above, we see that for small finite values of C1 :-

Temperature asymptotes to a linear relationship with time, with gradient β / ρ.

Forcing asymptotes to a linear relationship with temperature with gradient ρ.

Net flux asymptotes to a linear relationship with temperature with gradient κ.

(Forcing – Net Flux) asymptotes to a linear relationship with temperature with gradient α.

This should all look very familiar to those of you who have read the GF08 paper, since I have used the same nomenclature as GF08 for the parameters to define the asymptotic gradients.

The smaller the value of the mixed layer heat capacity, C1, the more rapidly the solution asymptotes to the straight line solutions.
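These asymptotic properties are easy to verify numerically.  The sketch below (again with illustrative parameter values) checks the analytic solution (C5) against a direct integration of the governing equation (C3), and confirms that the late-time temperature gradient approaches β/ρ:

```python
import numpy as np

# Verify the ramp-forcing solution (C5) of C1*dTs/dt = beta*t - rho*Ts,
# Equation (C3), and its asymptotic gradient beta/rho.
# Parameter values are illustrative assumptions.
C1, rho, beta = 8.0, 2.0, 3.7 / 70.0   # capacity, alpha+kappa, ramp rate

dt = 0.001
t = np.arange(0.0, 70.0 + dt, dt)

# Analytic solution, Equation (C5)
Ts_analytic = (t - C1 * (1.0 - np.exp(-rho * t / C1)) / rho) * beta / rho

# Forward-Euler integration of Equation (C3)
Ts_num = np.zeros_like(t)
for i in range(len(t) - 1):
    Ts_num[i+1] = Ts_num[i] + dt * (beta * t[i] - rho * Ts_num[i]) / C1

max_err = np.max(np.abs(Ts_analytic - Ts_num))    # small discretisation error
late_grad = (Ts_analytic[-1] - Ts_analytic[-1001]) / (1000 * dt)
print(max_err, late_grad)                         # late gradient -> beta/rho
```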

We see from this that the GF08 “model” is not in itself a governing equation or a response function; it is an approximation to the solution of a governing equation (C2) for a specific set of boundary conditions (i.e. F(t) = βt).

OK so far.  Now, let us re-consider the governing equation, C2, but this time for a changed boundary condition.  We consider a step forcing F(t) = F = constant value.

The solution of C2 for this boundary condition is given by:

Ts = (1 – exp(-ρ t/ C1)) * F / ρ      Equation (C7)

dTs/dt = exp(-ρ t/ C1) * F / C1      Equation (C8)

If we allow t to go to infinity in Equation (C7),

Ts -> F/ ρ = F/( α + κ) .                    Equation (C7a)

This is the temperature at which this (mathematical) system equilibrates.

An alternative definition of equilibrium is the point at which the net flux imbalance goes to zero.  So now let us calculate the equilibrium temperature from Equation (1).  We have:-

N = F(t) – α*Ts                                   Equation (1) repeated

For the constant step forcing,

N = F – α*Ts                                         Equation (C9)

As N-> 0, we obtain, Ts -> F/ α.    Equation (C9a)           (Contradiction!)

So we have two different solutions (C7a) and (C9a) for the equilibration temperature from the same set  of equations  -  an apparent  paradox.     Since Equations (C7) and (C8) are valid solutions of Equation (C2) for a fixed step-forcing, the paradox must be somehow built into Equation (C2) itself.    Recall that this equation is at the root of the concept of ocean heat uptake efficiency.

Resolution of the paradox is not difficult.  If we consider the LHS of Equation (C2), we note that the first term (C1 dTs/dt ) does go to zero as t becomes large, but the second term κ Ts asymptotes to a constant value.  In other words, the mathematical solution of this equation cannot ever  allow the net radiative flux imbalance to close.  A quasi steady-state condition is reached when the residual net flux imbalance is exactly balanced by the heat flux term κ Ts.  This is heat which continues to go into the ocean, but without bringing about any temperature gain even over infinite time.  In other words, the ocean has an infinite heat capacity in this model form, and the temperature can never get to the point where the net flux imbalance is closed.

The problem then is not with the approximation to the solution of this governing equation (C2), but with the use of this governing equation (C2),  which has an infinite-acting ocean, to approximate governing equation (B3) or the real-world,  both of which have finite ocean heat capacities.  In fact, (B3) or indeed any alternative form which has a finite ocean heat capacity,  presents no such paradoxes.
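The paradox, and its absence in the finite-capacity form, can be seen directly by integrating both governing equations under the same step forcing (illustrative parameter values):

```python
import numpy as np

# Step-forcing comparison: the "infinite ocean" governing equation (C2)
# never closes the net flux imbalance, while the finite two-box form
# (B3)/(B2) equilibrates at Ts = F/alpha with N -> 0.
# Parameter values are illustrative assumptions.
alpha, kappa, C1, C2, F = 1.3, 0.7, 8.0, 40.0, 3.7

dt, n = 0.01, 100_000                 # 1000 years, forward Euler
Ts_inf = 0.0                          # Equation (C2): C1 dTs/dt = F - (alpha+kappa)*Ts
Ts_fin, T2 = 0.0, 0.0                 # Equations (B3)/(B2): finite deep capacity
for _ in range(n):
    Ts_inf += dt * (F - (alpha + kappa) * Ts_inf) / C1
    Ts_fin += dt * (F - alpha * Ts_fin - kappa * (Ts_fin - T2)) / C1
    T2     += dt * kappa * (Ts_fin - T2) / C2

N_inf = F - alpha * Ts_inf            # residual imbalance: never closes (C9a vs C7a)
N_fin = F - alpha * Ts_fin            # finite model: imbalance closes
print(Ts_inf, N_inf, Ts_fin, N_fin)
```

The infinite-acting form settles at Ts = F/(α + κ) while a large residual imbalance κTs continues to flow into the ocean forever; the finite form drifts on to Ts = F/α with the imbalance closed, exactly as Equations (C7a) and (C9a) suggest.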

The ease with which this approximation gives rise to  the false conclusion that the Earth’s radiative response to temperature is somehow controlled by ocean heat uptake should hopefully now be evident.  When Equation (C2) is used to model or describe behaviour – or to analyse the results from GCMs –  it appears that the κ term controls the initial gradient of net flux, but, of course,  κ must eventually go to zero for net flux to come into balance.  It is a short step from there to conclude that the net flux gradient against temperature for the constant forcing case starts with a gradient of (α + κ), the infinite-acting part, but must end with a different gradient to satisfy closure of the net flux imbalance.   Since the Earth’s response is the difference between the Forcing and the net flux going into the ocean, then it is “equally obvious” that this ocean heat uptake controls the Earth’s radiative response to the climate and explains the nonlinearity  of net flux against temperature observed in the GCMs  for a constant forcing case.  QED.   And of course,  bovine scatology.

75 thoughts on “Ocean Heat Uptake Efficiency, Chicken-laying Eggs and Infinite Silliness”

  1. Question: when the mixing layer reaches capacity you would have a small flux through it to the lower layer. If you add a third, lower layer where you used the difference of the geothermal influx from beneath and the downwelling due to polar ice formation (which is slightly greater cooling than the geothermal warming, from my estimates), you would have a steady state that should allow an estimation of the conditional equilibrium OHC. Wouldn’t that help estimate the equilibrium sensitivity?

    I ask because I am fairly confident that current SSTs are near the long term average of the MWP.

  2. Hi Dallas,
    I don’t know the answer to your question. There certainly exists the possibility in the real world of some small, but maybe significant, abyssal heating according to several papers. And deep cold water convection near the poles, OK. We can speculate that if you could pinpoint the fluxes in a 3-D ocean model, then you should certainly end up with a more realistic ocean model than the toy version I am using here. I’m not sure though I believe in a “conditional equilibrium OHC”, if I understand what you’re getting at. I can’t see any reason why it wouldn’t change all the time.

    One of the principal points I was trying to make was that it doesn’t matter how you change the ocean model, the net flux going into the ocean will be fully determined by the surface properties and the atmosphere at that climate state, not by the properties of the ocean itself. The reality of course is that if you start to postulate radiative and non-radiative LOCAL fluxes into the ocean model, then the Earth’s radiative response can become multivalued against an average temperature, but that is because we are trying to characterise the climate state with a single value – the average temperature.

  3. PaulK, the conditional equilibrium would be the matching equilibrium for “at that climate state.” OHC would be primarily controlled by solar, which penetrates to depth. Since solar global TSI only varies by 1 Wm-2 averaged over the longer cycles and does have some impact, you could use that reasonably well-known value to estimate the RC value between two conditional equilibrium states, maximum solar and minimum solar. That solar change though is in the same order of magnitude as the geothermal and downwelling.

    When I look at the longer term averages, greater than a century, there is about a 0.2C difference in the SST. With the average SST around 294K, that shift has a range of about 0.8 Wm-2 +/- 0.4 Wm-2 impact on surface air temperature. More impact at the lower temperatures and, due to the Gulf Stream, greater NH impact.

    I would think that a good model would start at the ocean’s deepest layer and work out instead of starting with the most noise and working in.

  4. Paul_K

    What does the temperature domain plot look like for the finite ocean heat capacity case once the overall forcing stops increasing? Is it still linear?

  5. Hi Paul,

    Thanks a lot for this post. I don’t have time to read it in detail right now, I got this far:

    I am going to try to explain where the fallacy starts.

    What I read in that is consistent with what I understand based on the discussions on your earlier thread, Isaac Held’s blog and some other sources. I’ll be back to read in more detail!

    Bill

  6. Oh, I mean to dredge up my question from a previous thread, hoping that I or some other commenter will look into it further, since we are talking about ocean stuff. “Consensus” so far attributes it to data coverage issues.

  7. Paul_K,
    You are preaching to the choir reverend.
    .
    A rigorous treatment of ocean heat uptake is IMO the only rational way of looking at the problem. Sometimes an approximation (infinite ocean heat capacity!) is useful, but in this case, it only leads to confusion, and worse, nonsensical conclusions.

  8. Paul_K wrote:

    Commenter Oliver defended the concept on utilitarian grounds:

    …It’s just one simple way to get a “metric” for comparing models which have similar equilibrium sensitivity but different transient behaviors or vice versa.

    …I would argue that it is simply not necessary to make the very dubious assumption of a linear relationship between net flux imbalance and temperature change in order to establish comparative metrics  between models.

    If a metric is simple to construct and understand, and still manages to capture the relative differences between models’ behaviors well enough to point researchers in the right direction (i.e., you can tell the models apart on a scatter plot), then I think you might have to justify going more complicated. Utilitarian grounds are pretty good grounds for justifying simplified metrics (and models).

    …A more direct measure for inter-model comparison would be, say, the integral of net flux over time at the point of doubling of CO2 – or perhaps the same thing divided by the cumulative forcing actually used by the particular GCM.
    …These alternative metrics would provide a measure of the ocean heat energy gain, free of any assumptions about the functional form of ocean heat flux vs temperature

    Wouldn’t the integrated net flux divided by the integrated net forcing also be an “efficiency,” just with the integrations done before taking the ratio instead of the other way around?

    Re: SteveF (Comment #97446)

    A rigorous treatment of ocean heat uptake is IMO the only rational way of looking at the problem.

    Ocean models are supposed to be “rigorous.” The problem is in interpreting what comes out of the models because they’re so complicated and rigorous (in some sense, anyway). That’s why there’s so much interest in simplified metrics and explanatory models.

  9. Oliver # 97447,
    I understand that people are looking for simplifications in considering OHC. I should have been more explicit. A simplification which does not make physical sense (e.g. infinite ocean heat capacity) needs to be considered very carefully, especially if it might lead to nonsensical inferences if not very carefully applied. As to whether or not the ocean models used in CGCMs are rigorous: the simplest test of rigor is that they yield results which are reasonably close to measurements. Most seem not close in either temperature change profile over time or total heat accumulation. A simplified model which is close to measured values would seem (in some sense, anyway) more rigorous.

  10. SteveF,

    As to whether or not the ocean models used in CGCMs are rigorous: the simplest test of rigor is that they yield results which are reasonably close to measurements. Most seem not close in either temperature change profile over time or total heat accumulation.

    Strictly speaking, the ocean vertical mixing in OGCMs is wrong because it’s prescribed that way, not because an incorrect mixing emerges from the model physics.

  11. Oliver,

    Strictly speaking, the ocean vertical mixing in OGCMs is wrong because it’s prescribed that way, not because an incorrect mixing emerges from the model physics.

    If this is the case, then why on Earth not ‘prescribe’ the correct vertical mixing, or, if possible, allow the correct profile to emerge from the model (if the model is in fact robust). Prescribing what is obviously incorrect makes no sense.

  12. Re:DeWitt Payne (Comment #97442)
    June 11th, 2012 at 9:12 am

    DeWitt,
    You asked:-
    “What does the temperature domain plot look like for the finite ocean heat capacity case once the overall forcing stops increasing? Is it still linear?”
    Yes. You get a good idea of what happens by looking at Figure 1 after the forcing reaches its maximum value (after 70 years).
    F becomes a constant value.
    (F – Net flux) continues as a straight line of gradient α.
    Net flux settles immediately onto a straight line of gradient -α.

  13. Oliver,
    You asked:-
    “Wouldn’t the integrated net flux divided by the integrated net forcing also be an “efficiency,” just with the integrations done before taking the ratio instead of the other way around?”

    The difference is that one metric has to assume a certain shape of flux response; this assumption is generally not valid outside the specific conditions of a linear increase in forcing with time and an infinite-acting ocean. The alternative metric has general applicability. I think that makes it more useful, and less likely to give erroneous answers because preconditions are not fully met, but you are allowed to disagree.

    I am reminded of the anecdote about an undergraduate examination question. The question was:-
    You are on the top of a high building, and you are given a perfectly calibrated barometer. You are told the barometric pressure at the base of the building. Describe how you might use the barometer to estimate the height of the building.

    One student responded:- You drop the barometer off the top of the building and time how long it takes to hit the ground.

  14. I do hope Fred M comes back… or better, a some comments from Gregory or Forster.

  15. Paul_K,

    You, and I guess Gregory and Forster, aren’t really doing diffusion at all; rather you are transferring energy across a border from one well-mixed box to another. That seems to me to be way too much simplification. When I did something like that to model the lunar surface, the surface temperature response was way too slow when the sun came up and when the sun went down. Real diffusion causes the surface temperature to go down (and heat up) much faster. It shouldn’t be that hard to do it right by assuming semi-infinite diffusion for the deep ocean. I haven’t done a Laplace transform since I was in grad school, or I would do it. Either that or use a whole lot more boxes and do Crank-Nicolson finite difference.

  16. Re:dallas (Comment #97441)
    June 11th, 2012 at 6:52 am
    I’m sincerely sorry, Dallas, but I’m not really understanding your point here at all.
    If you are saying that you should be able to use an observation-based estimate of the surface temperature variation between maximum and minimum insolation cases to estimate sensitivity, then I would tentatively agree with you. In fact I think Nir Shaviv tried something similar.
    But without a little more clarity on your terms, I can’t grasp what you really are trying to say. How about some definitions?

  17. I strongly suspect that if you did diffusion correctly, you wouldn’t get a knee in the net forcing curve where the gross forcing stops increasing, or at least it wouldn’t be so obvious.

  18. PaulK Sorry for the confusion. I am not so much interested in the surface temperature variation, more in variations in the thermohaline. Vaughan Pratt has a model he is preparing that indicates geothermal flux fluctuations (say that three times fast) below the oceans contribute to the PDO/AMO cycles. I personally think variations in sea ice annual volume is the PDO driver, though there is a pretty good indication that geothermal/volcanic shifts the AMO around. I was just thinking a third layer that considers the back door diffusion would be a nice thing to consider in a model.

    On the geothermal thing, there were quite a few Earthquakes around the 1940s SST drop. That could have pushed the THC around a touch.

  19. dallas (Comment #97473)-“I personally think variations in sea ice annual volume is the PDO driver”

    The PDO is basically just a temporally smeared ENSO signal.

  20. Re:DeWitt Payne (Comment #97471)
    June 11th, 2012 at 5:55 pm

    DeWitt,
    You wrote:-

    “I strongly suspect that if you did diffusion correctly, you wouldn’t get a knee in the net forcing curve where the gross forcing stops increasing, or at least it wouldn’t be so obvious.”

    Changing out the ocean model calculation for an n-layer model, or different analytic solution would have some impact on the temperature solution in time, and hence the net flux variation in time – no question.

    However, this would not change out the knee in the net flux vs temperature relationship. The solution to the ocean model temperature distribution in time – however it is calculated – must satisfy an input boundary condition that is set by the ingoing flux condition: Flux into ocean = F(t) – α*Ts.

    The knee arises from the fact that the input forcing changes from a linear increase in time to a flat-top constant value after, in this case, we achieve a doubling of CO2. This is an effect of inputs rather than the ocean model calculation. Once the input forcing becomes constant, then the flux boundary condition imposed on the ocean is F – α*Ts – a straight line condition. This is effectively an input boundary condition on the ocean model solution, and will not be changed by changing out the ocean model assumptions.
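    A quick numerical sketch of this point (with illustrative parameter values, not fitted to any GCM):

```python
import numpy as np

# The "knee": once F(t) flattens at F_2x, the boundary condition forces
# N = F_2x - alpha*Ts, a straight line of gradient -alpha in the (Ts, N)
# plane, whatever the ocean model does.  Parameters are illustrative.
alpha, kappa, C1, C2, F2x = 1.3, 0.7, 8.0, 40.0, 3.7

dt = 0.01
t = np.arange(0.0, 140.0 + dt, dt)
F = np.where(t < 70.0, F2x * t / 70.0, F2x)   # ramp, then flat-top forcing
Ts = np.zeros_like(t)
T2 = np.zeros_like(t)
for i in range(len(t) - 1):                   # two-box model, Equations (B3)/(B2)
    Ts[i+1] = Ts[i] + dt * (F[i] - alpha * Ts[i] - kappa * (Ts[i] - T2[i])) / C1
    T2[i+1] = T2[i] + dt * kappa * (Ts[i] - T2[i]) / C2

N = F - alpha * Ts
post = t > 70.0                               # after the forcing stops rising
slope_post = np.polyfit(Ts[post], N[post], 1)[0]
print(slope_post)                             # ≈ -alpha = -1.3
```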

  21. AndrewFL, “The PDO is basically just a temporally smeared ENSO signal.” Chicken or Egg? The strength of the pressure differential that changes the trade wind speed is related to the temperature difference, which is related to the rate of polar cooling. The ENSO is the stronger, more documented part of the puzzle, but there are more subtle things likely to be going on.

    http://i122.photobucket.com/albums/o252/captdallas2/climate%20stuff/tasmaniassataymyrhadsst1880-1991from1864.png

    That is a paleo reconstruction comparison to SST I have been looking at. The blue, Tasmania and the Yellow, Southern South America follow the SST pretty darn close with their differences averaging out to nearly the SST. The 1940s glitch stands out with the green Taymyr Siberian reconstruction indicating that the SST glitch is a NH anomaly. By using only reconstructions with high SST impact, it appears a reasonable reconstruction of past SST is possible. Once you add NH reconstructions which have larger albedo, volcanic and a lagged SST response, you don’t get much information.

    http://i122.photobucket.com/albums/o252/captdallas2/climate%20stuff/tasmaniassataymyrgreenlandhadsst1880-1991from-2000.png

    This is the full period of that reconstruction with the new Greenland added. Greenland should be a fair indication of Arctic sea ice, which indicates that the variability has been reducing. I don’t think the amplitude is very meaningful, but the frequency is interesting. If you look at Antarctic sea ice, there appears to be more variation in volume during warming trends and less in neutral or cooling trends. Kinda like the PDO pattern. So what causes what?

  22. SteveF (Comment #97458)

    Oliver,
    “Strictly speaking, the ocean vertical mixing in OGCMs is wrong because it’s prescribed that way, not because an incorrect mixing emerges from the model physics.”

    If this is the case, then why on Earth not ‘prescribe’ the correct vertical mixing,

    1) It would help if we knew the “correct” mixing in detail.
    2) There are some reasons to believe that the “correct” diffusion in many parts of the thermocline is lower than the background numerical diffusion in current models

    or, if possible, allow the correct profile to emerge from the model (if the model is in fact robust).

    The correct sub grid mixing (aka, the mixing not resolved by the model) will likely not “emerge” from the model itself.

  23. Re: Paul_K (Jun 12 02:25),

    I didn’t mean to give the impression that the net flux wouldn’t start to decline when the CO2 leveled out. What I meant was that the curve would probably look more like Figure 1 in Winton et al. (2010) where the change in slope is very much less abrupt. The flux into the deep ocean should decline much faster with a true diffusive model than it does in a two box model.

  24. Paul_K (Comment #97467)

    Oliver,
    You asked:-
    “Wouldn’t the integrated net flux divided by the integrated net forcing also be an “efficiency,” just with the integrations done before taking the ratio instead of the other way around?”

    The difference is that one metric has to assume a certain shape of flux response;

    But, mathematically, the two metrics are 1) take the ratio of the net flux to the net forcing and see if it is reasonably constant in time, vs. 2) take the ratio of the averaged net flux to the averaged net forcing. Is that a correct reading?

    this assumption is generally not valid outside the specific conditions of a linear increase in forcing with time and an infinite-acting ocean.

    Your main claim seems to be that there is a time dependence of the downward flux based on the “saturation” of the deep ocean on the timescales over which the transient climate response (TCR) is being considered (e.g., GF08). For example, your Fig. 2 shows a strong dependence of the forcing shape when the “deep-ocean heat capacity” is assumed to be 5x the mixed layer capacity. How do you come up with this number 5? The “rest of the ocean” has perhaps 30-40x the volume of the mixed layer.

    I am reminded of the anecdote about an undergraduate examination question. The question was:-
You are on the top of a high building, and you are given a perfectly calibrated barometer. You are told the barometric pressure at the base of the building. Describe how you might use the barometer to estimate the height of the building.
    One student responded:- You drop the barometer off the top of the building and time how long it takes to hit the ground.

    This is one of those trick solutions that sounds elegant, but I’m not sure it would work very well in practice. (Think terminal velocity.)

  25. DeWitt Payne (Comment #97469)

    You and I guess Gregory and Forster aren’t really doing diffusion at all, rather you are transferring energy across a border from one well-mixed box to another… Real diffusion causes the surface temperature to go down (and heat up) much faster.

    “Diffusion of what in what” is the question.

    DeWitt Payne (Comment #97489) 


    Re: Paul_K (Jun 12 02:25),
    The flux into the deep ocean should decline much faster with a true diffusive model than it does in a two box model.

    Why?

  26. Oliver,

    DeWitt Payne (Comment #97489) 


    Re: Paul_K (Jun 12 02:25),
    The flux into the deep ocean should decline much faster with a true diffusive model than it does in a two box model.

    Why?

    Lack of intra-box gradients?

  27. Billc (Comment #97492)
    “Lack of intra-box gradients?”

    The “gradient” is concentrated at the boundary, but your diffusivity/transfer coefficient would be different too.

  28. dallas (Comment #97485)-I don’t think one can determine what causes what very well from noisy proxy data, especially when it is unclear that it would represent the same pattern back in time as it appears to track now.

    That being said, studies have been done (based on the observational record) finding PDO to largely be an aftereffect of ENSO, and PDO tracks with ENSO very closely with a bit of delay and extra “memory”-hence, temporally smeared ENSO signal.

  29. Andrew FL, I doubt that the same pattern would really repeat all that well back in time. There are too many changes that are real mixed in with the noise. I am pretty impressed though that paleo can provide as much information as it can. Shame it is so hard to interpret and easy to manipulate.

    My saying PDO was probably wrong. Let’s call it the whatzit oscillation. Whatzit is driven by sea ice variability and impacts other stuff 🙂 Submarine geothermal/seismic messes with the Whatzit.

  30. DeWitt,

    “I didn’t mean to give the impression that the net flux wouldn’t start to decline when the CO2 leveled out. What I meant was that the curve would probably look more like Figure 1 in Winton et al. (2010) where the change in slope is very much less abrupt. The flux into the deep ocean should decline much faster with a true diffusive model than it does in a two box model.”

    Figure 1 in Winton et al. shows all of the relationships in the time domain. The net flux vs time plot (which I didn’t show) for my “Average GCM” result has a very similar character to that shown in Figure 1 in Winton et al., but of course is not tuned to the parameters for either of the two models shown there. If I were to retune to, say, the GFDL results, it would look identical apart from the high frequency variation.

  31. Oliver,
    “But, mathematically, the two metrics are 1) take the ratio of the net flux to the net forcing and see if it is reasonably constant in time, vs. 2) take the ratio of the averaged net flux to the averaged net forcing. Is that a correct reading?”

    Not really. If the net flux at time t is Q(t), then the parameter k represents an estimate of dQ/dT (where T is temperature) averaged over some time period. The metric I was loosely suggesting was the integral of Q w.r.t time i.e. the actual total energy take-up by the ocean, and normalised by the total cumulative forcing applied. The units are quite different, I think.

    I am heavily pressed for time at the moment and will respond to your second point a little later.

  32. Re: Oliver (Jun 12 10:01),

    For a step change, the flux in a diffusive model is initially much higher than in a two box model because the initial temperature gradient is very large, but decays very fast (F ∝ 1/√t ). The initial flux in a two box model is proportional to the temperature difference between the boxes and decays much more slowly because you have to heat up the whole box. You can fake diffusion somewhat by picking an arbitrary size for the deep ocean box, but the time dependence still won’t be the same.

    As far as the ‘diffusivity’ being different, of course it would be. The units are completely different: diffusivity in a true diffusion model has units of m²/s, while the two-box transfer coefficient has units of W/m²/K.

    The temperature variation in space and time for a step change at a plane boundary (linear diffusion), with ΔT measured relative to the new surface temperature, is:

    ΔT(x,t) = ΔT(0,0) erf(x/(2*(D*t)^0.5))

    The flux at x = 0 is then:

    F/A = k*ΔT(0,0)/(π*D*t)^0.5

    where k is the thermal conductivity in the normal units of W m⁻¹ K⁻¹ and A is the area in m².

    I’m not sure the linear sweep boundary condition has a closed form solution. It’s been too long. It should be easier to solve than linear sweep voltammetry, though, because you don’t have to deal with the Nernst equation.
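    To make the 1/√t point concrete, here is a minimal numerical sketch of the step-change surface flux (the parameter values are illustrative assumptions, not fitted to anything):

```python
import numpy as np

D = 1e-4    # m^2/s, assumed effective diffusivity
k = 0.6     # W/m/K, thermal conductivity (illustrative)
dT = 1.0    # K, step change applied at the plane boundary
t = np.array([1.0, 4.0, 16.0]) * 86400.0   # 1, 4 and 16 days, in seconds

# Surface flux for the semi-infinite diffusive step response:
# F/A = k*dT/sqrt(pi*D*t), which decays as 1/sqrt(t).
F = k * dT / np.sqrt(np.pi * D * t)

# Each quadrupling of t halves the flux - the signature of diffusion,
# as opposed to the exponential decay of a two-box model.
print(F[0] / F[1], F[1] / F[2])   # both ratios = 2.0
```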

  33. Re: Paul_K (Jun 12 11:28),

    The net flux vs time plot (which I didn’t show) for my “Average GCM” result has a very similar character to that shown in Figure 1 in Winton et al, but of course is not tuned to the parameters for either of the two models shown there.

    So Figure 2 above isn’t in the time domain and isn’t for the “Average GCM”?

  34. DeWitt,

    “So Figure 2 above isn’t in the time domain and isn’t for the “Average GCM”? ”

    Yes, it is, and it does show a surface temperature response very similar in form to the Winton et al figure, but it does not show the total net flux response. It only shows the flux from the mixed layer to the deeper thermal capacity.

  35. Re: Paul_K (Jun 12 13:03),

    You:

    The GCMs are capable of producing near-linear behaviour for net flux against temperature under conditions of linearly increasing forcing. This is actually the source of the observation made by GF08, and previous authors, that the net flux N can be approximated by a term κ *Ts.

    Observation???

    GF08:

    This formulation views the deep ocean as a heat sink, into which the surface climate loses heat in a way analogous to its heat loss to space. It permits the influences of climate feedback and ocean heat uptake to be compared, since α and κ have the same units. Like climate sensitivity, this formulation of ocean heat uptake is a model-based result. It is evident that its validity is restricted; it cannot be correct for steady state climate change, because N → 0 as ΔT approaches its equilibrium value, so the efficiency of ocean heat uptake must decline. The formulation was proposed only as a description for a system in a time-dependent state forced by a scenario with a fairly steadily increasing forcing.

    [my emphasis]

    Once again, I don’t see your point. The authors have stated quite clearly that the concept has limited validity and stated the range of validity. There was no need for this article to prove something that was already well known.

  36. Hi Paul – This post, like your previous one, arrives at conclusions I find convincing for the most part, and so I’ll limit myself to areas of possible disagreement, with the caveat that for GF-08, those authors would do a better job defending their work than my tentative effort below. In any case, I have found the GF-08 approach to be a useful one, and valid for its intended purposes. I’m reassured in this conclusion by the approbation of many experts in the field, including Held, but of course that is not a proof in and of itself. In any case, perhaps what Gregory and Forster would argue is the following:

    1. What you call paradoxes and contradictions, they would (and did) simply refer to as the inapplicability of the method outside of the restricted range of climate change for which it’s intended – what Held called “the intermediate regime”. For example, if (using their terms), F = N + α ΔT, and N = κ ΔT, F can be written as ΔT(κ + α), with κ and α as constants. However, they aren’t claiming that their equation holds for a constant F but only for a certain range of increasing F, and certainly not near equilibrium, where κ must approach zero. The reasonable question is its value for TCR estimates – e.g., a 1% annual increase in CO2 up to a doubling.

    2. I would suggest that they might argue that in this case, the assumption of the deep ocean as an infinite heat sink is both reasonable and valuable, for two reasons, neither of which requires that this assumption be the only one that might eventuate in a linear relationship. Rather, the reasons are, first, that it is a good approximation of how the ocean has been behaving and is likely to behave in the foreseeable future – i.e., deep ocean temperature is changing minimally compared with the upper ocean and the surface, and its heat capacity therefore doesn’t appear to be changing appreciably (declining) as heat is flowing into the oceans.

    The second, related reason is that their goal is to estimate TCR (response to future forcings) from past observations. This requires the ocean in the future to behave similarly to the behavior that generated the observed data. A rapidly declining deep ocean heat capacity in the future would not, I think they would argue, yield an accurate TCR estimate if the estimate came from observationally-derived values from the past for κ and ρ (which is derived from κ + α). It’s not a question of linearity, but of the values for the parameters. As long as any equations they use permit the past data to be applied to the future, they should be useful, and a deep ocean infinite heat capacity seems to be one way to do that consistent with observational data as a good approximation. In any case, an explanation from G and F would be ideal rather than for me to continue to explain their rationale.

    A small point about a different issue, where I don’t think disagreement is huge. I don’t think anyone disputes your general statement that the “net flux going into the oceans does not control the Earth’s radiative response to a given climate state (temperature)”. However, there may be some nuances at play here. The efficiency of ocean heat uptake does affect the time course of an approach to equilibrium, and does not alter the equilibrium temperature. What about somewhere in between – can there be an effect on the radiative response to a given ΔT along the way? I’ll suggest tentatively that the answer is probably yes, although the effect would likely be small. It would emerge from changes in the parameter α dictated by the effect of the rate of near surface ocean warming on the hydrologic cycle, and consequently on humidity, clouds, and lapse rate – major feedbacks determining the value of α. This is something that could be explored further, but I’ll leave it as a tentative suggestion, while acknowledging that the radiative response remains primarily a response to temperature change at the surface.
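    Fred’s point-1 algebra reduces to two lines: with F = N + α ΔT and N = κ ΔT, the response at doubling is TCR = F2x/(κ + α), where κ + α is GF08’s ρ. The numbers below are purely illustrative, not GF08’s estimates:

```python
alpha = 1.3   # W/m^2/K, radiative response (illustrative assumption)
kappa = 0.7   # W/m^2/K, ocean heat uptake efficiency (illustrative)
F2x = 3.7     # W/m^2, forcing at doubled CO2

rho = kappa + alpha       # GF08's "climate resistance"
TCR = F2x / rho           # F = dT*(kappa + alpha), solved for dT at F = F2x
print(round(TCR, 2))      # 1.85 K with these numbers
```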

  37. Re: DeWitt Payne (Comment #97504)
    June 12th, 2012 at 2:17 pm

    You: “Observation???”
    Yes. The concept arises from model-based observation. Most of the GCMs produce a pseudo-linear net flux vs temperature response in the future growth scenarios. Your point was?

    “Once again, I don’t see your point. The authors have stated quite clearly that the concept has limited validity and stated the range of validity. ”

    …and then go on to apply the conceptual model to the 20th century model results, outwith its range of validity, and without a legitimate test.
    As I hope the main article makes clear, my main concern with the concept is that it is open to abuse. It may have some merit for a discussion of transient response – although, even there, I dislike the concept – but it has NO place in ANY conversation about steady-state response. Yet it keeps appearing there like an uninvited guest.

    I will address your comments re a diffusion model in the response I owe to Oliver.

  38. Re:Oliver (Comment #97490)
    June 12th, 2012 at 9:03 am

    Hi again, Oliver,
    You wrote:
    “Your main claim seems to be that there is a time dependence of the downward flux based on the “saturation” of the deep ocean on the timescales over which the transient climate response (TCR) is being considered (e.g., GF08). For example, your Fig. 2 shows a strong dependence of the forcing shape when the “deep-ocean heat capacity” is assumed to be 5x the mixed layer capacity. How do you come up with this number 5? The “rest of the ocean” has perhaps 30-40x the volume of the mixed layer.”
    The factor of 5 times the mixed layer capacity for the deep capacity is purely a crude matching parameter against GF08’s “average” GCM results. I could make it higher or lower by adjusting my diffusion constant. With a capacity much less than this, the net flux vs temperature relationship yields a value of GF08’s average κ parameter which is too low; much greater than this and the average κ parameter is too high, as well as the transient temperature response being too slow. This of course is based on the presumption that this crude model is doing a reasonable job of approximating the GCM model results.

    What if we replaced the ocean model with a pure diffusion model, along the lines suggested by DeWitt? Well, this would clearly give us a poor approximation of what is happening in the mixed layer, where the heating is via a complex combination of radiative and convective processes. But what about below the mixed layer? Suppose we take a bulk capacity for the mixed layer (which actually does a good job of matching GCM and real world temperature short-term response) and then assume that the heating below that layer is purely via diffusion? In other words we have a semi-infinite diffusive system below the mixed layer, with a Dirichlet and a Neumann boundary condition at its top determined by the mixed layer response to the radiative flux imbalance at that time. How does this system behave?
    Well, the first observation is that this system would eventually see your “rest of the ocean”. It only starts to reach a steady state temperature after the transient temperature profile reaches the lower physical boundaries of the ocean. It seems a more reasonable approximation to assume that the heat flow is governed by a convective-diffusion mechanism rather than pure diffusion, and this should then set a realistic finite-depth boundary on the part of the ocean that is responding to changes in surface conditions. I certainly hope that the GCMs do not imitate a pure diffusion model! If they do, then the concept of an ECS becomes not just virtual but illusory.

  39. Hi Fred, Nice to see you back. I hope you gained some insight from the article.
    You’re not giving me much to argue about though.

    “What about somewhere in between – can there be an effect on the radiative response to a given ΔT along the way? I’ll suggest tentatively that the answer is probably yes, although the effect would likely be small. It would emerge from changes in the parameter α dictated by the effect of the rate of near surface ocean warming on the hydrologic cycle, and consequently on humidity, clouds, and lapse rate – major feedbacks determining the value of α.”
    All entirely possible. A lot of what I have been writing recently is about the incompatibility of certain assumptions, rather than saying that “this model is right and that is wrong”. We know that the Earth’s radiative response in the GCMs is not captured by αT. I don’t pretend to understand the reason why this is so, but I would like to.
    Incidentally, I dropped in on your music site recently – very cool. I play mean jazz piano myself.

  40. Paul_K,
    I am not certain what you mean by ‘convective-diffusion’. Over most of the oceans, downward propagation of changes in the surface layer temperature is driven mainly by shear-induced eddy turbulence, due to (nearly) horizontal motion along isopycnals. Nobody suggests true diffusion is dominant, although the eddy-driven transport can be simulated as a diffusion-like process. There is deep convective transport only at high latitudes over a small fraction of the total ocean surface area, but that convection is due to very low surface temperatures (and of course higher surface densities than the underlying water). The vast majority of measured heat uptake is not in the areas of deep convection, but rather in areas where the surface layer is substantially warmer than the underlying thermocline. The thermohaline circulation’s average turnover time (on the order of 1000+ years) gives a reasonable estimate for the time that would actually be needed to approach a new equilibrium following a step change in forcing… for certain well over 1000 years, and perhaps over 2000 years.
    .
    I don’t see how that makes the concept of ECS illusory, but it does tell us clearly that ECS is only part of the picture. At least as important for making projections of future surface temperatures is a reasonable model for heat uptake response to changes in surface forcing over a period of a few hundred years. What is illusory is the suggestion that the ECS tells us what will happen… The age of fossil fuels will end within a century or at most two, and the ocean will drive atmospheric CO2 and radiative forcing downward long before fossil fuel use ends. It is only a high value for ECS which leads to projections of very much higher temperatures over the next two centuries, since the equilibrium value will never really be approached.

  41. Paul_K (Comment #97518)

    Re: Oliver (Comment #97490)

    What if we replaced the ocean model with a pure diffusion model, along the lines suggested by DeWitt?… Suppose we take a bulk capacity for the mixed layer (which actually does a good job of matching GCM and real world temperature short-term response) and then assume that the heating below that layer is purely via diffusion? In other words we have a semi-infinite diffusive system below the mixed layer… It seems a more reasonable approximation to assume that the heat flow is governed by a convective-diffusion mechanism rather than pure diffusion, and this should then set a realistic finite depth boundary on the part of the ocean that is responding to changes in surface conditions.

    I don’t actually know what exactly DeWitt was suggesting. He mentioned a “true” diffusive model, which I took to mean a model which allows depth-varying diffusion processes instead of just N-box heat transfers. I don’t believe anybody would try to realistically model the ocean using a single vertical diffusivity from the surface down, and it wouldn’t work anyway because the temperature profile decays sort-of-exponentially with increasing depth.

    If you allow a large diffusivity near the top and a smaller diffusivity below, then you can recover the bulk mixed layer behavior.

    As to the interior, the convective-diffusive model has been discussed in the literature for quite some time now (e.g., Munk 1966). With a realistic scale height of about 1 km, knowing the finite bottom depth is not particularly important to the solution. However, the upwelling velocity is important. You might handle this at a finite bottom with a mass source/heat sink which corresponds to deep water formation at high latitudes.

    A major problem is constraining the upper and lower boundary flux conditions in any physically meaningful way. You might assume that the downward heat flux through the boundary matches the convective-diffusive heat flux at equilibrium, and then maintain the same mixed layer depth and upwelling velocity in the perturbed state, but is this realistic? (Even small changes in the mixed layer depth would be huge relative to the interior heat balances.) What about the bottom? Is it realistic to have the same upwelling velocity but a higher bottom boundary temperature?

    Going to a “more realistic” model results in more, not fewer, assumptions but not necessarily a clearer picture.

  42. SteveF (Comment #97550)

    Paul_K,


    I am not certain what you mean by ‘convective-diffusion’.

    It refers to an interior balance of the form
    $latex \displaystyle w \frac{\partial T}{\partial z} = \kappa \frac{\partial^2 T}{\partial z^2} .$
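    For reference (my addition, assuming constant w and κ, with z measured upward), this balance integrates to an exponential profile with a scale height set by the ratio of diffusivity to upwelling velocity:

    $latex \displaystyle T(z) = T_b + \Delta T \, e^{z/h}, \qquad h = \kappa / w .$

    With Munk’s canonical w ~ 1 cm/day and κ ~ 1 cm²/s, this gives h ≈ 860 m, i.e. a scale height of roughly 1 km.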

  43. I think “convective diffusion” = “dispersion” = “true diffusion + eddy-turbulence” = “a diffusion-like model”, at least for these discussions.

  44. Oliver,
    I understood the diffusion equation. The question was if the ‘convective’ part of ‘convective-diffusion’ was meant to suggest something other than what can be modeled via diffusion.
    .
    With regard to what makes a more useful ocean model: As we have discussed at least once before, there is no need to presume high diffusivity near the ocean surface to ‘recover’ a well mixed layer, the well mixed layer is a requirement of penetrating solar flux and consequent heat deposition well below the surface. Yes, physical motion due to waves, wind, and surface currents add to the surface mixing, but even in the complete absence of these, a clearly defined well mixed layer would form due to simple convection. Absent any contribution from wind, waves and surface currents, the depth of the well mixed layer would be the depth where the residual energy flux from penetrating sunlight equals the total heat required to warm upwelling water. Above that depth the temperature would have to be locally uniform due to convection (that is, there MUST be a well mixed layer). Below that depth, the temperature must fall, with the rate of fall at increasing depth controlled by eddy driven diffusion.
    .
    If we assume ~4 meters per year average upwelling, and assume 18C of warming from the deep to the well mixed layer, then the top of the thermocline would be the depth where solar flux equals (4 m/yr X 10^6 cal/m^3/K) X 18 C = 7.2 X 10^7 calories/m^2/year (or 3 X 10^8 joules per square meter per year). This is equal to a continuous flux of 9.5 watts/m^2 down the thermocline. The thermohaline circulation therefore “uses” ~9.5 watts/m^2 of solar energy at lower latitudes (or ~3% of total solar energy falling on the ocean at low latitudes!), and this flux maintains the shape of the thermocline. Any significant change in thermohaline circulation, which would change the upwelling rate, could indeed lead to a large change in average surface temperature, even with no change in radiative forcing. Change the upwelling rate by 20% and the surface balance would change by almost 2 watts/m^2. However, as far as I can tell there is no solid evidence for short term changes in thermohaline circulation. Longer term changes in the rate of thermohaline circulation seem to me to offer at least a plausible explanation for longer term changes in surface temperature (MWP, LIA, and recovery from the LIA, for example) absent any known substantial changes in forcing.
    .
    The measured change in temperature profile over time (eg Levitus et al) is not consistent with decreasing diffusivity with depth, but rather with increasing diffusivity with depth. I don’t find that at all surprising, since the density gradient (due mainly to falling temperature with depth) which acts to maintain a stratified ocean is steepest near the top of the thermocline and becomes gradually more shallow with depth.
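    SteveF’s flux arithmetic above checks out; here it is as a short computation (the inputs are his stated assumptions, with seawater approximated by fresh water at 4.184 × 10^6 J/m^3/K):

```python
w = 4.0              # m/yr average upwelling (SteveF's assumption)
dT = 18.0            # K warming from the deep to the mixed layer (assumed)
c_rho = 4.184e6      # J/m^3/K, volumetric heat capacity of water
sec_per_yr = 3.156e7

E = w * dT * c_rho       # J per m^2 per year needed to warm upwelled water
flux = E / sec_per_yr    # continuous flux in W/m^2
print(round(flux, 1))    # 9.5 W/m^2, matching the figure in the comment
```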

  45. SteveF (Comment #97583)

    Oliver,

    I understood the diffusion equation. The question was if the ‘convective’ part of ‘convective-diffusion’ was meant to suggest something other than what can be modeled via diffusion.

    SteveF,
    That wasn’t the diffusion equation. Look again at the left-hand side. It’s the steady-state form of
    $latex \displaystyle \frac{DT}{Dt} = \frac{\partial T}{\partial t} + w \frac{\partial T}{\partial z} = \kappa \frac{\partial^2 T}{\partial z^2}.$

  46. Oliver (Comment #97585),
    OK, thanks. If you assume a constant rate of upwelling over time, then does the diffusion equation not work (with changing diffusivity over depth)?

  47. SteveF (Comment #97583) 


    With regard to what makes a more useful ocean model: As we have discussed at least once before, there is no need to presume high diffusivity near the ocean surface to ‘recover’ a well mixed layer, the well mixed layer is a requirement of penetrating solar flux and consequent heat deposition well below the surface.

    If your simple physical model only includes (eddy) diffusion and vertical advection, then the only way to capture convective mixing (by turbulent eddies) is through the diffusion term.

    If we assume ~4 meters per year average upwelling, and assume 18C of warming from the deep to the well mixed layer…This is equal to a continuous flux of 9.5 watts/M^2 down the thermocline. The thermohaline circulation therefore “uses” ~9.5 watts/M^2 of solar energy at lower latitudes …and this flux maintains the shape of the thermocline.

    Unlike molecular diffusion, turbulent mixing “consumes” kinetic energy (thereby raising the center of mass of the system); that’s what people mean when they talk about the mechanical energy which “drives” the overturning circulation, not the number for the heat flux.

  48. Oliver, SteveF

    “He mentioned a “true” diffusive model, which I took to mean a model which allows depth-varying diffusion processes instead of just N-box heat transfers.”

    “…SteveF (Comment #97550)
    Paul_K,

    I am not certain what you mean by ‘convective-diffusion’.
    It refers to an interior balance of the form…”

    Now I’m really confused.

    The model solution which DeWitt proposed is the solution for the diffusion equation (not convective-diffusion) for a 1-D linear semi-infinite system for Dirichlet boundary conditions – initial temperatures known and temperature then allowed to change at the inlet. This governing equation is mathematically analogous to conduction along a well insulated bar or single phase linear flow of a fluid with small constant compressibility. This should look exactly like your equation, Oliver, but the partial derivative on the LHS should be w.r.t. time rather than the length variable.

    There is no opposing resistance to this diffusion process – unless one is specified in the boundary conditions. Given enough time and an infinite length dimension, the transient will go on for ever, under a boundary condition of increasing temperature with time or adding flux with time. (Note also, Oliver, that it can be simulated with an N-box model, and this is actually the only way to solve this system if properties are varying irregularly with depth. But this is a way of solving, not an alternative to, the diffusion equation.)

    I think this is what DeWitt was referring to as a “true” diffusion process – in contrast to my rather pathetic two-box discretization. It is, in any event, what I understood, and what I was referring to in my response. One can impose a finite depth boundary on DeWitt’s solution by means of a reflection plane. Basically, if you want to model a physical boundary at depth Z, you set up a new reflected surface at depth 2Z, and duplicate the boundary conditions at the real surface. Since the PDE is linear, you can then superpose the temperature change due to both the real and the reflected solutions at each point in space to model the finite physical boundary case. However, it is not clear to me why you would want to do this, when the real world system is very unlikely to be governed by a simple diffusion process, and I trust that the GCMs are not.

    If a simple convective flow term is added, with temperature-dependent density, it then becomes a convective-diffusion equation, in which form there is a natural limit to how far down heat can penetrate from surface heating. However, having re-read SteveF’s description of the ocean’s behaviour, I suspect that there may not be any parking places between a really simple characterisation of ocean uptake with estimation of a couple of parameters, and a full-blown 3-D simulation.

    I’ll ask DeWitt here to confirm that we are talking the same language.

  49. SteveF (Comment #97587)

    Oliver (Comment #97585),

    
OK, thanks. If you assume a constant rate of upwelling over time, does the diffusion equation then work?

    No problem. The balance “works” in the sense that, yes, you can find convenient solutions. If you assume an exponential temperature profile in the interior, scale height ~ 1 km, constant w, then you get the classic w ~ 1 cm/day, k ~ 1 cm^2/s interior balance, which matches up well with the estimated deep water formation rate but not very well with the observed diffusivities. It’s a simplified 1-d balance, after all…
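    The quoted numbers can be checked with nothing but arithmetic: substituting T(z) ~ exp(z/H) into the 1-d balance w dT/dz = k d2T/dz2 gives a scale height H = k/w, and the classic w and k do recover roughly 1 km.

```python
# Classic Munk interior balance: with T(z) ~ exp(z/H),
# w dT/dz = kappa d2T/dz2 reduces to H = kappa / w.
kappa = 1e-4           # m^2/s  (the quoted 1 cm^2/s)
w = 0.01 / 86400.0     # m/s    (the quoted 1 cm/day of upwelling)
H = kappa / w          # scale height in meters: 864, i.e. ~1 km
```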

  50. Paul_K,

    Note also, Oliver, that it can be simulated with an N-box model, and this is actually the only way to solve this system if properties are varying irregularly with depth. But this is a way of solving, not an alternative to, the diffusion equation.

    Yes, which is why I have many times fooled around with N-box models down to 2000 meters. The only way you can get modeled numbers for OHC change that are consistent with measured changes for 0-700 meters and 0-2000 meters is for the diffusivity to increase with increasing depth. One issue which I have not yet figured out is how to set the initial temperature profile; the most sensible approach would seem to be forcing the ocean surface temperature to match a proxy reconstruction for years 0 to 1850 to “wind up” the ocean model, so that the temperature profile with depth in 1850 (at the start of the instrument record) reflects the long-term temperature history, and then applying the post-1850 measured temperatures (like what Carrick linked to recently).

    I trust that the GCMs are not

    For sure not using diffusion. And way off compared to measurements. That is the problem.

  51. Oliver,
    “Unlike molecular diffusion, turbulent mixing “consumes” kinetic energy (thereby raising the center of mass of the system); that’s what people mean when they talk about the mechanical energy which “drives” the overturning circulation, not the number for the heat flux.”
    Hmm. Not sure what you are suggesting with this comment. The driving force is maintenance of uniform sea level as the thermohaline circulation takes place (cold water sinks at high latitudes and upwells at lower latitudes).
    .
    Certainly the difference in potential energy as water rises from the deep to the surface does not amount to much if converted to sensible heat. And that change in potential energy takes place over ~1000 years. Any such energy change pales next to the thermal flux needed to warm the upwelling water. Like I said, I am not sure what you are trying to say.

  52. Re: SteveF (Comment #97649)
    I am saying that the downward heat transport requires turbulent kinetic energy dissipation to keep it going. No KE input = no warming of upwelling water. Then the sinking water would soon not need to sink very far before reaching equally cold/dense water.

  53. Oliver,
    “I am saying that the downward heat transport requires turbulent kinetic energy dissipation to keep it going. No KE input = no warming of upwelling water.”
    Sure. But tidal and other currents, both near the surface as well as far below (mainly along isopycnal lines) are going to be present.

  54. Re: SteveF (Comment #97657)
    My comment about the kinetic energy dissipation was just in reference to your previous remark (Comment #97583), about the thermohaline circulation “using” some proportion of the heat flux. Turbulent diffusion is just fundamentally different from molecular heat diffusion. The transport doesn’t “just happen” because there is a temperature gradient, it’s also controlled by some feature of the flow which may or may not be related to the temperature distribution.

  55. Re: Paul_K (Jun 13 10:10),

    I’ll ask DeWitt here to confirm that we are talking the same language.

    Yes.

    Except the flux in at the surface isn’t constant. The surface boundary condition should include increased radiative loss with increased temperature. There’s a NASA report on modeling the lunar surface temperature that did that, but the differential equations don’t reproduce properly in the copy I found and the link to the original is broken. This paper has been used by the fringe to claim that there is a ‘greenhouse’ effect on the moon!

  56. Oliver,
    “Turbulent diffusion is just fundamentally different from molecular heat diffusion. The transport doesn’t “just happen” because there is a temperature gradient, it’s also controlled by some feature of the flow which may or may not be related to the temperature distribution.”
    .
    Of course; I was not suggesting otherwise. In the absence of turbulent eddy mixing, the penetration of surface heat (molecular diffusion) would be minimal. The question remains: how can we generate a relatively simple model which reasonably captures the measured temporal flux of heat into the ocean, including down the thermocline? The best I have seen is a well-mixed layer of uniform temperature floating over an N-box model, with transfer rates between boxes that increase with depth below the thermocline/well-mixed-layer boundary.

  57. SteveF (Comment #97698) 


    The question remains: how can we generate a relatively simple model which reasonably captures the measured temporal flux of heat into the ocean, including down the thermocline? The best I have seen is a well-mixed layer of uniform temperature floating over an N-box model, with transfer rates between boxes that increase with depth below the thermocline/well-mixed-layer boundary.

    If I can find a place to put up a couple of graphs, I can show you some examples of how a convective-diffusive system responds to step forcing in the mixed layer, as compared to two-box and diffusive systems. If you like, you can add bottom-enhanced tidal mixing (as is now being implemented in GCMs), but I think the simpler example shows enough. I think these types of systems are more attractive as simple models than an N-box model with empirically-fit transfers at every level.

  58. Re: SteveF (Jun 14 10:09),

    transfer rates between boxes that increase with depth below the thermocline/well-mixed-layer boundary.

    IMO, requiring that transfer rates increase with depth proves that the 700-2000m data is worthless.

  59. DeWitt Payne (Comment #97704),

    I am not sure why you think that; I don’t see how one leads to the other.
    .
    For me an increasing transfer rate makes perfect sense, because the stabilizing force of the density gradient (its steepness) falls quite dramatically as you go down the thermocline; the drop is much more than an order of magnitude. In the abyss, d(rho)/dz is tiny… and most any motion that produces a small rate of shear will lead to eddy mixing. Heck, some of the greatest measured diffusion rates are near the ocean bottom, where there is some roughness and known tidal motion or a prevailing current.

  60. Oliver, I use Dropbox these days. It is basically a remote file system that you can store data on; it has a 2-GB limit on file space and no limit on the number of files, and you can control which files are visible, and each file can be accessed by a URL.

  61. Re: SteveF (Jun 14 11:05),

    An increase in the effective diffusivity probably does make sense, but for the NODC data to be correct below 1000m also requires an increase in flux, which doesn’t make sense as the temperature gradient should be very small. Worse, it requires a large step change in flux around 1995.

    The ocean floor is far below 2000m for most of the ocean.

  62. So we are gradually getting back to my question. If the diffusion near the abyssal layer is in equilibrium, a small change in flow, i.e. variation in the rate of sea ice formation and/or geothermal flux, would be worth considering. We need a third ocean capacity in the models 🙂

  63. Slightly OT:

    I posted my question about the offset in 1995 of the correlation between OHC and steric sea level in the sea level thread at Climate Etc. No substantive reply. Everybody was too busy insulting each other. Admittedly I was somewhat late to the party, but… That site is useless.

  64. Carrick:

    Oliver, I use Dropbox these days. It is basically a remote file system that you can store data on; it has a 2-GB limit on file space and no limit on the number of files, and you can control which files are visible, and each file can be accessed by a URL.

    I highly recommend Dropbox as well, though I find it more impressive for collaborative work than simple file-sharing. The only problem I’ve ever had with it is my current Windows installation seems to crash if I run it, and I don’t think that’s because of Dropbox (I’ve had some other stability issues).

  65. Thanks to Lucia for helping me get these up, and to everyone else for good alternative suggestions which I may try next time.

    So here I’ve just made some plots comparing the temperature responses T(z,t) to a step forcing for two 1-d model oceans. Both have a mixed layer with some finite heat capacity and a thermocline below. The thermocline in the “diffusive” model (I) follows the equation

    $latex \displaystyle \frac{\partial T}{\partial t} = \kappa \frac{\partial^2 T}{\partial z^2},$

    while “convective-diffusive” (II) adds an upwelling velocity w to account for a deep water source
    $latex \displaystyle \frac{DT}{Dt} = \frac{\partial T}{\partial t} + w \frac{\partial T}{\partial z} = \kappa \frac{\partial^2 T}{\partial z^2}.$

    At equilibrium, where
    $latex \displaystyle \frac{\partial T}{\partial t} = 0,$
    (I) has a linear solution in z while (II) is exponential in z. The “baseline” equilibrium state is removed, so only perturbations due to the forcing are kept. The constants are non-dimensional except the depth, which is somewhat like the real ocean. The initial condition is T0 = 1 in the mixed layer and T0(z) = 0 everywhere else. There is a no-flux boundary at the top of the mixed layer, while the bottom boundary is held at T = 0. The diffusivities are kept the same between the two models. The convection velocity w is chosen a bit high (perhaps 2–4x) to emphasize the differences between models (I) and (II).

    The numerical solutions use a central difference for the diffusion (2nd-order) term, an upwind difference for the convective term, and the forward difference for the time derivative. The time step was 0.01 arbitrary time units.
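    A minimal sketch of that discretization (my own reconstruction with made-up constants, and with fixed temperatures at both ends rather than Oliver's mixed-layer setup, so the analytic steady states stated above can be checked) looks like this:

```python
import numpy as np

def step(T, dz, dt, kappa, w):
    """One forward-Euler step of dT/dt = kappa d2T/dz2 + w dT/dz,
    with z measured downward and w the (upward) upwelling speed.
    Central difference for the diffusion term, upwind for advection."""
    Tn = T.copy()
    diff = kappa * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
    adv = w * (T[2:] - T[1:-1]) / dz   # upwind: the flow is toward the surface
    Tn[1:-1] = T[1:-1] + dt * (diff + adv)
    return Tn                          # endpoints untouched: fixed-T boundaries

# Fixed surface (T = 1) and bottom (T = 0); run both models to steady state.
N, dz, dt, kappa = 101, 1.0, 0.2, 1.0
T_I = np.zeros(N)
T_I[0] = 1.0                 # model (I): pure diffusion
T_II = T_I.copy()            # model (II): diffusion plus upwelling
for _ in range(50000):
    T_I = step(T_I, dz, dt, kappa, w=0.0)
    T_II = step(T_II, dz, dt, kappa, w=0.1)

# (I) relaxes to a straight line (midpoint ~0.5); (II) relaxes to a
# near-exponential thermocline with scale height kappa/w = 10 grid cells,
# so its midpoint temperature is tiny.
```

    Setting w = 0 recovers model (I) from the same routine; the upwind choice for the advective term keeps the explicit scheme stable and monotone at this time step.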

    By t=100 (Fig. 1), the diffusive model (I) has mostly finished diffusing its heat downward and has taken on a fairly linear profile, while the convective-diffusive model (II) has essentially reached an exponential near-equilibrium state with a significantly warmer mixed layer.

    It’s also useful to look at the time evolution of the mixed layer temperature (MLT), shown in Fig. 2.

    The MLT for (I) and (II) begin similarly, but (II) “saturates” more quickly because some of the heat never escapes the upper ocean. Some writers might say that the “accessible” heat capacity is smaller. (I), meanwhile, continues to pass heat downward.

    On the same graph I’ve also plotted the response of a simple model with two boxes at temperatures T1, T2 and heat capacities c1, c2.

    $latex \displaystyle c_1 \frac{dT_1}{dt} = -K (T_1 - T_2)$
    $latex \displaystyle c_2 \frac{dT_2}{dt} = -K (T_2 - T_1)$

    The constants c1, c2 are chosen so that T1 approaches the same limit as the MLT in the convective-diffusive model (II). The heat transfer coefficient K was chosen arbitrarily so that the early-time behavior is between (I) and (II). The conclusion seems to be that a simple two-box model can be constructed to behave like (I) over a short time or like (II) in the long term, but you have to pick.
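    For reference, the two-box system above has a closed-form solution: the temperature difference decays exponentially with time constant tau = c1*c2/(K*(c1+c2)) toward the heat-capacity-weighted mean. A short sketch (with arbitrary stand-in constants, not the ones behind the plots) comparing a forward-Euler integration against that solution:

```python
import math

# Two-box system: c1 dT1/dt = -K (T1 - T2), c2 dT2/dt = -K (T2 - T1).
# Constants here are arbitrary stand-ins, not the ones behind the plots.
c1, c2, K = 1.0, 4.0, 0.5
T1, T2 = 1.0, 0.0                 # initial temperatures
dt, steps = 0.001, 5000           # integrate out to t = 5

for _ in range(steps):
    q = K * (T1 - T2)             # heat flow from box 1 into box 2
    T1 -= dt * q / c1
    T2 += dt * q / c2

# Closed form: T1 - T2 decays with time constant tau = c1*c2/(K*(c1+c2))
# toward the heat-capacity-weighted mean temperature T_inf.
t_end = dt * steps
T_inf = (c1 * 1.0 + c2 * 0.0) / (c1 + c2)
tau = c1 * c2 / (K * (c1 + c2))
T1_exact = T_inf + (1.0 - T_inf) * math.exp(-t_end / tau)
```

    Note that total heat c1*T1 + c2*T2 is conserved exactly by this update, since the same quantity dt*q leaves box 1 and enters box 2 each step.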

    Other boundary conditions could be implemented, but this was just a quick-and-dirty run to look at the basic behaviors.

  66. SteveF (Comment #97706) 


    …the stabilizing force of the density gradient (its steepness) falls quite dramatically as you go down the thermocline; the drop is much more than an order of magnitude. In the abyss, d(rho)/dz is tiny… and most any motion that produces a small rate of shear will lead to eddy mixing.

    Easier to produce shear instability and eddy mixing, yes, but with the small vertical gradient also comes a lesser net effect from the eddy mixing. Observations seem to show that, other things being equal, it comes out in the wash.

    …some of the greatest measured diffusion rates are near the ocean bottom where there is some roughness and known tidal motion or a prevailing current.

    Here things are manifestly not equal. But as DeWitt points out, the enhanced abyssal mixing over 500–1000 m height above bottom is still well below the 700–2000 m depth range in most places in the ocean.

  67. Re: Oliver (Jun 14 13:46),

    But a no-flux boundary at the top of the mixed layer isn’t very interesting in terms of the actual problem, which is a change in flux at the top of the mixed layer rather than a change in the temperature of the mixed layer. Worse, it’s not a step change in flux, but a ramp to a new level. Also, an increase in the temperature of the mixed layer reduces the net flux. One (i.e. not me) should be able to write the equations and plug them into a solver of some sort and get results. I used to be able to do stuff like that, I think, but it’s been a long time and the documentation of the various solvers is less than helpful if you don’t already know what you’re doing.

    Perhaps as an intermediate step, you could hold the temperature in the mixed layer constant and allow the flux to do what it will.

  68. SteveF,

    In spite of my general dislike for mainly the comments at the site, I stumbled across this post that might interest you.

    Apparently there was a big shift in the NAO index in 1995-96. That also happens to coincide with an increase in UAH NoPol anomaly and the shift in the steric sea level/OHC correlation.

    Also, there may be a problem with ocean temperature profiles caused by interpolation of sparse data before ARGO.

    On the Observed Trends and Changes in Global Sea Surface Temperature and Air-Sea Heat Fluxes (1984 – 2006)

    W.G. Large and S.G. Yeager

    For example, the AR4 states that the oceans are warming, with the global average temperature above 700m depth having risen by 0.10°C between 1961 and 2003 (Bindoff et al. 2007). The basic foundation for such estimates of ocean heat content change is the World Ocean Database (Conkright and Coauthors 2002). But Harrison and Carson (2007) compare warming trends from three analyses of these data and find warming to increase with the degree of interpolation in the analysis. With the least interpolated data, they find that most of the ocean does not have significant 50-year trends at the 90% confidence level. With first interpolation to standard depths, then horizontal objective analysis, the trend estimates of Levitus et al. (2005) become larger, with striking differences appearing even in regions of their statistically significant trends. Harrison and Carson (2007) suggest that interpolation over the very data-sparse areas of the world ocean may have substantial effects on results. Some other issues with the World Ocean Database are biases in the ocean measurements, the paucity of deep observations and aliasing of signals from mesoscale eddies, the internal tide and inertial motions.

  69. DeWitt,
    I completely agree that the below-700-meter measured OHC data are very sparse, and should be treated as having wide uncertainty. WRT net flux into the 700-2000 meter range, it seems to me that net flux into 0-700 meters from the surface could very well be close to the net flux out at 700 meters, so that there could be more net increase below 700 meters than in 0-700 meters. One might argue that the surface temperature history and heat transport mechanism are inconsistent with this happening right now, but that is different from it being impossible… It all depends on the details.
    .
    Oliver,
    Thank you for the graphs. I will post a more detailed comment later today.

  70. Oliver,
    Let me add my thanks for the graphs. Very nice.

    I agree with DeWitt that the more interesting problem relates to a different boundary condition – what he describes as a ramp to a new level of flux.

    I would be very interested in seeing your comparative results for a boundary condition on the mixed layer set by C1 dT1/dt + K(T1 - T2) = F(t) - alpha*T1. If I have understood what you have done, you ought to be able to do this with just a minor modification of the first-line coefficients of your matrix. F(t) needs to be updated each timestep.

    I was thinking of doing something very similar, but I love avoiding work if I can!

  71. Re: DeWitt Payne (Comment #97763)

    Re: Oliver (Jun 14 13:46),

    But a no flux boundary at the top of the mixed layer isn’t very interesting in terms of the actual problem, which is a change in flux at the top of the mixed layer rather than temperature of the mixed layer. Worse, it’s not a step change in flux, but a ramp to a new level.

    DeWitt,
    What I showed above was the response to a step temperature change at the top. It is equivalent (up to a constant) to the impulse response to a forcing at the top. That is, if
    $latex \displaystyle T(0)=1, $
    then
    $latex \displaystyle \frac{1}{c}F(t) = T^\prime(t) = \delta(t). $

    You can therefore determine the response to some arbitrary forcing F(t) by convolving the impulse response with F(t).

    The no-flux boundary at the top just means that no heat escapes through the top via the physics contained within the model (diffusion and convection) after the impulse we impose.

    One (i.e. not me) should be able to write the equations and plug them into a solver of some sort and get results.

    That’s essentially what I did to produce the plots. I wrote down the equations, discretized them as described, and then integrated forward in time.

    Perhaps as an intermediate step, you could hold the temperature in the mixed layer constant and allow the flux to do what it will.

    For a mixed layer temperature held constant we can write down the solutions analytically (as discussed earlier in the thread), so we don’t need to do the numerical integration at all! (However, the plot of mixed layer temperature response with time would be really boring.)
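    The step/impulse equivalence and the convolution recipe above can be illustrated with a one-box toy model (my own example, not anything from the plots above): for c dT/dt = F(t) - lam*T, the impulse response is h(t) = exp(-lam*t/c)/c, and convolving h with an arbitrary forcing reproduces direct integration of the same box.

```python
import numpy as np

# One-box toy: c dT/dt = F(t) - lam*T.  Impulse response h(t) = exp(-lam*t/c)/c.
c, lam, dt, n = 5.0, 1.0, 0.01, 2000
t = np.arange(n) * dt
h = np.exp(-lam * t / c) / c          # analytic impulse response
F = 0.5 * t                           # a ramp forcing, for illustration

# Response obtained by convolving the impulse response with the forcing...
T_conv = np.convolve(F, h)[:n] * dt

# ...matches direct forward-Euler integration of the same box.
T_dir = np.zeros(n)
for i in range(n - 1):
    T_dir[i + 1] = T_dir[i] + dt * (F[i] - lam * T_dir[i]) / c
```

    Both curves agree to within the discretization error, which is the point of the convolution argument: once you have the impulse (or step) response, any forcing history follows for free.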

  72. Oliver,
    I did not previously understand what you were describing with the term “convective-diffusive”. I think I now do. It is clear that the combination of continuous upwelling of cold water and uniform (with depth) eddy-driven down-mixing will produce an equilibrium temperature profile which is exponential with depth. (Recognized since Munk’s work in the 1960s!)
    .
    Once that exponential decay profile is established, the question is: how will the profile evolve when the temperature of the well mixed layer changes due to a change in forcing? The answer (of course) depends on what assumptions you make: Continued constant rate of upwelling? Constant abyssal temperature? I have six separate but related comments.
    .
    1. If you assume upwelling rate and abyssal temperature are constant, then the equilibrium response to a step change in surface temperature will be an exponential decay profile identical in shape to the original, but scaled at each depth to the ratio between the new surface temperature less the abyssal temperature and the original surface temperature less the abyssal temperature. That is, the temperature at any depth z at the new equilibrium will be:
    T(z) = Ta + (To(z) - Ta) * (T2 - Ta)/(T1 - Ta)
    where T(z) is the new temperature profile
    To(z) is the original profile
    Ta is the (constant) abyssal temperature
    T1 is the initial surface temperature
    T2 is the new surface temperature
    If you think about this for a moment, it becomes obvious that the difference between the original profile and the new profile is an exponential decay with z as well… identical in shape to the original profile, with a maximum value of (T2 - T1) in the mixed layer and at the top of the thermocline, and falling to zero where bottom water is replaced. So under these assumed conditions (constant upwelling, constant abyssal temperature), the accumulated heat at each depth at equilibrium falls exponentially down the thermocline. Since almost all of the drop in temperature from the surface to the abyssal temperature is in the top 1000-2000 meters, under these assumed conditions most of the heat accumulation at the new equilibrium will also be in the top 1000-2000 meters, and very much less in deeper water.
    .
    2. If you assume constant upwelling rate but that the abyssal water temperature will increase at equilibrium as much as the surface (due to warmer bottom water formation at high latitudes), then at equilibrium heat accumulation will be constant with depth. That is, you just multiply the total heat capacity of the ocean by the change in equilibrium surface temperature to get the total accumulated heat at the new equilibrium. This would be at least several times greater than in #1 above.
    .
    3. If you assume both warming abyssal water, independent of the surface warming, and a concurrent drop in thermohaline circulation, then how much the upwelling rate changes at the new equilibrium needs to be specified to calculate the new equilibrium heat content (and, of course, the temporal trajectory of heat accumulation). But if the thermohaline circulation is assumed to slow significantly, then the heat accumulation at the new equilibrium state would increase quite dramatically compared to #1 and #2 above. Equilibration at a new average ocean temperature (averaged over all depths) 5C warmer than today would require a TOA imbalance of 1 W/m^2 (~2.2 times today’s TOA imbalance) for 1,800 years.
    .
    4. When I looked at about 150 ARGO profiles a couple of years ago, I came to the conclusion that the drop in temperature with depth in mid to low latitudes seems to be faster near the top of the thermocline and slower below ~1000 meters than a uniform exponential decay curve would predict. In other words, the shape of the ARGO profiles does not exactly match an exponential decay curve. This suggested to me that down-mixing rate is not constant with depth, but instead is somewhat greater with depth. Maybe the difference between an exponential decay shape and the ARGO data could be used to estimate how the mixing rate changes with depth.
    .
    5. If you make a set of assumptions about abyssal temperature evolution, upwelling rate evolution, and mixing rate versus depth, then it should be possible to simulate heat uptake over time in response to any specified change in surface temperature. What assumptions are reasonable for abyssal temperature and upwelling rate is not clear. It is something to think about.
    .
    6. A simple diffusive model, based on changing diffusion rate with depth, may reasonably simulate the true ocean heat uptake response, at least over moderate time periods (e.g., up to several decades or more). A more physically accurate model would seem to be required to accurately simulate multiple centuries. Here is a simulation of the increase in temperature between 1955 and 2010 from a simple diffusion model: http://i47.tinypic.com/23lxnkl.png
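    The ~1,800-year figure in point 3 can be checked on the back of an envelope, using round numbers (my assumed values, not SteveF's) for the ocean's mass, seawater heat capacity, and the Earth's surface area:

```python
# Back-of-envelope check of the "1 W/m^2 for ~1,800 years" figure.
# Round numbers assumed here (not from the comment):
ocean_mass = 1.4e21      # kg, total ocean
cp_seawater = 4.0e3      # J/(kg K)
dT = 5.0                 # K, whole-ocean warming
earth_area = 5.1e14      # m^2 (the TOA imbalance is quoted per unit of
                         #      total Earth surface area)

energy = ocean_mass * cp_seawater * dT      # ~2.8e25 J
seconds = energy / (1.0 * earth_area)       # at a 1 W/m^2 imbalance
years = seconds / 3.156e7                   # ~1,700-1,800 years
```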

  73. Re: SteveF (Jun 15 05:39),

    WRT net flux into the 700-2000 meter range, it seems to me that net flux into 0-700 from the surface could very well be close to the net flux out at 700 meters, so that there could be more net increase below 700 meters than in 0-700 meters. One might argue that the surface temperature history and heat transport mechanism are inconsistent with this happening right now, but that is different from it being impossible… It all depends on the details.

    I never meant to imply that it was impossible. A reduction of flux into the surface so that flux in at the top matches flux out at the bottom would result in no net increase from 0-700 while 700-2000 continued to increase. That would mean that the 0-2000m heat accumulation rate would decrease too. But that’s not what the NODC data looks like. Heat accumulation for 0-2000m appears to continue at the same rate. That’s the part that’s extremely unlikely. This is what I would expect to see from a reduction in flux into the surface.

Comments are closed.