June UAH Up!

Roy Spencer has posted the June UAH temperature anomaly: 0.314C, which is well up from the 0.133C reported in May. The monthly values and trend since Jan 1980 are shown below, along with projections for surface temperatures:

The trend since 1980 is now 0.14C/decade.

And now for the important bit: ErnieP, pete m and Tim-the-toolman took win, place and show! These confident bettors bet 4, 5, and 5 quatloos respectively and managed to divide the money in the pot among themselves. No one else grossed anything :(. Ernie, pete and Tim– don’t spend those all in one place!

For those wondering how they did relative to everyone else, the results are shown below:

Winnings in Quatloos for UAH TLT June, 2011 Predictions. Observed: 0.314 (C)

Rank  Name             Prediction (C)  Bet  Gross Won  Net Won
1     ErnieP           0.289           4    58.613     54.613
2     pete m           0.345           5    58.613     53.613
3     TimTheToolMan    0.26            5    13.774     8.774
4     ob               0.246           2    0          -2
5     BenfromMO        0.226           2    0          -2
6     Cassanders       0.223           5    0          -5
7     Les Johnson      0.41            5    0          -5
8     Lance            0.211           4    0          -4
9     denny            0.42            3    0          -3
10    Greg Meurer      0.2             3    0          -3
11    MikeP            0.197           4    0          -4
12    RobB             0.19            5    0          -5
13    pdjakow          0.19            5    0          -5
14    Tamara           0.18            5    0          -5
15    Bob Koss         0.177           3    0          -3
16    nzgsw            0.175           5    0          -5
17    plazaeme         0.16            1    0          -1
18    AMac             0.139           2    0          -2
19    ivp0             0.134           5    0          -5
20    Don B            0.132           4    0          -4
21    Pieter           0.131           3    0          -3
22    John Norris      0.12            5    0          -5
23    rob r            0.12            2    0          -2
24    Nyq Only         0.1             2    0          -2
25    J Mens           0.1             5    0          -5
26    Andrew KENNETT   0.04            4    0          -4
27    Paul Butler      0.04            3    0          -3
28    Joel Heinrich    0.031           5    0          -5
29    Freezedried      0.03            4    0          -4
30    Colin            0               3    0          -3
31    Diego Cruz       0               5    0          -5
32    MarcH            -0.01           2    0          -2
33    Arfur Bryant     -0.06           2    0          -2
34    Robert Leyland   -0.06           4    0          -4
35    Hal              -0.55           5    0          -5

The net winnings for each member of the ensemble will be added to their accounts.

Hal– you expected -0.55C; really? I mean really?

93 thoughts on “June UAH Up!”

  1. Oh well, easy come easy go. At least I was not banging the bottom this time, more like mid pack alongside AMac. I guess the oceans simply belched up more heat than I expected last month.

  2. Assuming temperatures this year will follow a similar cycle to previous years with a similar ENSO state (namely 1989, 1999, 2000 and 2008), and assuming a 0.2 deg/decade warming trend, I get predicted values for temps this year (with actuals in brackets; a sketch of the calculation follows the list) of:

    Jan -0.02 (0)
    Feb 0.12 (-0.01)
    Mar 0.11 (-0.1)
    Apr 0.18 (0.12)
    May 0.12 (0.14)
    Jun 0.10 (0.31)
    Jul 0.16
    Aug 0.13
    Sep 0.26
    Oct 0.24
    Nov 0.22
    Dec 0.23
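
    A minimal sketch of this analog-year calculation as I read it: shift each analog year's monthly anomalies forward by the assumed 0.2 deg/decade trend, then average across the analog years (names and data layout here are illustrative, not the commenter's):

    ```python
    import numpy as np

    TREND = 0.02                             # assumed 0.2 C/decade, in C/year
    ANALOG_YEARS = [1989, 1999, 2000, 2008]  # years with a similar ENSO state

    def predict_2011(monthly_anomaly):
        """monthly_anomaly: dict mapping year -> list of 12 UAH anomalies (C)."""
        shifted = []
        for yr in ANALOG_YEARS:
            warming = TREND * (2011 - yr)    # trend accrued since the analog year
            shifted.append(np.array(monthly_anomaly[yr]) + warming)
        return np.mean(shifted, axis=0)      # predicted anomaly for each month
    ```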

  3. Those assumptions are *kind of* correct, Michael, but you have to remember that we also have the job of figuring out the AO this year in particular. It will pick up again in the Fall/Winter and as such those values are going to be very hard to predict.

    I have found the weather patterns we have seen this year in particular to more closely follow patterns from the 1950s. Almost an exact match. So temperatures in their totality might match most of the months you have listed there, but I have a feeling you will have the odd month (probably 2-3 a year with the AO we have been having) which is going to be wrong by quite a bit.

    That is the hard part with this year and last especially. And the fact that next winter is going to have the same issue.

    Combine that with La Nina (potentially, of course, given the prediction spread) and we could have quite an interesting winter to say the least. Well, that's enough of that pesky weather fun stuff; I was actually close this time due to using the right data set lol!

  4. Lucia,
    I have a question about baselines on your plot. In the upper left hand corner it says that the baseline is 1980-1999. Is that correct? I believe that 2011 values, including the recent value of +0.314, are versus the new baseline (1981-2010). To correct the 2011 values to the 1980-99 baseline requires, I think, adding ca. 0.10 degree. Can you clarify how the baseline change was handled?

  5. Here is an interesting chart where an ad hoc adjustment is made to the UAH/RSS average to remove the presumed impact of the volcanoes.

    I added back 0.5C for Pinatubo diminishing to 0.1C by year 3 (and 90% of that for the El Chichon eruptions).

    The chart looks quite different, has an even closer relationship to the ENSO and the warming trend drops to 0.095C per decade.

    http://img534.imageshack.us/img534/3554/rssuahvolcanoadjusted.png
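
    A rough sketch of this kind of ad hoc add-back. The exact ramp shape and eruption dates aren't given above, so a linear decay from the peak offset to 0.1C at year 3 (zero afterward) and approximate eruption dates are my assumptions:

    ```python
    import numpy as np

    def volcano_offset(years_since, peak):
        """Cooling offset (C) to add back, as a function of years since eruption."""
        years_since = np.asarray(years_since, dtype=float)
        ramp = peak - (peak - 0.1) * years_since / 3.0  # assumed linear decay to 0.1C
        return np.where((years_since >= 0) & (years_since <= 3), ramp, 0.0)

    def adjust(anomaly, decimal_years):
        """anomaly, decimal_years: matching arrays for the UAH/RSS average."""
        anomaly = np.asarray(anomaly, dtype=float)
        adjusted = anomaly + volcano_offset(decimal_years - 1991.5, 0.5)      # Pinatubo (date assumed)
        return adjusted + volcano_offset(decimal_years - 1982.25, 0.9 * 0.5)  # El Chichon, 90% of Pinatubo
    ```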

  6. I find it interesting how many guesses were below 0.20, when it was already pretty clear by mid-month that the UAH anomaly was going to be higher than that. I guess the readers at this site are pretty biased toward expecting lower temperature anomalies than reality delivers. In this vein, I posted this comment at Dr. Spencer’s blog:

    Paul K2 says:
    July 7, 2011 at 6:55 PM
    Pretty impressive warming in June for the tail end of a La Nina.

    June 2011 comes in only behind the big El Nino year June 1998, the moderate El Nino year 2010, and within a whisker of tying June 2002 in the list of hottest Junes in the database. What will happen to the UAH anomalies in the next big El Nino? Fortunately, it appears we won’t get an El Nino next winter, so we may escape the pain for another year.

    One by one, the nails are being driven into the coffin lid of the lukewarmers.

  7. PaulK–
    Why do you think this means a nail is being driven into the coffin lid of lukewarmers? I expect warming. Nothing I’m seeing suggests warming is faster than I thought. So… ??

    (I’ll admit some coolers call themselves lukewarmers. I don’t know what to do about that. But I don’t see how warming at a rate of 0.14C/dec instead of 0.2C/dec or more would nail the lid onto the coffin of lukewarming.)

  8. Owen–
    To rebaseline, I compute the monthly average over the baseline years. So: Jan = average of January over the 20 values reported from 1980-1999 inclusive.

    I subtract the baseline for the monthly value from the temperatures. (So, January 2011 is whatever UAH reports – the baseline for January.)

    0.314C is what Spencer reports. So, for betting, you guess what Spencer reports. But for plotting, I rebaseline. I follow this convention for all plots as it permits me to put projections and all observations on the same baseline.
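
    A minimal sketch of that month-by-month rebaselining (function and variable names are mine, not from Lucia's script):

    ```python
    import numpy as np

    def rebaseline_monthly(anoms, years, months, base=(1980, 1999)):
        """Subtract each calendar month's mean over the base period."""
        anoms = np.asarray(anoms, dtype=float)
        years, months = np.asarray(years), np.asarray(months)
        out = anoms.copy()
        for m in range(1, 13):
            in_base = (months == m) & (years >= base[0]) & (years <= base[1])
            out[months == m] -= anoms[in_base].mean()  # that month's baseline
        return out
    ```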

  9. Lucia,
    “I subtract the baseline for the monthly value from the temperatures. (So, January 2011 is whatever UAH reports – the baseline for January.)”

    So UAH reports average monthly temperatures as well as anomalies? Or do you take the monthly anomaly and add it to the average temp of the baseline under which it was determined to get the actual temperature?

  10. Lucia- First, the arguments by scientists like Dr. Spencer are that much of the warming can be explained by natural cyclical events rather than increased concentrations of greenhouse gases. Larger than expected temperature anomalies during La Nina periods are observational evidence that partially but strongly contradicts that view. If Spencer were right, then the recent La Nina cooling effect should be similar to what we have seen from La Nina cycles in the past. Looking at his data, the 2011 La Nina had a smaller and shorter cooling impact than the 2008 La Nina, and nowhere near the persistence that we saw in previous La Nina events.

    Another way to look at it: If Spencer was correct, then the June anomaly should have been in the range from slightly negative (-0.2) to slightly positive (+0.2), pretty much in the range where the majority of your bettors placed it. But the actual data was over +0.3.

    Also in this La Nina, the UAH anomaly dipped below 0.10 for only four months, and took only seven months to get back above 0.20; in the last La Nina, the UAH dipped below 0.10 for ten months, and took twenty months to get back above 0.20. The cooling impacts of the naturally occurring La Nina cycles aren't as great as they were previously.

    Second, the warming of the lower atmosphere isn't the best measure of global heat buildup. The oceans are soaking up over 90% of the heat, the atmosphere only accumulating about 1% to 2%. Further, the amount of heat you are concerned about (the difference in the heat added to the atmosphere needed to cause a 0.20 deg C per decade rise versus 0.14) is only a fraction of one percent of the total global heating.

    So the oceans are the “big enchilada” and are currently keeping us from seeing much higher temperatures where we live. That the atmospheric cooling response to La Nina cycles seems to be waning should serve as a wake-up call. Just looking primarily at statistical trends and ignoring the physics of the actual energy balances can lead to bad forecasts, as demonstrated by what happened to the readers here in June.

  11. RE: Paul K2 (Comment #78798)

    I don’t think so Paul. I expect warming too although at a less alarming rate than the team. I think we can all agree that a 1 month temp anomaly (up or down) really is just weather. Don’t read too much into it.

  12. Participating in online climate debates has given me a particular loathing for the phrase “nail in the coffin of…”. People are also almost invariably incorrect when they say it. Perhaps we need an eponymous law for it (ala Godwin)?

  13. ivp0: Read my comment again. I looked at the persistence of the impacts of La Nina events (four months versus ten months in the last La Nina), not just one month. I also talked about heat balances, because the temperature rises in the atmosphere are a lagging indicator of heat building up in the oceans. So the heating of the oceans is the most important indicator, something even Roger Pielke Sr. agrees with.

    Further, I am looking for signals that the interchange of heat between the oceans and the atmosphere is changing. Clearly the oceans have kept the atmosphere from heating faster by absorbing over 90% of the heat building up on our planet. If the mixed layer in the oceans was shallower, and less heat was being sequestered in the deep ocean, then the temperatures in the atmosphere would be rising even faster. The interchange of heat between the oceans and the atmosphere can be studied by looking at both the heating effects of El Nino events and the cooling effects of La Ninas over time. Unfortunately we don’t have enough ENSO cycle data to be able to clearly quantify the changing trends; but the last two La Nina events are giving us some pretty clear information that this naturally occurring cycle is having less cooling impact on the atmosphere, both in size of the impact, and persistence (duration) of the La Nina caused cooling impact.

    And if it was just weather, then July and the upcoming month anomalies would fall back and average around zero… the satellite data indicate that this isn’t happening.

  14. RE: Paul K2 (Comment #78802)
    No Paul, weather goes up and down a lot. People have been trying to accurately predict it for centuries with limited success. I do expect long term warming (50 years+)… unless this quiet sun hands us an unexpected surprise.

  15. Nails in the lukewarmer coffin? What nails would those be, a decade-long virtual flatline? A multi-decade trend that significantly lags the mean IPCC prediction? One bump of .2 degrees in one monthly anomaly is a coffin nail? Wow.

    The hyper-warmists apparently are entitled to argue that if temps stay anywhere within the (ridiculously large) IPCC model collection prediction range, somehow all the models in the range are still validated. As long as it warms at a rate of even 1.2 degrees C/century per CO2 doubling, then somehow all models (including climate porn scenarios of 6.0-10.0 degrees) are still vindicated and in play, because there is at least one model in the collection that is not conspicuously wrong.

    In contrast, if lukewarmers say it will warm only at the low end of the IPCC predictions range, they are wrong even if that is exactly what is currently happening because of, drum roll, please–a 0.2 blip in one month’s measure from UAH. Again, wow.

  16. ——–
    “Zeke (Comment #78800)
    July 8th, 2011 at 11:35 am
    Participating in online climate debates has given me a particular loathing for the phrase “nail in the coffin of…”. People are also almost invariably incorrect when they say it. Perhaps we need an eponymous law for it (ala Godwin)?”
    ——–

    I would also suggest for inclusion my pet peeve, “smoking gun”

  17. Paul K2:

    Update me on warmer orthodoxy. I was under the impression that the current position with respect to explaining slower than predicted observed warming and a near flatline decade was to double down on the effect of aerosols (Chinese and Indian industrial growth, etc.), higher forcing than previously thought, etc.

    It sounds to me like "the missing heat is in the deep ocean, where we can't measure it, but the models tell us it ought to be there because it is not happening up here" is the current line. Is that right? Is that what we're going with currently if we are in The Consensus? One does want to stay au courant.

  18. Re: Paul K2 (Jul 8 11:54),

    You have two possible explanations for the lack of increase in ocean heat content since 2003 compared to the supposed radiative imbalance of nearly 1 W/m². First, the radiative imbalance has actually been nearly zero for the last 8 years and the modelers' estimates of aerosol or some other negative forcing, like an increase in albedo, are completely wrong, casting serious doubt on models in general. Or, the radiative imbalance exists but the heat is going into the deep ocean below 700m where it isn't being measured directly. But of course it is being measured indirectly in the form of sea level changes, which are not consistent with absorption into the deep ocean (see, for example, Cazenave et al.). But if it were going into the deep ocean, then the time scale for equilibration goes up by orders of magnitude, meaning that the age of fossil fuels will be over long before temperature comes anywhere close to equilibration and the models are still deeply wrong. Peak atmospheric CO2 will have come and gone.

  19. Owen,

    So UAH reports average monthly temperatures as well as anomalies?

    No. I work purely with anomalies. But if you do the algebra, it works out to pretty much the same thing.

    Or do you take the monthly anomaly and add it the the average temp of the baseline under which it was determined to get the actual temperature?

    I only have anomalies. I should have been more precise and said I averaged the anomalies over the baseline.

  20. The warming trend since 1997 (before the super El Nino and starting in a La Nina) is 0.095C per decade – exactly the same number calculated if one adjusts for the volcanoes since 1979.

    http://img13.imageshack.us/img13/1343/dailyuahtemps1997to2011.png

    That is only half of the predicted trend for the surface under global warming theory and only 40% of the rate predicted by the climate models for the lower troposphere level used by UAH and RSS (which is 1.23 times the surface).

    The daily UAH numbers for 2010 and 2011 year-to-date are also interesting.

    http://img35.imageshack.us/img35/7663/dailyuahtemps2010june11.png
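
    Checking the quoted fractions, taking the 0.2 C/decade surface prediction and the 1.23 amplification stated above as given:

    ```python
    observed = 0.095                # C/decade, UAH trend since 1997
    surface_pred = 0.2              # C/decade, predicted surface trend
    tlt_pred = surface_pred * 1.23  # = 0.246 C/decade for the lower troposphere

    print(observed / surface_pred)  # ~0.48, i.e. about half
    print(observed / tlt_pred)      # ~0.39, i.e. about 40%
    ```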

  21. Paul K2–
    It seems to me you have now switched from "lukewarmer" to "Roy Spencer". If your argument is that you think the evidence seems to refute Roy Spencer's particular theory, say that. Then people can figure out:
    a) whether you have correctly described Roy Spencer's claim and argument, and
    b) whether Roy's argument seems to be refuted.

    the arguments by scientists like Dr. Spencer are that much of the warming can be explained by natural cyclical events rather than increased concentrations of greenhouse gases

    I don't know if your argument that Roy's notion is being disconfirmed is correct or incorrect, because I don't know how much warming he thinks exists absent cycles. Because I don't know, I can't begin to know if you are accurately describing what his theory would predict for the relative temperature levels at the bottom of two La Ninas. (I doubt his theory claims all La Ninas are the same size or duration. So, I really don't know how we can test it with two La Ninas.)

    OTOH: My general impression is that Spencer is on the very coolest end that can be called ‘lukewarmer’. In fact, he may even fall outside the range I consider lukewarmer. I consider the range to contain those who think warming is more or less in the lower half of projections and/or sensitivity is in the lower half. I’m not sure what Roy estimates as the most likely sensitivity — but he may be too neutral for me to use that word to describe him.

    Mind you: I don’t own English, so I can’t prevent people from using the word in a way I would not. As I noted: some people claiming to be lukewarmers predict cooling over the 21st century– I think this usage is nonsense.

    But also, I would strongly dispute that the recent temperature put any dent whatsoever in the lukewarming position.

    Just looking primarily at statistical trends and ignoring the physics of the actual energy balances, can lead to bad forecasts, as demonstrated by what happened to the readers here in June.

    Huh? Do you think Hal who bet -0.55C was looking at statistical trends? I seriously doubt it. Do you think he’s a lukewarmer? I really doubt that.

    When making monthly bets, lots of people cluster their bets around last month– and that often works. I think you are seriously over interpreting the meaning of this month’s bets.

  22. Lucia,

    “I should have been more precise and said I averaged the anomalies over the baseline.”
    ————————————————-
    Sorry to be so much of a bother, but the baseline changed in 2011, effectively reducing the anomalies by about 0.1 C. If you are indeed going by the old UAH baseline (1980-1999), I think the June anomaly should be ~+0.414 C on that baseline. See Kelly O'Day's re-baselining to the new 1981-2010 period: http://chartsgraphs.wordpress.com/
    O’Day subtracted 0.10 C from all anomalies prior to 12-2010 to create a new data set for UAH (data available)

  23. Owen,

    Sorry to be so much of a bother, but the baseline changed in 2011, effectively reducing the anomalies by about 0.1 C.

    Yes.

    If you are indeed going by the old UAH baseline (1980-1999), I think the June anomaly should be ~+0.414 C on that baseline. See Kelly O’Day re-baselining to new 1981-2010 period:
    My script is giving:
    0.406291667

    Oh– looking at the script, for temperature series I don’t rebaseline by month. I should have checked before answering– I do compute the months individually for models, but not for rebaselining observations. I just find the mean of all anomalies from Jan 1980-Dec 1999.

    I find the shift based on values in the current UAH data set is -0.08629167. So, I subtract 0.08629167 from all numbers. I recompute the baseline each time I download UAH data.
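
    In code, that simpler recipe is a single mean over the whole base period rather than twelve monthly means (a sketch, with my own names):

    ```python
    import numpy as np

    def rebaseline_simple(anoms, years, base=(1980, 1999)):
        """Subtract the mean anomaly over the base period from the whole series."""
        anoms, years = np.asarray(anoms, dtype=float), np.asarray(years)
        shift = anoms[(years >= base[0]) & (years <= base[1])].mean()
        return anoms - shift  # shift is recomputed on each data download
    ```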

  24. Lucia,

    OK, makes sense. Not sure why your offset (0.086) is different from O’Day’s (0.0988), but the baseline readjustment has been made. Thanks.

  25. DeWitt Payne (Comment #78809),

    Yes, any heat going into the deep ocean (everything below 700 meters) is effectively gone for at least several hundred years (same thing with CO2 absorbed via thermohaline circulation, BTW), and with the lower coefficient of expansion for seawater below 2000 meters and below 4C (only about 2/3 of near surface waters), there is a reduced influence on sea level from deep heating. Current estimates for heating below 2000 meters are ~0.1 watt/M^2 averaged globally… which isn't much anyway. Even if you assume ~0.25 watt per square meter for 700 meters to 2000 meters (which seems a reasonable upper bound), the ocean heat accumulation is not enough to account for much of the 'missing heat'. Either aerosol offsets are extremely high (considerably higher than modelers assume), or the true sensitivity is much lower than what the CGCMs say.
    .
    If you assume an exponential approach constant of ~8.5 years, you can quite accurately predict the Levitus et al ocean heat content trend from 1955 onward from the Hadley average ocean surface temperature data (R^2 >0.91), which seems reasonable, since deeper warming ought to depend on the long term average temperature of the surface. I don't think it is a coincidence that Steven Schwartz found an effective ocean approach constant of ~8.75 years via an independent analysis based on autocorrelation of surface temperatures (and estimated fairly low climate sensitivity). The modelers jumped all over poor Schwartz, and told him that his empirical analysis had to be wrong because applying the method to synthetic data from climate models gave "incorrect" results. (I don't think comparison with a very questionable model passes muster as a legitimate refutation of an empirical analysis.) Anyway, in the end Steven Schwartz will probably turn out to be right.
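
    One plausible reading of that exponential-approach model, sketched below; the scaling, initialization, and fitting details of the actual calculation are assumptions on my part:

    ```python
    import numpy as np

    def ohc_proxy(sst_annual, tau=8.5):
        """Relax toward annual SST anomalies with time constant tau (years);
        the result, suitably scaled, would be regressed against the
        Levitus et al ocean heat content series."""
        sst_annual = np.asarray(sst_annual, dtype=float)
        smoothed = np.empty_like(sst_annual)
        smoothed[0] = sst_annual[0]
        for i in range(1, len(sst_annual)):
            smoothed[i] = smoothed[i-1] + (sst_annual[i] - smoothed[i-1]) / tau
        return smoothed
    ```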

  26. Owen-
    The post you are reading says:

    In this post, I compare the satellite based UAH 5.4 (baseline 1981-2010) and RSS 3.3 (baseline 1979-1998) series.

    The offsets are as follows:

    UAH: -0.000978
    RSS: 0.098772

    Since the UAH TLT 5.4 series is based on a 1981-2010 baseline, the offset is nearly zero (-0.00098 versus 0.0). The RSS offset changes the baseline from 1979-1998 to 1981-2010.

    You’re reading the RSS offset.

  27. I watched this month's jump in temperatures from early on. And it brings into focus what has me transfixed and in awe about the climate debate: just how closely we can monitor the subtle changes in our planet's atmosphere.

    While most are focused on winning the arguments, I cannot be but moved by the ability to watch these changes unfold on my home computer.

    The sense of being able to watch the information flow as our knowledge of the planet's responses increases is very gratifying.

  28. Re: Nick Stokes (Jul 8 18:28),

    I don’t see how sensitivity is relevant to missing heat.

    I’m pretty sure that missing heat comes out of model calculations. We don’t have the precision to close the balance at the TOA by measurement by an order of magnitude or two. We also don’t have enough data, I think, to accurately calculate the current radiative imbalance based only on RTE calculations. Albedo and absorption of incoming solar radiation are problems, not to mention aerosols other than clouds. Modeled radiative imbalance then depends on the effective equilibration time constants of the models. A long time constant will result in both high equilibrium climate sensitivity and a large radiative imbalance, heat in the pipeline as it were. A shorter pipeline means less heat in it and lower sensitivity.

    I think.

    The rate of change of ocean heat content, OTOH, is a direct measure of radiative imbalance. Assuming the ARGO and sea level data are correct, the rate of change is near zero, implying a near zero radiative imbalance. That means either the models have fundamental flaws or the forcings they’re using are wrong or some combination of both.

  29. Lucia,
    “You’re reading the RSS offset.”
    —————————
    Prior to 2011, RSS and UAH used the same baseline (1979-1998, a 20 year span). The new baseline for UAH 5.4 (1981-2010) requires a correction (of +0.098772) to RSS. It also requires a correction of (-0.000978) for a change from a 1979 to 1981 starting date. So, when O'Day created a new set of RSS data corrected to the 1981-2010 baseline he used (I think) the following offset (0.09877-(-0.000978)) for each anomaly in the RSS data set (i.e., he subtracted 0.0997C from RSS). Since RSS and the older UAH had the same baseline, to convert the old UAH, measured up until 2011, to the new baseline, you need to use the same offset (0.0997). That's the way I read it anyway.

  30. Nick Stokes (Comment #78841),

    I agree with DeWitt’s comments. In addition, the combination of ocean heat capacity and autocorrelation analysis directly yields an estimate of sensitivity. See: http://www.ecd.bnl.gov/pubs/BNL-80226-2008-JA.pdf and references. BTW, I misspelled Schwartz’s name, it is Stephen Schwartz, not Steven. The best fit of a simple exponential approach model of the Hadley SST history with the Levitus et al ocean heat data (starting 1955) yields ~8.5 years time constant, matching the 8.5 +/- 2.5 years constant Schwartz calculated once he accounted for short period (<1 year) autocorrelation, which he had neglected in his first publication. After that correction, he came up with a best estimate climate sensitivity of 1.9C per doubling of CO2.

  31. CLIMATE SENSITIVITY BY A SKEPTIC.

    1) Skeptics Position on Global Warming
    The last 130 years global mean temperature pattern continues with a global warming rate of 0.6 deg C per century: http://bit.ly/pmOEot

    2) IPCC position on Global Warming
    The last 30 years global mean temperature pattern continues with a global warming of 1.5 deg C per century: http://bit.ly/qHDBZJ
    (“The trend since 1980”)

    The ratio between the two projections is 1.5/0.6 = 2.5.

    As IPCC's climate sensitivity is 3, the skeptic's climate sensitivity is 3/2.5 = 1.2

    CLIMATE SENSITIVITY = 1.2
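
    Spelled out, the implicit assumption here is that sensitivity scales linearly with the projected warming rate:

    ```latex
    S_{\mathrm{skeptic}} = S_{\mathrm{IPCC}} \times
      \frac{0.6\,^{\circ}\mathrm{C/century}}{1.5\,^{\circ}\mathrm{C/century}}
      = 3 \times 0.4 = 1.2
    ```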

  32. SteveF:
    Quick note, but I notice that the Schwartz analysis appears to be a linear analysis of the global mean surface temperature. Despite reading Isaac Held's comments that energy balance at the TOA bears a non-linear relationship to the global mean surface temperature, and that the underlying dynamics can still be studied by linear analysis but only if one chooses a surface temperature field instead of a mean surface temperature, you still seem to be adamant that the simple linear model is the correct one. A non-linear relationship would appear to indicate a higher sensitivity than a linear one – similar to the results from PaulK's study of higher-order models. If you think Schwartz's linear model-based sensitivity of 2C/doubling is correct, a non-linear model could imply a higher sensitivity. Why do you think that Isaac Held is wrong?
    http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/19/time-dependent-climate-sensitivity/

  33. Rb
    Held’s is also linear. 🙂

    …making the response look superficially nonlinear when it is still quite linear.

    The issue is more “one box” vs “many box”.

  34. RB #78858,
    Lucia is right… all climate models I know about use linear approximations… it's the number of boxes that changes. In any case, even a very non-linear system can usually be reasonably approximated by a linear function over a small change in input variables; a change of 3.7 watts per square meter is only about a 1.5% change in total radiative energy flux. Non-linear processes could be important, or even dominant, over a small range in input in the case where a system is close to a (dreaded!) 'tipping point'. I have seen no credible information which suggests that is the case.
    .
    But the whole point of Schwartz's effort was not to generate an exact value of sensitivity, it was to generate a reasonable value independent of the very large uncertainty in aerosols. That uncertainty has for a long time made it impossible to confirm or reject the sensitivity values generated by climate models. (The use of assumed aerosols by climate models to match the temperature history is as grotesque a kludge as I have ever encountered in science.) Schwartz is a fairly well known aerosol expert; that he would try to find a way to estimate climate sensitivity independent of any information about aerosols is rather telling, I think.

  35. RB,

    RE: Held's rationale
    I had already read the linked post once, but I read it again. Please note that everything Held says is related to climate models, not measured behavior… what he says is therefore only as valid as the model, a validity I think doubtful at best. Please note that Held says different models have different degrees of non-linearity… how do we know the very non-linear model Held talks about, the one he works on, is any better (more accurate) than others which are much more linear? Held is talking about the behavior of a MODEL, not the behavior of the Earth, but commenting as if the model is reality. It is most certainly not. In any case, Held's explanation for the model's behavior appears to depend on how warming changes heat absorption by the ocean at high latitudes many years (hundreds!) after an applied forcing. I would bet that the model Held works on (like most others) does not come close to matching the measured ocean heat uptake in the recent past. It brings a smile to my face when someone seriously suggests a model which doesn't match recent measurements is capable of accurate projections of behavior hundreds of years into the future. It reminds me of the old joke… it is just models (turtles) all the way down.

  36. Lucia,
    I guess I didn't express myself clearly. Here's how I understood it. Held is saying that while the underlying dynamics is still linear, if you tried to estimate sensitivity using a linear model that related energy imbalance at the TOA to the global mean surface temperature, this neglects spatial variations due to ocean mixing at high latitudes. The effect of this ocean mixing is that an energy balance model based on the global mean has an apparent non-linearity, which is superficial because the underlying dynamics can still be understood by linear analysis, but which is an artifact of basing it on the global mean surface temperature. Models based on the global mean have a fast equilibration time and result in lower sensitivity. Therefore, I don't know how Schwartz's model is useful to compute the sensitivity, because it implicitly assumes fast equilibration by using a model based on the global mean surface temperature. Elsewhere, Held mentions an equilibration time on the order of 500 years from models. I don't believe Held is implicating the aerosol uncertainty in the above linked post.

    Steve: Whether or not the model is correct, it points to a phenomenon that leads to higher sensitivity. He also mentions that this ocean mixing is real and asks you to look up "CFCs in the ocean". I've taken a cursory look at some publications and it appears to be qualitatively correct, though models may not be getting it quantitatively right. The point is that without taking the spatial surface temperature variations into account, linear analyses based on the global mean will tend to underestimate sensitivity.

  37. BTW, the Winton 2010 paper in Held’s post elaborates on these details while demonstrating how the simpler linear models tend to underestimate sensitivity.

  38. RB,

    I read quite a lot about mixing into the ocean (heat, CFC's and other compounds), long before Isaac Held's suggestion. Yes, the models are qualitatively right in the sense that heat and chemical compounds do mix into the thermocline… but we knew that already from the existence and shape of the thermocline. Being qualitatively right is not the same as being quantitatively right; there is little support for the notion that the models are quantitatively right about the oceans. The rather large shortfall in measured ocean heat uptake compared to the CGCM projections suggests serious problems with the accuracy of at least the ocean part of the models. Measured open-ocean vertical diffusion rates of SF6 are much lower (nearly an order of magnitude!) than expected.
    .
    With regard to underestimates of climate sensitivity: I can't say for sure if an empirical estimate like Schwartz's (and many other simple estimates, most of which suggest relatively low sensitivity) will be accurate 200 years into the future, but I can say that any inaccuracy 100+ years out is unlikely to matter very much, since the widespread use of fossil fuels, and the peak in atmospheric CO2 (which, based on the continuing rates of ocean uptake, seems to me unlikely to ever go much over 600 PPM), will have long since passed.
    .
    The reliance on the projections of climate models, which are demonstrably incapable of accounting for current measurements, to set public policy WRT energy use strikes me as terribly misguided. Bad public policy does real damage, and does it now, not 100 or 200 years in the future. Better to not build your airliner and load it up with passengers for a test flight when the computer simulation it is based on regularly crashes.

  39. RB,

    One final comment about Held's post: the graphic he created shows (as best I can figure) deviation from the low sensitivity (~1.7C?) response starting about 100-150 years after an instantaneous doubling of CO2.
    .
    Do you think the deviation of Held’s model from a low sensitivity response starting 100-150 years after a doubling of CO2 is a) more credible than Schwartz’s empirical estimate, and b) matters very much in terms of current public energy policy? (Not rhetorical questions.)

  40. Steve,
    This was one of the papers I came across qualitatively confirming model predictions:
    http://www.gfdl.noaa.gov/bibliography/related_files/kd9601.pdf

    You are asking me whose sensitivity estimate is correct and my answer to that is I don’t know, but if Schwartz’s estimate sounds reasonable for short-term sensitivity estimates, incorporating ocean mixing effects is likely to push the sensitivity estimate higher.

    With regards to policy, I honestly haven’t paid too much attention because I’m of the view that there will be a lot of government posturing, but not much action in any case. This view stems from game-theoretic considerations where in this case, it pays to be a free rider, and also because politicians are sensitive to energy costs – as you see from the recent decision to release oil from the strategic petroleum reserves or from California suspending the requirement for diesel truck emissions.

    I probably forgot to add: Schwartz’s analysis doesn’t disprove model-derived sensitivity because of the neglect of some key physical effects.

  41. RB,
    I think we agree that there will be little in the way of substantive government action, at least in the near term, though I think politics is probably more uncertain than climate, and history shows horribly bad public policy, 100% based on political considerations, is not unusual. Consider, for example, the history of Russia following the Bolshevik Revolution.
    .
    I don’t think that you addressed the substantive point of my question: if Held’s model response is in approximate agreement with Schwartz’s (and other) empirical calculation of sensitivity over the next 100+ years, but deviates significantly 150 – 600 years out, does that deviation matter today? Considering the uncertainties involved, I don’t think it does.

  42. Steve,
    As I said, it's not a question I've given much thought to, so this may not be the kind of clearcut answer you are looking for. But if your question is whether effects further out in the future should be a consideration, I'd draw a parallel with nuclear. Nuclear waste is long-lived; it is not the only consideration we take into account to deploy nuclear energy, but it is a consideration. On the other hand, nuclear's negative effects are more local.

    Since Held's model shows 3.4C/doubling, it is not in agreement with Schwartz's empirical calculation, but it is in agreement with the IPCC's central estimates. I have yet to read Forster/Gregory 2008, but that appears to be an empirical calculation in agreement with model outputs. Then it becomes a question of assessing WGII's assessment for various scenarios when it reaches a certain level of maturity: is there anything to do about it? I'm sorry I'm unable to get more specific than that.

  43. RB,

    There is no question that the ultimate sensitivity (>600 years out) of Held’s model is much higher than Schwartz’s empirical estimate. The question is the approximate concordance between the two for the next 100+ years, which, as best as I can figure, is in fact the case. I just don’t see that we are in a position (technically) to worry about differences in climate sensitivity that will only be manifest if a) the models are correct, and b) long after the atmospheric CO2 level is already falling.

  44. Someone recently asked if betting were addictive. I showed my then 12 yr old son the site about nine months ago. He's been betting since, and this time won as a 13 yr old. He's been in the money several times before. If you update the overall standings, he'd be near the top. I'm proud, even if he's followed what I showed him (and gotten better results in the process!!!). I don't tell him what to bet. He listens, then makes up his own mind. What does this say (or not) about climate prediction?

  45. MikeP–
    It probably says he’s less biased by what he “wants” the outcome to be and assesses what data are available in a more balanced way than many.

    Now I’m going to have to update totals!

  46. Steve,
    For the SRES A1B scenarios, models give a 1.9-3.4K temperature increase for the 1990-2099 period. The temperature change derived from scaling the TCR estimate as described in Forster/Gregory 2008 is 2-2.8K. Therefore, the TCR-based estimate, which would be equivalent to the Schwartz estimate, is likely to be in the ballpark of climate model estimates for the next 100 years. I hope that this is not a problem either from the temperature change or from the ocean acidification, because we are likely going to get there!

    FG2008 also explain that scaling the TCR underestimates temperature change going from 2X CO2 to 4X CO2. The MWP peak to LIA bottom was about 1K, if I recall correctly. I personally hope that people don’t find themselves running a risky experiment 100 years from now.

  47. RB,
    Schwartz's estimate most certainly was never stated by Schwartz to be transient; he said it was an estimate of the ultimate response. You appear to be accepting the critiques of his method (application of the method to GCM outputs) as correct; I am not. I have not read Forster/Gregory 2008… but if I remember right, it is behind a paywall, and I am not going to pony up $35 to read it.
    .
    WRT the SRES A1B scenario with a 1.9 to 3.4K increase from 1990 to 2099: that is a pretty broad range. The measured rate from 1990 to present looks like ~0.17 C per decade (http://www.woodfortrees.org/plot/wti/from:1990/plot/wti/from:1990/trend), but that trend is a bit exaggerated by the influence of Pinatubo from 1992 to 1995; a more realistic value, discounting Pinatubo, might be ~0.14 C per decade. That value also discounts any contribution of the (apparent) ~65 year pseudo-cycle evident in the instrumental record. If the pseudo-cycle is real and continuing, the true underlying trend may be substantially less than 0.14C per decade. But even ignoring Pinatubo and long term pseudo-cycles, the low end estimate for warming from now to 2099 is ~0.18C per decade on average. The high end estimate is ~0.35C per decade. Unless the measured warming over the last decade increases soon and substantially, even the low end estimate looks to me in doubt.
    .
    I don’t think anybody really knows how much global temperatures changed between the MWP and the bottom of the LIA. Life is risky. Being poor multiplies that risk many times. The LIA was terribly damaging because humanity was mainly poor. I hope humanity is rich enough in 2099 for climate changes to not pose much of a risk.

  48. The following line from Murphy 2009 is interesting.

    The consistency of the residual with independent, bottom-up estimates of aerosol direct and indirect effects means that it is very unlikely that there are any comparably large climate forcings not currently being considered.

    Which probably leads to an interesting conclusion that there might be an ocean modeling issue independent of IPCC sensitivity central estimates.

    Steve,
    FG2008 is available for free here. My criticism of Schwartz's estimate is that, being derived from a global mean surface temperature, it is a method that will underestimate sensitivity due to neglect of ocean mixing phenomena at high latitudes. I should add that Schwartz takes two timescales into account, but that still probably doesn't correct for the slow response arising from spatial variations in the surface temperature.

  49. One more thing: I'm aware that Schwartz did not say that his estimate was for the transient case. What I'm saying is that because of the implicit fast equilibration time when using a model based on the global mean surface temperature, his estimate is likely to be closer to the transient response (TCR) than to the equilibrium response (ECS).

  50. RB,

    I had seen Murphy et al before. A couple of comments: I was puzzled by their analysis being limited to 2004 and before. It is odd because this paper was submitted in 2009. Ocean heat data since 2004 (essentially all the full-coverage ARGO data) suggests that their estimates of ocean heat history are inconsistent with current rates. Their best estimate of 1.1 watts per square meter for net aerosol off-set, combined with currently estimated ocean heat uptake rate (globally ~0.3 Watt/M^2 or a bit less) suggests an equilibrium sensitivity of about 0.55 C per watt/M^2 (or ~2.04 C per doubling). Some climate scientists are worried about “missing heat” for good reason…. recent data is not consistent with high sensitivity.
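
    A rough reconstruction of that arithmetic; the warming-to-date and GHG forcing below are my assumptions (the comment gives only the aerosol offset, the uptake rate, and the results):

    ```python
    delta_T = 0.8     # C, assumed observed warming to date
    f_ghg = 2.9       # W/m^2, assumed GHG forcing
    f_aerosol = 1.1   # W/m^2, Murphy et al. best-estimate aerosol offset
    uptake = 0.3      # W/m^2, current ocean heat uptake

    sensitivity = delta_T / (f_ghg - f_aerosol - uptake)
    print(sensitivity)        # ~0.53 C per W/m^2, near the ~0.55 quoted
    print(sensitivity * 3.7)  # ~2.0 C per doubling, near the 2.04 quoted
    ```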

    I will read FG2008.

  51. RB,

    My criticism of Schwartz’s estimate is that, being derived from a global mean surface temperature, it is a method that will underestimate sensitivity due to neglect of ocean mixing phenomena at high latitudes.

    So it seems you think Held’s model is an accurate predictor of Earth’s behavior. I honestly doubt it.

  52. RB,
    A quick look at FG2008 tells me that they are assuming no natural oscillations (no consideration of AMO, PDO, nor even ENSO) which might influence the 1970 to 2006 temperature history. That already casts doubt on their analysis in my mind. I will read more.

  53. Steve
    So it seems you think Held’s model is an accurate predictor of Earth’s behavior.

    No, I don't think that. But, as I stated earlier, I think that Held's model, along with others, shows that the mixing in high latitudes, qualitatively confirmed by observations, results in a significantly higher system response time (than 10 years) and an ECS that is higher (by an unknown amount) than would be predicted by models that neglect spatial variations in temperature and have a fast equilibration time (whatever that transient value might be).

  54. Re:SteveF (Comment #78961)

    It’s a fair point and I’m not sure if Section 4.3 addresses it adequately.

  55. Steve, can you explain how FG2008 supports the uninformed Bayesian priors used in AR4 for the only observational estimate of climate sensitivity better than did FG2006?
    =====================

  56. Re; Steve (Comment #78958),
    With regards to Murphy 2009, it looks like the plot shows an 8-year smoothing of Earth heat content. Maybe they then centered it within the time window.

    A time history of the energy going to heat the Earth (Figure 4b) requires differentiating the heat content. The ocean heat content data are too noisy for single year differences, so successive linear fits were performed to running 8-year segments of data. Eight years is the longest period that still cleanly separates the dips due to El Chichon and Mt. Pinatubo.
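
    A sketch of that procedure: fit a straight line to each running 8-year window of the heat content series and take the slope as the heating rate (my own implementation, which may differ in detail from Murphy et al.'s):

    ```python
    import numpy as np

    def running_slopes(years, ohc, window=8):
        """Window-center years and least-squares slopes of OHC versus time."""
        centers, slopes = [], []
        for i in range(len(years) - window + 1):
            t, h = years[i:i+window], ohc[i:i+window]
            slopes.append(np.polyfit(t, h, 1)[0])  # slope = dH/dt over the window
            centers.append(np.mean(t))
        return np.array(centers), np.array(slopes)
    ```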

  57. Looks like UAH and RSS are continuing to diverge. The RSS number for June is .277. So UAH is now above RSS even though it should be .1C lower due to the different calibration period. Jeff mentioned that he thought UAH was more accurate because it used AQUA Ch 5 and the Aqua satellite has station keeping. But RSS also uses AQUA Ch 5, and its integrated satellite values are closer to AQUA Ch 5 alone than UAH's integrated satellite values are. In any case, the UAH guys know that the satellites are diverging, and the RSS guys know it also. Apparently they are diverging in the TMT and the equatorial TLT even faster than in the global TLT. So there is a real problem, and I can't say that we can put much faith in any satellite data for the moment. So far, the only thing that Roy Spencer has said about the divergence is that he recognizes it and he thinks that UAH is better than RSS. This concerns me because it looks like a digging in of heels that may prevent the problem from being found.

  58. RB (Comment #78966),
    Yes, the year on year swings in ocean heat content are very large; “wild” is the description that came to my mind when I tried to connect changes to surface temperature swings… not possible to do. I don’t believe all of that is real, since the pre-ARGO data was very sparse, and the variation looks much lower since 2004.

  59. Steve:
    Forster commented at Curry’s blog that they used Murphy data to rework FG2006 to get a 3C sensitivity.

  60. Their numbers likely come from lambda = 1.25 W/m^2/K, as described in Murphy 2009:

    For total outgoing radiation minus forcing, the ordinary regression slopes for λ are 1.31 and 1.43 W/m^2/K for ERBE and CERES data, respectively. The ERBE and CERES interannual slopes are all less than 1.25 W/m^2/K whereas the seasonal slopes using orthogonal distance regression are larger than 1.5 W/m^2/K. We adopt λ = 1.25 ± 0.5 W/m^2/K as an estimate for the response of net radiation to temperature variations between 1950 and 2004.

  61. RB,
    As I said earlier, I have doubts about the Murphy 2009 analysis. I also have doubts about the implicit assumption that all of the post 1970 rise in temperatures was independent of a background of natural variation… the evidence for substantial background temperature variation is quite credible, but appears not even to have been considered. We do not have (unfortunately) satellite data for long and short wave fluxes nor ARGO data from 1915 to 1945, when GHG forcing was much lower but the rate of temperature rise was almost as great as 1970 to 2006.
    .
    I thank you for participating in our exchange on this subject, but I do not think we will reach any agreement. I am inclined to believe that the currently available heat balance (including current ARGO data) is most consistent with a quite modest sensitivity value. I do not think that an analysis like Isaac Held's (where a model is used to explain how high long term climate sensitivity is consistent with low empirical sensitivity) is any more credible than the model used… which is IMO not very credible at all. I also think that there is considerable uncertainty in the ERBE and CERES data, which makes a very accurate estimate impossible.
    .
    Time will tell, but I will be surprised if the surface trend over the next 10-20 years is much more than about 0.1C per decade… and below 0.1C per decade seems to me more likely, based on the historical record. I do not expect to live to see it, but at some point I believe the discrepancy between projections of warming and measured warming will become large enough that the consensus estimate of climate sensitivity will have to be revised downward for climate science to maintain a measure of credibility with the public.

  62. Re: SteveF (Comment #78869)

    … the models are qualitatively right in the sense that heat and chemical compounds do mix into the thermocline… but we knew that already from the existence and shape of the thermocline. Being qualitatively right is not the same as being quantitatively right; there is little support for the notion that the models are quantitatively right about the oceans.

    A distinction should be made between mixing “which happens” and a “qualitatively right” mixing.

    Measured open-ocean vertical diffusion rates of SF6 are much lower (nearly an order of magnitude!) than expected.

    Perhaps, but why should vertical diffusion be expected to take place more-or-less uniformly in the open ocean?

  63. Let’s remember there are feedbacks to take into account in all this calculating.

    If one is using observational data, then all the (short-term) feedbacks are already incorporated into that data.

    Stefan-Boltzmann says outgoing radiation changes by about 5.45 W/m2 per degree K of surface warming, and the actual observational data are very close to that.

    To get to +3.0C, one needs an additional 16.5 W/m2 (3.7 W/m2 of direct GHG forcing and 12.8 W/m2 of feedbacks).
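
    Checking that arithmetic against the no-feedback Planck response, dF/dT = 4*sigma*T^3, at an assumed effective temperature near 288 K:

    ```python
    sigma = 5.67e-8            # W/m^2/K^4, Stefan-Boltzmann constant
    T = 288.0                  # K, assumed effective temperature
    planck = 4 * sigma * T**3  # ~5.42 W/m^2/K, close to the 5.45 quoted

    flux_for_3C = 3.0 * planck     # ~16.3 W/m^2, vs. the 16.5 quoted
    feedbacks = flux_for_3C - 3.7  # ~12.6 W/m^2 implied from feedbacks
    print(planck, flux_for_3C, feedbacks)
    ```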

  64. Re: RB (Jul 12 11:59),

    Look at the fundamentals: specifically, conservation of energy. A high long term climate sensitivity requires that there must be a large radiative imbalance. That heat has to be going somewhere. That somewhere has to be the ocean. It strains credulity to hypothesize that as much or more heat is being deposited in the deep ocean as in the upper 700m. Radiative forcing is highest in the tropics. It can't all be transported to high latitudes for downwelling into the deep ocean.

    SF6 isn’t the only tracer for ocean mixing. There’s also bomb testing generated 14C as well as the 14C age of deep ocean water. The actual properties of long term ocean mixing are fairly well understood but aren’t modeled well because of the orders of magnitude difference in time constants between the atmosphere and the ocean. Without tricks it would take thousands of years of model spin-up rather than a hundred years.

  65. Oliver #78979,

    A distinction should be made between mixing “which happens” and a “qualitatively right” mixing.

    I do not understand what you are saying. I don’t think I used the words “which happens”.

    why should vertical diffusion be expected to take place more-or-less uniformly in the open ocean?

    I am not suggesting that it should. Mixing is (appears to be) strongly localized, especially at high latitudes due to deep convection. That convectively driven deep mixing is very different in character from shear-driven eddy diffusion that takes place where there is a density stabilized thermocline. The projection of both CO2 uptake by the ocean and heat uptake by the ocean depend very much on which transport process dominates.
    .
    Eddy driven diffusion should jointly/inseparably transport CO2 and heat down the thermocline, and both uptake rates should be strongly time dependent, which is to say, an initially high rate of uptake should rapidly fall off with time as the ocean surface and top of the thermocline come into equilibrium with the atmosphere. In contrast, CO2 (and chemical tracer compounds like CFC’s) are carried to great depth by convective mixing at high latitudes, at a much less time dependent rate (more constant for a very long time), but heat is not so effectively transported that way… because you can’t have convection unless the surface water density is greater than the underlying water density. In other words, the surface water has to be very cold and dense to descend… the heat has to be lost to the atmosphere before convection can take place. CO2 is efficiently absorbed by cold water and so CO2 will continue to be absorbed (“pumped” to great depth) by the thermohaline circulation, even while not much heat is. The CGCM’s seem to substantially overestimate heat and CO2 uptake down the thermocline via eddy driven diffusion, and underestimate CO2 absorption due to the thermohaline circulation/deep convection.
    That is why I pointed out the vertical diffusion rate is low over much of the open ocean.

  66. DeWitt,

    One thing I am a little puzzled about: the dating of the deep ocean water via C14 (bomb or natural) seems to me to be subject to bias from the "raining" into the deep ocean of surface particulates, especially carbonate shells, which are rich in C14 and which dissolve at great depth. Have you read anything about this potential bias?

  67. Re: SteveF (Jul 12 14:13),

    I bought a copy of Wigley and Schimel’s The Carbon Cycle, but haven’t had a chance to read it in depth. That’s probably discussed in there somewhere.

    Any 14C enrichment would reduce the apparent time constant for deep ocean mixing.

  68. Re: SteveF (Comment #78982)

    Oliver #78979,

    A distinction should be made between mixing “which happens” and a “qualitatively right” mixing.

    I do not understand what you are saying. I don’t think I used the words “which happens”.

    What I am saying is that the sheer fact of acknowledging that vertical mixing exists does not make the mixing scheme “qualitatively right,” never mind “quantitatively right.”

    why should vertical diffusion be expected to take place more-or-less uniformly in the open ocean?

    I am not suggesting that it should.

    The expectation of an “order of magnitude more mixing than is measured” is based on a traditional assumption that mixing is more or less spread out over the ocean.

    Mixing is (appears to be) strongly localized, especially at high latitudes due to deep convection. That convectively driven deep mixing is very different in character from shear-driven eddy diffusion that takes place where there is a density stabilized thermocline. The projection of both CO2 uptake by the ocean and heat uptake by the ocean depend very much on which transport process dominates.

    Deep convection is not mixing. That’s why you can measure the deep water as a separate, essentially unmixed water mass. The eddy diffusivity has been hypothesized in order to close the budget and counteract deep convection.

    Your point is valid, however, that there are important differences between sequestration of CO2 by downward convection and by “eddy diffusion.”

  69. Oliver,

    The expectation of an “order of magnitude more mixing than is measured” is based on a traditional assumption that mixing is more or less spread out over the ocean.

    I think the expectation comes more from assuming more-or-less average upwelling rates (~1.2 cm per day) at low and mid latitudes and the measured shape of the thermocline at those latitudes. The vertical diffusion rate consistent with the thermocline profile is much higher than the measured rate in most places, while the measured rate is much higher in a few places.
    .

    deep convection is not mixing

    I don't want to argue semantics, but convection most certainly does cause vertical mixing… at least locally. I mean, water from below comes up and water from above goes down. You can see a relatively shallow lake change in appearance in the autumn when the surface water cools enough to start convection… the lake water pretty quickly becomes mixed top to bottom.

  70. Re: SteveF (Comment #78987) July 12th, 2011

    The expectation of an “order of magnitude more mixing than is measured” is based on a traditional assumption that mixing is more or less spread out over the ocean.

    I think the expectation comes more from assuming more-or-less average upwelling rates (~1.2 cm per day) at low and mid latitudes and the measured shape of the thermocline at those latitudes.

    Does that mean we agree? 😉

    deep convection is not mixing

    I don't want to argue semantics, but convection most certainly does cause vertical mixing… at least locally. I mean, water from below comes up and water from above goes down.

    It isn’t just semantics. If the water below and the water from above simply switch places, then has mixing occurred?

    You can see a relatively shallow lake change in appearance in the autumn when the surface water cools enough to start convection.. the lake water pretty quickly becomes mixed top to bottom.

    There's a fairly large qualitative difference between A) a shallow lake where a big part of the surface goes unstable at nearly the same time and B) a stable, deep ocean where the downwelling region is nearly a point source, with very little mixing with the neighboring mid-depth waters.

  71. New paper to be published in GRL has found the missing ocean heat accumulation.

    (Actually, more like 45% has escaped to space due to the ENSO and 35% is skipping the intermediate depths and making it directly to the deep ocean. My comment: this is only possible in the Arctic and Antarctic bottom water formation regions; the ocean sinking is occurring at warmer temperatures than normal, -1.0C instead of -1.5C. This has already been examined for the Antarctic (Purkey and Johnson 2010) and found to be a very, very small number when extrapolated to the Arctic as well.)

    http://www.knmi.nl/publications/fulltexts/katsman_voldenborgh_grl_all.pdf

  72. Oliver,

    Does that mean we agree?

    Maybe, I am not sure. You don’t seem terribly agreeable, but maybe that is a mistaken impression. 😉

    If the water below and the water from above simply switch places, then has mixing occurred?

    .
    I think the exchange of deeper waters with surface waters in regions of deep convection does in fact cause considerable mixing. The surface to bottom composition is fairly constant. In general, convection causes substantial mixing. I have never encountered a system where that isn’t true.

  73. Bill Illis,

    They say: “Recently-observed changes in these two large-scale modes of climate variability point to an upcoming resumption of the upward trend in upper ocean heat content.”

    Of course, the explanation is ALWAYS that rapid heating is just around the corner. I wonder if the Dutch know the story of the little boy who cried ‘wolf’ too often? Maybe not. The good news is that these mainstream climate scientists feel the need to address the reality of a lack of upper ocean warming over 8 years. It is at least a start in addressing reality. More to come, of course.

  74. Bill Illis and SteveF,

    Do you happen to know the relative heat capacities of the hydrosphere and atmosphere? I have seen numbers ranging from 1E+03 to 1E+06 for the ratio. Given the much larger heat capacity of the oceans, and the assumption that the oceans absorb (or would be expected to absorb) >95%(?) of any energy imbalance, and given the fact that we have seen only about a degree change in atmospheric temp over the past century, what kind of temperature change do we expect from the oceans? Seems like it would be very, very small per year and even per decade. Are the Argo measurements really reproducible enough to accurately measure ocean heat content, even in the upper 700 m?

  75. Re: SteveF (Comment #78991)

    Does that mean we agree?

    Maybe, I am not sure. You don’t seem terribly agreeable, but maybe that is a mistaken impression. 😉

    Of course I can’t be expected to agree with something that is, IMHO, not correct. 😉

    If the water below and the water from above simply switch places, then has mixing occurred?

    I think the exchange of deeper waters with surface waters in regions of deep convection does in fact cause considerable mixing. The surface to bottom composition is fairly constant. In general, convection causes substantial mixing. I have never encountered a system where that isn’t true.

    One would think that deep convection would necessarily be accompanied by mixing, and indeed that is the source of Munk’s conjecture. He may well have been right, but observationally it is still an open question.

    On the second point, the surface to bottom composition of the oceans is not at all constant.

    Finally, many convective systems can be imagined which are associated with very little mixing.

  76. Re: Owen (Comment #78993)

    Do you happen to know the relative heat capacities of the hydrosphere and atmosphere? I have seen numbers ranging from 1E+03 to 1E+06 for the ratio.

    The question was not addressed to me, but the heat capacity of the oceans due to mass is on the order of 1×10^3 times that of the atmosphere.

    Given the much larger heat capacity of the oceans…what kind of temperature change do we expect from the oceans?

    Microscopic.

    Seems like it would be very, very small per year and even per decade. Are the Argo measurements really reproducible enough to accurately measure ocean heat content, even in the upper 700 m?

    On the other hand, oceanographers are accustomed to measuring very, very small temperature changes. The upper 700 m temperature change (if any) seems tractable given a decade or a few to measure.
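
    As a rough illustration of both points, here is a minimal sketch assuming a hypothetical imbalance of 0.5 W/m^2 absorbed entirely by the top 700 m (my assumed number, not one claimed in this thread):

        # Warming rate of the top 700 m under an assumed 0.5 W/m^2 flux.
        rho = 1000.0        # seawater density, kg/m^3 (rounded)
        cp = 4000.0         # specific heat of seawater, J/(kg K) (rounded)
        depth = 700.0       # layer depth, m
        imbalance = 0.5     # assumed flux into the layer, W/m^2
        seconds_per_year = 3.156e7
        heat_capacity = rho * cp * depth                 # ~2.8e9 J/(m^2 K)
        dT_per_year = imbalance * seconds_per_year / heat_capacity
        print(f"~{dT_per_year:.4f} K/yr, ~{10 * dT_per_year:.2f} K/decade")

    That works out to a few thousandths of a degree per year: tiny year to year, but several hundredths of a degree over a decade, which is the scale oceanographers are used to resolving.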

  77. Re: Bill Illis (Jul 12 18:41),

    In this study we therefore trace these heat budget variations by analyzing an ensemble of climate model simulations.

    Yet another ‘experiment’ based on models. *sigh*

    At times when deep convection is weak or absent in our model simulations, the AMOC presumably weakens. This yields a warming of the deeper ocean waters due to the reduced ventilation by colder surface waters (Fig. 4d).

    Their analysis makes no sense. The AMO hasn’t been low for 2003-2011 according to the AMO index. And warms the deeper waters how? The energy has to come from somewhere. The only source I can think of is that at low flow, downward eddy diffusion is greater than the upwelling flow and the thermocline moves lower. But that means the heat still has to go through the upper ocean to get to the deep ocean.

    Higher circulation leading to warmer temperatures in the downwelling zone makes more sense to me. That would, I think, also lead to higher radiative loss to space from high latitudes where atmospheric radiative absorption is lower because of lower specific humidity and lower tropopause altitude. The graph linked above also shows that UAH NoPol TLT, which covers ~60-82 degrees latitude, has been high for the period of interest as well as being somewhat correlated to the AMO index.

  78. Re: SteveF (Comment #78991) July 12th, 2011

    I think the exchange of deeper waters with surface waters in regions of deep convection does in fact cause considerable mixing. The surface to bottom composition is fairly constant. In general, convection causes substantial mixing. I have never encountered a system where that isn’t true.

    Now that I re-read this passage, I see that you did say “in regions of deep convection” in which case you are correct – the convective “plume” is rather well mixed vertically. It seems I have been arguing something semantic after all, since I have been viewing the deep water convection-mixing balance in the traditional “large-scale” sense of the circulation, while you have been arguing that mixing occurs within the plume itself. My apologies.

  79. Extending my comment above:

    If you look at Figure 4d in Katsman and Voldenborgh, it correlates the rate of change of the AMOC with the rates of change of deep and upper ocean heat content. The rate of change is zero at the max and the min. Also, the rates of change of upper and deep ocean heat content appear to be highly correlated. So the heat isn’t going into the deep ocean instead of the upper ocean, because the deep ocean heat content isn’t increasing either. Their mechanism analysis is for the minimum of the AMOC, but the AMOC was near its maximum for the period of interest. Did the reviewers actually read the paper before approving it for publication?

  80. Owen (Comment #78993),
    The relative heat capacity is a little complicated. Dry air has a heat capacity ~1/4 that of an equal weight of water. Since the surface pressure is ~1 kg/cm^2, the atmosphere weighs as much as ~10 meters of water depth but has the heat capacity of only ~2.5 meters of water. In addition to sensible heat, air contains latent heat (the heat of condensation of water), which adds to its effective heat capacity. The latent heat content changes a lot with altitude and with temperature, so it may be hard to determine exactly.

    If you want to consider only sensible heat, and assuming an average ocean depth of 4000 meters, then the heat capacity of the atmosphere (per square meter of ocean surface) is ~2.5/4000 = ~6.25×10^-4 of the ocean. But the atmosphere covers the entire Earth, not only the ocean surface (which is ~70%), so the total relative capacity (considering only sensible heat) is closer to (1/0.7)×6.25×10^-4 = ~8.9×10^-4. Including latent heat would increase the calculated heat capacity of the atmosphere, so it is reasonable to say the ocean has ~1000 times the heat capacity of the atmosphere.

    Of course, the atmosphere exchanges heat with the ocean surface layer and the land surface pretty quickly, so I’m not sure it is of much practical importance to define the relative heat capacity of the atmosphere and ocean alone. More important is the combined heat capacity of the atmosphere, the ocean surface layer (~60 meters), and the land surface, which together can be considered the “fast response” heat capacity of the Earth, compared to the heat capacity of the deeper ocean. This ratio is more like ~55/4000 = 0.01375, which means the slower-responding deeper ocean is about 70 times more thermally massive than the fast-responding surface and atmosphere. That slower-responding deeper ocean covers a huge range of response times, from several years (water just under/close to the ocean surface layer) to at least several centuries (say, water below ~2000 meters). The rate of transport of heat from the fast-responding part of the system to the deeper ocean is an important factor in analyzing heat balance, and it seems (so far) poorly defined… models do not seem to agree with the data very well over the last decade.
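
    A minimal sketch of that arithmetic, for anyone who wants to check the round numbers (all values are the ones quoted above):

        # Sensible-heat-only estimate of atmosphere vs ocean heat capacity.
        c_air_over_c_water = 0.25                      # dry air vs an equal mass of water
        atm_water_equiv = 10.0 * c_air_over_c_water    # ~2.5 m of water equivalent
        ocean_depth = 4000.0                           # assumed mean ocean depth, m
        ocean_fraction = 0.7                           # fraction of the surface that is ocean
        ratio = (atm_water_equiv / ocean_depth) / ocean_fraction
        print(f"atmosphere/ocean ~ {ratio:.1e}")       # ~8.9e-4, i.e. roughly 1/1000

        # "Fast response" layer (~55 m water equivalent) vs the deeper ocean.
        fast_layer = 55.0
        print(f"fast/deep ~ {fast_layer / ocean_depth:.5f}")              # ~0.01375
        print(f"deep is ~{ocean_depth / fast_layer:.0f}x the fast layer") # ~70x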

  81. DeWitt Payne (Comment #78999),

    I hope the same journal will publish a paper in 5-10 years showing just how incorrect the predictions of Katsman and Voldenborgh were… but if I were to wager, I would bet no such article will ever be published.

  82. In #78992 I asked if the Dutch know the story of the little boy who cried wolf too often… I asked my Dutch distributor, and it appears they do not tell this story in the Netherlands.

  83. Re: SteveF (Jul 13 15:57),

    I’ve decided that there should be two morals to the boy-who-cried-wolf story. The first is the obvious one, that nobody believes a liar; the second is that ignoring a positive result is not a good idea, even if the test produces too many false positives.

  84. SteveF (Comment #79003)
    Thanks for your elaboration of the heat capacities of ocean and atmosphere. The factor of 1000 seems to be a rough but reasonable rule of thumb. If so, for a decade in which the average atmospheric temperature were to rise 0.15 C, might one then expect the average ocean temperature to rise by 0.00015 C if the equilibration time were rapid (which it is clearly not for the deep ocean)? Do you know what sort of temperature changes are actually measured in the top 700 meters? I assume that although small they can be measured accurately.
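
    A one-line check of that scaling, reading the question as: if the heat that warmed the atmosphere 0.15 C were instead spread through the whole ocean, using the ~1000:1 capacity ratio from comment 80:

        # Ocean temperature change if the same heat went oceanwide.
        ratio = 1.0e-3        # atmosphere/ocean heat capacity, from comment 80
        dT_atm = 0.15         # assumed decadal atmospheric rise, C
        print(f"{dT_atm * ratio:.5f} C")   # ~0.00015 C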

Comments are closed.