Roy announced UAH May: +.289C

Roy announced a tiny downtick in temperature for May 2012 (and his fun "no predictive value" graph now shows a 4th order polynomial smooth). Here’s my graph with a linear fit and three choices of models used to compute the uncertainty intervals:

Note: The linear trend is distinctly positive, with “no warming” rejected using any of the three statistical models shown in the figure. Meanwhile, 0.2 C/decade since 1980 remains rejected if one “likes” the red noise model and uses 2-σ as the criterion for significance. (Recall 1.96 σ gives the 95% confidence interval for Gaussian residuals.) But it’s inside the uncertainty intervals if one “likes” the best-fit ARIMA with coefficients based on the data since 1990. Note also: 0.2 C/decade is for the surface and other caveats apply.
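
For those wondering how the “red noise” intervals are roughly computed, here is a minimal sketch in R of the general idea (an OLS trend whose standard error is inflated for lag-1 autocorrelation, Santer-style). The anomaly series below is simulated, and the details differ from my actual scripts:

    # Minimal sketch: OLS trend with an AR(1)-adjusted ("red noise") uncertainty.
    # 'anom' is simulated monthly data standing in for the UAH anomalies.
    set.seed(1)
    n    <- 389
    anom <- 0.0012 * (1:n) + arima.sim(list(ar = 0.5), n = n, sd = 0.15)
    yr   <- seq(1979, by = 1/12, length.out = n)

    fit <- lm(anom ~ yr)
    b   <- unname(coef(fit)[2])                   # trend, C per year
    se  <- summary(fit)$coefficients[2, 2]        # naive OLS standard error

    r1   <- acf(resid(fit), plot = FALSE)$acf[2]  # lag-1 autocorrelation of residuals
    neff <- n * (1 - r1) / (1 + r1)               # effective sample size
    se.a <- se * sqrt((n - 2) / (neff - 2))       # Santer-style inflation

    c(trend.per.decade = 10 * b, approx.95pct.halfwidth = 1.96 * 10 * se.a)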

Now, let’s turn to what’s important. Who won the quatloos? This month, the win went to pdjakow, whose bet of 0.290 almost nailed the reported value of 0.289. He edged out DonB’s 0.287 for second place. (Luckily for typo-prone me, this was not a tie, so the “tie goes to first entrant” rule didn’t kick in on a month where my typo would have affected its fairness.)

Dudley Robertson took third place with 0.298.

Have fun spending your quatloos guys! Details are below:

Winnings in Quatloos for UAH TLT May, 2012 Predictions.
Observed value: +0.289 (C)
Rank  Name  Prediction (C)  Bet  Won (Gross)  Won (Net)
1 pdjakow 0.29 5 81.691 76.691
2 DonB 0.287 4 52.282 48.282
3 DudleyRobertson 0.298 2 20.913 18.913
4 PatrickBrown 0.28 1 8.365 7.365
5 pdm 0.311 5 33.46 28.46
6 JohnNorris 0.313 5 3.603 -1.397
7 Anteros 0.264 5 0 -5
8 Tamara 0.318 5 0 -5
9 SteveT 0.32 4.18 0 -4.175
10 PavelPanenka 0.323 3 0 -3
11 ivp0 0.323 5 0 -5
12 Ray 0.255 5 0 -5
13 PaulS 0.25 5 0 -5
14 MichaelP 0.33 4 0 -4
15 EarleWilliams 0.246 5 0 -5
16 Genghis 0.243 5 0 -5
17 MikeP 0.24 5 0 -5
18 diogenes 0.34 3 0 -3
19 Boris 0.344 5 0 -4.999
20 ChuckL 0.232 4 0 -4
21 BobW 0.229 3 0 -3
22 Pieter 0.222 4 0 -4
23 Skeptikal 0.221 4 0 -4
24 BobKoss 0.213 5 0 -5
25 RobertLeyland 0.202 4 0 -4
26 lucia 0.2 5 0 -5
27 Cassanders 0.192 5 0 -5
28 Freezedried 0.185 5 0 -5
29 IainT 0.18 5 0 -5
30 LesJohnson 0.18 5 0 -5
31 JefffFaceyj 0.175 4 0 -4
32 Rick 0.17 5 0 -5
33 AMac 0.152 2 0 -2
34 GeorgeTobin 0.145 3 0 -3
35 PaulButler 0.144 5 0 -5
36 YFNWG 0.135 5 0 -5
37 CoRev 0.45 5 0 -5
38 MDR 0.128 3 0 -3
39 March 0.123 5 0 -5
40 bushy 0.12 4 0 -4
41 jferguson 0.112 2 0 -2
42 AndrewKY 0.11 3 0 -3
43 andreas 0.11 3 0 -3
44 mct 0.102 5 0 -5
45 ArfurBryant 0.084 5 0 -5
46 GaryMeyers 0.084 3.14 0 -3.14
47 Hal 0.011 5 0 -5
48 denny 1.25 3 0 -3

The net winnings for each member of the ensemble will be added to their accounts.

120 thoughts on “Roy announced UAH May: +.289C”

  1. Congrats to winners! Lucia, you almost gave me a cardiac arrest with the absence of a decimal point in the heading!! 🙂

  2. AGW disproven last month again by squiggly line going in the wrong direction.

    Didn’t help my quatloo stack though. 🙁

    Andrew

  3. Interesting, I put in my bet (second time too, as you lost the first batch), but it does not appear…oh well, I wasn’t close…

  4. Lucia: do you have a FAQ or something where you explain in detail how you compute the curves in these graphs – especially the error lines?

  5. toto–
    I’ve discussed them in the past– but I don’t repost each month.

    The red noise uses the same method you’ve read in Santer’s paper on tropospheric trends (and has been used by Tamino in the past). The ARIMA fit uses the ARIMA routine preprogrammed into R. The ARIMA with fractional differencing uses a package in R. It doesn’t pick up much evidence of “d” — but that could be because “d” doesn’t much matter until you have a long time series.

    Does that answer your question or do you want to know something more?
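
    In case it helps, a rough sketch of what those last two fits look like in R (simulated residuals; the orders and options in my real script differ):

        # Illustrative only: ARMA and fractionally-differenced fits to detrended residuals.
        library(fracdiff)                       # for fracdiff(): ARFIMA-type fits
        set.seed(2)
        r <- arima.sim(list(ar = 0.45), n = 270, sd = 0.15)   # stand-in for residuals

        fit.arma <- arima(r, order = c(1, 0, 1))              # built-in ARIMA/ARMA fit
        fit.fd   <- fracdiff(as.numeric(r), nar = 1, nma = 1) # also estimates "d"

        fit.arma$coef
        c(d = fit.fd$d, ar = fit.fd$ar, ma = fit.fd$ma)       # "d" tends to be small on a short series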

  6. Now that we’ve got quite some data from UAH, I’d be kinda curious to see the delta-delta T (the change in the change of temperature, or the change in the rate of temperature fluctuations). It could be fun, and I have a feeling it’ll look logistic.

  7. Calling Anteros (seen more often here than at Dr Curry’s blog)

    You said:
    “Louise –

    Just out of interest, I wonder what you’ll say in the spring next year when at least two of the datasets show 15 years of cooling? I suppose by extension I’m quite keen to hear what Phil Jones has to say too”

    I replied that I’d book-marked this and would get back to you. If you’re still wondering what I’ll say, I’m still waiting for those datasets to show global cooling

    http://judithcurry.com/2011/11/10/disinformation-vs-fraud-in-the-climate-debate/#comment-137014

    Lucia- thank you for allowing me to ‘page’ Anteros here but I don’t see him elsewhere

  8. Bah! Should have repeated my original bet of 0.283 made with my Mark 1.0 eyeball. Would have cashed. The re-bet allowed me to test my updated Mark 1.1 eyeball and it seems to be defective. 🙂

  9. Lucia: thanks, guess I’ve got some reading to do.
    .
    Louise: according to satellites (both UAH and RSS), and also HadCRUT3, temperatures are cooler now than in 1998. Therefore, according to some people, it is perfectly justifiable to say that there has been 15 years of “cooling”.
    .
    I like Gavin’s take on this (though in a different context): “What part of ‘short term trends are not significant’ did you not understand?”

  10. Louise–
    I clicked over to the curry link. I’m not quite sure what the argument is supposed to be about nor what years I should use. My script is set up to only start in January. Some of the quibbling seemed to be about ‘since 1995’. Some about 15 years.

    Currently, with UAH and “red” noise, I get

    “warming since Jan 1995, but not statistically significant

    “warming since Jan 1997, but not statistically significant
    “warming since Jan 1998, but not statistically significant

    I don’t know how that fits into your arguments.

  11. toto

    “What part of ‘short term trends are not significant’ did you not understand?”

    Alas…. if only this statement weren’t seriously misleading in the context of many arguments in which Gavin advances it.

  12. Thanks Lucia.

    I said last November that I would get back to Anteros to give him my words considering the 15 years of cooling he seemed to be expecting. I’ve given Spring a decent time to get started and so am asking Anteros to show me the 15 years of cooling. I haven’t noticed it being announced but I’m sure Anteros will be along shortly to point it out to me.

  13. 15 years is a ‘short-term trend’? 30 years is supposed to be climate significant, which must be long-term in some sense. So 15 years should be a ‘demi long-term trend’.
    After all, 17 years is a ‘Santer’, by which theories survive or fall…

    Louise (#97096) – CRU was just about to hit 15 years of cooling but they changed the dataset instead!

  14. Lucia, wearing your statistical hat: what information about the noise can we get from an examination of the residuals of the detrended data?

  15. Doc–
    I’m not sure what specifically you want to know. We can estimate the lag 1 autocorrelation on the assumption the trend is linear, test whether the residuals are normal (though this has issues), test which ARMA(p,q) model has the best AICc value….
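
    For anyone who wants to play along, a sketch of those diagnostics in R (made-up data, not the actual UAH series or my script):

        # Sketch: diagnostics on residuals from a linear detrend (simulated data).
        set.seed(3)
        n <- 300
        y <- 0.001 * (1:n) + arima.sim(list(ar = 0.4), n = n, sd = 0.15)
        r <- resid(lm(y ~ I(1:n)))

        acf(r, plot = FALSE)$acf[2]     # lag-1 autocorrelation estimate
        shapiro.test(r)                 # crude normality check (has known issues)

        # Compare ARMA(p,q) fits by AICc
        aicc <- function(fit, n) {
          k <- length(fit$coef) + 1     # +1 for the innovation variance
          AIC(fit) + 2 * k * (k + 1) / (n - k - 1)
        }
        grid <- expand.grid(p = 0:2, q = 0:2)
        grid$AICc <- apply(grid[, c("p", "q")], 1, function(g)
          tryCatch(aicc(arima(r, order = c(g[1], 0, g[2])), n), error = function(e) NA))
        grid[order(grid$AICc), ]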

  16. I was just wondering what information we get from the residuals.
    We are sort of assuming a linear fit, rather than an exponential; is that really valid?
    I sure as hell don’t see any simple cyclical process, no sunspot cycle peaking out judging by MkI eyeball.
    The distribution of the noise from a linear detrend looks pretty Gaussian.
    Could we do it the other way around. Demand that the residuals fit a Gaussian, and from that identify the lineshape of the trend?

  17. Louise –

    You seem to be using different data sets to everybody else!

    At least two of the data sets have been showing 15 years of cooling this spring.

    Here is a graph of the RSS data showing a clear cooling trend for 15 years to the end of March 2012 [which is spring here in the UK]
    Of course, the 15 year cooling trend exists to any month this spring – precisely as I said it would.

    http://www.woodfortrees.org/plot/rss/from:1997.25/to:2012.25/plot/rss/from:1997.25/to:2012.25/trend

    I’ll leave you to plot the Hadcrut data for the same 15 years so you can see another beautiful 15 year cooling trend.

    Thanks for bookmarking my prediction! 🙂

  18. We are sort of assuming a linear fit, rather than an exponential; is that really valid?

    Uhmmm depends.

    Could we do it the other way around. Demand that the residuals fit a Gaussian, and from that identify the lineshape of the trend?

    Someone might be able to do it. It would be pretty computationally intensive and I’m not sure the problem has a unique solution. If someone wants to try doing that, I have no objection to their giving it a whirl. (I’m not sure what we would learn from the exercise, but maybe someone could explain what that might be.)

  19. Anteros,

    You were so close to winning some quatloos. With a bit of tweaking, your foolproof algorithm might end up being….. foolproof.

  20. Skeptical –

    It was remarkable that I did as well as I did because the reasoning holding my algorithm together wouldn’t support a paper bag.

    There’s always next month tho’…..

  21. toto –

    Gavin’s comment is pertinent – but as Lucia points out, it very much depends on the circumstance.

    Personally, given that I have no dog bigger than a day-old chihuahua in the warming/cooling business, I set even less store by short term trends than does Mr Schmidt. As such, my prediction of two sets of data showing 15 years of cooling doesn’t imply that I give such a thing any import, significance or meaning.

    I sometimes ponder the temperature record of the satellite era because it now covers a full third of a century which to me has a modicum of meaningfulness about it.

    Rather presciently I began my enquiry to Louise last November with the words

    Just out of interest, I wonder what you’ll say……

    And of course, now we have those two sets of data, I hope to have my question answered!

  22. Louise –

    Incidentally, I made a prediction some time ago [at tallbloke’s talkshop] that 2013 would be the hottest ‘ever’, according to the woodfortrees index.

    You are very welcome to bookmark the prediction and chase me foolishly round the internet in 18 months time.

    Given that this prediction lacks the certainty of the 15 year cooling one, who knows, I may very well be wrong 🙂

  23. Carrick,

    I read that when he posted it. Seemed OK, didn’t dig into it too deeply. Is there something particular about it that strikes you?

    I think it is neat how you can visually slice the Ch5 data into pieces and slide it around to make year-to-year comparisons. Spring 2011 and 2012 are interchangeable with just a couple of moves.

    Has anyone done a spectral analysis on Ch 5?

  24. Anteros–
    Your method beat mine. My method was: enter whatever came into my head when testing to see that the script was sorted out.

  25. “What part of ‘short term trends are not significant’ did you not understand?”

    And at what point does a “short term trend” become a “long term trend”? “When we say it does”? Or is there a real, objective criterion?

  26. BillC, it just has a good explanation of how and why you get this large quasi monthly variation in global mean temperature. Some really interesting comments on his site about how heat (real world) gets transferred from the surface to the upper atmosphere…. turns out rain is responsible for it. It also demonstrates how chaotic weather is on that short of an interval, which I thought was interesting.

    Has anyone done a spectral analysis on Ch 5?

    Remind me how to pull data from channel 5 and I’ll do it.

  27. Anteros, I would be interested to know why (in the simplest terms possible), why you think that 2013 will be a record year according to the woodfortrees index.

  28. Anteros, I would be interested to know why (in the simplest terms possible), why you think that 2013 will be a record year according to the woodfortrees index.

    .
    My own first guess: because it is likely to be an El Nino year after a protracted La Nina that still produced remarkably high temperatures.
    .
    My second guess: because of the smoldering ashes left behind by the 2012 Mayan apocalypse.
    .
    I’ll let you decide which is more scientific, being not quite sure myself 🙂

  29. Ray –

    toto’s first guess is my reasoning – almost in its entirety.

    Not only is an El Nino likely, [Er…. predicted by most of the people who study such things..] but the double dip La Nina produced noticeably higher temperatures than 2008.

    The woodfortrees index choice was merely to avoid cherry-picking and such like.

    Apart from it being fun to mention it for Louise’s benefit, I wouldn’t bet more than half a dozen quatloos on it happening – it just seems a very reasonable expectation.

    Have I missed something?

  30. OK so here it goes, first some comments from Roy:

    One of the most frequent questions I get pertains to the large amount of variability seen in the daily global-average temperature variations we make available on the Discover website. From Aqua AMSU ch. 5, these temperatures can undergo wide swings every few weeks, leading to e-mail questions like, Is the satellite broken?

    […]

    We have observed this behavior ever since John Christy and I started the satellite-based global temperature monitoring business over 20 years ago.

    These temperature swings are mostly the result of variations in rainfall activity. Precipitation systems, which are constantly occurring around the world, release the latent heat of condensation of water vapor which was absorbed during the process of evaporation from the Earth’s surface.

    While this process is continuously occurring, there are periods when such activity is somewhat more intense or widespread. These events, called Intra-Seasonal Oscillations (ISOs) are most evident over the tropical Pacific Ocean.

    You can see these events by looking at the time waveform:

    Full set (original and annual subtracted)

    Detail of annual-subtracted

    We would expect from Spencer’s description that the warming events are associated with increased precipitation. Note we see what looks like a series of “spikes” in temperature that are quasi-periodic in interval. These spikes are broad band, so the spectral characteristics related to them are not very interesting.

    Moreover, these events are not entrained with the annual cycle, so you wouldn’t expect to see spectral peaks if you looked at the spectrum from multi-year data, and you don’t:

    Spectrum

    What looks like noise in interannual periods is mostly related to these spikes. Sorry it wasn’t more exciting!
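
    If anyone wants to reproduce the flavor of that spectrum, here is a toy R version (the daily series is simulated, not the actual AMSU ch. 5 download):

        # Sketch: smoothed periodogram of a daily series with an annual cycle plus
        # red "weather" noise (a stand-in for the ch. 5 data, not the real thing).
        set.seed(4)
        ndays <- 365 * 8
        tyr   <- (1:ndays) / 365.25
        x     <- 0.3 * sin(2 * pi * tyr) +
                 arima.sim(list(ar = 0.9), n = ndays, sd = 0.05)
        x     <- ts(x, frequency = 365.25)      # so frequencies come out in cycles/yr

        sp <- spectrum(x, spans = c(7, 7), plot = FALSE)
        plot(sp$freq, sp$spec, type = "l", log = "xy",
             xlab = "frequency (cycles/yr)", ylab = "power")
        abline(v = 1:4, lty = 2)                # annual harmonics for reference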

  31. Anteros:

    Have I missed something?

    IMO, sort of.

    As Tisdale says La Niña Is Not The Opposite Of El Niño

    While you get warming during El Niños, you don’t always get net cooling during La Niñas (watch his videos; La Niñas correspond to strong westerlies… and they act to “stir the pot”; there’s a lot more going on than just a cooling of SST in the central tropical Pacific).

  32. Carrick, I believe you are more expert than I at imaging as well as climate. I’m less convinced of the inauthenticity of the unmentionable document than I was a day or so ago.
    =============================

  33. Carrick –

    Interesting.

    I’ve not heard of that particular nuance wrt La Nina temps.

    Colour me very slightly better informed 🙂

    I’ll reduce my putative bet by a couple of quatloos..

  34. Anteros, he has a good explanation of ENSO and associated phenomena in two videos that he links to. I think his reasoning gets quite circular when he starts saying (roughly) that there isn’t any AGW because ENSO has an upwards trend in it, though. (Um… Bob…)

    kim: interesting. 😉

  35. Paging Louise!!!!!!!!!

    Paging Anteros. Please show me the datasets that show global cooling and you can then give up wondering what I’ll say.

    Even Rumplestiltskin made good on his promise – how about you?

    I don’t think silence counts as saying something 🙂

  36. Carrick,

    Thanks for the analysis though.

    Thinking about it a bit more, is it interesting that the spectral amplitude really starts to fall off after 1/month? You see the monthly peaks in your annual subtracted detail plot (I guess you just zoomed in on a particular year?). I wonder if the other years look similar, I would guess they do.

    Must be the moon wot dunnit.

  37. And even more astrology- the reason I was interested in the spectral analysis was that, having seen these monthly-ish peaks in data I detrended myself, I thought it might have implications for betting on anomalies when the cutoff dates for betting are usually mid-month. 🙂

  38. Billc:

    You see the monthly peaks in your annual subtracted detail plot (I guess you just zoomed in on a particular year?). I wonder if the other years look similar, I would guess they do

    Yes I picked just one year, but other years look similar. The spacing between the peaks is highly variable, and doesn’t really correspond to a 1/month frequency. If you look at the spectrum for a single year, you do see spectral peaks. But they move all over the place:

    spectrogram

    If you had a way of kicking out that “peaky” noise, you’d probably see more harmonics. Just looking at the averaged annual variability from 2003 through 2011 shows something like this:

    figure.

    I think the conclusion is most of the harmonics to the annual component come from asymmetry in the Earth’s ocean/land masses and in the ellipticity of the orbit itself, rather than from the short-period weather (which as I mention is not entrained by annual forcing).

    As it is, in the “full signal” most of the higher harmonics are getting drowned out by the “Intra-Seasonal Oscillations” (e.g. see this)

    I admit I wouldn’t have predicted that.

    It shows how much more sensitive the higher atmosphere is to these sorts of oscillations than is the surface record, where these “bursty” periods of weather aren’t nearly as apparent.

  39. Carrick,

    You’re much more learned in these matters than I am, but a couple questions/comments:

    “If you had a way of kicking out that “peaky” noise, you’d probably see more harmonics.” But why would I want to do that? Well, if I was interested in the math/physics of the harmonics, OK. But being more interested in the general question “what causes the temperatures to vary?” I think it is remarkable how regular the peaks are. Stochastic, sure – but able to be characterized statistically with some confidence? It seems that there is a characteristic frequency and amplitude of these overturnings or whatever.

    As far as the busty (Everywhere I look, something reminds me of her!) periods of weather in the surface record, wouldn’t we have to extend the X axis on the GISS graph out to 12 and beyond to see if we see the same drop as in the spectrum of Ch 5? In fact, if you extended the GISS graph and changed the y axis to a log scale, it might even look similar.

  40. “In fact, if you extended the GISS graph and changed the y axis to a log scale, it might even look similar.”

    Except for the 0.5-year peak?! I thought this was necessary to explain the deviation of the annual cycle from a sinusoid (due to thermal lag). Does this not happen at the surface – are the solstices nearly the hottest times in each hemisphere (is that surface record the land-only record)?

  41. Billc, lol the term is “bursty” not “busty.”

    GISTEMP only updates once a month. That means the maximum frequency you can resolve is 6 yr^-1.

    I’ve thought about doing my own series based on daily reports just so I could get better resolution than this… but so far just a pipe dream, I don’t have the time that it would take to sit down and get it right.

    What’s useful from the harmonics is that it gives you some idea of the relative importance of the interseasonal oscillations (ISOs) compared to global asymmetries. The fact that you can resolve more than two peaks in the full time series with GISTEMP is an indication that it is less influenced by ISOs than is UAH.

    I think Spencer’s article is a great place to learn a lot about short-period fluctuations in weather/climate. If you go through that article, there’s lots of great comments in it too.

    Here’s what this year looks like (the grid lines demarcate months.)

    Looks like we might be expecting a temperature drop. 😉

  42. Billc:

    Except for the 0.5-year peak?! I thought this was necessary to explain the deviation of the annual cycle from a sinusoid (due to thermal lag). Does this not happen at the surface – are the solstices nearly the hottest times in each hemisphere (is that surface record the land-only record)?

    No, thermal lag (actually thermal “inertia”) doesn’t do anything except create a lag between when the forcing occurred and when the response is observed, and also filter out some of the high frequency components of the asymmetry in the forcings and climate responses.

    The point I was trying to make is that the relative heights of the annual peak (and its harmonics such as the 2 yr-1 peak) and the “noise floor” gives you some indication of the relative importance of short-period climate fluctuations compared to the annual forcings.

    The fact that the noise floor is higher in UAH than GISTEMP tells us that UAH is more sensitive to these ISOs than is the surface of the planet…

    And this makes sense, because these peaks in temperature are the result of precipitation events. Precipitation results in evaporation on the surface (removal of heat energy) and condensation at higher altitudes (release of heat energy). Since channel 5 is 14,000 feet above the surface, you’d expect to see these precipitation events as sudden increases in temperature.

  43. It’s late, but I wonder how these precipitation events influence the TOA fluxes.

  44. It’s my impression it’s these ISO-like events that Dessler 2007 was looking at to establish the sign of the cloud feedback among other things.

    So yeah, I think they have an important influence.

  45. They are so prominent it would seem obvious to do so. Hope to look into it more soon.

  46. Carrick,

    At a quick glance I don’t see which Dessler 2007 you mean (there are about 4 publications in 2007 with him as primary author). I’m probably missing some clue in the title.

    It does appear that you can get data from NASA that would give the TOA fluxes (CERES ERBE-like datasets) for just the Aqua satellite, from which one could compute relationships to the brightness temperatures. Looking around this morning, the server was down.

    I am somewhat familiar with the Dessler/Spencer discussions. It’s all couched in terms of forcings and feedbacks, which I wasn’t really thinking about. I was just wondering about the correlated variability in the OLR and reflected SW with the tropospheric temperatures (assuming the ISOs are well-represented by the presence of clouds). Probably looking at movies of the sat temps on a map would help me conceptually. Bob Tisdale showed me a NOAA website where I could do SST’s; maybe I can find the maps there.

    You mentioned about GISS only doing monthly updates. I think there is a GHCN daily product?

  47. There’s a GHCN daily, but those are the individual stations not a reconstructed global temperature product. AFAIK, all of the global temperature products first take a monthly average.

    I’m thinking of Dessler 2010 sorry. It compares CERES radiative forcings to ECMWF reconstructed temperature. (ECMWF is not a free product, sorry.)

    There’s a review of it on this blog by troy_ca that I see you’ve found, and a second treatment by Steve McIntyre (see the comments for some pretty wild exchanges). McIntyre has a compilation of the two datasets in one file here.

    Finally Roy Spencer has a reanalysis that you can find a summary of here

    Curiously none of them (that I’ve seen) seem to look at finer granularity than a month…even though it’s available using UAH/AQUA and CERES. From what we can see in the AQUA data, it’s really needed or you’re throwing away valuable information.

    Anyway, my comment was that I believe they “rely on ISO” because this generates most of the short-period fluctuations in TOA.

  48. There’s a GHCN daily, but those are the individual stations not a reconstructed global temperature product. AFAIK, all of the global temperature products first take a monthly average.

    Well, then it’s a job for the Mosher_Hausfather_BEST team! Automated daily updates, guys – we want a website like the UAH Discover site!

    I skimmed the CA thread that you’re referring to in real time, back when I knew substantially less about this stuff. It would be worth a re-read. I recall Nick Stokes getting a workout. I’ll grab that csv file now just so I have it.

    Curiously none of them (that I’ve seen) seem to look at finer granularity than a month…even though it’s available using UAH/AQUA and CERES. From what we can see in the AQUA data, it’s really needed or you’re throwing away valuable information.

    I might post a note on Troy’s blog about our exchange here.

  49. * All of the global products except UAH, which has a daily update. This has the advantage that you can look at TOA as well as surface temperatures too.

    Dessler emphasizes the need for ECMWF or other reconstructed field methods because you don’t get the poles except by extrapolation (people have pointed out that really it’s interpolation because you’re computing the temperature in an interior point, but IMO that’s a distinction that doesn’t make a difference—the physics at the North Pole in particular is unique and you’re still extending the series into an unknown).

    Of course there’s the paper by Masters that is currently in review that is critical of Dessler. Masters doesn’t find much difference between ECMWF, NOAA (NCDC) and GISTEMP (so of course Dessler wants him to delete the comparison!).

    IMO, because the science is enough different in the tropics than more polewards, it’s a mistake to combine everything into global numbers. At the minimum I’d probably try and use latitude as an explanatory variable, and use an analysis that employs both daily surface and TOA temperatures.

    I rather like the gist of what Bart was doing on McIntyre’s thread, looking at transfer functions to compute the impulse response function. Unfortunately he appears to suffer from some form of Asperger’s Syndrome or Tourette’s Syndrome and I find him difficult to communicate with. He didn’t understand (initially at least) the difference between physical delays and signal delay computed using Fourier transforms, or realize that either sign of the delay between CERES and temperature is allowed (basically you don’t know which is the forcing and which is the response; especially in the global signal you could have it going both ways, with some processes where clouds are responding to temperature changes and vice versa).

  50. Billc:

    I recall Nick Stokes getting a workout

    Nick, RomanM, and later me. I got dragged into it by my back left cat paw by willard. I had been trying to avoid the discussion. I’ve got a bit of discussion of methods for computing impulse response function here.

    There’s a few errors in that discussion as noted in the text that follows it. Bart had a good idea but was wrong on multiple details in the particulars, eventually admitted to it, but his admission was a bit like “let’s call it a draw”.

    Recasting his point to my language, you can’t use time domain to compute the feedback via a linear fit, unless the phase response is linear. What he gets wrong is mixing physical delay (e.g., what you would measure with a stop-watch) with signal delay, which has all sorts of traps built into it, as I discuss to his chagrin.
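
    For the curious, a toy R illustration of the frequency-domain idea (not Bart’s code, and not real CERES/temperature data; just two simulated series where one is a filtered copy of the other):

        # Sketch: empirical transfer function H(f) = Sxy/Sxx and the implied
        # impulse response, recovered from simulated input/output series.
        set.seed(5)
        n <- 2048
        x <- rnorm(n)                                  # "input" series
        y <- stats::filter(x, rep(0.2, 5), sides = 1)  # output responds over 5 lags
        y[is.na(y)] <- 0                               # pad the start-up values

        X <- fft(x); Y <- fft(y)
        H <- (Y * Conj(X)) / (X * Conj(X))             # cross-spectrum over auto-spectrum
        h <- Re(fft(H, inverse = TRUE)) / n            # impulse response estimate

        round(h[1:8], 2)                               # first five taps come back ~0.2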

  51. Christ.

    Well, there is a daily project in the works.

    amidst all the WUWT nonsense about adjustments and raw data, it kinda dawned on me that all monthly data comes from daily data.
    GHCN daily is a ginormous repository that is growing as folks release their daily data. Zeke Nick and I used it in our UHI poster. It’s not for the faint of heart. Last I looked there were 80K stations. Coverage is pretty good, but sparse in SA.

    WRT the daily project. Zeke and I are helping out on a “related” daily project. There is no fundamental reason why the BE approach cannot be used on a daily basis. We will test that in the near future. Hmm. right now I’m fiddling with creating smallish regional datasets for some EDA on the daily project. At some point I’m gunna switch to matlab. Is steig still offering courses?

    Maybe I can convince Nick to do a daily version of his LS approach.
    however, memory is already an issue.

    other thought would be to use daily LST. It’s an unbiased estimator of air temp. Reading a geostats paper on that now. hurting my brain worse than MD 20/20

  52. Yikes. I hadn’t looked at the Masters paper & discussion before, though I saw the links to it on a previous thread here at the BB.

    Looking through that, and looking around the web at discussions of Madden-Julian Oscillation etc., I agree with what you said more or less:

    IMO, because the science is enough different in the tropics than more polewards, it’s a mistake to combine everything into global numbers. At the minimum I’d probably try and use latitude as an explanatory variable, and use an analysis that employs both daily surface and TOA temperatures.

    Quoting Nick Stokes quoting Dessler on the CA thread:

    “Surely the finding here is that there *is* no result.”

    Yes, pretty much. Here’s what Dessler says:
    “Obviously, the correlation between ΔR_cloud and ΔT_s is weak (r2 = 2%), meaning that factors other than T_s are important in regulating ΔR_cloud. An example is the Madden-Julian Oscillation (7), which has a strong impact on ΔR_cloud but no effect on ΔT_s. This does not mean that ΔTs exerts no control on ΔR_cloud, but rather that the influence is hard to quantify because of the influence of other factors. As a result, it may require several more decades of data to significantly reduce the uncertainty in the inferred relationship.”

    At a higher level of temporal and spatial detail, I’m not sure the statement

    …factors other than T_s are important in regulating ΔR_cloud. An example is the Madden-Julian Oscillation (7), which has a strong impact on ΔR_cloud but no effect on ΔT_s.

    holds. It seems T_s is important in regulating the MJO on a short time scale. Whether any of this has any bearing on longer term trends, I can’t even imagine.

  53. Mosher:

    I had lunch with a statistician the other day who knows Brillinger. We talked about Matlab and R. He uses both but prefers Matlab mostly for the custom programming options/calculation optimization, which doesn’t invalidate anything that’s been said about translating stuff into R for documentation and free access. Specialist in time series.

  54. Billc #97345,
    Well, Dessler’s 2010 “non-result” morphed into an absolute confirmation that clouds have a strong positive feedback in a warming world, or perhaps I should say, in Andy Dessler’s warming world. See Andy’s presentation here: http://www.wri.org/communicating-climate-science#0 No mention of uncertainty, just a claim of certain confirmation. Surely this is not the way to accurately communicate the results of Dessler 2010. After seeing Dessler’s video, it becomes easier to understand why he was… ahem.. a little negative in his review of the Masters paper.

  55. SteveF,

    Reading that exchange on the review of Masters, a couple of things I noted on a non-technical level:
    1) If Dessler was not required to name himself (the other two reviewers are anonymous) then it is to his credit that he did anyway.
    2) I am surprised that journal policy permits reviewers to make comments on each other’s reviews, as Dessler did.

  56. Billc:

    At a higher level of temporal and spatial detail, I’m not sure the statement

    An example is the Madden-Julian Oscillation (7), which has a strong impact on ΔR_cloud but no effect on ΔT_s.

    holds. It seems T_s is important in regulating the MJO on a short time scale. Whether any of this has any bearing on longer term trends, I can’t even imagine.

    I agree with you that the statement is probably wrong. CERES is daily, and Nick and others are looking at a low-passed version of T_s to conclude that MJO has no effect on T_s. It may not, but you can’t use absence of data as proof of absence of effect. Further, IMO, you really want to use TLS (that’s where the forcings are usually defined, as has been discussed here) to model cloud forcing/feedbacks, not T_s, and there is no question from the daily AMSU reading that there is a huge effect on TLS from MJO. I suspect if you had daily T_s, you’d see an effect, maybe not as amplified as TLS, but it’d be there still.

    As to trend lengths, MJO tells you something about short-period climate, you’d still want to use ENSO to look at 2-5 year periods, and you need more data to measure long-period responses.

    That links back to my comment about Bart being onto something by looking in the frequency domain rather than the time domain.

  57. Billc,
    “I am surprised that journal policy permits reviewers to make comments on each other’s reviews, as Dessler did.”
    I am surprised by this as well, if only as a matter of editorial policy. But based on, shall we say, the ‘intensity’ of Desslers review of Masters, I am not surprised he would take issue with what the other reviewers said, since they seemed largely supportive.

    I am surprised that the final (revised) Masters paper has not been posted.

  58. Steven Mosher, we can thank BillC for raising these questions. I had the daily GMT on my mind for a while now; it’s good to have such an explicit application of where it would be useful.

    My own estimate is the minimum averaging period you’d want to do is weekly (or, if you prefer, 52 samples per year instead of 12 samples per year). That throws away some information, but may be enough for diagnostic purposes.

    I’d still probably write the code in C++ were I ever to get around to it, simply for speed and because it could allow me to use larger data sizes without having to worry about memory (there are a lot of tricks you can use with C++ in partitioning data so you can fit very large data models onto a computer to run, without physically needing all of that memory). At the moment, I still prefer the NCDC approach of EOFs + orthogonal interpolation functions. I think that is preferred to Kriging, only because BEST makes too many assumptions about the correlation function, and anyway if they did it exactly right, it would just be a wavenumber transformed version of what NCDC is already doing.

  59. You (and by extension, your website) are awesome (modern meaning).

    Since Santer has nailed his flag to the mast on 17 years, would it be too much to ask that you comment (each month) on this? I’m particularly interested in significance. I realize that your scripts run from January – not a problem – 17 years; 17 years + 1 month; 17 years + 2 months etc. Reset, of course, each January. You have a facility with auto-regression that my undergrad statistics (and, later undergrad econometrics) lacks.

  60. Billc:

    If Dessler was not required to name himself (the other two reviewers are anonymous) then it is to his credit that he did anyway

    Yes it is to his credit, and he should name himself, just as Steig should have identified himself in the O’Donnell reviews.

    SteveF:

    Well, Dessler’s 2010 “non-result” morphed into an absolute confirmation that clouds have a strong positive feedback in a warming world, or perhaps I should say, in Andy Dessler’s warming world.

    Oh you didn’t get the memo? It’s perfectly all right to exaggerate the facts as long as they fan CAGW hysteria.

    You’re not allowed to even make factual statements on the other hand, if they can be “misused” by skeptics to counter CAGW hysteria.

  61. Carrick + Steve^2,

    I made this comment above, it went to moderation, I am modifying it slightly: I spoke with a statistician the other day who among other things knows David Brillinger of the BEST team. We talked about Matlab and R. He uses both but prefers Matlab mostly for the custom programming options/calculation optimization, which doesn’t invalidate anything that’s been said about translating stuff into R for ease of use and documentation. He is a specialist in time series.

    Carrick, I don’t know if you use Matlab, but I feel like it has some neat tricks for large datasets. Also, TLS or TLT?

  62. Carrick,
    “exaggerate the facts as long as they fan CAGW hysteria”
    Speaking of which, Stefan Rahmstorf’s group (well three of them anyway) have placed a new paper on the same on-line journal site as the Masters paper. It is about future rate of sea level rise, and (surprise!) no matter how they look at the data, and no matter what warming scenario they consider, the rise between now and 2095 has to be at an average rate of 7 to 12 mm per year. They (of course) suggest the AR4 (and all other lower) estimates are wrong. I could not even bring myself to place a comment. Time will embarrass them on this… if they live long enough.

  63. SteveF, when you were doing the sea level rise you came up with pretty reasonable numbers if I remember correctly. I was looking at the steric rise versus satellites trying to get an estimate of what the true “average” ocean heat content should be. The satellite data fits pretty well, but trying to extend back is a bear. So I started playing with this.

    http://i122.photobucket.com/albums/o252/captdallas2/climate%20stuff/gissminusHADsstasenergy1880.png

    I converted the average ocean temperature to Wm-2 using the Aqua 294.2K average plus the HADSST2 anomaly (I like Watts), and the GISS LOTI using 289K, then took the difference to get a feel for the change in the atmospheric response to ocean temperature.

    It actually shows the difference between the 1910 to 1940 rise which some folks miss. Still not sure about the 1940s dip, but it could be legit.

    Then I got embroiled in a little ocean energy imbalance debate on JC. I still think you can’t tell squat until you know what “average” should be, but I took that one step further to estimate the ocean imbalance http://i122.photobucket.com/albums/o252/captdallas2/climate%20stuff/oceanenergyimbalance.png

    It looks like it is in the ballpark.

    If you noticed a change in the rate of steric rise though, I might be able to tweak it for a better estimate.
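
    A minimal sketch of the temperature-to-flux conversion described above (placeholder anomaly numbers, not the real HADSST2 or GISS LOTI series):

        # Sketch: re-express temperature series as blackbody flux and take the difference.
        sigma     <- 5.670374e-8                   # Stefan-Boltzmann constant, W m^-2 K^-4
        sst.anom  <- c(-0.2, 0.0, 0.3)             # placeholder HADSST2 anomalies (K)
        loti.anom <- c(-0.3, 0.1, 0.4)             # placeholder GISS LOTI anomalies (K)

        flux.sst  <- sigma * (294.2 + sst.anom)^4  # ocean, about the Aqua 294.2 K baseline
        flux.loti <- sigma * (289.0 + loti.anom)^4 # surface air, about a 289 K baseline
        flux.loti - flux.sst                       # the difference plotted above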

  64. Anteros–
    I read that on twitter. I tweeted back to ask who the “outside counsel” was. That’s unnamed in the article I read.

  65. Billc, yeah I use Matlab proficiently. I use it for algorithm development mostly, and small code that doesn’t have to run fast, then reimplement in C or C++ (depends on who the audience for the code is) if I want it fast.

    If you use the object-oriented aspects of C++ (or even just struct in C) to encapsulate data, you can use virtual memory sizes that are substantially larger than the available physical memory without thrashing the computer.

    Don’t want to get into a language war here, I like Matlab, it’s got places where it shines, but like any software tool there is a design space that it is optimal and optimized for.

  66. Carrick & Steve,

    The first pass at the daily update could be a simple vector multiplication of the station temps * the weighting factor for each station? How much would the weighting factors change day to day? Now, sometimes a station has a bad day or doesn’t report, so then it gets into recalculating the whole thing.
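
    Something like this in R (toy numbers, not GHCN data):

        # Sketch of the "first pass": fixed weights times daily station temps.
        temps   <- c(12.3, 25.1, -4.0, 18.7)     # hypothetical daily station temps (C)
        weights <- c(0.30, 0.25, 0.20, 0.25)     # hypothetical area weights, sum to 1
        sum(weights * temps)                     # the simple vector multiplication

        # If a station doesn't report, renormalize over the stations that did:
        temps[3] <- NA
        ok <- !is.na(temps)
        sum(weights[ok] * temps[ok]) / sum(weights[ok])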

  67. BillC thanks, that’s what I was thinking though not as well expressed. One of the points I was trying to make was the impact on recovery, which depends on what is selected as average OHC. As the oceans regain energy lost in the LIA there would be a persistent ocean imbalance until the OHC approached average. So the rate of change of the imbalance is more important than the magnitude of the imbalance in separating out Anthro forcings/feedbacks.

  68. Re: dallas (Comment #97384)

    BillC, yep. that is why I think a third ocean layer should be considered, the salt water equivalent of the 4C boundary.

    I hope this is not the 4°C density maximum thing again.

  69. Oliver,
    Glad you asked about the “4C boundary” thing, so that I wouldn’t have to.
    Billc, dallas,
    There is no perfect way to model the ocean heat uptake over time. A shallow slab ocean is clearly deficient. A two layer is better, and a three layer better yet. The best ‘simple’ model is a many layer diffusional model, which can duplicate the Levitus measured data pretty closely. Beyond that you have to work with complex models that consider both diffusion and circulation… like those used in GCMs. Problem is, these very complex models are (for the most part) far away from the measured heat data (and the measured temperature profiles!)…. that is, they are not terribly helpful/informative (aka wrong) if your goal is to understand how OHC has changed and will change over time. My guess is that the best bet for a model that is accurate over the short to medium term (<100 years) will be a diffusional model. A century may not seem like ‘short to medium term’, but considering the average thermohaline turnover time for the ocean (on the order of 1,000 years), a century is not so long.
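
    To make the ‘many layer diffusional model’ concrete, here is a bare-bones R sketch (assumed diffusivity and a toy surface forcing; an illustration of the idea, not a fit to Levitus):

        # Sketch: explicit finite-difference vertical diffusion of an ocean anomaly.
        nz    <- 50              # layers covering 0-2000 m
        dz    <- 40              # layer thickness, m
        kappa <- 1e-4            # assumed effective vertical diffusivity, m^2/s
        dt    <- 86400 * 5       # 5-day time step, s
        Tanom <- rep(0, nz)      # temperature anomaly profile, K

        stopifnot(kappa * dt / dz^2 < 0.5)        # explicit-scheme stability check

        nstep <- 73 * 50                          # roughly 50 years
        for (s in 1:nstep) {
          Tanom[1] <- 0.5 * s / nstep             # toy surface warming, ramps to 0.5 K
          flux <- kappa * diff(Tanom) / dz        # diffusive exchange between layers
          Tanom[2:(nz - 1)] <- Tanom[2:(nz - 1)] + dt / dz * diff(flux)
        }
        plot(Tanom, -(1:nz) * dz, type = "l",
             xlab = "anomaly (K)", ylab = "depth (m)")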

  70. BillC, Now, sometimes a station has a bad day or doesn’t report, so then it gets into recalculating the whole thing.

    Some of it is very frightening when you first look at it. Especially very northern sites that only report in the summer.

  71. I think Dallas was saying “the saltwater equivalent of the 4C boundary” to get around that stumbling block. I remember looking at it before and looking up the actual value for typical ocean salinities, which is somewhere just below 0C. I don’t know if modeling the volume below the maximum density as a “layer” would help or not for sea level rise; it wouldn’t matter for ocean heat content.

    Back to Levitus et al, if there is a prehistoric (pre-1950s) trend in ocean warming from centuries of its equilibrating toward a colder surface temperature (LIA) which suddenly disappeared in the late 19th century, and it is still “catching up” in part to the warmer surface temps of the early 20th century, then I think Dallas is right to say that trend would have to be subtracted from the total. Fortunately for AGW theory, I would say that the multidecadal trend now appears to be higher than that 50 years ago, whereas you would expect the residual warming to be a gradually flattening trend over time.

    Unfortunately for you and me, the “rate of change of the ocean warming trend” is pretty uncertain when the trend itself has big error bars.

  72. Mosher – May I suggest that if you use daily data to create an “annual” average, do it using the climatological year, e.g., winter solstice to winter solstice. Jan1-Dec31 has no climatological significance; nor do months for that matter. While I can’t see this as making any difference in the overall trend, it could help discern ongoing cyclical or pseudo-cyclical behaviour.

  73. Oliver, you ever had a B52 shooter? Fluids do stratify at density boundaries. If you pour a fluid slow enough, they don’t mix and you can maintain the stratification. So the saltier salt water that results from sea ice formation slowly sinks in a laminar flow along the stratification, which would also be a temperature stratification. I can sees them on my bathometer matey, it makes a difference in yer catch it does. If there is turbulence, the fluids would mix, which doesn’t hurt the taste of the B52 at all, but does kinda screw up the presentation. Fishin’ wise, the upwelling makes fer nice catchin’ and rough riding.

    So believe it or not, the rate of down welling in the polar oceans is pretty important, just like the rate of pour of the vodka is for a B52 mixing. Kinda like the heart of the climate.

    http://upload.wikimedia.org/wikipedia/commons/thumb/6/63/Pack_ice_slow.gif/220px-Pack_ice_slow.gif

    So yes, it’s that 4C thing agin 🙂

  74. SteveF, I have been trying to figure out what would be the best way to set up a model. So far, three ocean layers, three atmospheric layers and the “Bucky” ball like 20 overlapping hexagons (the pentagons are a PITA). The hexagons would need to move and the layers rotate at relative velocities. Pretty complex, but doable I would think.

  75. Dallas, you’re going to get people reminding you that the density boundary for saltwater is tricky (depends on the different salts and pressures) and anyways I think for the typical ocean it’s actually less than 0C.

    Regardless of how to model OHC I think your comment about the change in rate of heat uptake is pertinent, since if there was a trend before 1950s due to the ocean bouncing back from having equilibrated to lots of years of LIA surface temps, that would be considered the baseline, not zero change. Though this effect would tail off with time, and although we haven’t seen recent drastic increases in rate of OHC uptake, the rate over the past few decades is higher than at mid-20th century, for what the data are worth.

  76. dallas (Comment #97391) 
June 7th, 2012 at 11:53 am

    Fluids do stratify at density boundaries… So the saltier salt water that results from sea ice formation slowly sinks in a laminar flow along the stratification, which would also be a temperature stratification… So believe it or not, the rate of down welling in the polar oceans is pretty important… Kinda like the heart of the climate…So yes, it’s that 4C thing agin

    Yes, but why does the ocean care about 4°C?

  77. Oliver,

    I tried to look this up and I feel like the answer is not easy to find, but I was thinking the minimum values for typical ocean salinity are south of 0C (but not far, maybe -1 or -2)?

  78. Or maybe it is easy:

    SEA WATER

    Salinity – – dissolved salt content
    Average S = 35 g/kg (p.p.t, o/oo)
    Range of S (99% of all sea water) = 30 -37 o/oo

    Dissolved salts change inter-molecular interactions and thus physical properties of sea water.

    Boiling point = 103°C

    Freezing point lowered
    Sea water begins to freeze at -2°C; salt is excluded from ice.
    Remaining water is saltier, freezes at lower T.

    Temperature of maximum density “disappears”
    Density increases progressively to freezing point
    Higher density promotes sinking of cold sea water

    Density
    … increases as S increases.
    … increases as T decreases.

    Importance to deep circulation in oceans:
    Deep-water “masses” form at surface
    … cooling (T decreases)
    … freezing & evaporation (S increases)
    Sink to a level (depth) governed by density, and spread out

  79. Re: SteveF (Jun 7 10:43),

    considering the average thermohaline turnover time for the ocean (on the order of 1,000 years), a century is not so long.

    This is a major problem for GCM spin up. They play even weirder games with density and viscosity in the ocean than they do in the atmosphere to get the ocean model to spin up in the same number of iterations it takes to spin up the atmosphere model. It’s little wonder that the GCM ocean models bear only a passing relation to reality.

  80. Oliver, 4C is for pure water; the actual temperature depends on the impurities. Since sea water freezes at just below 0 C on average, the ocean density boundary layer should be around 0 C.

    The interesting parts are that while the temperature range is pretty tight, 0 to -2 C, it doesn’t have to be exact and that cold water can hold/absorb more atmospheric gases. So CO2 dissolved in the sea water could sink deeply and be completely sequestered, or less deeply and be recycled at some future time. With less impurities, the freezing point would be higher so it would likely sink less deeply.

    So I would think that conditions at the poles could vary enough to make what would be a cycle of around 60 years more pseudocyclic.

    There is more, but it is pretty outrageous 🙂

  81. BobN

    On my wish list to the people who create and maintain time series
    packages. Since that work is driven by the financial community I see little hope. I’m gunna work on a hack. otherwise I would have to rewrite tons of tested code that uses jan1-dec31

  82. Mosher – Not my field but there are many companies that have fiscal years that do not match up with the calendar year. I’m surprised packages don’t include flexible start and end dates.

  83. Owen (Comment #97393)
    “Somewhat off-topic, but still loosely related to this thread: NOAA has issued an El Nino watch, citing a 50% chance of an El Nino developing in the latter half of 2012.”
    So there is a 50% chance that that one won’t develop?
    That’s *really* helpful!

  84. Ray (Comment #97405)
    So there is a 50% chance that that one won’t develop?
    That’s *really* helpful!
    —————————————————————————–
    Glad you think so. Actually, the MEI (http://www.esrl.noaa.gov/psd/enso/mei/) is already showing an emerging El Nino as this last month (May) is well into the positive range (+0.7). A moderately sized El Nino of a year’s duration may well break all global atmospheric temperature records.

  85. Owen,

    El Ninos reduce the frequency/intensity of Atlantic Ocean hurricanes (IIRC)… that is a good thing for us Floridians. 🙂

  86. SteveF or somebody,

    How does one explain the sharp decrease in OHC from 1960-1970 in Levitus et al (graph on Roger Pielke’s site)? Particularly the lack of warming in the 700-2000 meter layer, followed by a sharp rise from 1970 to past 1975? Should we be skeptical about anything other than a fairly monotonic warming trend in this layer, given what we think we know about the evolution of surface temps and what that says about forcings?

  87. Billc, I worry more about the data coverage issue. It’s easy enough to check; I’m just slammed at work right now and don’t have time to study the question myself.

  88. Owen, my gut instincts say you’ll need sustained MEI > 2 to see records broken in all of the global series. (The last time that happened was of course 1998.)

  89. Carrick (Comment #97410)
    June 8th, 2012 at 9:20 am
    Owen, my gut instincts say you’ll need sustained MEI > 2 to see records broken in all of the global series. (The last time that happened was of course 1998.)
    ——————————————————–
    Well, perhaps. But note that the 2010 El Nino barely made it to MEI >1, and 2010 gave 1998 a strong challenge in some but not all of the series.

  90. Carrick, thanks for the response anyway. I suppose I can try to look into it as well but not today. I am quite aware of the variability in surface temperatures of the mid-20th century (bucket adjustments or not), but was surprised to see it reflected in the deep ocean data.

  91. Billc (Comment #97408),

    My best guess is that they have a wildly optimistic estimate of uncertainty, especially in the early part of the record. The 0-700 m heat content can plausibly vary up and down (at least some!), but substantial up and down variation in 700-2000 m seems a bit of a stretch.

  92. SteveF:

    My best guess is that they have a wildly optimistic estimate of uncertainty, especially in the early part of the record

    I’d say not just uncertainty. I think it includes bias effects associated with geographic undersampling.

  93. die baby ice!

    2012 is off to the races. crandles, hamilton and I are all at 4.29/4.3

    record falls.

    Interesting thing to watch will be the Arctic basin.

    open pole this year? will we see open water within 90-85N

  94. BillC

    How does one explain the sharp decrease in OHC from 1960-1970 in Levitus et al

    This is an expectation from modelling of 20th Century climate (see e.g. GISS-ER). In the models this is mostly due to the 1963 Agung eruption, but also other volcanic activity through the 1960s, slightly weaker solar activity and perhaps increased aerosol burden from anthropogenic emissions.

  95. Paul,

    Thanks. I do wonder though, the extent to which the magnitude of the decline and recovery might represent some kind of bias in the data acting to exaggerate the real signal (e.g. overrepresentation of the mixed layer and shallow layers in general).

  96. RSS anomaly figure for May now published.
    Global figure is down from 0.333C in April to 0.233C in May, mainly due to a fall in the S.H. anomaly from +0.122C to -0.102C, which was not replicated in the UAH anomaly figures.
