UAH December: 0.28C

UAH Trends

Roy Spencer posted the December 2009 UAH temperature anomaly for the lower troposphere: it was 0.280C, down from 0.497C in November.

As usual, I did not come close to winning. That honor went to Cassanders, who wagered 3 quatloos on 0.288C. The detailed list of winners and others who wagered is shown below. (Boris, did you really mean to bet 0C?)

For those wondering, the Dec. 2009 anomaly was the 25th coolest or 6th warmest December reading since UAH began operation. All December readings are outlined in triangles in the graph above.


[Bet results table: observed UAH anomaly 0.28C, December 2009]

45 thoughts on “UAH December: 0.28C”

  1. Tim–
    It’s the Southern Hemisphere that’s hot. We’re freezing up here near Chi-town. (The cat won’t even go outside. That’s amazing since the reason he is now ours is he doesn’t like to stay inside.)

  2. The Sunday print edition of the NY Times said that preliminary numbers for the US had 2009 as the 26th warmest year since 1895. I was surprised that 25 years could be warmer than last year, by their reckoning.

  3. Lucia, is there a test for a time series that tells you how short a period you can use to see a valid trend?

  4. Boris – at least you were closer with 0 than I was with 4.85! I meant to enter 0.485 – probably a typo but it seems kind of odd that 2 other people would have had typos as well.

  5. Bugs,
    I’d imagine you’d create 1000s of synthetic time series with a prescribed non-zero trend corrupted with red noise and see how many years of data are required to get a statistically significant result.
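    Chad’s recipe can be sketched directly. This is a minimal illustration, not lucia’s method: the trend, the AR(1) coefficient, and the noise level below are arbitrary illustrative choices, not values fitted to the UAH record, and the significance test is the naive one (no autocorrelation correction).

```python
import numpy as np

def frac_significant(years, trend=0.2, phi=0.6, sigma=0.1,
                     n_sims=500, seed=0):
    """Fraction of synthetic AR(1) red-noise series with a prescribed
    trend (C/decade) whose OLS slope tests significant at the 5% level.
    The test is naive: white-noise standard errors, no autocorrelation
    correction."""
    rng = np.random.default_rng(seed)
    n = years * 12                       # monthly data
    t = np.arange(n) / 120.0             # time in decades
    sxx = ((t - t.mean()) ** 2).sum()
    hits = 0
    for _ in range(n_sims):
        eps = rng.normal(0.0, sigma, n)
        noise = np.empty(n)              # AR(1) "red" noise
        noise[0] = eps[0]
        for i in range(1, n):
            noise[i] = phi * noise[i - 1] + eps[i]
        y = trend * t + noise
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        se = np.sqrt(resid.var(ddof=2) / sxx)
        if abs(slope) > 1.96 * se:
            hits += 1
    return hits / n_sims
```

    Running this for increasing record lengths shows the fraction of "significant" results climbing toward 1 as the record grows, which is one way to put a number on how many years are enough for a given assumed trend and noise model.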

  6. My answer to bugs is it depends on what bugs means by “valid”.

    If you mean a valid comparison to the model output, then the minimum period would be the minimum period for which the model can give a reliable output. (As I’ve noted before, GCMs don’t capture short-period fluctuations.)

  7. I didn’t realize we were bidding using adjusted data. So my raw data bid of 0.184 C + 0.100 C (global radiation factor + 1/4 leap year adjustment - new year’s day headache + C constant = 0.0999999) = 0.284 C, and I win.

  8. Carrick (Comment#29449):

    …it depends on what bugs means by “valid”. If you mean a valid comparison to the model output, then the minimum period would be the minimum period for which the model can give a reliable output. (As I’ve noted before, GCMs don’t capture short-period fluctuations.)

    Besides not being able to “capture” “short-period fluctuations”, the GCMs haven’t given a “reliable output” for long-term trends either. Like Kevin Trenberth, they haven’t a clue as to what’s happening or what will happen next.

  9. “Lucia, is there a test for a time series that tells you how short a period you can use to see a valid trend?”

    Bugs – IMO this is not the right question. You need to establish a causal model which can be verified through observation. I think “valid trend”, in the context you imply, is meaningless.

  10. Actually the big drop was in the Southern Hemisphere, down 0.33C on last month. Northern Hemisphere drop was only 0.21C, while the tropics stayed relatively constant (and high).

    This strikes me as odd as you folk up north have been the ones having the unseasonably cold weather, while down south here, not so much.

    Is the cold air over the continental masses balanced by hotter air than normal over the oceans?

  11. bugs–

    Lucia, is there a test for a time series that tells you how short a period you can use to see a valid trend?

    First, please define “valid trend”.

    All statistical methods give confidence intervals. You can always report the trend along with the confidence intervals. Someone could decree some particular maximum uncertainty as defining the border for something they call a “valid trend”– but it’s not a standard thing to do and so there is no standard that defines “valid trend”.

    Anyway, we don’t need to be able to determine the “valid trend” to test other trends. To use a hypothetical, if Carnac the great predicted the current trend was 100 C/decade, we could show that was wrong even though we don’t know the true trend very well at all. There is no reason we would need to wait until we knew the true trend to 0.1C/decades to decree that 100 C/decade is wrong.
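    The point above is simply that the test is whether the hypothesized trend falls outside the confidence interval around the estimated one. A minimal sketch of that, assuming white noise and using the naive OLS standard error (no autocorrelation correction); the numbers are made up for illustration, not a fit to the actual anomaly series:

```python
import numpy as np

def trend_with_ci(t, y, z=1.96):
    """OLS trend and +/- confidence half-width (naive: white-noise
    residuals assumed, no autocorrelation correction)."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
    return slope, z * se

def rejects(t, y, hypothesized, z=1.96):
    """True if the hypothesized trend lies outside the estimated CI."""
    slope, half = trend_with_ci(t, y, z)
    return abs(slope - hypothesized) > half

# One decade of made-up monthly anomalies: true trend 0.2 C/decade
rng = np.random.default_rng(1)
t = np.arange(120) / 120.0               # time in decades
y = 0.2 * t + rng.normal(0.0, 0.15, t.size)

# The CI on the estimated trend is wide, yet a Carnac-style claim of
# 100 C/decade lies far outside it and is rejected easily:
print(rejects(t, y, hypothesized=100.0))
```

    The wide half-width on a short record is exactly why the estimated trend itself is poorly known, while a wildly wrong hypothesized trend is still easy to rule out.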

  12. Chad–
    That’s a way. But bugs still has to define what he means by “valid trend”. Plus, he needs a definition that would permit him to recognize a zero trend as “valid” in the event that a zero trend applied to whatever thing he was studying. Otherwise, he has a definition that never lets him decree that the data say something is trendless.

  13. Anyway, we don’t need to be able to determine the “valid trend” to test other trends. To use a hypothetical, if Carnac the great predicted the current trend was 100 C/decade, we could show that was wrong even though we don’t know the true trend very well at all. There is no reason we would need to wait until we knew the true trend to 0.1C/decades to decree that 100 C/decade is wrong.
    Well, can we assume that there are (say) 5 predictions: .1 C/decade, .2 C/decade, .3 C/decade, .4 C/decade and .5 C/decade and determine when each of those is proven “wrong” (or “right”) by the data?
    –t

  14. Richard:

    Besides not being able to “capture” “short-period fluctuations”, the GCM’s havent given a “reliable output” for long term trends either.

    Well, as I see it, the GCMs were originally designed to study the effect of increasing CO2, that is, to compute the CO2 climate sensitivity under different assumptions. They probably aren’t terribly reliable for even that, because they leave out the impact on the biosphere and, of course, cloud physics.

    For them to do much more than that, you have to predict how the forcings will change over time (AGW CO2 & sulfates, volcanic forcings and solar, for starters). We know none of these, so forecasting climate in any detail is nearly pointless.

    GCMs should be viewed at best as diagnostic tools to help us make policy decisions about e.g. fossil fuel usage. They may be useful for predicting regional shifts in climate, but I think they lack the spatial resolution to do even that at the moment. (E.g., for the US, what ENSO is doing is critical to knowing our weather, but none of the climate models can capture ENSO dynamics, so they can’t tell us what to expect from changes in ENSO in response to future climate change.)

  15. Happy new year to everyone!
    Ah, finally a vindication of my “gas-station-localization-principle”
    🙂 During the previous model runs (AKA bets :-)) it did not perform very well, but I finally managed to tune it ……..

    I have visited “my own” (I’m a Norwegian) “Rimfrost” webpage, lately. http://www.rimfrost.no/

    I think there are a number of interesting features possibly well worth exploring further.

    One is that a substantial number of e.g. Danish stations not released by CRU, allegedly for contractual reasons (giggle), seem to be available.
    And while we are on the Danish stations: I found a very old one in Copenhagen, dating back to the 1700s. Very interesting data.

    Another possible treasure trove is the substantial number of Siberian stations in the area depicted as glowing red in many graphic representations.
    While I do not take the recent statement from the Russian think tank at face value, perhaps some data enabling corroboration or rejection of their statements could be found here?

    Cassanders
    In Cod we trust

  16. As a loyal lurker, I finally joined the fray and had beginner’s luck at 2nd. It does allow me to brag to my email list!

  17. Cassanders,
    What a fascinating site you have built.
    Thank you for the link. You have put together very clear and accessible data.

  18. @hunter
    let me rush in with a correction. I am not the owner of, or even a contributor to, the “rimfrost” web page; I work on a somewhat different turf.
    I think it is a small group at the University for Natural Sciences in Trondheim, that has put the page together.
    For the record, I think they are warmists but to my knowledge, they do simply collate and make the data available in a commendable way.
    My reference to “my own” referred to nationality, and was horribly poor wording. My bad.

    Cassanders
    In Cod we trust

  19. Time to consider ice figures:

    December (month end averages) NSIDC (sea ice extent)

    30 yrs ago
    1980 Southern Hemisphere = 11.1 million sq km
    1980 Northern Hemisphere = 13.7 million sq km
    Total = 24.8 million sq km

    Recorded Arctic min yr.
    2007 Southern Hemisphere = 12.7 million sq km
    2007 Northern Hemisphere = 12.4 million sq km
    Total = 25.1 million sq km

    Last yr.
    2008 Southern Hemisphere = 12.2 million sq km
    2008 Northern Hemisphere = 12.5 million sq km
    Total = 24.7 million sq km

    This yr.
    2009 Southern Hemisphere = 11.4 million sq km
    2009 Northern Hemisphere = 12.5 million sq km
    Total = 23.9 million sq km

    1979-2000 Southern Hemisphere Dec. mean = 11.1 million sq km
    1979-2000 Northern Hemisphere Dec. mean = 13.4 million sq km
    Total mean = 24.5 million sq km
    GK

  20. “Well, that blows my model pretty much out of the water.”
    I don’t think you’ve got the hang of Climate Science: you’re meant to say “we are investigating the divergence”.

  21. G. Karst,

    How about making me a pie with all those cherries you picked.

    December through June, extent relative standard deviation in the NH is at a minimum while SH RSD is at a maximum. Let’s look at the NH time series plots for March (max), September (min) and the full-year average for 1979-2009. Now tell me again that there isn’t a significant trend in NH ice extent. Even if you total NH plus SH, the trend of -0.033 Mm2/year is still statistically significant, with a p value of 7E-05 (uncorrected for autocorrelation).

    But the Arctic is going to be mostly frozen over in the winter on human time scales. It’s the value at the minimum that is attracting the most attention. That value in 1980 was 7.85 Mm2 compared to 5.36 Mm2 in 2009. The 30 year OLS trend for September is -0.079 Mm2/year or -1.5%/year at the 2009 value of 5.36 Mm2. That would project a loss of 2.37 Mm2 in another 30 years. Will the trend continue? Who knows? I certainly don’t. What I do know, though, is that there is no evidence whatsoever that the trend is changing now.
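    The p-values quoted above come from a standard t-test on the OLS slope. A sketch of that calculation, using a normal approximation for the tail probability; the sea-ice series below is a placeholder with roughly the shape of the September NH record, not the actual NSIDC data:

```python
import math
import numpy as np

def trend_pvalue(x, y):
    """OLS slope and approximate two-sided p-value for slope != 0
    (normal approximation to the t distribution, uncorrected for
    autocorrelation)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = math.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
    tstat = slope / se
    p = math.erfc(abs(tstat) / math.sqrt(2.0))   # two-sided tail prob
    return slope, p

# Placeholder: a -0.08 Mm^2/yr decline plus noise over 1979-2009
rng = np.random.default_rng(2)
years = np.arange(1979, 2010)
extent = 7.9 - 0.08 * (years - 1979) + rng.normal(0.0, 0.3, years.size)
slope, p = trend_pvalue(years, extent)
```

    With 31 points and a decline this steady, the naive p-value comes out tiny, which is the sense in which the trend is "significantly different from zero" before any autocorrelation correction.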

  22. Yes. All very nice, but this is the December avg sea ice extent, month end report. As you say, the months including the minimum and maximum are of particular importance. However, I release a report at each month end. You will have to wait for the March report. My flux capacitor has gone amiss.

    30 yrs ago just happens to be one climatic period. It is hardly cherry picking. This is the last report using 1980 as the reference. Welcome, 1981, for future reports. GK

  23. If you want just December then the trend is -0.0415 Mm2/year with an R^2 of 0.74. That’s still significantly different from zero.

  24. “That’s still significantly different from zero”

    And that would be important to the Dec. avg sea ice monthly report… how?? Its purpose is merely to refresh interested folks as to what the actual observations are/were. With all the available graphs and trends, people tend to lose track of the actual values and the extent of change. Many simply refuse to believe the values. This is a direct side effect of graphing anomalies etc. exclusively.

    Total global sea ice extent is another quantity missing from many perceptions. The data included in the report require no graph.

    Some do not find it useful, most DO. In any event… it is what it is! GK

  25. “The snow cover for the Northern Hemisphere in December has been 45,862,000 Km2, which has been 2,660,000 Km2 more than the 44 year mean and ranked 2nd for the 44 year record.”

    Richard, those are interesting numbers. Have you a source path for historical/current monthly averages/data?

    I sometimes also put together a Great Lakes ice report as an indication of the areas winter. GK

  26. lucia (Comment#29458) January 5th, 2010 at 7:21 am

    bugs–

    Lucia, is there a test for a time series that tells you how short a period you can use to see a valid trend?

    First, please define “valid trend”.

    All statistical methods give confidence intervals. You can always report the trend along with the confidence intervals. Someone could decree some particular maximum uncertainty as defining the border for something they call a “valid trend”– but it’s not a standard thing to do and so there is no standard that defines “valid trend”.

    An hour is too short, so is a day, a week, a month, a year. How long a period of time do we need with the data we have to be confident we have a trend?

  27. @DeWitt Payne
    Looking at the graphs at e.g. “Cryosphere Today”, I would tend to agree that the long-term trend (from 1950 onwards) indicates a melting tendency.

    However, the OHC and heat flux into the Barents Sea over the last couple of years is quite interesting, and I think the coming years give an excellent opportunity to disentangle the contributions from any anthropogenic forcings and natural variations.

    Considering the tendency toward autocorrelation (e.g. from ocean energy storage), I would think even a clearly discernible ice increase in the near future would make interesting hypothesis testing possible.

    Do you agree that a return to, say, 1980 levels of Arctic ice (within the coming two decades, say) would be a strong indicator that AGW has been overestimated?
    Ditto for 1950 levels?

    Cassanders
    In Cod we trust

  28. Cassanders@#29469:

    I think “Rimfrost” is from NITH, not NTNU. The difference in reputation between the two institutions is huge… I think NTNU would be offended by being mistaken for NITH:)

    Not that it really matters!

  29. Cassanders,

    I’ve been closely watching what’s been going on in the Kara and Barents Seas as well. If the Atlantic Meridional Overturning index follows the expected pattern and goes and stays negative, then I would expect Arctic ice to recover. A correlation of the AMO index to the NOAA Arctic ice anomaly has an R^2 of 0.32. A smoothed version of the AMO index peaked about 2005. However, it’s been positive since June, which probably explains why the annual average extent this year didn’t increase over 2008. I don’t see much correlation between Arctic ice and the PDO, even if you just look at the Bering Sea data.

    The positive AMO may also explain why global temperature hasn’t been increasing as fast as expected. I haven’t tried to run any numbers, but it seems logical that pumping more heat into the Arctic, where it can be more efficiently radiated to space, would result in lower heat accumulation in the rest of the world. But that’s all speculation at this point.
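    For reference, the R^2 quoted here is just the squared Pearson correlation between the two monthly series. A minimal sketch, with synthetic stand-ins for the AMO index and the ice anomaly (the 0.7 coupling coefficient is an arbitrary choice for illustration, not anything fitted to the real indices):

```python
import numpy as np

def r_squared(a, b):
    """Squared Pearson correlation between two series."""
    return np.corrcoef(a, b)[0, 1] ** 2

# Synthetic stand-ins: 30 years of monthly values sharing some variance
rng = np.random.default_rng(3)
index = rng.normal(0.0, 1.0, 360)
anomaly = 0.7 * index + rng.normal(0.0, 1.0, 360)
```

    An R^2 around 0.3, as quoted, means the index linearly "explains" roughly a third of the variance in the ice anomaly, with the rest unaccounted for.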

  30. Bugs,
    Any series of data over time will generate a trend. Whether it is “valid” really depends on what you are looking for. I am guessing your real question is: does the current cooling trend in this graph represent weather or climate?

    Answer: I don’t know. It is certainly a very different picture from the previous 20 years. Due to the continued quiet sun and the restart of the North Atlantic circulation, I suspect we have not seen the end of this downward trend. Have we reached the top of a climatic plateau and begun to shift towards long-term cooling? I don’t know. I don’t think anyone really does.

    Anyone wish to predict the next twenty years?

  31. IMHO, statistical analysis of this very short data set is a bit silly. A visual inspection shows a flat trend centered around 0.0 for the first 20 years, followed by a step change, possibly associated with the 1998 El Nino, to a flat trend with slightly less variation centered around 0.25.

  32. Howard–

    statistical analysis of this very short data set is a bit silly.

    Then you’ll probably prefer today’s post, which shows results of statistical analysis applied to trends computed since 1950.

    I don’t believe in step changes.

  33. Absolutely, 60-years is a better snapshot of climate. Your plot of model vs measured is another nice illustration of a divergence problem.

    Step changes are a common occurrence in geology, so I admit my bias in seeing (projecting?) this type of behavior in climate. Also, experience in practical fluid mechanics such as a differential wing stall-spin sure feels like a step change… maybe that is more of a *tipping point* 😉

  34. Howard–
    Yes. The drag crisis around spheres is a step change too. But in terms of climate, I don’t know why there should be a “step change” in temperatures. It would either have to persist as a step change for a long, long time, or we would need a very good explanation for “step change” for me to buy that as a meaningful description of anything.

    Anyway, it’s not as if the temperatures were constant at some level and then jumped to another constant level. It just doesn’t look like that.

  35. Exactly. Why why why. Perhaps it was heat in a pipeline being released after being “hidden” (sequestered?) in some temporary non-atmospheric negative feedback mechanism. How the hell should I know!

    The climate is a very complex, poorly understood wild animal. When the science is said to be settled, the kids who should be spitballing exciting and crazy ideas continue to please their well-funded, politically connected masters by cherry-picking noisy proxies and expanding confidence intervals to confirm consensus.

    You are also absolutely correct on your next point: temperatures are never constant at one level. What I am seeing in the satellite record are periods where the temps bebop about an eyeball mean, then step up to the next “level” and bebop (a bit more mildly in the current millennium) about a higher eyeball mean. The 1998 El Nino (during which I was learning how not to spin a Cherokee by actively inducing spins) produced record rain intensities in Central California, which may have been the “spigot”… or not.

Comments are closed.