Earlier this month when UAH announced their TLT anomaly, we saw 2011 temperature anomalies rear back into the gate. Given the magnitude of the drop, it’s not surprising that GISTemp also dropped, from 0.45C to 0.36C. Data since January 2001 are plotted below:
Features:
The trend since 2001 is 0.006C/decade, which is positive but below the nominal multi-model mean trend of 0.2C/decade. If we use “red noise” to model the residuals from a linear fit and test the hypothesis that the true trend is 0.2C/decade, we would reject a trend of 0.2C/decade as false because it falls outside the 2-σ confidence intervals. We reach the same conclusion if we use any ARIMA model up to (4,0,4) to model the residuals, or if we use an ARFIMA(1,d,1) model with the best estimate of d.
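For readers who want to try this sort of check themselves, here is a minimal Python sketch of a “red noise” trend test along the lines described above, using a lag-1 autocorrelation correction to the OLS standard error. The file name and column names are placeholders; this is a sketch of the idea, not the code behind my plots.

    # Sketch: fit an OLS trend to monthly anomalies since Jan 2001, inflate the
    # standard error for AR(1) "red noise" residuals (effective sample size),
    # and check whether 0.2 C/decade falls outside the ~2-sigma interval.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    anom = pd.read_csv("gistemp_monthly.csv", parse_dates=["date"])  # placeholder file
    anom = anom[anom["date"] >= "2001-01-01"]

    t = ((anom["date"] - anom["date"].iloc[0]).dt.days / 365.25).to_numpy()
    y = anom["anomaly"].to_numpy()

    ols = sm.OLS(y, sm.add_constant(t)).fit()
    slope = ols.params[1]                                  # C per year

    resid = ols.resid
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]          # lag-1 autocorrelation
    n = len(y)
    n_eff = n * (1.0 - r1) / (1.0 + r1)                    # effective sample size
    se = ols.bse[1] * np.sqrt((n - 2) / (n_eff - 2))       # inflated standard error

    trend = 10.0 * slope                                   # C per decade
    ci = 2.0 * 10.0 * se                                   # ~2-sigma half-width
    print(f"trend since Jan 2001: {trend:.3f} +/- {ci:.3f} C/decade")
    print("0.2 C/decade rejected" if abs(0.2 - trend) > ci else "0.2 C/decade not rejected")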
NOAA and HadCrut do not appear to have reported. I’ll compare to models when they do.
Proposed bet.
During the most recent El Nino, I watched to see if the annual average would ever manage to fall above the multi-model mean. Given the protracted La Nina, I’m waiting to see if the GISTemp trend since Jan 2001 manages to go negative. It won’t take much. (On the other hand, it will take a large unexpected dip for the trend since Jan 2000 to go negative.)
I like weird bets. If people are game, I’m thinking of setting up a bet on the minimum GISTemp trend since Jan 2001 we will see during 2012. Since this bet would ensure I post lots of plots of trends since Jan 2001, which drives the people who never, ever, ever want a graph showing a negative trend to be displayed, I think in fairness I should also set up a bet on the maximum GISTemp trend since Jan 2000 we will see. There will be a spread, and the min and max won’t happen at the same time.
Is anyone game for this?

I am interested. Also, do you get about .15C or .16C/decade as the maximum trend supported by the intervals and conditions you show? I am curious.
John– Based on the choice of the start date Jan 2001, those lie above and outside the 2-σ intervals one would estimate as being explicable by “weather noise”. (One can argue about whether we can use ARIMA–but if we do use it, this is what we get.)
But if we start in 2000, that’s different. Since the choice of start time is not random, there are issues with saying what’s out. I think 2001 is the best start time for testing forecasts because that’s when the SRES were frozen and many of the model runs have the 20th century ending in Dec 2000. (Some end in Dec 1999, some in Dec 2003.) But as you know, ‘some’ will fiercely defend starting in 2000 as the “right” choice. That said, no one decreed the ‘right’ choice for testing forecasts back in 2000…..
I am not one of those. I am one who recognizes 10 years of data is 10, and 30 is 30. But a decade is a decade. And the computed trend is the computed trend. I am just curious. To me it matters in that in AR4 Ch 10, they gave a limit on natural variability, and presently we appear to be exceeding the value they gave for 2030. To me, this tells me something about the noise and natural variability that the authors of AR4 expected.
Now that Hansen & Co. are using the adjusted GHCN data instead of the raw GHCN data they were using, I wonder if any conclusions at all can reliably be made.
Below is a graphic comparing GHCN v2 raw with v3 adjusted for Reykjavik after running it through Hansen’s mix-master. I don’t know if it is typical, but it sure raises questions in my mind when such a huge change happens.
http://i39.tinypic.com/magro9.gif
I used Clear Climate Code v2 step2 output and the current homogeneity file at GISS, which uses v3 adjusted, calculating the annual values from Jan-Dec and discarding all years with a missing month.
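For anyone curious, a rough Python sketch of that annual-averaging rule is below; the file and column names are my own placeholders, not the CCC/GISS layout.

    # Sketch: compute Jan-Dec annual means from monthly values, dropping any
    # year that is missing a month.
    import pandas as pd

    df = pd.read_csv("station_monthly.csv", parse_dates=["date"])  # placeholder file
    df["year"] = df["date"].dt.year

    def jan_dec_mean(g):
        # require all 12 months present and non-missing
        return g["anomaly"].mean() if g["anomaly"].notna().sum() == 12 else float("nan")

    annual = df.groupby("year").apply(jan_dec_mean).dropna()
    print(annual.head())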
Next month is going to be hard to explain.
http://policlimate.com/climate/ncep_cfsr_t2m_anom.png
Bill: “Next month is going to be hard to explain.”
They badly need for some bright young mind to come up with the next positive adjustment.
Lucia: “But if we start in 2000, that’s different.”
98 was an El Nino bump and 99 and 2000 were a La Nina dip. But if you use all three years their effect on the trend seems to cancel out. In other words, starting at 98, you get about the same trend using data that is not ENSO adjusted as you get using data that is ENSO adjusted.
It’s important to remember that, as atmospheric temperatures rise and fall due to the ENSO oscillation, the climate system continues to accumulate heat, as is evidenced by the unabated oceanic thermal uptake. The 0-2000 meters global ocean heat content anomaly (data from http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/basin_data.html) is plotted here: http://imageshack.us/photo/my-images/837/02000mohca.jpg/
There is little sign of global warming abating in the oceanic record of the past decade, especially when we consider that the ocean absorbs 90% of the energy imbalance.
Owen– My post mentions La Nina. I’ve never suggested global warming has abated.
Are you interested in betting if I set that up?
Lucia,
Actually, I was just trying to give Tilo Reber the positive adjustment he was requesting.
I would most certainly bet on the issue of global warming abating, but what metric would you use? OHC? GISS? Anecdotal evidence from an agreed upon locality (Chicago, perhaps)? Results of a Rasmussen poll on the subject (we could have ongoing payoffs on that one)? I’m game.
Owen–
In the post above I suggest two betting pools. One would be “the lowest value for ‘trend since Jan 2001’ we see based on data ending in any month between now and Dec. 2012.” The other would be “the highest value for ‘trend since Jan 2000’” over the same periods.
Both would be based on HadCrut. Alternatively, the “low” could be based on whichever of Hadley, NOAA and GISS is currently lowest and the high based on whichever is currently highest. But I lean toward just using HadCrut, which I think happens to currently have the highest decadal trends.
These would motivate showing the spread of trends that are slightly longer than 1 decade, but there’s enough noise to make betting mean something. If we use 100 year trends, we’ll all be betting on the last few decimal places!
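If it helps, here is a rough Python sketch of how the two pool quantities could be scored once the 2012 data are in; the file name is a placeholder, and I’m assuming simple OLS trends computed at month-end data points.

    # Sketch: for each end month in 2012, the OLS trend since Jan 2001 (take the
    # minimum) and since Jan 2000 (take the maximum).
    import numpy as np
    import pandas as pd

    s = pd.read_csv("gistemp_monthly.csv", parse_dates=["date"], index_col="date")["anomaly"]

    def trend_c_per_decade(series, start, end):
        seg = series.loc[start:end].dropna()
        t = np.arange(len(seg)) / 120.0            # time in decades (monthly data)
        return np.polyfit(t, seg.to_numpy(), 1)[0]

    ends = pd.date_range("2012-02-29", "2012-12-31", freq="M")
    low  = min(trend_c_per_decade(s, "2001-01", e) for e in ends)
    high = max(trend_c_per_decade(s, "2000-01", e) for e in ends)
    print(f"lowest  'since Jan 2001' trend seen: {low:.3f} C/decade")
    print(f"highest 'since Jan 2000' trend seen: {high:.3f} C/decade")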
Tilo Reber (Comment #89705) February 14th, 2012 at 8:18 pm
“They badly need for some bright young mind to come up with the next positive adjustment.”
.
Actually no. I was reading over at Real Climate, and Gavin is already making noises about how much revised aerosol offsets (from China, I assume) will soon explain why temperatures have not gone up as projected.
.
I find it mildly humorous to watch the contortions. At some point a smart guy like Gavin will see just how stupid he could look in 5 or 10 years if temperatures don’t start cooperating very soon. So if he has a lick of sense (and I suspect he does), he will start toning down the ‘air of certainty’ that has always hung heavily over Real Climate. I believe that has already started happening a bit, with many more comments about significant uncertainty in recent posts than in the past. Almost looks like some progress.
Owen (Comment #89708),
There is no doubt that the ocean is currently accumulating heat (confirmed by ARGO and estimates of the steric contribution to sea level rise after taking ocean mass changes into account). But what is rarely mentioned is that total ocean heat accumulation is about 60% of what is consistent with the middle of the IPCC sensitivity range. Yes, the oceans are absorbing heat, but more slowly than they must for most of the CGCM’s to be close to correct. It is the shortfall in measured ocean heat uptake compared to the models that is the real news (indicating much lower climate sensitivity), not that the oceans continue to accumulate heat. This is a case where the data are steadfastly refusing to cooperate with the models…. one or the other is clearly wrong. My (painfully learned) experience is that the data are very seldom wrong. Models? Not so much.
I presume that the spin will simultaneously suggest that those who said the temperatures were not going up as projected were wrong to observe this. When, in fact, it’s clear temperatures were not going up as projected.
And.. ehrmmm… the relevant chapters of the FOD deal with uncertainty better than the AR4. That said, the sections showing the “projections” are currently a bit misleading because there is nothing in the text that admits that the spread communicated in past projections is not the same as the spread communicated in the FOD figures. The FOD figures show a much larger spread– the AR4 picked the 1 SD in model means.
Lucia,
“The FOD figures show much larger spread– the AR4 picked the 1 SD in model means.”
.
You sure seem to know a lot about the FOD’s. Where can I read them?
SteveF– 🙂
Lucia,
“I presume that the spin will simultaneously suggest that those who said the temperatures were not going up as projected were wrong to observe this.”
.
Actually, I think that anybody who does not support draconian forced reductions in fossil fuel use via heavy taxes is automatically wrong in certain circles, independent of the merit of their argument… or the content of their character. 😉
By NOAA’s definition the last La Nina event ended in March-April-May. Looks like another will have started in SON once February’s numbers are in, but there’s certainly a gap between them.
There seems to be a missing word in your third last sentence. I suggest “apoplectic”.
Perhaps Gavin should take a look at this
Or even this, when can we expect the new, improved, biologically warmer HadCRUT4?
Lucia,
“But I lean toward just using HadCrut which I think happens to currently have the highest decadal trends.”
That sentence puzzles me.
Do you mean the “highest” negative trend?
#89725 Yup
Dead flat for forty-five years.
Ray–
There was a mistake– but an even bigger one. I lean toward using GIStemp not HadCrut.
There is no way Hansen et al will allow GisT to go negative.
SteveF (Comment #89714)
February 14th, 2012 at 9:45 pm
” It is the shortfall in measured ocean heat uptake compared to the models that is the real news (indicating much lower climate sensitivity), not that the oceans continue to accumulate heat. .”
———————————————————-
The recent paper by Norman Loeb et al. (http://www.nature.com/ngeo/journal/v5/n2/full/ngeo1375.html) estimates an energy imbalance of 0.50 (+/-0.43) W/m2, based on the OHC from 0-1800 m (and satellite measurements of incoming and outgoing radiation). That value is clearly lower than Trenberth’s 0.9 W/m2, but the ocean heat content still has a high uncertainty. Trenberth and others have been quite open about the differences between satellite estimates of energy imbalance and OHC (they are scientists after all). The numbers will get progressively better as both the data and models get better, and as measurements push deeper into the ocean.
The news that the effect is not as big as predicted is for sure political news, but in the long term a pyrrhic victory.
Nick Stokes,
For your viewing pleasure: http://www.woodfortrees.org/plot/hadcrut3vgl/from:1987/to:1997/trend/plot/hadcrut3vgl/from:1997/trend/plot/hadcrut3vsh/from:1980/to:1987/trend/plot/hadcrut3vsh/from:1969/to:1980/trend/plot/hadcrut3vgl/from:1969/plot/hadcrut3vgl/from:1989/to:2006/trend
with the AR1 to AR4 trend of 17 years… proving catastrophic warming rates are real. 🙂
With very noisy data (mainly ENSO and volcanoes) you can select even a 17 year period which either seriously underestimates or overestimates the “true” underlying rate of warming. If you do your best to account for those two factors, the residual trend STILL does not take into account potentially important longer term variation (PDO, AMO, NAO,etc.), which could lead to serious under or over estimates of the “true” underlying rate.
.
All of which begs the most important question of all: how much have man made aerosols ‘offset’ GHG driven warming? If that offset is modest (<0.7 watt/M^2) the temperature trend is not so alarming. If the aerosol offset is large (1.8 watt/M^2) then the temperature trend is very alarming. Since there seems no plausible way to determine historical (or even future) aerosol offsets, the measured temperature trend tells us very little about climate sensitivity, especially in light of the lack of knowledge of long term natural variation. Appeals to rapidly increasing aerosol offsets (as seems the plan for AR5) and continued reliance on models with too many assumed aerosol inputs and too many “tunable parameters” will leave us having the same damned arguments about the urgency of drastic public action over and over ad nauseam.
.
Other ways to determine climate sensitivity to a reasonable accuracy are desperately needed. The GCM's are not going to ever provide a robust answer. IMO, measured ocean heat accumulation has to be a big part of the answer.
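To put rough numbers on why the aerosol offset matters so much, here is a back-of-envelope sketch using the simple energy-balance relation ECS ~ F_2x * dT / (dF_net - dQ). Every input below is a round illustrative value, not a measurement.

    # Back-of-envelope: how the assumed aerosol offset changes the implied
    # sensitivity. dF_net = GHG forcing minus aerosol offset; dQ = ocean heat
    # uptake (energy imbalance). All numbers are round illustrations.
    F_2X  = 3.7    # W/m^2, forcing for doubled CO2
    DT    = 0.8    # C, observed warming to date
    F_GHG = 2.6    # W/m^2, GHG-plus-other positive forcing
    DQ    = 0.5    # W/m^2, ocean heat uptake

    for aerosol in (0.7, 1.8):          # the two offsets mentioned above
        ecs = F_2X * DT / (F_GHG - aerosol - DQ)
        print(f"aerosol offset {aerosol} W/m^2 -> implied ECS ~ {ecs:.1f} C per doubling")

With these round numbers, a 0.7 W/m^2 offset implies roughly 2C per doubling while 1.8 W/m^2 implies something close to 10C; the same temperature record is compatible with wildly different sensitivities depending on the assumed aerosol term.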
Odd that a factual observation of model failure can be a “pyrrhic victory.”
Does that mean that there will always be new modeling or theoretical contortions to keep the political dream alive and keep lukewarmism at bay?
Owen,
“but the ocean heat content still has a high uncertainty”
.
For sure, but the level of uncertainty is small compared to that for aerosol offsets (the Freddy Kruger of climate science). And OHC data will become much more certain over the next decade. Aerosols? No.
.
I am not sure what you mean by a “pyrrhic victory”. Anything which actually helps define the true climate sensitivity seems to me to represent real progress. If that sensitivity is substantially lower that the IPCC AR4 central estimate (~3.2 C per doubling), then that is good news for everyone… and every ecosystem. I see nothing “pyrrhic” about it.
Come think of it, climate science has a lot of the same characteristics as “A Nightmare on Elm Street”… with aerosol offsets in the role of Freddy Krueger…. both unbelievable and frightening. 😉
SteveF, RealClimate ran a guest post saying temperatures would not warm for 20-30 years, after which global warming would be back with a vengeance. They are fully hedged.
SteveF,
Pyrrhic in the sense that relaxing in the fact that rates of warming may be slower than our early (and we are still in the early stage) models predict may obscure the fact that a significant amount of climate change will still occur in a relatively short time (i.e., 200-400 years) with real impacts.
It is a real shame that the satellite system to monitor aerosols failed. Little chance that we will ever see that again in the current economic climate. Even less chance if Republicans take control, as several candidates are already campaigning on the platform of defunding climate research.
There will always be new modeling or theoretical contortions. Oddly enough, explaining what really happened is always a part of science. But no…. it won’t keep lukewarmism at bay (or at least not if “at bay” means preventing the general public from outright embracing lukewarmism or at least behaving as if they believe it.)
The only thing that will kill lukewarmism is for
a) climate modelers’ projections to be close enough to right for a sustained time (at least two IPCC cycles.)
b) for models to be right AND warming seen to be inconsistent with lukewarmism.
By close enough to right, I would suggest that, at a minimum, if what the collection of modelers writing an ARX seem to communicate as the best estimate for the future is not inconsistent with the data using red noise when the projection/prediction is compared to future data, that’s close enough to right. The reason I pick red noise is that they are still using that to communicate that a recent trend is “statistically significant” relative to zero. As long as some talking to the press use that to decree that warming is statistically significant, I and quite a few others will consider that models aren’t so hot if the projection is statistically significantly different.
(Oh. And none of the bs about the difference between projections and predictions. When those ARX’s are bandied around, those projections are communicated as predictions, and lots of scientists “simplify” when communicating to the public, presenting them as predictions so as to “not confuse” the public with these things. Well… if they communicate them as predictions, for all practical purposes, they are predictions. Claiming they aren’t after they become obviously wrong is … well… talking out of two sides of your mouth.)
I wonder if Nick is unaware that his two previous flat periods coincided with major volcanic eruptions (El Chichón 1982, Pinatubo 1991) that caused significant cooling?
Owen (Comment #89736)
On Pyrrhic victories — Policy is distinct from natural science, as (almost) all posters here recognize. Still, given the fierce connection of the two within the Green movement, it can be informative to take a step back and survey the landscape.
For instance, the local NPR news this morning had the Blue governor of my Blue state again urging our Blue legislature to endorse his push for offshore wind farms. Fight Global Warming is one of the big motivators.
Never mind that this crony-capitalism boondoggle makes no sense, on the terms by which it’s being promoted. (Arguably, it does make sense if the intention is to push kWh rates so high that penny-pinching conservation of electricity becomes the order of the day — but that ain’t the pitch to the voting public.)
Perhaps ‘Pyrrhic’ is getting at the notion of “the worse the better.” I.e. that the worse that natural-science shows climate sensitivity to be, the faster will be the start of amelioration, and the more comprehensive will these policies be.
Though with friends like offshore wind farms, who needs enemies?
Nick Stokes (Comment #89727)
February 15th, 2012 at 5:16 am
#89725 Yup
Dead flat for forty-five years.
—————————————————
Given the far lower heat capacity of the atmosphere vis-a-vis the oceans, and the efficient modes of cyclic heat exchange between atmosphere and ocean, why do we not simply take Pielke Sr’s advice and use global heat content as the fundamental metric for global warming. We can easily calculate a combined ocean/atmosphere heat content anomaly (perhaps omit land warming and ice-to-water phase changes) that will be less sensitive to the vagaries of the ENSO oscillation. If the total heat content of the climate system continues its unabated increase, it matters little that we have flat periods of atmospheric temperature change. But if we did this, what would we have left to argue about? 🙂
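As a very crude sketch of what such a combined metric might look like (the file names, column labels and constants below are my own round approximations, not NODC’s format):

    # Crude sketch of a combined ocean + atmosphere heat content anomaly.
    import pandas as pd

    M_ATM  = 5.1e18   # kg, approximate mass of the atmosphere
    CP_AIR = 1.0e3    # J/(kg K), approximate specific heat of air

    ohc  = pd.read_csv("nodc_ohc_0_2000m_annual.csv", index_col="year")["ohc_1e22_J"]   # 10^22 J
    tsfc = pd.read_csv("surface_anomaly_annual.csv", index_col="year")["anomaly"]       # K

    atm_1e22_J = M_ATM * CP_AIR * tsfc / 1e22     # atmosphere's contribution in 10^22 J
    combined = (ohc + atm_1e22_J).dropna()
    print(combined.tail())

Even a full 1 C swing of the whole atmosphere amounts to only about 0.5×10^22 J, so the combined series would be dominated by the ocean term, which is the point of the suggestion.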
AMac (Comment #89741)
February 15th, 2012 at 9:32 am
————————————————-
I look at it differently. With energy use skyrocketing in China and India, we face peak oil and peak energy down the road. Like the Danes (wind) and Germans (PVC), your blue state in 5-10 years may be happy about the pre-emptive investment they made in alternative energy. We have to make that transition sometime – why not begin now?
As Bob says, GISS are now using adjusted GHCN data, which already includes a homogeneity adjustment, and then adding their own homogeneity adjustment on top of that!
And the GHCN adjustments are wrong, see the sequence of posts at notalotofpeopleknowthat, and also at Nick’s Moyhu blog. Also, they have just changed versions about a week ago (so Lucia you may find that the numbers in that GISS file you link to have changed since last week – do you have an old version?)
I’m a bit puzzled that Lucia/Steve/Jeff don’t really seem interested in this.
Re PaulM “I wonder if Nick is unaware that his two previous flat periods coincided with major volcanic eruptions (El chichon 1982 Pinatubo 1991) that caused significant cooling?”
Nick is aware of it, but he pulled a Gavin in not bothering to mention it.
Ask your favorite climate scientist a yes or no question: Are we experiencing the only decade scale temperature plateau since the 1970’s that was not preceded by a major volcanic event?
Owen (Comment #89743)
> Like the Danes (wind) and Germans (PVC), your blue state in 5-10 years may be happy about the pre-emptive investment they made in alternative energy.
If the characteristics of wind turbine generation matched what electricity consumers want (with respect to reliability, reserve capacity, and cost), I would be in wholehearted agreement. IIRC the Danes are discovering problems along this line with their investments.
Barry Woods is a prominent pro-nuke Green who has discussed such issues at some depth. (I discovered his website via his excellent coverage of Fukushima.)
PVC — not the pipes in which cabling can be run, but what? 🙂
Imagine for a moment that the solar cycle alters the steady state level of solar wind.
Imagine also that solar wind levels are linked to the levels of interplanetary dust that impact the Earth’s atmosphere.
Dust levels increase light scattering and also provide nucleation points for cloud formation.
“The level of uncertainty is small compared to that for aerosol offsets (the Freddy Kruger of climate science)”
Lucia:
At least somewhat bearing on this topic is the sea level measurement. New data has been released and is on WUWT thread: http://wattsupwiththat.com/2012/02/14/sea-level-still-not-cooperating-with-predictions/
I see it as at least somewhat justified to view this as a “bulk thermometer” reading for the oceans. I would like to see you put up a thread addressing this issue for your galaxy of readers to comment on here.
Owen (Comment #89736) @ February 15th, 2012 at 9:01 am
“… models predict may obscure the fact that a significant amount of climate change will still occur in a relatively short time (i.e., 200-400 years) with real impacts.”
——————
Oh, come off it now – has the alarmist community now stooped to saying we have to wait for 200-400 years for their predictions to be shown as true and their models accurate?
That is one of the most ridiculous arguments I have ever heard. You can no more demonstrate that than I can demonstrate that we will have fallen into a true Ice Age at that time! You know it, too.
Re: AMac (Feb 15 10:13),
Solar? PV would be photovoltaic. C might be conversion. It’s an acronym I haven’t seen before.
Owen
It seems to me this is so vague that it means nothing. As it means nothing, parts of it can’t.
Is “significant change” the amount that was observed between 1600-1999? Are the real impacts the real impacts we observed during those 400 years? If the answer to both of those is yes, then we don’t need models to know that significant amounts of climate change will occur and the impacts will be real.
Though I would suggest that if that’s what “significant change” and “real impacts” mean, then observing that warming has been slower than predicted by models would hardly obscure what we know from recorded history actually happened.
Unless you quantify that statement we can’t possibly figure out what precisely you think might be obscured.
Re: DeWitt Payne (Comment #89764)
‘C’ell.
We are beginning to see the problems with acting too early on climate change. Acting before you’ve narrowed the range of sensitivity runs the risk of sinking dollars into solutions before they are ready. Yet another solar company in the US has failed. Those dollars lost in that venture are dollars that cannot be spent on better solutions when they are ready. That’s why every degree of uncertainty we can remove from the estimate of sensitivity is worth trillions. Moving too early on nascent solutions reduces resiliency. And resiliency, the ability to adapt or mitigate as the case may be, should be the goal of climate policy. Make people richer. Educate women. Fewer babies. More resilient societies.
AFPhys (Comment #89760)
Your dismissive ranting about an alarmist agenda is not particularly helpful. An energy imbalance for the planet currently exists and will most certainly persist due to increasing levels of greenhouse gases, and the earth will continue to warm. If it warms at a lower rate than early models predicted (which is the point I was addressing), it simply means we will wait somewhat longer to get to temperature and heat content levels that are problematic. I obviously cannot predict impacts with any degree of certainty. (But I can say that your suggested ice age in 200-400 years is way out of sync with known orbital forcings). Military planners and insurance companies are already looking at what they consider to be quite possible scenarios – and they are serious about it. As temperate zones move and rainfall patterns change on a rather fast timescale, we must expect impacts, for example, on agriculture. Seems obvious to me anyway. Around here it is anathema to even suggest that we should anticipate downstream issues – you get called an alarmist.
steven mosher (Comment #89777)
February 15th, 2012 at 2:05 pm
Acting before you’ve narrowed the range of sensitivity runs the risk of sinking dollars into solutions before they are ready.
—————————————
It’s not just climate change and its associated sensitivity that gives us reasons to look at alternative energies – world wide competition for diminishing fossil fuels with attendant skyrocketing prices is also a good reason to start developing other sources. Let’s get on with it.
Owen–
Oh? That’s an interesting claim. First: this is not true. It’s possible that the reason warming is slow is that sensitivity is low. Second: within a window of temperatures, it’s not clear that the absolute temperature change is a problem. But a sustained rapid rate of change that exceeds the ability of plants and large populations to migrate or adapt is a problem. Warming at 1C/century might be ok even if it continues for a long time.
Sure.
Nonsense. Many people here suggest we should anticipate downstream issues all the time. What people generally suggest is that we should be realistic and truthful about what is happening and what is likely, and, when discussing the uncertainties on the high side, stop pretending there are none on the low side.
Lucia,
“Warming at 1C/century might be ok even if it continues for a long time.”
———————————————————
Well, four centuries, four degrees? We had damn well better prepare for that, as a species.
————————————————-
“What people generally suggest is that we should be realistic and truthful about what is happening, what is likely and when discussing the uncertainties on the high side, stop pretending there are none on the low side.”
————————————————
Agree completely, but I was still labelled an alarmist.
Owen,
One could very reasonably ask whether people in each of the next 4 centuries are likely to develop technology for adaptation which gets at least 1° better with each passing century.
Owen (Comment #89736),
“a significant amount of climate change will still occur in a relatively short time (i.e., 200-400 years) with real impacts”
.
Well, there has been a significant amount of climate change the last 100 years (0.8C warmer on average) with real impacts, but arguably positive on balance. I guess I am a lot more sanguine than you are, for the following reasons.
1. The ocean will continue to absorb CO2 at high (and increasing, if atmospheric CO2 is increasing) rates for probably 300-400 years or more. Based on this, extreme increases in atmospheric CO2 (1,000 PPM and higher) during the “age of carbon”, which seems unlikely to last more than 100 years from now, strike me as extremely unlikely.
2. Growing consumption of fossil fuels will lead to more careful use of this resource, and lower rates of consumption growth than projected.
3. Higher fossil fuel prices will motivate development of more economical non-fossil alternatives. This sort of thing takes a decade or two after prices start to rise significantly in real terms.
4. At some point, people of all political stripes will come to their senses and view the Earth’s reduced carbon resources as hugely valuable, and something that needs to be used with care.
The exact form of the reduced carbon does not matter (coal, petroleum, natural gas), all are enormously important for the future economic well being of humankind. What does matter is that once this resource is depleted, it will be very difficult to replace without vast energy expenditures. And this is independent of global warming from CO2. The reduced carbon in a piece of engineering plastic (like what you will find in your car and 100 other places) is far more economically important than burning that same carbon for its oxidized energy content. In the very long term, non-fossil energy must completely replace reduced carbon. We would do well to consider how to ease that transition (like a big-time nuclear energy program ASAP) in the interests of our grandchildren and great grandchildren…. and generations beyond them. I keep thinking that those who express concern about global warming will soon stop actively resisting the only economically sensible alternative to fossil fuels. But this is the part I am least sanguine about.
SteveF (Comment #89731)
” For your viewing pleasure…”
Thanks, Steve. Nature is bountiful here. You can have most anything you want. As long as you don’t worry about statistical significance.
And I agree about aerosol uncertainties. Temperature trends aren’t the case for AGW – that rests on GHG emissions and the GHE. One looks to see if the temperatures are consistent with the theory. And if the view is obscured by aerosols, then that reassurance will take a bit longer.
Nick Stokes,
“And if the view is obscured by aerosols, then that reassurance will take a bit longer.”
Wild optimism in the face of sober reality is no sin, but it is not sober. Perhaps ‘bit’ could be redefined as ‘many decades’.. too long to be helpful in your lifetime or mine. We need solid alternatives to determine ECS.
SteveF (Comment #89814)
February 15th, 2012 at 4:30 pm
“At some point, people of all political stripes will come to their senses and view the Earth’s reduced carbon resources as hugely valuable, and something that needs to be used with care.”
————————————————————–
Well said.
Lucia –
In answer to your question – of course! I have many quatloos to wager! 🙂
As a side thought, am I the only person who tends to think of trends starting in the present? I’m sure there are innumerable reasons why people want to see a trend from a point in the past – usually an arbitrary calendar date, say, 1900 or 1950 or 2000. But I’m always drawn to asking what the trend/change has been from today.
So if someone is claiming there has been a decrease in global temperatures, I’m interested in the question – for how long? How long can we extend the trend back and still see a cooling trend? At what point does the trend approach zero?
So, Lucia, you might see the Gistemp trend as being (slightly) positive from Jan 2001. My question is answered by saying Gistemp has a negative trend back to the end of Feb 2001. In this situation it is almost the same thing looked at in a very slightly different way.
Hadcrut3 shows a negative trend for the last 14 years and 9 months – and has a fairly even chance of reaching 15 years when this months data arrives. Possibly why there is the push to get Hadcrut4 published – to turn 15 years of cooling into 15 years of warming – TaDaaaah!!
I certainly don’t claim anything ‘meaningful’ about this perspective [I think global temperature estimates to more than a tenth of a degree are not worth taking that seriously], but it changes the focus somewhat.
FWIW, RSS passed 15 years of a negative trend when the January data was shown. UAH has some catching up to do..
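For anyone who wants to check the “how far back does the negative trend reach” numbers for themselves, something like this Python sketch would do it; the data file and column names are placeholders.

    # Sketch: step the start month back from the latest data point and report
    # the earliest start for which the OLS trend to the present is still
    # non-positive.
    import numpy as np
    import pandas as pd

    s = pd.read_csv("hadcrut3_monthly.csv", parse_dates=["date"], index_col="date")["anomaly"].dropna()

    earliest = None
    for start in s.index[::-1]:
        seg = s[start:]
        if len(seg) < 12:                          # skip very short windows
            continue
        t = np.arange(len(seg)) / 120.0            # decades (monthly data)
        if np.polyfit(t, seg.to_numpy(), 1)[0] <= 0:
            earliest = start                       # keep pushing the start back

    if earliest is not None:
        years = (s.index[-1] - earliest).days / 365.25
        print(f"trend still non-positive starting from {earliest:%Y-%m} ({years:.1f} years)")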
Lucia –
You could have a nice simple bet – picking the two months where the trends are lowest/highest. Although it might be that a large proportion of people all win. Assuming there isn’t a triple dip La Nina, wouldn’t most people plump for March/April for the lowest 2001, and Dec for the highest 2000?
Just a thought – are you going to have two separate bets, or just the spread? [or three bets – low, high and a bonus for the nearest to the spread? Does the house have spare quatloos to pay out?]
Don’t know if this has been blogged but Paul Hudson has nice post on Met Office warm bias
I agree, 11 predictions of 12 on the warm side, almost enough evidence to require a new, warmer, temperature series, when did the Met Office say HadCRUT4 was coming?
Lucia, I have a very simple question about you plot, and wonder if you might pose it to all?
Do the abrupt temperature swings in the UAH represent changes in
a) The relative levels of incoming and outgoing radiation
b) The location of heat already in the Earths system (water/atmosphere/vegetation/soils)
Supplemental. If the latter, what mechanisms drive the large scale movement of heat around the planet?
Doc– I don’t know. Possibly a combination of the two in a proportion I don’t know.
DocMartyn 90362,
a) Yes. Both seasonal changes and weather patterns influence this
b) Yes. ENSO accumulates heat in the western Pacific during La Nina, then distributes it over most of the world during El Nino. Longer term pseudo oscillations probably also contribute on longer time scales.
Supplemental: Mostly Hadley, Ferrel, and Polar cells (http://www.newmediastudio.org/DataDiscovery/Hurr_ED_Center/Easterly_Waves/Trade_Winds/Trade_Winds.html)
“SteveF
DocMartyn 90362,
a) Yes. Both seasonal changes and weather patterns influence this
b) Yes. ENSO accumulates heat in the western Pacific during La Nina, then distributes it over most of the world during El Nino. Longer term pseudo oscillations probably also contribute on longer time scales.”
b) first. If we were looking at a regular cycle we would see harmonics in the plot, not jagged peaks and troughs.
a) seasonal and weather don’t cut it. These are anomalies, so seasonal factors are removed.
Weather? Between Winter 2008 and Spring 2010 we have a swing of about 0.8 degrees.
Was that the weather?
DocMartyn,
I guess I didn’t make myself clear. Most of the variation you see in the satellite data is in fact driven by ENSO. The satellite data seems to be more sensitive to ENSO than the surface average temperature. There are (of course) other contributors, and weather patterns is one of them. None of the patterns is truly cyclical; pseudo-cyclical is a better description. Compare the Nino 3.4 index to the satellite average temps… the correlation between them (offset in time by a few months) is remarkably strong.
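A quick sketch of that lagged-correlation check, for anyone who wants to reproduce it; the file names and column names are assumptions rather than pointers to actual data files.

    # Sketch: correlate monthly Nino 3.4 with satellite TLT anomalies at lags
    # of 0-8 months and report the strongest correlation.
    import pandas as pd

    nino = pd.read_csv("nino34_monthly.csv", parse_dates=["date"], index_col="date")["nino34"]
    tlt  = pd.read_csv("uah_tlt_monthly.csv", parse_dates=["date"], index_col="date")["anomaly"]

    best_lag, best_r = 0, 0.0
    for lag in range(0, 9):
        r = nino.corr(tlt.shift(-lag))      # tlt.shift(-lag)[t] is TLT at time t+lag
        if abs(r) > abs(best_r):
            best_lag, best_r = lag, r
    print(f"strongest correlation r = {best_r:.2f}, TLT lagging Nino 3.4 by {best_lag} months")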
Does anyone know if Gistemp make some adjustments to their data after it has been published?
I made the comment a couple of days ago that Gistemp had a negative trend going back to Feb 2001. I put up the same plot at Wft today and the trend is suddenly positive, although the Jan figure is unchanged. In fact the trend is only negative from July 2001.
Are there some processing steps that take place after the data has been published?
*******
Ah, OK. I’ve just visited the NASA site and apparently three Russian stations’ data from Dec 2011 have been ‘corrected’ and the analysis was ‘redone’ yesterday.
Well, that explains that then. Who needs a Hadcrut4 when you can adjust the data as you go along? Cooling? What cooling?!
Anteros —
I don’t know GISS’s procedures, but they definitely make retroactive changes to their global average dataset. The other day I was working with the GISS dataset, using data I had in an old spreadsheet which dated from ~Oct2011. [The last entry was Sept11.] I updated from GISS and noticed that the figures had changed, back to 1981 at least. The change wasn’t much, typically only +/- a couple of hundredths of a K, but it was there.
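For what it’s worth, that sort of vintage comparison is easy to automate. A rough sketch, assuming two locally saved copies of the monthly series (the file names are hypothetical):

    # Sketch: compare two saved vintages of the monthly series and list the
    # months whose values changed.
    import pandas as pd

    old = pd.read_csv("gistemp_vintage_oct2011.csv", parse_dates=["date"], index_col="date")["anomaly"]
    new = pd.read_csv("gistemp_current.csv", parse_dates=["date"], index_col="date")["anomaly"]

    both = pd.concat({"old": old, "new": new}, axis=1).dropna()
    changed = both[both["old"] != both["new"]]
    print(f"{len(changed)} months changed; largest shift {(both['new'] - both['old']).abs().max():.3f} C")
    print(changed.tail())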