Because it is the year end, I think it’s worth showing readers the OLS trend obtained using GISSTemp monthly data since 2001. Voila!

Many readers are aware I focus on the period beginning in 2001 because I am testing whether or not the IPCC method for developing projections of global mean surface temperature eventually matches observations of GMST that become available after the method of obtaining projections is more-or-less frozen. Because projections are obtained using models driven by specified scenarios (i.e. SRES), I picked my start year of 2001 as the year in which a supplement describing the SRES was published.
My focus has been comparing observed and predicted trends during this period. My posts discuss whether or not these sorts of negative trends are consistent with IPCC projections of underlying warming (which is about 2 C/century for the first 2 or 3 decades of this century).
I believe that these data are inconsistent with that level of warming.
However, I do periodically think it’s important to note that I am not suggesting the recent negative trend is inconsistent with some level of underlying warming trend. I believe GHG do cause warming, and we should expect that warming trend to resume at some point.
The questions I wonder about are: How accurately can bodies like the IPCC, using all the tools currently available, project these things? Even in more-or-less ideal periods when huge volcanic eruptions do not occur, nuclear war does not trigger nuclear winter, and Mars does not attack planet Earth. Also, does the IPCC really convey the appropriate level of uncertainty in their estimates? Or, if they do, do various activist groups (on either side) convey the level of uncertainty in a way that people like me correctly understand it?
I want to emphasize that the data are consistent with warming
Because I do show the 2001-now graphs often, I do periodically like to emphasize that, notwithstanding the negative trend since that time, those data are consistent with some warming trend. Moreover, the longer term trends have been positive.
To remind people this is so, I show this:

In my opinion, both types of graphs are useful in their proper context. The short term graph is useful when evaluating whether or not the IPCC projections are able to predict things that happen in the future. The longer term graph is more useful if we are trying to discuss whether or not data support the general hypothesis that warming has occurred.
Hi Lucia: Are you ready for tomorrow’s (Thurs) freeze?
Would you be kind enough to post or send me a link to the data source you use? I would like to run some of my own analyses. I could search but am not confident I could get exactly the same data.
Thanks
Jack
Jack–I’m hibernating. I did make a silly error. I had an old sheet with the various Oct. versions of GISS temp, updated one, and snagged the graph off the wrong one. I’ll fix in a minute.
Actually, I seem to have put in the proper graphs. I get my data from here:
http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt
For these quick graphs, I just put this in Excel. I have a macro that turns the matrix into a column, and I plot.
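For readers who want to reproduce this without Excel, here is a minimal Python sketch of the same matrix-to-column step. It assumes the usual GLB.Ts+dSST.txt layout (one data row per year, twelve monthly anomaly columns in hundredths of °C, asterisks for missing months); the parsing details are assumptions about that layout, not a transcript of the macro.

```python
# Minimal sketch: turn the GISTEMP year-by-month matrix into one monthly column.
# Assumes data rows of the form: YEAR then 12 monthly values in hundredths of a
# degree C, with '****' marking missing months. Header/footer rows are skipped.

def gistemp_to_column(path):
    """Return (decimal_year, anomaly_C) lists in chronological order."""
    times, anoms = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 13 or not parts[0].isdigit():
                continue  # not a data row
            year = int(parts[0])
            for month, val in enumerate(parts[1:13]):
                if val.startswith('*'):
                    continue  # missing month
                times.append(year + (month + 0.5) / 12.0)
                anoms.append(int(val) / 100.0)  # hundredths of deg C -> deg C
    return times, anoms

t, y = gistemp_to_column('GLB.Ts+dSST.txt')
print(len(y), 'monthly anomalies, ending', round(t[-1], 2))
```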
Of course, particularly over the longer term, a key question is whether we can trust the data you are using. As you probably are aware, Icecap has an interesting current post on its front page addressing the data quality issues –
Jan 09, 2009
2008 Coldest Year Since 2000 and Clearly Not a Top Ten Warmest Year
By Joseph D’Aleo, CCM, AMS Fellow
Of particular interest is the following quote:
“The biggest issue is station dropout. 2/3rds of the world’s stations, many rural areas in the FSU stopped reporting around 1990. Climatologist David Legates at the University of Delaware prepared this animation. See the lights go out in 1990. The below plot of the NOAA/NASA station count (blue line) and average annual temperatures (brown columns). It clearly shows the big dropoff of rural stations. The animation above shows that Siberia is one area with the biggest change.
Average temperatures for the station categories jumped when the stations dropped out, suggesting that it was mainly the colder stations that were no longer in the record. The global data bases all set up grid boxes globally and populate the temperatures for the box using data from within that box or now with many empty boxes using the closest stations weighted by distance in nearby boxes. Thus a box which had rural stations, will find temperatures now determined by urban areas in that box or distant boxes. This is why the global data bases suggest the greatest warming has been in Siberia.
Also a factor is that in the FSU era, the cities and towns temperatures determined allocations of funds and fuel, so it is believed that the cold was exaggerated in the past introducing an artificial warming. See the details of these numerous data integrity issues here.”
Apologies if you have already addressed this issue. I am relatively new to your blog.
I have my doubts that linear projections are anything but misleading. There are cyclical variations in climate such as La Nina and El Nino on 1-2 year cycles, and then there is the PDO on a 60-year cycle and many other such cycles in each of the oceans. Your 8-year trend captures a few La Nina and El Nino cycles, but your 28-year trend does not even capture one PDO shift. It’s like riding a roller coaster and, when you are on your way up, extrapolating that you’ll be a mile high by the end of the day (end of life as we know it from the heat), or waiting til you are racing down the slope and saying we’ll be a mile underground within 10 minutes (new ice age). When will we start analyzing cyclical trends with the math we learned in high school rather than restricting ourselves to our elementary arithmetic?
See Dr. Hansen’s latest take on GISS 2008 data at:
http://www.columbia.edu/~jeh1/mailings/2009/20090113_Temperature.pdf
It appears he’s expecting another El Nino and an eventual comeback of solar irradiance to bolster the projection of a new record for global temperatures within the next 1-2 years.
Sean–
Not all curve fits are intended to be projections. Mine aren’t intended to be projections.
I use ordinary least squares to create descriptive statistics and do t-tests. The fact that people do make the mistake of extrapolating based on least squares trends does not mean every trend fit you see is being misused.
Brian– even more interestingly, he estimates the current underlying trend is 0.15 C/decade.
This would mean that…. he agrees it’s not 0.2 C/decade, and that, indeed, the current trend is lower than 0.2 C/decade.
With regard to my blog: The message in my repeated posts is that the current underlying trend is not 0.2 C/decade, that the IPCC projection in the AR4 is too high. So.. maybe my view will become the consensus view. 🙂
Consensus is an evil word. 🙂
I think you mean 0.15 C and 0.20 C per decade. If even Hansen said 0.15 C/century, we could all just stop talking about the weather all day.
Clark– Thanks. Fixed. 🙂
I think this looks similar to Monckton’s graph
http://icecap.us/images/uploads/TEMPSvsCO2.jpg + the surface station data mean AGW has a problem most likely to become severe in the coming years LOL
Lucia (at #8700) – You may recall Dr. Hansen last March at:
http://www.columbia.edu/~jeh1/mailings/2008/20080303_ColdWeather.pdf
wrote in part:
“The monthly fluctuations of global or near-global temperature, as well as the trend over recent decades [covering satellite recordings] can be seen in Figure 2 for the GISS surface temperature analysis as well as the lower tropospheric data of UAH (University of Alabama at Huntsville) and RSS (Remote Sensing Systems). The reason to show these is to expose the recent nonsense that has appeared in the blogosphere, to the effect that recent cooling has wiped out global warming of the past century, and the Earth may be headed into an ice age. On the contrary, these misleaders have foolishly (or devilishly) fixated on a natural fluctuation that will soon disappear…
Note that even the UAH data now have a substantial warming trend (0.14°C per decade). RSS find 0.18°C per decade, close to the [then GISS] surface temperature trend (0.17°C per decade). The large short-term temperature fluctuations have no bearing on the global warming matter or the impacts of global warming….”
It appears Dr. Hansen has followed your writings and has acknowledged less than the IPCC’s 0.2°C per decade for some time (I am also mindful of Dr. Pielke’s colorful graphs of multiple revised IPCC trend lines when writing about skill in projections). Nevertheless, I’ll be fixated (at least until early 2010) upon the projected El Nino to determine whether I will be throwing less coal in my home heating stove.
I wouldn’t jump to the conclusion that anything I write is read by Hansen.
That said, at a certain point, people have to look at 8 years (or more) with a negative trend and ask themselves what they think that means. Do they really believe that’s consistent with a current underlying climate trend of +2C/century? Believing so requires thinking “weather noise” is larger than previously thought. It requires using Gavin’s odd notion about spread among model results to estimate weather noise. It requires thinking the variability of 7 year trends is larger than manifest in the earth’s temperature trends.
It requires all sorts of things.
On the other hand, one can accept the recent trends and continue to believe +1.5 C/century without setting aside any previously held beliefs about weather noise. You just have to prefer GISSTemp to other metrics, and see if maybe the forcings in the SRES were a bit too aggressive, resulting in IPCC projections that were too high.
It’s entirely possible for Hansen to have come up with this idea on his own, paying no attention to anything I write.
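One way to make the weather-noise question concrete is a small Monte Carlo: generate monthly series whose underlying trend really is +0.2 C/decade, add autocorrelated noise, and count how often an 8-year OLS trend comes out negative. The sketch below is illustrative only; the AR(1) parameters are assumptions, not estimates from any particular dataset.

```python
# Sketch: how often would an 8-year OLS trend come out negative if the
# underlying trend really were +0.2 C/decade? RHO and SIGMA below are
# illustrative assumptions about monthly "weather noise".
import random

RHO, SIGMA = 0.6, 0.1     # assumed monthly AR(1) autocorrelation and std dev (C)
TREND = 0.02 / 12         # +0.2 C/decade expressed per month
N = 96                    # 8 years of monthly data

def ols_slope(y):
    n = len(y)
    xbar, ybar = (n - 1) / 2.0, sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

trials, negatives = 5000, 0
innov_sd = SIGMA * (1 - RHO ** 2) ** 0.5  # innovation sd for stationary AR(1)
for _ in range(trials):
    noise, series = 0.0, []
    for m in range(N):
        noise = RHO * noise + random.gauss(0.0, innov_sd)
        series.append(TREND * m + noise)
    if ols_slope(series) < 0:
        negatives += 1
print('fraction of 8-year trends that are negative:', negatives / trials)
```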
Though he does specifically mention the blogosphere, Lucia. 🙂 So he’s probably not unaware of what you’ve been saying.
He mentioned the blogosphere, but then describes things I have never claimed.
But, presumably, he has some clue what is being said in the blogosphere.
Yah, but we’ll give you credit for the temp trend. 🙂
Hi Lucia, I thought you might be interested in this paper published in Sept 2008 in Geophysical Research Letters that concludes “The average anthropogenic-related warming is 0.05 K per decade from 1889 to 2006” – much more in line with your analysis it seems.
http://www.leif.org/research/LeanRindCauses.pdf
VG,
So global warming stopped in 2002? Why do I have a feeling that I’m going to be annoyed by variants on that graph for the next year or so…
Also, people really need to stop using ridiculous units on the second axis to juxtapose CO2 with short-term temperature. It’s a rather juvenile form of statistical manipulation :-p
Lucia (at #8713):
“You just have to prefer GISSTemp to other metrics, and see if maybe the forcings in the SRES were a bit too aggressive, resulting in IPCC projections that were too high.”
Given the GISS and GHCN data integrity issues raised from time to time by D’Aleo and Watts (e.g., see D’Aleo’s, “2008 Coldest Year Since 2000 and Clearly Not a Top Ten Warmest Year” @ ICECAP), particularly the issue of station drop-outs, IPCC’s apparent preference for GISSTemp over other metrics seems unpalatable if not unjustified.
Brian–
But you would have to concede that it’s perfectly reasonable for Hansen to prefer his own metric, right?
Brian,
In cases where the IPCC displays just one temperature record, they use HadCRUT3, not GISS.
“But you would have to concede that it’s perfectly reasonable for Hansen to prefer his own metric, right?”
Not if his metric is wrong.
This reminds me of another quote-
George Costanza: Jerry, just remember… it’s not a lie, if you believe it.
Andrew ♫
Andrew–
I think it’s safe to say that Hansen believes his metric.
A statement is not a lie if you believe it. I’m pretty sure the difference between a lie and an inaccurate statement is that lies are intentional deceptions.
Of course, belief in something that is false does not make it true.
Why hasn’t anyone groused about the illiterate caption on my figure?! (I’m going to have to fix that!)
I thought it was a joke..i.e. on purpose.
Lucia: I’m sorry, maybe I’m missing something obvious, but what do you mean by “a hair above below zero slope”?
Jack–
Let’s pretend it was a joke.
Doug–
Erhmm….. You are not missing anything.
Mark your calendars .. Lucia has declared the birth of a new consensus. Death to dissenters! Lukewarmists unite! And recite the credo:
It’s not Zero
Nor three or four
It’s One Point Five
And nothing more.
Lucia Rules!!
(Comment#8707) January 14th, 2009 at 1:26 pm
and more problems with GISS
http://motls.blogspot.com/2009…..-giss.html
and now at WUWT: this data may not be randomly distributed. How can we work with this data?
The previous link didn’t post properly, sorry. Try this:
http://wattsupwiththat.com/2009/01/14/distribution-analysis-suggests-giss-final-data-is-hand-edited/
Well…. I don’t think I can just decree a new consensus. But we can watch how many people start suggesting 0.15C/decade is the current underlying trend.
Lucia [8713]
As you note, the “scientific” three-ring circus required to “explain” the discrepancies between the models/trend projections and increasingly recalcitrant data is becoming a singularly convoluted affair [I’m trying my level best not to think of Gavin as a contortionist..] and it’s certainly remarkable that Hansen is now underbidding IPCC trend fundamentals by a whopping 25%.
Unfortunately, in spite of this he is still very loudly arguing the CO2-as-independent-variable-causing-global-warming case. We can only hope that he will one day fathom the difference between mere correlation [as even that relationship between CO2 ppmv and AGW comes apart before our eyes] and the true nature of causality.
tetris–
Sure. Don’t forget that I too believe CO2 must cause some warming. I don’t know how much– but some.
It’s going to take a much longer flat or down trend to make people who believe CO2 causes warming to stop believing CO2 has caused some of the past warming, and will continue to do so.
On the other hand, I am in the camp that says: if the data say some specific prediction is improbable, we admit the data say that. I find the projected 2 C/century as the current trend improbable given the data.
BRIAN M FLYNN (Comment#8698)
James Hansen, Makiko Sato, Reto Ruedy, Ken Lo
Calendar year 2008 was the coolest year since 2000, according to the Goddard Institute for Space Studies analysis [Reference 1] of surface air temperature measurements (Figure 1, left). In our analysis 2008 is the ninth warmest year in the period of instrumental measurements, which extends back to 1880. The ten warmest years all occur within the 12-year period 1997-2008. The two standard deviation (95 percent confidence) uncertainty in comparing recent years is estimated as 0.05°C [Reference 2], so we can only conclude with confidence that 2008 was somewhere within the range from 7th to 10th warmest year in the record.
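The quoted 7th-to-10th range follows mechanically from that two-standard-deviation comparison uncertainty, and is easy to reproduce by simulation. Here is a hedged sketch of the calculation; the anomaly values below are illustrative placeholders, not the actual GISS numbers.

```python
# Sketch of the rank-uncertainty argument: jitter each year's anomaly by
# N(0, sigma) with 2*sigma = 0.05 C and see where 2008 can land.
# These anomaly values are ILLUSTRATIVE PLACEHOLDERS, not GISS data.
import random

anoms = {1997: 0.40, 1998: 0.56, 2001: 0.47, 2002: 0.55, 2003: 0.54,
         2004: 0.48, 2005: 0.61, 2006: 0.54, 2007: 0.57, 2008: 0.43}
SIGMA = 0.025  # so that 2*sigma = 0.05 C, as in the quote

ranks = []
for _ in range(10000):
    jittered = {yr: a + random.gauss(0.0, SIGMA) for yr, a in anoms.items()}
    warm_order = sorted(jittered, key=jittered.get, reverse=True)
    ranks.append(warm_order.index(2008) + 1)  # 1 = warmest within this set
print('2008 plausibly ranks between', min(ranks), 'and', max(ranks))
```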
I read the first paragraph of the Hansen et al. and I wonder what cereal box Hansen got his PhD from, since that statement would get the report an F from any real college.
Larry T–
Did you ever grade reports for senior capstone lab or design courses in engineering? That paragraph would not get an F!
Students can write some truly horrific paragraphs. This stems partially from poor writing skills. I think the main factor is students don’t know what they are expected to communicate in a report. That’s understandable as part of the purpose of courses is to teach what information belongs in a report, and how these reports are typically organized.
Hansen, Sato, Ruedy and Lo at least know what they want to tell their audience, and they write sentences that convey their message.
Lucia [8738]
I know you believe there is an anthropogenic component to whatever warming may have occurred over the past century. To me there are good reasons to think that the purported 0.6C increase is within the instrumental error range for the period, and the anthropogenic component remains an unproven hypothesis.
To base billion/trillion dollar policy decisions on such threadbare evidence is profoundly irresponsible, and in the private sector would cause anyone proposing such a thing to be summarily fired. But as most of us have come to realize, “AGW/ACC” is no longer about science, it’s about politics and politicians using taxpayers money.
The second part of your comment is why I like what you do on your blog. It goes to the heart of what Karl Popper taught us: if the data doesn’t support the hypothesis, you modify the hypothesis, not the data.
Unfortunately, GISS has an amply documented track record of doing the latter.
Keep up the good work.
“I think it’s safe to say that Hansen believes his metric.”
That’s the problem. Which is worse:
1) Someone who misrepresents for political/selfish/non-scientific purposes?
2) Someone who misrepresents for political/selfish/non-scientific purposes and believes his own bullchips?
Andrew
Lucia, I was not talking about the style of the paragraph but about the conclusion that you can, through data manipulation, turn the lowest in 8 years into the 7th to 10th on record, especially when GISS data itself is a severe outlier.
As an applied mathematician, I spent most of my working career on data analysis, and a lot of that was spent on verification of my own data sets and computer analysis of others’ data sets. I even contracted with Hansen’s employer on several projects.
Larry–
Ok. I see what you mean. I remember reading some funny stuff in the first report of the semester when I was a TA. But it always improved after 1 or 2 reports.
Larry,
You say “especially when GISS data itself is a severe outlier”, but I’m still far from convinced this is the case. While there is a relatively large difference between satellite and surface records for 2008, the anomaly is by no means confined to GISS. 2008 is the 10th hottest year on record for HadCRU as well. We often forget how remarkably similar all four temperature records are these days (ever since the great MSU debate ended):
http://i81.photobucket.com/albums/j237/hausfath/Picture9.png
Also, if you have over 120 years of data, the 9th hottest year on record is hardly unimportant, even if the prior 7 years are warmer.
Consider the statement “2000-2008 were the highest years for the DOW”. Now look where 2009 is. I haven’t looked up the values to make sure this is true, but the point is that harping on the peak without identifying what it really means is not that valuable. If the trend is actually down (not proven yet), then ‘chicken littling’ about the peak is disingenuous.
I have been following the analysis of the GISS data on the CA site, where Lucia was a respected contributor even if she disagreed with the results, and am just now getting started on my independent analysis (retired, needed to upgrade the home computer to get Excel, and something that can run Fortran and R). My main data processing/analysis complaints are in 2 areas. First, how the UHI and equipment-move corrections are applied to the raw data. The function used seems to primarily hold current data constant and decrease the historical information. This is counter-intuitive, and I suspect a programming error, an algorithm error, or out-and-out fraud. Second, the method used to fill missing data seems to be dependent on the newest data and also seems to apply corrections that decrease historical data while holding current data constant, and I have the same suspicions. If it is a programming error, fix it; if the algorithm is wrong, either justify it or develop a correct one. If Hansen is committing out-and-out fraud, fire his ass. The first two do not necessarily indicate scientific dishonesty, but we can see from the mortgage and financial crisis that it is easy for “experts” to lie with data.
My pseudo-scientific expertise in the field led me to predict back in the 1970’s that the global cooling scare was hype and that we would continue in the warming trend between ice ages, and currently that we are in a cyclic cooling cycle (PDO and AMO the primary earthly drivers) driven by a solar minimum the depth of which has yet to be determined. This is potentially a major problem that needs to be looked at seriously, and by better minds than mine. A mini ice age is a major problem, while human-induced global warming is at best an improvement or at worst (in my opinion) a problem that can be solved through technology.
BTW, my heroes in college were the physics majors, who to my eyes were the brightest of the brightest. Those that could go on to get doctorates even more so. The pre-med, pre-law types were bright but more driven by money than by love of learning. I was fortunate enough to go on to graduate school in applied math/comp sci but would always be a cookbook mathematician at my best. Allow me a little room to be irritated when heroes (physics doctorates) are shown to be either human or dishonest.
Larry:
Counter-intuitive – very diplomatic term!
It may be counter-intuitive, but think about the alternative…
If they chose to adjust current temperatures, then their adjusted results would be in very obvious disagreement with other metrics. If they adjust the past, they can show agreeable current values and still issue results that support their hypothesis.
I am personally of the view that the NOAA/NCDC GMST dataset (by the way, Dec is 0.4821 @ ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/monthly.land_and_ocean.90S.90N.df_1901-2000mean.dat) is “worse”, i.e. more of an outlier, than GISTemp. The Smith and Reynolds (2005) blended SST and land analysis is terrible and has a definite warm bias. As an example, NOAA/NCDC released their 2008 analysis (http://www.ncdc.noaa.gov/oa/climate/research/2008/ann/global.html): For the US, shouldn’t this:
http://www.ncdc.noaa.gov/img/climate/research/2008/dec/map-blended-mntp-200801-200812-pg.gif


be the same as this:
http://www.ncdc.noaa.gov/img/climate/research/2008/dec/01_12_2008_DvTempRank_pg.gif
Yet there are red dots where there should be blue.
They really can’t agree with themselves. See the discussion of this map:
here: http://wattsupwiththat.com/2009/01/10/what-is-the-red-dot/
Fred– are the color scales equivalent? Maybe the second color scale is sometimes also used for daily temperature anomalies. In that case, the extreme colors would need to correspond to larger excursions than appropriate for annual anomalies.
Both graphs do tell me what I already know: It was cool around here this year!
Regardless of colour scales, showing an above normal where another map from the same organization shows a below normal (or vice versa) is just indicative of the huge data problems that the NOAA/NCDC organization has (not to mention the lousy Smith and Reynolds analysis again).
Ahhh– You were referring to the spot. Yes. That looks like a data quality issue. There do seem to be problems with data collection. Anthony has managed to find a lot of sites that seem borderline at best.
Re: Fred (#8764)
Fred, the reason those two maps don’t match is that they use different base periods. The first, as stated, uses 1961-1990. The second map uses a 1971-2000 base period. Yeah, yeah, I know, why the heck are they using different base periods. Who knows? And why doesn’t the HPRCC map have a “near normal” category, as does the second map you posted? Why always warmer or colder?
If the government is seeking ways to reduce their budget, I would like to suggest that they stop duplicating their efforts in the area of climatology.
Joseph, I disagree. The second one uses 1931 – 1990. See:
http://www.ncdc.noaa.gov/oa/climate/research/2008/500mbexp.html#tz
Lucia,
Another example: compare Georgia (the US state) between the first and second maps. Red dot on one, blue state in the other.
Fred, I think that definition is for the time series graphs, which are different from the temperature departure maps. Every NCDC temperature departure map that does state the base period states 1971-2000.
http://www.ncdc.noaa.gov/img/climate/research/2008/dec/avgtemp.dfn-200812.gif
http://www.ncdc.noaa.gov/oa/climate/research/cag3/cag3.html
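Whichever base period each map actually uses, the arithmetic point is simple to illustrate: the same reading can fall above one period’s “normal” and below another’s, which is enough to flip a dot from red to blue. A sketch with made-up numbers:

```python
# Sketch: one reading classified against two base periods. The series is
# made-up illustrative data, not NCDC values.
temps = {1961 + i: 10.0 + 0.01 * i for i in range(40)}  # 1961-2000, slow warming

def base_mean(start, end):
    return sum(temps[y] for y in range(start, end + 1)) / (end - start + 1)

reading = 10.20  # hypothetical annual value to classify
print('vs 1961-1990 normal:', round(reading - base_mean(1961, 1990), 3))  # +0.055
print('vs 1971-2000 normal:', round(reading - base_mean(1971, 2000), 3))  # -0.045
```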
Something else I think is interesting is that the temperature anomaly data is never fully quality-controlled and made “final”. It apparently remains preliminary forever.
http://www.ncdc.noaa.gov/oa/climate/research/2004/cmb-prod-us-2004.html
And yet they do significant analysis using the 20th Century average as well… Perhaps this is the explanation:
The number within the state represents that state’s ranking for the given period, compared with all other such periods for that state for the 1895-present period of record. If the length of record is 114 years, the number 114 equals the warmest or wettest; the number 1 equals the coolest or driest. For example, if a state has number 103, this means that out of 114 years, this stated period ranked 103 out of 114, or twelfth warmest or wettest for that state. If a state has number 6, this means that out of 114 years, this stated period ranked 6 out of 114, or sixth coolest or driest.
The “Below Normal”, “Near Normal”, and “Above Normal” shadings on the color maps represent the bottom, middle, and upper thirds of the distribution, respectively. The lowest and uppermost tenths of the distribution are marked as “Much Below Normal” and “Much Above Normal”, respectively.
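As a sketch of how the quoted shading rule works (rank a period within the full record, then bin into thirds, with the top and bottom tenths broken out), here is a short Python version; the 114-year record below is random placeholder data, not any state’s actual temperatures.

```python
# Sketch of the NCDC shading rule quoted above. Placeholder data only.
import random

def classify(rank, n):
    """Map a rank (1 = coolest/driest) among n years to the shading category."""
    frac = rank / n
    if frac <= 0.10: return 'Much Below Normal'
    if frac <= 1 / 3: return 'Below Normal'
    if frac < 2 / 3:  return 'Near Normal'
    if frac < 0.90:   return 'Above Normal'
    return 'Much Above Normal'

record = [random.gauss(10.0, 1.0) for _ in range(114)]  # stand-in for 1895-2008
latest_rank = sorted(record).index(record[-1]) + 1      # rank of the latest year
print(latest_rank, classify(latest_rank, len(record)))
```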
OT–JAXA shows 2009 arctic extent in third place, just 60,000 km2 below 2004. Are you going to have a bet on max extent? It’s about 2 months away.
Larry T, 8759:
“My pseudo-scientific expertise in the field led me to predict back in the 1970’s that the global cooling scare was hype and that we would continue in the warming trend between ice ages, and currently that we are in a cyclic cooling cycle (PDO and AMO the primary earthly drivers) driven by a solar minimum the depth of which has yet to be determined. This is potentially a major problem that needs to be looked at seriously, and by better minds than mine. A mini ice age is a major problem, while human-induced global warming is at best an improvement or at worst (in my opinion) a problem that can be solved through technology.”
Just want to say that I agree very much with this statement! I don’t see the myopic brainwashed policy-makers even CONSIDERING the effects of a severe cooling. A cooling of only 2-3 C could be devastating because of its effects on the world’s food supply.
I never got my brownies for winning the November ice contest…. 🙁
I know… I let myself get so behind. I’m behind on two sets now. Send your address again!
lucia in (Comment#8713) January 14th, 2009 at 2:10 pm
You are great at science and rationality, not so great at understanding debate, irrationality, and human nature.
If Bob says that Carol is not worth listening to, that he does not listen to Carol, that no one should listen to Carol, and that Carol says all sorts of outrageous and absurd things, you can be pretty sure that Bob is listening to every word Carol says and knows exactly what Carol does in fact say.
Joseph
“Something else I think is interesting is that the temperature anomaly data is never fully quality-controlled and made ‘final’. It apparently remains preliminary forever.”
Could this be because it is not possible to make it final as it is based on various uncertainties and always incomplete data?
Why does this:
http://weather.unisys.com/surface/sst_anom.html
look so different from this?
http://www.osdpd.noaa.gov/PSB/EPS/SST/data/anomnight.1.15.2009.gif
VG
I can’t see much difference myself… Different projections, yes, but if you use the legends the anomalies are pretty much the same (the colours may be what is deceiving you).
Wait, what?
“In my opinion, both types of graphs are useful in their proper context. The short term graph is useful when evaluating whether or not the IPCC projections are able to predict things that happen in the future. The longer term graph is more useful if we are trying to discuss whether or not data support the general hypothesis that warming has occurred.”
That’s nonsense. Short term data doesn’t say anything about how temperature is going to change in the future since there is so much noise. If you removed the noise then it would make some sense. If you removed the ENSO and NAO effects from the data, there would be at least some sense in saying something about the temperature record of this century and what it tells us about the future.
If you take any dataset out of the historical temperature records (since the beginning of the rapid warming, which started about 50 years ago) which has shown declining temperatures for a few years, it has always been followed by rapid warming. Looking at the data we should expect rapid warming in the next few years instead of expecting that temperatures aren’t rising at the pace the long-term trend would suggest. But that’s just guessing. The short-term data doesn’t really tell us anything because of the large amount of noise.
I bet you have already read this: http://tamino.wordpress.com/2008/12/31/stupid-is-as-stupid-does/
Tuukka–
I didn’t say anything about predicting what is happening in the future using that graph.
How do you remove the noise?
What I do with these is use a statistical model that uses the noise properties to estimate the uncertainty in the trend, then compare to predictions. Large noise –> large uncertainty. However, as long as uncertainty intervals are finite, rather than infinite, we can say something and exclude some possible hypotheses or theories.
Were this not so, we would not even be able to say that the data since 1970 was inconsistent with no warming. It’s not as if those uncertainty intervals are zero.
It might be wise if you learned how to interpret data from a broader base of sources. It is true we can’t learn everything from this short period; I have never said we could. We can’t learn the one very specific thing Tamino wants to focus on. That seems to be whether or not there is sustained warming. There is too much noise to reliably distinguish between 0 C/century and, say, 1 C/century of warming using only 8 years of data.
But we can learn something. And one of the somethings we can learn is that the current trend is inconsistent with the about 2 C/century of warming projected by the IPCC in the AR4.
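For readers who want to see the shape of such a test, here is a minimal sketch (not necessarily the exact procedure used on this blog): an OLS trend on monthly anomalies, with the slope’s standard error inflated for lag-1 autocorrelation via an effective sample size, compared against 0.2 C/decade.

```python
# Minimal sketch (not necessarily the exact procedure used on this blog):
# OLS trend on monthly anomalies, standard error inflated for lag-1
# autocorrelation via an effective sample size, then a t-statistic against
# a target trend such as 0.2 C/decade.

def trend_test(anoms, target_per_decade=0.2):
    n = len(anoms)
    x = list(range(n))
    xbar, ybar = sum(x) / n, sum(anoms) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, anoms)) / sxx
    resid = [yi - ybar - slope * (xi - xbar) for xi, yi in zip(x, anoms)]
    ss = sum(r * r for r in resid)
    s2 = ss / (n - 2)                                        # residual variance
    r1 = sum(a * b for a, b in zip(resid, resid[1:])) / ss   # lag-1 autocorr
    neff = n * (1 - r1) / (1 + r1)                           # effective sample size
    se = (s2 / sxx) ** 0.5 * (n / max(neff, 3.0)) ** 0.5
    per_decade, se_decade = slope * 120, se * 120            # C/month -> C/decade
    t = (per_decade - target_per_decade) / se_decade
    return per_decade, se_decade, t       # |t| > ~2: inconsistent at roughly 95%

# Usage (hypothetical variable name): trend_test(monthly_anomalies_since_2001)
```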
Tuukka Simonen (Comment#8809)
Tuukka, I suggest you spend at least the same amount of time exploring this site as you do that other site you mention. If you do, and have a bit of understanding of statistics, you’ll find that Lucia does take weather variability into account. Read the caveats she has given regarding her trend calculations. To say that trends of eight years mean nothing is foolish, as would be the opposite conclusion that it proves a long-term negative trend. What she shows is that a present trend as high as 0.2 C/decade for the first portion of this century is unlikely, a conclusion that even Jim Hansen apparently now is accepting (although I hesitate to give great credence to his pronouncements based on his past history).
Lucia, I made the above reply before reading your response. If I’ve misstated your position too badly, feel free to delete.
Tuukka,
Removing the effect of ocean oscillations from the data is taking an essential part of what controls surface temperatures (and climate in general) out of the data. What would be the point of that? Are we also going to remove the effect of clouds and other surface-temperature-controlling mechanisms from the equation? Besides, if we knew exactly what their effects were, we wouldn’t be so bad at modelling climate in the first place.
Chris Schoneveld —
I’m also puzzled how Tuukka proposes actually removing the PDO and ENSO from the data. There may be ways to add parameters other than time to a data fit, but that’s not precisely the same as removing the effect of PDO and ENSO.
Lucia,
“But we can learn something. And one of the somethings we can learn is that the current trend is consistent with the about 2 C/century of warming projected by the IPCC in the AR4.”
I don’t think you meant to say that. Did you? 🙂
Jorge– Erhmm.. yes.
Lucia,
“But we can learn something. And one of the somethings we can learn is that the current trend is consistent with the about 2 C/century of warming projected by the IPCC in the AR4.”
Are you using “consistent with” to mean “not statistically excluded”?
Also, I noticed you used the words “can learn” and not “have learned”. Was that choice of words significant?
Duane- I forgot the “in” in “inconsistent”. I think we can learn something from short term data and have learned something. Scientists learn something from small amounts of data all the time. They learn more from more data.
Tuukka seems to have developed the misconception that we can’t learn anything at all from short amounts of data. That’s just incorrect.
Jorge, Lucia types as fast as she thinks 😉
Dave Andrews (Comment#8798)
Dave, I don’t think so. The rankings data (divisional and state) IS finalized, and it is the same data that is used to create the anomaly dot and contour maps. The rankings are anomaly rankings for those areas (division or state). I am guessing that NCDC just doesn’t consider those maps important enough to link them to the final data. Who knows?
Joseph (Comment#8881),
Off topic!
You tempted me with your capital IS. Data ARE finalized and data ARE used to create etc.. I guess it is a European obsession to honour the original Latin usage. It’s the same with the plural usage of the word media. We say the media ARE biased, for example.
Niels,
“Lucia types as fast as she thinks”
So do I, but I only type with one finger!
I am in the “data” can be plural or singular camp.
Here’s “Ask Oxford”: http://www.askoxford.com/asktheexperts/faq/aboutgrammar/data
Here’s Merriam-Webster, an American dictionary: http://www.merriam-webster.com/dictionary/data
In the US, “data are” has become a lost cause. A few people still try to enforce it, but most use it like “hair”. Hair may be singular. It may be plural. I can’t remember the last time I heard anyone say “datum”.
“Data” is indeed used in a singular sense as a synonym for “information”. I don’t like it but have to accept it, like in the case of the word agenda. However, the use of the word data in scientific literature is mostly in reference to sets of numbers and in that context it is normally (in Europe) recommended to treat it as a plural noun. Fortunately Americans have not yet corrupted the word to the extent that they make it “doubly plural” i.e. datas, like they have done with the word visas. In Dutch we say one visum and two visa. In English they say one visa and two visas.
I think the disconnect between CO2 increases and temp increase is made quite clear by this graph.
Particularly if you look at the CO2 trend over the same time.
Here are the CO2 growth rates (ppm/year) from Mauna Loa:
2000 1.74
2001 1.59
2002 2.56
2003 2.25
2004 1.59
2005 2.53
2006 1.72
2007 2.14
2008 1.58
Thus the total CO2 growth is 17.6 ppm, from 368 in 2000 to 385.6 in 2008.
But the preindustrial CO2 level was supposedly 280 ppm, thus the anthropogenic excess was but 88 ppm above normal in 2000, and thus the 17.6 ppm rise since 2000 represents an ~20% increase in anthropogenic CO2. (The CO2 forcing factors you see for CO2 discount the background level; see IPCC 2001.)
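A quick check of the arithmetic above, using the comment’s own numbers (the yearly increments sum to about 17.7 ppm, matching the quoted 385.6 - 368 = 17.6 to within rounding):

```python
# Back-of-the-envelope check of the comment's arithmetic, using its own numbers.
growth = [1.74, 1.59, 2.56, 2.25, 1.59, 2.53, 1.72, 2.14, 1.58]  # 2000-2008, ppm/yr
total = sum(growth)               # ~17.7 ppm; the comment quotes 17.6 (385.6 - 368)
excess_2000 = 368 - 280           # ppm above the assumed preindustrial 280 ppm
print(round(total, 2), round(total / excess_2000, 3))  # ratio ~0.20 -> the "~20%"
```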
So one can see the clear disconnect.
Anthropogenic CO2 has increased significantly
Methane is up a tad
NOx is also up
CFCs are steady
And yet the climate hasn’t warmed at all over this time frame.
In fact, if you look at the UAH/RSS temps, the opposite trend seems to be evident.
OOPS
Arthur