GISTemp reported a September 2010 temperature anomaly of 0.56C. This represents a slight rise relative to August’s anomaly of 0.53C but falls below the September 2009 anomaly of 0.66C. This month’s temperature anomaly is compared to anomalies reported since 1980 and all September anomalies in the figure below:

(Note: Anomalies are relative to the reference period Jan 1980-Dec 1999.)
Note the least squares trend computed since Jan 1980 is 0.166C/decade, which falls below 0.2C/decade.
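For anyone who wants to reproduce that number, here is a minimal sketch of the least squares computation. It assumes the monthly anomalies are already in an array; the series below is synthetic, not the actual GISTemp data.

```python
import numpy as np

def trend_c_per_decade(anoms, start_year=1980, start_month=1):
    """Least squares trend of monthly anomalies, in deg C per decade.

    `anoms` is assumed to be a 1-D array of monthly anomalies (deg C)
    beginning at (start_year, start_month) with no gaps.
    """
    t = start_year + (start_month - 1 + np.arange(len(anoms))) / 12.0
    slope_per_year = np.polyfit(t, anoms, 1)[0]
    return 10.0 * slope_per_year

# Synthetic example: 369 months (Jan 1980 - Sept 2010) rising 0.0166 C/yr plus noise.
rng = np.random.default_rng(0)
fake = 0.0166 * np.arange(369) / 12.0 + 0.1 * rng.standard_normal(369)
print(round(trend_c_per_decade(fake), 3))  # close to 0.166 C/decade, give or take the noise
```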
Now for those watching the horse-race to see whether GISTemp will break the record for the calendar year, here’s a graph of 12 month lagging averages.
As many already know, earlier this year GISTemp broke the previous all-time high 12-month average set in July 2007. Assuming the average for Jan-Sept remains unchanged and the average temperature for Oct-Dec doesn't drop 0.037C or more below September's value, the GISTemp anomaly for calendar year 2010 will also exceed the previous record set in 2005. Surface temperatures aren't plunging as rapidly as some handicapping the horse race might have predicted. It's going to be close.
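To make the arithmetic behind that 0.037C figure explicit, here is a back-of-envelope sketch. The Jan-Sept average and the 2005 record plugged in below are placeholders rather than GISS's actual numbers; with the real values the threshold works out to roughly 0.037C, as stated above.

```python
# Back-of-envelope check of the "0.037C below September" condition.
# The first two numbers are placeholders: substitute GISS's actual
# Jan-Sept 2010 average and the 2005 calendar-year record (same baseline).
avg_jan_sep = 0.58    # placeholder, deg C
record_2005 = 0.56    # placeholder, deg C
sept_2010   = 0.56    # September 2010 anomaly from the post

# Annual mean = (9 * Jan-Sept avg + 3 * Oct-Dec avg) / 12, so 2010 ties
# the 2005 record when the Oct-Dec average equals:
oct_dec_needed = (12 * record_2005 - 9 * avg_jan_sep) / 3
print("Oct-Dec must average at least", round(oct_dec_needed, 3), "C")
print("i.e. drop no more than", round(sept_2010 - oct_dec_needed, 3),
      "C below September")
```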
Meanwhile, those interested in how the 12 month lagging average is tracking the multi-model mean for models driven using the A1B scenario will notice that after just piercing the multi-model mean around the peak of the El Nino, the observations once again fall below the multi-model mean, though only slightly. Will the 12 month lagging average lie below the multi-model mean after December's anomaly is reported? Yes, unless temperatures rise. That seems unlikely, but then again, September 2010's anomaly was higher than August's. So, who knows?

Something I noticed because of my post in the July GISTEMP thread: NASA has downgraded the temps for two months this year. Temps have to come in at about 0.52 or lower for the rest of the year for 2010 to not break a record.
The bottom of the La Nina cooling that follows most El Ninos comes about 18 months after the El Nino-driven global average peak. The peak was in April, so about a year from now we should be near the maximum cooling. Temperatures appear to fall almost linearly (but with noise!) from the El Nino peak, so it is likely that we will see a drop of another ~0.12C by December, or a drop of ~0.06C for Oct/Nov/Dec relative to September… so a record for all of 2010 seems unlikely.
Is there a Quatloo bet going on if 2010 breaks the GISS record?
SteveF– No betting on that. I should set that up. It’s going to be close.
What do you estimate the uncertainty in the baseline means to be?
Would it be appropriate to consider not only measurement uncertainty, but also weather noise when estimating the baseline uncertainty?
I questioned the purpose of the 0.2C/decade trend line when it was there. Now I wonder why it's gone. Change is jarring.
Here is an example of what this chart looked like 3 months ago… Why the change?
http://rankexploits.com/musings/wp-content/uploads/2010/07/GISSTempMonthly.jpg
Ron
No real reason. It's sort of random. I have both in the spreadsheet. I am not discussing the uncertainty intervals in this post, and hadn't in a while, so I happened to pick the version that doesn't show them. (The reason I hadn't in a while is I've been trying to budget my time a bit. That's also why blogging was lighter in Sept.)
I'm not sure what your question is about showing 0.2C/decade for reference. It's a concrete number in the IPCC report. They project about 0.2C/decade right now, and more specifically, the tables give temperature increases relative to the average for 1980-1999. Comparison of the observed trend to 0.2 provides people with information to decide if this is “about 0.2 C/decade”, “less or more than 0.2 C/decade”, or “much more or less than 0.2 C/decade”.
What is it you question about showing a graph that provides this information?
Robot–
What do you mean by “uncertainty in the baseline”? The baseline is just a reference choice. Here, the baseline is the average temperature computed using the reported values from Jan 1980-Dec 1999. I believe I have correctly downloaded the values from GISS. GISS reports 2 significant figures. There will be round off error in the reported values. (That is: GISS could report more sig figures if they wanted to.) So, the “error in my determination of the baseline” based on the values GISS provides would be roughly ~0.01/sqrt(12*20) ≈ 0.001 C.
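For concreteness, here is a minimal sketch of that baseline and round-off computation. The array of reported anomalies is a placeholder; nothing about GISS's actual file format is assumed.

```python
import numpy as np

# Placeholder: fill with GISS's reported monthly anomalies (to 0.01 C)
# for Jan 1980 through Dec 1999, i.e. 12*20 = 240 values.
reported = np.zeros(12 * 20)

baseline = reported.mean()

# Crude bound on round-off error in the baseline: ~0.01 C per reported
# value, averaged over 240 values, shrinks roughly as 1/sqrt(N).
roundoff = 0.01 / np.sqrt(12 * 20)
print(baseline, roundoff)  # roundoff ~0.0006 C, consistent with the ~0.001 C quoted above
```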
I suspect this is not information you were looking for. But the “error in the baseline” is just not of much interest and weather noise is never included in computing “the baseline”. It’s considered when interpreting the value of anomalies relative to the computed baseline.
Are you concerned about the uncertainty in monthly values? Annual average values? Etc? As in: Do you want to know to what extent we think GISS's anomaly (relative to any arbitrarily chosen baseline) gives us the correct anomaly value for any given year? That's a bigger number. 🙂
Or are you interested in something else? (The determination of the trend has a smaller uncertainty than the uncertainty for any annual value.)
My graphs just show what GISS’s values currently compute to.
I kind of remember that graph showing up in some very short time scale charts – like 2001-2008. That seemed questionable. For instance, was that time frame long enough for valid comparisons to what was intended (I assume) to be a multi-decadal trend? But your recent charts had included the longer time frame and I became kind of accustomed to them. Mostly I was just noticing the change.
I hope that you can provide us an update to the long-term trend in this chart in January. I think I missed this during the discussion of MMH2010
http://rankexploits.com/musings/2009/year-end-trend-comparison-individual-model-runs-2001-2008/
Ron–
Yep. I have shorter time scale graphs too. I think those are fine and I sometimes show those too. I could now. I manually change time stamps on graphs when I fiddle with spreadsheets; it looks like the last time I might have shown a shorter time graph was August 14? (Or not. I'd have to look.)
I mostly switched because some people seem to want to believe the only reason the computed trend is lower than the projected one is due to the short time period. This is not true. The computed trend is lower than the projected one if we start in 1980 or 2001 or any number of start points.
I could show these when Hadcrut comes out. (If I do, in fairness, I’ll show GISS since it’s the one that makes the models look “least bad”. )
Warning: When I do, the post will not discuss the computations of the uncertainty intervals in any detail, but they will be on the graphs.
Ron– I remember another reason I've been favoring the longer graphs. When people started wanting to ask whether we set all time records, I wanted to show graphs that include the all time record. For some (not all) of the observational data sets, that means we need to include 1998. That means I needed to show a longer graph, so I picked the versions that start in 1980.
I don’t see any need to show 20 graphs when announcing the monthly GISS revealed temperature, so I’m pretty much showing 2. But if you’d like to see the shorter graphs, not a problem for me. The GISS trend since 2001 is 0.07 C/decade which is quite a bit lower than 0.2 C/decade.
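For completeness, here is a sketch of that start-point check, with a synthetic series standing in for the real data; swapping in the actual GISS monthly anomalies lets you repeat the comparison for any start year.

```python
import numpy as np

# Synthetic monthly anomaly series, Jan 1980 - Sept 2010 (369 months),
# just to illustrate the check; substitute the real GISS series to use it.
rng = np.random.default_rng(2)
months = 1980 + np.arange(369) / 12.0
anoms = 0.0166 * (months - 1980) + 0.1 * rng.standard_normal(369)

for start in (1980, 1990, 1995, 2001):
    sel = months >= start
    trend = 10.0 * np.polyfit(months[sel], anoms[sel], 1)[0]  # deg C per decade
    print(start, round(trend, 3), "C/decade vs the projected ~0.2")
```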
@Lucia: I agree that the definition of the base-line is as simple as you say. I raised the question because you plot two series together and talk about the difference after 'base-lining'. Specifically you state: “the observations once again fall below the multi-model mean”. Some of that difference could be due to weather noise (including things like El Nino). Subjectively, the difference looks rather insignificant. It could probably easily change sign if you chose another base-line.
When I asked about base-line uncertainty, I really meant the uncertainty in shifting the two series to a comparable level. Comparable, in the sense that the conclusions are robust to changes in the base-line interval. That is why I think it is appropriate to think of it in terms of a relative base-line uncertainty. Anyway, my very crude estimate of “this” uncertainty based entirely on your graph is ~0.05C = 1.96*0.1/sqrt(17). (17 is my estimate of the effective degrees of freedom in the weather noise.)
I hope it makes more sense now.
robot
Agreed. So, I will explain why I chose this baseline:
I show both observations and projections relative to the average temperature from Jan 1980-Dec 1999 because the discussions of projections based on models in the AR4 describe projected increases in temperature relative to that specific baseline.
But you are correct that choice of baseline makes a difference. If I chose the baseline from 1900-1999 inclusive, the models would look worse than they do in the figure above. Because of the way re-baselining works, picking a recent baseline (e.g. 1980-1999) will tend to make models look ok because models and observations are mathematically forced to agree during a recent period. So, for example, using the baseline from 1980-1999 inclusive, the average temperatures for models and observations are forced to agree during that relatively recent period. They could have different slopes of course– but still, some degree of agreement is imposed by the arithmetic.
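To make that forced agreement concrete, here is a small sketch with synthetic series standing in for the observations and the model mean; only the re-baselining step is the point.

```python
import numpy as np

def rebaseline(series, years, base_start, base_end):
    """Subtract the series' own mean over the baseline years (inclusive)."""
    mask = (years >= base_start) & (years <= base_end)
    return series - series[mask].mean()

years = np.arange(1900, 2011)
obs = 0.007 * (years - 1900) + 0.1 * np.sin(years / 3.0)   # synthetic "observations"
models = 0.010 * (years - 1900)                            # synthetic "model mean"

for base in [(1980, 1999), (1900, 1999)]:
    o = rebaseline(obs, years, *base)
    m = rebaseline(models, years, *base)
    # Both re-baselined series average to zero over the baseline window,
    # so they are forced to agree there; the apparent model-minus-obs
    # gap at the end of the record depends on which window was chosen.
    print(base, "gap in 2010:", round(m[-1] - o[-1], 3))
```

In this synthetic case the recent window gives a smaller end-of-record gap than the century-long window, which is the "less bad" effect described above.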
It happens that that particular baseline tends to make the models look “less bad” than most other choices– though I'm not sure if it is the choice that makes them look “best”. I chose it because it closely corresponds to the choice I think the IPCC made when projecting forward using models.
I’m still not sure what you mean. I also don’t know which conclusion this magnitude of uncertainty would affect. What specific thing do you gauge as having an uncertainty of 0.1? Why is 17 the effective degrees of freedom in “weather noise” and for what? I’m assuming the 1.96 corresponds to wanting some 95% confidence interval.
If your question is: given any baseline, if GISS reports an annual anomaly of 2C for the annual average value ending Dec 201x, what is the uncertainty in that annual average anomaly as reported by GISS? That isn't really the 'uncertainty in the baseline'. It's an observational uncertainty, and we'd get the same value for the uncertainty (but a different one for the anomaly) for any baseline.
@Lucia:
Let's say that we have two series A & B. They are both anomalies of global average surface temperature in Kelvin. So, we have
A(t)=T_obs(t)-k1
B(t)=T_modelaverage(t)-k2
The problem is we don’t know the two constants k1 and k2, and so cannot say whether T_obs(present)>T_modelaverage(present) from A and B alone. We have to evaluate whether this is true:
A(present)+k1-k2 > B(present)
This is usually why we choose to re-baseline them to a common interval. We assume that the mean of T_obs and of T_modelaverage should be the same over the base-line interval. If we do that then we can derive (k1-k2). I am simply arguing with the assumption of equality over the base-line. So, I am talking about the uncertainty in (k1-k2) obtained from base-lining.
—
I don't think the numbers are important for the discussion but here goes anyway: I got the 17 from counting wiggles in your plot and the 0.1 was just an estimate of the typical ENSO standard deviation. As I said it was crude. But I think you would get roughly the same using a more solid approach. One idea would be to check it using the model ensemble. I would pick different realizations from only one single model. Then we know that they really should be the same over the baseline interval except for weather noise. I would then withhold one model run and consider that a surrogate of “the observations”. In this case we know that (k1-k2)=0 and we can see what we get from re-baselining. From this a simple bootstrapping approach could be devised for the uncertainty in (k1-k2) estimated using baselining.
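A minimal sketch of that withhold-one-run idea, with a synthetic ensemble standing in for actual model runs. The white-noise weather term here is an assumption (real weather noise is autocorrelated, which is why the effective degrees of freedom matter), so the resulting number is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_months = 10, 12 * 20            # ensemble of runs over a 20-year baseline
signal = np.linspace(0.0, 0.4, n_months)  # common forced signal (synthetic)
runs = signal + 0.1 * rng.standard_normal((n_runs, n_months))  # plus "weather noise"

k_offsets = []
for i in range(n_runs):
    pseudo_obs = runs[i]                                   # withheld run plays "observations"
    model_mean = runs[np.arange(n_runs) != i].mean(axis=0)
    # Re-baselining equates the two means over the baseline interval, so the
    # implied (k1 - k2) is the difference of baseline-period means. Its true
    # value is 0 by construction, so the spread measures the baselining error.
    k_offsets.append(pseudo_obs.mean() - model_mean.mean())

print("spread of implied (k1 - k2):", round(float(np.std(k_offsets)), 4))
```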
Robot–
The baseline is whatever you get when you compute it from the reported values, which are just numbers. There is no “uncertainty” in that. All other values are relative to that '0' reference point.
Once again: Are you, in some convoluted way, trying to figure out if we can know whether the 12 month annual average anomaly in 2010 for the actual earth is really “xC” relative to the average for 1980-1999 if our measurements were perfect? Or how much uncertainty we have about the real earth anomalies? GISS has estimates for those and they are larger than what you are getting.
Also– I have no idea why you are worrying about “weather noise” in your discussion of k1-k2. The weather on earth was the same when GISS measured it in Jan 1980 as when Hadley measured it. The earth had El Ninos or La Ninas, volcano eruptions, etc. at the same time for GISS and Hadley.