HadCrut June data are in. I don’t know if I missed the alert earlier, or if I just happened to catch this before the alert arrived. Either way, the temperature anomaly was 0.534C, making it the 2nd warmest June anomaly in the HadCrut record, exceeded only by June 1998. Temperature anomalies since 1980 are shown below with June anomalies circled:
As in previous posts, I’m watching the 12-month running averages to detect records. The HadCrut 12-month average is illustrated in blue; a “what if” 12-month average, computed by assuming the current temperature anomaly freezes at its latest value, is indicated in green.
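If you want to replicate the two curves, the arithmetic is nothing more than this (a minimal numpy sketch; `anomalies` is assumed to be a 1-D array of monthly HadCrut values you’ve already loaded):

```python
import numpy as np

def trailing_12mo_mean(anomalies):
    """Trailing 12-month mean of a 1-D array of monthly anomalies."""
    kernel = np.ones(12) / 12.0
    # mode="valid" drops the first 11 months, where no full window exists
    return np.convolve(anomalies, kernel, mode="valid")

def what_if_frozen(anomalies, months_ahead=11):
    """12-month means assuming the latest anomaly just repeats from here on."""
    frozen = np.concatenate([anomalies, np.full(months_ahead, anomalies[-1])])
    return trailing_12mo_mean(frozen)
```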

The 12-month HadCrut average has not broken the previous record for all-time-high 12-month temperature anomaly, and it appears unlikely to do so. Moreover, it’s looking more and more likely that the 12-month average temperature anomaly at the top of this El Nino will not pierce the multi-model mean projection based on IPCC models run under the SRES scenarios.
Just for information: when you write “anomaly”, do you mean “the difference between the current monthly temperature and the average of temperatures for the same month over a certain baseline period of N past years”?
Thanks!
An observation: the 08/09 La Nina really jumps off the page at you, almost as much as the 98 El Nino, but otherwise the 2000s ENSO signal is not particularly strong (low volatility, market traders would say).
well, second warmest, not a record. “sceptics” celebrating “global cooling” all over the world, but mostly on WuWt!
“sceptics” celebrating “global cooling” all over the world, but mostly on WuWt!
= = = = = = = = = = = = = = = = = =
They do tend to be doing victory laps on every post whenever I read over there.
Personally, I am much more interested in where people think the models seem to be wrong. By people I mean sceptics without the quotation marks.
Re: sido (Jul 22 01:25), Yes, that’s the definition. Hadley does the subtraction.
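In code, the definition looks roughly like this (a toy sketch, not Hadley’s actual processing; `temps` is a hypothetical [year, month] array of absolute monthly means):

```python
import numpy as np

def monthly_anomalies(temps, first_year, base_start=1961, base_end=1990):
    """temps: 2-D array indexed [year, month] of absolute monthly means.
    Anomaly = each month's value minus that same calendar month's mean
    over the baseline period."""
    rows = slice(base_start - first_year, base_end - first_year + 1)
    baseline = temps[rows].mean(axis=0)   # one climatological mean per month
    return temps - baseline               # broadcasts across all years
```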
Ok, thanks a lot!
In regards to the anomaly, does that mean that the absolute global temperature is now .534 degrees warmer? In other words, the absolute temp is now 15.534° as compared to 15° one hundred years ago?
Genghis,
It’s 0.534 C warmer than the 1961-1990 average temperature (not from 100 years ago). Converting anomalies to absolute temps is doable, though keep in mind that warming isn’t uniform; e.g., some parts of the world warmed much more than 0.534, others much less (or even cooled).
sod (Comment#49591) July 22nd, 2010 at 5:42 am
“well, second warmest, not a record. “sceptics” celebrating “global cooling” all over the world”
As one of those people suffering the ‘year without summer’ in the Pacific Northwest, I wouldn’t call it ‘celebrating’.
Our vegetable garden shows absolutely no sign of producing anything edible.
The height of the corn in the local cornfields is about knee high; absent a hot August and September, the local corn farmers are going to have very poor harvests.
Crop selection and planting is dependent on having some idea of what the weather conditions will be for the growing season. The current government habit of forecasting ‘hotter than ever’ isn’t particularly helpful if the ‘hot’ fails to appear. Corn doesn’t grow in ‘cool’.
Harryw2
Our Chicago area summer is fine for growing. One of my pepper plants has a red pepper on it!
“One of my pepper plants has a red pepper on it!”
Lucia,
I’m soooo jealous. All I have are cucumbers so far… and those by accident. :/
Andrew
My garden is finally doing very well. Call it July normal, after an atrocious June. Too hot, NO RAIN, and ~16 tons of composted horse Doo-Doo. The horse poo took a little time to break down completely.
Anyway, except for being a little late and the deer and other critters taking what they wish, we’re having, now, a normal garden with extra growth due to that ole Poo.
Up in Wisconsin, my veggie garden is doing fine. I’ve gotten a bunch of peppers already, and my sweet corn plants are over 6 feet tall. But it’s never a good growing season everywhere. It’s weather, not climate.
There have been quite a few adjustments since I last downloaded HadCRUT in March, particularly in 2010.
Zeke – “It’s 0.534 C warmer than the 1961-1990 average temperature (not from 100 years ago).”
I guess what I am really asking is: what is the actual absolute global mean? In other words, what is the absolute global mean for 1961-1990?
dorlomin (Comment#49592)
The models are wrong where they have to use sulfate cooling to counter the warming they think GHGs should (but do not) create. Their mistake concerns changes in the rate of ocean upwelling… a perfect example of that is ENSO (although there is much more, and on different time scales). I do not know if the physical mechanism is in the actual change in the temperature of the sea surface or in changes in clouds (or both).
None of the measures of sulfate cooling, particularly optical thickness measurements going back to the ’50s, support this idea of global dimming created by the post-WW2 economic boom and subsequent cleaning up of the air.
It’s a classic example of how the AGW crowd and their grand assortment of activist scientists have had to create something to explain problems with their hypothesis.
Genghis
That’s a difficult thing to say with precision. It’s somewhere around 14 C.
Carrot eater – “That’s a difficult thing to say with precision. It’s somewhere around 14 C.”
Can’t they simply average the temps together from 1961 to 1990 and come up with a very precise number? Say 13.778563? It seems that they are very precise with the anomaly number; shouldn’t the mean global temp be extremely precise?
I am trying to wrap my head around why it seems so hard to get an absolute global mean temp.
@Genghis.
http://hadobs.metoffice.com/indicators/index.html
Q. Why do you use anomalies?
A. Anomalies vary slowly from one place to another – if it is warmer than average in London, it is likely to be warmer than average in Paris too – but actual temperatures can vary greatly from one weather station to its nearest neighbour. The average anomaly for, say, Europe is likely to be representative of a large area, the average absolute temperature will be representative of only a very limited one.
I’d add that anomalies are less sensitive to changes in the station network too.
Genghis–
Oddly enough, it’s harder to get the absolute temperature than the anomaly. Sub-optimal distribution of thermometers makes a greater difference to absolute temperature than to the anomaly!
This is less odd than you might think. There are loads of other measurements where it’s easier to measure a differential value than an absolute value. The absolute temperature will likely be biased by the fact that it’s easier to place thermometers in relatively convenient places (like, say, someone’s cow pasture) rather than difficult places (like, say, the top of a mountain). But you can still detect an anomaly fairly well if you can assume the mountain top and the farmer’s field warm at more or less the same rate. (That’s not to say there are no problems with the assumption. But the bias will likely be smaller.)
This sort of mind-bending idea that you can measure differentials (i.e. anomalies) better than absolutes is not limited to measuring temperature anomalies. It’s a snap to measure differential pressure, but not quite as easy to measure absolute pressure.
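A toy simulation makes the point concrete (every number below is made up; the only feature that matters is that the two stations share a trend but not an absolute level):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1961, 2011)
warming = 0.015 * (years - years[0])          # shared trend, C per year

# Very different absolute climates, same underlying warming
pasture  = 15.0 + warming + rng.normal(0, 0.2, years.size)
mountain = -2.0 + warming + rng.normal(0, 0.2, years.size)

# The "absolute global mean" depends on which stations you happen to own
print((pasture + mountain).mean() / 2)        # ~6.9 C: describes neither site

# Anomalies against each station's own 1961-1990 mean recover the trend
base = slice(0, 30)
anom = ((pasture - pasture[base].mean()) +
        (mountain - mountain[base].mean())) / 2
print(np.polyfit(years, anom, 1)[0])          # ~0.015 C/yr
```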
How meaningful are the slopes if the area is unknown? In other words, what if the anomalies are within the area?
Can the slopes mean anything if we don’t know the area? I am envisioning lots of functions at the same time, with us measuring only one of the functions.
Lucia might do it properly, later, and with HadCrut, too. But somehow I am thinking “dancing peak to peak” and wondering what Excel says about the rate of change of UAH temperatures from peak (February) in the Niño-Year 1998 to peak (March) in the Niño-Year 2010. The linear trend is upwards, 0.0003°C per month.
Maybe Dr. Jones is right about there being no significant rise.
(But maybe I made a few mistakes).
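For anyone checking my arithmetic, it is just an ordinary least-squares slope over the 146 monthly values (a sketch, not my actual spreadsheet):

```python
import numpy as np

def peak_to_peak_trend(anoms):
    """OLS slope in C/month over a run of monthly anomalies, e.g. the
    146 months from Feb 1998 through Mar 2010 inclusive."""
    months = np.arange(len(anoms))
    return np.polyfit(months, anoms, 1)[0]
```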
The two peak months in the 98 El Nino were about .15 warmer than the two peak months in the 2010 El Nino per UAH
It would be more accurate to say this is the 20th highest anomaly in the record.
The anomaly already takes into account what month of the year it is. July is the hottest month globally, so most Julys would be hotter than June 2010.
One could argue that the seasonality is changing but I haven’t heard that one yet.
Not correct, Bill. You start running into that whole land fraction thing again.
“MikeC (Comment#49689) July 23rd, 2010 at 2:50 pm
The two peak months in the 98 El Nino were about .15 warmer than the two peak months in the 2010 El Nino per UAH”
Yes, but I calculated the rate using all 146 months from “peak to peak” (but obviously not the 13-month means). The slight rise comes from the months in between. Took some copying and pasting, since UAH data is not in Excel.
Just how robust are the various trend estimates shown here (and, if not particularly robust, why show them)?
On the issue of anomalies, the thing that has always niggled at me is what it does to the parameter and error estimation (i.e. moving from anomalies at a whole lot of observation points across the globe to estimation of the average anomaly for the globe as a whole).
I haven’t seen many references to error estimation of the global anomaly (appreciate any pointers to the literature), but I can’t help feeling that using anomalies tempts one into ignoring systematic biases in the variability at observation points (the variance is underestimated because we measure temperature at points on the globe where the temperature is more stable).
It does surprise me a little that more work doesn’t seem to go into absolute temperatures. Much of the controversy around the temperature record seems to be around the adjustments that are made in order to cope with limitations in the ability to account for where and how temperature is being measured. This leads to a lack of transparency (arguments over UHI effects, sparse data, land/sea ratios etc).
The risk I keep worrying about is uncertain estimates being used as inputs into uncertain models to produce even more uncertain results, with no attempt to carry error terms through. Too much faux determinism for my liking.
I therefore idly have wondered about an alternative approach that goes back to absolute temperatures and uses the data set to directly estimate parameters for a model that partials out location effects (e.g. type of instrument, local population, sea/land, latitude, height above sea level, and other objective parameters known to influence temperature locally, regionally and globally). At least this would give estimates of temperatures over time with some sense of the significance of the parameter fit and the estimates themselves.
Anyone seen this done?
HAS
Eh? It’s a lot easier to accurately obtain the anomalies. That’s why they’re used.
There are many different sources of uncertainty you could aim at.
GISS only considers the uncertainty due to not having thermometers in all places. See here, Hansen et al (1987)
http://pubs.giss.nasa.gov/abstracts/1987/Hansen_Lebedeff.html
on pages 13362-13363, for starters
The CRU guys calculate all manner of error bars. Try Brohan et al (2005) “Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850”
(as the title tells you, the entire paper is about this topic)
http://hadobs.metoffice.com/crutem3/HadCRUT3_accepted.pdf
You think the weather stations are at locations where the temperature is more stable over time? What places are you worried about in this regard, that aren’t being measured? Greenland and the Arctic maybe?
Because it’s very much more difficult to calculate the (global or regional) mean absolute temperature and then track how it’s changing. If it isn’t clear to you why, mention that.
Land/sea ratio is just one guy confusing himself. UHI and data sparsity… this comment is already too long, but those can be discussed.
If you review the papers above, you’ll see some healthy uncertainty bars there. I don’t know what models you have in mind, but the temp anomalies you see above are not inputs into GCMs.
Thanks carrot eater.
I would have thought the actual temperature record was easier to obtain than the anomaly. Apart from the obvious point that the former is the raw data, to do the latter you need data from a fixed period to normalize your anomalies against. The usual point about anomalies is that they are easier to combine, my point was that this simplicity might just be a trap.
I had in fact read Brohan (but had forgotten) but not Hansen et al. The latter rather nicely illustrates my point. Here we have the output of a GCM being used to estimate the errors in estimates of gridded SH temperatures. This confusion of models and reality is the point I was making (and I was using “models” in the general sense, not the specific). I doubt it would pass muster today, but the risk is that the errors reported here are potentially being relied upon.
Another example of how the underlying variability in the data and estimates gets lost can be seen in Brohan’s attempts to make the variances in each of the grids the same (rather than what they actually are) for subsequent analysis that relies on this.
Anyway, back to the main point and your question: why bother? To make the models explicit and estimate the parameters directly from those.
Right now it is difficult to know what assumptions, models, errors etc have been used as one moves through data reading, adjustments for station changes/moves, missing data, changed environmental effects, estimation of grid temperatures etc. Buried within Brohan and hence the data set are a large number of assumptions about all these issues, some of which conceal implicit parameter estimation and the like.
If you work directly with the data and make your parameter estimation explicit (and strictly empirical) I suspect you’ll get a better understanding of what’s being measured.
MikeC (Comment#49689) July 23rd, 2010 at 2:50 pm
The two peak months in the 98 El Nino were about .15 warmer than the two peak months in the 2010 El Nino per UAH
According to Spencer’s site the Aqua Ch 5 has just experienced the hottest ~4 days in the record since 1979!
Indeed, within the thickness of the lines on the graph you could make a case for every day of July having been hotter than (or as hot as) any other day in that record!
Phil– On Aqua/channel 5, I’m getting that, matching calendar days, July 1-4 were not the hottest July 1-4s. But the remaining July days are the hottest anomalies for matched days. These are also the highest anomalies achieved this year. However, the all-time maximum daily anomaly is not broken.
The published July anomaly should come in with a high value.
lucia (Comment#49885) July 25th, 2010 at 6:14 pm
Phil– On Aqua/channel 5, I’m getting that, matching calendar days, July 1-4 were not the hottest July 1-4s. But the remaining July days are the hottest anomalies for matched days.
I wasn’t talking about anomalies, just the hottest temperatures. Looking at the graph, 1-4 are probably just slightly lower than previous days; however, most of the days in July are hotter than any previously recorded days.
These are also the highest anomalies achieved this year. However, the all-time maximum daily anomaly is not broken.
The published July anomaly should come in with a high value.
If July doesn’t come in with the highest July anomaly ever then there are some very strange adjustments going on!
Re: HAS (Jul 24 16:45),
“Anyone seen this done?”
It’s close to what TempLS does. It doesn’t try to separately model altitude etc. – instead it bundles all that into a local effects contribution to a least squares model which depends on station and month (seasonal) but not on year. The sum of that with a global temp component, which does not vary with location, is fitted using weighted least squares. Paper here.
Nick,
Thanks for this; it’s interesting.
My instinct would be to estimate the local temp function L(s(location, environment), date), with the global temperature G(date) being just the integral of this function over the surface of the globe. A major problem, I think, with the way in which you have characterized the model is that it assumes that all changes in global temperature are date dependent (and only date dependent). However, one of the things that is of particular interest is the extent to which this (and apparent autocorrelations in the temperature time series etc) is an artifact of local factors.
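In symbols (my own notation, nothing more than a restatement of the above), with A the total surface area of the globe:

```latex
G(t) = \frac{1}{A} \oint_{\text{globe}} L\big(s(\text{location}, \text{environment}),\, t\big)\, \mathrm{d}A
```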
Another issue is that it doesn’t allow the use of the dataset to estimate common location and environment relationships and introduce them into the model.
In the HadCrut data, the monthly anomaly was 0.756 in February 1998, but in the figure the maximum is 0.6. Is there an error?
Anton– I shift baselines to 1980-1999 for all reporting groups. That way we see numbers that mean the same thing when viewing different graphs.
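The shift itself is just a subtraction; roughly (a sketch with hypothetical variable names, not the exact script I use):

```python
import numpy as np

def rebaseline(anoms, years, new_start=1980, new_end=1999):
    """Re-express anomalies relative to a new baseline period, so series
    from different reporting groups are directly comparable."""
    mask = (years >= new_start) & (years <= new_end)
    return anoms - anoms[mask].mean()
```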