William Connolley suggested one can show that the very recent downtrends and flatlining of the GMST are expected based on past variability. He points out that ‘natural variability’ results in 5-year downtrends even during major uptrends. We can learn that by examining a graph like this:

See the loads of downtrends before the 1998 super El Nino: loads of flat and negative 5-year temperature trends before 2001 and/or 1998!
So… do these downtrends make the recent flat trends explicable within “natural variability”?
To check, I highlighted the overwhelming majority of the flat spots and downtrends by finding the major volcanic eruptions and more or less centering a box around each eruption year. Downturns tend to happen after major volcanic eruptions, so five-year trends ending during or just after an eruption tend to be negative.
Notice that on the right-hand side we see some downtrends, but the graph doesn’t show any volcanic eruptions near them. So, yes, downtrends were consistent with natural variability during 1970-2000: volcanic eruptions are natural!
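For anyone who wants to redo the basic calculation, here is a minimal sketch of rolling 5-year trends, with each window flagged when it ends shortly after a major eruption. The file name, column names, and the three eruption dates are placeholders of mine, not values taken from the graph above:

```python
# Sketch: rolling 5-year OLS trends of monthly GMST anomalies, flagging windows
# that end within a few years of a major eruption. File name, column names, and
# the eruption list are assumptions for illustration.
import numpy as np
import pandas as pd

df = pd.read_csv("gmst_monthly.csv")           # expects columns: year, month, anomaly
df["t"] = df["year"] + (df["month"] - 0.5) / 12.0

ERUPTIONS = [1963.2, 1982.3, 1991.5]           # Agung, El Chichon, Pinatubo (approximate)
WINDOW = 60                                    # 5 years of monthly data

rows = []
for i in range(WINDOW, len(df) + 1):
    chunk = df.iloc[i - WINDOW:i]
    slope = np.polyfit(chunk["t"], chunk["anomaly"], 1)[0]   # degrees C per year
    end = chunk["t"].iloc[-1]
    after_volcano = any(0.0 <= end - e <= 3.0 for e in ERUPTIONS)
    rows.append({"end": end,
                 "trend_C_per_decade": 10.0 * slope,
                 "after_volcano": after_volcano})

trends = pd.DataFrame(rows)
neg = trends[trends["trend_C_per_decade"] <= 0]
print(f"{len(neg)} of {len(trends)} five-year windows are flat or negative; "
      f"{neg['after_volcano'].mean():.0%} of those end within 3 years of an eruption.")
```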
In case you think it’s a coincidence, or I’m exaggerating the effect of volcanic eruptions, here is a graph showing how much anomalous forcing is caused by volcanic eruptions. I created it when comparing the annual average value to monthly values.
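The annual-versus-monthly comparison is just an averaging step; a sketch, assuming a monthly forcing series in a CSV (file and column names are hypothetical):

```python
# Sketch: compare monthly volcanic forcing values to their annual averages.
# File name and column names are assumptions for illustration.
import pandas as pd

forcing = pd.read_csv("volcanic_forcing_monthly.csv")   # columns: year, month, forcing_Wm2
forcing["date"] = pd.to_datetime(dict(year=forcing["year"],
                                      month=forcing["month"],
                                      day=15))
forcing = forcing.set_index("date")

annual = forcing["forcing_Wm2"].resample("YS").mean()   # calendar-year averages
print(annual.tail())
```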

Clearly, I need to find new volcano data, because looking at the recent temperature trends, it’s obvious I missed the memo about the huge 1999 volcanic eruption!
No, but you did apparently miss the memo on the largest El Nino ever recorded. Call it an anti-volcano. 🙂
The IPCC must have missed that memo too. Included in the ‘increasing body of observations [giving] a collective picture of a warming world’ in the SPM of the TAR were the statements that ‘Globally, it is very likely [90-99% chance] that … 1998 was the warmest year in the instrumental record, since 1861’, and that it was ‘likely [66-90% chance] that, in the Northern Hemisphere, …1998 [was] the warmest year [during the past 1000 years].’ There was no mention of 1998 being the year of the largest El Nino ever recorded.
I googled. This seems to be the Southern Oscillation index:
http://www.bom.gov.au/climate/current/soihtm1.shtml
Based on when people say El Ninos occur, I’m assuming negative is El Nino? I’m not seeing anything extraordinarily high or low in that.
Boris, how do you define the largest ever recorded? Is there a metric? Is that available on line?
Lucia,
For those that prefer a visual:
http://www.cdc.noaa.gov/people/klaus.wolter/MEI/
Looks like there were some pretty hefty El Ninos during the temperature ramp up, but I hear THOSE were caused by global warming.
http://www.terradaily.com/reports/El_Nino_Affected_By_Global_Warming_999.html
🙂
JohnM– Thanks. Information on El Nino is useful. That graph would suggest the 1983(?) El Nino was bigger. But maybe Boris has another source.
The more sources the better, I say!
Lucia, there are a couple more El Nino indices. The Oceanic Nino Index (ONI) from 1950 is here:
http://www.cpc.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
The NOAA CPC data from 1950 (including SST) is here:
http://www.cpc.noaa.gov/data/indices/sstoi.indices
The JISAO Global SST ENSO Index is here (Atlantic, Pacific, Indian Oceans 20N-20S)
http://jisao.washington.edu/data_sets/globalsstenso/globalsstenso18002007
Trenberth did a reconstruction of NINO3.4 data back to the 1800s.
http://www.cgd.ucar.edu/cas/catalog/climind/TNI_N34/index.html#Sec5
Many of the indices are tweaked in some fashion: standardized, normalized. And the anomaly data don’t necessarily share the same base years, so nothing seems to match.
Do the selected base years mean that the El Nino versus La Nina data truly reflect their impact on global temperature? Probably not.
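If anyone wants to line the indices up anyway, a common base period can be imposed before comparing them. A minimal sketch, assuming annual-mean series indexed by year and a 1971-2000 base period (both assumptions):

```python
# Sketch: re-baseline anomaly indices to a common base period so they can be
# compared. The base years and series names are assumptions for illustration.
import pandas as pd

def rebase(series: pd.Series, start: int = 1971, end: int = 2000) -> pd.Series:
    """Subtract the mean over the chosen base years (index = calendar year)."""
    return series - series.loc[start:end].mean()

# Usage with two hypothetical annual-mean index series indexed by year:
# oni_rebased = rebase(oni_annual)
# jisao_rebased = rebase(jisao_annual)
```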
A curious thing–I had just found the Trenberth reconstruction a few weeks ago and was looking for a trend. I created a running total and this is what leapt out.
http://tinypic.com/fullsize.php?pic=2wcr3pc&s=3&capwidth=false
Scaled and compared to global temperature:
http://tinypic.com/fullsize.php?pic=2t95d&s=3&capwidth=false
(Trenberth questions the reliability of the data before 1910 or so, where the two diverge.)
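For anyone who wants to try the running-total idea, here is a rough sketch: cumulatively sum an annual NINO3.4 series and least-squares scale the result onto GMST for a visual comparison. File and column names are placeholders:

```python
# Sketch: running total (cumulative sum) of an annual NINO3.4 anomaly series,
# rescaled for comparison with GMST anomalies. File/column names are assumptions.
import numpy as np
import pandas as pd

nino = pd.read_csv("nino34_annual.csv", index_col="year")["nino34"]
gmst = pd.read_csv("gmst_annual.csv", index_col="year")["anomaly"]

running = nino.cumsum()

# Align on common years and least-squares fit a scale and offset onto GMST.
common = running.index.intersection(gmst.index)
x, y = running.loc[common].values, gmst.loc[common].values
A = np.vstack([x, np.ones_like(x)]).T
scale, offset = np.linalg.lstsq(A, y, rcond=None)[0]
scaled = scale * running + offset

print("Correlation with GMST:", scaled.loc[common].corr(gmst.loc[common]))
```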
I created a video and a couple of follow-ups.
http://www.youtube.com/watch?v=DZ5OG6TMKBg
http://www.youtube.com/watch?v=V4DzpWpETsw
http://www.youtube.com/watch?v=dGHleDxQ8fQ
It also works with monthly data. I’ve yet to run it against specific anomaly data–Southern and Northern hemisphere, LAT, SST, etc–but I have a funny feeling about which will compare best.
I still don’t know for sure what it means.
Regards
Lucia, here’s the MSU Northern Hemisphere and North Pole anomalies and trends from 1978 to 1997.
http://tinypic.com/fullsize.php?pic=2qjvwqw&s=3&capwidth=false
Can’t really tell the difference in trends. And these are the MSU Northern Hemisphere and North Pole anomalies and trends from 1999 to 2007.
http://tinypic.com/fullsize.php?pic=mcb5md&s=3&capwidth=false
The two indices have diverged and the trends have increased, with the North Pole trend more than doubling the Northern Hemisphere trend. What happened between 1997 and 1999 to cause this divergence?
http://tinypic.com/fullsize.php?pic=zswazc&s=3&capwidth=false
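The trend comparison itself is straightforward to redo; a sketch, with the data file and column names invented for illustration:

```python
# Sketch: least-squares trends of two anomaly series over two periods,
# roughly what the linked plots show. Data source and column names are assumptions.
import numpy as np
import pandas as pd

def trend_C_per_decade(series: pd.Series, start: int, end: int) -> float:
    """OLS slope of annual anomalies (index = year) over [start, end], in C/decade."""
    chunk = series.loc[start:end]
    return 10.0 * np.polyfit(chunk.index.values, chunk.values, 1)[0]

msu = pd.read_csv("msu_annual.csv", index_col="year")   # columns: nh, npol (hypothetical)
for lo, hi in [(1978, 1997), (1999, 2007)]:
    print(lo, hi,
          "NH:", round(trend_C_per_decade(msu["nh"], lo, hi), 3),
          "North Pole:", round(trend_C_per_decade(msu["npol"], lo, hi), 3))
```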
Volcanoes:
“Volcanic Eruptions and Climate” by Alan Robock:
http://climate.envsci.rutgers.edu/pdf/ROG2000.pdf
“After the 1982 El Chichon and 1991 Pinatubo eruptions the tropical bands (30S–30N) warmed more than the 30N–90N band…producing an enhanced pole-to-equator temperature gradient. The resulting stronger polar vortex produces the tropospheric winter warming…” The winter warming lasts for a few years after the volcano.
Could a stronger polar vortex also have something to do with the drastic warming in the years following the 97-98 El Nino?
Also: “If a period of active volcanism ends for a significant interval, the adjustment of the climate system to no volcanic forcing could produce warming. This was the case for the 50 years from 1912 to 1963, when global climate warmed.”
I’m beginning to doubt your good faith, as well as your ability to spell my name. I pointed out (well before this current kerfuffle began) that 5-year trends are a bad measure of long-term variability. That’s all. Your post is misleading.
William says:
“I’m beginning to doubt your good faith, as well as your ability to spell my name.”
I think you set the tone for this discussion with your initial blog post:
“Atmoz has looked at this a bit, but if you prefer pointless statistics without physical understanding you want Lucia, or Prometheus.”
What I can’t figure out is why you can’t simply accept the result as statistically valid, acknowledge that the IPCC probably underestimated the cyclic effects related to the solar cycle/ENSO, and move on. If you are right, the temperatures will turn around in a year or two. However, if they don’t, this kind of analysis is what is needed to send the climate modellers back to the lab to figure out what went wrong.
William,
You responded to Roger’s post discussing mine with a hastily written post riddled with snarky remarks. The post certainly did more than simply suggest that 5-year trends can be misleading. You tried to wave away average results with irrelevant discussions of dice throwing, and you responded to those who tried to engage in discussion with links to posts where you show that these sorts of downtrends have been known to occur after volcanic eruptions. When questioned about your counterargument regarding the rarity of downturns during periods with no volcanic eruptions, you decreed that we were now off track from the main point.
I responded to your individual points, as some were echoed by my readers here. My readers have expressed interest in these sorts of things, so I discuss them. If you feel this reflects some sort of bad faith, I must ask which faith you suggest I adopt?
Raven, Atmoz’s post suggests an interesting idea. He is thoughtful, and many of his posts are well reasoned. However, for the most part, the idea doesn’t stand up when we use realistic values for the possible magnitude of the variability due to El Nino. I will be posting on that later this week. If the variability due to ENSO were anywhere near the magnitude he suggests, it would dwarf the excursions due to Pinatubo.
That William didn’t see this makes me doubt his claim that he is basing his views on physical understanding.
I think that the temperature trends you described in many of your posts may be supported by the extraordinary increases in both SH and NH ice (see Cryosphere Today). This is not only “weather”; it seems to be at least a 6-month trend (SH). But of course we will have to wait and see. My bet is that global temps will continue flat or fall for the next 5-6 years.
What is quite disturbing is that, from what the modelers say in public, you get the impression that they are not at all concerned about validation of any sort. I was actually offended by William’s suggestion that the one small part of a validation attempt made by Lucia was “pointless statistics”. I put much more effort into model validation for a million-dollar investment than the IPCC has done so far with the model ensembles used for multi-billion-dollar decisions. The IPCC’s wording is also suggestive: “assessing capabilities” rather than outright testing by normal means, and expressing “increased confidence” and the like instead of giving us the actual numbers for the models’ skill along with the graphics used to assess it. The only time they mention “residual” in their assessment chapter is with respect to the assumed randomness of ENSO…
It is, however, somewhat comforting that the dismay I feel about their naive attempts at modelling is exactly what they must feel about all the laymen weighing in on climate science debates… 😉
If this is a pointless statistic, then what of the “statistics” used to validate models before the IPCC made projections? And what statistics do they use to validate models?
The responses we are getting lead to a very real question: Should “the models are right” even be elevated to the status of “null hypothesis”? Has this been shown to the level that it ought to be?
Individual models, or even models of a certain type, being uncertain is not the same as AGW or radiative physics being in doubt. Parameterized models being uncertain is normally the working assumption until we get good agreement for a long time, and by good agreement, we mean agreement by statistical measures!
Lucia,
I don’t know if you have seen this paper: http://www.environmentalwars.org/articles_climate_of_belief.php
It uses the errors in cloud cover to estimate the true uncertainty in the model outputs and comes up with ±130 C by 2100 (yes, one hundred thirty).
Thanks Raven!
I have a lot of reading to do. 🙂
Raven,
Thanks for the link to Pat Frank’s paper. It literally tears a new one in the modelers’ “baby”. No wonder that confidence limits are never discussed or revealed. Error propagation as he describes must be considered. Climate science modelers should be ashamed of their obvious incompetence if they cannot refute his claims.
Here’s one to chew on from HADCRUT
http://www.cru.uea.ac.uk/cru/climon/data/themi/g17.htm
Lucia, the Patrick Frank paper. It cannot be that simple, can it? They cannot possibly have overlooked something as simple as the fact that errors increase with time, because each estimate’s error range is then subject to the same amount of error as you move forward? Surely that cannot be? These are perfectly intelligent people. Help!!
fred says:
“They cannot possibly have overlooked something as simple as the fact that errors increase with time because each estimate’s error range is then subject to the same amount of error as you move forward?”
Their error bands get wider as time goes on, so they do appear to be aware of the issue when it comes to the parameter that they are focused on (e.g. the sensitivity of climate to CO2). However, I suspect that they assume that the uncertainty on CO2 sensitivity will incorporate any errors due to cloud cover. I don’t know if this is a legitimate thing to do.
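The arithmetic behind the "errors grow with time" worry is easy to illustrate. A toy sketch, with a purely hypothetical per-year uncertainty that is not a number from the Frank paper: if per-step errors are independent, the accumulated uncertainty grows like the square root of the number of steps, while a systematic per-step error accumulates linearly:

```python
# Toy sketch of step-by-step error accumulation in a multi-decade projection.
# The per-year uncertainty below is a placeholder, not a published figure.
import numpy as np

sigma_step = 0.1                 # hypothetical per-year uncertainty (C)
years = np.arange(1, 101)

independent = np.sqrt(years) * sigma_step   # random-walk accumulation
systematic = years * sigma_step             # linear accumulation (worst case)

print(f"After 100 years: +/-{independent[-1]:.1f} C if errors are independent, "
      f"+/-{systematic[-1]:.1f} C if the error is systematic.")
```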
The Frank paper is a rediscovery of a phenomenon that was identified by the nuclear energy safety authorities back in the mid-1980s. The reactor vendors and the US government have spent literally billions of dollars since the late 1960s to develop computer models to predict the behavior of nuclear reactors during reactor accidents, in large part due to pressure from environmental groups to “prove that these safety calculations are valid”. These efforts have included the performance of a large number of experiments, at various scales, in test facilities all around the world. The experiments are accompanied by calculations that are often performed in a blind or a double-blind fashion. Blind calculations occur when the modeler has never seen the results of the experiments, and double-blind calculations occur when the modeler has never seen ANY results of experiments from a facility. Ironically, one of the key arguments by environmentalists in those days was that without a full-scale test of an accident in a real power plant, the calculations could not be depended on for making regulatory decisions. Yet climate models are now being used to try to make enormous changes to human life on the planet.
The US NRC and the German nuclear regulators developed, in the 1980s, several different rigorous methods to quantitatively determine the error in these calculations, based on the amount of error in the understanding of the physical models, the error in the models themselves, scaling issues from one facility to another, geometrical uncertainties, and even human performance issues. When you combine all of these uncertainties rigorously, the resulting plots of the figure of merit (peak fuel cladding temperature) as a function of time become incomprehensible, because there are too many degrees of freedom in the calculation. It cannot be constrained well. I was personally involved in one experiment where the Italians tried to quantify this uncertainty, and the uncertainties made the plotted results unintelligible.
As a result, the most advanced nuclear power safety analyses now try to constrain the results by biasing as many sources of uncertainty as possible in a “conservative” direction, so as to maximize the peak temperature. A number of inputs that are amenable to uncertainty analysis are varied, usually using a Monte Carlo scheme over a large number (~100) of runs of the same base scenario. The highest value from these runs is used to show compliance with regulatory limits.
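A toy version of that Monte Carlo scheme, with the model function and parameter ranges invented purely for illustration, looks like this:

```python
# Toy version of the Monte Carlo scheme described above: sample uncertain inputs,
# run the same scenario many times, and report the highest value of the figure
# of merit. The model function and parameter ranges are invented placeholders.
import numpy as np

rng = np.random.default_rng(0)

def figure_of_merit(heat_flux, gap_conductance):
    """Stand-in for a full plant simulation returning peak cladding temperature (K)."""
    return 600.0 + 0.8 * heat_flux - 0.05 * gap_conductance

N = 100
heat_flux = rng.uniform(500.0, 700.0, N)           # uncertain input 1
gap_conductance = rng.uniform(2000.0, 6000.0, N)   # uncertain input 2

peaks = figure_of_merit(heat_flux, gap_conductance)
print(f"Max peak temperature over {N} runs: {peaks.max():.0f} K")
```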
Now, you are asking, what the hell does this have to do with global warming? It is directly applicable, because the atmosphere is so chaotic that the uncertainty in these models is enormous, as Dr. Frank showed. There is poor understanding of the fundamental physical processes, from the sun as the driver, to the clouds, to the reaction of the oceans. The results from these calculations are pure speculation, and can be made to go wherever the modeler wants them to go. The other posts on this site that compare the predictions of the models to the data show that they are not capable of predicting anything.
This observation comes from someone who used to work as a nuclear power plant regulator, evaluating whether computer models could be relied upon to predict the behavior of reactors during normal operation, transients, and accidents. I have NO financial interest in the outcome, because I am retired living on a pension. I personally believe that nuclear technology is worth developing safely, and it seems likely that the global warming scare has brought about a “nuclear renaissance”, but for all the wrong reasons. I find it fascinating to see all of the commenters trying to find “meaning” in the individual bumps and random trends divined from the raw data. It reminds me of the efforts of Wall Street “chartists”, who try to predict the future performance of a stock by looking at graphs of past performance.
Climate modelers do not understand the fundamental physics of climate enough to be able to make predictions of the future. Weather forecasts are not really good beyond a few days, but some believe that a computer modeler can predict the climate 100 years from now. It is sad to see so much intellectual effort spent on such a fruitless effort. I guess it is the modern equivalent of trying to figure out how many angels can dance on the head of a pin.
Fred– I haven’t read the Patrick Frank paper, so I can’t comment on what the paper claims. Generally speaking, models use approximations. To the extent that the approximations are uncertain, the results are, themselves, approximate. So, for every model, one must ask: a) How accurate is it? (That is, are the results on average correct?) and b) How precise is it?
And we must ask this for every measure we are interested in predicting.
Right now, I’m focused on Global Mean Surface Temperatures. I’m trying to figure out: What did the IPCC actually project, and how does that compare to data after the projections were made?
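A bare-bones sketch of that comparison: fit an ordinary least-squares trend to recent monthly GMST anomalies and set it beside a nominal projected rate. The file, column names, start year, and the 0.2 C/decade figure are all assumptions for illustration:

```python
# Sketch: OLS trend of observed monthly GMST anomalies since a chosen start year,
# compared with a nominal projected rate. File name, column names, start year,
# and the 0.2 C/decade figure are placeholders, not the IPCC's actual numbers.
import numpy as np
import pandas as pd

PROJECTED = 0.2                                 # C per decade, placeholder value

df = pd.read_csv("gmst_monthly.csv")            # columns: year, month, anomaly
df["t"] = df["year"] + (df["month"] - 0.5) / 12.0
recent = df[df["t"] >= 2001.0]

slope, _ = np.polyfit(recent["t"], recent["anomaly"], 1)
observed = 10.0 * slope
print(f"Observed: {observed:+.2f} C/decade vs projected {PROJECTED:+.2f} C/decade")
```

Note that a proper comparison also has to account for autocorrelation in the monthly residuals, which makes the naive OLS uncertainty on the trend too small.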