Observed Surface Trend Negative Since 2001

HadCrut has posted their December temperature anomaly here. It will not surprise climate blog addicts to learn the December 2008 surface temperature anomaly was cooler than the November anomaly.

Here are the anomalies for the past three months:

HadCrut Temperature Anomalies

Month        Current Report    In Cache
Dec. 2008    0.307C
Nov. 2008    0.400C            0.387C
Oct. 2008    0.453C            0.438C

I’ll let you subtract. 🙂

Comparison to Cache

In total, 11 months showed differences between the cached and displayed values. The average of these differences was 0.0028C. The largest positive adjustment was 0.031C for March 2008; the largest negative adjustment was -0.019C for January 2008. HadCrut’s numbers tend to be fairly stable– though they do change as fresher data arrives, or as the service finds errors in past records.

How does the data compare to the “about 0.2 C/decade” projection in the IPCC AR4?

As readers are aware, the IPCC AR4 projects “about 0.2 C/decade” of warming for the first two (or three, depending on which chapter) decades of this century, so I regularly compare data to this value.

I have computed the ordinary least squares (OLS) trends based on observations from Jan. 2001 to Dec. 2008, along with uncertainty intervals. The graph below shows OLS trends for GISSTemp, HadCrut, NOAA/NCDC and the average of all three. (This average is selected for blog purposes as it permits users to compare the projections to a trend value that falls between the maximum and minimum trends reported by the three agencies.)

The uncertainty intervals shown correspond to the red-corrected ±95% uncertainty intervals for the trend of the observation obtained by averaging the data from the three reporting agencies. The method for obtaining these intervals is discussed in equations (5) and (6) of Santer17. The IPCC AR4 projection of 0.2 C/decade is shown for reference.
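
For readers who want to reproduce this sort of calculation, here is a minimal sketch of a red-noise (AR(1)) corrected trend interval in Python. It assumes the monthly anomalies are already loaded into a NumPy array; the function name and the 1.96 multiplier are illustrative, and this is not the exact script used to produce the graph.

```python
# Sketch: OLS trend with an AR(1) ("red noise") correction to the standard
# error, in the spirit of equations (5)-(6) of Santer17.
# `anoms` is assumed to be a 1-D NumPy array of monthly temperature anomalies.
import numpy as np

def red_corrected_trend(anoms):
    n = len(anoms)
    t = np.arange(n)                          # time, in months
    slope, intercept = np.polyfit(t, anoms, 1)
    resid = anoms - (slope * t + intercept)

    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1.0 - r1) / (1.0 + r1)             # effective sample size

    # Standard error of the slope, using n_eff in place of n
    s2 = np.sum(resid**2) / (n_eff - 2.0)
    se_slope = np.sqrt(s2 / np.sum((t - t.mean())**2))

    ci = 1.96 * se_slope                      # approximate 95% half-width
    return slope * 120.0, ci * 120.0          # convert C/month to C/decade

# Example use: is 0.2 C/decade inside the interval?
# trend, half_width = red_corrected_trend(anoms)
# consistent = (trend - half_width) <= 0.2 <= (trend + half_width)
```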

Data Compared to 2 C/century

As readers can see:

  1. Trends since Jan. 2001 are negative no matter which of the reporting agencies one favors.
  2. Taken as a point prediction, a surface temperature trend of +0.2 C/decade falls outside the 95% confidence intervals for the observed trend based on the average of anomalies reported by the three agencies. This suggests that 0.2 C/decade, as a point projection, falls outside the range consistent with the data.

Now that year-end data are in, and I have downloaded the model runs from the IPCC AR4, I will be running some more statistical analyses to account for the “about” issue in “about 0.2 C/decade”, as this relates to the scatter in the model predictions. New graphs should appear over time. 🙂

145 thoughts on “Observed Surface Trend Negative Since 2001”

  1. Lucia. I’ve been lurking here for some time now, and I get the feeling that while you are clearly interested in honest and “geeky” statistics concerning the world temperature within the frame that is Global Warming, I also get the idea that you are deluding yourself.

    I will explain better.

    You have been posting a number of key posts regarding the downward trend that has occurred over the last 7-11 years, and been skeptical about Tamino’s claims that this is normal, given that the climate works like a chaotic bounded phenomenon, but that a rise in temperatures is still inevitable due to CO2.

    There’s a catch though. Is this downward trend really “unprecedented” (using known terminology here), or has it been quite a regular and natural phenomenon that probably has occurred already somewhere within the decades of the 70s, 80s and even the nineties? Just by looking at the graphs, I’d conclude that it has, and that this period we are going through is not that different.

  2. Luis,

    I have been lurking here for some time as well. I think that this was discussed at length when recent trends started to be examined about a year? ago. This is the only down trend of this length in what I will call the CO2 era that was not linked to a major volcanic explosion putting tonnes of dust into the stratosphere.

  3. So the 2008 mean temperatures and rankings of 2008 as the nth warmest year are:

    RSS 0.0953, 13th
    UAH 0.0485, 16th
    GISS 0.423, 9th
    HAD 0.324, 10th

    Note the significant divergence between the satellite data and the ground based, as recently commented on at WUWT. Clearly the satellite results need to be ‘re-assessed’ again to make them conform, as they were a few years ago.

  4. Luis–
    There are a number of things in your comment that puzzle me.

    I have never suggested the recent trend is “unprecedented”. I’m not sure I’ve ever used such a word here, so I don’t know why you think that’s the terminology used here.

    I don’t know where you get 11 years. I have been comparing mostly since 2001 to IPCC projections. At this point, that makes 8 years. (When Roger Pielke first asked me a particular question, we were just past 7 years.) Every now and then, people ask particular questions to see if someone at some other blog or forum threw out a statistic that was incorrect. But, I don’t think I’ve ever had any particular focus on 11 years. (More particularly, I’ve always avoided starting anything in 1998, as we all know that start date is cherry picked.)

    I have mostly been focusing on testing IPCC projections since the AR4, and select my start date based on the year when the IPCC published the SRES.

    You bring up downtrends in the 70s, 80s and 90s. Yes. Such trends occurred. For example the temperature plunged after the eruptions of Pinatubo, El Chichon, Agung and Fuego. Trying to reconnect these to your notion that I am deluded: Why do you think the fact that these downtrends occurred suggests the hypothesis tests I have been doing recently don’t give proper results?

    Or, if you like a more open ended question, could you clarify what opinions of mine you consider deluded?

  5. Lucia, thanks for the reply.

    The “unprecedented” terminology is not yours; it is the IPCC’s usual one.

    Yes, you may be right that we don’t get 11 years, but if we get 7, 8 or even 9, I fail to see 11 as extraordinary. Also mind that anything that gets 1998 in the first part of that trend (11 years will do just that) will obviously skew the trend down. So all we can gather is at most a 9-year flat trend, and here you show one that has 7 years.

    Anyone who knows a bit about statistics knows that, while people with common sense don’t believe it, long runs of a single side of the coin really are supposed to appear in a sequence of coin tosses.

    It’s therefore not at all surprising that such a trend, probably full of natural phenomena that lower temperature and offset the CO2 effect to a point, may indeed appear within our lifetime. There are probably more things to account for than volcanoes, I’d say, and if these things do align themselves, what’s going on is not that improbable.

  6. Luis–
    Extraordinary in terms of what? I’ve never suggested the downtrend is extraordinary. I’ve only suggested it’s inconsistent with a projected positive trend of 2C/century which models suggest should be occurring now.

    Of course a long run of same-side coin flips does appear if you flip a coin an infinite number of times. Who has suggested otherwise?

    But it is possible to say that if you start now, the probability that you will get 9 heads out of the next 9 flips is 0.5^9, which is less than 0.2%.

    So, if someone suspected a coin was biased toward heads, they could propose this test. If the coin showed 9 heads out of 9 flips in a fresh experiment, they would report that the coin does, indeed, appear biased.

    If the defenders of the virtue of the coin insisted that the 9 heads out of 9 flips in no way suggested the coin was biased– because if we flipped it 10,000,000 times we would sometimes see 9 heads in a row– what would you say to that?

    They would, of course, be reporting a correct factoid: We will sometimes find 9 heads in a row.

    In my analyses, I am trying to account for the randomness in a qualitatively similar way to trying to determine whether the coin is biased. In my opinion, if we were to be arguing about the coin, and started fresh flips, and got 9 out of 9 heads, the one who suggested it was biased would be permitted to say the evidence suggests the coin is biased.
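
    For concreteness, a quick sketch of that coin arithmetic (nothing climate-specific, just the example above):

```python
# The coin example: probability of 9 heads in 9 flips of a fair coin.
p_9_heads = 0.5 ** 9
print(f"P(9 heads in 9 flips) = {p_9_heads:.4f}")   # ~0.0020, i.e. about 0.2%

# Read as a one-sided test of "the coin is fair" against "biased toward heads",
# this is the p-value; it is well below the usual 5% threshold, so we would
# report that the coin appears biased.
```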

  7. The warmers can accept natural climate cycle explanations when temperatures are declining but when they are going up, there is no such thing as natural cycles, just GHGs.

    Let’s pick an extreme example of natural cycles in play.

    Hadcrut3 February 1878 anomaly : +0.379C
    Hadcrut3 December 2008 anomaly : +0.307C

    Total decline in temps over 130 years 10 months: -0.072C

    1877-78, of course, was the biggest El Nino known – year-end 2008 temperatures were affected by neutral Nino conditions – not so much global warming evident when the natural cycles are accounted for.

  8. Luis,

    You are right that the recent period may not be unprecedented. After all, 1950-1975 showed cooling. But think about this: what is the overall trend of the past 100 years? It was NOT 2 deg/century. I think it was less than one degree. Furthermore, the starting point (say 1900, or even 1890) is somehow (if involuntarily) cherry picked, because it was the end of a rather steep cooling period, so it does not seem implausible (to me, anyway) that it was the bottom of a rather long (solar?) cycle. In any case the end of the 18th century was pretty damn cold. So if you include such a cycle in the 20th century warming, you end up with much less than 1 degree that could be attributed to GHG’s (this was also the conclusion of the Scafetta and West study, and also of a recent Lean and Rind paper).

    So the fact that an 8 or 9 year “cooling” period is not “unprecedented” in the past record should also remind you that there is little evidence in the past for a 2 C/century trend. What I mean is that if we base ourselves on the past record only as an empirical guide to CO2 sensitivity, we don’t get 2C/century. That number is just the result of model projections, with hypothetical (and unproven) water vapor and cloud feedbacks. If you look at a graph with past temperatures being projected into the future, in the IPCC reports, for example, you should note that the slope always has a sudden change right at the present-future boundary. If you just project the past, there is mild warming. That, in my opinion, makes the models highly suspicious. It also makes them, as cleverly demonstrated by Lucia, less and less “consistent” with reality.

  9. Bill Illis:

    Remove warming and cooling due to ENSO, you still see a clear and significant warming trend over the last 130 years.

    Fred:

    Historically, satellites have had much bigger issues with accuracy than surface measurements. After all, UAH used to show cooling from 1979 to the mid 00’s before they discovered that orbital decay and diurnal effects weren’t being corrected for accurately. That said, past adjustments do not necessarily indicate that the current method is more or less accurate than surface temperature measurements. I wrote up a brief article on this awhile back: http://www.yaleclimatemediaforum.org/2008/04/common-climate-misconceptions-global-temperature-records/

    Assuming the surface temperature record of the last 130 years is a) complete enough to make a GMST comparison for all 130 years and b) hasn’t been adjusted, homogenized, interpolated, extrapolated beyond all recognition of the raw data. No one outside of GHCN, NOAA, GISS and Hadley really knows the degree of manipulation that data undergoes. But there have been numerous examples, at WUWT, CA and elsewhere, to show that there are severe issues with the surface station datasets, let alone the transparency of the manipulation compared with that of the satellite datasets.

    Zeke Hausfather: “Remove warming and cooling due to ENSO, you still see a clear and significant warming trend over the last 130 years.”

    Yeah, how much?

    Does the warming since 1880 support a 0.2C per decade prediction for today?

    Okay, trick question: I have done this already, and there is substantially less warming to date than the theory predicts; it would not support 0.2C per decade for today’s warming trendline – maybe 0.05C to 0.1C per decade.

  12. Luis Dias, you’re misunderstanding what Lucia is doing. She performs a statistical test for validating a certain aspect of climate models. There are other tests but hers is a very common method when you want to validate a model without noise against data with noise.

    There are ample other methods, some really simple and straightforward. E.g. you plot the residuals, and assess if they are random (they should be) and small. You screen data so that it is not correlated to things it should not be correlated to. You also validate against data that came after the publication of the model, as otherwise you might do “elephant fitting” instead of modelling (it is said that with four free parameters in a model, you can paint an animal ;-). This is why Lucia is focusing on the period after the IPCC report in 2001.

    You might want to ask yourself: if all modelers and decision makers perform rigorous validation on any kind of model (engineering, geological, social, economic, etc.), why hasn’t the IPCC? They include a whole chapter called “validation” but it doesn’t contain any validation.

  13. Zeke:
    “Remove warming and cooling due to ENSO, you still see a clear and significant warming trend over the last 130 years.”

    -Well, with half of the warming before the emissions started to increase in earnest after 1945.

    “Historically, satellites have had much bigger issues with accuracy than surface measurements.”

    -Yeah… extrapolating the values given by a drunk Swede with cold feet guessing the value of a thermometer placed in convenient proximity to his cottage is clearly the way global temperature records should be developed. At least if the majority of measurement stations are placed in the developed world in or near growing cities or airports and nearly none over that strange wet blue thingy between continents or in those pesky forests.

    On a serious note: we cannot talk about accuracy. At most we can hope for some precision and a fair global coverage. I cannot grasp how anyone could favour land based thermometers over satellites but I challenge you to explain why you would, if indeed you do. Start with “removing biases” and continue with “open methods, algorithms and sources”.

    Best,

  14. A. Vindar, well said. I suspect Luis is used to blogs like Tamino’s where he drops wisdom from on high and then his sycophants try to outdo each other falling to their knees and praising their master.

    This blog, like CA is fascinating, because so many bright people offer ideas and viewpoints, and nobody takes things at face value.

  15. avfuktare,

    These days I’m fairly agnostic between surface and satellite measurements, though I tend to favor the RSS method a bit over the UAH method. I was just pointing out that, prior to 2005, satellites were quite a bit off (http://www.uah.edu/news/newsread.php?newsID=60).

    “The net result of changes in how the data are analyzed added about 0.09 C (about 0.16 degrees Fahrenheit) of global warming over the past 26 years, with most of that previously unreported warming occurring in the tropics.”

    Now imagine how big a stink there would be if Watts et al discovered an error in GISS that resulted in a 25% lower rate of warming than previously reported, and how much people would trust that data after that discovery. And bear in mind that there was an even larger correction in 1998!

    That said, with these two corrections satellite data is largely in line with surface data, though they seem to be affected more by ENSOs than surface measurements (which perhaps has to do with the effect of ENSO on lower tropospheric temperature, but who knows!).

  16. Fred: It is not at all evident from the pretty pictures on WUWT that surface station composites are biased. Watts has NOT done a real analysis of impact. And the one done by John V (superior to that of Steve McI since it had gridded areas to remove geo-confounding) showed no impact of “best stations” versus adjusted temp as reported. You, sir, are an example of the hoi polloi and how messed up they can get by looking at all the blog posts and thinking it is more of a proof than what it is. You are the EXAMPLE.

    On content: It’s hard to evaluate what Lucia is doing given all the meandering separate posts. I don’t really even know what she alleges, what she demonstrates, etc. Even to engage…it is hard…since there is no clarity on what she really says.

    On content squared: To the extent that Lucia really makes good inferences on how recent temp record argues against AGW impact THAT is more valuable than semantic games of what IPCC alleged or didn’t. It ends up being a real science inference as opposed to arguments over what her opponents said.

  17. Since TCO and others appear to be concerned about people questioning the surface records, I have an assignment for them. Let’s start with the GISS-derived surface temperature record. Please download the GISTEMP software here:

    http://data.giss.nasa.gov/gistemp/sources/

    After extracting the archive, read over the algorithm that is being employed in the code (it is described online at the GISS website and is contained in some of the readme files) and then compare that with the FORTRAN implementation. Please confirm for us that the algorithm as described in the documentation has been correctly implemented in the software. You may wish to write down all of the variable definitions and associate them with key parameters in the GISS algorithm. You may also wish to flowchart the code so we can more easily see how it works.

    Again, we wish to know that the algorithm is correct as written, and in doing so be confident that the transformations done to the raw data by NASA GISS are scientifically valid and beyond reproach.

    Good luck, and please report back what you find.

    Frank

  18. TCO,
    I do not appreciate your tone. John V.’s analysis was on the US48, where UHI may or may not be accounted for depending on which dataset is used.
    However, it is the ROW which DEFINITELY is not 100% current, has much less than 100% coverage and is generally not in great shape.

  19. TCO–
    Who do you deem my opponents? And what do they say? I don’t know what argument is going on in your head?

    It’s fine with me if you aren’t familiar with my various arguments. One of the reasons is that, as far as I can tell, you have only recently begun to visit. There may be other reasons. But my blog is a blog. I’ve never claimed otherwise.

    There are others here who have visited regularly, who seem to understand the gist of my argument. FWIW: Yes. I am in the process of writing an article. I was waiting for HadCrut Dec. data so I could slap it together, with a data set that had more-or-less well defined beginning and end points. While I think ending in any month is fine for a blog, I prefer to run full years when submitting an article which, for better or worse, will be published sometime after I compose it. In such circumstances, ending in “Month X” looks odd.

    That said: I think most of your complaints about bloggers failing to publish are nothing more than your own idiosyncratic ramblings. I don’t feel any need to justify when I do choose or don’t choose to publish.

  20. Strange doin’s in the UAH temperature for January.

    http://discover.itsc.uah.edu/amsutemps/

    (You need to ask the site to draw a graph, the link won’t work if I draw and then link).

    Granted, this is compared to a cool January 2008 (compared to most recent years), but 2009 may get off to a relatively warm start, despite brutal temps in the US.

  21. Fred:

    1. That tone was on purpose. It was a napalm your babies tone. I hope it chapped your ass good. You need that.

    2. WUWT is doing a survey in the lower 48. So McIian redirection to ROW is off topic. You sited WUWT surface stations. Now you want to shift the goal posts. You need to do lots of extend arms with the M-1. Maggot.

  22. Lucia: TCO–
    Who do you deem my opponents? And what do they say? I don’t know what argument is going on in your head?

    I THINK YOU ARE FOR WHOEVER IS ANTIAGW LIKE WATTS AND SUCH AND ANTI THE PRO AGW. NOT THAT HARD SWEETIE…SO DON’T LET IT CONFUSE YOU.

    It’s fine with me if you aren’t familiar with my various arguments. One of the reasons is that, as far as I can tell, you have only recently begun to visit. There may be other reasons. But my blog is a blog. I’ve never claimed otherwise.

    There are others here who have visited regularly, who seem to understand the gist of my argument.

    I DOUBT THEY REALLY DO. I THINK THEY ARE ANTI AGW HOI POLLOI WITH ATTENTION TO THE SOCIAL FUNCTION.

    FWIW: Yes. I am in the process of writing an article. I was waiting for HadCrut Dec. data so I could slap it together, with a data set that had more-or-less well defined beginning and end points. While I think ending in any month is fine for a blog, I prefer to run full years when submitting an article which, for better or worse, will be published sometime after I compose it. In such circumstances, ending in “Month X” looks odd.

    GOOD GIRL.

    That said: I think most of your complaints about bloggers failing to publish are nothing more than your own idiosyncratic ramblings. I don’t feel any need to justify when I do choose or don’t choose to publish.

    NO. THEY ARE REASONABLE, RELEVANT, TRENCHANT AND POINTED REMARKS.

    P.S. OORAH!

  23. TCO–
    How many times do I need to tell you not to call people names? Specifically: No calling people Maggots. Also, no “chapped your ass”.

    You just came off a time out. Do I need to do it again?

  24. Please, no. I was trying to avoid curse words and sex talk. I didn’t know that I couldn’t drill your weaker allies. Maybe it will strengthen them to be exposed to an adverse environment. I just have this vision of making them do pushups and getting that sort of trembling in the back and the burn in the triceps…it’s for their own good.

  25. I mean seriously…would you rather have a weak Fred ally or a strong TCO opponent. All the romance novels say strong, mysterious….

  26. Lucia,
    Isn’t it amazing how a little bit of statistical rigor can stir some people up and their only response becomes name calling and ad hominem attacks. I know it is your blog but if you want to ban TCO I’d be happy and we can all get back to testing the models and not have to search through the comments to find something relevant.

    Andrew

    It would be hard for anyone to do a better job than TCO of convincing lurkers that the AGW position is flawed. Hats off to TCO. Probably a skeptic in disguise..

    That said, wouldn’t it be wonderful if one of the RC or Tamino people came and posted a *strong* rebuttal.. wonder what that would look like.. if there is one..

  28. Lucia: Do you want effete allies like Andrew or burly teases like me?

    *click click go the needles*

  29. Zeke Hausfather: You wrote, “Remove warming and cooling due to ENSO, you still see a clear and significant warming trend over the last 130 years.”

    Do you? Even without removing ENSO, I see little warming over the past 150+ years. It must depend on the dataset you’re viewing and how you interpret it. Here’s a graph of SST anomaly data from 1854 to present. It’s ERSST.v2 data available through the NOAA NOMADS system. Note the dip and rebound.
    http://i33.tinypic.com/25tguq8.jpg

    Now, I can highlight that same data, adding a few parallel lines from the 1850s to the mid-1970s, and show that there really wasn’t any warming to speak of over that period.
    http://i42.tinypic.com/2ibc87o.jpg

    Looks to me like the only real warming took place after 1976, but even that doesn’t look anomalous when compared to the dip and rebound.

  30. Andrew23: I’ve activated the new version of TrollControl. I’d had it activated from Sunday morning until roughly 3 pm this afternoon. Then I deactivated.

    I’d hoped TCO would behave, but, alas no. Unless he changes IP’s he will be blocked from reading or responding to comments.

  31. Zeke Hausfather (Comment#9019)

    “These days I’m fairly agnostic between surface and satellite measurements, though I tend to favor the RSS method a bit over the UAH method.”

    I recommend that you take a look at what Jeff Id is doing:
    http://noconsensus.wordpress.com/

    He makes a compelling case that UAH is the better dataset for 30 year trends and that the two are virtually equivalent for the last 10 years.

    It is unlikely that the issues with satellite measurements in the past have shown up again since there are so many eyes on the data. The more plausible explanation is that the surface measurements are biased.

    Raven, it’s hard to compare all four. And they measure different things anyway… The only reason skeptics favour UAH is because it gives ‘cooler’ readings.

    For the record, I did NOT, in any of my comments, limit the critique to US48 surface stations, but to surface station datasets (NOAA/NCEP, HADCRUT, GISTEMP) in general. So TCO, please read carefully before you stick your foot in it.

  34. Nathan,

    The surface measurements cannot hope to offer the consistency and coverage of the satellites. Given the well documented issues with UHI, station metadata and data collection problems it does not really make sense to use the surface records at all. I am pretty sure the surface measurements would have been long abandoned by the alarmists if the satellites produced results more to their liking.

    There are also peer reviewed analyses that explain why UAH is a better measure than RSS. Jeff Id has confirmed these analyses on his blog.

  35. Regarding certain especially obnoxious blog posters, I think that until we incorporate breathalyzers with computers to control access, we will continue to be subjected to their rants. This is based on several years of observations. Too bad, because when sober, they sometimes have something useful to contribute.

  36. Raven, then how do you explain the very close agreement between all four? If you use the same base period for them all, they show basically the same thing. All this stuff about stations being corrupt is fluff. Can you quantify the error?

    You should get Lucia to do an analysis and see if they are consistent.

  37. Nathan,

    The agreement only exists if you use the ‘ordinary eyeball test’. If you actually analyze the trends in detail you will see diverging trends between the surface and satellite data.

    You will also see an unphysical splicing error in the RSS data which explains the difference between UAH and RSS 30 year trend. If you remove this splicing error both satellites show a lower 30 year trend.

    I recommend that you read the analysis on Jeff Id’s blog. He is also reproducing stuff from the peer reviewed literature (i.e. it is not just his opinion).

    The station location and metadata problems identified by Watts and McIntyre make it clear the surface record is a mess and cannot be considered reliable.

  38. Raven
    “The agreement only exists if you use the ‘ordinary eyeball test’. If you actually analyze the trends in detail you will see diverging trends between the surface and satellite data. ”

    Is this not enough for data sets that are actually measuring different things? You cannot expect the four data sets to match identically if they are measuring different things. What is the agreement that you need? Can you summarize Jeff Id’s findings? What is the magnitude of the divergence?

    Watts had an opportunity to do some good science with his surface stations, but seems to have given up. Why has he failed to continue with the program and why has he failed to provide any quantification of the problem?

  39. Nathan,

    You can read it here:
    http://noconsensus.wordpress.com/2009/01/09/rss-uah-giss-comparison/#more-1825
    http://noconsensus.wordpress.com/2009/01/14/give-a-kid-a-toy/
    http://noconsensus.wordpress.com/2009/01/15/1853/
    http://noconsensus.wordpress.com/2009/01/19/satellite-temp-homoginization-using-giss/

    You are correct that you cannot compare the two directly; however, there are peer reviewed analyses which show that the satellite trends should be 1.2 to 1.3 times *more* than the surface. This means the discrepancy between the satellite and surface is even larger than it looks based on the raw numbers.

    Anthony has always insisted that he wanted a complete sample before analyzing the data. He is close to that now.

    Nathan, Watts has explained why he must focus on his business (he’s not funded, you know). In my opinion Watts IS doing good science by assessing the quality of current stations. That may lead us to conclusions on how good or bad our historical network is. Personally I feel that it will probably not influence AGW as much as some people think, but at any rate he is following the scientific method (test the hypothesis that we can measure temperatures accurately).

    Zeke,

    I see your argument and it is valid. However, the GISS temperature series should be unrelated to industrial activity. It is not. Whilst that could be a random error, the law of large numbers leads us to believe that it is biased. So in fact you have two (well, four 😉 bad datasets. The satellites because they are not vetted enough, and the stations because they are neither vetted nor free from correlations to data they should not be correlated to.

    Best

  41. I posted this at WUWT yesterday, and no one answered it. I suppose it’s more pertinent in this thread. Besides, I’ve been lurking here far longer than at WUWT. By the way, Lucia, your explanations and the tone of your moderation are greatly appreciated. It makes the science much more accessible to amateurs like me!

    Speaking as a person with only a high school stats background, I wonder – how many data sets are necessary to validate which (if any) temperature record is the outlier?
    For example in my world (aviation), we use GPS extensively, but only if we have data from 5 or more satellites. With that many discrete signals, the software in the GPS receiver looks at each set of sat data, and can accurately identify and discard a single bad signal. My question, then, is this – are 4 separate temperature records enough to identify one of them as spurious, to a high level of confidence?

  42. Re: Bob Tisdale (Comment#9038)

    The SST graph you show — do you know if the data used to produce it has been corrected? The Hadley uncorrected SSTs show a steep rise in 1939 — a date which I’m interested in — but corrected data sets from various agencies all use the Folland and Parker correction which skews things upwards in the mid to late 30s. This made the data fit the models so I’ve heard, but that may just be sour grapes. Without the corrections to Hadcrut there’s an enormous jump from 39 — 44/45 which is thought to be a series, an _unprecedented_ series, of el Ninos. To me it’s the most obvious thing about the graphs, but people seem to accept it without wondering why it’s there.

    Tracing your graph led me to the SST anomalies — great stuff, the ocean gyres stand out wonderfully, warming nicely in accordance with theory, while the Sakhalin and North Slope outlets glow red.

    JF

  43. Raven your Satellites should show higher theory is from the Tropical Tropospheric hotspot line of modelling…

  44. Nathan,

    “your Satellites should show higher theory is from the Tropical Tropospheric hotspot line of modelling…”

    So? That is what the models claim so when comparing model outputs to satellite temperatures we would expect the model outputs to predict less warming than the satellites show. The models actually show a lot more warming than the satellites which suggests that the models are wrong (of course, I realize we are dealing with climate science rather than real science which means some will insist that the measured data must be wrong if it does not agree with the models – but that is a different problem).

  45. TP

    how many data sets are necessary to validate which (if any) temperature record is the outlier?

    This is hard to say. When we did undergraduate experiments, there was never a rule of thumb for a minimum number. We might calculate the standard deviation based on the number of data points, consider how many we had, and throw away points that were wildly unexpected. So, say we collect 10,000 data points. Then, based on the distribution of the data, one of the data points has a value that is expected to occur 1 in 1,000,000 or 1 in 10,000,000 times. We might deem that an outlier. There are ways to quantify this — if we have plenty of data.

    (We did similar things in grad school. With LDV experiments, you can get something called “shot noise”. They result in wildly high velocities. We would screen them out.)

    My rough guess: If you have fewer than 10 data points, it becomes very difficult to deem one an outlier based on statistics. It may not be impossible, but it’s very difficult. What makes more sense is to see that one is different and then trouble shoot to see if you can find a problem and correct it.
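
    For what it’s worth, that kind of sigma-based screening is only a few lines. A rough sketch, assuming the samples sit in a NumPy array (the 5-sigma cutoff is purely illustrative):

```python
# Sketch of simple sigma-based outlier screening of the sort described above.
import numpy as np

def screen_outliers(samples, n_sigma=5.0):
    mean = samples.mean()
    sigma = samples.std(ddof=1)
    keep = np.abs(samples - mean) < n_sigma * sigma
    return samples[keep], samples[~keep]   # (retained points, rejected points)
```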

  46. Duane–

    I think that until we incorporate breathalyzers with computers to control access, we will continue to be subjected to their rants.

    I can even tolerate the occasional rant. But arriving at the computer to find 9 comments in one hour, with 8 from the same exact poster?

    I will eventually modify troll control to specifically count the number of comments from a specific IP and block comments when there are more than 3 per hour. It can be done. I had done something similar, but the method I chose is too CPU intensive. But for now, the way it is set up will deal with TCO.
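
    The counting logic itself is simple. A rough sketch of the idea (the actual troll control is a WordPress/PHP plugin; the names and limits below are only illustrative):

```python
# Illustrative sketch of per-IP comment throttling: allow at most 3 comments
# per IP per rolling hour.  Not the actual plugin code.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # one hour
MAX_COMMENTS = 3

recent = defaultdict(deque)   # ip -> timestamps of recent comments

def allow_comment(ip, now=None):
    now = time.time() if now is None else now
    q = recent[ip]
    while q and now - q[0] > WINDOW_SECONDS:   # drop timestamps older than one hour
        q.popleft()
    if len(q) >= MAX_COMMENTS:
        return False          # block this comment
    q.append(now)
    return True
```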

  47. Lucia,

    I don’t know if you’ve had the pleasure of reading TCO’s comments on CA. They started out having some technical merit and devolved into ranting on Steve’s lack of publication and (what appears to be drunken) insults. It devolved faster here. I got the impression Steve M knows who it is behind the pseudonym. He must be really lonely 🙂 Unfortunately I don’t think limiting him is going to solve the problem, only an outright ban. Notice how he promises to be good and immediately goes back to his old ways.

  48. BarryW–
    I’ve read TCO elsewhere.

    Based on IPs, I know where TCO works and the region of the country from which he posts comments. If SteveM has additional information, that could be enough to figure out who he is. I don’t happen to know.

    For now, he is blocked from reading or posting comments. Later today, I may tweak to only block comments, or to count and limit to one comment every three hours. If that doesn’t work, I can eventually just ban him. He’ll still have other outlets where he fits in and is welcome.

    Hi Lucia: Regarding climate models, could you please clarify what their implementation is: are they simulations using a Monte Carlo type set of runs, generated by a pseudo-random number generator? Or are they solutions to PDEs using finite difference methods? (I realize that case one would involve PDEs.) Just wondering.

    Also, I would ban TCO and save electrons. He/she/it is just a tease and adds little to our enlightenment. It is your call, of course.

    Thanks

  50. Julian Flood: Yes, the ERSST.v2 data in the graphs have been corrected. And thanks for reminding me of the correction that was made to the step change in the COADS data. I presume the rise you’re talking about is what caused the COADS and ERSST.v2 datasets in the following graph to diverge.
    http://i41.tinypic.com/156tu7b.jpg

    BTW, both datasets are available through the KNMI Climate Explorer Webpage, though the ERSST.v2 data I used in the graph is from the NOAA NOMADS site. The NOMADS ERSST.v2 data extends back to 1854, where the KNMI version only goes back as far as 1880.
    http://climexp.knmi.nl/selectfield_obs.cgi?someone@somewhere

    Since you’re interested in that step change, you might enjoy my two posts on the step changes caused by El Ninos (that weren’t suppressed by volcanic eruptions) since 1976.
    http://bobtisdale.blogspot.com/2009/01/can-el-nino-events-explain-all-of.html
    http://bobtisdale.blogspot.com/2009/01/can-el-nino-events-explain-all-of_11.html

  51. Jack–
    AOGCMs are solutions to PDEs.

    Currently, TCO is being given a time out and is blocked from posting or even reading comments using IPs he has used to post comments.

    I’m looking at the wp-cache plugin and fixing it so I can cache most pages but serve the fresh ones with comments omitted to TCO.(Caching saves CPU, which is useful since I’m getting more and more traffic.)

  52. Duane,

    I’ll take a look at Jeff’s stuff. It seems quite interesting. I must admit that I’m still a bit of a neophyte when it comes to the science involved in satellite measurements.

    Bob Tisdale,

    I’d be highly skeptical of the geographic distribution and methodological uncertainties associated with temperature measurements pre-1880. That said, it really does depend on what datasets you use. While SST alone does not show as dramatic a warming trend, land-sea composites (e.g. GISS and HadCRUT3) show a much stronger trend.

    Still, conflating warming pre-1970s with warming post-70s is not really a fair comparison, since it’s only in the last 40 or so years that anthro GHGs have emerged as the dominant climate forcing. Meehl et al have some good numbers on historical forcings, and Robert Rohde turned them into a damn good (albeit slightly dated) graph:

    http://www.globalwarmingart.com/wiki/Image:Climate_Change_Attribution_png

    avfuktar,

    We will see. Given that the only real formal analysis of the effects of surface station quality on GISS that I’ve seen so far is that of John V, I’m withholding judgment.

    Zeke– I’ve seen JohnV’s site. But, I’ve never found any formal analysis of station quality on GISS. What I seem to recall finding is pages discussing some publicly available software that one might be able to use to create a temperature profile based on whichever stations one prefers. If so, that would permit someone to do an analysis if they knew which stations they liked.

    But, has anyone done the analysis, described how they chose their stations and posted what they got? If so, I’ve never seen that. Do you have a link?

  54. Lucia,

    Perhaps formal is the wrong word, since John V never really followed up that much on it, but his analysis was in an old Climate Audit thread here: http://www.climateaudit.org/?p=2124#comment-147569

    I also wrote an article about his analysis and Watts’s work: http://www.yaleclimatemediaforum.org/2007/10/independent-audit-supports-official-us-surface-temperature-record/ (though Anthony was not too happy about the fact that I didn’t contact him while writing it, something that I later apologized for).

    That said, John can probably give you more details than I can, since he lurks around here every now and then.

  55. Zeke–
    Thanks! That’s more information on results than I’ve ever seen. It’s a pity JohnV didn’t write more than a comment in a blog post at a blog he doesn’t even control. It’s nice you pulled it up and put it somewhere where it could be seen.

    It will also be interesting to see how that firms up as Anthony gets more stations.

  56. I was just browsing and ran across my name. That’s always fun!
    .
    I intended to write up the analysis in more detail, and I was even naive enough to invite Steve McIntyre and Anthony Watts to work with me. I thought “OpenTemp” (as I called it) could be really useful. A few of us were doing lots of interesting analyses on CA until the trolls took over.
    .
    Then life got busy and I forgot about it for a while. As SteveMc will rapidly point out, I only did an analysis on the USA48 and the rest-of-the-world could be different. Anthony likes to think that I didn’t use enough stations and has asked me not to promote the results on his site — even if his regulars ask for them.
    .
    I’d like to do more analysis and maybe extend it to the rest-of-the-world but I need to get my own blog setup first, and I need to finish a backlog of client work, and I need to do some home renos, and I need to go to the gym…… 🙂

  57. JohnV–
    I hear you about the trolls. . .

    It would be interesting to see more. Still, I think we can all understand why people can’t formally document everything everywhere. I was mostly glad Zeke knew where that was.

    Maybe by the time you have your blog set up, I’ll have “troll controll 99” perfected!

  58. Nice to see Steve M admit that:

    “surfacestations.org has made a concerted effort to identify high-quality stations within the USHCN network (CRN1-2 stations) and preliminary indications are that the GISS U.S. estimate will not differ greatly from results from the “best” stations (though there will probably be a little bias.)”

    One thing that both sides in the climate debate definitely need to get better at is admitting when results don’t conform to their preconceptions. After all his work taking pictures of badly sited stations, I’m sure Anthony and others contributing to the Surface Stations project were sure that they would find some fundamental flaw with the U.S. GISS numbers, but so it goes.

    I’ll hold off any judgment on the rest of the world till I see a compelling analysis one way or another. While initial analysis by Steve M seems to indicate a number of serious issues, the cumulative magnitude of those issues may end up mattering a lot, or may end up not mattering much, as with the badly sited stations in the U.S. We will see!

  59. Zeke,

    I agree, it was nice to see SteveMc’s quote above. I gained a little respect for him today. I don’t know what his pre-conceptions were but I do know that he made consistent efforts to raise uncertainty about my results.
    .

    Lucia,
    If you’re interested, the CA comment that Zeke posted was one of my earlier results. Later on I analyzed station data that was corrected for the time-of-observation bias and the agreement with GISTemp was even better.

  60. Zeke,
    I didn’t know you wrote that article. Why didn’t you ask me any questions? Why didn’t you get my name right? We both have difficult last names — you know how important proper spelling is. 🙂
    (It’s “Van Vliet”, not “Vliet”).

  61. John,
    Where can we find comments or posts on your later comparisons? It’s really a shame for results to be buried in comments.

    I’m almost surprised RC didn’t invite you for a guest post! But in all honesty, if you have time to organize various things into posts, I could host them above the fold. One warning: You could get lots of comments, which eats time.

  62. JohnV,

    I actually made an effort to try and track down your email address for that article, but I was unable to find it anywhere. My apologies about the last name (and correct me if I’m wrong, but is Van Vliet Dutch in origin?).

    You should definitely write something up for here or elsewhere. The “U.S. temperature records are untrustworthy due to crappy stations” argument still comes up regularly, and it would be nice to have a place to send people for a comprehensive rebuttal.

  63. Lucia,
    Thanks for the offer. I’d still like to write it up on my own site. When I do, I would definitely appreciate a critique by you.


    Zeke,
    Thanks for trying to track me down. I found that there was a lot of hostility over at CA and some of it spilled off into prank phone calls and emails. I was trying to keep a low profile. I forgot about that. I’m slowly learning how to make a point without picking a fight.


    Re Dutch Canadians:
    The Dutch makes us stubborn, and the Canadian makes us do-gooders. 🙂
    Haven’t seen too many other Albertans around. Most of my neighbours must be too busy counting and spending their oil money. (Not that there’s anything wrong with that).

  64. Zeke Hausfather, you wrote, “Still, conflating warming pre-1970s with warming post-70s is not really a fair comparison, since its only in the last 40 or so years that anthro GHGs have emerged as the dominant climate forcing.”

    I disagree. Comparing the early rise (~1910 to ~1940) in SST anomalies to the recent rise (~1976 to ~2005) is a good way to verify if anthropogenic GHGs are dominant in the latter portion. All one needs to do is compare the slopes of the two periods to see if GHGs have added to the rate of rise. Scroll up to the first graph I provided in comment #9038. Even eyeballing it, they aren’t close. No need to throw on linear trends. The early period clearly has a higher rate of rise than the latter period.

    And thanks for the link (twice) to Rohde’s graph of the Meehl (2004) study. The natural forcings (solar) they employed for the curve fit don’t represent the current understanding in any way.

    First, the abstract of Meehl et al states, “The late-twentieth-century warming can only be reproduced in the model with anthropogenic forcing (mainly GHGs), while the early twentieth-century warming is mainly caused by natural forcing in the model (mainly solar).”
    http://www.cgd.ucar.edu/ccr/publications/meehl_additivity.pdf
    The solar study referenced by Meehl et al is Hoyt and Schatten 1993 (yes, that’s right, 1993). Refer to Table 1 and to the discussion on page 3723 for confirmation of the source of solar data.

    The following link is a comparative graph created by Leif Svalgaard of several TSI reconstructions and composites. Note that the current understanding of TSI variability is represented by the Svalgaard (red) curve and that Hoyt is represented by the light gray one. Not even close. That means that the Meehl et al study relied on a solar forcing that is extremely unrealistic in order to reproduce the warming in the early part of the 20th century. If the forcings they employed are so erroneous in the early years, there’s no reason to believe the anthropogenic forcings in recent years are realistic.
    http://www.leif.org/research/TSI-recon3.png

  65. Anthony likes to think that I didn’t use enough stations and has asked me not to promote the results on his site — even if his regulars ask for them.

    I’m unflabbergasted by this information.

    Yes, it’s me, Boris. FREE TCO!!! 🙂

  66. No comments from Zeke Hausfather or John V about the negative trend in GISS temps since 2001?

    Negative trends over 7 years (from all the temperature series) are certainly not consistent with global warming’s +0.2C per decade prediction.

    In fact, GISS’s own charts show you can extend that negative trend over 10 years now.

    http://data.giss.nasa.gov/gistemp/graphs/Fig.C.lrg.gif

    10 years is a LONG time for “the weather noise” explanation/misdirection.

  67. Bill Illis:
    I would not have expected the current negative trends in the presence of AGW either. I have a few ideas about their significance, but am going to stay quiet until I have the time to do a proper analysis. Meanwhile, you can continue basking in the relative cold…

  68. Bill,

    While the consistency with 0.2 degrees warming per decade is certainly open to challenge (and I think Lucia is doing a good job challenging it!), there is really no reason to be that surprised over a 7-year negative trend. Let’s do a quick little exercise (a la Tamino) to demonstrate:

    First, let’s look at the underlying trend prior to 7 years ago (since that’s the cutoff point for a clear negative trend that you identified; we could use 10 years as well, but it doesn’t change much). Let’s start at 1979, in part because the mid 70s are the beginning of unambiguous modern warming, but mostly because that’s what I have on hand as far as monthly GISS data goes. Again, we could pick an earlier point, but starting in the 70s makes the curve steeper (and thus would only make a modern cooling trend more unusual).

    A simple OLS regression on monthly GISS data from January 1979 to December 2001 (exactly 7 years prior to our last available datapoint) yields an equation of roughly y = 0.001155x – 0.141468, or 0.1386 degrees per decade. Now, if this trend were to continue, we would expect 95 percent of monthly temperatures to fall within two standard deviations of the variation around the trend.


    http://i81.photobucket.com/albums/j237/hausfath/Picture17.png

    To show this, we calculate the residuals for every month from 1979 to 2008 using the OLS from 1979 to 2001 (so we are essentially comparing the observed data post 2002 to what we would expect given the prior 22 year trend). I also plot the 95% confidence intervals (e.g. two standard deviations of the residuals). You can see that the vast majority of months post-2002 fall well within two standard deviations of the pre-2002 trend. Given that there have been 83 months since 2002, we would expect about 4.5 months (e.g. 5 percent of months) to exceed the 95 percent confidence interval. We find that only three months exceeded it: March 2002 and January 2007 are above two standard deviations, and January 2008 is below.


    http://i81.photobucket.com/albums/j237/hausfath/Picture15.png

    So while the last 7 (or 10) years may not be consistent with 2 degrees C per century, they are perfectly consistent with the previous trend in GISS data. That’s not to say that the current reduction in temperatures is unimportant, but rather that it’s not unexpected in light of past trends. Future modeled warming is a completely different ballgame, and does not have much to do with the GISS temperature record per se.
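
    A rough sketch of that exercise, assuming the monthly GISS anomalies are in a pandas Series indexed by month from January 1979 onward (not the exact code behind the plots above):

```python
# Fit OLS to Jan 1979 - Dec 2001, then count how many later months fall outside
# +/- 2 standard deviations of the fit-period residuals.
import numpy as np

def check_against_prior_trend(gistemp):
    data = gistemp.loc["1979-01":"2008-12"]          # line up month indices
    fit = data.loc[:"2001-12"]
    x_fit = np.arange(len(fit))
    slope, intercept = np.polyfit(x_fit, fit.values, 1)

    x_all = np.arange(len(data))
    resid = data.values - (slope * x_all + intercept)
    band = 2.0 * resid[: len(fit)].std(ddof=1)       # ~95% band from fit-period scatter

    outside = np.sum(np.abs(resid[len(fit):]) > band)
    return slope * 120.0, band, outside              # trend in C/decade, band, count
```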

    Lucia, as an aside, it would be an interesting post to examine the practical implications of falsifying 2 degrees C per century of current warming. What implications does it have for the 2100 projections? Does it tell us that we have underestimated natural variability? Is there some forcing that may be unduly minimized in climate models? Granted, it might spill the beans on the conclusions section of your upcoming paper, but it would be fun to debate!

  69. Zeke–

    Does it tell us that we have underestimated natural variability? Is there some forcing that may be unduly minimized in climate models? Granted, it might spill the beans on the conclusions section of your upcoming paper, but it would be fun to debate!

    It won’t spill any beans on any conclusions. At most, it would spill beans on “speculations”, which may not get in due to page limits anyway.

    To some extent I like to keep the questions of 1) Is 2 C/century off? somewhat distinct from 2) What would it mean if it is?

    Presumably, we should be able to answer #1 without answering #2. (Moreover, the answer to #2 should not influence our conclusions from the data comparison.)

    That said, falsifying 2C/century would not automatically tell us anything about what happens in the long run. It sort of depends on what’s gone wrong with the models to make them be off. Were the SRES totally off? Are the sensitivities out of whack? Which models remain?

    But… once I do have the paper together a bit better, I’ll do this for fun: See what projection we get if we base them on a multi-model mean using only models that did use volcanic forcing and did not fail individually. (There are a couple of individual models with projections that one would be generous to call “not so terrific”. Some with no volcanic forcings look fine– but in some sense, that makes no sense!)

    FWIW, I haven’t done this…

    Zeke– I forgot to mention: There is a paper by Kiehl that explores a mystery about model agreement in the hindcast. He shows that GCMs with the highest sensitivity have mostly used higher estimates of aerosol forcing to drive the hindcasts. Those with the lowest use lower aerosol forcings to drive the hindcasts.

    Hypothetically, it’s possible that the multi-model mean is off because the models with lower sensitivity are closer to correct but the collection of models is biased high. Or, it may not mean that. I don’t know.

    I’ll have to check out the Kiehl paper. I recall Hansen saying something similar recently, that climate sensitivity in models is largely dependent on the choice of aerosol forcings (which makes sense, given that they are the single largest remaining uncertainty in calculating radiative forcing, especially indirect aerosol effects).

    Given that a fair chunk of future warming in the post-SRES scenarios in the AR4 is driven by decreasing aerosol emissions, if aerosol forcing does turn out to be on the lower side it will be good news indeed!

  72. Lucia, does the IPCC prediction of +0.2C/decade mean that each decade should show that increase to be ‘correct’? Or does it mean that over the next Century the average decadal increase should be 0.2C to be ‘correct’?

  73. Zeke–

    I don’t think it’s that sensitivity in models depends on the choice of aerosol forcings. I think the sensitivity depends on the choice of modeling parameters within the range permitted by experiments. That is: the cloud model, the convective parameterizations, etc.

    That said, modelers do sensitivity studies. And I think what has been suggested is that since matching the surface trend is sort of considered important for a credible model, and the historic aerosol forcings are not strongly constrained, what ends up happening is that groups whose AOGCMs have lower sensitivity end up forcing the runs with aerosol forcings on the higher end of the range consistent with data (and vice versa.)

    The result is all the models get roughly the same warming in the 20th century. However, each model matches the historic data for somewhat different reasons.

    But, of course, we can’t tell whether the high sensitivity/low aerosol forcing or the low sensitivity/high aerosol forcing case is better based on the data match in the hindcast.

    Now, moving on to the future: If we start in 2000 and drive all the models with the same forcings from any source, then generally speaking, the ones with high sensitivity will tend to warm more and those with lower sensitivity will tend to warm less.

    Maybe the reason the average trend is too large in the projection is the average sensitivity of the batch of models is too high. That is: The low sensitivity models may be more correct.

    Or… maybe it will turn out to be something else.

  74. Zeke Hausfather (Comment#9149):

    “I recall Hansen saying something similar recently, that climate sensitivity in models is largely dependent on the choice of aerosol forcings ”

    Translation: climate models are curve fitting exercises.

  75. Nathan–
    The “about 2 C/century” statement in the IPCC AR4 applies to the first two or three decades of this century, not the full century.

    To get something more specific, you need to download the model runs. The multi-model mean trend of IPCC runs driven by SRES A1B for the years from 2001-2008 inclusive is greater than 2C/century.

    The statement does not mean that we will see this precise trend in any particular year, decade, set of N years, etc. The observed value will vary just as weather does. But we can make statistical statements about how much the observed trend is expected to differ from the predicted value.
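
    For anyone who wants to check the multi-model number themselves, the calculation is short. A sketch, assuming the A1B runs have been loaded into a 2-D array of monthly anomalies (runs by months, Jan 2001 through Dec 2008); the array name is illustrative:

```python
# Trend of the multi-model mean over a set of runs (rows) by months (columns).
import numpy as np

def multimodel_mean_trend(model_runs):
    mean_series = model_runs.mean(axis=0)    # multi-model mean, month by month
    t = np.arange(mean_series.size)
    slope, _ = np.polyfit(t, mean_series, 1)
    return slope * 1200.0                    # convert C/month to C/century
```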

  76. TCO

    Yes, it’s me, Boris. FREE TCO!!! 🙂

    TCO is roaming freely over at Tamino’s blog. Based on Tamino’s inline comments, TCO is exercising his penchant for colorful language and getting snipped. Accusations of drunkenness are being hurled by the regulars at Tamino.

    http://tamino.wordpress.com/2008/12/31/stupid-is-as-stupid-does/#comment-27559

    You can read comments like this

    TCOisbanned? // January 20, 2009 at 2:31 am

    Tammy, I am too drunk to remember what was edited out….but I suppose it was pithy. Please remind me?

    [Response: If you’d been sober, I doubt you’d have said it. It wasn’t contrary or insulting, but was definitely crude and I won’t repeat it.]

    I don’t know how Tamino’s time stamps line up with mine– but either TCO was posting there and here in parallel, or he moved over there when I reactivated troll control yesterday evening.

  77. Lucia, so if the 0.2C per decade is for (at least) the first two decades, how much data do you need before you can disprove it? Do you need two decades worth? Can it only be disproved after the event?

    a pity about TCO.. I am sure that when he/she is sober some useful comments could be generated, assuming he/she is a climate scientist. I for one do not believe there is such a thing as “climate science”.. yet.. it’s just beginning to develop. For the time being I rely on meteorologists and geologists etc.. LOL

  79. Just to add to my previous comment: there is of course a great future for “climate science”, so however much we skeptics rant on about it, it should continue. One day they will start to get some of the models right, we hope.

  80. Lucia, so if the 0.2C per decade is for (at least) the first two decades, how much data do you need before you can disprove it?

    The answer depends on the signal to noise ratio in the data and on how poor the prediction is. If the noise were really, truly zero, the predicted trend really truly linear, and the trend off by a huge amount, we could disprove it in a month. If the noise is huge and the predicted trend non-linear, disproving it could be a practical impossibility.

    Do you need two decades worth?

    As a practical matter, it appears we do not.

    Can it only be disproved after the event?

    Which event? We can only disprove it based on data collected during the period when 2C/century is predicted to apply. That’s now. But we don’t need to wait 2 decades. If they’d predicted a linear rate of increase of 2C/century for 2 million years, we wouldn’t have to wait 2 million years. We could check the rate now.
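
    A rough illustration of the signal-to-noise point: the sketch below estimates how many months of data would be needed before a trend error of a given size could be detected, assuming independent monthly noise. Both numbers in the sketch are illustrative assumptions, not estimates from the actual data.

    ```python
    # Crude detectability sketch under a white-noise assumption.
    import numpy as np

    noise_sd = 0.1            # assumed monthly weather noise, C
    trend_gap = 0.1 / 120.0   # assumed discrepancy: 0.1 C/decade, expressed in C per month

    for n in range(12, 601, 12):
        sxx = n * (n**2 - 1) / 12.0          # sum of squared deviations for months 1..n
        se_slope = noise_sd / np.sqrt(sxx)    # white-noise standard error of the OLS slope
        if abs(trend_gap) > 2 * se_slope:     # crude 2-sigma detection criterion
            print(f"detectable after about {n} months ({n // 12} years)")
            break
    else:
        print("not detectable within 50 years under these assumptions")
    ```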

  81. There shouldn’t be so much leeway to “force” the aerosol forcing.

    This should have been settled already and it should at least match up with measured aerosols over the last several decades. I’ve plotted GISS’ Model E aerosol negative forcing before and it is more-or-less just a straight line going down since 1880 (maybe 1920). It really should have accelerated throughout the 1960s and 1970s and then slowed down and potentially declined throughout the 1980s and 1990s (until the Asian Brown Cloud got going around 2000). Nope, it’s just a straight line going down. I’ve also shown the Volcano forcing is far off the mark as well.

    How many people who have so much faith in the climate models know this is the case? Not too many.

    The 0.2C per decade warming estimate is an artificial number which falls out of the basic forcing formulae and the plugged negative forcings for aerosols and other factors. It doesn’t fall out as an inherent characteristic of the model simulations, it is not a “result”, it is an assumption.

  82. It doesn’t fall out as an inherent characteristic of the model simulations, it is not a “result”, it is an assumption.

    Where do you guys get this stuff?

  83. Boris,

    Any modeller who produced a model that did not confirm the “consensus” view on CO2 sensitivity would be told that their model is wrong. What this means is they will have to go back and adjust parameters like aerosol forcings to ensure that the resulting CO2 sensitivity conforms to expectations.

    This would not happen in fields where a modeller could prove the correctness of a ‘non-consensus’ model via experiments. Without that connection to real-world validation, climate models are reduced to proving themselves by showing how well they blend into the crowd.

  84. How good are our current measurements of aerosol levels and associated forcings (current as in now, or over the last several decades)? What instruments are measuring that? Where are the datasets?

    The alternatives that seem to be available to explain the general “below-the-trend-line” nature of the last couple of years, compared to the “above-the-trend-line” of 2000-2005 at least, are:

    (1) the solar cycle is currently near a minimum; perhaps the climate is more sensitive to TSI changes than normally estimated. But near-invariance of response to forcings means increased sensitivity to all forms of forcing, so the impact of doubling CO2, all other forcings held steady, will be that much greater, and the full trend in coming decades should be greater than 0.2 C/decade, and will be particularly fast during the upswing phase of a solar cycle.

    (2) perhaps aerosol forcings in the most recent few years (Asian Brown Cloud?) are more negative than is normally estimated. Aerosol lifetimes are short; the present economic downturn and calls for environmental cleanup in developing countries may reduce this effect within a year or two, and we should see a bounce back in temperatures to the expected trend and above.

    (3) unaccounted energy flows: perhaps the oceans and land are absorbing more of the incoming energy imbalance than expected, through melting ice and increasing temperatures below the surface, leaving surface temperatures lower than expected. This could continue suppressing surface temperatures for a long time…

  85. Lucia,

    You are right, of course, that the sensitivity in models emerges from the choice of modeling parameters rather than any a priori decision on aerosol forcing. That said, even Hansen agrees that there is a good deal of arbitrariness when climate modelers decide on the aerosol forcing used, even if it is constrained by the need to correctly hindcast.

    The recent presentation by Hansen is a fascinating read, wherein he argues that we may have a higher aerosol forcing and a faster response time due to less rapid ocean mixing than is currently modeled http://www.columbia.edu/%7Ejeh1/2008/AGUBjerknes_20081217.pdf

    Interestingly, a higher aerosol forcing implies a higher climate sensitivity, which also means that natural variability would have a greater impact on the climate than in models with low aerosol forcing.

    I’ll quote the relevant parts below:

    “There are two major forcings in the industrial era, both human-made. The greenhouse gas forcing is large and positive, causing warming. It is known very accurately. Aerosols cause a net negative forcing, via their direct effect on sunlight and their effect on cloud properties, but the error bars are huge.

    IPCC has a good way of showing this uncertainty. The greenhouse gas forcing is a sharp function, well-known at about +3 W/m2. But the aerosol forcing might be anywhere between zero and -3 W/m2. So the net forcing, the red area, is anywhere between zero and +3 W/m2, probably between about +1 and +2 Watts.

    How do the different climate modeling groups decide upon the aerosol forcing? I asked my granddaughter Sophie, and she said that it was about two Watts. Her brother could only count 1 Watt. But I took Sophie’s advice, not Connor’s.

    I’m just kidding, of course, we had a rationale for the aerosol forcing that we used, but my point is that there is a good deal of arbitrariness in the decision, and we must admit that the error bar is huge. So keep in mind that it is almost as likely that that the actual net forcing is close to Connor’s +1 Watt as to Sophie’s +2 Watts…

    When we use Sophie’s 2 Watts net forcing, we get beautiful agreement with observed global temperature over the past century.
    This climate model, the GISS climate model, has sensitivity 3C for doubled CO2, which is realistic (by the way, the realistic fast-feedback climate sensitivity of the model does not mean that the individual fast feedbacks are accurately modeled, only that their net effect is approximately correct). So does this result confirm that the net climate forcing really is about +2 Watts?

    No – because there is another important variable, the climate response time. And we now have several reasons to believe that the climate response time of the GISS ocean model, and most ocean models, is probably too long. Most of the IPCC ocean models seem to mix too rapidly…

    Comparisons show that all four models have similarly long surface temperature response times. Unfortunately, this does not indicate that the models are right. On the contrary, there are numerous indications that they have a common problem. First, overall, they tend to mix transient tracers more than observed. Second, theoretical work at GiSS, by Vittorio Canuto’s group, shows that mixing parameterizations, such as the common KPP approximation, cause too much mixing in the upper ocean.

    Third, there is the most important measurement – the change of ocean heat content. Twenty years ago, when I was asked ‘what is the most important measurement for global climate change’, I said ‘ocean heat storage, because that defines the planet’s energy imbalance. Measurements are getting better, but most measurements are mainly in the upper ocean. Reanalyses of old data, such as this analysis for the upper 700 m, are doing a better job of correcting for instrumental changes, but there are still big uncertainties and disagreements between the researchers.

    The situation in the deep ocean is worse. There is not enough good data.
    Levitus’ analysis yields very little heat gain in the deep ocean. I think that he may underestimate heat storage because of an assumption of no change where no observations exist. Nevertheless, the ocean data show very little increase of ocean heat in the past few years. Overall, it has become clear that there is a discrepancy between observations and the heat gain calculated in most models, if the models use a net human-made forcing of +2 W/m2 and if the oceans mix as deeply as most ocean models do.

    Most of the IPCC models that had a realistic sensitivity of 3C for doubled CO2 used a net forcing of about 2 W/m2. Conceivably there is a sub-conscious preference for a forcing that yields a surface warming similar to that observed. Now, if the climate model response function is too slow, what does that imply? It means that the net forcing must be less than 2 W/m2, if we want to retain good agreement with observed global warming…

    Now, we would like to know: what are the consequences if the real-world ocean mixes less rapidly than in GCMs? The response in the first 10 years depends mainly on the ocean mixed layer, but on longer time scales the surface response is faster if mixing into the deep ocean is slower. The real world probably falls between the blue and red curves, but we know not where. The faster climate forcing, the red curve, would require a net forcing closer to Connor’s 1 Watt. Are there any testable consequences of these two alternatives? Well, if Connor is right, if the net forcing is closer to 1 Watt, then the portion of the forcing that the planet has not yet responded to is much smaller than in the case of net 2 W/m2 forcing, i.e., the planet is closer to energy balance. That means that global temperature will be more responsive to ongoing changes of global climate forcing, even moderate changes such as solar irradiance changes of 0.2 W/m2…

    This smaller net forcing makes the effect of the solar cycle only a bit more apparent, in the waviness of computed temperature, which rises to a new record level within the next few years. The calculation assumes that the coming solar cycle will be similar to the last one. (A few small volcanoes in the past few years also contribute slightly to the waviness; stratospheric aerosol data was kindly provided by Larry Thomason).
    So I expect to see new global temperature records within several years. But when you add in chaotic variability, short-term change of global temperature does not provide a very strong discriminate for the forcing.”

  86. Arthur,

    You are starting to use skeptics arguments!

    A solar cycle that can cause a 0.2 degC drop in temps cannot be explained by mere TSI. There must be a currently unknown mechanism at work. If that is the case you cannot assume that it follows the 11-year cycle exactly, nor can you assume that it does not have an underlying trend (i.e. some of the warming in the 20th century could be attributed to this unknown solar mechanism).

    Arguing that the deep ocean can sequester heat for unknown periods of time implies that some of the warming could be due to previously sequestered heat surfacing from the deep ocean. This would mean that it would be impossible to determine the CO2 sensitivity.

  87. Zeke–
    Hansen’s argument assumes the sensitivity of 3 C for a doubling of CO2 is correct, and the forcing ‘fiddle’ then adjusts for getting the ocean response time wrong. So, in this way, the forcing ‘fiddle’ in the hindcast permits them to match the historic surface temperature, by offsetting errors in the ocean response time.

    However, Kiehl’s paper discusses the possibility the forcing ‘fiddle’ is offsetting the factor of 3 range in climate sensitivities across models. (I say ‘fiddle’ because I think people understand tuning to be adjusting the parameters that dictate the climate sensitivity and the ocean heat uptake etc.)

    Kiehl does discuss the complicating factor of heat uptake in the ocean. But his analysis shows that, as a practical matter in implementation by modelers, the slop in our knowledge of the historic levels of aerosol forcing ends up with this situation:

    A) Groups who came up with models with high climate sensitivity apply the lower range of aerosol forcings.
    B) Groups who came up with models with low climate sensitivity apply the higher range of aerosol forcings.

    I don’t know if anyone has done a similar analysis to figure out if Hansen’s suggestion that the ‘fiddle’ in applied forcings is correlated with the time constants in models. However, that may be so also.

    If Hansen’s speculation that the fiddle offsets errors in ocean heat uptake is correct, then yes, the rest of his argument holds.

    The difficulties are:

    a) We can’t know for sure whether the “aerosol fiddle” mostly offsets incorrect climate sensitivity or incorrect ocean heat content. Other than the need to get a full set of tunings and fiddles which match hindcasts, the modelers aren’t intentionally trying to skew their model in any particular way. They are trying to get things right.

    b) Kiehl has shown the “aerosol fiddle” is correlated with the model’s climate sensitivity. This argues at least in part against Hansen’s speculation that it is being used to correct for incorrect time constants for the climate. (It does not argue completely because the correlation is hardly perfect. Kiehl discusses this.)

    c) Finally, Arthur listed many reasons why the models may, on average, be fine on the physics and that instead, the problem lies in the projected forcings. After all, the SRES were published in 2001 (on web in Nov 2000). So, we didn’t actually know the 2008 forcings in 2008.

    d) In the end, all the failure to project for any model tells us is: Something is wrong. That the multimodel mean is off tells us: Something is wrong on average. We don’t know if it’s sensitivity, ocean heat uptake, the forcings used to drive the models etc.

  88. Any modeller who produced a model that did not confirm the “concensus” view on CO2 sensitivity would be told that their model is wrong.

    ‘Cus it’s all a conspiracy. It always comes back to that for you guys, doesn’t it?

  89. Raven,
    The most widely cited value for GMST sensitivity to the solar cycle is about 0.1C (peak-to-trough). A typical solar cycle increases for about 4 years and decreases for about 7 years. A decrease of 0.1C over 7 years implies a solar-cycle-induced trend of -0.14C/decade.

    Solar cycle 23 is ending a little differently. It’s been on the downward trending side for about 8 years, and the current low is lower than most solar cycles. It’s not a stretch to think that the peak-to-trough on this cycle could be 0.12C (20% more than the most widely cited value). A decrease of 0.12C over 8 years implies a solar-cycle-induced trend of -0.15C/decade.
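
    The arithmetic above, spelled out in a short snippet. The peak-to-trough amplitudes and downswing lengths are the values quoted in the comment.

    ```python
    # Trend in C/decade implied by a linear decline over the downswing of a solar cycle.
    def solar_cycle_trend(peak_to_trough_c, downswing_years):
        return -peak_to_trough_c / downswing_years * 10.0

    print(solar_cycle_trend(0.10, 7))  # about -0.14 C/decade
    print(solar_cycle_trend(0.12, 8))  # -0.15 C/decade
    ```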

    Remove the solar-cycle-induced trend, remove a little ENSO-induced trend, and the current trend gets a lot closer to 0.2C/decade.

    This is all consistent with IPCC-endorsed studies and communications from Hansen. Tamino has looked at the solar cycle and concluded that the effect must be small — I’m not sure if 0.1C peak-to-trough would fit his definition.

  90. Raven–
    I think when JohnV says “most widely cited value for GMST sensitivity to the solar cycle”, he means: The one paper to detect any measurable sensitivity cited in the AR4 finds a 0.1C peak to trough sensitivity. The results are not easily replicated, and involve a lot of data manipulation to find the signal. Not everyone believes the sensitivity is this high— but the paper did pass peer review, and it is cited in the AR4.

    Many climate models don’t bother to include the effect in simulations. I don’t know if this is because it’s too difficult to include in the forcing files or if it’s because modelers don’t think it matters much.

  91. Lucia,

    I think there is a fair amount of agreement between Kiehl and Hansen that models with high climate sensitivity apply the lower range of aerosol forcings. Given that GHG forcings are known with a fairly high level of certainty, the only real adjustment that one can make to change anthropogenic forcing is the choice of aerosol forcings. A model with a high climate sensitivity needs something to offset the bulk of anthro GHG forcing to effectively hindcast past temperatures OR the model needs a faster ocean mixing rate. For a given climate sensitivity, a slower ocean mixing rate would necessarily imply stronger aerosol forcings.

    That said, as Kiehl points out, the interactions of forcings, feedbacks, and thermal inertia can allow for models with a broad range of sensitivities to effectively hindcast past temperatures. However, it seems like the observed relationship between the chosen aerosol forcing and model sensitivity is in part a result of the use of a relatively constant ocean energy storage across models. If Hansen is right in his assertion that current models may be overestimating ocean energy storage, it opens a whole new range of possible uncertainties!

  92. lucia,
    .
    The only papers that I’m aware of regarding solar-cycle sensitivity have found a range from 0.06C from GISS Model E to 0.18C (Camp&Tung, 2007). Douglass, Clader, and Knox (2004) found a sensitivity of 0.1C. Hansen consistently states 0.1C in his communications. If I remember correctly, basic radiative balance would lead to a peak-to-trough effect of 0.05C (before water vapour feedback).
    .
    Are you aware of any studies that find a smaller effect of the solar cycle on GMST?

    By the way, the term “lower range of aerosol forcings” is somewhat clumsy, since it effectively means that aerosol forcing is higher (as in, has a greater absolute value in W/m^2).

    Kiehl points out the strong inverse relationship between the absolute value of aerosol forcing and total forcings in climate models, which is obvious, as well as the inverse relationship between total forcing and climate sensitivity. This implies that higher absolute aerosol forcings –> lower total forcings –> higher climate sensitivity.

    I’m assuming we are both reading the same Kiehl 2007 paper: http://www.gfdl.gov/~ih/jerusalem_papers/kiehl.pdf

  94. Zeke–

    Yes. I think of the ocean heat storage in terms of time constant. But yes, if model oceans respond more quickly than reality, then, in the model, they would take up more heat than in the real earth. Assuming the model gets the correct climate sensitivity, then the ‘fiddle’ during periods when aerosol forcing is positive would be to ramp up the effect of aerosols, and get the model planet to warm up. In this case, the model aerosols make up for the excess heat lost into the model ocean. So, you turn forcing onto “high”.

    On the other hand, if the applied forcing in the model is right, then the climate sensitivity must have been too low.

    The difficulty is: We don’t know if Hansen’s speculation about the ocean is right or wrong. Suppose the ocean mixing is faster and the time constant lower than in models?

    The difficulty with all this is we have three knobs to explain agreement with the hindcast. So every discussion involves assuming one knob is about right (or biased in a particular direction). That’s why I said that even if we agree there is currently a disagreement, it is difficult to say what the current disagreement tells us specifically about the models. (And this is so even if we disregard that the disagreement could be due to poor SRES.)

    More generally, one can say: Either the SRES are inadequate in some way, or the biases in models do not cancel.

    As for SRES inadequacy: Even if the issue is solar, there is the difficulty that the models switch to average solar in the forecast. (Some use that all along.) Based on wording of the AR4, I suspect they all dropped down to the average instead of setting solar to the peak forever and ever–but I haven’t checked that. For model inadequacies: We can’t know whether the issue would be models all with slightly too high climate sensitivities, or slightly too low– or high ocean mixing!

  95. I was thinking about the super el nino in 1998 and I noticed that the trend before it occurred seemed very different than that afterwards, and the overall shape of the data seemed different. See this graph .

    I did OLS trend lines for the UAH data up to the start of the peak and after the end and got the plot in this graph. The trend before is 0.003 deg / year (purple) and after (green) is 0.011 while the overall is 0.012 (red). The red solid line is what I ignored (I used the zero crossing points to decide where to cut). The two horizontal dotted lines represent the means of the two subsets. The difference amounts to just over .2 degrees which is significantly larger than that produced by the trend during the data period before the event.

    What it looks like to me is that the el nino caused some sort of damped oscillation in the temp data. El Nino’s are a release of heat and it’s got to go somewhere, but where? Back to the ocean, upper atmosphere? Notice that the extended purple line starts to intersect with the data after the event as if the pulse is decaying back to the .003 trend line. My hypothesis is that that portion of the released heat that wasn’t radiated away in the initial release went somewhere and then was returned to the surface causing the temperature rise that occurred after the event and that this is starting to dissipate. My guess is that the temps will go back to the trend line or “bounce” once more before they do, but you’d need a lot more data to validate that. I haven’t checked to see if there are any other events like the 98 one that exhibit the same pattern.
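
    A minimal sketch of the split-trend calculation described above, assuming a UAH-style monthly anomaly series. The file name and the cut dates bracketing the 1997-98 El Nino peak are hypothetical; the comment’s exact zero-crossing points are not reproduced here.

    ```python
    # Separate OLS trends before and after the 1998 El Nino, plus the overall trend.
    import numpy as np

    anoms = np.loadtxt("uah_monthly_anomalies.txt")      # hypothetical monthly anomalies, C
    t = 1979.0 + np.arange(anoms.size) / 12.0            # decimal years, assuming the series starts Jan 1979

    cut_lo, cut_hi = 1997.3, 1998.8                      # hypothetical bounds of the excluded peak
    before, after = t < cut_lo, t > cut_hi

    trend_before = np.polyfit(t[before], anoms[before], 1)[0]
    trend_after  = np.polyfit(t[after],  anoms[after],  1)[0]
    trend_all    = np.polyfit(t, anoms, 1)[0]
    mean_shift   = anoms[after].mean() - anoms[before].mean()

    print(f"before: {trend_before:+.3f} C/yr  after: {trend_after:+.3f} C/yr  "
          f"overall: {trend_all:+.3f} C/yr  mean shift: {mean_shift:+.2f} C")
    ```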

  96. JohnV–
    One of the known features of peer review is that people don’t report non-results in the peer reviewed literature. This is true in medicine, climate, everything. I’ve read the papers discussing the finding of measurable peak to trough swings and they look like masterpieces of filtering, selecting, refiltering etc.

    I’m under the impression Hansen systematically says “up to” in his communications and generally in a tone that suggests this value is not to be interpreted as any sort of probable value. So, he is communicating the highest value suggested. This does not imply that 0.1 C is “the most widely cited.”

    Where did you find the modelE sensitivity to peak to trough solar forcing? I remember looking at model E averages and it was nearly impossible to see any peak to trough variability. Is my memory faulty? Has someone shown something else?

    The basic radiative balance represents an upper bound because it does not account for response time which greatly moderates the temperature swings. (Thank heaven, or summer would be much hotter and winter much colder!)

  97. Bill Illis (Comment#9163)

    Have you ever tried to plot total forcing against temperature using a suitable scale factor? This would be equivalent to Lucia’s Lumpy model with a zero time constant (no heat storage).

    I would not expect this to be as good as Lumpy but it might reveal how much of the hindcast fit of Model E is due to the choice of historical forcing rather than any particular model skill. If the “shape” of the forcing tracks the temperature fairly well we might conclude that the simplest model is just T = k F where T is temperature change, k is equilibrium sensitivity and F is change in forcing.
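
    A sketch of the suggestion above: scale total forcing by a constant and compare it to observed temperature change, i.e. T = k F with no heat storage. The file names are placeholders, and picking k by least squares is just one reasonable choice.

    ```python
    # Zero-time-constant "model": temperature change = k * total forcing.
    import numpy as np

    years, forcing = np.loadtxt("giss_total_forcing.txt", unpack=True)    # hypothetical annual forcing, W/m2
    _, temp = np.loadtxt("annual_temperature_anomaly.txt", unpack=True)   # hypothetical annual anomalies, C
                                                                          # (assumed to cover the same years)
    # Least-squares scale factor k (C per W/m2) so that k*F best matches T
    k = np.sum(forcing * temp) / np.sum(forcing**2)
    resid = temp - k * forcing

    print(f"k = {k:.2f} C per W/m2, residual std = {resid.std():.2f} C")
    ```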

  98. Lucia,
    .
    You’re right about Model E — I found a temperature response in Model E of only about 0.03K. It was a different model that had a 0.06K response, but I can’t find the link right now.
    .
    It’s difficult to call the basic radiative balance an upper bound, since it does not include the well-accepted water vapour feedback.
    .
    Again, do you have any references that suggest a smaller response to the solar cycle? A simple yes or no will do.

  99. Zeke–
    What do you mean by this

    Kiehl points out the strong inverse relationship between the absolute value of aerosol forcing and total forcings in climate models,

    On page 1 of 4, Kiehl says “The temporal evolution of well mixed ghg’s is more constrained.” That is to say: Modelers can’t fiddle with this as much and still remain respectable.

    So, do you mean that, since aerosol forcings are negative while ghgs is positive, there is an inverse relation in total forcing (the sum of both) and the strength of aerosol forcing selected? ( I can read it to mean other things, so I’m asking.)

    I think maybe I misexpressed things:
    In figure 1 in Kiehl, models that are driven by higher forcings have lower climate sensitivity. Models driven by lower forcings have higher sensitivity. The result is a good match with data.

    The method modelers use to adjust the total forcing is to adjust the aerosol forcing in the proper direction to achieve a respectable match to the overall trend in GMST over time.

    So, all models get decent agreement with the hindcast. However, good agreement may have been achieved for the wrong reasons. (The right reason would be correctly simulating the climate given known forcings and also using the correct forcing.) Since climate sensitivity varies by a factor of 3 in models, we can be nearly certain good agreement is achieved for the wrong reason by at least some models.

    Having said that, suppose we are now presented with this issue:

    The projection from a particular model overshoots the observations. Let us, for the time being, stipulate that we all agree they are overshooting. (I realize we don’t all agree… but to discuss what it would mean, we need to assume it’s true.)

    What could be the cause? Let’s assume for the time being the ocean heat uptake is bang on perfect as are the SRES. In this case, does the overshoot point to excess model sensitivity or the opposite?

    1) Suppose the model sensitivity for the model was too high, and the total forcing applied too low.

    This means the modelers added too much cooling due to aerosols. Due to the Montreal protocols, we expect the anthro effect of aerosols to have diminished, warming in the short term and eventually not mattering so much. Once diminished, the aerosol tuning can no longer matter to the model and can’t compensate for the excess model climate sensitivity.

    Meanwhile the warming effect of ghg forcings is thought to increase at a faster rate.

    Plus, we think there is “heat in the pipeline” — due to the model sensitivity. The net effect is for the model with too high climate sensitivity to predict warming at a more rapid rate than really occurs.

    2) If model sensitivity is too low, the model will predict slower heating than really occurs.

    Of course, all of this is complicated by the fact that the ocean heat uptake could be wrong and the predicted forcings could be wrong. The Asian brown cloud may be cooling etc. So…. hard to say what any current overshoot means.

    I think it’s still useful to figure out if it has happened (and I think it has.)

  100. JohnV

    Again, do you have any references that suggest a smaller response to the solar cycle? A simple yes or no will do.

    My answer was:

    One of the known features of peer review is that people don’t report non-results in the peer reviewed literature. This is true in medicine, climate, everything. I’ve read the papers discussing the finding of measurable peak to trough swings and they look like masterpieces of filtering, selecting, refiltering etc.

    And I continued.

    I should have begun with “No”. I don’t. But
    a) I’ve read the two papers finding the effect, and they are pretty unconvincing
    b) the way modelers refer to and use the cited findings suggests many are dubious of that number.

    On the “upper bound” issue: Sure, I missed the water vapor part, you are right. But, any steady state solution misses the time constant feature. If response were instantaneous, then we’d expect something similar for a) the seasons, b) the annual cycle and c) response to ghgs. There would be no “in the pipeline” issue.

    Where did you find the temperature response to the solar cycle for Model E is 0.03K? And what method did you use?

  101. Lucia,

    I meant the same thing as you. I was referencing Kiehl’s statement that “Figure 2 shows the correlation between total anthropogenic forcing and forcing due to tropospheric aerosols. There is a strong positive correlation between these two quantities with a near 3-fold range in the magnitude of aerosol forcing applied over the 20th century” wherein figure two shows total forcing decreasing in climate models as the magnitude of negative forcing increases.

    I do agree that a lower rate of warming in the past decade could suggest that climate sensitivity is lower than expected. On the other hand, models with high climate sensitivity are more susceptible to natural variability (since GHG forcings are masked by aerosols and less dominant). So another interpretation is that sensitivity is higher than expected, and temperatures in the last 10 years are more strongly impacted by solar and ENSO than currently modeled. I guess the best way to check would be to see how natural forcings would behave under a high sensitivity model, and how closely that matches past temperatures.

    You mentioned that “due to the Montreal protocols, we expect the antrho effect of aerosols to have diminished”, but I think you may be mixing up aerosol sprays containing CFCs with sulphate aerosols from fossil fuel combustion. We will have Kuznets, rather than Montreal, to blame for any declining aerosol emissions: http://en.wikipedia.org/wiki/Kuznets_curve

  102. John V,

    1) Leif’s argument is that we have roughly a 90 W/m2 change in TSI every 6 months, so there should be some evidence of an annual solar signal if a 1 W/m2 TSI change can have a large effect, even if the response is damped. Leif seems to feel that no one has demonstrated that the annual signal exists, which makes claims of a large TSI-induced effect less plausible.

    2) Using statistical analysis to estimate the solar effect is misleading because it is not possible to separate what is solar from what is ENSO. The same issue exists with attributing water vapour feedback. e.g. Spencer argues that clouds are a decadal forcing in themselves and this means the climate models are overestimating water vapour feedback.

  103. Zeke–
    You are right on the Montreal bit.

    Anyway, I think you can see why I think it’s difficult to go from “models projections off” to “and this is precisely why”. We can speculate, but the speculations then require further digging. Depending on how we argue, we can decide it means the distant future is either better or worse than currently predicted.

    The reason why I’m focusing on “are or are not off” is that the digging to figure out “why” is required if and only if the projections are off. So, I want to get this together and write something more formal before I go off trying to figure out why they are off. In any case, others would be looking into that question. So, I’m not worried it won’t be looked at.

    On the ENSO reference: I don’t think that’s a forcing. It’s natural variability. It might be interesting for someone to dig and do some funky correlation analysis to see how a model ENSO looks compared to the real ENSO. It’s not obvious to me how it could be –but that doesn’t mean it can’t be done by someone.

    I’m a bit amused by the argument that current overprediction should point to higher climate sensitivity. Did Rahmstorf suggest under prediction pointed toward lower climate sensitivity when they published in 2007?

    Not to suggest you are responsible for their intimations. But it would be a bit amusing if we decide that models shooting too high means climate sensitivity is probably underestimated by models and models shooting too low means climate sensitivity is probably underestimated by models.

    I don’t know what the current over-projection means. But I’d be a bit more persuaded if the idea that under-projecting temperature means high climate sensitivity had been brought up in papers like Rahmstorf’s, which were telling us the models were overshooting.

  104. Lucia,
    .
    I looked at the Model E sensitivity somewhere in a comment on your site. If I remember right, I simply did a Fourier analysis on a Model E simulation that was run using only solar forcings (ie. all other forcing constant). It was in this thread:
    .
    http://rankexploits.com/musings/2008/what-about-the-solar-cycle-yes-john-v-that-could-explain-the-falsification/
    .
    I believe there are more than two papers that find the effect of the solar cycle on GMST. A quick search reveals many titles but I have not read them to confirm.
    .
    Anyways, I’ve had my say. I was initially only responding to Raven, who said something about a 0.2C peak-to-trough solar cycle response. Whether it is real or not, we can both agree that a solar cycle response of about -0.1C from 2001 to 2008 would substantially affect the current trend relative to the underlying trend.
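
    A minimal sketch of the kind of Fourier analysis JohnV describes: estimate the amplitude of an ~11-year periodicity in a monthly GMST series from a solar-forcing-only run. The input file name is a placeholder assumption, not the actual Model E output.

    ```python
    # Estimate the ~11-year amplitude in a detrended monthly series via FFT.
    import numpy as np

    gmst = np.loadtxt("modelE_solar_only_monthly.txt")    # hypothetical monthly GMST series, C
    x = np.arange(gmst.size)
    gmst = gmst - np.polyval(np.polyfit(x, gmst, 1), x)   # remove any linear trend first

    spec = np.fft.rfft(gmst)
    freqs = np.fft.rfftfreq(gmst.size, d=1.0 / 12.0)      # cycles per year (monthly sampling)

    i = np.argmin(np.abs(freqs - 1.0 / 11.0))             # bin nearest the ~11-year cycle
    amplitude = 2.0 * np.abs(spec[i]) / gmst.size         # single-sided sinusoid amplitude
    print(f"~11-year amplitude: {amplitude:.3f} C (peak-to-trough about {2 * amplitude:.3f} C)")
    ```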

  105. JohnV–
    I’m not sure I can agree with the sentence with the “whether or not it is real” bit:

    If there is a real, deterministic effect that leads to a drop of 0.1C over the 8 years and that effect is not accounted for by the models, this could explain why the model projections do not match what really happened. This is true whether the deterministic effect is solar, land use changes, black soot or anything else.

    However, if the solar effect is not 0.1 C over that time period it cannot account for the discrepancy. This is why the issue of whether people believe that effect is real is important to the discussion.

  106. Yes, lucia, that’s what I’m saying.
    I’ll re-phrase:
    “If it is real, we can both agree that a hypothetical solar cycle response of about -0.1C from 2001 to 2008 would substantially affect the current trend relative to the underlying trend.”
    .
    We often have these misunderstandings in how things are phrased.
    Maybe it’s my dialect.

  107. Raven (#9187) – and you’re starting to use alarmist arguments? 🙂 I guess Leif’s somewhere in middle ground though.

    But I’m quite sure Leif is aware that there is a strong annual temperature signal in Earth’s global mean surface temperature, which is removed in the “anomaly” series since those are anomalies relative to the average for that month (over the reference period), and not compared to the full annual average.

    Of course, the interesting thing about Earth’s annual temperature signal is that it is in the opposite direction to changes in TSI: TSI is highest in southern summer (January – when Earth is closest to the Sun), but actual GMST peaks in northern summer (July seems highest on average), due largely to the imbalance in land surface area between the northern and southern hemispheres.

    One estimate of the full range of GMST is here: http://www.ncdc.noaa.gov/oa/climate/research/anomalies/anomalies.html
    i.e. from 12.0 C in January to 15.8 C in July; 3.8 degrees.

  108. JohnV–

    I think in casual blog talk, people tend to parse things the way they think they probably are. I am under the impression that you think the 0.1C peak to trough is probably right and widely accepted.

    I think that amount has been reported in the peer reviewed literature, and it is admitted as a possibility. I also think many people are dubious and think the true effect is smaller.

    So…. unless we are very careful, I phrase things in ways that imply the peak to trough is in super-mega serious doubt, and you phrase things as if it is widely accepted.

    But if you say “I think we can both agree”, then the way you phrase it becomes important. I can’t agree if the phrasing seems to imply that that level of response is accepted as true.

    Blogs being what they are, this is inevitable. (It would happen in coffee house discussions too. But there we would have tone, and you’d probably get that my reply is just saying “yes… but … X”.)

  109. Arthur and Raven–
    The models even predict the annual variation in GMST.

    http://rankexploits.com/musings/2008/questionanswer-about-pielke-srs-post/

    Here are two models selected for no particular reason:

    Being out of phase would be the sort of thing predicted if the response time of the climate system were high. (Or at least it would be if we treat this as a 1-d single lump climate.) Similarly, being damped relative to what you would estimate based on steady-state climate sensitivity is something you expect if the time constant is high.

  110. Regarding the solar impact, it is very, very hard to tease this out of the temperature series properly.

    One can use the satellite TSI changes to produce an estimate of the temperature change over the solar cycle, and this calculation produces a small number: (0.9 W/m2 / 4) * 0.32 C/(W/m2) up to (1.0 W/m2 / 4) * 0.75 C/(W/m2), i.e. about 0.07C to 0.19C.
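
    The back-of-envelope numbers above, made explicit. The TSI swings and sensitivity values are the ones quoted in the comment; the division by 4 is the geometric factor used there.

    ```python
    # dT = (dTSI / 4) * sensitivity, for the two bounding cases quoted above.
    for dTSI, sens in [(0.9, 0.32), (1.0, 0.75)]:
        dF = dTSI / 4.0    # divide by 4 for spherical geometry, as in the comment
        print(f"dF = {dF:.3f} W/m2 -> dT = {dF * sens:.2f} C  (sensitivity {sens} C per W/m2)")
    ```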

    But when one tries to tease these kind of numbers out, it is very hard to confirm – the 0.19C seems to be much too high. Spectral analysis of the temp series seems to show there is more of a 22 year Hale cycle in the numbers rather than an 11 year Schwabe solar cycle but again it is hard to pull out.

    So one is left with a range of solar cycle temperature estimates from first principles, and then clearly some kind of solar cycle influence in the numbers which doesn’t allow you to do much with it, but which does indicate there is probably a solar cycle influence in the data.

    So one is just left with a rough estimate that it is probably 0.1C difference peak to trough.

    I’m prepared to go with the proposition that the currently weak Sun has contributed a 0.1C decline from 2001 to 2008.

    And these numbers WILL be included in ALL the climate models from now on (they do need the big negative forcings to make the numbers work of course). Hansen more-or-less confirmed this in his most recent blog post.

    Jorge asked about a timeline of GHG forcing versus all other forcings. Here it is for GISS ModelE. I can break down the “other forcings” into the 5 big components (including solar) if someone wants.

    http://img183.imageshack.us/img183/6131/modeleghgvsotherbc9.png

  111. Bill Illis:
    That’s an interesting graph. Do you have monthly source data for all of the forcings? A reference would be great if you do. Thanks.

  112. JohnV
    Somewhat similar graphs from GISS are here (scroll down). On the line under the graphs there is a pointer to annual forcing data in various categories.

  113. Bill Illis,

    Thanks for your graph. Also for the GISS reference. It looks to me as though almost any model of the climate will do to reproduce historic temperatures as long as one is free to choose the historic total forcing.

  114. I enter this discussion reluctantly, knowing that I’m one of the least knowledgeable people here and that my only qualification is a capacity for rational thought. However, I’ve been patiently reading every post looking for something that isn’t being brought up (although Lucia may be hinting at it obliquely).

    So, Climate Sensitivity is thought to be 3C for a doubling of CO2 and the other main factors in the models seem to be sulfates and ocean response. We haven’t seen warming that fits a 3C sensitivity (based on the observation of 0.6C warming for an increase of CO2 from 280ppm to 380ppm). Models that are over predicting have less sulfate forcing and those that are closer use more sulfate forcing. The beauty of sulfate forcings was that they helped explain the slow (or nonexistent) rate of warming in the 70’s and the accelerated rate in the 90’s. But this is a problem for the current temperature trends. Ocean response was another explanation for the lack of warming in the 70’s, but is inconsistent with current trends. Once the warming is “in the pipeline” and the pipeline is kept full, it shouldn’t stop coming out the other end.

    The obvious next step seems to me to be to question the 3C sensitivity assumption. Yet, nobody seems to be bringing up the topic. Why?

  115. The JohnV incident reflected badly on CA and particularly (apparently) on Watts Up. Given that, Real Climate, Open Mind & Deltoid would have been worse, though that’s no excuse. If the proposition that GISS UHI adjustments work reasonably well for the US48 is still in doubt, couldn’t someone just give John V half a million dollars, have Anthony pick the stations and let Steve McIntyre audit the project? That would be a far better use of our tax dollars than determining the effect of AGW on chipmunks.

    The important point is that if GISS UHI adjustments work reasonably well for the US48 and 1934 is now the hottest year in the US48, doesn’t that call into question the ROW UHI adjustments? Another half a million there ought to shed some light on things. Instead the skeptics keep carping about microsite issues and the AGW’ers say things like “the U.S. is only 2% of the world”.

  116. Bill– Zeke and I have been discussing the climate sensitivity issue on another thread. But, basically, the test gives insufficient information for us to pinpoint what might be the cause of the discrepancy. The possible reasons are infinite.

    For example any one or all of the following could cause this problem:
    1) Climate sensitivity in models wrong.
    2) Ocean mixing rate wrong.
    3) SRES don’t match data.
    4) Modelers didn’t include solar forcing, and the solar forcing really does cause largish excursions.
    5) Modelers in some groups have bugs in their forcing files that result in their model giving wildly high projections.
    6) AR1 doesn’t describe weather noise and underestimates the uncertainty in 8-year trends (the red-noise correction at issue is sketched after this list).
    7) Other.

    Each individual possibility requires additional tests to tease out whether it’s the reason or not. (If it’s solar, we should see heating as soon as the sun warms up!)
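
    To make item 6 in the list above concrete: one common way to allow for red noise when putting uncertainty on a short trend is to shrink the effective sample size using the lag-1 autocorrelation of the trend residuals. The snippet below is a rough illustration with a placeholder input file, not the exact script behind the post’s intervals; item 6 is the possibility that even this kind of correction understates the noise in 8-year trends.

    ```python
    # Red-noise (AR1) adjusted slope uncertainty via an effective sample size.
    import numpy as np

    anoms = np.loadtxt("monthly_anomalies_2001_2008.txt")   # hypothetical monthly anomalies, C
    x = np.arange(anoms.size)

    slope, intercept = np.polyfit(x, anoms, 1)
    resid = anoms - (slope * x + intercept)

    # Lag-1 autocorrelation of residuals and the implied effective sample size
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = anoms.size * (1 - r1) / (1 + r1)

    # Slope standard error with the white-noise n replaced by n_eff
    s2 = np.sum(resid**2) / (n_eff - 2)
    se_slope = np.sqrt(s2 / np.sum((x - x.mean())**2))

    print(f"trend = {slope * 120:+.3f} C/decade, r1 = {r1:.2f}, "
          f"2-sigma ~ {2 * se_slope * 120:.3f} C/decade")
    ```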

  117. Lucia,

    I agree that an out-of-phase response to a TSI forcing would be a plausible explanation for the peak in the NH summer; however, the usual explanation given for that peak is the greater amount of land in the NH, which leads to more heating of the atmosphere.

    This means that establishing the TSI signal is more complicated than simply plotting a graph. One would need to estimate the effect of the greater land area and subtract that from the signal.

    In fact, if you look at the SH temps which have not risen at all in the last 30 years one could conclude that the annual signal is entirely due to the land/ocean difference and any TSI signal is swamped by it.

  118. Bill, Lucia, climate sensitivity isn’t an *input* to the models, or an *assumption* – it’s an output. And it varies considerably from model to model (that’s why IPCC quotes a wide range, not a single number).

    Also, on Lucia’s list of possible causes, note that #4 and #6 go away if you look at temperature changes over a long time compared to the solar cycle (i.e. the 30 years normally used to define climate). And I think #1 and #2 end up folding into a single “transient climate response” number which IPCC puts at around 2 C for doubling (rather than around 3 C for equilibrium response). #5 is certainly possible for some models, but doesn’t explain the “ensemble” behavior of all of them…

    And then on #3 – different SRES assumptions about GHG emissions would have almost no impact on temperature response over a short period of time, but related assumptions about aerosol forcings could make a much larger difference quickly. So I think the possibilities actually boil down to:

    (a) TCR is much lower than IPCC’s 2C for doubling (either because equilibrium sensitivity is lower or ocean coupling is lower)
    (b) aerosol emissions since 2000 considerably exceeded estimates used in models
    (c) this is a short-term blip that has no relevance to the long-term climatological trend (either because solar cycle forcing is keeping things down or we effectively have much less independent data than Lucia estimates)
    (d) something else…

    If that helps 🙂

  119. Arthur,

    The A1B marker SOx emissions are:

    SOx total MtS
    1990: 70.9
    2000: 69.0
    2010: 87.1

    so it should be fairly easy to compare them to actual emissions (assuming our actual emissions numbers are good…) to figure out your second possibility.

    Out of curiosity, if both aerosol forcings and climate sensitivity are endogenous to the models, what inputs or parameters explain their wide variation across models? My reading of Kiehl (2007) seems to suggest that either climate sensitivity determined aerosol forcings, or vice versa, for models with the same ocean mixing rate.

  120. Arthur–
    Of course climate sensitivity in models isn’t an input. All I said is the magnitude could be wrong. This can be true even though it is an output based on choices of parameterizations.

    I agree we can combine 1&2 into a short-term climate sensitivity. But, Zeke and I have been discussing them separately on another thread. Also, what over-predicting now means for the future differs depending on whether the problem is the steady-state climate sensitivity or the time constant.

    #4– It’s true that including solar forcing in files is likely unimportant if you are trying to predict over the long term. However, as JohnV points out, it can be important when testing projections over some finite period of time. As projections and even hindcasts can only be compared to data from finite periods of time, when trying to find reasons why observations don’t match projections, one must consider the possibility that the difficulty arises from solar forcing. Whether modelers like it or not, projections must and will be compared to data as it trickles in. So, leaving that out of the forcing files causes a problem.

    It’s fine to say we must wait N years before testing projections– but why make that time period longer than necessary by failing to include some level of solar forcing variation in the forcing files? At least for the short term?

    As it happens, I tend to think that solar forcing is not the reason for the current discrepancy; JohnV thinks the opposite. (Also, I haven’t looked at the forcing information to know whether the modelers left the solar forcing at the peak 2000 levels or set it to an average level. The IPCC document would imply they set it to average. That would suggest the solar forcing in the forcing files dropped suddenly to the average in 2000. But… I don’t actually know.)

    Anyway, it appears we agree that there are multiple possible things it could mean. Unexpected levels of aerosols seem plausible to me. I’ll be explaining later why I don’t think the AR1 noise gives a poor estimate of the noise. But… that explanation is model based!

  121. Here is GISS’ ModelE solar forcing over time. There is just a small solar impact, it is not big enough to have much effect on the main questions surrounding global warming.

    http://img355.imageshack.us/img355/3111/modelesolarej8.png

    Unless (or until) we find there is more variation in solar output than the current estimates of 1 Watt/m2 (divided by four).

    Here is TSI from the SORCE instrument back to 2003 – not even 1 W/m2 – (the other instrument Virgo is showing more of a decline in this solar cycle but that is thought to be due to deterioration of the instrument.)

    http://lasp.colorado.edu/cgi-bin/ion-p?ION__E1=PLOT%3Aplot_tsi_data.ion&ION__E2=PRINT%3Aprint_tsi_data.ion&ION__E3=BOTH%3Aplot_and_print_tsi_data.ion&START_DATE=0&STOP_DATE=2300&TIME_SPAN=24&PLOT=Plot+Data

  122. kuhnkat

    Then what is CA talking about when Steve Mc says

    Unlike CRU and NOAA, GISS makes a decent effort to adjust for UHI in the U.S.

    ?

  123. Ran across something different about the solar cycle.

    Here is Dr. Judith Lean (one of the foremost experts on solar irradiance) talking about the solar cycle influence and global warming theory in 8 minutes (she is a very fast talker).

    A couple of interesting points – she pulls the ENSO, volcanic, GHG and aerosol influences out to arrive at the solar cycle residual – and then she splits the atmosphere into the surface, middle troposphere and stratosphere, in which she shows there is a much bigger solar cycle influence in the stratosphere, +/-0.3C, versus the surface at +/-0.1C, and the lower troposphere is also higher at +/-0.2C. Something we will have to take into account when we talk about the solar cycle in the future.

    http://www.youtube.com/watch?v=OOMsQcEN1Bg

  124. Thanks Bill–
    The discussion is near the end. Around 7:09, she says the solar signal is definitely in the observed record, but that models generally don’t reproduce the signal. She diagnoses this as indicating models don’t fully capture the mechanisms involved.

    So, if the ±0.1 C associated with the solar cycle really exists on earth, then yes, that could be the deterministic factor causing the discrepancy. However, as she also says the models don’t reproduce that ±0.1 C, one might expect the model projections would have been off for the current period even if the forcing had been included. (Of course, we can’t know that since they didn’t include the 11 year solar cycle when simulating into the future to create projections.)

    But… does everyone agree with Lean on the existence of the ±0.1 C? I still don’t actually know. (Maybe they do though? Maybe not? )

  125. Thanks for the link Bill.
    .
    In my opinion, anything that Lean, Hansen, Douglass, Spencer, and Christy agree on can’t be that controversial. Particularly since there’s no evidence of anyone disagreeing.
    .
    There seems to be a weakness in the models in that they don’t reproduce the solar cycle effect on temperature well. I believe there’s a new paper that looks at how the models reproduce solar cycle effects on precipitation. Maybe there will be more analysis about the solar cycle coming soon.
    .
    The amplified effect in the troposphere will influence the tropospheric trend quite a bit. Similarly, the effect in the stratosphere is important when considering stratospheric trends.
    .
    I’ve been advocating for trend calculations that start and end at the same point in the solar cycle — 1996 to 2008, 1986 to 2008, 1990 to 2001, etc. Whatever effect the solar cycle has on temperatures can be effectively removed by trending this way.
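
    A quick sketch of the “same point in the solar cycle” idea above: compute OLS trends over windows whose endpoints line up with the cycle. The window years follow the comment; the annual anomaly file is a placeholder.

    ```python
    # OLS trends over solar-cycle-aligned windows.
    import numpy as np

    years, anoms = np.loadtxt("annual_anomalies.txt", unpack=True)   # hypothetical two-column annual series

    for start, end in [(1996, 2008), (1986, 2008), (1990, 2001)]:
        mask = (years >= start) & (years <= end)
        slope = np.polyfit(years[mask], anoms[mask], 1)[0]
        print(f"{start}-{end}: {slope * 10:+.3f} C/decade")
    ```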

  126. JohnV–
    Does Hansen agree it’s 0.1C?
    What he said in his 2007 mailing was:

    “Several analyses have extracted empirical global
    temperature variations of amplitude about 0.1°C associated with the 10-11 year solar cycle, a magnitude consistent with climate model simulations, but this signal is difficult to disentangle
    from other causes of global temperature change including unforced chaotic fluctuations”

    So all he is saying is that some empirical analyses suggest this. Then he tacks on a trailer that suggests doubt.

    We all agree that some empirical analyses got this value: The ones cited in the AR4.

    If you have been advocating starting at some particular part of the solar cycle, why did you specifically request 1990 to now? My response to that request is here.

    I now have my scripts proofread and I can do stuff tomorrow. But you keep changing your requests!

  127. lucia,
    .
    As you know, I asked Roger Pielke Jr about trends from 1990 because he was doing a “slide-and-eyeball” from 1990. I suggested that a trend calculation would be a more robust way of doing that comparison. I did not pick it as my preferred start date, as you seem to suggest. Do not mis-represent me like that.
    .
    Look at Hansen’s other communications. Does he ever suggest a different value? You seem to believe that a 0.1C solar cycle effect is controversial, but I fail to see the controversy.
    .
    I did not request anything from you. I merely stated that it makes sense to me to calculate trends between solar minimums or maximums.

  128. JohnV–
    On the 1990: You specifically asked me to do that set of years. I know why you made that request. I don’t remember your ever suggesting the current particular set of years.

    I prefer picking years based on dates documents are published. The volcanic eruptions (El Chichón and Pinatubo) are at least as confounding as the solar factor. Some models don’t include the volcanoes either. So, by picking start dates near periods where temperature may be depressed by volcanoes, while some models don’t include volcanic forcing, we create problems just as large or larger than the solar factor.

    I have model data downloaded and scripts written now. But it still takes me a bit of time to check any particular set of years and proof-check. But I did the calculation quickly, and overall the answer for 1986-now seems to be this:
    * multi-model mean based on cases with volcano treatment: This model mean over-estimates the trend and the rejection is easily statistically significant.
    * multi-model mean based on cases withOUT Pinatubo or El Chichón: This mean does OK and in fact underestimates the trend.
    * Together: The blend over predicts the trend but does not reject.

    So, for a start year where the trend may be dragged down by two major eruptions, the models that include the correct physics overshoot and reject. But the models that don’t capture the cooling effect at the beginning do ok– and in fact show too little warming.

    On your question: I didn’t say Hansen suggests other numbers. I would be surprised if he did. However, when he does say 0.1C he seems to consistently add extra words to suggest that he’s not certain about the 0.1C. Of course, maybe these words don’t mean much and are just window dressing. The only way to know for sure is for someone to ask him.

    So, you state he agrees with the 0.1C. My question is: Does he agree there is a 0.1C associated with the solar cycle? How do you know that? Because I don’t know what he thinks about the 0.1C.

  129. lucia,
    I believe I asked RP Jr if he had trend values from 1990. I then calculated and provided the trends. You then wrote a post about me asking for the trends. You then said:

    If you have been advocating starting at some particular part of the solar cycle, why did you specifically request 1990 to now?

    When I pointed out the reason why, you said:

    On the 1990: You specifically asked me to do that set of years. I know why you made that request.

    .
    So you knew why I made the request, but still thought it was appropriate to imply otherwise. Interesting — but no longer surprising.
    .
    Anyways, on to more substantive matters… (away from your blog)

  130. JohnV–
    My question was rhetorical.

    I guess “know” why you asked is too strong. I know that you asked Roger and that you asked about trend rather than slide and eyeball. But, I thought you asked Roger about trends from those start dates because you think comparing the “about 2 C/century” to those particular values makes the AR4 look good. So, you thought your question would make us change our minds about how the data compare to the projections.

    It turns out that start date doesn’t make the AR4 look particularly good because the “about 2 C/century” isn’t relative to 1990. If we pick that date, and compare trends to what is predicted by the AR4 models, they don’t look very good. (They don’t look as bad as starting in 2001.)

    Picking 1986 doesn’t make it look particularly good either. (That is, unless you like the fact that the no-volcano models don’t look too bad, while the ones that include volcanic forcing don’t look good.)

    My point is: You don’t consistently pick the same start year or go on the principle that we must start on the solar cycle. Sometimes you make other choices.

Comments are closed.