Roy Spencer has announced the September 2011 UAH anomaly. It’s +0.289C, down from August’s +0.327C. I’ll be showing who won the quatloos at the end of the post! 🙂
Since I’ve been showing trends since 1980 for a while, I’ll switch to shorter-term trends this month. I usually show trends since 2001, but for the current post I’ll show trends since 2002, a start year which has, for no apparent reason, sometimes been selected by some to diagnose the evolution of global warming and decree that it has stopped.
[Figure: UAH monthly anomalies with the trend since 2002.]
[Figure: From “Global Warming has Stopped”, by Monckton. Based on when the temperature data cuts off, the publication date appears to be near the end of 2009.]
Readers will note the trend since 2002 is positive. For those wondering: the trend since 2001 is also positive. The trend since 2000 is also positive. Using the method of the eyeball, it appears the trend since 2010 is likely negative.
Turning to more important matters: Who won the quatloos?
This month, Arfur Bryant edged out Adam Soereg to take first place. Both bet 0.29 C, but Arfur got his bet in first. Nzgsw (whose name looks amazingly like an adding-enabled ‘bot) took third place.
Congratulations, winners! Nzgsw, if you are a bot, congratulations for learning to add and entering a value other than 0C. That’s quite a feat.
(Tip for those who bet late: enter 3 significant digits. You are less likely to tie and lose to someone who bet slightly before you.)
Sadly, the entire pot of quatloos was divided among those who held “win, place and show”, and no one else garnered any quatloos. You can see how you did relative to others below:
| Rank | Name | Prediction (C) | Bet | Won (Gross) | Won (Net) |
|------|------|----------------|-----|-------------|-----------|
| — | Observed | 0.289 | — | — | — |
| 1 | Arfur Bryant | 0.29 | 5 | 79.136 | 74.136 |
| 2 | Adam Soereg | 0.29 | 3 | 37.985 | 34.985 |
| 3 | nzgsw | 0.288 | 5 | 48.629 | 43.629 |
| 4 | Pieter | 0.292 | 5 | 0 | -5 |
| 5 | ob | 0.294 | 1 | 0 | -1 |
| 6 | LC | 0.295 | 5 | 0 | -5 |
| 7 | Tamara | 0.295 | 5 | 0 | -5 |
| 8 | EdS | 0.28 | 5 | 0 | -5 |
| 9 | Anteros | 0.277 | 5 | 0 | -5 |
| 10 | ivp0 | 0.302 | 5 | 0 | -5 |
| 11 | Owen | 0.305 | 4 | 0 | -4 |
| 12 | Don B | 0.27 | 4 | 0 | -4 |
| 13 | torn8o | 0.268 | 5 | 0 | -5 |
| 14 | Edbhoy | 0.315 | 5 | 0 | -5 |
| 15 | Steve T | 0.26 | 3.75 | 0 | -3.75 |
| 16 | RobB | 0.255 | 5 | 0 | -5 |
| 17 | John Norris | 0.253 | 5 | 0 | -5 |
| 18 | plazaeme | 0.25 | 1 | 0 | -1 |
| 19 | AMac | 0.249 | 2 | 0 | -2 |
| 20 | Bob Koss | 0.333 | 3 | 0 | -3 |
| 21 | Tim W. | 0.333 | 5 | 0 | -5 |
| 22 | KAP | 0.337 | 5 | 0 | -5 |
| 23 | MarcH | 0.35 | 5 | 0 | -5 |
| 24 | enSKog | 0.353 | 3 | 0 | -3 |
| 25 | John F Pittman | 0.353 | 3 | 0 | -3 |
| 26 | Jon P | 0.221 | 5 | 0 | -5 |
| 27 | Kåre Kristiansen | 0.22 | 4 | 0 | -4 |
| 28 | Pavel Panenka | 0.377 | 3 | 0 | -3 |
| 29 | MikeP | 0.38 | 5 | 0 | -5 |
| 30 | pdjakow | 0.19 | 5 | 0 | -5 |
| 31 | Paul Butler | 0.19 | 3 | 0 | -3 |
| 32 | Lance | 0.389 | 4 | 0 | -4 |
| 33 | BenjaminG | 0.391 | 5 | 0 | -5 |
| 34 | mandrewa | 0.394 | 3 | 0 | -3 |
| 35 | Ray | 0.4 | 5 | 0 | -5 |
| 36 | denny | 0.125 | 3 | 0 | -3 |
| 37 | Robert Leyland | 0.114 | 4 | 0 | -4 |
| 38 | alicia | 0.03 | 5 | 0 | -5 |
| 39 | Freezedried | 0.013 | 3 | 0 | -3 |
| 40 | Bruce | 0 | 5 | 0 | -5 |
| 41 | KAPsockpuppet | -275 | 1 | 0 | -1 |
The net winnings for each member of the ensemble will be added to their accounts.


nzgsw — Adding is no big deal. But you can also project!
From one ‘enabled bot to another, grudging congratulations.
Rats! I won some nice loos on the sea ice extent and then gave most of them back with Sept. UAH. Oh the vagaries of gambling.
ivp0– The 5 quatloo limit is to prevent people from blowing their whole wad the next week! 🙂
Let’s not forget that UAH is now diverging from RSS and we don’t know who has it wrong. Also, the divergence between RSS and UAH in the mid troposphere is even larger. I’ve put off drawing any conclusions about the surface trend based on both RSS and UAH.
I am not a bot. I’m just very quiet.
> I am not a bot. I’m just very quiet.
Ah. In that case, non-grudging congrats!
Trend since 2002 may be positive now…but wait a few months.
FredN–
Yep. It’s likely to be negative again fairly soon. But you probably have to wait 4-6 months. Notice the current temperature is above the trend line.
The surface trend should be less, according to Roy Spencer.
Now that AMSR-E is dead, we’re taking a step backward. No other passive-microwave-equipped satellite has station-keeping ability. Orbital drift corrections will be back in full force. You might want to skip trying to bet on the October anomaly.
“Global Warming has Stopped”
“Readers will note the trend since 2002 is positive”
But is it statistically significant?
To be precise it is 0.0067164 C per year.
(The RSS trend is -0.00148704 C per year)
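(A minimal R sketch for reproducing trend numbers like these; `uah` is an assumed vector of monthly anomalies starting January 2002, not data from the post:)

```r
# OLS trend in C/year for a monthly anomaly series starting Jan 2002.
# `uah` is an assumed numeric vector of monthly anomalies (not real data).
t_years <- seq_along(uah) / 12    # time in years
fit <- lm(uah ~ t_years)          # ordinary least squares fit
coef(fit)["t_years"]              # slope, C per year
confint(fit)["t_years", ]         # naive 95% CI, no autocorrelation correction
```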
DeWitt Payne
Not sure what you mean. The UAH LT temps do not come from AMSR-E but from the NOAA 15 and 16 satellites…
Richard,
That’s 0.06/dec or 0.6/century. It’s computed from the very tippy-top of El Nino to now. 🙂
HADCRUT is -0.00593867 C per year.
Tiptop happy or not thems the facts 🙂
Fred and DeWitt–
It wouldn’t matter for betting. As long as Roy is going to report something, we can bet. Noise, sudden changes in equipment, etc. just make it harder to predict.
Yes Richard,
HadCrut is negative. You don’t even have to start in 2002 to make HadCrut negative. I’m also fiddling with my script to compare the absolute values to models. Right now, for sort of ‘logistical’ coding reasons associated with the learning curve with R and my desire just to see things quickly, I show 25-month smoothed model means. I know R better now and I’m tweaking to show a 13-month smooth. (Odd numbers are better for centering.)
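A centered odd-window smooth of that kind is a one-liner in base R (a minimal sketch of the idea, not the actual script):

```r
# Centered 13-month running mean; an odd window centers on a single month,
# which is why odd numbers are better for centering.
smooth13 <- function(x) stats::filter(x, rep(1 / 13, 13), sides = 2)
```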
I’ll be showing surface trends tomorrow so we can see how they compare to projections. The trends are low compared to models– but there is no evidence global warming has stopped. Even Hadley having a negative trend is not evidence global warming has stopped.
Tilo Reber,
The Global RSS this month is virtually unchanged, so with a slight fall in UAH, this may be the first sign that at the surface they are moving together again.
This seems to be mainly due to the SH, where UAH was down and RSS was up.
For the record, the Monckton graph is from Nov 2008, and the data ends in June 2008. At that time, the global trend since Jan 2002 was:
GISS -.076 K/decade
Hadcrut3 -.234 K/dec (Monckton labels this “Hadley”)
UAH -.268 K/dec
RSS -.330 K/dec.
Trends from Jan 2002 to current (Aug/Sep 2011, depending on dataset):
GISS +.014 K/dec
Hadcrut3 -.084 K/dec
UAH +.045 K/dec
RSS -.053 K/dec.
‘Readers will note the trend since 2002 is positive’
Really? Call me Dr. Suspicious here, but I suspect that one could compare the 50% CI of the line and the 50% CI of a flat line, using the same variance as the data, begin both at the same point, and both CIs would overlap. This would mean that there was a 75% chance that both populations could be identical. Indeed, there is no slope; the temperature could be dropping by 2.5 degrees per century or rising by 2.5 degrees per century.
Add to this the amount of ‘cleaning’ the various data-sets have had and we find zilch.
Doc–
The observed trend is positive. We can question whether the observed trend is strong evidence of a positive climate (i.e. underlying) trend, but the observed trend is +0.045C/dec, which is positive.
With respect to the comment on other data sets: I think it’s clear in context that I am discussing the trend corresponding to reported UAH data, displayed in the figure directly above the sentence you quote and also the subject of the entire post. In that sentence, I am not discussing the trend in other data sets.
Bad luck Adam, the early bird catches the worm!
Lucia, if I admit to ‘pulling my figure out of a hat’, does that void my win? 🙂
ps, the overall trend since the IPCC’s ‘start of accurate data’ is about 0.06 deg C per decade… Shorter term trends are a waste of time. The overall trend today is lower than it was in 1998, which was the highest temperature recorded.
Arfur–
People are allowed to bet based on numbers pulled out of hats. You are even allowed to bet based on methods that result in numbers that are no more likely to be right than those pulled out of a hat.
(And if Monckton was even remotely concerned about accurately describing what others said, what he conveyed to readers about what I wrote would involve the latter, not the former.)
DocMartyn, yes you should use the bounds, not just the central value, to determine the significance of the trend. And while it’s true the central value is positive but near zero, it’s even more true that “there is no evidence for warming or cooling over the period 2002-now and what this really means we will learn in time.”
On a separate note, one might suppose there is substantially no difference between UAH and GISTEMP given their uncertainties…however, their errors are highly correlated, because they are largely based on the same underlying data set, so the difference probably is meaningful, even if its interpretation leads to the same “no evidence for warming or cooling…”
Lucia, just a presentation suggestion:
You might want to include what confidence level (e.g., 95%) you used in obtaining the bounds on your graph.
Carrick–
True. It’s 2 σ for the trends. I’ll have to add that.
By the way, here’s where Lucia’s uncertainty falls on my Monte Carlo analysis.

What I did to produce the Monte Carlo was first estimate the power spectrum of the temperature anomalies. (In this case I use GISTEMP over the interval 1880 through 2009.)
I then used the “random phase” method to compute a series of instances of “climate noise” of varying length. One of these is the detrended GISTEMP series, the other an instance of the climate noise. Exercise to reader to figure out which is which. 😉
For each of these, I computed the slope using ordinary least squares (should be zero). Repeat this 100 times. What are shown are the 1-sigma uncertainties computed from each of these runs.
Repeat for each time window to produce the effect of the climate fluctuations on the OLS slope as a function of time window. Note these are mean uncertainties. This means if you were to pick a 10-year interval, on average the uncertainty would be about 1.05 °C/century.
Intervals that have large El Niño events or major eruptions will have larger uncertainties, intervals where the climate is relatively “quiet” will have smaller uncertainties. (I could in principle update the graph to include the 95% variation the expected value of the uncertainty.)
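A minimal R sketch of the recipe described above (a reconstruction under the stated assumptions, not the actual script; function and variable names are invented):

```r
# Random-phase ("phase scrambling") surrogate: keep the amplitude spectrum,
# randomize the phases, and enforce Hermitian symmetry so the inverse FFT
# is real. `x` is assumed to be a detrended monthly anomaly series.
random_phase_surrogate <- function(x) {
  n  <- length(x)
  X  <- fft(x)
  Xs <- Mod(X) * exp(1i * runif(n, 0, 2 * pi))  # same amplitudes, new phases
  Xs[1] <- X[1]                                 # keep the mean (DC) term
  k <- 2:ceiling(n / 2)
  Xs[n + 2 - k] <- Conj(Xs[k])                  # mirror: Hermitian symmetry
  if (n %% 2 == 0) Xs[n / 2 + 1] <- Mod(X)[n / 2 + 1]  # Nyquist bin stays real
  Re(fft(Xs, inverse = TRUE)) / n
}

# 1-sigma spread of OLS slopes over many surrogates, for a window in months:
slope_sd <- function(x, window, ntrials = 100) {
  yrs <- seq_len(window) / 12                   # time axis in years
  sd(replicate(ntrials, {
    seg <- random_phase_surrogate(x)[seq_len(window)]
    coef(lm(seg ~ yrs))[2]                      # OLS slope, C per year
  }))
}
```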
By the way, if you look at the uncertainty versus “integration period”, there is a “kneepoint” in the Monte Carlo simulation at roughly two years. I think this is due to the GISTEMP method “smearing out” fluctuations that are shorter than 2 years in length, which may be an artifact of the method used for computing their anomalies.
(As a result, the real uncertainty is actually greater than e.g. 16°C/century for a one year window!)
Carrick–
I assume you are generating “noise” to match the spectrum?
Do you mean
a) the standard deviation of the 100 trends for the 100 runs? or
b) the average of some uncertainty computed using some sort of time series assumption.
I figure you mean (a).
Where’s the beef/warming?
When will it appear/re-appear?
The current La Nina looks to lock us into declining temperatures for at least 6 months now.
Technically, it looks like the ENSO is the most important driver of temperatures on short timescales and medium timescales. Long-term is still uncertain.
Now that AMSR-E is gone, an important source of ocean SSTs is gone. We need climate research funding to be re-directed into data gathering – satellites etc – rather than the unending list of climate model studies into the impacts of 3.0C global warming (on gophers, flies, whatever).
Let’s reduce the uncertainties first. Let’s just measure what is really going on.
Lucia, I am generating noise by using random phases and the estimated power spectrum. And yes, I mean (a). I am computing the OLS slope e.g. 100 times and then computing the standard deviation of that series.
(I don’t remember what the actual number of trials I used anymore, sorry, though “100” is the default number for the script)
Re: Carrick (Comment #83102)
That’s pretty neat!
Of course the other problem with linear trends is that if the underlying evolution is nonlinear, they are only meaningful over sufficiently short time intervals. So there is a tradeoff: over very short times the noise does not let you see the trend, over long times the linear trend is not an accurate predictor.
If we knew what the “true” (deterministic) functional form of the temperature time series is supposed to be, we could try to compute the optimum time window to minimize both errors together. The result would be a minimum “intrinsic” sort of error, one below which no statement about the global temperature trends would be meaningful. From your graph I’m going to guess that would be close to 0.05 C/decade, the error associated with “climate noise” averaged over a 20-year interval.
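One way to write that tradeoff down (a sketch only; the functional forms of the two error terms are left unspecified): $latex \sigma_{tot}^2(t) = \sigma_{noise}^2(t) + \sigma_{bias}^2(t), \qquad t^{*} = \arg\min_t \, \sigma_{tot}(t)$, where $latex \sigma_{noise}$ falls with window length $latex t$ while the nonlinearity error $latex \sigma_{bias}$ grows with it.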
Also, not surprisingly, the estimated uncertainty follows a power law: I have a best-fit (2-20 years) value of approximately:
$latex \sigma_T(t) = 15.6^\circ \hbox{C/century} \times t^{-1.14}$, with $latex t$ in years.
If you want to measure a 2°C/century slope to a 10% accuracy, that suggests you would need a 46-year interval to do so.
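Solving the fitted power law for the window length needed to reach a target uncertainty makes that arithmetic explicit (here $latex \sigma_T = 0.2$ °C/century, i.e. 10% of a 2°C/century slope): $latex t = (15.6/\sigma_T)^{1/1.14} = (15.6/0.2)^{1/1.14} \approx 46\ \hbox{years}$.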
julio:
“Meaningful” is a relative term of course. 😉
My average road speed on a long trip is meaningful even if it doesn’t tell the full story. Especially if the mean value were greater than the speed limit. o_O
But absolutely right: You would need a model to regress your forcing against (e.g., the 2-box model). Of course the Monte Carlo method lets you make things as complicated as you want in examining the effect of noise on your parameter estimation.
Yep. I’ll point out though, that if you were to regress the time series on say MEI, you might be able to reduce the variation of the residuals, and shorten the time interval needed to obtain a meaningful temperature trend.
(Lucia’s done this sort of thing in the past. Tamino has written about it too. Not sure whose idea it was originally.)
@ lucia (Comment #83087) “..there is no evidence global warming has stopped.”
There is no evidence it is continuing either if the uncertainty is larger than the trend.
(I get the trend as 0.040 per decade up to Sept, and not 0.067 or 0.045)
Richard–
It is certainly true that if one insists on looking at sufficiently short time periods and ignoring longer term trends, one will always be able to decree there is no evidence of any trend one way or another. So: Yes. You can’t prove warming is continuing using the trend since 2002. But I didn’t say you could. I’ve pointed out that Monckton had previously decreed it had ended based on data since 2002– though the data was quite tenuous.
Lucia:
Indeed, I would say there is absolutely no support for such a statement. You can’t make any meaningful conclusions from such a short interval. I would say Monckton should know better, but I don’t think he does.
So by stopping at 2008, Monckton is hiding the incline.
Carrick–
Yep. I have to add MEI next week. 🙂
Ray:
What bothers me about both UAH and RSS is that the ENSO cycles used to swing around the trend line in a fairly balanced way – depending on the size of the event of course. Now we see strong swings above the trend line for El Nino and weak swings below the trendline for La Nina. This is happening for both RSS and UAH data sets, but more so for UAH.
dlb–
Based on dates, he likely wasn’t hiding any incline by ending the graph with fairly recent data. But picking 2002 for no apparent reason was a bit odd — especially as he provided no uncertainty intervals. The only reason I can think of for picking 2002 is that it is a relative max and you want to get low trends. Similarly, the only reason I can think of to pick 1999 would be to get a high trend. (As far as I am aware, no one picks 1999 for anything.)
Not showing any uncertainty intervals at all? It’s one thing to argue about how large they should be, but it’s another to show none at all, and decree warming has ended with no estimate of uncertainty whatsoever.
1999
http://www.woodfortrees.org/plot/hadcrut3sh/from:1999/plot/hadcrut3sh/from:1999/trend
Not as good as 1998 of course …
http://www.woodfortrees.org/plot/hadcrut3sh/from:1998/plot/hadcrut3sh/from:1998/trend
Picking 2002 as a start date is no different than picking the first decade of the 2000’s and saying it’s the hottest decade ever.
Bruce-
Why would you plot the southern hemisphere only?
Do GISS
http://www.woodfortrees.org/plot/gistemp/from:1999/plot/gistemp/from:1999/trend
Fred–
People traditionally call either the year ending in 0 or 1 the start of a decade. (There was no year zero. People used to give this as the reason for the first century being x01-(x+1)00, but lots of people now go with x00-x99.) So, it makes some sense to compare those particular decades. For comparison purposes, it can also make sense to talk about “the most recent N years”. People often discuss 10.
But it’s one thing to say “the most recent 10-year period is the warmest recorded” if that happens to be a fact, and another to decree “global warming has stopped” with no particular basis.
Lucia, in scanning through Tamino’s post, I ran across this comment:
This statement however applies to a heavily massaged residual series, and not the original data series—many of which indeed show “cooling” in the conventional sense of the central value of the trend being negative—and moreover, like Monckton, it doesn’t include any statement about the magnitude of the uncertainty in the data compared to the mean over that interval.
In other words, Tamino has made exactly the same gaffe as Monckton did here, and his comment is in fact “absolute nonsense.” lawlz.
I feel I must step in for the good Lord Monckton. You must cut him some slack. After all he is an English Lord, and English Lords are noted for their eccentricities, and decrees.
At the moment when he decreed that global warming had stopped from 2002, it had indeed stopped. Unfortunately it started again. And God knows it may stop again. So let’s wait for it.
The decrees of English Lords are not to be taken lightly.
Carrick–
I agree. It’s not “absolute nonsense” that warming might have slowed relative to some previous period. That said: I doubt it has slowed in the sense of there being less greenhouse. I think the earth’s warming in the late 90s was partly the result of recovery from a period with big eruptions, and there was a period of ‘aerosol clearing’ recovery.
I doubt if warming has stopped– but if it did, our ability to recognize that will lag the “stopping”.
Tamino did do an awful lot of correcting didn’t he? I’m wondering why his uncertainty intervals aren’t smaller. I guess I’ll look at that a bit more tomorrow.
Bruce-
Why would you plot the southern hemisphere only?
And why avoid the variance-adjusted data? Maybe Bruce doesn’t understand the difference.
http://woodfortrees.org/plot/hadcrut3vgl/last:168/plot/hadcrut3vgl/last:168/trend
Flat for fourteen years now. At what point does this actually mean something or is it just an expected flat spot in the upward trend postulated by Tamino here http://web.archive.org/web/20080409021707/tamino.wordpress.com/2008/01/31/you-bet/ .
Also, I wonder: if Tamino had used HadCrut, would his bet outcome be different?
Without beating up Tamino too much more over his language abuse— warming means exclusively a positive trend in measured temperature over some interval and not a tortured-till-it-confesses derivative product of temperature—it could easily cool in a decade without that overturning the fact that the long-term trend is for warming.
It’s even possible, given that climate oscillations like ENSO affect cloud patterns and cloud patterns affect the global mean albedo, you could even end up with a net loss of heat energy from the system over periods of as much as ten years. But since these are quasi-periodic oscillations, they don’t have a lot to say about what happens to the long-term trends.
Once you correct for this natural variability, I agree with Tamino that you get significance for the corrected long-term trend, even over a single decade. I digitized NCDC just for kicks, and found for 2001-2010 a slope of 0.0095±0.0035 °C/year (p < 0.008).
They didn’t look too unreasonable to me (10+ sigma), if anything they seemed kind of tight. I got an R of 0.96 with 35 points for the NCDC data, which is the equivalent of 5.7 sigma (this was my digitized annual data, but annual average shouldn’t affect the estimate of the uncertainty in the trend–not for 35 years anyway) versus 17 sigma for his NCDC fit (my central value was 0.0176 versus his 0.0172 so the error in digitization shouldn’t have made that much of a difference).
If anything, my guess is, assuming he used the monthly data, he didn’t properly correct the uncertainty for autocorrelation.
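(For reference, a standard first-order version of that correction in R; a sketch of the usual AR(1) effective-sample-size adjustment, not necessarily what Tamino did:)

```r
# Inflate the OLS trend standard error for AR(1) residual autocorrelation
# via an effective sample size: n_eff = n * (1 - r1) / (1 + r1).
# `y` (monthly anomalies) and `time_yr` (time in years) are assumed vectors.
fit  <- lm(y ~ time_yr)
r1   <- acf(resid(fit), lag.max = 1, plot = FALSE)$acf[2]  # lag-1 autocorrelation
n    <- length(y)
neff <- n * (1 - r1) / (1 + r1)
se_raw <- summary(fit)$coefficients["time_yr", "Std. Error"]
se_adj <- se_raw * sqrt((n - 2) / (neff - 2))  # adjusted trend standard error
```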
Just for clarification, I digitized NCDC from this figure.
Judith Curry has apparently made a global temperature prediction for the next 14 years.
In late August she made a presentation in Boulder at a NOAA workshop on water showing how drought varies in the US with PDO and AMO oscillations. In her Scenarios: 2025 slide no. 11 she predicts more La Ninas and “Decrease/flattening of the warming trend.”
http://www.esrl.noaa.gov/psd/events/2011/pdf/water-cycle-presentations/Curry_noaaWaterClimate.pdf
Judith must be a Lukewarmer.
Re: Fred N. (Oct 5 11:11),
We’re both wrong.
Carrick
What is the definition of “sigma”?
I’m seeing these numbers:
If “sigma” is “standard error” in the mean trend, that means he’s reporting a 1σ uncertainty of 0.023C/dec for the trend from Jan 1975-Dec 2010. I think your graph is 2σ. Right? But your formula above says 1σ and matches your graph. Which is which?
The ones on my graph are 2σ.
Anyway, I’m asking not because I’m saying one value is right and the other is wrong. I just want to know how the various ones were computed. After that we can argue the merits. Mine above are just garden-variety red noise, ARIMA, and a fractional difference method that lets me do FARIMA(1,d,1), applied to the time series fits to the data. They may or may not be “right”. They just are what we get.
Global warming sounds a lot like a cigarette smoker trying to quit. “I swear I’m stopping – I’ve stopped three times this week already!” 🙂
Yes it is. In the first case you are looking at year-to-year data, which has a lot of noise, with a sample of size 10. In the second case, you are looking at decadal data, which has less noise, and a larger sample to compare it against (because we have more than 100 years of temperature data).
I’m too lazy to compute the p-values, but I’m willing to bet some serious quatloos that it would be larger in the former than in the latter.
Lucia, I’m using 1-sigma in my graph too (I should have said this on the graph!). I computed your sigma by taking (max-min)/4, then multiplying by 10 to display on the graph (since I’m using °C/century rather than °C/decade).
For the time axis I used 9.75, since from my finger counting exercise Jan 1, 2002 – Sep 30, 2011 was 9 years and 9 months inclusive. Here are the three values that I computed from your figure:
9.75 1.3625
9.75 1.3875
9.75 1.5875
As I mentioned, I’m using the average spectrum over the 1880-2009 (inclusive) period to compute the temperature fluctuation spectrum. Technically, I should show the 95% CL bounds on this error, because what this predicts is the average error you should obtain over, e.g., a series of 10-year sample periods across the entire time series.
The purpose of my exercise was solely the application that Julio mentioned above:
Namely I wanted to answer for myself how long of a period you needed to integrate over (assuming just OLS) before you could make a “detection”. In other words, this was meant to be a predictive graph, rather than a diagnostic one regarding a particular time interval.
I could use the same method and compute the temperature fluctuation spectrum just for the same time period that you used, but that wasn’t the purpose of the original exercise.
re: Carrick (Comment #83103)
This is an interesting bit of speculation Carrick. I wonder how your analysis would crunch out using the RomanM / Jeff Id method for combining temperature series.
Mosher: “Why would you plot the southern hemisphere only?”
Lucia dared me … “As far as I am aware, no one picks 1999 for anything”
Carrick (Comment #83123),
The other problem I see with Tamino’s approach is the assumed linear function of temperature with time rather than temperature with forcing… which is what is really of interest. Using a bunch of variables (MEI, volcanic forcing, etc.) along with an assumed time-linear function is going to generate a nice linear time fit once you “remove” the fitted influence of the other variables from the raw data. A regression of temperature against calculated combined GHG forcing, solar cycle forcing, volcanic forcing (which is not so accurately known) and a representative ENSO parameter (3-month lagged Nino 3.4 seems good) would be a more meaningful exercise. Between all the variables and all the lags, you can indeed draw whatever you want… even an elephant.
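(A bare-bones R sketch of the regression SteveF describes; the vector names are hypothetical, and the forcing series would come from, e.g., the GISS forcing files:)

```r
# Regress monthly temperature anomaly on forcings plus lagged ENSO.
# temp, ghg, solar, volc, nino34 are assumed equal-length monthly vectors.
n <- length(temp)
nino34_lag3 <- c(rep(NA, 3), nino34[seq_len(n - 3)])  # 3-month lagged Nino 3.4
fit <- lm(temp ~ ghg + solar + volc + nino34_lag3)
summary(fit)$coefficients  # if the fit were physical, the C per W/m^2
                           # responses should be comparable across forcings
```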
Layman Lurker:
What you’d look at is the power spectral densities. Here is a comparison of Jeff & Roman, which is land only, with CRUTEMP (which is land only) and monthly Clear Climate Code (land-only mask):
figure.
And it does appear that Jeff/Roman’s method provides more fidelity to the “high frequency” content. The disadvantage of this comparison is you are comparing apples to oranges, because there are other differences between the methods besides centering.
Steven Mosher’s got a version of their method implemented in his suite of code. “When I get time” I plan on downloading and running these and directly comparing different methods with each other (if Steven doesn’t beat me to it now).
I won’t beat you to it, Carrick. I’m beavering away at something else… But let me know when you get time. I have a bunch of changes I need to commit and then push out a new build… need to check on the R 2.14 schedule as well.
Nice carrick.
SteveF, I agree: if you’re going to regress on a model that has a list of forcings, why not try and estimate the climate sensitivity directly?
Lucia, Nick Stokes, Tamino, Arthur Smith and others have done things like this in the past using the GISS forcings (you need an aerosol history as well as a cloud-forcing model), but we didn’t include MEI. It’d be interesting to repeat the exercise with that added.
Here’s Lucia’s posts on it based on her “two box” tag.
I noticed in looking at the reference page on NASA’s website that they now appear to be updating it more regularly. It was last updated 2011/08/02.
I used the wrong link for the figure comparing Jeff/Roman’s method to CRUTEMP & CCC land only.
Corrected link.
Re SteveF (Comment #83144):
“A regression of temperature against calculated combined GHG forcing, solar cycle forcing, volcanic forcing (which is not so accurately known) and a representative ENSO parameter (3-month lagged Nino 3.4 seems good), would be a more meaningful exercise.”
Exactly, as Tamino’s current method yields quite a difference in the response to each forcing. For instance, he mentions that the surface response is around 0.4 C per W/m^2 for the solar forcing (which corresponds to a sensitivity of ~2.2 C if only 70% of the forcing is “instantaneous”). However, if you look at his chart (or run the regressions yourself), the response to a volcanic forcing is about 0.1 C per W/m^2 (corresponding to a whopping sensitivity of 0.54 C per doubling of CO2 if, again, we assume only 70% of the forcing is instantaneous). It certainly seems like the method yields non-physical results if one type of W/m^2 has 4 times the effect of another.
One commenter did ask about the results of the regressions within the context of climate sensitivity, to which Tamino responded:
Looking back over the post, he mentions that he was submitting it for publication, but I would think this issue of wildly different responses to a W/m^2 perturbation may make it difficult to publish.
re: Carrick (Comment #83145)
Good point. My hunch is, though, that Roman’s method would not only give sharper HF resolution, but sharper spatial resolution as well. IMO, this would lead to questions on the implications of this type of error as the data gets run through subsequent processing steps (homogeneity adjustments, etc.). IOW, errors or biases potentially get compounded with each step in data processing. If I recall, Chad Herman posted some interesting experiments related to this a while back, plugging simulations into the various methods for combining and anomalising station data. I think it was prior to the work from Roman and Jeff.
Troy_CA:
I wondered about it too. The whole thing seemed rather ad hoc’d.
It seems to me more fruitful to either go with somebody’s computed forcings, or to develop a more physically based model of your own for the various influences of the forcing agents. I don’t see how this is publishable without a great deal of revision, honestly. Even regressing on the MEI isn’t new as far as I can tell, but “borrowed” without attribution from somebody else.
SteveF: I noticed this paper of Hansen’s on the NASA web site. It might be worth a read.
Heh, bake the fudge in a private kitchen, not subject to government sanitary inspection.
===========
Carrick –
I read the Hansen et al paper which you linked, a while ago. Does anyone else think that they’re using aerosols as a fudge factor? They assign aerosol forcing to be -0.5 times the greenhouse forcing for the last 20 years. [Caption, figure 1.]
They also state “But aerosols remain airborne only several days, so they must be pumped into the air faster and faster to keep pace with increasing long-lived GHGs.” But Stern shows sulfur emissions decreasing during the 90s.
Troy_CA (Comment #83150)-The different transient responses may make sense if the solar cycle variation is acting as a proxy for the true solar cycle forcing-that is, it is proportional to the true forcing, but smaller. This would push the responses closer to the volcanic estimate. Whatever the reason for a large solar cycle forcing greater than expected from irradiance alone, it appears to exist.
Carrick (Comment #83152)-The earliest paper that I know of to use MEI to regress out the ENSO effects was by Michaels and Knappenberger. Lots of other people have done similar analyses.
Andrew_FL—
Even so, depending on the reviewer, some might find the inconsistency something that needed to be explained. Also, there are a number of different lags. That these might be different might also be explicable but it does prompt a question about whether we know any more after doing all the corrections than before doing them. That is: What precisely is the unique contribution of the paper?
Here are things that cannot be the contribution:
* Warming since 1975 is statistically significant. Using simple methods to estimate the uncertainty (plausibly similar to what T has done), we knew that before adjusting for MEI, volcanoes, etc.
* That the dips were caused by volcanic eruptions? We know that.
* That ENSO affects the short term wiggles? We know that.
* That if we regress with ENSO and temperature, some wiggles go away? We already know that.
* That if we regress with a shit-wad of parameters, permitting ourselves to find a whole bunch of different lags, and being open to the possibility that the short-term climate sensitivity is different for each one, we can take out a whole bunch of wiggles? I think we know that.
I think the claimed contribution is that the resulting time derivative is somehow more “real” than the others. Let’s say we agree with that. Can this “more real” trend be compared to models? No. Because at least from 1975-1999, they were forced with solar, volcanic aerosols etc. So, you actually would want to compare the model trends to the “not real” value that was not corrected for volcanoes. The only open issue is that, given the models and the earth supposedly share volcanic forcings and that causes “wiggles”, how do you estimate the relevant uncertainty so as to have a sensitive method to detect differences? T’s method isn’t suited to that.
So, one good question is: if we call this “real”, why is this ‘real’ trend more interesting than the “unreal” ones we knew before all the massaging?
HaroldW:
I think it’s used as a sensitivity dial; I’ll speak for SteveF, who’s said as much too. What they can’t get from sulfates, they trump up with out-of-date solar forcings and non-physically based cloud feedback models, not to mention the unhealthy effects of politics and conflicts of interest on the researchers.
I’d describe the situation for the current state of GCMs as “one small step above a complete shambles”.
Andrew_FL, could you provide a link to (or more complete reference for) that paper? — Thanks!
Layman Lurker:
I was time-pressured earlier (was drawing wife aggro), so I didn’t get a chance to ask this.
I was wondering if you could expand this comment a bit further?
By the way, I’m already “all over” GISTEMP for their ad hoc 1200-km smearing method. Something a lot more optimal is needed than this. E.g., see NCDC’s empirical orthogonal function + optimal interpolation methods. If I were to develop a method, once you’ve performed the homogenization of data, this is the direction I would go in.
Lucia:
The correct answer (SteveF alluded to this) is there isn’t anything physical about it. It’s an arbitrary parameter from an arbitrary hacked-together model, drawn with no particularly deep inspiration from the work that’s gone on before, which has gone through absolutely no verification testing.
But remember!
>.<
Carrick (Comment #83128) “warming means exclusively a positive trend in measured temperature over some interval and not a tortured-till-it-confesses derivative product of temperature—it could easily cool in a decade without that overturning the fact that the long-term trend is for warming.”
1. What “interval” are we talking about?
2. What do you define as “long-term trend”? What is “the term”?
3. How do you declare a fact before it has happened?
4. Assuming it will happen, how do you distinguish it from the evidence of a 1,500-year natural climate cycle in which the Earth naturally warms and cools?
(No, unflattened).
May I tentatively suggest:
Stick to the math (& the quatloos). (Not so) veiled digs ….
Well, you don’t seem to like directness, so I’ll leave you to fill in the blanks. Incidentally, I don’t care if none of your denizens respond. Nor whether you like or dislike His Lordship.
I value your math.
Richard,
1) You pick the interval, and the sign of the trend tells you whether it’s warmed or cooled over that interval. It’s definitional.
2) Long enough to separate it out from short-period climate noise. Roughly 60 years.
3) Your valence is backwards here. If we have a 120-year period over which we’ve established a statistically significant long-term warming trend, then to overturn or contradict the conclusions drawn from this 120-year period you will need a longer observation period than 10 years.
4) Model + measurements.
BTW, on topic:
I would agree with HaroldW and your (shortly thereafter) comment.
Heretic
Over at Bishop Hill you commented that your comments were ignored. That made me wonder what you’d written. I checked, saw what sorts of comments you left (including the single-word comment ‘boring’), and saw that they were ignored.
I didn’t necessarily think you cared whether you were ignored, but I thought people would be interested in knowing I’d confirmed that you were being ignored, so I left a comment over at BH.
So you must be happy when I ignore your comments both here and at Bishop Hill. I would suggest that if you don’t care if I like him, you stop trying to initiate conversations about that irrelevant issue.
Maybe you should stick to discussing the topics you value then.
Re-run it with Svalgaard TSI instead of Lean.
Yep, Mosh. I listen.
The most accurate temperature proxies we have are the ice core records. From the GISP2 data:
1. For the interval 10,000 years to now it has cooled; for the interval 2,000 years till now it has cooled.
Has it cooled uniformly over the period? No. Over 10,000 years there have been 10 warmings; over the last 2,000 years there have been 2 warmings, the last one starting about 1,150 years ago and lasting till about 780 years ago.
Then it cooled till about 180 years ago and then started warming. If this warming lasts as long as the last one, we still have about 170 years to go. The previous warming was even longer, lasting about 5 centuries.
3. A period of 120 years, during what appears to be natural warming, is ill-suited to prove anything.
4. The measurements have not verified the model and cannot unless we have a long enough period of several centuries; however, they can disprove the model if cooling, or little or no warming, takes place over say 20 years.
Richard, the most recent temperature data from GISP2 is dated 1845, so you may want to revise your statement.
(Bloodied but unbowed).
#83164
Honestly, I’m not a troll.
Phil. (Comment #83170) Actually 1855. Revise it how? Did it not start warming about 180 years ago?
Richard (Comment #83172)
October 6th, 2011 at 3:15 pm
Phil. (Comment #83170) Actually 1855. Revise it how? Did it not start warming about 180 years ago?
Replace ‘now’ by 1855. Twice.
Phil. (Comment #83173) “Replace ‘now’ by 1855. Twice.”
I presume you mean ” For the interval 10,000 years to now it has cooled, for the interval 2,000 years till now it has cooled.”
I’ll have to see about that. After I update the data from 1855 to now and append it onto the ice-core data, and I’m not sure how that is done.
I suspect I will not have to replace them though. I am talking about trend.
Richard, I’m not going to deal too much with data prior to when we had instrumentation. We’ve barely enough instrumentation of the various types needed since the satellite era. Indeed, the surface temperature instrumentation gets constantly called into question regarding its quality. I don’t see how we can do that at the same time we rely on data from a few isolated locations in remote regions of the Earth using a proxy of uncertain quality, with little of the corresponding climate information needed to make sense of it.
That depends on what you’re trying to prove. I think you need to sort that out in your head before you can decide whether it is “ill-suited” or not. That’s where Monte Carlo-type analyses can provide some insight.
Whether it’s warming or not is an observational issue, whether you’ve acquired enough data to verify a particular model depends both on the model and the quality of the data.
I believe “several centuries of data” is just a number you pulled out of the air.
Again, answering what intervals you need is the point of Monte Carlo analysis and other quantitative methods similar to it—it will tell you how long you have to observe before you can start making inferences of this sort.
Regarding 20 years…you’re moving the goal posts now. I said you could not overturn 120 years of measurements with 10 years of data. Now 20 years magically appears from nowhere. You’re not even coherent in your reasoning.
Anyway, if you want to work out how many years of observation might be needed, first you have to ask “what was the observed trend for the prior 120 years?” The answer would be about 0.5°C/century. Put it in the formula above $latex \sigma_T(t) = 15.6\,t^{-1.14}$ and solve for $latex \sigma_T = 0.25$ (95% CL), and you get 38 years… it would take about 40 years to demonstrate at the 95% CL that the temperature increase observed for the previous 120 years had stopped.
(This model assumes only that global warming has stopped, as Monckton asserts, not that we suddenly go into a rapid cooling period.)
What AMac said!
😉
Re: Richard (Oct 6 14:46),
(This comment was written prior to reading Carrick’s #83175. His points are more important to the topic of discussion, but what I raise here is nonetheless relevant.)
Could you link to the GISP record you are discussing?
On scales of millennia and longer, these paleo records provide valuable context.
I am very skeptical of the value of paleotemperature reconstructions with a purported precision of centuries (or less) that cover the past millennium or two. In particular, I think the error bars that their creators assign to them are likely to be much too narrow. I suspect that there is negative value to most efforts to link reconstructions of the temperature history of the past 1,000 to 2,000 years to the questions of interest on the current climate. (PAGES-CLIVAR is an example of this type of project. It should be abandoned.)
Hopefully I’m wrong, and GISP does provide reliable and useful information over this time period.
Troy_CA,
For sure. And that sort of thing points to a) the vast uncertainty in the volcanic forcing estimate, and b) the overall uncertainty in climate sensitivity. Tamino does not even touch on the principal issues of uncertainty in climate sensitivity… the miraculous aerosol direct and indirect effects, which are conveniently allowed to be most any value that you need to calculate a frightening climate sensitivity. The IPCC’s ‘credible range’ for these two offsets together (in AR4) is -0.35 W/m^2 to -2.6 W/m^2. Just freakin’ fabulous science if I ever saw it. It is great when the ‘credible range’ for the implied sensitivity covers everything from “who cares” to “we are doomed to become like Venus”.
As I think I first said three years ago: the whole exercise is so intellectually corrupt as to beggar belief, yet it goes on. Count on ‘aerosol offset’ continuing to be the magic kludge of climate science for AR5 and the foreseeable future thereafter… or until the US Congress comes to its senses and starts defunding the worst offenders.
Tilo Reber (Comment #83064)
October 5th, 2011 at 9:50 am
“Let’s not forget that UAH is now diverging from RSS and we don’t know who has it wrong. Also, the divergence between RSS and UAH in the mid troposphere is even larger. I’ve put off drawing any conclusions about the surface trend based on both RSS and UAH.”
There is a new paper from NOAA where they use a novel method for constructing the sat temp record (except for TLT).
http://www.star.nesdis.noaa.gov/smcd/emb/mscat/mscat_files/ZW_AMSU_paper_JGR_Accepted_version.pdf
For the record, they are working on a TLT product.
Richard (Comment #83169)
October 6th, 2011 at 2:46 pm
“The most accurate temperature proxies we have are the ice core records. From the GISP2 data:”
“You do realize the significant uncertainties and the amount of modelling required to produce that record, right?”
Secondly
“I’ll have to see about that. After I update the data from 1855 to now and append it onto the ice-core data, an I’m not sure how that is done. ”
You can’t just append data onto an ice core record like that…
http://www.skepticalscience.com/crux-of-a-core1.html
http://www.skepticalscience.com/crux-of-a-core1b.html
http://www.skepticalscience.com/crux-of-a-core2.html
http://www.skepticalscience.com/crux-of-a-core3.html
RE: Richard & GISP2 data
If you are referring to the findings of Dr. Richard Alley (who was actually down in the hole in Greenland for his ice core studies during the 90’s) you may find he has very different ideas. Dr. Alley is a very interesting guy on the subject of paleoclimate and he believes AGW is clearly happening today. He does note a clear NH signature of the MWP and LIA but that does not in any way change the reality of AGW.
If you are referring to the interpretation of Dr. Alley’s work by Dr. Don Easterbrook that hit the news a while ago, you may be very disappointed. Dr. Easterbrook made some rather large errors in his interpretation that failed to support his conclusions. His GISP2 interpretation is not well regarded as a scientific study.
“He does note a clear NH signature of the MWP”
AD1300 Event in the Pacific suggests he is wrong.
http://goliath.ecnext.com/coms2/gi_0199-7024384/The-A-D-1300-Event.html
Just noting that most of the recent Greenland dO18 isotope/temperature conversion formulae used, starting from Richard Alley 2000 and on, are off by a factor of 2. Take whatever anomalies are provided and divide them by 2. The formula for the summit (GISP) and northern icecap (NGRIP) of Greenland is:
TempC = (dO18 + 13.4)/0.8
That will get you very close to the real temperature and the real changes over time. Alley used about 0.4 in the last term (which is closer to the tropical number than to the summit of a 3 km high glacier).
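(To make the quoted formula’s arithmetic concrete, plugging in a hypothetical dO18 value of -35 gives $latex T = (-35 + 13.4)/0.8 = -27^\circ \hbox{C}$.)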
Robert (Comment #83180) “You do realize the significant uncertainties and the amount of modelling required to produce that record right?”
Keep your sarcasm to yourself! If you have something to tell me do so. I am not a Bl#$%y climate scientist.
“Greenland ice-core records provide an exceptionally clear picture of many aspects of abrupt climate changes, and particularly of those associated with the Younger Dryas event, … Well-preserved annual layers can be counted confidently, with only 1% errors for the age of the end of the Younger Dryas 11,500 years before present. Ice-flow corrections allow reconstruction of snow accumulation rates over tens of thousands of years with little additional uncertainty. Glaciochemical and particulate data record atmospheric-loading changes with little uncertainty introduced by changes in snow accumulation. Confident paleothermometry is provided by site-specific calibrations using ice-isotopic ratios, borehole temperatures, and gas-isotopic ratios. Near-simultaneous changes in ice-core paleoclimatic indicators of local, regional, and more-widespread climate conditions demonstrate that much of the Earth experienced abrupt climate changes synchronous with Greenland within thirty years or less. ”
This answers your “significant uncertainties” and also Carrick’s “I don’t see how can we do that at the same time we rely on data from a few isolated locations in remote regions of the Earth using a proxy of uncertain quality with little of the corresponding climate information needed to make sense of it.”
Carrick I am disappointed that you try and make sense of 120 years of instrument record with Monte Carlo analysis, while ignoring science, dismissing data and also acknowledging the drawback of the 120 year record.
The “surface instrumentation” indeed is faulty because of poor locations, huge grids left out, poor instrumentation, and the basic flaw of measuring air temperatures over land being mixed with sea water temperatures over the sea to arrive at a “global temperature”. This makes absolutely no sense to me.
However all this is just over the top of my head for me. I will answer your post later.
Quickly: “I believe ‘several centuries of data’ is just a number you pulled out of the air.” No, look back. I gave my calculation that a doubling of CO2 will take place in over 2 centuries, at the end of which period we will know if the models are correct or not.
“Regarding 20 years…you’re moving the goal posts now.” Again no. I read somewhere that the likelihood of 2 decades of cooling in the climate models was very low.
Richard (Comment #83169)
October 6th, 2011 at 2:46 pm
The most accurate temperature proxies we have are the ice core records. From the GISP2 data:
###
That really doesn’t tell us a lot. Really accurate compared to what?
Also, as others have pointed out, these records involve some very heavy modelling. Finally, you’re talking about one place on the globe.
Imagine I tried to go onto WUWT and argue that with one temperature record I could tell you the average temperature of the globe.
The last time I listened to Jones (AGU 2010) he had a very interesting talk about ice cores in Greenland… and calibrating them by using a teleconnected grid square off the coast of Finland. The point being this: you have large uncertainties in the paleo record. You have poor coverage. Those uncertainties are larger than the uncertainties in the modern record; the coverage is less than the modern record. And the signal to noise of a spatial location changes over time. If you believe the modern record is too uncertain to determine anything, then running to a less certain paleo record to prove your points is really epistemic nonsense.
Bruce.
This suggests that Nunn was wrong about the 1300 event
http://www.pacificarchaeology.org/index.php/journal/article/view/15
Like I said, trying to base an argument on paleo data is an epistemic minefield.
whether you are arguing for AGW or against it.
Nunn disagrees.
http://pacificarchaeology.org/index.php/journal/article/view/37
But what exactly does Fitzpatrick disagree with?
As Steven Mosher points out, proxy-based paleo recons are a package deal. It isn’t particularly surprising that the inventors of Reconstruction X offer glowing endorsements of Reconstruction X’s superlative accuracy and precision.
Remember, no cherry picking.
“Similar to the NH, this SH expression of the MWP is not homogeneous in time. Rather, it is composed of two periods of generally above-average warmth, A.D. 1137–1177 and 1210–1260, that are punctuated by years of below-average temperatures and a middle period that is near average. Overall, this translates to a MWP that was probably 0.3–0.5C warmer than the overall 20th century average at Hokitika and, for the A.D. 1210–1260 period, comparable to the warming that has occurred since 1950.”
http://ruby.fgcu.edu/courses/twimberley/EnviroPhilo/CookPalmer.pdf
And to prove this tree ring chronology was as accurate as the team’s (although they had a better excuse): “As before [Cook et al., 2002], the post-1957 tree-ring data were severely affected by known logging of the stand and, therefore, not used for either calibration or verification.”
What is it about 1957 or 1960 that causes trees to quit growing?
Amac, Mosher if proxy reconstructions aren’t allowed, then you AGW people have to prove current warming (even with the current pause) is unusual.
Until then I assert that the post-1957 or 1960 cooling is correct. It is only bad data that suggests it is warmer.
Bruce:
Wow, that’s a trip.
If you don’t agree with Bruce, then you’re an AGW person.
Our disagreement couldn’t have stemmed from what he is saying being palpably wrong.
Richard, you can be disappointed as long as you want. My judgment of the data: pre-1950 it’s poor quality (it needs correcting beyond current methods to be used), and pre-1850 (proxy data, limited instrumental measurements) it currently has nearly zero value. Steven Mosher pretty much summed up what I also think, in more detail.
“Until then I assert that the post-1957 or 1960 cooling is correct. It is only bad data that suggests it is warmer.”
This phenomenon is general; tree rings diverge from station temperatures since the beginning of the twentieth century by about 1 °C per century. Obviously this comes from the disturbances affecting the stations (see this discussion: http://rankexploits.com/musings/2011/analyzing-surface-stations-part-1/#comment-76436).
The ice core and benthic foraminifera dO18 isotope proxies provide us with the best information about the historical climate. Let’s just say it is always quite consistent with what we know about local climate conditions at the time. Ice ages, rapid warming, volcanoes, hothouse Earth, ice age Earth going back hundreds of millions of years and with close to annual resolution for the most recent periods.
Tree rings, on the other hand, are almost useless except with respect to fossil/buried trees, which can tell us where the tree-line for pine was in 1000 BC, for example.
We can also go back 2.5 billion years with the Carbon 13 isotopes showing several Snowball Earths over the period.
http://img824.imageshack.us/img824/108/paleoclimate2500mya.png
Bill Illis:
At a few, very atypical geographical locations. And they don’t provide any of the other information you need to make use of this data (cloud forcings, solar forcings, arctic ice coverage, etc.).
Let’s use data—that in no way represents mean climate—to prognosticate about climate in favor of the relatively high detail of data provided by surface instrumentation actually designed to measure temperature (and the plethora of other measurements that allow for the possibility of modeling the effects of forcing over time).
They have some value, but it is very limited. Notice that your “supporter” phi thinks the divergence between temperature and tree rings must be a problem with precisely built instrumentation designed to measure temperature, not with pieces of wood growing out of the ground, responding mostly to moisture changes, that some nuts are trying to infer temperature from.
Carrick,
Ring widths are highly dependent on rainfall. This is not the case for ring density (MXD). The problem is not with the instruments but with their siting. Here, trees are much more reliable.
An illustration of the problem at local level:
http://img38.imageshack.us/img38/1905/atsas.png
And globally (NH) :
http://img708.imageshack.us/img708/1363/anomthn.png
phi —
As best I can figure, you are claiming that tree ring density records (MXD) are more reliable than instrumental records from thermometers, as far as providing a record of temperature at a given location.
This might be humor or parody, or you might be serious. I can’t tell. Clarification would be helpful.
Amac, Mosher if proxy reconstructions aren’t allowed, then you AGW people have to prove current warming (even with the current pause) is unusual.
Until then I assert that the post-1957 or 1960 cooling is correct. It is only bad data that suggests it is warmer.
####################
actually, not. As a thought experiment, place yourself in 1850. Give yourself AGW theory. Heck, give yourself the science of the day. Increased CO2 will cause warming.
Now, predict the future from 1850 to today. Would you predict cooling? Nope.
The physics tells us that more GHGs will cause warming, warming above and beyond any natural cycles. Lows will be higher, peaks will be higher still. I don’t need to know anything about the MWP. In fact, I would not be surprised if the MWP was higher than today. It wouldn’t change anything I know about the physics of AGW.
To be sure, climate science (Mann) has tried to make paleo a centerpiece of the argument. That’s a mistake. It’s a physics mistake, an epistemic whopper, and a PR disaster.
Thanks Amac.
You notice how quickly Bruice’s (I like that spelling) rigorous skepticism flies out the window when he finds a paper or chart he likes. He is quite unlike those of us who demand to see code and data for ANY claim. He suffers from “selective” skepticism.
He doesn’t get this about himself.
AMac,
Sorry, there was nothing as humorous as that in my message.
For a specific location, a thermometer is obviously ideal, but it is not the temperature of a specific place that one seeks to know. What we are trying to approach is the evolution of regional and global temperatures. The critical thing, at this level, is not the accuracy of the instrument but its spatial representativeness. That is why MXD are more reliable than stations.
The evolution of MXD is consistent with TLT and with the melting of glaciers, but not with the temperatures of the stations. However, the main and most convincing argument is the cooling bias of the discontinuities in the raw temperature series.
Re: phi (Oct 7 12:34),
That tree rings are more reliable than weather stations: an extraordinary claim. Thus, it would require extraordinary evidence — likely in a more accommodating format than a blog comment.
As Steven Mosher might remark, you also have an epistemological problem. To show that MXD provides a better measure of temperature than do thermometers, you will need to compare both the MXD proxy-derived record and the thermometer record to the Gold Standard for temperatures.
What will that Gold Standard be?
Evidence that supports his conclusion is phi’s gold standard.
Tree growth responds first to moisture, secondly to fertilization (availability of needed minerals), thirdly to the number of sunlight hours, and fourth or lower to temperature. Some of these are very nonlinear (the influence of temperature on tree ring growth is very nonlinear), some of the effects are additive while others are not, and some of the parameters have mutual interactions (high precipitation levels lead to low available soil mineral content, for example).
There’s an argument for matching tree ring width or MXD or whatever other metric one favors to local temperature that allows one to use trees in certain settings, but it’s based entirely on hand-waving arguments and ad hoc theories with little experimental support, or real chance of being right for more than a period of, say, 40 years.
For example, there is the “temperature limited growth” in tree lines—that works well as long as the tree line doesn’t shift. Same goes for temperature limited growth in the sub-artic climate. Works again, as well as the temperature doesn’t shift.
I would argue that the “gold standard” is in the coherency of the measurements from multiple scientific instruments designed to measure temperatures (surface temperature stations compared to satellite measurements), not a plant designed to optimize its growth patterns to changing environmental conditions.
For this period, tree ring proxies do remarkably badly as temperature proxies.
For me this is all I need to know in order to write them off as a usable proxy for temperature. Even if you could argue that it was CO2 in this case that is driving the divergence, that is hardly proof that other environmental factors won’t drive similar divergences during periods for which no alternative temperature measurement is available.
Forget about it. MXD is a totally useless metric (at least not without maybe 100 more years of observations, so we can tease out all of the interaction terms and maybe be able to really use tree ring growth as a reliable metric for temperature).
Steven Mosher:
Is there a word “preclusion” (a decision made before the evidence was presented)? I think phi and Bruce both operate in that world: first decide what is true, then mine for data that supports it.
Number of sunlight hours varies, by the way, in the following ways, among others: 1) changes in cloud cover with changing climate, 2) the presence of other mature growth in the vicinity of the tree in question; this second factor increases and decreases as trees grow or are killed by fire, pestilence or disease, all three of which have significant interactions with prevailing climate conditions.
As a for-instance, in a period of several years of drought, one might expect a higher incidence of fire, which eradicates nearby trees. The thing to remember is that you’re picking the trees that made it through the fire (the ones with long proxy records), so there is a selection bias present as well.
In my view, the epistemological problems are not where you think, and it seems that there is much confusion about dendro.
Widths and densities of tree rings are two very different parameters which do not react the same way to environmental constraints. MXD are essentially driven by the temperatures of the summer months. It’s just a known fact. You can also see this from this graph:
http://img97.imageshack.us/img97/8966/dendrod.png
Unfortunately there is no gold standard for the evolution of regional and global temperatures; we are reduced to expedients. Comparison with proxies (TLT, MXD, glaciers, etc.) is one. The most striking, however, is the analysis of raw temperature series. You probably know that the raw series are affected by discontinuities of all amplitudes. Only the largest can be identified by statistical methods. These detected discontinuities are very significantly biased toward cooling. The only sound explanation that I know of for this bias involves a powerful disruption by urbanization: when stations are moved, these effects are sharply reduced by the improvement of the environment. The magnitude of the supposed disturbing effect is of the same order as the divergence with proxies.
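To make the statistical detection mentioned above concrete, here is a minimal sketch (not the homogenization algorithms CRU or NOAA actually use): difference a candidate station against a neighbor composite and scan for the largest step between segment means. The function name and data layout are hypothetical.

```python
import numpy as np

def best_break(candidate, reference):
    """Hypothetical helper: locate the largest mean shift in the
    candidate-minus-reference difference series.
    Returns (break index, step size in the input units)."""
    d = np.asarray(candidate, dtype=float) - np.asarray(reference, dtype=float)
    best_k, best_step = None, 0.0
    for k in range(2, len(d) - 2):           # keep at least 2 points per segment
        step = d[k:].mean() - d[:k].mean()   # post-break minus pre-break mean
        if abs(step) > abs(best_step):
            best_k, best_step = k, step
    return best_k, best_step
```

Whether the detected steps skew toward cooling, as claimed, is then an empirical question about the sign distribution of the steps found this way.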
ivp0 (Comment #83181), the data I have taken is indeed from Alley, R.B. (2004), GISP2 Ice Core Temperature and Accumulation Data.
The opinions are neither Alley’s nor Dr. Don Easterbrook’s (of whom I have never heard). Why should I take anyone’s opinion to plot the trends from the data? (Or use anyone else’s brain to think for myself?)
That Global warming is happening is immaterial. I have pointed out the data above shows that it has happened several times in the past.
Carrick (Comment #83175): “I don’t see how can we do that [deal with data prior to when we had instrumentation] at the same time we rely on data from a few isolated locations in remote regions of the Earth using a proxy of uncertain quality with little of the corresponding climate information needed to make sense of it.”
Carrick, so you would rather rely exclusively on the 120 years of flawed “instrument” data that we have since the end of the Little Ice Age to determine whether there is AGW and whether it will continue?
Your tight-knit cabal of climate scientists have your heads so firmly stuck in manipulating this data with Monte Carlo analysis, models, etc., that you have forgotten basic science, the scientific method and logic.
When confronted by simple questions and logical conundrums you retreat into complicated mathematical analysis and make pseudo-scientific, astrological-like predictions.
Science deals with subtle evidence. A ship’s mast disappearing into the distance, or a pile of rocks lining a moraine. Would you similarly dismiss continental drift, the ice ages, evolution?
You say that the ice-core data is a “proxy of uncertain quality with little of the corresponding climate information needed to make sense of it”, but Carrick, the ice-core data has exceptional and unbroken accuracy for the temperature record of that place, far more accurate than your instrument record.
From this data I calculate that the AVERAGE temperature from approx 1800 to 1855, a period of not just one but 5 decades, at GISP2 was -31.6635 C, and the average temperature from approx 925 AD to 1011 AD, a period of 8.6 decades, was -30.6592 C, OR 1 C WARMER THAN THE 5 DECADES PRIOR TO 1855.
This also corresponds to a period within the Medieval Warm Period, AD 950 to 1250.
A year or two could be a flash in the pan, but 8.6 decades?
Not only that, I also calculate from the data that the average temperature from about 273 BC to about 4 AD (that’s close to 2¾ centuries!) at GISP2 was -29.9874 C, or 1.67 C WARMER THAN the 5 decades from 1800 to 1855.
GISP2 was 1.67 C warmer than it was from 1800 to 1855 for 273 years, and this would not be reflected elsewhere? I don’t think so!
And you carry on about ONE decade being 0.7 C warmer than the end of the Little Ice Age, and about the melting of the Arctic ice, as phenomena unprecedented in the annals of time.
I would bet my bottom dollar that the Arctic ice disappeared during summer during those 2¾ centuries, and very possibly during the Medieval Warm Period also.
Just eyeballing the graph I can see even warmer periods for long periods of time in the past, as much as 3 degrees warmer for many decades.
And there is corroborating evidence, which AGWers work hard to deny (who are the deniers here?): there is the Medieval Warm Period, the Roman Warm Period and the Holocene “Optimum”.
The Younger Dryas, a global phenomenon lasting just a few years, is accurately reflected in the GISP2 records.
As for evidence from your analysis that global warming has stopped or not: sorry, there is none. That is simply nonsense. It could stop one day and start again the next day and carry on for decades, or vice versa.
There is nothing good about trying to reverse our temperatures, even if we could, to go back to the mythical halcyon days of the Little Ice Age. If our temperatures plunged back to those levels we would be facing global catastrophe, which we are not facing with warmth.
I wrote a post and it disappeared. Maybe it was too long. Pity.
Tears.
====
Edited it and tried again. Nope, doesn’t work. Oh well 🙂
Richard– I fished them out of the spam bin. I don’t know why they went there.
The spam trap also contained some comments by Dr. J. None were substantive, so I left them there.
“First decide what is true, then mine for data that supports it.”
As opposed to: Blame it on CO2 and stick fingers in ears and over eyes when other data suggests it wasn’t CO2?
A) It is true that there are more climate variables than just CO2.
B) Albedo has changed by 7W/m^2
C) Bright Sunshine has changed by a similar amount.
D) Conclusion from Mosher: CO2 contributes 50% of all warming.
E) Conclusion from me: Laugh at Mosher’s conclusion.
With a budget of $0 for research I can put a dent in claims of CO2’s culpability.
Imagine if even 10% of the billions allocated to the IPCC and green propagandists were spent on alternatives.
Imagine if “climate scientists” didn’t fear for their careers when they contradict the team.
Imagine if it didn’t take the awarding of a Nobel Prize to momentarily dispel the fear of retaliation.
phi:
LOL. That’s great!!!
A proxy that may work 3 months of a year. But not tested. Or where rigorous testing is available, it fails.
Carrick, phi, one other point: For dendrochronology, neither stands nor individual trees are sampled randomly. They have to be selected so that the attribute in question (e.g. temperature) is properly represented.
Thus, like medicine, dendrochronology is not merely some set of robotic, mechanical procedures. It is a calling in which, after long apprenticeship, the seasoned practitioner skillfully merges Science with Art.
This doesn’t strike me as terribly charming, but I can see how others might view it that way. Reliable? Alas, my imagination doesn’t stretch that far.
Carrick: To me, that graph could very well be part of such a rigorous test. Did you see the match between the non-smoothed curves? What do you think the p-value is for such a match?
So I’d like to ask phi where he found this graph, just to see how exactly it was made.
Amac: thank you for pointing out that medicine is not reliable. I think I’ll still pay a visit to my physician every now and then though. 😉
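One hedged way to put a number on toto’s p-value question, assuming the two annual series are aligned on the same years (the nominal p-value from a Pearson test ignores autocorrelation, so it will be optimistic):

```python
import numpy as np
from scipy.stats import pearsonr

def match_strength(proxy, temps):
    """Pearson r and nominal p-value between two aligned annual series."""
    r, p = pearsonr(np.asarray(proxy, dtype=float),
                    np.asarray(temps, dtype=float))
    return r, p
```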
Richard:
That’s not a good description of the philosophy of science.
Science deals with converging lines of evidence. Dissonance between different measurements is seen as evidence against something; coherence is seen as evidence for it.
If you want to try and use proxies go ahead. I think they are worthless.
In my view, a better statement of the philosophy of measurement science is: first prove the method, then analyze the results of data collected using that method, and then interpret them.
The right approach is not “well there isn’t anything else”. If the only method that is available is inadequate, then you have nothing to work with.
toto, I don’t know the provenance of that graph, and without knowing that, I don’t have any idea what’s really being compared.
Even if right, it only helps you 3 months of a year, and still doesn’t give you the other data you need for any meaningful climatology, such as cloud cover, global albedo, etc. We might get there with proxies in 100 years; I don’t think we’re there right now. (For one thing, you need a long enough overlap of the proxy with the “gold standard”, which is the surface temperature record + satellite measurements.)
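A sketch of what that overlap buys you, under the usual assumption of a linear proxy-temperature relation: calibrate on part of the instrumental overlap and verify on the held-out remainder. The names and data layout are hypothetical.

```python
import numpy as np

def calibrate_verify(proxy, temps, split):
    """Fit proxy -> temperature on the first `split` samples,
    then report the RMSE over the held-out verification period."""
    proxy = np.asarray(proxy, dtype=float)
    temps = np.asarray(temps, dtype=float)
    slope, intercept = np.polyfit(proxy[:split], temps[:split], 1)  # calibration
    pred = slope * proxy[split:] + intercept                        # verification
    rmse = np.sqrt(np.mean((temps[split:] - pred) ** 2))
    return slope, intercept, rmse
```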
I’m used to seeing curves that have a large divergence between temperature proxy and temperature post-1960. The whole “hide the decline” affair comes out of that.
Maybe phi could post a link to a peer-reviewed paper with that comparison in it.
Compared to climate science, medicine is incredibly advanced as a science. If you received 10th century medieval medical care, it was more likely to kill you than to cure you.
Also, since phi is claiming that it is a “known fact” that there is an exact equivalence between temperature and MXD, he should post a link which definitively demonstrates this.
Here is a recent review paper that uses tree-ring density.
The divergence is still present in that data.
Doesn’t look very promising to me.
With respect to convergent versus divergent lines of evidence, reflect on the nature of a concept that requires the coining of a term like “spaghetti graph.”
If one of those spaghetti strands is the “reliable medicine,” the others of necessity must be unreliable.
So, which one do you choose as your elixir?
Read what Carrick wrote above, and the answer is obvious.
Nice Carrick.
At AGU, Jones also said something interesting about proxies: those that reconstruct the winter are far more valuable than those that reconstruct the summer… and as I recall NH proxies are more important than SH proxies… tied to the variance, I guess, and the correlation structure…
So a winter proxy has more info about the full year than a summer one does…
I wonder if the proxy lovers above realize that most proxies reconstruct a SEASON and not the whole damn year…
but these nuggets of data and paintings of river crossings are venerated. Go figure.
Richard:
“From this data I calculate that the AVERAGE temperature from approx 1800 to 1855, a period of not just one but 5 decades, at GISP2 was -31.6635 C, and the average temperature from approx 925 AD to 1011 AD, a period of 8.6 decades, was -30.6592 C, OR 1 C WARMER THAN THE 5 DECADES PRIOR TO 1855.”
Read your numbers. Go look at temperatures in Greenland. Do you see a problem?
Richard:
Sorry you got shunted to the spam bin. It does make conversation difficult.
In your earlier posts you never linked to your data or findings, so we were all left to guess your sources. I don’t speak for Dr. Alley, but if I did I might suggest that you are inferring global conditions from one locality, and that presents many problems. GISP2 is certainly one of our highest-resolution tools for a look back into time, but it represents 1 location and 1 location only. Every single location contains much higher variability in weather and climate than the earth taken as a whole over time. Dr. Alley would be the first to suggest that GISP2 studies do not a global climate record make.
Here, Richard.
Some average temperatures from Greenland:
http://www.greenland.com/en/about-greenland/natur-klima/vejret-i-groenland/gennemsnitstemperaturer.aspx
So you’re calculating -31C for those MWP years.
Is that a yearly average? And why was it colder?
AMac (Comment #83177) “Could you link to the GISP record you are discussing?”
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/greenland/summit/gisp2/isotopes/gisp2_temp_accum_alley2000.txt
“On scales of millenia and longer, these paleo records provide valuable context. I am very skeptical of the value of paleotemperature reconstructions with a purported precision of centuries (or less) that cover the past millenium or two.”
On the contrary, the ice-core records provide exceptionally accurate temperature records at very narrow intervals at the places where the cores are taken.
For the GISP2 data the average interval between samples over the past 2,000 years is 8.87 years. More recent cores have much higher resolutions than that.
PS: After Phil. pointed out that “now” or “present” was not the now of today, I did a quick search and found that someone claimed it was 1855. However, according to Alley the first record was 95 years “before present”.
Can someone enlighten me on what he meant by “present”?
If it was 2000, that would make it 1905; or if it was 1997 (the original measurements were published by Cuffey and Clow (1997)), then it would be 1902.
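For concreteness, a minimal sketch of the windowed average described above, assuming the Alley file has already been parsed into (age in kyr BP, temperature in C) pairs and taking the conventional ice-core “present” of 1950 (the convention asked about here; a reply further down confirms 1950):

```python
def mean_temp(rows, start_year, end_year, present=1950):
    """rows: iterable of (age_kyr_bp, temp_c) pairs.
    start_year/end_year are calendar years AD (negative for BC)."""
    sel = [t for age, t in rows
           if start_year <= present - age * 1000.0 <= end_year]
    return sum(sel) / len(sel) if sel else float("nan")

# e.g. mean_temp(rows, 1800, 1855) versus mean_temp(rows, 925, 1011)
```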
steven mosher (Comment #83271)
“Some average temperatures from Greenland:
http://www.greenland.com/en/ab…..turer.aspx”
Steven, those temperatures are for the coastal areas. The temperatures I gave are for GISP2, much further inland and elevated on the ice sheet.
“So you’re calculating -31C for those MWP years. Is that a yearly average? And why was it colder?”
A yearly average; colder because of the above.
GISP2 altitude 3200 m
This is what NOAA has to say about the accuracy of the ages:
“The current conservative estimate of the age error is 2% for 0-11.64 kyr B.P., 5% for 11.64-17.38 kyr B.P. and 10% for 17.38-40.5 kyr B.P. (Alley et al., 1993).”
Verily, I have seen the error of my ways… Proxies are the path of righteousness… Mann08 was correct… upside-down is bizarre…
Brother Carrick! Brother Steven! Brother ivp0! Repent! Before it’s too late!
Richard:
That’s error in age, not error in mean global temperature. Whether there’s even a correlation between the proxy and mean global temperature is not established. There are other effects too, such as loss of temporal resolution as you go back further in time with ice core data.
You like the story it tells you so you accept the data without question on that basis.
Richard, before present is 1950.
Please quit using Richard Alley’s data. It is corrupted by an inappropriate isotope / temperature conversion formula.
This has been corrected by the more recent estimates put out by the scientists working with the Greenland data (they have just decided to not criticize Alley while correcting the record).
The NGRIP isotope extension out to 123,000 years ago would have Greenland at +10C in the Eemian interglacial using Alley’s formula. There would be no glaciers left at those temperatures (when the evidence shows even the southern third remained fully glaciated).
http://img830.imageshack.us/img830/6250/alley2000finaltime.png
Here are the corrected temps for Greenland and Antarctica back to the Eemian (divide by 2 for global temperatures).
http://img836.imageshack.us/img836/9484/lasticeageglant.png
http://img706.imageshack.us/img706/294/dryasevents.png
Carrick (Comment #83277)
Alley: “Confident paleothermometry is provided by site-specific calibrations using ice-isotopic ratios, borehole temperatures, and gas-isotopic ratios.”
Carrick (Comment #83278): It does say that’s for radiocarbon dating. But if it holds here too, then the 1855 date I have taken is correct.
re: Carrick (Comment #83158)
Sorry for taking so long to respond Carrick.
You asked me to expand on my comment regarding sharper spatial and temporal resolution in RomanM’s method.
Jeff Id posted the spatial trends since 1979 using Roman’s offset method here. It has been a while since I have looked at this, but the impression I got at the time was how striking the spatial discontinuities were. Jeff commented in the discussion thread at the time, in his thermal hammer article (here), about plotting a histogram to explore the distribution of these trends. Jeff posted another graph using the raw gridded GHCN data here, and it appears that the distribution of trends is much noisier, making it more difficult to distinguish any spatial discontinuities (although the graphic uses a different color scale).
Assuming it is true that Roman’s method gives sharper spatial resolution, then it should be possible to identify with more confidence which grid cells have discontinuities, and to follow up by exploring some interesting questions: Are these discontinuities random or biased? How well do the adjustment methods perform in removing them? (A sketch of the histogram idea follows this comment.)
At the time of these posts, I had not begun to dabble with R. This discussion refreshes my interest, and it would be fun to explore this a bit. However, I have not done anything in R yet with spatially gridded data, so I still have a bit of a learning curve ahead of me. Still too busy for now, unfortunately.
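Since the comment above mentions R, note that the promised sketch of the histogram idea is in Python instead, and the data layout (a dict of grid cells mapping to (year, anomaly) pairs) is invented for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

def cell_trend(series):
    """OLS slope in C/decade for one grid cell's (year, anomaly) pairs."""
    t, y = np.array(series, dtype=float).T
    return np.polyfit(t, y, 1)[0] * 10.0

def trend_histogram(cells, bins=40):
    """cells: dict mapping (lat, lon) -> list of (year, anomaly) pairs."""
    trends = [cell_trend(s) for s in cells.values() if len(s) > 2]
    plt.hist(trends, bins=bins)
    plt.xlabel("Trend (C / decade)")
    plt.ylabel("Number of grid cells")
    plt.show()
```

A tight, unimodal histogram with a few outliers would make candidate discontinuities easy to flag; a noisy, broad one would not.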
Bill Illis (Comment #83279) Can you give the data for this graph:
http://img706.imageshack.us/im…..events.png
Carrick (Comment #83277)
“That’s error in age, not error in mean global temperature.”
GISP2 doesn’t claim to measure the mean global temperature, only the temperature at GISP2.
“Whether there’s even a correlation between the proxy and mean global temperature is not established… You like the story it tells you so you accept the data without question on that basis.”
I do not accept the data without question. If you could point out valid reasons why it should be rejected, I will do so.
Bill Illis has called it into question, so maybe he could elaborate further. But you have given no reason for it not to be considered.
In the meantime, based on the data, have you even read what I wrote?
“From this data I calculate that the AVERAGE temperature from approx 1800 to 1855, a period of not just one but 5 decades, at GISP2 was -31.6635 C, and the average temperature from approx 925 AD to 1011 AD, a period of 8.6 decades, was -30.6592 C, OR 1 C WARMER THAN THE 5 DECADES PRIOR TO 1855.
This also corresponds to a period within the Medieval Warm Period, AD 950 to 1250.
A year or two could be a flash in the pan, but 8.6 decades?
Not only that, I also calculate from the data that the average temperature from about 273 BC to about 4 AD (that’s close to 2¾ centuries!) at GISP2 was -29.9874 C, or 1.67 C WARMER THAN the 5 decades from 1800 to 1855.
GISP2 was 1.67 C warmer than it was from 1800 to 1855 for 273 years, and this would not be reflected elsewhere? I don’t think so!”
What do you have to say about that?
(that should be 277 years)
Carrick, are you an expert in oxygen isotope-temperature relationships? If not, on what basis do you say that they are inaccurate? Why do you scoff at the science while promoting elaborate examinations of faulty “instrument” records?
Richard (Comment #83283)
October 7th, 2011 at 7:12 pm
Bill Illis (Comment #83279) Can you give the data for this graph:
——————–
Epica Dome C temp reconstruction to 800,000 years ago.
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc3deuttemp2007.xls
NGRIP dO18 isotopes extended to 123,000 years ago by matching to the Antarctic ice core data.
http://www.iceandclimate.nbi.ku.dk/data/2010-11-19_GICC05modelext_for_NGRIP.xls/
Use Temp C = (dO18 + 13.4) / 0.8 for the summit at NGRIP or GISP; the 0.8 changes to as low as 0.69 for Camp Century, nearer to the coast and at a lower elevation.
http://img808.imageshack.us/img808/4070/do18isotopetempconversi.gif
http://img819.imageshack.us/img819/2569/do18formulaelocation.gif
http://www.science.uottawa.ca/eih/ch3/ch3.htm#tt18ocip
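Bill Illis’s conversion, wrapped up for anyone who wants to apply it to the linked isotope files; the slope and intercept are his numbers, while the function name is invented:

```python
def d18o_to_temp(d18o, slope=0.8, intercept=13.4):
    """Linear dO18-to-temperature conversion: T = (dO18 + 13.4) / slope.
    slope = 0.8 at the NGRIP/GISP summit; about 0.69 for Camp Century."""
    return (d18o + intercept) / slope

# Example: a summit dO18 of -35 per mil gives (-35 + 13.4) / 0.8 = -27.0 C
print(d18o_to_temp(-35.0))         # -27.0
print(d18o_to_temp(-35.0, 0.69))   # about -31.3 with the Camp Century slope
```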
Richard:
My first point was that the errors you quoted were with respect to time, not temperature. My second is that we have no idea to what degree the temperatures at GISP2 correlate with global mean temperature. I think you’ve ended up admitting this too.
Richard:
1) On the basis that it’s not global mean temperature; 2) we don’t know how the local temperature correlates with global mean temperature; 3) we have no way to verify how well d18O collected from an ice core correlates even with local mean temperature. You don’t just push a button and get this data. It’s d@mn hard work, and even then the possibility of sample contamination is something you can’t easily control for.
As I said, if you’re comfortable using this data, “good with me”. I won’t use it or consider it without better independent validation of the method.
Carrick (Comment #83288) and (Comment #83290): I just did the trendlines for NGRIP; they also show a negative trend for the last 10,000 years and the last 2,000 years.
“Science [among other things] deals with converging lines of evidence.”
A picture is worth a thousand words. If you just eyeball the green Greenland temperature estimates, you can clearly see that the trend, after the rise following the peak in high-altitude summer insolation, is downwards.
http://img706.imageshack.us/img706/294/dryasevents.png
The last 120 years of instrument records show a positive trend. But this may be just part of a natural up and down movement (coupled with a slight AGW element) in a much longer trend towards cooling.
It looks so from those graphs. But you resolutely refuse to consider anything beyond 120 years.
Do you think, though, that this might be even remotely possible?
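A sketch of the trendline calculation Richard describes, reusing the hypothetical (age, temperature) layout from the averaging sketch earlier in the thread:

```python
import numpy as np

def window_trend(rows, last_n_years, present=1950):
    """OLS slope, in C per century, over the most recent last_n_years
    of an ice-core record given as (age_kyr_bp, temp_c) pairs."""
    pts = [(present - age * 1000.0, t) for age, t in rows
           if present - age * 1000.0 >= present - last_n_years]
    years, temps = np.array(pts, dtype=float).T
    return np.polyfit(years, temps, 1)[0] * 100.0

# e.g. window_trend(rows, 10000) and window_trend(rows, 2000)
```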
Bill Illis (Comment #83287) – Thanks for that Bill
Richard, how does the paleo-climatic temperature data you link to from Greenland square with recent findings of warmer-than-today conditions in Greenland 8000-4000 years ago:
“During the so-called Holocene Climate Optimum, from approximately 8000 to 5000 years ago, when the temperatures were somewhat warmer than today, there was significantly less sea ice in the Arctic Ocean, probably less than 50% of the summer 2007 coverage, which was absolutely lowest on record…Today, perennial ice prevents any sort of beach from forming along the coasts of northern Greenland. But this had not always been the case. Behind the present shore long rows of beach ridges show that at one time waves could break onto the beach unhindered by sea ice. The beach ridges were mapped for 500 kilometres along the coast, and carbon-14 dating has shown that during the warm period from about 8000 until 4000 years ago, there was more open water and less coastal ice than today.”
http://www.sciencedaily.com/releases/2011/08/110804141706.htm
I tend to agree with Carrick on this.
Carrick,
“A proxy that may work 3 months of a year.”
Three to six months; there is no perfect instrument. During this period the temperature is integrated physically. From this point of view it is better than the stations, which are based on two daily point values.
“But not tested. Or where rigorous testing is available, it fails.”
Wrong!
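A toy illustration of the point above about two daily point values versus a physically integrated mean; the diurnal shape below is invented purely to show that the two estimates can differ:

```python
import numpy as np

hours = np.arange(0.0, 24.0, 0.1)  # uniform sampling over one day
# Invented asymmetric diurnal cycle in deg C, for illustration only.
temp = (15.0 + 5.0 * np.sin(2 * np.pi * hours / 24.0)
             + 2.0 * np.cos(4 * np.pi * hours / 24.0))

print((temp.min() + temp.max()) / 2.0)  # min/max midpoint, about 13.3 C
print(temp.mean())                      # integrated daily mean, 15.0 C
```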
************
AMac,
“It is a calling in which, after long apprenticeship, the seasoned practitioner skillfully merges Science with Art.”
Indeed. It is a constant characteristic of climatology. We must be aware of it.
“Reliable? Alas, my imagination doesn’t stretch that far.”
An extended imagination is the first quality of a climatologist!
*************
toto,
Dendro series are available here:
http://www.ncdc.noaa.gov/paleo/treering-wsl-data.html
Temperatures here: http://www.meteosuisse.ch
I built the graph on this basis.
***************
Carrick,
“For one thing you need a long overlap of the proxy with the ‘gold standard’, which is the surface temperature record + satellite measurements.”
Temperatures of the stations as a “gold standard” is a joke, of course!
For satellites, see the second chart I referenced in message #83204.
“Also since phi is claiming that it is a ‘known fact’ there is an exact equivalence between temperature and MXD…”
No, there is no exact equivalence. That probably does not exist. You have no alternative but to use imperfect data and make cross-comparisons.
******************
Steven Mosher,
“I wonder if the proxy lovers above realize that most proxies reconstruct a SEASON and not the whole damn year..”
I wonder if the station lovers above realize that most of the thermometers reconstruct URBAN anomalies and not the whole damn regional evolution…
Richard,
How else is Carrick supposed to interpret your apparent lack of curiosity about the error on the two parameters in Bill Illis’s calibration equation above? (Make that three if you include the fudge factor for converting from local to global.) In contrast, we are very curious about that.
@SteveF (#83178)
[“Just freakin’ fabulous science if I ever saw it. It is great when the ‘credible range’ for the implied sensitivity covers everything from ‘who cares’ to ‘we are doomed to become like Venus’.
As I think I first said three years ago: the whole exercise is so intellectually corrupt as to beggar belief, yet it goes on. Count on ‘aerosol offset’ continuing to be the magic kludge of climate science for AR5 and the foreseeable future thereafter…”]
To paraphrase Willy Wonka:
“So shines an insightful truism in a weary climate debate…!”
Well said, SteveF.
phi,
In addressing posters such as Carrick and Mosher, please do not under-represent the uncertainty in the science of climate reconstruction.
Rather than mince words (“gold standard”, “exact equivalent”), why not address the argument square on? Why not simply demonstrate that the uncertainty on your favorite works is not devastatingly large?
phi,
Why do you dodge AMac’s substantive question about reconstruction “reliability” and attempt to switch the topic to one of “imagination”?
It’s not hard for me to imagine that you have difficulty either accepting or understanding the arguments being put forth here: that the science is fairly imprecise. To pretend otherwise would be nothing short of dishonest.
bender,
“Gold standard” and “exact equivalent” are inventions of Carrick. For my part, I have put up graphs illustrating uncertainties, contradictions and inconsistencies, but also interesting correlations on which we can work.
Climatology evolves in quicksand and holds as much Art as Science.
bender (Comment #83300)
October 8th, 2011 at 4:12 am
…. the error on the two parameters in Bill Illis’s calibration equation above? (Make that three if you include the fudge factor for converting from local to global.)
———–
Converting from local Antarctica and Greenland temperatures to the global temperature change is not really fudged. The isotopes indicate that the mid-latitudes declined by only 5C, and the tropics by 2C to 3C, in the ice ages. Globally it seems to work out to between 4C and 6C, and 5C is just the most commonly used value.
That also gives us a polar amplification factor of 2, which seems to show up in all the paleo-evidence over all timelines. (Even the PETM numbers which are often quoted are already a 2X polar-amplified value; the global change was only half as much as the commonly used numbers.)
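The local-to-global scaling Bill Illis describes, as a one-liner; the factor of 2 is his stated polar amplification, and the function name is invented:

```python
def global_from_polar(delta_polar_c, amplification=2.0):
    """Estimate a global change from a Greenland/Antarctic local change."""
    return delta_polar_c / amplification

print(global_from_polar(-10.0))  # a 10 C polar cooling maps to about -5 C globally
```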
bender,
“It’s not hard for me to imagine that you have difficulty either accepting or understanding the arguments being put forth here…”
Arguments? Just hand-waving!
“To pretend otherwise would be nothing short of dishonest.”
Listen. Reconstructions of temperature by CRU include an artificial warming of about 0.5 °C per century as compensation for discontinuities in the raw series. No explanation is given for this extraordinary bias. How would you rate this?
Mincing words now on “fudge”. All estimated parameters are subject to error, including this number “2”.
phi,
Hand-waving? Oh, so you’ve missed the argument entirely? Shall I spell it out for you? Arguing uncertainty is not “hand-waving”. It’s a request that standard errors be provided, confidence intervals … that sort of thing. Thanks.
bender,
If you pick up the thread of the discussion about dendro, you will easily discover that I am the only one to have made documented and quantified arguments. Speaking of uncertainty in the abstract is meaningless.
phi,
Happy to do so. However, the discussion here is about ice cores, not trees. Yes, “speaking of uncertainty in the air” can be “meaningless”… unless it compels someone to answer a very specific question about standard errors, confidence intervals, and suchlike. So, sorry, I do not accept your assertion that I am “hand-waving”. I am asking a question, whereas you are trying to argue that my question is not relevant, or that it has already been answered elsewhere, or…
bender,
“However the discussion here is about ice cores, not trees.”
The post concerns the UAH anomaly, not ice or trees. Specific discussions form at random. Are you the judge of that?
Otherwise, if you have a specific question, please ask.
phi,
I’ve asked several questions and you’ve answered none, other than pointing vaguely to a “dendro thread” somewhere else. You’ve dodged quite a number.
When I say “here” I mean at the bottom of this thread, not the opening post. Though the issue I raise is relevant in 50% of the threads at lucia’s Blackboard.
You have no expert opinion to share on climate reconstructions from ice cores? Nothing that touches on the debate here between Richard, Carrick, Bill Illis, Niels Nielsen? Then good day.
bender,
You do not know what we are talking about, you interfere with the dialogue having read only half of what is written, you ask phantom questions, you decide the topic for discussion, and you send home those who annoy you!!!
Who are you?
Some unpleasant demigod of the wind?
phi,
Do I need to list for you the questions that I’ve asked here, or can you read for yourself? If all you want to do is mince words, then let us know and I’ll not bother.
bender,
As I said before, if you have a specific question about what I have stated, please just ask.
phi,
What is the uncertainty on temperature reconstructions going back to HTO? Bonus marks for showing your work.
A macédoine of words? My specialty!
============
phi, it is obvious that you have little understanding of how the scientific process works if you don’t understand that two independent means of measuring a quantity, agreeing with each other, should serve as the basis for testing the degree to which a proxy can be used as a temperature measure. You have a lot of road to cover in your understanding of the scientific method before you and I have anything of substance to discuss.
Secondly, you don’t understand that MXD is a proxy only where you can select for “temperature limited growth”; it’s not a general characteristic that the two are going to agree.
Because of the selection of areas such as tree lines, the growing seasons for the trees tend to be very short: three months of the year is typical, not the six months you erroneously claim. So even if it were a “good” proxy for temperature, it would only be a good one for about a 3-month growing season, which makes it a crappy thermometer, even if it worked when you claim it is supposed to work.
Thirdly, your own little graph does not agree with the reconstructions of experts in the field like Briffa, who still very clearly show a divergence between global mean temperature and the MXD proxy for temperature post-1960, when they use what is considered the “state of the art” in the field.
It seems to be your argument that the two series that were designed to measure temperature are wrong and that your own interpretation of MXD (not Briffa’s either, mind you) should be the one we should all accept.
Without proof. (Because any real proof gives a “spaghetti curve” of proxies with varying degrees of disagreement with each other, and shows a substantive disagreement with “real temperature measures” over the interval when they should agree. No doubt the problem is they don’t have phi’s remarkable insight into MXD used properly. >.< )
So I think you need to 1) address why Briffa and other experts get it wrong, 2) address how you know, without confirming evidence, that MXD proxies can be used as a good measure of annual global mean temperature, or 3) explain to us why something that isn’t in any way a measure of annual mean temperature is somehow still of use climatologically, and how.
If you can’t do this, I think there’s no point to further discussions between us.
bender,
“What is the uncertainty on temperature reconstructions going back to HTO?”
Hmm, I never talked about such a period of reconstruction. My assertions concern only the twentieth century. See my comment #83244.
bye bye phi
Take a break and learn to read.
bye bye bender
Carrick,
“it is obvious that you have little understanding of how the scientific process works if you don’t understand that two independent means of measuring a quantity, agreeing with each other, should serve as the basis for testing the degree to which a proxy can be used as a temperature measure. You have a lot of road to cover in your understanding of the scientific method before you and I have anything of substance to discuss.”
Correction: I have a long way to go before understanding climatology. What you are stating is not scientific.
“…three months of the year is typical, not the six months you erroneously claim…”
What MXD measures is the density of latewood, but for some reason the correlation is better with April-September than with July-September (probably a question of reserve accumulation). April to September is also the standard dendro calibration period.
“So even if it were a ‘good’ proxy for temperature, it would only be a good one for about a 3-month growing season, which makes it a crappy thermometer, even if it worked when you claim it is supposed to work.”
So six months rather than three. That’s not bad; it is hard to find a proxy that does better. In addition, the correlation between summer and winter is not that bad. In any case, MXD allows me to critique summer station temperatures.
Regarding the date of the divergence, you have two variables to adjust. By modulating them, you can make the divergence appear around 1960, but a calibration early in the century is frankly better. See the two graphs in my message #83204. There is little doubt.
Carrick (Comment #83152),
I had seen the Hansen paper some time back. The paper continues to assert that there is a large planetary imbalance; they manage to support this by accepting as correct the only ocean heat reconstruction (by von Schuckmann, a co-author) which shows rapid heat accumulation, and ignoring all the other reconstructions, which show very little.
IMO, Hansen is making himself irrelevant with contorted efforts like that. FWIW, the document was not peer (or even pal) reviewed. I don’t think it ever will be.
toto,
The graph represents only the 70 series classified as Swiss and complete from 1901 to 1950. I standardized the series over that period prior to averaging.
Beware: the values are detrended; the divergence between MXD and T is about 1.5 °C over the twentieth century!
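A sketch of the procedure phi describes, assuming annual series on a common year axis; the 1901-1950 base period is from the comment, everything else is hypothetical:

```python
import numpy as np

def standardize(values, years, base=(1901, 1950)):
    """Rescale one series to zero mean, unit variance over the base period."""
    y = np.asarray(years)
    v = np.asarray(values, dtype=float)
    mask = (y >= base[0]) & (y <= base[1])
    return (v - v[mask].mean()) / v[mask].std()

def composite(all_series, years):
    """Average the standardized series into a single composite."""
    return np.mean([standardize(s, years) for s in all_series], axis=0)
```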
phi,
Before saying goodbye you might want to address Carrick’s argument. Your mincing of his words was not all that satisfying.
phi says:
“There is little doubt.”
Talk about “meaningless” “hand-waving” “words in the air” on the matter of reconstruction uncertainty. Irony meter: broken.
[OK, trying one last time…]
Carrick:
His graph does not clash with Briffa’s work in any way.
Briffa and others have repeatedly pointed out that the post-1960 divergence only occurs in some data sets and not others. In particular, divergence does not seem to occur south of 55°N.
His graph only uses data from Switzerland (latitude <= 51°N). So we don’t expect to see any post-1960 divergence there.
Phi: to confirm, by “detrending”, did you mean just calculating the linear trend and subtracting it?
AMac… who knew? I mean, why do we even USE thermometers (which are liquid-in-glass proxies) to measure the temperature…
Oh, AMac, do you know a good pool guy? Mine is busy mopping the floor with phi.
bender,
“Before saying goodbye you might want to address Carrick’s argument. Your mincing of his words was not all that satisfying.”
If you have a criticism, be more specific, thank you.
toto,
“Phi: to confirm, by ‘detrending’, did you mean just calculating the linear trend and subtracting it?”
Yes, for the graph “Suisse, valeurs normalisées détrendées 1915 à 1998” (“Switzerland, normalized detrended values, 1915 to 1998”). The other graphs are not detrended.
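So, per the confirmation above, “detrending” here means nothing more than subtracting a least-squares line; a minimal sketch:

```python
import numpy as np

def detrend(years, values):
    """Subtract the OLS linear trend, leaving the interannual variability."""
    slope, intercept = np.polyfit(years, values, 1)
    return np.asarray(values, dtype=float) - (slope * np.asarray(years) + intercept)
```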
phi,
When you say to Carrick “what you state is not scientific”, that is a little vague. Please be more precise in your critique. That is one example of many where you choose to make things less clear rather than more clear. If that’s your way of doing business, well, that’s, um, non-scientific, to quote a friend of mine. But that’s your business.
You appear not to have answers to my questions on the uncertainty of paleoclimatic reconstructions. That’s fine. It would have been preferable for you to just say so, rather than force me to put the words in your mouth.
Paleoclimatology is about as imprecise as a science can be before it must be called guesswork, or artwork.
bender,
Sorry, but under fire, sometimes I rush my answers a little.
Carrick’s argument regarding the independence of temperature reconstructions (CRU, GISS, NOAA, etc.) is obviously not valid. On the one hand the sources largely overlap, but above all, the entire methodology of data acquisition is in question, and there is absolutely no independence there. All these reconstructions roughly measure the same thing, but what? That is the question. BEST is unlikely to shed any light on this. So the whole point of the proxies is to allow a truly independent check.
I am interested mainly in the temperature of the twentieth century, and so I try to estimate the reliability of proxies for this period. We can perhaps extend the conclusions to earlier periods, but we have to be careful.
phi,
Reading the thread you referenced:
Your lack of explanation for the divergence phenomenon only underscores that these are very weak proxies. This was Carrick’s original argument back then, and yet here you are, 4+ months later, carrying on the same argument: this absurd notion that paleoclimatic reconstruction is an honest and precise science.
That the divergence phenomenon “seems” to vary geographically in strength gives me even less confidence that the paleos understand what’s going on with these tree responses to temp x precip x other stuff.
Please: no more wild goose chases down months-old threads. Your song is over. You entered right in the middle of a discussion on uncertainty in paleoclimatic reconstructions dating to the HTO based on ice cores. I presumed you had something to say on that topic. I was wrong. Apologies to lucia’s readership.
bender,
You do not understand the issue and just base your opinion on unsubstantiated claims without trying to verify them.
The discussion started from the UAH monthly anomalies, not from the HTO or ice cores!
************
Still, just one thing. In my opinion, the bias of the discontinuities is really the central issue in reconstructions of global and regional temperatures. Does anyone have an opinion on the subject?
Richard (Comment #83284)
Look at the 3 SkS articles I posted. They discuss the dating issue in detail. Also, there were temperature measurements taken at the GISP2 core during 1990… That would be a better comparison (still a crappy one).
phi,
I understand the issue all too well. Which is perhaps why you continually refuse to engage on the details that matter, only those that don’t matter. Run along, now.
Robert (Comment #83347), can you point me to the data from the temperature measurements taken at GISP2?
Saw the exchange, bender vs phi; looks like phi is the winner by TKO.
phi ran away from Carrick. If you want to call that a “TKO”, be my guest. To me, that’s a TCO: technical chickening out.
Paleoclimatology is in denial of the uncertainty surrounding its primary outputs. To this, phi says what? That I don’t know what I’m talking about? OK. When challenged to express the uncertainty on key propositions, such as the degree of warmth around the HTO, phi says what? That he doesn’t want to discuss that topic?
Yes, phi is the KO kid. Color me bloody.
toto, I looked at your reference. I saw no direct counterpart of Briffa’s earlier graph comparing his reconstruction to global mean temperature, but what I saw looked like total s**t. That was some fug-ugly data. Maybe you could point to a figure that you found compelling enough to bring you to the conclusion that this data doesn’t suffer from divergence problems similar to other data in its class? [*]
I think this is a fitting conclusion to my contribution to this thread:
[*] My true hope is the data did not suffer long.
phi:
Sigh. You can’t even follow an argument.
I spoke to the coherence between surface data and satellite data, not the coherence of the surface data sets among themselves.
You just really don’t get basic scientific methodology, and neither, from what I can tell, does Richard.
What I said earlier for people with cognitive and reading deficiencies and people who are too bullheaded to follow simple arguments:
“I spoke to the coherence between surface data and satellite data, not the coherence of the surface data sets among themselves.
You just really don’t get basic scientific methodology, and neither, from what I can tell, does Richard.”
Without getting too deeply into your argument with phi: basic scientific methodology I get; I’m not sure if you do.
If a data set flouts basic science and is non-scientific it does not redeem itself if it is coherent with another data set that is scientific.
The “instrument data” computes global temperatures by measuring the air temperature haphazardly near the surface over land, and the sea temperature over sea by dipping buckets into the sea from passing ships, then combining them to get a global temperature. This is basically flawed.
To argue that it is OK because there is a good correlation with satellite temperatures for the last 30 years is fallacious. This is basic science. There could possibly also be a good correlation with an astrologer’s data.
The data remains flawed.
Since we do not have any other data from before the satellites, it is perhaps marginally better than astrology for that period, but there is no excuse not to discard it once we have satellite data.
KO: Carrick on phi.
🙂
Richard:
Your claim of “basically flawed” would hold water if the surface temperature record were irreconcilable with the satellite measurements. They’re not, and the proper conclusion is that your claim doesn’t hold any water.
No, what smacks of astrology is to use measures like plants, which were designed to respond optimally to changes in environment, as temperature gauges, without the slightest meaningful validation of the degree to which this holds true, and similarly to put misplaced trust in ice core data as having some higher degree of verity than much more carefully designed measurements.
Having demonstrated that surface and satellite measurements are coherent with each other, there is now no excuse to discard surface data. To speak bluntly, this really does demonstrate your lack of understanding of the scientific method.
Any actual scientist is always going to want multiple converging lines of measurements, each with their own advantages and disadvantages:
Surface measurements are obtained in the atmospheric boundary layer, satellite temperatures are measured above it (there are necessarily differences in how the two respond to different sorts of natural fluctuation).
Surface temperatures have nearly complete temporal coverage (24-hour measurements, usually at 1-hour intervals). Satellite measurements are performed once a day at approximately the same time, so those measurements are discrete in time.
Satellite temperature measurements are geographically uniform; surface measurements are at discrete positions.
These two types of measurements complement each other. Only a complete noobie would abandon either one. Smart people would identify the strengths and weaknesses of both and strive to improve the measurements over time.
They might also try to introduce proxy measurements. But smart people would be glad that the response of the proxies is multimodal! Otherwise, you’d be stuck with just thermometers, and no way to determine precipitation or other important climatological factors needed to understand climatological events like, e.g., the Little Ice Age.
Eventually the current weakness of the proxies will become their strength: at some point in the future we will be able to extrapolate backwards into the past and reconstruct the global climate in a meaningful way. This wouldn’t be possible if tree ring density responded only to temperature, for example. In a way it is fortunate that phi is wrong. Temperature without the other climate variables is borderline useless in trying to diagnose why previous large-scale climate fluctuations occurred (just knowing they happened isn’t very useful; that’s categorical information, and it is already present in historical accounts of those periods).
Realists who live in the now would recognize that the point where proxies might attain this ideal is still in the future, and that much more hard work by the experts in the field is needed before that ideal can be reached.
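A minimal version of the coherence check Carrick describes, assuming two monthly anomaly series on a common (fractional-year) time axis; the names are hypothetical:

```python
import numpy as np

def coherence(surface, satellite, years):
    """Correlation of the two series, plus the trend of their difference
    in C per decade; high r with a small residual trend is 'reconcilable'."""
    surface = np.asarray(surface, dtype=float)
    satellite = np.asarray(satellite, dtype=float)
    r = np.corrcoef(surface, satellite)[0, 1]
    resid_trend = np.polyfit(years, surface - satellite, 1)[0] * 10.0
    return r, resid_trend
```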
Actually, DeWitt, think about the two drawings a second. Earth would be a BB emitting 242 W/m^2 average diurnally. Kick in some GHG with about 136 W/m^2 DWLR without changing the average input and watch both increase at the same rate. Where does it end up?
Carrick (Comment #83384): “Your claim of ‘basically flawed’ would hold water if the surface temperature record were irreconcilable with the satellite measurements. They’re not, and the proper conclusion is that your claim doesn’t hold any water.”
What exactly do you mean by “irreconcilable” or “reconcilable”?
Do you mean that they have a correlation? That they do not disagree by more than a certain extent?
Please note I am talking here only of calibrating the “Global Temperature” anomalies.
If you do two experiments and you know that one experiment is flawed by design, yet its results are close to those of the well-designed experiment, the flawed experiment remains flawed.
By claiming that the surface temperature records are “reconcilable” with the satellite measurements, you are implying that the satellite measurements are the “correct” ones and that the surface measurements are validated because they more or less agree with the satellite measurements.
I am not saying that surface measurements should be discarded for every analysis, but they should be for ascertaining “global temperatures” and their resultant trends, because that is not what they measure. They mix apples with oranges. This is logically wrong. GISS, for example, differs from the others. You cannot arrive at the right answer with the wrong method. Your whole treatise is wrong and a fudge.
You say “Surface measurements are obtained in the atmospheric boundary layer”. That is only for erratic and sparse coverage of 20% of the Earth’s surface.
You say I have “misplaced trust in ice core data to have some higher degree of verity than much more carefully designed measurements”. Can you tell me what the “much more carefully designed measurements” are that have a higher degree of verity than the ice core data? And what do these carefully designed measurements say, for example, about the temperatures of the past 2,000 and 10,000 years?
If you could point me to the data, I would be happy to use them also and see how they correspond with, complement, or reconcile with the ice core data.
Another way of putting it:
The “instrument data” computes global temperatures by measuring the air temperature haphazardly near the surface over land, and the sea temperature over sea by dipping buckets into the sea from passing ships, then combining them to get a global temperature. This is basically flawed.
“Your claim of ‘basically flawed’ would hold water if the surface temperature record were irreconcilable with the satellite measurements. They’re not, and the proper conclusion is that your claim doesn’t hold any water.”
No, my claim of it being basically flawed is not affected by “reconciling” the two records. It holds water by its own argument, which is not falsified by the reconciliation.
Carrick,
Okay, I had misunderstood. You spoke of satellites versus stations. But it is a very, very bad argument, much worse than what I thought!! I do not even need to talk about the hot spot effect; it is not necessary. See this chart:
http://img708.imageshack.us/img708/1363/anomthn.png
Carrick,
Now I have to thank you for defending my case so brilliantly. You, better than I could have done, explained the various arguments that highlight the weakness of station temperatures. You have clearly demonstrated the need for multiple independent sources, such as satellites, and the necessity of comparing them carefully, taking into account their respective strengths and weaknesses. I am particularly appreciative because the conclusion of all this should not be very pleasant for you.
phi,
Can you please address the substance of Carrick’s argument:
“what smacks of astrology is to use measures like plants, which were designed to respond optimally to changes in environment, as temperature gauges, without the slightest meaningful validation of the degree to which this holds true, and similarly to put misplaced trust in ice core data as having some higher degree of verity than much more carefully designed measurements”
by providing for us standard errors on paleoclimatic reconstructions in the 10 and 100 kyr ranges?
I predict the answer will not be healthy for your misguided faith in biophysical climate proxies.
bender,
Carrick just makes assertions. We must go beyond this stage and study the relationship quantitatively. I did this for a modest local case (see: http://img97.imageshack.us/img97/8966/dendrod.png) for the twentieth century. I think the conclusions we can draw from this case are partially valid globally for the twentieth century, and that is what interests me first. It also says something about the validity of the paleoclimate data, but there it is obviously more difficult to draw conclusions. Physiological discrepancies in the long term are not excluded, but that’s another subject.
Ok, so the answer to my question is “no”. At least you’re consistent in your policy of disengagement.
bender,
You are completely missing the point and you still do not understand the subject of the discussion. What is in question is, on one hand, the start date of the dendro divergence and, on the other, the reliability of station temperatures.
phi,
That may be your preferred topic of discussion. However it is not the central question here.
bender,
You landed in the middle of this thread. You address unpleasant words to me without even bothering with the subject. And you’re wasting my time. Thank you and goodbye.
phi,
You advocate paleoclimatology as a precise science, but fail to provide evidence to support your position. When challenged, you dodge the questions. You suggest I need to “learn to read”, yet you expect me to treat you with respect? Whatever. Paleoclimatology is a joke, and so are many of its defenders.