No fancy statistics in this post! I’m just going to show 5 year running means of observations by GISSTemp and the average over all runs from A1B and A2 scenarios:
- The 5 year running mean for GISS temp is computed by taking the average over the 60 months prior to the date.
- The 5 year running means for the models are taken by averaging anomalies over the same 60 months, and then averaging over all runs available for those SRES scenarios. (Thanks Geert and Chad.)
- Everything is baselined from Jan 1980-Dec. 1999. Projections for surface temperature in the IPCC AR4 are provided using this baseline.
- The choice of 5 years is arbitrary. I decided to look at this because it usually catches some El Nino and some La Nina, and since it’s roughly half the solar cycle, it tends to average that out.
- Currently, GISSTemp is well inside the ±95% distribution of all runs in all models of A1B and for A2. A2 has a slightly slower rate of increase in temperature and also has a wider distribution of temperature anomalies.
- This sort of comparison is very sensitive to choice of baseline.
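The running-mean and baselining steps in the bullets above can be sketched in a few lines. This is a minimal illustration with synthetic data, not the code actually used for the graph; the series and window alignment are my own assumptions:

```python
import numpy as np

def running_mean_60mo(monthly, window=60):
    """Mean over each full 60-month window: smooth[j] averages
    months j..j+59, so only complete windows are kept."""
    kernel = np.ones(window) / window
    return np.convolve(monthly, kernel, mode="valid")

def rebaseline(series, dates, start, end):
    """Shift a series so its mean over [start, end) is zero
    (start inclusive, end exclusive)."""
    mask = (dates >= start) & (dates < end)
    return series - series[mask].mean()

# Toy monthly anomalies, 1950-2009 (synthetic, for illustration only)
dates = 1950 + np.arange(720) / 12.0
raw = 0.01 * (dates - 1950) + 0.1 * np.sin(2 * np.pi * dates)
anoms = rebaseline(raw, dates, 1980, 2000)  # Jan 1980 - Dec 1999 baseline
smooth = running_mean_60mo(anoms)           # 661 values, one per full window
```

The same rebaselining is applied to each model's anomalies before averaging over runs, which is why the comparison is so sensitive to the baseline choice.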
I plan to add Hadley to this graph. We can examine it from time to time. Five year averaging smooths out bumps, so it won’t change quickly. But it will be interesting to see whether the 5 year average manages to pierce the projected mean when we reach the top of El Nino.
So, what do you think you see in this graph? 🙂
Update I realize I did something silly. I’d run the models baselining into the future and rebaselined in a non-model specific way. I’m going to be adjusting the figures. I should have realized the smallest uncertainty bands should have a waist during the baseline period. D’oh.

Ooooh oooh! I think I see the red line touching the lower lavender line, Mrs. L! 😉
Andrew
Thanks Lucia,
I’m glad you used a running mean. I see the models performing OK; that’s kinda been my opinion since first downloading modelE results (see a CA post SteveMc did on this a while back).
Here is what I would like to see.
1. SELECT YOUR MODELS by their skill. Don’t just lump all models into the average:
A. You are mixing the skilled with the unskilled.
B. You get a false sense of security with increased “n”.
For example, you have 5 runs from a skilled model and 1 run from an unskilled model. Can you average these and call the N 6? I don’t think so. If somebody does think so, then we can multiply the number of models enormously, make them really simple, and get a huge N.
2. Show the projections out to 2100 just for kicks.
Anyways. You owe me an email on your background or I’m gunna write that you are a housewife with no degree who bakes cookies, knits, writes haiku, and ghost-writes for her really bright husband. Sue me!
Lucia,
Looks like the models have “knowledge” of volcanic aerosols in the pre-1999 period. There are two ways to make the modeled cooling from volcanic aerosols match the GISS temperature response: 1) higher climate sensitivity combined with slow (long lag) ocean cooling, or 2) lower climate sensitivity combined with faster ocean response. Since the ocean heat content data are sparse before Argo, it seems either could be possible.
But as you suggest, setting the baseline to an earlier period (1960-1980?) seems a more reasonable approach, and would show the models are substantially overestimating the warming by about 30%.
Lucia,
I’m somewhat surprised that the CIs of the A2 and A1B scenarios are so different for the last few years, given that their emission scenarios at present are relatively similar. Specifically, there is a greater range of variability in the A2 scenario despite the fact that emissions are higher in A2 vis-a-vis A1B! Mayhaps it’s something of a statistical artifact?
Any thoughts?
SteveF,
One of the inputs to the models prior to present is known forcings prior to 2000, e.g. http://www.theoildrum.com/uploads/12/hansen_fig5.png . So I wouldn’t be surprised that they accurately reflect volcanoes. Future forcings (including stochastic events like volcanoes) are based on Monte Carlo simulations, SRES scenarios, etc.
IMHO it doesn’t go back far enough. If you go back to 1920/30 (the last temperature peak) I think you’ll see more divergence. Notice that the only time it’s coincident “by eyeball” is around 1980 to about 2000. I think that the model alignment is just localized curve fitting. Notice too that the GISS is running “hotter” during the time previous to 1980.
Andrew_KY– The red is close to the lavender– but just inside.
SteveF–
Some do; some don’t.
Zeke–
Me too. There are fewer A2 models and runs. That’s a factor. But this alone doesn’t explain the size of the difference. It could just be that the A2 models tended to be noisier. There is some overlap, but some modeling groups ran A2 only, some A1B only, and some ran both.
BarryW–
I can make graphs all the way back. But I prefer not to go back to the 1880s because I don’t trust those temperatures anyway.
I think GISS has a strong positive bias since the 2000 ish timeframe. From the model runs you specify, what altitude is the data taken from?
NOTE: All comments prior to this were based on goofed up graph! D’oh.
Jeff–
This is surface. But the bias isn’t strong– it only looked that way in the first graph. GISS is consistently below the multi-model mean for both scenarios– but the models have a big range of anomalies.
Anomalies should take longer to show differences, and 5 year anomalies should be slow. But it’s interesting to compare to the mean.
When you look at the smoothed GISS you see peaks at about 1880, 1940, and 2000. That’s why I suggested the time frame I did. If there is a 60 yr cycle (and others have noticed it too), then you’re just matching the positive part of the cycle to a constant trend. Go back another ten years and the divergence will be larger is what I’m guessing.
Barry–
We can’t track projections very well by looking at the hindcast.
DOH!
The width of the waist interests me the most.
For all the models during the “waist” period, I’d like to see more detail.
If the models don’t hindcast, how can you trust their forecast? What if it falls outside of the distribution in the hindcast? Doesn’t that invalidate the models just as it would if the present value fell outside?
BarryW:
Models do hindcast, generally based on known past forcings. If you stick in future projected forcings (e.g. GHG concentrations), they forecast. The forcings are usually exogenous, and the models handle the feedbacks.
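The hindcast/forecast distinction described here can be illustrated with a toy zero-dimensional energy balance model: the same model equations run throughout, and only the forcing input switches from "known" history to a projected scenario. This is a sketch only; the feedback parameter, heat capacity, and forcing ramps are illustrative numbers, not values from any AR4 model:

```python
import numpy as np

def run_ebm(forcing, lam=1.25, C=8.0):
    """Toy zero-dimensional energy balance model,
        C * dT/dt = F(t) - lam * T,
    integrated with forward Euler at 1-year steps. lam is a feedback
    parameter (W/m^2/K) and C a heat capacity (W yr m^-2 K^-1);
    both are illustrative, not tuned to any real model."""
    T = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        T[i] = T[i - 1] + (forcing[i - 1] - lam * T[i - 1]) / C
    return T

years = np.arange(1950, 2101)
# Hindcast portion: "known" forcings up to 2000.
# Forecast portion: a scenario ramp after 2000.
# Same model either way; only the exogenous forcing input changes.
forcing = np.where(years <= 2000,
                   0.03 * (years - 1950),
                   1.5 + 0.04 * (years - 2000))
T = run_ebm(forcing)  # warms throughout, lagging the rising forcing
```

The feedbacks (here just the lam*T term) are handled inside the model, while the forcing series is supplied from outside, which is the division of labor Zeke describes.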
Barry–
The observations don’t fall outside the distribution of the hindcast. If we baseline to 1980-2000, the distribution back in 1800 is huge.
Even at that, the models track the mean pretty well. This isn’t too surprising because different modeling groups apply different solar, GHG, aerosol etc. forcings. Even though some might like to believe that modelers somehow make their choices utterly, totally, completely based on evaluations of each factor without regard to their specific model sensitivity, the fact is they might have to be super-human to do so.
I see that the 5 year running means for GISS temp prior to 1975 (or 1976) are all higher than those for A1B and A2. (I really don’t know what A1B and A2 scenarios are.) Perhaps the GISST data have inhomogeneities that were caused by some changes.
Lucia,
Did you mean to say that it is surprising? I’ve been going over the different forcing histories used by the AR4 models and it’s a bit annoying that the IPCC didn’t require all modelling groups to use the same exact forcings. Some set the solar irradiance as a constant for 20c3m-a1b. Others use reconstructions that are far more physically realistic. Virtually all models (a1b) regard the solar constant as, well, constant, with absolutely no variability whatsoever. GISS EH/ER maintains the irradiance at year 2000 levels but keeps the solar cycle. I’m currently looking into how this may influence hypothesis testing.
Chad–
I think it’s not surprising that each modeling group manages to hindcast more-or-less ok and that at a minimum, the observations fall inside the spread. After all, they use all sorts of different forcings for the past.
On testing future variability– their setting solar variability to zero should make the “model weather” in the 21st century less variable. But how much? I don’t know. I did a check to see how well the ‘red noise’ approximation reproduces variability across models with more than 1 run, using the new data you got me and the a1b data I already have. When applied to 8, 9 or 10 year trends it works well for a “typical” model. But for some models it under-estimates and for others it over-estimates. (I can’t do the test for more years because there aren’t enough samples to give the statistical test I like any power. The red noise approximation does under-estimate uncertainty intervals for 5 year trends– but it should even if the noise is red.)
None of this tells me anything about what happens as a result of the left out solar.
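For readers curious what a “red noise” approximation looks like in practice, a common version corrects the OLS trend’s standard error for lag-1 autocorrelation via an effective sample size. This is a generic sketch on synthetic data, not necessarily the exact procedure used in the post:

```python
import numpy as np

def trend_with_ar1_se(y):
    """OLS trend with a 'red noise' (AR(1)) correction: inflate the
    naive standard error using the effective sample size
        n_eff = n * (1 - r1) / (1 + r1),
    where r1 is the lag-1 autocorrelation of the residuals."""
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = n * (1 - r1) / (1 + r1)
    s2 = np.sum(resid ** 2) / (n_eff - 2)      # noise variance, adjusted dof
    se = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))
    return slope, se

# Synthetic series: 0.02/step trend plus AR(1) noise with rho = 0.6
rng = np.random.default_rng(0)
noise = np.zeros(120)
for i in range(1, 120):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(scale=0.1)
y = 0.02 * np.arange(120) + noise
slope, se = trend_with_ar1_se(y)  # slope near 0.02; se wider than naive OLS
```

As noted in the comment, this kind of correction tends to work for a “typical” model but can under- or over-estimate the spread for individual ones.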
Lucia,
Does this graph show that the five-year running means all fall well within the 95 per cent confidence intervals, or am I looking at the wrong things?
David–
Yep. I actually do sometimes show things that make the models not look so bad. Averaging over 5 years means we have only 2 averaging periods after the projections begin. But it can still be interesting to see.
For the models, the spread of the 5 year mean is mostly “model spread”– that is, it is dominated by differences in parameterizations rather than “weather noise”.
I suspect the projections will probably stay inside this spread, staying on the lower end. I’m waiting to see if this El Nino can kick the earth observations above the mean– I’m expecting that distance to spread over time.
And of course means of values are different than trends among those values. However, my suspicion is that what the trends will show by 2015 will result in you no longer be a luke warmer. 😉
David–
Why do you think that?
lucia,
Because if my position is the correct one – and I have confidence that it will – then the data under your tests will show it relatively clearly by then. Obviously, the lukewarmer position will not yet be inconsistent with the data by that point, but it will be somewhat towards the edge (15 years of data will, of course, reduce the size of the error bars in the trend). I think that this data will be sufficient to bring you, at least tentatively, into the warmist camp.
(By ‘warmist camp’, I am talking about those who think that a sensitivity of three degrees per doubling or greater is of high likelihood)
(My prediction excludes significant volcanic eruptions, however).
David Gould,
“…and I have confidence that it will…”
What are your confidence intervals??
Lucia,
Oh! I misinterpreted what you said. Yes, the more forcing histories that are used the more spread, thus the better hindcast. Understood.
kuhnkat,
+/- i 😉
David,
I hope you took serial correlation into account 🙂
HadAT only shows about 0.45C of warming since 1958.
http://hadobs.metoffice.com/hadat/update_images.html
Some interesting data shown here (and some math errors as well, especially the Santer/Douglas-equivalent tropospheric warming rates per decade charts).
Just where exactly does the temperature reconstruction come from and can it be replicated by independent sources not associated with GISS? What is the point going through this exercise if the data is in question? Or am I missing something?
Chiefio suggested that the GISS station deletions in the last 10 years “coincidentally” had the effect of increasing the warming trend. It is a lot easier to match data to model projections when the modellers are also the unaccountable custodians of the data.
Wondering by chance if you have seen this.
From the abstract:
Forgive me if it is “old news”.
I don’t see anything spectacular in the graph – but then I’m not an expert.
But there are a few things about the GISS data that seem odd.
GISS ascribes temperatures to gridcells which don’t contain a single temperature station. Hansen says this doesn’t matter. And Hansen is an honourable man.
How does “gridcell averaging” affect the results? Hansen says it doesn’t. And Hansen is an honourable man.
There is a huge decrease in stations in the last 20 years, coincident with the increase in trends. Hansen says it doesn’t matter. And Hansen is an honourable man.
Temperature stations seemed to have marched southwards in step with the warming trends. Hansen says it doesn’t matter, this is just a coincidence. And Hansen is an honourable man.
David Gould:
I hate prognosticating because we humans aren’t nearly as good at it as we’d like to believe (otherwise we wouldn’t need computer models), but I’d give a 50/50 chance it will be cooler in 2015 than now.
I think that often people don’t understand the gradual, long term nature of the human-generated portion of global warming, and how significant natural forcings are in comparison to that…at least until you get to 30+year intervals. We are likely looking at a 1.5°C temperature increase between now and 2100. That works out (assuming linear trend) to 0.075°C in 5 years. That’s in the noise floor compared to natural forcings that aren’t included in the computer models.
One of the things that always bothers me about these types of analyses is that I think they over-estimate the “wiggle-room” available in a given model.
Normally, what we do is perform a least-squares fit, optimizing the parameters of a given model to fit the experimental data (which wouldn’t just be surface temperature data, btw; it would include the middle troposphere, upper troposphere and lower stratosphere as well). We’d then use chi-square statistics on the residuals of the fit (using the appropriate formulation for when you have correlation in your noise and hence residuals) to obtain a “p” value to determine “goodness of fit”.
I suspect if you go through all of this, you’ll end up with none of the current models coming close to describing all of the available experimental constraints.
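The goodness-of-fit test Carrick describes might be sketched as follows. The AR(1)-style covariance used here to model the correlated noise is an illustrative assumption, and the degrees-of-freedom bookkeeping is simplified:

```python
import numpy as np

def chi2_stat(resid, cov):
    """chi^2 = r^T C^{-1} r for residuals r with noise covariance C.
    In a real analysis you'd compare this against a chi-square
    distribution with (n - n_fitted_params) degrees of freedom to
    obtain a p value for goodness of fit."""
    return float(resid @ np.linalg.solve(cov, resid))

# Illustrative correlated noise: AR(1)-style covariance, rho = 0.5
n, rho, sigma = 50, 0.5, 0.1
idx = np.arange(n)
cov = sigma ** 2 * rho ** np.abs(idx[:, None] - idx[None, :])

rng = np.random.default_rng(1)
resid = rng.multivariate_normal(np.zeros(n), cov)
chi2 = chi2_stat(resid, cov)  # near n when the model is adequate
```

If a model's residuals against all the constraints (surface, tropospheric, stratospheric) gave chi^2 far above the degrees of freedom, that would be the quantitative version of "not coming close to describing the data."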
David Gould, if you believe the AGW hypothesis you are extraordinarily naive and gullible.
Here is how I look at it from the Physics angle:
Incoming Radiation (I) – Outgoing Radiation (O) = Heating or Cooling or Balance depending on whether I – O is +ve, -ve or 0.
The AGW Theory
I-O = Anthropogenic “forcing” (what is causing us to warm) = 1.6 W/m2 (the rest is in balance)
Clouds (poorly understood) = cooling of around 30 W / m2
(See the difference? 30 is much bigger than 1.6. Get that “poorly understood” bit wrong and it will dwarf the anthropogenic 1.6.)
(Other things which supposedly cancel out may not cancel out; get them slightly wrong and they too dwarf that 1.6.)
So AGW does a whole lot of plusses and minuses and comes up with a figure of 1.6 for the last 50 years, which happens to be the anthropogenic portion. The rest of it (much of it, such as clouds and solar, poorly understood) is in PERFECT BALANCE. Not only that, it will REMAIN IN PERFECT BALANCE FOR THE FORESEEABLE FUTURE, while the only unbalancing portion will be the anthropogenic 1.6 W/m2 and we will warm inexorably.
How could any reasonable person believe that?
Richard,
When you start by saying that I must be extraordinarily naive and gullible and end by saying that I must not be reasonable, I cannot really see the point in having a conversation with you. And why on earth would *you* want to have a conversation with someone extraordinarily naive, gullible and unreasonable? It sounds the very essence of futility and frustration. So, I will not bother replying to your posts any longer, as I do not want to frustrate you by making you deal with such extraordinary naivety, gullibility and unreasonableness. 🙂
Could you please do a plot of the residuals of GISST minus the two models? I suspect we will see a divergence of 0.25 degrees per century.
It is also nice to run two smoothings, one with a prime window (like 5 or 11) and the second with a composite one (12 being the best). A difference in the pattern of the two sets of residuals can be very informative.
Carrick (Comment#29060),
Yes, for certain. The models appear to be “optimized” to match the surface temperature history, in spite of protestations by modelers that they are based on “fundamental principles”. The substantial difference between models and measurements of tropospheric warming shows this to be the case. Even the best estimate of tropospheric warming according to Santer 08 shows a substantial discrepancy, although uncertainty in the models and data makes it impossible to say with 95% confidence that the models are wrong for 1978 to 1999. After 1999 the discrepancy becomes worse.
When the models get it right, there will be no such discrepancies, and I believe the estimated sensitivity will then drop to near the lower limit of the IPCC range (~1.5 per doubling or a bit less).
SteveF, the models are certainly missing a lot of important physics, including clouds, the atmospheric boundary layer (that’s important because that’s where most climate data comes from), realistic precipitation models and the biosphere, for starters.
Plenty to keep people busy for a long time to come.
More fun with numbers! 0.4 +/- 0.3. Wow!
Lucia,
How well do the models’ CO2 emission assumptions coincide with reality?
Carrick (Comment#29077),
I think one of the keys to finding a path out of the confusion is an accurate representation of ocean heat accumulation, since all the high sensitivity estimates depend on quite extreme ocean lag periods. Since there is only accurate ocean heat data since Argo went into operation (2003) it is not yet possible to generate an accurate model of ocean heat uptake versus time. When 10-15 years of Argo data are available it should be possible to pretty well constrain ocean lag, and in turn constrain overall sensitivity. With sensitivity constrained, other pieces should start to fall into place.
Greg F,
I was just looking at some data that answers your question. The A1B emissions are lower than the actual realized emissions. The time series for A1B emissions begins in 1990 and since then there’s been a divergence of about 0.14 ppm/yr. We’re about 2.6 ppm above the A1B scenario.
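As a quick consistency check on those two numbers (plain arithmetic; the 1990 start date is taken from the comment itself):

```python
# 0.14 ppm/yr of divergence accumulated from 1990 (when the A1B
# series begins) through 2009.
rate = 0.14                    # ppm/yr, per the comment
years_elapsed = 2009 - 1990
divergence = rate * years_elapsed
# divergence is about 2.66 ppm, consistent with "about 2.6 ppm above A1B"
```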
Chad,
I think it should be half your number, given that the airborne fraction is about 50% of the emissions.
Ahab,
I shouldn’t have used the word ’emissions’. Concentration would be more appropriate.
Stupid question. Why are the error bars for the models wider in 1950 than now? Surely if the models were any use they would have greater confidence going backwards, corresponding with the instrumental record, not less?
Chad (Comment#29081)
I’ve been using this table from GISS on the A1B scenario.
http://data.giss.nasa.gov/modelforce/ghgases/GHGs.IPCC.A1B.txt
Do you have something different?
Bill Illis,
The CO2 abundances are calculated using carbon cycle models. The numbers you have from GISS use the ISAM model’s output. I was referring to numbers from the Bern-CC model. I should have been clear about which model output I was using.
Bill,
Forgot to post the link: http://www.ipcc-data.org/ddc_co2.html
Lucia, there are some things that I don’t get about all of this. First, as far as I know, the IPCC scenarios all start in 1990, and have identical emissions in 1990 and 2000. Yet you say you are showing them back to 1950 … why is that?
Next, A1B and A2 are not “scenarios” as you say. They are families of scenarios. See this link for details, page 4. Since the “scenarios” you list are families and not scenarios, which scenarios were actually used?
Next, all of the scenarios have identical emissions up until the year 2000. After that, they start to diverge slightly. So what exactly are you claiming is shown by using scenarios whose differences are virtually invisible on the scale that you are using? The difference in atmospheric CO2 by 2009 between the highest and lowest of all A1B and A2 scenarios is 0.6% … so why would you plot two “different” scenarios that are identical on the scale you are using?
Next, I’d be very interested in seeing the same look only using the satellite data rather than GISS. All of the records except GISS (UAH, RSS, HadCRUT) show no warming since 2000 … why have you chosen GISS, the only record that does show warming (thanks to the kind ministrations of James “civil disobedience” Hansen)?
Next, you say that the “5 year running average remains inside the ±95% of the distribution of the A1B and A2 runs” as if that actually meant something. It is absolutely meaningless. For example, suppose I have a whiz-bang model that gives three predictions each year – a linear projection of the most recent trend, that same projection plus one degree, and that projection less one degree. Does the fact that the GISS “data” falls within the ±95% bounds of my model results mean anything at all about my model?
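Willis’s whiz-bang example is easy to make concrete: with bounds a full degree wide on either side, any plausible observation series stays “inside the distribution” (toy numbers throughout, not real GISS data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
obs = 0.015 * np.arange(n) + rng.normal(scale=0.1, size=n)  # fake "obs" series

# "Whiz-bang model": a crude trend line, plus and minus one full degree.
central = 0.015 * np.arange(n)
lower, upper = central - 1.0, central + 1.0

inside = bool(np.all((obs > lower) & (obs < upper)))
# inside is True: noise of ~0.1 deg never escapes a +/-1 deg band, so
# "the data falls within the bounds" by itself says nothing about skill.
```

The point being that containment within wide bounds is a weak test; what matters is how wide the bounds are relative to the variability being tested.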
Finally, all of the models are tuned to reproduce the 1950 – 2000 period. Why would anyone have the slightest interest in their results for that period? It is totally meaningless, and it seems totally deceptive for you to even discuss it.
Showing the bogus “hindcast” gives a totally false picture to those who are unaware of the tuning, making it look like the models can actually hindcast, when they are simply tuned to slavishly reproduce a past trend. I object most strenuously to this misrepresentation. I can tune a model to slavishly follow the stock market from 1950 – 2000, but I’d be thrown in jail if I claimed that meant I could forecast the market.
All in all, I would rate this as a most deceptive posting. Unintentionally deceptive, I would wager … but very deceptive nonetheless.
Willis:
I disagree. In spite of the tuning, in fact, the models fail to faithfully reproduce mid-20th century warming.
Either way (whether they can reproduce this warming or not), this comparison tells you something important about the models.
In this case, there is physics producing natural fluctuations (driven or internal remains to be determined) that is not currently being modeled.
Like you, I’m pretty skeptical about the range of uncertainties given by the models. I don’t think simply averaging the models and computing their standard deviation is going to adequately describe the true modeling uncertainty at a given time.
Carrick,
The way to really get a handle on the uncertainty created by the tuned sub-grid parameterizations is to run the model many times with perturbed physics to see how it affects the various fields. Too bad the IPCC didn’t require the modeling groups to produce runs for this.
Willis:
Harsh. It almost sounds as if you are accusing lucia of endorsing every suspect element of the IPCC output merely by posting an unremarkable graph fairly depicting the (probably slightly fudged) GISS temps against some of those weirdly aggregated model collections.
It is interesting how powerful (and utterly irrelevant) graphing recent warming is. In the short term we have no clue how clouds function nor can we do an even remotely serious global energy balance. Over the longer term we may face temperature shifts of huge magnitude back to an ice age if the past 400,000 years is any guide–and we don’t have a predictive handle on that either. Our real world uncertainty is probably several degrees. Yet we have those very sciencey-looking error bars to suggest a high level of precision in the range of tenths of a degree. What a joke.
George:
He stated that it was an unintentional deception and I would have to agree. When the model predictions begin in 1990 it is foolish to start the comparison in 1950, because tuning makes the models look a lot better than they really are. I also believe that using the temperature profiles provided by GISS is improper, because they diverge from the satellite records and cannot be independently reproduced from the unadulterated surface records. While I am not opposed to ‘adjusting’ the raw data, I oppose accepting the adjustments when the metadata is not provided and the adjustments are not justified.
Willis–
They are listed as A1B and A2 in the AR4. All modeling groups used their own versions of “historic forcings” during the 20th century– that is, they all used different forcings. That’s what the IPCC did.
I agree it means almost nothing.
Chad:
Yes, I agree. This is essentially the Monte Carlo method: you randomly vary the forcings over time to encompass your uncertainty in those forcings, according to some physically reasonable assumptions, and with e.g. 10 or so runs you would have a pretty decent estimate of the uncertainty of a given model as well as its mean value.
This model+error is then compared against data+error to obtain a goodness of fit for each model.
That is a well-defined thing to do and it is indeed unfortunate that the IPCC didn’t require it to be done. Lumping models on the other hand, is just a poor man’s substitute for “doing it right”.
On the other hand, I don’t think the “CLs” from the lumped approach have any meaning. They are mostly just a gimmick for the modelers to inflate the uncertainty in their given scenarios to make it look like they’ve nailed the physics, when they really haven’t.
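A perturbed-forcing Monte Carlo of the kind described here might look like this in miniature, with a toy energy balance model standing in for a GCM and an assumed, purely illustrative, 15% forcing uncertainty:

```python
import numpy as np

def toy_model(forcing, lam=1.25, C=8.0):
    """Toy zero-D energy balance model standing in for a GCM run;
    forward Euler at 1-year steps, illustrative parameters only."""
    T = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        T[i] = T[i - 1] + (forcing[i - 1] - lam * T[i - 1]) / C
    return T

rng = np.random.default_rng(3)
years = np.arange(1950, 2101)
base_forcing = 0.03 * (years - 1950)

# Monte Carlo over forcing uncertainty: each run scales and jitters the
# forcing history within assumed error bounds (15% scale, 0.1 W/m^2 noise).
runs = np.array([
    toy_model(rng.normal(1.0, 0.15) * base_forcing
              + rng.normal(0.0, 0.1, size=len(years)))
    for _ in range(10)
])

ens_mean = runs.mean(axis=0)
ens_spread = runs.std(axis=0)  # per-model uncertainty from forcing error alone
```

Each model would then carry its own mean and spread, to be compared against data plus data error, rather than having the spread manufactured by lumping structurally different models together.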
Vangel:
Again, if you ignore the CL curves, the period between 1950 and 1990 has the largest deviations with respect to the data in that very period. So yeah, it tells us something important about the models, mainly that they don’t do that great a job of explaining the observed temperature fluctuations in that period.
Figure here (Data—Model).
Simply because you tune a model to in some sense agree with data, doesn’t mean that the model contains enough degrees of freedom to actually properly describe the observed variations in the data.
Lucia, in the head post you ask: “So, what do you think you see in this graph?”
Personally — and please feel free to correct me if my understanding is wrong — what I see is the results of combining two groups of climate models: The group that is wrong about the surface observations and the group that is wrong about the tropical troposphere observations.
As far as I know, all of the models belong to one of those two groups — no model is correct about both the surface observations and the tropical troposphere observations.
Now, by happenstance, it turns out that when these two groups of models — both wrong about part of the observations — are averaged, they produce a hindcast close to the surface record — and thus this average can be touted as a measure of their validity.
Whereas, when one wants to examine these models against the tropical troposphere observations, one simply dismisses the average (à la Santer et al) and insists that the full range of model results must be included – which can then be shown to encompass the tropospheric observations.
So you’ve got the best of both worlds — an average that can be touted in one context and a sufficiently broad range of results that can be employed in another.
Mike–
I see that, in general, the models tend to overpredict warming. The anomaly method makes the fit look pretty good during the baseline period (where the average is forced to match mathematically), and there is a huge spread in trends over all models.
The average model has tended to overpredict the rate of warming– even in the hindcast. If this persists, we should see them overpredicting in the forecast.
Lucia,
Yup. When I looked at ModelE in isolation it tended to
1. Miss cooling periods (early 20th century, 40s-50s).
2. Overestimate warming periods.
So I’m a lukewarmer until 1.5C/century is ruled out.
Merry Christmas
Willis Eschenbach,
You snarkily refer to the relatively high temperature in GISS as a result of “the kind ministrations of James Hansen”. Would you care to back this up with an actual reference to where in the GISS code such willfully fraudulent ministrations are found? If poring through arcane Fortran code is not your forte, you can try it in newly minted Python as well:
http://data.giss.nasa.gov/gistemp/sources/GISTEMP_sources.tar.gz
http://code.google.com/p/ccc-gistemp/
Unless you can back up your assertions with evidence, obliquely accusing scientists of “cooking the books” on temperature records is morally reprehensible at best.
Carrick #29106 seems to look at this about as I do. And no doubt others have covered the same point more technically.
What do I see?
I see that the baseline should be at 1950. Or even 2009. Forcing the models to be correct at 1980 makes them look very good for the next 30 years up until 2010.
If 1950 is the base it is easy to see that the models always over predict the warming.
Since the models didn’t exist in 1950 and probably not by 1980 either I would prefer that they either totally hindcast from today or totally forecast from 1950.
It is very nice to have decades of actual data before you build models which show the least error.
But we don’t build the models.
Hausfather intones: “Unless you can back up your assertions with evidence, obliquely accusing scientists of ‘cooking the books’ on temperature records is morally reprehensible at best.”
That’s rich coming from someone willing to engage in straw man arguments to support a “move along, nothing to see here” whitewashing of the morally reprehensible actions of the CRU crew.
Hansen’s a “true believer” who thinks the end justifies the means — and if the means include the physical destruction of other people’s property, he’s prepared to assist the destroyers in getting away with their crime. Having demonstrated a willingness to do that, neither he nor you are in any position to complain when we suspect that he’s quite willing to put his thumb on the scale to keep the story going.
In my book, Hansen’s actions and statements to date fully justify skepticism over his honesty – just as your straw-man-littered puff piece at the Yale forum justifies my skepticism over your objectivity.
Zeke Hausfather (Comment#29114) December 24th, 2009 at 11:18 am
“Willis Eschenbach,
You snarkily refer to the relatively high temperature in GISS as a result of “the kind ministrations of James Hansen”. Would you care to back this up with an actual reference to where in the GISS code such willfully fraudulent ministrations are found? ”
How about starting here at E.M. Smith’s audit?
http://chiefio.wordpress.com/gistemp/
“obliquely accusing scientists of ‘cooking the books’”
It’s evident that this particular kind of accusation strikes pretty close to the target or it wouldn’t cause the hurt feelings, as it apparently does.
Andrew
Climate change is not about climate change.
It’s about global governance – empowering the UN to greater levels and enriching its administrators. The point is money, always money, and the core purpose of COP15 will be achieved when the wealth transfer begins.
The Obama administration needed to stand up for our country, but instead he promised them our money. “Global warming” is a political movement, not a scientific discipline.
Unless voters understand that the most important issue before them in 2010 and 2012 is to retain their freedom and vote out every politician who wishes to seize it through the global warmism agenda, those freedoms will be gone forever.
Please go to PBS here and post a comment regarding Jim Lehrer’s interview with Obama
http://www.pbs.org/newshour/rundown/2009/12/excerpt-obama-on-disappointment-in-copenhagen.html
If you go further back in time in your hindcast than 1950, the early-20th century warming period is even less well-explained by the models. In that sense cutting off the model results at 1950 is either “fortuitous” or a trick to “hide the model decline” depending on your perspective.
Andrew_KY: Drat, you are on to me! Seeing as I’m an honorary cabal member and thus privy to all the secret data manipulation and all that. And we’d have gotten away with it too, if it weren’t for you pesky kids and your climate wiener dog!
But on a more serious note, you really can’t see that false allegations of malicious data manipulation would cause hurt feelings? I certainly would be rather put off if anyone accused me of fabricating results in my own publications to bolster my position, especially if it was nigh-impossible to convince them otherwise even after releasing all my code and work!
Anyhow, happy holidays everyone. Time to catch a flight to the Bay Area to spend some time with the family. Lets use the spirit of the season to reflect on the fact that even though we seem to often reach diametrically and intractably opposed conclusions given the same information, we still can have civil and constructive conversations here, which is more than I can say about most places on the internet where such charged issues are discussed.
Zeke,
It’s not “secret” data manipulation. It’s “obvious” data manipulation.
I do agree with your last paragraph, though.
Hope you and your family have a Peaceful and Happy Christmas. And safe travels to you, sir!
Andrew
Zeke Hausfather (Comment#29011),
I took a look at your linked graphic of Dr. Hansen’s estimated forcings. I had seen this graph multiple times before, and my opinion of it has not changed: it is nothing but pure tripe to support high climate sensitivity. I take specific issue with two of the “off-setting” forcings which are used to explain why a very sensitive climate system has not warmed much more than observed:
1) The “reflective tropospheric aerosols” are shown to gradually increase from 1880 to 2000, offsetting almost all of the radiative forcing until 1960, and a large fraction through the end of the graph (looks like 2003). At no time does this effect decline. Since there were no aerosol measurements before the 1960s, most of the reflective tropospheric aerosol line is simply an arm-wave, not based on data. The increase in aerosol reflection shown since the early 1980s is in direct conflict with multiple independent measurements of increasing solar brightness at the surface (the widely reported global brightening). While the effects of volcanic aerosols from 1980 to ~1993 confuse the issue, there is overwhelming evidence of substantial increases in clear-sky solar intensity at the surface since 1993, with an estimated increase of ~2%-3% or more. Dr. Hansen’s post-1993 tropospheric aerosol offset is almost certainly wrong; it is nothing but a fig leaf.
2) At the same time that man-made reflective aerosols are supposed to be increasing, there is a fall in the estimated warming effect of black carbon on snow. How is this possible? Combustion-formed aerosols rise, while combustion-formed black carbon falls? That is just silly.
Dr. Hansen’s net forcing curve is a perfect example of self-serving confirmation bias. My memory of what Thomas Kuhn wrote in “The Structure of Scientific Revolutions” (I must admit that I read it a long time ago!) is that major practitioners of science seldom, if ever, accept new data which fundamentally conflicts with important conclusions of their own past research; that is, they seldom accept new data which undermines (rather than builds upon) the conclusions of their earlier research.
It is not until the “old guard” is gone (dead or no longer scientifically active) that conclusions based on the conflicting new data are broadly accepted in the field. Albert Einstein never accepted the probabilistic description of the world from quantum mechanics, in spite of overwhelming evidence, because his earlier research, and his resulting view of the world, was deterministic, not probabilistic. I note that GISS models have NEVER budged from Dr. Hansen’s earliest estimates of climate sensitivity, despite many, many millions of dollars spent on climate research at NASA; this is IMO no coincidence. Watch what happens to the GISS sensitivity estimate when Dr. Hansen is no longer involved.
Carrick,
What I meant was indeed Monte Carlo but each realization uses ‘different’ physics. The tunable parameters take on random values each run to reveal how sensitive the run is overall to the uncertainty in the parameterization.
Okay, one last point before I run to the airport.
SteveF:
Black carbon decreasing while aerosol forcings increase is actually quite an interesting problem. If you read Ramachandran’s work, he discusses how the location of aerosol emissions is quite important for BC forcing, given that the mechanism is a function of the surface upon which they are landing and the atmospheric lifetime of BC particles is quite short. BC on snow/ice has a large forcing, BC on anything else not so much. Now, with the Clean Air Act and comparable legislation in Europe, the major source of aerosol (and BC) emissions has shifted from US/Northern Europe to India/China. Given that the BC forcings are location-dependent but aerosol forcings are mostly not, this rather neatly explains the apparent contradiction.
I’m by no means that well versed in aerosols, so that is just my off-the-cuff explanation, but it seems reasonably plausible.
SteveF (Comment#29125)
December 24th, 2009 at 1:21 pm
The aerosol forcing is even worse than you thought. This is what it looks like on a scale that is easier to read: straight lines, with the W/m^2 forcing translating into a -0.6C temperature impact (direct plus indirect) as of 2003.
http://img58.imageshack.us/img58/855/modelaerosolsforcingp.png
Zeke Hausfather (Comment#29127) ,
I think this is not even close to a reasonable explanation.
The Hansen curve says that the effect of black carbon gradually declined starting in 1880, in spite of much more than an order of magnitude increase in industrial activity in the northern hemisphere during that period (and enormous increases in black carbon emissions!). The Clean Air Act was almost a century later, and industrialization in China took off in the late 1990s.
And I repeat: clear sky solar intensity has increased substantially since at least 1993. There is no justification for using increased tropospheric aerosols to cancel substantially increased radiative forcing when there is clear evidence that the exact OPPOSITE actually took place. Come on Zeke, it’s a fig leaf!
Have a safe trip, and happy holidays to you and yours.
Bill Illis (Comment#29128),
Thanks for the graphic.
Happy holidays to you and yours as well.
And the same goes for you warmers out there.
Carrick (Comment#29100) December 24th, 2009 at 2:14 am
You are correct that they fail to reproduce the “mid-century warming”. However, Lucia didn’t show that. She showed the part that they reproduce pretty well …
lucia (Comment#29104) December 24th, 2009 at 8:41 am
Thanks, Lucia. As I understand it, the AR4 used the same scenarios as the TAR, which are the ones that I described wherein the A1B and A2 are families. It appears that you may be describing the “message” scenarios for the two families. Is that the case?
My concern is that a lot of people without detailed knowledge of the field will read your post and go “Wow, those models are really doing well!”
Zeke Hausfather (Comment#29114) December 24th, 2009 at 11:18 am
Zeke, you are right, I suppose I should act like a real climate scientist like Ben Santer, who said (CRU email 1255095172):
But enough tu quoque, back to Hansen. Since GISS was first published, he has made a number of changes to the results. Every change that I know of has increased the amount of warming … coincidence? You be the judge.
He also ascribes very warm temperatures to gridcells which don’t have a single temperature station. Michael Mann says (CRU email 1224176459):
Coincidence? You be the judge.
For me, ascribing any temperature at all to a gridcell with no temperature station is a fraudulent ministration, much less ascribing a high temperature, but YMMV …
Finally, a man who publicly advocates committing crimes to advance his scientific viewpoint is already fraudulent in my book, and Hansen has done that.
I have been following this website for a week
http://justdata.wordpress.com/
I have no idea how valid this analysis is. Maybe some of the more knowledgeable here could comment.
If his 7th chart is correct (he does not number them) would that not invalidate the models as they are tuned to match the past temperature data?
“My concern is that a lot of people without detailed knowledge of the field will read your post and go ‘Wow, those models are really doing well!'”
That is the same problem that I see. Even though I am an engineer by training there are times at which I misread and misunderstand some of the things that are being stated. Lucia comes at this from a position of strong understanding so she does not see that her postings may be helping the warmers by making the models look better than they actually are.
It is still my contention that there is one point that needs to be made more often; the temperature reconstructions cannot be verified by independent sources and diverge greatly from reconstructions coming from the raw data. While some ‘adjustments’ may be justifiable, we cannot evaluate them until more information is provided by the gatekeepers. That means that we really have no idea about the actual temperature profile and can’t even show that we are warmer than the 1930s.
Vangel: Here is the raw temperature data. Let me know how the independent reconstruction turns out. http://dss.ucar.edu/datasets/ds570.0/ . Thankfully the U.S. was better about digitizing old raw temperature records than CRU was :-p
As far as information on temperature adjustments being provided, what additional data would you like? For GISS at least, the source code is readily available (in two programming languages), and the helpful folks at NASA tend to be pretty good at promptly responding to emails whenever people have questions. As has been shown a number of times now, the existing GISS code as well as descriptions of the methods used in the peer reviewed literature are fully sufficient for people to undertake independent reconstructions of the methods used.
Chad:
I’m not sure how this approach would work out.
If you add clouds, you do the best you can to characterize them, use a Monte Carlo approach to quantify the goodness of fit of the model to the data, including the uncertainty in the model forcings. You compare that to without clouds and that tells you how much your model has improved.
In my opinion, it simply isn’t sensible to compare the with-clouds model to the without-clouds model. There’s no debate that clouds need to be present, so treating that like it’s an uncertainty is, I think, wrong.
Carrick,
That’s what I mean. Test the sensitivity of the parameterizations with random realizations. I didn’t mean running a model with and without a parameterized cloud model to gauge the uncertainty.
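The perturbed-parameter Monte Carlo Chad describes can be sketched in a few lines. This is a toy stand-in, not any actual GCM; the "model", the parameter names, and their ranges are all invented for illustration, but the structure (draw each tunable parameter from its assumed uncertainty range, rerun, look at the spread) is the technique being discussed:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_model(sensitivity, aerosol_scale, forcing):
    # Stand-in for a model run: a temperature response to forcing,
    # damped by an aerosol offset. Purely illustrative physics.
    return sensitivity * (forcing - aerosol_scale * 0.5)

forcing = np.linspace(0.0, 2.0, 50)  # hypothetical forcing ramp (W/m^2)

# Draw each tunable parameter from its assumed uncertainty range
n_runs = 1000
sens = rng.uniform(0.4, 1.2, n_runs)  # K per (W/m^2), assumed range
aero = rng.uniform(0.5, 1.5, n_runs)  # dimensionless scale, assumed range

runs = np.array([toy_model(s, a, forcing) for s, a in zip(sens, aero)])

# The spread across realizations shows how sensitive the output is
# to the parameter uncertainty, as a function of forcing.
spread = runs.std(axis=0)
print(f"spread at end of ramp: {spread[-1]:.2f} K")
```

The per-run outputs stay fixed-physics; only the parameter values change between realizations, which is the distinction Chad draws against comparing with-clouds to without-clouds runs.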
Zeke
Maybe I have missed comments elsewhere on this blog, but the GISS code is getting crucified on this blog – http://chiefio.wordpress.com/gistemp/ and perhaps GHCN (and thus NOAA, Hadcrut etc.) on this – http://justdata.wordpress.com/.
Knowing some history (the Arctic in the 1930s/40s being as warm as or warmer than today, the continental US warmer in the 30s/40s than today) and seeing how more and more seemingly knowledgeable people are finding problems with the temperature records close to their home areas, I find the chart at justdata mentioned above becoming more plausible than anything put out by the so-called “climate research units”. If any of this criticism is valid, and the models are tuned to the GISS, Hadcrut etc. temperature records, the models are indeed crap and the subject of this posting is just circular mumbo jumbo.
Even though some might like to believe that modelers somehow make their choices utterly, totally, completely based on evaluations of each factor without regard to their specific model sensitivity, the fact is they might have to be super-human to do so.
So what you are saying is that the models are not based on fundamentals, but curve fitting.
Which is another way of saying that the modelers do not have a clue. Or: they can assign any cause they like and jigger the rest to fit. Very scientific that.
It is great having Scientific People doing science.
http://en.wikipedia.org/wiki/The_Stars_My_Destination
How well do the models’ CO2 emission assumptions coincide with reality?
I would like to ask a more detailed question. How well are the natural CO2 emissions correlated with reality? How well are the man made CO2 emissions correlated with reality?
What is the proportion between the two? i.e. which is dominant?
But on a more serious note, you really can’t see that false allegations of malicious data manipulation would cause hurt feelings?
Sure they would. Now how about we start with the individual station records and audit the whole deal to find out if the accusations are true. After all the CRU data dump has provided probable cause.
According to NASA the adjustments to the data amount to .6 deg. See the chart here:
http://www.talk-polywell.org/bb/viewtopic.php?p=30486#30486
Now isn’t it interesting that the adjustment is about equal to the purported trend? Just another reason for a full audit.
Or as they say in forensic accounting: you can’t trust the books without an honest audit.
If in fact the books are cooked and the models are fitted to the cooked books then what you have is a load of …..
Which is to say the fancy error bars in that case are useless. And worse yet – so are the models.
That is the problem of fitting vs modeling from first principles. In fact the close agreement should send up red flags in a situation where there is so much uncertainty in the relations of the various factors.
Zeke
We already know what the raw data is telling us. Do you see any unusual warming?
http://cdiac.ornl.gov/epubs/ndp/ushcn/mean2.5X3.5_pg.gif
The warming does not come from the temperature readings but from the adjustments made to those readings. The warming comes from the computers, not the data.
http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
It is clear that NOAA/GISS are making adjustments that are not justified. Such adjustments can take a station trend that shows cooling and make it look as if it is warming.
http://blog.jim.com/images/santa_rosa-nm-data-comparison.png
From what I see, the whole AGW argument has been one big con that depended on controlling access to the temperature data and on hiding the adjustments. It is clear from the CRU e-mails that the gatekeepers were scared of people who understood the mathematics needed to do independent reviews and that steps were taken to ensure that such people never saw all of the data and metadata that would be necessary to perform independent reconstructions. In particular public enemy number one was Steven McIntyre, who was able to use available data to show that the results cited by the IPCC were not valid. He embarrassed Mann, Briffa, Osborne, Bradley, Hughes and Hansen, along with many of the other AGW supporters by pointing out obvious errors that are inevitable when the science is not on one’s side and data manipulation has to take place in order to sell an idea that is not supported by real world observations.
Vangel, that raw series appears to have a global warming signal in it. The only big difference is that the variance of the data is much larger, because it hasn’t been corrected for shifts in site locations and so forth. Had the data been run through a 10-year low-pass filter, I don’t think you’d see a huge difference between it and the 10-year LP-filtered GISS data.
If you know of a text version of these data, link them, and I’ll do the processing and compare them on the same plot.
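The comparison Carrick proposes, low-pass filtering both series and overlaying them, is easy to sketch. Here is a minimal version using a simple boxcar running mean as the low-pass filter; the two series are synthetic stand-ins (the trend and noise levels are invented), not the actual raw or GISS data:

```python
import numpy as np

def running_mean(x, window):
    """Simple boxcar low-pass filter; 'valid' mode trims the ends."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(0)
years = np.arange(1900, 2010)
trend = 0.007 * (years - 1900)  # common underlying signal, K/yr assumed
raw = trend + rng.normal(0, 0.25, years.size)       # noisy "raw" stand-in
adjusted = trend + rng.normal(0, 0.10, years.size)  # lower-variance stand-in

smooth_raw = running_mean(raw, 10)       # ~10-year low-pass
smooth_adj = running_mean(adjusted, 10)

# After smoothing, the two series differ far less than the unsmoothed ones
print(np.abs(smooth_raw - smooth_adj).mean())
```

If the real raw series behaves the same way, the filtered curves should sit nearly on top of each other, which is Carrick's claim.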
Vangel (Comment#29197) December 26th, 2009 at 9:50 am
We have known since 2007 that the adjustments for USHCN raise the temps. Several points.
1. Not all the individual adjustments are in the warming direction.
2. You can’t merely STATE that the adjustments are invalid; you must show this. I’ll give you hints below.
3. These are the adjustments for the US ONLY; other countries are a black box.
USHCN ADJUSTMENTS:
A. FILNET. This fills in missing data. It does so with a model. The model is not public. It needs to be. The errors of FILNET (±) are not carried forward. This can impact the true SD.
B. SHAP: documented station changes, typically adjustments for lat/lon and altitude (using lapse rate). These adjustments are justified. But the program needs to be public. Also, errors need to be carried forward.
C. TOBS. Time of observation. This is a valid adjustment. The paper describing it has not been audited a la CA. It’s a 1986 study centered on CONUS. It needs to be revisited, especially if GW changes the diurnal spread. Also, errors are not carried forward.
D. MMTS. This is an instrument change. The adjustment is valid. HOWEVER, the size of the adjustment may be wrong, since it was based on Quayle’s study, which assumed an MMTS sited properly. MMTS units have not been properly sited. Also, errors are not carried forward. This is a problem.
E. Undocumented changes (Menne’s adjustment). This is a statistical attempt to identify undocumented station changes through change-point analysis. Errors are not carried forward.
The biggest issue with all these adjustments is not, in my opinion, the direction of the changes, but rather that the changes are made and the error is thrown away. If the errors are predominantly in the “warming” direction, they will understate the confidence intervals. This is very simple to demonstrate.
Maybe later when I get the time, but you can do it yourself simply with Excel. You can adjust for changes like these and recover the mean, but if you don’t treat the errors right you will be more confident than the data allows.
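Mosher's point about discarded adjustment errors can indeed be demonstrated in a few lines. This sketch uses synthetic data; the anomaly statistics, the size of the bias correction, and its uncertainty are all invented for illustration. The key feature is that the correction error is shared by every month, so it does not average down with n:

```python
import numpy as np

rng = np.random.default_rng(1)

n_months = 120
true_anom = rng.normal(0.2, 0.3, n_months)  # hypothetical station anomalies (K)

# An instrument change introduced a step bias; the correction is estimated
# from a side-by-side study, so the correction itself carries uncertainty.
bias_correction = -0.15  # estimated correction (K), assumed
correction_sd = 0.10     # uncertainty of that estimate (K), assumed

adjusted = true_anom + bias_correction

# Naive standard error of the mean: treats the correction as exact
se_naive = adjusted.std(ddof=1) / np.sqrt(n_months)

# Proper standard error: the correction error is common to all months,
# so it adds in full to the uncertainty of the mean.
se_full = np.sqrt(se_naive**2 + correction_sd**2)

print(f"naive SE: {se_naive:.3f} K,  with adjustment error: {se_full:.3f} K")
```

Throwing the correction error away leaves only the first term, which is why the resulting confidence interval is "more confident than the data allows."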
This is probably the real scenario
http://wattsupwiththat.com/2009/12/26/no-statistically-significant-warming-since-1995-a-quick-mathematical-proof/#more-14553 (just your kind of thing, Lucia!)
Vangel, that raw series appears to have a global warming signal in it. The only big difference is that the variance of the data is much larger, because it hasn’t been corrected for shifts in site locations and so forth. Had the data been run through a 10-year low-pass filter, I don’t think you’d see a huge difference between it and the 10-year LP-filtered GISS data.
I would expect some warming to come from the UHI effect, which is quite substantial. The audit done by Anthony Watts (http://wattsupwiththat.files.wordpress.com/2009/05/surfacestationsreport_spring09.pdf) found that 69% of the USHCN stations had a bias of more than 2C, which is substantially greater than the claimed warming. Another 22% had a warming bias of more than 1C, which meant that only 10% of the stations had a bias of less than 1C.
What I found disgusting was the fact that NOAA/GISS had ignored the obvious problems and never did anything about it to improve the data quality. If you look at the links below you will find disgraceful examples and should get a good idea where some of the warming is coming from.
Here we see the MMTS sensor close to a building and right next to asphalt and air conditioning units that would obviously add a significant warming signature to the readings at night.
http://gallery.surfacestations.org/main.php?g2_itemId=831
Here you have a Stevenson screen next to asphalt, which would make the night readings higher than they would be if the site standards were complied with.
http://gallery.surfacestations.org/main.php?g2_itemId=3406
Here is a rural station where the sensor is near a trash burn barrel and an asphalt tennis court surface.
http://gallery.surfacestations.org/main.php?g2_itemId=1335
Here is one of my favourite stations. Can you guess where the heating signature came from?
http://gallery.surfacestations.org/main.php?g2_itemId=12970
And here is the Grand Canyon site.
http://gallery.surfacestations.org/main.php?g2_itemId=38525
Why would you be averaging the two sites to figure out the correct temperature when it is obvious that the Tucson readings need to be thrown out? Steve McIntyre, who has been asking GISS for all of the data and algorithms required to perform an independent analysis without much luck covers the issue here. (http://climateaudit.org/2007/07/31/adjusting-in-arizona/)
Here (http://climateaudit.org/2007/07/31/marysville-and-orland-revisited/) is a discussion about the adjustments made to Marysville (http://gallery.surfacestations.org/main.php?g2_itemId=831) and Orland (http://gallery.surfacestations.org/main.php?g2_itemId=552). While the Marysville UHI effect is easy to figure out if one looks at the sea of concrete and asphalt near the station, McIntyre asks why the Orland station, which has been in the same remote location since 1909, gets a warming signature added to it.
What bugs me is why GISS/NOAA ignore the obvious bias and why they have done nothing to improve data quality and why the various versions of their data are not archived and documented properly.
If you know of a text version of these data, link them, and I’ll do the processing and compare them on the same plot.
There is a serious problem with the GISS data: it keeps changing without notice, and there is no proper documentation to show which version of the data set one is looking at or why it was changed. As an old auditor of aircraft quality systems, I smell a rat and would not trust what is presented by GISS. If the FAA or Transport Canada had found the type of discrepancies with my record keeping that I find with GISS, I would have been shut down.
1. Not all the individual adjustments are in the warming direction.
2. You can’t merely STATE that the adjustments are invalid; you must show this. I’ll give you hints below.
3. These are the adjustments for the US ONLY; other countries are a black box.
I know that not all of the adjustments are in the warming direction because GISS does account for the UHI for some stations. But while it is easy to justify making a downward adjustment for stations that are on a sea of concrete and asphalt, it is hard to justify adding a warming signal to rural stations that have been in the same place for quite some time.
We have already seen that GISS has made many unwarranted warming adjustments to a number of the CRN 1 stations. Until all of the information required to perform an independent review is provided, it is hard to accept the GISS/NOAA data as valid. That said, in 2007 NASA admitted that Steve McIntyre was right and corrected an error; the correction made 1934 the warmest year. It also admitted that four of the top ten years were from the 1930s (1931, 1934, 1938, and 1939), while only three of the top ten were from the previous decade (1998, 1999, and 2006).
Of course, NASA keeps changing the data without letting us know why so it is certain that the 1930s will get colder while the current temperatures are adjusted upwards. But that is what this debate is about, isn’t it?
Steven Mosher,
“B. SHAP: documented station changes, typically adjustments for lat/lon and altitude (using lapse rate). These adjustments are justified.”
Sorry, I disagree with adjusting SURFACE temps at different elevations based on lapse rate. Given the many vagaries of circulation and other ground effects, comparing the lapse rate in open air to the lapse rate next to the ground is poorly framed.
Nope, UH UH, BAD.
steven mosher,Vangel, kuhnkat
Doesn’t Briggs tackle the problem with adjustment here?
http://wmbriggs.com/blog/?p=1459
I own a house on the north coast of California, at an elevation of about 800 – 1000 feet on the seaward side of the coast range. This elevation range on this coast is what’s called the “banana belt”. It has this name because it is warmer, often much warmer, than the lower elevations. This area is much warmer than the lower elevations for two reasons:
1. It is above the persistent sea fog which comes in from the ocean at lower elevations.
2. Cold air flows downhill, gathering in the valleys.
So, regarding the idea of adjusting by lapse rate: anyone who adjusted for elevation in this region using the lapse rate would not only be wrong in amplitude. They would be wrong in sign, adjusting in entirely the wrong direction.
This is a recurring problem with climate science – things like adjusting for altitude using lapse rate, which are “obvious” or “evident” or “a result of basic physics”, turn out to be totally invalid and often going in the wrong direction entirely in a complex, chaotic, inter-linked, resonant, driven system like the global climate.
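For reference, the textbook free-air adjustment being criticized here looks like the sketch below. The 6.5 K/km figure is the standard-atmosphere lapse rate, and the function is a generic illustration rather than the actual SHAP code; as argued above, in terrain with fog layers and cold-air pooling the real elevation dependence can have the opposite sign:

```python
# Standard free-air lapse-rate adjustment: reduce a station reading to a
# common reference elevation. Temperature falls with height, so a station
# above the reference gets warmed when reduced downward.
LAPSE_RATE = 6.5e-3  # K per metre, standard-atmosphere value

def adjust_to_reference(temp_c, station_elev_m, ref_elev_m):
    """Naive free-air lapse-rate correction (illustrative sketch only)."""
    return temp_c + LAPSE_RATE * (station_elev_m - ref_elev_m)

# Example: a reading of 15.0 C at a 300 m station, reduced to sea level,
# gains about 1.95 K under this assumption.
print(adjust_to_reference(15.0, 300.0, 0.0))  # about 16.95
```

In a "banana belt" like the one described above, where the 800-1000 ft elevations run warmer than the fog-bound, cold-air-pooled lowlands, applying this formula would push the adjustment the wrong way.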
So whenever people say things like “Increasing CO2 has to warm the planet, it’s simple physics that’s been known since Arrhenius”, I just break up laughing, I can’t help it. There is absolutely nothing simple about the climate, except for simple people whose simple explanations are simply wrong. In fact, the Constructal Law guarantees that the simple explanations are wrong … but that’s the subject of another thread.
kuhnkat:
I think you’re right on this one. I think a lot more work needs to be done to fully quantify the effect of the boundary layer on instrumental data, as used for climate monitoring.
Willis, you have no idea what you are talking about.
Read this, then get back to me.
http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html
bugs,
Read this ( http://wattsupwiththat.com/2009/11/17/the-steel-greenhouse/ ) and then get back to me. I would say that Willis understands the physics of the greenhouse effect a lot better than you. He also has the patience of a saint, as you can tell by reading the comments.
bugs (Comment#29224)
December 27th, 2009 at 7:19 am
Did you really mean to link to that?
Not much there except someone filling us in on how they’re working on a book.
Do you actually read the talking point links someone else gives you?
John M,
In bugs’ defense there was a working copy of the book there for download. I had read through it once. Guess it was taken down because it’s going into print.
The link to that book is here (PDF)
Thanks Chad.
Also found it here
http://web.archive.org/web/20061021001911/geosci.uchicago.edu/~rtp1/ClimateBook/ClimateVol1.pdf
Perhaps Bugs was practicing argumentum ad bookmarkem.
(How long before some latin geeks hop on me for that one?)
And it looks like the book in question is aimed at trying to simplify climate science without all those nasties like fluid dynamics.
From the preface, I don’t think this:
is inconsistent with what Willis had to say. Particularly if “interesting structure” and “surprising and profound collective behavior” are properly interpreted to mean “complex as hell”.
bugs (Comment#29224) December 27th, 2009 at 7:19 am
I hate it when people do that, point to some giant tome as their “citation”. It’s like when people say “You’re wrong, the IPCC says so”, or “You’re wrong, the Koran/Bible/Bhagavad Gita says so.” Yeah, OK, but what part of the IPCC/Koran/Bible are you talking about? Are you talking about this statement, for example?
Now, I happen to think that cloud feedback is one of the two mechanisms keeping the earth’s temperature at an equilibrium value. See my treatise on the subject here. If you find flaws in that idea, let me know. Your text says that not only is there no “simple physics” that will help us there, we don’t even have any complex physics to describe the situation. He coyly calls the answer “elusive”. For those not familiar with the term, that’s sciencespeak for “we don’t have a clue”.
So you may be correct, bugs, but if you’d give me a chapter and a verse we’d have something to actually talk about. Until then, there’s nothing to see here, you’re just handwaving. I’m not going to try to guess exactly which page you are talking about.
Now that I’ve downloaded the book, I see who wrote it … BWA-HA-HA-HA. Before taking the Chevalier’s work as your new religious text, you might want to do a bit of research on him. You could start here …
Willis Eschenbach,
If clouds keep the temperature of earth at ‘an equilibrium value’ why have there been temperature changes of greater than five degrees in the past? Or is this ‘equilibrium value’ that you speak of not a single value but rather a range of values? Alternatively, is it like a series of peaks and troughs, with certain events pushing us over a peak and triggering a new equilibrium value for the earth?
DeWitt Payne (Comment#29227) December 27th, 2009 at 9:51 am
It’s a perfect illustration of the fact that he knows no more about the science than I do. As an interested amateur, I like to add my one cent’s worth to the debate; Willis adds no more to it than I do. His article ignores the fact that the radiation imbalance is what is causing the extra energy to be absorbed by the earth, hence the whole point of the exercise, AGW. A good corroboration of this is the cooling troposphere.
http://www.ipcc.ch/ipccreports/tar/wg1/214.htm
Cooling troposphere?
I could completely ignore McIntyre as the master of reframing. He is terrified of the whole basis of AGW, the physics, as Pierrehumbert says, “it’s the physics, stupid”. If you don’t start with that, you have nothing.
If you don’t start with observation you have nothing to base your physics on.
bugs (Comment#29246)
December 27th, 2009 at 6:15 pm
What’s the physics say about the “cooling troposphere”?
John M (Comment#29248) December 27th, 2009 at 7:01 pm
My bad, stratosphere.
Steve Hempell:
Doesn’t Briggs tackle the problem with adjustment here?
I agree with some of the points made by Briggs and have no problem with not making adjustments for the UHI effect in cities because that is the real temperature experienced by the people that live there. But as he points out, we cannot use the data biased by the UHI effect to average up the surrounding readings.
But there is an even bigger problem. When a thermometer in a Stevenson screen is replaced by an electronic sensor in an MMTS shelter, an adjustment has to be made for the expected difference in temperature even if the move is very small. Given that MMTS shelters sit close to buildings because of the cables that need to be run to the sensors, it is clear that night-time temperatures are likely to be much higher than they were with the Stevenson screens. That means that even a minor move can have a great effect.
What bugs me is the lack of quality control over the data. GISS has lost track of stations that still report data, has used data from the wrong month without checking to figure out why the results are so different, averages data from good stations with that of stations that have a massive bias (such as the Tucson example, where the Stevenson screen was placed on top of asphalt), and has changed the data sets without archiving the changes and the methodological differences that created those changes. That is not science, and GISS should not be given cover by discussing variations of model predictions against the temperature reconstruction as if the reconstruction were meaningful. There are too many people who do not understand the issues well enough, and when we engage in some of these debates as if the data had validity they get fooled into thinking that GISS is a credible source. The bottom line is that it isn’t credible, and will not be credible until we have full disclosure and until we can independently reproduce the temperature profiles that it publishes.
bugs (Comment#29249)
December 27th, 2009 at 7:08 pm
Ahhh…the stratosphere. Right right right right ri-i-i-i-ght.
So that gets us back to this graph.
http://www.ssmi.com/data/msu/graphics/plots/sc_Rss_compare_TS_channel_tls_v03_2.png
You’ll recall our discussion at http://rankexploits.com/musings/2009/false-precision-in-earths-observed-surface-temperatures/#comments
I presume you recall:
So why is it “wishful thinking”?
Where is the cooling in the absence of a volcanic eruption?
bugs:
You’re going to need something better than Pierrehumbert’s book if you want to understand the physical basis behind AGW. I’d suggest something more like this.
But I agree with curious: if you don’t have reliable observations you have nothing, and there are plenty of experimentalists who can’t do the modeling themselves (and modelers who can’t do measurement or observation).
Vangel:
There are data analysis methods that are insensitive to station motions. I don’t know whether GISTEMP uses them or not….
Actually my big worry isn’t the station moves and equipment changes as much as changes over time in regional flora (whether natural or human-made). As I’ve pointed out here, if you look at a 24-hour average with a 3 m/s average wind speed, you are sampling a cone-shaped column of air 250+ km long.
Carrick (Comment#29252) December 27th, 2009 at 7:29 pm
The physics has been measured and observed very well. The temperature record is a problem, but that is just the situation we have been given. Given the other observations, it’s probably not far off the mark, given that the current observations are in step with the satellite record.
bugs, in honor of your comment on how the physics is all so simple and well understood, I’ve just written and posted an article at this link. I invite people to read it, as it is quite relevant to the bugs’ claims.
w.
bugs (Comment#29243) December 27th, 2009 at 5:45 pm
bugs, once again you are just waving your hands and making extravagant claims without any support. If you have scientific arguments against my work, bring on the chapter and verse.
But just saying that “radiation imbalance is causing the extra energy to be absorbed by the earth, hence the whole point of the exercise, AGW” is meaningless. What “extra energy”? Where can it be measured? What “radiation imbalance”? How can we detect it? What is “the exercise”, and what is the “whole point of the exercise”?
This is a scientific site. Until you do more than make vague statements, you won’t get any traction here. Be specific or you will be ignored.
“This is a scientific site. Until you do more than make vague statements, you won’t get any traction here. Be specific or you will be ignored.”
LOL. It’s an interested amatuers site.
http://www.skepticalscience.com/global-warming-stopped-in-1998.htm
Which is about http://www.agu.org/pubs/crossref/2009/2009JD012105.shtml
Which features a very interesting graph, “Earth’s Total Heat Content Anomoly”.
bugs:
If you’re talking about the fundamental level, I agree. I hope you’re smart enough to understand that at the level of climate physics, the system is indeed still poorly understood and even more poorly measured.
Terms like “not that far off the mark” are pseudo-science at best (they have no defined meaning in science) and pure BS at worst.
Truthfully, you and other semi-climate-science-educated people like yourself aren’t serving the cause you advocate by overstating the level of agreement of models with data, or even of different ways of measuring the same data (satellite data are certainly in disagreement with ground-based measurements, and the difference is almost certainly related to ABL physics issues).
bugs:
I agree you are an amateur. Not even a very well educated one at that. Beyond that, speak for yourself.
bugs (Comment#29258) December 28th, 2009 at 1:33 am
bugs, if you could spell either “amateur” or “anomaly”, your views might be treated with more respect.
And the fact that you think that “amateur” and “scientific” are somehow opposites speaks volumes about your point of view. I am one of the few amateur scientists to have anything published recently in Nature Magazine (a “Communications Arising” regarding climate science). So while you may claim that I bring nothing to the table, Nature Magazine disagrees with you.
While there are both amateur and professional scientists who post here, it is a scientific site run by Lucia, who is definitely a professional scientist. If you think that we are impressed by your citing claims from an un-named blogger who in turn cites Susan Solomon’s study, think again. Our dear Susan has proven over and over that she is not a scientist of any kind. Instead, she is a politician who is willing to do anything to keep real science out of the IPCC. She was up to her ears in the attempt to evade the FOIA as shown in the CRU emails. See here and here for details of her reprehensible actions.
You seem to be intent on proving that the earth has warmed in the last decade. In fact, there has been no significant warming since 1995. There is a clear mathematical demonstration of this fact at Lubos Motl’s site.
So here’s how it works on a scientific site. If you find anything wrong with Lubos’s math, let us know. If you don’t find anything wrong with the math, let us know. If you don’t understand Lubos’s math, let us know.
But don’t cite the vague ramblings of some blogger to try to disprove the math.
“You seem to be intent on proving that the earth has warmed in the last decade. In fact, there has been no significant warming since 1995. There is a clear mathematical demonstration of this fact at Lubos Motl’s site.
So here’s how it works on a scientific site. If you find anything wrong with Lubos’s math, let us know. If you don’t find anything wrong with the math, let us know. If you don’t understand Lubos’s math, let us know.”
The math is perfect, the data selection too ;).
You could look at the massive but very short term El Nino spike at one end, and the La Nina dip at the other end, for example.
Update in Progress……….
bugs (Comment#29263) December 28th, 2009 at 3:36 am
So you agree that from 1995 to the present, there is no statistically significant warming?
And since you find no problem with his math, you also agree with Lubos’s conclusions, viz:
This is a recurring problem in climate science. Natural series are typically highly autocorrelated, and poorly described by white noise. But the numbers given for statistical significance are almost always calculated using white noise …
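To see how much the white-noise assumption matters, here is a minimal sketch (synthetic AR(1) data with illustrative numbers, not Lubos’s actual calculation) comparing a naive OLS trend error bar with one adjusted for lag-1 autocorrelation via an effective sample size:

```python
import numpy as np

rng = np.random.default_rng(0)

def trend_stderr(y, correct_for_ar1):
    """OLS trend standard error; optionally deflate the sample size to
    an effective n using the lag-1 autocorrelation of the residuals."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = n * (1 - r1) / (1 + r1) if correct_for_ar1 else n
    s2 = np.sum(resid ** 2) / (n_eff - 2)
    return np.sqrt(s2 / np.sum((t - t.mean()) ** 2))

# 15 years of monthly AR(1) "anomalies" with no true trend:
y = np.zeros(180)
for i in range(1, y.size):
    y[i] = 0.7 * y[i - 1] + rng.normal(0.0, 0.1)

se_white = trend_stderr(y, correct_for_ar1=False)
se_ar1 = trend_stderr(y, correct_for_ar1=True)
# With positive autocorrelation the corrected error bar is wider:
print(se_ar1 > se_white)  # True
```

With the effective sample size shrunk, a trend that clears the 95% bar under a white-noise assumption can easily fail to clear it.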
Willis and Kuhnkat.
WRT SHAP.
The situation is this. You have a station at location xyz where z = sea level. You move it to x′y′z′ where z′ = 1000 feet. When I say that adjusting using lapse rate is justified, I mean just that. If you know nothing else about the site than this, that the altitude went up 1000 feet, then your best guess will be to adjust it using a standard lapse rate. Better would be to study each individual site.
Can you construct examples where this adjustment will be wrong?
Of course. IT’S AN ADJUSTMENT. It’s a model. It has errors. Now, none of us have seen SHAP or evaluated the errors. But “adjusting” for these changes is valid. The precise adjustment may have errors. They may be two-sided, large, small, biased... blah blah blah. BUT, they are justified.
Here’s a bet. Pick 100 stations. Move them up in elevation 1000 feet. Would you adjust, yes or no?
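Just to make the first-guess concrete, here is a minimal sketch assuming a standard environmental lapse rate of about 6.5 °C per km (an assumption for illustration; whatever rate SHAP actually uses is, as noted, undocumented):

```python
# Standard environmental lapse rate, ~6.5 C per 1000 m (an assumption
# for illustration; SHAP's actual rate is undocumented).
LAPSE_RATE_C_PER_M = 6.5 / 1000.0

def lapse_rate_adjustment(delta_elev_m, lapse_rate=LAPSE_RATE_C_PER_M):
    """First-guess adjustment for a station moved delta_elev_m metres
    up (positive). The new site reads cooler, so post-move readings
    are adjusted upward to stay comparable with the old record."""
    return delta_elev_m * lapse_rate

# The 1000 ft move in the bet is ~304.8 m:
adj = lapse_rate_adjustment(304.8)
print(round(adj, 2))  # 1.98 C added to post-move readings
```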
Well, bugs, I just wasted $9 buying the Solomon et al. scientific fantasy. Now I don’t mind a good science fantasy novel, but when it claims to be science, that’s another matter.
Here’s a few of the problems with the study. They claim that they can measure the global energy imbalance as follows:
Their error is four tenths of a watt per metre squared?? Who are they kidding? Not one of the underlying measurements is to the nearest tenth of a Watt/square metre.
How do they do that? They say:
OK, let’s look at those. They say about the ocean heat content:
So their ocean numbers for the deep ocean have an error of ±15%. But that’s an error based on the values for the top 700 metres of the ocean. Solomon and her friends somehow forgot to include that figure, but they got their data from Levitus. He says that the error in the top 300 metres is on the order of ±4 W/m2. In addition, in all cases the Levitus error includes zero … so using the Levitus data, we can’t even say whether the oceans have warmed at all. See http://www.sciencemag.org/feature/data/1046907.shl for details.
Now, remember that they are calculating the energy balance by combining the ocean heat content, atmospheric temperature, and radiative flux. The ocean component alone has an error on the order of 5 W/m2 … and the other components can only increase that error.
Regarding the radiative forcing, they say:
Since they claim a net radiative forcing over the period of about 2.5 W/m2, this adds (in quadrature) another 0.4 W/m2 to their error. Note that this single error alone is the size of their claimed total error.
The error in shortwave forcing is given as being ± 1 W/m2 (Fig. 3). I could go on, but you see the point. They’ve taken a group of observations with very large error bars, with the ocean heat content alone having an error of some 5 W/m2, and they claim that their result is accurate to ± 0.4 W/m2 … nice work if you can get it.
That’s why I don’t believe their math. Our knowledge of the values of the various components of their analysis is far, far too fragmentary and too inaccurate to support the claimed accuracy of their conclusion.
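For anyone following the arithmetic, here is a sketch of the root-sum-square (quadrature) combination being discussed, using the illustrative numbers from this comment rather than the paper’s actual error budget:

```python
import math

def combine_in_quadrature(*errors):
    """Root-sum-square combination of independent 1-sigma errors."""
    return math.sqrt(sum(e * e for e in errors))

# Illustrative numbers from this comment (not the paper's stated
# budget): ocean heat content ~5, forcing ~0.4, shortwave ~1.0,
# all in W/m2.
total = combine_in_quadrature(5.0, 0.4, 1.0)
print(round(total, 2))  # 5.11
```

The combined error is dominated by the largest term, which is the point: a ±5 W/m2 ocean term cannot combine with anything to yield a ±0.4 W/m2 total.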
This kind of nonsense is all too common in climate science. Climate science is unique among the physical sciences in that it does not study physical things. It studies only averages, as climate is defined as the average of weather over a sufficiently long period of time. Since it is the study of averages, a good knowledge of statistics is a prerequisite. But the practitioners often have only a very rudimentary understanding of statistics. See the excellent series of posts by the climate statistician William Briggs for further details regarding this problem; the series starts here.
Finally, despite their claim that their results are based on observations, they make the assumption that the sensitivity of the climate is a single value. That is to say, they assume that the forcing is linearly related to temperature. There is a huge problem with this assumption, which I discuss here.
And what about the clouds? The albedo is about 30%. A change in the albedo of 0.3% wipes out their entire “residual” … here’s what they say about that:
So despite the claim that they are using observations, they are actually using a parameterized linear equation to estimate radiation balance as a function of temperature, which in turn is used to estimate changes in net radiation … perhaps you’d care to give us an error estimate for that procedure, as they fail to include that source of error as well.
So that’s my problem with the Solomon paper. The conclusions are by no means supported by the underlying data. Even with their bogus error calculation their result barely achieves statistical significance at the 95% level. With anything approaching a real error estimate, it is totally insignificant.
steven mosher (Comment#29266) December 28th, 2009 at 3:51 am
Mosh, thanks as always for your interesting points. Would I adjust for changes in station elevation? Depends on what I was hoping to get out of the data.
Generally we are interested in the trend. I use first differences for calculating temperature trends. I do not know what the effect of elevation on first differences is, but simply adding an offset as you suggest doesn’t change first differences at all. So an adjustment based on the lapse rate doesn’t adjust anything if you are using first differences. I just throw away the first difference for the year of the adjustment.
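A minimal sketch (made-up numbers) of why a constant post-move offset leaves first differences unchanged everywhere except at the move itself:

```python
import numpy as np

# A short made-up record with a +2.0 C offset applied from the year
# of a hypothetical station move onward:
temps = np.array([14.2, 14.5, 14.1, 14.6, 14.8])
move_year = 2                       # index of the first post-move year
adjusted = temps.copy()
adjusted[move_year:] += 2.0

d_raw = np.diff(temps)
d_adj = np.diff(adjusted)

# Every first difference is unchanged except the one spanning the
# move, which is the one that gets discarded:
others_equal = np.allclose(np.delete(d_raw, move_year - 1),
                           np.delete(d_adj, move_year - 1))
print(others_equal)  # True
print(round(d_adj[move_year - 1] - d_raw[move_year - 1], 6))  # 2.0
```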
Now if I used some other method, I certainly might adjust. However, I don’t think that I’d do it using the lapse rate, because in the real world (as my example shows) this can give errors which are not only incorrect in amplitude but in sign as well.
So I’d likely make the adjustment based on the data itself (changes in mean/max/min/range before and after the station move) or based on nearby stations, rather than on theoretical principles like lapse rate which may or may not apply.
Finally, if you do use lapse rate, which one do you use? Wet adiabatic? Dry adiabatic? Again, location, location, location. A couple of years ago I lived in Hawaii, in Waimea on the Big Island. Where I lived it was almost always foggy from my house down to the coast on one side, and almost always clear from my house down to the coast on the other side. So if I went down a thousand feet on one side, wet adiabat, and on the other side, dry adiabat. That’s the problem with adjustments based on theory … nature is patchy and inhomogeneous, while theories involve averages. That’s why I prefer to adjust based on data rather than theory.
All the best,
w.
Willis Eschenbach,
Okay, I had a read of your ideas re earth’s thermostat. You do talk about *two* equilibrium states, and you talk about a variation of +/- 3 per cent. I tend to think that both of these mean that you effectively remove the notion of equilibrium from the discussion.
Firstly, two metastable positions mean that the earth is *not* kept in equilibrium at all, by whatever mechanism.
Secondly, if you are talking about degrees Kelvin – and I think that you must be to get figures of +/- 3 per cent – you are talking about changes of +/- 8 degrees! This is not an ‘equilibrium’, given that changes of a few degrees can cause significant changes to climate and the spread of life on the planet.
However, it is possible that you have another definition of equilibrium in mind. It is also possible that you are not talking about degrees Kelvin, although that was the temperature scale mentioned in your writing.
Anyway, while it is an interesting idea, I think that there are some problems with it, as you can tell.
I’m saying he is cherry picking, since short term noise like the El Nino/La Nina will crowd out the long term signal, especially if you have a warm period at the start of your selection and a cool period at the end.
This is what Lubos doesn’t consider:
http://www.realclimate.org/index.php/archives/2008/01/uncertainty-noise-and-the-art-of-model-data-comparison/
Willis,
Could you comment on this analysis:
http://justdata.wordpress.com/
I have posted elsewhere on this, but just get ignored. Is it that bad?
Willis wrote
That sort of logic is the “when did you stop beating your wife” type.
http://wattsupwiththat.com/2009/12/20/darwin-zero-before-and-after/
The reason for the sudden drop in temperature readings would have been the war. There were significant and urgent changes made in the town, as the Japanese were heading straight for it.
http://www.john-daly.com/darwin.htm
David Gould (Comment#29240)
December 27th, 2009 at 4:26 pm
David – the BBC are trailing this show as covering the issue you raise:
http://www.bbc.co.uk/programmes/b00pfsls
Time stamp – due to air in about 22h30m from now!
bugs (Comment#29276)
December 28th, 2009 at 10:25 am
Bugs, I think you’re making yourself dizzy.
What in the world does the link you provide from John Daly’s blog have to do with Willis’ claim that the Darwin station was adjusted manually?
Willis Eschenbach (Comment#29268) December 28th, 2009 at 4:57 am
Good points all around. I think the approach taken by Karl, Peterson, Quayle, Menne, Easterling... all those guys is as follows.
They are interested in the trend and the value. That is, they can’t just throw away the first difference. They want to “compute” an “accurate” anomaly for 1900 and for 2009. When TOBS changes, they want to adjust to get back to “reality”, so they “model” that change. When a station moves, they want to “make” long records, so they adjust. The appeal of long records is understandable, but like Briggs I’m leery of what the manipulations “get you”. Why not treat each station move as a “new station”? That’s what it is. Nevertheless, both Jones’s method and Hansen’s method either find or “construct” long records. WRT which lapse rate, I dunno. I am only guessing at the underlying math of SHAP, since it’s not documented.
For all I know, SHAP looks at a series before the change in altitude, then looks at a series after the change, and makes an adjustment accordingly… How? Dunno, it’s a mystery.
That would be a fun one to look at. I think Anthony could get you a station or two that had altitude changes. Oh heck,
look no further than Hansen, where 1C is added to the St Helena record.
http://74.125.155.132/search?q=cache:lrGPGSR2-YgJ:data.giss.nasa.gov/gistemp/sources/gistemp.html+hansen+st+helena+nasa&cd=2&hl=en&ct=clnk&gl=us
bugs (Comment#29271) December 28th, 2009 at 5:27 am
First, RealClimate treats the surface temperature data as valid even though there is a divergence from the satellite and radiosonde measurements. Second, the changes in PDO/AMO phases are not considered to affect the temperature trend even though the observations clearly tell us that they are important to temperature measurements. Third, the leaked CRU e-mails show that the people at RealClimate are not beyond lying and making false arguments to advance the cause. If I were you, I would try a different approach, particularly given the strength of the Lubos Motl argument.
Vangel (Comment#29214) December 26th, 2009 at 8:02 pm
Most of us who have been around here for a few years are familiar with the issues raised by surface stations. I’m an Anthony supporter from the inception. In fact, I was the guy who found the station rating paper and passed it on to folks.
Let me just respond to one issue, so you understand some of the issues at play. You wrote:
“Here we see the MMTS sensor close to a building and right next to asphalt and air conditioning units that would obviously add a significant warming signature to the readings at night.”
Yes, one of the issues as I noted above is the MMTS adjustment.
There are several studies on the MMTS adjustment. The first was done by Quayle in 1991 (the basis of the adjustment).
From the Hubbard and Lin paper:
The results based on 424 MMTS stations and 675 CRS stations showed that average minimum temperature changes of +0.3°C and average maximum temperature changes of −0.4°C were introduced (Quayle et al. 1991). At the same time, a side-by-side comparison was conducted that concluded that the MMTS underestimated the maximum temperature by as much as 0.6°C but that found virtually no bias for minimum temperature (Wendland and Armstrong 1993). Neither study stated which temperature system produced higher quality observations, but they suggested that ambient solar radiation and wind speed are two factors that likely affect the air temperature difference between the MMTS and CRS systems because both shields were nonaspirated.
(Please note the side-by-side comparison compares just the instruments, both sited properly; it’s just the instrument effect.)
So, it’s not as simple as you suggest. The photo documentation is the first step, but the bias you suggest is “significant” has to be quantified. How small or big? Two years back, when I looked at very early data from surface stations, it seemed from both a theoretical standpoint and some preliminary analysis that the effect would be most pronounced in the minimum temperature measurements.
Basically, the closeness to the building would allow the waste heat or stored heat from the building to impact Tmin. It would not do this 100% of the time. So, just for example, say close proximity to a building elevates Tmin in the winter months by a maximum of 3C (again, just for example). This effect will be modulated by winds at night. On still nights you might see the full 3C. On windy nights you’d see the effect diminished. So you might see something on the order of half the full effect (just for example, again, to show the basic problem with finding this signal); that is, on the average winter night you might see a Tmin 1.5C warmer. Well, Tmean = (TMAX+TMIN)/2, so you are down to a 0.75C average effect.
And if you have 4 months of cold nighttime temps (temps below the temperature held by the building), you would then see a 0.25C effect in the yearly TMEAN. But this changeover to MMTS happens once in the station’s lifetime. So you’ve basically got (in this example) a 0.25C bump up in the time series; the longer time goes on (MMTS is a ’70s-’80s introduction), the smaller the effect in TREND you will see from an MMTS discontinuity. Picture this, a simple thought experiment: from 1900 to 1980 a station has zero temperature trend. In 1980 they switch to MMTS close to a building. You see a bump of 0.25C. Then you have 1980-2009 of flat trend. Now do the trend analysis. Will you see a false trend from 1981-2009? Nope, you’ll see zero trend.
Will you see a false trend from 1900 to 2009? Ya, you’ll see a 0.25C rise in 109 years, or less than 0.025C per decade.
Again, looking at early data from the project I found a warming bias between the best stations and the worst of about 0.1C to 0.15C or thereabouts. If the warming effect of switching over to MMTS is significant, then you would see a significant step function at the switchover. I’d argue that the effect is real. The effect will be found in TMIN, but when you move to TMEAN and then decadal trends, it is likely to be swallowed up in the noise. All that means is that the analysis has to be very focused to find it.
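The dilution arithmetic and the step-function thought experiment can be sketched like this (all numbers are the illustrative ones from this comment, not measured values):

```python
import numpy as np

# Dilution of a hypothetical 3 C nighttime (Tmin) bias: wind halves
# it, Tmean = (TMAX+TMIN)/2 halves it again, and it only operates
# about 4 months of the year.
tmin_bias = 3.0
annual_effect = tmin_bias * 0.5 * 0.5 * (4.0 / 12.0)

# Thought experiment: a flat record 1900-2009 with a single step of
# that size at the 1980 MMTS switchover.
years = np.arange(1900, 2010)
series = np.where(years >= 1980, annual_effect, 0.0)

trend_full = np.polyfit(years, series, 1)[0] * 10.0   # C per decade
trend_post = np.polyfit(years[years > 1980], series[years > 1980], 1)[0]

print(round(annual_effect, 2))  # 0.25 C bump in annual Tmean
print(round(trend_full, 3))     # ~0.027 C/decade from the step alone
print(abs(trend_post) < 1e-9)   # True: no spurious post-move trend
```

(The OLS slope of the step series comes out a shade higher than the back-of-envelope 0.25 C over 109 years, but it is the same order: hundredths of a degree per decade.)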
Hope this helps.
bugs (Comment#29246) December 27th, 2009 at 6:15 pm
“I could completely ignore McIntyre as the master of reframing. He is terrified of the whole basis of AGW, the physics, as Pierrehumbert says, “it’s the physics, stupid”. If you don’t start with that, you have nothing.”
WRT the physics. Mc has a pretty consistent view on the physics so I don’t know what you are babbling about. So we start with the physics. The best physical theory we have says that adding GHGs to the atmosphere, all other things being equal, will produce warming at the surface. Of course, the next questions we will have are the obvious ones.
1. What does the observation record show?
2. Can we make predictions using this theory about future observations?
So, ya, I accept “the physics”. Now I want to examine the observation record. Dang. Can’t do that just yet. To do that I need all the raw data, I need the metadata, I need to understand how this data has been adjusted. It’s basic bookkeeping and statistics. Now, don’t go pointing me at melting ice or retreating glaciers. I’ve already accepted that adding GHGs will warm the planet. I wanna know how much? 1C in the last 150 years?
0.85C? 0.92C? How well can we know this? I like those details. A lot of other people like those details too. So ya, I accept “the physics”.
Now can we move on to the observations?
And WRT #2, making predictions. Ya, I accept GCMs as a TOOL to make predictions. GCMs are collections of physics simulations. I used to work with those kinda beasts. Step 1: run the simulation and see how well it compares to past data. Hindcast.
Ok, I’m ready to hindcast. I need to check the output of the model against past observations. DANG. there is that observation problem again.
Let’s waive that problem. Now I want to forecast. Which models do I use? Why, the best of course! OK, which is the best? I dunno; which one did best at hindcast? Dang, there is that observation problem again.
Now, you may want me to just accept HadCRU or GISS or UAH or RSS and move right along. Sorry, I’d rather get the observation data settled first. So, get the land data sorted out. Then the SST data (that pesky bucket problem). Then get the UAH/RSS problem sorted. And don’t point out to me that these sources “largely” agree. I know that. I want the best source.
steven mosher,
Out of interest, how would you define ‘the best source’? In other words, let us say that Hadley, GISS, UAH and RSS resolved every issue that you have with them, and yet still showed differences in the amount of warming. How would you judge which was the best source?
Merry Christmas to you and yours. There was an article at RC comparing observations to model predictions. The link is:
http://www.realclimate.org/index.php/archives/2009/12/updates-to-model-data-comparisons/
I would be very interested in your reaction to this article.
Chuck L (Comment#29285) December 28th, 2009 at 6:12 pm
Don’t know if you are talking to me, Chuck, but I wouldn’t increase their page count by one if you paid me. I’ve been censored so many times at RC for honest scientific questions I’ve lost count. See my account of one time among many at this link.
Visit RC? I wouldn’t visit to piss on them if they were on fire. As the CRU emails clearly show, they are nothing but a mouthpiece for liars and cheats.
Steven Mosher,
Thanks for that explanation/estimation of the effects of sensor change. That is very enlightening. Would you care also to give an estimate of the effect of TOBS adjustment – perhaps as a proportion of the daily range? Or can you point me to a ready summary?
I agree with Willis that the effects of station moves are often inadequately identified and/or adjusted, and also that it is difficult to generalise – that an individual approach is required.
bugs (Comment#29246) December 27th, 2009 at 6:15 pm
While I agree that if you don’t have the physics you have nothing, perhaps you might consider what that means for the Tinkertoy™ climate models, which are about 50% physics-free.
Regarding Steve M., he has been asking for an engineering quality exposition of the physics underlying the IPCC claims for some years now … and neither the Chevalier nor any of his friends who are always raving on about the physics have provided it. Here’s one of Steve’s requests:
If you think that’s a man who is “terrified of the whole basis of AGW, the physics”, you haven’t done your homework. You have the story backwards. Steve has been asking the hard questions about the physics, and Pierrehumbert and his ilk have been running from Steve’s questions like cockroaches when you turn on the light at midnight …
It appears that you have swallowed the RealClimate misdirections and evasions and false statements and plain mistakes and outright lies wholesale, and now you have come to regurgitate them undigested back up on us … please do some reading on the subject before repeating any more RC bilge, you are just making yourself look foolish.
Willis Eschenbach (Comment#29286) Don’t know if you are talking to me, Chuck, but I wouldn’t increase their page count by one if you paid me. I’ve been censored so many times at RC for honest scientific questions I’ve lost count. See my account of one time among many at this link.
Visit RC? I wouldn’t visit to piss on them if they were on fire. As the CRU emails clearly show, they are nothing but a mouthpiece for liars and cheats.
I go there only to see what they are saying, because in many of the “discussions” I have with the AGW true believers, they refer to that website, and I think it is important to see what the other side is claiming. The arrogance and snark of “The Team” and the AGW acolytes and sycophants who post there are truly remarkable, and I do have mixed feelings about being a “hit” on that website.
I thought GISSTEMP was known to be biased with an upward trend due to the way the historical temps were adjusted down. Does it really prove anything if the temp chart is wrong?
steven mosher
Let me just respond to one issue, so you understand some of the issues at play. You wrote:
“Here we see the MMTS sensor close to a building and right next to asphalt and air conditioning units that would obviously add a significant warming signature to the readings at night.”
Yes, one of the issues as I noted above is the MMTS adjustment.
There are several studies on the MMTS adjustment. The first was done by Quayle in 1991 (the basis of the adjustment).
From the Hubbard and Lin paper:
The results based on 424 MMTS stations and 675 CRS stations showed that average minimum temperature changes of +0.3°C and average maximum temperature changes of −0.4°C were introduced (Quayle et al. 1991). At the same time, a side-by-side comparison was conducted that concluded that the MMTS underestimated the maximum temperature by as much as 0.6°C but that found virtually no bias for minimum temperature (Wendland and Armstrong 1993). Neither study stated which temperature system produced higher quality observations, but they suggested that ambient solar radiation and wind speed are two factors that likely affect the air temperature difference between the MMTS and CRS systems because both shields were nonaspirated.
(Please note the side-by-side comparison compares just the instruments, both sited properly; it’s just the instrument effect.)
Let me stop you here. I expect that most of us would expect that a side-by-side comparison would yield similar results. But that is not the problem that we encounter in the GISS/NOAA data.
But when you stop taking readings from a field, as is usually the case with a thermometer housed in a Stevenson screen, and start taking readings with an MMTS cabled sensor that is next to a building (because the cables are not run beneath sidewalks, parking lots, and walkways), you get a warming bias that will cause the night-time minimum readings to spike.
So, it’s not as simple as you suggest. The photo documentation is the first step, but the bias you suggest is “significant” has to be quantified. How small or big? Two years back, when I looked at very early data from surface stations, it seemed from both a theoretical standpoint and some preliminary analysis that the effect would be most pronounced in the minimum temperature measurements.
You are making my point for me. I do not believe that it is easy to go back and make proper adjustments for changes. That is exactly why I don’t trust the forced continuity of the NOAA/GISS adjusted data. It is clear that when scientists have to make guesses and fill in missing data they will choose assumptions that fit their preconceived notion of what is happening to the temperature record, which is why we cannot trust those assumptions until after all of the data, algorithms, and metadata are released for independent review.
Basically, the closeness to the building would allow the waste heat or stored heat from the building to impact Tmin. It would not do this 100% of the time. So, just for example, say close proximity to a building elevates Tmin in the winter months by a maximum of 3C (again, just for example). This effect will be modulated by winds at night. On still nights you might see the full 3C. On windy nights you’d see the effect diminished. So you might see something on the order of half the full effect (just for example, again, to show the basic problem with finding this signal); that is, on the average winter night you might see a Tmin 1.5C warmer. Well, Tmean = (TMAX+TMIN)/2, so you are down to a 0.75C average effect.
And if you have 4 months of cold nighttime temps (temps below the temperature held by the building), you would then see a 0.25C effect in the yearly TMEAN. But this changeover to MMTS happens once in the station’s lifetime. So you’ve basically got (in this example) a 0.25C bump up in the time series; the longer time goes on (MMTS is a ’70s-’80s introduction), the smaller the effect in TREND you will see from an MMTS discontinuity. Picture this, a simple thought experiment: from 1900 to 1980 a station has zero temperature trend. In 1980 they switch to MMTS close to a building. You see a bump of 0.25C. Then you have 1980-2009 of flat trend. Now do the trend analysis. Will you see a false trend from 1981-2009? Nope, you’ll see zero trend.
Will you see a false trend from 1900 to 2009? Ya, you’ll see a 0.25C rise in 109 years, or less than 0.025C per decade.
Again, looking at early data from the project I found a warming bias between the best stations and the worst of about 0.1C to 0.15C or thereabouts. If the warming effect of switching over to MMTS is significant, then you would see a significant step function at the switchover. I’d argue that the effect is real. The effect will be found in TMIN, but when you move to TMEAN and then decadal trends, it is likely to be swallowed up in the noise. All that means is that the analysis has to be very focused to find it.
I am sorry, but I do not buy the analysis, because you are still guessing. My ten-year-old son did a project to find the size of the UHI effect in the late spring and found a difference larger than 3C on a number of consecutive warm evenings that had wind speeds of around 5 to 15 km/hr. I suspect that the difference is much larger than you imagine, and since we have no idea, any guess is likely to be biased by the people making it. Given the pressure on GISS employees to be warmers, I suspect that most adjustments will be on the high side. And given the size of the errors involved and the total supposed change in the trend, the noise is simply too high to make a meaningful comparison.
As I have written before, this entire speculation about average temperature trends is just narrative and has nothing to do with real science the way we were taught to conduct it.
vjones (Comment#29287) December 28th, 2009 at 6:37 pm
TOBS. The TOBS adjustment is “documented” in Karl's 1986 paper.
I’ve linked to it several times.
A long while ago we had a thread on it at CA
http://climateaudit.org/2007/09/24/tobs/
So go google Karl's TOBS paper (I put a linky to it somewhere on Briggs)
If you doubt TOBS you can just go see the effect for yourself (search CA commenter jerryB; he had a data set and program)
http://climateaudit.org/2007/09/24/tobs/#comment-107763
The issue with TOBS, in my mind, was that the model was built and verified against CONUS data. There is an error associated with the model that is not carried forward in subsequent calculations. If you change the time of observation you will change the Tmean that gets recorded. I didn't see this until I went through the analysis that jerryB had done with ASOS data from around the US. Since you had continuous measurements of temp you could see what changing the TOBS did. Somewhere out there there is even a model for detecting changes in TOBS that have not been documented.
Also, in the 1986 paper Karl says his code is available for a fee. If somebody wants to criticize the work, they should start by requesting the stuff from Karl.
This is part of the unfinished business on the “anti warmist” side of the fence.
On the thread I linked to, Steve Mc suggests that people who are interested can just download the CRN data (ask, I'll show you where) and look at 5 minute data. Calculate the min/max with different TOBS.
For more folks interested in TOBS..
I can just say what I said back in 2007. Go get some hourly data and check for yourselves:
http://climateaudit.org/2007/09/24/tobs/#comment-107790
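The suggestion above, "go get some hourly data and check for yourselves," is easy to sketch with synthetic data. This toy example (my own illustrative numbers, not a real station) shows the classic TOBS bias: with an afternoon observation time, a hot day's warm evening can leak into the next day's recorded Tmax.

```python
import math

# Synthetic two-day hourly series: a hot day followed by a cool day.
# Diurnal cycle peaks at hour 15 (3 pm). All values are illustrative.
def diurnal(hour, mid, amp):
    return mid + amp * math.cos(2 * math.pi * (hour - 15) / 24)

hourly = [diurnal(h, 27.5, 7.5) for h in range(24)]    # day 1: max 35
hourly += [diurnal(h, 20.0, 5.0) for h in range(24)]   # day 2: max 25

def tmax_for_day2(obs_hour):
    """Recorded Tmax for the 24 h window ending at the observation hour on day 2."""
    end = 24 + obs_hour if obs_hour > 0 else 48
    return max(hourly[end - 24:end])

print(tmax_for_day2(0))    # midnight reading: 25.0, day 2's real max
print(tmax_for_day2(17))   # 5 pm reading: ~34, day 1's warm evening leaks in
```

With a midnight observation the recorded max is day 2's true max; with a 5 pm observation the window straddles day 1's warm evening and the recorded "day 2" max is roughly 9 degrees too warm. Run the same windowing over real 5 minute CRN data and you can measure the effect directly.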
Vangel
You wrote:
“Let me stop you here. I expect that most of us would expect that a side by side comparison would yield similar results. ”
On the contrary I would have expected a MMTS and a CRS to be different, as the study showed. Read it and see why.
“But that is not the problem that we encounter in the GISS/NOAA data. But when you stop taking readings from a field, as is usually the case with a thermometer housed in a Stevenson screen, and start taking readings with an MMTS cabled sensor that is next to a building because the cables are not run beneath sidewalks, parking lots and walkways, you get a warming bias that will cause the night-time minimum readings to spike.”
You need to read Quayle. See above, in which MMTS in the field are compared to CRS, in the field. I agree, theoretically one should see a warming bias, but you have to consider the details.
The building is 67 degrees inside, for example. The day cools from a high of 32 degrees F to a low of 17 degrees. The MMTS is 100 feet from the building. What's the bias? It's 50 feet away, what's the bias? It's 15 feet away? There's no wind, the wind is gusting to 20 MPH, the flow around the building is turbulent, the flow is laminar: lots of variables. If the nighttime temperature “spiked” that would be apparent. I certainly would not deny that one should expect to see a difference. I'm arguing that the difference might be smaller than you imagine because of other factors. For example, if you want to understand the impact of placements at airports, go read the CRN study. You'll see what effects siting at an airport has. My advice: don't overstate your case.
“You are making my point for me. I do not believe that it is easy to go back and make proper adjustments for changes. That is exactly why I don’t trust the forced continuity of the NOAA/GISS adjusted data. It is clear that when scientists have to make guesses and fill in missing data they will choose assumptions that fit their preconceived notion of what is happening to the temperature record, which is why we cannot trust those assumptions until after all of the data, algorithms, and metadata are released for independent review.”
Well, my position is different. I suspend judgment about these things. I try not to ascribe bad motives to the scientists (try). The adjustments need to be checked. That's it. You don't ADD ANYTHING by ascribing motives. You detract from your objectivity. So, suspend judgment. The adjustments may make sense on their face, but they need to be checked. That position puts you on nice neutral ground. I trust they did what they thought was correct. It needs to be checked. Not because they are engaged in any bad behavior or have ulterior motives or suffer from confirmation bias. It just needs to be checked.
“I am sorry but I do not buy the analysis because you are still guessing. ”
The POINT of my words was to show you all the ways in which a signal (a bias signal) can be modulated. So guessing, yes. The point was this: there are many ways in which this signal can be corrupted. Again, simply: go pick a site. You can go get daily data from USHCN. Go see if you find BIG EFFECTS from the kinds of changes you are concerned about. The effects are not big.
Since 1980 (let's say the introduction of MMTS) to today you are probably looking at something on the order of a .6-.7C warming over land. Ok? How much of that is real? How much of that is UHI? How much is microsite? Go check the sat data.
You'll probably find that the land has warmed maybe .4C-.5C (lower trop). That kinda sets an upper limit on the UHI/microsite bias.
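The back-of-envelope bound in those two sentences can be written out explicitly. The ranges below are just the illustrative numbers from the comment, not measured values:

```python
# Bounding the UHI/microsite bias from the comment's illustrative numbers:
# surface land warming since ~1980 vs. satellite lower-troposphere warming.
surface_land = (0.6, 0.7)   # C, surface record range quoted above
satellite_lt = (0.4, 0.5)   # C, lower-troposphere range quoted above

# If the satellite trend is taken as the "real" warming, the residual
# brackets the combined UHI/microsite bias.
bias_low = surface_land[0] - satellite_lt[1]
bias_high = surface_land[1] - satellite_lt[0]
print(round(bias_low, 2), round(bias_high, 2))   # 0.1 0.3
```

That residual of roughly .1C to .3C is what "sets an upper limit" means here: whatever the bias is, on these numbers it can't be much bigger than that.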
“My ten year old son did a project to find the size of the UHI effect in the late spring and found a difference that was larger than 3C on a number of consecutive warm evenings that had wind speeds of around 5 to 15 km/hr. I suspect that the difference is much larger than what you imagine it to be and since we have no idea any guess is likely to be biased by the people making it.”
Good for him. You should post up his test, data and methods.
I'm not sure how it applies to this specific problem, if at all.
I'm fairly confident that the effect is small (less than .3C), but you shouldn't trust me. That doesn't mean it's unimportant. One thing that should tell you it's small is the satellite data. Also, go read Ross McKitrick's paper on UHI. Basically, Ross puts the UHI effect at around 50% of the increase seen in the land record. That would be around .3C give or take, which comports with the example I gave.
Also you need to distinguish between UHI and microsite bias. They are at different scales.
“Given the pressure for GISS employees to be warmers I suspect that most adjustments will be on the high side. ”
1. GISS doesn't do the adjustments. NOAA does.
2. Your suspicions are not facts. The adjustments need to be looked at with a fair and open mind. Having suspicions is probably not a good thing.
“And given the size of the errors involved and the total supposed change in the trend, the noise is simply too high to make a meaningful comparison.”
Well, that's something you need to establish, not just say.
pompous git.
Steve,
I’m preparing to look into the magnitude of such data problems using climate model data. I’m currently tracking down multi-hourly high resolution data, hopefully from ECHAM 4 or MIROC. The idea is to corrupt the surface temperature data (change the Tmean calculation, make like there’s an air conditioner near the thermometer, etc.) and see how it compares to the original data in terms of size of anomalies, trends, etc. Should be interesting (and very CPU time consuming!)
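One of the corruptions Chad mentions, changing the Tmean calculation, can be sketched without any model data at all. A minimal illustration (synthetic asymmetric diurnal cycle, my own made-up shape and numbers): the traditional (Tmax+Tmin)/2 "Tmean" differs from a true 24-hour average whenever warming and cooling aren't symmetric.

```python
import math

# Stylized day: sinusoidal daytime warming (6 am to a 3 pm peak),
# exponential nighttime cooling. All numbers are illustrative only.
def hourly_temp(h):
    if 6 <= h <= 15:                      # daytime warming branch
        return 10.0 + 10.0 * math.sin(math.pi * (h - 6) / 18)
    elapsed = (h - 15) % 24               # hours since the 3 pm peak
    return 10.0 + 10.0 * math.exp(-elapsed / 5.0)   # nighttime cooling

day = [hourly_temp(h) for h in range(24)]
true_mean = sum(day) / 24                 # what continuous data would give
midrange = (max(day) + min(day)) / 2      # the (Tmax+Tmin)/2 "Tmean"

print(f"true 24h mean: {true_mean:.2f}, (max+min)/2: {midrange:.2f}")
```

On this stylized day the midrange overstates the true mean by more than half a degree. Run the same comparison on high-resolution model output and you get exactly the kind of Tmean-definition bias Chad is proposing to quantify.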
Chad.
Sounds like fun. I have some other ideas for that data as well…
Steven Mosher,
re direction for TOBS info – thanks. Should have thought to look there; I'm a few years behind, being a relatively recent convert, so lots of reading to catch up on!
vjones.
You’ll find that not much has changed in 2 years.
As I understand it, the IPCC models are not falsifiable. If this is true, it is somewhat idle to compare the computed to the observed temperatures, for the models are never wrong!
Willis Eschenbach:
Many thanks for your numerous and valuable comments in this thread as well as the post “The Unbearable Complexity of Climate” at WUWT. I almost always learn something from your comments.
Yes, there IS a substantial amount of evidence that SOME warming has occurred over the last 150 years — the thermometer record, the melting glaciers, etc. Likewise, there is little doubt about the radiative physics of CO2 and long wave IR emissions, and that CO2 levels are rising.
However, it is a fallacy — shamelessly promoted by the alarmists — to claim that the relative certainty with which we know the few simple facts above applies to the whole AGW argument, including the step of attributing the warming to man and the inference that it will have catastrophic consequences.
These last two conclusions are HIGHLY dubious in my mind. So it is refreshing indeed to see someone so thoroughly expose this fallacy by demonstrating the actual levels of uncertainty that exist in both our understanding of the climate as well as in the wild claims being pushed by the alarmists.
So thanks again.
Steve,
I’m interested in what ideas you’d have for such an analysis. Do tell.
Chad,
It depends how many years of data you have, but something to do with reconstructions and pseudo proxy data.
Steven Mosher:
However, it has been shown that it is easier and more praise-garnering to do pseudo reconstructions of proxy data (at least until somebody releases all your email).
“Finally, if you do use lapse rate, which one do you use? Wet adiabatic? Dry adiabatic? Again, location, location, location. A couple of years ago I lived in Hawaii, in Waimea on the Big Island. Where I lived it was almost always foggy from my house down to the coast on one side, and almost always clear from my house down to the coast on the other side. So if I went down a thousand feet on one side, wet adiabat, and on the other side, dry adiabat. That’s the problem with adjustments based on theory … nature is patchy and inhomogeneous, while theories involve averages. That’s why I prefer to adjust based on data rather than theory.”
All you do is say it’s all too hard and throw your hands up in the air. As an interested amateur, that’s ok. That’s why we pay professionals to deal with and research these problems full time. The issue you raise is only an issue if a thermometer has been moved, and only if it has been moved to what you yourself say is an unusual situation, and only if it was moved up, and only if there were no corresponding thermometers moved down. The odds are rapidly getting small that it’s a real issue and not just alarmism.
“John M (Comment#29278) December 28th, 2009 at 11:30 am
bugs (Comment#29276)
December 28th, 2009 at 10:25 am
Bugs, I think you’re making yourself dizzy.
What in the world does the link you provide from John Daly’s blog have to do with Willis’ claim that the Darwin station was adjusted manually?”
Darwin was a ‘frontier’ town that was bombed by the Japanese in WWII. Daly's page lists all the issues with it, and refers to a fax by the chief of the BOM, never displayed, which shows there was an intensive look at what adjustments would have been necessary because of its history.
bugs (Comment#29314)
December 29th, 2009 at 5:24 pm
Bugs, if your argument is that a manual adjustment was justified, that’s fine. But Willis is arguing that GHCN represents an undocumented manual adjustment, and he’s getting some heat for his claim.
Do you have any position whatsoever on whether GHCN Darwin data actually was manually adjusted?
Mosher
You see, that’s not the message I’m getting from the ‘skeptics’. I have said all along, the only question is how much warming, but the vast amount of blogspace and anger seems to be directed at
* individuals
* bad science such as Beck and G&T
* hand waving
* amateur investigations that don’t really achieve much at all
* the latest *shock* *horror*
I don’t see McIntyre framing it in those terms at all, nor at him pointing out to all his outraged fans who are calling for criminal charges to be laid, that this is the only issue, and, indeed, it is the focus of the active scientific debate.
bugs (Comment#29317) December 29th, 2009 at 7:51 pm
Mosher
“You see, that’s not the message I’m getting from the ’skeptics’. I have said all along, the only question is how much warming, ”
Then you share this opinion with McIntyre, who said the same thing on FOX. I can wish that skeptics would be more like Mc; I think they would do better if they didn't overplay their hand.
steven mosher (Comment#29321) December 29th, 2009 at 11:39 pm
But that is not the message I am getting from CA. He seems to have two personalities, one for CA, one for Fox.
bugs:
What message bugs is getting is a very tiresome topic of conversation. Why not start your own blog entitled “whatbugthinks.com”? People who actually care can go there to read it.
Bugs–
If you think SteveMc is saying “no warming” at CA or anywhere, you have serious reading comprehension problems.
“But that is not the message I am getting from CA.”
Bugs – I agree with you, all this unjustified hand waving etc etc is most tiresome.
If I recall rightly SteveM offers the opportunity of guest posts at CA to those with something useful to say. One of the things he has not been able to locate is a satisfactory derivation of the forcing claimed for increased CO2 concentrations in the atmosphere. Following on from this I believe he has also sought, without success, a justification for the “global temperature sensitivity” associated with this claimed forcing.
I think you should take this opportunity and write a guest post for CA on these issues including spelling out the basic physics and the metrics you use to evaluate them. In this way, not only will you be making a much needed contribution, you will at a stroke solve the problem of the “message you are getting at CA”.
curious wrote:
“I think you should take this opportunity and write a guest post for CA on these issues including spelling out the basic physics and the metrics you use to evaluate them. ”
bugs,
Your Eureka Moment could be close at hand!
Andrew
bugs (Comment#29322) December 30th, 2009 at 6:12 am
Huh?
You need to read his comments and posts more carefully. Here is a clue: you CAN, you logically CAN, say that all climate reconstructions are a pile of crap and STILL believe, still logically believe, in AGW, even CAGW. You can, you logically can, believe that some climate scientists are downright frauds and STILL logically believe in taking action against climate change.
But in a debate that has been tribalized you will confuse the hell out of casual readers.
bugs,
By the way, CA is not a haven for Denialist Commenters (like myself). As soon as Steve sees that a comment is by me, he deletes it for precisely that reason. He likes to keep the politics out, and that means me. 😉
Andrew
OT Alert:
Mosher interviewed by Animal Mother. Mosher to Mann: Flush out your headgear newguy
Howard, got a link?
Unfortunately I cannot find the entire audio… Maybe Steve can get it for us
http://biggovernment.com/2009/12/29/the-green-religion-and-climategate-interview-with-steven-mosher/
The whole interview… I've posted it on my Facebook. So, at least one person here will be able to listen.
I’m kinda concerned that it might lead to a bunch of questions that I don’t have time to answer and I really have to finish the last rewrite. Plus I’m working on a longish piece on Mann. One of the guys from Breitbart is posting up a piece on what really happened between Nov 12th and Nov 19th. Even in that piece there are details that I will not divulge. It’s not a matter of saying “buy the book” to find out. It’s simply a matter of protecting certain sources.
Gerlich and Tscheuschner got themselves published. Where do you want to draw the line, because a vital part of the peer review process is preventing papers from being published.
There are limits to what can be accomplished by the peer review process, which is why scientists don’t rely on it as the primary mechanism for communicating with each other. The same applies with peer-reviewed grants, fortunately there are other funding mechanisms available too.
“because a vital part of the peer review process is preventing papers from being published”
I think the opposite is true. Imagine that. I think it’s vital to a good process that papers get reviewed and the reviews and the papers published, so people can see why the paper is good or why the paper is bad.
bugs, how much are they paying you to keep this up? 😉
Andrew
steven mosher (Comment#29334) December 31st, 2009 at 12:00 am
Like I said before, what I’m being told and what I’m seeing are two different things. Scientists worked out years ago you don’t target the person, you publish your science. Yet here you are, a longish piece on Mann. Mann is not the science.
“you don’t target the person, you publish your science.”
bugs,
This principle somehow doesn’t stop you from targeting SteveMc, FYI.
Look forward to your publications.
Andrew
bugs (Comment#29338) December 31st, 2009 at 8:27 am
steven mosher (Comment#29334) December 31st, 2009 at 12:00 am
“Like I said before, what I’m being told and what I’m seeing are two different things. Scientists worked out years ago you don’t target the person, you publish your science. Yet here you are, a longish piece on Mann. Mann is not the science.”
Bugs, I well recognize and have repeatedly said that you don't target the person in science. You publish your data and code and personalities are removed. When you don't do this, you raise the spectre of ulterior motives. Who did that? If you read through the mails, who screamed fraud when MM05 was published? And who did he scream it to? Now, I'm not going to argue that it is a form of projection. That would be too easy. But even his colleagues realized he had lost the ability to be objective. WRT the new piece: it's not about Mann specifically; it's not even science. It's about sociology.
It appears that Dr. Crazypants likes the sound of a lil’ ‘clink, clink, clink…’. Sounds sweet dunnit?
“What has also almost entirely escaped attention, however, is how Dr Pachauri has established an astonishing worldwide portfolio of business interests with bodies which have been investing billions of dollars in organisations dependent on the IPCC’s policy recommendations.”
http://www.telegraph.co.uk/news/6847227/Questions-over-business-deals-of-UN-climate-change-guru-Dr-Rajendra-Pachauri.html
Andrew
Take this poor excuse for analysis http://wattsupwiththat.com/2009/11/10/bombshell-from-bristol-is-the-airborne-fraction-of-anthropogenic-co2-emissions-increasing-study-says-no/
and look at the comments. Confusion reigns, job done. The paper says nothing about AGW that was not already known, and nothing about the effects of AGW. It reports on one of the side issues of AGW, the extent to which CO2 sinks are working. The main issue is the level of CO2 in the atmosphere, and we know it is increasing due to us putting more of it up there.
“This certainly is a bomb right in the core of the AGW Doctrine.” No, it isn't. Anthony doesn't do anything to clear up his confusion.
bugs–
Anthony did not write that quote and is not responsible for it. Also, no blog author clears up every confused thing said in comments. At a popular blog, it's not even possible.
Clearly, you need to start a blog to clear up all these confused thoughts. Write it well… and who knows? You might attract people who think you are a useful source of information.
lucia (Comment#29350) January 1st, 2010 at 8:52 am
bugs–
Anthony did not write that quote and is not responsible for it.
He headed his post “Bombshell from Bristol”. I think bugs’ summary is accurate.
Nick–
Anthony does follow the yellow journalism style of titles (as do many bloggers.) But why would the bombshell be what bugs quotes from someone else? Given past alarming reports that the sinks are vanishing, can’t the report that they are not be a “bombshell”?
Lucia,
Where are these “alarming reports”? The AR4 says almost everywhere that the AF has been stable, with no increase expected. In Chap 2:
“Assuming emissions of 7 GtC yr–1 and an airborne fraction remaining at about 60%, Hansen and Sato (2004) predicted that the underlying long-term global atmospheric CO2 growth rate will be about 1.9 ppm yr–1, a value consistent with observations over the 1995 to 2005 decade.”
In Chap 7, Exec Summary:
“… since routine atmospheric CO2 measurements began in 1958. This ‘airborne fraction’ has shown little variation over this period.”
7.3.2: “From 1959 to the present, the airborne fraction has averaged 0.55, with remarkably little variation when block-averaged into five-year bins (Figure 7.4)”
“The consistency of the airborne fraction …”
Bombshells everywhere!
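Incidentally, the Hansen and Sato figure quoted above is easy to sanity-check arithmetically. This sketch uses the standard conversion of roughly 2.13 GtC of atmospheric carbon per ppm of CO2 (that conversion factor is my addition, not from the quote):

```python
# Sanity check of the Hansen and Sato (2004) figure quoted from AR4 Chap 2:
# 7 GtC/yr emitted with ~60% staying airborne. The conversion factor of
# ~2.13 GtC per ppm of atmospheric CO2 is a standard value, assumed here.
emissions_gtc = 7.0
airborne_fraction = 0.60
gtc_per_ppm = 2.13

growth_ppm = emissions_gtc * airborne_fraction / gtc_per_ppm
print(f"implied CO2 growth rate: {growth_ppm:.2f} ppm/yr")  # ~1.97
```

That lands within a few hundredths of the 1.9 ppm/yr growth rate the AR4 quote attributes to Hansen and Sato, so the quoted numbers are at least internally consistent.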
***************
bugs (Comment#29338) December 31st, 2009 at 8:27 am
Like I said before, what I’m being told and what I’m seeing are two different things. Scientists worked out years ago you don’t target the person, you publish your science. Yet here you are, a longish piece on Mann. Mann is not the science.
*****************
Mann isn't the science, but he is a scientist right in the middle of climate science. There is nothing wrong with calling him out for non-scientific behavior. In fact, Mann likely should be fired for his unprofessional behavior. That, IMO, would greatly improve the SCIENCE.
Nick:
The fellows at Real Climate do claim to be alarmed in the manner lucia suggested, see for example Is the ocean carbon sink sinking?.
I thought it was a matter of CAGW dogma that the capacity to absorb more CO2 was already tapped out, that plants can't really grow faster because of this or that nutrient shortage, that carbon feedbacks were already unleashed, oceans maxed out etc.
It is accurate to say that the language in AR4 in re carbon sinks and projected increase of CO2 is restrained and cautious. Accurate assessments of global carbon budgets are obviously hard to do.
The Knorr study cited by Watts is hardly the last word given the range of uncertainties it notes but it helps to provide some perspective.
I'm surprised at the number of smart people that allow themselves to be churned by “Bugs”, if that is, indeed, his (or *her*) real name. *Its* assignment is to waste the time of dupes who might otherwise uncover a real actual nugget.
Mosher, I thought you had a book to finish. I would rather pay money to read that rather than skip over your long posts here explaining tinkertoys to erector set aficionados.
Happy New Year.
Lucia: Thanks for being a great blog host.
bugs (Comment#29344) December 31st, 2009 at 10:49 pm
I’m not sure what your point is here, bugs. You say “the main issue is the level of CO2 in the atmosphere”. But the future of this level is totally dependent on two rates, the rate of emission and the rate of sequestration. How is an article that cuts to the heart of your “main issue” not important?
As to whether this is “already known”, the answer is definitely no. There have been many, many claims that the sequestration rate is either decreasing or is going to decrease. A quick look at Google finds thousands of articles from 2007-2008 alone making the claim, with titles like:
I could quote hundreds more, but I’m sure you see the point.
You point out that the UN IPCC notes that the airborne fraction has been steady over the last fifty years. You either don’t know or don’t mention that the IPCC also notes that the models show “… the mean tendency towards an increasing airborne fraction through the 21st century, which is common to all models.” (IPCC FAR Figure 7.13) The IPCC also notes that “All C4MIP models project an increase in the airborne fraction of total anthropogenic CO2 emissions through the 21st century,” and “Climate change alone will tend to suppress both land and ocean carbon uptake, increasing the fraction of anthropogenic CO2 emissions that remain airborne and producing a positive feedback to climate change.” (Ibid p. 538)
Now, since all the screaming is about the “climate change” that occurred in the latter half of the 20th century, and the IPCC says that climate change alone will increase the airborne fraction and also that the fraction hasn’t increased, I don’t know how they reconcile those two facts. Seems very contradictory to me, but since the IPCC is a corrupt UN idiocracy I suppose it should not be surprising …
In any case, since each and every one of the models says that the airborne fraction increases with increasing levels of CO2, a scientific, observationally based study saying that those model results are hogwash is certainly worth highlighting. And of course the AGW supporters and the mainstream media gave it prominence … NOT.
So while “blockbuster” may be a bit over the top, a study showing that (once again) the models give incorrect results is certainly important. And it is understandable that those fighting the bogus consensus call a study disproving model results a “blockbuster” … would that the exaggerations on the other side were so mild.
Nick and bugs are engaged in the usual “fake but accurate” defense in using a quote from one person to bolster the argument against a separate person.
Beyond that, by complaining only about arguments from one side of the debate, they would rather those they disagree with be held to one set of debating rules, while those they support get complete leniency to engage in ad hominem attacks, character assassinations or anything else they feel like doing.
Author: steven mosher
Vangel
You wrote:
“Let me stop you here. I expect that most of us would expect that a side by side comparison would yield similar results. ”
On the contrary I would have expected a MMTS and a CRS to be different, as the study showed. Read it and see why.
Sorry, my error. I did read the Quayle study, which I did not find very useful because there wasn't enough detail for me to figure out if the study was meaningful. There was no way to determine if the comparisons were proper because of the siting issues involved. Just looking at data is not enough because it is not always clear that the new sensors were in exactly the same location. It is clear from the independent analysis done by Anthony Watts that a large percentage of the instruments are nowhere near the previous locations due to the need for running cables to the sites. That is why you see so many of them next to buildings, in clear violation of the site quality standards that you are familiar with.
The Hubbard, Lin, Baker and Sun paper was even more disturbing because it showed that there was little understanding of the effect on specific stations and that there were many different factors that added bias to the readings. Essentially, that gives the ‘scientists’ free rein to determine what adjustments they need to make to the raw data, which would allow them to introduce their own bias to the game.
As I argued before, there is no reason why the new instruments can't be treated as new stations. Clearly pretending that one knows more than is actually known is not exactly the mark of a sound scientific method. And what is particularly disturbing to me is the fact that these so-called assessments missed the very obvious bias that was found by Anthony Watts and his volunteers. The fact that researchers missed that 89% of the sites had a warming bias of 1C or more and that 69% of the network had a bias of more than 2C should be very disturbing.
“But that is not the problem that we encounter in the GISS/NOAA data. But when you stop taking readings from a field, as is usually the case with a thermometer housed in a Stevenson screen, and start taking readings with an MMTS cabled sensor that is next to a building because the cables are not run beneath sidewalks, parking lots and walkways, you get a warming bias that will cause the night-time minimum readings to spike.”
You need to read Quayle. See above, in which MMTS in the field are compared to CRS, in the field. I agree, theoretically one should see a warming bias, but you have to consider the details.
The building is 67 degrees inside, for example. The day cools from a high of 32 degrees F to a low of 17 degrees. The MMTS is 100 feet from the building. What's the bias? It's 50 feet away, what's the bias? It's 15 feet away? There's no wind, the wind is gusting to 20 MPH, the flow around the building is turbulent, the flow is laminar: lots of variables. If the nighttime temperature “spiked” that would be apparent. I certainly would not deny that one should expect to see a difference. I'm arguing that the difference might be smaller than you imagine because of other factors. For example, if you want to understand the impact of placements at airports, go read the CRN study. You'll see what effects siting at an airport has. My advice: don't overstate your case.
As I said, I did read Quayle. I found his paper to be inadequate. To actually understand the differences one would have to do real field experiments where the sensors were side by side and studied for a few cycles under different conditions. It is only then that one could make adjustments with any degree of comfort, and that would only be acceptable for situations that fit the experimental conditions. As I said, I would not accept an adjustment to a sensor that was moved from a field to next to a building as being valid, because there is no reason to expect that one would know what adjustments to make.
I have a serious problem with the studies you are citing because they do not deal with the actual real-world situation. There is still far too much uncertainty and noise to do what NOAA/GISS and the other alphabet-soup agencies are trying to do. That part of my case cannot be overstated.
“You are making my point for me. I do not believe that it is easy to go back and make proper adjustments for changes. That is exactly why I don’t trust the forced continuity of the NOAA/GISS adjusted data. It is clear that when scientists have to make guesses and fill in missing data they will choose assumptions that fit their preconceived notion of what is happening to the temperature record, which is why we cannot trust those assumptions until after all of the data, algorithms, and metadata are released for independent review.”
Well, my position is different. I suspend judgment about these things. I try not to ascribe bad motives to the scientists (try). The adjustments need to be checked. That's it. You don't ADD ANYTHING by ascribing motives. You detract from your objectivity. So, suspend judgment. The adjustments may make sense on their face, but they need to be checked. That position puts you on nice neutral ground. I trust they did what they thought was correct. It needs to be checked. Not because they are engaged in any bad behavior or have ulterior motives or suffer from confirmation bias. It just needs to be checked.
First, scientists are human and react to incentives as others do. When there are billions handed out for research to show that man impacts the climate, they will ensure that they tailor their studies to support the requirements and will often throw in a few words to that effect even when their studies do not support the hypothesis. Second, we have already seen how the opinions held by the management and leadership of a scientific or technical organization may not reflect those of the employees. A perfect example was provided by Richard Feynman during the Challenger disaster investigation. NASA's leadership stated that the odds of a catastrophic failure were 1 in 100,000 while the engineers had it closer to 1 in 100. The same is true of the IPCC Summary for Policymakers. I know of few scientists who agree with all of the conclusions that are put together by the small group of lead authors, yet that is what is being sold to the public as indisputable science and consensus. I do not lose credibility by stating that there is no clear way to make proper adjustments to the raw data because there isn't enough information known by the people who make those changes. I do not lose credibility when I point to obvious errors in quality controls that are admitted by the very people who are putting out the reconstructions. And I certainly do not lose credibility when I point to the CRU leaked e-mails and code to support my position.
If you have gone over the e-mails you will find clear examples of cases where the data was fudged, methods were found to be inadequate, and when the very IPCC authors who claim that the science is settled did not agree with the published conclusions. When ‘scientists’ talk about destroying data rather than allowing it to be reviewed or about balancing the needs of science with those of the IPCC you have to start wondering about their credibility and not that of their critics.
“I am sorry but I do not buy the analysis because you are still guessing. ”
The POINT of my words was to show you all the ways in which a signal (a bias signal) can be modulated. So guessing, yes. The point was this: there are many ways in which this signal can be corrupted. Again. Simply. Go pick a site. You can go get daily data from USHCN. Go see if you find BIG EFFECTS from the kinds of changes you are concerned about. The effects are not big.
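The check mosher suggests (pull daily data for one station and look for a step around a known change) boils down to comparing the mean reading before and after the change date. A minimal sketch of that test; the station values and the 1985 change date are invented for illustration, not real USHCN data:

```python
from datetime import date

def step_at(records, change_date):
    """Difference between the mean reading after and before a station
    change. `records` is a list of (date, temperature) pairs."""
    before = [t for d, t in records if d < change_date]
    after = [t for d, t in records if d >= change_date]
    return sum(after) / len(after) - sum(before) / len(before)

# Invented daily tmax values around a hypothetical 1985 sensor move:
demo = [(date(1984, 1, 1), 10.0), (date(1984, 7, 1), 10.4),
        (date(1986, 1, 1), 10.9), (date(1986, 7, 1), 11.1)]
print(round(step_at(demo, date(1985, 1, 1)), 2))  # size of the apparent step
```

With real data one would use many years on each side and control for seasonality, but this is the shape of the test.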
Let me stop this here to look at one site. If we look at the Tucson weather station (http://www.norcalblogs.com/watts/images/Tucson1.jpg) we see that there is a clear bias to the readings. How does NOAA/GISS ‘adjust’ this data? By averaging it with that of the Grand Canyon station. But why is this a proper approach and why would it yield a better result than throwing out the data?
How about this one?
http://www.norcalblogs.com/watts/images/Marysville_issues1.JPG
http://www.norcalblogs.com/watts/images/Marysville_issues2.JPG
Which study do you have that can show NOAA/GISS data keepers how to ‘adjust’ the data to yield the correct readings? I didn’t see Quayle or Hubbard deal with that scenario. Or these ones:
http://wattsupwiththat.files.wordpress.com/2009/03/perry-ok-ushcn-visible-and-infrared.jpg
http://4.bp.blogspot.com/_Q16GN8b2AzE/SA5GenaBj2I/AAAAAAAAAew/QnO3QSIUfaE/s400/fay_ir.jpg
http://surfacestations.org/images/Roseburg_OR_USHCN.jpg
http://surfacestations.org/images/woodland_cwo.JPG
http://surfacestations.org/images/Hopkinsville_current.jpg
http://surfacestations.org/images/petaluma_east.jpg
http://surfacestations.org/images/Aberdeen_WA_450008_rear.jpg
As I pointed out, we know that more than half of the audited stations have a warming bias of 2C or more. That was not disclosed by GISS/NOAA or any of the papers that you have been citing. Instead, we had to find out from an unpaid group of amateur volunteers who provide more information and documentation than the so-called scientists that the GISS/NOAA use. (http://wattsupwiththat.files.wordpress.com/2009/05/surfacestationsreport_spring09.pdf)
Watts pointed out the following:
A trend illustrated by the photos above is for the newer style MMTS/Nimbus thermometers to be installed much closer to buildings and radiative surfaces than the older Stevenson Screens. NOAA's sensor cable specification cites a maximum distance of 1/4 mile, but installers often can't get past simple obstructions such as roads, driveways, or even some concrete walkways using the simple hand tools (shovel, pickaxe, etc.) they are provided to trench a cable run. The photo on the next page, of the USHCN station in Bainbridge, Georgia, illustrates the systemic problem.
What this means is that the studies that you are citing are useless and cannot be applied in the real world situation when the switch in sensors meant relocation next to areas that can influence temperature readings directly. You cite the greater cooling effect of wind on MMTS maximum readings but that assumes that the observations are actually valid and there is a true side by side comparison with no other factors having been changed. But in the real world, which is where temperature readings are taken, the assumptions are not usually valid. As Watts noted, a far more common occurrence would be a move such as the one shown below due to the cabling requirements. So what you see is the original site of readings, a screen in a field, being replaced with a new site, a sensor next to a building where it can be influenced by air conditioning units that are ten feet away. What do you suppose that move does to the minimum temperature readings during those humid Georgia nights?
http://www.norcalblogs.com/watts/images/bainbridge_ga_ushcn.jpg
Since 1980 (let's say the introduction of MMTS) to today you are probably looking at something on the order of a .6-.7C warming over land. Ok? How much of that is real? How much of that is UHI? How much is microsite? Go check the sat data.
You'll probably find that the land has warmed maybe .4C-.5C (lower trop). That kinda sets an upper limit on the UHI/microsite bias.
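The bound mosher is describing is simple arithmetic: whatever warming the land record shows beyond the satellite lower-troposphere record is the most that UHI/microsite bias could be contributing. Using the round figures quoted in the comment (these are the comment's numbers, not measurements):

```python
# Round figures from the comment above (deg C since ~1980), not measurements:
land_low, land_high = 0.6, 0.7  # surface land record warming
sat_low, sat_high = 0.4, 0.5    # lower-troposphere warming over land

# Bias is at most the widest gap between the records, at least the narrowest:
bias_max = land_high - sat_low
bias_min = land_low - sat_high
print(f"UHI/microsite bias bounded by roughly {bias_min:.1f} to {bias_max:.1f} C")
```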
My point is that you have no idea about the amount of bias and that we certainly cannot trust the institutions that missed the bias discovered by independent amateurs and the institutions that have done nothing to correct those biases. Given the fact that billions have been spent on climate change research over the past few decades you would expect that the least that NASA and the NOAA could have done was to spend a few bucks on cleaning up the network quality issues.
“My ten year old son did a project to find the size of the UHI effect in the late spring and found a difference that was larger than 3C on a number of consecutive warm evenings that had wind speeds of around from 5 to 15 km/hr. I suspect that the difference is much larger than what you imagine it to be and since we have no idea any guess is likely to be biased by the people making it. ”
Good for him. You should post up his test, data and methods.
I'm not sure how it applies to this specific problem, if at all.
I’m fairly confident that the effect is small (less than .3C)
He got several degrees on several different experiments. It is very similar to the results obtained by Warren Meyer and his son, who did the experiment for Phoenix.
http://www.climate-skeptic.com/2008/02/measureing-the.html
That doesn't mean it's unimportant. One thing that should tell you it's small is the satellite data. Also, go read Ross McKitrick's paper on UHI. Basically, Ross puts the UHI effect at around 50% of the increase seen in the land record. That would be around .3C give or take. That comports with the example I gave.
That would be a problem for the AGW argument because if half of the increase can be attributed to UHI and at least a quarter to changes in solar output there isn’t much of a change to consider meaningful or due to CO2 emissions.
Also you need to distinguish between UHI and microsite bias. They are at different scales.
I know that and assumed that the readers could figure it out. It is obvious the addition of air conditioning units near a Stevenson Screen or the movement of a sensor next to a wall will add a bias to the readings of the site immediately. It is also obvious that when a city grows out towards stations that were formerly in the suburbs we will see a slower increase due to the UHI effect.
“Given the pressure for GISS employees to be warmers I suspect that most adjustments will be on the high side. ”
1. GISS dont do the adjustments. NOAA does.
I believe that I already cited the figure from the NOAA. It is clear that NOAA makes adjustments. But it is also clear that GISS has its own algorithms that make further adjustments to the data.
http://jennifermarohasy.com/blog/wp-content/uploads/2009/06/hammer-graph-3-us-temps.jpg
2. Your suspicions are not facts. The adjustments need to be looked at with a fair and open mind. Having suspicions is probably not a good thing.
It is a fact that NOAA/GISS have adjusted the data. It is also a fact that they have not been archiving their data changes and allowing outsiders to look at the algorithms and metadata that were used to make the changes. I have no idea about your training but mine made it clear that one was to be suspicious of everything, particularly your own methods because it was easy to make errors that one was not aware of. That is why I am very distrustful of CRU/GISS; they have hidden raw data and algorithms used to make the changes to that data. That requires an assumption of honesty and competence that I have no reason to make given the available evidence.
What I would suggest is that anyone who is not sceptical given the CRU leaks needs to re-examine his critical thinking skills.
“And given the size of the errors involved and the total supposed change in the trend, the noise is simply too high to make a meaningful comparison.”
well that’s something you need to establish not just say.
I have already pointed out that the size of the errors has already been established. And I have pointed out that organizations that have missed the fact that around 90% of the USHCN network has a bias that is greater than the claimed warming since the end of the LIA cannot be trusted to come up with credible results.
Howard (Comment#29356) January 1st, 2010 at 1:11 pm
Ya. Let me give you a status. The initial write is done, probably 200 pages. So now I'm working on the rewrite and gross copyediting. Preface, chapters 1-5, & 7 are done with rewrite and gross copy editing. The balance, 6, 8, 9, are all done, and I am currently rewriting them (tighten, gross copy edit, style crap). Should all be put to bed in the next 2-3 days.
When I need a brain break I log on and post. Also, I do try out arguments here and other places. Blogs are kinda a testing ground.
There will be some posts made by others that will start to fill in the blanks about what really happened between Nov 12 and Nov 19.
I have to agree with Bugs and Nick. Calling the study a blockbuster is totally irresponsible and over-the-top sensationalism. The science must be wrong, therefore.
Further, the airborne fraction does not matter whatsoever. If the models show it increasing, they are right. If they show it decreasing, they are right. They are right no matter what, because they are physics models and they can't be wrong.
Carrick (Comment#29358) January 1st, 2010 at 3:43 pm
You aren’t making any sense. Anthony regularly produces such alarmist drivel, and his pal McIntyre, who is over the AGW science like a limpet, is content to let such nonsense pass him by.
You are confusing the formal science with the public and private dialogue. I think that private conversations are still free in a Western society. If Gerlich and Teuschner can get their paper published then the formal scientific process is indeed being gamed.
If Anthony wants to play by the rules, then let him do so. One of the issues that comes up in the emails is how the formal scientific process cannot compete with the scientific equivalent of ‘News of the World’ in the public arena.
Bugs,
Since when is McIntyre the keeper of Anthony?
Anthony posts a press release. He POSITIONS that release according to his bias. His positioning is something that doesn't interest Mc. Why should it? Why should a man who is interested in tree rings and paleo recons, a man who is interested in FOIA and CRU confabulations, take any notice whatsoever of Anthony's marketing of various findings? Why? McIntyre is entitled to HIS interest. He makes his interest clear. If you don't like his interest
then #Si
But there is no moral scheme whatsoever or rationale for requiring McIntyre to even comment on Watts.
bugs:
Uh right, bugs.
That’s why you had to choose a statement from another person to make your case.
You’re making a whole lot of sense!
What rambling nonsense.
But regardless, you have no idea what you are even talking about.
Science actually has no problems competing, as long as the scientists stay out of politics.
The issue here, as the emails make clear, is precisely that we have a group of amateur politicians (Jones, Manns, Schmidt, etc) who are subverting the science for political ends, who are mucking up the science with their amateurism in addition to bungling the politics.
I went back and read the comments to the thread bugs targeted, and really they are pretty balanced. Count how many variations on "It is not a bombshell" you can find.
Actually seems quite balanced to me.
Now imagine the RealClimate people allowing any sort of “naysaying” commentary through their screening at all. Kind of hard to do. I saw “piling on” but little or nothing allowed through of the nay-saying variety.
*********************
bugs (Comment#29362) January 1st, 2010 at 5:13 pm
If Anthony wants to play by the rules, then let him do so. One of the issues that comes up in the emails is how the formal scientific process cannot compete with the scientific equivalent of ‘News of the World’ in the public arena.
****************
Let’s see, we have Al “The Liar” Gore jetting all over the world, spending all kinds of money, lying about “climate change” every fifth word out of his mouth and you have the gall to call Anthony an “alarmist???” Bugs is a comedian!!
Bugs,
“Gerlich and Teuschner got themselves published. Where do you want to draw the line, because a vital part of the peer review process is preventing papers from being published.”
And in a PHYSICS JOURNAL. Burns your ah ah ah doesn’t it??
Scares you that some of that “garbage”, as you atmospheric radiative physics types call it, is closer to reality than the AGW myth!!
Well Bugs, better get used to papers trashing your favorite delusions. There are going to be an increasing number of them in many journals.
steven mosher:
"You see, that’s not the message I’m getting from the ’skeptics’. I have said all along, the only question is how much warming, "
Then you share this opinion with McIntyre who said the same thing on FOX. I can wish that skeptics would be more like Mc, I think they would do better if they didnt overplay their hand.
The sceptics are not overplaying their hand. They accept that we are now warmer than we were at the end of the Little Ice Age 150 years ago. What many of them have trouble with is the claim that we are warmer than the 1930s without seeing data that can support that claim. As I wrote above, the US data was certainly not showing much in the way of warming. As late as 2007 we had Hansen admitting that four of the ten warmest years in the past 100 were in the 1930s and only three were from the previous decade. He also admitted to 1934 being the warmest year in the US record, not the hyped-up 1998. Given the fact that we have no global data to suggest anything else because Phil Jones has claimed that he destroyed pre-1980 raw data, the warmers have nothing to support their claim of warming, let alone trying to tie that warming to human emissions of CO2.
Yes, Kuhnkat, you epitomise the problem.
@Vangel:re global warming years.
.
Here's what I came up with running the publicly available CRU methods for global gridding and averaging against 1398 stations common to both the publicly available GHCN raw data for mean temps (not the adjusted) and the CRU subset in the publicly released All.zip.
.
Top 10 years from 1901-present
1998 0.949
2005 0.808
2003 0.741
2008 0.725
2006 0.710
2004 0.699
2007 0.662
2009 0.626
1926 0.615
1999 0.612
.
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.mean.Z
http://www.metoffice.gov.uk/climatechange/science/monitoring/reference/All.zip
http://www.metoffice.gov.uk/climatechange/science/monitoring/reference/station_gridder.perl
http://www.metoffice.gov.uk/climatechange/science/monitoring/reference/make_global_average_ts_ascii.perl
.
Note 1: I used the provided meta-data from the CRU files while processing. I suppose I should make a run without the metadata. Let me know if you are interested in those results.
.
Note2: These are preliminary results. I am still working through the GHCN->CRU scripts.
.
Note 3: Land surface station temps only; no sea surface
.
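For reference, the gridding-and-averaging step Ron describes (the CRUTEM-style approach: bin station anomalies into 5-degree cells, average within each cell, then combine cell means with cos(latitude) area weights) can be sketched as below. The station coordinates and anomalies are invented; this is a rough sketch of the method, not the released Perl scripts:

```python
import math

def grid_average(stations):
    """Global mean anomaly from a list of (lat, lon, anomaly) tuples:
    stations are binned into 5x5 degree cells, averaged within each cell,
    and cell means are weighted by cos(latitude) of the cell centre."""
    cells = {}
    for lat, lon, anom in stations:
        key = (int(lat // 5), int(lon // 5))  # 5-degree cell index
        cells.setdefault(key, []).append(anom)
    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        centre_lat = ilat * 5 + 2.5           # latitude of cell centre
        weight = math.cos(math.radians(centre_lat))
        num += weight * sum(anoms) / len(anoms)
        den += weight
    return num / den

# Invented stations: (lat, lon, anomaly in deg C); the first two share a cell
demo = [(51.2, 0.1, 0.9), (52.4, 1.3, 0.7),
        (10.0, 20.0, 0.2), (-33.9, 151.2, 0.4)]
print(round(grid_average(demo), 3))
```

The real scripts also handle missing months, normals periods, and hemispheric averaging, but the cell-then-weight structure is the core of the method.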
“Since when is McIntyre the keeper of Anthony?”
McIntyre damages his credibility by allying himself so closely with someone like Watts. It’s pretty clear that Watts is either dishonest or dumb. I say a little from column A and a little from column B.
Now Watts fanboys (and gals, right, Lucia?) will defend him and look fairly clueless doing so. Let me help by starting a few sentences for you…just fill them in!
Boris, Anthony has never posted anything that wasn’t 100% accurate and your communists buddies…
The CRU leaked emails prove that…
The broken hockeystick makes you leftists cry and Watts’ brilliance merely illustrates…
Show me one post where Anthony said something wrong and I will spend fifty posts arguing over the meaning of a word and thus prove that everything was correct and Anthony is smart and honest and also, you are a commie ’cause your name is Russian, comrade.
Oh, also repeat the phrase “asking questions” and “third world kleptocracy” a lot. Hope this helps.
Boris
Thanks for illustrating the mean-spirited, anti-intellectual nature of the “pro-AGW” crowd, Boris.
Boris–
Your sentence fragments make me think, “huh?”
Mann isn’t the science, but he is a scientist right in the middle of climate science. There is nothing wrong calling him out for non-scientific behavior. In fact, Mann likely should be fired for his unprofessional behavior. That, IMO, would greatly improve the SCIENCE.
He is an incompetent fraud who is incapable of understanding the statistical methods that are required for the work that he does. He is also a bully who tried to corrupt the peer review process at journals and the review process at the IPCC. As such he should be fired.
Author: Ron Broberg
Comment:
@Vangel:re global warming years.
Here's what I came up with running the publicly available CRU methods for global gridding and averaging against 1398 stations common to both the publicly available GHCN raw data for mean temps (not the adjusted) and the CRU subset in the publicly released All.zip.
Top 10 years from 1901-present
1998 0.949
2005 0.808
2003 0.741
2008 0.725
2006 0.710
2004 0.699
2007 0.662
2009 0.626
1926 0.615
1999 0.612
I believe that you are looking at the adjusted data. Since we do not have access to the original data and to the code used to make the adjustments the CRU reconstruction is worthless and hardly scientific.
The US data yields totally different results. First, we had Hansen admit in 1999 that, “The U.S. has warmed during the past century, but the warming hardly exceeds year-to-year variability. Indeed, in the U.S. the warmest decade was the 1930s and the warmest year was 1934.”
And in 2007, after Steve McIntyre found yet another error in the GISS data, Hansen had to admit once again that four of the top 10 years were in the 1930s. These were 1934, 1931, 1938 and 1939. Only three of the top ten warmest years were from the previous decade. These were 1998, 2006, and 1999. The years 2000, 2002, 2003, 2004 showed a temperature anomaly that was lower than that for 1900.
1934 1.25
1998 1.23
1921 1.15
2006 1.13
1931 1.08
1999 0.93
1953 0.90
1990 0.87
1938 0.86
1939 0.85
The accessible data shows that the 1930s were warmer while the super secret CRU data shows something else. I will make up my mind which is right when the CRU data becomes available and the value added set can be confirmed to be accurate by independent sources. Of course, that might be difficult given the Phil Jones claim that he destroyed the original data because he did not get a big enough budget to buy a few filing cabinets or a hard drive. I guess that he spent it on aeroplane tickets and hotel accommodations.
Boris (Comment#29372),
What a bizarre comment.
.
You ought to consider the possibility that many people who doubt predictions of extreme global warming, and catastrophic consequences from that projected warming, do so based on a reasoned analysis. Many of these people want mainly to avoid unwise public expenditures (a gross mis-allocation of resources) that can’t rationally be justified based on the existing state of climate science. To suggest that anyone who differs from you in their evaluation of the threats posed by global warming is either stupid, a liar or both reveals remarkable naivete and arrogance.
Vangel, the US is about 3% of the total land mass of the Earth, so it's hardly surprising you'd see different numbers when you look at that subset of the data.
Also, there’s plenty of public global temperature data out there outside of CRU, both adjusted and unadjusted.
The reason for seeing the CRU data is to be able to replicate their method and understand the assumptions going into their adjustments, not because that's the only way we would know otherwise.
Vangel wrote:
” after Steve McIntyre found yet another error in the GISS data, Hansen had to admit once again that four of the top 10 years were in 1930s. These were 1934, 1931, 1938 and 1939. Only three of the top ten warmest years were from the previous decade. These were 1998, 2006, and 1999. The years 2000, 2002, 2003, 2004 showed a temperature anomaly that was lower than that for 1900. ”
You appear to be talking about US temperature averages, which were found to be slightly in error and changed by some small fraction of a degree. That error and correction had an imperceptible effect on global temperature averages, which are displayed in this graph:
http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.lrg.gif
You can see in it that, globally, every year in the ’00s was warmer than every year 1880-1989, and the average for the ’00s is substantially higher than the average for the ’90s.
There is no doubt that the instrumental record shows the ’00s was the warmest decade on record.
“To suggest that anyone who differs from you in their evaluation of the threats posed by global warming is either stupid, a liar or both reveals remarkable naivete and arrogance.”
The stupid/dishonest dichotomy was applied specifically to Watts and is based on reading his blog and watching his actions. So, read a little more carefully next time.
I agree that people come to a reasoned conclusion that global warming might not be such a problem, but if you dig deeper, those people almost always base their argument on things like the CO2 lag in ice cores or the erroneous notion that CO2 is saturated or another of the many misconceptions that people like Watts popularize. Why would I not have utter contempt for someone who clouds the issue as much as Watts does? More importantly, why wouldn’t you?
"Thanks for illustrating the mean-spirited, anti-intellectual nature of the "pro-AGW" crowd, Boris."
I knew I didn’t need to give you suggestions for an insulting post. Do I take it by your umbrage that you think Watts is an intellectual giant and a moral paragon?
Do I take it by your umbrage that you think Watts is an intellectual giant and a moral paragon?
So our options are limited to agreeing with you that Watts is both dishonest and stupid — or uphold him as an “intellectual giant” and a “moral paragon”?
Do you ever actually offer an argument that is not riddled with one or more logical fallacies?
*******************
Boris (Comment#29380) January 2nd, 2010 at 11:11 am
I agree that people come to a reasoned conclusion that global warming might not be such a problem, but if you dig deeper, those people almost always base their argument on things like the CO2 lag in ice cores or the erroneous notion that CO2 is saturated or another of the many misconceptions that people like Watts popularize. Why would I not have utter contempt for someone who clouds the issue as much as Watts does? More importantly, why wouldn’t you?
******************
So, Boris, you are saying the lag of the rise of CO2 vs Temperature ISN'T a problem? I read Watts all the time and my impression from what I have read is that each doubling of CO2 has less of a warming effect, not that it is saturated. I think you are just throwing up a straw man on that one.
So, explain why you believe the CO2 lag isn’t a problem.
Boris (Comment#29380)
“I agree that people come to a reasoned conclusion that global warming might not be such a problem, but if you dig deeper, those people almost always base their argument on things like the CO2 lag in ice cores or the erroneous notion that CO2 is saturated or another of the many misconceptions that people like Watts popularize.”
Well, that’s a start.
.
But I believe that if you actually “dig deeper” you will find that most skeptical comments made at The Blackboard are not based on the “the many misconceptions” that you refer to, and that most comments are made by practicing scientists or engineers. My comments certainly are not based on obvious technical misconceptions.
.
In the comments made following my guest posts at WUWT, I found a rather wide range of technical sophistication/understanding among both “alarmists” and “skeptics”. Was there some wild-eyed, non-scientific nonsense? Sure, and on both sides. In my email exchanges with Anthony, I have found him to be a gentleman, quite rational, and not someone who willfully misrepresents facts.
******************
Boris (Comment#29381) January 2nd, 2010 at 11:17 am
**************
Boris. If you think man-made CO2 will cause a catastrophe, maybe you should take an hour to view the justification for the CLOUD experiment at CERN. After one takes into account the false warming shown by the instrumental temperature record, it looks like there isn’t much warming from the extra CO2. Gee, could it be that clouds are the governor of how much sunlight can get through to the Earth? What say you Boris? Take a look at the video.
http://cdsweb.cern.ch/record/1181073/
Boris (Comment#29372) January 2nd, 2010 at 9:57 am
I am sick to the bone of this mean-spirited kind of puerile attack from AGW supporters, your claims that people are "dumb", "dishonest", "fanboys" and the rest of your ugly litany.
Steve is Steve, Anthony is Anthony. Both are like myself, just fools whose intentions are good and who are fighting against both bad science as well as the constant slimy character assassination you and your ilk practice. And like me, both of them are wrong sometimes. And they, like me, are both courageous enough to hang their ideas out in the public agora to be attacked by the wise and the ignorant alike.
I have posted extensively at both sites, and I have found both of them to be decent, principled individuals who are neither dumb nor dishonest. I find your willingness to call people dishonest morally repugnant. Where I grew up, calling a man a liar was something that you didn’t do without both rock-solid proof and great provocation. You haven’t a shred of either one, yet you say that both men are liars.
This absolutely cancels your vote with me, and makes you a laughingstock among decent men and women. Your actions in making unsupported accusations reveal clearly that you have the morals of a gutter rat, the intellect of a highly evolved insect, and the spine of an annelid. You are so foolish that you think that attacking anyone in that irresponsible and disgusting way, much less Steve and Anthony, somehow gains you points in the discussion. It does not. It merely makes people point and laugh at the child who does not understand the first principles of civilized behaviour, yet styles himself a giant among men.
PS – Am I Steve or Anthony's "fanboy"? No, I'm something else, something you may not recognize because you may lack experience in that particular arena — I'm their friend …
“clearly that you have the morals of a gutter rat, the intellect of a highly evolved insect, and the spine of an annelid.”
You are the most sanctimonious sack of garbage in the whole denialosphere. You guys get pissed off because Mann used a non-standard statistical method in a first of its kind paper–frothing at the mouth with your accusations of fraud and your insinuations of incompetence. All of this while shrugging it off when Anthony Watts posts some retarded crap from Roy Spencer about how maybe the CO2 rise isn't from burning fossil fuels. The fact that you and Steve let that stupidity go by whilst simultaneously giving Anthony Watts a cyber rim-job for his pictures of barbeque grills and parking lots is a testament to your ideological biases. It boggles the mind that you hold Watts up as a decent and honorable gentleman with all the slurs he has thrown at James Hansen and his attempts to get him fired based on a bizarre reading of a no doubt drool stained printout of the Hatch Act. So you can take all of your concern trolling and go and cherry pick more stations like Darwin airport to illustrate that you are a dishonest little dick yourself. Are you guys actually afraid to do more than a half-ass analysis on anything, or are you just going to submit FOI requests until somebody does the actual work for you?
Maybe you should take a look at your “friends” and choose them more carefully–because you’ve chosen a couple of real A-holes. But then again, maybe they are perfect for you.
(And here come the Willis fanboys concern trolling about how rude Boris is. I’m sure you can find a good knitting blog where your delicate sensibilities will not be tainted.)
*****************
Boris (Comment#29390) January 2nd, 2010 at 3:06 pm
(And here come the Willis fanboys concern trolling about how rude Boris is. I’m sure you can find a good knitting blog where your delicate sensibilities will not be tainted.)
*********************
I agree with Willis. You are a slob. A bloviating troll, all blather and no substance whatsoever. The bytes you consume on the Internet are wasted. Mann is a gutter rat right in there with you. He needs to be fired. He gives other scientists a bad name. He can't use statistics properly. The hockey stick is a fraud. The temperature reconstructions, including the instrumental one, done by the Hockey Team are a fraud.
“I agree with Willis. You are a slob. A bloviating troll, all blather and no substance whatsoever.”
I would be frustrated if my heroes didn’t write papers too. How many did Mann co-author last year? Oh, but that’s right he’s got the worldwide conspiracy going for him, the conspiracy that’s keeping Willis in his place. Too bad. Life is tough!
**********************
Boris (Comment#29392) January 2nd, 2010 at 3:46 pm
"I agree with Willis. You are a slob. A bloviating troll, all blather and no substance whatsoever."
I would be frustrated if my heroes didn’t write papers too. How many did Mann co-author last year? Oh, but that’s right he’s got the worldwide conspiracy going for him, the conspiracy that’s keeping Willis in his place. Too bad. Life is tough!
*******************************
How much garbage did the typical garbage man haul off last year? More garbage than Mann published and more valuable work at that. Did you view the CLOUD video?
OK guys enough already. Can we get back to the science?
Boris can you answer Jim’s question?
Jim (Comment#29383) January 2nd, 2010 at 12:00 pm
“the lag of the rise of CO2 vs Temperature ISN’T a problem?”
and Jim
“that each doubling of CO2 has less a warming effect”
My understanding is that each doubling theoretically has the same temperature effect (~1.2 Deg C) but, of course, it takes more CO2, i.e.:
280 to 560 ppm = 1.2 Deg C
560 to 1120 ppm = 1.2 Deg C
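That equal-effect-per-doubling behaviour follows from the logarithmic forcing relation. A small sketch using the standard simplified expression for CO2 forcing, 5.35 * ln(C/C0) W/m^2, with a no-feedback sensitivity of roughly 0.3 K per W/m^2 (both are textbook approximations, not figures established in this thread):

```python
import math

def warming_from_co2(c0_ppm, c1_ppm, sensitivity=0.3):
    """Temperature change for a CO2 change c0 -> c1 (ppm), from the
    simplified logarithmic forcing 5.35*ln(c1/c0) W/m^2 times a linear
    sensitivity in K per (W/m^2)."""
    forcing = 5.35 * math.log(c1_ppm / c0_ppm)
    return sensitivity * forcing

# Both doublings give the same warming, about 1.1 C with these round numbers:
print(round(warming_from_co2(280, 560), 2))
print(round(warming_from_co2(560, 1120), 2))
```

The per-ppm effect shrinks even though the per-doubling effect is constant, which is the distinction being drawn here.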
Boris,
Regardless of what you perceive to be dishonest provocations by "denialists", you would be better served by toning down your personal attacks and focusing on "the science". Where there are areas of technical agreement (and there really are some) there is a chance for rational discourse which can be productive. While you may not like Willis, Anthony, or Steve M (or me for that matter), engaging in flame throwing serves no useful purpose.
Boris, you had a choice. You could either have remained silent and had us all suspect you were a nasty trolling idiot, or speak out and remove all doubt. Fortunately, you have chosen the path that gives us certainty. Since there is no longer any question, I’ll let you be now. I should have remembered the basic rule of blogs, DFTT, my bad.
Back to the science …
w.
“You could either have remained silent and had us all suspect you were a nasty trolling idiot…”
You are a huge hypocrite, Willis. You throw around ideas on gentlemanly behavior and chivalry and then you come full bore at me with insults. And then you shriek in indignation when I return them, in kind. But when you insult me, I insult you back. Unlike you, I don't put on some false veneer of being a "gentleman." I'm polite until met with impoliteness. You saw at Deltoid where I defended you the other day? Lambert called you a liar and I defended you because I thought you were better than that. But I see now that your constant f ups can't just be coincidence anymore. And coupled with this grandstanding, holier-than-thou behavior, it's clear that you don't really care about the science. It's more important to abandon your "gentlemanly" principles and go on a tirade against Boris than it is to confront your friend Anthony Watts's behavior. You don't even make an attempt to defend his stupid posts. You just ignore them. Perhaps you are a better friend than you are a scientist. In either case, who needs ya?
Carrick:
Vangel, the US is about 3% of the total land mass of the Earth, so it’s hardly surprising you’d see different numbers when you look at that subset of the data.
Actually, it would be surprising for a large area of the globe to show no material warming over 70 years while the rest of the globe did.
Also, there’s plenty of public global temperature data out there outside of CRU, both adjusted and unadjusted.
I agree that you can find data outside of CRU. Willis did a good job in showing that some areas in Australia had a similar result to what was seen in the US. There was no warming since the 1930s in the raw data. All of the warming came from an unjustified ‘adjustment’ that made the temperature profile fit the expectations of the data keepers.
My point is a simple one, so I can’t understand why you guys don’t follow it. I merely point out that you have NO OBJECTIVE EVIDENCE THAT THE GLOBAL TEMPERATURE PROFILE IS SIGNIFICANTLY DIFFERENT FROM THE US EXPERIENCE. Your hyped up temperature profile came from the CRU and has never been independently verified because the data, algorithms, and metadata were never released to independent scientists who could replicate the results. This means that the IPCC claim of significant warming is just narrative and has no scientific support.
The reasons for seeing the CRU data is to be able to replicate their method and understand the assumptions going into their adjustments, and not because that’s the only way we would know otherwise.
If the data showed what the CRU claimed that it did Phil Jones would not be hiding it or destroying it. As I said, it isn’t science unless you can replicate it. Now you may choose to believe and are free to do so. But until we see all of the data released and all of the codes all you have are tall tales that are indistinguishable from lies.
Vangel to be clear it’s not my “hyped up temperature profile”.
As to whether 3% should be reflected in the other 97%… I’m not sure what’s surprising and what isn’t. I do know you need more than hand-waving to prove it one way or another.
My own interpretation is it’s not surprising to find some land regions warming while others are cooling… this is just natural variation at work, and the more regionalized you make the comparison, the larger the net variabilities you will likely find (and the hence the longer you will have to average over to pull out the secular temperature change from anthropogenic CO2 emissions).
Finally, there are many converging lines of evidence for the Earth warming up since circa 1850. You don’t even need surface temperature instrumentation records at all to arrive at that conclusion. The mere fact we were in a Little Ice Age in 1850 and we are not now is sufficient evidence to substantiate this observation.
Boris:
Willis is no hypocrite.
Since you lack any modicum of normal, decent behavior yourself, you can hardly expect anything more in return.
I’ve told you in the past, I don’t back down from verbal bullies like yourself. You’d like to have one set of rules for how you (mis-)behave and a second set of rules for everybody else on the blog.
Ain’t gonna happen.
For all your posturing about “civility” Boris, it’s plain for everyone to see in these comments that YOU were the one who abandoned civil discourse with your attack on Watts and subsequent lame smears of his “fanboys”.
You deserve every comment Willis made about you.
********************
Steve Hempell (Comment#29394) January 2nd, 2010 at 3:55 pm
“Temperature ISN’T a problem?”
and Jim
“that each doubling of CO2 has less a warming effect”
My understanding is that each doubling theoretically has the same temperature effect (~1.2 Deg C) but, of course, it takes more CO2 ie 280 to 560 = 1.2 Deg C
560 to 1120 = 1.2 Deg C
*****************
You are correct, I didn’t write what I was thinking. What I meant to say was that the warming effect of CO2 isn’t linear with concentration. My bad.
I would still like Boris to comment on the CLOUD video. Or does Boris now believe that CERN are in cahoots with MM and WW?
Boris:
You guys get pissed off because Mann used a non-standard statistical method in a first-of-its-kind paper – frothing at the mouth with your accusations of fraud and your insinuations of incompetence.
Actually, the Wegman committee already established that Mann was an incompetent who did not understand statistical techniques very well and made a number of errors that were caught by McIntyre. In the report we read: “In general, we found MBH98 and MBH99 to be somewhat obscure and incomplete and the criticisms of MM03/05a/05b to be valid and compelling. We also comment that they were attempting to draw attention to the discrepancies in MBH98 and MBH99, and not to do paleoclimatic temperature reconstruction. Normally, one would try to select a calibration dataset that is representative of the entire dataset. The 1902-1995 data is not fully appropriate for calibration and leads to a misuse in principal component analysis. However, the reasons for setting 1902-1995 as the calibration point presented in the narrative of MBH98 sounds reasonable, and the error may be easily overlooked by SOMEONE NOT TRAINED IN STATISTICAL METHODOLOGY. We note that there is no evidence that Dr. Mann or any of the other authors in paleoclimatology studies have had significant interactions with mainstream statisticians.”
And let me point out that it wasn’t just McIntyre who challenged Mann’s competence and honesty. Even supporters such as Wahl and Ammann, who wrote a paper to bail him out, concluded, “The comparison of the MBH reconstruction, derived from multi-proxy (particularly tree ring) data sources, with widespread bore-hole-based reconstructions … is STILL AT ISSUE in the literature.” They also note, “A further aspect of this critique is that the single-bladed hockey stick shape in proxy PC summaries for North America is carried disproportionately by a relative small subset (15) of proxy records derived from bristlecone/foxtail pines in the western United States, which the authors [MM] mention as being subject to question in the literature as local/regional temperature proxies after approximately 1850 …. It is important to note in this context that because they employ an eigenvector-based CFR technique, MBH do not claim that all proxies used in their reconstruction are closely related to local-site variations in surface temperature.” As Wegman pointed out, bristlecone and foxtail pines are CO2 fertilized and as such cannot be used to directly capture the temperature results. The fact that the AGW proponents still use proxies deemed inappropriate by the literature shows exactly how little evidence they have for their case.
It also seems that Briffa and Cook are tired of Mann’s whining and crying. In their e-mail to Mann (http://www.climate-gate.org/email.php?eid=263&keyword=esper), written after Mann had a fit because the Esper paper showed the existence of the MWP, they wrote, “we have to say that we do not feel constrained in what we say to the media or write in the scientific or popular press, by what the sceptics will say or do with our results. We can only strive to do our best and address the issues honestly. Some “sceptics” have their own dishonest agenda – we have no doubt of that. If you believe that I, or Tim, have any other objective but to be open and honest about the uncertainties in the climate change debate, then I am disappointed in you also.” Briffa is clearly no fan of Mann’s. He was the guy who wrote, “I am sick to death of Mann stating his reconstruction represents the tropical area just because it contains a few (poorly temperature representative) tropical series. He is just as capable of regressing these data again any other “target” series, such as the increasing trend of self-opinionated verbage he has produced over the last few years, and … (better say no more)”
It looks to me as if Mann does not have many friends or admirers even within his own community. Hopefully, the Penn State review will be thorough and transparent and we will finally get an even clearer picture of how the Hockey stick fraud was pulled off.
**********
Steve Hempell (Comment#29394) January 2nd, 2010 at 3:55 pm
and Jim
“that each doubling of CO2 has less a warming effect”
My understanding is that each doubling theoretically has the same temperature effect (~1.2 Deg C) but, of course, it takes more CO2 ie 280 to 560 = 1.2 Deg C
560 to 1120 = 1.2 Deg C
*******
Well, just as soon as I retract the statement, R. Spencer comes up with this:
“And one very real possibility is that the 1 deg. C direct warming effect of doubling our atmospheric CO2 concentration by late in this century will be mitigated by the cooling effects of weather to a value closer to 0.5 deg. C or so (about 1 deg. F.) This is much less than is being predicted by the UN’s Intergovernmental Panel on Climate Change or by NASA’s James Hansen, who believe that weather changes will amplify, rather than reduce, that warming.”
http://wattsupwiththat.com/2010/01/02/spencer-earths-sans-greenhouse-effect-what-would-it-be-like/
Vangel to be clear it’s not my “hyped up temperature profile”.
The CRU temperature profile is certainly hyped up and passed off as legitimate science when it is nothing of the kind. As I wrote before, it is not science because it cannot be reproduced independently from the raw data that was supposedly used by CRU.
As to whether 3% should be reflected in the other 97%… I’m not sure what’s surprising and what isn’t. I do know you need more than hand-waving to prove it one way or another.
The US has about as many temperature stations as the rest of the world combined, so the reconstruction should be taken more seriously, particularly given that there is more data accessible to independent reviewers and that the US did not have discontinuities caused by wars and other factors to the same extent as the rest of the world. If the US data shows no warming, then the rest of the data is suspect until access is provided to it.
My own interpretation is it’s not surprising to find some land regions warming while others are cooling… this is just natural variation at work, and the more regionalized you make the comparison, the larger the net variabilities you will likely find (and the hence the longer you will have to average over to pull out the secular temperature change from anthropogenic CO2 emissions).
I would agree if you are talking about small periods of time but not if you claim that there is a trend that takes global temperatures higher. I also don’t see how one can look at places like China, where a long civil war, a world war, a program like the Great Leap Forward, and the Cultural Revolution created conditions that made it impossible to have continuous records of high quality and ignore the uncertainty. The same would be true of Europe, Latin America, Russia, Central Asia, and most of Africa. Only someone who has no knowledge of history would not question the issue of data quality in most of the world.
And even though the US data is supposedly the best in the world, we still have seen Anthony Watts’ audit show that around 90% of the stations have a bias that is bigger than the claimed warming since the end of the LIA and more than half the stations have a bias that is greater than 2C. Add to that the issue of the UHI adjustment and you have many questions about the claimed warming.
Finally, there are many converging lines of evidence for the Earth warming up since circa 1850.
Of course there are and no sceptics are questioning the conclusion that it is warmer now than it was at the end of the Little Ice Age. The question I asked was is it warmer now than it was in the 1930s. That is important because the IPCC has admitted that human emissions of CO2 before WWII were small and that overall emissions exploded in the 1940s. If I am correct and most of the warming had taken place by the time WWII rolled around it would be hard for me to take the CO2 as a driver of temperature argument very seriously.
Let me note that it is also clear that the warming from 1600 to 1850 was greater than the warming after 1850. That also makes it difficult to link it to CO2 emissions, which is why Mann had to hide both the MWP and the LIA.
You don’t even need surface temperature instrumentation records at all to arrive at that conclusion. The mere fact we were in a Little Ice Age in 1850 and we are not now is sufficient evidence to substantiate this observation.
As I said, the sceptics have never denied the existence of the Little Ice Age. That was Mann and the IPCC. And let me be clear again. It did warm after the end of the LIA. My point is that most of the warming took place by the 1930s and the increase since has not been material. Given the fact that most of the emissions came after WWII that is a problem for the AGW side.
Jim
Not sure of the point you are making here – the effect of a doubling stays the same, does it not? Just 0.5 Deg instead of 1.2.
Usually, people say – a doubling causes ~1.2 Deg C – all things staying the same. Spencer is pointing out that it is very likely all things don’t stay the same.
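The reason each doubling has the same effect is that the temperature response goes with the logarithm of concentration, so equal ratios (not equal ppm increments) produce equal warming. A minimal sketch, taking the ~1.2 Deg C no-feedback per-doubling figure quoted above as an assumption:

```python
import math

def no_feedback_warming(c_ppm, c0_ppm=280.0, per_doubling=1.2):
    """Warming (deg C) above the c0_ppm baseline, assuming a fixed
    temperature response per doubling of CO2 (logarithmic forcing)."""
    return per_doubling * math.log2(c_ppm / c0_ppm)

print(no_feedback_warming(560))   # first doubling (280 -> 560)   -> 1.2
print(no_feedback_warming(1120))  # second doubling (280 -> 1120) -> 2.4
```

Note that going from 560 to 1120 ppm takes twice the CO2 of the first doubling, yet adds only the same 1.2 Deg C increment.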
Vangel, there is plenty of evidence…increase in ocean levels, loss of glacial ice, shift in climate zones…for the current world temperature to be warmer now than it was in the 1930s.
I also think you don’t understand the impact that regional climate fluctuations have in hiding what is admittedly a pretty weak signal so far from AGW.
Your other argument regarding uncertainties in China for example doesn’t in any way substantiate your claim that US surface temperature records can be used in place of world temperature records (which is effectively what you are saying). At most it just means “we don’t know”.
****************
Steve Hempell (Comment#29406) January 2nd, 2010 at 9:59 pm
Jim
Not sure of the point you are making here – the effect of a doubling stays the same, does it not? Just 0.5 Deg instead of 1.2.
Usually, people say – a doubling causes ~1.2 Deg C – all things staying the same. Spencer is pointing out that it is very likely all things don’t stay the same.
***************
I think what you are trying to say is that the approx. 1 C per doubling is still there, but is partially offset by other effects of the presence of the CO2 molecule. That’s my take on it. Of course, most GCMs don’t handle clouds very well either. Those will probably be a game changer.
Benjamin:
You appear to be talking about US temperature averages, which were found to be slightly in error and changed by some small fraction of a degree. That error and correction had an imperceptible effect on global temperature averages, which are displayed in this graph:
http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.lrg.gif
I am amazed how bias causes so many smart people to miss the point. Initially I assumed that I was not being clear enough but after repeating the point over and over again in various ways I have come to the conclusion that it isn’t me but the readers who don’t follow. Let me try again.
First, we have had access to all of the raw US data, which makes it far harder for GISS to play games with the reconstruction than for CRU. The US data has been very clear. It showed that 1934 was the warmest year and that four of the top ten years came in the 1930s. Only three of the ten years that preceded 2007 were in the top ten. That means that the US, which has the greatest number of stations and the best data, has not experienced major warming since the 1930s.
Benjamin:
You can see in it that, globally, every year in the ’00s was warmer than every year 1880-1989, and the average for the ’00s is substantially higher than the average for the ’90s.
There is no doubt that the instrumental record shows the ’00s was the warmest decade on record.
If there is no doubt why do you think that Phil Jones destroyed data and was hiding information even though the FOI Act required that he disclose it?
And what kind of ‘scientists’ would say that there was no doubt when the reconstruction has never been replicated independently? Yours is the argument of an activist, not a scientist.
And before anyone gives me the Gavin Schmidt excuse about everything being there for review, let me note that the released CRU data has already been ‘adjusted’ with no disclosure of what the adjustments were and why they took place. Let me also note that the reconstruction comes after a selection process that drops many stations from consideration. That means that one would need the metadata to figure out what Jones did. For the temperature profiles to be considered valid scientifically we would need all of the information, all of the code, and all of the metadata to be used to replicate the CRU version. Until that happens there is no valid global temperature profile.
“You can see in it that, globally, every year in the ’00s was warmer than every year 1880-1989, and the average for the ’00s is substantially higher than the average for the ’90s.”
Vangel, do you mean that every year in the ’00s was warmer than ANY year 1880-1989?
Vangel, do you mean that every year in the ’00s was warmer than ANY year 1880-1989?
Sorry. I quoted what Benjamin wrote. As I stated, his figure has never been independently reproduced from the original data so it has no scientific value whatsoever. As such, the claim of unprecedented warming is just speculation.
What I pointed out is that the US data, which is available, comes to a different conclusion. In the US the warmest year in the last 100 was 1934 and four of the top ten warmest years were in the 1930s. The analysis clearly showed that the US has not experienced much in the way of warming in the past 75 years.
The claims of major global warming are based on the CRU data and the now discredited dendro studies of Mann, Bradley, Hughes, Jones, Briffa, and a few others. McIntyre and McKitrick already showed that the MBH98 and MBH99 papers, on which the hockey stick graph that was the centrepiece of the IPCC’s Third Assessment Report was based, were flawed. The Wegman committee, which sided with M&M, slammed MBH and the reviewers of the papers as being inadequately familiar with proper statistical methods and not using an independent peer review process. It also noted that the palaeoclimatology community seemed to be using proxies that were incapable of separating the warming signal from other material factors and of cherry picking proxy series to support predetermined conclusions. Subsequent analysis showed that the Polar Ural proxies, which had been used to support the claimed warming, yielded no significant warming signature for the 20th century. It also showed that when the full Yamal data was used no warming appeared and that almost all of the warming came from a single tree.
The CRU e-mails showed that Jones was reluctant to share his temperature data and was willing to destroy it to prevent McIntyre, who had the skills to find errors and evidence of manipulation, from seeing it. They also revealed an attempt to manage the publication of research that contradicted the claims made by Jones and Mann. I am sorry but I cannot give the benefit of the doubt to people who are as petty and deceitful as Jones and Mann appear. To accept the graphs that were provided by Benjamin we need an independent reconstruction from the original data set and full access to the metadata showing how the reconstruction was carried out.
Vangel,
Sorry I suggested that your writing was unclear – but it is good to use quotes and “(sic)” to alert us to the inclusion of others’ words among your own.
And you are very clear. Thanks.
Vangel:
No it doesn’t. I think you are missing an important point, and that’s why it seems to you we aren’t following your arguments.
The surface area of the US is much smaller than the surface area of the entire land surface area of the Earth, so the fluctuations in the US record will be much larger than for the entire Earth.
If you compare the GISS versions for US versus world, in fact, the trends aren’t so very different (Figure).
The main difference is the US is just much noisier, due to the much smaller land mass you are averaging over.
For comparison, here is the listing by average temperature (in °C) in each decade:
Decade US World/Land
1881-1890 -0.246 -0.169
1891-1900 -0.163 -0.269
1901-1910 -0.087 -0.264
1911-1920 -0.281 -0.212
1921-1930 0.080 -0.085
1931-1940 0.462 0.024
1941-1950 0.118 0.020
1951-1960 0.194 -0.002
1961-1970 -0.105 -0.039
1971-1980 -0.089 0.039
1981-1990 0.258 0.281
1991-2000 0.471 0.383
2001-2008 0.761 0.586
The 1930s was an anomalous weather period in the US; it’s not surprising that a period that had “outlier” weather fluctuations would be a bit of an outlier itself. The rest of the difference between the US and world land temperature record can easily be explained just in terms of the difference in the sample size.
No need to invoke a red flag on the GISS folk at all.
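The sample-size point can be sanity-checked straight from the decade table above. A rough sketch (values hand-copied from the table; the standard deviation of decade-to-decade changes is used here as a crude noise measure that largely ignores the long-term trend):

```python
from statistics import stdev

# Decadal average anomalies (deg C), copied from the table above.
us    = [-0.246, -0.163, -0.087, -0.281, 0.080, 0.462, 0.118,
          0.194, -0.105, -0.089, 0.258, 0.471, 0.761]
world = [-0.169, -0.269, -0.264, -0.212, -0.085, 0.024, 0.020,
         -0.002, -0.039, 0.039, 0.281, 0.383, 0.586]

def jumpiness(series):
    """Std. dev. of decade-to-decade changes: a crude measure of
    fluctuation that is mostly insensitive to the secular trend."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    return stdev(diffs)

# US decadal swings come out roughly 2-3x larger than world/land swings.
print(jumpiness(us), jumpiness(world))
```

That is consistent with the claim that the US record is the same signal, just noisier because it averages over a much smaller area.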
Vangel
“Subsequent analysis showed that the Polar Ural proxies, which had been used to support the claimed warming yielded no significant warming signature for the 20th century. It also showed that when the full Yamal data was used no warming appeared and that almost all of the warming came from a single tree.”
Please. You don’t understand Steve’s point WRT Yamal. The trees don’t “show” warming. The trees are not evidence of warming. The temperature record is taken as a “given” in these studies (most of them take it as a given). Then the trees are selected based on their correlation with the record. They are not evidence of the warming. In fact, if they diverge from the warming or don’t correlate with it they are dropped from analysis or truncated (see “hide the decline”). Second, all of the “warming” did NOT come from a single tree. A good portion of the variance was explained by one tree; it’s a problem, but you overstate the case. Go read Steve’s work. The biggest problem with that tree is the 8-9 sigma excursion. That’s beyond the normal increase seen due to temp alone.
Steve’s work on Yamal is very nuanced. You have to be careful when you try to paraphrase what he said, versus what you read.
The sample was too small for the standardization methodology.
One tree explained a large part of the variance.
Other samples from other sites show different patterns.
The methods for site selection and standardization method have not been adequately explained or defended.
Carrick:
You are missing my point. I am questioning the global data set because it has not been made available for independent review. Given the fact that the results that you have been referencing have never been independently replicated I maintain that they have to be rejected as scientifically invalid.
Now you may wish to believe that Phil Jones did everything right and that he chose to hide the data from review for some noble reason, as is your right. But that still leaves you with a problem of relying on unsupported results, which is not very scientific.
Carrick:
Where do you get your data from? Have the validity of the ‘adjustments’ been independently verified by anyone outside of CRU?
Making statements that cannot be supported by results that are independently reproduced does not get you very far. Until the CRU data is released and we know just how the global temperature profile was created we have nothing scientific to make any meaningful conclusion about the extent of warming, if any, over the past several decades. All we can really say is that we are warmer now than we were during the Little Ice Age. You may recall that was the period that the IPCC tried to hide (along with the MWP) in AR-3.
steven mosher:
We may be having two one way conversations here, you may have misunderstood what I wrote, or I may not have been very clear or missed something.
Given that your knowledge is supposedly greater than mine, can you please explain how exactly the temperature record of the 16th century (to pick a period) is a given and matched up with tree ring widths? The way I understand what the palaeoclimatology community is saying is that it can reconstruct temperatures by looking at proxies such as tree rings. To me that means that they are using trees as thermometers.
Please note that I do not intend to be subtle about my point. I simply point out that the dendro people used the Polar Ural proxies to make one claim and dropped it after the set was updated because Briffa’s conclusions could no longer be supported by the new data set. Briffa dropped the Polar Urals and went with Yamal, which created a temperature profile that was primarily influenced by a single tree. According to SM, if the full set was used Briffa’s conclusions would be invalidated.
Feel free to correct me if I am wrong but didn’t the IPCC claim that they could reconstruct the temperature record from the tree ring data? And how else are people supposed to interpret the statement in the link below other than to conclude that tree rings are used to come up with temperature data?
http://www.columbia.edu/cu/pr/96/18926.html
If you are looking for subtle points in my argument let me assure you that you won’t really find them. It is my belief that this whole scam is so removed from valid science that we can falsify it by pointing out the obvious issues that we were first taught in our first science lessons. In case anyone has forgotten let me remind them again of some of the requirements. We were taught that data must be made available so that it can be checked by others to ensure that our method was sound. We were also taught that experimental results must be reproduced by others within the scientific community before they can be considered valid.
The AGW argument falls apart because the results have not been replicated and the data and methods have not been made available for independent review. Without both nobody can reproduce the global temperature profile that the IPCC keeps hyping. That makes that profile as invalid as Mann’s discredited Hockey Stick, which did try to use trees as thermometers.
Vangel:
You were bringing the US GISTemp data set as “proof” there was a problem with the global data set. I’m simply pointing out that the two data sets are consistent with each other, once you allow for sample size. Both data sets are from the GISS online temperature record.
It’s my own opinion that trying to argue that the climate hasn’t warmed since the 1930s is a lot like tilting at windmills. Too many other climate indicators line up with a warming climate for that to make any sense.
Vangel, you’ve got so much going on in your post I don’t know where to start. First, with the press release: horrible writing of the headline. That happens.
Let’s see what else.
“Given that your knowledge is supposedly greater than mine, can you please explain how exactly the temperature record of the 16th century (to pick a period) is a given and matched up with tree ring widths? The way I understand what the palaeoclimatology community is saying is that it can reconstruct temperatures by looking at proxies such as tree rings. To me that means that they are using trees as thermometers.”
The reconstruction works like this.
1. You have a tree ring series or chronology. This is a collection of cores that have been standardized so that you get a long series that starts in, say, 2000 YBP (years before present) and ends, say, in 1990 (for example). This chronology is just a long vector of ring widths that have been standardized. So you have a long series of numbers, let’s say 1,2,1,3,4,1,4, etc. One number for every year. Then you have another series: the temperature series.
Let’s say it’s Siberia, so you may have records from 1900 to present. That’s temperature. Then you line up the temperature and the series. Let’s make it real simple. Let’s say that in 1990 the temperature = 6C. Let’s say the ring width = 12. Now look back in time. Let’s say at year 0 the ring width = 6. What’s the implied temperature? 3C. Now that’s a gross simplification, but you should get the idea. The trees are selected because their ring widths CORRELATE with temperature over the CALIBRATION period.
Lets make up a little dummy example.
Tree rings: 1,3,1,2,3,4
Temp: ?,?,6,12,18,24
Guess the missing temperature.
here’s what the divergence problem looks like:
Tree rings: 1,3,1,2,3,1
Temp: ?,?,6,12,18,24
Crap, tree rings diverged in the calibration period.
So, the temperature in the calibration period (say 1850 to present) is given. The trees are calibrated against it. In some cases researchers will question the instrument record.
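The calibrate-then-project idea in that toy example can be sketched in a few lines. This is a deliberately naive illustration using the made-up numbers above (real reconstructions use standardized chronologies, long overlaps, and verification statistics, none of which appear here):

```python
# Toy numbers from the walk-through above: six ring widths, with
# instrumental temperatures known only for the last four years
# (the "calibration period").
rings = [1, 3, 1, 2, 3, 4]
temps = [6, 12, 18, 24]  # aligned with rings[2:]

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# 1. Calibrate: fit temperature against ring width over the overlap.
slope, intercept = fit_line(rings[2:], temps)

# 2. Reconstruct: apply the fitted relation to the pre-instrumental rings.
print([slope * w + intercept for w in rings[:2]])  # -> [6.0, 18.0]
```

The divergence problem is then easy to see: if the rings stop tracking temperature inside the calibration window, the fitted relation (and everything projected back from it) loses its footing.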
Next:
Please note that I do not intend to be subtle about my point. I simply point out that the dendro people used the Polar Ural proxies to make one claim and dropped it after the set was updated because Briffa’s conclusions could no longer be supported by the new data set. Briffa dropped the Polar Urals and went with Yamal, which created a temperature profile that was primarily influenced by a single tree. According to SM, if the full set was used Briffa’s conclusions would be invalidated.
Whoaa. Way too much here. You need to go read the threads.
Steve’s real questions were this:
1. Does Yamal have enough samples (cores) to use the RCS standardization technique? Probably not; there is no clear literature on the number of cores required. Basically, RCS is used to remove the effects of age on growth by standardizing with a negative exponential fit (or spline, depends). WRT the selection of chronologies, Steve asks for the rationale for dropping or including various series (see the detail on windowed variance, for example). WRT the single tree, you can go look at the posts and see how much variance is explained by that one tree. It’s substantial, but what’s the takeaway? As an analyst I’d be a bit concerned about one core doing that much work, especially when its growth spurt is so anomalous. It’s a cause for concern.
The issue is this. Steve’s work is all about uncovering these decisions and asking why. It may be there is a good reason, maybe not. He also just does the basic work of sensitivity analysis. So it doesn’t invalidate Briffa’s work, but it puts some tough questions to it. It goes to the robustness of the result.
If you make some other analytical decisions (use this data set as opposed to that), then the answer will change. How much? Dunno. Basically it comes down to this: it “looks like” the decisions are made to depress the MWP.
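For what that RCS step roughly does, here is a heavily simplified sketch: fit a negative-exponential growth curve to ring width versus age (via a log-linear fit, an assumption made here purely for simplicity; real RCS pools many cores and often uses splines, as noted above) and divide each width by the expected width for its age:

```python
import math

def rcs_index(widths):
    """Very simplified RCS-style standardization: fit w(age) ~ a*exp(-b*age)
    to one core's (age, width) samples by linear regression on log-widths,
    then return each width divided by its age-expected value. Indices near
    1.0 mean the age-related growth trend has been removed."""
    ages = list(range(len(widths)))
    logs = [math.log(w) for w in widths]
    n = len(ages)
    ma, ml = sum(ages) / n, sum(logs) / n
    slope = (sum((a - ma) * (l - ml) for a, l in zip(ages, logs))
             / sum((a - ma) ** 2 for a in ages))
    intercept = ml - slope * ma
    expected = [math.exp(intercept + slope * a) for a in ages]
    return [w / e for w, e in zip(widths, expected)]

# A core whose growth decays with age: the indices hover around 1,
# i.e. the juvenile growth spurt no longer masquerades as climate.
print(rcs_index([2.0, 1.5, 1.1, 0.8, 0.6]))
```

The sample-size worry then has a concrete shape: with few cores, the fitted curve (and hence every index divided by it) is at the mercy of individual trees, which is exactly the one-tree concern raised above.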
“If you are looking for subtle points in my argument let me assure you that you won’t really find them. It is my belief that this whole scam is so removed from valid science that we can falsify it by pointing out the obvious issues that we were first taught in our first science lessons.”
When you use language like “scam” you really go farther than you have to, and you assume you know the motives of people. You don’t. Second, watch your use of the word “falsify”.
“We were taught that data must be made available so that it can be checked by others to ensure that our method was sound. We were also taught that experimental results must be reproduced by others within the scientific community before they can be considered valid.”
Yes, data and methods must be available. When they are not available, you’ve got some choices.
1. You can trust the scientist
2. You can suspend judgement and tell them to supply the data
3. You can scream fraud.
In my mind 1 and 3 are not justified. Or put it this way: you can’t convince me from the fact that the data is unavailable that I should either trust the scientists or scream scam. I just don’t know. I don’t know what their motives are. I don’t know what the true figure will be. I just know that I can rationally suspend my judgement in the case. Unemotionally, logically, rationally, shrug my shoulders and say “dunno”, show your work. If they blather on about other evidence, I just come back and ask them to stick to the point. What’s the temperature record? Can’t show me? OK, no worries, come back and talk to me when you can. So I don’t need to get all in a lather about fraud and scam and hoax and divining their motives.
************
steven mosher (Comment#29451) January 5th, 2010 at 12:35 am
1. You can trust the scientist
2. You can suspend judgement and tell them to supply the data
3. You can scream fraud.
In my mind 1 and 3 are not justified. Or put it this way: you can’t convince me from the fact that the data is unavailable that I should either trust the scientists or scream scam. I just don’t know. I don’t know what their motives are. I don’t know what the true figure will be. I just know that I can rationally suspend my judgement in the case. Unemotionally, logically, rationally, shrug my shoulders and say “dunno”, show your work. If they blather on about other evidence, I just come back and ask them to stick to the point. What’s the temperature record? Can’t show me? OK, no worries, come back and talk to me when you can. So I don’t need to get all in a lather about fraud and scam and hoax and divining their motives.
*************
After they have repeatedly refused to comply with #2, #3 becomes more and more likely.
What happens when you find that the people who you are asking have refused to allow anyone to look at their data for a decade? Or that they have released e-mails and code in which it is clear that they have ‘fudged’ other data and they engage in cherry picking so that they can produce results to support their own beliefs?
steven mosher writes:
But you see the problem, don’t you? Trees do not respond to some global or regional temperature measure but to local factors that include temperature as just one factor of many. You have no accurate temperature for the trees that supplied you the data, which means that you are just playing statistical games that come up with conclusions that claim a precision that cannot be justified by real world observations. We already know that some trees respond to CO2 fertilization or changes in precipitation patterns, so those trees cannot be used for reconstructions unless you can come up with an accurate way to separate the signals provided by each factor. We also know that dendro data used by Mann provided temperature results that diverged from the results that came from the surface data. As SM pointed out, that is what the ‘trick to hide the decline’ e-mail was all about.
Boy, you seem to be reading different material than I am. Steve is very clear that Briffa has violated his own standards and has made serious errors. I do not have the reference but in one of his posts in the past three months he brought up the Briffa paper in which Briffa discusses the need for sufficient core counts to perform the RCS analysis.
But that is not the point I was making. My point is that the dendroclimatology community claims to be able to use trees as thermometers and use tree-rings to reconstruct temperatures. That is how the IPCC sold the hockey stick graph. People were told that trees were like thermometers and could tell us the temperatures in the past. But it was clear that this was not the case because the Mann tree-ring analysis managed to miss both the Little Ice Age and the Medieval Warm Period. Instead of pointing out that Mann missed something, Briffa, Jones and Osborn defended his methods, which means that they were not exactly competent to figure out what went wrong. Well, if they were not competent then why are we supposed to think them competent now?
And let me reiterate that it is very clear that there was a divergence between the tree ring reconstructions and the surface record over the past 50 years. That makes the entire notion of using trees to tell temperatures with any degree of certainty very questionable.
And let me repeat. You are reading something into my statements that I did not intend. It was never my intention to discuss the finer points of SM’s work because I do not think it necessary to look into the subtle points that he makes when the deficiencies of the AGW proponents are so obvious. They have not demonstrated the level of knowledge required to do the work that they want to do with the precision that they claim. The use by Briffa of the YAD06 core provides further evidence of incompetence. And even if the dendro people were competent, their failure to allow independent analysis of their work and an independent reconstruction of their temperature profiles turns their work into fictional narrative rather than legitimate science. The same is true of the CRU data-keepers, who refused an independent review of the raw data prior to the adjustments and a review of the station selection process and the code. Of course, given the fiasco revealed by the CRU whistleblower it probably made sense for them to hide their code from anyone competent.
I don’t have to know the motives to recognize a scam. When researchers destroy data rather than comply with FOI requests, talk about suppressing legitimate research that does not come to the same conclusions that they have published, and write codes in which they use ‘fudge factors’ to change the results I think that the word scam is appropriate.
Sorry, but the CRU people hid the data for nearly a decade, and once it became apparent that the FOI Act would force them to share, they claimed to have destroyed the original data and that we would have to settle for their adjusted data instead. They also wrote e-mails in which they advised each other to destroy data and e-mails pertaining to their work so that SM could not get another scalp by figuring out their errors as he did on several occasions.
I do not know about you but I will not trust those people until after all of the original data, metadata, and code are released.
Carrick :
There is nothing consistent between the CRU temperature profile and the US data. In 2007 GISS admitted that the US data has 1934 as the warmest year and that four of the top ten years were in the 1930s versus only three in the previous ten years. The CRU data has the 1990s substantially warmer than the 1930s.
Vangel:
I checked this assertion and you appear to be wrong.
Here I’ve averaged the 5×5 gridded HadCRUT data over the continental United States and compared it with GISTemp:
[Figure: 5×5-gridded HadCRUT averaged over the continental US, compared with GISTemp]
They seem pretty darn close to me.
You have failed to recognize that averaging over the globe versus over a subset of the land surface can significantly affect both the bias in the temperature trend and the amount of variance observed in the data.
Vangel
“But you see the problem, don’t you? Trees do not respond to some global or regional temperature measure but to local factors that include temperature as just one factor of many.”
Yes, of course. They respond to many factors. The typical correlation that dendros use to decide is around 0.4. The amount of variance explained by temperature drives the confidence interval. The real problem is the problem of U-shaped responses.
Can I suggest that you read some threads at CA.
“You have no accurate temperature for the trees that supplied you the data, which means that you are just playing statistical games that come up with conclusions that claim a precision that cannot be justified by real world observations. ”
Ahem. I’m trying to explain to you the theory which you clearly did not understand. Errors or bias in the temperature will of course lead to errors and biases in the reconstruction. It’s not a statistical “trick”; it’s a statistical method. One of the issues that Mc has is that Mann and others use NON STANDARD approaches. With standard approaches (please see the posts by UC on CA) you get floor-to-ceiling CIs. Again, it is one thing to point out limitations (as Mc does) and quite another thing to try to throw out a whole science with arm waving. The fact that you can reconstruct temps is shown quite handily in studies that have a calibration period (to create the model) and a verification period to test the model.
For example: you have temps for 1850 to 2000. You calibrate on 1925-2000, build a model, and retrodict 1850-1925. Then you check (verify) how well your model worked in the verification period.
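A toy sketch of that calibrate/verify workflow. Everything here is synthetic: `proxy` is a made-up stand-in for a ring-width series, not any real dataset, and the noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "truth": 160 years of temperature anomalies, 1850-2009
years = np.arange(1850, 2010)
temp = 0.005 * (years - 1850) + 0.3 * rng.standard_normal(years.size)

# A made-up proxy that tracks temperature plus its own noise
proxy = 2.0 * temp + 0.5 + 0.6 * rng.standard_normal(years.size)

# Calibrate on 1925-2009: fit a linear model from proxy to temperature
cal = years >= 1925
slope, intercept = np.polyfit(proxy[cal], temp[cal], 1)

# Retrodict the held-back period 1850-1924...
ver = ~cal
recon = slope * proxy[ver] + intercept

# ...and verify against the withheld truth
rmse = np.sqrt(np.mean((recon - temp[ver]) ** 2))
r = np.corrcoef(recon, temp[ver])[0, 1]
print(f"verification RMSE = {rmse:.3f} C, r = {r:.3f}")
```

Those verification statistics (RMSE and r here; RE/CE in the dendro literature) are exactly what McIntyre keeps asking for: they measure skill on data the model never saw during fitting.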
“We already know that some trees respond to CO2 fertilization or changes in precipitation patters so those trees cannot be used for reconstructions unless you can come up with an accurate way to separate the signals provided by each factor. ”
Well, not exactly true, but I suspect you’re not interested in a nuanced view of things. When I first looked at this I was like you. Then, after reading CA and the literature, I came to understand that it is not as black and white as you think.
“We also know that dendro data used by Mann provided temperature results that diverged from the results that came from the surface data. As SM pointed out, that is what the, ‘trick to hide the decline,’ e-mail was all about.”
I’m sorry, you are talking about Briffa not Mann. I’m well aware of the divergence problem and hiding the decline. But please note this was a briffa dataset not Mann. Again, go read the orginal briffa articles on this, also rob wilsons follow up. The divergent trees are a problem thats for sure. But you really dont seem to understand who what and why.
“not exactly true”, “a nuanced view of things” and “that it is not as black and white as you think”
This is what smart people like to say when they have no knowledge to refute a point that someone just posted. 😉
Andrew
Andrew KY.
There are of course challenges with reconstructing temperatures from tree rings. If you read CA, if you read the basic literature, you will see what those challenges are. It’s not a black and white thing. It’s a probable thing. The mistake, I think, that skeptics make is that they throw the whole baby out with the bath water and lose credibility. So, whereas Vangel or you might say “there is no way to reconstruct temperatures,” someone like Steve or myself would say this: if we take the science of tree rings at its word, what kind of precision can you get in a reconstruction using standard methods? Well, it depends. Or to put it another way, someone like Mann could argue that he could reconstruct the temperature to ±0.5 C. Someone like me would say: if you use standard methods you get a figure more like (say, for example only) ±2 C.
You see the first thing you do is you accept the ruling paradigm and then show how even within the ruling paradigm the answer is “wrong.” Then you can go attack the whole science if you want to or need to. But taking down a whole science requires more than a paragraph from a non expert. So Again, read the CA threads.
It’s best probably to read comments by bender, long time CA regular.
I will link to a nice little study that shows you how it is done.
But the basic approach is to CALIBRATE and then VERIFY.
So, if you read Steve Mc you will see that one thing he wants to see is verification statistics. It’s the kind of statistic that Mann and others would not supply. In the study linked below they supply these stats, so you can see how well the model works.
See figure 4.
With a little education your argument becomes STRONGER.
http://www.clim-past.net/5/661/2009/cp-5-661-2009.pdf
Steven Mosher,
Thanks for the response, but I can’t get past the first sentence.
“There are of course challenges with reconstructing temperatures from tree rings.”
This has left me to wonder why it is incumbent upon me to find meaning or value in a method or science with multiple “challenges”.
It might be personal fun for a person with extra time on their hands, like solving a crossword puzzle, but what good is it? And if you don’t know the answers and there’s no answers in the back of the book to check against, should anyone care that you think you got 2 down or 8 across?
Andrew
Carrick:
Where exactly is the warming in your reference? Both sets of data (Hadley and GISS) show that the 1930s were warmer than current temperatures!!! That means that there is no support for the claim of unusual warming over the past eight decades, which is the point that I am making.
And where exactly is the temperature reconstruction that you are showing found in the IPCC report? I certainly don’t see it in there; the IPCC claims substantial warming since the 1930s because that is the only way it can push the AGW theory.
What I see is not anything like what you just referenced but more like the graphs below.
http://www.cru.uea.ac.uk/cru/data/temperature/nhshgl.gif
Vangel:
Please keep track of your own arguments. You said
I just showed there was, and concluded this with the comment…which is obviously true..that:
Perhaps Andrew_KY can now explain that this is something smart people “like to say when they have no knowledge to refute a point that someone just posted.”
Also:
Are you parsing anything people are saying? Let me know if you aren’t so I don’t waste any more time.
What I said was:
Questions?
“Perhaps Andrew_KY can now explain that this is something smart people ‘like to say when they have no knowledge to refute a point that someone just posted.’”
I was describing three sloganish things that Steve Mosher said.
Why are you asking for explanations from the guy that’s supposed to be the student here concerning things he didn’t know he was supposed to comment on?
Andrew
Steve Mosher,
” But taking down a whole science requires more than a paragraph from a non expert. So Again, read the CA threads.
It’s best probably to read comments by bender, long time CA regular.”
I believe Bender has stated that no one has “proven” that trees are reliable or even unreliable thermometers.
The burden of proof has to be on the other guy. Where are the papers showing that a particular species on its survival edge does a reasonable job of tracking ambient temperature through its growing season?
At this point in time we have a group of people pushing the reliability of a new branch of an older science with little work to support it. Until they have built a somewhat reliable literature I can dismiss it with a snort and a guffaw.
Also here’s a comparison of US land decadal average:
Again, even within the US record, clearly we are “seeing warming” over the last two decades.
Comments:
1) Last decade average is approximate (I only had 11 months in the version of the GISTemp record I used).
2) The CRU US land temperature records are derived by me using 5°x5° data. This could be improved some.
3) The CRU data set were shifted to match the anomaly baseline for GISTemp. The shift amount was +0.118°C.
4) Andrew_KY, I was just poking at you a bit. IMHO, you’re too ready to dismiss arguments that are contrary to your prior assumptions is all.
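Point 3 above (shifting one anomaly series onto another’s baseline) is just a constant offset computed over a common reference window. A minimal sketch with made-up series: `rebaseline` is a hypothetical helper, and the 0.118 °C offset is only echoed here as toy input, not derived from the real data.

```python
import numpy as np

def rebaseline(years, vals, target_years, target_vals, base=(1961, 1990)):
    """Shift `vals` so its mean over the baseline window matches the
    target's mean over the same window. Hypothetical helper."""
    m = (years >= base[0]) & (years <= base[1])
    tm = (target_years >= base[0]) & (target_years <= base[1])
    shift = target_vals[tm].mean() - vals[m].mean()
    return vals + shift, shift

# Toy series: identical climate signal, offset baselines
yrs = np.arange(1950, 2010)
giss = 0.01 * (yrs - 1950)   # pretend GISTemp anomalies
cru = giss - 0.118           # same signal on a different baseline

shifted, shift = rebaseline(yrs, cru, yrs, giss)
print(f"shift applied: {shift:+.3f} C")
```

Because the toy series differ only by a constant, the recovered shift equals the offset exactly; with real data the shift depends on the chosen baseline window, which is why baseline choice matters so much in these comparisons.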
Kuhnkat,
If we say bender’s name three times he is bound to appear. Right now he is cleaning my pool. bender bender bender.
Crap.
What question would we ask him?
Can we dismiss tree ring ology outright or is there a substantial body of research supporting the general contention that tree rings can under certain circumstances be used to reconstruct local past climates with some degree of accuracy?
bender bender bender
http://climateaudit.org/2007/04/03/1322/
here he is in person arguing just as I do about eliminating weak arguments
“What one *thinks* is largely immaterial. That you think it ‘more and more’ is equally unworthy of publication. It’s what the data have to say that matters. And when there is a lack of data, which is the case here, the focus must be on quantifying the uncertainty. You keep repeating that temperature reconstruction is not possible, and this is just wrong. It may be impossible to reconstruct temperature with absolute certainty. But nobody’s claiming that. Let’s grow up a little.
Fighting AGW alarmism with such vague skepticism does no good. What did Steve M say about trying to eliminate the weakest link from your arguments? You always go one step too far, jae. Temperature reconstruction with tree rings is possible. The question is how accurate these reconstructions are.”
Now bender, get back to pool duty. the book is off to the editor. I spent about 90 minutes on the phone with Mcintyre. Exhausted I am.
Long ago bender told me to read the whole blog at CA. to make sense of climategate I had to. As a result, I realize now that I should have listened to bender long ago.
bender bender bender
http://climateaudit.org/2006/08/04/survivorship-bias/
http://climateaudit.org/2007/04/01/more-on-positive-and-negative-responders/
bender bender bender
Posted Apr 1, 2007 at 3:34 PM
How does one interpret a 0.65 correlation with NH temps, but a maximum 0.315 correlation with concurrent growing season temps?
Cautiously?
Look folks, a correlative model is not an overfit model. Fact: [some] treeline conifers are [somewhat] reasonable temperature proxies. It’s been proven too many times in too many places in too many species for the correlation to be random chance alone. There is no uncertainty here. So don’t waste your brain cells doubting that categorical fact. The uncertainty is in regard to the degree of accuracy of the reconstructions (which ARE based on overfit models). If temperatures are now warming to the point that precipitation is now limiting treeline conifer growth, then all it means is that better models are required to get more accurate reconstructions. This might affect reconstructed temperatures during the MWP, but it is not going to change the nature of the debate over the size of A in AGW. The temperature trend may or may not be “unprecedented”. But whether it is a problem or an opportunity is a totally different question.
“Fact: [some] treeline conifers are [somewhat] reasonable temperature proxies. It’s been proven too many times in too many places in too many species for the correlation to be random chance alone. There is no uncertainty here.”
Steven Mosher,
OK, which tree species are good thermometers and which ones aren’t? And how accurate are the good ones?
Since you assert that there is no uncertainty, you should have a list of tree types and numbers on how accurate they are at your disposal. Certainty. Let’s have it.
Andrew
This argument between stephen and Andrew_KY is amusing, but it’s probably not going to get anywhere when statements like “[some] treeline conifers are [somewhat] reasonable temperature proxies” are equated with “no uncertainty” and then challenged on that basis.
But, just to stir the pot: Some trees can be good indicators of past CLIMATE, not temperature. At this time, we cannot completely separate the signal generated by precipitation from that of temperature. The controversy arises from intentionally ignoring other influences on tree growth in order to force trees to become thermometers. It just isn’t going to work. We can make interesting hypotheses about past temperature, and use other types of data to bolster them. But trees are not thermometers, any more than they are rain gauges.
Tamara,
I like it when people stir the pot. 😛
Andrew
But some Snarks are Boojums?
Tamara and Andrew.
I’m not interested in litigating this. Here is how I view things.
The “court” we operate in accepts the principles of dendrochronology. Past climate, either precipitation or temperature, can be reconstructed from a study of the growth properties of trees. Imagine, if you will, that you are in a court that accepts DNA evidence. As a lawyer what do you argue:
1. chain of custody.
2. were the statistics calculated correctly.
You don’t go after the whole science. Not because you couldn’t (theoretically) but because you can make the same case on narrower grounds. You don’t have to disprove DNA; you just have to show the chain of custody (like the provenance of the rings) is suspect. So if you want to argue against tree rings, go hunt down my expert witness bender.
Here is another way to look at it.
I have a temperature series from 1850 to 2009 for a gridcell in country X. Got that? I have a pristine temperature record from 1850 to 2009. I hand you the temperature data for 1925 to 2009.
That’s all I give you. Got that? I hold back 1850 to 1924.
Now, with no other information to go on, I ask you to guess at the temperatures for 1850 to 1924. You can’t duck the question; you have to guess. It’s a quatloo bet. How do you bet? Andrew, will you bet that you can get closer to the held-back data than Tamara? Tamara, how about you?
Now, I offer you both tree ring data from 1850 to 2009.
Would you use it to make your bet?
Tamara, if I gave that data only to Andrew, would you feel disadvantaged? Andrew, if Tamara had the data and you didn’t, would you demand odds to bet?
Steven Mosher,
I think my questions to you are valid and you have done nothing to address them.
Andrew
It’s hard to sensibly argue with Mosher’s position here. All proxies will have noise. Some proxies will be better or worse due to local effects. The CA plow through the lake varves last year was a great example of localized and temporal noise and signal spikes from weather, climate, geology, hydrology, geography, etc. This is why the metadata is so important. Think of the thermometer temperature record as a proxy as well. Nothing is a perfect tool, some are better than others and most (but not all) are better than nothing.
In any profession, objective evaluation of metadata is the bull$hit detector. This is what makes all of the raw data, methods and code the key to understanding. We all hope Mosher’s mantra will be realized. Until then, it’s too easy to arm-wave. Unfortunately, the highly negative economic and environmental justice impacts from fetishistic climate science cause emotional debate.
I investigated the question of whether the Mann 2008 proxies contain any signal here.
My results show that there is indeed a common signal in the data, but that the noise is about five times as large as the common signal … which makes digging the signal out both difficult, and subject to large error.
There is a larger problem, however, one which is more difficult to solve. My analysis says nothing about what the signal represents. Does it show temperature, or moisture, or cloudiness, or (most likely) some combination of the above that varies over time? At present, I see no way to tease temperature out of that mix in a very noisy signal.
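Willis’s signal-to-noise point can be illustrated with synthetic proxies: if each series is the common signal plus independent noise five times as large, averaging N of them cuts the noise by √N. This is a sketch under those idealized assumptions; real proxy noise is neither independent across sites nor stationary in time.

```python
import numpy as np

rng = np.random.default_rng(1)

n_years, n_proxies = 200, 100
signal = np.sin(np.linspace(0, 6 * np.pi, n_years))  # the common signal

# Each proxy = signal + noise with 5x the signal's standard deviation
noise_sd = 5 * signal.std()
proxies = signal + noise_sd * rng.standard_normal((n_proxies, n_years))

# One proxy barely correlates with the signal...
r_single = np.corrcoef(proxies[0], signal)[0, 1]

# ...but stacking 100 of them shrinks the noise by a factor of 10
stack = proxies.mean(axis=0)
r_stack = np.corrcoef(stack, signal)[0, 1]
print(f"r (one proxy) = {r_single:.2f}, r (100-proxy mean) = {r_stack:.2f}")
```

Even then, the recovered curve only tells you there is a common signal; as Willis says, nothing in the arithmetic labels that signal “temperature.”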
Howard,
“Until then, it’s too easy to arm-wave.”
Like you (and Steven Mosher) have done. i.e..-
“Nothing is a perfect tool, some are better than others and most (but not all) are better than nothing”
I almost died of Staggeringly Irrelevant Generalizations when I read that. 😉
Andrew
Carrick:
And what I said was that the US data did not agree with the CRU (global) data. CRU had the world warming while the US was not warming. Now you can claim that CRU must be right because its version of the US data also shows no warming. But I note that we will not know if it is right until we can look at the ‘adjustments’ made by CRU to the original global data. Unlike the US, which is covered with stations the globe is missing a great deal of data that is conveniently filled in by CRU ‘scientists’ to come up with some made up temperature. Conveniently, the ‘adjustments’ come up with warming that fits the theory that the CRU advocates have been pushing.
The problem is that for something to be scientifically valid it must be reproducible and on that count the CRU temperature profile fails because nobody has been able to produce the same type of chart from the original data.
I get particularly pissed off when I read parts of the leaked code that go like this:
See where the global warming signal comes from?
Carrick:
We seem to be having two one-way conversations. When I write, “There is nothing consistent between the CRU temperature profile and the US data,” I am talking about the difference between the US temperature profile, which comes from the somewhat accessible GISS data set, and the CRU global temperature profile, which shows significant warming but comes from data that is unavailable for review by independent sources. In fact, Phil Jones, after stalling for years when other scientists asked CRU to make the data available, claimed to have destroyed the data after the FOI Act made a loss inevitable.
The fact that there is little difference in the US profile between the two sets is not relevant; what matters is the difference between the US temperature record, which shows no warming since the 1930s, and the CRU global average, which shows substantial warming. To me such a major divergence means that there is something wrong, particularly in light of work done by scientists who are showing that the temperature profiles shown in the IPCC report for places like Northern Europe do not agree with the profiles reconstructed using the available data for the region. Add to that Willis’s excellent review of the manipulation of the Australian data at Darwin and we have serious questions about the credibility of CRU.
Andrew_KY (Comment#29509) January 7th, 2010 at 12:33 pm
Steven Mosher,
I think my questions to you are valid and you have done nothing to address them.
Andrew
You are welcome to your opinion. However, you and I have a history of discussing epistemology and other subjects. In my opinion you have an inability to “reason together.” You see, I could reason with bender and shift my opinion. Willis and I reason together. Boris and I reason together. Carrick, kuhnkat, nick stokes, johnV, anthony, lucia, steve mcintyre: I’ve experienced “reasoning together” with all of them. I have not seen that capacity in you. Sorry. If you want to flat out reject all possibility whatsoever that a tree ring, or latewood density, or isotope could be used to provide an estimate of past climate, then we simply can’t reason together.
“Sorry. If you want to flat out reject all possibility whatsoever that a tree ring, or latewood density, or isotope could be used to provide an estimate of past climate, then we simply can’t reason together.”
I am open to the possibility. Why don’t you tell me how it’s done and I’ll read.
Andrew
Willis Eschenbach (Comment#29512) January 7th, 2010 at 3:06 pm
I think the more general question, Willis, is this: is it impossible to reconstruct a temperature signal? I think when we look at studies that have both a calibration period and a verification period (I cited one above), one can observe that, as bender notes, there are cases where a temperature signal can be reconstructed.
Put in my terms: if you are reconstructing temperature, there are cases where a ring width signal will provide a more skillful retrodiction than a mere guess. But the devil is in the effin details. Oh, and not likely in a Mann dataset.
Hey, I left a msg for you over at WUWT on the BBC thread relative to some raw data from the observatory in Ireland. Looks like the perfect opportunity to look at a site that goes back to pre-1800. Drop me a line. BTW, the book is done. I spent about 90 minutes talking to Mc. Thanks for all your help, bro!
PS willis,
I put you in the acknowledgements along with bender, UC and bunch of the whole sick crew. ink is cheap, especially on the kindle version.
Vangel,
I have just produced a graph of the US temperature data from here:
http://climvis.ncdc.noaa.gov/cgi-bin/cag3/hr-display3.pl
and compared it with a graph of global temperatures data from here:
http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/annual
Both are taken over the same period, 1895 to 2008. (That is how far the US data seems to extend).
The graphs are of very similar shape.
Taking a simple trendline over the period (and remembering that the US data is in Fahrenheit and the Hadley data is in Celsius), I get slopes of .0066 C per year for the global data and .0124 F for the US data. Converting, .0124 F equals .0069 C. Thus, over the period the data suggest that the US has actually warmed slightly faster than the globe, albeit this is not statistically significant.
(Note that this is consistent with other datasets over shorter periods – for example, the UAH data has the continental United States warming significantly faster than the globe as a whole since 1979).
The R^2 value for the US slope is, obviously, smaller than the R^2 value for the global slope: for the US, it is .218; for the globe, it is .7137. However, the US slope still appears to be statistically significant, even taking autocorrelation into account.
So I think your claim of inconsistency between the two datasets is not substantiated.
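The slope comparison above is easy to reproduce in outline. This sketch uses synthetic series with the quoted trends built in (not the real NCDC or Hadley data), just to show the linear fit and the Fahrenheit-to-Celsius conversion of a trend:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1895, 2009)

# Synthetic stand-ins with the quoted trends built in (made-up noise)
us_f = 52.0 + 0.0124 * (years - 1895) + 0.8 * rng.standard_normal(years.size)
gl_c = 0.0 + 0.0066 * (years - 1895) + 0.15 * rng.standard_normal(years.size)

us_slope_f = np.polyfit(years, us_f, 1)[0]   # deg F per year
gl_slope_c = np.polyfit(years, gl_c, 1)[0]   # deg C per year

# A temperature *difference* (hence a trend) converts by 5/9, no offset
us_slope_c = us_slope_f * 5.0 / 9.0
print(f"US: {us_slope_c:.4f} C/yr, global: {gl_slope_c:.4f} C/yr")
```

Note the conversion: the 32-degree offset in the Fahrenheit scale cancels out of any difference, so a trend only needs the 5/9 factor.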
Oops: the link does not work, so here is another one:
http://www.ncdc.noaa.gov/oa/climate/research/cag3/na.html
Vangel,
Having just now seen the ‘since 1930’ point in your post, I checked the data since that start date. While it is not statistically significant, the slope since then is still positive, albeit lower than the global slope since that period – the US slope is .0057 C per year, with the global slope being .0074 C per year.
It seems to me that there is still no evidence that the two datasets are inconsistent. For you to make that claim, there would have to be a statistically significant difference between the two datasets. There is not, as the error margin for the slope for the US data easily encompasses the slope for the global data.
Vangel:
And I explained to you that the US data set is obviously a subset of the global land surface data, and you can’t directly compare them.
In general, selecting a subpopulation has two effects, it introduces a net bias to the mean, and it increases the variance of the data set. Both of these are observable in the comparison of US versus world land temperature data from GISTemp.
And when you take the 5°x5° gridded CRU data and average just the subset of that data over the US land surface area, you get virtually the same result as you do with the GISS US land temperature average.
Again, the only way to test for consistency is to use the same data set. Period. Doing it using the same subset of the Earth’s area is not only relevant, it’s the only correct way to test the consistency of the two data sets.
Also, are you really saying “global CRU dataset”??? Are you not aware that land heats and cools substantially more than the ocean surface? At a minimum you need to compare the global CRU land surface data, not the entire CRU data set. But even that is frankly a boneheaded thing to do, for reasons that I gave above.
The only thing that is wrong is your understanding of how one compares results taken using different sample populations.
While I agree with your comments on reproducibility, it’s separate from whether the sets happen to be consistent or not. One can be correct and proprietary at the same time.
David Gould:
I’ve checked this too and I agree they are consistent. Because there are large temporal correlations in the data sets, it’s better to subtract the two from each other than to compare the trends separately.
However, even if they were inconsistent, that wouldn’t mean there is a problem, because comparing them in the first place is a nonsensical thing to do.
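Differencing first works because the two records share most of their year-to-year variability. A toy sketch (synthetic data with a deliberately shared noise term standing in for common weather):

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1895, 2009)

# Two records of the same region share most year-to-year "weather"
common = 0.5 * rng.standard_normal(years.size)
a = 0.007 * (years - 1895) + common + 0.05 * rng.standard_normal(years.size)
b = 0.007 * (years - 1895) + common + 0.05 * rng.standard_normal(years.size)

# The shared variability cancels in the difference, so the trend of
# (a - b) is a far more sensitive consistency test than comparing
# the two trends separately with their large individual error bars
diff_slope = np.polyfit(years, a - b, 1)[0]
print(f"trend of (a - b): {diff_slope:+.5f} C/yr")
```

With the shared term cancelled, the residual scatter in the difference is an order of magnitude smaller than in either series, so even a small real divergence in trend would stand out.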
steven mosher (Comment#29519) January 7th, 2010 at 5:38 pm
There are a few very fundamental problems.
1. Both excess heat and excess cold result in narrow rings. Thus the system cannot be inverted. Not sure how you can deal with this.
2. Temperature is not independent of moisture w.r.t. plant growth. If a plant has lots of water, it will grow happily at a temperature at which it will shrivel if there is not enough water.
The usual response to this one is to choose sites at high altitude. Unfortunately, at high altitude the air is often dry, and the soil is often poor. As a result, the assumption that the growth is “temperature limited” is often not justified.
3. If you take a look at just about any cross-section of a tree, you’ll see that many of the rings are wide on one side and narrow on the other.
Taking two samples per tree can help with this, but can’t solve it. And in many cases only one core is taken.
4. In general, the high-altitude sites used for tree-ring proxy analysis don’t have nearby weather stations. This introduces further errors into the reconstruction.
As I said above, I do think there’s a signal in there … but digging it out leads to huge confidence intervals if you include all of the possible errors.
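Point 1 above (both excess heat and excess cold give narrow rings) is worth a toy illustration: with a response curve that peaks at some optimum, one ring width maps back to two candidate temperatures. The Gaussian response and its parameters here are invented for illustration, not taken from any dendro model.

```python
import numpy as np

# Hypothetical inverted-U response: ring width peaks at an optimal
# temperature and falls off symmetrically on BOTH sides
def ring_width(temp_c, optimum=12.0, scale=6.0):
    return float(np.exp(-((temp_c - optimum) / scale) ** 2))

# A cold year and a hot year, equidistant from the optimum,
# produce exactly the same narrow ring
cold, hot = 4.0, 20.0
w_cold, w_hot = ring_width(cold), ring_width(hot)
print(w_cold, w_hot)  # identical: width alone cannot be inverted
```

Since the map from temperature to width is two-to-one away from the optimum, the inverse problem has no unique solution without extra information.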
I responded there, and said:
Finally, you say:
Where & when can we buy a copy? Support your usual suspects …
Andrew_KY (Comment#29518) January 7th, 2010 at 5:37 pm
I am open to the possibility. Why don’t you tell me how it’s done and I’ll read.
Andrew
So now you demand spoon feeding. I hope you liked the pablum served up.
Willis that was a nice summary. I would have added
5. Tree ring growth reflects the warm part of the season.
The way you get around the U-shape is not in my opinion to go to the treeline (that works for some time periods, but not for all, as the divergence problem illustrates). What you do is select different collocated species that have different optimal temperatures (temperature at which the growth rings are maximized). That will allow you to disambiguate the double-sided growth curve for each species.
Steven Mosher,
” Fact: [some] treeline conifers are [somewhat] reasonable temperature proxies. It’s been proven too many times in too many places in too many species for the correlation to be random chance alone. There is no uncertainty here.”
This is quite pointless. The ONLY way it can be currently established that a particular tree or group of trees have a reasonable correlation is by comparing them to a KNOWN TEMPERATURE!!! There are no tables of metadata for selecting those treemometers in the past where there is no reliable temp record.
Until there are, we can assume SOME trees are treemometers and STILL not be able to make a reconstruction that has any meaning.
That signal STILL may be more water vapor, CO2, bear fertilisation, solar insolation… OR ACCIDENT!!!
Being able to see a MAYBE signal is the BEGINNING of the investigation. Now someone needs to start working on the biochemistry of the variety under various conditions until there is more data than just ring width, diameter, and density. Where are the chemical analyses of the composition? Number of branches on which sides, with what diameters and heights from the ground… Positions and sizes of other trees close enough to influence. Land grade and composition… If we do not have detailed growth information on trees we can have full meta-data on, how the heck can we extrapolate to a tree that grew under mostly unknown conditions??
Another way of looking at it is, what are the odds that any particular tree is a treemometer based on a survey of all the trees in an area where there IS one or more tested treemometers!!! From my limited exposure it is not good. How is this useful in selecting treemometers in the past?
With few valid statistics you have nothing but confirmation bias and guessing.
Imagine you have a 30% chance that the trees in an area are treemometers. You sample and graph all the trees. You are lucky and they ALL graph with similar wiggles, but the endpoints span a range of -2 to +5. Do you select the high, middle, or low range?? Without more data you have little.
Carrick (Comment#29530) January 7th, 2010 at 10:44 pm
Carrick, you raise an interesting possibility. If temperature were the only thing affecting the growth rates, and if we had accurate growth vs. temperature curves for both species, that could work in theory.
Unfortunately, in general neither of those is true. The ring width function is not
Ring Width = f(Temperature)
It is at a minimum
Ring Width = f(Temperature, Moisture)
So with two variables, to disambiguate them, you’d need at least three different species to compare … but that’s only if ring width is a function of two variables. We also have the question of ring width depending on cloud cover, and fog/mist, and timing of the variables, and sensitivity to early/late frost, and snow vs rain, and …
Again, that doesn’t make it impossible. It just makes the confidence intervals run from the floor to the ceiling.
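As a purely hypothetical sketch (the growth function below is invented, just an inverted-U response), the non-invertibility is easy to see: a too-cold year and a too-hot year can produce identical ring widths, and a moisture term makes the ambiguity worse.

```python
import numpy as np

def ring_width(temp_c, moisture=1.0, optimum=12.0, spread=6.0):
    """Hypothetical growth response: widest rings at the optimum
    temperature, narrower on both the cold and the hot side;
    moisture simply scales growth (also invented)."""
    return moisture * np.exp(-((temp_c - optimum) / spread) ** 2)

# Two temperatures symmetric about the optimum give the same width,
# so width alone cannot be inverted to a unique temperature.
w_cold = ring_width(8.0)
w_hot = ring_width(16.0)

# A wetter hot year differs again, confounding temperature with moisture.
w_wet_hot = ring_width(16.0, moisture=1.2)
```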
Willis, I had thought of that issue too, of course. Just didn’t want to cloud the idea by throwing too much into it at one time…
In practice, you would need to reconstruct both temperature and moisture, possibly using a separate proxy for rainfall.
Nice post Willis.
Looking at the data from Ireland and the supporting paper, it looked like there were 3 different series… I was thinking about your first-differences approach. They also had cooked up their own TOBS formula…
It might be an interesting study to compare raw data versus final CRU figures. The history of the site is just the kind of data/documentation that you/anthony/steve/me love.
WRT tree rings. Like bender I get a little annoyed at the people who scream impossible without looking at the problem.
Being nuanced in this debate ( as steve can tell you) is bloody hard.
Arrg sorry Willis.
Timing on the book. We are shooting for before next friday.
Over the next few days I think some blog pieces will start to appear from other people. Bit by bit I will lay out the untold story from Nov 12th to Nov 19th when I did the first post here at Lucia’s
Tom’s handling all the marketing. I’m trying to put together a blog piece or two based on material I didn’t use.
Mc is driving me crazy. Just when I plow 3 feet into the pile of manure and pull out a small gem, he plows 6 feet deep and yanks out the Hope Diamond.
steven mosher (Comment#29534) January 8th, 2010 at 12:24 am
I don’t say impossible. I say really wide confidence intervals.
kuhnkat (Comment#29531) January 7th, 2010 at 10:56 pm
I think you misunderstand me. I said bender was a believer in ringometry. You said he wasn’t. The quote I gave you was bender’s.
Rather than address your long post ( I’m not bender) I’ll just repeat my question. I hold in my hand a temperature record that runs from 1850 to 2009. I give you the portion from 1925 to 2009.
I ask you to reconstruct the temperature back to 1850. With no further data from me. What do you do?
Now I select a tree ring series that is correlated with temperature from 1925 to 2009. It goes back in time to 1850.
Would you use it to guess the temperature back to 1850? yes or no.
That’s the threshold question. I suppose bender would say “it depends”.
I suppose you would say no. I suppose Willis would? I dunno.
Personally, since bender is a biologist type of the tree variety, I’d bet with him.
Now, maybe he will effin show up and say I have my head up my ass, at which point I will dutifully remove it. crap, I’ve been wrong before.
Willis,
On the floor-to-ceiling issue, I’m wondering if that’s always the case. Didn’t UC have some work on that?
steven mosher (Comment#29535) January 8th, 2010 at 12:32 am
Steve McIntyre is a never-ending source of amazement to me; somehow he never runs out of new finds. Go figure …
Anyhow, congratulations on the book. It’s a huge pile of work in a very short time. What is it, a month now since the emails were released into the wild? Well done.
w.
steven mosher (Comment#29538) January 8th, 2010 at 12:49 am
Dunno … you have a link?
Regarding the size of the confidence interval, everyone interested in the question should definitely read William Briggs’s five part series on the question … and that’s just the error in the temperature itself.
Now, if you’re using those temperatures, with their associated large errors, to calculate and validate a proxy which has unknown confounding variables and is inherently not invertible, my claim is that you’ll get really, really big confidence intervals.
However, I’ve been wrong before more than once …
3. If you take a look at just about any cross-section of a tree, you’ll see that many of the rings are wide on one side and narrow on the other.
Taking two samples per tree can help with this, but can’t solve it. And in many cases only one core is taken.
.
This one reminded me of an anecdote from work I did when I was a student.
I had to analyse the resistance of plastic bottles to compression (thin shells, radial symmetry, etc.).
I constructed a special experimental device to measure that and wanted to correlate it to a dozen parameters. Used PCA.
A very important parameter was the wall thickness.
However, by taking ultrasound measurements I saw that the thickness was never uniform.
So I calculated for every sample the average radial thickness and the standard deviation.
Do you know what one of the first PCs was?
The standard deviation of the wall thickness 🙂
So if I had taken only one measure of thickness, or even used the average thickness, I would have missed one of the most important factors explaining the resistance to compression.
I am not saying that mechanical properties of thin shells have much in common with tree rings.
But I am saying that if one implicitly postulates radial symmetry where there is none, one may be VERY wrong.
Willis, I read Briggs, but I was hoping he would do a real example. I played a bit with synthetic series… take a temperature series (with noise), add a step function (TOBS + error),
then subtract the average TOBS… just noodling.
Now, if you’re using those temperatures, with their associated large errors, to calculate and validate a proxy which has unknown confounding variables and is inherently not invertible, my claim is that you’ll get really, really big confidence intervals.
Big confidence intervals don’t help the sceptic cause.
Once in a while, sceptics have to take a stand. The claim that the MWP was warmer than today is one of those points.
Just increasing tree-ring (and other proxy) confidence intervals does not help your position. It simply makes it hard to say what temperatures were like in the past.
Tree rings have massive advantages over other proxies that didn’t get mentioned enough. The annual rings make dating easy, and help prevent dating errors.
Those working with trees know what they are doing. I have limited confidence in the results of you folks’ ad-hoc analysis.
“So now you demand spoon feeding.”
Howard,
I’m not demanding anything. I was just asking. Sheesh.
Good morning and Happy Friday to all the independent, critical and scientific thinkers out there! Don’t be afraid to let us know what you think! 😉
Andrew
Willis, if you look back at the Brown and Sundberg posts at CA, there’s a start on how to calculate CIs in reconstructions. The key concept is “inconsistency” among the proxies makes CIs wider.
steven mosher (Comment#29543) January 8th, 2010 at 3:37 am
Making comments after 3 AM?
Looks like book writing requires that you never sleep… 😉
Sod
Cause? Whose cause?
Either the confidence intervals are wide or they are not. Either the paleo reconstructions can tell us a lot about temperatures in the MWP and LIA or they can’t.
Believe it or not, there are people who are interested in a) seeing that analyses are done right and b) knowing what the correct answer is. As for any “cause”– do the analysis right and let the chips fall where they may.
Lucia, I agree 100% with your point here.
For example, our Mann (and other) critiques did not attempt to show that the MWP was or wasn’t warmer than the Modern Warm Period – only that Mann hadn’t proved the uniqueness of the Modern Warm Period with his data and methods. For people who say: if the Stick is wrong, then the situation is worse than we think (because of higher sensitivity), my answer is: well, if your premise is right (and I don’t know that it is), we should know and govern ourselves accordingly, and give no thanks to people whose obstruction has delayed or prevented the identification of problems.
SteveMc–
I agree with you 100% there. If the MWP being warmer than currently means things would be even worse, then we need to know that. If doubt means there is a risk things are worse, then we need to know that.
But all of that is rather irrelevant to the dry task of figuring out whether or not paleo-reconstructions contain enough signal to reveal that temperature, and/or whether the methods used by “researcher X” correctly reconstruct the temperature and provide appropriately sized uncertainty intervals.
kuhnkat
I think that you are being too kind. From what I have seen it is obvious that dendrochronology fails the reliability test. The fact is that the dendro people do not have as complete a knowledge base as they need to support their methods and their results. Below is an example of one possible factor that the dendro people have overlooked.
http://www3.interscience.wiley.com/journal/122597017/abstract?CRETRY=1&SRETRY=0
“it is obvious that dendrochronology fails the reliability test”
Yet some people who comment on this very blog claim “there is no uncertainty here.”
Explanations, speculations, irrelevant slogans, or funny stories anyone?
Andrew
steven mosher
I am glad that you mentioned the tree line issue. If you look at the leaked CRU e-mails you find Hantemirov write, “There are no evidences of moving polar timberline to the north during last century.” That makes it difficult to make the AGW argument that Jones and Mann were pushing.
While I am just an engineer by training it is clear to me that the dendrology people know far less than they believe that they do.
http://www.eastangliaemails.com/emails.php?eid=76&filename=907975032.txt
steven mosher:
I know that you are very smart and good at making theoretical arguments but as a practical person I care more about the way things are in reality. Please show us one real world example where your statement, “I have a pristine temperature record from 1850 to 2009,” would be true. And if you can’t, how can you justify the process being used to reconstruct temperatures?
While I am at it, why do you ignore the other factors that influence tree growth? After all, we have evidence that precipitation patterns, CO2 levels, and even cosmic ray flux affect tree growth. How do you separate the influence of each relevant factor on each species in each region so that you can isolate the effect of temperature?
Vangel (Comment#29566) January 8th, 2010 at 12:15 pm
Vangel, Steven Mosher is proposing a thought experiment. The question he wishes to examine is, are tree ring proxies better than nothing? That is to say, if we wish to estimate prior unknown temperatures, is our guess likely to be better with or without using proxies?
I believe (as I think Steven believes) that our guess will be better using the proxies. However, that simple statement covers a host of questions and uncertainties …
I can’t believe the “better than nothing” argument is being tossed around at a purportedly serious blog, let alone a “scientific” one.
I need a drink. 😉
Andrew
Steve McIntyre (Comment#29546) January 8th, 2010 at 8:09 am
Steve, thanks for your reminder. Your investigations of the work of Brown and Sundberg should be required reading for those daring enough to venture out onto the thin ice of temperature proxies. A good place to start is here (warning: rated “R” for explicit math content, viewer discretion is advised).
Unfortunately, the proxy situation is worse than a naive application of Brown and Sundberg would indicate. This is for a couple of reasons.
1. In proxy based reconstructions, what we are dealing with are generally not a bunch of individual proxies (e.g. they are not single trees). They are a bunch of averages, and even averages of averages, of individual proxies. This masks the inconsistency among the proxies, and artificially reduces the CIs no matter how the CIs are calculated.
2. At both the individual proxy (single tree) and average levels, many of the individual and average proxies end up being discarded because they do not correlate with the local temperatures. However, for an accurate analysis of the confidence intervals, these discarded individual and averaged proxies need to be taken into consideration.
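Point 1 can be demonstrated with a toy Monte Carlo (all numbers invented): the scatter among group averages is far smaller than the scatter among the individual proxies they came from, so a CI built from the averaged series understates the underlying disagreement.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 hypothetical individual proxies, all noisy estimates of one value
proxies = rng.normal(0.0, 1.0, 100)

# Inconsistency as seen among the individuals
spread_individual = proxies.std(ddof=1)

# Inconsistency as seen after first averaging into 10 group means
group_means = proxies.reshape(10, -1).mean(axis=1)
spread_of_averages = group_means.std(ddof=1)

# The averages scatter roughly sqrt(10) less than the individuals,
# so most of the proxy-to-proxy disagreement is hidden from view.
```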
w.
Andrew_KY (Comment#29572) January 8th, 2010 at 12:58 pm
Say what? Science, particularly in poorly understood areas, is often a “better than nothing” deal. Take a look at the various heuristic formulas used for things like wave action along the coastline. Turbulence and breaking waves are extremely hard to describe mathematically, so equations derived solely from experience, because they fit the results, are used to estimate the wave action. There is little scientific basis for these heuristic wave formulas, but they are definitely better than nothing, so they are used extensively for things like harbor and breakwater design …
In the present instance, the question we are discussing is whether there is useful data in the proxies or not. If using the proxies is indeed “better than nothing”, then there is in fact useful data in the proxies. How is that conclusion unscientific? We’re not saying that we should base trillion dollar decisions on the temperature data contained in the proxies, we’re just saying that there is useful information there.
Andrew_KY (Comment#29572) January 8th, 2010 at 12:58 pm,
Let me second what Willis said. I too have often had to deal with very noisy data from poorly understood systems. So long as you treat such data with caution, and realistically evaluate your confidence in what that data is telling you, you can sometimes glean useful information/understanding. Perfect? Hell no! Useful? Yep, especially if it helps you understand what other (uncontrolled) variables are leading to the “noise”.
sod:
They also have a divergence problem, and until that is fully resolved, they are unusable as a proxy with respect to determining whether the MWP is warmer or colder than today.
(For myself, I don’t find it that interesting of a point, other than from the paleoclimatology perspective.)
Willis Eschenbach,
The “better than nothing” argument may or may not be true, in this case. Better how? If you have nothing to verify your guess against, how do you know it’s ‘better’? It’s meaningless.
And I have a story for you:
Little Andrew gets his kickball stuck in the basketball net. He is too little in stature to reach up to get it out. He sees an old box he can stand on and he puts it under the net and stands on it. It only gets him a little way farther to reaching his kickball and he still can’t reach it.
Now, the box got him closer to his target, but still unable to reach it.
Was putting the box there and standing on it, better than nothing? He still doesn’t have his kickball.
That’s with knowing what your target is. What if Little Andrew simply lost his kickball and didn’t know which direction to go to find it? Would standing on the box still be Better Than Nothing?
Andrew
Andrew_KY, what you do is use different proxies that involve different methodologies and have different systematics associated with them. The degree to which they agree helps assign an uncertainty in the reconstruction.
Andrew_KY (Comment#29585) January 8th, 2010 at 1:41 pm
It’s called a “thought experiment”, and if you re-read it, you would have your question answered:
Seems pretty clear to me how we’d know “it’s better” …
You’ve postulated a thought experiment where close doesn’t count. Either little Andrew can reach the ball or he can’t.
Now, you’re using that thought experiment to argue that whether he is closer or further from the target is meaningless … which is a tautology in that situation, because you’ve defined a situation that’s all or nothing.
So yes, you’ve successfully proven that in an “all or nothing” situation, the only thing better than nothing is all … but since that has absolutely nothing to do with what we’re discussing, your example simply doesn’t apply.
If you and Tamara would just answer Steven Mosher’s question, rather than doing everything you can to avoid answering it (as Mosh prophesied above), the point would be clear.
Willis E,
It was Steven Mosher who started the practice of not answering questions in this thread, FYI.
Andrew
AND… I thought we were in a situation where we are trying to determine what the temperature was at a certain time without any human-recorded thermometer records from that time.
Seems to me we either can determine what the temperature was, or we can’t. Conclusions like ‘close’ or ‘better than nothing’ are entirely dependent on something to compare results to…
…like what the temperature was ‘supposed to be’. 😉
Andrew
Andrew_KY:
I’ve already pointed out how you do this. You use multiple proxies and calibrate them with the instrumentation period. Then you compare the proxies, each with its own systematics, during the measurement period to get an estimate of temperature and uncertainty in temperature.
There’s nothing wrong with the concept, arguably there are problems with its implementation in paleoclimatology.
It’s similar to how we use observation in astronomy to build up distance scales to stars and progressively more distant galaxies when we can’t directly measure them beyond the Earth’s atmosphere.
I mean, even the distance to the Moon isn’t a direct measurement with a tape measure (analog to the thermometer). By your logic we don’t know the distance to the Moon, right?
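A minimal sketch of the combination step described above, with invented numbers: inverse-variance weighting, the textbook way to merge independent estimates of one quantity, weights each proxy by 1/σ² and yields a combined uncertainty smaller than any single proxy’s (valid only if the errors really are independent, which is the hard part for proxies).

```python
import numpy as np

# Three hypothetical proxy estimates of the same past temperature
# anomaly (C), each calibrated against the instrumental period and
# carrying its own 1-sigma uncertainty. Numbers are made up.
estimates = np.array([0.3, 0.1, 0.5])
sigmas = np.array([0.4, 0.3, 0.6])

# Inverse-variance weighting: better-constrained proxies count more.
weights = 1.0 / sigmas**2
combined = np.sum(weights * estimates) / np.sum(weights)
combined_sigma = np.sqrt(1.0 / np.sum(weights))
```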
Carrick,
Thanks. I understand there are methods for making estimates. Like you said, temperature recons are not exactly the same as lassoing the moon. I think we are in agreement. The moon scenario is similar to my Little Andrew basketball story. A target is out there to progress towards. The moon is not a proxy for the moon or an unknown. It’s the target.
Andrew
Andrew_KY,
By extension of your logic (reductio ad absurdum), it’s pointless to measure anything because we can never know the exact value no matter how precise the measurement.
Carrick,
Actually, we measure the distance to the moon with something better than a tape measure, laser reflection from the reflectors left by the Apollo astronauts. The precision is on the order of 1mm.
DWP,
Not at all. We are talking about the value of tree ring temperature reconstructions, not whether there is value in measuring something else.
Andrew
Speaking of the moon…
Great song by the Best Band Ever that somehow fits. 😉
On certain nights
When the angles are right
And the moon is a slender crescent
Its circle shows
In a ghostly glow
Of earthly luminescence
Earthshine
A beacon in the night
I can raise my eyes to
Earthshine
Earthshine
A jewel out of reach
http://www.youtube.com/watch?v=l4W88obP9Ik&feature=related
Andrew
Andrew_KY,
Indeed we are talking about the value of tree rings as a temperature proxy. But the obvious implication of your thought experiment on measurement by proxy (and essentially all measurements are by proxy, certainly all measurements of temperature) is that it’s an either/or proposition, i.e. it’s not horseshoes or hand grenades. My point stands.
One thing to think about with E&M radiation travel time as a proxy for distance is that the speed of light depends on the medium. For GPS it depends on the ionosphere and lower troposphere mostly. The ionosphere can be modeled, the troposphere usually isn’t (in principle if you know the vertical temperature & wind speed profile you can correct for it too).
My point is that even when it seems straightforward, there are corrections that need to be applied. Tree proxies depend on their environment in a complex manner, but that doesn’t stop them being used as a good proxy. In my opinion, what is stopping them is that, for all the work people have done, nobody has developed a careful, model-based approach for inverting temperature from them.
sod (Comment#29544) January 8th, 2010 at 6:37 am
“Big confidence intervals don’t help the sceptic cause.
Once in a while, sceptics have to take a stand. The claim that the MWP was warmer than today is one of those points.
Just increasing tree-ring (and other proxy) confidence intervals does not help your position. It simply makes it hard to say what temperatures were like in the past.
Tree rings have massive advantages over other proxies that didn’t get mentioned enough. The annual rings make dating easy, and help prevent dating errors.
Those working with trees know what they are doing. I have limited confidence in the results of you folks’ ad-hoc analysis.”
I’ve got no opinion on the MWP. THAT is what the large CIs do for me. Some dumb ass skeptics persist in making positive claims about the existence of a MWP. Show me the data. Some dumb ass believers do reconstructions to disappear this so-called event. Show me the data. Where was it? How big was it? Does it even matter?
WRT dating: you might want to have a look at some of the issues with crossdating. Anywho, you trust the dendros. I’ve read their mails and the papers. Sorry, I don’t trust them. I don’t MISTRUST all of them. I’m ambivalent. I’m not going to be stupid and say they all are wrong. I’m not going to be naive and swallow the kool aid.
Willis Eschenbach (Comment#29569) January 8th, 2010 at 12:45 pm
Thank you Willis. I think the goal of the thought experiment is to get the WEAK skepticism off the table and move onto the practical skepticism. I take the same approach on the GHG issue, viewing the weak skepticism as arguments against radiative physics and the strong skepticism to be arguments over feedback. Similarly with GCM skepticism. The weak skepticism is directed at models per se, while the strong skepticism ( like Lucia’s) is directed at the skill of models.
Recognizing of course that I’m nearly alone in this.
Of course, one can’t practice the strong skepticism without access to data and code. So mostly I get to shout.
Andrew_KY (Comment#29518) January 7th, 2010 at 5:37 pm
“‘Sorry. If you want to flat out reject all possibility whatsoever that a tree ring, or latewood density, or isotope could be used to provide an estimate of past climate, then we simply can’t reason together.’
I am open to the possibility. Why don’t you tell me how it’s done and I’ll read.
Andrew”
Andrew, I’ve directed you to CA and the threads there for reading. I’d suggest starting with Fritts’ book.
But for grins you can start with some older work. Here is a nice old one (1979) that looks at various issues: precipitation, temperature, ring width, density measures…
http://www.treeringsociety.org/TRBTRR/TRBvol39_29-38.pdf
Andrew_KY (Comment#29638) January 9th, 2010 at 8:56 am
“AND… I thought we were in a situation where we are trying to determine what the temperature was at a certain time without any human-recorded thermometer records from that time.”
No. They are trying to ESTIMATE the past temperature without having a ‘liquid in glass’ proxy for temperature. Note that I called a thermometer a proxy for temperature, the average kinetic energy of particles in a system. That’s what it is. Anyways, moving on. I ask you to ESTIMATE the “temperature” of the planet from 1960 to 1991. You look at 1000’s of temperature proxies (thermometers) scattered about the land. You average them, taking care to account for the spatial distribution. You come up with an average: 14C plus or minus, say, .1C. That’s your global average for those 30 years.
Now I ask you to estimate the temperature in the years 1600-1630, when there were no temperature proxies called thermometers. Can you do it? Sure. Can you do it AS accurately? Nope. Here is a simple, really stupid way to do it. I call it goal posting. Was the average global temperature less than 0C? Nope. How do we know? Documentary evidence. Was it greater than 100C? Nope. How do we know? Documentary evidence. The human species is a temperature proxy. Hmm. What about other species? Was the scots pine alive in 1600? Yup. How do we know? Those trees are still alive today.
Do we understand the maximum and minimum temperatures they can survive at? Yup. Can we set boundary conditions for the amount of water they need? Yup. Can we study how the treeline moved? Yup. Does this narrow the range of POSSIBLE temperature for the area they have been living in? YUP. Does that give me a better guess at the temps back in 1600 FOR THAT REGION? YUP. Will looking at the width of the tree rings give me an even better guess? Yup. How good?
Ahhh, that’s the tricky math part. Mann and others say it gives them small confidence intervals (+-.5C). Willis says they are much bigger. He doesn’t deny that you can estimate the temperature; he just says (rightly, I think) that the bands are wider than Mann thinks. So wide, in fact, that you can’t say with a lot of confidence that it was warmer or colder in 1600. He might say something like this: the data shows the probability of it being cooler than 14C for the period 1600-1630 is 75%. Or he might do an estimate of 13.5C +- something. (I prefer the first approach.)
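The “goal posting” above is just interval intersection; a sketch with entirely made-up bounds (none of these numbers are real constraints):

```python
# Each line of evidence brackets the unknown regional temperature (C).
# All bounds below are invented purely for illustration.
evidence = [
    ("humans lived there", (-40.0, 50.0)),
    ("scots pine survived", (-30.0, 35.0)),
    ("treeline position", (-5.0, 25.0)),
    ("ring widths", (5.0, 20.0)),
]

lo, hi = -273.0, 1000.0  # absurdly wide starting range
for _name, (e_lo, e_hi) in evidence:
    # Intersecting intervals can only shrink the range, never widen it.
    lo, hi = max(lo, e_lo), min(hi, e_hi)
```

Each new constraint tightens `(lo, hi)`; the tricky math part is turning the last, strongest constraint (ring widths) into an honest probability band rather than a hard interval.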
“Seems to me we either can determine what the temperature was, or we can’t. Conclusions like ‘close’ or ‘better than nothing’ are entirely dependent on something to compare results to…
…like what the temperature was ‘supposed to be’.
Andrew”
That’s an odd way to look at things. Andrew, I just weighed myself. I weigh 212 lbs. And.. wait a second.. yup I am 6ft tall.
Can you determine my height and weight on Oct 31, 1970? Well,
can you? Let me give you some hints. I was born in 1958.
So. Let’s just guess that I was 1 foot at birth. In 50 years I’ve grown 5 feet. Do humans grow linearly? Was I 2 feet at 10, 3 feet at 20? 4 feet at 30? Probably not. Would a model of human growth help you narrow things down? Yup. Would it help to know that the average 12-year-old male is 5 feet tall and 100 lbs? Sure. That would give you a good guess that I was somewhere between 4 and 6 feet tall on Oct 31, 1970. Weight’s a bit tougher, perhaps. I give you a proxy for my height: a picture of me standing with my classmates in the 6th grade. I’m kinda in the middle with the other boys. I give you a pair of my childhood shoes. They are men’s size 8. My shoes today are 11.
You see that I am not grossly overweight in the picture.
So can you DETERMINE my height and weight? If you mean, can you go back in time and make the measurements? No. Can’t do any time travel. Can you estimate it? Sure. How accurately? That’s the tricky math part.
Carrick:
My point has been a simple one. We live in the real world and have to deal with reality as it is when meeting the needs of science. It is my contention that all of you very smart people have forgotten about those needs because your high intelligence allows you to use some very sophisticated tools to manipulate data that is not good enough to be used in a truly scientific study.
In the case of trees, they respond to local conditions that may vary greatly over very short distances. There is no accurate way to calibrate a particular tree to a regional temperature record that has specific accuracy and uncertainty issues that are not always known by the researchers. The tree may also respond to other factors that are not captured by the instrumental record or accounted for by the researchers during their analysis. As such, no accurate analysis can really be done and the level of uncertainty becomes so high that meaningful conclusions are not really possible. This is particularly true when researchers start to use their own judgement to select the ‘best’ proxies for their studies and get to throw out proxies that they do not like.
I am sorry but there is no way that one can try to argue that the dendro approach can be justified given what is known about the sampling, the instrumental record shortcomings, the various influences of other factors, etc. The system is too complicated and the tools too inadequate for studies to be seen as meaningful except in a very broad and crude sense.
Steven Mosher,
I recall you and I already going down a similar discussion path.
Vangel put the conclusion of all this very nicely.
Your weight analogy is flawed because in the case of estimating your past weight, we have a specific target pre-determined to gather info about – You.
In the case of estimating what the temperature was, what is the target of our investigation? A tree?
I asked you before what kind of trees we should be looking at
when reconstructing and you didn’t answer.
The tree has to be old enough, for starters, right? What are the other requirements?
Andrew
Sorry Andrew I posted this for you on the wrong thread.
You keep asking me to post articles and such for you to read, but you apparently don’t read them. Let me know when you finish reading all of CA, for starters. Then Fritts.
Anyways,
here is another
Andrew,
Can you determine the temperature currently in my part of San Francisco? Right now? Well, you are not here to measure it. Can you estimate it? Is it hot enough to boil water (100C)? Nope. You kinda know that without being here.
What if I told you that the hottest its been since 1875 was 73F
Can you make a better guess? The coldest was 32F. can you guess better now? Would you guess somewhere between 32 and 73?
knowing NOTHING about todays weather, what is your best guess?
53.5
Are you close?
damn, you almost nailed it. see you can estimate things you cant measure directly. go figure.
Lets do another one. Guess the jan temperature back in 1000. go on guess. Given no other information whats the best guess you can make? I’ll say 53.5. But what about the spread of data? can you guess? of course. How good will your guess be? That depends. Will you be able to say with confidence that it was warmer ? Perhaps, You might say you were 25% confident it was warmer, or maybe 50% or 62%. It would depend upon the proxy evidence you had. You might have no proxy evidence for san francisco. Would you bet that it was warmer than 73F? Warmer than 136F? Why, why not? Was it colder than 32F? colder than -80F
Is there evidence we can look to to narrow these ranges?
Yup. Is it complex? Yup. Is there a lot of certainty in it? Nope.
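The guessing game above can be sketched in a few lines. This is a toy illustration only, using the hypothetical San Francisco figures from the comment; note the plain midpoint of 32 and 73 is 52.5, slightly below the 53.5 guessed above, which presumably reflects the climatological mean rather than the raw midpoint.

```python
# Toy version of the guessing game: with no information, the midpoint
# of the historical extremes is a reasonable point estimate, and any
# proxy evidence narrows the admissible range.

def best_guess(low, high):
    """Best point estimate with no other information: the midpoint."""
    return (low + high) / 2

def narrowed_range(low, high, constraints):
    """Shrink the admissible range using (kind, value) proxy constraints."""
    for kind, value in constraints:
        if kind == "at_least":   # proxy says it was no colder than value
            low = max(low, value)
        elif kind == "at_most":  # proxy says it was no warmer than value
            high = min(high, value)
    return low, high

print(best_guess(32, 73))                      # 52.5
lo, hi = narrowed_range(32, 73, [("at_least", 45), ("at_most", 65)])
print(lo, hi, best_guess(lo, hi))              # 45 65 55.0
```

The point is only that evidence, even crude evidence, shrinks the range and moves the best guess; it never eliminates the spread.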
What's it look like for San Francisco?
Here's a nice little start. Looks like everything points to a climate back in 1000 that is pretty close to today's: dry and warm, as opposed to the wet and cold of the LIA.
http://www.lib.berkeley.edu/WR…..Ingram.pdf
Is this data certain enough to establish that climate we see today in SF is unprecedented in the past 1000 years? Nope.
Vangel
“In the case of trees, they respond to local conditions that may vary greatly over very short distances. There is no accurate way to calibrate a particular tree to a regional temperature record that has specific accuracy and uncertainty issues that are not always known by the researchers. The tree may also respond to other factors that are not captured by the instrumental record or accounted for by the researchers during their analysis. As such, no accurate analysis can really be done and the level of uncertainty becomes so high that meaningful conclusions are not really possible. This is particularly true when researchers start to use their own judgement to select the ‘best’ proxies for their studies and get to throw out proxies that they do not like.
I am sorry but there is no way that one can try to argue that the dendro approach can be justified given what is known about the sampling, the instrumental record shortcomings, the various influences of other factors, etc. The system is too complicated and the tools too inadequate for studies to be seen as meaningful except in a very broad and crude sense.”
Well, at LEAST we have moved you off the position that a reconstruction is impossible. Put very stupidly: if a 300-year-old Scots pine is growing in my back yard, I can be pretty damn confident that 300 years ago the place wasn't a desert or a glacier. That's a wide, wide, wide range, but all I have been saying, and all Willis has been saying, is this:
IT'S POSSIBLE; however, the important question is HOW ACCURATE.
You get that now. Whew!
Now to the question of HOW ACCURATE. It depends on the species and the place. How does it depend? Well, start reading the literature. Bottom-line question:
Is it accurate enough to determine the temperature 1000 years ago to ±1 deg C? McIntyre doubts it. Willis doubts it. I doubt it.
BUT they are open minded to looking at the math and data.
THAT’s hard work.
They avoid the easy skepticism.
Let me give you variants of easy skepticism.
1. Trees are not thermometers.
2. It's too complicated, don't even try.
3. Fraud, fraud, fraud.
4. If you can't show me how it works, then it must be wrong.
Wanna know why Mc has respect amongst some dendros?
'Cause he doesn't practice easy skepticism.
Steven Mosher,
You are the one who used the word ‘certainty’. (Which prompted me to comment in the first place) Now you are talking about estimates and guessing and ranges. Either you haven’t been clear about what you mean by ‘certainty’ or you have been contradictory.
And in your comment to Vangel, you ask:
“Wanna know why Mc has respect amongst some dendros?”
Irrelevant. The goal is not to impress a certain group of people. (Although it seems like you Science Community types have an emotional need to do that, for some reason) The goal is to find out what the truth is!
Andrew
Error Correction:
Steven didn’t use the word ‘certainty’… he used the phrase ‘no uncertainty’. 😉
Andrew
Steven Mosher,
“I recall you and I already going down a similar discussion path.
Vangel put the conclusion of all this very nicely.
Your weight analogy is flawed because in the case of estimating your past weight, we have a specific target pre-determined to gather info about – You.”
Duh! You have the same thing with a climate reconstruction.
As you will see above, I picked a specific target. San Francisco.
Yesterday it was 53F. How warm was it in the past? There.
Well, you don't have temperature proxies called thermometers,
which are real accurate (say .1C); you've got other proxies
(remember my example of my old shoes, used to guess my height);
you've got geological data (where the shore was), you've got sediments (where the dunes were), you've got flood data, you've got biological remains, salinity records, multiple proxies with various time resolutions and temperature resolutions.
So Vangel's point really isn't one.
“In the case of estimating what the temperature was, what is the target of our investigation? A tree?”
No. The target is a region of the world. For example, the subarctic.
You go to the subarctic regions. You collect data, proxies. You see what those proxies tell you about the climate of the past.
Doh! Do you find evidence of a forest, exposed when a glacier retreats? DOH! Date the evidence, estimate the climate based on your knowledge of the species. How accurately can you do this?
Depends.
“I asked you before what kind of trees we should be looking at
when reconstructing and you didn't answer.”
DUH! Correct. Do you see why now? It depends. In San Francisco, for example, you don't have any tree ring proxies. The closest thing you probably have is tree ring series in the Sierra Nevada. That's the problem of spatial density in reconstruction. So the best one could do is reconstruct that site and then correlate with SF.
More error, of course.
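The "more error" point can be made concrete with a back-of-envelope sketch. The correlation values below are made up for illustration; the takeaway is only that chaining two imperfect correlations multiplies them, under a simple linear model with independent noise at each step:

```python
# Hypothetical numbers: suppose a Sierra Nevada ring-width series
# correlates r1 = 0.6 with Sierra temperature, and Sierra temperature
# correlates r2 = 0.7 with San Francisco temperature. With independent
# noise at each step, the proxy's correlation with SF temperature is
# at most r1 * r2.

r1 = 0.6   # proxy vs. local (site) temperature
r2 = 0.7   # site temperature vs. target (SF) temperature

r_chain = r1 * r2
print(f"chained correlation: {r_chain:.2f}")      # 0.42
print(f"variance explained:  {r_chain**2:.1%}")   # ~17.6%
```

So even two individually respectable correlations leave most of the target's variance unexplained once they are chained, which is exactly the spatial-density problem described above.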
“The tree has to be old enough, for starters, right? What are the other requirements?”
There are a whole host of them. It has to be old enough. It has to not be complacent. There have to be enough of them to get a good sample. There has to be some evidence that they are in a location that is temperature limited.
Here is a start. Don't comment back till you read it all. I will know
if you read it all 'cause, well, I won't say how.
http://www.sonic.net/bristlecone/dendro.html
http://www.ltrr.arizona.edu/dendrochronology.html
http://web.utk.edu/~grissino/
http://web.utk.edu/~grissino/principles.htm
EXAMPLE:
This principle states that a tree species “may grow and reproduce over a certain range of habitats, referred to as its ecological amplitude” (Fritts, 1976). For example, ponderosa pine (Pinus ponderosa) is the most widely distributed of all pine species in North America, growing in a diverse range of habitats. Therefore, ponderosa pine has a wide ecological amplitude. Conversely, giant sequoia trees (Sequoiadendron giganteum) grow in restricted areas on the western slopes of the Sierra Nevada of California. Therefore, this species has a narrow ecological amplitude. This principle is important because individual trees that are most useful to dendrochronology are often found near the margins of their natural range, latitudinally, longitudinally, and elevationally. The diagram above shows the different forest types as one increases elevation along a mountainside. To maximize the climate information available in ponderosa pine tree rings, we would likely sample trees at their lower elevational limit around 7000 feet (2130 meters).
This principle states that the environmental signal being investigated can be maximized, and the amount of “noise” minimized, by sampling more than one stem radius per tree, and more than one tree per site. Obtaining more than one increment core per tree reduces the amount of “intra-tree variability”, in other words, the amount of non-desirable environmental signal peculiar to only one tree. Obtaining numerous trees from one site, and perhaps several sites in a region, ensures that the amount of “noise” (environmental factors not being studied, such as air pollution) is minimized.
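The replication principle quoted above can be simulated in a few lines. This is a synthetic sketch, not real dendro data: each core records a shared "climate" signal plus its own noise, and averaging cores shrinks the noise roughly by 1/sqrt(N).

```python
import numpy as np

# Synthetic illustration of replication: all numbers are made up.
rng = np.random.default_rng(0)
years, signal_sd, noise_sd = 200, 1.0, 2.0
signal = rng.normal(0.0, signal_sd, years)      # shared climate signal

def chronology(n_cores):
    """Average n_cores noisy copies of the common signal."""
    cores = signal + rng.normal(0.0, noise_sd, (n_cores, years))
    return cores.mean(axis=0)

for n in (1, 4, 16, 64):
    err = np.std(chronology(n) - signal)        # residual noise after averaging
    print(n, round(err, 2))                     # roughly 2.0, 1.0, 0.5, 0.25
```

The residual noise falls by about half each time the core count quadruples, which is why site chronologies are built from many cores and many trees rather than one sample.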
http://www.plantbio.ohiou.edu/dendro/
This is a cross-section of a Douglas-fir (Pseudotsuga menziesii) from the Catalina Mountains north of Tucson, Arizona. The sample shows excellent ring-width variability from one ring to the next. The inner ring on this sample has been dated to 1928, and the outer ring dated to 1965.
Douglas-fir is one of the preferred species for dendrochronology in the western portions of North America. The species has exceptional circuit uniformity, meaning that the rings are usually concentric around the middle. The rings are also well defined – in other words, there is a sharp definition between the earlywood (wood formed early in the growing season) and latewood (wood formed later in the growing season). In addition, Douglas-fir is well distributed from Canada all the way down to northern Mexico, making this an ideal species for large-scale climate reconstructions.
http://www.ltrr.arizona.edu/lorim/lori.html
http://www.blackburnpress.com/trerinandcli.html
Vangel:
There are methods for determining the reliability of a reconstruction technique like this. If, after considerable effort, it is found not to be workable, that's one thing. In the meantime, I see no reason for people to stop working on it.
You won’t get very far in science if at every tough issue, you just punt.
ANDREW,
I didn't use the phrase "no uncertainty."
Let's start with this. Please don't be embarrassed by what you didn't read.
“Andrew_KY (Comment#29485) January 6th, 2010 at 3:06 pm
Steven Mosher,
Thanks for the response, but I can’t get past the first sentence.
“There are of course challenges with reconstructing temperatures from tree rings.”
You see that? YOU CAN'T GET PAST THE FIRST SENTENCE.
ROLL TAPE:
here is what you find ME saying about certainty and reconstructions. AFTER THE FIRST SENTENCE.
Andrew KY.
There are of course challenges with reconstructing temperatures from tree rings. If you read CA, if you read the basic literature, you will see what those challenges are. It's not a black and white thing. It's a probable thing. The mistake, I think, that skeptics make is that they throw the whole baby out with the bath water and they lose credibility. So, whereas Vangel or you might say “there is no way to reconstruct temperatures”, someone like Steve or myself would say this: if we take the science of tree rings at its word, what kind of precision can you get in a reconstruction using standard methods? Well, it depends. Or to put it another way, someone like Mann could argue that he could reconstruct the temperature to ±0.5C. Someone like me would say: if you use standard methods, you get a figure more like (say, for example only) ±2C.
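One hedged way to see what that precision question cashes out to: calibrate a synthetic proxy against synthetic "instrumental" temperatures, invert the fit, and check the spread of the estimates. Everything below is made up for illustration and is not any published method; the only point is that noisier proxies give wider bands.

```python
import numpy as np

# Synthetic calibration exercise: all numbers are invented.
rng = np.random.default_rng(1)
n = 120                                        # years of calibration overlap
temp = rng.normal(10.0, 1.0, n)                # "instrumental" temps (C)
proxy = 0.8 * temp + rng.normal(0.0, 1.2, n)   # noisy proxy response

# Fit proxy = a*temp + b over the overlap, then invert to estimate
# temperature from the proxy alone.
a, b = np.polyfit(temp, proxy, 1)
temp_hat = (proxy - b) / a
rmse = np.sqrt(np.mean((temp_hat - temp) ** 2))
print(f"±{2 * rmse:.1f} C (rough 95% band)")   # noisier proxy -> wider band
```

Rerunning with a smaller proxy noise term shrinks the band; with a larger one it widens, which is the whole ±0.5C vs ±2C argument in miniature.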
DOH!!!!!!!!!!!!!
NOW, you get it even more wrong by attributing the phrase
"no uncertainty" TO ME. Ding dong. Here is the comment verbatim; see if you can tell your mistake.
HINT: I call bender's name three times. Then I post links to his work on CA. You didn't read them.
Then I cut and paste a comment HE MADE in April 2007,
which you read, and thought was me. The argument I was HAVING with people was about WWBS: What Would Bender Say. DOH.
read.
MY COMMENT:
Long ago bender told me to read the whole blog at CA. To make sense of Climategate, I had to. As a result, I realize now that I should have listened to bender long ago.
bender bender bender
http://climateaudit.org/2006/0…..ship-bias/
http://climateaudit.org/2007/0…..esponders/
bender bender bender
Posted Apr 1, 2007 at 3:34 PM | Permalink | Reply
How does one interpret a 0.65 correlation with NH temps, but a maximum 0.315 correlation with concurrent growing season temps?
Cautiously?
Look folks, a correlative model is not an overfit model. Fact: [some] treeline conifers are [somewhat] reasonable temperature proxies. It’s been proven too many times in too many places in too many species for the correlation to be random chance alone. There is no uncertainty here. So don’t waste your brain cells doubting that categorical fact. The uncertainty is in regard to the degree of accuracy of the reconstructions (which ARE based on overfit models). If temperatures are now warming to the point that precipitation is now limiting treeline conifer growth, then all it means is that better models are required to get more accurate reconstructions. This might affect reconstructed temperatures during MWP, but it is not going to change the nature of the debate over the size of A in AGW. The temperature trend may or may not be “unprecedented”. But whether it is a problem or an opportunity is a totally different question.
NOW ANDREW KY I WILL TEACH YOU HOW TO READ WHAT BENDER SAYS.
in detail.
“Look folks, a correlative model is not an overfit model. Fact: [some] treeline conifers are [somewhat] reasonable temperature proxies.
Do you understand this? Some conifers are SOMEWHAT reasonable
proxies. Andrew, it means they have error.
“It’s been proven too many times in too many places in too many species for the correlation to be random chance alone.”
it’s been proven. WHAT has been proven? That SOME trees are
SOMEWHAT [not perfect] reasonable proxies.
” There is no uncertainty here. So don’t waste your brain cells doubting that categorical fact. ”
WHAT CATEGORICAL FACT? This one: that SOME trees, not all,
are SOMEWHAT [not perfect, they have errors] REASONABLE
proxies. bender isn't saying that there is no problem with accuracy. He's saying it's a fact that some trees are reasonable proxies. IN FACT, BENDER MAKES THIS CLEAR IN THE VERY NEXT SENTENCE:
“The uncertainty is in regard to the degree of accuracy of the reconstructions”
So, Andrew:
1. I didn't say "no uncertainty"; bender did.
2. He was referring to the FACT that some trees make reasonable proxies.
3. He specifically states that there is uncertainty in the accuracy.
Please read.
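For scale on the correlations bender cites, squaring a correlation gives the fraction of variance explained, which is what "degree of accuracy" turns on. A quick sketch:

```python
# The two correlations from the quoted CA comment: 0.65 with NH temps,
# 0.315 with concurrent growing-season temps.
for r in (0.65, 0.315):
    print(f"r = {r:>5}: r^2 = {r*r:.3f} ({r*r:.0%} of variance explained)")
# 0.65 -> ~42% explained; 0.315 -> ~10% explained
```

A proxy that tracks only ~10% of local growing-season variance can still correlate better with a large-scale average, because averaging suppresses local noise; that is the puzzle the quoted question is poking at.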
Steven Mosher,
I tried to follow the links in your last comment, but they went to "page not found" errors.
Anyway, I accept that it was bender that was the source of the ‘no uncertainty’ that some trees make reasonable proxies. So I guess my issue is with him, too.
…and an upsidedown pie plate looks like a UFO, if you squint. And rock lyrics written by a stranger have a relationship to your own life, if you want them to. 😉
Andrew
Steven Mosher,
Your links are broken. If you mouse over them you get exactly the same URL as displayed in the post, with all the periods. Unfortunately, too much has been left out to reconstruct the correct links.
Found one I think: http://climateaudit.org/2006/08/04/survivorship-bias/
And the other:
http://climateaudit.org/2007/04/01/more-on-positive-and-negative-responders/
Steven Mosher,
“How does one interpret a 0.65 correlation with NH temps, but a maximum 0.315 correlation with concurrent growing season temps?”
- Bad temp data
- Bad tree ring data
- Accident
- Poor procedures
- Math errors
- …
HAHAHAHAHAHAHAHAHAHAHA
It needs much more work before a conclusion can be drawn. That work has NOT been done!!!!!!!
kuhnkat,
I'm not going to defend bender. I think the issue was this:
I claimed bender held these positions. People said he didn't.
I pulled the quotes.
By now, however, this whole thread has become a hash, especially
with my links broken. It was probably a bad idea to appeal to bender, especially when he is showing up randomly.
So perhaps we should just start over and state our positions on tree rings and reconstructing temperature. I won't try to skew the debate by setting an agenda. But first, can we agree to clearly state our positions and see if we actually disagree?
Can we agree to that, and who wants to play?
If that's fair to all concerned.
Steve:
I think that we are having those two one-way conversations again. I have never implied that it is impossible to use dendro data to tell us that the LIA was cooler than the MWP. I have just questioned the accuracy and precision, and pointed out that the dendro people are too arrogant about the extent of their knowledge.
But that is my point. I see a temperature reconstruction that tries to show a 0.7C trend and suggest that such a trend is meaningless. I am merely pointing out that what we have are intelligent people using tools to do a job for which they are not suited.
Which is very true in the real world. The variability is simply too great and the factors influencing the tree growth are not known to the degree that is required.
The first part is also true. It is too complicated because you cannot know all of the relevant factors that make a particular tree grow as it did.
In some cases that is also true. Briffa, Jones and Mann come to mind.
No. The sceptics maintain that conclusions are meaningless unless they can be reproduced independently and the data/methods are available for independent audit.
I respect him because in addition to being very skilled mathematically he is a very thorough and thoughtful individual.
But Steve’s work goes a long way towards supporting the sceptic case as I have laid it out. I have no trouble with assuming that trees can act in a similar manner as thermometers and yield useful data. But that assumption has to be supported with real empirical evidence, which actually points the other way.
I am even willing to listen to the argument that if other factors can be stripped out of the signal it is possible to have trees yield useful temperature data. But again, I would need a demonstration that would show sufficient knowledge to perform such an analysis and have yet to see it. It has been established that factors such as precipitation changes and CO2 fertilization will impact tree growth but we have yet to see how one can track those changes and make the necessary changes to the proxy data to strip out their bias. And then there is the evidence that changes in CRF can also influence tree growth. Where is that in the dendro analysis?
My point is very simple and should have been clear by now. What we have is a very complex system that requires far more study before any definitive statements can be made about it with any degree of accuracy. The dendro people have a long way to go before they can be considered real scientists who truly understand their field, no matter how good they are at playing statistical games. Many smart people get so hung up on the math that they forget to see if it actually reflects reality. The reason that I have as much respect for Steve as I do is because he does look at the reality and is far more sceptical than most.
Carrick:
Sorry, but the methods are not good enough to provide the type of accuracy that is needed. And I have never said that people should "stop working on it." I simply point to the fact that no meaningful conclusions can be drawn to provide support for any policy actions as advocated by the IPCC. I don't think that we should blow trillions because some researchers come up with conclusions that cannot really be supported.
I have always supported science. My point is that much of the ‘research’ that is being done comes up with conclusions that are not obtained by scientific means. You can’t use trees to provide you with data that they are incapable of providing accurately.