Marc Morano linked to Steven Goddard’s post crowing that the roughly 15-year trend in HadCrut3vgl starting in January 1997 is flat (the best fit trend is pretty flat). I hadn’t been running my monthly data comparisons, but that made me curious to look at the trends since 1980, 2000 and 2001, which are the years I generally examine. I used HadCrut3gl.
Fits since 2001
I’ll begin with my favorite for testing projections. In my view, 2001 is the first year during which modelers were using frozen SRES, and so the first year where projections are truly frozen. For that reason, I consider this the year for testing projections. It happens to be an intermediate start in the sense that it doesn’t begin at a relative maximum or minimum. The trends for HadCrut are shown below:

Examining this graph we could conclude that if 2001 is the “right” year to start an analysis (or better yet, if the choice of 2001 were somehow random), and if the ‘weather noise’ can be modeled as red, the data since 2001 are consistent with a 2σ uncertainty for the trend of -0.170 C/decade to +0.031 C/decade, with a best estimate of -0.069 C/decade. Based on this, one would not rule out a positive real trend. However, the noise might not be red; that’s merely an assumption. Many blog commenters have suggested we should use other ARIMA models to fit the data, with arguments over which is the ‘best fit’. There may not be enough data to identify the best fit, so my script is set up to test all ARIMA(p,0,q) fits, find the one that gives the largest uncertainty intervals and show those. (I do a few things to increase those uncertainty intervals a bit further.) That uncertainty interval is shown in green. Using that fit, the best estimate of the trend is -0.053 C/decade with 2σ uncertainty bands of (-0.159 to 0.054) C/decade. For those who want to consider fractional differencing, the purple trace shows what I get if I use the best fit ARIMA(1,d,1) with d<0.5 to estimate uncertainty around the OLS trend. Based on this fit we could not conclude that trends have turned negative. However, if 2001 were selected randomly and we believed these models encompass the possible range of models that might properly describe the data, we would state that the multi-model mean trend of “about 0.2C/decade” projected by modelers in the AR4 lies outside the bounds consistent with observations. That is: accounting for earth weather, a trend of 0.2C/decade is inconsistent with this observation of the earth’s trend and the earth’s weather variability.
That the trend of 0.2C/decade appears inconsistent with observations of earth weather is true even if some individual models exhibit lower mean trends averaged over multiple runs from that model.
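For readers who want to experiment, here is a minimal sketch of the widest-interval ARIMA search described above (Python with statsmodels; illustrative only, not the actual script behind these numbers): fit each ARIMA(p,0,q) noise model with a linear trend regressor and keep whichever fit yields the largest 2σ trend band. The function name and the small (p,q) grid are my own choices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def widest_trend_interval(anoms, max_p=2, max_q=2):
    """Trend and 2-sigma bounds (per decade) for monthly anomalies `anoms`,
    using whichever ARIMA(p,0,q) noise model gives the widest band."""
    y = pd.Series(np.asarray(anoms, dtype=float))
    t = pd.DataFrame({"t_dec": np.arange(len(y)) / 120.0})  # time in decades
    widest = None
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            try:
                fit = ARIMA(y, exog=t, order=(p, 0, q)).fit()
            except Exception:
                continue          # some (p,q) fits fail to converge; skip them
            b, se = fit.params["t_dec"], fit.bse["t_dec"]
            if widest is None or se > widest[1]:
                widest = (b, se)
    b, se = widest
    return b, b - 2.0 * se, b + 2.0 * se
```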
Fits since 2000
It’s worth noting that even though I think 2001 is the most appropriate year in which to begin testing, some prefer the choice of 2000 for various and sundry reasons with which I disagree. But there is nothing wrong with looking at both start dates. So for many years, I’ve shown both 2000 and 2001. This is the trend since 2000:
Using data starting in January 2000, we find the best fit trend using ordinary least squares is -0.002 C/dec. If we model “weather noise” as “red noise”, the 2σ uncertainty intervals are (-0.111, 0.107) C/dec. The 2σ uncertainty intervals computed using the ARIMA model that gives the largest uncertainty intervals are (-0.102, 0.141) C/decade. So, qualitatively we obtain the same result as with the start date of 2001: if we use ARIMA or ‘red noise’ to model the deviations from the mean caused by weather, the trend of 0.2C/dec lies outside of the bounds consistent with earth weather. However, positive trends lie inside the uncertainty intervals, so if we use 2σ uncertainty intervals as our criterion for statistical significance, the small negative trend is not statistically significant.
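For the “red noise” bounds, one common textbook approach (sketched here under the assumption that residuals are AR(1); the actual script may differ in detail) is to fit an OLS trend and then inflate its standard error using the lag-1 autocorrelation of the residuals:

```python
import numpy as np

def ols_trend_red_noise(anoms):
    """OLS trend (per decade) of monthly anomalies `anoms`, with 2-sigma
    bounds widened for AR(1) ('red') residuals via an effective-sample-size
    correction."""
    y = np.asarray(anoms, dtype=float)
    n = len(y)
    t = np.arange(n) / 120.0                        # elapsed time in decades
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1.0 - r1) / (1.0 + r1)             # effective sample size
    s2 = np.sum(resid ** 2) / (n_eff - 2.0)         # adjusted residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))  # trend standard error
    return slope, slope - 2.0 * se, slope + 2.0 * se
```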
Comparison beginning in 1980
I’m not a big fan of testing projections published in 2007 using models that were new sometime around 2000 and scenarios that were not frozen until late 2001, but some people always insist that longer time scales are the only thing one can use to test models. Others adhere to the bizarre notion that one cannot test whether observations are consistent with a trend of 0.2C/dec unless the period is sufficiently long to result in a statistically significant positive trend. I don’t buy into these notions, but nevertheless, I think it’s useful to show the result if we test whether 0.2C/decade falls inside the 2σ uncertainty bands based on earth weather.
The trends and estimated uncertainty intervals using current HadCrut3 data are shown:

Once again, despite the existence of a strong trend prior to 2000, 0.2C/dec does not fall inside the range consistent with HadCrut3 observations. However, using this longer period the trend is positive (0.141 C/decade). Using the same criteria we use to decree the difference between the observed trend and 0.2C/dec statistically significant, we find this positive trend is statistically different from 0.
So, if one accepts the HadCrut3 data as sound, and accepts that this method can be used to detect whether the positive trend is statistically significant, then one should agree that 0.2C/decade lies outside the range of trends that are consistent with the observed trend — and vice versa.
What about NOAA and GISTemp?
I haven’t looked at those today. I’ll be examining one or the other tomorrow. Then we’ll see whether choice of data set makes a difference for our results and if they do, what sorts of differences we observe.

Since 1850 is where most alarmists choose to start trends, due to its being the end of the LIA, looking at 1850 to present in 15-year steps is reasonable.
As human-produced CO2 did not take off until after WWII, if there is no drastic change in the positive slope between these earlier steps and the last 15-yr step, how can one logically assume that CO2 is the main component of climate and postulate CAGW?
Correlation does not imply causation, but I cannot even find correlation as I read the temp/CO2 graphs.
Ed Forbes–
Whether or not something is reasonable depends on what you are trying to learn. It’s actually a bit difficult to decide which years are reasonable for showing something. Ordinarily with lab work, the ideal protocol is:
1) Say you are going to do a test. Report what methods you plan to use.
2) Collect the data.
3) Apply the test.
This isn’t possible with climate data. Even with lab data, the method may be honored more in the breach than otherwise.
OTOH: I think many medical experiments are required to file a test methodology and follow it. Not working in that field, I don’t know how good the oversight is. People running physical plants know that they need to follow that procedure because if things are failing you lose money. Arguments can also be made that one shouldn’t be an utter slave to the previous principle.
But still– the fact is that when computing temperature trends it is simply not possible to pick the start date in anything remotely resembling a ‘blind’ fashion. So… 15 years may be a cherry pick. Or it may be a good round number. Two years from now when it’s 17, the cherry-pickedness of it will begin to be evident.
Lucia I as a faithful lurker will interject a piece of information here.
I believe the 15 year time span comes from a quote from NCDC/NOAA in a document I can no longer get to load for some reason.
Goddard has a post about it here:
http://stevengoddard.wordpress.com/2012/09/01/climate-models-falsified-by-their-own-standards/
Have a nice day.
I have a problem with using the least squares fit for evaluating such things. The issue I have is that if you are evaluating “Global warming stopped in 1998”, then having colder temperatures in 1999 and 2000, as happened, should help your case. However, it increases the trend as calculated.
The Earth’s climate has been described by many scientists (I believe even the “climate scientists”) as a non-linear coupled chaotic system.
As such, why do “climate scientists” persist in trying to fit straight lines to climate data?
If you know the system you are ATTEMPTING to model is non-linear, trying to fit straight lines to the data is self-evidently silly in my opinion.
PeterB, not really. Unless taking the derivative of a function with a nonconstant slope is silly, that is.
PeterB–
The air flow around an airfoil is also a “non-linear coupled chaotic system”. Flow inside a pipe is too. So is heat transfer on a heated flat plate. That doesn’t prevent engineers from fitting straight lines to data describing lift coefficients, drag coefficients or any number of things. (Though sometimes we fit curves. It just depends.)
I fit a straight line through the mean surface temperature as a function of time between 1980 and now because, except for the dip due to the eruption of Mt. Pinatubo, the multi-model mean trend from the AR4 is a fairly straight line. Doing this does not contradict the statement that climate or weather are “non-linear coupled chaotic systems”.
Not sure if my previous comment got lost somehow, so reposting this link:
Goddard’s source for the 15 year time frame
NCDC/NOAA 2008 report
If it is a cherry pick, it’s not his, I suppose.
Lucia “So… 15 years may be a cherry pick. Or it may be a good round number. Two years from now when it’s 17, the cherry-pickedness of it will begin to be evident.”..
I agree that any one 15 yr step would perhaps be a “cherry pick”. If you peeked at the data first, most likely it is a “cherry pick”. You see “cherry pick” arguments constantly in the climate wars.
What I was trying ( and failed ) to convey was a series of 15 yr steps starting at 1850 to present and comparing the series for changes in trends.
Carrick,
I suppose it depends on the time-frame you are looking at, and exactly what you are trying to prove.
Sure, it is obvious to anyone that temperature overall has increased since 1850 (I don’t think we would like it very much if it had not), and it SHOULD be obvious to anyone that there has really been no statistically significant trend in temperature whatsoever from 2000 to present. Those things I can understand.
However, when you try to use straight lines to show that the current temperatures are somehow “unprecedented” and that the rate of warming is accelerating, then I think all you are doing is trying to fool people.
Obviously, just from looking at Lucia’s plots, the 1980-present graph has a much greater slope than the 2000-present graph, and that does teach us something. However, a sine-like curve showing that temperatures were rising and MAY now be starting to fall might be a more accurate representation of what is actually going on in the system. We may not know that for another decade or two yet of course, but in studying past data (at least the past data that I consider to be the most reliable), climate seems to be a cyclical system, with period, amplitude, and other features strongly affected by different variables in the system.
The other problem I have with such graphs is the use of anomalies. Graphing temperature anomaly might be great for statisticians, but for the average human being it gives the illusion that the 0.0C line is somehow “normal” or “ideal” in some way, when in reality we do not really know what is normal or ideal for the climate of the earth, or for us as inhabitants of the earth either.
For statisticians and those who know some calculus, such graphs are ok, as far as they go, but for the average Joe trying to understand the nature of the climate system of the earth, anomaly graphs with a straight line through them (usually without the dashed upper and lower bounds provided by Lucia – hat tip to you Lucia for providing uncertainties!!!) are not a good representation of the true nature of the climate system.
I could rant on about this for longer, but I think you get the point for now, so \rant off (for the time being :))
Trying to leave the link out since the first 3 times I tried to post this it got lost.
Trying to provide a source for the 15 year time span
Goddard quotes a 2008 NCDC/NOAA report as saying
“Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
You can find this in a Sept 1 posting:
http://stevengoddard.wordpress.com/2012/09/01/climate-models-falsified-by-their-own-standards/
If it is a cherry pick, I suppose it’s not Goddard’s.
Lucia and Carrick,
Perhaps I should explain myself more explicitly. I don’t deny that you CAN fit straight lines to non-linear coupled chaotic data, and you can even get meaningful information about the system from such an exercise. However, often this data is presented to the general public in the form of a straight line through the data with no uncertainties shown, and many in the general public think that the 0.0 C line on the chart somehow represents “normal” or “ideal” temperature in some way.
For the general public (at least those who are not all that familiar with calculus and statistics), straight lines, though simple and easy to understand, are probably not the most accurate way to present what is actually going on in the system (which is very likely more cyclical in nature), and often these straight lines can even be used to mislead people without a mathematical background.
By the way Lucia, thank you for providing the uncertainty bounds in your graphs and the explanation of those bounds in your post. I know I shouldn’t have to say that, but so many temperature anomaly graphs don’t even mention uncertainty at all that I feel it necessary to say “thank you” to someone who does things right and explains it as well.
If more “climate scientists” did that, then I would at least start to be tempted to remove the quotation marks that I use….
PeterB–
If something is (pseudo-cyclical + trend) or even (truly-cyclical + trend) it can still be useful to compare to a straight line. You have to be careful how you interpret the graph. But there is nothing wrong with trying to detect trends that might exist concurrently with cycles.
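To make the point concrete, here is a toy demonstration with synthetic numbers (not climate data, and not anyone’s published analysis): an OLS fit still recovers an underlying trend when a cycle rides on top of it, provided the window spans several cycle periods.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(360) / 120.0                           # 30 years, in decades
cycle = 0.1 * np.sin(2.0 * np.pi * t / 1.0)          # 10-year cycle, 0.1 amplitude
y = 0.15 * t + cycle + rng.normal(0.0, 0.1, t.size)  # trend + cycle + noise
slope = np.polyfit(t, y, 1)[0]
print(f"fitted trend: {slope:.3f} per decade (true underlying trend: 0.150)")
```

If the fit window is short relative to the cycle period, the cycle aliases into the fitted slope, which is exactly why interpretation requires care.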
Lucia,
I agree, it can be useful to show if there is an upward or downward trend in such a system. I just am of the opinion that especially when it comes to the climate system in particular, quite often too much emphasis is placed on the straight lines, and they are presented as having (at least in my opinion) more meaning than what they actually are likely to have.
If you make a straight line from 1850 to present, you certainly show quite a warming trend, and if you draw a straight line from the 8.2 kiloyear event to present you get a much steeper warming trend, but if you draw a straight line from the height of the Minoan warming period to present, you would show a significant cooling trend.
In the case of flow around an airfoil, and water flowing through a pipe, you can show truly meaningful coefficients which do teach you a lot about such systems. I would suspect that doing so with climate data gives you truly meaningful coefficients as well, although I don’t think we have a very good handle yet on exactly what the meaning of those coefficients really is, because we don’t have nearly as good an understanding of the climate system compared to the other systems being described.
PeterB
I agree. Whether or not fitting a straight line to that period makes sense depends on what you are trying to learn. But, as it happens, the multi-model mean is clearly not a straight line during that period. So even aside from any cyclic or pseudocyclic behavior, no one would expect the “climate” to exhibit a linearly increasing temperature during that whole period.
Feel free to delete my repeat comments that just got out of moderation. Sorry for spamming.
Ben–
Sorry. The spam filter got you. Once it happens once, it gets stubborn. Your comment is fished out.
Ahh!! sorry. I didn’t remember the amount of time for that.
FWIW: I disagree with their diagnosis that a 15-year zero trend is required to create a discrepancy. The reason is this:
It is true that some runs in some models have low trends. But I don’t think it is reasonable to suggest that the spread of trends over models, each of which has a different mean trend, tells us anything about the spread of trends we expect in one model or on earth.
I have to gear up to take a new look at things I was looking at 2 years ago. It had to do with explaining spread in individual models– but at 10 years out, the amount of noise put things just on a bubble. The narrative will be clearer now. (Might have already been in January, but I’ve been busy.)
Ed Forbes (Comment #103228)
“As human-produced CO2 did not take off until after WWII, if there is no drastic change in the positive slope between these earlier steps and the last 15-yr step, how can one logically assume that CO2 is the main component of climate and postulate CAGW?”
Ed, because one doesn’t use logic as the basis for the assumption. One uses belief.
By the way, the overall (straight line) trend since 1850 is currently less than 0.6 deg C per century. The trend reached its steepest in 1878. Your question is a valid one.
One thing that has struck me from a visual standpoint is the difference in the trend pre- and post-1998 (the super El Nino). Is it possible to show whether the post-98 trend is statistically different from the pre-98 trend? I’ve always had a suspicion that the super El Nino released a lot of heat which had been building up and is now dissipating.
“Since 1850 is where most alarmists choose to start trends due to being the end of the LIA, looking at 1850 to present in 15 year steps is reasonable.”
No, 1850 is selected because it is the first year in which Phil Jones thought he had enough data to apply his method. You can, if you choose a different method, start at an earlier date. Or, using Hansen’s method, you can start at a later date.
There is no reason not to use all available data provided you can make a reasonable calculation of uncertainty.
“Lucia I as a faithful lurker will interject a piece of information here.
I believe the 15 year time span comes from a quote from NCDC/NOAA in a document I can no longer get to load for some reason.”
One underlying problem here is that the 15 year time span comes from looking at models that cover the entire globe.
You can’t compare that directly to HadCrut3. The reason is pretty simple: HadCrut3 does not sample the arctic very well and consequently runs cooler. To do the comparison correctly you would have to:
1. switch to HadCrut4 for a better representation of the northern latitudes.
2. RESAMPLE the GCM output using HadCrut3 sampling locations and recalculate the probability of getting 15-year zero-trend periods in that.
That said, generally the GCMs do not represent cooling regimes very well and they tend to run hot, both of which should lead one to suspect the 15 year period. Even Santer’s 17 year period has problems, since his sample included GCMs that don’t simulate volcanic forcings.. that would affect both his S/N ratio and the 17 year conclusion.
My bet.. 20 years.
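As a rough illustration of step 2 above (a hedged sketch under my own assumptions, not anyone’s published code), resampling amounts to averaging the GCM field only over the grid cells the observational dataset actually covers, with area weighting:

```python
import numpy as np

def masked_global_mean(model, obs, lats):
    """Area-weighted mean of `model` over cells where `obs` has data.
    `model`, `obs`: hypothetical (n_lat, n_lon) fields on a common grid;
    `lats`: latitude in degrees for each grid row."""
    mask = ~np.isnan(obs)                               # cells HadCrut samples
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones(model.shape)
    return np.sum(model[mask] * w[mask]) / np.sum(w[mask])
```

One would apply this month by month to the GCM output before recomputing the distribution of 15-year trends.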
It’s ambiguous, but IIUC they are talking about ENSO-corrected temperatures.
The report is “Do global temperature trends over the last decade falsify climate predictions?” , Knight et al. 2009 and google will not let me copy/paste a usable link, but the PDF is online.
No, because they aren’t. Woodfortrees to the rescue!
Your lying eyes. Don’t trust them.
Lucia,
I tried a graphic
here which gives an overall survey of the effect of different start/end years (for various indices). It also shows the time series graphs. There’s a fancier version
here which gives a similar plot of t-values and OLS upper and lower CI’s, or can mask according to significance.
I vote for 38. There actually is a reason, involving statistical significance for large numbers, but I don’t remember it well enough to explain it.
Nick– I’ve seen your graphic before and it’s a neat programming trick. Looks like someone would have to do a lot of cherry picking to find a trend of 0.2C/dec ending now. But of course your graph isn’t specifically for that purpose.
How hard would it be to subtract the multi-model mean trend with the same start and end years? That could highlight things.
toto, break points matter.
First: Yes.
It was never clear to me how the heck they did that for models — nor why. (Though, if I recall correctly, the end date for the data on which that paper was based was such that correcting for ENSO raised the estimate of the observed trend above the uncorrected level.)
Second: Since any ENSO correction can be debated, it’s hard to understand why the authors didn’t give the results both with and without ENSO correction.
Third: What do you think ENSO correction would do to the current temperatures? (I actually don’t know. I guess I’ll hunt down the correction and give it a try tomorrow.)
Fourth: I still don’t agree that one can substitute the spread of trends across models with different means for the spread in trends we expect in one model. And the spread of runs for one model is more analogous to earth than examining trends of 22 “earths”, each of which has slightly different physics.
But this is one of those things where it is becoming clear over time that the trends are in the lower range of the projections and the mean of the temperature projections is too high. (The fact that the ice loss projections are wrong in the opposite direction does not ‘balance’ this. It just means the models have problems with both.)
toto,
You can play many types of games with the data. See http://www.woodfortrees.org/plot/hadcrut3vgl/from:1999/to:2011/trend/plot/hadcrut3vgl/from:1985/to:1997/trend/plot/hadcrut3vgl/from:1985/to:2012/trend
The slope from 1985 to present is the highest of all – higher than 1985 to 1997. Whooda thunk?
Lucia,
“How hard would it be to subtract the multi-model mean trend with the same start and end years. That could highlight things.”
The arithmetic is easy, though it might be harder to find a multi-model set that is generally accepted as representative and which covers the years (ie all models with data for all years). I thought the CI plots could help here – you take a proposed (eg multi-model) trend value and then see where the data is significantly above or below. But only OLS limits at this stage.
I would guess it would look a lot like this, since ENSO is the major correction in that graph.
I guess it depends which question you’re looking at. My take is that if a certain aspect of Earth behavior occurs with enough frequency in “the models”, then this aspect of Earth behavior cannot be said to invalidate “the models”, as per the Steve Goddards of the world.
Nick
The set used in the AR4 has data for all years and the IPCC used it to make projections. Do you mean something else?
Yes. Eyeballing the colors on the axis ending “now” indicates that unless you take short periods, the 0.2C/decade is pretty well out of contention for HadCrut. But when one just wants to see trends ending “now”, I think it’s easier to show that with a graphic rather than the color triangle. Maybe I’ll gin one up tomorrow. (Since toto mentioned the ENSO correction, I’ll look at that too.)
toto–
You can neither validate nor invalidate “the models”. So, I think it’s pointless to talk about what comparison validates or invalidates “the models”.
You can test (or validate) their ability to predict a specific thing like temperature or rate of ice loss. Also, one can test a proposed method of projecting ‘X’ that happens to be based on models.
If someone has proposed that the multi-model mean is the best estimate, you can test whether or not that mean is consistent with earth weather. If the multi-model mean projection is too high, the fact that the models disagree with each other and some subfraction of the ensemble might not disagree with observations does not salvage the multi-model mean, which remains inconsistent with observations.
People changing the subject from something we can test to discuss a concept of validation that isn’t even conceptually possible just gets everything muddled.
toto (Comment #103276),
“I would guess it would look a lot like this, since ENSO is the major correction in that graph.”
It would if you accept all of Tamino et al’s choices for their curve fit exercise. That analysis does not even make any sense: lags from heat capacity of the ocean are allowed to vary for each forcing they considered… just nuts.
Mosher, HADCRUT4 is missing data after 2010. And the data in 2011 and 2012 is pretty cold.
HADCRUT3 post 2010 mean 0.350
HADCRUT3 1997 to 2010 mean 0.422
Bruce, then the only option is to resample the GCMs.
Hi Lucia – So what would be the null hypothesis you would favor if you were hypothesis testing for anthropogenic effects? It seems to me the null hypothesis would have to be that the observed warming since 1850 is a continuation of warming from the LIA. Is there a way to compare warming before 1850 and warming after to see if the current warming is consistent with the earlier warming? Or is all this basically just a crapshoot?
Why? I wouldn’t expect recovery from the LIA to continue forever. I certainly wouldn’t expect it to accelerate.
“Why? I wouldn’t expect recovery from the LIA to continue forever. I certainly wouldn’t expect it to accelerate.”
Well, since you put it that way, I guess I wouldn’t.
You get a higher trend because 2001-2010 are warmer than 1991-2000, and it’s not even close with the exceptions of 98 and 99.
For example if I subtract .2 from 2000 and .3 from 2001, the trend increases from .019 to .025
Mosher (#103303)
“the only option is to resample the gcms.”
Rather than the global average temperature, I think it would make a lot of sense to use a metric such as the average over 60N to 60S. Without having done the math, I suspect this would eliminate much of the disagreement between e.g. GISS vs. HadCrut3, or HadCrut3 vs. HadCrut4, or GISS (1200 km smoothing) vs. GISS (250 km smoothing), as the differences seem to be concentrated in the poorly-instrumented polar regions. It wouldn’t seem to be too hard to re-evaluate historic series, or GCM results, with respect to this region.
The only drawback is that people would have to be precise in specifying whether their metric (avg. temperature, climate sensitivity) is with respect to the entire globe or 60N-60S (which comprises ~87% of global area).
Edit: It was pretty simple to compare GISS-1200 with GISS-250. The area-weighted change in the annual averages from 1980-2011, is
(globally)
GISS-1200 0.4878 K
GISS-250 0.4473 K
difference 0.0405 K
(60N-60S)
GISS-1200 0.4219 K
GISS-250 0.4066 K
difference 0.0153 K
The choice of smoothing represents a 9% difference in the global average, but less than 4% in the 60N-60S average.
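For concreteness, here is a minimal sketch (my own construction, not the commenter’s code) of the cosine-latitude area weighting behind a 60N-60S average of a gridded temperature field:

```python
import numpy as np

def band_mean(grid, lats, lat_min=-60.0, lat_max=60.0):
    """Area-weighted (cosine-latitude) mean of a complete (n_lat, n_lon)
    field over a latitude band; assumes no missing cells."""
    keep = (lats >= lat_min) & (lats <= lat_max)
    w = np.cos(np.deg2rad(lats[keep]))          # cell area scales as cos(lat)
    return float(np.sum(grid[keep].mean(axis=1) * w) / np.sum(w))
```

Applying the same function with the full latitude range gives the global figure, so the 9% vs. 4% comparison above follows from two calls with different bounds.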
Mosher (#103303)
“the only option is to resample the gcms.”
Chad did that. It doesn’t make that big a difference. I just don’t have the resampled gcm data.
The trend for the last 30 years is 0.152C/ decade. But the trend for the individual months runs from .09C/decade in Jan to .18C/decade in July.
For the last 15 years the trend is -0.017C/decade and the monthly trends run from -.214C/decade in Feb to .053C/decade in Jun.
5 months are negative: Jan,Feb,Mar,Apr and Dec.
Jan 2008 was the 32nd warmest January. Jan 1863 and Jan 1878 were warmer. As was 42, 44 and 58 …
Jan 2012 was 18th. Jan 2011 was 22nd.
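If anyone wants to check per-calendar-month numbers like these, the computation is straightforward. A sketch (my own helper; `monthly` is assumed to be a 1-D HadCrut3 anomaly array trimmed so its final years are whole years beginning in January):

```python
import numpy as np

def per_month_trends(monthly, years_back=15):
    """OLS trend (per decade) fitted separately to each calendar month over
    the last `years_back` whole years. Assumes the final `years_back`*12
    values of `monthly` begin in January."""
    y = np.asarray(monthly, dtype=float)[-years_back * 12:]
    t = np.arange(years_back) / 10.0           # elapsed time in decades
    return {m + 1: float(np.polyfit(t, y[m::12], 1)[0]) for m in range(12)}
```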
Lucia, did he resample them using CRUTEM3 by position and month?
“It seems to me the null hypothesis would have to be the observed warming since 1850 is a continuation of warming from the LIA. ”
That is NOT a null hypothesis. That is a ‘renaming’ of the observations.
The difference in solar forcing between the LIA and today is mousenuts. If that small change in Watts drives a 1C change in temps, then the planet is really sensitive, and adding 3.7 additional watts from doubling CO2 spells much more trouble.
Mosher, Wild found an increase in bright sunshine from the early 90s to the 2000s that he calculated at an average of .51W/m^2 per YEAR.
Not mousenuts. Huge.
http://i55.tinypic.com/34qk01z.jpg
Mosher:”HADCRU3 does not sample the arctic very well and consequently runs cooler.”
so? neither does Point Nemo. Why move the goalposts?
Mosher
I don’t know the details. My impression is the grid somehow varies over time– so maybe? I don’t really know.
MrE–
It’s valid to point out that certain comparisons might not be granny smiths to granny smiths. HadCrut3 is an attempt to get the global surface temperature, but it doesn’t get the poles. Integration of model results includes the poles.
Other temperature series try to extrapolate over the poles and account for it.
I know that Chad did resample– and I know it made a small difference in trends. But it does make a difference.
Bruce (Comment #103311)
Could you summarize and tell us what point you are trying to make?
Mosher
The implicit assumption being that the LIA was caused by changes in solar forcing, which is quite dubious. AFAIK the cause of the LIA is unknown. The wiki page has plenty of speculation to chew on.
Steven Mosher, have you seen
http://pubs.giss.nasa.gov/abs/sc08400z.html
which Gergis et al 2012 referenced?
The Arctic Ocean is less than 3% of the Earth’s surface and we have relatively scant long term data on it. It also fluctuates much more drastically than most places on earth. If it makes a big difference in global temperature records and it has poor data, then why let it over-influence the data? As an outlier or anomaly prone to more error, doesn’t it just add more uncertainty?
Sue, my search couldn’t find the reference to Gergis. Where is it?
It was put “on hold” or “withdrawn”…
http://climateaudit.org/2012/06/08/gergis-et-al-put-on-hold/
It may be nice to reflect upon how some indicators of warmth show flat spots some of the time, and I’m sure it’s comforting to those that do, however those that don’t cherry pick their evidence will consider all factors, like sea level rise, ocean heat content and question things like “Where did the other fucking half of the ice cap go?”
I noticed, Lucia, you called Lewandowsky a wanker for wasting people’s time with regard to his “poll”. I’d reckon arguing over how flat one indicator of surface temperature is over a specific span of years rather wasteful too.
The planets warming. Wake up, move on and start writing about something a bit more useful and productive.
“The planets warming.”
I think this claim is meaningless. Chris, you aren’t providing any context that makes this anything more than poetry. It’s not punctuated correctly, either. Three words are not enough to describe what’s happening climatically.
Plus, in your first paragraph you’d like people to “consider all factors”, but you do not do so yourself. At least in your comment you don’t.
Andrew
lucia (Comment #103279) September 12th, 2012 at 4:27 pm
toto–
You can neither validate nor invalidate “the models”. So, I think it’s pointless to talk about what comparison validates or invalidates “the models”.
——
Lucia, I know what you mean, but then in what way are the models “science”?
MrE,
I think it might be reasonable to argue this, but you’re missing the point: if you do want to ignore the polar regions in observations, you have to do the same with the model outputs or you’re not comparing like-for-like.
Ed Hawkins performed this masking a few months ago for the CMIP5 models.
cui
People (including me) use sloppy language. Sometimes that’s fine.
But to answer your question: the truth is models in general are fundamental to science. Newton’s first law of motion is “a model”. So is the law of gravity. So is… well, everything.
These more familiar models might be called foundational or “first principles” models. They are used as “building blocks”.
We also have things like “constitutive relations” which are sometimes widely accepted under appropriate circumstances (e.g. the ideal gas law, the stress-strain relationship for Newtonian fluids, etc.) and sometimes merely empirical observations of things we might want to describe. Some fall in between.
Scientists and engineers build more complicated models based on these “first principles” models and ‘constitutive relations’, along with a bunch of assumptions and often mathematical simplifications. Both of the latter may be appropriate or inappropriate depending on application. (Extreme example: the ideal gas law works for air at STP. It totally doesn’t work for… well.. water. It often happens not to work for gases at high pressures.)
GCMs are a type of complicated model that do all the above. Since they are based on scientific principles and created following the method by which all scientists and engineers create predictive models, they are scientific. Of course no model– not even a foundational one– is science itself. When people say “science” they can mean the collection of knowledge we have accumulated using a particular method, or the method itself.
At its very base, the method is to test ideas of how the world works against observations. In the physical sciences these ideas are nearly always described using mathematical models (e.g. F=ma). The ideas that don’t work are tossed or revised. (F=ma has been extended to account for what happens near the speed of light and so on.)
As for invalidating a model: you might be able to ‘invalidate’ “a model” by showing its assumptions aren’t true, or showing it doesn’t work in some limit (e.g. F=ma doesn’t work near the speed of light, so it’s been revised).
Hypothetically, you could look at all the modeling assumptions in a GCM collectively and show it violates the 2nd law and the collective errors are material. (If you showed it violated the 2nd law when winds approached the speed of light, no one would care.)
If you showed something like that, you might say you’ve invalidated “the model”. (In that case, someone would correct it. The problem would likely lie in one of the many simplifying assumptions, or a constitutive relation– i.e. “sub grid parameterization”.) Doing this sort of thing might involve a mathematical proof, not comparison to data.
What I or others are generally trying to do when comparing output of a GCM (or the mean of GCMs etc.) is test a claim based on the model. That claim would be “the best estimate of temperature over time is X”. That’s not validating “the model”, “the models”, “the science” or “science”. It’s testing a specific hypothesis.
Many thanks Lucia. That’s a very full exposition. I understand.
lucia: “Could you summarize and tell us what point you are trying to make?”
What some perceived as AGW was in fact the absence of months with very cold anomalies caused by some unknown reason such as the end of the last PDO cycle.
Jan/Feb seem to be dragging down the average below the 1940s.
And a couple of Nov/Dec.
There are 23 months in the 1940s with a larger anomaly than Jan 2008. etc
1941.750 0.271
2008.250 0.271
2010.917 0.267
2011.834 0.263
2011.084 0.259
2011.917 0.243
1944.000 0.240
1944.667 0.232
1944.584 0.226
1943.750 0.221
1944.500 0.220
2012.000 0.217
1942.000 0.215
2011.000 0.194
2012.084 0.194
2008.084 0.192
1944.750 0.190
1941.500 0.180
1940.917 0.177
1941.417 0.167
1941.250 0.159
1943.917 0.154
1940.667 0.150
1944.417 0.147
1940.500 0.138
1941.834 0.117
1944.084 0.111
1944.167 0.109
1941.584 0.102
1941.084 0.066
1941.917 0.055
1940.584 0.054
2008.000 0.053
Is it warmer recently than in the past? No.
Bruce–
I don’t see how the existence of only 23 months since 1940 warmer than a recent strong cold dip in Jan 2008 means there is no global warming. Jan 2008 was a recent strong dip in temperature due to La Nina, and it was nevertheless warmer than (2008-1940)*12 – 23 previous months!
Part 2.
HADCRUT3 has 3 peaks. 1877 /78, 1942/44 and 1998/2006.
Recently there have been months colder than both earlier peaks.
This is 1877/78
2011.667 0.365
*1878.084 0.364
2011.750 0.343
2011.334 0.329
2008.917 0.327
*1878.167 0.322
2011.167 0.322
*1878.250 0.317
2008.417 0.308
2012.167 0.305
2008.334 0.278
2008.250 0.271
2010.917 0.267
2011.834 0.263
2011.084 0.259
2011.917 0.243
2012.000 0.217
2011.000 0.194
2012.084 0.194
2008.084 0.192
*1877.917 0.179
*1878.000 0.160
*1877.584 0.103
*1877.084 0.091
2008.000 0.053
*1877.834 0.047
“I don’t see how the existence of only 23 months since 1940”
23 months in the 1940s peak. Not since the 40s peak.
Feb 1878 is warmer than 17 recent months including 2 this year.
Lucia, I agree with cui bono, that one should be promoted to a post. Nice write up.
RE: matt (Comment #103322)
September 12th, 2012 at 11:10 pm
Mosher
The difference in solar forcing between the LIA and today is mousenuts. If that small change in Watts drives a 1C change in temps, then the planet is really sensitive, and adding 3.7 additional watts from doubling CO2 spells much more trouble.
The implicit assumption being that the LIA was caused by changes in solar forcing, which is quite dubious. AFAIK the cause of the LIA is unknown. The wiki page has plenty of speculation to chew on.
#######
Matt,
The simple fact that we don’t understand the cause of the LIA is a problem in predicting future climate. At its lowest point, best estimates of the LIA were 1-2C colder in Europe and North America (Alley). That was pretty cold, and we know what it wasn’t… CO2. If we don’t understand what caused such deep cooling, how can we now say we understand what is causing rapid warming? Logically, we can’t. Too many pieces of the puzzle are still missing and speculation reigns. We know from radiative physics that increasing atmospheric CO2 must cause warming at some level, but attempts to attribute “most” of the warming since 1950 to CO2 are speculative.
“means there is no global warming”
I think, according to HADCRUT3, there was warming. But many recent months are colder than months in 2 periods 70 and 135 years ago.
It WAS warmer (according to HADCRUT3). It isn’t now.
And I would propose that if UHI were accounted for, a large chunk of 2008 to 2012 would be totally colder than 1877/78 and 1942/44.
Bruce, “warmer” in the climate sense refers to the mean tendency; we like to think of the temperature about which climate is operating as indicating whether it is a warmer or colder climate.
It remains the case that the amount it’s gotten warmer by is still much smaller than the year to year variation in mean annual temperature, and even much less important than the interannual variation for any given site (let alone the diurnal variation in temperature!).
If I wanted to know whether it was warmer or colder in terms of the climate operating temperature, I’d look for averages over a long enough period of time that short-period climate fluctuations are averaged over (you have to average over the periods of time in which most climate variability is observed), and this turns out to be periods of less than 10 years.
As a result, we’ll use decadal averages and look at GISTEMP temperature:
1900-1909 -0.264667
1910-1919 -0.26625
1920-1929 -0.170083
1930-1939 -0.04025
1940-1949 0.0343333
1950-1959 -0.02475
1960-1969 -0.01375
1970-1979 0.00341667
1980-1989 0.187167
1990-1999 0.322833
2000-2009 0.521333
If you look at this data, do you see a positive trend?
If you do, “the climate is warming”.
(We can argue over the significance of this, but this is the take home.)
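For anyone who wants to reproduce this kind of decadal averaging, here is a minimal helper (my own construction; `monthly` is assumed to be a GISTEMP-style monthly anomaly array beginning January 1900):

```python
import numpy as np

def decadal_means(monthly, start_year=1900):
    """Mean anomaly for each complete decade of a monthly series starting
    in January of `start_year`."""
    n_dec = len(monthly) // 120                # number of complete decades
    return {f"{start_year + 10*k}-{start_year + 10*k + 9}":
            float(np.mean(monthly[120*k:120*(k + 1)])) for k in range(n_dec)}
```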
Bruce:
I would assume you would agree that oceans don’t have UHI, right?
If you agree with that (seemingly reasonable) proposition, then looking only at ocean temperatures should tell us something about whether UHI is affecting the conclusion that “climate is warming”.
Here’s HadSST2
1900-1909 -0.45045
1910-1919 -0.443525
1920-1929 -0.314583
1930-1939 -0.142867
1940-1949 -0.0791
1950-1959 -0.159575
1960-1969 -0.105308
1970-1979 -0.0830583
1980-1989 0.0453583
1990-1999 0.18435
2000-2009 0.331892
Oceans have a smaller predicted amplification effect than land does. There are model-based reasons to expect this. If you were to anticipate a strong UHI component, it should show up in latitude bands where people live and be smaller in ones where they mostly don’t. That doesn’t show up in the data either (the latitudinal effect gets bigger towards the North Pole). UHI cannot be a significant contribution (more than 10%) and still allow us to explain the trends seen in the data.
Using HADCRUT3.
Last 5 years mean = .36C
Previous 5 year mean = .45C
Previous to that 5 year mean = .41C
And the 1940s 5 year average was .03C.
Where is the .2C / decade CO2 signature?
Yes, it got warmer according to HADCRUT3.
It is NOW the coldest of the last three 5-year periods.
It peaked. And is now falling. And it is just the same ~70 year cycle.
And UHI is still unaccounted for.
So the peak that occurred in the last decade may not really have been that high.
Carrick: “I would assume you would agree that oceans don’t have UHI, right?”
They have bucket/inlet problems don’t you agree?
That led to an artificial warming in the 1940s. It doesn’t explain the trend.
Bruce:
That’s a misunderstanding on your part of the relationship between temperature and CO2. It’s the total radiative forcings that matter, not just one component of it.
And regardless of uncertainties, it is known that sulfate levels were higher in the 1940s than now. Since sulfates tend to cool climate, you have to combine the effects of anthropogenic warming from CO2 with the cooling from sulfates to predict the total net warming or cooling expected.
It is thought that until the 1970s, just due to the way industrial processes work, that the increase in CO2 was mostly masked by the increase in sulfates. (Similarly when we cleaned up our sulfate emissions, we had a “rebound effect” with an artificial warming associated with the removal of the sulfates. This effect happens regardless of the amount of CO2 in the atmosphere.)
Again relating to UHI, one can’t simply throw concerns on the table and dismiss a theory without quantifying the effect. Many people have looked at this from all sides of the debate, and the general consensus is it is not a significant contribution (< 10% of the land trend and < 3% of the global trend are the numbers I’m comfortable with, 95% CL).
Bruce–
Your method is a bit like saying July isn’t really warmer than April because the coldest days in July are colder than the warmest days in April. That happens nearly every year here in Illinois. Nevertheless, July is warmer than April.
You need to apply your method to other questions and see whether you would really say the same thing. If you wouldn’t say Illinois Julys are warmer than Illinois Aprils, then no… it’s not currently warmer than the past. But I don’t think that standard is useful. If July were not warmer than April, air conditioning would not be used widely in Illinois. Most people in my neighborhood and all stores, restaurants, bars, hospitals etc. have a/c.
I’ll summarize.
3 peaks exist in the HADCRUT3 record. And Now (say the last 5 years).
I say peak 1,2 and 3 were caused by Mechanism B.
The three peaks are about 70 years apart.
All 3 peaks ended and became valleys … we just don’t know how deep the current valley will be, but the data I posted shows it potentially could be very deep if winters keep on the current track.
Peak 1 and 2 overlap with Now, but not so much with Peak 3.
Some people think Peak 3 was caused by Mechanism AGW. I don’t.
I also think Peak 3 was exaggerated by UHI.
The people who believe in Mechanism AGW have failed to convince me it is a separate mechanism than Mechanism B.
Mechanism B probably was surface radiation changes (not TSI).
http://i51.tinypic.com/eb3pmb.jpg
“Your method is a bit like saying July isn’t really warmer than April because the coldest days in July colder than the warmest days in April. ”
No. I’m saying that January/Feb in Peak 1 and Peak 2 were warmer than Jan/Feb in the Now phase (not the Peak 3 phase)
Peak 3 is over. It ended. It is not Now.
Consensus on the effect of UHI is not all that broad:
http://www.drroyspencer.com/2012/08/spurious-warmth-in-noaas-ushcn-from-comparison-to-uscrn/
Another interesting study which shows a logarithmic relationship between population and UHI:
http://wattsupwiththat.com/2012/08/31/recent-paper-demonstrates-relationship-between-temperature-and-population-density-in-the-uhi-of-new-delhi/
The nature of this relationship is not surprising. One can easily demonstrate with a simple model that the increase of the UHI effect is more or less independent of the size of the agglomeration but is directly related to the rate of population growth. This means that studies based on the rural/urban distinction cannot conclude anything about data pollution.
Studies that look at areas that remain completely rural and areas that go through urbanization tell you how important UHI is to the global mean trend, regardless of the model you propose, as you are only looking at categorical data.
The problem with HadCrut3 is that it does not account for warming at the poles adequately. HadCrut4 does a better job of this. The warming in the Arctic is highly significant, for obvious reasons, and not including that is going to leave your temperature record short.
“it is known that sulfate levels were higher in the 1940s than now.”
Not even close.
Global totals for 1940, 1950, 1960, etc., through 2005:
49.694
58.158
90.502
126.544
130.788
127.795
106.869
115.507
http://www.atmos-chem-phys.org/11/1101/2011/acp-11-1101-2011.pdf
PS Any chance of releasing 103358 from the moderation queue?
bugs–
I can find HadCrut3, I can find Crutem4. Where is HadCrut4?
HADCRUT4 is at the Met
http://www.metoffice.gov.uk/hadobs/hadcrut4/
Let us know which of the 100 ensemble members you use …
Bruce–
Thanks. The medians are here:
http://www.metoffice.gov.uk/hadobs/hadcrut4/data/time_series/hadcrut4_monthly_ns_avg.txt They go through December 2010.
bugs, let me know when the medians are published more or less in real time.
CRUTEM4 is available in the CRU usual location.
The last 5 years show a negative trend. It would have gone negative sooner, I think, except January 2007 had an amazing 1.677 anomaly, which was .573 higher than CRUTEM3.
Bruce–
To replace HadCrut3, I need HadCrut4, not CRUTEM4.
@Lucia
Q: Why does the data set end in 2010?
A: The paper describing the HadCRUT4 data set was submitted in 2011 and 2010 was the final complete year of data available at the time. Regular updates will follow and are discussed later in this FAQ.
It is a work in progress; you can see the difference the extra polar data creates up to 2010 compared to HadCrut3, so I would suggest that relying on what is now known to be a deficient temperature record, one that is missing the dramatic polar amplification, is a waste of time.
bugs– It might be a waste of time. Or not. I’m updating my scripts and I’m going to use HadCrut3 until HadCrut4 is available. HadCrut3 has the same issues it always had; it’s still useful even if it’s not perfect.
Bruce: “not even close”.
Yeah, you’re right. Peaked around 1970.
bugs:
I don’t know that it’s a waste of time. Relative to uncertainties, there isn’t that much of a spread.
I understand one of the big issues for polar temperature is they use sea surface temperature as a proxy for surface air temperature for open water, and the nearest land temperature station when it’s frozen over.
Sounds a bit iffy to me.
Steven Mosher said, “The difference in solar forcing between the LIA and today is mousenuts. If that small change in Watts drives a 1C change in temps, then the planet is really sensitive, and adding 3.7 additional watts from doubling CO2 spells much more trouble.”
The rate of ocean cooling is not the same as the rate of ocean warming.
http://i122.photobucket.com/albums/o252/captdallas2/climate%20stuff/oceaniaTmin.png
That is Tmin for Oceania. Except for the 1940s cooling event, the rate of increase of Tmin would have been a steady .7 to .9 C per century. That is your data set. If y’all did regional Tmins, things would start making more sense.
http://redneckphysics.blogspot.com/2012/09/using-southern-oceans-baseline.html
That looks at the impact of baseline selection on extending proxies from the instrumental data. The Neukum et al. Southern South America reconstruction has some minor issues, but on the whole it is a reasonable reconstruction of southern ocean temperatures. Roaring forties and furious fifties and all that.
Carrick: “Yeah, you’re right. Peaked around 1970.”
Around 1980 globally. Around 1970 in US and Europe. And then fell until 2000 and started to rise again thanks to China.
Dropped by about 18% from 1980 to 2000. I wonder what climate did from 1980 to 2000?
Coincidence? It shouldn’t be.
An 18% drop in sulfate emissions should have done something.
Hmmm … what could it be?
http://i40.tinypic.com/xgfyok.jpg
Rats, too late to edit. One of the model issues is the assumption that the sensible heat associated with latent cooling is a net near zero, with the condensation regaining heat as it is returned to the oceans/land. That is true eventually, but the time required is a big question. If you cheat and use a psychrometric chart, the sensible heat ratio is roughly 0.59 for total cooling of the oceans. In the northern oceans, particularly the north Atlantic, a large percentage of that sensible heat can be stored as water, snow pack and ice. The north Atlantic ocean can easily lose 40Wm-2 in a Little Ice Age scenario. We haven’t experienced that yet, but it is very likely. The energy lost in the north Atlantic would be regained by the thermohaline circulation. That takes roughly 60 years, with every PDO shift and volcano burp adding to the recovery time.