At some point, I will have to ask Christopher Monckton his basis for determining his version of the “IPCC’s currently-predicted” warming trends shown on his graphs. Graphs from his monthly recap (pdf) are currently showing at IceCap.
I’ve superimposed my best understanding of “the IPCC’s currently-predicted” best estimate of warming using a dark blue line. I interpret this to be the multi-model mean trend obtained by averaging over the 22 models that form the basis of the IPCC projections in the AR4. This gives 0.21 C/decade for the specific period between Jan. 1980 and Dec. 2008. The alternative choice for the IPCC’s best estimate is “about 0.2 C/decade” based on the text, or various other trends one can obtain from tables. (They are all around 0.21 C/decade.)
My lighter, grey-blue lines show the trends associated with ±1 standard error in the mean trend as determined using the method described by Santer et al 2008. (That is: I correct under the assumption that all residuals from the linear fit are AR(1) noise; also, this is not the standard deviation for all trends.)
Based on additional analysis applying the method Santer applied to test the multi-model mean trend in tropospheric temperature against observations, I find that the IPCC projected trend for this period would be rejected if we use 90% confidence levels. The trend would fail to reject at 95% confidence. (Additional interesting results apply if we examine models that include volcanic treatment separately from those that do not.)
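For readers who want to see the mechanics, here’s a stripped-down sketch of that calculation in Python. It assumes monthly anomalies; the function name and the small-sample details are my own simplification, and Santer et al 2008 differ slightly in the corrections they apply:

```python
import math

def trend_with_ar1_se(y, months_per_decade=120):
    """OLS trend of a monthly series and its AR(1)-adjusted standard error,
    in units of y per decade.  A sketch of the Santer et al. (2008)
    approach, not their exact code."""
    n = len(y)
    t_mean = (n - 1) / 2.0
    y_mean = sum(y) / n
    sxx = sum((i - t_mean) ** 2 for i in range(n))
    sxy = sum((i - t_mean) * (yi - y_mean) for i, yi in enumerate(y))
    slope = sxy / sxx
    intercept = y_mean - slope * t_mean
    resid = [yi - (slope * i + intercept) for i, yi in enumerate(y)]
    # lag-1 autocorrelation of the residuals
    r1 = sum(resid[i] * resid[i + 1] for i in range(n - 1)) / \
         sum(r * r for r in resid)
    # effective sample size under the AR(1) assumption
    n_eff = n * (1 - r1) / (1 + r1)
    # naive slope standard error, inflated for the reduced sample size
    s2 = sum(r * r for r in resid) / (n - 2)
    se = math.sqrt(s2 / sxx) * math.sqrt(n / max(n_eff, 3.0))
    return slope * months_per_decade, se * months_per_decade
```

Applied to a synthetic series with a known 0.21 C/decade trend plus noise, it recovers the trend and returns a per-decade standard error; applied to real anomaly data it would give the grey-blue bounds shown above.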
So here are two questions some might want me to answer:
- Do the IPCC trends give me confidence the models accurately predict trends in future data? Nope. Over 2/3rds of the data used in this comparison is hindcast, and the trend still rejects at 90%.
- Do I think Monckton’s graph is a fair representation of the IPCC trends and their uncertainties? Nope.
Lucia – are you after another entry in the RC Wiki?
It is a wonderful place to visit. The kind of balanced and informative enterprise that only credible/serious scientists are capable of producing in their spare time!
Personally I think your previous entry is a shining light though you have a long way to go to catch up with the number of entries by such beacons of tact and diplomacy as Joe Romm, Tim Lambert and Tamino!
Unfortunately your recent critique of Debra Saunders did not make it for some odd reason but this one may have a good chance to follow its predecessor.
Lucia:
“I find that the IPCC projected trend for this period would be rejected if we use 90% confidence levels. The trend would fail to reject at 95% confidence.” Correct?
Is the second sentence applied to “inconsistency with observations”?
If not, please forgive an elementary question. I thought I had a (small) handle on this.
Thanks.
Clivere– I didn’t even know I made it into the RC Wiki. What a coup! 🙂
Brian– When a test is based on probability, you need to state a probability criterion. If you had said: “I’ll reject the models as making correct predictions if, after first assuming they predict the true trend, I find that the observations fall outside the range containing 9/10 possible trends”, that forms the 90% confidence interval.
So, I find (given some assumptions) that if the models were true, a trend falling as far from the model trend as the one that actually occurred would only happen by chance fewer than 1 in 10 times. So, we reject the models’ predictive value if we use 90%.
However, if we wanted to give the models a greater benefit of the doubt, we might only reject them as true if the observed trend was so far off it could only happen by chance fewer than 1 in 20 times. The observed trend from 1980-now could happen 1 in 20 times given the variability of “weather noise”.
That said: to get this result, we are using a time period where 2/3rds of the data were known before the “prediction” was made. So, I think failing at 90% gives me little confidence the models are good. Other people may have other opinions, but I think the multi-model mean is not looking good!
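To make the 90%-versus-95% distinction concrete, here is a toy two-sided check using a normal approximation. The observed trend, model trend, and standard error below are made-up illustrative values, not my actual numbers:

```python
def rejects(obs_trend, model_trend, se, z_crit):
    """Two-sided test: reject the model if the observed trend lies more
    than z_crit standard errors from the model-mean trend (normal approx.)."""
    return abs(obs_trend - model_trend) / se > z_crit

# two-sided critical values of the normal distribution
Z90, Z95 = 1.645, 1.960

# made-up illustrative numbers: model mean 0.21 C/decade,
# observed 0.11 C/decade, standard error 0.055 C/decade
print(rejects(0.11, 0.21, 0.055, Z90))  # prints: True  (rejected at 90%)
print(rejects(0.11, 0.21, 0.055, Z95))  # prints: False (fails to reject at 95%)
```

Any observation sitting between 1.645 and 1.960 standard errors from the model mean produces exactly this “reject at 90%, fail to reject at 95%” pattern.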
The models are looking even worse on ocean heat content, according to this: http://climatesci.org/2009/02/09/update-on-a-comparison-of-upper-ocean-heat-content-changes-with-the-giss-model-predictions/
Your +-1 standard error looks smaller than I would expect based on my intuition. If I ran even 1 single model 100 times, I’m not convinced that I would see the trend of 70 runs fall within such a narrow bound (though I might be wrong: I guess trends will show much less variance than individual ending years). And if you asked me to run 100 different IPCC models, I would be quite surprised if 70 runs fell within a band that small, given the differences between models in climate sensitivity, ocean uptake, aerosol forcing, volcano representation, emissions estimates, and so forth. I grant you’ve done the statistics and I haven’t – I guess that if you take the 26 GCM models, about 17 of them do fall in that span?
I note that if we look at Simon Evan’s graph from your previous post, just the surface land temperature measurements seem to have a larger spread since 1980 than your 1 std. dev.
On a related but separate note: Given that even the models which have ENSO-type behavior do not claim to get the pattern of historical ENSOs correct, and given that we know that ENSO does have an effect on temperatures, what is the impact of historical ENSO on these trends? I note that the last two years have had negative Multivariate ENSO indices in real life – so if we picked only those model runs that ended with negative MEIs in the last two years, what effect would that have? Or, better yet, run enough runs that you can get a number of them that approximately duplicate the 29-year MEI pattern, and see what effect that would have. I note that MEI over the 29 years does have a negative slope, and just for fun, if we assume that MEI is linearly correlated with a temperature anomaly around a 0.21 degree trend, then adding 0.27*MEI will turn a 0.21 degree/decade trend into a 0.15 degree/decade trend. And, besides missing Pinatubo, it even seems to do a decent job simulating the SPPI index (though I wish I had the base data so I could do annual averages and plot my simplistic approach on top of Monckton’s graph).
As far as Monckton is concerned: he loves taking IPCC trends from 2000 to 2100 and turning them into simple functions that he can then show don’t match current trends. He may have done something like take A1FI, at 4 degrees warming (1980/99 to 2090/99) which is about 3.9, and then stuck a symmetrical distribution around it (2.4 to 5.3), and then linearized back to present and compared to the current 1.5.
I’m very much a lurker on these sites and not a climate scientist so my comments are just speculation. Monckton’s IPCC estimate appears to derive from the IPCC SPM Table 1. Given that Lord Monckton seems to focus much of his efforts on “Policy Maker” education (re-education), it doesn’t seem unreasonable that he would use the extreme case numbers used in the 2007 IPCC SPM since that is the most that many of these individuals would have read.
Oh. I guess my Monckton comment is superfluous, given your earlier blog post.
I’ll comment on his CO2 graph then! He took a simple exponential, which is clearly higher than the A2 concentration in IPCC Figure 10.2 (the largest CO2 projection figure), and his central line is _maybe_ congruent over the first 2 decades with IS92a (which itself is significantly above _all_ the AR4 scenarios) from IPCC figure 10.24.
jae [10171]
I made the same observation on the other Blackboard “Trends since 1900” thread. Pielke Sr argues that the GISS model is off by a factor of 2.5 compared to a growing record of verifiable data.
It bears repeating: models are models, are models, only models, ad infinitum… They are most certainly not reality!
Lucia
It seems to me that the only thing that is important here, is the red line in Monckton’s graph. If that data is correct, all else is wrong. Please correct me if I am missing something.
Marcus–
That’s the standard error on the mean trend, s.e. It’s different from the standard deviation (sigma). If you ran “N” models and found the mean, the standard error is sigma/sqrt(N). They are both types of standard deviations, but one is the standard deviation for a mean, the other is for the population. In this case, N=26.
Wait – maybe I’m misunderstanding (highly likely – I took one probability course and one econometrics course, and sadly any sophistication with statistics dribbled out of my brain after my coursework was done – one day I’ll find some time to relearn it all) – but I feel using sigma/sqrt(N) is inappropriate for determining how consistent a single realization of reality is with a model mean trend.
As N-> infinity, doesn’t sigma/sqrt(N) go to zero? And therefore, if I ran thousands of model runs, I would be pretty much guaranteed to have a rejection of the trend at 99+% no matter how good my models were (given any chaotic behavior at all in the system).
This isn’t like weighing a rock, where you expect some type of normal distribution of measurements around a “true value” if you have a perfectly accurate but imperfectly precise scale, and therefore the more measurements you take, the closer the mean of your measurements should be to that true value. Predicting climate is more like predicting the sum of 10 six sided dice: I can take my model system of 10 dice and test it out, and come up with a distribution centered around 35. But then I actually roll the “earth” (the real dice) and come up with 31: even though my model was _exactly right_, and even though reality ended up being within 1 std dev of the mean, if I was using a standard error I would reject my guess at 99% if I had a N large enough (I’d rolled my model 10 dice a billion times, for example).
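The dice analogy is easy to simulate; this quick sketch (my own toy code, not anything from the post) shows how the spread of a single realization stays put while the standard error of the mean shrinks with N:

```python
import random

random.seed(1)

def roll10():
    """One 'realization': the sum of ten fair six-sided dice."""
    return sum(random.randint(1, 6) for _ in range(10))

N = 20_000
runs = [roll10() for _ in range(N)]
mean = sum(runs) / N
var = sum((x - mean) ** 2 for x in runs) / (N - 1)
sd = var ** 0.5        # spread of a single realization (about 5.4)
se = sd / N ** 0.5     # standard error of the model *mean*: tiny for big N

# A perfectly fair roll of 31 sits well inside +-1 sd of the mean of 35,
# yet lies many standard errors from it once N is large.
print(round(mean, 1), round(sd, 2), round(se, 3))
```

The mean comes out near 35 and the per-roll spread near 5.4, while the standard error keeps shrinking as N grows, which is exactly the worry expressed above.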
One thing that I haven’t found yet is where the 1980 date comes from. Did Monckton arbitrarily pick it, or is it related to an IPCC date? Admittedly they’ve changed their baseline, but isn’t it a valid argument that if they had made that pronouncement in a report, their projection wasn’t even close? Otherwise they get to constantly provide a moving target.
The IPCC trend forecast is based on the growth in CO2 and other GHGs less the different model’s estimated impact from aerosols.
They are way off.
It doesn’t matter if you start in 1980 or 1991 or 1850. The theory produces a set rise in temperatures based on the rise in GHGs (to be more precise, the LN growth in GHGs).
These estimates are off by half based on the empirical data to date.
If you used the theory starting in 1850, it would have predicted a rise in temperatures of 1.3C to 1.4C to date.
Lucia, the logic of treating model runs statistically, taking their means and estimating errors from their distribution, evades me.
We have 15 people each coming up with their best estimate lines for the plot above. This means they have fixed all parameters to specific values. Are the parameters the same and the equations/gridding and other assumptions different? Are the parameters different and the equations the same? We, I, do not know. They are each the output of a black box.
If I were a reviewer, I would ask each modeler to supply the 1 sigma error around the above curve for his/her specific model. If you had those errors you could start treating them statistically, imho.
They cannot do it and get a reasonable answer. Only changing the albedo parameter by one sigma will move each model curve like a fan more than 1 degree C in the plot above.
Didn’t Mount St Helens go off in 1980?
Starting a trend at a time of low temperatures caused by volcanic eruptions increases the slope of the line. A cherry pick to watch for, just like picking your end point.
Nick
Marcus-
You’re doing mostly ok. 🙂
There are different tests one can do, and each answers a different question. After I explain, you’ll also see that my using standard error uncertainties means that I should also add uncertainty intervals to the earth’s observation. (Excuses to follow after I explain the different tests.)
In principle, you could ask these three questions:
1) Are the multi-model mean and the “underlying trend” for the earth’s temperature consistent? This is the question Santer asked and tested. For this, you use the standard error for the multi-model mean. (I show these error bars. I’ll discuss this more later.)
2) Does the earth’s trend fall in the range of all possible “weather” associated with a particular model? This question would use the standard deviation of the weather for the runs from one and only one model. (Other models have different parameterizations, and have different weather on average.) This question can be asked, but it’s a rather weak question. To give an analogy– you could find 100 women and measure their heights. Then, you could measure my husband, who is 5’8″. The answer to “Does his height fall in the span of heights for women?” is: Yes. And yet, we know that men’s heights are different from women’s. So, if our goal had been to figure out if men’s heights differ from women’s, we haven’t learned what we want to learn. (The purpose of question 1 above is more like “are men’s heights different from women’s”, rather than “is one man’s height inconsistent with all women’s”.)
3) We could ask whether the earth’s trend falls inside the full range of multi-model means. If it doesn’t, then nearly all models are off individually. For this test, you use the standard deviation of the mean trend for all models. (So, don’t divide by the square root.) Bear in mind: If all the groups had run many model runs, this test would have almost nothing to do with “weather noise” in models. But because many of the groups only ran 1 run, in some sense this test cannot be done. The standard deviation of the mean trend across models has some cases with 1 run, and so is dominated by “weather”; other cases have 7 runs, so a fair amount of “weather” is averaged out.
So, now, for the Santer tests:
For comparing whether the mean associated with the earth’s observation and the model mean match, Santer used a paired t-test. For that, we use the standard errors for both the model mean and the earth’s trend.
The standard error for the model mean trend is found in the normal way. I calculate the 22 mean trends for the 22 models. I then calculate the standard deviation over the sample of 22 and divide by sqrt(22).
Now, if I had a bunch of “realizations” for the earth’s trend, I’d do the same for the earth. Then, I’d calculate a normalized difference by subtracting the two means and dividing by the square-root of the sum of the squares of the errors, look up the “t” for the confidence level I want, and check.
But…. I don’t have 22 realizations for the earth. So, instead, Santer proposes we assume the earth’s “weather noise” is AR(1). Then, we use the residuals to estimate the standard error in the mean for the earth’s noise. This gives us a standard error for the earth’s trend.
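Putting those two pieces together, the normalized difference that gets checked against “t” can be sketched like this. The trend values below are made up for illustration; this is the spirit of the Santer et al. (2008) statistic, not the paper’s actual code:

```python
import math

def santer_d(model_trends, earth_trend, earth_se):
    """Normalized trend difference in the spirit of Santer et al. (2008):
    (multi-model mean minus observed trend) over the combined standard
    errors.  earth_se is assumed to be the AR(1)-adjusted standard error
    of the observed trend."""
    n = len(model_trends)
    mean = sum(model_trends) / n
    var = sum((x - mean) ** 2 for x in model_trends) / (n - 1)
    model_se = math.sqrt(var / n)   # standard error of the model mean
    return (mean - earth_trend) / math.sqrt(model_se**2 + earth_se**2)

# purely hypothetical illustration: 22 model trends scattered near
# 0.21 C/decade, an observed trend of 0.11, and an earth s.e. of 0.05
trends = [0.21 + 0.04 * math.sin(2.3 * k) for k in range(22)]
d = santer_d(trends, 0.11, 0.05)
# compare |d| with the t (or normal) critical value for the chosen level
```

With numbers like these, |d| lands around 2, i.e. in the zone that rejects at 90% but not at 95%.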
(It’s at this point you should say “Aha! You didn’t put that on the picture! Then I, Marcus, and all your readers could eyeball that to see if they overlap! Better yet, readers who actually remember a little bit of statistics would not be confused.” What you are thinking is more or less correct.
My excuse is– I had the value for the IPCC prediction in an EXCEL file already. Getting the other value would have been work. Plus, my main comment is: Where in the heck does Monckton get his values? Anyway, I’ll be posting more on how the graphs should look a bit later.)
So.. the short story: Both the standard deviation and the standard error are useful metrics but each gets used for answering different questions. When reading any statistical analysis, it’s always useful to pinch yourself and ask: “What question does this answer?” Analogies help here. Often, all the questions are valid ones. But, you don’t want to think you answered question (1) when you did analysis (2).
It looks as though Monckton correctly calculated a mean projected rate of 0.6 for the 29-yr period but based the graph on something else.
The top value in what is presented as IPCC modeling ranges would be about 5 deg/century. While it is clear that the majority of AR4 models did not project anything that high, I got the impression from graphic representations in the reports that a few did venture up there. And it is not as if the über-alarmists have never tossed out 3-6 deg/century scenarios just to keep us on our frightened toes.
In making the valid and irrefutable point that existing trends do not validate the high end of alarmist scenarios, Monckton needed to do a better job of labelling that which he was refuting.
This is especially true in climate science because all outcomes are “consistent with” AGW, so it is important to identify which part of the protean corpus is under review.
BarryW
Monckton’s article doesn’t seem to say. I think 1980 is one of three good candidates for start dates when testing AR4 projections because in the section discussing projections of temperature anomalies, all results are given relative to the baseline from Jan 1980-Dec 1999.
So, it follows my self-imposed rule of picking things based on dates associated with those specifically discussed in the AR4. (I think the other two ‘good’ candidates are a) 2001, the SRES were informally published in Nov 2000, and formally published sometime in 2001. I then like to start in Jan. and b) 2000, as this is the official “break” point for the 20th century runs and the projected runs. Of these two, I prefer the 2001. My reasoning is that when the SRES were frozen matters more than the “break” point.)
The IPCC does not project linear trends in temperature response, and it never has. The simplified AR4 statement “For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios” is referencing 2011-2030 in my view (all dated projections in Chapter 10 begin at 2011). ‘Next’ doesn’t mean ‘current’ (2001-2010) and it certainly doesn’t mean beginning at 1980!
The science does not imply that x units of GHG = y temperature response at any given time. The response is dependent upon the stage in the climate’s evolution. For example, over the next two decades some 0.1C/decade response is the consequence of committed warming, which has accumulated over time. Or, for further example, the effect of changes in feedback is non-linear.
The change in rate of modeled temperature responses is most simply illustrated in this table from the TAR (the AR4 doesn’t have the equivalent, unfortunately) –
http://www.grida.no/publications/other/ipcc_tar/?src=/CLIMATE/IPCC_TAR/WG1/552.htm
– as can be seen, for the high emission scenarios (e.g. A1F1 and A2) the rate of response increases towards about 2060.
On another point, I don’t know where Monckton has got his figures from – he states “IPCC predicts warming at 2.4, 3, 3.9, 4.7, 5.3 C/century” and says these are “currently predicted”. The ‘best estimate’ figures from the AR4 are actually as follows, from table SPM 3:
B1 scenario 1.8
A1T scenario 2.4
B2 scenario 2.4
A1B scenario 2.8
A2 scenario 3.4
A1FI scenario 4.0
Why has Monckton used different and higher figures?
Terry–
Even by this measure, where did the values come from?
Table 1 in the IPCC SPM for the WG1 discusses estimates of sea level rise. Do you mean a different summary for policy makers? (There are several groups.) Do you mean table 3? That has temperature changes from 2090-2099 relative to 1980-1999.
I speculated before that Monckton is taking values from the whole century and mis-applying them to “now”. That’s not the same as simply using the upper bound of what’s predicted, because the IPCC is very clear that they are not predicting the best estimate for the trend to be constant over the century.
If Monckton would say in his articles, then we could specifically rebut his choice. As it stands, he doesn’t say. So, I can rebut the hypothetical reason why he picked those numbers. But… is that why he picked them? Or is there some halfway legitimate reason for his choice?
Tetris–
It depends what “all” encompasses. Monckton is trying to make an argument about the inaccuracy of the IPCC current projections. To do that, he has to compare the red line to something that could conceivably be called the IPCC projected during the time period shown.
Monckton doesn’t do that.
There are both political and scientific issues involved in Monckton’s representation of the IPCC projections. Both are important, but in different ways.
Since the IPCC wasn’t established until 1988 it’s interesting to see that Monckton is examining their projections from 1980 😉
The IPCC First Assessment Report (1990) projected a business as usual outcome of +3C. Since this is worst case, it can be compared to Monckton’s +5.3C claim.
The Second Assessment Report (1995) reduced the projection to +2C by 2100. “This estimate is approximately one-third lower than the “best estimate” in 1990. This is due primarily to lower emission scenarios (particularly for CO2 and CFCs), the inclusion of the cooling effect of sulphate aerosols, and improvements in the treatment of the carbon cycle.”
The Third Assessment Report (2001) projected scenarios as follows (I’m giving the link again for convenience):-
http://www.grida.no/publications/other/ipcc_tar/?src=/CLIMATE/IPCC_TAR/WG1/552.htm
As can be seen, the A1F1 scenario gave the maximum rise to +4.49C by 2100 against 1990, or 4.33C/century. The projected rise for that maximum rise scenario for the 2001-2010 decade is only 0.16C. Compare this to Monckton’s suggestion that the average prediction is 0.34C.
I’ve already given the scenario figures for the Fourth Assessment Report in my last post. Apart from the fact that it’s entirely meaningless to ‘backdate’ such projections to 1980, the figures are not those that Monckton claims.
Any which way you look at it, Monckton’s ‘analysis’ is a fabrication.
Simon–
I think using 1980 as a start point is fair based on the wording of the AR4. If you note table 3 in the SPM you’ll see all projections are stated relative to 1980-1999. So, unless people are going to insist that comparisons must start on the date the AR4 was published, 1980 is one of the reasonable candidates for a start point.
Rahmstorf backdated the TAR predictions to 1990. The reason for this was the TAR used 1990 as their start date. So… what’s sauce for the goose ought to be sauce for the gander. I think 1980 is an ok start date for the AR4. (But… if you want to insist on 2001, I’m fine with that! It makes the models look much worse than 1980! BTW: They don’t look so hot starting from 1990 either. 🙂 )
Anyway, it’s always fair to test hindcasts. The IPCC does it. I have no problem with it– as long as we admit we tested a hindcast and don’t pretend it proves a forecast/projection.
But one needs to create a fair test of the hindcast. The IPCC does not predict what Monckton says they predict for the period between 1980-now.
So… I think the choice of start date is ok…. No one has suggested an explanation for Monckton’s analytical choices that strikes me as remotely fair.
Lucia,
I believe Monckton gets his trend from scenario A2 in Fig. 10.4 in AR4 Chapter 10. If you:
.
1. Assume the end date is 2080 (from Table 10.5)
2. Visually pluck the value off Fig. 10.4 (instead of using the 3.13 listed in the Table 10.5)
3. Fix your start point at 1980
4. Draw a straight line from 1980 to your endpoint . . .
.
Voila! You get 0.30 – 0.40 deg C/decade or thereabouts depending on how calibrated your eyes are. You can do the same process to achieve his error bars by assuming that the dark pink is the outer boundary of the A2 scenario.
.
It’s a wholly unfair way of using Fig. 10.4 . . . and since it’s necessary for him to use Table 10.5 to get the 2080 value (steps 1-4 don’t yield Monckton’s results without doing that), then it would seem a deliberate attempt to mislead by then going back to the graph and visually estimating your temperature.
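The four steps above amount to very simple arithmetic. A sketch of the reconstruction (the 2080 anomaly below is my own eyeballed guess off the figure, not a value the IPCC states):

```python
# A back-of-the-envelope version of steps 1-4.  The 2080 anomaly is a
# hypothetical value "visually plucked" from Fig. 10.4, as described above.
start_year, end_year = 1980, 2080
anomaly_2080 = 3.5  # deg C, eyeballed (hypothetical)
trend_per_decade = anomaly_2080 * 10 / (end_year - start_year)
print(trend_per_decade)  # prints: 0.35 -- inside the 0.30-0.40 range above
```

Eyeball a slightly different 2080 value and you slide around within that same 0.30–0.40 band, which is why the method is so sensitive to how calibrated your eyes are.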
.
Anyway, I’d be willing to bet that’s how he got his pretty graph.
Lucia – sure, I have no problem at all with 1980 as a start for hindcasting (1880 even better in many ways!), nor with it as a reference point, I just can’t see that it was ever the start of a projection…… but I think we’re in agreement on the ‘fair test’ issue anyway.
The only sense I can get from Monckton’s stuff above is the rather obvious point that temperatures will have to rise at a faster rate in the future than to date if the IPCC projections are to be on target going forward. That’s exactly what the high fossil fuel scenarios project, of course, as fig 10.5 shows well for the A2 scenario:
I’m all for assessing projections, though unconvinced of the best way to do that. Going forward a few decades the assessment of their average might be rather ‘kind’ to the IPCC, since they include emission reduction scenarios that we currently look unlikely to achieve. On the other hand, if we have a Pinatubo-scale eruption tomorrow then it would seem churlish to me to point fingers at the IPCC for the lowered temperatures over the following two or three years. At the moment we’re experiencing a longer than expected solar minimum, which must influence the assessment of any current scenario. Furthermore, it’s a tough call to assess the impact of the economic crisis upon emissions. It seems to me that the fairest test of the models’ usefulness would have to involve an assessment of the actual inputs against the modeled outcomes from such, rather than of the different sets of guesses embodied in the scenarios.
Apologies if I’m just stating the obvious 🙂
Simon–
Obviously, the IPCC can’t be “projecting” since 1980. Most of that is hindcast. But if that were the only problem, I’d say we were quibbling over word choice.
I think 1980 is a fairer date for commenting on IPCC AR4 projections than, say, 1998, 1972, or even 1990; the reason being projections are stated relative to the baseline period starting in 1980. This can be important because if we compare to IPCC simulations, sometimes the choice of start year makes a bit of a difference. (That crucial “reject/fail to reject at 95%” sometimes hinges on the choice of start year! That’s why I try to think about which years are “best” to start an analysis.)
But… no. That I think 1980 is a better “start” year than, say, 1981-1999, doesn’t make the period from 1980-1999 a projection.
Precisely. That’s why you can’t just draw a straight line from 2000 to 2100 and then compare that trend to data since 1980. The IPCC doesn’t suggest that trend applies to data from 1980-now.
Fair enough. But, there are ways that are clearly wrong. Monckton seems to have devised one.
My approach is two pronged:
1) Compare and see if they match. Do this regardless of whether or not the forcings were projected correctly.
2) After doing 1, ponder the significance.
But we should be able to answer (1) without regard to our conclusions about the significance.
If an eruption like Pinatubo occurs, we should expect the projections that don’t include that to be off.
On the other hand, suppose the forcings are close to the projected values, Mars doesn’t attack, Pinatubo doesn’t erupt and so on, and the projections still don’t match. That’s a different story.
So we’ll see. Right now the projections don’t look good. Why are they off? Dunno.
Thanks for answering my questions! Someday I’ll have to dig into the math behind determining a standard error from the temperature data using the AR(1) assumption, but for now my intuition is much happier.
(at least, my intuition is happier regarding the methodology – I’m still a little surprised at the 90% rejection result, and will have to think more about whether I can think of any real statistical basis for that 2nd intuition)
The IPCC A1B warming scenario (linked by Simon Evans above) can be approximated with this formula:
Temp C Anomaly = 4.15 * LN(CO2) – 24.66
And you can extend this formula backwards or forwards to approximate greenhouse global warming theory for any time period.
It is just slightly higher than the formula GISS ModelE uses, which is 4.053 * LN(CO2) – 23.0.
It is just a coincidence that the A1B CO2 growth scenario produces a more-or-less straight line from 2000 to about 2080 or 2090.
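Since both quoted formulas have the form a*LN(CO2) + b, they’re easy to sanity-check in code. Both are the commenter’s empirical fits, not anything the IPCC publishes. Note the additive constants only set the baseline; the log coefficient fixes the implied warming per doubling of CO2, which is where the two fits really are only slightly apart:

```python
from math import log

def a1b_anomaly(co2_ppm):
    """The commenter's log fit to the AR4 A1B scenario above
    (an empirical approximation, not an IPCC formula)."""
    return 4.15 * log(co2_ppm) - 24.66

def giss_modelE_anomaly(co2_ppm):
    """The slightly lower fit attributed above to GISS ModelE."""
    return 4.053 * log(co2_ppm) - 23.0

# Differencing two CO2 levels cancels the additive constants, and a
# doubling isolates the log coefficient: warming per 2x CO2.
a1b_2x = a1b_anomaly(560) - a1b_anomaly(280)                   # 4.15*ln(2)
giss_2x = giss_modelE_anomaly(560) - giss_modelE_anomaly(280)  # 4.053*ln(2)
print(round(a1b_2x, 2), round(giss_2x, 2))  # prints: 2.88 2.81
```

So each fit implicitly assumes a climate sensitivity of roughly 2.8–2.9 C per CO2 doubling.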
@Simon Evans
(Re your comment 10227.)
I would think the years to come hold some interesting prospects for testing a number of the simplifying assumptions in the models and the “current paradigm”.
(Especially if we don’t get a volcanic event of Mt Pinatubo size/character.)
There might however be some problems in discerning the effects of reduced anthropogenic carbon emissions (due to the recession) from the current apparent lack of solar activity.
Cassanders
In Cod we trust
Yes, you will. In fact you should have done this even before your previous post about this. Steve Mc is constantly being criticised for commenting on scientists’ work without contacting them first, though in his case the criticism is unwarranted.
Why don’t you stop speculating and find out what the basis for his numbers is, by contacting him via the SPPI? Are you accusing him of fabricating numbers out of nowhere? When we know how he got these numbers from the IPCC report, then perhaps we can have an informed discussion. [And please don’t try to argue that he ought to be aware of your blog, like you did last time. How many climate blogs are there?]
PaulM–
I don’t think I need to ask SPPI. Monckton, a trained classics scholar, is issuing monthly reports that provide absolutely no explanation for his claim of what the IPCC projects. This alone is worthy of some remark.
Many people have read the report, we all know where the relevant tables are, and no one can figure out where his numbers come from. (Or, more precisely, no one can justify those numbers as fair representations of what the IPCC projects.) The numbers are contradicted by any fair reading of the IPCC report.
Monckton has been a politician, and posted in a public forum. As such, people will read and comment. Presumably, his reason for posting and disseminating the information is precisely to engage discussion. I’m discussing. His numbers look like hooey.
FWIW: Monckton is aware of my blog; he has commented here in the past. He spends a lot of time on climate issues.
Given his focus on this issue, and the fact that he is the Chief Policy Adviser for SPPI, I would be surprised if he doesn’t subscribe to a service to learn the reaction to his writings in the blogosphere or to sample arguments in the blogosphere at all. (If SPPI doesn’t subscribe to such a service, they should. Failure to do so will impede their ability to fulfill their mission.)
However, let’s assume SPPI doesn’t know to subscribe to such services. If you wish Monckton to explain the provenance of his mystery number, you can email him and ask.
For my part, I suspect someone already has. That said, I may be wrong.
It’s not my duty to ask Monckton, it’s yours. You are the one criticising him and indulging in speculation. If this bugs you, as it seems to, there is a simple way to sort it out. You said it yourself in your first sentence.
Do you really think he studies all the ill-informed comment about him in the blogosphere? He has better things to do with his time, and you are overestimating the importance of your blog.
Now you’re speculating again!
PaulM (Comment#10265)
I am inclined to criticise the opinions you have expressed here, Paul, but should I email you first just to check that up with you?
Btw, can you confirm that Monckton e-mailed the IPCC before he posted his concocted ‘graph’, which, by being labelled ‘IPCC’, is an explicit misrepresentation of anything the IPCC has presented? Of course, the IPCC may have better things to do than study the ill-informed commentaries of Monckton, so one would have to presume it to have been his duty to have contacted them before publishing his stuff….
Incidentally, if you want a clear example of Monckton being obviously ill-informed, just consider his statement “IPCC predicts”. Just about everyone knows that the IPCC has no predictions. If Monckton doesn’t even know the difference between a prediction and a projection he really should do some reading before publishing such ill-informed statements, don’t you think?
Paul–
It’s not anyone’s duty to ask Monckton. But you seem to want him to be asked, and if you wish to, the option is open to you.
Yes. Of course I’m speculating. The word “suspect” implies speculation.
As for this:
To answer the question: No. But he appears to be interested in the state of public opinion. So, if he is unaware of automated services to inform him, then he will not learn the current state.
I believe him to be an intelligent, energetic man who keeps up to date with current events, so I suspect he would have learned of these services and subscribed to one. Failing that, I would suspect SPPI would have done so.
Of course, I am speculating based on my (admittedly speculative) notions about his interests and level of energy and intelligence.
I don’t know why you think I am over-estimating the impact of my blog. I’m perfectly content if Monckton ignores it. Despite that, I feel free to comment.
However, you at least seem to think it’s sufficiently important for me to take proactive steps to nail down Monckton when he is vague. I suspect that you would not deem this important if you believed my blog had zero importance.
But who knows, maybe in the end, I will email Chris. I should warn you: I doubt you will like the outcome.
I think it may be time for a poll!
PaulM – Whether she is right or wrong, Lucia is free to speculate, suspect, and comment about this topic however she wishes on her blog, as long as she doesn’t slander anyone (which she has never come close to doing).
Simply reading the IPCC AR4 makes it quite clear that its projection for the time period illustrated on Monckton’s graphs, under all emissions scenarios, is about 0.2 C/decade.
Two relevant quotes are:
“Committed climate change (see Box TS.9) due to atmospheric composition in the year 2000 corresponds to a warming trend of about 0.1°C per decade over the next two decades, in the absence of large changes in volcanic or solar forcing. About twice as much warming (0.2°C per decade) would be expected if emissions were to fall within the range of the SRES marker scenarios. This result is insensitive to the choice among the SRES marker scenarios, none of which considered climate initiatives.”
p. 68, AR4, Technical Summary
“There is close agreement of globally averaged SAT multi-model mean warming for the early 21st century for concentrations derived from the three non-mitigated IPCC Special Report on Emission Scenarios (SRES: B1, A1B and A2) scenarios (including only anthropogenic forcing) run by the AOGCMs (warming averaged for 2011 to 2030 compared to 1980 to 1999 is between +0.64°C and +0.69°C, with a range of only 0.05°C). Thus, this warming rate is affected little by different scenario assumptions or different model sensitivities, and is consistent with that observed for the past few decades (see Chapter 3). Possible future variations in natural forcings (e.g., a large volcanic eruption) could change those values somewhat, but about half of the early 21st-century warming is committed in the sense that it would occur even if atmospheric concentrations were held fixed at year 2000 values.”
p. 749, Ch. 10, AR4
(Note that in the second quote, although they compare averages for 2011-2030 to 1980-1999, they basically anticipate a 0.64 to 0.69C increase over about a 30-year span, i.e., about 0.2C/decade.)
BTW, Simon, based on the above quotes, I believe that 0.2C/decade applies to 2001-2020, though one could reasonably argue that it applies to the entire time period of 2001-2030.
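As a quick arithmetic check on the note above (a hedged sketch: the 31-year figure comes from taking the midpoints of the two averaging windows, roughly 1989.5 and 2020.5):

```python
# Hedged sketch: back out the implied per-decade rate from the AR4 numbers
# quoted above. The midpoint of the 1980-1999 baseline is ~1989.5 and the
# midpoint of 2011-2030 is ~2020.5, so the quoted warming accrues over
# ~31 years (3.1 decades).
def implied_rate_per_decade(total_warming_c, years=31.0):
    """Convert a total warming (deg C) over `years` years into deg C/decade."""
    return total_warming_c / (years / 10.0)

low = implied_rate_per_decade(0.64)   # ~0.21 C/decade
high = implied_rate_per_decade(0.69)  # ~0.22 C/decade
```

Either endpoint lands right around the "about 0.2 C/decade" figure in the Technical Summary.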
Lucia, I see no reason for contacting Monckton. Unfortunately, when someone so readily seems to either misconstrue or misrepresent the IPCC projections, it does little to elevate their credibility, whatever the merits of their other ideas.
Bob–
I’m going to comply with the poll. I voted “Don’t give a hoot.” But if the poll tells me to email, I’ll email. I’ll also speculate about which category of people voted the way they did. (I wish I could have a two-question poll. Then I’d ask a second question and compute the correlation between what people wanted me to do and what they believe about AGW.)
Bob North (Comment#10276)
BTW, Simon, based on the above quotes, I believe that 0.2C/decade applies to 2001-2020, though one could reasonably argue that it applies to the entire time period of 2001 -2030.
Bob, point noted and I will mull it over :-). Tbh, I think I’d best describe my own view as being one of thinking it not too bright to keep pumping emissions into the atmosphere regardless of our ability to model effects accurately…
(I was also a ‘don’t give a hoot’ vote, btw – he gives himself plenty enough exposure IMV, and I’ve yet to see him respond with integrity to any criticism).
Re: Simon Evans (Comment#10266)
I confess to being confused by the prediction/projection distinction myself. As far as I can tell, a prediction is a definite statement of what is expected (“this will happen…”) while a projection is a conditional prediction (“if this occurs then that will happen…”). If the assumptions underlying a projection are met, then doesn’t it become a prediction?
— Ralph
Ralph–
I think if the assumptions are met, then the projection becomes a prediction.
But in some ways, unless major volcanoes erupt or Mars attacks, I think the quibble is a distinction with little difference. The method of projection, in its entirety, starts by predicting things like economic growth, changes in technologies used, etc. The output of those predictions is used as input to predictions about levels of CO2, other GHGs, aerosols, etc. Then the output of those is used as input to AOGCMs.
In the end, it’s all sort of a prediction. By tracking each portion of the method used to predict things we might be able to figure out how and why the predictions go wrong. But, it’s still sort of a prediction.
Ralph Becket (Comment#10287) February 12th, 2009 at 6:04 pm
Re: Simon Evans (Comment#10266)
I confess to being confused by the prediction/projection distinction myself. As far as I can tell, a prediction is a definite statement of what is expected (“this will happen…”) while a projection is a conditional prediction (“if this occurs then that will happen…”). If the assumptions underlying a projection are met, then doesn’t it become a prediction?
– Ralph
Yes, I agree with your description, Ralph. Given that the IPCC have been asked to offer guidance to policy makers rather than to make policy we can hardly expect them to predict what that policy might be. We might fail to undertake any significant mitigation of emissions, or we might mitigate vigorously – the prediction of which we will do is maybe in the special fields of sociologists, psychologists and economists (or maybe astrologers) rather than of the IPCC.
I do think that once the inputs are reasonably known (which is bound to be after the event, or at the time of the event) we can then assess the model projections just as we would assess any prediction. They will have been more or less useful.
IMV, Monckton et al. use the word ‘prediction’ because they are looking to assert a pass/fail ‘test’ of the science. I think the models can be assessed with regard to the changing inputs, but that calls for a rolling analysis.
We know that meteorological predictions are quite frequently obviously ‘wrong’ , and even when useful (as I think they are for most of the time) they often lack precision. That doesn’t lead us to question the basic science of meteorology (well, I don’t think it does!).
The uncertainties are considerable, I think – not only in terms of net cloud effect but in terms of current assessment of aerosols. Personally I don’t take too much comfort from uncertainty. Some, I think, seek to ‘disprove’ the science by means of revealing its uncertainty.
Anyway, to sum up, I think we need to do the best we can to assess the modeled relationship between inputs as they become known and projected outcomes. Which is not easily done!
I’ve reconstructed some of these charts, which may help visualize one of the problems with the IPCC – they keep changing the baseline starting points all the time.
I downloaded the A1B GHG growth scenarios by year out to 2100 (and added the actual CO2 from 1980 to 2000) which is here (I just use CO2 as a proxy for all the GHGs – there isn’t a lot of difference between using just CO2 versus using all the GHGs.)
http://data.giss.nasa.gov/modelforce/ghgases/GHGs.IPCC.A1B.txt
The baseline one chooses has an impact on what the charts look like.
It appears Monckton’s numbers are off some; his chart has temps rising to 1.23C or so by 2009, but using the IPCC AR4 A1B temp growth rates and the same starting point and baseline as Monckton, one only gets to about 0.83C by 2009.
And I have to give a major thumbs down to the IPCC AR4 for starting their baseline at just 0.2C in the year 2000, when the temp anomaly was already up to 0.5C or 0.6C in the HadCRUT temp series they were using to illustrate the temperature increase over the last 100 years.
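The kind of check described above can be sketched as straight-line growth from a common year-2000 starting point. To be clear, the 0.65 C starting anomaly and the flat 0.2 C/decade rate below are illustrative assumptions, not the actual A1B growth-rate table from the GISS file:

```python
# Illustrative sketch only: the 0.65 C year-2000 starting anomaly and the
# constant 0.2 C/decade rate are assumptions for illustration, not the
# year-by-year A1B scenario values.
def projected_anomaly_c(year, start_year=2000, start_anomaly_c=0.65,
                        rate_c_per_decade=0.2):
    """Linear projection of the temperature anomaly from a chosen baseline."""
    return start_anomaly_c + rate_c_per_decade * (year - start_year) / 10.0

value_2009 = projected_anomaly_c(2009)  # 0.83 C
```

With those assumptions, nine years of ~0.2 C/decade growth from 0.65 C lands at about 0.83 C by 2009, near the figure quoted above, and well short of 1.23 C.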
Bill Illis,
The stated reference for projections in the AR4 is relative to the average temperature over the twenty years 1980-1999 inclusive. The midpoint of that averaging period falls very near 1990, so with a projected rate of roughly 0.2C/decade, the projections are already at roughly 0.2C above the baseline by 2000.
The HadCrut temp series uses a different baseline. The anomaly method has “features”. One of them is that it hides the disagreement between the models’ projections for the mean earth temperature and the actual mean earth surface temperature. The other is that when comparing projections to earth temperatures, you must set everything to a common baseline. HadCrut and GISS were near the projected value for the anomaly in 2000 if you use 1980-1999 as the baseline. They are now low.
Other choices of baseline can make the anomalies very, very bad or not so bad. If we pick 1900-1999 as the baseline, the measured anomalies are currently quite low:
[Figure 2: Difference between observations and model mean (monthly). Click for larger.]
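The rebaselining step described above amounts to shifting a series so that its mean over a chosen reference period is zero. Here is a minimal sketch; the toy numbers are invented for illustration, not actual HadCRUT or GISS values:

```python
# Hedged sketch of rebaselining an anomaly series: subtract the series mean
# over a chosen reference window so that two datasets share a common zero
# point before being compared. Data values here are invented.
def rebaseline(years, anomalies, ref_start, ref_end):
    """Re-express anomalies relative to their mean over [ref_start, ref_end]."""
    ref = [a for y, a in zip(years, anomalies) if ref_start <= y <= ref_end]
    offset = sum(ref) / len(ref)
    return [a - offset for a in anomalies]

# Toy series on some original baseline, shifted to a 1980-1999-style baseline:
years = [1980, 1990, 2000, 2008]
anoms = [0.10, 0.25, 0.45, 0.40]
shifted = rebaseline(years, anoms, 1980, 1999)
```

After the shift, the mean over the reference window is zero by construction, which is why the choice of baseline (1980-1999 vs. 1900-1999, say) moves the whole observed curve up or down relative to the projections without changing its shape.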