What weather would falsify the current consensus on climate change?

In January, Roger Pielke Jr. asked

What behavior of the climate system could hypothetically be observed over the next 1, 5, 10 years that would be inconsistent with the current consensus on climate change?

Meanwhile, Gavin Schmidt explained that we really can’t look at short-term weather to assess models. While Gavin’s answer is perfectly correct, it falls short of answering Roger Jr.’s specific question: Roger would like to know if we can state, in advance, any sort of weather that is inconsistent with the ‘current consensus’ on climate change.

In fact, no matter how variable the weather, it is possible to answer Roger’s question, provided it is made a bit more precise. In my opinion, the plot below answers the question:

“What trend in GISS Land/Ocean temperatures over the next 5, 8 or 10 years would be inconsistent with the most recent IPCC projections of climate change?”

This question can be answered because it nails down a metric (GISS Land/Ocean) and specifies the “current consensus” with projections that are published and, so, knowable. The answer is a bit complicated, since the IPCC provided a range of projections, described the probabilities in somewhat vague terms, and gives different projections for short-term and long-term trends.

In my opinion, the short answer to the question is: if the weather is such that an ordinary least squares fit to GISS Land/Ocean data for the next decade shows any negative trend, this would be inconsistent with the IPCC’s short-term projection for temperature, which appears to be 2.0°C per century.

A trend of 2.0°C per century evidently represents the mid-point of model projections based on SRES scenarios. So, in some sense, falsifying a 2.0°C per century predicted trend would amount to falsifying the current GCMs’ prediction for the central tendency, while accounting for the range of uncertainty introduced by weather.
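In concrete terms, the test is just the sign of an ordinary least squares slope fitted to the coming decade of data. Here is a minimal sketch of that computation; the anomaly values are placeholders standing in for future GISS Land/Ocean data, not a prediction of any kind.

```python
# Minimal sketch: compute an OLS trend (in C per century) from a decade of
# annual anomalies and compare it to the falsification threshold discussed above.
# The anomaly values below are placeholders, not real or predicted GISS data.
import numpy as np

def ols_trend_per_century(years, anomalies):
    """Ordinary least squares slope, converted from deg C/year to deg C/century."""
    slope_per_year, _intercept = np.polyfit(years, anomalies, deg=1)
    return 100.0 * slope_per_year

# Hypothetical decade of annual anomalies (deg C), e.g. 2008-2017.
years = np.arange(2008, 2018)
anomalies = np.array([0.44, 0.58, 0.66, 0.55, 0.58, 0.62, 0.68, 0.75, 0.87, 0.74])

trend = ols_trend_per_century(years, anomalies)
threshold = 0.0  # per the post: any negative 10-year OLS trend is inconsistent with 2.0 C/century
print(f"OLS trend: {trend:.1f} C/century")
print("Inconsistent with 2.0 C/century" if trend < threshold else "Not inconsistent with 2.0 C/century")
```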

Details about IPCC spread.

In reality, the IPCC provides a range of predictions/projections for warming trends; the magnitude of the projected temperature trend varies according to both time frame and emissions scenario. The most commonly cited values are increases expected to occur during the 21st century: the full range, including the most extreme scenarios, is 1.1°C/century to 6.4°C/century. The most probable range is said to be 1.8°C/century to 4.0°C/century, with 2.0°C/century evidently representing the midpoint of SRES model projections.

After performing a statistical analysis, I plotted the trends in GISS Land/Ocean temperatures, calculated using ordinary least squares (OLS) on data collected over the next 5, 8 and 10 years, that I think would be inconsistent with the IPCC range of projections of climate change. That plot is shown below.

[Figure: GISS GMST test for IPCC]

In today’s blog post, I’ll explain how to read the graph. Some time soon, I’ll explain how I concocted the graph. I may also post a few other graphs and explanations of statistical tests next week.

How to read the graph.

First, assume we believe the “consensus” position is that global mean surface temperatures (GMST) will rise at a rate of 2.6°C per century, the midpoint of the “most probable range” suggested by the IPCC. Further, let’s assume we believe temperatures are already rising at 2.6°C per century. Then, of course, one would expect that the probable rate of rise for GMST over the next decade is about 2.6°C per century, right?

But, as Gavin correctly pointed out, due to weather noise (and volcanic eruptions and other unpredictable phenomena), even if the underlying trend due to AGW is 2.6°C/century, the trend in temperature averaged over a decade might be higher or lower than 2.6°C/century. But how much higher or lower?

This question can be answered using some statistical reasoning, and I’ll explain that later. For now, let’s assume my method is sound and read the answer off my graph.

Draw a vertical line up from “2.6°C/century” (the supposed “consensus” value), stop at the yellow line corresponding to a 10 year trend, and then read off “0.6°C/century” to find the lowest possible trend over 10 years that is ‘not inconsistent with’ 2.6°C/century being true.

So, according to my graph, a trend of 0.6°C/century obtained by ordinary least squares (OLS) using data collected during the upcoming 10 years is “not inconsistent” with a prediction of 2.6°C/century.

In contrast, if the measured trend over the upcoming 10 years is less than 0.6°C/century (that is, less than 0.06°C of total rise over the decade), then this will be inconsistent with an assumed trend of 2.6°C/century.

(For those wondering, a measured trend of 0.6°C per century measured over 10 years is also ‘not inconsistent with’ the real trend being 0°C/century. Weather being what it is, all sorts of trends are ‘not inconsistent with’ a variety of hypotheses.)

Other values read from the chart are also provided below.

Table 1: Minimum temperature increases that are “not inconsistent” with a particular projected temperature trend.
Projected rise        Minimum OLS trend over period (°C/century)
(°C/century)          5 years      8 years      10 years
1.1                   -6.3         -1.9         -0.9
1.8                   -5.6         -1.2         -0.2
2.0                   -5.4         -1.0          0.0
2.6                   -4.8         -0.4          0.6
4.0                   -3.4          1.0          2.0
6.4                   -1.0          3.4          4.4

For any given predicted or projected rate of temperature increase shown in the left-hand column, you can read off the minimum rate of temperature increase, calculated using ordinary least squares over a 5, 8 or 10 year period, that is ‘not inconsistent’ with that projection. Measured rates below the appropriate value to the right are inconsistent with the projected value.
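For readers who want a feel for where numbers like these could come from, the sketch below shows one generic way to construct a “minimum not-inconsistent” trend: a one-tailed test in which the threshold is the projected trend minus a critical value times the standard error of an OLS trend fitted to white weather noise. This is not the calculation used to build Table 1 (that method is explained later); the 0.1°C annual noise level and the 5% significance level are assumptions for illustration only.

```python
# Illustrative sketch only: one generic way to get a "minimum not-inconsistent"
# trend is a one-tailed test, threshold = projected trend - z * SE(OLS trend),
# with SE computed for white weather noise of an assumed standard deviation.
import numpy as np
from scipy import stats

def min_consistent_trend(projected_c_per_century, n_years, noise_sd_c=0.1, alpha=0.05):
    """Projected trend minus a one-tailed margin on the OLS trend (C/century)."""
    t = np.arange(n_years, dtype=float)                   # one annual mean per year
    se_slope_c_per_year = noise_sd_c / np.sqrt(np.sum((t - t.mean()) ** 2))
    margin = stats.norm.ppf(1.0 - alpha) * 100.0 * se_slope_c_per_year
    return projected_c_per_century - margin

for proj in (1.1, 1.8, 2.0, 2.6, 4.0, 6.4):
    thresholds = [round(min_consistent_trend(proj, n), 1) for n in (5, 8, 10)]
    print(proj, thresholds)
```

Under these simplified assumptions the margins come out narrower than those in Table 1, which is consistent with an analysis that allows for additional sources of variability.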

Oh, by the way: my analysis includes variability due to volcanic eruptions, and I’ve made choices that make it difficult to falsify. I’ll explain how I did this later. Few people want to read math unless they are interested in understanding a result, but I strongly suspect some readers will want to understand what I did so they can figure out if there are any analysis choices they would dispute.

So, is there any weather that could falsify IPCC projections?

Yep, in principle, some sorts of weather could falsify IPCC projections.

Unfortunately, it’s not very easy. Weather, even when averaged over 10 years, is so highly variable that the range of weather that is ‘not inconsistent’ with even 10 years’ worth of projections is quite large.

In particular, if the trend over the next 10 years falls below “0°C/century” of warming, this will be inconsistent with 2°C/century of warming, which appears to be the midpoint of model predictions for the next two decades. (See Wikipedia; look for discussion of “SRES” projections.)

It is a bit ironic, however, that the break-even point for ‘falsifying’ the IPCC claim happens to correspond precisely to no warming. This was, as it happens, a complete coincidence, but there you go!

44 thoughts on “What weather would falsify the current consensus on climate change?”

  1. Interesting analysis.

    So to be clear — if there is no warming 2001-2007, then if this continues through 2010 then model predictions of IPCC AR4 (2.0 degrees/century) can be said to be falsified?

  2. Wrong number of years, but otherwise yes. If the ordinary least squares fit for 2008-2017 has a slope of ‘0 C/century’, that contradicts the IPCC AR4 2C/century central tendency. I look at future values only, so I start in 2008.

    If this happens, those who really don’t want to give up a theory could say the IPCC wasn’t wrong because their documents permit 1.1C/century, but the higher values are out.

    If we try the OLS sooner, running 2008-2015, then the trend needs to come in below -1C/century.

    I’ll be posting how I calculated this when I can. Likely tomorrow or Wednesday.

    The explanation of how it’s calculated will clarify things, and if anyone reads my blog, the discussion might help people understand where some assumptions come in. But the method I’m using is the one I’d use when planning an experiment. There is always a little wiggle room after the experiment because the numbers might not match a few assumptions, but these should be good.

  3. OK, thanks. But do note that for AR4 the “future” started in 2000 (which is when they started their predictions).

    These seem to be very small numbers, as a 1.0 C/century decrease is equivalent to a decrease of 0.1 C in a decade, which seems to be a pretty small value. Looking forward to the details. Thanks!!

  4. Roger —
    Yes. I’m aware that, when the IPCC wrote their document in 2007, the “future” began in 2000. That’s obviously one of the truly bizarre things the IPCC does with regard to calling things “predictions” or “projections”.

    Yes, in 2007, when they were “predicting” the temperatures from 2000-2020, the IPCC already knew the temperatures for 2000-2006 and part of 2007. So, of course, they were “right on” with those.

    The math to figure out what would falsify can be done accounting for the already-known data at the time the projection was made, but it’s stupid and tortured to do that.

  5. Roger–
    I know.

    The IPCC cheats by calling data that’s already in “predictions”. If we all put blinders on and pretend their 2007 document “predicted” temperatures in 2000-2006, then, of course, it starts to become next to impossible to falsify.

    A validation of 2000-2010 against data would involve using 70% data that were already incorporated into the “prediction”. One could, hypothetically, come up with numbers that would, hypothetically, falsify this “prediction”, but the weather that would have to occur in the last quarter of 2007, 2008, and 2009 is so statistically improbable that it can’t happen. Weather would need to cool very dramatically, by an amount so far out of the normal weather variability bounds that it is statistically nearly impossible. (Even counting this cold January.)

    I don’t think I’m the only one who believes that post-validations should be limited to data collected after projections were made. Gavin did that when evaluating Hansen’s scenarios A, B and C. (I have differences of opinion about some things he did, but he did start the evaluation with data that came in after the model was “frozen” and computations were underway.)

  6. Lucia, Nice post. I am looking forward to hearing your thinking on the construction of the graphs for falsifying the IPCC predictions. Question: why can’t the last ten years from the present be used? We know the IPCC have their fingers on the scale by including known data in predictions, but that is to their advantage. If, nevertheless, there is zero trend for the last 10 years (which BTW there is), then why is that not falsification, despite the cheating? Alternatively, are there graphical lines that provide the same result as the ones you have shown, for the present back, rather than for the present forward?
    Cheers

  7. David–
    You have to be careful here. You can’t easily do statistics with endpoints; you need to do fits.

    Here is a fit to the past ten years’ data:
    [Figure: Trend over last 10 years]

    If you do an ordinary least squares, the trend is positive; in fact the slope obtained this way is 1.8C/century.

    What you’ve probably read others claim is that this trend is not statistically significant.

    That’s true: if you limit yourself to only 10 years, and there is quite a bit of scatter, a measured trend of 1.8C/century is not statistically significantly different from zero. But that’s not what I’m saying in my table. I’m saying that to falsify the 2.0 C/century claim, the trend itself coming out of the OLS has to be less than zero. It’s an odd coincidence that the break-even point happens to be zero, but it is. (There’s a small illustrative sketch at the end of this comment.)

    Now, for why we need to test the accuracy of IPCC projections using data collected after their projections: this needs to be done to place the temptation for cherry picking beyond anyone’s reach. By saying we will stick with data starting after the IPCC published their report, neither you, nor I, nor the IPCC, nor anyone can cherry pick.

    I don’t know what 2008 is going to end up as. The IPCC certainly didn’t know in 2007. And 2008 is the first full year after their report was issued. This is the year to pick.
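    Here is a minimal sketch of the distinction between a positive OLS slope and a statistically significant one; the anomaly values are placeholders standing in for ten years of annual data, and the plain OLS standard error (which ignores autocorrelation) is used purely for illustration.

```python
# Sketch only: a positive OLS slope can still be statistically indistinguishable
# from zero.  The anomaly values are placeholders, not the real GISS series.
import numpy as np
from scipy import stats

years = np.arange(1998, 2008)
anoms = np.array([0.63, 0.42, 0.42, 0.54, 0.63, 0.62, 0.54, 0.68, 0.64, 0.66])  # placeholder

fit = stats.linregress(years, anoms)
t_crit = stats.t.ppf(0.975, len(years) - 2)   # two-sided 95% critical value, 8 dof
print(f"OLS trend: {100 * fit.slope:.1f} C/century +/- {100 * t_crit * fit.stderr:.1f}")
print(f"p-value against a zero slope: {fit.pvalue:.2f}")
```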

  8. Lucia, thanks, great blog BTW, as a focus on verification statistics is a very rich seam!

    I recently looked at the trend on the monthly temperatures 10 years back using January 2008 HadCRUT data. The trend was almost zero.

    Anyway, based on your table, there is a predicted trend that is falsified for every observed trend.

    So the actual dataset and trend is not the issue for the moment. That was an aside. My question is about the constraint you are putting on that the new data must be from the present on.

    I could imagine that each of these falsification scenarios have a different answer/graph.
    1. Ten years on from the present.
    2. Ten years back from the present.
    3. Any ten years I get to cherry pick.

    I don’t see how you can rule out 2 and 3 and admit only 1. While I could imagine scenario 1 as being the strictest test, scenario 2 as requiring a greater observed trend, and scenario 3 as requiring the greatest observed trend, I would think there is some quantitative relationship between them.

    For example, I could imagine an analysis that says that this particular 10-year set of observations has a 1% chance of coming from that model prediction over that time period, even when the model covered the period in question.

  9. David,
    One problem with statistical reasoning is that for these numbers to really “work”, there should only be one experiment. If you start giving yourself choices (2)-(3), then all the uncertainty intervals need to be shifted. That’s why we get so many arguments: no one can decide in advance which experiment is “right”.

    In some sense, you can do 2-3 for something, but even if the IPCC says they are ‘predicting’ the past, they really aren’t. It’s the past. That’s why I exclude it.

    Here’s why I don’t pick 2 or 3:
    What you would be showing by applying your (2) to 2001-2010 is that the IPCC ‘predictions’ don’t even describe the past. This would certainly cause people to lose a lot of confidence in the IPCC, but the IPCC could say they always expected the rate to pick up, and that it was lower in the early portion. Then where are you? The IPCC isn’t falsified because they say you started too soon. (Although, I think a reading of their documents indicates they do expect the 2C/century in 2001.)

    So, in this case, while the IPCC ‘predictions’ failing this test in 2001-2010 would carry some weight, it doesn’t quite falsify.

    On (3)

    You can’t just pick any set of years you like and apply this criterion. The statistics change if you get to pick whichever string you want. That’s because if you had 20 possible runs of years, we would actually expect one of those runs to have a low slope by pure random chance. Obviously, you pick the low one for your case. The other guy picks the high one for his case and says it looks fine. And then where are you?

    Now for (4)
    If you always look at the most recent 10 years, you might at least be able to claim you see “the top”.

    The difficulty with this strategy is that, if the real trend is 2C/century, you’ll “see the top” once every 20 years, but these will happen in strings. So, you’ll claim you are right for a few years, and with confidence, but you’ll be wrong.

    The problem is, of course, that you are doing the equivalent of flipping coins over and over until you get the string you like so you can “prove” something about the coin. (The simulation sketch at the end of this comment shows how much that inflates the odds.)

    Still, at least if you say you’ll always look at 10 years to find confirmation of “tops”, at any given time, you have only 1 experiment!

    If it’s not a top… well… In a few years, you’ll be shown wrong. So, you will have erred.

    As for using other data sets: Yes. Of course you can use Hadcrut. The standard deviations might be a bit different and this would shift things a little. I picked GISS because to do the calculation, I need a data set.

    If Hadcrut and GISS results differ, well, we have a situation where one waits a year.
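    Here is a small simulation of the cherry-picking effect: with an assumed true trend of 2C/century plus white ‘weather’ noise of 0.1C, a single pre-specified 10-year window rarely shows a negative OLS slope, but the lowest slope among many candidate windows does so far more often. The noise model and numbers are assumptions for illustration only, not the values behind Table 1.

```python
# Illustrative simulation of the cherry-picking problem: a fixed window versus
# the worst of many overlapping windows.  White annual noise is an assumption.
import numpy as np

rng = np.random.default_rng(0)
true_trend = 0.02          # deg C per year, i.e. 2 C/century (assumed)
noise_sd = 0.1             # deg C, assumed annual weather noise
n_years, window = 60, 10

def window_slopes(series):
    """OLS slope (C/century) of every 10-year window in one simulated series."""
    t = np.arange(window, dtype=float)
    slopes = []
    for start in range(len(series) - window + 1):
        slope, _ = np.polyfit(t, series[start:start + window], 1)
        slopes.append(100.0 * slope)
    return np.array(slopes)

n_sims = 2000
neg_fixed = neg_picked = 0
for _ in range(n_sims):
    series = true_trend * np.arange(n_years) + rng.normal(0.0, noise_sd, n_years)
    slopes = window_slopes(series)
    neg_fixed += slopes[0] < 0       # a single window chosen in advance
    neg_picked += slopes.min() < 0   # the worst of all candidate windows

print(f"P(negative slope | window fixed in advance) ~ {neg_fixed / n_sims:.2f}")
print(f"P(negative slope | worst window picked)     ~ {neg_picked / n_sims:.2f}")
```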

  10. Hmmm… David, I looked at the HadCrut data. Temperatures have been flattish since 2001, haven’t they?

    Yeah, my normal inclination is to only look at data after predictions are made. But, if the IPCC’s basis is 2001-2010 for their claim, things aren’t looking good for their prediction, even accounting for the fact that 7 years were already in!

  11. Lucia,

    I read somewhere that HadCRUT is the surface record that the IPCC uses in its projections, so any falsification of the IPCC predictions should be done with that record.

    GISS is typically higher than the other datasets because it includes estimates for the poles. However, these estimates are pretty dodgy given the poor coverage in the polar regions.

    After reading your analysis I went looking to see if some of the more absurd predictions could already be declared falsified and found yet another twist that makes the warmer claims completely unfalsifiable: tipping points. According to warmer logic the earth will reach tipping points if the temperature reaches a certain level which will cause a rapid acceleration of warming. This means that they can always claim that catastrophe is ahead no matter what the temperatures do in the short term as long as there is some warming.

  12. Brushing up on Popperesque terminology, there are distinctions between unfalsifiable hypotheses (e.g. pseudo-sciences), the strategy of using an auxiliary hypothesis (a hypothesis other than the test hypothesis which is assumed to be true and is needed to derive the test implication), and an ad hoc hypothesis (an auxiliary hypothesis introduced for the sole purpose of saving a test hypothesis threatened by adverse evidence).

    I think ideas such as ‘tipping points’ and ‘heat in the pipeline’ could serve as ad hoc hypotheses, and that GW is falsifiable via data such as slightly increasing temperatures with increasing CO2. However, it is hard to see how they could be used to defend the theory against stable or declining temperatures. Tipping points are more of an auxiliary hypothesis to motivate the impacts side of AGW, i.e. to counter the argument that even if temps increase it will be at a rate we can adapt to.

    I think that AGW proponents can’t afford to let AGW theory be seen as unfalsifiable, but they will use every strategy to minimize ‘falsification risk’, such as using existing data to verify models, producing regular reports without evaluating past predictions, constantly modifying the hypothesis, ‘moving on’, etc. To governments, the IPCC endeavour could be seen as a form of avoiding ‘falsification risk’ by transferring the risk of failure to an external authority.

  13. @Raven– Obviously, no one can do a statistical test to falsify a prediction that is not quantified in the first place. Maybe tipping points exist. Or not.

    As an empirical matter, the only way we’ll ever know they exist is if we reach them. The only way to falsify the claim is if someone a) tells us when the point will be reached, b) tells us how to recognize the tipping point and c) tells us what will happen afterwards. In that case, we could falsify the “when, how and what”. But otherwise… well, the tipping point may exist, but there is no way to disprove it with empirical data.

    @David– I’ll be having a look. I’m actually rather new to the whole AGW game, so I don’t know what’s in all the documents. But, I do like to be a bit careful and not basically trick myself into cherry picking.

    Part of the reason my answers can be a bit hesitant is I’m not all that familiar with the reams of data. My blog is something like a month and a half old. I started my open thread to hear the various pro/con arguments, and then I started blogging.

    Anyway, it struck me that Roger’s question was one that can perfectly well be answered. The general question is common; to answer it one has to make it more specific. But still, every experimentalist tries to figure out how they would use data before doing an experiment. If you don’t think about how data would be used to perform a hypothesis test, how would you figure out how much data to take? Or which specific data? You don’t want to take tons of data and miss one thing that you really needed. And how do you figure out a sampling rate, etc.?

  14. lucia,

    Very interesting post. Another potentially useful way to look at this would be to find the range of underlying temperature trends that is permissible given the observed OLS trend. Basically, read your graph backwards. Start with the observed trend and find the range of permissible underlying trends.

    Using your example above, if the observed trend is 0.6C/century for 10 years then the maximum underlying trend is 2.6C/century.

    Assuming your algorithm is symmetric, the “minimum rise” could be easily augmented with a “maximum rise”. Assuming symmetry and using your example again, an observed trend of 0.6C/century for 10 years is compatible with an underlying trend between -1.4C/century and 2.6C/century.

    Used in this way, the results could falsify claims in both directions.

  15. John V–

    This is a one tailed test, so the error bars aren’t symmetric. My reasoning is: If the trend is higher than claimed by IPCC, no one calls them “falsified”.
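    As a small follow-on to the earlier threshold sketch, the snippet below compares the margin a one-tailed test needs with the margin a symmetric two-tailed test would need, using the same simplified white-noise standard error; the assumptions are again purely illustrative.

```python
# One-tailed vs two-tailed margins on a 10-year OLS trend, using the same
# simplified white-noise standard error (0.1 C annual noise) as before.
import numpy as np
from scipy import stats

t = np.arange(10, dtype=float)
se_trend = 100.0 * 0.1 / np.sqrt(np.sum((t - t.mean()) ** 2))   # C/century

one_tailed = stats.norm.ppf(0.95) * se_trend
two_tailed = stats.norm.ppf(0.975) * se_trend
print(f"one-tailed margin: {one_tailed:.1f} C/century, two-tailed: {two_tailed:.1f} C/century")
```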

  16. lucia,
    I think it’s interesting to attempt to falsify in both directions. Claims that warming has stopped could be falsified with the other side of the test. That adds a little spice to the analysis.

  17. John V–
    Sure. And if, in 3 years, there is enough data to provide one falsification, you can be sure arguments for the two-tailed test will kick in! 🙂

    Right now, there is not enough data to achieve any sort of falsification. I’ll probably post the test you’re asking for, but even before that, I plan to discuss β error, a topic that will surely make all readers’ eyes glaze over!

  18. Lucia, to be clear: radiative physics would not be disconfirmed. (Don’t go CO2 loony on me.)

  19. Steven–
    The radiative physics would not be disconfirmed. Also, rates of temperature increases less than 2.0C/century would not be disconfirmed.

    As I see it, the arguments aren’t over whether or not AGW is possible at all; the questions are: What’s the rate? Are the IPCC estimates on the high side? The low side? Or about right?

  20. Lucia,

    Nice discussion. The problem of course is that the simple GMST buries a lot of information about an extremely complex system. A far better test would be to have the modelers publish global temperature maps for the next five to ten years in advance with error bars. RSS and UAH would then independently compare the predictions with the satellite measurements of temperature.

    David, 831. The only test that matters is (1), it’s a genuine prediction. The other two can be fudged by a model with enough tunable parameters.

  21. Paul– Sure. But there is no mechanism to force scientists to publish anything in particular. Currently, the main metric used by the IPCC, modelers and pretty much everyone prognosticating is GMST. So, really, that’s what has to be used.

    For the most part, if AGW is such that the trend is greater than 2C/century (or less), this fact would be more-or-less reflected in all possible metrics. These things will be at least somewhat positively correlated. You might confirm or falsify a signal faster with one or the other, but they’ll all tend to move the same way.

  22. Lucia,

    if AGW is such that the trend is greater than 2C/century (or less), this fact would be more-or-less reflected in all possible metrics.

    That’s not obvious to me. The entire increase could be due to a few hot spots like Waldo that Steve McIntyre keeps searching for in the temperature records. Some of the models produce unphysical results; freezing temperatures along the equator is one I’m aware of.

  23. Paul Lindsey-
    Fair enough. But when validating projections, I focus on measured temperatures for the real earth, not model predictions. Also, statistically, we get much faster falsification or validation with global measures like GMST.

    In principle, if we look at a local temperature, like my back yard near Chicago, we will eventually see validation or falsification, but it takes much longer because the variability of any local measure is larger.

    There is a limit where, if the whole world increases 6C, you aren’t likely to see many spots getting colder.

  24. Lucia, In your comment 827, you say that ‘A validation of 2000-2010 against data would involve using 70% data that were already incorporated into the “prediction”’.

    With respect, I don’t think that this is so. The IPCC “prediction” in question is based on simulations which make use of the projections of future emissions specified in the Panel’s Special Report on Emissions Scenarios (SRES). This Report was approved and published by the IPCC in 2000. As the Panel decided at its Plenary Meeting at Vienna in November 2003 that ‘the SRES scenarios provide a credible and sound set of projections, appropriate for use in the AR4’, NO post-2000 data have been incorporated into the ‘prediction’ that you are seeking to test. It is therefore valid to use post-2000 data to test the ‘prediction’ for 2000-2010 in AR4.

    As there has been a good deal of misunderstanding of the matter at issue, I’ll key in here the text of a letter sent to the Chair of the IPCC, Dr. Pachauri, by Dr. John Mitchell on 30 October 2002. Dr. Mitchell was writing in his capacity as Chair of a Group which included representatives from all of the major modelling groups, and his letter was tabled at the 28th Session of the IPCC Bureau in Geneva in December 2002.

    “The World Meteorological Organisation JSC/CLIVAR Working Group on Coupled Models (WGCM), which includes representatives from almost all the major climate modelling centres contributing to the Third Assessment Report, recently considered what work needed to be done to ensure that the best modelling advice will be available for the next IPCC Assessment, due to report in 2007. For each emission scenario, it is necessary to run an ensemble of simulations to define the uncertainty due to natural variability, and to do this with as many models as possible to define the range of uncertainty in modelling the earth system. These uncertainties mean that there is little scientific justification in running new scenarios since the resulting climate change outcome is unlikely to be indistinguishable (sic) from existing scenarios with similar radiative forcing. Hence the WGCM unanimously urge IPCC to retain the current SRES scenarios without change, to make sure a sufficient number of model runs are available for the next assessment;

    – to define the uncertainty range associated with current scenarios; and,
    – to ensure that these simulations are available in time in a wide range of impact studies in the 4AR.

    “We appreciate that small changes in the emission scenarios may require large economic and social changes, and that the effect of the social and economic changes could be assessed in time for the next report. However, unless the accompanying changes in radiative forcing are likely to produce detectable changes in climate, we believe that [it] is better not to try and run new model experiments, but to stick to the scenarios used in the TAR. This will allow a better definition of the range of uncertainty in projected changes due to model uncertainty and natural variability, which are likely to dwarf any difference due to tweaking the existing emissions scenarios. This we believe will provide the best scientific basis for the next IPCC Assessment. Please feel free to contact me if you need clarification or further information.”

    Thus the evidence suggests that the IPCC retained the SRES scenarios without change in order to allow “a better definition of the range of uncertainty in projected changes due to model uncertainty and natural variability” – i.e., in order to facilitate the type of analysis you are undertaking. The fact that the IPCC authors “knew” of changes in estimated observed temperatures after 2000 is irrelevant, because these data were not incorporated in the projections. I think that your comments 826 and 827 reflect some misunderstanding of the way in which the temperature ‘predictions’ in AR4 were produced.


  25. Lucia,

    A last comment and then we’ll just agree to disagree. There are an infinite number of wrong temperature maps that will produce a given trend but there’s only one map that matches the physical climate. I think that maps would quickly sort out which, if any, of the models are correct. My bet is none of them.

    I looked up Lisle, IL. Do you work at Argonne or FermiLab? I spent time at both in the long ago.

  26. I agree with Ian. Foolish as it may seem, I don’t think they included 21st century data.
    ====================================================

  27. Paul–
    I agree there are lots of maps. I guess if you had one that was somehow “the official map”, we could do the work to falsify or validate against the GISS full earth temperatures. But… it’s a lot more work for a blogger just doing this as a hobby.

    Still, it appears that bloggers doing things as a hobby can at least talk about HOW it could be done.

    I do, indeed, work for ANL. But part time, which gives me lots of time for hobby blogging.

  28. @Paul–
    Ok. Then, if these predictions were made in 2000, then 2001 is a fair start date to validate. That’s convenient, because we don’t have to wait 10 years to start!

  29. And so, what kind of temperatures would it take in the next three to four years to keep their prediction from being invalidated? I know I’m on thin ice here, and my knowledge base is leaden rather than buoyant.
    ===============================================================

  30. Kim,
    This means the answer I already got is unchanged.

    In contrast, if they’d said solar variations did still matter, and the sun got ‘stuck’ in a maunder, they could attribute a flat spot to the sun.

  31. Fascinating stuff. It could mean that the debate might be going on for quite a while longer, but I guess better decadal prediction will tie it down before then.

    The range of 2100 projections is based on scenarios with different changes in emissions through the century. All scenarios have similar projected emissions in the near term, and therefore the midrange projected rise up to 2030 is about 2.6C/century. Therefore it would be valid to take the 0.6C/century target over the next 10 years rather than the zero-warming target.

  32. Steve,
    Realistically, some aspect of the debate will go on forever. Even after we all agree on the science, there is still the question of what policy would best solve any problems.

    The 2.0 is in the guide for policy makers. So, that’s why I picked it. Otherwise, I would also have picked 2.6 C/century.

  33. lucia, I’m not sure Gavin is right about weather being noisy.

    Of course, at any given location weather is noisy, but we are talking about the aggregate weather across the entire planet. How noisy is that?

    Given that weather is just heat and moisture moving across the Earth’s surface, I don’t see how there can be much noise in it (over the entire planet).

    Most of the ‘noise’ will result from measurement issues, i.e. we can’t accurately measure the temperature across the entire planet.

    And I’d add that if the noise isn’t due to measurement issues, then the Earth’s atmospheric heat content is fluctuating up and down (ignoring seasonality), which would seem to invalidate the IPCC’s forcings model, or perhaps show there are significant forcings not recognized by the IPCC.

  34. we are talking about the aggregate weather across the entire planet. How noisy is that?

    How noisy? It depends what you call “noise”. Weather is variable, or “fluctuates”, or what have you. Gavin likes the term “weather noise” to describe variability. I’m not sure I’d use that precise word, since the weather is the weather. Still, when posting on a blog and trying to describe the difference between climate and weather, “noise” is as good a word as any.

    How big is the noise in GMST? Counting measurement and weather “noise” together, the residuals to a straight-line fit to annual average GMST appear to be ±0.1C. That’s my estimate of the noise! 🙂

    Weather, as measured by temperature, is a bit noisy, even averaged over the surface of the planet and over a year. After all, there is heat in the ocean, etc. Given the equations that govern weather, it should be noisy; the only question is how noisy, and on what time and spatial scales.

    Fluctuations in surface temperatures don’t falsify the IPCC. We expect temperature to fluctuate around a “mean” value even when constant forcing is applied. We see the same thing in engineered systems involving transport of mass, momentum and energy. It just happens.
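    The ±0.1C figure above is just the standard deviation of residuals about a straight-line fit. A minimal sketch of that computation, using synthetic placeholder anomalies rather than the actual GISS series:

```python
# Sketch: estimate "weather noise" as the residual standard deviation about a
# straight-line fit to annual anomalies.  Placeholder synthetic data only.
import numpy as np

years = np.arange(1980, 2008)
# Placeholder annual anomalies: a modest trend plus ~0.1 C of noise, standing in for real data.
anoms = 0.017 * (years - 1980) + np.random.default_rng(1).normal(0.0, 0.1, years.size)

coeffs = np.polyfit(years, anoms, 1)
residuals = anoms - np.polyval(coeffs, years)
print(f"residual standard deviation: {residuals.std(ddof=2):.2f} C")
```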

  35. David, two things. Firstly, we are talking about GISS here (technically unfair; at least HadCrut would make sense, as it’s used by the IPCC, as far as I’m aware), which has plenty of positive trendiness in the last ten years. (Although a quick look at the monthly anomalies from Jan 1998-Jan 2008 in the RSS lower troposphere data reveals a very slight negative trend. And yes, I did actually do the linear regression for this, but I already knew, since I had created a similar chart the month before with basically zero trend.) The second thing is that the point is to have about twenty years, I think, of basically trendless data (no? I actually think not, because then the trend in the next ten years would have to be strongly negative to overcome the positive trend in the last ten years of GISS).

    Interesting to know what would be inconsistent with the most dire predictions, but I should add that a positive ten year trend wouldn’t necessarily prove the catastrophic point. Pat Michaels tends to argue that warming will probably be exactly on the long term linear trend line, which wouldn’t be much at all.

  36. Hi Lucia

    The maths here is way out of my league, but my question is: if the IPCC current predictions for the mid rate of temp growth were applied to the last 5, 8 and 10 years for which data have been available, would the IPCC forecast have been falsified?

  37. Mark R:

    if the IPCC current predictions for the mid rate of temp growth were applied to the last 5, 8 and 10 years for which data have been available, would the IPCC forecast have been falsified?

    No, no and no.

    With respect to the 5 years, that is just too short an amount of time.

  38. Having now read Gavin’s post, he basically asserts there is weather noise at the global scale and then goes on to discuss sampling issues, which for me is just a subset of measurement issues.

    Anyway, let’s assume Gavin is right and aggregate global weather is noisy/chaotic and we do have sampling problems, then it would appear to be a simple problem (simple theoretically, not necessarily logistically). Take truly random samples of temperature over the earth’s surface.

    My knowledge of statistics is fairly basic, but I don’t believe it would take a large number of random temperature measurements to get a measure of global temperature with a high statistical confidence. This would at least answer the question – How noisy is global aggregate weather?

    Or am I missing something here?

    BTW, I agree with you. The only significant source of (aggregate) weather noise (i.e. not external forcings) I can think of is variations in ocean/atmosphere heat exchange (and perhaps water exchange).

  39. Phil_B.
    Theoretically, the issue of getting random samples is easy. But, for climate science, as a practical matter, it’s a pain in the neck.

    If this were a lab experiment, you just wait a long enough time between data samples to ensure each is uncorrelated with the previous sample. The problem is, with the earth’s climate, you need to wait about 3 years to space samples in a way that ensures they are independent.

    In a lab experiment studying pipe flow, you might wait 0.1 seconds.

    So, you can see the problem.
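    A rough sketch of how to put a number on that spacing: estimate the lag-1 autocorrelation of a monthly anomaly series and convert it to an AR(1) e-folding decorrelation time. The synthetic series and its 0.97 persistence coefficient are assumptions chosen only to illustrate the computation, not an estimate from real data.

```python
# Sketch: lag-1 autocorrelation and AR(1) decorrelation time for a synthetic
# monthly anomaly series.  The persistence coefficient is an assumption.
import numpy as np

rng = np.random.default_rng(2)
phi = 0.97                      # assumed month-to-month persistence (illustrative)
x = np.zeros(1200)              # 100 years of synthetic monthly anomalies
for i in range(1, x.size):
    x[i] = phi * x[i - 1] + rng.normal(0.0, 0.05)

xc = x - x.mean()
r1 = np.dot(xc[:-1], xc[1:]) / np.dot(xc, xc)   # lag-1 autocorrelation estimate
tau_months = -1.0 / np.log(r1)                  # AR(1) e-folding decorrelation time
print(f"lag-1 autocorrelation ~ {r1:.2f}; decorrelation time ~ {tau_months / 12:.1f} years")
```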
