Specific Question about VS.

I’m not going to embark on tracing the whole discussion back. I came across a VS comment made on March 23 which strikes me as important. It’s quoted below, after my questions.

VS does a test that checks whether the trend in the data from 1900-1935 is real. He finds that trend is not statistically significant. Thereafter, he appears to assume the trend is zero for all future analyses. (I may be mistaken– it’s difficult to follow in comments.)

My question is: Does anyone know whether VS reported the statistical power of his “fail to reject” of the zero trend in the comment quoted below? If he reported it, I’d love to read that comment.

Also, if someone could point me to literature specifically describing the “ADF” (Augmented Dickey-Fuller) test, and information on computing the significance level of the coefficients, I’d love that. (The reason is I’d like to compute the power of the “fail to reject” of the trend. After all, the multi-model mean exhibits a trend during that period, and I want to know whether I would reject something that might be called the “consensus expectation of the trend” if it were used as the null.)
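
For anyone who wants to follow along at home, the adf.test function in the R package tseries runs the Augmented Dickey-Fuller test. Here is a minimal sketch; the file and column names are placeholders, not anyone’s actual data handling:

library(tseries)
giss <- read.table("giss_annual.txt", header = TRUE)   # hypothetical file name
temp <- ts(giss$anomaly, start = 1880)                 # hypothetical column name
adf.test(temp)         # H0: the series contains a unit root
adf.test(diff(temp))   # the same test applied to the first differences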


The comment that prompts my question

——————-

PLAYING THE ARIMA GAME

——————-

The point that I was trying to make in my previous couple of comments, is that the probability arrived at by Zorita, Stocker and von Storch (2008) is not very informative.

Allow me to elaborate.

Zorita et al (2008) assumed the temperatures to be a stationary process, an assumption which, as I mentioned here, is not supported by observations.

How should we proceed then? Well, let’s construct a very simple and naive specification by ‘listening’ to the data.

——————-

SPECIFYING THE NAIVE ARIMA MODEL

——————-

Well, first of all, we found here and here, that the temperature series in fact contain a unit root. The calculations of Zorita et al (2008), when applying the Whittle method, in fact independently confirm this (observed) non-stationarity.

We will therefore model the first difference series, which is stationary (again, see test results).

Since the ADF test equation employed three autoregressive (AR) lags in first differences (see test results), we try out that specification. We simply model the (first difference) series as:

D(GISS_all(t)) = constant + AR1*D(GISS_all(t-1)) + AR2*D(GISS_all(t-2)) + AR3*D(GISS_all(t-3)) + error(t)

The estimation results are given here, coef (p-value):

————–

Constant: 0.006186 (0.1302)
AR1: -0.452591 (0.0000)
AR2: -0.383512 (0.0000)
AR3: -0.322789 (0.0003)

N=124
R2=0.23

We furthermore test the errors for normality via the Jarque-Bera test:

JB, p-value (H0: disturbances are normal): 0.403229
Conclusion: normality of disturbances not rejected

————–

Note that the constant term is statistically insignificant (the AR terms are significant at a 1% level). Again, we let our test results guide us, and ‘reject’ the presence of a constant term in the simulation equation. (Actually, we ‘fail to reject the non-presence’, I elaborated on statistical hypothesis testing here.)

We reestimate the model, now without constant:
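
For anyone who wants to try a fit of this general form themselves, here is a rough R sketch. The variable names are placeholders, and I have not checked that it reproduces VS’s numbers:

dT  <- diff(temp)                     # temp = annual GISS anomaly series, as above
fit <- arima(dT, order = c(3, 0, 0))  # AR(3) on the first differences, i.e. ARIMA(3,1,0) with a constant
fit
fit0 <- arima(dT, order = c(3, 0, 0), include.mean = FALSE)   # re-estimate without the constant
fit0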

194 thoughts on “Specific Question about VS.”

  1. I agree about the requirement for some demonstration of statistical power. I read, from VS:
    “the temperature series in fact contain a unit root”
    and he references his ADF test which fails to reject the null hypothesis of a unit root. This gliding from “fails to reject” to “in fact contains” seems to be common in this kind of argument.

    I’m still looking for some statement of level of confidence in these I(1), I(2) claims that are bandied around.

  2. At the risk of appearing to be stupid, (well, maybe there’s not much risk of that), what is a unit root and why is it important in this context?

  3. Lucia, perhaps I’m getting this wrong, but you seem to be trying to understand VS’s use of statistics here as though he’d invented these procedures himself? Whereas I understand he is implementing recognised and standard econometric procedures? The initial question therefore might be: is VS’s implementation orthodox?

  4. Gary–
    The Dickey-Fuller test is a standard test. Asking about the statistical power when interpreting a “fail to reject” is also standard, or should be.

    I want to know whether they reported statistical power on that particular bit.

  5. I tried last night to slog through all the (very long) thread where VS lays out his analysis, though I must admit that I skimmed the obvious ad-homs that littered the thread.
    .
    While my statistical training is limited, my take on the thread was that VS confirmed (or at least appeared to me to confirm) that the temperature history is I(1), with references to several published reports that appear to reach the same conclusion. VS then shows (or at least appears to show) that, based on I(1) and the instrumental temperature history up to 1935, the post-1935 temperature history lies completely within the 95% confidence limits calculated from the pre-1935 data.
    .
    FWIW, it seemed to me solidly reasoned and demonstrated.

  6. Tamino has what seemed to me a very readable post on the unit root problem and the statistical tests for unit roots like Dickey-Fuller, Augmented Dickey-Fuller and Phillips-Perron. I first saw comments from VS at WUWT on the Beenstock and Reingewertz paper thread. As for the ADF test, the power decreases as the number of lags increases, and also depends on whether a linear trend is included or not. So for a test with possibly low statistical power, failure to reject is certainly not proof of existence, especially with noisy data. Then there’s the problem of what inability to cointegrate means. Does it mean that there is no linear correlation between the variables? Or does it mean that a linear correlation might be spurious, or must be spurious? For CO2 and temperature, the short time scale relationship isn’t expected to be linear. Time constants are involved.

  7. DeWitt Payne:

    For CO2 and temperature, the short time scale relationship isn’t expected to be linear. Time constants are involved

    Plus, it is the sum of forcings, CO2, sulfates, solar, volcanic, etc., that is supposed to correlate, not just CO2.

  8. Re: Carrick (Mar 25 21:17),

    I think I’ll crank up the two box model software and run the unit root tests on the output of the model. If I only use one time constant, I think it will be something like lucia’s model.

    There’s also the question of whether a near unit root process is equivalent to a process with an exact unit root. Temperature probably has at least one near unit root as it seems to have long term persistence, but is bounded.

  9. If a time series is I(1), then doesn’t that mean that the series is increasing?

    But, if the constant is zero, as seems to be the case here, then does integrating over the series do anything? I guess my real question is this: is an ARIMA(3,1,0) series with a constant of 0 significantly increasing or not?

  10. Why Z,

    I’m not sure that ‘increasing’ is the right term.
    I(0) simply means the series is stationary about some level; I(1) means you have to take the first difference to get something stationary; I(2), the second difference, and so on.
    Where ‘difference’ means subtracting the previous series value from each element in the series – the delta at each step, x(t) - x(t-1). (A quick R sketch of this follows after the link below.)

    See
    http://landshape.org/enm/orders-of-integration/
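
    A quick R illustration of what ‘difference’ means here (toy numbers only):

    x <- c(0.1, 0.3, 0.2, 0.5, 0.4)    # a toy series
    diff(x)                            # first differences: 0.2 -0.1 0.3 -0.1
    diff(x, differences = 2)           # second differences, as you would use for an I(2) series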

  11. Dewitt.

    There’s also the question of whether a near unit root process is equivalent to a process with an exact unit root.

    Wikipedia says the ADF test has low power when the system has a “near unit root”. I’m also wondering about the fact that the forcing is non-linear. That means if there is noise and you average over loads of cases, the average of all trends will be non-linear. The ADF as described at Wikipedia only allows a linear trend with time.

    Someone sent me the link to the 2,500-word comment. I think that answers the issue of power and the null hypothesis, because VS did tests both with “no unit root” as the null hypothesis and with “unit root” as the null.

  12. Those two have some points
    Zorita and DeWitt Payne.
    Time, sum of forcings.
    ==============

  13. On a related note, I’d be interested in hearing the believers’ response to two posts Luboš made.
    .
    The first is on the similarity between temperatures at different time scales. He claims that climate trends over periods shorter than millennia are noise.
    http://motls.blogspot.com/2010/03/self-similarity-of-temperature-graphs.html
    .
    The second is the effect of 1 and 2 sigma observations, like those in climate science, on the stability of a science. He claims that such work can reduce a science to rubbish (his word) in less than 10 years.
    http://motls.blogspot.com/2010/03/proliferation-of-wrong-papers-at-95.html

  14. MJ,

    RE: CO2
    It can get very confusing with all the claims, counterclaims, side issues and uncle tom cobley and all.

    VS has been talking about temperature only, with asides that B & R and their paper will be covered in the future.

    David Stockwell produced the graphics and explanation here –

    http://landshape.org/enm/orders-of-integration/

    that shows the CO2 I(2) derivation, and also has some other posts confirming/investigating B&R elsewhere on the site

  15. [quote Chuckles (Comment#39313) March 26th, 2010 at 6:49 am]
    David Stockwell produced the graphics and explanation here –
    http://landshape.org/enm/orders-of-integration/
    [/quote]

    Thanks Chuckles.
    .
    I have to admit I found Stockwell’s post totally confusing, even though I know what he’s trying to say about I(1) and I(2).
    .
    His data doesn’t look at all like the data I downloaded for Mauna Loa. So I’m left guessing on how he interpreted the data to come up with his numbers.
    .
    The best I’ve been able to do is guess that his CO2 value represents a percentage increase, not a parts per million (ppm) increase, and he’s using his percentage increase to calculate I(1) and I(2).
    .
    But even with that, it’s not clear why his I(1) and I(2) calculations look so different from mine.
    .
    It would be nice if he labeled his charts.

  16. Lucia

    I’m still mystified by what it is you are questioning here? Apologies in advance as my stats is godawful, but I think you are misunderstanding VS’s point?

    [quote]VS does a test that checks whether the trend in the data from 1900-1935 is real. He finds that trend is not statistically significant. Thereafter, he appears to assume the trend is zero for all future analyses.[/quote]

    I think you have this wrong: I don’t think he is doing a check on the trend in the data. I think he is attempting to “naively” model the data as ARIMA; he does a first guesstimate and finds a possible constant of 0.006186, but decides he can do without this as it is not significant. He then models the data without the constant and again finds it a good match for the actual 1880-1935 data.

    From this point he is trying to determine whether data at the end of the 20C, beginning 21C is inconsistent with his 1880-1935 modelled ARIMA data, and finds that it isn’t.

    I don’t think he has rejected a “trend” in this post; he rejected a trend way before this, in countless posts where he demonstrated that numerous tests fail to reject the presence of a unit root; therefore GMST does contain a unit root; therefore application of OLS and similar statistical techniques is invalid.

  17. MJ,

    That’s an odd one. Cumulative annual increase in co2 from a start date baseline perhaps? It’s referred to as the increasing level of co2 in annual steps….

  18. [quote Chuckles (Comment#39321) March 26th, 2010 at 8:09 am]
    MJ,
    That’s an odd one. Cumulative annual increase in co2 from a start date baseline perhaps? It’s referred to as the increasing level of co2 in annual steps….
    [/quote]

    I’m guessing it’s some sort of cumulative total too, but it’s not ppm. Annual change in CO2 is ~1 ppm, but his chart shows a fraction of that.

  19. Gary,
    I get his point.

    …but decides he can do without this as it is not significant.

    If he does not know the statistical power of the test, the “decision to do this” can be disputed in some fields, because it means that all further analyses based on the assumption that the null is true may have a high likelihood of being based on a mistake.

    What happens in frequentist statistical analysis is this:
    1)You pick a null. VS picked m=0.
    2) You assume the null is true (in this case, m=0) and then test. If, under the assumption m=0, the data fall outside the range that encompasses 95% of outcomes, you reject m=0. Otherwise, you fail to reject. (This assumes you picked 95% as the critical cutoff. That dictates your Type I error.)

    Many people stop here for a variety of reasons, and you, others and VS in comments over at Bart’s keep explaining that 1 & 2 have been done, so VS proceeds applying the null as if it’s true. (In this case, that’s m=0.) People then often proceed using the null for further analysis. This is considered ‘respectable’, particularly if we had quite a bit of confidence in the null before we started the analysis.

    The difficulty is that, to some extent, whether or not m=0 is a major conclusion in VS’s analysis. It’s not just a minor issue like “are the residuals normal” or “what’s the AR1 coefficient”.

    So here’s the thing: you don’t have to stop at 2.

    If you like, before launching off on analyses that presuppose m=0 (because you assumed that in the first place and didn’t find statistically significant evidence it was wrong), you can do further tests to evaluate your confidence that m=0 really, really, really is true. You can ask about statistical power. Statistical power is (1 - Type II error). Type II error is the rate at which you accept m=0 as true when it’s false.

    Statistical power tests are a pain in the neck to perform. So they are often skipped (particularly for assumptions like normality of residuals etc.). One of the reasons they are a pain in the neck is that to perform them, you need to state a second, alternate hypothesis (that is, one other than the H1 used in the test for significance). For example, if you knew a large number of people believed H2: m=0.005 C/century, you could use that as an alternate hypothesis. Then, you do an analysis to discover the rate at which you would “fail to reject m=0 C/century” under the assumption that m=0.005 C/century is true.

    This rate is the “Type II” error. Depending on the statistical test, the properties of the data etc., and assuming the significance level you chose for the first test was α=5%, you will find a Type II error rate ranging from β=95% to 5%.

    Suppose you find β<5% under the assumption H2: m=0.005 C/century. Then you will have shown that you are only accepting the “fail to reject” of the theory you “like” (i.e. H0: m=0) in a situation where, if H2 were actually true, you would get “fail to reject” in fewer than 1 in 20 instances.

    In contrast, if it turns out that β>50%, you are going ahead with the theory you “like” even though, if the alternate theory were the true one, you would fail to detect that more than 50% of the time.

    In the case of VS’s argument, I think (but am not sure) that some of his evidence for I(1) etc. is based on the assumption that m=0. But the demonstration of m=0 is only a demonstration that m≠0 is not statistically significant – and the power of that test is not stated.

    Afterwards, at least some findings of I(1) are based on the assumption that m=0. But the difficulty is that if the statistical power associated with accepting m=0 is low, one can dispute the finding that I(1) really is true. It might be that the finding is the result of using a mistaken notion that m=0.

    Mind you, VS still would have a respectable case that his error bands for temperature are more appropriate. The reason is that a) m≠0 cannot be shown with statistical significance and b) if we use m=0, which we cannot disprove given the evidence we have, we get larger uncertainty intervals. So, that’s our uncertainty.

    I could perfectly well accept that. But it still leaves me curious about how confident we are that temperature really “is” I(1), rather than just thinking it looks that way because we don’t have enough data to tell otherwise and we’ve set up the nulls to treat as false certain things that phenomenology suggests are true. Specifically, we have set up the nulls to favor m=0, when phenomenology suggests m>0.
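
    To make that concrete, here is a rough Monte Carlo sketch in R of how one could estimate the Type II error. The AR(1) noise model and the alternate trend value here are my own illustrative assumptions, not anything VS used:

    set.seed(42)
    n     <- 56                          # roughly the length of the 1880-1935 annual record
    tt    <- 1:n
    m_alt <- 0.005                       # assumed "true" trend under H2 (illustrative)
    fail_to_reject <- replicate(2000, {
      y <- m_alt * tt + arima.sim(list(ar = 0.5), n = n, sd = 0.1)   # trend + AR(1) noise
      summary(lm(y ~ tt))$coefficients[2, 4] > 0.05                  # TRUE = fail to reject m=0 at 5%
    })
    beta <- mean(fail_to_reject)         # estimated Type II error rate
    1 - beta                             # estimated power against H2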

  20. Re: magicjava (Mar 26 08:15),

    Has he done a log transform on the Mauna Loa CO2 data? CO2 forcing to a first approximation is: F(t)= A*ln(CO2(t)/CO2(to)). IIRC, the IPCC value for A is 5.3. The PP (Phillips-Perron) test in R package tseries rejects a unit root after the first difference for the log transformed data. PP is supposed to be more powerful than ADF.
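
    Something like this is what I mean, sketched in R (co2_annual is assumed to be the annual-mean Mauna Loa series, however you load it):

    library(tseries)
    lco2 <- log(co2_annual)   # log-transformed CO2
    pp.test(diff(lco2))       # Phillips-Perron on the first difference of log(CO2)
    adf.test(diff(lco2))      # ADF on the same series, for comparison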

  21. Lucia, at the least one would need to explore what assumptions were made in the statistical methods, whether any of these assumptions are violated by the data, and then if any were (which is highly likely, IMO), and then construct toy model examples (Monte Carlos) to test the robustness of the statistical methods against failure of any of these assumptions.

    For example, I’m worried about what happens in a driven system where the forcing term contains e.g. a 1/f noise component over some pass band, meaning that there is a band-start frequency below which the noise spectrum does not continue to increase.

    Many models tend to “redden” the noise that is driving them (a statement that their transfer function has a low-pass characteristic, probably true with climate… it certainly will not respond to “very high” frequency forcings).

    Since 1/f^2 noise is an exemplar of a nonstationary process (which, as I understand it, is what many of VS’s methods are testing for), if you tested for stationarity of the climate system, you should get a failure, unless you integrate for a long enough period of time that you observe the band cut-off frequency.

    And even then, the question would be “how long do you need to integrate over before your statistical methods can distinguish this case (short-term quasi-nonstationary) from a truly nonstationary process?”

    Here’s the thing though, if you knew the “quasi-nonstationary forcings”, you could pull the variance associated with these out of the global mean temperature series, just as you already have the MEI.

    Related question: Does the MEI look like it has a nearly unit root? Over short enough time scales (which could be decades) I bet it does.
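
    A little R sketch of that “nearly unit root” worry, using a strongly persistent but stationary AR(1) (the numbers are purely illustrative):

    library(tseries)
    set.seed(1)
    x <- arima.sim(list(ar = 0.98), n = 2000)   # stationary, but with a near unit root
    adf.test(x[1:130])   # ~130 "years" of data: typically fails to reject a unit root
    adf.test(x)          # the full 2000 points: usually rejects the unit root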

  22. [quote DeWitt Payne (Comment#39325) March 26th, 2010 at 8:32 am]
    Re: magicjava (Mar 26 08:15),
    Has he done a log transform on the Mauna Loa CO2 data? CO2 forcing to a first approximation is: F(t)= A*ln(CO2(t)/CO2(to)). IIRC, the IPCC value for A is 5.3. The PP (Phillips-Perron) test in R package tseries rejects a unit root after the first difference for the log transformed data. PP is supposed to be more powerful than ADF.
    [/quote]

    .
    It could be. Unfortunately, I just don’t have the time to track such things down. It’d be nice if he said how he modified the data. I’m not saying his modifications are wrong, just that I don’t know what the modifications are.
    .
    Anyway, I found an error in my calculations. To get I(1) and I(2), I was just taking the difference between one entry and the previous entry. That’s wrong. I needed to take the difference between the integral of one entry and the integral of the previous entry to get I(1).
    .
    When doing that, CO2 is indeed I(2). My apologies for the mistake.

  23. All of this makes me wonder why we are having this discussion 25 years into the global warming debate.
    .
    Isn’t this day 1 type of stuff? Shouldn’t all of this have been figured out one way or the other before anyone made claims about the relationship between CO2 and temperature?

  24. My take on what VS claims:

    1) GISS Temp series contain a unit root (at least under his assumptions of the underlying trend, eg CO2 concentrations or forcings only, linear trend, or two linear trends with a structural break. Notably he has *not* used estimates of net climate forcing or GCM output, which I think would be much more useful).

    2) Therefore OLS is invalid

    3) He produced two models based on the 1880-1935 temps. One is stochastic, the other deterministic (linear trend). Observations towards the end of the timeseries (~2000) fall outside of the CI for the latter, and inside for the former. Neither is surprising: He showed that a deterministic model that nobody believes to be realistic (extrapolating the linear trend from 1880-1935) fails. And that the data remain within the bounds of a stochastic model that possesses hardly any predicting skill (i.e. almost anything within physically reasonable bounds goes). No surprises there. I think the conclusion that indeed temperatures have been stochastic is unwarranted. Esp in light of the physical constraints I’ve repeatedly pointed out, and which he has repeatedly ignored (http://ourchangingclimate.wordpress.com/2010/03/18/the-relevance-of-rooting-for-a-unit-root/)

    I would have a question though: If OLS is not valid, how would you best estimate the trend? My understanding is that OLS (e.g. applied to 1975-2009 as I did) still gives a half decent estimate of the trend, although it gives an underestimate of the error around the trend. Is that correct?

  25. Magicjava–
    Claims must precede the tests. When tests are done assumptions are made so people always want to know the assumptions, decide if they think the assumptions are realistic and if the truth of assumptions are debatable, they want to know how sensitive the outcome of the test is to the assumption.

    So, it’s totally normal for claims about the relation between CO2 and temperature to be made prior to people having performed every possible statistical test. It’s also normal for people like VS to come up with new ones and for others to ask him details about what he has done.

    It’s relatively new for this to happen at blogs. There are some confusing aspects at blogs– but it’s really no more confusing than poster sessions, private conversations at conferences etc. It’s just more public.

  26. Lucia,
    .
    I can understand your point if we’re talking about scientists chatting amongst themselves, or even in a series of papers.
    .
    But not when the information is being presented to the public as a fact when no one has even done the work to see if the two series have a valid correlation.

  27. Bart

    I would have a question though: If OLS is not valid, how would you best estimate the trend? My understanding is that OLS (e.g. applied to 1975-2009 as I did) still gives a half decent estimate of the trend, although it gives an underestimate of the error around the trend. Is that correct?

    Well, if you are wrong, that’s something I couldn’t say.

    I think OLS gives the BLUE estimate (Best Linear Unbiased Estimate) of the trend if the deterministic trend generating the series really is linear, if the noise is ARMA(p,q) of any sort and if you have correctly specified all the relevant variables. If you leave out a variable that actually matters, your trend will be biased. That is: if the temperature is really a function T=f(time, ‘population of leprechauns’), your fit should include the population of leprechauns. Otherwise, if your sample exhibits correlation between time and ‘population of leprechauns’, your temporal trend will be biased because the math will attribute all changes to time. Of course this bias arises whether or not you know that leprechauns matter.

    Under the assumptions above, uncertainty intervals computed assuming white noise will be wrong if the system is ARMA. Usually, they will be too small. (I’m not sure they are always too small for all 0<p+q. Someone else probably does know. ) Oddly, if the true functional form for the ‘deterministic trend’ is nonlinear, but you assumed it was linear, the uncertainty intervals you compute will be too large relative to what you would get if you ran repeat samples with the same functional form of T with time, and same type of noise.

    I don’t know what the effect of a unit root would do. Seems to me OLS should still be unbiased. After all, even if a system wanders around with no tendency to return to a preferred location, it could have equally well wandered “up” or “down”. That said, I don’t actually know. Someone else would have to tell you whether the trend is BLUE when there is a unit root.
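
    If anyone wants to see the “uncertainty intervals too small” point in action, here is a toy Monte Carlo sketch in R. The trend value, noise model and amplitudes are all made up for illustration:

    set.seed(7)
    n  <- 130
    tt <- 1:n
    slopes <- replicate(2000, {
      y <- 0.007 * tt + arima.sim(list(ar = 0.6), n = n, sd = 0.1)   # linear trend + AR(1) noise
      coef(lm(y ~ tt))[2]
    })
    sd(slopes)                               # actual sampling spread of the OLS slope
    y <- 0.007 * tt + arima.sim(list(ar = 0.6), n = n, sd = 0.1)
    summary(lm(y ~ tt))$coefficients[2, 2]   # naive white-noise standard error; typically smaller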

  28. Re: magicjava (Mar 26 09:44),

    But not when the information is being presented to the public as a fact when no one has even done the work to see if the two series have a valid correlation.

    To some extent, the IPCC only tries to report what scientists are chatting about most and seem to find most persuasive right now. That is, to some extent, the state of the science. That’s just the way it is.

    When journalists interview individual scientists, the scientist generally expresses his own notions, which can range from trying to give his interpretation of what “most” other scientists find convincing to explaining what he himself finds convincing. That depends on the scientist and also on what the journalist asks.

    So, if the public is interested in the state of the science, we will read what the current claims are, and which ones any particular scientist or group of scientists thinks appear best supported by the evidence. That’s just what happens.

  29. magicjava:

    Isn’t this day 1 type of stuff? Shouldn’t all of this been figured out one way or the other before anyone made claims between CO2 and temperature?

    We’re still in the territory where global mean temperature is not solely caused by CO2 forcing, nor is the relationship multiplicative to the extent it exists, nor is it linear.

    At the least you need to test the order of integration of the sum of all forcings.

    Which it turns out is I(1).

  30. The whole test is more than a bit goofy from my perspective, because you are treating a primarily deterministic signal like CO2 as if it were noise, and then trying to draw conclusions from the fact that the signal is not stationary.

    (CO2 produced by man is very much a consequence of economic decisions we make. In that sense there is a cause and effect relationship.)

    Obviously there are problems for predictivity of future CO2 forcings, which does impact the predictivity of GCMs, because we can’t predict future economic decisions very well. News flash: we didn’t need a statistical test to tell us this.

    Further we can in practice quantify the uncertainty in future CO2 production (whether this has been done well is another question).

    Order of integration and near unit roots have nothing to do with anything here, IMO.

  31. Carrick

    (CO2 produced by man is very much a consequence of economic decisions we make. In that sense there is a cause and effect relationship.)

    I think to some extent, economists treat this as noise. Different people make different choices, the government makes different choices, etc. It’s sort of like “statistical homo-economicus dynamics”.

    Those from physics backgrounds treat CO2 as “it is what it is”, and then predict the effect on temperatures.

  32. Lucia:

    I think to some extent, economists treat this as noise. Different people make different choices, the government makes different choices, etc. It’s sort of like “statistical homo-economicus dynamics”.

    Except here CO2 atmospheric levels are measurable, and in principle you can compute the impact of changes in CO2 levels on global mean temperature.

    A lot of the tricks economists use are related to them not having an underlying physical model. Not having an underlying model limits what you can do.

    Since we have such a model, it doesn’t make sense to ignore the information this tells us.

  33. Carrick

    Since we have such a model, it doesn’t make sense to ignore the information this tells us.

    I agree. That’s why VS is getting so many questions he probably considers nutty. I suspect it also colors his answer to some of them.

    For example: If we had no physical model suggesting the trend is positive (0 < m) …

  34. Lucia:

    Did you show this formally? Or just eyeball from the graph in the later comment?

    I just eyeballed it. If somebody wants to do further, it’d be interesting to see the results.

    I can’t imagine anybody looking at that series and having alarm bells going off though.

  35. Carrick–
    I agree with you.

    Since the I(?) of ghg’s relates to the co-integration paper, your image suggests that computing the I(?) of CO2 is rather irrelevant, since we expect the time series for temperature to be affected by volcanic eruptions too. Even if CO2 matters more to the long term trend, ‘noise’ in the temperature series is affected by the volcanic aerosols. Everyone accepts this. So, a paper that shows that CO2 by itself can’t explain temperature doesn’t test a theory anyone really believes.

  36. Is there a physical model/mechanism to go with the VS notion that it is more about the rate of CO2 increase than the amount of CO2 at any given time? I understand that this is a mathematical application, but what would explain that behavior if the math is right?

  37. Re: George Tobin (Mar 26 11:19),
    None that I can think of. I think such a result would be spurious.

    If CO2 matters and it’s the only thing that matters, then what should matter is the overburden of CO2. So, there should be some “quasi-steady” temperature for a given amount of CO2. If the world is warmer than that quasi steady temperature, it will tend to cool. If it’s cooler, it will tend to warm.

    Depending on the time step between data, the manner of CO2 addition and the time constant(s) of the earth’s climate, you could probably get all manner of results. If you are permitted sufficient flexibility, you could gin up a toy model and all sorts of forcings to get all sorts of results– provided you focus only on transients.
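
    For example, a bare-bones version of such a toy model in R (one box, one time constant, every parameter value made up for illustration):

    years <- 1880:2009
    co2   <- seq(290, 390, length.out = length(years))   # made-up smooth CO2 path, ppm
    f     <- 5.35 * log(co2 / co2[1])                    # simplified CO2-only forcing, W/m^2
    lam   <- 0.8                                         # made-up sensitivity, K per W/m^2
    tau   <- 10                                          # made-up time constant, years
    temp_toy <- numeric(length(years))
    for (i in 2:length(years)) {
      # relax toward the quasi-steady temperature lam * f at rate 1/tau
      temp_toy[i] <- temp_toy[i - 1] + (lam * f[i] - temp_toy[i - 1]) / tau
    }
    plot(years, temp_toy, type = "l")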

  38. Carrick (Comment#39343)
    …(CO2 produced by man is very much a consequence of economic decisions we make. In that sense there is a cause and effect relationship.)…

    and
    Carrick (Comment#39349)
    Except here CO2 atmospheric levels are measurable, and in principle you can compute the impact of changes in CO2 levels on global mean temperature.

    Hi Carrick – I’m surprised to see you make what seem such strong statements? From memory I think fossil fuel CO2 contribution to the atmosphere calculated from global consumption figures is a single figure ppm increment. Given the questions DeWitt, tonyb and others have raised over the dynamic nature of the atmospheric CO2 cycle, whilst I agree there is a cause and effect relation between human activity and the CO2 we produce, if we add in land use, biomass and agriculture I think there are still some significant uncertainties about our net contribution. Even if our net contribution were clearly known there still seems to be a lot to learn about the driving factors in the actual atmospheric CO2 concentration. As far as being able to calculate effects on global mean temperature from changes in measured CO2 levels, again I think we are a long way from this. Apologies if I have misunderstood but please can you clarify? Thanks

    curious, we measure the amount of CO2 in the atmosphere at any given time. That’s what drives the forcing from a climate perspective. I agree predicting how much CO2 goes into the atmosphere is more complicated, but measuring the amount that actually does is pretty well understood, IMO.

    Secondly, yes, there is much to learn about direct and indirect forcings of CO2, but my point is it is learnable in principle (and maybe eventually even in practice), and to the extent we “get it wrong,” we still can account for it using an analysis of variance with the appropriate front end (e.g., maybe something as simple as the 2-box model DeWitt mentioned).

    The econometrics viewpoint seems to be that we can’t control for CO2 and have to treat it as an unknown “noise-like” driving.

    I’m simply suggesting that’s a flawed approach when you have a method for dealing with the source of the variability and its impact on the measure of interest.

  40. lucia (Comment#39324)
    I think I understand your argument. Does it not, however, cut both ways? If VS has to demonstrate that m=0 for his approach to be valid, isn’t there a similar requirement (e.g. m≠0 or m=??? or m>???) for an OLS to be valid? Wouldn’t proponents of OLS have to demonstrate that they meet some requirement before proceeding with OLS? Demonstrating that VS is wrong does not make OLS right. Proponents of AGW have been criticized for taking the position that since they can’t explain temperature rise any other way it must be (by default) CO2. I guess I am looking for a positive reason to use OLS rather than it being a default position. Although I read (not the same as understand) most of VS’s comments I rely on Lucia for an objective explanation of what it means.

  41. PMH

    I think I understand your argument. Does it not, however, cut both ways? If VS has to demonstrate that m=0 for his approach to be valid, isn’t there a similar requirement (e.g. m≠0 or m=??? or m>???) for an OLS to be valid?

    I think what you want to suggest is that they need to demonstrate there is no inappropriate pesky unit root. To answer your questions: yes, a) they need to either demonstrate it, or b) admit that they can’t and acknowledge it would change their results. In case of (b), they would also need to explain why they think the inappropriate unit roots do not exist. Reviewers would then decide if they buy that argument.

    My guess is that reviewers with backgrounds in physics would buy the argument that we don’t expect a unit root. Those in econometrics might not.

    Demonstrating that VS is wrong does not make OLS right.

    I haven’t demonstrated VS is wrong. I haven’t even tried to demonstrate he is wrong. I’m asking a question about the statistical power of the test that he used to justify continuing on with the assumption m=0.

    Because my orientation is physical (fluid dynamics) not economics based, the question of power is the sort of question I ask when I want to gauge how seriously I should take an analysis that appears likely incorrect based on my understanding of physical principles. Because I think the physics points to 0<m, I am more accepting of analyses that don’t assume m=0 even if that assumption “failed to reject”. On the other hand, if m=0 fails to reject with high statistical power relative to values of “m” I expect to be true, I will consider the tests based on m=0 meaningful.

    So far, VS’s answers to my statistical power questions appear to be a) he didn’t do it and b) he doesn’t think it’s important. My impression is that if I sprang from econometrics, I would tend to think it unimportant too. But I don’t spring from econometrics; I spring from engineering. I wouldn’t even plan a lab experiment if someone showed me the statistical power of my results was going to be low!

  42. PHM–
    I should add that in their heart, everyone is a Bayesian. It colors what questions they ask. So, since I do expect the trend to be positive (based on phenomenology) I want to know the statistical power with which m=0 “fails to reject” before I fully accept someone using “fail to reject” as a reason to use m=0 in future analyses.

    Someone who really thought m=0 is absolutely, totally, completely the appropriate “null” – meaning a hypothesis one should absolutely accept until data says otherwise – would think the question about power is ridiculous.

    So, for example, I will happily use the assumption residuals are normal without requiring amazing proofs of their normality before proceeding.

  43. lucia,

    I’ve thought about your tank model and decided I like the electronic circuit approach better. A unit root process is a pure integrator. Put a perfect capacitor between the output of an op-amp and the negative input and ground the positive input (I think, it’s been a while). Then any charge that flows into the negative input is collected and held in the capacitor and shows up as a voltage at the output. An electronic integrator is inherently unstable. The slightest offset will drive the output to one limit or the other. White noise with a mean exactly zero will produce a random walk output.

    But a tank with a drain is a leaky integrator with a finite resistance in parallel with the capacitor. It cannot be a unit root process because for any impulse input of a finite charge, the response will not be a step function but a spike with an exponential decay back to zero. So the integration order is not an integer but a decimal fraction x between zero (zero resistance or a voltage follower circuit) and one (infinite resistance). A leaky integrator will produce a signal from any input. I haven’t worked this part through, but I’m betting that the signal will always be of integration order x.

    I think that’s also how you convert a change in CO2, which amounts to changing the value of the resistor, to a forcing or a change in the input signal level.
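
    In R terms the difference between the two circuits is just the feedback coefficient; a rough sketch with made-up numbers:

    library(tseries)
    set.seed(3)
    e     <- rnorm(500)
    pure  <- cumsum(e)                                    # pure integrator: exact unit root
    leaky <- stats::filter(e, 0.9, method = "recursive")  # leaky integrator: AR(1) with phi = 0.9
    adf.test(pure)    # should fail to reject the unit root
    adf.test(leaky)   # with this much data it usually rejects; short records often do not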

  44. Re: DeWitt Payne (Mar 26 13:21),

    I think that’s also how you convert a change in CO2, which amounts to changing the value of the resistor, to a forcing or a change in the input signal level

    .
    No. But forcings provided by groups like GISS are presented as changes in the input signal, not as changes in the resistance due to changes in GHG’s.

    Since forcings are presented that way, it’s a consistent level of weirdness between the way forcings are provided to the public and a simple lumped parameter type model I could make.

  45. Re: lucia (Mar 26 13:25),

    If you have a constant input current of 1 mA and a resistance of 1,000 ohms, the output voltage at equilibrium would be 1 V. If I increase the resistance to 1,100 ohms, the output voltage at infinite time would be 1.1 V. The value of the capacitor only changes the time constant so it can be ignored. But I can get the same output if I increase the input to 1.1 mA and leave the resistance at 1,000 ohms. Heat flow is analogous to current so I’m transforming the change in the effective thermal conductivity of the atmosphere to a change in current or forcing, I think.

  46. Re: lucia (Mar 26 13:52),

    Yes. Besides, a resistor is already a linear element, you don’t have to postulate some sort of porous plug with a linear pressure/flow characteristic. Btw, is there such a thing?

  47. I wouldn’t even plan a lab experiment if someone showed me the statistical power of my results was going to be low!

    Bingo. However, I do recall that in one of his comments he did a test to detect the presence of a unit root, many of the tests reject, but I think he did one where the null was reversed. Can’t find it though. arrg.

  48. Re: DeWitt Payne (Comment#39387) March 26th, 2010
    Yes. Besides, a resistor is already a linear element, you don’t have to postulate some sort of porous plug with a linear pressure/flow characteristic. Btw, is there such a thing?

    Yes, it’s called a “small hole.” 😉 (At least in the classical Bernoulli’s law problem). Oh wait… linearly! Oops. 🙂

  49. here lucia:

    KWIATKOWSKI-PHILLIPS-SCHMIDT-SHIN TESTING

    ————————–

    The careful reader has probably noted that the null hypothesis of the ADF test is that the series actually contains a unit root. One might argue that, due to the low number of observations in the series, or simply bad luck, this test fails to reject an untrue null hypothesis, namely that of a unit root in the level series. In other words, the possibility that we are making a so-called Type II error.

    We can however test for the presence of a unit root, by assuming under the null hypothesis that the series is actually stationary. The presence of a unit root is then the alternative hypothesis. In this case we ‘flip’ our Type I and Type II errors (I’m being very informal here, the analogy serves to help you guys ‘visualize’ what we are doing here).

    To do that, we use a non-parametric test, the KPSS, which does exactly that. Namely, it takes the null hypothesis as being stationarity around the trend, and the alternative hypothesis is the presence of a unit root.

    See also: http://en.wikipedia.org/wiki/KPSS_tests

    “In statistics, KPSS tests (Kwiatkowski-Phillips-Schmidt-Shin tests) are used for testing a null hypothesis that an observable time series is stationary around a deterministic trend.”

    IMPORTANT NOTE: The KPSS test statistic’s critical values are asymptotic. Put differently, the test is exact only when the number of observations goes to infinity. The ADF, on the other hand, is exact in small samples under normality of errors (which we tested for above using the JB test statistic).

    KPSS Test result, for two different bandwidth selection methods. The spectral estimator method employed is the Bartlett-kernel method.

    The asymptotic (!) critical values of this test statistic are:

    Critical values:

    1% level, 0.216000
    5% level, 0.146000
    10% level, 0.119000

    So once the Lagrange Multiplier (LM) test statistic is ABOVE one of these values, STATIONARITY is rejected at that significance level.

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.165696
    Conclusion, stationarity is not rejected at 1% significance level. Rejected at 5% and 10% significance levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.154875
    Conclusion, stationarity is not rejected at 1% significance level. Rejected at 5% and 10% significance levels.

    PARZEN KERNEL:

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.147904
    Conclusion, stationarity is not rejected at 1% significance level. Rejected at 5% and 10% significance levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.130705
    Conclusion, stationarity is not rejected at 1% and 5% significance levels. Rejected at 10% significance level.

    Let’s now try to interpret the results of the KPSS test.

    We see that the null hypothesis of NO unit root is rejected at 10% for all methods used, and at 5% in most cases. At a 1% significance level, it is however not rejected.

    Two things to note:

    (1) The test is asymptotic, so the critical values are only exact in very large samples

    (2) The null hypothesis in this case is stationarity, and the small sample distortion severely reduces the power of the test (the power is the ‘inverse’ of the probability of a Type II error). In other words, the test is biased towards NOT rejecting the null hypothesis in small samples.

    However, in spite of this small-sample bias, we nevertheless manage to reject the null hypothesis of stationarity in all cases, at a 10% significance level and in all but one case using a 5% significance level. I conclude that there is strong evidence, when testing from ‘the other side’, and minding the small sample induced power reduction of the test (i.e. the fact that it is biased towards not rejecting stationarity in small samples), that the level series is NOT stationary.

    I(0) is therefore rejected.

    ————————–
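
    (If you want to run the same kind of check yourself, the kpss.test function in R’s tseries package does this, though it does not offer the kernel and bandwidth choices VS reports; temp here is the annual temperature series, however you load it:)

    library(tseries)
    kpss.test(temp, null = "Trend")   # H0: stationary around a deterministic trend
    kpss.test(temp, null = "Level")   # H0: stationary around a level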

  50. Re: steven mosher (Mar 26 14:08),

    The KPSS (Kwiatkowski-Phillips-Schmidt-Shin) test has the hypotheses reversed. The null hypothesis is that the process is stationary as opposed to the ADF and PP tests where the null hypothesis is that the process has a unit root. The problem with these tests as implemented in R is that you don’t know the significance of the coefficients that are calculated inside the test. One should really restrict the number of lags in the ADF test to only those with statistically significant coefficients. How to decide is the tricky part. You can also include a linear trend as part of the tests. But are the slope and intercept of the calculated trend significant? Maybe a more sophisticated (read expensive) statistical package would give you those answers.

  51. Re: steven mosher (Mar 26 14:19),

    But saying that temperature is not stationary is not exactly a news flash. Saying it’s not I(0) does not make it I(1).

    Assuming that because you cannot reject m=0 means that m is identically zero for further analysis sure looks to me like the classical begging the question fallacy.

  52. lucia (Comment#39335) March 26th, 2010 at 10:00 am

    Re: magicjava (Mar 26 09:44),

    “To some extent, the IPCC only tries to report what scientists are chatting about most and seem to find most persuasive right now. That is, to some extent, the state of the science. That’s just the way it is.”

    The IPCC starts from a position that AGW is a fact. Its openly stated policy is to report science which demonstrates AGW. Therefore they are biased from the start. The CRUgate emails demonstrated that the “Warmist” gatekeepers would go to any length to suppress opposing views. At base the work is statistical, but AFAIK no professional statistician is consulted.

  53. First cut on the two box model.

    R version 2.10.1
    unit root tests from package tseries

    The sum of the forcings, columns 2-11 in gissforc.txt (call it vv):
    Augmented Dickey-Fuller Test

    data: vv
    Dickey-Fuller = -3.7162, Lag order = 4, p-value = 0.02569
    alternative hypothesis: stationary

    Phillips-Perron Unit Root Test

    data: vv
    Dickey-Fuller Z(alpha) = -44.9131, Truncation lag parameter = 4, p-value = 0.01
    alternative hypothesis: stationary

    Warning message:
    In pp.test(vv) : p-value smaller than printed p-value

    KPSS Test for Level Stationarity

    data: vv
    KPSS Level = 2.3545, Truncation lag parameter = 2, p-value = 0.01

    Warning message:
    In kpss.test(vv) : p-value smaller than printed p-value

    So ADF and PP reject the presence of a unit root and KPSS rejects stationarity in the summed forcings.

    Fitting vv to the temperature series with time constants of 1 and 19 years using a somewhat modified Nick Stokes R script gives g.

    > # fit regression
    > summary(h)
                  Estimate Std. Error t value Pr(>|t|)
    (Intercept) -0.076463   0.008723  -8.766 1.38e-14 ***
    w1           0.058997   0.020674   2.854  0.00509 **
    w2           0.523258   0.043381  12.062  < 2e-16 ***

    Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 0.09336 on 121 degrees of freedom
    Multiple R-squared: 0.8243, Adjusted R-squared: 0.8214
    F-statistic: 283.8 on 2 and 121 DF, p-value: < 2.2e-16

    > anova(h)
       Df Sum Sq Mean Sq F value    Pr(>F)
    w1  1 3.6798  3.6798  422.19 < 2.2e-16 ***
    w2  1 1.2680  1.2680  145.49 < 2.2e-16 ***

    > g = h$fitted.values

    Augmented Dickey-Fuller Test

    data: g
    Dickey-Fuller = -0.2262, Lag order = 4, p-value = 0.99
    alternative hypothesis: stationary

    Warning message:
    In adf.test(g) : p-value greater than printed p-value

    > g1 <- diff(g)
    > adf.test(g1)

    Augmented Dickey-Fuller Test

    data: g1
    Dickey-Fuller = -5.1897, Lag order = 4, p-value = 0.01
    alternative hypothesis: stationary

    Warning message:
    In adf.test(g1) : p-value smaller than printed p-value

    So ADF fails to reject a unit root for the synthetic series but rejects a unit root for the first difference.

    PP fails to reject for g but rejects for g1
    KPSS rejects for g but doesn’t reject for g1 if null=Trend

    So the meat grinder does produce a non-stationary series from a series that tests as stationary.

    I would like to see independent confirmation of the integration order of the summed forcings as I get different results than David Stockwell, for example.

  54. Re: steven mosher (Mar 26 14:19),

    Thanks for finding that. That strikes me as the right test to build from because stationarity is the null. Do we know if in that test VS preset m=0? Or not? (I looked at the wiki page. It’s possible to have a deterministic trend.)

    Now I’ll have to read about that test.

  55. DeWitt Payne:

    So the meat grinder does produce a non-stationary series from a series that tests as stationary.

    I’d put quatloos down that if you increased your time window to say 500 years you’d find the results looked stationary in the 2-box model output.

    At least you are finding things that make sense, like I_model ≥ I_forcings

  56. I’d put quatloos down that if you increased your time window to say 500 years you’d find the results looked stationary in the 2-box model output.

    I’m interested in seeing that. Maybe use a SRES for the future?

  57. Carrick says:
    “I’d put quatloos down that if you increased your time window to say 500 years you’d find the results looked stationary in the 2-box model output.”

    At the risk of both totally jumping the gun and grossly oversimplifying I think that ultimately this may be the point. There is not yet sufficient data of sufficient quality to establish a trend, e.g. the existence of a unit root cannot be ruled out until there is more data. Nothing is proved or disproved, it is only an indication that the available instrumental temp data set is sufficiently small that noise (natural variability) cannot be separated from signal (ghg forced temp change) with much certainty.

    Ultimately this doesn’t tell us anything that we didn’t already know. We just have to wait a couple of hundred years and see how hot it gets or not. Just like economic forecasts! Dismal science indeed… as Bart & Lucia keep saying at least we have physics to occupy us in the meantime.

  58. Re: lucia (Mar 26 16:20),

    Sure, why not. I was going to do it for Niche Modelling but things like a trip to Sebring for the 12 hour race got in the way. I have a lot more work that I can do. Perhaps I should break it into smaller chunks.

    Re: Carrick (Mar 26 16:08),

    You would win. Both PP and ADF reject for a 500 point random noise forcing.

    I cumsummed a 124 point random noise series twice to produce an I(2) forcing series and tested that. The PP test rejected after two differences. But we don’t know temperature precisely and there are other sources of variation that aren’t modeled. So I added white noise to the synthetic temperature series. It really didn’t take much to have the PP test reject after one difference. It looks way less noisy than the actual temperature series. If I knew how to do it in R, I would add redder noise and test that.
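
    (Maybe something like this would do for the redder noise, via arima.sim; the AR coefficient and amplitude are made up, and g is the synthetic series from above:)

    red <- arima.sim(list(ar = 0.7), n = length(g), sd = 0.05)   # AR(1) "red" noise
    pp.test(diff(g + red))   # does PP still reject after one difference with redder noise added?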

  59. Eric:

    There is not yet sufficient data of sufficient quality to establish a trend, e.g. the existence of a unit root cannot be ruled out until there is more data.

    If all we had was statistical inference and no model, you’d be right.

    But as we do have a fairly detailed model (with varying levels of success in applying that to numerical code, but that’s not the issue, it’s not an intractable problem) to help guide us, so I’d say you’d be wrong. 😉

  60. DeWitt Payne:

    If I knew how to do it in R, I would add redder noise and test that.

    If you put in a non-stationary forcing, then you should see a nonstationary outcome.

    That’s not the same thing as showing the model’s output was non-stationary (in this case it would be the input).

  61. Carrick – yes my comment pertained only to statistical inferences that can be drawn from the instrumental temp record in isolation without reference to models.

    As I understand it that is all that VS has shown thus far.

  62. Re: steven mosher (Mar 26 14:19),
    I tried to see exactly what the KPSS test was testing and it is a puzzle. A usual requirement of a null hypothesis is that you can use it to construct a range of solutions, and then say the observation is out of range (reject). But “stationary” as a null hypothesis seems to offer a very broad range of solutions.

    To put it another way, with an ADF or similar, you can say that the model has roots that you can restrict to a range that does not include 1. The best a reversed test can tell you is that a root lies in a range that includes 1. But you need another criterion to say how narrow that range should be to declare the test satisfied.

  63. Re: Nick Stokes (Mar 26 20:09),

    Doesn’t the KPSS test exclude zero when it rejects rather than including 1?

    Re: Carrick (Mar 26 17:44),

    If you put in a non-stationary forcing, then you should see a non-stationary outcome.

    Well, yes. But that’s not what I’m talking about. Temperature data are noisy. There’s sampling error and variance from processes that lucia has referred to as weather noise. Noise makes the unit root tests less powerful and causes them to reject the hypothesis of a unit root when one is actually present. I would like to know if red or pink noise is more or less effective for this than white noise.

  64. Re: DeWitt Payne (Mar 26 21:10),
    Depends on what variable you are referring to. Excluding stationarity means, in this limited context, requiring that one of the roots of the characteristic equation has magnitude one (or close, but how close?).
    If the recurrence relation is expressed in difference form, that’s true if the zero’th order coefficient is zero.

  65. Lucia: “I should add that in their heart, everyone is a Bayesian.”

    That one is quotable. Certainly all physical scientists are. Mathematicians are not, however.

    Econometricians like VS prefer to look at a data series as merely a collection of numbers, presumed to change with time, but otherwise uninterpreted. So a series may represent monthly means of readings from a set of thermometers, or mean weights of trout caught annually in North American lakes. They don’t care. Without knowing anything about the source of the data or the “data generating process” (DGP) driving the changes in the series, you can decide whether the series is I(0), I(1), etc., by determining how many times the series must be differenced to make it stationary.

    When comparing two series (again, uninterpreted, except that both are assumed to vary on the same time scale), if one of them is I(1) and the other I(2), then variances in the I(1) series must be compared to the first differences in the I(2) series, not the point-to-point variances in that series. That means that changes in the I(1) series correlate, not with the raw changes in the I(2) series, but with the rate of change in the latter series.

    What VS (and Beenstock and Reingewertz) are *claiming* (which was one of your questions) is that one cannot conclude that changes in atmospheric CO2 are *driving* changes in temps *from analysis of the two data series alone*. Nor (per B&R), can one reach such a conclusion from analysis of trends in other GHG concentrations, for the same reasons.

    But that does not mean that CO2 and other GHGs are *not* drivers of temp changes. As you said, everyone is a Bayesian, and makes some assumptions regarding the mechanisms underlying the data (the DGPs). Those assumptions are testable, and though (per VS) cointegration tests do not support any simple causal relationship between GHGs and temps, they do not rule out some more complex relationship.

    But then, no one assumes the relationship is simple. So we’re right back to the problem of feedbacks and sensitivity.

    (Just another lurker’s take on the controversy . . . .)

  66. lucia (Comment#39376)
    “I haven’t demonstrated VS is wrong. I haven’t even tried to demonstrate he is wrong.”
    It wasn’t my intent to say you were trying to prove VS wrong. Perhaps better phrasing would be “if he is proven wrong”. I will try to be more careful in my phrasing.
    Although my statistical background is limited I am aware that statistical analysis is often performed on data that doesn’t meet the appropriate criteria. For lack of better knowledge I have always defaulted to OLS and am now wondering what was my mathematical justification for that default position.

  67. Contrarian (Comment#39430),

    A pretty good summary I think.
    .
    But I’m not so sure about “no one assumes the relationship is simple”. The calculated forcings involved for GHG’s (and aerosols, for that matter) are a very small fraction of the overall energy flow in the system. So while it is true that the relationship between GHG forcing and temperature is most likely not at all simple (and not linear), over the relatively small fractional changes we are talking about, a linear approximation of the relationship between GHG forcing and average temperature seems a reasonable expectation, which is why people talk about “climate sensitivity” as a single value. Of course if you believe the system is very near the edge of a transition from one metastable region to a very different one (the dreaded tipping point argument), then that’s a different story. There is plenty of evidence of climate instability in the downward temperature direction (very modest Milankovitch forcing driving large glacial/interglacial temperature changes), but in the upward temperature direction there seems to be no evidence of tipping points, only speculation about their existence.

  68. SteveF

    Although my statistical background is limited, I am aware that statistical analysis is often performed on data that doesn’t meet the appropriate criteria. For lack of better knowledge I have always defaulted to OLS, and am now wondering what my mathematical justification for that default position was.

    I do the same thing. Even with this unit root issue, it strikes me as the correct default until proven otherwise. Similarly, Gaussian noise is often a useful default until proven otherwise. You usually have to make some assumption to proceed, don’t you?

    OLS works under a wide range of circumstances. Obviously, we don’t want to apply OLS to a mis-specified system. One thing I’m not entirely sure of with VS is whether anyone applies OLS to the system he is telling people to avoid testing with OLS. Also, if we look at this:

    D(GISS_all(t)) = constant + AR1*D(GISS_all(t-1)) + AR2*D(GISS_all(t-2)) + AR3*D(GISS_all(t-3)) + error(t)

    Did anyone ever think that equation applies to temperature observations during the time span he applied it? I think the answer is absolutely no. Everyone thinks the constant term (the deterministic part) is not actually a constant. Being a constant would violate our understanding of the phenomenology.
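
    (For anyone who wants to poke at this themselves, here is a rough sketch of the ARIMA(3,1,0) fit in Python/statsmodels rather than EViews. The giss series is just a placeholder, and the handling of the constant differs a little between packages, so treat this as an illustration rather than a replication of the numbers quoted above.)

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.stats.stattools import jarque_bera

    # Placeholder data; substitute the real GISS annual anomaly series here.
    giss = pd.Series(np.cumsum(np.random.default_rng(1).normal(0.006, 0.1, 128)))

    dT = giss.diff().dropna()      # D(GISS_all): first differences of the anomalies

    # AR(3) with constant on the differences, i.e. an ARIMA(3,1,0) on the levels.
    res = ARIMA(dT, order=(3, 0, 0), trend="c").fit()
    print(res.summary())           # constant and AR1..AR3 estimates with p-values

    # Normality of the disturbances, as in the Jarque-Bera check quoted earlier.
    jb_stat, jb_pvalue, _, _ = jarque_bera(res.resid)
    print("Jarque-Bera p-value:", jb_pvalue)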

  69. Hi Lucia,

    I would like to continue the discussion at Bart’s blog (so if you reply to this, reply there), but since my reply to you is stuck in moderation, here’s the link to the power function (one sided testing, 5% sig):

    http://img62.imageshack.us/img62/1609/powerfunction.jpg

    The specific probability you asked for is: 0.4518, which I think is very reasonable.
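
    (For those who want to check a number like this themselves, a rough Monte Carlo sketch in Python follows. The AR coefficients, residual sd and drift are placeholder values taken loosely from the estimates quoted earlier, so it approximates, rather than reproduces, the power function linked above.)

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    ar = np.array([-0.45, -0.38, -0.32])   # AR coefficients, roughly as estimated
    sigma, n, nsim = 0.1, 128, 2000        # rough residual sd, sample size, replications

    def simulate_dT(drift):
        # Simulate the first-difference series under AR(3) dynamics plus a drift term.
        dT = np.zeros(n + 3)
        for t in range(3, n + 3):
            dT[t] = drift + ar @ dT[t-3:t][::-1] + rng.normal(0, sigma)
        return dT[3:]

    def rejects_zero_drift(dT):
        # One-sided 5% t-test of H0: constant = 0 in the AR(3) regression.
        y = dT[3:]
        X = sm.add_constant(np.column_stack([dT[2:-1], dT[1:-2], dT[0:-3]]))
        fit = sm.OLS(y, X).fit()
        return fit.params[0] / fit.bse[0] > 1.645   # approximate one-sided critical value

    true_drift = 0.006    # roughly the estimated constant; the drift we want power against
    power = np.mean([rejects_zero_drift(simulate_dT(true_drift)) for _ in range(nsim)])
    print("Monte Carlo power:", power)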

    As for the OLS thing. OLS demands (trend-)stationarity, which is clearly violated in the case of the temperature record due to the presence of a unit root (my latest post has an overview of the test results). This doesn’t mean that you cannot apply OLS; it does, however, mean that you cannot apply it to the level series (including the calculation of deterministic trends) 🙂

    Also, that constant is our ‘trend’ estimate. Not ‘the model’ for ‘the temperatures’. Once we get to cointegration, we’ll study actual determinants of temperatures… what I presented was a formally derived (stochastic) trend estimate.

    Here are two links for proper ‘visualization’ of the implications of the two trend estimates, deterministic and stochastic, i.e. the OLS linear trend and the ARIMA(3,1,0) spec without constant. Remember, trend estimates are projected pdf’s! 🙂

    http://img13.imageshack.us/img13/5472/det3d.gif

    http://img687.imageshack.us/img687/2554/stoch3d.gif

    Finally, since you seem to know what you are talking about, here are some *original* results 😉

    Ramsey RESET test results (i.e. testing for unaccounted-for non-linearities; the null hypothesis is an acceptable specification; I use 5% sig because the test is quite ‘radical’ in its implications and we have a mere 128 obs :), applied to the ARIMA specification with constant, lagged fitted values (p-value):

    1 – 0.294208
    2 – 0.397147
    3 – 0.591577
    4 – 0.748892

    without constant:

    1 – 0.096926
    2 – 0.161940
    3 – 0.295070
    4 – 0.448999

    At the 5% significance level, there is no evidence to suggest unaccounted-for non-linearities in the ARIMA(3,1,0) specification, with or without constant (i.e. with or without upward ‘drift’). If you then add parsimony to the consideration-mix, I believe the specification is very adequate.

    Finally, I also performed a structural break test on the ARIMA(3,1,0) specification with constant, and I set the break year to 1964. I used the year 1964 because that’s what the Zivot-Andrews unit root test found when endogenously determining the most probable break in the level series, assuming it existed (note: unit root was not rejected).

    Chow breakpoint test:

    H0: no break
    Ha: break in stochastic trend, in 1964

    p-value: 0.455494

    So, we’re good there as well.
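
    (If your package doesn’t provide a Chow breakpoint test, it is easy enough to roll by hand: split the sample at the candidate break, compare the pooled sum of squared residuals with the two sub-sample fits, and refer the F-statistic to an F distribution. A minimal Python sketch, with toy data standing in for the actual design matrix, is below.)

    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    def chow_test(y, X, split):
        # Classic Chow breakpoint F-test: compare the pooled regression SSR
        # with the sum of the SSRs from the two sub-samples.
        k = X.shape[1]
        ssr_pooled = sm.OLS(y, X).fit().ssr
        ssr_1 = sm.OLS(y[:split], X[:split]).fit().ssr
        ssr_2 = sm.OLS(y[split:], X[split:]).fit().ssr
        n = len(y)
        f = ((ssr_pooled - (ssr_1 + ssr_2)) / k) / ((ssr_1 + ssr_2) / (n - 2 * k))
        return f, stats.f.sf(f, k, n - 2 * k)    # F-statistic and p-value

    # Toy usage with synthetic data (replace with the actual y, X and break index):
    rng = np.random.default_rng(0)
    X = sm.add_constant(rng.normal(size=(120, 3)))
    y = X @ np.array([0.0, -0.45, -0.38, -0.32]) + rng.normal(0, 0.1, 120)
    print(chow_test(y, X, split=84))    # split=84 stands in for the 1964 break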

    Oh, and Josh did a nice cartoon on all of this, haha:

    http://www.cartoonsbyjosh.com/unit-root-presence_scr.jpg

    Hope that clears up some issues, and I’m looking forward to reading your reply on my newest post (once it clears moderation, that is).

    Cheers, VS

    PS. If you refer to these results on Bart’s blog, do copy them fully, for reference. Thanks!

  70. VS–
    Thanks! That answers my question. (Though, I’m not sure that I would consider a power of 45% to be adequate, given that I think a positive trend ought to be the null. Still, at least it’s not teeny tiny. Certainly for strong drifts the power is very high.)

    I’m now going to have to look up the value of drift we expect from the multi-model mean of climate model runs. I’ll read that off the graph.

    Unfortunately, I’m cleaning the house getting ready for a week-long visit from my brother and nephew who will be arriving from Albuquerque!

    I actually have lots of questions, but I’m doing laundry, mopping floors, and my husband is actually buying then assembling a chest of drawers. (Then, tonight I go to the opera! whooo hooo!)

    Thanks for dropping by. I’ll return there to ask more questions… some time.

    By the way, a few people have suggested I invite you to guest post. Are you interested?

  71. It is funny to read dhog and sod WRT VS.

    I think I have wrapped my mind around the VS issue WRT trend estimations; particularly useful is his definition of trend, which is different from the one we all use.

    In a nutshell, if you look at the data, and look only at the data, and use only the data to do a forecast of future data based on a DGM (data generating model) that is specified by the data, you get some pretty wide CIs. To do a trend forecast from the data you are restricted to looking at the data: the data indicate the presence of a unit root or stochastic trend. So starting with what is observed in the data for 1880-1935, you do a forecast of the data to follow. That forecast assumes a unit root, as the data indicated the presence of one. If you look at that forecast, you’ll see some rather wide CIs, but the observations post 1935 are consistent with this forecast. So far I see nothing remarkable in this claim, and nothing that has anything to do with climate science proper. It may touch on how we do testing of claims, however.

    If we take the approach of calculating a “trend” by using matrix algebra and “fit” the data to a deterministic trend, and then “forecast” from the data present in 1880-1935, then this naive forecast will not encompass the data post 1935. And for good reason: GHGs increased during that period. Another way to look at it is this. No one who believes in AGW would agree that you could fit a line from 1979 to 2010 and then use that line as a good forecast of future temps. It’s unphysical, since the line would at some time go to infinity. That raises an interesting point, because if you look at VS’s 1000 year forecast it is not unbounded.
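
    (To make the comparison concrete, here is a sketch in Python/statsmodels of the two competing forecasts: an OLS deterministic trend fitted to 1880-1935 and extrapolated, versus an ARIMA(3,1,0) fitted to the same window and forecast forward. The temps series is a placeholder; with the real record the point is simply that the stochastic-trend bands are much wider. This is an illustration of the argument, not VS’s code.)

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.tsa.arima.model import ARIMA

    # Placeholder series indexed by year; substitute the actual annual anomalies.
    temps = pd.Series(np.cumsum(np.random.default_rng(2).normal(0, 0.1, 131)),
                      index=range(1880, 2011))

    train = temps.loc[1880:1935]
    horizon = 75                                   # forecast 1936 through 2010

    # (a) Deterministic trend: OLS of the level on time, extrapolated forward.
    t = np.arange(len(train))
    ols = sm.OLS(train.values, sm.add_constant(t)).fit()
    t_future = sm.add_constant(np.arange(len(train), len(train) + horizon))
    ols_ci = ols.get_prediction(t_future).conf_int(obs=True)    # 95% prediction band

    # (b) Stochastic trend: ARIMA(3,1,0) without constant, forecast intervals.
    arima = ARIMA(train.values, order=(3, 1, 0), trend="n").fit()
    arima_ci = arima.get_forecast(horizon).conf_int()            # 95% by default

    print("OLS band width at end of horizon:  ", ols_ci[-1, 1] - ols_ci[-1, 0])
    print("ARIMA band width at end of horizon:", arima_ci[-1, 1] - arima_ci[-1, 0])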

    Here is a funny way to look at it. If you are fitting climate data points to a deterministic trend and then comparing those trends, your underlying models are unphysical.

  72. steven mosher,

    you got it, only leave out the GHG part, that’s completely unrelated to this analysis..

    ..so you got it, but do you also ‘see’ it? 😉

  73. lucia, quickly:

    “I think a positive trend ought to be the null”

    …that’s what the Milanković ‘lecture’ was about; the ‘null’ is in fact *supposed* to be negative, but close to 0.. 😉

  74. I think you have it right Steve, and I agree with your final comment

    “If you are fitting climate data points to a deterministic trend and then comparing those trends your underlying models are unphysical.” People seem to be treating OLS trends as a kind of shorthand model. But just putting a trend in and nothing else leads to a mis-specified model, so the usual OLS properties (BLUE etc.) don’t hold. The point with all this unit root stuff is that it’s the first thing econometricians do these days when examining an individual time series. Once the order of integration is established for whatever variables you think you ought to be including, then you see whether or not they are cointegrated. And that is where the physical explanations come in.

  75. VS,

    If I understand you correctly, you object to this:

    “If we take the approach of calculating a “trend” by using matrix algebra and “fit” the data to a deterministic trend and then “forecast” from the data present in 1880-1935, then this naive forecast will not encompass the data post 1935. And for good reason, GHGs increased during that period. ”

    And you object to that because it presupposes a causative role between GHGs and temperature, a causative role that is not apparent in the data.

    But surely we can do a forecast on 1880-1935 data that assumes a Delta_t that is roughly modelled as a response to GHG forcing (much as lucia has done in her lumped parameter model).

    And surely we can conclude that a model that assumes a delta_t = F(GHG forcing) is more skillful than one that does not (a simple linear fit).

  76. Mikep,

    I think this may get at part of the difference between physics and economics. A physicist, I doubt, would ever check for cointegration to ascertain whether a variable like GHG concentrations could influence temperature. The argument would not proceed from the data to the mathematical structure that generates the data. The argument would go the other way around. Experiments are THEORY DRIVEN. That is, the earliest researchers into GHGs looked at the physical laws and theories and made predictions based on that given mathematical “structure”: they argued that if GHGs increased, then the temps would warm. Almost all experiments are theory driven. Since 1900 or so we have been carrying that experiment out. With other sciences the theory-driven experiments can happen in a lab.

    Now, not all science is theory driven all the time. For example, Abduction plays a role in some scientific “discovery.” But the fundamentals of Climate science ( radiative transfer equations) were not “discovered” by abduction. They are derived from equations and confirmed by experiment and refined and expanded by observation.

    So when a physicist looks at the data from the temperature series, it would not occur to him to test for cointegration. The theory under test assumes that GHGs drive the temperature. That is what the laws of physics dictate. The question is how much, and in what fashion, and how feedbacks mitigate or accelerate this.

    This makes me want to look at the stock market and appearances of bubbles and crashes from the aspect of unit roots, as conceptually I suppose one could argue that these phenomena are the result of negative and positive feedbacks.. that is, for the most part the current value of the market doesn’t drive the future value, until you get into prolonged ramps and declines, in which case people change the time horizons of their valuations and want to either “get in” on the ramp or jump off the sinking ship.

  77. PS. I’m not assuming that the presence of a feedback loop implies a unit root. Still trying to warp my mind, or wrap it, around the unit root thingy and what physical system actually manifests a unit root.

  78. Steve,
    I’m not sure I see the big difference. VS continually points out that the GCMs, while based on physics, are a long way short of being deduced only from physics which is already known. There are numerous approximations, parameters that have to be chosen, and modelling choices about things like clouds. Economists build models too, and they are not purely data driven, but are informed by, though not deducible from, economic theory. The gap between theory and models is no doubt much bigger in economics than in the harder sciences, but when you are modelling something as complicated as climate you can’t just deduce the model from the underlying physics. Econometrics is all about the application of statistics to non-experimental data using information from economic theory. Beenstock in one of his presentations actually draws a comparison with a controversy within economics between the so-called Real Business Cycle modellers, who propose an approach rather like the GCMs, and the mainstream econometric modellers, who make much more use of data.

    But the basic point here is that the data is the data is the data. Virtually everyone who has ever looked at global average temperature time series, with the exception of Tamino, has concluded that it is not stationary, but its first difference is. If the output of the GCMs does not have similar properties then there is a systematic mismatch between models and observations, i.e. the GCM prediction minus the observed temperature has a systematic component and is not white noise. So either the models are missing something systematic, or the observations are wrong.

  79. mikep,

    I would be interested to see what the approach said about this:

    http://www.wtrg.com/oil_graphs/oilprice1869.gif

    If one only looks at the data.

    Clearly if one is forecasting the future price of Oil one takes into account the “physical” constraints, as well as assumptions about the price of recovery, the “law” of supply and demand.

    Do we know anyone who would forecast the future price of oil merely from “the data”?

  80. If one only looks at the data.
    Clearly if one is forecasting the future price of Oil one takes into account the “physical” constraints, as well as assumptions about the price of recovery, the “law” of supply and demand.

    There is a good example of what not to do with statistics in this problem, e.g. Pike (2008) in the December issue of the journal Significance, published by the Royal Statistical Society.

    The much-heralded Climate Change Act, given Royal Assent last week, which aims to reduce annual carbon dioxide emissions by 80 per cent by 2050, sets the right agenda for the UK and the science community, says Richard Pike, chief executive of the Royal Society of Chemistry.

    But, he warns, on a worldwide basis, where emissions are fifty times the UK figure, current international plans will remain an unfulfilled fantasy because of mathematical errors in basic assumptions and a global underestimate of the true challenge ahead. His concerns are published in the December issue of the journal Significance, published this week by the Royal Statistical Society.

    Dr Pike said: “This is an extraordinary challenge that must begin with the right facts. The RSC is sending a copy of the Significance article to the mathematics department of every secondary school, to expose the ‘schoolboy howler’ in statistics that is misleading governments everywhere, and compromising our ability to address the potential catastrophe that lies ahead.”

    http://www3.interscience.wiley.com/cgi-bin/fulltext/121556858/PDFSTART

  81. Steve,
    Not sure I understand what you mean. Any econometrician will try to explain the behaviour of some variable or set of variables by looking at the behaviour of those other variables which are expected to influence it. But the first thing they do these days is check the time series behaviour of the individual series by seeing what order of integration they exhibit. This is a check before the real explanatory work gets started, to avoid spurious regressions, which of necessity have non-stationary residuals. The explanatory regression should only include variables with the same order of integration, because otherwise the residuals will not be stationary, violating all the assumptions that make OLS attractive; and even if the variables do have the same order of integration the residuals may not be stationary. But if the variables are cointegrated, then the residuals will be stationary.
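
    (The mechanics of that check are essentially the Engle-Granger two-step: regress the levels, then test whether the residuals are stationary. A bare-bones Python/statsmodels sketch with two simulated series, cointegrated by construction, is below; note that the proper critical values for the residual-based test differ from the ordinary ADF ones, which is why the coint() wrapper is the safer tool.)

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller, coint

    rng = np.random.default_rng(3)
    x = np.cumsum(rng.normal(size=200))          # an I(1) series
    y = 0.8 * x + rng.normal(size=200)           # cointegrated with x by construction

    # Step 1: levels regression; Step 2: are the residuals stationary?
    resid = sm.OLS(y, sm.add_constant(x)).fit().resid
    print("ADF p-value on residuals (illustrative only):", adfuller(resid)[1])

    # The wrapped Engle-Granger test uses the appropriate critical values.
    print("Engle-Granger cointegration p-value:", coint(y, x)[1])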

  82. mikep,

    I implore you: unsheathe your steel… mother academia is calling you to arms.

    All the best, VS

  83. VS

    that’s what the Milanković ‘lecture’ was about; the ‘null’ is in fact *supposed* to be negative, but close to 0.

    But that effect is much, much smaller than we expect for GHGs. So, the null is supposed to be positive.

  84. mikep

    But the basic point here is that the data is the data is the data. Virtually everyone who has ever looked at global average temperature time series, with the exception of Tamino, has concluded that it is not stationary, but its first difference is. If the output of the GCMs does not have similar properties then there is a systematic mismatch between models and observations, i.e. the GCM prediction minus the observed temperature has a systematic component and is not white noise. So either the models are missing something systematic, or the observations are wrong.

    There may be no mismatch with models. Has anyone run the output of GCMs through the tests VS applied? One of the models has at least 7 runs extended through the 21st century. When I have time (which may not be soon because my brother is visiting next week) I may discuss the assumptions in VS’s analysis vis-a-vis what GCMs predict. I think there is an important mismatch.

    DeWitt has driven a 2-lump model with forcings applied to models, and I think he found the output not stationary according to ADF. He put a very brief comment up, and I’ve invited him to guest post.

  85. Re: sod (Mar 28 06:19),
    Well…. No. One reason is I don’t plot forecasting confidence intervals for temperature. I plot the uncertainty in a parameter, which happens to be the trend. I think VS’s lower figure you showed is computed assuming the trend is m=0 C/century during the whole period and everything is explained by noise.

    My basis for selecting null hypotheses to test the models’ projections is to assume they are right. So, the hypotheses of my tests spring from that. I’ll keep my hypotheses even if others think that for some other purpose the null should be “the models are wrong” or “the models should be ignored; today we are looking at what we can learn from the data in isolation”.

    That means I won’t be doing ADF and then computing the uncertainty based on the assumption the models are wrong. My choice does not imply anything about other people’s choices of nulls being right or wrong. Choice of nulls can depend on what question you are asking.

  86. Lucia,
    I should have made clear that I know nothing about the time series properties of the model outputs – the point was hypothetical. All that the unit root tests do on a given time series is describe what properties that series has. And if the temperature series has those properties, your model needs to match them or you are in trouble. I don’t know whether or not the model outputs do match them.

  87. Re: sod (Mar 28 06:19),

    You do realize that accepting those confidence intervals means that you believe that there has been no significant warming since 1880 (m=0). That seems rather at odds with all your previous posts. And that’s not to mention that the implication that CO2 must be differenced means that it will have no long term effect on the climate. So you’ve come over to the dark side, as it were?

  88. Re: lucia (Mar 28 06:38),

    I think he found the output not stationary according to ADF. He put a very brief comment up– and I’ve invited him to guest post.

    Well, it depends; whether it tests as stationary, that is. More on that will be coming.

    I was having trouble with the structure of the post, but I think I’ve found a good narrative hook for the introduction and should have something in a few days.

  89. Re: steven mosher (Mar 27 22:39),

    Do we know anyone who would forecast the future price of oil merely from “the data”

    I don’t know about oil specifically, but forecasting the future price of stocks by looking only at the data is what technical market analysis is all about. Then there’s the IPCC SRES which don’t appear to consider possible supply constraints at all.

    What is the physical principle that governs the reported Global Average Surface Temperature? Is that principle that there is at present a net input of radiative energy into the Earth’s systems due to the presence of GHGs, and thus an increase in energy content is occurring, and therefore the temperature of the Earth’s systems must increase?

    If so, then the reported data itself show that that is not what is happening.

    It is only when a sufficiently long time series of the data is assumed to exhibit a linear increase that the data are ‘consistent with’ the principle. And, during the time period over which the linear increase is assumed to obtain, there are periods for which the data show temperature decreases and periods of almost no change.

    What does this mean? There are a number of possibilities, as follows:

    (1) There is not a constant, or increasing, net energy input into the Earth’s systems.

    (2) The temperature data are not correct ( thrown in for certain groups ).

    (3) The reported Global Average Surface Temperature does not represent the physical quantity that corresponds to the radiative energy phenomena and processes associated with GHGs in the atmosphere.

    (4) The reported Global Average Surface Temperature down here within the planetary boundary layer and the troposphere is dominated by phenomena and processes other than the direct radiative effects associated with GHGs in the atmosphere.

    (5) The well-known indirect effects on radiative energy transport of an increasing temperature with respect to water, its vapor phase, and the solid, liquid and vapor phases of water in the atmosphere sometimes negate the effects of GHGs.

    (6) Model imperfections.

    Try the unit root stuff on the linear fits and see what happens?

    All corrections of incorrect statements will be appreciated.

  91. MikeP

    “Not sure I understand what you mean. any econometrician will try and explain the behaviour of some variable or set of variables by looking at the behaviour of those other variables which are expected to influence it. But the first thing they do these days is check the time series behaviour of the individual series by seeing what order of integration they exhibit.”

    My observation, and it’s only an observation and not a description of how one should proceed, is that the physicist is not going to start his understanding of the climate by looking at the global warming curve. As I said before, in economics the danger of spurious regression is ever present (ok, a bit harsh). You’ve got your list of variables that you think might influence another variable, but in some cases the connection (mechanism) is hard to cash out. So what do you do? Well, if you are going to do a regression to “understand” the underlying model, then yes, you want to avoid a spurious regression and you would look at the order of integration (I think I’m wrapping my mind around that concept.. so bear with me). But the physicist will rarely try to understand the data by doing a regression (dendros don’t COUNT). The physicist will try to represent the model directly. He will write the equation.

    Again, this is just an observation. In any case, here is how I view the first half of the problem.

    If you decide to calculate the trends of the global temperature from the data and only the data, and you want a model that will only be wrong 5% of the time, then the VS approach would be the way to go. That’s just a description of the data.

  92. DeWitt,

    The SRES were the first thing I latched onto in this debate, but nobody wanted to discuss them from the standpoint of constraints or FEEDBACKS. And by feedbacks I mean: if things get so bad from AGW that economic output is cut, then you get a change in GHG output (understanding that there is a time lag, of course).

    PS. I don’t want this to become a peak oil discussion.

  93. Lucia

    There may be no mis-match with models. Has anyone run the output of GCM’s through the tests VS applied? One of the models has at least 7 runs extended through the 21st century.

    I suggested this over at Tammy’s. Got snipped. But apparently somebody over there has, and said he would mail the results to Foster Grant.

  94. Dewitt,

    I’m struggling with this “CO2 must be differenced” point. In the past, in doing some regressions, I had good solid physical reasons to transform a variable prior to regressing. So WRT CO2, since the effect of CO2 is a log response, wouldn’t you look at that as opposed to the raw measurement.. err, does that make a difference?

  95. Steve,
    You say

    We seem to be at cross purposes here. First, every econometrician uses economic theory to inform the choice of explanatory variables and the structure of the model. Economic theory provides less guidance than physics, especially about the size of certain key relationships. Typically the constraints are things like “demand curves should slope down” (i.e. qualitative constraints rather than quantitative ones). But climate models cannot be deduced from first principles – there are parameters that have to be put in. The “econometric” approach to climate models would be, I suppose, to estimate them using actual data, rather than constrain them using prior “knowledge” (in inverted commas because the knowledge is not based on first principles but on e.g. relationships in the lab which have never been tested in the real atmosphere).

    The second point is that there is a very real sense in which the unit root tests on an individual series are descriptive statistics. They show whether the actual observed series is stationary or not. That’s all they show on their own, and it is just the first step in the analysis.

    One of the main reasons econometricians worry about stationarity is that if you regress two quite independent non-stationary series on each other, you find that you get “significant” values of R2 and t quite a lot of the time. The most famous paper on this was by Granger and Newbold as long ago as 1974, though a very similar argument had been put forward by Udny Yule in the 1920s – so this is not new stuff.
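
    (The Granger-Newbold point is easy to reproduce. The small simulation sketched below, in Python, regresses pairs of independent random walks on each other and counts how often the slope looks “significant” at the nominal 5% level; the rejection rate comes out far above 5%.)

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, nsim, reject = 100, 2000, 0

    for _ in range(nsim):
        x = np.cumsum(rng.normal(size=n))        # two independent I(1) series
        y = np.cumsum(rng.normal(size=n))
        fit = sm.OLS(y, sm.add_constant(x)).fit()
        if abs(fit.tvalues[1]) > 1.96:           # nominal 5% two-sided test on the slope
            reject += 1

    print("Nominal 5% test rejects in", 100.0 * reject / nsim, "% of cases")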

    So what do you do if your variables are non-stationary? That’s where all the fun starts. One simple idea was to only use stationary series and get them by differencing. But it turns out that that throws away valuable information about long run equilibrium. Hence the co-integration approach.

    But finding the order of integration of any variable you are going to use is just the first step. VS is just saying that when you look at temperature, the numbers are not consistent with it being determined by a deterministic trend, but they are consistent with it being a stochastic trend. If you regress a stochastic trend against a deterministic trend using OLS, the OLS estimates do not have the properties they would have if all the variables were nicely behaved stationary variables, and the normal confidence intervals you would get from a simple regression package are just wrong.

  96. mikeP

    One of the main reasons econometricians worry about stationarity is that if you regress two quite independent non-stationary series on each other, you find that you get “significant” values of R2 and t quite a lot of the time. The most famous paper on this was by Granger and Newbold as long ago as 1974, though a very similar argument had been put forward by Udny Yule in the 1920s – so this is not new stuff.

    One difficulty is that no one with any sense is regressing “temperature” on “ghgs” using the forms of equations that Beenstock and Reingewertz assume when testing for stationarity. I’m perfectly willing to accept that the B&R paper says one should not do regressions of the form they tested. But everyone on the physics side would have said precisely the same thing: don’t expect linear regressions of that sort to work, because they violate the physics. The physicists don’t do those things. So, as far as I can tell, B&R merely confirms that a method no one who understands the physics would propose doesn’t work.

    The fact that assuming a statistical model with the form of equation B&R test results in something non-stationary doesn’t really tell us much about whether the forms of equations actually used are mis-specified.

    VS seems to be discussing something a bit different, as he hasn’t gone into the cointegration issue. To me, his issue seems to revolve around whether, based on examining the data alone, we can say the recent warming falls outside the range we might have predicted using linear curve fits only. So, it at least seems to relate to the von Storch and Zorita paper saying that recent warming does fall outside the range we would have predicted based on analysis of data from the early portion of the 20th century.

  97. MikeP

    “But climate models cannot be deduced from first principles – there are parameters that have to be put in. The “econometric” approach to climate models would be, I suppose, to estimate them using actual data, rather than constrain them using prior “knowledge” (in inverted commas because the knowledge is not based on first principles but on e.g relationships in the lab which have never been tested in the real atmosphere).”

    I’m not certain you understand fully how physics models are built. For example, some of the “knowledge” that constrains the models is based on first principles. Other knowledge is represented at the effect level (for computational reasons): for example, the use of band models for RTE as opposed to LBL models. Some of the knowledge is tested in the real atmosphere. Anyways, I’ll just focus on those areas where we can agree and where you can explain things to me about unit roots and cointegration.

  98. Lucia,

    “we can say the recent warming falls outside the range we might have predicted using linear curve fits only. So, it at least seems to relate to the VonStorch, Zorita paper saying that recent warming does fall outside the range we would have predicted based on analysis of data from the early portion of the 20th century.”

    If folks just focus on what VS is saying about how one estimates a CI from data, then I think it’s nothing too remarkable. Except for the notion that the warming is “not surprising”, statistically not surprising. But then the information content in the data is low; with wide CIs nothing much comes as a surprise.

    Now, folks like Willis and others will say “then there is nothing to explain” if the null is 0. Clearly this is a rhetorical trick. For while there may be nothing statistically surprising in the rise of the data, that is not the same question as “can you explain the rise?”

    For example, if I constructed a simple model that took TSI as an input and the output matched the temperature curve EXACTLY, that would be a surprising thing. I would have a purported explanation of something that looked “unsurprising”.

    Part of this of course is the notion that one requires a 95% CI to have something worthwhile to investigate. It would be surprising if the global temp were a constant.

    Another way to look at this: from the view of the data, and only the data, there is nothing surprising in the data. There is nothing to be explained. From the view of the climate scientist, that spike in 1998 is information. Those dips after volcanoes are information. The trend after a volcanic eruption looks like “nothing” to the pure statistics approach, but to the physicist it’s confirmation of a theory that predicts such a thing.

    Steve Mosher: “From the view of the data, and only the data, there is nothing surprising in the data. There is nothing to be explained. From the view of the climate scientist, that spike in 1998 is information. Those dips after volcanoes are information. The trend after a volcanic eruption looks like “nothing” to the pure statistics approach, but to the physicist it’s confirmation of a theory that predicts such a thing.”

    Well, I don’t think that’s quite right. Though there may be nothing surprising in the data, that doesn’t imply there is nothing to be explained. As good determinists we wish to be able to explain *every* wiggle in the data — we wish to know why it is warmer in 2009 than it was in 1900, just as we wish to know why it is warmer at noon than at midnight, and warmer in August than in January. The rise in temps over the last century is certainly in need of an explanation, even if, from a statistical point of view, it is not anomalous (surprising) in the context of the entire Holocene record.

    I.e., while the trend of the past century merits interest, it doesn’t (necessarily) merit alarm.

  100. Likely OT, but relevant to the VS discussion, I think.

    Background:
    As I understand it, the currently prevailing theory in climate science is that climate is best modelled by a deterministic trend with a stochastic series “added” to it – i.e., trend + noise. The “trend” comes from known forcings (Milankovitch, solar, CO2 etc.) and the “noise” comes from no-one knows where – either completely random, from complex interactions, or from “forcings” we do not yet understand or perhaps do not even intuit the existence of. Furthermore, the “noise” is of sufficient magnitude that it can currently counterbalance the “trend”, which is the reason for the current “pause” in the “trend”. And furthermore, such trends in the “noise” are common in the record, and they can last up to one, if not several, decades. (corrections welcome)

    Question:
    Is there any reason to believe, for either physical reasons or statistical ones, that such trends in the “noise” cannot and do not exists at longer timescales?

    Discussion:
    I think this is an important question, especially considering a number of commenters at Bart’s blog have suggested that a stochastic trend in the data is “wrong” because there are physical reasons to expect a deterministic trend, and since a trend does exist, it’s reasonable to assume it’s said deterministic trend. If there is no basis for rejecting stochastic trends at longer timescales, then would it not be reasonable to assume that the trend we observe is a combination of the deterministic trend and a stochastic trend of unknown magnitude and timescale? This would significantly change our estimate of the magnitude of the deterministic trend, or at the very least widen our confidence intervals for the determination of the, err, deterministic trend as calculated from the data itself.

  101. Steve,

    I am quite prepared to accept that my knowledge of physics based models is very limited. And it probably isn’t worth my pursuing that line as long as we agree that there is some wiggle-room in the models and a lot more wiggle-room in most economic models.

    Lucia, I don’t think you are interpreting B&R correctly, though I have not read the paper very thoroughly. They do the first stage of the analysis, where they find the order of integration of the various time series. Then they do the “proper” co-integration analysis. And they do find that some of the variables co-integrate. VS suggested that the best way to think of co-integration is as corrected correlation. My understanding is that B&R find that solar variables are “correlated” with temperature in this case, but only the change in GHGs is “correlated” with temperature, not the level. Their equation has nice white noise residuals. In a sense this is an observation about observables, a fact about the relationships that can be discerned in the data. If we can’t think of a physical relationship that would explain this, then something must be wrong somewhere. There are various possibilities, like omitted variables (though there is no obvious warning sign – see the residuals), non-linear relationships (though I think B&R test for that to some degree), failure of imagination on our part, etc. Richard Tol suggested, I think, that the missing heat is in effect in the oceans. B&R undoubtedly overstated their conclusion, but their observations need explanation.

  102. Re: lucia (Mar 26 10:52),
    I see a reference to an old argument here. But it’s relevant to the interpretation of these I(n) tests. To what extent can “fail to reject” affirm the null hypothesis? That’s really the logic being used with the ADF.

    Just reviewing first principles – null hypothesis, a coin is unbiased. Toss six times. If six heads, H0 probability 1/64 – reject. The coin, at 95% confidence, is not unbiased.

    If five heads, probability about 0.1 – fail to reject. You can’t assert the coin is biased. But it’s weak evidence for asserting that the coin is unbiased.
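
    (For anyone who wants to check the arithmetic, the two tail probabilities are 1/64 and 7/64; a two-line Python sanity check is below.)

    from scipy.stats import binom

    print(binom.pmf(6, 6, 0.5))   # P(exactly 6 heads | fair coin) = 1/64, about 0.0156
    print(binom.sf(4, 6, 0.5))    # P(at least 5 heads | fair coin) = 7/64, about 0.1094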

  103. Contrarian (Comment#39513) March 28th, 2010 at 3:14 pm

    Though there may be nothing surprising in the data, that doesn’t imply there is nothing to be explained. As good determinists we wish to be able to explain *every* wiggle in the data — we wish to know why it is warmer in 2009 than it was in 1900, just as we wish to know why it is warmer at noon than at midnight, and warmer in August than in January. The rise in temps over the last century is certainly in need of an explanation, even if, from a statistical point of view, it is not anomalous (surprising) in the context of the entire Holocene record.

    I.e., while the trend of the past century merits interest, it doesn’t (necessarily) merit alarm.

    Good points. I would add, until we can explain the pre-modern blue line in this chart http://img267.imageshack.us/img267/4330/vostok2.gif, we have a long way to go before we can explain the recent rise.

    Neil Fisher: your comments are relevant here as well. Of course we need to remember that stochastic = generated by processes we don’t know about, or don’t understand.

  104. contrarian:

    Well, I don’t think that’s quite right. Though there may be nothing surprising in the data, that doesn’t imply there is nothing to be explained. As good determinists we wish to be able to explain *every* wiggle in the data.

    We agree. Maybe I wasn’t clear.

  105. MikeP

    “Steve,
    I am quite prepared to accept that my knowledge of physics based models is very limited. And it probably isn’t worth my pursuing that line as long as we agree that there is some wiggle-room in the models and a lot more wiggle-room in most economic models.”

    It wasn’t so much an observation that “you don’t know enough” or an observation that there is a difference in wiggle room. It was an observation that the physics approach (for the most part) does not begin with phenomenology. So they don’t exactly get what you are doing when you start with the data and tease out the DGP constraints. How do I put this: doing a regression is almost an admission of failure to understand the process. Ha, them’s fighting words I suppose. Anyway, I’ve said my piece; more interested in the “how” of doing what VS is doing..

  106. Alex:

    Good points. I would add, until we can explain the pre-modern blue line in this chart http://img267.imageshack.us/im…..ostok2.gif, we have a long way to go before we can explain the recent rise.

    If they aren’t related in any model sense, then it’s not that necessary to explain them, is it?

    Eventually if you can obtain conclusive proof that doubling CO2 raises temperatures by 6°C and it could happen in say a 100 year period, it really doesn’t matter that much what slowly varying forcings do to climate over thousands of years, does it?

  107. Alex:

    If what aren’t related to what?

    The temperature fluctuations, which presumably are the response of climate to long-term, slow variations in total forcings, have nothing to do with temperature changes associated with the forcings from rapid increases in CO2.

    So I disagree with your comment that “until we can explain the pre-modern blue line in this chart”… the pre-modern blue line has nothing to do with the question of whether humans are changing the climate and, if so, by how much. It also has little to add to the answer to “if you double CO2 in the atmosphere, how much of a change in global mean temperature do you get?” Nor does it even add much to the question of the impact on the environment of (hypothetically) a 6°C change in global mean temperature.

  108. Carrick,

    your certainty on this question amazes me. The pre-modern blue line shows numerous instances of changes with similar rate to, and much greater magnitude than, the modern temperature rise. If climate scientists cannot explain them, it is hard to see how they can rule out the same factors that caused those rises operating now. The “CO2 dunnit, because we can’t think of anything else” line has a rather hollow sound to it when past changes that apparently didn’t involve CO2 go unexplained.

  109. Alex, I’m not sure what you find amazing, but whatever.

    If you think about it, the two problems really have little to nothing in common:

    You are looking at measurements of a quantity that may have nothing to say about temperature or climate on the rest of the planet over an 8000-year period, a period for which we have little data on the specific changes in forcings that are needed to have a hope of explaining this.

    Now, compared to then, the Earth is instrumented like the Starship Enterprise, so we’re talking about a much more “controlled experiment” in which we are injecting massive amounts of CO2 over a short time while monitoring the response of the climate to this change in the forcings.

    As to the “CO2 dunnit”, you have the directionality wrong (see e.g., Arrhenius circa 1900). People were realizing even then that CO2 acted as a greenhouse gas and that humans might be able to modify climate long before the models say AGW started.

    Also, there really is a strong body of physics there that underpins CO2 as a forcing. (Science of Doom is one of my favorite sites for discussing this sort of physics in detail without rancor.) So while I agree with anybody who suggests that we can’t accurately predict the impact of CO2 on climate, I also believe that we understand the underlying radiative physics associated with how it acts as a forcing.

    In the one case we have a well understood mechanism (CO2 forcing) with models that need work connecting it to full climate, in the case of your blue line, we have a proxy for temperature in a remote place of the world that may or may not resemble true temperature with absolutely no instrumentation, and very little capability of acquiring the required information to fully understand it.

    So, no, in response to your original comment, there is no requirement that we learn why the blue line did what it did and what caused it before we can understand our current climate. Indeed it is more likely that we will understand our current climate through scientific study, and that understanding will eventually lead to a better understanding of paleoclimate.

  110. Carrick,

    OK, I can see where you are coming from now. You may be right. However, I still think we have a long way to go before we fully understand natural climate variation, and also some way to go before we understand what effect CO2 has in the real world. Some of the pieces will fall into place over the next ten years or so. The CERN experiment should rule in or out a role for cosmic rays; whichever way that falls I expect over that time period we will find out a lot more about cloud formation, and I would think the energy budget concerns raised by Trenberth in his recent paper should be clarified.

  111. Carrick,

    SOD = Science of Doom. Oh god, it can’t be true.

    scienceofdoom is one of my favorite places to visit and send people who want a good physics lesson. very civil and WELL written

  112. PS, I agree with you on ScienceofDoom. I am wading through his backposts at the moment.

  113. Carrick: “Eventually if you can obtain conclusive proof that doubling CO2 raises temperatures by 6°C and it could happen in say a 100 year period, it really doesn’t matter that much what slowly varying forcings do to climate over thousands of years, does it?”

    True. Unfortunately, the only proof which would be conclusive is doubling CO2 over 100 years and observing the resulting delta temp. If we hope for some sort of answer sooner than that, we’ll have to try to identify and quantify the factors driving previous excursions of comparable magnitude over comparable intervals. Unless we can correlate a change in atmospheric CO2 to one or more of those episodes, we will still not have “conclusive” proof, but perhaps we’ll understand the relationships among the other factors well enough to guess how they will respond to a fairly rapid CO2 forcing (which may be no different from their response to any other forcing of the same magnitude).

  114. Carrick: “So while I agree with anybody who suggests that we can’t accurately predict the impact of CO2 on climate, I also believe that we understand the underlying radiative physics associated with how it acts as a forcing.”

    Sure. I think that neatly sums up the position of “lukewarmers.”

  115. Alex:

    However, I still think we have a long way to go before we fully understand natural climate variation, and also some way to go before we understand what effect CO2 has in the real world.

    That beautifully sums up my own position.

    My objection was solely that I didn’t think the Vostok data tells us that much, because there are so many unknowns there. We’re doing what amounts to a controlled experiment with climate at the moment.

    I also think a lot more needs to be done with understanding the origins of natural fluctuations, it appears we agree there too. (I also think there’s some beautiful atmospheric-ocean science to be unburied there).

  116. Contrarian, physicists are used to building up a coherent picture from indirect evidence.

    The temperature record is one of the weakest portions. Probably the radiative picture is the strongest to start from. I’ll point you here for a place to think about that, if you haven’t already.

  117. Stephen Mosher:

    scienceofdoom is one of my favorite places to visit and send people who want a good physics lesson. very civil and WELL written

    I agree with you 100% of course.

    His 7-part series on the greenhouse gas effect (first link) is the most lucid (and error-free) treatment I’ve read on CO2 yet.

    Willis should read it and tell us whether he still thinks, afterwards, that an increase in CO2 to 5% wouldn’t result in a measurable effect on climate (yes, he said that).

  118. Three cheers to Thompson for actually addressing the science of the issue, and not disputing the fundamental science behind AGW with the often-referenced G&T and other nonsense. He got it wrong, but at least he started off on the right foot.

  119. Carrick,

    now I really understand where you are coming from. You are right to say that Vostok is a special case – we know that Antarctica behaves differently, e.g. the temperature variations there are wider than experienced elsewhere.

    It seems like we are mostly in agreement – I certainly don’t dispute the radiative physics, my doubts are related to feedbacks.

    From the past record, though, it seems to me like there must be some other factors operating that nobody has really gotten onto yet. For example, I believe I recall seeing that the Milankovitch cycles can only account for about 20 per cent of the change between glacial and inter-glacial periods.

  120. Neil Fisher wrote: “As I understand it, the currently prevailing theory in climate science is that climate is best modelled by a deterministic trend with a stochastic series “added” to it – ie, trend + noise.”

    Strictly speaking, a stochastic process exhibits variation both from ‘assignable causes’ and from randomness. Twenty years ago, people reserved the term ‘deterministic’ to apply only to a deterministic process (one whose behavior could be determined by modeled variables). More recently, people have been talking about the ‘deterministic component’ of a stochastic process (rather than ‘assignable cause variation’). I don’t like that usage because I think it can be confusing.

    It’s often difficult to strictly isolate random variation from assignable-cause variation, so most practitioners of these methods use black-box modeling techniques. Here, you say ‘I don’t care what the internals of the process are, I only care about modeling the output’. Once the output is reasonably well modeled, you can then perturb known inputs, observe the results, and refine your model if need be.

    As a former (long ago) practitioner of stochastic methods, I find the discussion of ‘but that’s not physical’ frustrating. It assumes that we have nearly perfect knowledge of the physics. In contrast, stochastic methods assume that you don’t really understand the physics very well (a good assumption, in my opinion, when dealing with climate).

    Mandelbrot argues that nearly all physical processes are stochastic at some level of resolution (measurement time interval, for example). I could easily build a deterministic model for a ball rolling down an inclined plane (a very well understood piece of physics). My model would accurately predict the time it took for the ball to arrive at the bottom of the ramp, down to the millisecond. However, if I increased the resolution of my measurement such that I wanted to predict the arrival time of the ball down to the nanosecond, my deterministic model would fail. I would discover that the arrival time at this resolution was quite noisy. By contrast, a stochastic model would give me a probability density function that would allow me to calculate the probability that the ball would arrive at any instant in time.

    In the VS thread, there are a lot of people who are saying that the results are invalid because they allow for an unbounded output. It is true that my stochastic model of the ball rolling down the ramp would allow one to calculate the probability of the ball arriving in no time (approaching instantaneously) — which is clearly unphysical. The probability would approach infinitesimal as time approached zero, but there would be a calculated probability none the less.

    People need to understand that all models are just approximations of reality. The only important question should be, ‘which kind of model best approaches reality?’ Deterministic models are also unphysical at some level. After all, these are digital models that model a fundamentally analogue world. So I don’t ‘get’ the argument that a stochastic approach is no good because, at the margin, it can behave unphysically. VS gave a good example where we model the distribution of height in humans as being a normal distribution. Yet a true normal distribution allows for some humans to be 1mm tall and others to be 10m tall. We don’t throw out the model just because of odd behavior at 10 sigma.

  121. Against my better judgement another comment on modelling. I think we need to make a clear distinction between constructing models and testing models. Physicists have a lot of ready made material for constructing models, but not everything they need. Economists also have some ready made material, though not nearly as much and they tend to fill in the rest by estimating relationships whose qualitative form and structure are dictated by theory, but whose magnitude and specific functional form is not given by prior knowledge. These models are distinct from the pure time series models which statisticians tend to like and contain no theoretical restrictions at all and are simply data-based.

    But once you have built the models you can still test them to see how they work. And this, at least for economists, is a serious business. The best test is forecasts out of sample. Provided that people have done their work properly the in-sample performance should meet minimum standards (if it doesn’t it was a bad model in the first place). Over the last thirty years or so economists have learnt the hard way that trying to forecast out of sample performance without taking account of the time series properties of their data often goes wrong. And the co-integration literature both explains why and offers a way forward. Forty years ago, as David Hendry remarked, it was common to see – in the peer reviewed literature – estimated equations where the R2 exceeded the Durbin-Watson statistic. Now everyone would say not “therefore we need to correct for serial correlation” but instead “this is evidence of a misspecified relationship”.
    Note that economics like climate science is largely non-experimental. That means that confronting the theory with the observations has to be done by statistical means. And those statistics are quite complicated because we can’t isolate a single factor to look at at a time – simple bi-variate regressions usually suffer from severe omitted variable bias. So the fitted equations from something like B&R can be thought of as experimental results with all the confounding factors thrown in and accounted for. The B&R results therefore are facts that need to be explained. What must be going on for results like this to be observed? Now it’s possible that their results could be explained as artefacts of the method chosen. That seems a bit unlikely on a quick look. So if the results really are unphysical you have the problem of explaining how, if the world is as you think it is, we can nevertheless observe “correlations” (using the word in the VS sophisticated adjusted correlation sense) as B&R find.

    I think it’s the testing of the models, not their construction, that is the issue here.

  122. Re: (Comment#39544)

    Wow you’ve come a long way baby! 😉

    Watched a show a few weeks ago about some underwater cave divers collecting stalagmites and stalactites from underwater caves (at the time I thought of you all crunching numbers about climate change)

    These guys dove in “Blue Holes” in the Bahamas; very dangerous work! They cored these things; the build-up of the layers from the drip, drip, drip of water and chemicals in these structures shows them sea-level high-stand records, and indicates time on this Earth and indicates temperatures (higher sea level = warmer world = cave floods).

    This website: http://natgeotv.com/ca/diving-the-labyrinth/facts says it will air Thursday 22 April at 12:00 PM

    This website says:

    · Stalagmites and stalactites’ shape and chemistry capture a biography of the surface environment, including how much it rained, what chemicals were in the rain and soil, and temperature.
    · Stalactites and stalagmites stop growing when a cave floods.
    · A red dust forms a thick layer in the cave wall, indicating that at one time, the island’s surface was covered with red dust.
    · The red dust originated from the Sahara desert, about 6437 kilometres away.
    · Stalagmites sometimes reveal tree like rings, with each strata signifying years of growth.
    · The Bahamas stalagmite grew at a rate of 10 thousandth of a millimetre a year.
    · The Bahamas stalagmite has bands of iron, possibly from the Sahara, corresponding with several of the Heinrich events
    · Heinrich events are global climate changes.
    · Sahara dust has been correlated with six Heinrich events from the last 80000 years.

    Wikipedia says http://en.wikipedia.org/wiki/Heinrich_event:

    Heinrich events are global climate fluctuations which coincide with the destruction of northern hemisphere ice shelves, and the consequent release of a prodigious volume of sea ice and icebergs. The events are rapid: they last around 750 years, and their abrupt onset may occur in mere years (Maslin et al.. 2001).

    Also see “Causes” on that wikipedia page for Heinrich events; and
    there’s a chart that also includes Vostok data on that page.

    –the show I watched said “these events are rapid” also; and they said they happen in “mere years” as well. The scientist/divers on the show were more specific than the Wikipedia page, however, and said “it could happen as fast as a human lifetime; in only 50-60 yrs time”.

    The reason I am commenting!!! is that data like this can be found all over the world, and it does match pretty closely; and after reading the blog here I want to comment because geologists and other earth scientists like these already understand climate change can happen really FAST on this planet.

    This is another reason people (such as my husband, the environmental scientist) say “so what!” when they look at the charts and numbers of the modern “global average temp” from the climate models, with fractions of 1 degree going up over a 150 yr or so period, supposedly caused by the physics of CO2 concentrations in the atmosphere, that could be full of errors.

    My point is: climate on Earth has “changed” much faster than some people think it is doing “now”, many times before.

    my 2 cents!

  123. (Comment#39553)

    hmmm googling after copying and pasting some of the claims in that RC link (already know the character of those guys); I found this blog too written by:

    James A. Peden – better known as Jim or “Dad” – Webmaster of Middlebury Networks and Editor of the Middlebury Community Network, spent some of his earlier years as an Atmospheric Physicist at the Space Research and Coordination Center in Pittsburgh and Extranuclear Laboratories in Blawnox, Pennsylvania, studying ion-molecule reactions in the upper atmosphere. As a student, he was elected to both the National Physics Honor Society and the National Mathematics Honor Fraternity, and was President of the Student Section of the American Institute of Physics. He was a founding member of the American Society for Mass Spectrometry, and a member of the American Institute of Aeronautics and Astronautics. His thesis on charge transfer reactions in the upper atmosphere was co-published in part in the prestigious Journal of Chemical Physics. The results obtained by himself and his colleagues at the University of Pittsburgh remain today as the gold standard in the AstroChemistry Database. He was a co-developer of the Modulated Beam Quadrupole Mass Spectrometer, declared one of the “100 Most Significant Technical Developments of the Year” and displayed at the Museum of Science and Industry in Chicago.

    Sounds like a smart chap and his editorial post and opinion called “Editorial: The Great Global Warming Hoax?” :
    http://www.middlebury.net/op-ed/global-warming-01.html
    and the section “Next, let’s take a look CO2 from an Atmospheric Physicist’s view – straightforward physics that we hope most of you will be able to follow” …

    couldn’t find “scienceofdoom” bio anywhere on those pages.

    So much info, so little time!

    Willis is a smart chap as well!! God Bless Willis!

  124. Re: liza (Mar 29 05:55),

    The “Into the Laboratory, it’s time to go to work.” section of your link is pathetic. Atomic absorption in the visible and UV is very different from molecular absorption in the IR. There are relatively few atomic lines and they are quite narrow, on the order of a few pm. There are tens of thousands of lines in the molecular absorption bands and they can be pressure and doppler broadened to widths on the order of nm. They also vary widely in strength of absorption. So as the concentration goes up, more lines at the edge of the band come into play and the total absorption continues to increase. I gave up at that point because it’s obvious that he doesn’t really know what he’s talking about.
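    A small numerical sketch of that point (my own construction, not from the linked page): treat a band as many lines whose strengths span several orders of magnitude, apply Beer-Lambert line by line, and watch the band-average absorption keep growing, roughly logarithmically, as the absorber amount doubles.

    import numpy as np

    # Toy band: line strengths spread over ~6 orders of magnitude (arbitrary units).
    # Strong lines saturate quickly; weaker lines at the band edges keep adding
    # absorption as the concentration rises.
    rng = np.random.default_rng(1)
    strengths = 10.0 ** rng.uniform(-3, 3, size=10_000)

    for conc in [1, 2, 4, 8, 16, 32]:                # relative absorber amounts
        transmission = np.exp(-strengths * conc)     # Beer-Lambert, line by line
        band_absorption = 1.0 - transmission.mean()  # band-averaged absorption
        print(f"conc x{conc:2d}: band absorption = {band_absorption:.3f}")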

  125. Carrick,

    One thing I liked about Science of doom was the explanation of where the log version of the impact of CO2 came from. Perhaps when Mc gets back I’ll direct him to that. While it wasn’t an “engineering” level of a report, after reading it I had a better idea how the figure was arrived at, and it was just the kind of approach we would use in engineering when the matter required it.

    WRT science of doom. I wish more skeptics would just get over the fact that the fundamental physics works. It really is annoying that that particular zombie argument won’t die. While Scienceofdoom takes it down to say a college level of physics, maybe it needs to be taken down to a high school level.

  126. Hi lucia,

    On the drift parameter issue:

    Estimated ARIMA(3,1,0) with constant. Performed Log-likelihood test on redundancy of constant (i.e. drift parameter, i.e. implicit time variable).

    So, we are basically comparing ARIMA(3,1,0) specification without drift, to the one with drift.

    H0: Redundancy (i.e. adding the constant doesn’t result in a significantly better fit)
    Ha: Adding a constant significantly improves the fit of the specification

    F-statistic: 2.2744
    p-value: 0.134150

    Log likelihood ratio: 2.328267
    p-value: 0.127043
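    For readers who want to reproduce the flavour of this comparison, here is a minimal sketch (not VS’s actual code): fit the differenced series as an AR(3) with and without a constant, then compare the two fits with a likelihood-ratio test. The series below is a synthetic stand-in; substitute the annual GISS anomalies to run the real exercise.

    import numpy as np
    from scipy import stats
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(42)
    temps = np.cumsum(rng.normal(scale=0.1, size=125))   # placeholder I(1) series
    d_temps = np.diff(temps)                             # first differences

    # AR(3) on the differences, with and without a constant (drift) term
    fit_const = ARIMA(d_temps, order=(3, 0, 0), trend="c").fit()
    fit_noconst = ARIMA(d_temps, order=(3, 0, 0), trend="n").fit()

    # Likelihood-ratio test; H0: the constant is redundant
    lr = 2.0 * (fit_const.llf - fit_noconst.llf)
    p_value = stats.chi2.sf(lr, df=1)
    print(f"LR statistic = {lr:.3f}, p-value = {p_value:.4f}")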

    Those who come with a hypothesis (in this case deviating from what the cycle theory would imply) should come up with the (statistically) significant evidence.

    Cheers, VS

    PS. Still agreeing with mikep.

  127. liza,

    I read the link you provided. Sorry, have to agree with DeWitt.

    I would say one positive thing about what he wrote. Unlike some skeptics he at least acknowledges that C02 can ‘block’ radiation at certain wavelengths. Whew, thats a start. At least we dont have to discuss greenhouses made from rocksalt.

    Before you go to science of doom, just start with some other basics. So you can see that the science of how radiation transfers through the atmosphere here are some links.

    ( if you want to know how radiation gets reflected, absorbed, scattered, backscatter, and turned into heat when it hits an airplane we’d tell you but then we would have to ‘shoot you’
    just kidding )

    When we build transmission systems why do we pick certain
    wavelengths for those system. What’s cool about microwave?
    what about millimeter wave? When we build radars how do we decide on the frequency to select? What tools do we use to estimate the performance of these systems before we build them? How do we know that the physics of radiation transfering though the atmosphere is correct?

    http://www.crisp.nus.edu.sg/~research/tutorial/em.htm

    http://www.crisp.nus.edu.sg/~research/tutorial/atmoseff.htm

    http://www.crisp.nus.edu.sg/~research/tutorial/absorb.htm

    Scattering is very cool, especially mie scattering.

    http://www.crisp.nus.edu.sg/~research/tutorial/scatter.htm

    Probably best just to start with some basics.

  128. VS

    Those who come with a hypothesis (in this case deviating from what the cycle theory would imply) should come up with the (statistically) significant evidence.

    Sure. Equally, no one is required to demonstrate the statistical significance of hypotheses they don’t advance. Whose hypothesis suggests the trend with time is linear? As far as I am aware, no one’s. So, quite a few people are trying to wrap their heads around how the statistical model you tested relates to any meaningful hypothesis.

    So, we are basically comparing ARIMA(3,1,0) specification without drift, to the one with drift.

    H0: Redundancy (i.e. adding the constant doesn’t result in a significantly better fit)
    Ha: Adding a constant significantly improves the fit of the specification

    I wouldn’t expect adding a constant to improve the fit. When I have time (which won’t be this week) I’m going to see what subtracting the multi-model mean of the models does. That’s a lot closer to the functional form that, one might suggest, people who believe in AGW think holds to some level of approximation. It doesn’t look like
    T = a + b*time + error
    in any way, shape or form.

    But still, if someone is going to fit T = a + b*time + error and then decree “b” is zero, I want the test to have at least 50% power to reject b = 0. While I accept that we could say that lacking a demonstration of statistical significance means that the analysis shows no support for 0 < b, what we also see is that, basically, there just isn’t enough data to make any judgment.
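    A rough sketch of the power check being asked for (my own construction, not lucia’s or VS’s calculation): simulate ARIMA(3,1,0)-type series with a nonzero drift b, run the same drift-redundancy LR test on each, and count how often b = 0 is rejected at the 5% level. The drift, AR coefficients and noise scale below are placeholders, not fitted values.

    import numpy as np
    from scipy import stats
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    n_years, n_sims, alpha = 125, 200, 0.05
    b_true = 0.006              # assumed drift per year (placeholder)
    ar = [-0.45, -0.38, -0.32]  # placeholder AR coefficients for the differences

    rejections = 0
    for _ in range(n_sims):
        # build first differences with drift and AR(3) structure
        eps = rng.normal(scale=0.1, size=n_years)
        d = np.zeros(n_years)
        for t in range(n_years):
            past = sum(a * d[t - k - 1] for k, a in enumerate(ar) if t - k - 1 >= 0)
            d[t] = b_true + past + eps[t]
        fit_c = ARIMA(d, order=(3, 0, 0), trend="c").fit()
        fit_n = ARIMA(d, order=(3, 0, 0), trend="n").fit()
        lr = 2.0 * (fit_c.llf - fit_n.llf)
        if stats.chi2.sf(lr, df=1) < alpha:
            rejections += 1

    print(f"estimated power to reject b = 0: {rejections / n_sims:.2f}")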

  129. Hey guys, I can’t begin to understand all this. That’s the point.
    That’s why I said “so little time so much to read”. Maybe he doesn’t have it right, but maybe you don’t either. My husband seems to think you don’t based on the geologic record!

    My understandings don’t matter however I like to read what another scientist says- good credit to his name as anybody here, and he’s not the only Atmospheric Physicist in the world. I have talked to ASTRO physicists (my friend was dating one) and my husband has worked with them; and the fact is all kinds of crazy stuff goes on in the top of the atmosphere we can’t begin to understand yet. I’ve got emails with polar bear pictures in them from that man. In my experience, most of the scientists I know, including that one, seem to feel that the balance of “power” so to speak is on the cold side … most earth scientists think in epochs not decades. We are still in an ice age right now by its scientific definition.

    I liked from that link how that scientist explained how small the CO2 concentrations were. As my husband calls it, “that teeny tiny gas”. CO2 is referred to as a trace gas! What about that part of it? And how much water vapor the planet handles each day, and the other gases climate scientists don’t worry about as much?

    I posted a link to another essay by the Australian Geologist “On Revising the Nine Times Rule” that my husband knows here also. I got one comment for that link (and it’s 5 or 6 pdfs long) and that comment was positive, from someone who doesn’t comment often here. Nobody else said anything. In that essay, he asks “if CO2 is such a great insulator, why isn’t it used for one?” lol

    CO2 concentrations have been thousands of times more than they are now and there was no runaway burning-up planet.
    Dr. Mann et al. has temperature going up in his graph in lockstep with CO2 concentrations. Then there’s all the physics gobbledygook you throw in on top of it. Amazing. Read my post about the caves again and forget the other one. The point is Earth processes and cycles don’t stop because humans add a little bit more CO2 into the air for a very short time, and the climate has changed much faster than what some claim it is doing now.

    Have a good Monday!

  130. mikep:

    Agreed on the testing of models. One of the issues with GCM testing is the statistical approaches used. In a way a GCM makes a massive number of predictions: since it simulates the climate of the future, it makes “predictions” about the temperature, humidity, rainfall, circulation, etc. Which do we select as MOMs (measures of merit)? For example, we may look at how it gets sea level right or wrong, or ocean heat content, or temperature. And different models can do well on one and not so well on others, and the collection of models can do better than any individual model.

    lastly:

    “The B&R results therefore are facts that need to be explained. What must be going on for results like this to be observed? Now it’s possible that their results could be explained as artefacts of the method chosen. That seems a bit unlikely on a quick look.”

    I have no trouble saying that iff B&R results are in conflict with physics, I will reject B&R on inspection.
    What you think unlikely, I find highly probable. Let’s put it this way: which is more probable, that over 100 years of science is wrong, or that there is something wrong with their method?

    Suppose you prove to me that I cannot exist. Am I required to show the flaw in your argument? Nah, I get to choose. If your argument is true then I don’t exist. But I exist, therefore your argument is false.

    Simply, I get to choose how I structure the argument:

    My form:

    If (B&R) then ~( physics)
    physics
    Therefore ~ B&R.

    your form:

    If ( physics) Then ~(B&R)
    B&R
    Therefore ~ physics.

    That’s a nasty trick

    hehe. But seriously that does point to the main issue, which is what part of the physics does B&R actually touch.

  131. Liza,

    Well, there are a variety of misconceptions that you have and one post won’t explain them all. If you take the time to study some of the links we sent, then you can come back with questions rather than arguments. Because most of your arguments are “I talked to somebody”. The only way we can continue this
    and be polite to others is to find another place to discuss things.
    scienceofdoom is such a place and the writer there is very nice and nobody throws insults. It’s a quiet place where you can ask questions and get answers. The whole issue of “trace gases” gets covered there so you can understand. If you have time to write comments here, you have time to write comments there.

    Before you go there, I want you to google “how a thermos works” and then you will understand how heat is transferred (look at conduction and radiation) and then see if you can figure out why a thermos has a silver lining and why a vacuum is important. Don’t comment back here because this thread is not about those issues directly. See you on scienceofdoom!

    What happens when a temperature series is mapped onto a random walk? Anti-persistence.

    Anti-persistence is a problem for the proprietors of AGW models, e.g. Carvalho et al. 2007.

    Anti-persistence in the global temperature anomaly field

    Abstract. In this study, low-frequency variations in temperature anomaly are investigated by mapping temperature anomaly records onto random walks. We show evidence that global overturns in trends of temperature anomalies occur on decadal time-scales as part of the natural variability of the climate system. Paleoclimatic summer records in Europe and New-Zealand provide further support for these findings as they indicate that anti-persistence of temperature anomalies on decadal time-scale have occurred in the last 226 yrs. Atmospheric processes in the subtropics and mid-latitudes of the SH and interactions with the Southern Oceans seem to play an important role to moderate global variations of temperature on decadal time-scales.

    A significant constraint on the conjecture of a monotonic warming global signature.

    Conclusions
    The anti-persistence of the temperature field on inter-decadal time scales is part of the decadal variability of the climate system and this property has not been identified before. Processes at time scales longer than that of ENSO are also responsible for maintaining stationarity in the temperature anomaly field. In addition, our results indicate the importance of the Southern Oceans in regulating temperature fluctuation regimes on long time-scales. The origin of interdecadal fluctuations in the climate system is currently one of the most challenging problems in climate dynamics.
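    For anyone wanting to see how “anti-persistence” is usually quantified, here is a crude sketch (my own, not Carvalho et al.’s method) of a rescaled-range (R/S) estimate of the Hurst exponent: H < 0.5 indicates anti-persistence, H > 0.5 persistence. The series here is a white-noise stand-in, not a temperature record.

    import numpy as np

    def rs_hurst(series, window_sizes=(8, 16, 32, 64)):
        """Crude rescaled-range (R/S) estimate of the Hurst exponent."""
        series = np.asarray(series, dtype=float)
        log_n, log_rs = [], []
        for n in window_sizes:
            rs_vals = []
            for start in range(0, len(series) - n + 1, n):
                chunk = series[start:start + n]
                dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
                r = dev.max() - dev.min()               # range
                s = chunk.std(ddof=1)                   # standard deviation
                if s > 0:
                    rs_vals.append(r / s)
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
        # slope of log(R/S) against log(n) estimates the Hurst exponent
        return np.polyfit(log_n, log_rs, 1)[0]

    rng = np.random.default_rng(3)
    series = rng.normal(size=512)   # white noise: H should come out near 0.5
    print(f"estimated H = {rs_hurst(series):.2f}")   # crude R/S runs a bit high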

  133. Steve,

    I’d put it the other way round. Which part of physics is B&R supposed to contradict? I have yet to see a clear reply.

  134. Re: mikep (Mar 29 14:30),

    B&R reject CO2 concentration (strictly speaking ln([CO2])) as a first-order temperature forcing, replacing it with d[CO2]/dt (or maybe d(ln[CO2])/dt, it’s hard to tell from their references). There is a clear physical mechanism for [CO2] from radiation transfer physics backed by a lot of observational data for the spectral part. There simply isn’t one for d[CO2]/dt.

  135. Thanks Dewitt.

    I suppose my question would be this.

    Clearly B&R’s result cannot invalidate radiative physics. If it does, then it is clearly wrong. However, radiative physics merely holds that, all things being equal, an increase of CO2 will increase the temp. So, doesn’t B&R really go to the question of feedbacks? Or am I missing something?

  136. Re: steven mosher (Mar 29 15:29),

    What’s the math behind turning a direct forcing into a rate of change forcing (empirical curve fitting is hardly proof) and what does that have to do with feedbacks? Until I see something coherent along those lines, I remain unconvinced that B&R has anything to say at all, except possibly about confidence intervals on trends.

  137. steven mosher (Comment#39559) March 29th, 2010 at 10:04 am

    WRT science of doom. I wish more skeptics would just get over the fact that the fundamental physics works. It really is annoying that that particular zombie argument won’t die. While Scienceofdoom takes it down to say a college level of physics, maybe it needs to be taken down to a high school level.

    You think that’s settled?

  138. Radiative physics aside, has it not been observed that CO2 concentration appears to lag temp increases in the geological time-frame? Is this a generally accepted observation or is this a skeptic meme? Being mostly skeptical myself I can’t say for sure what is real and what is due to my source bias 🙁

    I ask because that observation seems, at least superficially, to be consistent with the idea that d[CO2]/dt is a greater driver than CO2 concentration.

  139. bugs:

    What do you think of the ’saturated gassy argument’ in two parts?

    It’s been a while since I’ve read it. I’ll have to get back to you on it, but for clarity, depth of material, and comprehensive coverage I still say SOD has everybody else beat.

  140. Re: Eric (Mar 29 15:57),

    A lag between CO2 and temperature is both expected and compatible with current theory. See for example my post here. Milankovitch cycles or something operating on the same time scale sets off a reduction in sea ice area. The consequent reduction in albedo causes additional absorption of energy and a temperature increase. That rise in temperature causes an increase in CO2 through a variety of mechanisms including, but not limited to, decreasing solubility in water as temperature increases. The increase in CO2 further amplifies the process. Then you get overshoot and a slow decay back to glacial conditions. That last sentence is my opinion rather than the accepted dogma. In the current situation, interglacial conditions are the exception rather than the norm and there is no need for a trigger to end an interglacial.

  141. The ice age lead-lag relationship was even predicted before it was observed. Some sceptic arguments are 30 years behind the ball. The saturation argument, probably 50 years behind.

  142. Re: carrot eater (Comment#39588) March 29th, 2010 at 4:35 pm
    That is the arm waving of all time! Source please!

    “A lag between CO2 and temperature is both expected and compatible with current theory.” from DeWitt is arm waving too!

    steven mosher (Comment#39579) March 29th, 2010 at 3:29 pm
    With all due respect; my husband agrees with that website about the minimal role of CO2 in the atmosphere. How a thermos works is not the same as “how this whole planet works”.

    And again,
    DeWitt Payne (Comment#39558) March 29th, 2010 at 9:23 am
    Re: liza (Mar 29 05:55),
    “”The “Into the Laboratory, it’s time to go to work.” section of your link is pathetic. Atomic absorption in the visible and UV is very different from molecular absorption in the IR.””

    You didn’t even read it all(?) He isn’t talking about just UV light. I see him speaking about IR too. And never claiming it is “the same”. I see him explaining how “Atomic Absorption Spectrometry” is a method by which we can measure precisely which wavelengths of radiation a particular gas is capable of absorbing.” How do you figure it out?

    He also says “As we can see above, carbon dioxide absorbs infrared radiation (IR) in only three narrow bands of frequencies, which correspond to wavelengths of 2.7, 4.3 and 15 micrometers (µm), respectively….and he goes on.

    That not true?

    I’ll talk to you guys when I talk to you. I’ll try to read your links too! I didn’t have time today (and scanning them I think you are missing something here!)

    I mean no harm and no insult truly but holy cow you are really believing in the “theory” so much you brush aside, skim and ignore a heck of a lot of big things! 🙂

  143. Liza,

    you really, really would benefit enormously from reading ScienceofDoom’s blog. ScienceofDoom takes what I would call a properly sceptical position. That is, he looks at the actual science papers and what they say before coming to any conclusion.

    Start here http://scienceofdoom.com/about/ and then move on to here http://scienceofdoom.com/roadmap/. When you’ve read through all the posts and comments, you will have a much better idea of which bits of climate science are uncontroversial and which are still open to dispute. ScienceofDoom will, I am sure, patiently answer any questions you have that aren’t already answered.

  144. Steve Mosher: “I have no trouble saying that iff B&R results are in conflict with physics, that I will reject B&R on inspection.”

    Nor would I. But that is not what B&R are claiming. They are claiming that, whatever the IR absorption characteristics of GHGs (the physics) may be, the effects of that absorption are not discernible in the 20th century temperature record — that there is no evidence of a deterministic trend of any kind in that record — except that changes in the temp record seem to be correlated with *rates of change* in the GHG record.

    Radiative physics tells us nothing *directly* about climate change. To get from the physics to the phenomena (the observed temps), a large number of intermediate processes (feedbacks, sinks, confounding factors, noise) must be traversed first, many of which are poorly understood and some of which are probably unknown. Call these “HPs” for “hypothesized processes.” The net result of all of these may modify the theoretical forcing to a degree that the CO2 “signal” becomes indiscernible in the data.

    Your syllogism needs to be modified:

    If B&R then ~(physics AND HPs)
    B&R
    physics
    Therefore, ~HPs.

    Liza says

    With all due respect; my husband agrees with that website about the minimal role of CO2 in the atmosphere. How a thermos works is not the same as “how this whole planet works”.

    The models do a ‘how this whole planet works’ using the known physical processes and forcings. They do a good job of getting their outputs to match reality.

    And again,
    DeWitt Payne (Comment#39558) March 29th, 2010 at 9:23 am
    Re: liza (Mar 29 05:55),
    “”The “Into the Laboratory, it’s time to go to work.” section of your link is pathetic. Atomic absorption in the visible and UV is very different from molecular absorption in the IR.””

    You didn’t even read it all(?) He isn’t talking about just UV light. I see him speaking about IR too. And never claiming it is “the same”. I see him explaining how “Atomic Absorption Spectrometry” is a method by which we can measure precisely which wavelengths of radiation a particular gas is capable of absorbing.” How do you figure it out?

    He also says “As we can see above, carbon dioxide absorbs infrared radiation (IR) in only three narrow bands of frequencies, which correspond to wavelengths of 2.7, 4.3 and 15 micrometers (µm), respectively….and he goes on.

    That not true?

    That’s true, but so is the fact that that’s enough to make a significant change. That’s what the models work out: the direct forcing, and the responses, which include feedbacks. Higher temperature due to CO2 causes more water vapour, which is also a GHG. Plus, if you want to, read ‘a saturated gassy argument’, in two parts.

    http://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument/

  146. Liza


    steven mosher (Comment#39579) March 29th, 2010 at 3:29 pm
    With all due respect; my husband agrees with that website about the minimal role of CO2 in the atmosphere. How a thermos works is not the same as “how this whole planet works”.

    1. I asked you to do that NOT so you could understand how the planet works, BUT so that you could get a simple understanding of the difference between conduction and radiation, and also to explain later about CO2 and “insulation”.

    2. Your husband agrees with a website that is wrong.

    Your man picker and web site picker is broken. Please visit the scienceofdoom. or have your husband visit.

    Last comment from me on this.

  147. Contrarian..

    I think we agree. But I want to watch some more argument from carrick and others.

    How do I say this: my knowledge of how radiative physics “plugs into” a GCM is inadequate. And my days of reading Fortran are gone.

  148. bugs (Comment#39604) March 30th, 2010 at 12:46 am
    You say: “That’s true, but so is the fact that that’s enough to make a significant change. That’s what the models’ work out, the direct forcing, and the responses which include feedbacks.”

    I say: Yeah that’s the ticket. The “feedbacks”. You need those made up and mysterious things for that model. Like Contrarian says above “a large number of intermediate processes (feedbacks, sinks, confounding factors, noise) must be traversed first, many of which are poorly understood and some of which are probably unknown. ”

    SteveMosher, we are talking over and around each other. You say: ” I asked you to do that NOT so you could understand how the planet works, BUT so that you could get a simple understanding
    of the difference between conduction and radiation. And also to explain later about C02 and “insulation”
    2. Your husband agrees with a website that is wrong.”

    I understand the difference between those things. The comment about CO2 and insulation came from the essay I posted in the past and you are taking the sentence out of context; it’s more of a sarcastic remark in the context of the whole essay. I also understand things like convection, turbulence, and gravity, and the fact that this planet is not a cold dry black rock, but a wet shiny (blue) one with a furnace deep inside, that has mountains growing up toward the sky, and subduction zones, tectonic plates shifting all around; I also know that it wobbles, tilts and shakes as it travels around the sun, that it bulges in the middle (the oceans do and this makes measuring sea level a chore) and that the shape of the atmosphere is not a perfect circle around us up there; it is more of a cone shape; and I see other things like how big the continent of Antarctica is and most people don’t see that and I know there is a huge swirling vortex in the upper regions of the atmosphere at the South pole too. (some of this sounds like I am talking about a woman! hee hee)

    I thought that “my” website you think is wrong getting into the size and space between molecules was cool too. I also found this site to read more about “Infrared Spectroscopy” of CO2:
    http://www.wag.caltech.edu/home/jang/genchem/infrared.htm
    (still learning)

    I didn’t say my husband agrees with that whole website either.
    I said he agrees with the idea of a minimal role of CO2 in regard to the overall climate on this planet.

    CO2 concentrations have been much higher in the past and it still got cold. Much lower and it still got very warm again. Or, the concentrations stayed relatively constant and it got warmer or colder (22 ice advances and retreats in the last million years or so, correct?). See my links about the cave divers and the data they collected also.

    That blog “science of doom” (from what I’ve read so far of the links everyone is giving me) doesn’t get into the geological record, does it? The Greenhouse Effect is what it is, nobody denies what it is; it is the power it is given over the climate that is in question. Who was writing that blog, btw?

    You don’t have to answer! I don’t mean to highjack the topic. I am busy this week and have family stuff to do. Happy Passover and Easter everyone.

  149. I say: Yeah that’s the ticket. The “feedbacks”. You need those made up and mysterious things for that model. Like Contrarian says above “a large number of intermediate processes (feedbacks, sinks, confounding factors, noise) must be traversed first, many of which are poorly understood and some of which are probably unknown. ”

    The feedbacks aren’t made up or mysterious. They are firmly based on physical explanations. A warming world will melt ice, and is melting ice. Ice reflects sunlight better than what it reveals underneath. Nothing mysterious about that. Like I said, the models contain our understanding of the processes of climate, and they seem to do a good job of creating a good match to the real world.

  150. bugs (Comment#39617) March 30th, 2010 at 6:23 am
    “seem” is the most important word.
    Melting glaciers in a warm world add fresh water to the ocean; rivers too if they grow for some reason; fresh water can change ocean circulation, which can make places on Earth colder, and then that does something else; and then there’s wind, clouds, etc., etc. … to infinity and beyond! 🙂

  151. Re: carrot eater (Mar 29 16:35),

    The ice age lead-lag relationship was even predicted before it was observed.

    I think if you step carefully through the literature, you will find this is not so. Hansen’s paper “predicting” the lead-lag relationship was not published prior to conference articles showing experimental evidence that pointed toward that CO2 lead. Boris and I went through this at some point, and found the various dates or articles, and it’s just not the case that the lead in temp was predicted before it was observed by those analyzing ice cores.

    The lead-lag issue is explainable through feedback. But it was not actually predicted.

  152. liza (Comment#39616)
    “SteveMosher, we are talking over and around each other.”
    It is my understanding (she can correct me if I am wrong) that she is not saying she doesn’t understand the physics or that the physics is wrong. She is saying the physics is incomplete. In a recent Guardian interview (http://www.guardian.co.uk/environment/blog/2010/mar/29/james-lovelock) James Lovelock seems to agree when he said “We haven’t got the physics worked out yet.” It might be best to wait for the physics of the ocean, clouds, aerosols, and forcing (among others) to be worked out before we discount VS’s argument. Although it is appropriate that we all have opinions based on our current knowledge, we should be cautious about overselling them.

  153. So Liza,

    So, do you know the difference between conduction and radiation? Do you understand why SOME scientists thought, just like you, for many decades that CO2 could not have an effect because it was just a trace gas? Do you understand just how they misunderstood how the atmosphere works? Perhaps the RC thread on a “saturated gas” might be a better place to start.

    “What happens to infrared radiation emitted by the Earth’s surface? As it moves up layer by layer through the atmosphere, some is stopped in each layer. To be specific: a molecule of carbon dioxide, water vapor or some other greenhouse gas absorbs a bit of energy from the radiation. The molecule may radiate the energy back out again in a random direction. Or it may transfer the energy into velocity in collisions with other air molecules, so that the layer of air where it sits gets warmer. The layer of air radiates some of the energy it has absorbed back toward the ground, and some upwards to higher layers. As you go higher, the atmosphere gets thinner and colder. Eventually the energy reaches a layer so thin that radiation can escape into space.”

    Didn’t the fact that water vapor thoroughly blocks infrared radiation mean that any changes in CO2 are meaningless? Again, the scientists of the day got caught in the trap of thinking of the atmosphere as a single slab. Although they knew that the higher you went, the drier the air got, they only considered the total water vapor in the column.”

    NOW, here is the part I love. I will explain more below, but read this for now:

    “The breakthroughs that finally set the field back on the right track came from research during the 1940s. Military officers lavishly funded research on the high layers of the air where their bombers operated, layers traversed by the infrared radiation they might use to detect enemies. Theoretical analysis of absorption leaped forward, with results confirmed by laboratory studies using techniques orders of magnitude better than Ångström could deploy. The resulting developments stimulated new and clearer thinking about atmospheric radiation.

    Among other things, the new studies showed that in the frigid and rarified upper atmosphere where the crucial infrared absorption takes place, the nature of the absorption is different from what scientists had assumed from the old sea-level measurements.”

    “Measurements done for the US Air Force drew scientists’ attention to the details of the absorption, and especially at high altitudes. At low pressure the spikes become much more sharply defined, like a picket fence. There are gaps between the H2O lines where radiation can get through unless blocked by CO2 lines. Moreover, researchers had become acutely aware of how very dry the air gets at upper altitudes — indeed the stratosphere has scarcely any water vapor at all. By contrast, CO2 is well mixed all through the atmosphere, so as you look higher it becomes relatively more significant. The main points could have been understood already in the 1930s if scientists had looked at the greenhouse effect closely (in fact one physicist, E.O. Hulbert, did make a pretty good calculation, but the matter was of so little interest that nobody noticed.)”

    So the science moved forward in part because a practical problem had to be solved. The problem of detecting IR transmission through the REAL atmosphere.

    Let’s take a problem like a sensor on the ground looking UP at an airplane. The airplane gives off radiation in a variety of wavelengths, but let’s just focus on IR. Can I see the IR from the ground? Well, that IR has to travel through the atmosphere, and if you don’t understand and model the atmosphere correctly and understand how it changes as you go up or down, you’ll never solve the problem of seeing that airplane. Now let’s suppose that you are in an airplane looking DOWN at an IR source on the ground. If you don’t understand the physics of how IR gets transmitted (absorbed, scattered, reflected, transmitted) you’ll never be able to predict and test the performance of airborne IR sensors. And if you want to look from one airplane to another airplane at the same altitude you’ll need to understand how these things change with altitude. Then you actually go out and test this. As engineers you are given tools.
    Like a database (HITRAN), and then this data is put into models that are used to model the transmission of IR through the atmosphere.

    http://www.cfa.harvard.edu/hitran/
    click on the nice interview:

    “HITRAN stands for HIgh-resolution TRANsmission molecular absorption database, and it goes all the way back to 1961, before I did my Ph.D. on an aspect of molecular spectroscopy. Through some prior research I had done with an Air Force lab nearby in Bedford, Massachusetts, I got a job out there. This Air Force lab was interested in detecting jet aircraft from a distance, with detectors, say, from another aircraft. But you have the Earth’s atmosphere in between, obviously, with a lot of absorption at different frequencies. Hot sources give out a lot of energy in the infrared region of the spectrum, but if you happen to tune your detector, for instance, to where there’s major water-vapor absorption, you’re not going to see anything. What the Air Force needed was a database, a whole compendium of the major absorbers in the Earth’s atmosphere, where they are and how strong they are. The thing you have to realize is that the molecules in gasses have discrete frequencies of absorption; all these guys—water vapor, carbon dioxide—absorb electro-magnetic radiation at discrete frequencies, but they’re smeared out a little because they’re colliding all the time. If you had a database of all this information, you’d have a fingerprint of what’s going on in the atmosphere. It can be an incredible amount of information.

    I should also mention that these absorbers in the infrared range are what some people call the minor constituents of the atmosphere. These are the trace constituents, like water vapor, methane, and carbon dioxide, which happen to be incredibly efficient absorbers.”

    So Liza,

    If you do a test in a lab and shoot IR through a tube containing gases then you can measure how much IR is blocked by CO2. But this experiment (run very early on in the century) doesn’t capture what happens in the column of air above your head. To understand that you have to make measurements at various altitudes; you have to understand where there is water and where it is dry. You build a model of how radiation will be transmitted through the entire atmospheric column and then you test that.
    For example, I used to use MODTRAN.

    http://en.wikipedia.org/wiki/MODTRAN

    here is a nice link as well

    http://en.wikipedia.org/wiki/Radiative_transfer

    and here:

    http://en.wikipedia.org/wiki/List_of_atmospheric_radiative_transfer_codes

    you can have your husband play with modtran here

    http://geoflop.uchicago.edu/forecast/docs/Projects/modtran.orig.html
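    To make the “transmission through the whole column” idea concrete, here is a toy sketch (not MODTRAN, just Beer-Lambert bookkeeping with made-up numbers): split the atmosphere into layers, let water vapour fall off quickly with altitude while CO2 stays well mixed, and multiply the layer transmittances to get what a ground-based sensor looking up would see.

    import numpy as np

    n_layers = 50
    layer_tops_km = np.linspace(0.0, 25.0, n_layers + 1)
    dz_km = np.diff(layer_tops_km)
    z_mid = 0.5 * (layer_tops_km[:-1] + layer_tops_km[1:])

    # water vapour falls off fast with altitude; CO2 follows the bulk air
    h2o_density = np.exp(-z_mid / 2.0)   # ~2 km scale height (toy value)
    co2_density = np.exp(-z_mid / 8.0)   # ~8 km scale height (toy value)
    k_h2o, k_co2 = 0.30, 0.15            # toy absorption coefficients per km

    optical_depth = (k_h2o * h2o_density + k_co2 * co2_density) * dz_km
    layer_transmittance = np.exp(-optical_depth)        # Beer-Lambert per layer
    column_transmittance = layer_transmittance.prod()   # ground to space

    print(f"column transmittance (toy numbers): {column_transmittance:.3f}")
    high = optical_depth[z_mid > 10].sum() / optical_depth.sum()
    print(f"fraction of absorption above 10 km: {high:.2%}")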

  154. The geological records shows long periods of CO2 concentrations 10-20 times higher than present during which time temperatures were only a few degrees higher than present.

    Is this consistent with the currently accepted theories on CO2 and global temperature? If it is not, then is it reasonable to conclude that the science on CO2 and warming is well understood? Perhaps the geological records are in error, but perhaps it is our understanding that is in error.

    Would it not make sense under the circumstance to re-examine the data statistically in light of the problem with OLS to determine confidence levels?

    While we could argue that there is no need to re-examine the data because our understanding is perfect, I do not find that argument convincing.

    If after re-examination the data is found to be consistent with OLS treatment, then this strengthens the case that our understanding is correct and the historical records are misleading.

    If after re-examination the data is found to be inconsistent with OLS treatment, then the method developed by econometrics provides a means to deal with this to improve the accuracy of climate science.

    In both cases, re-examination improves confidence in climate science. However, arguing against re-examination opens the debate to questions of transparency and confidence in the current state of climate science.

  155. lucia (Comment#39621) March 30th, 2010 at 7:50 am
    Thanks for that. Someone has to keep these guys grounded. 🙂

    PMH (Comment#39627) March 30th, 2010 at 10:55 am
    yes that’s sort of what I mean.

    steven mosher (Comment#39629) March 30th, 2010 at 11:56 am
    I hope you know this is all in good fun and intention. I like sparring too. All that you said is fine; I think I said “look at all the trippy stuff that happens at the top of the atmosphere” someplace here already, didn’t I?

    It still doesn’t mean anything to the subject of surface temperatures. (Another thing that bugs me: science of doom et al. describes CO2 in the atmosphere as “well mixed”. Okay, but that still sounds like a big generalization to me – have you ever been to or read about Mammoth Mountain? I am going to have to look up where that term “well mixed” comes from.)

    Anyway, as ge0050 (Comment#39651) March 30th, 2010 at 8:18 pm says:

    “”The geological records shows long periods of CO2 concentrations 10-20 times higher than present during which time temperatures were only a few degrees higher than present.””

    So to me at least (and to my husband) all that cool information in your comment is sort of just more arm waving!

    The study of the Earth and its processes (geology, etc) has roots in the philosophy of naturalism, called “uniformitarianism”.

    Uniformitarianism assumes that the same natural laws and processes that operate in the universe now have always operated in the universe in the past and apply everywhere in the universe. It is frequently summarized as “the present is the key to the past,” because it holds that all things continue as they were from the beginning of the planet. (IOW, or in a nutshell: in the past CO2 didn’t do what you are claiming it does in much, much higher concentrations.) Then we have the models that “seem” to work with CO2 driving the temp. But you only have so many parameters to input in a model. Every time you give CO2 more power in the model, another Earth, sun, orbit, ocean, moon, cloud, gravity, wind, rain, mountain, dust, ocean, etc., etc. process we truly don’t understand fully yet gets downgraded in that climate model.

  156. liza,

    “well mixed” is what they call a euphemism. (you already knew that, of course) 😉

    And there is a certain collection of people whose entire worldview is based on euphemisms.

    “the science is settled”
    “pro-choice”
    “denier”
    “green”
    “scientist”
    “trick”

    …etc., etc., etc.

    Andrew

  157. Re: liza (Mar 31 09:16),

    “the present is the key to the past,”

    Yes, but to use the paleo record to quantify the effect of CO2, there are things we need to know that we simply don’t know. Has solar output been steady over hundreds of millions of years? Modern theory says not, that it has gradually increased, but we don’t really know. Orbital parameters on the multimillion year time scale are chaotic, if I read Tom Vonk correctly, so we can’t accurately calculate that effect. The continents have drifted, causing changes in ocean circulation patterns, etc. But for the next thousand years we can assume those things won’t change. Knowing the spectral and radiative properties of CO2, there is good reason to believe that an increase in CO2 that is larger and faster than anything in the recent record will cause an increase in temperature.

    The real question is not whether and how much the temperature will increase, but what are the probable effects and costs. That’s where the IPCC and the warmers are actually the weakest. Arguing about the fundamental physics is a distraction from the real issues. Policy decisions are always made with insufficient information. The argument that the temperature won’t go up enough to make a difference is easily countered by “But what if you’re wrong?” Risk management is all about probabilities and costs. The warmers own the high ground on costs right now, even after the glacier flap. So all they have to say is that while the probability of a large temperature increase in the future may be small, the costs will be catastrophic so it pays to do something now to minimize that risk.

  158. The real question is not whether and how much the temperature will increase, but what are the probable effects and costs.

    I agree that this is where the uncertainty really kicks in. Biology, economics and politics are much, much harder than the physics. Though the physics does allow for a good bit of uncertainty.

  159. “The real question is not whether and how much the temperature will increase, but what are the probable effects and costs.”

    Well, that’s not earth science or climate science, is it, if “that’s the real question”. That’s politics (or politics and economics). It also makes AGW perfect propaganda, doesn’t it? Sheesh.

  160. Liza

    describes CO2 in the atmosphere as “well mixed”.

    That’s because it is pretty well mixed. Though not perfectly mixed; you’ll get slightly higher readings in the Northern Hemisphere than the Southern.

    Water vapour, on the other hand, is not well-mixed at all, because it can condense out. There is very little water in the stratosphere.

    “”The geological records shows long periods of CO2 concentrations 10-20 times higher than present during which time temperatures were only a few degrees higher than present.””

    in the past C02 didn’t do what you are claiming it does in much much higher concentrations.

    See here, then re-assess.

    http://en.wikipedia.org/wiki/Faint_young_Sun_paradox

  161. carrot eater (Comment#39685) March 31st, 2010 at 10:25 am
    Yeah and then you disregard or “rule out” a changing sun for fractions of a degree “global average temperature” now?
    LOL

  162. carrot eater (Comment#39687)
    You might realize I know all that; and also know that the technology hasn’t been around long enough to determine such things one way or another. Compared to the vast geologic record it’s barely, hardly, any data at all. Just like we’ve not been measuring ice extent at the poles for very long to know how it behaves on such tiny time scales compared to the vast geologic record.

    Look, I’ve already discussed all this with the help of my husband in the other topic “How to talk to a skeptic” about real measuring including the sun relationship and being able to claim you know any of this for sure or even find “averages”. I don’t understand how you all can’t be skeptical for claims about the “average global temperature” in tenths of one degree at this time (up or down even) being “alarming” or “unusual” when this is all clearly very complicated and could be “normal” (and don’t forget the margin of error!) I believe any reasonable person would be skeptical about every single bit; and I think MOST people are. Especially with politics being involved! And I just read here what the “real question” is.

  163. Wait, it might not be in “How to talk to a Skeptic” it might be in another thread. Anyway, it was when we were arguing about “averages” having no meaningful information in regards to the surface temperatures…

    Too hip got to go!
    Family coming to town and all that. Happy Spring!

  164. “…etc., etc., etc.”

    Like:
    “unprecedented”
    “peer reviewed”
    “it’s about physics, isn’t it?”

  165. Dewitt Payne: “Knowing the spectral and radiative properties of CO2, there is good reason to believe that an increase in CO2 that is larger and faster than anything in the recent record will cause an increase in temperature.”

    Yes, there is good reason to believe that. But (as yet) no good reason to *conclude* that. I.e., it is a plausible hypothesis worth investigating.

    “That’s where the IPCC and the warmers are actually the weakest.”

    And,

    “The warmers own the high ground on costs right now, even after the glacier flap.”

    Those seem inconsistent. But I agree that estimating the risks is the heart of the issue. We simply don’t know how to estimate risk in the face of *structural* uncertainty.

    Here is one take (though I don’t agree with his conclusions):

    http://www.economics.harvard.edu/faculty/weitzman/files/modeling.pdf

Comments are closed.