This month we will be betting on the GISTemp Land Ocean October monthly anomaly that will be published at
http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.C.txt
when it first appears in November, 2012.
Note: We are not betting on the value of the October anomaly that will appear in December, 2012. We all know that value will likely be updated, both because data from some stations arrives late and because the GISTemp method causes historical temperature anomalies to change as later data arrives. So: you are betting on the value that will be posted in November.
GISTemp is replacing UAH this month because I am uncertain about whether UAH will announce an anomaly promptly in November and I suspect GISTemp will be the first of the surface based groups to announce their anomaly. When UAH returns to reporting promptly we will revert to that series.
The betting script is below:
[sockulator(../musings/wp-content/plugins/BettingScripts/UAHBets5.php?Metric=October GISTemp: Land and Ocean TTL?Units=C?cutOffMonth=10?cutOffDay=15?cutOffYear=2012?DateMetric=September, 2012?)sockulator]
Bets Close 10/14/2012
For those wondering how GISTemp trends compare to the multi-model mean projections from the AR4, trends computed for various start years are shown below:
The “heavy slate blue” shows trends computed from the multi-model mean monthly anomalies (I need to add this to the legend). The black is the trend computed from the GISTemp monthly anomaly values. The heavy purple is the ±95% uncertainty (two-sided) in the GISTemp trend, estimated assuming the data can be modeled as (linear trend + ARMA(1,1) ‘noise’); the bias toward too-tight uncertainty intervals was corrected using Monte Carlo runs.
The very light slate-blue trace with open circles is the full uncertainty for evaluating the agreement between observed trends and the trend for the multi-model mean. It is computed by pooling the confidence intervals based on the standard error in the multi-model mean trend (computed over 22 models) and the uncertainty in the trend associated with the GISTemp observations.
For years when the open slate-blue circles representing the upper confidence limit lie below the closed slate-blue circles, the statistical model suggests we “reject”, with 95% confidence, the hypothesis that the multi-model mean is consistent with the observations, and we conclude the multi-model mean is warming at a faster rate than the observations. This rejection does not tell us why the multi-model mean is warming too fast, but it indicates that it probably is warming faster than the earth is warming.
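To make the construction of that pooled interval concrete, here is a minimal sketch in Python. The synthetic series, the naive OLS errors, and all of the names are illustrative stand-ins, not the actual script behind the plot (which also widens the observational trend uncertainty for ARMA(1,1) noise via Monte Carlo, and takes the model-side uncertainty from the spread of the 22 individual model trends):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def trend_and_se(y):
    """OLS trend (deg C per decade) and its standard error for a monthly series."""
    t = np.arange(len(y)) / 120.0                  # time in decades
    fit = sm.OLS(y, sm.add_constant(t)).fit()
    return fit.params[1], fit.bse[1]

# Synthetic stand-ins for the GISTemp series and the 22-model mean over the same months.
n = 12 * 32
obs = 0.015 / 12 * np.arange(n) + rng.normal(0, 0.12, n)
model_mean = 0.022 / 12 * np.arange(n) + rng.normal(0, 0.05, n)

obs_trend, obs_se = trend_and_se(obs)        # real analysis: widened for ARMA(1,1) noise
mm_trend, mm_se = trend_and_se(model_mean)   # real analysis: SE across the 22 model trends

# Pooled uncertainty for comparing the observed trend with the multi-model mean trend.
pooled_se = np.sqrt(obs_se**2 + mm_se**2)
upper_95 = obs_trend + 1.96 * pooled_se      # the "open circle" upper limit in the plot
print(f"obs {obs_trend:.3f}, models {mm_trend:.3f}, reject model mean: {upper_95 < mm_trend}")
```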
Does this information help when betting on October values? Not really. When betting on October values, it’s more useful to know that August’s published value is 0.56C and that ENSO has been on the higher end of “La Nada” over the past few months. Not El Nino. Not La Nina. September’s anomaly would be useful to know but has yet to be published; it may or may not appear before the betting tables close.
Good luck!
There still may be time to bet on the September GISTemp figure !!!
Ray–
True. But I wouldn’t want to risk having bets close after they post!
Intrade takes bets on GISTemp, including September, but volume is low. On some contracts the bet costs $8, while betting the other way costs $6, for a payout of $10.
My bet is that the next posted month will be 2012.71
What’s the exchange rate on quatloos anyway?
quatloos = sqrt(-1) $US
I’m going to sit this month out, my Quatloos from last month should pay for a party for everybody. Live it up guys.
I have zero confidence in anything coming out of NASA GISS.
redc–
Maybe if you give Roy some of your quatloos he can get UAH in shape so I can be confident it will report! 🙂
Actually, it probably will report. But I’m just a little worried.
The other disadvantage with GISTemp is that it posts only 2 significant figures. Mind you, that’s reasonable scientifically, but 3 sig figs is nice for betting!
This seems like a reasonable place to ask a random question about indices.
Does anyone know why the Reynolds OI SST dataset gives a trend since 1980 of about 0.08K/decade, while HadSST gives about 0.13?
I have heard the Reynolds OI (satellite) data undersamples high latitudes. I doubt that is the only difference.
GHCN has again released a new version of their data set. Since GISS uses the adjusted data from them, any adjustments during the GISS base period (1951-1980) will impact everyone’s predictions.
Paul Homewood has a cursory analysis of the changes made.
http://notalotofpeopleknowthat.wordpress.com/2012/10/09/new-ghcn-version-cools-the-past-even-more/
I don’t trust GISS at all. Think I’ll skip betting this month.
Well, let’s see, time for the same old spiel again, it seems.
It’s generally accepted in the climate community that the warming prior to 1970 was dominated by natural forcings with anthropogenic forcings associated with warming being largely canceled by anthropogenic cooling from e.g., sulfate emissions. See e.g., the GISS assumed radiative forcings. [*]
If, as the NCDC’s new GHCN version implies, the past has been cooled further (raising the pre-1970 trend), then what this has done is elevate the relative importance of natural forcings by increasing the trend during the period that is believed by the climate community to be dominated by natural forcings rather than anthropogenic ones.
That is, the new temperature series suggests that CO2 is less important as a climate forcing than previously thought.
Test question: Which side of the debate does warming prior to 1970 help? A) Lukewarmers and skeptics, B) Warmers.
[*] Not getting into whether GISS or the rest of the climate community “has it right” with respect to radiative forcings. I’m just trying to accurately convey the views of the community.
Carrick, how does increasing the effect of natural forcings in the past decrease the role of CO2 in the present? It should increase climate sensitivity, not decrease it. After all, the larger the response to a forcing, the larger the sensitivity.
I can see arguing this change means current trends fit projections worse than before, but that’s a different issue.
Carrick:
1) Warmists always talk about .8C of warming in the last century because it is more dramatic to claim all warming was CO2 warming, not just the post-1970 warming.
2) Cooling the past makes the graphs/trends steeper from past to present creating the illusion of impending doom.
3) In reality the 1930s were warmer than the AGW cult wants us to believe.
4) A downward slope from 1934 to 1998 would ruin the scam.
Brandon:
I’ve heard that argument before, but I don’t think it works out if you do the math. That is, increasing the climate sensitivity while keeping the forcings the same in order to increase the pre-1970 temperature trend doesn’t just give you a larger trend pre-1970 and the same trend post-1970: it also produces more warming post-1970 than is consistent with observation.
The only way you can get a larger trend pre-1970 while keeping the post-1970 trend the same is to add a larger component to the natural forcings (regardless of origin). It could also possibly involve tweaking the climate sensitivity, but if you have an additional component of natural warming (suppose it lasts from 1900-2000), you’d almost have to drop climate sensitivity to keep the post-1970 temperature record consistent with observation.
Larger component of natural forcings = the relative influence of the CO2 increase from 1970 to now has to be smaller than previously suggested, relative to natural forcings.
This new record also says we have a poorer handle on natural variability and forcings than the climate scientists are letting on… yet another example of “understating the uncertainty”.
I’m not sure why skeptics would object to that.
Carrick, did you miss my second paragraph? You say:
In my second paragraph I said I can see making that argument.
Doing this would require saying our knowledge of natural forcings in the past is wrong. I find it awkward to respond to a modification in the temperature record by modifying the radiative forcing record.
My preference is to alter the view of unforced variability/feedbacks. I figure until someone can show me reason to believe forcings can explain something, I’ll not assume they can. I’d rather just increase the general range of uncertainties.
Brandon, I didn’t miss the second paragraph, which we agree on (it does increase uncertainty).
However, I disagreed with the first comment
I just don’t see how your second paragraph saves this comment. You can fix the pre-1970 trend without resorting to fiddling with sensitivity, but you can’t fix it with just increasing sensitivity. What it requires is a different forcing history than the climate models have assumed.
BillC (Comment #104739)
Climate Explorer allows you to easily clip datasets to within given latitudes so you can test whether including high latitudes matters.
I couldn’t see any change in the difference between OIv2 and HadSST2 when clipped to 60ºN and S. It seems to be systemic, regardless of latitudinal clipping choices.
It looks like the difference manifests as a step change in the late 90s, with the 2000s and the period up to mid-90s overlaying fairly well (OIv2 exhibits a larger linear regression trend between 1981 and 1996 but the large variance makes this almost meaningless). As a punt, I believe one change in recent observations has been a fairly rapid shift from ship to buoy-based measurements and the latter are known to be biased cool in comparison. Could be the difference is related to how buoy measurements are incorporated.
I’ll deliberately pass this month. Anything that lends credibility in the slightest way to GISS is a net negative in my view.
There isn’t such a clear step change in a HadSST3 comparison, but arguably it’s still there.
PaulS,
Thanks! Time for me to play around with Climate Explorer. I think OIv2 is entirely satellite-based…
Interesting. According to NOAA, they no longer use the satellite data: http://www.ncdc.noaa.gov/oa/climate/research/sst/papers/merged-product-v3.pdf
So – PaulS – your conclusion may be right, but I don’t think you can get Climate Explorer to show Reynolds data – it is going to show NOAA data, what NOAA calls ERSST3, which no longer includes the satellite measurements.
This would be a big difference between say, Bob Tisdale’s analyses and everyone’s I guess…
Update: I found Reynolds.
Here is what I get from climate explorer (linear trends in K/decade by dataset and latitudes)
HadSST2 -60 to 60; 0.077
HadSST2 -90 to 90; 0.075
NOAA -60 to 60; 0.104
NOAA -90 to 90; 0.100
Reynolds -60 to 60; 0.090
Reynolds -90 to 90; 0.087
All show a slightly lower trend for the full globe than for the -60 to 60 region, which might be expected due to ice damping. I have no explanation why HadSST shows up as 0.13 on woodfortrees.
BillC (Comment #104756)
I think you’ve gone wrong somewhere. HadSST2 global trend is definitely ~0.13ºC/Dec for 1981 to present. I got 0.13 from Climate Explorer when I ran it.
Carrick, I asked:
If our calculation of the effect of natural forcings increased, that means the response to the forcings has increased. That means sensitivity has increased. In other words, if we have the formula:
Temperature = Sensitivity * Forcing
Any increase in Temperature (which is really delta Temperature) will result in an increase to Sensitivity if Forcing is kept constant. Since we have nothing new to indicate Forcing changed, it’d be resorting to opportunistic post hoc reasoning if we said it has.
I get that an increase in Sensitivity causes problems for modern observations (and this raises all sorts of issues about the undiscussed Noise), but I don’t see how you escape it. If Temperature goes up, Sensitivity goes up.
Paul_S,
I don’t know if it’s that simple. Are you doing anomalies? What is the base year for the anomalies?
I did absolute temperatures. For absolute temperatures, from 10/1981 to present (to match Reynolds available data) I get, from Climate explorer:
HadSST2 NH: 0.136
HadSST2 SH: 0.028
Global by (NH+SH)/2: 0.096
Global by (0.6*NH + 0.8*SH)/(0.6+0.8), for the ratio of ocean SA: 0.081
(different from the 0.75 but not by much, perhaps explainable by latitude coverage)
Doing anomalies using WFT: NH, 1981-present: 0.017
SH: 0.009
Easy to see how you get to 0.13 this way. What I don’t get is what makes the trend in anomalies different from the trend in raw temps. Looking on WFT it says the Hadley anomalies are 1961-1990. If that’s still true, I can use Climate Explorer to figure it out. The only possible explanation is seasonality, but that’s kind of an !oops!.
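A minimal sketch of the two hemispheric combinations used a few lines up. The function and the example trend values are illustrative only; Climate Explorer’s own global numbers come from proper grid-area weighting, so they will not match this shortcut exactly:

```python
def combine_hemispheres(nh_trend, sh_trend, nh_ocean_frac=0.6, sh_ocean_frac=0.8):
    """Combine hemispheric SST trends into a global value two ways:
    a simple average and an average weighted by each hemisphere's ocean fraction."""
    simple = (nh_trend + sh_trend) / 2.0
    area_weighted = (nh_ocean_frac * nh_trend + sh_ocean_frac * sh_trend) / (
        nh_ocean_frac + sh_ocean_frac)
    return simple, area_weighted

print(combine_hemispheres(0.10, 0.05))   # placeholder hemispheric trends in K/decade
```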
OK, nevermind (maybe).
I was doing HadSST1. When I do HadSST2 at climate explorer the trend works out to 0.129.
So, basically my initial question remains. To figure it out, I’ll probably have to get the full gridded NetCDF files. Shame you can’t get HADSST2 in raw temps, just anomalies.
BillC (Comment #104761)
I get 0.13 from the “absolute” data too, though the values in that dataset look like anomalies, which suggests climexp don’t store HadSST2 absolute data.
Are you sure you’re not using HadISST1? That gives me similar values to the ones you’re quoting.
Ah, slightly late there. You can get HadSST2 absolute value netcdfs from the Met Office pages.
PaulS
Thanks, grabbed the NetCDF I found which appears to be of the format “climatology + anomaly”. Makes for a bigger file but hey. I’m going to put up a plot by latitude which will hopefully help show the discrepancies.
Brandon Shollenberger (Comment #104759)
Hi Brandon,
Here’s how I’m interpreting Carrick’s argument, although he can feel free to correct me. This is an extreme over-simplification on my part, but suppose we have period 1 as pre-1970, and period 2 as 1970 to recent. You have your two equations:
(1) dT1 = dF1 * S
(2) dT2 = dF2 * S
(T = temperature, F = forcing, S = sensitivity). Now, suppose we broke up the change in forcing in each period to anthropogenic (AF) vs. “natural” source, the latter of which can be broken up into “natural variability” over the period (NV) and some sort of common “efficacy” (NV_eff) for that natural variability (e.g., the temperature change resulting from low frequency ocean variations):
(3) dF1 = dNV1 * NV_eff + dAF1
(4) dF2 = dNV2 * NV_eff + dAF2
This results in:
(5) dT1 = (dNV1 * NV_eff + dAF1) * S
(6) dT2 = (dNV2 * NV_eff + dAF2) * S
Now suppose we have balanced both these equations (implicitly in GCMs) using assumed values for NV_eff and S, and that we have a good handle on dT2, dNV1 and dNV2, dAF1 and dAF2. However, we increase dT1 (the temperature change prior to 1970)…
As you correctly point out, you can increase S to balance equation (5) above, but this will throw off equation (6). On the other hand, you can also balance equation (5) by increasing NV_eff (the influence of natural variations), but this change alone will throw off equation (6) as well. The only way to balance both is to modify BOTH NV_eff and S. You could decrease NV_eff and increase S in theory, except that if NV_eff is already assumed to be near zero (as in very little natural contribution to post 1970 warming), you can’t decrease NV_eff enough to offset the increase in S. The only other way to balance the equations is to assume a larger NV_eff (greater contribution from natural forces) and thus decrease S.
Again, obviously this leaves out a huge number of factors and I’m not sure that I agree, but I think I do understand the logic of how one goes from larger temperature increases prior to CO2 really “kicking in” to a potentially larger role for natural forcings in modern times, at least tentatively.
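To see how Troy’s bookkeeping plays out with numbers, here is a minimal sketch; every value in it is invented purely for illustration, and only the structure of equations (5) and (6) comes from his comment:

```python
import numpy as np

# Illustrative numbers only (not from GCMs): forcings in arbitrary units,
# dNV is a dimensionless index of the natural variation in each period.
dNV1, dAF1 = 1.0, 0.1   # pre-1970: mostly natural variation, little anthro forcing
dNV2, dAF2 = 0.1, 0.6   # post-1970: little natural variation, mostly anthro forcing
NV_eff, S = 0.3, 0.5    # assumed efficacy of natural variability and sensitivity

dT1 = (dNV1 * NV_eff + dAF1) * S      # 0.20  -- "old" pre-1970 warming
dT2 = (dNV2 * NV_eff + dAF2) * S      # 0.315 -- post-1970 warming (held fixed)

# Revised record: more pre-1970 warming, same post-1970 warming.
dT1_new = 0.30

# Option A: keep NV_eff, raise S to match dT1_new -- this breaks the post-1970 equation.
S_a = dT1_new / (dNV1 * NV_eff + dAF1)
print("Option A: S =", round(S_a, 3), "-> dT2 becomes", round((dNV2 * NV_eff + dAF2) * S_a, 3))

# Option B: solve for NV_eff and S that satisfy BOTH periods.
# Writing u = NV_eff*S and v = S makes the system linear: dTi = dNVi*u + dAFi*v.
A = np.array([[dNV1, dAF1], [dNV2, dAF2]])
u, v = np.linalg.solve(A, np.array([dT1_new, dT2]))
print("Option B: NV_eff =", round(u / v, 3), " S =", round(v, 3))
```

With these placeholder numbers, raising S alone to match the larger pre-1970 warming overshoots the post-1970 warming, while solving both equations together lands on a larger NV_eff and a slightly smaller S, which is the tradeoff described above.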
Paul S,
Here is the plot I promised showing the 1981-2012 (+/- a few months, depending on the index) trends by latitude, with Reynolds, RSS and HadSST. Whatever the reason, it is clear where on the globe the differences are. Maybe later I’ll grab a NOAA ERSST3 or whatever NetCDF and throw it up.
Interesting what that plot seems to show about net tropospheric amplification in the tropics vs. the middle latitudes.
Troy_CA, I don’t think your interpretation makes sense once you reach equations 3 and 4. You say you’re breaking up the forcing variable into anthropogenic and natural, which makes sense. But you then break up natural into “natural variability” and “efficacy.” I’m not sure either of these make sense.
First, I don’t understand why you would take natural forcings and scale them before multiplying by climate sensitivity. Forcings are (supposed to be) directly comparable regardless of their source. As such, scaling them makes no sense. Second, natural variability isn’t made up just of forced variability. Including unforced variability in your forcing term is bad.
If we just remove the scaling factor you use, the problem is intractable. Suppose, for example, we had these equations:
dT1 = (dNF1 + dAF1) * S
dT2 = (dNF2 + dAF2) * S
With basic algebra, we can solve for S, substitute and get:
dT1/(dNF1 + dAF1) = dT2/(dNF2 + dAF2)
If you then change dT1, something else must change. However, nothing else has changed. This means the equation no longer holds true. There are only two ways to solve that. Either we say something else has changed without any real basis, or we rely on an additional noise factor (which covers unforced variability).
Added NOAA ERSST3 (I must have done something wrong!) and GISSTEMP. Clipped lat bounds to -60 to 60.
Update
Brandon, I don’t really understand your argument.
[Ah never mind, I got it now. I’ll follow up in a separate comment.]
Actually I’d start from here:
where I’ve set dAF1 = 0, since that’s what the IPCC/climate community assumes. (That’s my tenet here… we don’t get to make up our own theories at the outset, we start with what they have assumed and see what needs to change).
What NOAA says is dT1 is larger than previously thought, but dT2 has the same magnitude. We’ll call this new trend (prior to 1970) dT1_new.
We’ll also stick with your assertion:
I think it’s clear that increasing S -> S_new will violate the second equation, so “it should increase climate sensitivity” doesn’t follow from making these equations balance.
The most parsimonious thing to do would keep everything fixed, except increase dNF1_new so that dNF1_new = dT1_new/S_old.
My original comment:
Follows by considering the fraction of the total warming, dT = dT1_new + dT2, that comes from natural forcings:
natural/total = dT1_new/(dT1_new + dT2) = dNF1_new/(dNF1_new + dNF2 + dAF2) > dNF1_old/(dNF1_old + dNF2 + dAF2)
Finally, if we look at historical data, we see that there is long-period variability that is not yet fully explained by the models. We have the following sequence of climate variability
Roman Warming Period
Dark Ages Cooling Period
Medieval Warming Period
Little Ice Age
Pre-anthropogenic GW period (1850-1970)
AGW (1970-now)
It’s not unreasonable (and I think this is what Troy was driving at) that if the natural component of the trend from 1850-1970 were to increase, the post-1970 contribution could well increase too (it doesn’t have to, of course, but it’s not unreasonable to assume the “rebound” from the LIA was due to this longer, multi-century-period climate variability, which over the course of 100 years would resemble a secular trend).
Anyway, if we increase dNF2 and hold dAF2 fixed (as we pretty much have to), clearly the only way to solve these equalities is to actually reduce climate sensitivity S_new < S_old to accommodate the larger natural forcing.
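Here is the same kind of numeric sketch for the argument in this comment; all values are invented, and only the stated constraints (dAF1 = 0, with dT2 and dAF2 held fixed) are taken from it:

```python
# Illustrative numbers only; none of these come from any dataset or model.
S_old = 0.5
dNF1_old, dAF1 = 0.4, 0.0          # pre-1970: natural forcing only (dAF1 = 0)
dNF2, dAF2 = 0.1, 0.9              # post-1970: mostly anthropogenic forcing

dT1_old = S_old * (dNF1_old + dAF1)             # 0.20
dT2 = S_old * (dNF2 + dAF2)                     # 0.50, held fixed throughout

# Revised record: pre-1970 warming is larger, post-1970 warming unchanged.
dT1_new = 0.30

# Most parsimonious fix: keep S_old and enlarge the pre-1970 natural forcing.
dNF1_new = dT1_new / S_old                      # 0.60

# The natural share of the total forcing grows.
frac_old = dNF1_old / (dNF1_old + dNF2 + dAF2)  # ~0.286
frac_new = dNF1_new / (dNF1_new + dNF2 + dAF2)  # ~0.375
print(round(frac_old, 3), "->", round(frac_new, 3))

# If some of that extra natural forcing also spills into the post-1970 period
# (dNF2 grows while dAF2 stays fixed), S must drop to keep dT2 unchanged.
dNF2_new = 0.3
S_new = dT2 / (dNF2_new + dAF2)                 # ~0.417 < S_old
print(round(S_new, 3))
```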
Re: Carrick (Oct 10 13:21),
Indeed.
There appear to be lots of long term ‘oscillations’ in the climate record. There’s the glacial/interglacial cycle that currently has a period on the order of 100,000 years. It was ~44,000 years earlier in the current glacial epoch. There’s the Dansgaard-Oeschger/Bond cycle of ~1500 years and there’s the AMO with a period of ~60 years. The D-O and AMO are controversial. There were 25 D-O events in the last glaciation, but whether that is an actual cycle that continues through the interglacial period has not become part of the consensus view because no obvious driver has been identified, unlike Milankovitch cycles and glaciation. The same applies to the AMO. It’s sort of like continental drift before mid-oceanic ridges were discovered and mapped.
Carrick,
I don’t think what you’ve said about early 20th Century warming accurately represents the beliefs of the climate science community. AR4 SPM says:
This is a relevant summary from Chapter 9:
Paul S, look at this figure: http://www.ipcc.ch/publications_and_data/ar4/syr/en/mains2-4.html
Models with and without anthropogenic forcings are indistinguishable before 1950.
Brandon,
I understand your objection, and think the problem here was my over-simplification. I don’t mean to imply that we should scale the natural _forcings_ by some factor***. Rather, that we may have some notion of naturally varying processes, but we don’t know the forcings associated with them, and thus require the factor to determine that forcing or contribution to dT. For example, let’s take a potential natural “forcing” source such as cloud changes associated with the AMO, where the actual number for the “forcing” associated with this is unknown. There are a couple of ways that the processes associated with the AMO may contribute to dT: 1) Changes in cloud cover or distribution, in which case this would produce a TOA imbalance that “forces” temperatures, 2) Or a simpler heat transfer to the surface, in which case this would be your “unforced variability” (and technically the TOA imbalance would turn negative for its contribution to rising temperatures due to the Planck response). From my understanding, GCMs have little contribution from either #1 or #2, which is also possible. So, to be more precise and avoiding the simplification, my equations (5) and (6) would have been something more like this for each period:
dT = (dNV * NV_eff_F + dAF + dNFK) * S + dNV * NV_eff_T
Where NV again is the “natural variability” (excluding well-constrained natural forcings), NV_eff_F is the “efficacy” factor that represents what this natural variability contributes as a TOA forcing, AF is again the “anthropogenic forcing”, NFK is the well-constrained (“known”) natural forcing, and NV_eff_T is the “efficacy” factor that represents what the natural variability contributes directly to T (your “unforced”/”noise” term). Even here we must add caveats about working in the short term (where the imbalance caused by the “noise” term does not loop back into the transient response) and about not having equilibrated at TOA during the time period.
Regardless, if you have NV_eff_F and NV_eff_T set to 0 (as you tend to get in GCMs for that NV term with periods > 25 yrs), increasing dT in the pre-1970 interval can only be balanced by an increase in S. But again, as pointed out, this results in incorrect values for the post 1970 interval. The other alternative (apart from modifying anthropogenic aerosols) is to increase NV_eff_T or NV_eff_F (which I over-simplified to “NV_eff”) and decrease S, which would balance out the equations.
Obviously, there is also the complication now that some “leftover” imbalance due to natural variation may cross the 1970 threshold, but hopefully the above clarifies my point.
***Although I don’t believe that all forcings of the same globally averaged TOA magnitude will produce the same response, I take your point here and it is a fair enough working assumption for the time being.
Carrick, first a matter of confusion:
I don’t get this. Why would you start off by changing dNF1 to dNF1_new if much of the discussion is about whether or not dNF1 needs to be changed? You’re just begging the question. Regardless:
While true, this is irrelevant as these equations are not the reason I said that. The only point of me discussing those equations was to show the impossibility of changing only a single value within them.
You can argue that is the “simplest” solution if you’d like (I’d disagree), but that doesn’t make it useful or true. Simplicity is only a determining factor when two explanations are equal in other regards. That’s not the case here.
Again, you’re begging the question. You’re telling me if we assume we change the forcing record, your conclusions hold… in response to me saying we don’t need to change the forcings record.
I assume this is supposed to be an argument for modifying the forcings record, but it’s like you didn’t finish your thought.
Sure, if we modify the pre-1970 forcings record in some undetermined way without any particular method or justification, we could easily wind up causing the same to happen in the post-1970 record.
That’s not actually the only way, but even if it were, it assumes we’d increase dNF2 simply because dNF1 could be increased, even though we’ve given no actual reason to increase dNF1.
Somewhat OT:
The Monday WSJ had a special section on environmental policy with contributors pro and con for these issues:
Do we need subsidies for solar and wind power?
Should there be a price on carbon?
Should cities ban plastic bags?
Should the world increase its dependence on nuclear energy?
Are we better off privatizing water?
Should Washington block the Keystone pipeline?
The knock on nuclear power is that it’s too expensive. I’d always wondered, expensive compared to what? Well now I know. It’s more expensive than natural gas fired power plants. I wonder if the greens who use “it’s too expensive” understand that they are, in fact, arguing for increased reliance on fossil fuels even if, in the short term, converting coal fired plants to natural gas will reduce emissions some. Replacing nuclear with natural gas certainly won’t.
But of course we should tax carbon and subsidize wind and solar!
At least they didn’t bother with biomass. I’m not holding my breath waiting for cellulosic ethanol plants to come on line. In line with my proposition that irony always increases, oil companies are paying fines to the government for not using cellulosic ethanol in their gasoline even though there isn’t a commercial source and likely won’t be for some time to come.
Troy_CA:
Niels A Nielsen (Comment #104775)
From a cursory glance the ‘natural’ spread looks like it contains the possibility of a near-zero early 20th Century trend. If the natural forcing, sensitivity and internal variability of a model run which produced such a trend were the same as that which, hypothetically, occurred on the real Earth then nearly 100% of observed early 20th Century warming must have been caused by anthropogenic forcing, and that would be entirely consistent with the figure you linked.
I have difficulties with the notion that natural variability can play much of a role in the warming from 1980 to the present, where we have detailed measurements available. During this time the three major heat sinks (ocean, atmosphere, and land ice) of the climate system have all been accumulating thermal energy. Realistic mechanisms for natural variability would involve intra-system heat transfers (for example, from ocean to atmosphere, from atmosphere to land ice, etc). Thus, warming of the atmosphere and melting of land ice would necessarily involve a loss of heat from the ocean. All three are simultaneously warming. This suggests to me a pervasive forced response. I have yet to see a mechanism postulated for natural variability that could explain the simultaneous warming of ocean, atmosphere, and (melting of) land ice.
Re: Owen (Oct 10 16:07),
We don’t actually know that. We’re reasonably confident the ocean above 2000m has been warming, but we really don’t know much about the heat content of the ocean below 2000m, and there’s quite a lot of it. ARGO doesn’t go that deep. It seems possible to me that a lower overturning circulation could result in a lower flow of cooler water descending into the depths resulting in a net decrease in heat content below 2000m. I’m not saying this is happening, but I think it’s a possibility.
Brandon, I think you need to look up what “begging the question” means.
It has nothing to do with asserting that you need to change dNF1; I’ve explained, in as many ways as I could, why you need to.
If you don’t mind, I’ll bow out of this discussion, which is quickly becoming circular.
Owen
It also happened from 1910-1940. All three of these, naturally according to the IPCC. What causes large scale natural variability, or do you think it just doesn’t exist?
But if I bet on GISTemp I would feel like I am betting on how many UFOs are going to be sighted in October, or how the Russian judge is going to score the vault, or some other human-adjusted number. Can’t we bet on something we have some confidence isn’t going to get adjusted after it is calculated?
Owen, DeWitt and Carrick,
Add to your recent comments the fact that long-term natural variability may have modified “other climate system parameters” (mainly I suppose cloud patterns)! As Carrick was explaining earlier w/r/t ENSO (though I think SteveF was skeptical). Anyway, you could have a scenario where all 3 systems – atmosphere, oceans, cryosphere – gained energy in the absence of an external forcing, considering that we’d put clouds in the bin of either feedback or as part of the variability. Suppose that changes in the distribution of ocean heat content altered cloud patterns while the total heat remained constant or even increased.
And Owen,
Do we know with confidence that global ice volume has shrunk?
I don’t know the answer.
Carrick (Comment #104786)
October 10th, 2012 at 5:45 pm
“It also happened from 1910-1940. All three of these, naturally according to the IPCC. What causes large scale natural variability, or do you think it just doesn’t exist?”
——————————————————–
I most certainly think natural variability exists, but I would like to hear a good explanation of what might cause energy to accumulate everywhere measurable at once if there were no forced response. DeWitt suggested variable changes in thermohaline circulation resulting in less or more heat than normal going into the deep ocean. BillC suggested changes in cloud patterns to reduce albedo (?). IMO we have far more evidence for the mechanism of the forced response than we do for the mechanisms of natural variability.
Owen:
I think everybody else would too. 😉 I don’t think the origin of the warming from 1850-1970 is on solid grounds for example.
Why is it that 1910-1940 has nearly the same slope (within statistical uncertainty, e.g., 95% CL bounds) as 1980-2010?
Mind you, IMO CO2 has to be contributing to the warming trend. But if you start with this premise (GISS Model E forcings), CO2 only had a measurable contribution post 1970. And as SteveF and DeWitt will jump in and point out (or they have in the past!), there is a lot of uncertainty in aerosol forcings.
Given that the models give a lower bound of 1°C or so for global climate sensitivity (I don’t personally think it’s that low), there is a lot of headroom for increasing the relative contribution of natural warming. The question is the origin of the natural variability; I agree with that. But simply because you don’t have a ready answer doesn’t make it not so.
“The absence of evidence is not evidence of absence.”
BillC:
I think so, though as you know there is still dispute, for example, over where the Antarctic ice sheet is losing ice mass. But I am not sure we can say with confidence that the amount of global ice loss is greater post-1980 than in the period, say, 1910-1940, where, as I’ve pointed out, the models suggest the anthropogenic component did not make a measurable contribution to warming.
[I think a much stronger case for the role of anthropogenic CO2 comes from radiative physics.]
random walk plus a little bit colder
Carrick, I find comments like this useless:
It’s just saying, “I’m right because you don’t know what you’re talking about.” I could easily respond by saying, “No you should look up what it means because it’s exactly what you’re doing,” and we’d have had a useless exchange of pointless posturing.
The only reason I can see saying things like that is if you hold what the person has said (and perhaps them) in such contempt you don’t think it merits any actual response. I’d like to think you don’t feel that way here.
You’re welcome to bow out anytime, but if it is circular, that’s only because you’ve consistently failed to address what I say.
Owen:
If we limit “natural variability” to unforced variability, I think I agree. But remember, not all natural variability is unforced.
Brandon, my final word on it:
Any adjustment to make these equations agree requires a change in the net forcings. You can make the equations agree without modifying S, you can make them agree by increasing S and you can make them agree by decreasing S.
So, in order to get the equations in agreement, 1) it is a requirement that the forcings have to change, 2) it is not a requirement that S has to change, and 3) it is certainly not a requirement that S_new > S.
What’s to argue with? Why I’m done.
Paul S:
The point is that anthropogenic forcings did not play an important role in any warming prior to 1970, even according to the model summary from that report. If you are left to arguing there was actually no warming prior to 1970, well I would say that’s in stark disagreement with a large number of different sets of observations. Clearly there was warming, and the modelers don’t appear to agree with you on the need to include anthropogenic forcing to explain the warming for that period.
Carrick:
Endlessly repeating yourself without actually addressing what I say is silly so it’s good this is your final word on the matter!
I’m curious. Has anyone here looked at why there was a change in this temperature record? I just finished reading the technical report that details what was changed about their process, and I can’t see anything questionable about it. All they did was fix bugs in their program. That seems like a good thing to me.
I think if you take issue with this change, you have to take issue with the entire process, not the change itself. Fixing bugs, and being extremely explicit about what effects that has, is not a bad thing.
Re: Brandon Shollenberger (Oct 11 10:34),
Your argument assumes your conclusion in its premise. That’s the logical fallacy known as ‘begging the question’, a term that is used incorrectly more often than not. Until you understand that, you won’t understand why you’re wrong.
Re: Owen (Oct 10 20:39),
Is that not what I said?
We know there is natural variability just like we knew that the west coast of Africa fit the east coast of South America like pieces in a puzzle. But many prominent geologists refused to believe that continents could move because there was no accepted mechanism. You can’t model something without having a mechanism for it. Therefore climate models do not model the natural variability of the climate when they are run without external forcing. In fact, as far as I know, the modellers go to a great deal of trouble to make the models produce nearly constant global average temperature when unforced. Which is probably why the variability of models does not look much like the variability of the real world.
Actually, I’m uncomfortable with the terms forced and unforced as most people seem to use them. In reality, there is no such thing as unforced climate variability. There are natural forcings and there are anthropogenic forcings. Just because we don’t understand and therefore can’t correctly model all the natural forcings doesn’t mean they don’t exist.
Carrick: “there is a lot of uncertainty in aerosol forcings”
Human produced SO2 in the atmosphere went down from 1980 to 2000 the equivalent of one Pinatubo’s worth per year.
If one Pinatubo of SO2 can cool the earth by .5C, then removing one Pinatubo’s worth of SO2 should increase temperatures by .5C.
(And that leaves the other six Pinatubo’s worth of SO2 entering the atmosphere alone for now).
Bruce,
Anthro SO2 doesn’t go directly to the stratosphere in the way that a big volcanic eruption does (if nothing else that affects the residence time in a big way, but there are other differences)…having said that I wonder if there’s much difference in the fact that most Anthro SO2 originates a lot higher above the ground than it did in say, the 1950s. In some cases, perhaps high enough to get out of the boundary layer by the time you consider effective stack height…
BillC, I wonder what affects thermometers more? A lot of SO2 concentrated nearby, or a very thin layer high in the stratosphere?
I do know there are recent papers about the Netherlands and Switzerland that say surface solar radiation is way up because the air is cleaner.
Can we bet on jobs reports? They are in need of some auditing.
Carrick (Comment #104801),
I note that the least certain curves on the graph that you linked to are the aerosol influences, which, even according to the IPCC, could range from quite a lot lower to quite a lot higher than those assumed by GISS. If the historical aerosol forcing is considerably lower than the GISS assumed levels in your graph, and the net GHG forcing consequently higher, then 20th century warming driven mainly by GHG forcing makes perfect sense… so long as you assume a relatively low climate sensitivity (IOW, you assume little additional amplification from clouds over that due to water vapor). I don’t believe Hansen’s gang is going to assume that. They are fitting their aerosol history to preserve consistency of the temperature history with a modeled climate sensitivity near 3C; a sensitivity due in large part to parametrized cloud behaviors.
We have been through this discussion before, of course, but I think it is important to differentiate clearly between things that are reasonably certain (like GHG forcing) and things that are not (like aerosol influences).
DeWitt Payne, did you really mean to direct your comment toward me?
You describe begging the question accurately, and it is exactly what Carrick is guilty of. I, on the other hand, couldn’t possibly be guilty of it as I’ve merely outlined multiple possible solutions and stated which I prefer.
Could you clarify if you meant to direct that to Carrick, or if not, where I assume a conclusion in a premise?
By the way, there’s no reason to dismiss the idea of unforced climate variability. Radiative forcing is calculated at the tropopause, but there is no reason all forms of variability would have to have their effect at that elevation. For example, if the ocean served as a giant capacitor, periodically discharging heat energy it absorbed, it would create a source of variability not measured by radiative forcings.
Re: Brandon Shollenberger (Oct 11 17:08),
And what makes the capacitor discharge when it does? You’re assuming your conclusion again, i.e. that all forcings are radiative and can be converted to W/m² at the tropopause and everything else is therefore unforced.
Re: Brandon Shollenberger (Oct 10 14:46),
And nobody disagrees with that.
But that’s not what you said originally, though.
Which would be only changing a single value, which you now agree is impossible.
So the responses from Carrick and Troy_CA were addressed to your original point that an increase in the pre 1970 trend must result in higher sensitivity. They demonstrated that an increase in sensitivity required more changes in ‘natural variability’ than a decrease in sensitivity. You then proceeded to move the goalposts and whinge.
DeWitt Payne, you claim I would only be changing one value when I say:
That’s rubbish. Nothing in that says anything about all other values remaining constant. In fact, I immediately referred to the problem which requires another value change, saying:
At no point did I suggest, much less say, only a single value would change. It’s remarkable you would so greatly misinterpret what I’ve said then say:
Insulting a person for saying something they never said is rather pathetic.
DeWitt Payne:
This is rubbish. You claim I am “assuming [my] conclusion again,” but all I’m doing is using a word the way it is commonly used. When discussing global warming, people use “forcings” to mean “radiative forcings.”
Outside random variations caused by quantum decoherence (and those might not even be random), everything is deterministic so we could say everything is “forced.” But that’d be useless. The fact every action has effects which could theoretically be mapped out into causes for a near infinite number of other effects doesn’t mean I am obligated to use “forcings” differently than everyone else discussing this topic.
Re: Brandon Shollenberger (Oct 11 19:32),
No, that’s not what I claim at all. I said you moved the goal posts. And you did. Your original statement quoted above had precisely nothing to do with changing only one value. It was about the direction of change of a particular value. That was answered by Carrick and Troy_CA. The answer was not that the effect of natural variability in the past is increased, it’s that the magnitude of the natural variability must have been larger than what was thought previously, both before and after 1970. But you didn’t like that answer for reasons that are not at all clear. You obviously have a blind spot on this and so it’s pointless for me to continue.
DeWitt Payne, after providing a quote, you explicitly said:
I described your comment: “You claim I would only be changing one value when I say….” You now say:
You said I “would be only changing a single value.” I said “[y]ou claim I would only be changing one value.” Please tell me how my near-exact copying of your words is an inaccurate description of what you said.
You said several things. Pointing to one thing you said does not mean we should ignore other things you said.
Seeing as you grossly misrepresented comments by both of us, it probably is pointless for you to continue. However, the problem is not that I “have a blind spot.” The problem is you’re giving extremely inaccurate descriptions of what people have said.
Along those lines, your description of what Carrick and Troy_CA claim is also wrong. But as you say you’re not going to continue, there’s no point in dwelling on that.
Here is my analysis of the betting:
NO. OF BETS 23
MAX 0.720
MIN 0.115
MEAN 0.498
MEDIAN 0.550
STD DEV 0.172
MEAN 1-12 0.436
MEAN 12-23 0.568
MEAN PLUS 1 SD 0.670
MEAN MINUS 1 SD 0.326
WITHIN +/- 1 SD 15.00 65.22%
ABOVE MEAN 15 65.22%
BELOW MEAN 8 34.78%
The average temp. seems to have increased during the course of the betting but that is due to some very low figures in the first 12 bets.
I am surprised to find myself with the 4th highest bet, since I thought I was being conservative!
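For anyone who wants to reproduce this kind of summary, here is a minimal sketch; the `bets` list is a placeholder, not the actual 23 bets behind the numbers above:

```python
import numpy as np

bets = np.array([0.115, 0.30, 0.45, 0.55, 0.60, 0.72])   # placeholder values, not the real bets

mean, sd = bets.mean(), bets.std(ddof=1)                  # sample standard deviation
summary = {
    "NO. OF BETS": len(bets),
    "MAX": bets.max(), "MIN": bets.min(),
    "MEAN": round(mean, 3), "MEDIAN": round(float(np.median(bets)), 3),
    "STD DEV": round(sd, 3),
    "WITHIN +/- 1 SD": int(np.sum(np.abs(bets - mean) <= sd)),
    "ABOVE MEAN": int(np.sum(bets > mean)),
    "BELOW MEAN": int(np.sum(bets < mean)),
}
for label, value in summary.items():
    print(f"{label}: {value}")
```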