With the newly released CRUTEM4 data we can redo our usual comparison of land temp records. For these graphs I’m using a 10-year running mean to smooth out the noise and show longer-term variations between the series, with a common baseline of 1961-1990. Note that these records represent land areas only and are not a global temperature record.
If we focus in on the 1970-present period, we see some differentiation (e.g. NCDC runs a bit higher than the rest), but they are all pretty similar:
Looking at annual values instead over the past 30 years shows us a bit more detail:



Just wondering why you’re still using a common baseline of 1961-1990? Is there still that much data missing to use 1981-2010 or even 1971-2000? I realize base periods have often been changed, and I understand Phil Jones’s concern that with a more recent baseline the anomalies will seem less warm. Are those concerns still relevant today?
I see 1998 continues to get colder.
(OT)
Lucia, what’s happened to the left side margin? I don’t see one at all, which makes some text a bit difficult to read.
(Actually, I’ve found I need a window more than 1160 pixels wide to get a left margin.)
steveta_uk,
Please avoid confusing land with land/ocean records. 1998 was always much more of an outlier in the latter than the former (as well as in TLT measurements). The 1998 measurements in CRUTEM4 are effectively identical to those in CRUTEM3.
Peter,
Using a later base period (e.g. 1971-2000 or 1981-2010) would simply have the effect of slightly increasing the agreement at the end of the series and slightly increasing the disagreement earlier on, though it would be such a small effect that you would have to squint to notice the difference.
Apart from that, baselines only affect y-axis labeling, which is fairly irrelevant.
Zeke –
Although the effect is likely to be minor, doesn’t changing the baseline period also affect the seasonal adjustments? That is, isn’t each month adjusted individually?
HaroldW,
When you are generating your own anomalies, yes. Here I’m working with annual values for existing series, so I just subtract out the 1961-1990 mean to get them all aligned.
The code I use in STATA is pretty straightforward:
foreach series of varlist gistempmasked-annualanom {
    su `series' if year >= 1961 & year <= 1990
    replace `series' = `series' - r(mean)
}
“Please avoid confusing land with land/ocean records. 1998 was always much more of an outlier in the latter than the former (as well as in TLT measurements). The 1998 measurements in CRUTEM4 are effectively identical to those in CRUTEM3.”
Isn’t the reason for not seeing the 1998 excursion the fact that you are using a 10-year moving average?
Also, what are the CIs for the BEST data based on? And could you show a comparison based on annual anomalies?
The value of the comparison should be between the BEST and the Rest (CRU, NCDC and GISS) as the Rest use much the same data and methods for adjustments. BEST as you have previously noted uses, at least in recent times, a much larger collection of stations. The differences arising from the Rest must be related to how they bring the data together to calculate a regional or global average. Any thoughts on that proposition?
Also, since some climate modelers are attributing effects on the transfer of global heat to more regional temperature differences, I think it might be appropriate to keep in mind local temperatures and differences between data sets. Is that possible to do with BEST, where I recollect they use a kind of black box approach to average temperatures and do not adjust the local temperatures? Also, GISS is hesitant to show individual station data because their adjustments impose the trends of the rural stations on all stations.
Also what are the CIs for the BEST data based on?
jackknife. I’ll probably have some more detail in the next couple of weeks. For now, read the existing methods paper.
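The bare-bones idea, stripped of all the BEST-specific weighting and scalpeling, looks something like this (a toy sketch in Python with made-up station values, not the Berkeley code):

import numpy as np

rng = np.random.default_rng(0)
# Toy data: anomalies for 50 hypothetical stations in one year (invented, not real data)
stations = rng.normal(loc=0.4, scale=0.8, size=50)

full_mean = stations.mean()

# Delete-one jackknife: recompute the mean leaving each station out in turn
jack_means = np.array([np.delete(stations, i).mean() for i in range(stations.size)])

n = stations.size
# Standard jackknife variance estimate for the mean
jack_var = (n - 1) / n * np.sum((jack_means - jack_means.mean()) ** 2)
jack_se = np.sqrt(jack_var)

print(f"mean = {full_mean:.3f} +/- {1.96 * jack_se:.3f} (95% CI, jackknife)")

In the real thing the dropped subsamples are whole groups of stations and the field gets re-fit each time, which is where the questions about re-weighting come in.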
The value of the comparison should be between the BEST and the Rest (CRU, NCDC and GISS) as the Rest use much the same data and methods for adjustments. BEST as you have previously noted uses, at least in recent times, a much larger collection of stations. The differences arising from the Rest must be related to how they bring the data together to calculate a regional or global average. Any thoughts on that proposition?
Differences:
1. Stations. CRU and GISS use station selection criteria which depend upon stations having overlapping periods. We could run BEST with new CRU stations or perhaps with the GISS stations.
2. Methods: you have two differences.
A) Empirical “homogenization” or scalpeling, versus algorithmic adjustments (adding and subtracting from stations).
B) Weighting: Berkeley weights included quality.
One useful exercise for people to do is an unweighted simple average, then think about how that unweighted line is wrong if the underlying sample is not spatially uniform. Weights get applied monthly to correct for the number of stations, the area they cover, and in the case of Berkeley the underlying quality of the data.
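A toy version of that exercise (a sketch in Python with invented numbers): put most of your stations in one small, warm region and watch the unweighted mean drift away from the area-weighted one.

import numpy as np

# Two regions: one small and over-sampled, one large and sparse.
# The anomalies are invented for illustration.
region_A = np.array([1.2, 1.1, 1.3, 1.2, 1.25])   # 5 stations, small region
region_B = np.array([0.2, 0.3])                   # 2 stations, large region

area_A, area_B = 0.1, 0.9    # fractional areas of the two regions

unweighted = np.concatenate([region_A, region_B]).mean()

# Area weighting: average within each region first, then weight by area
weighted = area_A * region_A.mean() + area_B * region_B.mean()

print(f"unweighted station mean: {unweighted:.2f}")
print(f"area-weighted mean:      {weighted:.2f}")

The unweighted number lands near the over-sampled region; the weighted one doesn’t. That is the whole argument for gridding, kriging or some other spatial weighting.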
At the time of the Falklands War, Jorge Luis Borges compared it to 2 bald men fighting over possession of a comb. The temp sets as portrayed by Zeke show no significant differences. Why argue? If the uncertainties are factored in, which is a major shortcoming of every graph of this global temp index, there could be a small increase/decrease or temps are the same for the last x years.
“The temp sets as portrayed by Zeke show no significant differences. Why argue? If the uncertainties are factored in, which is a major shortcoming of every graph of this global temp index, there could be a small increase/decrease or temps are the same for the last x years.”
Sorry, but this comment makes absolutely no sense to me. I am especially interested in temperature data set uncertainties, how those uncertainties are estimated, and what assumptions were made in making the estimates. The data sets of the Rest group use nearly all the same data sources for raw temperature data and adjustments to the raw data. BEST uses more station data in recent times, but an algorithm using breakpoints that is similar to what NCDC (GHCN) uses.
I am not so much concerned with the recent years where we have had satellite data and better global coverage with ground stations – even though I am still interested there in getting the estimates of CIs correct. I am more interested in the series prior to that period and again getting CI estimates correct.
I have been doing a considerable amount of analysis of late on the algorithm that Menne and Williams constructed for the GHCN data set. I have had good feedback from Menne and Williams on some basic questions but none on some of my more far-reaching queries. The benchmarking that was done on the Menne algorithm is good in that it was a blind test, but the skill metrics are evaluated on an average temperature anomaly rather than on the algorithm’s ability to get the individual stations right in a simulated series of station data.
http://www1.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/algorithm-uncertainty/williams-menne-thorne-2012.pdf
In my view, a better estimation of skills in benchmarking for several competing algorithms was established by a European group.
http://www2.meteo.uni-bonn.de/mitarbeiter/venema/articles/2011/2011_cost_home_homogenization_benchmark.pdf
My concerns that have thus far gone unanswered have been questions from my analysis of the Menne algorithm that are summarized as:
1. The nearest neighbors in the USHCN data set, after adjusting the TOB series, continue to show about half of the breakpoints that correspond to those in the algorithm adjustments. These breakpoints are less distinct than they were in the TOB series, as measured by the increased CIs for breakpoint dates. I was speculating that this is the result of the algorithm doing another iteration or two in attempts to remove and adjust for those breakpoints.
2. I was questioning how well the algorithm can perform if there were no meta data. The benchmark on simulated data allowed meta data to guide the process. The results of runs where no meta data were used were not reported in the benchmarking paper.
3. I have been able to determine that climate-related breakpoints appear in the USHCN data that are unrelated to the non-climate breakpoints found in difference series with nearest neighbors. I want to use these breakpoints as a measure of how localized climate actually is. The analysis would be more readily done if the Menne algorithm were carried to completion.
It’s pretty clear that BEST — and everyone else — are still setting their error bars unrealistically tight.
Thanks, Zeke, for putting together this interesting compilation.
Peter Tillman:
It’s not that clear to me. What’s your objective criterion for saying “it’s pretty clear”?
Maybe the 30-year interval for the base period is not sliding, but defined as fixed (1961-1990) until we can use 1991-2020. Does anybody know?
Since Mosher will not tell, does anybody else know the mean temperature in Reykjavik in 1940? 3°C or 5°C?
I should have googled first about the base period:
http://www.wmo.int/pages/prog/wcp/ccl/mg/documents/mg2011/CCl-MG-2011-Doc_10_climatenormals1.pdf
Zeke, interesting to complete your chart:
http://img215.imageshack.us/img215/5149/plusuah.png
Carrick (Comment #93572)
“It’s not that clear to me. What’s your objective criterion for saying “it’s pretty clear”?”
Nor me, as I was under the impression that the BEST CIs and methodology for estimating these intervals were not final. That is why I posed my question to Zeke about the CIs. Also of interest would be CIs of temperature anomalies for the various regions of the world. Jeff Condon had a question coming out of his analysis of the BEST methodology about the CIs to which I believe BEST has not yet replied.
The greatly expanded number of stations used by BEST in recent years should decrease the CI range in that time period. The use of the breakpoint algorithm by BEST, if indeed it follows closely on the GHCN algorithm developed by Menne and Williams, apparently does better against some other algorithms as noted in the European benchmarking paper linked above but not as well in getting the local anomalies correct.
I think it also depends on the availability of good meta data. Menne informed me that almost 50 % of the adjustments to the US part of GHCN are derived from meta data and the adjustments are made based on the meta data without any reference to breakpoint analysis. In fact he did not have an available number for the overlap of adjustments from breakpoints with and without meta data.
I would think we would do more comparisons between satellite and surface-based temperature data. Surely the satellite data are more spatially complete and devoid of land use, UHI and microclimate effects. The satellites measure something different from surface temperature, and we have climate models that show that satellite warming should be greater in some areas of the globe. Although not near perfect, it would be of interest to make the necessary adjustments for a better satellite-to-surface temperature comparison.
phi,
I mostly didn’t include those since they are somewhat apples to oranges (surface vs TLT). That said, the discrepancy is particularly noteworthy given that (as far as we can tell) the majority of the difference doesn’t seem to be due to UHI or siting biases in the land record.
Kenneth Fritsch,
The final published paper on methods will differ a bit from the draft, though I don’t think the jackknife procedure for estimating error will change that much. David Brillinger or Robert Rhode are probably the best folks to ask about the specific details.
Zeke,
“That said, the discrepancy is particularly noteworthy given that (as far as we can tell) the majority of the difference doesn’t seem to be due to UHI or siting biases in the land record.”
Must be the well known Cold Spot Effect or a too great correction of the Urban Cold ICEland.
Snark aside, it’s worth remembering that UAH and RSS don’t actually measure temperature, and there are quite a few adjustments and other fun statistics that need to be done along the way to tease out a temperature signal. Some methodological choices produce results much more closely in line with the instrumental record, e.g. http://www.agu.org/journals/ABS/2006/2005JD006798.shtml
Actually, phi, some rather small changes to the assumptions made by UAH and RSS could entirely wipe out the difference between the lines.
Stations also have their small problems:
http://climateaudit.org/2007/02/16/adjusting-ushcn-history/
To my knowledge this bias has never been explained. Ah, yes it has, actually: in Hansen et al. 2001.
http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf
“Never been explained”? You mean outside of the literature on the subject?
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2009.pdf
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-williams2009.pdf
Re: Zeke (Mar 22 10:28),
It’s also worth remembering that nothing actually measures temperature. Absolute thermodynamic temperature is only defined in terms of an ideal gas thermometer. Since such an animal doesn’t exist in the real world, we measure physical properties like the voltage of a thermocouple, the expansion of a liquid, the resistance of a platinum wire or the emission intensity of the wings of the oxygen 60GHz band which we then relate empirically to a scale defined by a number of fixed points like the triple point of water and the melting point of pure zinc metal and a standard measurement method to interpolate between these points. The real problem with RSS and UAH is that the emission altitude is poorly defined in part because the measured bandwidth must be large to have sufficient sensitivity to make the measurement at all.
Zeke, always the killjoy I see. Learn to embrace your inner conspirationist!
More seriously, the “expunged” USHCN V1 data seems to lie smugly on the NOAA servers:
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/daily/
Note the “Last modified” date: 11/07/2002, 12:00:00 AM.
Zeke,
A lot of smoke and few explanations.
1. As Hansen says, we should not correct systematic biases in temperature series.
2. The TOB correction is a joke. Additionally, nothing establishes what the real cause of this correction is.
3. For the Alpine region, Böhm 2001 finds the same orders of magnitude (0.5 °C for the twentieth century), but time of observation has nothing to do with it.
Of course we should, as long as the “raw” series are still available. Which they are.
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/
I’d really like to see your basis for this bold claim.
Well, now, the paper behind the TOB correction:
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/karl-etal1986.pdf
The methodology does not ensure that the problem comes from the time of observation.
You still have to keep in mind that these are min/max thermometers, and there is no reason that a change of observation time should lead to anything other than very small random effects.
The authors have probably been trapped by a correlation between the change of observation hours and site moves.
phi, a change in TOB will lead to major changes in recorded Tmax and Tmin.
See the CA thread. Download Jerry B’s data and do the analysis yourself.
For folks who have seen me make a stink about the jackknife method. That is because I don’t believe it was properly implemented due to the re-weighting between each mode. Subsequent posts at tAV showed that the differences between the current BEST method and a corrected version wouldn’t be terribly large.
I’m glad to hear that it is being addressed.
toto,
“Of course we should, as long as the “raw” series are still available.”
When we are not sure what the source of a bias is, we cannot know whether or not a correction is justified. Consider the case of a rusty scale whose needle moves in jerks. Should we subtract the jumps and reduce the weight indicated at the end of loading?
if phi wants to challenge the need for a tobs correction there are data he can use. skeptics have had access to this since the days of john daly. make the case or admit u are clueless phi.
phi. the tobs correction is specific to geographical regions. the model used for ushcn is not applicable outside conus. the validation of the model is published and solid. swiss alpine is not covered by an empirical correction that depends upon usa latitudes and longitudes. did you even read how the model for correction was developed and tested. no. because if u had u wouldnt write stupid things. off the top of your head without looking what was the standard error of prediction for the model and how was the model tested? dont know? i thought so
steven mosher,
Those who are clueless are those who imagine that changing the TOB of a min/max thermometer can lead to differences in annual averages of more than 0.3 °C. This has no physical basis.
Karl et al. is an empirical study trapped by a correlation.
Those who are really clueless think they can arrive at statistical inferences without knowing statistics. That’s really completely clueless.
phi,
The net effect of the TOB adjustments isn’t 0.3C. Per Menne et al 2009:
“The net effect of the TOB adjustments is to increase the overall trend in maximum temperatures by about 0.015°C per decade (±0.002) and in minimum temperatures by about 0.022°C per decade (±0.002).”
Zeke,
“The net effect of the TOB adjustments isn’t 0.3C.”
Yet this is indicated by your reference:
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2009.pdf
Fig. 4.
Instead of the sly remarks from all participants here, we could be discussing how we determine the CIs for adjusted temperatures over time and space for the surface and satellite temperature data sets. How this is accomplished is not a done deal in my view, and I am always curious how it was done and under what assumptions. This back and forth on conspiracies makes me sick.
As I remember, for surface data, actually attempting to estimate overall CIs for temperature data sets was an exercise that started in the 1990s. The satellite data do have certain unique features that I would think researchers would want to exploit. What are the listed CIs for the satellite data, and how do posters here see those CIs as being accurate or meaningful? Has anyone made an effort to factor in the spatial variation that an ensemble of climate models might estimate for the ratio of surface to troposphere warming?
Kenneth,
It is an interesting topic, which Lucia is more qualified than I to chime in on, but one thing that BEST did well was to attempt to calculate the CI for the whole algorithm. Were the individual series not iteratively re-weighted, it would be an accurate result. On exploring the method, I found it to be a distribution based conclusion, whereas the distribution of the data created a bias in the jackknife formula.
I could be wrong, but really don’t think I am – who does 😀 ?
As far as determining a CI, estimating individual points using the standard deviation of the composite data divided by sqrt(N) would be a good engineering check to see if their method-inclusive error bars made any sense. If the BEST error bars were much narrower than this, we likely have a real problem.
Too bad I don’t have any time.
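For anyone with more time than I have, the check itself is only a few lines. A sketch in Python with invented numbers (the real exercise needs the BEST station anomalies and their published CI for the same month):

import numpy as np

rng = np.random.default_rng(1)
station_anoms = rng.normal(0.5, 0.9, size=200)   # hypothetical station anomalies for one month

naive_se = station_anoms.std(ddof=1) / np.sqrt(station_anoms.size)
published_ci_halfwidth = 0.05                    # placeholder for whatever BEST reports

print(f"naive sd/sqrt(N) standard error: {naive_se:.3f}")
print(f"published CI half-width:         {published_ci_halfwidth:.3f}")

The caveat is that stations are spatially correlated, so sd/sqrt(N) understates the true uncertainty of the spatial mean; it only works as a rough lower-bound sanity check.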
Jeff–
Well… and I have to send mosher some code because I think he’s thinking about that question. But I’ve been a bum focusing on my bots stuff.
But figuring out the CI for the whole algorithm would require looking at all the station data and knowing a lot about that. So it’s an issue Mosher, Zeke, Ron, Troy would know more about than I do!
any assessment of how CRUTEM4 changes the picture?
Kenneth Fritsch, I did not respond because I feel that the global temperature index is just a calculated number of little real relevance. Asking for details of the error bars is akin to asking for details of how much poo emerged in your new-born’s nappy. It is that precise and meaningful.
Lucia,
“But figuring out the CI for the whole algorithm would require looking at all the station data and knowing a lot about that.”
Actually, we have such a large sampling now that I think a CI representing the algorithm and the mean of the noise (not systematic error) might just fall into your realm pretty well. BEST doesn’t attempt to represent systematic station error.
The link below gives the major potential uncertainties in satellite measurements of temperature which Spencer lists as:
(1) Orbit Altitude Effect on LT
(2) Diurnal Drift Effect
(3) Instrument Body Temperature Effect
I would suppose one could question the weighting used to derive the troposphere as defined by the measurement. If the weighting were, for instance, to see too much of the stratosphere, then I would suppose that the ratio of surface to troposphere warming as derived from climate models would be biased in the measurement.
Like the surface stations, the satellites also have counterpart measurements that can be used as a check, in the form of radiosonde (balloon) measurements. I know that radiosonde measurements require a goodly bit of adjustment, but then again so do surface measurements.
I am a bit puzzled why there is not a better cooperative effort by those analyzing surface and satellite (and radio sonde) derived temperatures.
http://www.drroyspencer.com/2010/01/how-the-uah-global-temperatures-are-produced/
“It is an interesting topic, which Lucia is more qualified than I to chime in on, but one thing that BEST did well was to attempt to calculate the CI for the whole algorithm.”
Jeff, I agree with you. I have analyzed the methods others had used to estimate CIs but BEST in my estimation was the BEST. I am not sure what is gained for BEST by only obtaining an adjusted average regional or global temperature and not adjusted station data. I continue to have some questions about the breakpoint algorithms used by BEST and GHCN.
Jeff, you might want to look at the papers I linked above on benchmarking various homogeneity adjustment methods using simulated data. In my view, if one can provide realistic simulated data (here I think the Europeans do a better job) and can then show how well the algorithms recover the truth of the simulated data, the issues of CIs can be properly estimated.
With the GHCN benchmark I was frustrated because they used massaged climate model data for the simulation, did not report the results they obtained without meta data, and only reported the distribution of results using various parameters in their algorithm without reporting variations for the individual stations.
The methodological thing they did right was to run a Monte Carlo to test their algorithm.
Unfortunately I think their implementation is lacking…
One of the assumptions they make is regional scale azimuthal symmetry in their correlation functions. I don’t think this is accurate and think that they at least need to include a two parameter correlation function when implementing their Kriging algorithm. But this starts getting ad hoc in a hurry.
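For concreteness, the kind of thing I mean (just a sketch in Python; the length scales are invented for illustration):

import numpy as np

def corr_isotropic(dx_km, dy_km, L=1000.0):
    # One-parameter, azimuthally symmetric exponential correlation
    return np.exp(-np.hypot(dx_km, dy_km) / L)

def corr_anisotropic(dx_km, dy_km, Lx=1500.0, Ly=700.0):
    # Two-parameter version: separate zonal and meridional length scales
    return np.exp(-np.hypot(dx_km / Lx, dy_km / Ly))

# Same 1000 km separation, east-west versus north-south:
print(corr_isotropic(1000, 0), corr_isotropic(0, 1000))      # identical by construction
print(corr_anisotropic(1000, 0), corr_anisotropic(0, 1000))  # differ once anisotropy is allowed

Whether the extra parameters are worth it is exactly the ad hoc question I mentioned; the point is only that azimuthal symmetry is an assumption you can relax and test.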
Franzke has a nice paper out here on an improved method for modeling atmospheric variability that I’ve been meaning to read (not behind a pay wall, bless JAS).
And then there are empirical orthogonal function approaches (e.g., what NCDC does). Anyway, either of these approaches would allow you to generate more realistic simulated climate temperature fields that you could test your algorithm on.
What they have done (it seems to me) is kind of circular: they’ve fed the same assumptions into what was supposed to be model validation (the big one relating to their assumed structure of the correlation function), and confirmed that given these same assumptions their code works.
Actually that’s a useful thing, but it’s called “model verification” (“does the code work right?”). What they need is a more empirically based Monte Carlo to test their code against for model validation. (Are the physics assumptions that their code is based on correct?)
Please forgive a few simplistic questions. There appears to be an oscillation with about a 30-year wavelength in the old data (I am referring to the first graph with the 10-year moving average) that diminishes in amplitude as we get closer to recent years, where the data seem to follow a smooth line without oscillations. 1. Are these oscillations real? 2. If not, what causes the data to appear that way? If yes, why did they diminish? Is this statement correct: the amplitude of the early oscillations is so large that even if the temperature in recent years is heavily influenced by AGW, the oscillations would still be well discernible if they were there?
I am obviously no climatologist, just curious.
denny,
There were some major volcanoes during both dips, though it’s unclear how responsible they were. Similarly, the error bars (2 standard deviations) are pretty large during that period, so our best understanding is not inconsistent with much less (or more) variability.
I notice that the above does not include a comparison with CRUTEM3. In the case of CRUTEM4, the formula (2NH+SH)/3 has been used to calculate the global average, whereas CRUTEM3 used (NH+SH)/2.
While the new formula is probably more appropriate, since the new formula gives more weight to the NH, and since the upward trend in the NH is higher than in the SH, does this mean that CRUTEM3 has been understating the global temperature trend?
“Actually that’s a useful thing, but it’s called “model verification” (“does the code work right?”). What they need is a more empirically based Monte Carlo to test their code against for model validation. (Are the physics assumptions that their code is based on correct?)”
Carrick, I neglected to add that a simulation of the truth obviously must include introducing into the series a realistic pattern of breakpoints caused by realistic non-climate-related inhomogeneities in the simulated series. That could be a bit of a circular exercise.
It can be readily established that many small non-climate breaks in a data series are much more difficult to find (without becoming over-sensitive and finding false positives) than a few large breaks. Also, a series with heavy climate-related noise makes finding even a few larger non-climate breaks difficult. Using difference series with nearest neighbors reduces some of the climate noise and thus makes finding non-climate breaks less difficult.
Testing a method appears easy until you have to do it.
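A toy illustration of the neighbor-difference point (a Python sketch, synthetic data only; the step size, noise levels and break date are all invented): give two nearby stations the same "climate" wander, put a small non-climatic step in one of them, and compare how visible the step is in the raw series versus the difference series.

import numpy as np

rng = np.random.default_rng(2)
n = 600                                         # months of synthetic data
climate = np.cumsum(rng.normal(0, 0.15, n))     # shared, slowly wandering "climate" signal
station_a = climate + rng.normal(0, 0.3, n)
station_b = climate + rng.normal(0, 0.3, n)     # nearest neighbor, no break
station_a[300:] += 0.4                          # small non-climatic step at month 300

diff = station_a - station_b                    # shared climate signal cancels

print(f"step / sd of raw series:        {0.4 / station_a.std():.2f}")
print(f"step / sd of difference series: {0.4 / diff[:300].std():.2f}")

The step is a small fraction of the variability in the raw series but comparable to it in the difference series, which is why the pairwise approach works at all. The hard part, as I said, is doing this for many small breaks without generating false positives.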
The general assessment, from RC to the GWPF, is that it doesn’t.
“I would suppose one could question the weighting used to derive the troposphere as defined by the measurement.”
dig here.
“dig here.”
Steven, when you get wordy in your replies like this I lose my train of thought.
Kenneth:
Yes, I had thought of that too… I discussed this a few times on different blogs in some detail.
If what you are trying to do is validate the homogenization algorithms, you’d have an independent party generate the tests for the homogenization algorithms, without telling you what assumptions went into it (but it must be empirically based). That would remove much of the circularity, especially if they were given it as a challenge: “see if you can break our code.”
steven mosher,
“no. because if u had u wouldnt write stupid things.”
Ok. You are right on this point. I did a quick check with data from some stations, and I came to the same orders of magnitude as Karl et al. 1986.
Kenneth, I don’t have time to go into it, but I would look at weighting functions. Always.
I am beginning to think that one might as well not bother with any of this reconstruction crap. The whole thing can be distorted in any direction by selecting or deselecting different stations. As long as the array of stations changes throughout the record, one can make anything happen. We do not know if the choice of station location is the cause of what we see.
I think a 50 year run just using stations that have no moves, no changes in method of reading and have undergone no major changes in population size would be better than all this crap.
We want to see change, not the average. Adding more and more stations where there is ‘warming’ screws the whole thing up.
Carrick (Comment #93697)
If what you are trying to do is validate the homogenization algorithms, you’d have an independent party generate the tests for the homogenization algorithms, without telling you what assumptions went into it (but it must be empirically based). That would remove much of the circularity, especially if they were given it as a challenge: “see if you can break our code.”
Carrick, to be fair to the GHCN benchmarking, I think they were aiming at a double blind test as I think you are suggesting. That still does not necessarily make the simulation realistic, but does lessen the problem of circularity.
“Kenneth, I don’t have time to go into it, but I would look at weighting functions. Always.”
Steven, now you are not only being wordy but you are repeating yourself.
I am less concerned about the weighting when I recalled that the lower troposphere (LT2? ) and the upper troposphere (T2?) are both measured by MSU and those two temperature trends apparently correlate well. The upper troposphere should be more likely to be contaminated with the stratosphere. A good skeptic does, however, keep looking.
As an aside, I was wondering whether the MSU measurements lend themselves to breakpoint analyses like the surface temperatures when looking for non-climate breaks. If you have sufficiently long overlap periods with the various satellites it might apply. You have the radiosonde data also. I believe I have read papers by Christy or Spencer where breakpoint analysis was applied to radiosonde data and used for adjustment.
“I think a 50 year run just using stations that have no moves, no changes in method of reading and have undergone no major changes in population size would be better than all this crap.”
And how would you determine there were no changes that were not climate related? And how many stations do you think would be constant over 50 years? It is easier to spew than chew.
toto – it should change the picture a little in that it would bring the data sets into closer alignment over the last decade. They have increased coverage in Siberia, decreased Southern Hemisphere coverage and changed the averaging algorithm. And if you read James Annan’s blog, they have somehow found temperatures for the Indian Ocean that they did not have before.
diogenes (Comment #93717) & Zeke
What is the justification for decreasing SH coverage?
Surely not because it’s not warming so fast down there?? 🙂
DocMartyn, Kenneth Fritsch,
“…would be better than all this crap.”
“And how would you determine there were no changes that were not climate related?”
That’s the whole point of regional studies: to avoid problems of coverage while making it possible to isolate the non-climatic influences.
Zeke, 2 standard deviations is only 95.4%. What is the justification for that? You make that sound like a lot. I wouldn’t stake my money on 95.4%.
What I had not realized, until Zeke started posting these data, was how much faster the land temperatures were rising than the global ones. It looks like the large heat capacity of the oceans is keeping the global average down, but on the land (which is where we actually live), according to these data, a temperature increase of 3 C per doubling of CO2, in real time (not hundreds of years in the future) seems not only plausible, but actually probable.
Construction of the RSS V3.2 Lower-Tropospheric Temperature Dataset from the MSU and AMSU Microwave Sounders
julio,
“…according to these data, a temperature increase of 3 C per doubling of CO2, in real time (not hundreds of years in the future) seems not only plausible, but actually probable.”
We could always return to live in the woods.
http://noconsensus.files.wordpress.com/2009/11/before-and-after1.jpg
http://www.klimanotizen.de/2005.02.13_Landnorth_of_20N_temperature_anomaly.jpg
No pay wall on that one:
Construction of the RSS V3.2 Lower-Tropospheric Temperature Dataset from the MSU and AMSU Microwave Sounders
Look at the pretty weighting curves.
It is difficult to judge by eyeball, but it is obvious that uniform heating would impact negatively on the equatorial zones. But just how much landmass lies outside the tropics? And the proportion of landmass that would benefit from warming is…?
Actually, Steve, something that caught my eye in their article: “Moisture profiles can vary substantially between sondes launched 1 h and 5 min before overpass.”
The other thing that can vary is the humidity sensor used. Humidity sensors are notoriously unreliable (at the resolution they are looking at). I’d have to see two simultaneous launches to see how much reproducibility they can achieve before I’d believe variations on this scale are anything other than noise.
julio: “on the land … a temperature increase of 3 C per doubling of CO2, in real time (not hundreds of years in the future) seems not only plausible, but actually probable.”
According to BEST, land temperatures have increased 2 K over the last 200 years. It doesn’t appear to me to have been disastrous.
HaroldW,
“According to BEST, land temperatures have increased 2 K over the last 200 years. It doesn’t appear to me to have been disastrous.”
Indeed. The issue is that this warming of 2 K is purely imaginary. Remember that this so-called warming can only be read on the thermometers of stations. No proxy keeps track of it.
julio,
I think this argument discounts three things:
1) The difference in surface level humidity between land and ocean can substantially change atmospheric heat and moisture transport.
2) The relatively fast heat transport by the atmosphere between land and ocean (on the order of a 1-2 month lag) means ocean heat uptake influences both land and ocean surface temperatures, almost in ‘realtime’.
3) The current rate of ocean heat uptake (about 0.4 to 0.5 W/m^2 globally averaged, or about 0.57 to 0.71 W/m^2 over the ocean surface) is not very large compared to overall man-made greenhouse forcing (a bit over 3 W/m^2).
The obvious divergence between land and ocean trends post 1980 ( http://www.woodfortrees.org/plot/hadsst2gl/from:1900/mean:12/offset:0.2/plot/best/from:1900/mean:12 ) is puzzling in light of the above three observations.
My guess: perhaps some data problems, combined with a real difference in net “climate sensitivity” between land and ocean. I think it is worth keeping in mind that the biggest observed differences in temperature are at high northern latitudes in winter over land, while at the same time the warming is minimal over Antarctica.
http://www.ssmi.com/data/msu/graphics/TLT/plots/MSU_AMSU_Channel_TLT_Trend_Map_v03_3_1979_2011.png (Remote Sensing systems)
Perhaps the stark north/south divergence (much greater than the land/ocean divergence!) is evidence of a north/south see-saw which has caused exaggerated northern warming, with much less happening in the south.
Finally, there is the high northern latitude divergence between surface measurements and satellite lower tropospheric measurements; Roger Pielke Sr. and coauthors have argued that the divergence is the result of changes in the wintertime surface boundary layer over land.
This is the standard figure I show. The oceans (above -60°S and excluding the Arctic Ocean) seem to warm at a constant rate. There is a substantial divergence between land and ocean as you go northward, the so called “north polar amplification” effect.
(Antarctica mainland by all accounts, including S09, has either not been warming or has even been cooling since 1980.)
One thing to note is that land surface near the ocean is heavily “contaminated” by the marine-land interface, and you would expect land coastal areas (where there isn’t wintertime ice anyway) to track more closely with the marine trend. Since the fraction of land is much smaller in the Southern Hemisphere than in the North, the marine-land interface should be expected to play a larger role there.
The exception to this is the interior of Africa… but there are big data gaps there.
Here is a seasonal trend comparison figure. Note that for mid-latitudes the accelerated warming occurs in spring and fall; it’s in the far north where you see the largest warming in winter.
(However, there are problems with instrumentation up there for wintertime. Not all stations report in the winter, and it’s not clear how much of this effect is real versus changing in geographical sampling area from summer to winter.)
Carrick,
Interesting graphics (except the almost black one is hard to read!). The interior of Africa (based on satellite data) has warmed less than the surrounding oceans, except for the Sahara, which has warmed more.
Your second figure might convey more useful information if the x-axis were the sine of the latitude rather than the latitude. The sine weighted x-axis reflects the relative global area contribution at each latitude (which is much smaller at very high latitudes, of course). Everything north and south of 70 degrees represents only ~6% of the total surface area of the Earth (if I have done my math right!).
SteveF, Carrick,
Thanks for the comments! I’ll mull them over as time allows (see my previous post elsewhere).
Re: SteveF (Mar 26 06:59),
The satellite data extend to 82.5N, but only to 70S latitude. The reason is that most of the Antarctic continent is high enough that the data used to compute TLT are too contaminated by surface emission. The same goes for the Tibetan Plateau and surrounding mountain ranges and the peaks of the Andes in South America. Those are the white areas in your image.
DeWitt Payne (Comment #93796),
I was aware of that. I based the comment on Antarctica on the well documented lack of warming over most of the continent in recent decades.
Fouse says (March 26, 2012 at 7:27 am):
Joe,
Weather forecast models use greenhouse effect.
Dr Roy Spencer has written:
“Regarding those weather forecast models, without a proper handling of the greenhouse effect, they would utterly fail in about 24 hours or so, with unrealistic surface cooling and a marked change in weather systems away from reality.
Do the critics of greenhouse theory ignore tomorrow’s weather forecast because weather forecast models depend critically upon greenhouse effect calculations? I doubt it.”
http://www.drroyspencer.com/2012/02/yes-virginia-the-vacuum-of-space-does-have-a-temperature/#comment-34679
So, the global land/sea CRU4 is out:
http://www.cru.uea.ac.uk/cru/data/temperature/crutem4gl.txt
Upon comparison, this new data set has some serious variance to explain:
http://climateweenie.blogspot.com
1. What causes the large spikes?
2. Why is the variance so much larger than GISS?
3. Why does CRUTem4 diverge so much from GISS in the last decade?
This is not a confidence inspiring effort.
Never mind, I confused CRUTEM with HadCRUT.
Still, the spikes are disconcerting.
Climate Weenie,
Because oceans have a lot of specific heat capacity, they will change less in temperature from month to month than land. I’m not sure the monthly variance in land in CRUTEM4 is particularly unusual compared to the other series, but I should check.
Nope, nothing unusual with the variance in CRUTEM4 vis-a-vis the other land records:
http://i81.photobucket.com/albums/j237/hausfath/BerkeleyGISTempNCDCandCRUTEM4Comparison.png
Re: Zeke (Comment #93851)
You can feel this effect in the “moderate” climate typically found on the coast compared to inland.
Another thing is that heat tends to get moved around a lot more quickly in the atmosphere than in the ocean. The warm air mass that was sitting over the midwest U.S. last month (or week) may be totally different from the cold air that’s there this week. On the other hand, lots (and I mean lots) of the water in the ocean just kind of sits around doing… not much.
Climate Weenie:
I believe the spikes are ENSO events. I am pretty sure they are real.
You also need to compare CRUTEM to GISS land only.
Climate weenie,
The influence of ENSO on global temperatures is pretty clearly illustrated in this graphic from Remote Sensing Systems:
http://www.ssmi.com/data/msu/graphics/tlt/plots/MSU_AMSU_Channel_TLT_Time_Lat_v03_3.png
Each significant el Nino event is visible as a “(” shaped pattern of warmth, which is the warmth of the tropical Pacific propagating gradually to higher latitudes. The “spikes” and “dips” in the global temperature record are in large part due to the ENSO.
10 years of global flatsies:
With HadCRUT4 not yet available on WFT (love the acronym) I took a look at GISTemp from 2002 to 2012 (I think WFT interprets this as 2002.000 to 2012.000, or Jan-Jan) since GISTemp looks closest to HadCRUT4. Bypassing all the usual caveats about cherry picking (it should be a pretty good one in terms of ENSO cycles), the trend is extraordinarily flat: 0.000 to 3 decimals.
Interestingly, in this same time period RSS went down by 0.003/yr and UAH went up by 0.003/yr. Since there is more month to month variation in these, a change of start month might have more effect. Anyway it’s neat.
Atmosphere deltaQ = 0.
Land deltaQ = 0.
Sea surface deltaQ < 0.
I know Lucia does the monthly updates sometimes for the model validation exercise. I'm not after model validation here. I'm after the missing heat. Where is it?
The obvious answer is it's in the deep ocean. How deep?
From (picture 2 in the series)
http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/
the ocean heat content INCREASE has not changed much since the early 1990s, when land and air temperatures were going up.
Admitting to all the fuzz that's in this, where did the heat go?
BillC,
Well, deep oceans (0-2000m) don’t seem to have the same deceleration of warming seen in the 0-700m layer. That said, the data is somewhat newer for deep oceans (only really reliable post-2005), so I’d take it with a slight grain of salt.
http://i81.photobucket.com/albums/j237/hausfath/ScreenShot2012-03-27at103623AM.png
http://i81.photobucket.com/albums/j237/hausfath/ScreenShot2012-03-27at103400AM.png
(Figures via http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/index.html )
By the way, anyone know if the anomalies for global shallow and deep ocean energy content are available anywhere? All I can find on the NODC site are massive grid-cell files, and reconstructing a single global anomaly value from those is a pain.
Zeke,
Re your question I did not find it but I did not spend a lot of time looking.
To sort of answer my own question, building on what you said, I guess I would consider that maybe the mixing is getting more efficient? Something about cold meltwater in the arctic summer maybe. The uncertainty in the deep ocean heat content combined with its heat capacity should be enough to completely overwhelm the surface and atmosphere data (?).
BillC, in any case, you’d expect to see a flat or negative trend about 10% of the time for a 10-year interval, assuming constant trend of 0.2°C/decade overlaid with natural variability.
Put another way, 10 years is too short of an interval to effectively measure secular changes as small as 0.2°C/decade.
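That sort of number is easy to check with a crude Monte Carlo (a Python sketch: the 0.15°C interannual noise level is my assumption, and treating it as white noise understates the ENSO-driven autocorrelation, so don't take the exact percentage too literally):

import numpy as np

rng = np.random.default_rng(3)
n_years, trend = 10, 0.02      # 0.2 C/decade underlying trend
sigma = 0.15                   # assumed interannual noise, deg C (assumption, treated as white)
n_sims = 50_000

years = np.arange(n_years)
flat_or_negative = 0
for _ in range(n_sims):
    series = trend * years + rng.normal(0, sigma, n_years)
    if np.polyfit(years, series, 1)[0] <= 0:    # OLS slope of this 10-year span
        flat_or_negative += 1

print(f"fraction of 10-year spans with flat or negative trend: {flat_or_negative / n_sims:.1%}")

With those assumptions you get something in the ballpark of 10%; a proper treatment would fit a noise model with autocorrelation to the detrended observations rather than assuming white noise.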
Zeke,
3-month world data:
http://data.nodc.noaa.gov/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/DATA/basin/3month/ohc2000m_levitus_climdash_seasonal.csv
and
http://data.nodc.noaa.gov/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/DATA/basin/3month/ohc_levitus_climdash_seasonal.csv
BillC (Comment #93857)
March 27th, 2012 at 11:31 am
If you look at the 0-700 m and 0-2000 m series, the former being contained within the latter (http://imageshack.us/photo/my-images/21/ohc0700and02000.jpg/), it appears to me that it is only very recently that heat has begun to flow from the 0-700 m layer into the 700-2000 m layer, but is now doing so at an increasing rate. The 0-700 m layer is therefore put into a sort of steady state where it gains solar heat energy while it simultaneously loses energy via downward transfer. The diverging nature of the two OHC curves in very recent years would seem to be consistent with that explanation. Thus I see only continuing oceanic heat accumulation as the 0-2000 m data gives the more complete measure of heat accumulation.
Carrick: “BillC, in any case, you’d expect to see a flat or negative trend about 10% of the time for a 10-year interval, assuming constant trend of 0.2°C/decade overlaid with natural variability.”
Do you include excursions caused by volcanos in the natural variability? A volcanic lull like the present should contribute to warming and yet we see temperatures flat or cooling. Of course there is uncertainty in measurements but see what a regular commenter (guess who) wrote at this blog 4 years ago about a somewhat shorter negative trend:
“Anyways, assume that at the end of 2008 we have a negative slope for the 8 year span of 2001-2008. What to make of this?
1. Climate coolists: It’s the end of AGW, AGW is wrong, models are wrong, radiative physics is wrong, we are entering an ice age. That hand is Doyle Brunson’s 10-2.
2. Climate Warmists: it’s the weather. We have no explanation. It happens all the time. It happens all the time. There is no information in an 8 year trend. None.
Is there information in an 8 year trend? Well, one approach to that problem is to bicker about error bars. Another approach is to see in the actual record, let’s say the past 100 years, how often we see a negative trend over an 8 year period. Is it common? Is it rare? If it’s RARE, then information theory tells me it has a HIGH information content. But that’s just my take on things. So I went to look at all 8 year trends from 1900 to 2007 (2008 isn’t done). Here is what you find:
1. Every batch of them (save 1) is associated with volcanic activity. In the early 1900s, in the 60s, in the 70s, in the 80s, in the 90s. If you find an 8 year negative slope in GMST, you had a volcano. This is a good thing. It tells us the science of GW understands things.
2. The SOLE exception is the batch of 8 year negative trends in the mid 40s. Now, until recently GCMs had not been able to match these negative trends (hmm), BUT now we find that the observation record, the SST bucket/inlet problem, may be the cause of this apparent cooling trend.
So, from 1900 to 2000, a time when CO2 was increasing, we find that on rare occasions we will see 8 year trends that are negative. The cause: volcanoes, and bad SST data. Now, look beyond 2000 at the last 8 years. Negative trend. Any volcano? Nope. Any bucket problems? Err, nope. So for the first time in 100 years you have a negative slope that is not correlated with either volcanoes or bad observation data. That looks interesting. Wave your arms and cry weather? That’s not science. That’s like waving your arms and crying weather when it gets warmer. The appeal to ignorance. We have a cooling regime, a cooling regime that is not associated with volcanoes and not associated with data errors. I think that’s interesting and meaningful. Don’t know what it means, but it’s the kind of thing you want to investigate rather than shrug off.”
Was he wrong?
Carrick – I disagree; this is only the case when you exclude ocean heat content from the analysis, because that is the major driver of decadal-scale temperature variations – including OHC should cut the variation frequency down significantly, at least down to some kind of ENSO-mediated cloud albedo effect (assuming no black swans like cosmic rays etc.).
Owen – I more or less agree as long as we are operating within the conventional framework.
Niels – huh? I get “volcanoes”. With that, you have a point, which gets back to my response to Carrick, i.e. we think we know more or less what the sources of natural variability are.
So – it’s in the ocean, unless it’s in the clouds, or the sun ;).
In the northern hemisphere summer what happens to all that melted ice?
Going back to the flatsies all around. I just do not think that is correct. There is a flatness in the atmospheric (surface and tropospheric) temperature profiles, that is true.
There is NOT a flatness in the ocean heat anomaly series, in that the 0-2000 m series, which is the most complete assessment of heat uptake that we currently have available, shows unabated heat uptake from the ARGO float data from ~2003-2012 (this is the period of atmospheric temperature flatness).
There is NOT a flatness in the melting of land ice as both GRACE gravity determinations (http://imageshack.us/photo/my-images/703/greenlandgraceicemelt20.jpg/) and mass accounting methods (http://imageshack.us/photo/my-images/36/mbmlossoflandice2011.jpg/) show continuing melting of land ice.
There is NOT a flatness in Arctic sea ice extent or volume, with a noticeable acceleration in volume loss (http://psc.apl.washington.edu/wordpress/wp-content/uploads/schweiger/ice_volume/BPIOMASIceVolumeAnomalyCurrentV2.png).
I think it is simply wrong to say that warming has halted, or leveled out, because we have a period of flatness in the atmospheric temperature series. It is the ocean that is absorbing the great bulk (90%) of the heat imbalance, and the climate system (oceans, atmosphere, sea ice, land ice) as a whole is clearly continuing to warm as predicted by an enhanced greenhouse effect.
Oh – right, I forgot the Chinese aerosols. When did they start cleaning them up?
Niels – I re-read the quote, ok scratch my ‘huh?’. Anyway who said that? Peter Gleick? Mosher?
Owen, we’re leapfrogging each other, but I agree with you partially. However, when you melt ice the heat goes somewhere, at least compared to a “control” scenario where the same amount of ice is present and less melts (colder temps). I wrote that specifically to avoid conflating it with any kind of ice albedo feedback. So the heat of melting goes into the air and into the water, right?
BillC (Comment #93869)
March 27th, 2012 at 1:22 pm
“However when you melt ice the heat goes somewhere….”
——————————————–
When you melt ice the thermal energy is consumed in the phase change. There is no increase in temperature associated with it.
Owen,
No – energy is not “consumed” unless you are operating outside of the bounds of thermodynamics (nuclear reactions etc.). True, it does not raise the temperature of the melt water from that of the ice – both at 0C – that’s left for the next Joule. I’ll rephrase what I said – when you melt ice, the ocean and atmospheric heat contents should go up (ocean first, then atmosphere through convection and evaporation). The ocean basin volume essentially would get bigger (encroaching into the area formerly occupied by ice) and its heat content should go up. I don’t know if that’s included in the calculation. I guess its heat content per unit volume would go down.
BillC (Comment #93871)
March 27th, 2012 at 1:38 pm
“No – energy is not “consumed” unless you are operating outside of the bounds of thermodynamics (nuclear reactions etc.). True, it does not raise the temperature of the melt water from that of the ice – both at 0C – that’s left for the next Joule. I’ll rephrase what I said – when you melt ice, the ocean and atmospheric heat contents should go up (ocean first, then atmosphere through convection and evaporation).”
———————————————-
I don’t believe you are correct in the above. The energy is completely consumed in the phase change. In the case of Greenland land ice melting, for example, the oceans will actually get colder as near 0 degree C water enters the ocean from the ice sheets. The best you can say is that the heat consumed in melting is stored as a type of latent heat for a period of cooling that causes re-freezing.
Owen,
I think it’s confusing to talk about consumption of energy in a thermodynamic process, but outside that I will generally concede the point.
I do think it is correct to say, as I said in the latter half of my last comment, that the total heat content of the ocean basin must increase; but since you are also adding that mass of water to it, the heat (enthalpy) per unit mass decreases, which is the same as you are saying with the 0C Greenland meltwater.
I don’t know, but I would guess that the OHC measurements are “floating” within the ocean basin – the top 2000 meters is measured from sea level down, even as sea level may be rising w/r/t land. In that sense, the energy “consumed” by the melting ice is in some metaphorical sense transferred to the deep deep ocean (below 2000 m; let’s assume for fun it’s all at 0.00001C), as you add mass to it at the same temperature; this is where the heat went.
The more I think about this, the more I wonder why I haven’t seen this explained before. A minute change in sea level caused by melting ice is a huge heat gain. Seems like you could add that directly to the OHC measurements if the top 700 m or 2000 m or whatever is truly floating “on top of” the overall rise. Combine this with the fact that the maximum density of water is around 4C, and I would think the uncertainties in this would be enough to more than cover the flatsies in the land/air/sea surface record. So I agree with you.
However, I would guess that at the present time these measurement uncertainties are not resolved to the point of saying the flatsies are definitively NOT caused by some sort of cloudiness increase independent of ENSO, or accumulating with each ENSO cycle, or whatever Roy Spencer thinks…
My comments are disappearing and I don’t think they are going into moderation?
I think it is misleading to talk of energy being “consumed” but outside that I’ll concede the point more or less.
Interestingly if the ocean heat content measurements are the “top 2000m” in a rising ocean, having this moving frame of reference sort of relegates the enthalpy gain of the melting ice down below that level.
That makes me wonder why I haven’t seen someone show the increase in “heat content” simply caused by isothermally adding mass to the oceans. E.g. expressing non-steric (is there a word for this?) SLR in Joules.
Depending on the temperature distribution you could even have a case where this caused sea level to fall, given the density maximum at around 4C…
@me “That makes me wonder why I haven’t seen someone show the increase in “heat content” simply caused by isothermally adding mass to the oceans. E.g. expressing non-steric (is there a word for this?) SLR in Joules.”
er, cuz it’s on the order of 0.01 W/m2/mm SLR. nevermind. i guess the earth’s surface isn’t totally covered with ice.
latent heat of melting 334 kJ/kg
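Showing my arithmetic (a sketch; areas rounded, and it assumes the whole 1 mm comes from melted ice rather than thermal expansion):

# Back-of-envelope: energy hidden in 1 mm/yr of sea level rise from melted ice
ocean_area = 3.6e14            # m^2, roughly
earth_area = 5.1e14            # m^2
latent_heat_fusion = 3.34e5    # J/kg
seconds_per_year = 3.156e7

mass = ocean_area * 1e-3 * 1000.0        # kg of meltwater needed for 1 mm of SLR
energy = mass * latent_heat_fusion       # J consumed melting that much ice

flux = energy / (earth_area * seconds_per_year)   # spread over the globe for one year
print(f"{energy:.2e} J  ~  {flux:.3f} W/m^2 per mm/yr of SLR")

which is where the "order of 0.01 W/m2 per mm SLR" above comes from.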
BillC (#93874):
“given the density maximum at around 4C…”
Minor point: density maximum at ~4C is for fresh water; it’s not true of seawater.
Re: HaroldW (Comment #93880)
Not completely unimportant given the context of the comment tho.
(Also not sure why the freshwater/seawater maximum density mixup seems to recirculate from time to time…)
@oliver – mixup continues because of non-experts like me who don’t take the time to look it up. it seemed better than saying nothing about it at all….
BillC,
Sorry, wasn’t meaning at all to ride you for the mixup. I have seen the idea of a possible volume reduction due to warming more than once before, and I am genuinely curious whether it is just an idea that comes up from time to time, or whether there is a common site of origin.
Oliver – in my case I have seen the 4C actually quoted by folks arguing about sea level. Bob Tisdale comes to mind though I could be wrong. Anyway, I spent some time looking at it yesterday and I might come back to this.
I did a bit of google research on the density issue, looking for quantitative info. There’s a helpful page here: http://linkingweatherandclimate.com/ocean/waterdensity.php
Can’t vouch for the accuracy of the source info, but it suggests that seawater with a salinity of ~35 psu has a maximum density at the freezing point.
Earle – thanks. I guess the most interesting thing to me is what happens to the cold meltwater from arctic or antarctic ice melt. I’ve heard it decreases overturning since the surface is now less saline and more buoyant. But OTOH it makes the surface colder than it would be otherwise.
Earle – thanks. The interesting part of this relates to Owen’s comment #93864 above (quoting)
“it appears to me that it is only very recently that heat has begun to flow from the 0-700 m layer into the 700-2000 m layer, but is now doing so at an increasing rate. The 0-700 m layer is therefore put into a sort of steady state where it gains solar heat energy while it simultaneously loses energy via downward transfer”
Looking at the curves this appears prima facie true – but what is the mechanism for this, in the face of the idea that melting ice caps will slow down the thermohaline circulation (since lower-salinity, albeit colder, water is now on top)?
I don’t think it’s simple conduction. Of course it could be the Chinese aerosols…
the linked figure from Murphy et al 2009 in JGR is interesting and gets at what I was thinking about with the flatsies analysis. It’s still couched more in terms of forcings than I would like, but I understand why:
http://img3.imageshack.us/img3/9380/earthenergy.png
I guess Church et al. have updated this. Will look.
From the aforementioned Church et al.:
Table 2. The Earth’s Heat Budget
Component 1972–2008 1993–2008
Shallow ocean (0–700m) 112.6 45.9
Deep ocean (700–3000m) 49.7 20.7
Abyssal ocean (3000 m–bottom) 30.7 12.8
Total ocean storage 193.0 79.4
Glaciers (Latent only) 3.0 1.7
Antarctica (Latent only) 1.4 0.8
Greenland (Latent only) 0.7 0.6
Sea ice 2.5 1.0
Continents 4.7 2.0
Atmosphere 2.0 1.2
Total other storage 14.2 7.3
Total storage 207.2 86.7
Solar + Ozone + (wm)GHGs 1461.7 709.6
Energy consumption 13.0 6.5
Volcanic (GISS) −207.8 −14.6
Outgoing radiation −343.4 −199.3
Total forcing 923.6 502.2
Total forcing †Total storage 716.4 415.5
They apparently use the last line to back into an aerosol estimate.
I guess this is why Spencer etc. look at clouds.
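For concreteness, the "backing into an aerosol estimate" step is just arithmetic on the quoted rows. A minimal sketch using the 1972–2008 column above (same units as in the quoted table):
# 1972-2008 column of the quoted table
solar_ozone_ghg    = 1461.7
energy_consumption = 13.0
volcanic           = -207.8
outgoing           = -343.4
total_storage      = 207.2

total_forcing = solar_ozone_ghg + energy_consumption + volcanic + outgoing
residual = total_forcing - total_storage   # the "Total forcing - Total storage" line

print(round(total_forcing, 1), round(residual, 1))
# 923.5 and 716.3 -- matching the quoted 923.6 and 716.4 to within the rounding of the table entries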
Yup it was me
http://rankexploits.com/musings/2008/do-short-time-series-neglect-energy-at-large-time-scales/#comment-4122
I’m currently looking at negative trends in long station series.
why?
because they need to be explained, if possible.
At the end of the day I may have to throw up my hands and say “hey, anomalies happen!” but it’s not very satisfying.
The models continue to be out of whack with the observations.
Why? There are categories of response.
1. AGW is wrong? not an explanation
2. Models are imprecise? that’s a path for investigation
3. Random shit happens? not an explanation
4. Observations are wrong? that’s a path for investigation.
Anybody doing 1 and 3 ain’t doing science, and they most certainly are not following their nose. You kill for situations like this as a modeller or theorist.
Had a similar discussion with a scientist at Berkeley Earth yesterday. The mismatch between models and observations is really interesting. Ambiguity is precious. Disambiguating is even better. But ambiguity comes first.
Steven Mosher (Comment #93896),
You say…
Then you say…
Then you say…
So, the models are out of whack with the observations BUT you suggest that we cannot even consider AGW being wrong. Is that a good way of “doing science”?
Skeptical.
Let me draw a related example.
We build a model of how radar energy reflects off a surface.
This is straightforward optics. We test that model and we find
that the radar return is not what we predict. Do we conclude that
the laws of optics are wrong?
No. We look to see why the model failed to make perfect predictions. And we see that we neglected to include certain key variables.
You are under the mistaken impression that science proceeds
by “falsification”. That’s a quaint philosophical idealization
of how things operate.
When an experiment fails it is never clear why. You can’t conclude anything from failure. It could be that many things are wrong.
Steven Mosher (Comment #93902),
I agree. However a computer model is not an experiment.
It is accepted that CO2 in a laboratory absorbs radiated heat and re-radiates it. That’s real science done by real experiments. What has not been done by experiment is to prove how CO2 works in the atmosphere. You just can’t build a test chamber that big. The only test chamber we have is the actual atmosphere. The observations we make of the atmosphere are the experimental data.
If you choose to believe the theory of the models and question the actual observations, you are effectively rejecting the experiment because it doesn’t match a pre-conceived notion.
Proof of theory by repeatable experiments is how science proceeds.
“You cant conclude anything from failure.”
I thought Warmerism couldna get any stupider.
I was wrong.
Andrew
Skeptikal:
That’s true in experimental sciences (mainly true anyway), but absolutely false in observational sciences, where controlled experiments are not possible.
How does stellar astrophysics advance, for example?
(Hint: It does.)
Skeptical, we also have something we call “signals of opportunity” in my field. These provide tests of our theoretical frameworks, without having preconceived the experiment.
Same thing happens in seismology: If you had to actually generate a magnitude 8 earthquake to prove your theory, you’d have a big problem doing it first, and secondly, there’d be a lot of fricking (or frakking if you prefer, since we’re discussing earthquakes) people angry at you.
Same applies in climate science. We’re in the middle of a long-term (unintended) experiment in which we double the CO2 in the atmosphere from pre-industrial values.
We also occasionally have a volcanic eruption that has a measurable impact on climate (and this can be used as a “signal of opportunity”).
Other things like that abound (e.g., global response of climate to ENSO events).
The argument is over the reliability of GCM forecasting of the impacts of the changes in climate resulting from our modifications to atmospheric chemistry. Other things, like predicting day-to-day weather, aren’t possible with GCMs, nor were they intended to be. Climate impact on human life isn’t measured in terms of just the average of daily weather, so the argument that GCMs can never be useful unless they can predict e.g. the next ENSO event is specious.
Bottom line, we use controlled experiments when feasible because they are a controlled, efficient mechanism for advancing our knowledge. But other methods are available, when experiment is either too costly, impracticable, or impossible.
Skeptikal (Comment #93905)
“So, the models are out of whack with the observations BUT you suggest that we cannot even consider AGW being wrong. Is that a good way of “doing science”?”
I think you don’t understand what the models are doing. They aren’t models of AGW. They are models of the general circulation of the atmosphere. It’s true that they can estimate the effect of adding GHG’s to the atmosphere, but that is a minor part of their functionality.
Models don’t perfectly model the atmosphere; their imperfection would be much the same whether AGW were correct or not. So as Steve says, AGW is not an explanation.
Skeptical. Theories are never proved. Theorems can be proved. Scientific theories are confirmed or disconfirmed. Not proved or disproved. Proof happens in logic math and geometry.
A scientific theory is a vast interconnected series of “laws” “observations” , logic, and math.
When an experiment or observation is in ‘conflict’ with the theory, there is always a weighing and judging. It’s never clear which element of a theory needs to be adjusted or replaced or amended.
In the end a theory stays in place as long as it is useful.
Are computer runs experiments?? of course they are.
look. I have a model of a wing, and I have a model of the airflow over that wing. Ask Nick Stokes how “realistic” the models of these things are. Anyway, I run my model and it tells me that the airfoil sucks, that it departs at low angle of attack. Is this an experiment?
surely it is. I use the information and make a better wing.
Then my program says the wing doesn’t suck, so I press forward and I do a wind tunnel test with a scale model. Is this an experiment? I get better data than my computer program. Is my computer program wrong? No. Do I reject the theory of flight because my model and the wind tunnel gave different answers?
Nope. Then I do a flight test. Is this an experiment? The flight test gives different answers than the wind tunnel. What do I make of that?
Long ago some philosophers told some fairy tales about how science worked. They tried to tell scientists how they should proceed and they pretended that they were describing how science was actually being done.
Philosophers should have been more scientific about what the “scientific” method is. They should have actually observed a lot of science. They didn’t.
Steven, not sure it was the philosophers at fault, as much as the people setting high-school curricula that didn’t factor in how diverse the approaches to advancing science can be.
As you pointed out, some problems do get solved with numerical experiments… after all, the accuracy of F=ma etc. that it’s based on isn’t in dispute here, and it’s often easier and cheaper to run a numerical code (if it can be made accurate enough) than to set up a complicated experiment to test something. (And yes, you can falsify a theory using numerical experiments, so they have a role in validation of theories in spite of their virtual nature.)
Others get solved by simple thought experiments. Take Einstein’s Relativity, one of the most important developments since Newton’s Principia, based on experimental data sure, but the crucial insights were purely based on theoretical ruminations.
Einstein’s general relativity was thought to be in trouble when it predicted an expanding universe (and Einstein’s “fix” of a cosmological constant wasn’t a fix because it was unstable, like a bowling ball balanced on a cue stick) and because his theory predicted singularities. Now the expansion of the Universe is not in question (and Einstein’s cosmological constant is still required by quantum field theory), and singularities are routinely exploited in astronomy as gravitational lenses.
Theories are useful if they can tell us stuff (make predictions) we wouldn’t know without them. They don’t have to be an exact replication of the universe to be useful, and they don’t require experimental replication before we’re allowed to place any faith in them…. that latter I’m afraid is just a high school science curriculum level FAIL at how science works.
The point is that the climate system is vastly more complicated than radar reflecting off a surface or air flow over a wing. In those cases, we know what the controlling physical processes and equations are and we know the parameter values accurately.
Steven Mosher,
Your philosophic preferences make your science valid. So do mine.
Andrew
Mosher 93916:
Many years ago, my professor of philosophy of science defined science as “what scientists do when they are doing science”, which was his cute but annoying way of rejecting narrow definitions. It was probably also because he was French.
Unfortunately, after our liberation from Karl Popper so that we can do string theory and other non-falsifiable undertakings there was also a move to see science as a social construct, or a language community or some other approach designed to make what goes on at JPL or Bell Labs analytically indistinguishable from hockey, voodoo or lesbian mime theater.
A cultural side-effect appears to be a large number of scientists who are incapable of clearly separating the cognitive and logical imperatives of science from the aesthetic and ideological preferences of their surrounding but pervasive academic culture. (See e.g., Michael Mann).
In the climate arena, we have seen the worst of two philosophical worlds – behavior that is more like (an ideologically monolithic) web of socially constructed units who have an unreasonable expectation that their political preferences will be treated as if they were purely the end product of old-fashioned regular science.
It seems the more they exempt themselves from the rigors of properly scrutinized science in the name of politically correct goals, the more deference they demand from non-scientists. That can’t end well for anybody.
Paul:
Stellar astrophysics is vastly more complicated too.
George Tobin:
Chronology is wrong at the least. Science as a social construct has been around for decades prior to string theory and prior to AGW. And there are very few physicists who take string theory seriously. Almost nobody except the mathematicians pretending to be physicists who work on it.
The problems with AGW as a science don’t have anything to do IMO with your criticisms. It’s an observational science, and people who are good climatologists understand how to make advancements in that field without using what amounts to a high-school version of how science progresses.
Carrick (Comment #93924)
March 29th, 2012 at 8:25 am
“…….people who are good climatologists understand how to make advancements in that field without using what amounts to a high-school version of how science progresses.”
————————————————
Yes!
@Carrick (Comment #93918)
“they don’t require experimental replication before we’re allowed to place any faith in them…. that latter I’m afraid is just a high school science curriculum level FAIL at how science works.”
I absolutely disagree with this! In fact, your own examples about relativity disagree with what you said there.
No one put any “faith” in relativity without some sort of observational or experimental evidence to back it up. That’s how all science works. Anyone can say anything, but without the observations or experiments supporting it, it’s just supposition and hot air. (Why do we believe CO2 can absorb and re-radiate heat? Is it because someone simply said it, or because we have -observational and experimental- evidence showing that?)
What made relativity a scientific conjecture was the fact it -could- be tested. The theory spawned testable hypotheses, which are the basis of science. And then what did we do? We started looking for the evidence that would support or contradict those hypotheses and thus the theory of relativity.
What we observed were singularities. Did people take singularities as -fact- the moment Einstein wrote about General Relativity? No. We looked to find out, we didn’t just assume it as true, -we looked-. And then we did find them, and that supported Einstein’s theory, though not completely as other theories were developed that could explain the phenomenon by other methods.
And then we launched satellites to do Frame Dragging experiments, and used atomic clocks to test relativistic divergence based on differences in speed of one clock against another. All these were experiments that generated hard data. These experiments were informed by the testable hypotheses born from relativity, and without the theory we would not have known to test for these effects.
All that is -exactly- in line with the “high school view” of science you’re maligning. Make a theory and hypotheses, then test them. No faith is ever put in anything, at least not till we have it tested to some degree, before we start assuming it’s true (enough) and thus an informed window into how reality works. And from there, new theories are made and hypotheses generated for testing. Exactly like “high school” science.
The core of science is a methodology. There’s nothing mysterious or black magic-y about it. It is immensely simple and that’s what makes it so powerful. Really, you and Mosher seem to be arguing in circles, saying the very things you are saying you’re not saying.
So, what about computer models? They allow us to -mathematically- test the relationships between variables as -we define them in computer code-. They are no substitute for actual experiments or actual observations (obviously, since the variables have to come from and be constrained by actual data). And computers have plenty of quirks of their own (oh hi, floating-point errors and CPU architecture divergence).
The computer models should NOT be taken as evidence in my view. Not even remotely! They are evidence of nothing other than the validity of the mathematical relationship between the variables as defined to the computer. What they DO do, however, is inform us of where to do more experiments and observations (what satellites to launch), and what data we need to answer the questions we are lacking. Models are not evidence in and of themselves, but they expose where we need to find more evidence and what form that evidence is probably hiding in.
In that way, computer models are the same as the ruminations of Einstein in generating the theory of relativity. On its own, relativity meant nothing, just writing on paper, but it informed experiments which then could add evidence and reality to the theory, giving it greater predictive power and allowing us to take it as true for how we view the universe (and thus what theories and scientific directions we head off in).
Everything was and is supported by experiments and observations in science. Exactly how it’s put in “high school”, because science at its core is, as I said, immensely simple and never changes no matter the scale you’re looking at (or it’s no longer science and is something else. And yes, there are other methods of gaining information about the world besides the particular methodology called science–it is one of many ways).
View everything in its proper place and give it all its proper weight and that’ll make the discussion informed and people better scientists.
@Skeptical,
I agree with all you said except “What has not been done by experiment is to prove how CO2 works in the atmosphere. You just can’t build a test chamber that big.”
A clever scientist will find a way. The fact that people are spending so much time on elaborate math games instead of finding a way to cleverly observe the data kinda makes me sad. It’s not a one-or-the-other deal, as computer models inform the experiments so we can do them efficiently and effectively. But we do need more experiments, and I haven’t really seen people focused on figuring out brilliant ways to conduct them. And there must certainly be ways to make “micro atmospheric” chambers to do these experiments within, and generate the hard information we need to extrapolate to our whole planet with greater confidence.
The truly genius scientists of history found ways to test or observe things no one else thought to or could think of how. Who will step up to the plate to do that for modern climate science?
GED,
Climate science, including the models used to explain and predict, is based on measurement as well as on theory: many, many types of accurate measurements: spectral measurements of CO2 and the measured pressure dependence of the rotational-vibrational absorption bands, measurements of CO2 levels in the atmosphere, measurements of vertical temperature and humidity profiles for the atmosphere, measurements of temperature and salinity profiles for the oceans, satellite measurements of TSI, of upwelling thermal IR, of albedo-reflected solar radiation, and a host of other elegant and sophisticated physical measurements. The climate models explain and predict – it’s just that the length of the experiment needed to fully test the models is far longer than most of us would like. Nonetheless, great progress is being made, and it is the models that incorporate available data into theoretical frameworks that are driving the process – they are critically important. As predictions fail, we learn. As predictions are verified, we learn.
Ged, it’s easy to show why you’re wrong.
Let’s use “experiment” to describe any procedure taken to test a theory or hypothesis.
If you have a theory or hypothesis, you can falsify it by
1) Demonstrating that it is logically inconsistent. This is a thought experiment, and since you can use it to falsify a hypothesis or theory, it’s a valid type of scientific experiment.
2) By running a numerical code that makes fewer assumptions to show that one or more assumptions made is not valid. This is a numerical experiment, and since you can also use it to falsify a hypothesis or theory, it’s also a valid type of scientific experiment.
3) By experimental manipulation of a physical quantity and verifying that the theory correctly predicts the response of the system. This is the classical and best way to test a physical hypothesis or theory, but it’s by no means the only way.
4) By waiting for a chance occurrence (e.g., volcanic eruption, distant supernova, earthquake, etc.) or a secular change in a forcing that you can measure but don’t directly control, and verifying that the response of the system is consistent with that which would be derived from the hypothesis or theory.
In empirical science, ultimately validation of a physical theory or hypothesis always involves physical measurement of some sort. I think we can agree on that…
And I never claimed otherwise. (Other than to say once you’ve validated it well enough, enough is enough. We don’t need to start out testing anew whether energy is conserved or F=ma before we can proceed beyond that point.)
The new temperature data from CRUTEM4 has added 628 new weather stations, including, strangely enough, over 50 from Kyrgyzstan. Most of these stations are at far northern latitudes. There are none in the Southern Hemisphere. Since all records show more warming in the Arctic than anywhere else, it is hardly surprising that CRUTEM4 anomalies have now increased a bit over CRUTEM3. This is even more obvious if you compare average temperatures – see: http://clivebest.com/blog/?p=3528 The sampling has become more biased toward northern latitudes.
The MET office press release puts 2010 as being 0.1 degree warmer than 1998. However this is a meaningless statement as the error on each year is ~0.1 deg.C.
Clive,
Typo: Met Office table puts 2010 0.01 warmer than 1998 (one tenth of the uncertainty). I complained to them a week ago about this. No reply yet.
Carrick
“Stellar astrophysics is vastly more complicated too”.
Yes indeed – I used to work in this area. Much more complicated than radar reflection or aircraft wings. But still simpler than climate because although you are solving 3D PDEs at very large Re, it is essentially a closed system, hardly influenced by external factors.
And an interesting difference with climate scientists is that most solar physicists will quite happily admit that they don’t know why the sun has an 11-year cycle. They just know it’s something to do with the magnetic fields and differential rotation.
Paul,
Quite correct. 2010 is supposedly 0.01 ± 0.1 degrees warmer than 1998. Madness!
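A crude way to put a number on the madness, assuming the quoted ~0.1 is a 1-sigma uncertainty on each annual value and the two years are independent (my reading of the error bar, not necessarily theirs):
from math import sqrt

diff = 0.01                   # quoted 2010-minus-1998 difference, deg C
sigma = 0.1                   # assumed 1-sigma uncertainty on each annual value, deg C
sigma_diff = sqrt(2) * sigma  # uncertainty of the difference of two independent values

print(round(diff / sigma_diff, 2))  # ~0.07 standard deviations: nowhere near significant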
Paul:
Cool! My very first publication was in this area (it related to the more limited problem of the effect of internal rotation and the potential impact of that on the then unsolved solar neutrino problem).
I think you’re overgeneralizing here. 😉
There are some who will always pick the answer that benefits their perceptions of “best policy”, but by and large the people I’ve had interchanges with have been pretty open about the gray areas in climate science.
@ Ged 93931
“A clever scientist will find a way. The fact people are wasting so much time and elaborate math games instead of finding a way to cleverly observe the data kinda makes me sad.”
A couple things: 1) Maybe there isn’t an INSTEAD OF. The fact that so much of this is done at all, may reflect the IPCC/AR cycle. Absent that, those folks might not be doing climate.
2) Earlier on this thread, the discussion of the satellite calibrations etc. makes me a bit more confident that clever scientists are finding a way.
Speaking of astrophysics- see this article about earthshine:
http://www.scientificamerican.com/podcast/episode.cfm?id=earthshine-sets-example-for-life-li-12-03-06
I was musing on Curry’s blog last week that some astrophysicist needs to come up with a way to measure sea surface temperatures from the 1940s based on the reflected earthshine from some object 35 light years away….
Carrick said, ” people I’ve had interchanges with have been pretty open about the gray areas in climate science.” Finally, more of the gray areas are being openly explored.
I noticed that your land use impact appears to be pretty small. That is a big gray area, I think. In general, crop albedo is slightly higher than natural albedo. The height of the crop, though, makes a difference. Trees absorb more light than grasses, but trees have a greater cooling impact on the surface. That impacts surface station readings and true surface temperatures as well.
Crop evapo-transpiration also has a greater impact in high latitudes than in the lower latitudes.
Replacing natural brush and trees with food crops would have a much greater impact in the high latitudes than in the tropics. The temperature data tends to support a greater land use impact in the higher northern latitudes than what is estimated.
“Crop albedo geoengineering has been tested in HadCM3 by Ridgwell et al. [2009] and Singarayer et al. [2009] and in CAM 3.0 by Doughty et al. [2011]. We follow a methodology similar to that of both Ridgwell et al. [2009] and Singarayer et al. [2009], apart from using the MOSES 1 land surface scheme rather than MOSES 2.1, used in these studies, in order to provide consistency with the other simulations presented in this paper. We adopt the same definition of crop extent, with the crop area being defined as C3 or C4 grasses that are within human-controlled or disturbed areas as defined by the Wilson and Henderson-Sellers [1985] land-type data set. The total area covered by crops is 15.7 × 106 km2, 3.1% of the Earth’s surface area or 10.6% of the land area (Figure 1a). To these areas we apply an increase in snow-free albedo dependent on the fractional crop coverage in the grid cell.”
Back-calculating from the geoengineering studies, the percentage of warming attributable to land use since the agricultural revolution is likely greater than what is used in your chart.
I think Mosher has some information on the impact on surface station measurement that could be teased out.
Carrick:
With respect to chronology, philosophy of science as a field of study has largely disappeared in favor of sociological or policy-oriented studies of science. How much earlier the notion of science as social construct existed is irrelevant to my previous post. That notion did not gain prevalence nor did the uniformly enforced political zeitgeist on-campus come into being until more recent times.
IMO the problem with climate science is EXACTLY that some political and ideological inclinations are so pervasive that turning an observational science into a normative adjunct to a pre-conceived policy agenda has occurred to a large extent with little protest or resistance within the profession. The pathetic self-sacrifice of a journal editor for abetting heresy, the email machinations of the inner circle in Climategate etc all suggest an intellectual climate that is not conducive to free inquiry.
It is also tiresome to be told that any criticism of the models reflects not the limitations of the models but of the critic’s failure to move beyond simplistic notions like falsifiability.
Whatever the modelers’ intent or understanding, the models have been expressly sold to the public as predictive tools incorporating the highest arts of climate science, such that we can bet our collective future on them – unless they don’t appear to be performing well, in which case they are not simplistic theorems, you rubes, but are of a nature that can only be grasped by the enlightened ones.
We go from (simplistic) empirical verifiability to a kind of gnosticism in a single bound. If you don’t accept AGW because it turns out it is not based on anything as reliable, reducible and testable as good old-fashioned high school science, you still have to accept it because the illuminati say so. That unappealing sales pitch is less a reflection of the public’s lack of scientific sophistication than of the illuminati’s limited grasp of the origins of their own certainties.
“Ged (Comment #93931)
March 29th, 2012 at 11:22 am
@Skeptical,
I agree with all you said except “What has not been done by experiment is to prove how CO2 works in the atmosphere. You just can’t build a test chamber that big.”
Ged.
We know exactly how CO2 works in the atmosphere. The first experiments were done over a century ago. We know precisely enough to build working devices based on “AGW” science.
There is no need to build a chamber as big as the atmosphere
@Mosher,
We know some principles, but not all. And no chamber as big as the atmosphere would be needed. In fact, it could be something extremely small and fit in the palm of your hand, as long as it was able to approximate the features of the Earth system. And I was not talking about CO2, but the entire atmospheric system, as well as including land and ice and “clouds” (as much as can be approximated in a micro system); that is climatology. CO2 is not climatology, it is only one minor part of a vast field.
But even if we speak of CO2, I’d love to see those working devices based on “AGW” science built.
well we have radiosondes.
can BEST tackle the radiosonde data after it does the sea surface temps?
BillC,
‘sonde data will have to compete with ocean heat content, satellite records, precip data, and all the other cool stuff to analyze, but let’s not get ahead of ourselves :-p
don’t bother with the satellites? Seriously, I would think the best allocation of effort would be stuff that can bridge the gaps between paleo and present day, like the land surface temps?
BillC,
Reconciling satellite and surface obs is a rather interesting subject, since station siting and UHI don’t seem to adequately explain it.
Zeke,
It does not seem so at first glance, at very first glance.
Parallel divergences of proxies and TLT tend precisely to call into question UHI and local disturbances.
Ged (Comment #93962)
Right, and precipitation and snow, and stratification, and rotation, and day and night, and the tides, and the overturning circulation, and all time scales from minutes to thousands of years, and … piece of cake!
Mosher said it:
Ged.
We know exactly how CO2 works in the atmosphere. The first experiments were done over a century ago. We know precisely enough to build working devices based on “AGW” science.
There is no need to build a chamber as big as the atmosphere
so the flat-lining since 1998 is because…….?
diogenes.
“so the flat-lining since 1998 is because…….?”
http://www.woodfortrees.org/plot/best/from:1998/to:2011/plot/best/from:1998/to:2011/trend/plot/none
Well, it’s not a “flat” line.
1. You have to calculate the uncertainty due to
a) your selection of the starting point
b) auto-correlation
When you do that you’ll find a range of estimates (a sketch of that calculation is below).
2. CO2 is not the only forcing, so you have to look at net forcings.
3. The fact that we understand how CO2 works (radiative physics) says NOTHING about our state of understanding about
a) other forcings
b) internal forcings (natural variation)
We understand how CO2 and GHGs work. That’s been well known for a while. What is less understood is how the system evolves over time. For that you’ve got big gaps of understanding. Big data gaps.
Let me put it this way. If it got cold for another 100 years, that still would not change the physics of how IR and CO2 interact. What it would indicate is that there are other processes that we don’t understand as well as we understand GHGs.
Get that straight. The uncertainty in GCMs doesn’t flow from a lack of understanding about the radiative physics of GHGs. It stems from a lack of understanding about other factors: clouds, aerosols, land processes, oceanic cycles. THAT’S the physics where the uncertainty lies. Get it.
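Here is a sketch of the kind of trend-plus-uncertainty calculation point 1 refers to, using the common lag-1 “effective sample size” adjustment for autocorrelation. The anomaly series below is synthetic, for illustration only; it is not BEST or any other real record:
import numpy as np

def trend_with_ar1_ci(years, anoms):
    # OLS trend (per decade) with a lag-1 autocorrelation adjustment to its uncertainty
    years = np.asarray(years, dtype=float)
    anoms = np.asarray(anoms, dtype=float)
    x = years - years.mean()                        # center years for a well-conditioned fit
    slope, intercept = np.polyfit(x, anoms, 1)
    resid = anoms - (slope * x + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation of residuals
    n = len(anoms)
    n_eff = max(n * (1 - r1) / (1 + r1), 3.0)       # effective sample size (guarded for tiny values)
    se = np.sqrt(np.sum(resid**2) / (n_eff - 2)) / np.sqrt(np.sum(x**2))
    return 10 * slope, 10 * 1.96 * se               # trend and ~95% half-width, deg C / decade

# Synthetic 1998-2011 annual anomalies: placeholder numbers, NOT a real record
rng = np.random.default_rng(0)
years = np.arange(1998, 2012)
anoms = 0.01 * (years - 1998) + rng.normal(0.0, 0.1, years.size)

trend, half_width = trend_with_ar1_ci(years, anoms)
print(f"trend = {trend:.2f} +/- {half_width:.2f} C/decade (95% CI, AR(1)-adjusted)")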
diogenes:
Could just be natural variability raising its ugly head. Not really been flat-lined in any useful sense since 1998 though. More like 2002, and it’s not even really flat-lined since then (just a smaller positive slope).
If you had a set of dice, and you rolled a “1” twice in a row, would you consider the dice loaded?
I’m guessing the answer is “no”.
The odds of having a 10-year period with no warming, given the historically observed natural variability and a 0.2°C/decade warming trend, are actually better (about 10%) than rolling two ones (which is about 3%).
I’m not saying this is the case; what I’m saying is that you have to wait longer than 10 years to rule out natural variability as an explanation for seeing a 10-year period of limited growth.
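Carrick’s comparison is easy to sanity-check with a toy simulation. The noise level below (0.15 °C interannual standard deviation, uncorrelated) is my assumption, not his, so the simulated percentage will only land in the same ballpark as his ~10%; the dice number (1/36 ≈ 3%) is exact:
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(10.0)               # ten years
xc = x - x.mean()
trend = 0.02                      # deg C per year, i.e. 0.2 C/decade
noise_sd = 0.15                   # assumed uncorrelated interannual noise, deg C (my choice)

n_trials = 100_000
noise = rng.normal(0.0, noise_sd, (n_trials, x.size))
series = trend * x + noise        # each row is one simulated decade of anomalies
slopes = series @ xc / (xc @ xc)  # OLS slope of every simulated decade at once

print("P(10-yr OLS trend <= 0) ~", round(float(np.mean(slopes <= 0)), 3))
print("P(two ones in a row)    =", round(1 / 36, 3))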
GED:
Read about them here.
Well, it’s good to see that Steve Mosher admits that there is more complexity than the CO2 narrative, although he does his best to hide such sophistication. The other day, just for amusement, I ran a Woodfortrees plot of a 17-year smoothed version of HADCRUT since 1850… and the final segment was looking as if it drooped.
It is not a flat line, it is curving nicely though 🙂
http://i122.photobucket.com/albums/o252/captdallas2/maunaloavsouthernoceans.png
Mosher, you know that adding additional stations will change the overall mean for a data set that does not include them. The whole thing is bollocks, and deliberately so.
I note also that the past is getting still cooler with every advancement.
Why you offer support for the CRUTEM4 changes is beyond me.
I am glad that Mosher finally admits that he doesn’t understand because, most of the time, he implies that he does understand all the interactions in the Earth’s climatic system. Glad to get that cleared up. Statements that amount to saying that the planet would be warming if it were not for factors that mean it is not warming do not strike me as great advertisements for science. I would not put a penny in the slot for a statement like that.
“Mosher, you know that adding additional stations will change the overall mean for a data set that does not include them. The whole thing is bollocks, and deliberately so.”
Dataset 1: 1,2,1,2,1,2,1,2,1,2
mean: 1.5
Adding stations: 0,3,0,3,0,3
########
Question: do I know that adding stations will change the mean?
Answer: Adding stations can raise, lower, or keep the mean the same. When it comes to temperature series, and adding them to the mean… well, let’s see. I’ve yet to see an addition of stations that caused a significant change to the mean. Push a little up there, drop a bit down there.
It’s still warming. Now, if adding 400 stations to the Arctic moves the mean from 0.5C to 0.55C, do I think that’s significant? NOPE. Is switching around meaningless ‘warmest year’ records significant? Nope.
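A minimal check of the answer above, using Mosher’s own toy numbers (plain Python, nothing climate-specific): the mean stays put while the spread roughly doubles.
from statistics import mean, stdev

dataset1 = [1, 2] * 5        # the original ten "stations"
added    = [0, 3] * 3        # the six added "stations"
combined = dataset1 + added

print(mean(dataset1), mean(combined))                        # 1.5 1.5 -- the mean doesn't move
print(round(stdev(dataset1), 2), round(stdev(combined), 2))  # 0.53 vs 1.03 -- the spread roughly doubles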
Mosh: “The uncertainty in GCMs doesnt flow from a lack of understanding about the radiative physics of GHGs. It stems from a lack of understanding about other factors, clouds, aerosols, land processes, oceanic cycles. THATS the physics where the uncertainty lies. ”
Confessing that I wouldn’t know what to do with it, is there a list of all the suspected influences and interactions? If there is such a list, has anyone taken a shot at quantifying them? Even as ranges? percentages?
The interactions among these factors must also have uncertainties of their own.
I guess I may be asking the impossible, but of all factors which may affect “temperature” what might be the percentage, and/or magnitude of those well understood?
Mosh: “Get that straight. The uncertainty in GCMs doesnt flow from a lack of understanding about the radiative physics of GHGs. It stems from a lack of understanding about other factors, clouds, aerosols, land processes, oceanic cycles. THATS the physics where the uncertainty lies. get it.”
Then the forcing directly attributable to a doubling of CO2 is exactly what?
Mosh: further to the above. Although “affect” looks right, it should have been “effect.” The question was what are the components which sum to temperature.
All of them? Well, not all of them, but the ones whose contribution rises or sinks away from de minimis.
Where would one look for this?
‘diogenes (Comment #93995)
March 31st, 2012 at 4:40 pm
I am glad that Mosher finally admits that he doesn’t understand because, most of the time, he implies that he does understand all the interactions in the Earth’s climatic system
##############
HUH? You are particularly stupid. You can go back through 5 years of comments and find that my position is consistent.
I do not, and science does not, understand all the interactions.
But as an engineer I do know that adding GHGs to the atmosphere will have an effect. That effect will not be a cooling effect. It will be a warming effect.
Dataset 1: 1,2,1,2,1,2,1,2,1,2
mean: 1.5
Adding stations: 0,3,0,3,0,3
“Dataset 1: 1,2,1,2,1,2,1,2,1,2
mean: 1.5”
Stdev: 0.53
SE: 0.33
Dataset 1: 1,2,1,2,1,2,1,2,1,2,0,3,0,3,0,3
mean: 1.5
Stdev: 1.03
SE: 0.50
I think you guys lose sight of the more important issues with the personal exchanges you engage in. A very critical issue in using temperature data is the CIs that we can place on it.
Determining those CIs is no simple or straightforward task. Obviously adding stations and spatial coverage as BEST has done should decrease the CIs, providing the added data, unlike Mosher’s proffer, has reasonable variability.
Although I am not sure that the CIs for temperatures are properly estimated to date, I suspect our temperature data is much more reliably constructed in the last few decades than in earlier times. We have both surface and satellite data (and even radio sondes) to compare over that period. I would hope more work is done in those comparisons.
An important issue with BEST’s more complete station coverage is how far back in time that coverage goes. When estimating the longer-term temperature trends we need reliable temperature data going back in time. In the meantime we need to at least recognize the importance of the CIs for these trends and make efforts to estimate those CIs properly.
Kenneth Fritsch:
I agree… they get lost in minutiae and forget the big picture, which is, in empirical science: how well do you know the answer (uncertainty limits), and how well can you justify the method you used to obtain your bounds?
(On a side note, with the data currently hovering at the 95% CL wrt models, it’s amusing to watch otherwise rational people hem and haw about the relative importance of CIs.)
“You can go back through 5 years of comments and find that my position is consistent.”
Yes, you consistently sell AGW speculation despite its obvious flaws.
Andrew