Get a load of this hilarious exchange!
# Chip Knappenberger Says: 25 March 2009 at 8:04
Gavin,
It is a bit more complicated than one-liners can capture.
…. In other words, something else is acting to slow the trend over the past decade or so–perhaps it is something like PDO, or perhaps the models are too sensitive to the forcing increases, or perhaps it is something else. But, as Pat’s testimony shows, recent temperature trends are pushing to lower limits of model expectations.
-Chip[Response: So are you going to sign the ad? As for the recent testimony, you know as well as I that the new graph you made is just as affected by end point effects as the standard ‘global warming has stopped’ nonsense. While purporting to show 15 independent points, they are not independent at all and will move up and down as a whole depending on the last point. Plot it for 2007, or using the GISTEMP data for instance. If a conclusion depends on one point and one specific data set then it’s still a cherry-pick and one would be foolish to draw conclusions. – gavin]
Hhmmm…. How wrong can an inline comment response be?
- Why does it seem Gavin is missing the point about the effect of varying the start date on the results in Pat Michaels’s testimony to Congress?
The point is not that the trends computed over a range of start years are independent of each other. The point is that the result is not a short-term artifact.
That is, the graph responds to Gavin’s constant (and rather obtuse) complaint that various analyses rely “…on both a feigned ignorance of the statistics of short periods … .” - Seems like Gavin is missing the point that Pat and Chip’s analysis also responds to his gripe that some analyses he has criticized require “… cherry-picking the start year, had the period been “exactly a decade” or 12 years then all the trends are positive.”
Someone’s analysis somewhere may rely on cherry picking. But Pat and Chip’s graph is designed to specifically demonstrate in a way obvious to a Congressional representative with minimal scientific training that their result is… what is that word… “robust” to start date.
(Oddly enough, currently, to make the models look “good” using the test Chip and Pat use and relying on HadCRUT, it’s model supporters who have to cherry-pick years. The same can be said if you apply the method discussed in Santer et al. 2008: for all but a few choices of start years, the results indicate the model mean projected trend is biased high.
Mind you, if Gavin wants to criticize the result of either the Michaels/Knappenberger method of analysis or my less prominent one, that’s fine. However, it’s obvious neither result was obtained by cherry-picking the start date.)
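The start-date robustness check being argued about above can be sketched in a few lines. This is my own illustration with synthetic data, not Michaels and Knappenberger’s actual code or the HadCRUT series: for each candidate start year, fit an ordinary least-squares trend through the most recent observation and see whether the conclusion flips.

```python
# Sketch of a start-date robustness check (illustrative only; the
# function name and the synthetic series are my own, not M&K's).
import numpy as np

def trends_by_start_year(years, temps, start_years):
    """Return {start_year: OLS slope in deg C per decade} for each window
    running from start_year through the last available observation."""
    out = {}
    for y0 in start_years:
        mask = years >= y0
        # np.polyfit degree-1 slope is deg C per year; scale to per decade
        slope = np.polyfit(years[mask], temps[mask], 1)[0] * 10.0
        out[y0] = slope
    return out

# Entirely synthetic example series (NOT real temperature data): a
# 0.2 C/decade trend plus noise, just to show the mechanics.
rng = np.random.default_rng(0)
years = np.arange(1979, 2009)
temps = 0.02 * (years - 1979) + rng.normal(0, 0.1, years.size)

result = trends_by_start_year(years, temps, range(1990, 2002))
print({y: round(s, 3) for y, s in result.items()})
```

If the sign of the slope (or the model-consistency verdict) changes with the choice of start year, the result is start-date sensitive; if it holds across all start years, it is “robust” in the sense argued above.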
- Finally: Evidently, Gavin seems to have a gripe with .. OMG….the idea someone might actually use the most recent available data.
Mind you, it is true that if we compare models to temperature data during periods where the modelers knew the observed temperatures before running their models, and modelers also had some flexibility to select from a range of possible forcings, the models tend to “predict” the already known temperatures fairly well.
Under what circumstances do model predictions look bad?
Only when the comparisons include a moderate amount of data that came in after the simulations were run or after joint scenarios used to drive models were frozen.
But how is including the most recent data cherry picking? Is 2007 such a special year we must freeze analyses to end there? Do we get to use 2008 in 2010? Or will 2009 magically become the correct end year then?
Seems to me the cure for Gavin’s concern about the results being dominated by the analyst decision to use the most recent data is simple: Wait a while. New data will trickle in. If the models are correct and the recent downtrend is some super-mega outlier causing a once in a blue moon excursion, the temperatures will jump up. Then Gavin won’t need to flail around wildly explaining the models look good if we only ignore recent observations!
In the meantime, complaining that people do incorporate the most recent data into their analyses seems ~~peculiar~~ idiosyncratic. Trying to give authority to the ~~bizarre~~ unusual notion in a snarky tone strikes me as ~~idiotic~~ unconvincing.
Well… that’s enough fun for today. I’m going to go exercise now.
Read the rest of the thread. There’s another one here (#192):
.
.
1. So if there is no hot spot then all of meteorology is wrong??? Now that’s an interesting claim.
.
2. Even if #1 were true (which it isn’t), we conclude that if the observations don’t match the theory, as long as the theory’s been around long enough, it’s the observations that are wrong??? I guess Einstein should have concluded that the discrepancy in the advance of Mercury’s perihelion was an observational problem.
Ryan O [12503]
As Hansen’s attack dog progressively corners himself, the snarling will continue to get more vicious…. Normal behaviour, Pavlov tells us.
And if need be, then all of meteorology is wrong. And some pesky verifiable data getting in the way of the theory? That wouldn’t do at all, would it now? Climate science, I tell you….
RyanO–
It is true that when someone thinks a theory is plausible and based on either very few assumptions, or the assumptions hold very well, they will double and triple check data before tossing the theory.
.
That said… if the atmosphere’s temperature profile were always stuck to the moist adiabat, I could see where someone might think the hot spot must automatically follow, and we’d know how hot the spot would be with no possible expectation of anything else happening. But the atmosphere’s temperature profile doesn’t always follow the moist adiabat. Sometimes the atmosphere is convectively unstable. (At this time of the year, meteorologists sometimes suggest this can even result in tornados.) Sometimes the atmosphere is convectively stable.
So, presumably, figuring out the exact shape of the vertical profile involves not only understanding the moist adiabat, but knowing why the temperature profile on earth isn’t constantly in perfect accord with that profile. Maybe someday those saying “moist adiabat” will explain these finer points.
While what you wrote is true, what Gavin wrote is not. He dismisses the linked post by implying that the modeled tropospheric response must exist lest our entire understanding of the moist adiabat be overturned.
.
This is not true at all. Our understanding of the moist adiabat can be correct and the models simply get the CO2 forcing wrong, or the water feedback wrong, or both – not to mention the surface temperature measurements could be incorrect.
.
The binary choice Gavin presents (hotspot must exist or our basic understanding of meteorology is wrong) is simply a false dilemma.
In another situation, Schmidt’s blog made strong allegations against Courtillot for failing to use up-to-date data when the “significance” of a result deteriorated using up-to-date data. See here where Pierrehumbert asserted at RC that the use of obsolete data was “ugly” and that its use had an “illegitimate reason”:
Of course, Santer, Schmidt et al. 2008 used obsolete data (ending in 1999) purporting to show that there was no statistically “significant” difference between model and trend for any TTT dataset – a result that was reversed for UAH T2 and T2LT using 2008 (or for that matter 2007) data, which the authors admitted to be available at the time of submission.
Ross and I submitted an article reporting these results to IJC by the way and are awaiting a “Special Decision”.
RyanO–
Also true.
Should the hot spot fail to materialize, I doubt this would cause any meteorologists to doubt that convective instability can trigger processes that result in tornados.
They would certainly look for other explanations first! Since there would be plenty of candidates, they would look a long time before doubting that high-density fluids lying over warm low-density fluids are unstable.
You’re a little confused, Ryan. Not to get back into the morass that was the fingerprint debate, but the moist adiabatic rate has been observed and confirmed again and again. That’s what Gavin is referring to. Yes, there are some poor long term observations that don’t match the theory, but it is erroneous to say that observations are at odds with theory.
SteveM–
Yes. It is rather standard to use the latest reliable data when testing a model. So, criticizing Michaels and Knappenberger for using the most recent data, and suggesting the results would be different if we used the magic end point of 2007, is rather silly.
If I were to adopt the hifalutin’ language of Gavin, I might suggest his response was based on “a feigned ignorance” of the notion that ignoring recent data is frowned on by scientists, engineers, and just about anyone who likes to test a hypothesis against data.
That said, I understand strictly speaking, math doesn’t require comparisons between results and data. Sometimes those whose formal training was mathematics just don’t get why others are so concerned about empirical grounding for model predictions.
This is even more confused. How would getting the CO2 forcing wrong affect the tropical troposphere? Ditto for WV feedback?
Okay yes, the surface temps could be incorrect. That’s the only argument that even comes close to making sense–except that you are relying on a worse dataset to invalidate a better one. So maybe it doesn’t make that much sense.
Boris–
Could you clarify what you mean by the moist adiabat has been observed over and over? Of course the moist adiabat exists. But that’s like saying “chocolate has been observed all over”.
Are you suggesting the temperature profile constantly agrees with the moist adiabat? The temperature profile in the atmosphere is very frequently not in agreement with the profile predicted by the moist adiabatic lapse rate. The lack of agreement is monitored by meteorologists to predict things like thunderstorms, tornados, etc.
What has been observed over and over is that when the atmosphere is convectively unstable, air from the surface tends to rise, and accelerate as it rises. It is replaced by cooler air. That is: the atmosphere is unstable. Diagnosing instability involves the moist adiabatic lapse rate.
You can read more about the adiabatic lapse rate, complete with illustrations of typical temperature profiles that are absolutely not following the adiabatic lapse rate at Wikipedia
So, if you would please clarify
a) what precisely it is about the moist adiabat you think has been observed over and over and
b) since no one disputes that the moist adiabatic lapse rate exists, only what Gavin claims its existence implies, please explain how failure to detect a hotspot would overturn our understanding of anything in meteorology, specifically insofar as it relates to the moist adiabat?
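The stability diagnosis described above can be sketched numerically. This is a standard textbook formula, not anything from the thread; the saturation vapor pressure uses a common Magnus-type approximation, and the example numbers are illustrative only.

```python
# Sketch: compare an observed (environmental) lapse rate against the
# moist adiabatic lapse rate at the same temperature and pressure --
# the comparison meteorologists make when diagnosing instability.
import math

G = 9.81        # gravitational acceleration, m/s^2
L_V = 2.5e6     # latent heat of vaporization, J/kg
R_D = 287.0     # gas constant for dry air, J/(kg K)
C_PD = 1005.0   # specific heat of dry air, J/(kg K)
EPS = 0.622     # ratio of gas constants, dry air / water vapor

def sat_mixing_ratio(T_k, p_pa):
    """Saturation mixing ratio (kg/kg) via a Magnus-type approximation."""
    t_c = T_k - 273.15
    e_s = 611.2 * math.exp(17.67 * t_c / (t_c + 243.5))  # Pa
    return EPS * e_s / (p_pa - e_s)

def moist_adiabatic_lapse_rate(T_k, p_pa):
    """Saturated adiabatic lapse rate in K/km (textbook form)."""
    r_s = sat_mixing_ratio(T_k, p_pa)
    num = 1.0 + L_V * r_s / (R_D * T_k)
    den = C_PD + (L_V ** 2) * r_s * EPS / (R_D * T_k ** 2)
    return G * num / den * 1000.0

# A saturated layer is unstable when the environmental lapse rate
# exceeds the moist adiabatic rate.
gamma_m = moist_adiabatic_lapse_rate(300.0, 100000.0)  # warm, near-surface
env_lapse = 7.5  # example observed lapse rate, K/km
print(f"moist adiabatic: {gamma_m:.1f} K/km, unstable: {env_lapse > gamma_m}")
```

Note the point made in the comment: the environmental lapse rate is the measured one, and it routinely differs from the moist adiabatic value; that difference is precisely what the comparison above is for.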
Boris,
.
I wish there were a consternation smiley.
.
Gavin is dismissing a criticism of a divergence between model predictions of temperature trends in the troposphere and observations. If the models have the CO2 forcing and/or water vapor feedback too high, then they will predict a hot spot that is greater in magnitude than what will be observed.
.
.
I was merely repeating what Gavin said. So if you wish to call someone confused, please redirect that criticism to Gavin. The original poster (Aaron) said nothing about the moist adiabat. He was asking for clarification about why the MODELED TT hotspot was not observed. In response, Gavin says that if the TT hotspot does not exist then
.
It is not a binary proposition that either the models are right or our understanding of the adiabat is wrong – yet that is exactly what Gavin implied.
.
.
Please provide a description of what makes one data set “better” than another. If the “worse” data set excludes the possibility that the models are right (of which I have not seen convincing evidence) then what does it matter if it’s “better”?
.
Besides, you’re assuming my comment about the surface record is because the radiosonde data might not simultaneously support both the measured surface air temp increases and the model predictions. That is not the basis of my comment at all. There are several indications of contamination in the surface record, from McKitrick’s paper to the much more recent Jones paper. Not to mention the fact that neither UAH nor RSS match CRU or GISS.
Boris,
.
I forgot to ask: Do you agree with Gavin that there are only 2 choices:
.
1. The modeled tropospheric temperature response is correct, and thus the observations are wrong; or,
.
2. The observations are correct and all meteorological understanding of the moist adiabat is bunk.
In other words, the hotspot must really be there, because the putative surface heating that the absence of the hotspot disproves would force the hotspot to be there because the adiabatic lapse rate for a saturated convective system that is not adiabatic as soon as one drop of rain falls will eliminate the environmental lapse rate. I’m underwhelmed.
jorgeKafkazar–
The actual lapse rate doesn’t necessarily match the adiabatic lapse rate on sunny days either. 🙂
True. The environmental lapse rate and the saturated adiabatic lapse rate are different. The SALR is a theoretical (though useful) model that assumes, among other things, that condensed water doesn’t leave the system. I.e., that it doesn’t rain.
What I’m saying is that once you get past the noise, the moist adiabat is observed over shorter and longer time scales. Over the longest time scales, there is some disagreement, but we have very few long term trends to look at.
The reason this would be a shock to understanding is because there is no known physical reason for the moist adiabat to fail to be observed over long term trends. It’s not impossible, but it would imply some weird goings on.
If you look at the troposphere temperature profile for the equator, the word “static” is not even strong enough.
A nice site to check how temps are changing at different heights and different latitudes is here.
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/
Here is a typical temp profile of the surface to the top of the stratosphere over one full year at the equator (2008 is just the example but I can’t see any difference in any year – nice straight lines).
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_MEAN_ALL_EQ_2008.gif
The northern hemisphere, however, has really been impacted by the record Sudden Stratospheric Warming event in January. Temps in the stratosphere spiked 50C (a record) and then became at least 28C below normal (another record) right after.
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_ANOM_JFM_NH_2009.gif
The satellite troposphere temperature measurements are still going to be well above normal in March but by April, they may be well below normal.
jorgekafkazar, that’s a pretty bad assumption! At least where I come from, it rains a lot…and of course, rain has generally been increasing (as follows from increased evaporation), though at a rate greater than models predicted (see Wentz et al., “How much rain will Global Warming bring?”).
Boris, can you show evidence that the radiosonde and satellite data sets all suffer from more numerous and obvious deficiencies than the surface data? Because otherwise, your reference to surface temp as the “better” data set is baseless. For one thing, many issues have been documented with surface temperature measurement and trend assessment. For instance:
http://www.climatesci.org/publications/pdf/R-321.pdf
Boris, what sort of “weird things”? Anything in particular? From the pro-AGW side, “weird” could easily mean “anything which suggests AGW won’t be such a big deal after all” so I have to wonder at your meaning…
Yes, a consternation smiley would be useful 🙂 😐
Yes..but we’re talking about the ratio here. If CO2 forcing were so out of whack, then the models wouldn’t match the observed surface warming. The hotspot is a relative feature in models, dependent on the surface warming.
I can’t think of another way to explain the discrepancy. If surface temperatures hadn’t warmed at all–but that is extremely unlikely in my view. Or maybe if WV in the tropics actually decreased with increased surface temperature? No evidence of that either.
Boris–
Link? Ref?
Aren’t you familiar with the problems of the satellite data analyses? UAH in particular had a severe error for years. There are likely still problems with both satellite sets. Not to lay blame on the authors of those analyses since accounting for all the variables in MSU data is a pain. Radiosondes have known issues and are constantly being revised.
I have no clue. It would be a complete unknown (to me, anyway, maybe someone somewhere has a theory). You can’t say it wouldn’t be exciting.
Lucia,
Willis E posted a graph on this very blog at some point. Probably the mammoth fingerprint thread.
Boris, I clearly meant outstanding errors, not past errors, with the satellite data sets. Maybe you have faith that more errors will be found, or are present, but in reality each correction the data undergoes should reduce the probability of future revision, not increase it. Anyway, you still haven’t addressed why the problems with radiosondes and satellites are any worse than the surface dataset’s problems. Your claim that that data is “better” seems to require evidence that there are fewer issues with them than the satellites, yet I see none of it. The documented issues with the surface data cast serious doubt on that claim, too.
BTW, the claim that there is no evidence of WV decrease in the tropics is not quite correct. There is no reliable evidence, maybe, but radiosondes, which you have of course stated you doubt tremendously due to constant revisions, might show one after reanalysis. The issue was recently hashed out on Climate Audit, where it seems there is a lot of healthy skepticism of those reanalyses, but casually dismissing them is a little dishonest.
Andrew,
If you don’t believe the surface has warmed, what can I say? But even the sat data says the surface has warmed. Also, it is very difficult to get a temp at anything close to a specific altitude in MSU data. See the RSS website for example.
Boris-Strawman much? I never said I don’t believe the surface has warmed. It clearly has. What I did say was there are documented issues with surface measurements-as in those shown in Pielke et al’s paper. I made no attempt to quantify them. My beef is not with the claim of surface warming, but with the claim that the surface data are “better” than the satellite or radiosonde data. Clearly not, from my perspective!
Boris,
.
Right. Because no one ever back-estimates aerosol forcings.
.
And we’ve got that climate sensitivity number nailed.
Boris–
I did a search on “adiabatic” on the admin side of the blog. I didn’t find anything supporting what you claim. However, NealJKing did leave a link to Wikipedia which explains that the adiabatic lapse rate differs from the environmental lapse rate.
The environmental lapse rate is the one that is… erhmm.. observed. So, if you can find some link to suggest the adiabatic lapse rate holds, let us know. And please let Wikipedia know too.
Ryan O-couldn’t have said it better myself!
Here’s the over-and-over observations of the moist adiabat:

.
.
Ref: ftp://eos.atmos.washington.edu/pub/breth/papers/2006/EIS.pdf
Hmmm…. dots don’t fall smack dab on the moist adiabat line. Fancy that?
There is no trend in the humidity data (even in the newer, more accurate databases that were noted in the Climate Audit thread).
It just looks like humidity is a fixed quantity. There is a very slight rise in lower atmospheric levels and a slight reduction in higher tropospheric levels (where the climate models say it should be increasing). On a weighted-average basis, there is no change.
http://img147.imageshack.us/img147/7908/specifichumidity.png

The coarse grid of the models cannot accurately model deep moist convection as it actually occurs. The models must equilibrate the entire atmosphere in a 100 km² column when the temperature profile becomes unstable. In reality, upward convection occurs in relatively small plumes with very high humidity in the plume, so the temperature and humidity profile in the convection plume will be quite different from the surrounding air – not to mention the thunderstorm that is the frequent result of such convection in the tropics. Also, the local convection does not necessarily lead to large-scale mixing because the updrafts are nearly matched by local downdrafts. The resulting vertical wind shear is what makes flying through a thunderstorm such an exciting experience, as well as driving the charge separation that leads to lightning.
Lucia, check comment 7672. Willis shows the amplification come out from the noise after about 3 months or so.
Boris–
Excuse me, but how does what Willis said address what we’ve been discussing?
We aren’t debating whether or not there is evidence that amplification occurs. We are discussing whether, if amplification is never observed even after the data become reliable, we would have to throw out what we know about meteorology.
What Willis discusses is irrelevant to the point Ryan and everyone else is making. Willis’s post doesn’t prove what you initially claimed, for which I found a reference.
But that’s ok, because RyanO provided the reference which contradicted your claim.
1. You think that standard meteorology suggests that we observe amplification on scales from three months to 20 years, but after twenty years, there is no amplification?
2. Ryan’s ref does not in any way contradict my claim. If you read the paper the figure comes from, you’d see that the authors expect the temperature profile to be close to the moist adiabatic rate and they provide the figure Ryan posts as evidence that this is true. So, like Gavin says, we expect the temperature profile to follow the moist adiabatic rate closely, and if it doesn’t–as it doesn’t in the longest trends–then we’d have to rethink our ideas about temp profiles following the moist adiabat. Just who do you think claimed that observations would fall “smack dab” on the saturated adiabatic rate?
3. What Willis shows is evidence that amplification is present over all but the shortest and longest timescales. The amplification is what we’d expect from theory. Please show me anything that suggests that we wouldn’t see amplification over those long timescales.
4. Ryan O is completely confused in thinking that fiddling with aerosols or CO2 forcing would create a temp profile that doesn’t have a hotspot when the surface warms. Moreover, he has no physical explanation of how such a process would occur. His criticisms make no sense at all.
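Since the exchange keeps circling the question of amplification versus timescale, here is a sketch of how one might compute the tropospheric/surface trend ratio as a function of window length. The series here are synthetic red noise of my own construction, not Willis’s actual data, and the function name is my own; it just shows the mechanics of the comparison being argued about.

```python
# Sketch: median ratio of tropospheric to surface OLS trend over all
# sliding windows of a given length (synthetic data, illustrative only).
import numpy as np

def amplification_by_window(surface, tropo, window):
    """Median ratio of tropospheric to surface OLS trend over all
    sliding windows of the given length (in samples). Windows with a
    near-zero surface trend are skipped to avoid dividing by noise."""
    t = np.arange(window)
    ratios = []
    for i in range(len(surface) - window + 1):
        s = np.polyfit(t, surface[i:i + window], 1)[0]
        m = np.polyfit(t, tropo[i:i + window], 1)[0]
        if abs(s) > 1e-3:
            ratios.append(m / s)
    return float(np.median(ratios))

# Synthetic monthly anomalies: the "troposphere" is 1.4x the shared
# signal plus extra short-term noise, so any fixed ratio only emerges
# once the window is long enough to average the noise out.
rng = np.random.default_rng(1)
months = 360
signal = np.cumsum(rng.normal(0, 0.05, months))  # red-noise "surface" signal
surface = signal + rng.normal(0, 0.05, months)
tropo = 1.4 * signal + rng.normal(0, 0.15, months)

for w in (6, 36, 120):
    print(w, round(amplification_by_window(surface, tropo, w), 2))
```

The point at issue in the thread is what such a ratio does at the longest windows: whether it settles toward the modeled amplification or decays away.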
The problem is that everyone seems to assume that adiabatic expansion is valid over long time scales and all altitudes, at least within the troposphere, and applies to spatio-temporal averages over large scales as well. But this can’t be true. Otherwise an isothermal atmosphere, either warm or cold, would be stable. It’s not. Even a simple 1-D radiative/convective model shows that in both cases, the temperature profile converges over time to approximately what we see in the real atmosphere.
This debate is pointless because resistance to the models is futile.
The fact that recent temperatures are not consistent with previous models simply means that we need a new model with (a) cooler past temperatures; (b) revised modeling assumptions for atmospheric moisture behavior, aerosols etc. that exactly correspond to any departure from preferred/expected warming, and (c) an increased warming slope that begins its uptick at a later date than the rise projected by AR4 (that way we can simply dismiss the analyses done by that rather irritating lucia person for at least another decade).
Instead of declaring the old model set defective we will praise the refinements in the new set and declare the science not just settled but Really Really Settled and the snark can rise to unprecedented levels.
You will be absorbed, just like the data. The models will rule. Resistance is futile.
George Tobin [12548]
The Borg lost. Not?
George,
No kidding. But I try to look at the bright side, I *am* William Riker and I *am* ready to have a date on the Holodeck with Jennifer Love Hewitt. 😉
Andrew
Whenever Gavin’s name is mentioned, I feel it is appropriate to pause and check to see if he has done anything to address the dreadful state of Model E documentation. Let me check…
http://www.giss.nasa.gov/tools/modelE/
Nope. Still the same…oh well, Gavin DOES have a good time blogging…
George Tobin [12548]
In the end, the Borg were defeated by contaminating the Cube’s communication pathways, and by turning their linear logic against them. Same way the models are being defeated by contaminated data processed by fundamentally flawed algorithms.
🙂
Boris–
You have gone off on a tangent debating something RyanO never suggested. You are also misrepresenting what Gavin actually said.
Now, it may be that Gavin meant the much more complicated and nuanced thing you claim. But it’s not what he said in his very brief inline comment.
In answer to your first question: Why do you ask that? All I’ve said is I don’t think that if the hotspot is never observed, our only option is to throw away what meteorologists still actually know and have observed about the moist adiabat.
On your second bullet: You didn’t say “close”.
And my point is: to the extent the true lapse rate does not follow the moist adiabat either instantaneously or on average, other factors also influence the lapse rate. Moreover, there are other assumptions than simply “moist adiabat” that result in the claim Gavin actually made.
Unless you make a case that these other factors don’t change under warming, you can’t just jump to saying something about what we must conclude if no ‘hotspot’ forms. (And, as you recall, I make a distinction between ‘hotspot’ and ‘enhanced warming’. Minuscule undetectable warming is not a ‘hotspot’. And if experimentalists can’t detect it because the warming is orders of magnitude less than predicted, that will mean the models predicted things incorrectly.)
Point 3 Willis made is irrelevant to what Gavin actually said.
Point 4: Why do you add the “when the surface warms” bit to RyanO’s statement? You know perfectly well that some people dispute the surface will warm.
Look: Gavin wrote a brief inline comment. You are trying to fill in a bunch of caveats and assumptions he must have meant. But he didn’t say them when he made a bold claim about throwing out some long-accepted meteorological truths if a hotspot fails to appear. It may well be that whatever assumptions he thought but left unstated were reasonable, plausible, or what not. But he didn’t say them.
Presumably, in some follow-on post, Gavin will mention the other assumptions – like relative humidity staying approximately constant, or that other factors affecting the fraction of time the atmosphere is unstable and/or super-mega-uber stable don’t change enough to substantially negate the leading-order effect postulated based on the moist adiabat. Those are certainly plausible assumptions, but it’s possible for them to be violated without violating the first and second laws of thermodynamics.
Boris,
.
It’s fun to have you back in argument mode. 🙂 (No snark intended.)
.
Boris says:
.
First, why would I take the authors’ word for something when I can look at the graph and decide for myself? I note that the spread of the lapse rate is rather large. What do you think the confidence intervals are for that?
.
Also, this is only 1 year’s worth of data, and it is restricted to the free troposphere, restricted to temperatures of 270 to 300 K, and (maybe you missed this) restricted to less than 50% relative humidity. What do you think happens when you open up the restrictions to include all dates, all temperatures, all humidities, and you go all the way down to the surface? Lest you despair, I do have an answer: when the restrictions aren’t met, the data doesn’t follow the moist adiabat:
.

I added the emphasis.
.
In the free troposphere, the authors show that the temperature gradient follows the moist adiabat. However, I would challenge you to take that raw data and find where the moist adiabat would be if it weren’t already printed on the plot. I would also challenge you to then show, given the range of the data – and the resulting confidence levels – that the data excludes the possibility that the modeled TT hotspot is wrong.
.
Besides, this relationship only holds above the planetary boundary layer and for a specific set of conditions anyway – not all the way to the surface. In the decoupled layer and the surface mixed layer, all bets are off.
.
This, in fact, is the whole point of the paper – to determine a method to predict the degree of decoupling at the planetary boundary layer. The reason the paper even exists is because temperatures do not follow the moist adiabat from the surface to the tropopause:
.
.
.
Far from saying that the moist adiabat is a good way of predicting the temperature gradient from the surface to the tropopause, the whole point of the paper is to explain why it isn’t and present a correction factor.
.
Boris says:
.
Actually, no. What Willis showed is that the amplification decays with time, which contradicts the model predictions that the amplification approaches a limit as t->infinity. Not only that, but the short-term behavior in the observations (from ~ 0 – 30 months) at high altitudes behaves in the opposite manner as that predicted by lapse theory.
.
http://www.climateaudit.org/?p=4962
.
Boris says:
.
Qualitatively, you are correct, Boris. If the surface warms, we would expect some amplification in the tropical troposphere. But that’s not the point. The point is if the modeled TT hotspot is not present, then it is NOT TRUE that all of meteorological understanding is wrong as Gavin claimed.
.
But since you seem to be so sure, I can’t help myself but explain that your statement even qualitatively does not necessarily apply in all cases. There is a difference in the behavior of the troposphere when the surface is warmed due to time-varying radiative forcing vs. natural variability (no time-varying radiative forcing).
.
.

Ref: http://www.climatesci.org/publications/pdf/R-271.pdf
.
So of the 84 times in the unforced runs where the 22-year surface trend exceeded 0.08°C per decade, 56% showed more warming at the SURFACE than the troposphere. This indicates that if the warming is due to natural variability in the system, then a hotspot does not necessarily follow.
.
In non-control model runs – where CO2 and other forcings are allowed to change with time – the rate of increase of the CO2 forcing is greater than the natural variability in the system, which masks this behavior.
.
If the models get the CO2 forcing wrong, they also get the rate of increase of CO2 forcing (and, by extension, aerosol forcings) wrong. If the rate of increase is wrong, then the parameterization of their forcings and feedbacks is wrong. If the parameterization is wrong, then (assuming the surface record to be correct – which might not be a great assumption to begin with) natural variability accounts for more of the temp changes than the models assume. If natural variability accounts for more of the temp changes, then there is an increasing chance that surface temperature increases do NOT result in tropospheric temperature increases.
.
Models don’t calculate the effects of CO2 forcing (or anything, for that matter) from first principles. They parameterize. When you parameterize, you must set boundary conditions. The boundary conditions necessary to match the observational record change as the parameters (including CO2 forcing) change. If these boundary conditions are not right, then the behavior of the model will diverge from the real system under consideration. The models parameterize the adiabatic lapse rate as well. Our understanding of the moist adiabat could be just fine, but the models may not have it parameterized properly (and given the range of lapse rate forcing used by the models – which is the output of the parameterization for the adiabat), most of them must be wrong:
.
.
Ref: http://www.gfdl.noaa.gov/reference/bibliography/2006/bjs0601.pdf
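As an aside, the saturated adiabat being argued about is straightforward to compute from the standard textbook expression. A minimal sketch, with approximate constants and the Tetens saturation-vapor-pressure approximation (my choices for illustration, not anything taken from the referenced papers):

```python
import math

# Approximate constants (SI units); values rounded for illustration.
g = 9.81        # m s^-2, gravitational acceleration
Rd = 287.0      # J kg^-1 K^-1, dry air gas constant
cp = 1004.0     # J kg^-1 K^-1, dry air specific heat
Lv = 2.5e6      # J kg^-1, latent heat of vaporization
eps = 0.622     # ratio of gas constants (water vapor / dry air)

def sat_mixing_ratio(T_c, p_hpa):
    """Saturation mixing ratio (kg/kg) via the Tetens approximation."""
    es = 6.112 * math.exp(17.67 * T_c / (T_c + 243.5))   # hPa
    return eps * es / (p_hpa - es)

def moist_lapse_rate(T_c, p_hpa):
    """Saturated adiabatic lapse rate in K per km (textbook form)."""
    T = T_c + 273.15
    r = sat_mixing_ratio(T_c, p_hpa)
    num = 1.0 + Lv * r / (Rd * T)
    den = cp + Lv**2 * r * eps / (Rd * T**2)
    return g * num / den * 1000.0

# Warm saturated surface air: well below the dry rate of ~9.8 K/km.
print(round(moist_lapse_rate(25.0, 1000.0), 1))
# Cold upper troposphere: the moist rate approaches the dry rate.
print(round(moist_lapse_rate(-30.0, 300.0), 1))
```

The steepening of the moist rate toward the dry rate as air cools and dries is exactly why the moist adiabat matters most for the warm, saturated tropical column under discussion.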
Ryan,
Your post supposes that someone somewhere has argued that the moist adiabat rules all other things. No one argued this. It’s exasperating to argue with skeptics sometimes. Did you actually think that I said that warming would follow the saturated adiabat at all times? Of course it won’t–especially when the air is not saturated. I thought this was obvious.
Boris–
So what is your point? And what does it have to do with what Ryan said in the first place? Your first critique of Ryan was
This is why my first response to you was
I then continued. Unfortunately, you never clarified what you mean by the moist adiabat being observed over and over. So, we have to guess. Now you are all upset that we tried to guess, but evidently, you think we guessed wrong!
All that is clear at this point is whatever you were suggesting is observed over and over:
a) You didn’t mean the actual lapse rate always equals the moist adiabat, because it’s been observed over and over that it doesn’t.
b) You didn’t mean that the average of the lapse rate is equal to the moist adiabat, because it’s been observed over and over that it’s not.
So, tell us, in your own words: what do you think has been observed over and over, in the sense that it absolutely cannot change, such that if the enhanced tropospheric warming suggested by the models is never observed, even with perfectly accurate instruments, we must throw some ancient meteorological truth out the window?
(Note: this is not the same question as “please explain why you think it’s plausible for a hot spot to arise.” The wording of Gavin’s inline response was much more extreme than that. He may eventually clarify. But for now, you seem to be either
a) suggesting Gavin didn’t mean what he actually said. Then, using your mind-reading powers, you are evidently telling us what he actually meant and trying to defend that instead.
Or
b) trying to defend what he actually said.
Meanwhile, others have simply made comments on what Gavin actually said, and are interpreting your claims, which are sometimes terse to the point of vagueness, in that context.
I have to admit to not knowing whether you are trying to do (a) or (b). But if you are trying to do (b) you are failing. If you are trying to do (a)…well, you aren’t psychic.
Boris, you think ~others~ can be exasperating? You still haven’t apologized for misunderstanding my point about the surface data, throwing in the towel, and misrepresenting my view of it by saying I “don’t believe the surface has warmed”. Well, considering the fact that I do, I will remain exasperated until you retract.
.
Where did I say this? Where did I even imply this?
.
I was responding to your contentions here:
.
Here:
.
Here (where I ask you if you agree that either the modeled TT hotspot is right or our understanding of the moist adiabat is fundamentally wrong):
.
And here:
.
I was showing that there is a great deal of scatter in the observed lapse rate and that it approximately holds under certain conditions. It may be good enough for qualitative work and some quantitative work, but the challenge to you was to show that it is good enough to provide a mutually exclusive test between the existence of the modeled TT hotspot and the body of meteorological knowledge of the lapse rate. After all, it is you who is contending (like Gavin) that if the modeled TT hotspot is not present that our understanding of the adiabat is all hosed up . . . and, therefore, it is the observations, not the models, that are likely to be wrong.
.
It must be exasperating to argue with someone who both quotes what you actually said and remembers that the burden of proof is on you.
.
I await your evidence. 🙂
I don’t disagree. But then you go farther and seem to imply that we’d expect long term trends to show no amplification. All I’m asking is where you get that idea or impression? Since when is that what’s expected by theory?
Take a look at Willis’ post and find me something that explains the long term trend divergence.
Your own post shows that the temp profile behaves in a way expected if it is influenced by the moist adiabat. I’m not sure what more you want from me.
As far as what I want, really nothing – it’s fun simply to debate. But if you really desire a “want” that’s on-topic, that would be any supporting evidence that says that if the modeled tropospheric temperature response does not match observations then the only 2 alternatives are that the observations are wrong or that we need to overturn “maybe a century” (in Gavin’s words) of meteorological knowledge.
.
I would contend that it’s not a binary question and that the scatter in the data would simultaneously allow the observations to be correct and preserve the vast majority of our body of meteorological knowledge. That’s all.
.
AFA saying that the long term trends show no amplification, no, that’s not what I was saying. Willis’ analysis shows that it decays, but it still remains above 1.0 at the longest time scale he analyzed. The models show an asymptotic increase during that same period. That means there is a difference, but it doesn’t mean we need to overturn all of meteorology because, simply put, we don’t know much about the long term response as it is . . . so there’s not much to overturn.
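To see how an amplification ratio like the one under discussion is estimated, and why short-window estimates are so noisy, here is a toy illustration. The 1.3 amplification factor, the noise levels, and the linear signal are all invented; this is not a reconstruction of Willis’ analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def trend(y):
    """OLS slope per time step."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

# Synthetic monthly anomalies: a linear warming signal, amplified aloft
# by an assumed factor of 1.3, plus independent noise in each series.
n = 360                                   # 30 years of monthly data
signal = np.linspace(0.0, 0.6, n)
surface = signal + rng.normal(0.0, 0.05, n)
tropo = 1.3 * signal + rng.normal(0.0, 0.08, n)

# Amplification ratio (tropospheric trend / surface trend) for windows
# ending at the last month: noisy for short windows, converging toward
# the built-in factor as the window lengthens.
ratios = {}
for years in (10, 20, 30):
    m = years * 12
    ratios[years] = trend(tropo[-m:]) / trend(surface[-m:])
    print(years, round(ratios[years], 2))
```

The point of the toy is simply that a ratio of two noisy trends has large scatter on short windows, so changes in the estimated ratio with time scale need careful error analysis before concluding anything about the underlying physics.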
.
I never claimed to know why this happens in the data – and neither did Willis. And when it comes to the temp profile in my post, that’s an idealized profile and the time scale was not identified. For the quoted Fig. 1, the data range was only 1 year – so not really long term. You’re comparing apples to oranges there.
.
For the control model results – I don’t have a physical explanation, either. All I can do is to show that as the time-varying component of radiative forcing is reduced, models can produce a response that does not require enhanced warming aloft when the surface warms. I could speculate – but I don’t know, and as far as I know, no one does.
Boris,
As I have no more time to follow this debate, I must now declare that, based on your original assertions, Ryan O (et al.) have won. Please act accordingly. Thank you.
Kayren.
Kayren,
I hope you enjoyed playing referee. Did you wear the striped shirt?
Sincerely,
Boris
How about reffing the little disagreement Boris and I had? Did he misrepresent my views or not?
Ryan O and Lucia,
I’m afraid Boris is up to his usual tactics of setting up one straw man after the other, watching as someone tackles them and then rolling out yards of baling wire for them to get caught up in.
As long as Boris has been trolling at various sites with tenors different from RC’s, that’s been the modus operandi. Even if he changed his call sign you’d spot him right away. You never get a straight answer or comment back, always a deflection or a curve ball. And heaven forbid, never ever so much as a “yes, you may well have a point” or “I stand corrected”. You’re always in the wrong.
Boris, the master of shadow boxing.
Tetris: Strawmen like accusing me of not believing in any surface warming just because I think the surface data are of dubious quality?
tetris–
Your suggestion Boris never admits that others have a point is not quite fair.
If my memory serves me correctly, Boris admitted that Hansen did not predict, before the empirical data were available, that the rise in CO2 leads the temperature rise. (Or, at least, Hansen did not do this in any public forum. We can’t begin to guess what he might have muttered to himself in the shower when thinking about the problem in private.)
In fact, Hansen provided the explanation afterward as made evident by the date of various published documents and the names of co-authors on those documents.
In this particular case, I think we are talking past each other. Boris doesn’t seem to be quite getting the point Ryan was advancing and so seems to want to argue something else.
Lucia [12572]
You must be the most fair minded of referees…
That said, I will agree with you only, and only, if Hansen, in the shower [bear the thought] or not, never be allowed to stand in for “7of9” anytime. All the while we demonstrate that GCMs are much like the Borg Cube [ref: 12552 above].
🙂
“Boris doesn’t seem to be quite getting the point Ryan was advancing and so seems to want to argue something else.”
Just like he did to me? Come on, Boris, I’ll keep pouring on until you retract your offense against me. At some point you have to notice. I just feel sorry for everyone else who has to watch me whine that you misrepresented me.
I wonder what it would be like to have a sycophant to go around various threads on the internets to revise and extend my remarks whenever I stepped in something.
I would argue you guys don’t get the point I’m making and want to argue something else. But if you can show me some cite that says that amplification is not expected over 25 year (or whatever it is) trends, I’ll happily admit that I’m wrong on that issue. It’s clear from Ryan’s research that we’d expect amplification, mostly because of the moist adiabat, but there are other processes at work (aren’t there always?).
Andrew, I think I misunderstood you rather than misrepresented you. It sure sounded like you were arguing that the radiosonde and satellite data not showing amplification meant that the surface records were inaccurate.
Pielke’s unforced runs don’t have much of a trend. I’d guess the warming trend doesn’t rise much above the noise to draw any meaningful conclusions. Just a guess.
I wonder what it would be like for some jerk to make insulting hit and run comments and then cowardly disappear? Oh, thanks for showing me 🙂
Boris,
I am glad you take the time to post here. Echo chambers don’t offer many opportunities for learning.
Boris [12582 and 12583]
Once again: run the “pattern recognition program” to find the troll and, lo and behold, it’s “Boris”.
“GA” “Boris”, perhaps? Or [12583] Pooh Boris…, for show’n you up.
🙁
Raven [12584]
Other than being a run of the mill “sh.t disturber”, and in the face of high minded “facts and arguments”, Boris has little to contribute other than distraction.
Consider that a few times over and maybe the “echo chamber” will reveal to you why that is….
That said, this entire thread is based on a nonsensical set of comments made by ” Hansen’s Guard Dog” Gavin [Schmidt] himself…
Boris is merely one of several self appointed blogosphere “Guard Dog” ‘s guard dogs.
I know, it gets complicated, but that is how the AGW “debate” works.
Meanwhile, the Borg lost. Pls see [11552] above.
🙂
Tetris,
I would have never seen the interesting graphs that Ryan O posted if Boris had not provoked the discussion. I could see your point if the same issue ended up getting rehashed over and over again, but this discussion is new (to me at least).
tetris,
You have proven yourself a conspiracy theorist over and over. Facts must seem like they’re covered in grease to you. Thanks for the free psychoanalysis, though. It’s worth a hundred times what I paid for it.
Actually, my basis for believing that the surface data are iffy (not showing something that totally isn’t there, just iffy) stems from the work of RP Sr. et al. So, I believe that the surface record contains biases, but that there is some surface warming. My objection was to the claim that the satellite and radiosonde data were questioned while the surface data got a free pass…
Raven says:
.
Hopefully there was no snark intended with this, because I agree. I enjoy debating with Boris – and I agree with Lucia that, for the most part, Boris and I were simply talking past each other. There’s little learning if everyone agrees; learning comes from having someone tell you that you are wrong, and then you doing research to see if they are right.
.
Every time I debate anything with Boris, I find myself doing research and learning things I otherwise would not have. And to be quite honest, it didn’t take me long to realize that Boris (along with many of the regular posters on here) knows a heck of a lot more about the subject than I do.
.
For me, it’s a game where I can learn. The game part is trying not to lose the argument. To do that, I have to learn stuff. 🙂
.
So personally, I’d like to thank Boris for posting.
tetris–
Boris isn’t a troll.
Boris–
We get the point you are trying to make. Had you not introduced it in the context of defending Gavin as somehow being correct in his over-the-top claim “that for this to be wrong would overturn maybe a century of meteorology,” there would be nothing to argue about.
So, we were discussing this over-the-top issue before you jumped in and wanted to start discussing the mere plausibility of the hotspot.
I realize you may be frustrated that we aren’t letting you get away with diverting attention from Gavin’s over-the-top claim by discussing something much more moderate. But… the original point of Ryan O’s criticism had to do with the over-the-top claim. It’s impossible to believe anyone didn’t understand the focus of Ryan’s criticism, because he emphasized it in bold.
So, clearly, people are going to keep coming back to this to ask you if you can support that claim. We are all going to do this even if we understand you are trying to have a discussion of the basis of a plausible hypothesis. Merely plausible is neither bulletproof nor proven. If a merely plausible hypothesis is overturned, people would look at the other assumptions before overturning 100 years of meteorology (whatever that vague claim of Gavin’s may have meant).
Boris says:
.
Not much of one, but the cutoff for consideration was 0.08 deg C/decade – which isn’t tiny, either, and the GISS model had 9 cases of the trend exceeding 0.16 deg C/decade (8 of which showed no amplification).
.
But for the most part, yah, I would agree with you. The only conclusion that you could draw is that surface warming in the absence of a change in radiative forcing does not necessitate amplification. Since I think most of us agree that there is at least some time-varying CO2 forcing, it would be perfectly legitimate to question how well this applies to real life.
Lucia [12591]
It is your blog: Boris is not a troll. Noted. I stand by my analysis of his “straw man” method.
Boris [12588]
I’m glad to have been of some service in terms of your psychoanalysis.
As far as conspiracy goes: you might want to revisit Pielke Sr’s site, for instance, and familiarize yourself with his archives, which contain several rather damning compilations of how the IPCC et al. cherry picks and chooses as it pleases to make sure the “data” fits the dogma. If that doesn’t provide you with an outline of what “conspiracy” is, then maybe you should also have another look at CA and re-familiarize yourself with how Mann, Amman, Jones, et al. operate their “science”. Covered in grease, indeed.
Then again, in your mind maybe Pielke Sr and McIntyre are conspiracy theorists as well.
That’s the problem. You guys thought you knew what he meant. Note Ryan’s rephrasing of Gavin’s claim as “all of meteorology.” Accuse Gavin of being vague in an inline comment; that’s fair. (How about holding SteveM to that standard? Can you imagine? But I digest…) But don’t set up an interpretation of his words to knock down when you admit you don’t even know what he meant.
Whacha digestin’ Boris? Sounds tasty! 😉
Boris–
We can only read what he said and interpret the plain meaning of those words. So, we’re commenting on what he said. No one here has psychic powers, and no one is claiming to read his mind so as to discover that what he said is not what he meant.
What he said makes very, very little sense. It may turn out that what he meant is less peculiar and he may clarify.
But people can read what was said and note that it makes no sense.
As for Ryan translating 100 years of meteorology to all of meteorology…. Are there any 500 year old meteorological principles that are so correct they have stood every test of time? Would these be: It tends to warm up during spring, whose arrival we can forecast by watching the stars?
In any case, Ryan quoted. So I think we all understood what his paraphrase meant.
Boris says:
.
I know it’s a typo, but it still made me laugh. 🙂
.
Do we know exactly what Gavin meant? Nope. Still, it’s not as if we have no clue, either. Gavin is quite obviously saying that if the modeled TT hotspot does not exist, then it means that we have a fundamental misunderstanding of meteorology – therefore, it is much more likely that the observations are in error.
.
No matter how you interpret this statement, it is simply not true. The observations very well may be in error – but the reason is not because the converse demonstrates a fundamental misunderstanding of meteorology.
“that for this to be wrong would overturn maybe a century of meteorology. “
No, all it means is that the GCMs for AR4 are incomplete.
eg The Impact of Stratospheric Ozone Recovery on the Southern Hemisphere Westerly Jet
In the past several decades, the tropospheric westerly winds in the Southern Hemisphere have been observed to accelerate on the poleward side of the surface wind maximum. This has been attributed to the combined anthropogenic effects of increasing greenhouse gases and decreasing stratospheric ozone and is predicted to continue by the Intergovernmental Panel on Climate Change/Fourth Assessment Report (IPCC/AR4) models. In this paper, the predictions of the Chemistry-Climate Model Validation (CCMVal) models are examined: Unlike the AR4 models, the CCMVal models have a fully interactive stratospheric chemistry. Owing to the expected disappearance of the ozone hole in the first half of the 21st century, the CCMVal models predict that the tropospheric westerlies in Southern Hemisphere summer will be decelerated, on the poleward side, in contrast with the prediction of most IPCC/AR4 models.
http://www.sciencemag.org/cgi/content/abstract/320/5882/1486
No big mystery here.
There are 2 UN expert assessment bodies
How about these meteorological principles.
http://www.cmos.ca/weatherlore.html
No big mystery here.
There are 2 UN expert assessment bodies
And they are both wrong. The temps are going down and the O hole is not disappearing.
How about that. 2 for 2!!
Parameterize
Moist adiabatic rate.
Heat, or not to heat?
=====================
Logically
Phoney problem requires
Phoney solution
Andrew
Freezedried: “The higher the clouds the better the weather.”
But, but…those are greenhouse clouds! :/
Models that predict the ozone hole to show a statistically significant recovery by 2024 are already wrong?
But I’m not surprised that Ozone denial is strongly correlated with AGW denial. In fact, I find it oddly comforting.
“Models that predict…by 2024”
Back to the Magic 8 Ball, I see.
I’m going out for chinese just to get the fortune cookie tonight.
The suspense is truly exciting. 😉
Andrew
Boris says:
“But I’m not surprised that Ozone denial is strongly correlated with AGW denial. In fact, I find it oddly comforting.”
I don’t know why. I never had any reason to question the science on CFCs until I realized how shoddy the science on CO2 was. Those problems make it hard to accept any computer-model-based analysis of the earth’s atmosphere.
I have felt for a long time that AGW promoters are doing serious damage to the reputation of scientists in society. Once it becomes clear that AGW has been exaggerated, I expect scientists to join journalists and lawyers as professions perceived by the public to be untrustworthy and self-serving.
Boris (Comment#12595):
It’s comments like this that triggered my snark.
Make your points by all means, but putting yourself forward as the only one capable of reading Gavin’s mind and giving the true interpretation of his remarks seems sycophantic. The last 100 years of meteorology isn’t all of meteorology, of course; Ryan was way off base implying that we’d have to start over from scratch if Gavin is right. We’d only have to start over from 1909.
Boris, I thought that those nasty CFCs were supposed to continue destroying the O3 for seventy years; how can a partial phase-out since the late eighties translate to recovery by 2024? I’m not sure why you find all this so comforting. Are you convinced that 1) ozone depletion is incredibly solid theory, even more so than AGW, and 2) that being wrong about one thing makes people wrong about everything? At least on the latter point, I weep for your reasoning skills.
That said, are you really surprised to find that people who defend freedom from scientific authoritarianism consistently do so?
Kazinski: given that meteorology was essentially started as a science by Aristotle in 325 BC, and nothing significant was added for 20 centuries after, it may be off base, but the last hundred years is still a big chunk of meteorology.
I’m just familiar with this argument and what Gavin has previously said in response to the discussion of the hotspot and how it is not a forcing-specific artifact:
I’m sure someone will parse this ‘graph to death. Blogging is as blogging does.
If I may parse, it sounds like the atmosphere is expected to warm that way because of the WV feedback. So, if that feedback doesn’t operate on long time scales (however unlikely you may find that), then such an effect wouldn’t be observed long term, now would it? So the long term observations, if correct, would not so much overturn meteorology as the WV feedback. Which I’m sure you are even more confident in than a century of meteorology. NOW I get the resistance to the observations being correct!
If you look at their graph, http://www.worldclimatereport.com/wp-images/testimony_fig2.JPG
There are similar dips in previous years. Yet the trend is, overall, up. This dip should not be given any significance yet, as the time frame is far too short.
“Yes. It is rather standard to use the latest reliable data when testing a model. So, criticizing Michaels and Knapperger for using the most recent data, and suggesting the results would be different if we used the magic end point of 2007 is rather silly.”
The problem is that ENSO and other cyclical events can create short term anomalies that are stronger than the long term trend. There has been a recent La Nina that has a cooling effect. I would refer you to 1998, when there was a huge El Nino. That also would have given the wrong impression if you used it as your end point.
In fact, Gavin already said that.
“So… Michaels writes a paper stating that ENSO variability is the big driver of short term trends and yet still asks people to sign on to a statement claiming that those same trends indicate that models are abject failures?”
The models don’t model short term trends such as ENSO, and there has never been a claim that they can.
bugs: what is the models’ source for their “wiggly” behavior if not some kind of “weather” like ENSO?
http://www.worldclimatereport.com/wp-images/testimony_fig1.JPG
They do model events like ENSO, but cannot predict when they will occur, because you cannot predict weather in those timeframes.
bugs–
1) Find similar dips that didn’t happen after volcano eruptions. (Volcano eruptions are thought to cause dips. So, that leaves this as one of the very few unexplained ones.)
2) So what if there was a recent La Nina? The variability due to weather noise should be included in the distribution of trends over all the model runs. In any case, Michaels’ analysis shows that, when compared to Hadcrut, the models look bad even if you analyze over a long trend. The analysis periods included numerous cycles of ENSO, and some of the time periods tested both begin and end in a La Nina. So, this complaint about La Nina is baseless. You don’t get to throw away data from 2008 just because you feel it’s inconvenient to your case.
3) Nothing in Michaels’ testimony is based on the assumption that models can predict weather. He compares to a spread over an ensemble of models. The theory, as advanced by modelers, is that models don’t have to predict weather because we can do comparisons to ensembles of model runs (in various sorts of ways).
4) I have no idea why you think the models are accurate. As far as I can tell, they appear to be heuristic tools that give qualitatively correct results, but whose accuracy at predicting trends has not been demonstrated and certainly has not been quantified.
There is one criticism of Michaels’ testimony that stands up: he chose to ignore GISSTemp. That choice was not justified. Gavin is correct to add GISSTemp and discuss it.
“The variability due to weather noise should be included in the distribution of trends over all the model runs.”
If you look at the 1998 El Nino, it was out of bounds too, the other way. Some spikes just go out of the range the models can cope with. That doesn’t invalidate them, because they aren’t capable of modelling short term extremes. The ENSO variations are noise, but of larger magnitude than the underlying trend. This noise is cyclic, so it doesn’t affect the long term result, which is what we are interested in. What we want to know is: “roughly speaking, where do we end up in 50 years, 100 years, 200 years?”
Bugs–
What are you talking about? (Which graph figure?)
If you mean the anomaly based on 1980-1999 in a recent blog post: 1) those are 1 standard deviation uncertainties not 95% uncertainties. 2) I’ve always said using the anomalies is a poor way to test models for many reasons.
If you mean the trend since 1998: with Hadcrut it’s well “out” on the high side, and with GISSTemp it’s right on the bubble. See this post.
If you mean the 8 year trend ending in El Nino: it’s not “out” of the 95% confidence intervals on the low side; see
“If you mean the the trend since 1998”
The “trend” between an unusually large El Nino and an La Nina over such a short time period is meaningless.
bugs–
You were the one who tried to suggest that the 1998 El Nino was some sort of outlier that would overturn the Michaels analysis. Neither Michaels’ analysis nor mine is sensitive to that event. Comparisons between models and data look poor if we start long before that El Nino; they look poor if we start after it. You can avoid making the models look poor only if:
a) You insist on selecting tests with unnecessarily low statistical power. (Comparing a trend to the range of “all weather in all models” instead of testing whether model means match.)
b) Even when the models fail tests with unnecessarily low statistical power, you find excuses to make the power lower by either:
i) Insisting we must ignore data that came in after 2007, and suggesting that the models must fail at 95% confidence two years in a row.
ii) Deciding to include fake data for the end of 2009, and then insisting that the models must be proven wrong with the fake future data.
As for (b)(i): the AR4 that used those models to project was made available in PDF in August 2007. So this amounts to insisting we ignore 2/3 of the data that came in after the “predictions” were published! Clearly, had the data disagreed with the model mean in 2007 when the scientists were deciding on the “best” way to use the models to project, the scientists would have decided the model mean was not a good way to “predict” the future! So, concocting a test that way is silly.
So, I’ve asked you to explain how or
bugs,
I don’t think you understand the test Lucia’s doing.
My understanding (which probably needs to be corrected by Lucia) is:
The test does not rely on a single starting year. It looks at all possible starting years and checks how many of the resulting trends fall outside the 95% confidence interval. If the models accurately predicted the trend, then we would expect about 5% of the trends to fail the test by being above or below the interval.
What this means is the test would not reject the models if the 1998-2008 trend was the only trend that failed. The trouble is the test with HadCrut data shows that almost 100% of the trends fail the test and most of those failures are on the low side – suggesting a systematic bias.
The test also uses trends of up to 49 years in length which means the effect of the 2008 minimum has a minimal effect on the trend calculation. The fact that the size of the rejections increases as the trends get shorter illustrates that the test does take into account the effect of the local minimum on short trends. If the test only failed for the short trends one could reasonably conclude that models simply underestimate the magnitude of the ENSO ‘noise’. However, ENSO noise cannot explain the failures for the 30+ year trends.
Now it is important to remember that the test does not say why the models overpredict warming. It could be because the GHG forcings they use are too high or the aerosol forcings are too low. However, the fact that the longer trends consist mostly of hindcasts should exclude this possibility, since one would expect the modellers to use the true forcings for the hindcast period.
That said, one thing I am not sure of is the meaning of the ‘spread’ for the d* values. If the models underestimated the ENSO noise then would we expect the ‘spread’ for d* to be larger than the 95% confidence limits even if offset by a bias? The actual spread is much less which suggests the models already overestimate the effect of ENSO noise on long term trends. I am not sure if this is a reasonable statement to make.
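The all-starting-years procedure described above can be sketched in a few lines. The trend numbers, the noise level, and the naive confidence interval (a plain OLS standard error with no autocorrelation correction, unlike the actual method under discussion) are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def ols_trend_ci(y, z=1.96):
    """OLS slope and a naive +/- z*SE interval (no autocorrelation correction)."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
    return slope, slope - z * se, slope + z * se

# Synthetic "observations" warming at 0.11 C/decade vs. a hypothetical
# model-projected trend of 0.2 C/decade -- both numbers are invented.
n_years = 50
obs = 0.011 * np.arange(n_years) + rng.normal(0.0, 0.1, n_years)
model_trend = 0.020                      # C/yr

# For every start year, test whether the model trend falls outside the
# interval around the observed trend-to-present.
starts = range(0, n_years - 10)          # require at least 10 years of data
rejections = 0
for s in starts:
    slope, lo, hi = ols_trend_ci(obs[s:])
    if not (lo <= model_trend <= hi):
        rejections += 1

print(f"{rejections} of {len(starts)} start years reject the model trend")
```

Because the true trend here is built in below the model trend, the long windows reject reliably while short, noisy windows may not, which mirrors the pattern Raven describes: rejection rates that cannot be blamed on one cherry-picked start year.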
bugs didn’t “get” my point about the “wiggly” behavior of models at all. It shouldn’t matter whether they get the timing right, as the analyses cover multiple models in which there should be multiple examples of La Nina induced trends, which would be captured by the confidence intervals.
Raven
This would be 5% of all hypothetical test cases. So, in principle, we would have to run many realizations of the earth. So, you can’t exactly look at the graph and tally up to see if 5% were outside. Nevertheless what the graph does show is that if we’d picked a start year out of a hat, we’d have concluded the models were poor. The ‘reject at 95%’ was not achieved by cherry picking and is also not limited to analyzing over short periods of time.
Both those arguments have been advanced and, in fact, are advanced in Gavin’s blog. So, it is sensible for Michaels/Knappenberger (and me) to show that those criticisms are baseless.
Correct. In fact, if we picked 2001 as the “main” year (as I did for reasons explained previously), the models reject if, inspired by Gavin, we pretend 2008 did not happen. They also reject if we add “pretend” data for the rest of 2009 (in imitation of Gavin). I can show these graphs later.
Assuming the models are wrong, will we switch from “fail to reject” to “reject” and back and forth for a while before rejecting constantly? Of course. That’s virtually guaranteed to happen if the models are wrong. But pretending there is something tenuous about observing that the models are currently being rejected is foolish.
It would be wiser for Gavin and those inspired by Gavin to simply admit that one gets 5% false rejections when using p=95%. If the models are right, the false rejection will be overturned. But adding other criteria, like insisting it’s foolish to note the rejection until it’s been sustained 2 or 3 years in a row, is the equivalent of insisting on some higher (unknown and unknowable) confidence level. Maybe it’s p=99% or p=99.9% or whatever.
If Gavin (or others) need to see rejection at p=99.99999% to think it’s probable the models are wrong, they should just say so!
The denominator for d* contains contributions from two things:
1) The spread of the mean trend for individual models. Because of the assortment of models and runs in the collection used, this includes both the spread due to model biases and the spread due to “model weather noise”. (Had the modelers run a sufficient number of runs, this spread could contain NO weather noise; so, strictly speaking, the test does not require the models to correctly mimic weather noise. That’s actually a nice thing about the “Santer t-test”.)
2) An estimate of the “weather noise” based on observations of the earth temperatures during the time period of the sample.
So, the test I do does not require the models to correctly mimic the earth’s “weather noise”. It accounts for the earth’s “weather noise” based on earth data. (And it can be argued that if the trend includes periods with volcanic eruptions or a non-linear response to variations in the forcing, it overestimates the magnitude of the “earth weather noise”, thus leading to fewer than 5% false rejections when we say we are accepting the risk of 5% false rejections.)
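The two-term denominator described above can be sketched as a Santer-style d* statistic. The trend values, noise levels, and ensemble size are invented, and the naive standard errors ignore the autocorrelation corrections a real analysis would apply:

```python
import numpy as np

rng = np.random.default_rng(3)

def ols_slope_and_se(y):
    """OLS slope and its naive standard error (no AR(1) correction)."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
    return slope, se

# Invented inputs: 20 model-run trends and one synthetic observed series
# warming more slowly than the ensemble mean.
model_trends = rng.normal(0.020, 0.004, 20)      # C/yr across runs/models
obs = 0.012 * np.arange(30) + rng.normal(0.0, 0.08, 30)

b_obs, se_obs = ols_slope_and_se(obs)
b_mod = model_trends.mean()
se_mod = model_trends.std(ddof=1) / np.sqrt(len(model_trends))

# d*: trend difference over combined uncertainty.
# Term 1 (se_mod): spread of model trends (biases plus model weather noise).
# Term 2 (se_obs): the earth's weather noise, estimated from the data.
d_star = (b_obs - b_mod) / np.sqrt(se_mod**2 + se_obs**2)
print(round(d_star, 2))
```

With |d*| compared against roughly 1.96 for a 95% two-sided test, a strongly negative value here would flag the observed trend as inconsistent with the ensemble mean on the low side, which is the structure of the comparison being described.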
“So this amounts to insisting we ignore 2/3 of the data that came in after the ‘predictions’ were published!”
When you say things like that, I know you are caught up in some sort of systemic misunderstanding of the problem you are trying to resolve. You already know the models are completely incapable of short term resolution. As I understand it, they are only capable of long term projections. Stop insisting on using short terms for evaluation. It doesn’t matter how many short term trends you test them out on, all those tests are a waste of time.
bugs–
I am not insisting on short term trends and never have. Trends since 1950 are not short.
I think you don’t understand the issue about neglecting recent data. “Predictions” were not made in 1950. Neither the models, the model runs nor anything about the “predictions” dates back that far. Many of the modelers making the prediction weren’t even born.
The “predictions” were made recently. Those insisting the “predictions” must be tested using data available before the “predictions” were made are insisting on something odd. If someone insists that the test must ignore data from 2008 forward, they are insisting the “predictions” be tested while ignoring quite a bit of data that arrived after the “predictions” were made.
If someone wants to come up with a good reason why we should use only longer trends: fine, explain the reason. But there is no good reason to suggest that tests must ignore recent data.