Butterflies on climate blogs
create confusion.
Some time ago, Steve Mosher asked me what I thought of the whole “chaos” discussion that broke out in the comments at RC. I thought… I don’t want to even go there. But then I read Henk Tennekes’ discussion at Roger Pielke Sr.’s blog, and I got sucked into the tornadoes generated by the flap of the butterfly’s wings.
So, the following is stuff and nonsense, which mostly answers questions that popped into my head when I read Henk’s post. (I hope I can take the liberty of calling Henk, Henk! 🙂 ) First off, I’m going to assume you all have read Henk’s post. It’s here.
Now that you are back, I’ll just comment on a few bits.
Henk said this:
Let me illustrate this with the simple model Ed Lorenz used to popularize nonlinear behavior. The repeated iteration
x(n + 1) = x(n)^2 – 1.8 is sensitive to initial errors, but it is also sensitive to other kinds of mistakes.
Many are familiar with the bit about “sensitive to initial errors”.
Since Henk mentioned the possibility of a parameter in front of the x^2 term, I fiddled with this equation:
Exploring sensitivity to initial conditions
First, I ran it starting with different initial conditions. If you start with x(n=0) = 1 you get different answers than if you start with x(n=0) = 1.01. I’ve illustrated the two solutions to the left. The blue curve starts with x = 1; the red curve begins with x = 1.01.
It’s easy to see the value of x at each step quickly diverges until the two predictions have nothing to do with each other.
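If you want to reproduce the divergence without a spreadsheet, here is a minimal sketch. (The step count and the printed steps are arbitrary choices of mine.)

```python
# Iterate Henk's toy map x(n+1) = x(n)^2 - 1.8 from two nearby starting points.
def iterate(x0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] ** 2 - 1.8)
    return xs

blue = iterate(1.0)    # "blue" curve: x(0) = 1.00
red = iterate(1.01)    # "red" curve:  x(0) = 1.01

# Watch the two trajectories part company.
for n in (0, 5, 10, 20):
    print(n, round(blue[n], 3), round(red[n], 3))
```

Within a handful of steps the two trajectories have nothing to do with each other, even though the starting values differ by only 1%.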
Because the simple equation above shares a property with those describing weather, this simple problem illustrates the difficulties inherent in predicting weather. To get the correct prediction for the weather, we need to start with a perfect description right now.
But, as we often read on blogs by modelers, for climate we don’t necessarily care about individual trajectories (aka ‘realizations’) of weather. What we care about is the average behavior.
Translated into this simple analogy, that means: they don’t care that the red curve and the blue curve are different; that’s weather. They care whether or not the red and blue curves give the same answer over time.
Likely as not, those who try to predict climate will point out that if we iterate this Lorenz equation starting at x = 1, the solution for x(n) will have well defined averages. We can even plot x(n+1) vs x(n) and make cool looking graphs like the one shown to the right. They will throw around words like “attractor” or “ensemble average”.
Should you be more statistically minded, it’s possible to show that over long periods of time, the average value of x exists. (I’m getting about -0.17 based on 1292 time steps.) It’s also possible to calculate a standard deviation: I get 1.21.
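If you want to check statistics like these yourself, here’s a sketch. (The run length is an arbitrary choice of mine; with many more steps than my 1292, expect values that differ a bit from my estimates.)

```python
import statistics

def trajectory(x0, steps):
    # Generate `steps` iterates of Henk's map x -> x^2 - 1.8.
    xs = []
    x = x0
    for _ in range(steps):
        x = x * x - 1.8
        xs.append(x)
    return xs

xs = trajectory(1.0, 100_000)
print("mean:", statistics.mean(xs))
print("std: ", statistics.pstdev(xs))
```

The iterates stay bounded (roughly between -1.8 and 1.44), so the running average settles down even though the trajectory itself never repeats.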
Yes. I would get the same answer for the “blue” case, which I started with x = 1.00, and the “red” case, which I started with x = 1.01.
So yes, one might hope that we could learn about the average properties of “x” even if we can’t predict the individual trajectories for “x”. To take the analogy further: even though we can’t predict weather, we might hope we can predict climate.
Sure. Maybe. But hoping and praying aside, these are the questions one must still ask:
Given uncertainties in things we actually know, how well can we hope to predict the average properties? Does the non-linearity that results in the unpredictability of weather cause problems when we try to predict climate?
It turns out the same non-linearities that cause problems for predicting weather cause problems for predicting climate. They are just different problems. Recall: With weather we want to predict the trajectory. With climate we want to predict the averages, standard deviations etc.
Let’s now focus on this part of Henk’s statement: “…. but it is also sensitive to other kinds of mistakes.”
Henk first discusses one kind of mistake: What if the parameter “1.8” in the equation must be estimated, and the modeler is off by 10%? That seems like a small mistake, does it not?
Here is the effect of a 10% difference in the constant 1.8. (Blue represents the true solution with a parameter of 1.8; red is calculated using 1.62 = 0.9 × 1.8):
But maybe, as modelers, we don’t care about this visual trick, which suggests a 10% change in a parameter causes a large change in the solution. Maybe we are only interested in the average value of “x”. How much has that changed?
Well, the average of “x” is now -0.12; that’s about 71% of the “correct” value of -0.17.
So, yes, a roughly 10% mistake in the parameterization resulted in a 29% mistake in the average of x.
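You can check the sensitivity of the average yourself. This sketch compares the “correct” parameter against one 10% too small. (The run lengths and burn-in are arbitrary choices of mine, so expect averages that wander a little from the ones I quoted.)

```python
def long_run_mean(c, x0=1.0, steps=200_000, burn=1_000):
    # Average the iterates of x -> x^2 - c after discarding a burn-in.
    x = x0
    total = 0.0
    for n in range(steps + burn):
        x = x * x - c
        if n >= burn:
            total += x
    return total / steps

m_true = long_run_mean(1.8)          # the "correct" parameter
m_off = long_run_mean(0.9 * 1.8)     # parameter 10% too small: 1.62
print(m_true, m_off)
```

The two averages differ by far more than 10%, which is the whole point: small parameter mistakes don’t stay small in the statistics.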
So, is being 29% off bad? Well…. That depends. When modeling is in its infancy, being 29% off is considered rip roaring success! These sorts of models are generally considered useful– which is to say: being 29% off is better than having no prediction at all.
Yes, one might hope that the fact that we can’t predict weather doesn’t mean we can’t predict climate. But that’s small consolation. All we’ve shown (by analogy) is that predicting climate is not necessarily impossible. But one must also recognize there is a huge distance between “it is not necessarily impossible to predict climate” and “they skillfully predict climate”.
And so far, I’ve only shown pictures and results for the difficulty Henk actually mentioned: that is, not knowing the correct magnitude of the parameter 1.80, and being off by 10%.
So… did Henk fully explore the types of mistakes?
Nope.
Many of you have heard about the difficulty with models capturing “sub-grid” processes. As a result, modelers must not only parameterize things, but they must also “smear” the effects of small scale features inside a grid box.
So, to mimic that behavior, I’m going to add a little “smearing” to my model. What I’ll do is change Henk’s equation to (roughly) this:
x(n + 1) = (1 – r)·[x(n)^2 – 1.8] + r·x(n)
where “r” is a smearing factor.
What this does is smear the current value of “x” by averaging in a bit of the value from the previous step. (This is sort of like the effect of “artificial viscosity” that Gerry Browning is going on about. It’s not entirely the same, but then this is a simplified “toy” equation.)
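Written out one way, the smearing described above amounts to x(n + 1) = (1 – r)·[x(n)^2 – 1.8] + r·x(n); that blend is my reading of “averaging in a bit of the previous step”, and the run lengths below are arbitrary, so treat the numbers as illustrative.

```python
def smeared_mean(r, x0=1.0, steps=200_000, burn=1_000):
    # Iterate a "smeared" map: a weighted blend of Henk's map and the
    # previous value of x. Setting r = 0 recovers the original map.
    x = x0
    total = 0.0
    for n in range(steps + burn):
        x = (1.0 - r) * (x * x - 1.8) + r * x
        if n >= burn:
            total += x
    return total / steps

print("no smearing:  ", smeared_mean(0.0))
print("2.5% smearing:", smeared_mean(0.025))
```

Even this tiny bit of smearing visibly shifts the long-run average, which mimics (very loosely) what sub-grid smearing can do to a model’s statistics.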
So, if I smear with r=2.5%, I get this:
How much difference does this 2.5% smearing make on the average value of x, you ask? The average is now -0.30. That’s 78% larger in magnitude than the “correct” value of -0.17!
Wow!
But it gets worse!
You might think: “I bet Lucia hunted around for the changes that would make things look as bad as possible for predicting averages. I mean… 2.5% smearing makes a 78% difference? She probably tried 1.0798%, 2.7643% etc. until she found something really, really bad!”
Nope. In fact, you can play and explore the sensitivity to the parameterizations yourself. This very simple non-linear equation with one parameter turns out to be very, very sensitive to both the magnitude of the parameter (1.80) and the amount of smearing. Download the spreadsheet and change the 0.9 to 1.10. That corresponds to a 10% error in the parameter 1.80– but this time making it too large. See how “attractor 2” changes! Heck, change the smearing of 0.025 to 0.05! 🙂
So what have we learned about chaos and climate?
Beats me! But we have learned that
- the fact that averages exist doesn’t mean we can predict their magnitude very well.
- small errors in parameterizations can lead to large errors in predictions for average values.
- the need to smear small scales can have dramatic consequences on the solution.
So, sure. The Chaos argument does tell us that the fact that we can’t predict weather doesn’t mean we can’t predict climate. But the same chaos argument shows us that accurate, skillful climate prediction will be very difficult– and it may also be impossible!
The fact is: The chaos argument doesn’t resolve anything.
Climate models contain parameterizations, which are estimates of the true value. Climate models have large grid sizes, and so must smear the effect of the small scales. They do apply conservation of mass, momentum and energy. But the general class of models that use the sorts of parameterizations in climate models must be compared to data. Ideally, the comparisons are done to data that does not pre-date the selection of the parameterizations.
That’s why I try to compare to new data.


Tennekes’ commentary is thoughtful, but I find it a bit self-contradictory. In the first half, he presents a variety of arguments about the difficulty of upscale error propagation and the stability of averages in systems like turbulent flow through a pipe. Then he switches gears to butterflies and hurricanes, and seems to argue precisely the opposite – that microscale weather and tipping points translate to macroscale uncertainty. He introduces a straw man, “The very claim that there exist no processes in the climate system that may exhibit sensitive dependence on initial conditions, or on misrepresentations of the large-scale environment in which these processes occur, is ludicrous.” I haven’t heard anyone seriously argue that there aren’t any chaotic processes in the earth system; only that they fall out at large scales because the system is fundamentally governed by negative feedback – which would seem to be the same argument Tennekes started with.
The question of parameter sensitivity and measurement cuts both ways. If a parameter has a sensitive influence on behavior, then it should be easy to measure it with some precision (i.e. wrong choices lead to bad fit). Also, the interpretation of % sensitivity here is somewhat problematic, because the mean is near zero. You could add an arbitrary constant to x and wind up with much different parameter sensitivity, just as % sensitivity of a temperature would differ with the zero base of C vs K (with only the latter having a sensible physical interpretation).
Tom:
Yes and no. The difficulty is that the parameters are not teased out from data for, say, GMST (global mean surface temperature). They are teased out from experiments on a more micro level. So, the boundary layer parameterization comes from understanding of boundary layers. The parameters for clouds come from direct understanding of clouds.
The 10% error on a boundary layer parameterization doesn’t make the boundary layer go haywire, but it could introduce a disproportionate amount of error to predictions of GMST. Or, it might not. Who knows? That’s the meaning of ‘uncertainty’.
===
On the general issue of Tennekes’ post: I agree that Tennekes’ discussion sort of moves around. There may be a strawman issue with regard to the Initial Condition issue (IC). But, quite honestly, I’m not sure.
But I actually think Henk is not arguing a total strawman. He is arguing against something some prominent bloggers do say from time to time. (Maybe they say it in casual circumstances when they are not being serious, but some do advance the argument that climate is a boundary condition issue and not an initial condition issue.) I googled and found this comment by Gavin:
(Italics mine)
ref: http://www.realclimate.org/index.php/archives/2005/01/climatepredictionnet-climate-challenges-and-climate-sensitivity/langswitch_lang/fr#comment-1151
When Tennekes says this is not proven, he is correct. It is not proven that climate is purely a boundary condition problem.
So, in some sense, Henk is arguing against a real, honest to goodness argument that is sometimes made by prominent climate bloggers. Taken to the extreme, saying climate absolutely is a BC problem and absolutely is not an IC problem is saying that the initial conditions don’t matter — at least in some sense.
Given what we know about Chaos, even over very large amounts of time, initial conditions could matter. Like it or not, we are trying to predict climate in the next 100 years or so– not the average state over 100 ba-jillion years. In a very real sense, initial conditions probably do matter at least a little to what happens over the next 30 years. Even the eruption of Krakatoa may ‘matter’ to what happens next year. We don’t know.
The physical phenomena and processes occurring in weather and climate cannot be mapped to a simple algebraic iterated equation. There is no theoretical basis for taking such a simple illustration to be representative of weather, climate, or more nearly complete mathematical models of these (NWP and GCM). Algebraic iterated maps are useful, however, as reminders that different discrete approximations to the continuous equations will lead to different numerical results, just as changes in the parameters and ICs lead to different results. Additionally, changes in the sizes of the discrete temporal and spatial increments become analogous to such changes. And, the expansion method applied to obtain the various Lorenz-like systems leads to systems that might or might not indicate chaotic response. The number of modes, and the coordinate directions in which these are used, lead to different systems, some of which do not correspond to the theoretical basis of the original Lorenz systems. Finally, chaotic response is not obtained for all values of the parameters and the ICs, even for the original Lorenz system. Periodic motions and even motions that reach new equilibrium states have been observed.
How is it possible to know which of the many possible responses is to be obtained, given the number of continuous equations (algebraic, ODEs, PDEs, and computer-language constructs) for the several physical phenomena and processes, and the different discrete approximations and huge number of parameters used in NWP and GCM models and methods?
I think one of the more important aspects of the original Lorenz 3-equation system as discussed in the Butterfly Flapping post on RC has been overlooked. The original Lorenz 3-equation system (very, very roughly) modeled a fluid system for which there is constant energy addition into the representation of the fluid motions. Without this energy addition, the flapped butterfly wing would have no effect whatsoever. Raypierre’s discussions of micro- nano- and pico- changes in temperature somehow, ‘before you know it’, ‘given enough time’, affecting the energy content of a spatial region as large as Kansas are nonsense in the absence of the required energy additions. Increasing energy content of increasing masses of material always requires energy additions; there are no exceptions.
Simple iterated algebraic equations cannot represent the energy accounting that is necessary to maintain motions of real fluids for which energy losses due to resistance to motions are always present. There must always be energy sources present, or the flapped wing cannot induce motions into material initially unaffected by the motions. Power must always be supplied to drive the mean motions. Take the blast-off of rockets for which you can see with your own eyes the temporal and spatial extent of the motions induced by the exhaust of the engines. Especially the vapor plume generated from the water used to cool the platform is seen to disappear soon after the rocket, and the engines providing the energy source, leave the platform. Or also consider the effects of your basic fusion-based weapon device, aka The H-bomb. Now there’s your basic perturbation. So far as I know there have been no correlations indicating increased extreme weather events during the times of testing of these devices.
The original Lorenz system contains a delicate balance between the losses and energy additions such that motions can continue. And these motions are bounded aperiodic motions that continue only so long as energy is added into the system. Changes in the numerical values of the parameters, in the ICs, and in the sizes of the discrete time increments, when the system is solved by numerical methods, can all lead to different responses. Some of these responses are not chaotic. Small changes in the numerical values of the coefficients for the terms that represent the frictional losses can cause the system to attain equilibrium states.
All NWP and GCM models and methods solve the discrete equations as an initial value problem in time. Even as new equilibrium states of the climate are approached, these being set by the boundary conditions, the numerical methods are solving an initial value problem in time. Theorems about consistency, stability, and convergence of discrete approximations do not differentiate between states near-to or far-away-from equilibrium states. The problem is treated as an initial-boundary-value problem no matter what the state of the system. BTW, if new equilibrium states are the expected results, the system is not chaotic. Equilibrium states are the ultimate in not-chaotic. It is not clear to me that if the entire climate system is expected to approach an equilibrium state how a given trajectory, of a part of the system, can be chaotic.
The hypothesis that the mathematical models and methods of weather and trajectories-in-climate represent chaotic response remains untested. The hypothesis is simply assumed in order for ensemble-average methods to be applied to the noisy output generated by un-Verified numerical solution methods.
Oh, I forgot to mention. The Earth’s Climate and the atmo-bio-aquao-cryo- chemo- geo- and othero- subsystems that make up the complete system have very likely never been and will never be in the future at an equilibrium state.
Dan–
I agree with many of the things you say. That’s why I say “So, the following is stuff and nonsense” near the beginning of the post. Yes, we can answer some questions this way. Unfortunately very few. Obviously, neither this equation, nor the more complicated full Lorenz equations define any weather system. (If they did, we wouldn’t have AOGCM’s. We’d just fiddle with the Lorenz equations. )
Now I have some questions for you:
The earth’s fluid system does have constant energy input and energy output. The input is from the heat of the sun. The losses are due to radiation. These happen, on average at different locations (more heat input at the equator; less at the poles. More on the side of the sun; less on the night side.)
So, this big driver causes motion. The sun’s energy is the mechanism that supplies “power” to all the agitation that become weather.
Within this system the Butterfly wing might have an effect. We could argue about whether it does or not, but we can’t simply say it’s not possible based on your argument here. You need to show more.
Also, I think the issue with the Butterfly is not so much that it creates tornados in general but that it affects which tornados happen when. In these butterfly arguments, the butterfly flaps don’t cause a net increase in the mechanical energy. It’s more like: the specific locations of roughness elements on a pipe wall will cause individual trajectories in pipe flow to vary. However, we only need to know the average roughness to predict the pressure drop and other average features of the turbulent flow.
Yep. With regard to the equation Henk Tennekes suggested, play with the spreadsheet. By modifying the 1.8 parameter within a 50% range, you can easily get nearly periodic behavior. (Exactly periodic takes a bit of fiddling.)
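For anyone who wants to hunt for those periodic windows numerically rather than in the spreadsheet, here is a sketch. (The starting value, burn-in, and tolerance are arbitrary choices of mine.)

```python
def near_periodic(c, x0=0.5, burn=2_000, max_period=200, tol=1e-6):
    # Iterate x -> x^2 - c past a burn-in, then look for a near-return
    # to the same state, which signals a (nearly) periodic orbit.
    x = x0
    for _ in range(burn):
        x = x * x - c
    ref = x
    for n in range(1, max_period + 1):
        x = x * x - c
        if abs(x - ref) < tol:
            return n     # candidate period
    return None          # no near-return found: looks chaotic

print(near_periodic(1.76))   # inside a well-known period-3 window
print(near_periodic(1.80))   # Henk's value: no short cycle found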
With regard to turbulence:
a)In pipeflow, if the viscosity is high, resulting in large frictional losses, flow is laminar.
b) When modeling turbulent flows, if someone throws in a ridiculous turbulent diffusivity, the answers are changed. We know this without the “toy” model in my post. That said, the analog in the “toy” model in my post is adding that “r” for smearing. Diffusivity smears. Notice a small amount of incorrect smearing changes the result. If you set the smearing to about 5%, you’ll see the motions are nearly periodic.
But, of course, all this tells us is that very small errors in the toy model can make big differences in the average outcome. It can’t tell us if the levels of turbulent viscosity in models are bad, wrong, or cause problems. For one thing, we know that in turbulent flows in engineering, we do use turbulent viscosity. (Though, we generally add it in more complicated ways than they seem to in climate models. If I’m reading correctly, they just stuck in a known value? We’d have the value change on the fly. But there are some at least quasi-legitimate reasons why we do one thing and they do another.)
I don’t think anyone, anywhere anticipates true equilibrium in the sense of a flow that is not time dependent.
Loads of engineers use the concept of quasi-equilibrium and/or fully developed turbulent flows.
I don’t think modelers are doing a particularly splendid job showing the world evidence their models are skillful. When pushed, I notice Gavin, for example backs up to arguing that the models are not totally useless. (They aren’t totally useless.)
But I also don’t think we can hold climate modelers to some sort of mystical perfection we, as engineers don’t expect of ourselves. We don’t claim there is such a thing as strict “equilibrium” in turbulence. They don’t claim it in climate.
great post
Lucia,
Relative to an energy balance for the material initially perturbed by the flap, it’s the local states surrounding the material and its evolution following the flap that matter. The global energy balance doesn’t enter the picture.
It seems to me to be very unlikely that the trajectory of the evolution would be such that net inflow of energy would always occur as some kind of preferred condition. The temporal scale also matters: the longer it takes for the initial perturbation to affect larger and larger masses of material, the more likely that some transfer of energy out of the material will occur. It would take an extremely exceptional set of conditions to always allow for net inflow, or a net increase over the life of the evolution. And when there are losses, they must be such that the motions are not completely shut down. And then there must be periods of net energy inflow in order to make up for the losses when the motions are not completely shut down.
In the original Lorenz system energy is always added into the system. Trying to model periods of evolution of the motion when the surrounding states vary both spatially and temporally, and for conditions of both energy losses and gains, all such as might obtain in The Climate, is a very hard problem.
Mother nature is naturally dissipative. Extended periods of exponential growth don’t happen. Otherwise it is very likely that we wouldn’t be here.
BTW, when ‘sensitivity to initial conditions’ is invoked as a condition for chaotic response, that description by itself is incomplete. It must be modified to account for the fact that not all such changes lead to bounded aperiodic responses. Sensitivity to changes in the initial conditions that show exponential divergence from the results for the previous values of the ICs and that lead to bounded aperiodic states, is a more nearly complete description. Changes in ICs that lead to periodic states or equilibrium states don’t count.
All comments on incorrectos appreciated.
Dan
I’m not sure what big picture argument you are trying to make. I can comment in the small picture issues:
And…so? Even if we draw a control volume around the butterfly, we can still have positive mechanical work done on the control volume. The butterfly wing flaps themselves do work on the air. This comes at the expense of some sort of conversion of calories from the sap it ate sometime earlier to motion of the wing.
So, considering the work on the surfaces of the control volume, and the work by the butterfly, work can be done on this small control volume. I haven’t calculated a Reynolds number associated with the tip velocity of the butterfly wing, but I’d be surprised if this flow were laminar! So, locally, the disturbance is unlikely to be dissipated rapidly by molecular viscosity.
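I said I haven’t calculated a Reynolds number, but a back-of-envelope version is easy. Every figure below is a round, assumed number (wing-tip speed and chord are guesses, not measurements):

```python
# Rough Reynolds number for the flow over a flapping butterfly wing.
tip_speed = 0.5     # m/s   -- assumed wing-tip speed
chord = 0.05        # m     -- assumed wing chord length
nu_air = 1.5e-5     # m^2/s -- kinematic viscosity of air near 20 C
Re = tip_speed * chord / nu_air
print(f"Re ~ {Re:.0f}")
```

A Reynolds number on the order of a thousand is nowhere near creeping flow, so molecular viscosity won’t damp the disturbance instantly. Whether it grows or decays farther afield is exactly the open question.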
What does this mean for weather? Climate? I don’t know.
But, the butterfly does perturb the flow in the box, just as an irregularity on the surface of a pipe wall perturbs a flow. In turbulent flows, this is known to disturb the system. In many cases, this makes a noticeable difference to individual realizations of flow. In some cases, this makes a noticeable difference to the mean flow. (Engineers sometimes take advantage of this: we can trip turbulence and often get a result we prefer this way. In fact, we all know why golf balls have dimples: the small surface irregularities trip turbulence in boundary layers that would otherwise be laminar. This all happens without any net input of energy into the system.)
It seems unlikely to me too. But has anyone, anywhere ever claimed any net inflow of energy would always occur? And net inflow of energy into what?
As far as I’m aware, no one is claiming any net inflow of energy into anything.
They are claiming that the butterfly wing affects a trajectory. A tornado that otherwise might have hit Ames, Iowa, hits Story City. Or, failing that, even more dramatically, the Quad Cities. But the butterfly wing doesn’t cause tornados that were otherwise impossible.
Of course not. Once again: no one claims they do. That’s not what the whole “butterfly wing flap causes a tornado” means.
In my opinion, the problem with all these chaos arguments on blogs is no one knows what question they are trying to answer. Everyone also often seems to be presenting counter-arguments to arguments no one made. This leads to a lot of heat and no light.
On a final note: If your larger picture argument is that you anticipate that Roger Pielke Sr. will show that, given the levels of molecular viscosity that exist the disturbance created by a butterfly flap is not enough to make some sort of difference to large scale behavior, then maybe he will.
He hasn’t published that yet, so I haven’t read it. Whatever he does or proves, it’s not something that can be easily addressed or explained in a blog comment. It’s going to involve math, or simulations, or something.
As I understand it, this is all related to the whole debate over predicting climate being a “boundary condition” problem or, like weather, an “initial condition” problem. For me, intuitively, I don’t see why initial conditions wouldn’t matter for climate, but I don’t really have an opinion on it. Roger has stated his opinion on it in the past, I believe, for instance here:
http://climatesci.colorado.edu/publications/pdf/R-210.pdf
Others (skeptics and AGW advocates) disagree. I don’t need to explain RC’s position, but I know that Roy Spencer at least disagrees:
http://www.weatherquestions.com/Roy-Spencer-on-global-warming.htm#predict
Andrew–
The chaos issue gets brought up in at least two circumstances:
Q1: Is climate an initial condition problem or a boundary condition problem. and
Q2: We know it’s impossible to predict weather. Does that mean it’s impossible to predict climate?
The answer to Q2 is: It may be possible to predict climate even if it’s impossible to predict weather. (This is very different from claiming it is possible to predict climate. It’s simply saying that the impossibility of predicting weather isn’t absolute proof we can’t predict climate.)
I think the literally correct answer to Q1 (the boundary condition vs. the initial condition question) is that predicting climate is both. To say only one or the other requires one to say: on what time scale? In the absolute extreme, if we could hold all variables constant, and we only care about averages over, oh, say 10,000 years, it’s likely a boundary value problem. (Although, even then we have problems. What if life evolves? Or mountains rise?)
Over 5 years, climate-weather is clearly an initial condition problem. (In which case people then say “Oh, that’s weather!”) That leaves us in a situation where one debates: over 100 years, which is more important? Initial conditions? Or boundary conditions? Over 30? Over 10? 20? 19?
If 30 is the magic number, it ought to be possible to explain why 30 is the magic number and in what sense initial conditions no longer matter and only boundary conditions matter. At least on many blogs, those who claim 30 is the magic number seem to answer “why 30?” with “because we say so!”
Quantitative answers should be possible. Oddly enough, though I disagree with Gavin’s estimate of “weather noise” based on models, at least the post discussing the variability of predicted trends over 7 yrs vs 20 years is heading in the direction of answering the question modelers seem to want to avoid answering: that is, why do you think 30 years is the magic number?
3 Lucia-inspired climate haikus:
The butterfly drowns
In an old canvas bucket
I can still believe
Boundary is content
Any numbers are welcome
When Falsify flees
Lukewarm butterfly
Measured so very badly
Yet graphs stay happy
sunset stroll
searching data
for butterflies
Excellent haikus!
Climate blogging benefits
from little poems.
Lucia,
Agreed that the butterfly wing input does not cascade to actually create a hurricane. So, what is now being suggested is that the energy from the butterfly influences the creation or trajectory of the hurricane.
What I understand of Roger P’s argument is that the energy put out by the butterfly (and this applies to much larger energies also) would dissipate before covering a small fraction of the distance the influence would have to travel. That is, the normal radiative, convective, and conductive conduits would bleed off the energy. It should also be remembered that the energy from the wing would not even be directed in one direction.
In an enclosed area with no energy input you would be able to track the energy from the butterfly, but, even there, unless your space is outside gravity, there will still be damping. In the earth’s atmosphere even directed energy is overwhelmed, broken up and dissipated by the semi-random vectors of the atmospheric components. The odds of the butterfly energy having a special path of supportive “feedbacks” or “levers” or “waves to modulate” to overcome normal physics is so small as to be non-existent.
As you note, a small imperfection in a path can cause a large disruption in flow. I would point out that there is a LARGE difference in the magnitudes of the results. We can say that the fixed imperfection is imparting a specific amount of turbulence PERMANENTLY, whereas the butterfly energy is extremely time constrained. The amount of energy the imperfection deflects is large compared to the butterfly’s, depending on the velocity of the flow.
On the other foot, how many other initiators are occurring that would counteract, deflect, and randomize the butterfly connection??
Forgot the link to Roger’s post:
http://climatesci.org/2005/10/06/what-is-the-butterfly-effect/
lucia, this probably falls under the “because we say so” category, but thirty years has been the benchmark for climate since, well, ever as far as I know. It’s just an arbitrary standard, with, as you correctly point out, no physical basis. I think you’re probably right on the “both” thing. I would propose that the relative importance scales with the time scale, so that initial conditions are only of zero importance at essentially infinite time (and there are no questions about equilibrium by this point, too!) while initial conditions are of dominant importance on really short, or even slightly “long” time scales (probably even remaining non-negligible beyond thirty years, though maybe making too small a difference to measure). Personally, I think that climate prediction will be possible eventually (being dominated to a fair degree by boundary conditions) but is not possible now (and by this I don’t mean the problems of economic/social forecasts, or even that volcanoes or other random events constitute wild cards; I actually mean I don’t think the models are complex enough in their internal processes).
KuhnKat–
Thanks for the link. What Roger says may well be right. I don’t actually know. But in other complex flows, a small trigger in a particular place could grow easily; a small trigger somewhere else could not.
So, for example, it’s likely true that a butterfly flap inside a jar would make no difference– but a butterfly flapping at the edge of a hydrodynamic instability would.
But the fanciful idea is often just used as a mantra to explain weather's unpredictability. (Or something. It's never clear to me in these arguments.)
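The point that a small trigger grows in some places and not others shows up even in the one-dimensional toy map: a perturbation applied where the map's slope is shallow shrinks on the next step, while one applied where the slope is steep grows. (A Python sketch; the two sample points are arbitrary choices of mine, and long-run growth is governed by the Lyapunov exponent, not any single step.)

```python
def step(x):
    """One iteration of the map from the post: x -> x^2 - 1.8."""
    return x * x - 1.8

def one_step_growth(x, eps=1e-6):
    """Factor by which a tiny perturbation at x grows after one step."""
    return abs(step(x + eps) - step(x)) / eps

print(one_step_growth(0.05))  # quiet spot: perturbation shrinks (factor ~0.1)
print(one_step_growth(1.5))   # stretching spot: perturbation triples (~3)
```

The butterfly-in-a-jar analogue: the same flap matters a lot at a stretching spot and hardly at all at a quiet one.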
Andrew–
I agree 30 years has been used for a long time. I also have no problems at all with it when used by the USDA or other entities to describe things like Chicago's climate and make recommendations for agriculture, tourism, heating, air-conditioning, construction, etc.
But when we are discussing issues like, “Can we test predictions?” or “Is climate a boundary-condition (BC) or an initial-condition (IC) problem?”, tradition is a somewhat inadequate response. (It's a particularly inadequate one when applied irregularly. As we all know, if we get the “right” answer, 17 years seems to be “enough”. So, it might appear that for some, the correct number of years depends on whether or not they get an answer they like. 🙂 )
one of my favorite Spring pieces.
NOTHING is so beautiful as spring—
When weeds, in wheels, shoot long and lovely and lush;
Thrush’s eggs look little low heavens, and thrush
Through the echoing timber does so rinse and wring
The ear, it strikes like lightnings to hear him sing;
The glassy peartree leaves and blooms, they brush
The descending blue; that blue is all in a rush
With richness; the racing lambs too have fair their fling.
What is all this juice and all this joy?
A strain of the earth’s sweet being in the beginning
In Eden garden.—Have, get, before it cloy,
Before it cloud, Christ, lord, and sour with sinning,
Innocent mind and Mayday in girl and boy,
Most, O maid’s child, thy choice and worthy the winning.
There is a fundamental error here, made by the realclimate team and others. The climate is not just the long-term average of the chaotic weather. This is so obvious that I find it annoying that anyone falls for it. If this were true, fluctuations would get smaller when averaged over longer time-scales, but they don't.

The climate is driven by larger-scale, longer-time-scale effects than the weather, like the extent of the ice caps, the strength of the gulf stream, your favorite ocean oscillation, etc., and these all interact with each other in a complicated nonlinear way to give a different chaotic system, one that is unpredictable over the timescale of decades and centuries in the same way that weather is unpredictable over weeks.
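The claim that fluctuations don't shrink under longer averaging is at least checkable on the toy map from the post. For that fast-mixing system, the spread of block averages does shrink, roughly like 1/√N; whether real climate variability behaves this way, or carries long-memory components that refuse to average out, is exactly the open question. A Python sketch (trajectory length, burn-in, and block sizes are all arbitrary choices of mine):

```python
import statistics

def step(x):
    """One iteration of the map from the post: x -> x^2 - 1.8."""
    return x * x - 1.8

# A long trajectory of the toy "weather", with an initial transient discarded
x = 0.3
for _ in range(1000):
    x = step(x)
series = []
for _ in range(200_000):
    x = step(x)
    series.append(x)

def block_mean_std(data, block):
    """Standard deviation of non-overlapping block averages of length `block`."""
    means = [statistics.fmean(data[i:i + block])
             for i in range(0, len(data) - block + 1, block)]
    return statistics.pstdev(means)

for block in (1, 100, 10_000):
    print(block, block_mean_std(series, block))
```

For this map the spread drops sharply as the averaging window grows, which is the behavior PaulM says the real climate does not show.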
Climate Blog Haiku:
Realclimate sucks,
Climate Audit is better,
I love Lucia.
PaulM – I’m not quite sure what you’re getting at, but generally in statistical physics for instance, one has to subdivide a system into “fast” and “slow” processes; the “fast” processes occur on a time-scale too short to be of interest, and so any quantitative attributes come about as averages over those “fast” processes (whether they’re actually chaotic or just vast numbers of small independent terms, the averaging is still valid). The “slow” processes will still be happening and cause those quantitative attributes to change with time; this is true of any system, unless you completely suppress all the “slow” processes, so you wouldn’t expect long time-scales to necessarily all smooth out. The challenge is where you put the dividing line.
For climate the line seems to depend on the context. For projections for the next century, “fast” processes that are averaged over roughly include anything on a scale of 10 years or less – that includes day-night variations, storm systems, yearly orbital variations (the 11-year solar cycle would be roughly the limit of “fast” processes), ENSO and other short-term cycles, etc. Water vapor response to temperature is a “fast” process (about 10 days), which is why it’s considered a “feedback” effect. On the other hand, ice sheet melting, CO2 and methane feedbacks, vegetative migration and related biophysical responses would be mostly on the “slow” side, which means they need to be treated as parameters of the climate system that will change with time. Slow ocean response processes (PDO if it’s a real oscillation, for instance) would also have to be treated as “slow” effects until you start looking at multi-century time-scales.
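The fast/slow split can be caricatured with the toy map: let the map supply the “fast” chaos, add a slow drift standing in for a changing boundary condition, and average over blocks much longer than the fast time scale. The slow signal survives the averaging; the fast chaos does not. (A sketch only; the drift rate, amplitude, and block length below are made-up illustrative numbers.)

```python
def step(x):
    """One iteration of the map from the post: x -> x^2 - 1.8."""
    return x * x - 1.8

# Fast chaotic "weather" riding on a slow linear "forcing"
n, block = 100_000, 5_000
x, signal = 0.3, []
for i in range(n):
    x = step(x)
    signal.append(1e-5 * i + 0.1 * x)   # slow trend + fast chaos

# Averages over blocks much longer than the fast time scale
block_means = [sum(signal[i:i + block]) / block for i in range(0, n, block)]
print(block_means[0], block_means[-1])
```

The block means climb steadily with the imposed drift even though any individual value of the series is dominated by the chaotic term.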
Hansen’s recent paper talking about a 6 degree C climate sensitivity number was folding all of those slow processes (except, I believe, CO2 feedback itself) into a very-long-time-scale climate discussion – except for the speculation that some of the responses could be quicker than we expect. When you’re talking about hundreds of thousands of years then even the Milankovitch cycles become “fast” processes you would average over, and “climate” on that scale is something rather different.
I agree that a priori or micro parameters could pose problems when transplanted to a macro context, but they still ought to be susceptible to validation of some sort, though that doesn’t mean that the right things have actually been looked at. Same goes for the excess viscosity argument. I would be more convinced by a concrete example from Tennekes.
Power law scaling of fluctuations doesn’t guarantee that endogenous climate variability is the origin. It’s not clear to me that the instrumental record is long enough to resolve whether this is weather or forcings or both.
It seems to me that those who propose that climate is an IC problem, or similarly that it is chaotic on multidecadal timescales, ought to identify the persistent state variable that makes it so in the face of T^4 negative feedback. The atmosphere doesn’t seem like a good candidate; it doesn’t have enough heat capacity and any big nonlinearities should be revealed by the seasonal cycle. The ocean is the obvious alternative, which I think is more or less what RP Sr argues. Whatever it is would have to be just the right size (in some thermodynamic sense). Too small and it’s not going to matter on climatic time scales; too big and it’s effectively a constant this century. If the Schwartz et al. idea is right, that the time constant of the ocean coupling to the surface is fairly short, then the canonical 30yr climate horizon makes some sense (i.e. it’s long enough for transients to die out). However, I’m inclined to believe that the idea might not survive a more realistic ocean specification.
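The “long enough for transients to die out” idea can be made concrete with the usual one-box energy balance, C dT/dt = F − λT, which is roughly the Schwartz-style picture. The parameter values below are illustrative assumptions of mine, not anyone's fitted numbers; the point is only that two different initial temperatures converge to the same forced response after a few time constants, so a ~30-year horizon makes sense if the effective time constant is under a decade.

```python
# One-box energy-balance sketch:  C dT/dt = F - lam * T,  tau = C / lam
C, lam, F = 8.0, 1.0, 3.7   # heat capacity (W yr m^-2 K^-1), feedback, forcing
dt = 0.01                   # time step, years

def temperature(T0, years):
    """Euler-integrate the box from initial temperature anomaly T0 (K)."""
    T = T0
    for _ in range(round(years / dt)):
        T += dt * (F - lam * T) / C
    return T

tau = C / lam   # 8 "years": after ~4 tau the initial condition is forgotten
print(abs(temperature(0.0, 1) - temperature(1.0, 1)))    # ICs still matter
print(abs(temperature(0.0, 30) - temperature(1.0, 30)))  # nearly forgotten
```

With these numbers the two runs differ by almost a degree after one year and by a few hundredths after thirty; both end up at the equilibrium response F/λ. A more realistic, deeper ocean would of course stretch the time constant.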
So, perhaps the question is whether the flap of a sculpin’s fin can cause an upwelling in Peru.
On an unrelated note: “Eyeballing” Trends in Climate Time Series: A Cautionary Note
Sure. All models ought to be susceptible to validation of some sort. All require validation. It also has to be done correctly, and with an open mind to seeing problems. Ohh.. gotta go click the link now. 🙂
re:
comment 3110 lucia May 31st, 2008 at 8:20 am
Lucia said:
And I responded in Comment 3112 by focusing on the requirement that the energy picture must be examined local to the point of interest. The global energy balance is not part of the issue. I note, however, that the earth’s fluid system does, at times and at all spatial locations, experience energy losses. That situation is not conducive to the growth of perturbations.
comment 3113 lucia May 31st, 2008 at 10:59 am
Commenting on
comment 3112 Dan Hughes May 31st, 2008 at 9:22 am
In the absence of energy addition to the fluid the disturbance will always diffuse and dissipate. In the presence of viscosity and thermal conductivity, both always leading to irreversibilities, the energy transferred to the fluid by the Butterfly must diffuse and dissipate (momentum diffusivity in the momentum balance equations, energy diffusivity and dissipation in the thermal balance equations). The motions will always cease unless energy to overcome the resistances is available. In the presence of resistance, power must always be added to the fluid, or any body of matter, to maintain its motion. There are no exceptions.
For your example of fluid flow in a pipe, power is being added somewhere in the system by some means to the fluid. The disturbance grows only because energy is supplied by the mean flow to the disturbance. Power is supplied to the mean flow in order for the mean flow to flow; a pump is a straightforward example. For the golf ball, the energy was supplied when the ball was struck by the club head. The motion of the golf ball, of course, always ceases because it experiences resistance to its motion and energy is not continuously supplied to maintain that motion. If a Butterfly is sitting on a stationary golf ball and flaps its wings, the energy transferred to the fluid might induce turbulent flow, but that initial flow will cease. In the absence of energy addition to the fluid surrounding the golf ball, there is no flow, laminar or turbulent, dimples or no dimples.
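The “power must always be added to maintain motion” argument reduces, in its simplest caricature, to a velocity with linear drag: with no power input the motion decays to nothing, while with a sustained input it settles at power/drag. (A toy sketch with made-up numbers, not a model of any real flow; the drag coefficient and input are arbitrary.)

```python
# Toy version of the dissipation argument:  dv/dt = power - drag * v
def settle(v0, power, drag=0.5, dt=0.001, t_end=40.0):
    """Euler-integrate a velocity with linear drag and constant power input."""
    v = v0
    for _ in range(round(t_end / dt)):
        v += dt * (power - drag * v)
    return v

print(settle(1.0, power=0.0))   # butterfly flap alone: motion dies out
print(settle(1.0, power=0.2))   # continuously driven flow: motion persists
```

The flap-only case decays to essentially zero; the driven case settles at the steady value power/drag, which is Dan's pump and golf-club point in one line.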
What is your understanding of the following comments by raypierre at RC?
A comment by Stormy here:
http://www.realclimate.org/index.php/archives/2008/04/butterflies-tornadoes-and-climate-modelling/#comment-85339
got this response by raypierre:
The comment by Piekle Sr. here:
http://www.realclimate.org/index.php/archives/2008/04/butterflies-tornadoes-and-climate-modelling/#comment-85300
got this response by raypierre:
Conservation of energy. The disturbance cannot make any sort of difference to changes in large scale behavior in the absence of energy addition to the large scale disturbance in which the perturbation is contained.
I understand that you are reluctant to pursue these issues here, so maybe we’ll have to agree to disagree.
Dan:
1) I didn’t understand Stormy’s point exactly, but other than the “none of this is right” opening in Raypierre’s response, I don’t find anything wrong with it.
When raypierre says this:
I think that is the claim. This is like the tornado hitting Cedar Rapids instead of Ames. The butterfly doesn’t actually create large energetic events; it can just “bump” the flow over to a different trajectory.
On the butterfly in a jar in a room issue: I don’t know whether Raypierre is right or wrong on that. I think that is very difficult to prove specifically. Neither he nor Roger Sr. have actually proven anything there. But it’s fun to play what if? 🙂
On this response to me:
Conservation of energy doesn’t mean a disturbance can’t make a difference. Chaos doesn’t say the new trajectory has a different amount of energy from the other one. So, this doesn’t prove the butterfly flap makes no difference. It’s just that the difference must not be one that violates conservation of energy.
On the issue of discussing these things: I don’t mind discussing them here. I just don’t want to post long comments in tiny comment boxes at RC, wait for them to appear, go back, see if there is a response, etc. I also hate their inline comment responses that put a commenter on a different footing than the blog owners. (Their blog, their rules. But I dislike it enough to avoid commenting there.)