In comments, several of us have strayed into discussing Chaos and the Lorenz attractor. Naturally, the connection to claims about climate change is also getting discussed. I think we may also be encountering issues associated with mis-matches in terminology, and consequently, having difficulties sharing ideas. There is also the possibility that some of us (including me) don’t know squat about chaos. (This goes to the extent that I’m not even sure we all know if we agree or disagree.)
The goal of this post is to permit people to post both useful and nonsense questions and answers about chaos. I make no claims about understanding much about chaos, and most of what is posted below is posted as a strawman to guide discussion. It uses the Lorenz attractor as a focus for discussion. Those trying to learn about chaos would be wise to understand much of this is a “cartoon” type discussion.
With that warning, I will begin!
The Lorenz Attractor
The Lorenz attractor, which actually does teach us something, represents a solution to these equations:
dx/dt = σ (y – x)
dy/dt = x (ρ – z) – y
dz/dt = xy – β z
where σ is called the Prandtl number and ρ is called the Rayleigh number. All σ, ρ, β > 0, but usually σ = 10, β = 8/3 and ρ is varied. The system exhibits chaotic behavior for ρ = 28 but displays knotted periodic orbits for other values of ρ.
(The physical interpretation of X, Y and Z is discussed at mathworld.)
I coded the three equations above in EXCEL, then, using σ = 10, β = 8/3, and ρ = 28, I integrated using dt=0.0025 and plotted (z,x). I get the cool butterfly below left, which matches the appearance of butterflies at other sites, for example from StateMaster below right:
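For anyone who would rather poke at this in code than in EXCEL, here is a minimal Python sketch of the same calculation. The forward-Euler scheme, dt = 0.0025, and the parameter values follow the post; the initial condition and run length are my own choices.

```python
# Forward-Euler integration of the Lorenz-63 system with the post's
# parameters: sigma=10, beta=8/3, rho=28, dt=0.0025.
# Initial condition (1, 1, 1) and run length are arbitrary choices.

def lorenz_step(x, y, z, dt, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    """Advance the Lorenz-63 state one forward-Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def integrate(n_steps, dt=0.0025, state=(1.0, 1.0, 1.0), rho=28.0):
    """Return the (x, y, z) trajectory as a list of tuples."""
    x, y, z = state
    traj = [(x, y, z)]
    for _ in range(n_steps):
        x, y, z = lorenz_step(x, y, z, dt, rho=rho)
        traj.append((x, y, z))
    return traj

traj = integrate(40000)    # 100 time units: many trips around the butterfly
zs = [p[2] for p in traj]
print(min(zs), max(zs))    # the trajectory stays bounded on the attractor
```

Plotting x against z from `traj` should reproduce the butterfly.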
Lots of things can be said about this butterfly. However, to foster discussion, I am going to show two graphs. The first is a plot of the attractors (Z,X) at two different Rayleigh numbers: ρ=28 and ρ=50. The Rayleigh number happens to represent the temperature difference across two plates, and so is the parameter that drives flow. Using language some might abhor, we could suggest that ρ=28 shown in blue represents low “forcing” and ρ=50 shown in red represents high “forcing”:

Comparing the red and blue attractors, one might notice that the red attractor has a different wing span and that what one might call its ‘center’ is at a higher value of “z” than the blue attractor. Some might suggest that increasing the ‘forcing’ (i.e. ρ) caused the attractor to “shift” and also “deform”.
I have no idea whether this language is used by honest to goodness chaos experts, but I am using it because that’s how it looks to me, and that’s the language people are using in comments to discuss the possibility that some average feature in a chaotic system might have different average values under different ‘forcings’. I don’t have much more to say about this because I don’t have any more insight.
Time series
Next, I’ll show a time series plot for the two systems whose attractors are shown above. 
The figure above shows the “z” (whatever that is) averaged over dt=0.025, which happens to be 10 integration steps. I chose to average to make this somewhat similar to collecting “monthly averages” and reporting monthly. As you can see, the average “z” at high “forcing” (i.e. ρ=50) is higher than the average “z” at lower “forcing” (i.e. ρ=28).
Given this quantity of data, and despite the fact that both cases plotted are chaotic, I think few people would dispute that the difference in the average value of “z” for the two cases can be discerned. I would suggest that some might say “the change in forcing caused the average value of ‘z’ to shift.” Those examining spectral properties would also notice they changed, as would numerous statistical moments.
I’m not going to go out further on a limb to say more. However, I’m attaching the excel spreadsheet so some of you can play. Maybe someone will tweak the ‘system’ to make the Rayleigh number (i.e. ρ) change with time and see whether statistical methods can discern a shift in “z” as a function of time. Maybe someone will think of other things to do. Maybe someone will do things in R. 🙂
Here’s the spreadsheet. It’s set up in a weird way that let me easily create (y,z) or other attractors. You’ll have to fiddle to figure it out.
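For those who prefer code to the spreadsheet, here is a hedged sketch of the comparison above: integrate the same system at ρ=28 (“low forcing”) and ρ=50 (“high forcing”) and compare the time-averaged “z”. The Euler scheme, run length, and initial condition are my own choices, not necessarily what the spreadsheet does.

```python
# Compare the long-run average of z at two "forcings" (rho values) in
# the Lorenz-63 system. Parameters sigma=10, beta=8/3, dt=0.0025 follow
# the post; the forward-Euler scheme and 500-time-unit run are my choices.

def mean_z(rho, n_steps=200000, dt=0.0025, sigma=10.0, beta=8.0 / 3.0):
    """Time-average of z along one long trajectory at the given rho."""
    x, y, z = 1.0, 1.0, 1.0
    total = 0.0
    for _ in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        total += z
    return total / n_steps

z_low = mean_z(28.0)    # "low forcing"
z_high = mean_z(50.0)   # "high forcing"
print(z_low, z_high)    # the average z shifts upward with rho
```

Tweaking `mean_z` so that `rho` ramps with the step index would be one way to try the time-varying-forcing experiment suggested above.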

Nice post; I did a series of posts on the Lorenz ’63 system a while back (uses python instead of excel). It is mostly focused on verification with MMS, initial condition ensembles, and the distinction between weather-like functionals and climate-like functionals.
From the closed thread:
Bart Verheggen (Comment#40606) April 13th, 2010 at 8:07 am
SteveF,
You wrote: “The multiplied uncertainties reduce the ‘credibility’ of the threat of warming.”
That to me is a very strange proposition, and seems based on discounting the chance of things being even worse than expected (i.e. you’re only taking into account the uncertainty meaning that things could be not quite as bad as expected, and forgetting about the other direction uncertainty goes).
An example may be sea level rise. It’s very uncertain how fast sea level rise will proceed. But past analogues hint at a very steep relation between global avg temp and sea level, if the climate has been at said temp for a long enough time (see e.g. http://www.glaciology.net/Home…..mperature) How long is enough? We don’t know. Is that uncertainty comforting? I don’t think so. Sea levels were approx 6 metres higher than today in the previous interglacial, whereas global avg temp was only a few (1-3) degrees higher (different estimates going around).
Questions:
1: If in fact there have been cyclic periods of glacial and interglacial changes in climate, has any thought been given as to what might be the nature of the topology of a theoretical chaotic system which might produce these cycles?
2: Is there evidence of mass biological extinctions occurring during the warmest period of the last interglacial, extinctions which are clearly associated with that interglacial’s warming peak?
3: If we surveyed a population of newspaper and television reporters who write about climate science topics, how many of them would exhibit any knowledge whatsoever of the earth’s geologic history and of its past climate regimes?
Chaos is ideal
to me (you see)
Wot was the question again?
Lucia:
“I think we may also be encountering issues associated with mis-matches in terminology, and consequently, having difficulties sharing ideas. There is also the possibility that some of us (including me) don’t know squat about chaos. (This goes to the extent that I’m not even sure we all know if we agree or disagree.)”
Are you saying that not only do I not know what you are talking about but that YOU don’t know what you are talking about and I don’t know what I am talking about??
OK.
I think you said that we don’t even understand what we don’t understand, but I might have misunderstood what you intended to say.
I think this is happening. 🙂
kuhnkat: Are you saying that not only do I not know what you are talking about but that YOU don’t know what you are talking about and I don’t know what I am talking about??
.
Hopefully Tom Vonk and/or Dr. Koutsoyiannis will step in before this thread devolves into something like a scene from Get Smart.
If I take a large, chaotic system and heat it such that its average temperature rises by between 3 and 6 degrees C, it will be between 3 and 6 degrees warmer. It will still be chaotic in there, but it will be a warmer state of chaos.
bugs
That’s what I think.
And if it’s getting warm, it could be getting fuzzy.
Tom V.:
Lucia:
Doesn’t it matter how long our observation period is in relation to the orbit period? In Chaos: A Very Short Introduction [link to my review of the book] Smith says,
…the duration of our observations needs to exceed the typical recurrence time. It may well be that the required duration is not only longer than our current data set, it may be longer than the lifetime of the system itself. This is a fundamental constraint with philosophical implications. How long would it take before we would expect to see two days with weather observations so similar we could not tell them apart? That is, two days for which the difference between the corresponding states of the Earth’s atmosphere was within the observational uncertainty? About 10^30 years. This can hardly be considered a technical constraint: on that time scale the Sun will have expanded into a red giant and vaporized the Earth, and the Universe may even have collapsed in the Big Crunch. We will leave our philosopher to ponder the implications held by a theorem that requires that the duration of the observations exceed the lifetime of the system.
Tom V. said:
What I meant as not making much sense is to ascribe causes for the trajectory inside the attractor , not the attractor itself. Because we can only observe during finite times , what we observe in practice is the former not the latter .
Lucia said:
How do we know that, owing to finite times for observations of the earth’s surface that what we observed in practice is the former not the latter?
Doesn’t it matter how long our observation period is in relation to the period of the orbit? In Chaos: A Very Short Introduction [link to my review of the book], Smith says (in the Takens’ Theorem and embedology section),
…the duration of our observations needs to exceed the typical recurrence time. It may well be that the required duration is not only longer than our current data set, it may be longer than the lifetime of the system itself. This is a fundamental constraint with philosophical implications. How long would it take before we would expect to see two days with weather observations so similar we could not tell them apart? That is, two days for which the difference between the corresponding states of the Earth’s atmosphere was within the observational uncertainty? About 10^30 years. This can hardly be considered a technical constraint: on that time scale the Sun will have expanded into a red giant and vaporized the Earth, and the Universe may even have collapsed in the Big Crunch. We will leave our philosopher to ponder the implications held by a theorem that requires that the duration of the observations exceed the lifetime of the system.
Okay but what does this tell us, except that: a system which is heated such that it is 3 to 6 deg C warmer… ends up 3 to 6 deg C warmer… ?
I’ve been beating up a guy named DJ Chaoz on Gangster Wars all day long… I guess it must be coincidence
oliver (Comment#41011) April 19th, 2010 at 5:41 pm
That’s right. Is chaos theory going to be much use to us here?
Re: bugs (Apr 19 20:48),
It is the assumption that “heating up” is done independently of the chaotic system under study. Temperature/heat is only one manifestation of the energy input.
Energy (in its various manifestations) supplies the variables that should enter into the study of the chaotic behavior of climate.
And let me give my energy rant here. It is total energy that is conserved, not heat, not radiation, not evaporation, condensation, sublimation, biological absorption and storage, etc. They all have to be added up for conservation of energy. There will be missing heat, missing radiation, because the energy will go into other manifestations.
Let us take a simple example of a pot set to boil.
Energy in the form of heat can be introduced in steps. If the temperature of the water is below the boiling point, the energy will mostly turn into heat, with a bit of radiation, convection and evaporation. Above 100 C it turns into convection, evaporation, etc.; the water cannot get hotter. So “adding heat” has a different effect depending on the boundary conditions.
I also am a dabbler in chaos, but I have sat through a few weeks of lectures some years ago. The example given in the introduction by Lucia, according to my introduction to chaotic dynamics book (Baker and Gollub, p. 140), was created as a simple model of convection in fluid dynamics.
σ, ρ and β characterize the properties of the fluid; y is proportional to the temperature difference between the upward- and downward-moving parts of a convection roll; z describes the non-linearity in the temperature difference along the roll; and x is related to the fluid’s streamfunction.
Even in this simple model, adding “heat” will not be a linear business.
The climate is described by deterministic coupled differential equations, and these should enter into the chaotic study, which will become much more complicated, imho.
Scott Brim,
1. Glacial/interglacial cycles are governed primarily by Milankovitch cycles (changes in earth’s orbital parameters), which influence the amount of insolation received at high latitudes. Ice sheet feedback and carbon cycle feedback have important roles to play in the eventual amplitude of the temperature change.
2. Not that I know of. During some large (and fast) temp changes in the much deeper past there have indeed been mass extinctions, AFAIK. See e.g. http://www.skepticalscience.com/Earths-five-mass-extinction-events.html
3. I don’t know, but my guess is that most journalists won’t know the specifics, but are (or at least should be) aware that many climate changes have occurred in the earth’s past. They should likewise be aware that by itself that doesn’t say anything about the cause of the climate change we’re currently experiencing. (forest fires occur naturally, but I can still start one)
Sure, and there are limits to the forcing of CO2. The models can’t predict the day to day weather, they do a good job of explaining the climate from the past, and predicting the future climate.
Re: bugs (Apr 20 05:41),
Sure, and there are limits to the forcing of CO2. The models can’t predict the day to day weather, they do a good job of explaining the climate from the past, and predicting the future climate.
Sure, in a system with more parameters than degrees of freedom one can fit an elephant. I have not seen one successful prediction of the models. If one keeps fitting more and more data as time goes on, that is not a prediction.
In any case my example was intended to give an intuitive feeling that just adding heat does not mean the temperature of the system moves linearly, even if one is not talking about deterministic chaos.
The butterfly trajectories above just tell us that the solutions for the temperature differences of the convection roll are bounded.
In this limited example the anomalies will not exceed certain bounds, up or down. One would have to prove that the whole system would change linearly with an increase or decrease in absolute temperature.
If we had a corresponding attractor diagram for climate temperature anomalies, we would be reassured that the variations in anomalies are to be expected from the chaotic nature of the system’s solutions, and would be bounded. Certainly though, because of the many possibilities of energy dissipation and distribution on the planet, a linear relationship between temperature/heat input and the bounds could not hold.
I am trying to say that one needs the chaotic paradigm to understand climate.
Re: jstults (Apr 19 16:46),
What’s this time scale estimate based on? Does the model that supports that time scale include the Milankovich cycles and any variations imposed by chaotic forcing from the sun?
lucia:
He wasn’t that clear; and my puppy just had a fun time chewing my copy this morning so I can’t re-read it right away. It was in the section on Takens’ Theorem and embedology.
anna v:
I’m pretty sure GCMs generally have lots fewer parameters than dof (each grid point has a state with O(10) dof all by itself), but the data we have to compare them to is rather sparse. There’s a good quote from one of the early turbulence modelers about ‘give me three parameters and a nonlinear PDE and I can fit the world’, but I can’t find a link / reference; anyone got a handle on that one?
lucia:
I think (and now I’m really flying blind cause of that darned dog) that forcing from the sun is more properly termed stochastic rather than chaotic, since the sun is rather isolated from all the interesting dynamics in our neck of the woods.
But who knows, in climate science you get to call things forcings that everyone else would call material properties…
jstults–
If I were actually doing the problem instead of drawing cartoons, and I was looking at an arbitrary system– say the simpler Lorenz system– I’d first look at the problem with steady forcing. (The Lorenz system has steady forcing.) I’d find the length of time corresponding to approximately one trip around the “butterfly”. Then, I’d figure that if I can sample continuously, I can get a pretty good average of properties like temperature at a point in the test rig by averaging over 10 trips around the “butterfly”: integrating temperature over time, then dividing by time. (Actually 2 might be enough.)
After I got that, I’d change the forcing (which in my cartoon is Ra. I could change that by increasing the temperature of the lower plate and keeping everything else constant.) I’d do the same experiment, measuring over 10 cycles of the full butterfly. Then, I’d compare.
Next, if someone wanted to present me with a challenge, and they varied the temperature at the top of the plate without letting me know how quickly they’d change things, I’d suspect that I would be able to distinguish whether changes in the forcing caused changes in the average temperature (or its probability distribution function) provided that I got to observe trends over 20 cycles that I computed based on the steady-state problem.
If we consider the Lorenz problem an analogy for something to do with climate, in this gedanken experiment someone changing the temperature at the top of the plate could correspond to a) Milankovitch cycles, b) GHGs, c) (possibly) the 11-year cycle of the sun. The spread of cycles around the butterfly wings might include oscillations, for example ENSO etc.
So, the question then is: absent Milankovitch cycles or GHGs, how long are the cycles? I don’t think we have earth data. Someone might be able to discover what the climate models suggest based on control runs. But I’m not sure even they
So, I am wondering whether the article you cited is estimating the time scale including the Milankovitch cycle / sun cycles etc. Even if the time scale doesn’t include those, I’m also wondering whether the criterion isn’t likely too strict. The way it reads, it seems that time scale is the time before we get to the same point on the attractor. In the Lorenz system, that’s the time to get back to (x0, y0, z0) [within observational uncertainty]. I suspect that time is many, many multiples of trips around the full butterfly!
jstults–
Isn’t convection in the sun also a non-linear dynamics problem? Lots of people argue about whether turbulence is chaotic or stochastic. (Equally or more often, one argues about whether thinking of it as chaos is useful relative to just treating it as stochastic.)
Solar radiation exhibits roughly 11-year cycles. There’s something coherent going on; that points toward chaos of some sort.
I thought chaos was supposed to be aperiodic (Tom V. schooled me on this one over on my site, talking about numerical simulations of chaotic systems).
Right, whatever we call it, lets make sure we’re talking about the same thing, and figure out what we know about it that’s useful.
Re: jstults (Apr 20 09:08),
Look at the time series above. Quasi-periodic behavior is common in chaotic systems, AFAIK. But if you created a periodic model with fixed frequencies and amplitudes based on a limited time sample and extended it into the future, it’s very unlikely that the data would match the model. Or in other words, it’s periodic until it isn’t.
Bart,
If glacial/interglacial cycles are completely determined by Milankovitch cycles, please explain why the frequency changed from ~44,000 years to ~120,000 years. That looks to me very much like what would be expected from a chaotic system exhibiting quasi-periodic oscillations. The climate appears to have lots of these: NAO, PDO, ENSO, Heinrich events…
The Lorenz 1984 system contains variability
The Lorenz system of [1984] and [1989], recently the subject of two publications [2006a and 2006b], is given by
dX/dt = −aX − Y² − Z² + aF
dY/dt = −Y + XY − bXZ + G
and
dZ/dt = −Z + bXY + XZ
This system has also been studied by Lorenz in 1989 and 2006, as noted above, using a step size of 0.025 time units. Pielke and Zeng [1994] have given results of very long-term integrations of the system, also using 0.025 as the step size. The objective of these latter calculations was to determine whether a short-term variation (seasonal variations) can lead to significant long-term variability. Pielke and Zeng integrated the equation system for a time span of about 1100 years: much longer than the investigations by Lorenz 1989.
The 1984 Lorenz system was developed to explore the effects of heating variability in the atmospheric system over the seasons, with the aim of investigating whether the climate is or is not intransitive. The general outlook was that intransitivity is unlikely. The seasonal variations of the heating were obtained through the F and G terms. Lorenz 1989 first investigated the calculated behavior for constant values of F and G for the winter season, with F = 8 and G = 1. Lorenz 1989 found that under these conditions the calculated numbers exhibit a chaotic response.
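A hedged sketch of that winter-forcing calculation in Python, assuming the parameter values a = 0.25 and b = 4.0 that Lorenz used. Lorenz and Pielke & Zeng used a step of 0.025 time units with their own schemes; since the papers cited here stress how scheme-dependent the behavior can be, this crude forward-Euler sketch uses a smaller step (my choice), and should not be taken as reproducing their results.

```python
# The 1984 Lorenz system with constant winter forcing F=8, G=1 and
# parameters a=0.25, b=4.0. Forward Euler with dt=0.005 (my choice;
# the cited papers use dt=0.025 with different differencing schemes).

def lorenz84(n_steps, dt=0.005, a=0.25, b=4.0, F=8.0, G=1.0):
    """Integrate the Lorenz-84 system; return the (X, Y, Z) trajectory."""
    X, Y, Z = 1.0, 1.0, 1.0
    traj = []
    for _ in range(n_steps):
        dX = -a * X - Y**2 - Z**2 + a * F
        dY = -Y + X * Y - b * X * Z + G
        dZ = -Z + b * X * Y + X * Z
        X, Y, Z = X + dt * dX, Y + dt * dY, Z + dt * dZ
        traj.append((X, Y, Z))
    return traj

traj = lorenz84(40000)                          # 200 time units
biggest = max(abs(v) for p in traj for v in p)
print(biggest)                                  # bounded: chaotic, not divergent
```

Making F and G functions of time would be the analogue of the seasonal-heating experiments discussed above.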
E. N. Lorenz, “Irregularity: A Fundamental Property of the Atmosphere”, Tellus, Vol. 36A, pp. 98-110, 1984.
Edward N. Lorenz, “Computational Chaos – A Prelude to Computational Instability”, Physica D, Vol. 35, pp. 299-317, 1989.
Abstract
Chaotic behavior sometimes occurs when difference equations used as approximations to ordinary differential equations are solved numerically with an excessively large time increment τ. In two simple examples we find that, as τ increases, chaos first sets in when an attractor A acquires two distinct points that map to the same point. This happens when A acquires slopes of the same sign, in a rectifying coordinate system, at two consecutive intersections with the critical curve. Chaotic and quasi-periodic behavior may then alternate within a range of τ before computational instability finally prevails. Bifurcations to and from chaos and transitions to computational instability are highly scheme-dependent, even among differencing schemes of the same order. Systems exhibiting computational chaos can serve as illustrative examples in more general studies of noninvertible mappings.
E. N. Lorenz, “Can Chaos and Intransitivity Lead to Interannual Variability?”, Tellus, Vol. 42A, pp. 378-389, 1990.
Abstract
We suggest that the atmosphere-ocean-earth system is unlikely to be intransitive, i.e., to admit two or more possible climates, any one of which, once established, will persist forever. Our reasoning is that even if the system would be intransitive if the external heating could be held fixed, say as in summer, the new heating patterns that actually accompany the advance of the seasons will break up any established summer circulation, and an alternative circulation may develop during the following summer, particularly if chaos has prevailed during the intervening winter. We introduce a very-low-order geostrophic baroclinic “general circulation” model, which may be run with or without seasonal variations of heating. Under perpetual summer conditions the model is intransitive, admitting either weakly oscillating or strongly oscillating westerly flow, while under perpetual winter conditions it is chaotic. When seasonal variations of heating are introduced, weak oscillations prevail through some summers and strong oscillations prevail through others, thus lending support to our original suggestion. We develop some additional properties of the model as a dynamical system, and we speculate as to whether its behavior has a counterpart in the real world.
Edward N. Lorenz, “An Attractor Embedded in the Atmosphere”, Tellus, Vol. 58A, pp. 425-429, 2006.
Abstract
A procedure for deriving a three-variable, one-level baroclinic model by truncating the familiar two-level quasigeostrophic model is described. The longitude, latitude and isobaric height of the low centre are introduced into the new model as alternative dependent variables. State space then becomes equivalent to geographical space, and the attractor becomes a structure within the atmosphere.
Edward N. Lorenz, “Computational Periodicity as Observed in a Simple System”, Tellus, Vol. 58A, pp. 549-557, 2006.
Abstract
When the exact time-dependent solutions of a system of ordinary differential equations are chaotic, numerical solutions obtained by using particular schemes for approximating time derivatives by finite differences, with particular values of the time increment τ, are sometimes stably periodic. It is suggested that this phenomenon be called computational periodicity.
A particular system of three equations with a chaotic exact solution is solved numerically with an Nth-order Taylor series scheme, with various values of N, and with values of τ ranging from near zero to just below the critical value for computational instability. When N = 1, the value of τ below which computational periodicity never appears is extremely small, and frequent alternations between chaos and periodicity occur within the range of τ. Computational periodicity occupies most of the range when N = 2 or 3, and about half when N = 4.
These solutions are compared with those produced by fourth-order Runge-Kutta and Adams-Bashforth schemes, and with numerical solutions of two other simple systems. There is some evidence that computational periodicity will more likely occur when the chaos in the exact solutions is not very robust, that is, if relatively small changes in the values of the constants can replace the chaos by periodicity.
Roger A. Pielke and Xubin Zeng, “Long-Term Variability of Climate”, Journal of the Atmospheric Sciences, Vol. 51, No. 1, pp. 155-159, 1994.
Abstract
In this research note, we address the following general question: In a nonlinear dynamical system (such as the climate system), can a known short-periodic variation lead to significant long-term variability? It is known from chaos studies (e.g., Lorenz 1991) that any perturbations in chaotic dynamic systems can lead to a red-noise spectrum; however, whether a significant long-term variability can be induced is unknown. To perform this study, an idealized nonlinear model developed by Lorenz (1984,1990) is used. The model and the results are presented in sections 2 and 3, respectively. Finally, the implications of our research to the understanding of the natural variability of the climate system due to internal dynamics will be discussed in section 4.
Roger A. Pielke, “Climate Prediction as an Initial Value Problem”, Bulletin of the American Meteorological Society, Vol. 79, pp. 2743-2746, 1998.
On old sol, here’s what I found with a little googling: Results from simple models can be compared with the observed sunspot record over the past 380 years, and with proxy records extending over 9000 years, which show aperiodic modulation of an 11-year cycle.
jstults–
If you were to measure the time between the “peaks” you’d notice a sort of time period. However, the periodicity is imperfect.
Chaos is aperiodic. But I don’t think that means it can’t have “oscillations”. Look at z vs. time in the graph above:
If you were to make a movie of the butterfly showing a streak with some sort of time span, you’d see the streak move around one wing, spiral there for a while, then move to the other one, and so on. Nothing is ever perfectly periodic, but there is a sort of “typical” amount of time it takes for the streak to get around the butterfly. That’s a time scale.
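That “typical” amount of time can be estimated numerically. Here is a sketch, assuming the σ = 10, β = 8/3, ρ = 28 system from the post: record the times of successive local maxima of “z” and look at their spacing. The spacings are irregular (chaos is aperiodic), but they cluster around a typical value, which is the kind of time scale described above.

```python
# Estimate a "typical" oscillation time for Lorenz-63 by timing the
# local maxima of z. Parameters follow the post; the Euler scheme,
# initial condition, and run length (500 time units) are my choices.

def z_peak_times(n_steps=200000, dt=0.0025, sigma=10.0,
                 beta=8.0 / 3.0, rho=28.0):
    """Return the times at which z has a local maximum."""
    x, y, z = 1.0, 1.0, 1.0
    zs = []
    for _ in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        zs.append(z)
    return [i * dt for i in range(1, len(zs) - 1)
            if zs[i - 1] < zs[i] >= zs[i + 1]]

peaks = z_peak_times()
gaps = [b - a for a, b in zip(peaks, peaks[1:])]
avg_gap = sum(gaps) / len(gaps)
print(avg_gap)   # a "typical" spacing, though no two gaps are identical
```

Looking at the spread of `gaps` (rather than just the mean) shows the imperfect periodicity being discussed.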
I forgot; a = 0.25 and b = 4.0.
I suggest that it is better to discuss variability within the framework of this system, in contrast to making stuff in the original system variable or a function of time. Some values of the parameters in the original system can lead to a non-chaotic response, so they can’t be simply plucked out of the air.
Dan
Better how?
The fundamental question many of us are puzzling over is: “How do you distinguish changes in a measured variable that arise as a result of changes from outside the system (e.g. changes in the Ra, Pr or b numbers) from those that arise from traveling along a trajectory within a system with unchanging parameters (e.g. Ra, Pr and b constant)?”
We can’t answer that by not talking about what happens if external parameters change. The answer involves understanding both what happens if the parameters don’t change and what might happen if they do change.
Of course the parameters can’t be plucked out of thin air.
Dan–
I think I misunderstood your comment. By “this system”, do you mean the system of equations discussed in the papers you cited? As opposed to the Lorenz system whose equations I typed into the blog post?
Yes. Note that to investigate variability Lorenz changed the system. He did not make parameters in the original system be variable.
Dan–
Yes. That makes sense. I was looking at equations for a leaky water wheel over the weekend, and it was clear that if we wanted to examine variable flow or adding water at different locations, we’d need extra terms in the equations.
Thanks.
Milankovitch cycles are far from convincing evidence of determinism in climate. Indeed, the match of climate to orbital parameters is surprisingly poor. DeWitt touches on some problems above; Carl Wunsch gives a very detailed analysis in this paper.
As Dr Wunsch illustrates, the glacial/interglacial cycles are actually pretty good evidence for stochastic behaviour on the 100 kyr+ scales, probably even more so than evidence for deterministic behaviour.
Dr Koutsoyiannis has some good commentary on this elsewhere, as well.
Bugs and Lucia:
“If I take a large, chaotic system and heat it such that its average temperature rises by between 3 and 6 degrees C, it will be between 3 and 6 degrees warmer. It will still be chaotic in there, but it will be a warmer state of chaos.”
I apologise if I missed a good response above.
If the system is closed you can determine whether you have warmed it 3–6 C. Of course, if the system is closed it is not chaotic.
Yes. Note that to investigate variability Lorenz changed the system. He did not make parameters in the original system be variable.
.
Very important remark of Dan Hughes .
Among other things, it shows the obvious limits of looking only at the 3D Lorenz system (because it is the most popular and well behaved) and then drawing far-reaching conclusions about chaos or possibly all non-linear systems.
I didn’t comment much so far because what the charts show is that solutions of a non-linear ODE system change when the constants change.
Well … sure they do .
.
A way to generalize would consist of considering Ra no longer as a constant but promoting it to an independent variable.
The phase space then becomes 4D instead of 3D, and the price to pay is modest – just add a new equation containing Ra, X, Y, Z and t.
Preferably an ODE. Solve.
This new system will have a higher dimensional attractor in a 4D space .
Even if some like it , this new dimension can be called “forcing” as opposed to the other 3 dimensions that are called “non forcing” .
Others could prefer talking about a previously external constant that has just been internalised to the system so that all dimensions are equivalent .
In any case terms are not important , what is important is that we have now a 4D system instead of 3D .
Figure 3 is then just trivially a projection of this new attractor onto some plane, where one erased an (uncountable) infinity of trajectories with Ra different from 28 and 50.
I am not sure that this says something “fundamental” about this particular Lorenz system .
Things that are generally considered fundamental are the Lyapounov coefficients , ergodicity and the dimensions of the attractors .
.
As to this eternal question stochasticity vs chaos vs (a)periodicity .
I think that I have answered this question in dozens of posts , some of them here too .
Let’s try again .
1) As time flows , the system visits different places of the phase space .
2) Some very often , some rarely and some not at all .
3) Question : will the system visit the same places with the same frequencies for ALL initial conditions ? (Remark : please let constants stay constant . If you want them to be variable , increase the dimension of the phase space and have other constants constant)
4) Answer : IT DEPENDS !
5) If yes then there exists a natural PDF that is said to be left invariant by the dynamics . In THIS case and only in THIS case a stochastical description makes sense .
6) If no then no stochastical description makes any sense .
.
Particular comment 1 : for the 3D Lorenz system considered here the answer is yes . That’s why it doesn’t come as a surprise that a stochastical description seems possible by eyeballing .
Particular comment 2 : of course it is not because something works for this 3D Lorenz system that it can be promoted to a universal natural law for all systems .
The Lorenz system represents a single convection cell in the Earth’s atmosphere.
x is the speed at which the cell is turning.
y is the left-right temperature difference, ie difference between warm rising plume and cold sinking plume.
z is the up-down temperature difference.
The system has a steady state (in fact a pair of equivalent states) which exists for ρ > 1, in which
y = x = ±sqrt(β(ρ − 1)), z = ρ − 1.
In this state the convection cell is just turning over at a constant speed. These steady states are located in the holes in the center of the butterfly wings in the pictures.
If ρ is small (but > 1) these states are stable, so if you run your code it will approach these points.
But for ρ greater than some critical value (around 10-20, depending on β and σ) the steady states are unstable and you get these chaotic solutions. If you start near the steady state, the solution spirals away from it. If you want to impress people with jargon, you can say there is a subcritical Hopf bifurcation.
The 3 harmless-looking equations are pretty complicated. A guy called Colin Sparrow has written a whole book about them!
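PaulM’s fixed-point formulas and stability claim are easy to check numerically. The sketch below (plain NumPy, assuming σ = 10 and β = 8/3) verifies that C± really are steady states and inspects the eigenvalues of the Jacobian there:

```python
import numpy as np

SIGMA, BETA = 10.0, 8.0 / 3.0

def fixed_point(rho):
    """The convective steady state C+ for rho > 1 (C- is its mirror image)."""
    r = np.sqrt(BETA * (rho - 1.0))
    return np.array([r, r, rho - 1.0])

def rhs(s, rho):
    x, y, z = s
    return np.array([SIGMA * (y - x), x * (rho - z) - y, x * y - BETA * z])

def jacobian(s, rho):
    x, y, z = s
    return np.array([[-SIGMA, SIGMA, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -BETA]])

for rho in (20.0, 28.0):
    fp = fixed_point(rho)
    assert np.allclose(rhs(fp, rho), 0.0)        # it really is a steady state
    eigs = np.linalg.eigvals(jacobian(fp, rho))
    print(rho, "stable" if np.all(eigs.real < 0) else "unstable")
# For sigma=10, beta=8/3 the Hopf value is rho = sigma(sigma+beta+3)/(sigma-beta-1)
# which is about 24.74, so rho=20 prints "stable" and rho=28 prints "unstable".
```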
Tom–
No one is trying to say anything fundamental about the Lorenz system. What I (and I suspect others) are trying to do is place the discussions of chaos that arise in the context of diagnosing whether the rise in temperature observed are due to changes in forcing or not.
For that discussion, some of the language of chaos is obfuscating. Do the Lyapounov coefficients matter? Other than recognizing the dimension of attractors for climate must be “many”, does the actual number matter to our understanding of climate?
What matters is that some rhetoric at blogs suggests that if the system is chaotic, then the variations in surface temperature we see must arise from traveling on the attractor, and cannot arise from systematic variations in heat addition to the system. This is clearly bunk.
(Mind you, it’s not clear any specific person who understands chaos or just fluid dynamics and heat transfer makes the obviously bunk claim. But it is clear that some readers sometimes think that’s what someone claims when results of chaos studies are stated in terms of words those working in chaos like to use. )
Anyway, the simplified Lorenz problem shows that the notion that observed increases in surface temperatures cannot be the result of increases in heat but must be consistent with travel along an attractor at constant heat addition is bunk. Showing that changing the Ra number affects the shape of the butterfly and the magnitude of “z”, while holding Ra constant in each case, demonstrates that sort of claim is bunk. The demonstration holds even if showing this is trivial (because everyone knows it’s true). The demonstration holds even if greater understanding of chaos and climate could be obtained by using a system that included the possibility of Ra varying with time.
The “cartoon” is not an attempt to explore what the attractor would look like if Ra (i.e. forcing in the Lorenz problem) did vary with time.
I have to disagree with you here. Even in the Lorenz system the Rayleigh number is a forcing in the sense that it is temperature difference that drives flow. When the temperature difference across the top plate and the bottom plate is zero, Ra=0 and there is no flow. So it is a forcing. In contrast, viscosity and coefficient of thermal expansion or spacing between the plates is not a forcing.
If we added a fourth variable, it would describe how forcing varies with time. The variable containing the temperature difference would still be “the forcing”.
From a pure math point of view, and a pure exploration of chaotic dynamics, your question (with the requirement that we answer it as worded, by extending the mathematical system to include the variable forcing) is fine. But if we impose your remark, your question becomes utterly uninteresting to those who want to get an idea of whether the changes we have seen arise from the variation in forcing rather than travel along an attractor at constant forcing.
The problem with this “question” is that your remark has just avoided the question people really want to ask. That question is: with respect to earth’s climate, are we visiting different places because the parameter that describes how forcing varies with time indicates forcing increased? (That is, are we visiting different places because forcing=Ra varied with time in the extended Lorenz problem? Or are the variations we see indistinguishable from what we would see if forcing were held constant?)
It’s fine to increase the dimensionality. But in this case, with respect to climate, one question people want to ask is: “Can we distinguish whether rises in average surface temperature are due to the variation in forcing (i.e. Ra) with time?” Of course we can test sensitivity to initial conditions, and do all sorts of things. But you would have to compare what we see with Ra=constant to what we might see if Ra varies with time in some set way.
(Ideally, we do this for the full climate system. But if not, we do it Lorenz– which we treat as a cartoon climate.)
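As a hedged illustration of that comparison (a cartoon experiment, not a claim about the real climate), one can integrate the Lorenz equations once with Ra held at 28 and once with Ra ramped from 28 to 50 over the run, and compare the time-averaged z. The ramp shape, run length, and transient cutoff are all ad hoc choices of mine:

```python
import numpy as np

SIGMA, BETA = 10.0, 8.0 / 3.0

def run(ra_of_t, dt=0.0025, n=100000, s0=(1.0, 1.0, 1.0)):
    """RK4-integrate Lorenz with a possibly time-dependent Rayleigh number;
    return the z(t) time series."""
    def f(s, t):
        x, y, z = s
        ra = ra_of_t(t)
        return np.array([SIGMA * (y - x), x * (ra - z) - y, x * y - BETA * z])
    s = np.array(s0, dtype=float)
    zs = np.empty(n)
    t = 0.0
    for i in range(n):
        k1 = f(s, t); k2 = f(s + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(s + 0.5 * dt * k2, t + 0.5 * dt); k4 = f(s + dt * k3, t + dt)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        zs[i] = s[2]
    return zs

T = 100000 * 0.0025                          # total time = 250 units
z_const = run(lambda t: 28.0)                # constant "forcing"
z_ramp  = run(lambda t: 28.0 + 22.0 * t / T) # "forcing" ramped from 28 to 50
# drop a transient, then compare time-averaged z for the two cases
print(z_const[10000:].mean(), z_ramp[10000:].mean())
```

The run with increasing Ra ends up with a visibly larger time-averaged z, which is the distinction being argued about: a systematic change in the mean, not just travel along a fixed attractor.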
Of course not. The 3D Lorenz system, and even any 4D Lorenz system, is a cartoon. Also, no one here is trying to suggest that this cartoon system explains the natural laws of the universe for all systems.
However, if something someone seems to claim is not even true for the Lorenz system, then obviously, that claim cannot be a natural law of the universe.
kuhnkat (Comment#41126) April 20th, 2010 at 6:17 pm
Bugs and Lucia:
“If I take a large, chaotic system and heat it such that its average temperature rises by between 3 and 6 degrees C, it will be between 3 and 6 degrees warmer. It will still be chaotic in there, but it will be a warmer state of chaos.”
I apologise if I missed a good response above.
If the system is closed you can determine if you have warmed it 3-6c. Of course, if the system is closed it is not chaotic.
Not true, my thesis was on oscillatory reactions (before it became known as ‘chaos’). It was possible to have the system oscillating (either periodically or aperiodically) with an amplitude of ~300ºC, and a change of 1ºC in the environment would cause the oscillation to cease and instead achieve a small steady temperature excess (transition from an unstable focus to a stable node).
Phil–
Interesting!
Could you describe the system? (Or do I have to order your thesis?)
Re: lucia (Apr 21 08:14),
Anyway, the simplified Lorenz problem shows the notion that observed increases in surface temperatures cannot be the result of increases in heat but must be consistent with travel along an attractor at constant heat addition is bunk.
as you say, it is a simplified model.
Its simplicity is such that it avoids the physical fact that one can add heat to a system and the temperature may not increase because heat is not a conserved quantity for a real earth. Another way to say this: when you raise the temperature in
Re: lucia (Apr 20 08:28),
Then, I’d figure if I can sample continuously, that I can get a pretty good average of properties like temperature at a point in the test rig by averaging over 10 travels around the “butterfly” and integrating temperature over time, then dividing by time. (Actually 2 might be enough.)
After I got that, I’d change the forcing (which in my cartoon is Ra. I could change that by increasing the temperature of the lower plate and keeping everything else constant.) I’d do the same experiment, measuring over 10 cycles of the full butterfly. Then, I’d compare.
In a real atmosphere the heat you have to supply for this change in temperature of your lower plate cannot be a constant, because of all the other energy forms it could be transformed to rather than raising the temperature of the lower plate.
So in the limited model yes, you are demonstrating a correlation with temperatures, but it cannot be a model for a real atmosphere, again imho. More dimensions seem necessary.
Instead of “cannot be”, one would say “might not be”, and “may be consistent”.
If someone says “might not be”, I have no nits to pick. That said, the next step is to try to figure out how you might be able to distinguish the reason for any change in average temperature over a particular time period.
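A minimal sketch of the averaging experiment described up-thread: integrate at two constant Rayleigh numbers, discard a transient, and compare the time averages of z. The integration length and transient cutoff are ad hoc choices, standing in for “10 travels around the butterfly”:

```python
import numpy as np

SIGMA, BETA = 10.0, 8.0 / 3.0

def mean_z(rho, dt=0.0025, n=120000, skip=20000):
    """Time-average of z at a fixed Rayleigh number, after discarding
    the first `skip` steps as a transient."""
    def f(s):
        x, y, z = s
        return np.array([SIGMA * (y - x), x * (rho - z) - y, x * y - BETA * z])
    s = np.array([1.0, 1.0, 1.0])
    acc = 0.0
    for i in range(n):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)   # RK4 step
        if i >= skip:
            acc += s[2]
    return acc / (n - skip)

m28, m50 = mean_z(28.0), mean_z(50.0)
print(m28, m50)   # the higher-forcing case has the larger mean z
```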
Forgive me (or perhaps, indulge me) for a philosophical digression.
Chaotic systems can be divided into three types. Type I systems are systems of low to moderate complexity for which we have developed tools and methods that allow us to understand them to a reasonable extent.
Type II systems are systems of moderate complexity for which we can anticipate having tools and techniques in the future that will allow us to understand them, but for which the tools we have today are wholly inadequate.
Type III systems are systems of such complexity that they are beyond the cognitive limits of the human mind to understand – and will always be beyond our ability to understand.
Let me digress briefly regarding Type III systems. Despite my best efforts, my dog has been unable to learn calculus. And it’s not just a small gap in his abilities – it’s not like he can get derivatives but not integrals. Rather it’s a quantum gap in his abilities. It turns out that dogs lack the physiological capacity in their brains to make sense of abstract concepts. This raises an interesting question: does the human brain also have such fundamental physiological limits? In my view it would require a highly developed sense of hubris to believe that the human mind is physically capable of understanding anything and everything. So I support the idea that systems of Type III complexity exist and are, perhaps, numerous.
I generally believe that the climate is a system of Type II complexity. Our current tools and methods are inadequate to model the system—we can’t even get close. I personally suspect that we could develop such tools and methods on a time scale of say 100 to 200 years at our current rate and pace.
Some have argued that the climate is a system of Type III complexity — maybe. But this is unknowable (in the near term at least).
I think that applying the tools we have today to type II systems is intellectually interesting and necessary, but is grossly inadequate.
“There is also the possibility that some of us (including me) don’t know squat about chaos.” And perhaps we never will.
Re: lucia (Apr 21 14:51),
That said, the next step is to try to figure out how you might be able to distinguish the reason for any change in average temperature over a particular time period.
That is why I am intrigued by the method used by Tsonis et al,
https://pantherfile.uwm.edu/aatsonis/www/
It is like creating an analogue computer: putting in the differential equations, letting the system develop, and looking at the output. Chaos at work, so to speak, as with the numerous coupled pendulums on videos and studies. https://pantherfile.uwm.edu/aatsonis/www/2007GL030288.pdf
In fig.4 they do give a temperature prediction.
When I was a graduate student in a small institute in Greece back in the early sixties, there was a computer science graduate student working with analogue computing. I think the institute had at the time an HP analogue computer where one had to change the capacitors etc to create the differential equations and boundary conditions that would solve the problem, as an output voltage. At the time there was strong interest in analogue computers since they were very much faster in solving equations than digital, and from my naive point of view then, it seemed touch and go which way the ball would roll between analogue and digital.
When I stumbled on the climate problem a few years ago, I thought an analogue approach would be useful, modeling chaos hands on. The Tsonis approach is the closest I have seen to this.
Lucia – Appreciate your raising the issue of chaos in climate. Some observations that may be useful:
1. Sensitivity to initial conditions (SIC), sometimes called “the butterfly effect.” This is the property of chaotic attractors that receives the most attention. Initialize the equations at two ever so slightly different starting points, (x0,y0,z0), in the present case and, if you wait long enough, the solutions become uncorrelated. More generally the only thing you can predict is what amounts to a probability distribution, the so-called “invariant measure,” which gives the likelihood of the system being in different regions of the phase space – in this case, x,y,z space. An animation showing how 100,000 initially nearby points get spread over the Lorenz attractor can be viewed at http://bill.srnr.arizona.edu/demos/mixing/mix.lorenz.html. Go up one level in the directory tree and you can get to some other examples.
2. Topology. SIC is a consequence of the topology of chaotic sets – specifically, the fact that every point on such a set is arbitrarily close to a saddle, which can either be a saddle equilibrium or a saddle cycle. By saddle, I mean that the object in question is attracting in some directions (stable manifold) and repelling in others (unstable manifold). Because saddles are dense on chaotic sets, it follows that any pair of initial points will be split by the stable manifold of some saddle. In the Lorenz equations, there is a saddle at the origin – (x,y,z)=(0,0,0). In the animation referenced above, you can watch this point split the ensemble of evolving points into groups that go one way or the other. At smaller length scales, there is additional splitting by the saddle cycles, but this is not apparent in the animation. Correspondingly, if one plays the same game with the Rossler funnel, the most obvious splitting is accomplished by a saddle cycle – see the animation at http://bill.srnr.arizona.edu/demos/mixing/mix.funnel.html.
3. Dance of the cycles. Because the cycles are saddles, an evolving chaotic orbit can be viewed as a choreography in which the lead dancer (the orbit in question) successively dances with a sequence of partners (the cycles), each of which is always abandoned. That is to say, the orbit shadows first one saddle cycle (as it approaches in the direction of the stable manifold) and then diverges (in the direction of the unstable manifold) only to approach another saddle in the direction of its stable manifold. This “dance of the cycles” explains why chaotic motion is a mix of statistical periodicity and apparent aperiodicity. It also explains, I believe, why climatic time series manifest just such a mixture of behaviors – see http://bill.srnr.arizona.edu/mss/Surfeit.pdf (published in Energy and Environment) for details. And, it is what underlies observations of climatic shifts – see, for example, recent papers by Tsonis, Swanson and their associates.
4. Response to external forcing. While it is certainly the case that forcing can induce qualitative changes in behavior that are “expected,” the response of chaotic systems to such forcing is actually quite complicated. To see why, one has to shift one’s focus from the phase space [(x,y,z) in the present example], wherein trajectories evolve, to parameter space [(sigma, rho, beta) in the present example], and from individual trajectories to invariant sets (equilibria and cycles). Then one can delimit regions of parameter space for which exist cycles, some stable, some not, of different base period. Often these regions take the form of overlapping “horns” or “tongues” – see the paper previously cited for discussion and references. The horns have an extraordinarily complex structure, but the important point is that as one varies a parameter, one cuts across a series of tongues, with the consequence that periodicities come and go in a non-intuitive fashion. Having said that, one can also observe that there is often what the nonlinear chemists call an “oscillatory region” corresponding to periodic, quasiperiodic and chaotic dynamics. Outside this region, the long-term dynamics are equilibrial – see the comment by Phil above.
5. Constant vs. periodic forcing. The preceding comments are equally applicable to constant and periodic forcing in the sense that the same qualitative topology obtains in both cases. This is because a non-autonomous (periodic parameters) system can always be recast as an autonomous (constant parameters) system of higher dimension. An oscillator subject to periodic forcing, for example, is equivalent to a pair of coupled oscillators, with the coupling being unidirectional – a point made in comments above.
6. Coexisting attractors. When resonance horns overlap, there will be “coexisting” attractors, which, depending on the particulars, can be periodic or chaotic. Then different initial conditions can make for long-term dynamics that are qualitatively different in the sense that there are different invariant measures. This is a second type of uncertainty distinct from SIC, and is sometimes termed “initial condition uncertainty” (ICU) – the classic example being the raindrop that winds up in the Mediterranean or the Atlantic depending on where it falls. In other words, with ICU, we have multiple invariant measures. Note that “noise” can kick the system from one “basin of attraction” to another, with the result that you can get an abrupt change in behavior. In other words, there are two ways of getting “shifts” – successive shadowing of different periodic orbits and jumps from one basin to another. As discussed by Yorke and his associates back in the 80’s, the basin boundaries can be fractals, in which case ICU can apply to “large” areas of the phase space.
7. Dimension. Regarding types of chaotic systems (PaulM), the critical point is the dimensionality of the motion, i.e., of the evolving orbit. If it is low, there are mathematical tools that can be brought to bear; if high, one is pretty much stuck with simulation. In the case of the Lorenz attractor, the dimension is slightly in excess of 2, and we can say a great deal about what goes on – see Sparrow (1982 – http://www.amazon.com/Lorenz-Equations-Bifurcations-Attractors-Mathematical/dp/0387907750) for an exceptionally lucid treatment, as noted by PaulM above. When the dimension of the motion is even modestly greater, one is stuck with simulation. Note that it is the dimension of the motion, not of the phase space, that is at issue. A single partial differential equation (PDE) is, after all, formally equivalent to an infinite number of ordinary differential equations (ODEs). Yet some PDEs admit to equilibrial solutions (dimension = 0), in which case things are simple.
For what it’s worth, I believe that the climate system manifests field marks of chaos – specifically the mix of periodic and “random” behavior, that viewing it as an equilibrium system subject to “forcing” is a prescription for error and that the focus of climate modelers should be the replication of past behavior as opposed to forecasting. At the least, all those cycles (AO, PDO, etc.) have to come from somewhere. Simple models have the advantage that one can understand what they do and why. In the case of all-but-the-kitchen-sink simulations, one replaces a system one doesn’t understand with a model one can’t understand, at least not mathematically.
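The saddle at the origin mentioned in point 2 can be confirmed by linearizing the Lorenz equations there. One positive eigenvalue plus two negative ones is the signature of a saddle with a one-dimensional unstable manifold and a two-dimensional stable manifold:

```python
import numpy as np

# Jacobian of the Lorenz system evaluated at the origin (x, y, z) = (0, 0, 0),
# for the standard parameters sigma = 10, rho = 28, beta = 8/3.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
J = np.array([[-sigma, sigma,  0.0],
              [   rho,  -1.0,  0.0],
              [   0.0,   0.0, -beta]])
eigs = np.sort(np.linalg.eigvals(J).real)
print(eigs)   # roughly [-22.83, -2.67, 11.83]
# Two negative and one positive eigenvalue: the origin is a saddle that
# attracts along a 2D stable manifold and repels along a 1D unstable one.
```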
Lucia :
.
For that discussion, some of the language of chaos is obfuscating. Do the Lyapounov coefficients matter? Other than recognizing the dimension of attractors for climate must be “many”, does the actual number matter to our understanding of climate?
.
A resounding yes to both questions . The Lyapounov spectrum is even the most important issue that matters . It is what allows one to know whether a system is chaotic at all .
The dimension is something different but equally fundamental .
This belongs to the most fundamental concepts taught in non linear dynamics .
However if you asked whether the Lorenz attractor matters for the understanding of the climate (even only as example) , the answer would be a clear no . Among many reasons the most important is that the Lorenz system is temporal chaos while the climate is spatio-temporal chaos . They really don’t box in the same category .
I can’t see something “obfuscating” about Lyapounov coefficients and you have surely noticed that when discussing “chaos” I studiously avoid too “technical” concepts like Hopf bifurcations , hyperbolic instabilities and eigenvalues of Jacobians 🙂
.
What matters is that some rhetoric at blogs suggests that if the system is chaotic, then the variations in surface temperature we see must arise from traveling on the attractor, and cannot arise from systematic variations in heat addition to the system. This is clearly bunk.
.
I don’t know if it is clearly bunk nor if it matters .
What I know is that it is clearly confused .
IF the system is chaotic then every single variation of its dynamical parameters arises from the travelling on the attractor . Not only temperature but pressure , concentration , whatever .
It doesn’t even stand to debate , it is the definition of the attractor .
Whether I add heat or momentum to the system , whether it is constant or variable is irrelevant . It is already taken into account in the dynamics provided that I didn’t make a mistake in the definition of the phase space .
Having the same number of equations as unknowns also helps 🙂
I believe that the confusion comes from the difficulty to understand that when one says that an attractor is an invariant of the system , it really means that it is invariant .
If it varies , then tautologically it is not invariant so it is not an attractor and the system is either non chaotic or I completely bungled the phase space .
(Mind you , I may study the stability of the attractor to perturbations to either initial conditions or some of the constants .
To do that , like W.M.Schaffer says , one shifts from the phase space to the parameter space . Such studies are rather difficult but give important insights in transitions to chaos .
However this is not the same thing as a study of a given well defined system where , per definition , what one assumes constant may not vary else it would have been assumed variable . Also it should be noted as I already said that if one assumed Ra variable , the projection of the higher dimensional attractor (invariant !) on a plane wouldn’t look at all like the figure 3 .)
.
This answers also this :
The problem with this “question” is that your remark has just avoided the question people really want to ask. That question is: With respect to earth’s climate, are we visiting different places because the parameter that describes how forcing varies with time indicates forcing increased. (That is, are we visiting different places because forcing=Ra varied with time
.
But I have already answered in the original post too (here : A way to generalize would consist to consider Ra no more as a constant but to promote it to an independent variable .) .
Just to be sure that it’s clear , W.M.Schaffer says also the same thing in slightly different words : This is because a non-autonomous (periodic parameters) can always be recast as an autonomous (constant parameters) system of higher dimension. An oscillator subject to periodic forcing, for example, is equivalent to a pair of coupled oscillators,
So in a nutshell we are visiting different places because the attractor is the invariant of the system and all independent variables and constants contribute to make it what it is . None plays a “special” role .
.
W.M.Schaffer
.
I agree with every word of your comments .
Mpaul :
.
This raises an interesting question: does the human brain also have such fundamental physiological limits? In my view it would require a highly developed sense of hubris to believe that the human mind is physically capable of understanding anything and everything. So I support the idea that systems of Type III complexity exist and are, perhaps, numerous.
.
Just to enhance your digression .
As it happens the same kind of approach is used for studies of spatio-temporal deterministic chaos in brain and in the climate .
Namely discrete modelling via CML (coupled map lattices) .
The amusing part is that while the brain can be considered as being really a coupled map lattice because the number of neurons is finite albeit large , for the climate this is only a gross approximation .
This suggests that the climate dynamics is more complex than the brain dynamics and it is a euphemism to say that the latter is vastly beyond our grasp .
That’s why I rather agree with your digression 🙂
The answer to this question is not necessarily strictly tied to chaos. The utility of averaging a system can be determined by the Hurst exponent, whether a system is chaotic or not.
The Hurst exponent (H) is a measure of the uncertainty of a system at different scales. A Markovian process or a system like the simple Lorenz system yield H=0.5 for timescales large with respect to the time constant of the system.
A Hurst exponent of 0.5 indicates that a sample average rapidly converges on the population average as timescales increase; specifically, the standard deviation (uncertainty) reduces with the number of samples raised to the power H (i.e., the square root of the number of samples).
A Hurst exponent of 1 means that the uncertainty does not reduce at all with averaging; that is, a single point sample is no less accurate an estimator than a million samples all averaged. This is counter-intuitive to many, but a fundamental property of some stationary time series.
We can estimate the value of H given enough data, and as mentioned, the H value of both AR type noise and the Lorenz system is 0.5 for time scales larger than the time constant.
On the other hand, temperature has a Hurst exponent H of around 0.9-0.95. This is confirmed by instrumental data, recent reconstructions and long-term reconstructions (such as the ice core).
None of this is particularly relevant to chaos theory, but is more relevant to the specific question you were asking.
Chaos may be relevant, however, as a potential cause of the high Hurst exponent. Although the Lorenz attractor has a low value for H of 0.5 (thereby showing itself to be largely irrelevant in the context of climate analysis, although valuable as a tool to learn about chaos), there are other chaotic systems that may be more applicable (ref 1 below for an example).
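For concreteness, one common way to estimate H is the aggregated-variance method: block-average the series at several scales and fit the scaling of the standard deviation of the block means. The sketch below is my own illustration (not taken from the cited papers) and recovers H ≈ 0.5 for uncorrelated noise, as described above:

```python
import numpy as np

def hurst_aggvar(x, scales):
    """Aggregated-variance estimate of the Hurst exponent H.
    For a self-similar process, std(block means of size m) ~ m**(H - 1),
    so H is recovered from the slope of a log-log fit."""
    stds = [x[:(len(x) // m) * m].reshape(-1, m).mean(axis=1).std()
            for m in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(stds), 1)
    return slope + 1.0

rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 17)            # uncorrelated Gaussian noise
h = hurst_aggvar(white, [16, 32, 64, 128, 256, 512])
print(h)   # close to 0.5, i.e. averaging reduces uncertainty as 1/sqrt(n)
```

A series with H near 0.9, as claimed for temperature, would show a much shallower decay of the block-mean standard deviation.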
On other topics: as ever, Tom Vonk is doing a better job than I could ever do of explaining the difficulties of analysing chaotic systems, and endorse his comments here strongly. WM Schaffer’s contribution is very well expressed and a valuable insight into some of the issues. On the topic of dimensionality of the chaotic attractor, I have heard a number of climate scientists assert that there is evidence that the chaotic attractor of climate is of the low dimensional type and is therefore suitable for simplistic analysis. I think that ref 2 below is a good discussion on this topic and questions whether the low dimensional claim has merit.
ref 1. Koutsoyiannis, D., A toy model of climatic variability with scaling behaviour, Hydrofractals ’03, An international conference on fractals in hydrosciences link
ref 2. Koutsoyiannis, D., On embedding dimensions and their use to detect deterministic chaos in hydrological processes, Hydrofractals ’03, An international conference on fractals in hydrosciences link
Spence_UK
.
Nice to read you again .
2 comments .
.
– We did with Dan Hughes several Hurst analyses on the Lorenz systems (the old and the new) with D.Koutsoyiannis method and exchanged with Demetris .
The thing with the Hurst coefficient is not so simple and we saw several rather strange things .
The H actually seems to vary with scale and also with some arbitrary decisions like the sampling period (you need to extract a finite sample out of the continuous solution and THEN work on it) .
.
– There are papers analysing the observed time series (especially local pressure) and concluding that there is deterministic chaos . The dimension would be around 5-6 .
One can and must be skeptical – as in “the science is NOT settled” 🙂
First even if the authors didn’t fall in the obvious trap which would be analysing space averaged numbers but used local values instead , it doesn’t make the spatial correlations which pollute the local signal go away .
Second is that nobody knows how long should be the observation time to be reasonably sure that the attractor has been sufficiently covered and the dimension computed accurately enough . It is probable that this necessary time is several orders of magnitude above the length of the time series we have .
Demetris can make some rather relevant comments because HE works on a time series that is 3000 years long … 🙂
Lucia, I’m very busy through the weekend and will be away from my computer most of the time, I’ll get back to you later. I could probably send you some old papers?
Lucia, as Tom says, Lyapounov exponents do matter a lot. Basically , the reciprocal of the largest Lyapounov exponent, or a few multiples of this, tells you the time-scale over which you can successfully predict the future of the system. This is because trajectories diverge like exp(lambda t).
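A standard way to estimate that largest exponent numerically is Benettin’s two-trajectory method: follow a reference orbit and a slightly perturbed one, and periodically renormalize the separation back to its initial size, accumulating the log-growth. A rough sketch for the standard Lorenz parameters (the step size, renormalization interval, and counts are ad hoc choices):

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def step(s, dt):
    """One RK4 step of the standard Lorenz system."""
    def f(s):
        x, y, z = s
        return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(dt=0.005, renorm=20, n_renorm=4000, d0=1e-8):
    """Benettin's method: average log-growth of a small separation,
    rescaled back to d0 every `renorm` steps."""
    a = np.array([1.0, 1.0, 1.0])
    for _ in range(2000):                 # settle onto the attractor first
        a = step(a, dt)
    b = a + np.array([d0, 0.0, 0.0])
    total = 0.0
    for _ in range(n_renorm):
        for _ in range(renorm):
            a = step(a, dt); b = step(b, dt)
        d = np.linalg.norm(b - a)
        total += np.log(d / d0)
        b = a + (b - a) * (d0 / d)        # rescale separation back to d0
    return total / (n_renorm * renorm * dt)

lam = largest_lyapunov()
print(lam)         # should come out near 0.9 for these parameters
print(1.0 / lam)   # rough predictability horizon, in Lorenz time units
```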
PaulM:
That’s for a ‘weather’ prediction (state at a certain time), what about the ‘climate’ prediction (higher moments of state independent of time)? It would seem the things Tom V. and Spence_UK mention matter for that:
Spence_UK:
Tom Vonk:
Which goes back to what I posted up-thread (#41006) about what size of time series we need. Links to the analysis you guys did on the Lorenz system?
Since Tom Vonk mentioned eigenvalues of Jacobians, doesn’t it matter if we’re in the basin of entrainment of the climate system? Won’t that determine what effect our forcing has on the system? Whether it would ever be possible to control climate?
Lucia et al.
Regarding dynamical measures computed from time series. One or more positive Lyapunov exponents in continuous systems (magnitude > 1 in discrete systems) are indeed diagnostic of chaos absent exogenous noise. With noise added, things become murky. In my experience (but I’ve never worked with Hurst exponents), this applies to all of the measures that have been suggested as diagnostics for chaos. The problem goes back to Takens (1981) who proved that invariants such as the capacity (one of many “fractal dimensions”) of motion that lives on an n-dimensional compact manifold (think surface of a sphere) are preserved when one replaces the real (x,y,z,…) phase space with a surrogate space of dimension m ≥ 2n+1. This requirement is sufficient, but not necessary. Often, one can get away with fewer dimensions – see, for example, Swinney’s reconstructions of low-dimensional attractors for the BZ reaction back in the early 80’s.
The most frequently used surrogate coordinates are delayed values of one of the real variables, but one can in principle use almost any “observable,” i.e., function of the originals. More precisely, if X(t) is the observable, the surrogate coordinates are X(t), X(t+T), X(t+2T), …, where T is a time delay. Takens proved that quantities such as the capacity of the reconstructed system are generically identical to those of the original. By “generically,” I mean that if you have a box of all possible observables and delays, and randomly choose one of each, you get the correct result with probability one. That doesn’t preclude the existence of choices that give the wrong answer – there can be a lot of them – it’s just that the chance of choosing the wrong ones is zero.
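The delay-coordinate construction itself is a few lines of code. The sketch below generates an x(t) series from the Lorenz equations and stacks lagged copies to build the surrogate phase space; the lag of 10 steps is an arbitrary illustrative choice (in practice one usually picks it from the autocorrelation or mutual-information function):

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_x_series(n=50000, dt=0.01):
    """Return the x(t) time series of the Lorenz system (RK4)."""
    def f(s):
        x, y, z = s
        return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])
    s = np.array([1.0, 1.0, 1.0])
    xs = np.empty(n)
    for i in range(n):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = s[0]
    return xs

def delay_embed(x, m, lag):
    """Surrogate phase space: rows are (x(t), x(t+T), ..., x(t+(m-1)T))."""
    n = len(x) - (m - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(m)])

xs = lorenz_x_series()
emb = delay_embed(xs, m=3, lag=10)   # lag of 10 steps = 0.1 time units
print(emb.shape)
```

Plotting the three columns of `emb` against each other reproduces a butterfly-like object from the single observable x alone.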
As an example, consider the set of all real numbers. Of these, there are infinities of both rationals and irrationals, but the former are countably infinite; the latter, uncountably infinite. Choose a number at random and the chance is 100% that you get an irrational. Interestingly, this has relevance to chaotic motion in the sense that one can identify (via symbolic dynamics) the periodic motions on a chaotic set with the rationals, and the aperiodic motions with the irrationals.
Genericity arguments are often characterized as arguments of last resort. Their conclusions can be irrelevant to the problem at hand. Consider, for example, the set of all possible dynamical systems. Generically, such systems are dissipative – your chances of picking a conservative system from the “box” are zero. But if you’re studying planetary motions, this fact is irrelevant – you study the properties of conservative systems (Hamiltonian dynamics).
Returning to noise and surrogate dynamics: it is easy to see that the larger the embedding dimension – m above – the more trouble you’re in. That is, if the nth coordinate is X(t+nT), then the larger the value of n, the more likely it is that X(t+(n−i)T), for i ≤ n−1, has deviated from the value that would obtain absent the added noise.
Even in the case of simulations without noise, one’s choice of observable and delay can be practically important. For example, in the case of Rossler’s equations (http://bill.srnr.arizona.edu/demos/mixing/mix.rossler.html), calculating dimensions for x or y gives “good” results, but choosing the third coordinate, z, underestimates the true dimension. In this case, the reason is that x and y vary smoothly (in the colloquial sense of the word – they’re more or less sinusoidal) with time, whereas the z time series is “spiky.”
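For anyone who wants to reproduce the effect, the Rossler equations are easy to integrate; a rough sketch with the standard parameter values a = b = 0.2, c = 5.7 (crude forward Euler, adequate only for a qualitative look):

```python
import numpy as np

def rossler(n=100000, dt=0.01, a=0.2, b=0.2, c=5.7):
    # Rossler's equations: dx/dt = -y - z, dy/dt = x + a*y,
    # dz/dt = b + z*(x - c).  Crude forward-Euler integration.
    x, y, z = 1.0, 1.0, 0.0
    out = np.empty((n, 3))
    for k in range(n):
        x, y, z = (x + dt * (-y - z),
                   y + dt * (x + a * y),
                   z + dt * (b + z * (x - c)))
        out[k] = x, y, z
    return out

traj = rossler()
x, z = traj[:, 0], traj[:, 2]
# x oscillates smoothly, while z sits near zero most of the time and
# fires in brief spikes -- the behavior that degrades dimension estimates.
print(np.median(np.abs(x)), np.median(z), z.max())
```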
Bottom line: When the aforementioned techniques work, the results are spectacular. But outside the confines of the well-controlled laboratory, diagnosing chaos by attempting to estimate dynamical invariants is fraught with difficulties – to the point that most people who attempted to do this way back when simply gave up.
Implications for climate modeling: Seems to me that demonstrating the existence of periodic orbits corresponding to the “natural oscillations” – AO, PDO, etc. – in a low-dimensional model (maybe one of the two-box ocean models) for defensible parameters (ain’t that the challenge) would be useful. One could map out the resonance horns, possibly identify regions of parameter space corresponding to behavior that corresponds qualitatively to what one sees in the historical / prehistorical record. Anyone interested in pursuing this, perhaps Lucia could pass on my email address.
Tom Vonk, MPaul:
Regarding the natural history of seizures (epilepsy): most seizures are self-initiating and self-terminating. Some are inducible by environmental inputs (environmental epilepsies); some don’t quit absent intervention, but instead progress to status epilepticus, which can be fatal: it shuts down the brain stem, and the subject stops breathing. The most common example of environmental epilepsy is the induction of brief absences (petit-mal seizures) by flickering light. But words and thoughts have also been reported to induce seizures. Thus the example – can’t give you a reference off the top of my head – of the woman who had a seizure every time she said the words “eeny, meeny, miny, mo.” In other words, there are preimages of the seizure state in the dynamical system that corresponds to brain activity. Likewise, the abortion of developing seizures by delivering a blast of sound to the ears has been reported – again, I would have to look for the reference – by a Hungarian doc, possibly back in the 70’s, I believe.
Tom,
Thanks for the comments. I choose to take regular long breaks from commenting on the climate blogosphere for the sake of my sanity. I do not have your adept skill at ignoring the trolls and am easily drawn into pointless shouting matches, which does nobody any good. So I take long enforced breaks…
On the Hurst exponent, you make some good points which I would reiterate. The Hurst exponent is not an innate quality of chaotic systems, merely a statistical tool. As a statistical tool, however, it is a very powerful indicator of the utility of averaging – and can potentially be used to give an indication of whether complex chaotic systems will yield up useful information through averaging. This is closely tied to the question Lucia asked. However, it may not be invariant, and there may be serious problems estimating its value (you touch on some of these in your post). As noted above, due to my background I am less knowledgeable about chaos theory (although I think I know enough to be aware when someone is pulling the wool over my eyes), and I approach many of these problems from perhaps a more statistical perspective.
Your notes on the Hurst exponent and the simple Lorenz system are interesting. I did play with this some years ago. My thoughts were:
(1) To determine the value of a parameter at a scale, I used integration vs. time. The curve is smooth (if a small enough time step is used), so it seemed reasonable to assume that the error from simple numerical integration methods (e.g. Euler’s method) would be much smaller than the variance due to the dynamics of the attractor. I think I tried a second-order numerical integration (an Adams method?), which seemed to make little difference to the results.
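Point (1) is easy to check directly. A minimal sketch (my own code) comparing first-order Euler against classical fourth-order Runge-Kutta on the Lorenz system, with the usual σ = 10, β = 8/3, ρ = 28 and the dt = 0.0025 used earlier in the post:

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, beta=8.0/3.0, rho=28.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def euler_step(s, dt):
    return s + dt * lorenz_rhs(s)

def rk4_step(s, dt):
    # Classical fourth-order Runge-Kutta.
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def integrate(step, s0, dt, n):
    s = np.array(s0, float)
    out = np.empty((n, 3))
    for k in range(n):
        s = step(s, dt)
        out[k] = s
    return out

s0 = (1.0, 1.0, 1.0)
a = integrate(euler_step, s0, 0.0025, 2000)   # first-order
b = integrate(rk4_step, s0, 0.0025, 2000)     # fourth-order
# The schemes track each other at first, but sensitive dependence
# amplifies the truncation-error difference exponentially with time.
print(np.abs(a - b).max(axis=0))
```

The pointwise trajectories eventually diverge between schemes, but both stay on the same attractor, which is why the choice of integrator "made little difference" to statistics computed over it.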
(2) Trying to put the orbits of the Lorenz attractor into a statistical context was interesting. There are a number of issues likely to screw up statistical estimators, including
(2a) The effects of the orbit itself. Clearly, any spectral analysis (or spectrum-based estimator) is going to be thrown by the large amount of high-frequency spectral power in the orbits. However, the pattern within a single orbit is within prediction range and not terribly interesting. So if using averaging (see below on this!), I felt the aim must be to minimise the influence of this part of the orbit on the estimator. Now, if we average around a single orbit, we end up with a value somewhere in the middle of that orbital path (with some small error on it). Unfortunately, the orbits do not have a fixed period, so if we use a fixed averaging period, we inevitably include some fraction of an orbit, which adds an error to our estimate of the centre of the orbit. One approach to minimising this is to include a large number of orbits in our overall average; then the extent to which a fraction of an orbit is included is minimised. Unfortunately this leads to a second problem…
(2b) The transition between attractors. The nature of the Lorenz system is that the trajectory orbits one attractor in an expanding series of orbits before transitioning from one to the other. The number of orbits around any one attractor is not fixed, which is another thing that will influence estimators. The transitions occur at some multiple of orbits and are less frequent, so they require longer averaging periods to average out. If we are attempting to average across transitions, we are now making a statistical estimate of a different thing – we are now estimating the average point between the two attractors, as opposed to the centre of each attractor if we only average between transitions.
But there are consequences of (2b). Consider that the phase space diagram is in 3D (unfortunately difficult to present on a screen without anaglyphs!). Imagine a vector, V, which passes between the two centres of attraction. If we measure a parameter P in phase space which is almost orthogonal to V, then the effects of (2b) are largely irrelevant. If the parameter being measured has any significant component parallel to V, then (2b) becomes important.
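For the record, V can be written down explicitly: the wing centres are the non-origin fixed points C± = (±√(β(ρ−1)), ±√(β(ρ−1)), ρ−1), so V has zero z-component – z is orthogonal to V, while x is not. A rough numerical check (my own sketch, crude Euler integration):

```python
import numpy as np

sigma, beta, rho = 10.0, 8.0/3.0, 28.0

# Non-origin fixed points (the wing centres) of the Lorenz system:
q = np.sqrt(beta * (rho - 1.0))
c_plus = np.array([q, q, rho - 1.0])
c_minus = np.array([-q, -q, rho - 1.0])
V = c_plus - c_minus
print(V)  # the z-component is exactly zero: z is orthogonal to V

# Crude Euler integration to compare the two coordinates:
s = np.array([1.0, 1.0, 1.0])
dt, n = 0.0025, 100000
traj = np.empty((n, 3))
for k in range(n):
    x, y, z = s
    s = s + dt * np.array([sigma * (y - x), x * (rho - z) - y,
                           x * y - beta * z])
    traj[k] = s

# x (parallel to V) is bimodal: its mean lumps the two wings together
# and sits near 0.  z (orthogonal to V) is single-humped: its mean is
# a meaningful location on the attractor.
print(traj[:, 0].mean(), traj[:, 2].mean())
```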
I’ve seen a few presentations of this topic with regard to climate science: a blog post on RealClimate from around 2005 called “chaos and climate”, and Lucia’s example above, both exhibit this. (Unfortunately, I clicked on the RealClimate link – so you lot wouldn’t have to! – and the graph I refer to has gone.) The “forcing” they have imposed is orthogonal to the attractors, which makes the transition between attractors much more obvious, particularly in the time domain. Interestingly, RealClimate put up a similar time-domain diagram to Lucia’s figure 4, but with a longer period (and, admittedly, a smaller step). RealClimate argued this shows the utility of averaging.
But note this: figure 4 has no averaging whatsoever. Neither did the RealClimate diagram. It was a direct plot of the trajectory of the attractor. These diagrams, far from showing the merit of averaging, show the ability of the human eyes to use metrics other than averaging to spot differences in the orbits.
In fact, the eye is drawn largely by the extent of the attractors, not the average value, and this is what the eye uses to differentiate between them. To be fair, because of the orthogonality to V, averaging at scale does yield a useful estimate in this case, and the Hurst exponent H, when estimated for a parameter largely orthogonal to V and using a method designed to minimise the impact of (2a) above, is actually a pretty good estimator in these circumstances. Also, this is the area where H gives a value close to 0.5.
However, for a value not orthogonal to V, where we run into difficulties with the transitions between orbits, the average is a much worse statistical estimator, and the time constant (and value of H) is somewhat higher and less consistent (more uncertain). Yet even though averaging does not work here, what the eye uses – the extent of the attractor, not the average – is a much more powerful statistic.
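For readers who want to experiment, a crude rescaled-range (R/S) estimator of H takes only a few lines; note that raw R/S is known to be biased upward for short series, one of the estimation problems touched on above (the code and the white-noise test signal are my own illustration):

```python
import numpy as np

def hurst_rs(x, windows):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent:
    for each window size n, average R/S over disjoint blocks,
    then fit log(R/S) against log(n); the slope estimates H."""
    logs_n, logs_rs = [], []
    for n in windows:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            block = x[start:start + n]
            dev = np.cumsum(block - block.mean())  # demeaned running sum
            r = dev.max() - dev.min()              # range
            s = block.std()                        # scale
            if s > 0:
                rs_vals.append(r / s)
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(logs_n, logs_rs, 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(20000)
H = hurst_rs(white, [32, 64, 128, 256, 512])
print(H)  # a bit above 0.5 for white noise: raw R/S is biased high
```

Serious work would use a bias correction (e.g. Anis-Lloyd) or a wavelet-based estimator, but this shows the basic mechanics.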
Ironically, for people who follow what is going on rather than parrot without understanding, the RealClimate post shows that averaging isn’t the best estimator even for the Lorenz attractor – the exact opposite of the claim in the text!
Once again though, as you and W. M. Schaffer correctly point out, whilst some of these things are interesting to play with, they are supremely weak evidence. We are throwing statistics at a problem we do not understand well and hoping something sticks. As can be seen from the simple Lorenz case, all may not be as it first seems. Take that to a more complex system, where we cannot draw on the insights available in the Lorenz case (where we know the equations of the underlying system exactly), and the arguments and evidence become incredibly weak; that caveat must always apply.
Sorry for the long post 🙂
A friend of mine posted this on his facebook account. I think you may find this quite interesting.
“A new kind of Science/ Is the universe fundamentally creative?
This is Stephen Wolfram (inventor of Mathematica, Caltech-educated physics PhD, MacArthur Genius Award winner, etc.) speaking about his book “A New Kind of Science.” Simple rules in nature create extraordinary organized systems. ”
http://www.youtube.com/watch?v=_eC14GonZnU
All these discussions reminded me of Bohm’s “implicate order, explicate order”, how one level of organization can be the building block of a higher organization.
The simplest example is the alphabet as implicate for written words, and then written words as the building blocks of meaning that rides on the words.
In physics, quantum mechanics would be implicate for quantum statistical mechanics, and quantum statistical mechanics in turn implicate for thermodynamics.
The interesting point is that each level has an independent organization and rules and the organization that rides on it is independent of the implicate level’s organization and rules.
Seems to me that studying chaos is mixing up these levels, mixing up content and context, taking from context and studying it within content. That is what happens when looking for temperatures coming out of trajectories.
Maybe I am wrong in this hand waving view though.
Spence_UK: For the parameter values discussed here, there’s only one attractor, albeit with two “wings,” each of which is organized about a saddle focus (stable in one direction; unstable, in two). As you imagine, the transitions from one wing to the other are important. Have a look at this.
Thanks for the correction on my terminology, I haven’t studied chaos formally (my interest is more from statistics, but I recognise the importance of relating the stats to the dynamics of the system) so my use of technical terms isn’t all it should be.
The link is interesting because it shows some behaviours that look like fairly conventional distributions, but even then there are many artefacts which may cause problems to statistical estimators. And this is before we get into the problems associated with self-similarity etc….
Took some time to visit jstults’ blog as well, which has a lot of interesting stuff in it. Seems to me there is a remarkable depth of knowledge and understanding contributing to this thread… very humbling indeed! Thanks to all involved, and I have a few new websites to absorb now as well.
Re: kuhnkat (Apr 23 17:49),
“A new kind of Science/ Is the universe fundamentally creative?
Thanks for the link. I heard the lecture . I do not know if it is a new paradigm for science, as he claims, but interesting it is.
Pythagoras called it the music of the spheres: numbers. There has always been a fascination with numerology, and I know that theorists have been playing with the idea that time and space are quantized (that there is a minimum unit of time and space), and his construct goes that way.
So, is the universe countable? What about Goedel’s theorem: “the set of all sets is open”? I think it ties in with that.
We live in interesting times.
Jstults :
.
Since Tom Vonk mentioned eigenvalues of Jacobians, doesn’t it matter if we’re in the basin of entrainment of the climate system? Won’t that determine what effect our forcing has on the system? Whether it would ever be possible to control climate?
.
You probably mean the basin of attraction.
As this is only relevant for the transient (which always has to be skipped over, because it is largely intractable), we left it behind long ago. We have been running through the attractor (if one supposes that it exists) for some 4 billion years.
Now if you suppose that one of the coordinates of the phase space, e.g. the CO2 concentration, has a very short (~century) transient pulse, then you’ll probably deviate the trajectory a bit, but it is not enough to perturb the attractor itself.
As for the last question: no, I don’t believe that climate can be controlled in any meaningful sense of the word “control”.
.
Spence_UK
.
But note this: figure 4 has no averaging whatsoever. Neither did the RealClimate diagram. It was a direct plot of the trajectory of the attractor. These diagrams, far from showing the merit of averaging, show the ability of the human eyes to use metrics other than averaging to spot differences in the orbits.
.
I also agree that figure 4 is not very helpful and certainly doesn’t say anything about averaging.
For me, what it shows is that when I take two very different systems, I will always be able to plot something that shows that the systems are, indeed, different.
Whether I plot instantaneous values or averages is just a question of taste.
.
… I haven’t studied chaos formally (my interest is more from statistics, but I recognise the importance of relating the stats to the dynamics of the system) …
.
Indeed, this is in my eyes one of the most important insights one gets when studying nonlinear dynamics.
It is even an interesting feature that while nothing deterministic can be said about the evolution despite its deterministic description, one can SOMETIMES say something very deterministic about the probability density of the dynamical states.
However, and I have said so many times above already, this is NOT a given!
The existence of an invariant measure (i.e. a PDF) for the Lorenz system is proven, but it must be rigorously proven on a case-by-case basis for every specific system.
I also repeat the important caveat: jumping from the finite phase-space domain of nonlinear ODEs (Lorenz & Co) to the infinite phase-space domain of nonlinear PDEs (weather & climate) allows (almost) no transport of results. One literally changes planets.
.
W.M.Schaffer
.
Seems to me that demonstrating the existence of periodic orbits corresponding to the “natural oscillations” – AO, PDO, etc. – in a low-dimensional model (maybe one of the two-box ocean models) for defensible parameters (ain’t that the challenge) would be useful. One could map out the resonance horns, possibly identify regions of parameter space corresponding to behavior that corresponds qualitatively to what one sees in the historical / prehistorical record. Anyone interested in pursuing this, perhaps Lucia could pass on my email address.
.
While it is not exactly the approach you describe, what Tsonis is doing is very near to it. Sure, he’s doing it the other way around – taking those “natural” oscillations and looking empirically at the interactions – but it’s not far from your idea.
Bejan’s constructal theory also goes along these lines.
Bejan constructed a “simple N-box” model of the Earth in which he showed that some large-scale spatial structures (e.g. Hadley cells) are almost trivial consequences of the constructal law.
However, I am not aware of anybody who has tried to go beyond that and explicitly construct a model with the target you describe.
.
AnnaV
.
All these discussions reminded me of Bohm’s “implicate order, explicate order”, how one level of organization can be the building block of a higher organization.
.
I have read Bohm’s book too, but I don’t think it is very relevant to the epistemology of chaos theory.
For me, chaos theory is the best illustration of Wigner’s “The Unreasonable Effectiveness of Mathematics”.
It shows that even such highly counterintuitive mathematical objects as Cantor sets, or continuous functions that are differentiable nowhere, have their equivalents in the physical world.
My conviction is that stubbornly ignoring the complexity, and imposing simplistic (not simple!) numerical models and hypotheses on probably the most complex system we know, will ultimately lead nowhere.
Tom Vonk:
No, I meant basin of entrainment; that has implications for control of chaotic systems by forcing (and it turns out to matter for using the method of manufactured solutions to verify a numerical implementation too).
A little more on the question: solar forcing, stochastic or chaotic?
I said:
Lucia said (emphasis added):
On the short time-scales that’s true (aperiodic modulation of the 11 year cycle). But on the long time-scales treating that stuff as Gaussian noise (stochastic) is pretty useful in explaining ice-age transitions as a stochastic resonance [pdf, good review article] that amplifies the effect of the tiny long time-scale orbital perturbations. I think this is what Smith was referring to in calling solar forcing a stochastic rather than chaotic influence on the climate.
So I guess the answer to which way of thinking about things is more useful is the old standby: “it depends”.
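The stochastic-resonance mechanism mentioned above can be illustrated with the standard toy model: an overdamped particle in a double-well potential, where a weak periodic forcing alone cannot drive hopping between wells, but an intermediate level of noise can. A minimal sketch (the parameter values and names are my own assumptions, not taken from the linked review):

```python
import numpy as np

def hop_count(noise, A=0.1, omega=2*np.pi/100.0, dt=0.05, n=100000, seed=1):
    """Overdamped double-well  dx = (x - x**3 + A*cos(omega*t)) dt + noise dW.
    Counts how many times the particle crosses between the wells at x = +/-1."""
    rng = np.random.default_rng(seed)
    x, t, crossings, side = -1.0, 0.0, 0, -1
    for _ in range(n):
        x += dt * (x - x**3 + A * np.cos(omega * t)) \
             + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x * side < 0 and abs(x) > 0.5:  # committed to the other well
            side = -side
            crossings += 1
    return crossings

# With tiny noise the weak forcing produces essentially no hopping;
# with moderate noise the transitions turn on -- the resonance regime.
few, many = hop_count(0.1), hop_count(0.5)
print(few, many)
```

In the ice-age picture, the weak forcing plays the role of the orbital perturbations and the noise that of fast "weather" variability.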
Tom, you must know about analogue computing.
Back when they were introduced, these were systems of electronic elements – integrators, differentiators, etc. – that could, LEGO-like, be combined to solve complicated systems of coupled differential equations.
It seems to me that one could create a “climate model” introducing the coupled differential equations and letting the system run to produce a chaotic behavior. Do you see a fundamental error in this proposal?
kuhnkat (Comment#41376) April 23rd, 2010 at 5:49 pm
A friend of mine posted this on his facebook account. I think you may find this quite interesting.
“A new kind of Science/ Is the universe fundamentally creative?
This is Stephen Wolfram (Inventor of Mathematica, Caltech educated physics PhD, MacArthur Genius Award winner, etc.) speaking about his book “A New Kind of Science.” Simple rules in nature create extraordinary organized systems. ”
It’s a nice book; it promises more than it delivers, though.
Jstults :
.
No, I meant basin of entrainment; that has implications for control of chaotic systems by forcing (and it turns out to matter for using the method of manufactured solutions to verify a numerical implementation too).
.
OK, I see; sorry, I misunderstood your question. So, more precisely:
.
1) The entrainment method only works if you know the dynamics of the system exactly. We don’t. On top of that, you need the existence of convergent regions. We don’t know that they exist.
The principle is something that W. M. Schaffer and I have already mentioned:
The dimension of the dynamical system generally increases by 1.0 when a forced perturbation is added. Then, a saddle node bifurcation that is not inherent in the original dynamics may occur. Higher-dimensional systems such as the dripping faucet may induce a global bifurcation further. It was shown that the change of the global structure in the phase space is essential in realizing the entrainment for the present system. [Kiyono 2008]
2) “Chaos control” theory has only some (partial) results for temporal chaos. These and other more classical results don’t carry over to spatio-temporal chaos (weather and climate).
.
As for the numerical treatments, the problem is twofold:
1) A numerical solution doesn’t converge. There is an absolute “computational horizon”.
2) As I have already mentioned on your blog, any numerical treatment necessarily leads to a violation of the no-intersection theorem (an issue related to the previous point). This is a deadly problem.
.
But on the long time-scales treating that stuff as Gaussian noise (stochastic) is pretty useful in explaining ice-age transitions as a stochastic resonance [pdf, good review article] that amplifies the effect of the tiny long time-scale orbital perturbations.
.
I am not very familiar with the technicalities of stochastic resonance, and it is not related to deterministic chaos.
In any case, the paper you linked, which I skimmed several years ago, is very good and to be recommended.
I do not believe that models based on white noise can be generalised very far, so I was not very interested in this theory.
In a way, what I answered to Lucia several posts above concerning the transformation of constants (here Ra) into variables could be called “chaotic resonance”.
And it has already been looked at anyway (for those curious enough, see http://arxiv4.library.cornell.edu/PS_cache/chao-dyn/pdf/9405/9405012v1.pdf).
.
AnnaV
.
It seems to me that one could create a “climate model” introducing the coupled differential equations and letting the system run to produce a chaotic behavior. Do you see a fundamental error in this proposal?
.
Yes, I do, Anna.
We don’t know the equations 🙂
Well, we do, even a two box model gives good results.
Definitely I have a problem with links .
Hope it works now :
http://arxiv1.library.cornell.edu/abs/chao-dyn/9405012
.
Kuhnkat and S.Mosher when talking about creativity of the nature , did you look at Bejan’s work ?
.
P.S for Anna
Those would be coupled PDEs, not ODEs.
Regardless of the fact that we don’t know them all, just imagine that you want a very “simple” toy model coupling Navier–Stokes, the heat equation, radiation, and an equation of state for a water-only planet.
You could never construct an analogue circuit for that.
Re: TomVonk (Apr 27 03:54),
You could never construct an analogue circuit for that .
Certainly I could not :).
But when I google
“solving coupled differential equations with analogue computing” a number of links come up that show people are working on the general problem.
After all, GCMs claim they are numerically solving some system of equations, so if it can be done digitally, it should be possible in analogue too.
http://rankexploits.com/musings/2009/two-box-model-algebra-for-ver-1-test/
Two-box model. It works.
http://rankexploits.com/musings/2009/arthurs-case-2-i-think/
Re: bugs (Comment#41611)
For those who do not feel like wading through millions of posts in the threads linked to by bugs: Tamino’s original two-box model was not really a physical model but a sophisticated-looking two-parameter regression scheme. It did not actually yield a better fit than a simple single-parameter lumped model; the advantage, rather, was that, because it had an extra parameter, it allowed the modeler to set the “climate sensitivity” of the model to pretty much anything he wanted it to be. 🙂
The threads linked above document some valiant efforts by Lucia and other posters to try and derive plausible two-box models from physical principles. The general conclusion appears to have been that the cause was more or less hopeless.
Personally, I have spent more than a couple of weeks now (since Carrick directed me to the GISS model E forcings) struggling with several two- and three-mode models of my own, only to come to the same two conclusions: (A) the goodness of the achievable fit in any of these generalized linear models is surprisingly limited, and (B) attempts to constrain the climate sensitivity in this way are pretty hopeless, since one can always get an infinitesimally better fit with a wildly different sensitivity.
Possible reasons for (A) are, of course, that the underlying dynamics is nonlinear (at least, I think, it must be nonlinear enough to “clip” the volcanic aerosol peaks), and/or that the conjectured forcings are not entirely reliable. As for (B), it may be telling us that there is really not enough information in the temperature and forcings time series to effectively constrain a many-parameter model, a point that was made already by somebody (I forget who) in one of the original two-box model threads.
Nonetheless, I tend to side with bugs’s original comment (way towards the beginning of this thread) that there is at least this much that’s predictable about the Earth’s climate system: if you pump heat into it, it warms up. I’ll present the evidence for this in a later post; now I have to run to teach a class 🙂
julio – “Nonetheless, I tend to side with bugs’s original comment (way towards the beginning of this thread) that there is at least this much that’s predictable about the Earth’s climate system: if you pump heat into it, it warms up.”
Well yes, that’s possible, but as Anna V (I think) observed up-thread, that might not be the only response. A simple example is an endothermic chemical reaction. How much energy is sequestered in vegetation growth? Tidal/air movement? Erosion? Phase change? How do these numbers compare to the assumed CO2 forcing in W/m²? Please can you build these into your post?
Re: curious (Apr 27 16:20),
We know approximately how much energy is converted to biomass every year and it’s less than 0.1% of total solar irradiance reaching the surface. That’s also why biomass as a source of energy can never amount to much. Pretty much the same thing goes for ocean and atmosphere circulation. Heat comes in and most of it goes back out again. The proposed radiative imbalance is way larger than any rational storage mechanism. You can rearrange the temperature distribution of the planet to radiate more energy without raising the average temperature, but it would be very unlikely, i.e. cool the tropics while warming the poles.
All right, so this is the promised evidence that the Earth warms up when radiative forcing goes up. It is the best I have been able to do with the GISS model E forcings. (Note: I am doing a running average of the forcings, as well as the temperature anomaly.)
http://comp.uark.edu/~jgeabana/av68.png
You might think: “climate vs weather” is all very well, but a 68-year running average?? But I actually have a good reason for it. It turns out that the most distinctive feature of the power spectrum of the temperature time-series over the past 130 or 160 years is actually a sharp low-frequency peak corresponding to an oscillation with a period of 68 or 69 years. By “sharp” I mean its width is comparable to the spectral resolution of the time series, which means the decay rate associated with it is on the order of 100 years or longer. I do not know if this is an ordinary (linear) resonance, possibly excited by some low-frequency spectral component of the forcing (there are such components, I checked), or if it is a nonlinear, self-sustained oscillation (limit cycle), but either way it is not the sort of thing you can ignore when looking at trends over the span of the instrumental record.
However, if you take a running average over about 68 years, the oscillation should (mostly) go away, and you can then really look for the underlying trends. (Also, if you try shorter running averages, you get some really weird stuff from the forcings, mostly due to those huge volcanic aerosol peaks.) What I have shown here is pretty much what you get from 66-, 67-, 68-, 69- and 70-year averages. It is a nice linear regression; Mathematica gives me an R-squared of 0.97 for this particular fit, and a “standard error” of 0.012 for the slope (I have no idea how it calculates that, though).
The “climate sensitivity” from this fit is 0.453*5.35 Log[2] = 1.68 C per CO2 doubling, +/- 0.04 C.
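For anyone checking the arithmetic, the number comes from the fitted slope times the canonical 5.35 ln 2 ≈ 3.71 W/m² forcing for a CO2 doubling:

```python
import math

slope = 0.453                  # K per (W/m^2), from the regression above
f_2x = 5.35 * math.log(2.0)    # canonical CO2-doubling forcing, ~3.71 W/m^2
sensitivity = slope * f_2x
print(round(sensitivity, 2))   # 1.68 C per doubling
```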
Do I believe it? Well, I think it is at least as good a guess as any. It is simple, it does not assume anything, and the only adjustable parameter–the averaging time–is taken from the data itself (the spectrum of the time series). I like it, personally, also because it reinforces my prejudices (I am human that way). Of course, I would have a hard time believing in such a low climate sensitivity if we had actually seen any substantial warming over the past decade…
Thanks DeWitt – as ever your posts suggest you do the numbers. I should do the same.
Julio – please can you clarify your graphic? It reads as if you are using actual temp records but what is the source for the values on your forcings axis? Is the graphic saying that a zero forcing maintained for the whole 68 year period corresponds to a cooling in a 68 year running average temp? Thanks
Re: curious (Comment#41638)
Yes, sorry about that. Each point in the graph represents an interval of 68 consecutive years. The “zero forcing” point represents a particular interval for which the forcings all averaged to zero. Over that same span of time, the average temperature anomaly was -0.15 or so. These are temperature anomalies, so the negative signs do not really mean anything.
The main point is that, generally speaking, larger forcings result in larger temperatures, in an approximate linear way. You cannot see time in the graph, but it more or less advances from left to right (both the average forcings, and the average temperatures, have increased with time more or less steadily).
The radiative forcings come from
http://data.giss.nasa.gov/modelforce/RadF.txt
or would if the NASA server was responding.
Thanks Julio – the cached version of the page is there as a data table. What is the source for the values in the table?
Re: curious (Comment#41640)
Ah, if only we knew…! 🙂 Some people might say it’s the fevered mind of James Hansen. 🙂 🙂 No doubt, though, they are a combination of actual measurements (the CO2 data, for instance), and educated guesses (at least some of the aerosols).
Despite an impressive level of precision for the 1880’s I’m afraid I have my doubts as to what those numbers mean – are they raw data, adjusted data or generated data? If they are generated data what algorithms generated them and does your graph risk being a visual representation of the algorithms?
Re: DeWitt Payne (Apr 27 18:21),
It is not only biomass that is in the game. How about kinetic energy in motion of ocean and air currents? Electricity? How about evaporation? How about conduction to the lower land levels?
From this plot
http://www.drroyspencer.com/wp-content/uploads/CERES-Terra-1.4-fb-removed.jpg
I claim there is a seasonal variation, like the breathing of CO2 in the AIRS records.
From the peaks and valleys I would handwave that about 0.5 to 1 watts/m^2 are playing ball here, and that would be a good place for heat to be transformed into wood, dry leaves, algae, etc.
Certainly, though, the heat turned into biomass is larger than the 0.1% claimed.
Not so small, and comparable to the small CO2 contribution.
AnnaV
.
Certainly, though, the heat turned into biomass is larger than the 0.1% claimed.
Not so small, and comparable to the small CO2 contribution.
.
Even if this has nothing to do with chaotic systems, you can do a fast back-of-envelope estimation of the orders of magnitude.
Plants don’t use heat, they use the visible part of the light spectrum, and more than half of the Sun’s radiation is useless for plants.
The typical photosynthesis efficiency (i.e., the ratio
used energy / total received energy) is around 1%. The dispersion is wide, around one order of magnitude up or down.
You receive around 300 W/m², so a typical plant will use and store 3 W/m² of it. So you are right, it is far from negligible.
The next step is to get more sophisticated and to multiply by m², because plants are not everywhere, and it is also not 300 W/m² everywhere.
But as plants are mostly where it’s more than 300 W/m² (the tropics), and microalgae are everywhere, one can expect the result to be about the same order of magnitude.
Let’s say that 1 W/m², which was your handwaving, is a reasonable order of magnitude 🙂
However, if the surface covered by plants is constant, this energy storage is constant too.
Sure, if it doubled you would see a measurable effect on many dynamical parameters: humidity, cloudiness, temperature, etc.
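The back-of-envelope above is easy to put in code. Everything here is just the order-of-magnitude arithmetic from the comment (300 W/m2 received, ~1% typical photosynthetic efficiency, both rough assumptions):

```python
# Order-of-magnitude estimate of energy stored by photosynthesis,
# following the comment above (all numbers are rough assumptions).
received_w_m2 = 300.0   # typical surface insolation where plants grow
efficiency = 0.01       # ~1% typical photosynthetic efficiency
stored_w_m2 = received_w_m2 * efficiency
print(stored_w_m2)      # ~3 W/m2, far from negligible on this estimate
```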
Julio (Comment#41636) April 27th, 2010 at 7:02 pm:
Change your formulae into C / W/m2.
This is really the test of global warming.
Global warming theory is now based on this figure being around 0.75K/W/m2 (or 0.81K/W/m2 sometimes) …
Which depends on feedbacks adding to the initial forcings outlined in RadF.txt. One needs water vapour and albedo feedbacks of another 2.5 times to get to 0.75K/W/m2.
Your number of 0.453 is really 0.22C/W/m2
Another way of looking at it is that the 3.7 W/m2 of additional forcing from a doubled CO2 has to eventually translate into 11.5 W/m2 of total forcing including feedbacks to get to the 3.0C per doubling (and then the lapse rate has to remain constant for surface temperatures to increase by the same 3.0C as the troposphere).
To be fair, the theory assumes that there is a lag until the 0.22C increases to 0.75C but looking at the last 12 years and the ocean heat content figures, we are not seeing any lagged response at all right now.
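The feedback bookkeeping in this comment can be checked in three lines. Using roughly 0.26 C per W/m2 for the no-feedback response (the thread quotes 0.265; 0.26 is the rounding that reproduces the quoted numbers) and the canonical 3.7 W/m2 per CO2 doubling:

```python
planck = 0.26          # C per W/m2, approximate no-feedback response
forcing_2x = 3.7       # W/m2 for a CO2 doubling (standard value)
target_warming = 3.0   # C per doubling, the high-sensitivity case

# Total forcing (direct + feedbacks) needed for 3.0 C at the Planck
# response, and the feedback share beyond the direct CO2 forcing.
total_needed = target_warming / planck
feedback_needed = total_needed - forcing_2x
print(round(total_needed, 1), round(feedback_needed, 1))  # 11.5 7.8
```

These are the 11.5 and 7.8 W/m2 figures quoted in the comment, recovered from the stated assumptions.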
Re: TomVonk (Apr 28 04:21),
According to data from this source, total annual solar energy at the surface is 5700 ZJ. Total photosynthetic energy conversion for the fixing of 2E11 t/year of CO2 is 3 ZJ. That’s 0.053%. I believe that qualifies as less than 0.1%. Remember that for much of the world, plants only fix significant amounts of CO2 for less than half the year. Also the NPP for the ocean is about 1/3 that for land on an area basis.
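The percentage here is a one-liner to verify; the ZJ figures are the ones DeWitt quotes from his source, taken as given:

```python
total_solar_zj = 5700.0   # annual solar energy at the surface, ZJ (quoted)
photosynthesis_zj = 3.0   # energy fixed by photosynthesis, ZJ/year (quoted)
fraction = photosynthesis_zj / total_solar_zj
print(round(100 * fraction, 3))  # ~0.053 percent, i.e. less than 0.1%
```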
Re: Bill Illis (Comment#41654)
Well, with the forcings assumed, the number is 0.45 C/W/m2. As the saying goes, I do not make the news, I just report it…
I am well aware (and also glad) that this is lower than the more alarmist figure of 0.75K/W/m2 used in some predictions. (I even think there is a nice irony in the fact that I can get such a low figure using Hansen’s own forcings!) I am also well aware, however, that this is higher than what you would get with no positive feedbacks at all (although your figure of 0.22 C/W/m2 strikes me as too low even by those standards).
So maybe there is some positive feedback. Why shouldn’t there be?
Re: TomVonk (Apr 28 04:21),
Well, the climate community seems to think that all energy that comes from the sun is turned into heat or radiation according to Stefan-Boltzmann, and ignores all other forms of energy into which the incoming radiation from the sun is transformed. You are right that the plants turn the energy directly into chemical bonds.
In addition we can consider the temperature regulating mechanisms of trees,
http://www.uncommondescent.com/biology/trees-regulate-photosynthesis-temperature-by-design/
which is another energy storage.
The plot I linked to shows increasing storage, and that goes with increasing CO2 which is a great food for the flora. I do not know how much is stored in the slowly biodegradable debris of flora, wood takes hundreds of years for example, not counting coal and turf. 20% of imbalance going into long term storage is not a bad estimate either.
Re: DeWitt Payne (Apr 28 07:50),
Plants do other things than just fixing CO2. Besides the heat regulation quoted above, they also grow (slow kinetic energy) and carry out various other energy-consuming chemical exchanges.
(My biology is high-school biology, and this
http://www.emc.maricopa.edu/faculty/farabee/BIOBK/BioBookPS.html
is more than I want to know about chlorophyll)
So I do not think it is that simple,
http://earthobservatory.nasa.gov/GlobalMaps/view.php?d1=CERES_NETFLUX_M&d2=MY1DMM_CHLORA
but yes, it has not much to do with chaos, except adding some more dimensions to the problem.
curious:
I do not know how they estimated the aerosol numbers. Keep in mind that they are parameters for a model, not (as presented) data, which is, presumably, why no error bars are given.
You and Bill Illis may find the following figure interesting, though. It is what I get if I throw away everything in the file except for the second column, the one labeled “well-mixed greenhouse gases”:
The implication is that it would be possible to explain all the warming we have seen over the past 130 years by assuming a “climate sensitivity” of only about 0.253*5.35 Log[2] = 0.94 C per CO2 doubling. The reason one expects the sensitivity to be higher than this is that the heating effect of the greenhouse gases has to fight (and overcome) the cooling effect of all kinds of aerosols (see the figure at http://data.giss.nasa.gov/modelforce/RadF.gif).
In any case, this does show that as the radiative forcing goes up, the temperature goes up too (the straight-line fit has an impressive R-squared of 0.997).
Tom: I know this is not chaos, but it is relevant to an important prior question, namely, how much of the observed behavior of the system can be understood by simple linearization.
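The 0.94 C figure above follows from the standard simplified CO2 forcing formula F = 5.35 ln(C/C0); the 0.253 slope is the fit from the figure, taken as given:

```python
import math

slope = 0.253                              # C per W/m2, from the fit above
forcing_per_doubling = 5.35 * math.log(2)  # W/m2, simplified CO2 forcing
sensitivity = slope * forcing_per_doubling
print(round(sensitivity, 2))               # ~0.94 C per CO2 doubling
```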
Julio,
I don’t want to be too negative here – it is evident you have put a fair amount of work in tracking down data, generating graphs and checking your ideas – but I am a little sceptical of what you say. To try and be positive about this – I think what you have done here is no worse than what others in climate science have done. But I also think it is really no better, either.
Part of the challenge is the big unknown, which is what we might term “natural” variability of the climate system (which climate scientists might refer to as the unforced variability). And this ties back to issues around internal dynamics, chaos, and self-similarity.
There is a second problem, which is producing a theory that is consistent across all scales. For example, to accept the simplistic forcing theory we need to accept that the climate converges on an equilibrium with a timescale of ~30 years (for climate scientists) or over a 68-year average (in your example). But if that is the case, then on very long scales (>10 kyr) the climate should track the forcings very faithfully. This means that at the Milankovitch scale we need to find a forcing of 20 W/m2 to generate the 10 degree C swings we see in the glacial-interglacial transitions. That is one hell of a forcing not to be able to find, given that the orbital forcings at the relevant timescales are negligible. Which means, with your model and that of climate scientists, we need one model to make annual variability work, another model to make the decadal-centennial scale work, and yet another model to make the millennial scale and upwards work.
There is a third problem here with the statistics as well: that the models are tuned with the data they are subsequently tested on. You argue that this is a benefit, but I see it as a serious problem. If you construct a theory using one set of data, it is inappropriate to apply a hypothesis test to that same data – you need to find an independent data set. Hans von Storch has a nice write up of the problems that this creates under the Mexican Hat discussion in his book (excerpt here).
Getting back to the first problem: the natural variability is often assumed to be ergodic (i.e., scale averaging or filtering reduces the uncertainty). This may not be the case. In fact, there is empirical evidence for self-similar behaviour in the climate system. Self-similar behaviour is interesting because it means there is greater spectral power at the lowest frequencies compared to other types of uncertainty. This means that whenever we look at data – whether proxy reconstructions, instrumental data, etc – the series appear to be dominated by a low frequency component – or trend – that closely matches the length of the data set. People naturally see this as a dominant cycle or trend of the data set. But then, when moving to a larger scale, what appeared to be a dominant cycle or trend at the lower scale simply becomes random noise at the larger scale.
We see this throughout climate science. The Milankovitch cycles at the 100 kyr scale. Fred Singer’s unstoppable 1500-year cycle. Your 68-year cycle. The IPCC trend. But are we really seeing deterministic cycles and trends here, or part of a self-similar variability within the natural dynamics of climate?
The follow up to all of this, though, is that I don’t really know what is right. It all amounts to empirical speculation. And a lot of AGW consensus people dismiss my theories on chaos and self-similarity as having no explanatory power.
Yet my model is consistent across all scales, whereas every other model I’ve seen works at one scale only and needs Tycho Brahe style fixes at the others. Whilst my model makes very limited predictions about how the future of climate might turn out, it makes one prediction that I think has been demonstrated over and over again: that people studying climate science will keep “discovering” interesting cycles at or near the length of the data set they are analysing, and making claims of deterministic behaviour associated with them. 🙂
Don’t be too put off by my negativity, though. Your ideas are interesting, and no less valid than anything being published in climate journals today. So keep crunching those numbers, you never know what you will find.
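One way to see Spence's point concretely: a series with no deterministic trend at all will still, quite often, show a convincing "trend" spanning the full record. A toy sketch, with a plain random walk standing in for self-similar noise (the |r| > 0.5 cutoff and the sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

def apparent_trend_fraction(n=512, trials=200):
    """For many random walks, count how often a naive linear fit over
    the whole record looks 'significant' by eye (|r| > 0.5), even
    though a random walk has no deterministic trend at all."""
    count = 0
    x = np.arange(n)
    for _ in range(trials):
        walk = np.cumsum(rng.standard_normal(n))
        r = np.corrcoef(x, walk)[0, 1]
        if abs(r) > 0.5:
            count += 1
    return count / trials

frac = apparent_trend_fraction()
print(frac)  # typically a substantial fraction of trend-free series
```

The point of the toy is only qualitative: low-frequency power in the noise masquerades as a trend at whatever scale you happen to be looking at.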
I think Spence_UK is being overly negative, and appears too invested in unsubstantiatable applied math terminology. I hope he similarly isn’t put off by the degree of negativism I have towards applied mathematicians pretending to understand experimental data. They are certainly no “less valid” than what any scifi writer might produce.</tongueincheek>
Hey, are you calling me a mathematician?
That counts as fighting talk where I come from!!! 😉 😀
Hi,
I have a climate physics model that I’m working on, but unrelated to chaos. I would like it vetted by some of the CBA’s that hang out here.
Is there an appropriate thread to post to? I also have a few graphs, so how does one go about having the ability to post them as well?
Thanks, AJ
Re: anna v (Apr 28 09:59),
Yes, and everything else they do uses the energy from the sugars produced by photosynthesis and is immediately converted to heat. Please don’t try to tell me that the gravitational potential energy of a tree, for example, is a significant form of stored energy. Even the temperature regulation is done by production of latent heat in the form of water vapor. In the end, the only meaningful number is the heat of combustion of the harvested, dried plant. For switchgrass, which seems to be the most efficient converter of sunlight to stored energy, you get ~1.3 W/m2 under ideal conditions, completely dry, pure oxygen etc.. That’s better than 0.1%, but most plants are nowhere near as efficient as switchgrass.
Spence_UK:
“Well you know… experimental science. It’s not like it’s mathematics or something.”
(Spoken by a math friend.)
Spence,
Thanks for all the comments, which I actually thought were very kindly expressed. Don’t worry: I have no ego invested in this, I’m just looking at it as a total outsider, and trying to tackle it with the tools in my toolbox. Unfortunately, chaos theory is not one of those tools!
I found your observations about self-similar behaviour and recurring low-frequency peaks fascinating. I admit I was a bit worried about “my” peak being so close to the zero-frequency end of the spectrum, but it seems a pretty robust feature: it shows up in the 130-year GISStemp series and in the 180-year HadCRUT series, before and after detrending, and in both series it is very nearly at the same frequency (as best as I can locate its position by interpolation). And, after all, why wouldn’t the system have a resonant frequency or a limit cycle? All you need is some kind of inertia; it could even be, literally, some physical inertia, as in a convection current.
In any case, I was not that concerned with the oscillation per se; my point was rather that if you try to calculate a trend or an average and ignore the underlying oscillation (or whatever it is)–which, given its long lifetime, can be properly considered part of the system’s “natural variation”–you may fool yourself into thinking, for instance, that the warming trend is larger than it is, if you average only over a half period while the thing is going up (which some people suspect is what many climate scientists have actually done). In short, I just brought the thing up to justify the use of a 68-year-long average, which otherwise might have appeared rather excessive and arbitrary.
Anyway… I quite agree with your contention that “If you construct a theory using one set of data, it is inappropriate to apply a hypothesis test to that same data”. I am going to spend some time reading von Storch’s article, because it looks really interesting. And I’ll try and backtrack to some of your earlier comments in the thread and see if I can learn something from them, but most of this will have to wait until after final exams…
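For what it's worth, locating a low-frequency spectral peak in a detrended anomaly series, as described above, can be sketched like this. The data are a synthetic 180-year series with a planted ~65-year oscillation; the real analysis would of course use the GISS or HadCRUT anomalies:

```python
import numpy as np

def dominant_period(series, dt=1.0):
    """Return the period (in years) of the largest nonzero-frequency
    spectral peak of a linearly detrended series."""
    idx = np.arange(len(series))
    detrended = series - np.polyval(np.polyfit(idx, series, 1), idx)
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=dt)
    k = 1 + np.argmax(spectrum[1:])   # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic 180-year record: trend + 65-year oscillation + noise
rng = np.random.default_rng(1)
t = np.arange(180)
series = (0.005 * t
          + 0.2 * np.sin(2 * np.pi * t / 65)
          + 0.05 * rng.standard_normal(180))
print(round(dominant_period(series)))
```

Note the coarse frequency resolution at this record length: the planted 65-year period lands between FFT bins, which is exactly why interpolation around the peak is needed to locate it more precisely.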
I made a few math mistakes in my first comments above on Julio’s calculations.
An additional comment is that the forcing series at
http://data.giss.nasa.gov/modelforce/NetF.txt
http://data.giss.nasa.gov/modelforce/RadF.txt
… which ends in 2003 at +1.92 Watts/m2, is only the direct Anthropogenic forcing. There is still Water Vapour and Albedo feedbacks of about another 200% to add in to these numbers in the long-run.
GISS Model E then calculates a temperature increase of 0.62C from this additional +1.92 Watts/m2 of forcing or just 0.32C / Watt/m2 to date.
This has to increase to 0.81C / Watt/m2 if there is to be +3.0C per CO2/GHG doubling which is estimated to provide an additional 3.7 Watts/m2 of forcing.
We can use the Stefan-Boltzmann equations to estimate how large the water vapour and albedo feedbacks have to be to reach +3.0C for the troposphere if there are just 3.7 Watts/m2 of additional GHG forcing. And that is another 7.8 Watts/m2 of feedbacks.
So far, 1.92 Watts/m2 are providing 0.62C of increased temperature, or just 0.32C / Watt/m2, which is just barely above the basic no-feedback 0.265C / Watt/m2 Planck response figure, so there are almost no feedbacks at all yet.
The water vapour data we do have does not clearly show any increase at all (if anything, there is declining water vapour in the troposphere and in the stratosphere with a small increase near the surface). Albedo? well, there is a very, very, very small decrease from Ice Albedo and a 4% decline in the average cloudiness of the Earth.
So the math is not adding up so far.
Trenberth is publishing papers saying there is “Missing Energy” and putting forth a new concept “Negative Radiative Feedback”.
http://img202.imageshack.us/img202/7316/trenberthmissingheat.png
http://img638.imageshack.us/img638/8098/trenberthnetradiation.jpg
We are already at 1.9 or 2.0 Watts/m2 toward the eventual 3.7 Watts/m2 for a CO2 doubling (half-way) and there is only 0.7C of warming so far.
The feedbacks better start kicking in soon or they will be rewriting the physics of global warming.
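The 0.265 C per W/m2 Planck response quoted above follows from linearizing Stefan-Boltzmann at the effective emission temperature. This is a sketch of that textbook derivation (255 K is the usual assumed value), not the GISS model's actual calculation:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m2/K^4
T_EFF = 255.0      # effective emission temperature, K (textbook value)

# Linearize F = sigma * T^4 about T_EFF:  dF/dT = 4 * sigma * T^3
dF_dT = 4 * SIGMA * T_EFF ** 3      # W/m2 per K of warming
planck_response = 1.0 / dF_dT       # K per W/m2, no-feedback response
print(round(dF_dT, 2), round(planck_response, 3))  # 3.76 0.266
```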
When you turn on the radiator in a room, does the room immediately reach the equilibrium temperature?
bugs:
Depends on whether I remembered to pay the gas bill.
>.>
Re: DeWitt Payne (Apr 28 17:21),
I find that the 0.1% you are quoting is the lower limit for biomass.
I also find 11% efficiency for algae, and they fill up the oceans:
http://energy-conservation.suite101.com/article.cfm/biodiesel_from_solar_energy
“…current state of the art and challenges” from volume 35, number 421 of the Journal of Industrial Microbiology and Biotechnology provides an estimate of maximum quantum efficiency of solar-to-chemical energy conversion of roughly 11.6%.
If one looks, there are also crops with high efficiency, like sugar cane, beets, etc.
So I think that the hand-waving estimate from Spencer’s plot may be a lower limit after all. There are a lot of algae in the oceans.
deWitt
.
For switchgrass, which seems to be the most efficient converter of sunlight to stored energy, you get ~1.3 W/m2 under ideal conditions, completely dry, pure oxygen etc.. That’s better than 0.1%, but most plants are nowhere near as efficient as switchgrass.
.
I think that there is something wrong with those figures (1 order of magnitude).
Not that it is very important, but one should get the figures right at least to the nearest power of 10.
Photosynthesis efficiency, as I posted above, is in a range of
~0.1% to ~10%, with an average around 1%.
Switchgrass is not really very efficient; as photosynthesis stores energy best in sugars, common sense would say to look for the highest efficiencies in sugar plants.
Indeed, sugar cane comes near to the 10%. Like Anna is saying, the microalgae are also rather high.
I had a look at your reference.
They neither explain where the number for fixed CO2 tonnage comes from, nor how they convert tonnes of CO2 into J.
In any case, the number obtained by dividing the latter by the total energy received is 1 order of magnitude off the estimated average photosynthesis efficiency.
I would suspect that the problem is in the number for biomass/year; it is probably not known to better than a factor of 10.
Indeed, it must be pretty hard to measure with any kind of accuracy the mass of all the microalgae all over the planet.
anna and Tom,
Quantum efficiency is not total efficiency. A quantum efficiency of 10% means that for every 10 photons of the correct wavelength, you do one conversion reaction. The correct wavelengths are only a small part of the solar spectrum. The ocean isn’t full of algae either. The growth of algae in the open ocean is limited by the availability of micro-nutrients like iron and phosphorus. The figure of 0.05% for the planet comes from net primary productivity data which is measured from satellites. Plants grow fastest when they’re young and there is little competition. Anywhere that isn’t being farmed or used as pasture, forests, e.g., will not be growing very fast.
As for sugar cane: The peak photosynthetic efficiency is ~8%. But not all of that is stored, as the plant requires the energy for a lot of other things like polymerizing the sugar to cellulose for its structure, growing roots, etc. Also, peak efficiency occurs at relatively low light level, ~ 100 W/m2. So overall efficiency actually decreases at local noon on clear days. Estimates for net efficiency are much lower:
That’s probably an underestimate of the average insolation during the growing season. Sugar cane is great for making ethanol because the sugar content is much higher than corn and the technology for conversion of sugar to ethanol is well established, unlike cellulose to ethanol. But the overall efficiency for conversion of sunlight to usable energy is still much lower than for photovoltaic panels. For switchgrass, it probably makes more sense to pelletize it and burn it directly in a power plant or gasify it and make methanol or whatever from the CO and hydrogen. South Carolina has contracted to supply 350,000 tons of pelletized switchgrass to be burned in European power plants.
The point is still that only a tiny fraction of incoming sunlight is stored by photosynthesis. Any energy used to circulate air and water ends up as heat in short order. Any imbalance between incoming and outgoing radiation will be reflected by a temperature change somewhere, with the ocean being the most likely candidate.
deWitt
.
The point is still that only a tiny fraction of incoming sunlight is stored by photosynthesis.
.
Yes, I think we all agree with that. But then, as with the CO2 effect, it all comes down to the question of how tiny it is.
We know what the bounds are: around 0.1% tiny minimum and around 10% tiny maximum.
I also agree with all the qualitative comments you made, and could add more.
– plants also use reflected and diffused light
– a plant’s effective surface is much bigger than its cross section
– microalgae concentration is highly variable, and they contain a large fraction of the total biomass while having a high photosynthetic efficiency. I strongly doubt that any satellite can measure underwater concentrations.
– the received energy, as well as the plants’ dynamics, is extremely variable with latitude, altitude, and season
– and of course changes of land use will impact the balances too
.
Beyond that, as we are not biologists, I’ll settle for 1% as the right order of magnitude, simply because it can’t be equal to either the minimum or the maximum.
DeWitt’s 0.05% gets a tick from the FAO:
“Approximately 5.7 x 10^24 J of solar energy are irradiated to the earth’s surface on an annual basis. Plants and photosynthetic organisms utilize this solar energy in fixing large amounts of CO2 (2×10^11 t = 3×10^21 J/year)”
That’s the way to do it. The amount of CO2 absorbed is known, as is the free energy of the reaction.
Chaos!
Marine algae can produce large amounts of a compound (dimethylsulfoniopropionate or DMSP) that, when broken down by bacteria, produces dimethyl sulfide (DMS), Dr Bond said.
DMS then enters the atmosphere and is thought to contribute to condensation of water vapour and cloud formation.
These algae can be found in such large numbers in the world oceans that the amount of DMS released can increase the reflection of sunlight by clouds which may contribute to a reduction in global temperature.
http://www.uq.edu.au/news/?article=11433
Re: Nick Stokes (Apr 30 05:36),
Those are, in fact, the numbers I used. Let’s look at it another way. If plants were globally 1% efficient at converting sunlight to stored chemical energy then it would be 4E12 t/year CO2 converted to biomass. But the atmosphere only contains 3E12 tons of CO2. The best measure of stored energy, IMO, is the heat of combustion of dried biomass, which is what I used for the switchgrass numbers.
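DeWitt's sanity check is worth writing out, since it settles the order-of-magnitude argument. The tonnages are the ones quoted in this thread (FAO fixation figure, approximate atmospheric CO2 stock):

```python
# If global photosynthesis were 1% efficient rather than ~0.05%, the
# implied CO2 draw-down would exceed the whole atmospheric stock.
fixed_at_005pct = 2e11    # t CO2/year actually fixed (FAO figure)
scale = 20                # 1% efficiency is 20x the ~0.05% actual
implied_fixation = fixed_at_005pct * scale
atmospheric_co2 = 3e12    # t CO2 in the whole atmosphere (approx.)
print(implied_fixation > atmospheric_co2)  # True: 1% is untenable globally
```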
AJ (Comment#41692) April 28th, 2010 at 4:55 pm
Hi,
I have a climate physics model that I’m working on, but unrelated to chaos. I would like it vetted by some of the CBA’s that hang out here.
Is there an appropriate thread to post to? I also have a few graphs, so how does one go about having the ability to post them as well?
Thanks, AJ
Contact Lucia, or try JeffId. Put your thick skin on.
TomVonk (Comment#41604) April 27th, 2010 at 3:54 am
Definitely I have a problem with links.
Hope it works now:
http://arxiv1.library.cornell……yn/9405012
.
Kuhnkat and S. Mosher, when talking about the creativity of nature, did you look at Bejan’s work?
.
Link still is no joy. No Tom, we didn’t discuss Bejan’s work. I plowed through about 50% of Wolfram’s book. I had some thoughts about using CA for a project I was playing with. Have you ever read Braitenberg’s stuff?
http://en.wikipedia.org/wiki/Braitenberg_vehicles
When I was working on NLG (natural language generation) back in the 80s, his work was a fascinating analog for what I was thinking. Underneath the hood (in the mind of man) there are these really simple machines with simple rules, but they give rise to really complex behavior.
Looking at a fractal, you wouldn’t guess that the underlying rule was really “simple”. Looking at the words people write, you wouldn’t think the underlying machine was really simple. Looking at the Braitenberg vehicles, you wouldn’t think the underlying circuit was really simple. Later, when I had to write some AI algorithms to control the behavior of “threat” air vehicles, I found you could do it with some pretty simple rules.
You’d imagine something really complex.
You might even imagine a ghost in the machine.
So, in this weird intuitive way, I saw this similarity in all these things. Anyway, 20 years later Wolfram does his book, and I was hoping for a lot more than it delivered. Says more about my expectations than his talent.
And then there was the question: can I tell anything about the underlying rules by measuring the complexity of the output?
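The "simple rule, complex output" point is exactly what Wolfram's elementary cellular automata illustrate. A minimal Rule 30 sketch (the textbook example, not anything from the project discussed above; periodic boundaries are an arbitrary choice):

```python
def rule30_step(cells):
    """One step of Wolfram's Rule 30: new = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(width=31, steps=15):
    """Evolve a single seed cell for `steps` generations."""
    cells = [0] * width
    cells[width // 2] = 1          # single seed in the middle
    rows = [cells]
    for _ in range(steps):
        cells = rule30_step(cells)
        rows.append(cells)
    return rows

for row in run():
    print("".join("#" if c else "." for c in row))
```

The printed triangle looks irregular and "random" on its right side, even though the rule is a one-line Boolean expression; that asymmetry between rule and output is the intuition in the comment above.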
steven mosher (Comment#41781) April 30th, 2010 at 8:27 am
Thanks Steven… but perhaps a little miscommunication on my part. I meant to post my model as a comment, not as a headline. As I’m just a hack with a spreadsheet and a formula, it’s probably either wrong or uninteresting 🙂
Is this thread a “free for all” or specifically about chaos?
AJ
Re: steven mosher (Apr 30 08:51),
And then there was the question. Can I tell anything about the underlying rules by measuring the complexity of the output..
Well, I would think not.
I listened to Wolfram’s lecture introducing his book. I am not convinced it is more than numerology to be induced to buy it.
I think it is along Pythagorean lines, and going back to the power of numbers is going back to the ancient paradigm with all the golden rules and ratios.
I think calculus was a real shift in paradigm and what introduced the modern scientific age.
On the other hand, theoretical physics that introduces string theories also goes back to a Pythagorean view of the world: every elementary constituent is a vibration on a generic string: the music of the spheres. And of course vibrations are quantized and once one gets quantization one can introduce numerology.
Then comes the chicken-and-egg question: is it the simple rules underlying the complex dynamical calculations, which then create the observed complexity, that dominate causality, or is it the complex dynamics themselves?
As a physicist bred on Action integrals and Lagrangians and constants of motion I am biased towards the view that it is dynamics that is causative, and see no change in paradigm.
‘Is this thread a “free for all” or specifically about chaos?’
AJ, Whatever else it is, from the title, it’s certainly fairly self referential?
Re: AJ (Apr 30 13:33),
I would think that an “open thread” means more or less free for all.