While browsing I ran across a publication list of recent papers. This title was intriguing:
Tuning the climate of a global model
Mauritsen, T., B. Stevens, E. Roeckner, T. Crueger, M. Esch, M. Giorgetta, H. Haak, J. H. Jungclaus, D. Klocke, D. Matei, U. Mikolajewicz, D. Notz, R. Pincus, H. Schmidt, and L. Tomassini: "Tuning the climate of a global model", Journal of Advances in Modeling Earth Systems, 4, doi:10.1029/2012MS000154, http://www.agu.org/journals/ms/ms1208/2012MS000154/2012MS000154.pdf
The paper includes interesting discussions of model tuning. Along the way it touches on behaviors that modelers consider when tuning. I'm not going to try to give any sort of summary. I'm just going to provide quotes of bits I found interesting.
Absolute Temperatures, the Cold Bias and NH Sea Ice
Of course we’ve discussed the spread in absolute temperatures in models before. That is: although models give fairly good estimates of the total temperature rise over the 20th century, individual models do not agree on the absolute temperature for the earth.
I found the interlaced discussion of the spread in absolute temperatures, the cold bias in models and a plausible consequence for their ability to predict the rate of melting of northern hemisphere sea ice interesting:
We usually focus on temperature anomalies, rather than the absolute temperature that the models produce, and for many purposes this is sufficient. Figure 1 instead shows the absolute temperature evolution from 1850 till present in realizations of the coupled climate models obtained from the Coupled Model Intercomparison Project phase 3 (CMIP3) [Meehl et al., 2007] and phase 5 (CMIP5) [Taylor et al., 2012] multimodel datasets available to us at the time of writing, along with two temperature records reconstructed from observations [Brohan et al., 2006]. There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K, and details such as the years of cooling following the volcanic eruptions, e.g., Krakatau (1883) and Pinatubo (1991), are found in both the observed record and most of the model realizations.

[6] Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Although the inter-model span is only one percent relative to absolute zero, that argument fails to be reassuring. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present. To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended. Prime examples are processes involving phase transitions of water: Evaporation and precipitation depend non-linearly on temperature through the Clausius-Clapeyron relation, while snow, sea-ice, tundra and glacier melt are critical to freezing temperatures in certain regions. The models in CMIP3 were frequently criticized for not being able to capture the timing of the observed rapid Arctic sea-ice decline [e.g., Stroeve et al., 2007]. While unlikely the only reason, provided that sea ice melt occurs at a specific absolute temperature, this model ensemble behavior seems not too surprising.
I was struck by this because it has always seemed to me that the bias in absolute temperatures could result in models simultaneously failing to predict the rapid melt rates while over-predicting the rise in global temperatures. Had there been a 1 K warm bias, the base level of ice in the Arctic might have been too low; consequently, the rate of melting would have been under-predicted.
If this argument about the effect of the cold bias on predictions of the rate of melting holds true, the rapid observed melt would not be an indication that model sensitivity is too low, nor that warming is progressing at a faster rate than in models. Rather, it merely suggests that because the absolute temperature of the earth is warmer than in the models, and because water melts or freezes based on absolute temperatures, not anomalies, we observe the transition from ice to water at a lower “anomaly” value.
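To make the Clausius-Clapeyron nonlinearity the paper mentions concrete, here is a minimal sketch of my own (not from the paper), using the common August-Roche-Magnus approximation for saturation vapor pressure with its usual textbook constants. The point is simply that a 1 K error in the absolute temperature shifts the moisture and freezing thresholds by amounts that anomalies alone don't capture.

```python
import math

def e_sat(t_celsius):
    """Saturation vapor pressure (hPa) via the August-Roche-Magnus approximation."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

# A 1 K offset in absolute temperature changes the saturated moisture content
# by roughly 6-7%, and the freezing threshold is crossed at a different
# "anomaly" value -- the point about cold-biased models and sea ice.
for t in (-2.0, 0.0, 14.0, 15.0):
    print(f"T = {t:5.1f} C  e_sat = {e_sat(t):6.2f} hPa  "
          f"change for +1 K = {100 * (e_sat(t + 1) / e_sat(t) - 1):4.1f}%")
```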
Energy leaks
There are some more arcane discussions. For conservation-of-energy aficionados, the section on model drift discusses the possibility of energy leaks in models.
If a model equilibrates at a positive radiation imbalance it indicates that it leaks energy, which appears to be the case in the majority of models, and if the equilibrium balance is negative it means that the model has artificial energy sources. We speculate that the fact that the bulk of models exhibit positive TOA radiation imbalances, and at the same time are cold-biased, is due to them having been tuned without account for energy leakage.

[31] We investigated the leakage of energy in MPI-ESM-LR of about 0.5 W m⁻² and found that it arises for the most part from mismatching grids and coastlines between the atmosphere and ocean model components. Further, some energy is lost due to an inconsistent treatment of the temperature of precipitation and river runoff into the ocean, and a small leakage of about 0.05 W m⁻² …
I can't begin to speculate how energy leakage would affect estimates of climate sensitivity, but I can say it's sub-optimal in a model that is supposed to conserve energy.
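For readers wondering what "diagnosing a leak" amounts to in practice, here is a hedged sketch of the bookkeeping (my own toy illustration, not the MPI diagnostic): in a conserving model the time-integrated net TOA flux should equal the change in stored heat content, and any persistent residual is a leak or an artificial source. The numbers are made up.

```python
# Schematic leak diagnosis: compare time-integrated TOA net flux with the
# change in simulated heat content. Values below are invented for illustration.
SECONDS_PER_YEAR = 3.15576e7
EARTH_AREA = 5.101e14  # m^2

def implied_leak(toa_net_wm2, heat_content_change_j, years):
    """Residual flux (W/m^2): positive means energy enters at the TOA but
    never shows up in stored heat, i.e. the model leaks."""
    absorbed = (sum(toa_net_wm2) / len(toa_net_wm2)) * EARTH_AREA * years * SECONDS_PER_YEAR
    return (absorbed - heat_content_change_j) / (EARTH_AREA * years * SECONDS_PER_YEAR)

# Toy example: a 100-year control run with a steady +0.6 W/m^2 TOA imbalance
# whose ocean+land heat content only rises by the equivalent of 0.1 W/m^2.
toa = [0.6] * 100
dH = 0.1 * EARTH_AREA * 100 * SECONDS_PER_YEAR
print(f"implied leak ~ {implied_leak(toa, dH, 100):.2f} W/m^2")  # ~0.50
```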
Hindcasts
Because we are often concerned with models' predictions of surface temperatures, I was struck by this discussion of the ability of their model to fit 20th century temperature data:
The MPI-ESM was not tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen. Yet, we were in the fortunate situation that the MPI-ESM-LR performed acceptably in this respect, and we did have good reasons to believe this would be the case in advance because the predecessor was capable of doing so. During the development of MPI-ESM-LR we worked under the perception that two of our tuning parameters had an influence on the climate sensitivity, namely the convective cloud entrainment rate and the convective cloud mass flux above the level of nonbuoyancy, so we decided to minimize changes relative to the previous model. The results presented here show that this perception was not correct as these parameters had only small impacts on the climate sensitivity of our model.
[66] Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results.
This seems to be an admission that modelers have known their models would be compared to 20th century data early on. So, early models were tuned to that. We are now in a situation where models can, mostly, match 20th century data. So, the good match in the hindcast for the historic surface temperatures is no longer a very good metric for determining which models are good or bad.
Overall: Definitely an interesting read.
I’ll be looking through other papers in the list of recent publications and encourage you to do so. I’d love a pointer to any that are particularly thought provoking. (I can also say I eagerly await:
McSweeney, C. F., and R. G. Jones: "No consensus on consensus: The challenge of finding a universal approach to measuring and mapping ensemble consistency in GCM projections", Climatic Change, submitted.) I hope they are going to discuss consistency with observations rather than just between models. Live in hope! 🙂
Lucia writes “So, the good match in the hindcast for the historic surface temperatures is no longer a very good metric for determining which models are good or bad.”
True. This negates any argument that a model has value because it can “successfully” hindcast. Arguing otherwise would be like suggesting a particular species might not have traits needed to survive. Obviously it does now although it might not in the future…
How long are these models run before they “equilibrate”? Could the earth sustain a radiation imbalance for, say, 1000 years while the deep ocean slowly equilibrates with the atmosphere? If the models aren’t run for a length of time corresponding to the equilibration time, I don’t see how one can say that the model is “leaking” or gaining energy, since some of the energy will be contained internally in the changing temperature.
OK, now that I have actually looked at the paper, I see that they (sort of) answered my question–they understand the thousand-year requirement, but seem to think they can get around it. One would like to see some proof though.
“2.3. Controlling the Global Mean Surface Temperature and Climate Drift

[26] A particular problem when tuning a coupled climate model is that it takes thousands of years for the deep ocean to be equilibrated. In many cases, it is not computationally feasible to redo such long simulations several times. Therefore it is valuable to estimate the equilibrium temperature with good precision long before equilibrium is actually reached. Ideally, one would like to think that if we tune our model to have a TOA radiation imbalance that closely matches the observed ocean heat uptake in simulations where SST’s are prescribed to the present-day observed state with all relevant forcings applied, then the coupled climate model attains a global mean temperature in reasonable agreement with the observed.”
I had a quick look at the paper looking for the use of annual ocean temperature variability at depth. It appears that lots of atmospheric measures are being used in this regard, but I couldn’t find a mention of ocean metrics. With ARGO climatology data now being available for 0-2000M, I would see this as being an important target. I tend to view the long-term trends in ocean heat uptake as being highly uncertain, so the focus should be on replicating the annual signal.
AJ–
As good reliable data become available, it starts to be a target.
Lance–
I obtained a few control runs for AR5 models from KNMI. Some are clearly still drifting at the end of the control run. The rate of change in global temperature is small– but some are definitely still drifting. These things need long spin-ups.
Stephen will be along to tell you the models are NOT tuned in 10…9…8…
“Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased.”
Lucia points out that water freezes according to absolute temperatures, not trends. This is a canny observation that seems to have been missed by the entire climate science Team.
But even if a model matches both the observed absolute temperature and the trend, what if the observed temperatures are wrong (e.g., affected by UHI as Anthony Watts suggests in his recent draft article)? Then all models will be tuned to the wrong absolute temperature and eventually their predictions will depart from observations. Could this be what is happening as Lucia and commentators have been noting recently?
That would depend who is on “the team”. The authors of the paper I discuss above note that absolute temperature matters for phase transitions– like ice melting. Those authors are climate modelers.
Absolute temperatures matter a lot for some things; for others, not so much. Ice melting is affected by absolute temperature. So is absolute humidity. So, in principle, the water vapor feedback could be affected because the amount of moisture air can hold is non-linear in temperature.
The lack of discrete conservation of energy is a little disconcerting. This is a standard metric in other areas of fluid dynamical simulation. People long ago concluded that discrete conservation of mass, momentum, and energy was critical to evaluating a model’s tuned parameters. Lack of conservation can lead to “accidents” in which one confuses numerical noise with the continuous model being evaluated.
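As a concrete (if toy) illustration of the kind of discrete conservation check being described here, this sketch solves a 1-D heat equation with an explicit flux-form finite-volume update and insulated ends. With flux-form updates the total energy is conserved to round-off; the same bookkeeping flags a "leak" if a boundary flux is mishandled. This is my own toy example, not anything taken from a GCM.

```python
import numpy as np

def step_conservative(T, alpha, dx, dt):
    """One explicit flux-form update of dT/dt = alpha * d2T/dx2, insulated ends."""
    flux = np.zeros(len(T) + 1)                    # fluxes at cell faces
    flux[1:-1] = -alpha * (T[1:] - T[:-1]) / dx    # end-face fluxes stay zero
    return T - dt / dx * (flux[1:] - flux[:-1])

T = np.linspace(0.0, 100.0, 50)    # initial temperature profile
dx, alpha = 1.0, 1.0
dt = 0.4 * dx**2 / alpha           # within the explicit stability limit
total0 = T.sum() * dx              # proxy for total energy (rho*c = 1)
for _ in range(1000):
    T = step_conservative(T, alpha, dx, dt)
print("energy drift:", T.sum() * dx - total0)  # conserved to round-off
```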
But surely one has to ask WHY they do not use the models for absolute temperature evolution or prediction. If that is because they give grossly inaccurate results, then they must be discarded. As I understand the basic physics of the models, they USE absolute temperatures to determine the thermodynamic transfers, including the radiative transfers. Something is wrong if they don’t give reasonable results for absolute temperatures, I would have thought.
Lucia–
I was using “the Team” in the McIntyre sense, those who close ranks to defend the consensus. Agreed that Thorsten Mauritsen is demonstrably NOT on the Team, and he says so pretty directly near the end of this short clip:
http://www.youtube.com/watch?v=0OLB82KZbfI
David writes “The lack of discrete conservation of energy is a little disconcerting.”
So much for being physics based eh? If ever there was a simple proof they weren’t, that single statement is it.
They are physics based. But somewhere in the model, there is some sort of mis-match that results in energy leaking in or out.
A failure to conserve energy in the models casts further doubt on the assertion that long-term climate prediction (e.g. to 2100) is meaningful even for zero’th-order metrics such as average temperature. I’ve always been dubious about the assertion that “it’s a boundary value problem, not an initial value problem” because the earth/ocean/atmosphere system is not closed and we can not accurately quantify energy gain/loss at the TOA. I was always under the impression, though, that regardless of the accuracy of their energy fluxes, the models conserved energy.
Failure to demonstrate conservation of energy in the models makes the claim that we can reliably estimate how much energy will accumulate in the climatic system, even less credible.
The models underestimated the rate at which the ocean would absorb heat into the deeper levels. That would explain a lot. We understand to a greater degree where the energy being absorbed by the ‘greenhouse’ effect is going. Since the deep ocean research is only relatively recent, the early models have missed out on this input. As I understand it, the transfer of that much energy that deep in the ocean that quickly had not been expected.
http://www.gfdl.noaa.gov/blog/isaac-held/2011/10/26/19-radiative-convective-equilibrium/ Here is a good description of one of the problems with certain models.
A discussion of Browning and Kreiss here: http://climateaudit.org/2007/11/17/exponential-growth-in-physical-systems-3/ where Lucia provides comments and humor.
“They are physics based.”
This sounds a lot like an over-simplification intended to bestow on the models credibility they don’t necessarily deserve. Historical novels can be history-based and be completely erroneous. Any kind of contraption can be physics based and be completely useless at the same time.
Andrew
bugs- Underestimating the ocean heat uptake rate is not the same as an energy leak. An energy leak means either
a) Energy is created or destroyed somewhere.
b) Energy “leaks” in or out somewhere unintended.
The first could happen if there is some sort of “goof up” in patching together an interface between ocean and air, or from a mistaken formulation for an ordinary internal grid cell. (The latter would be very unlikely. Formulations of physics on internal grid cells are the first thing people think about.)
The second might happen if no-one ever properly thought about something like what to do at the ‘bottom’ of the ocean which represents a computational boundary.
For that matter, I’m not entirely sure what is correct at the ‘dirt/sea water’ interface at the bottom of the ocean or for that matter the earth. Heat does propagate in and out of ‘dirt’ and at sufficiently long time scales, it would matter if you are diagnosing “leaks”. I don’t know what typical models do about heat transfer into ‘soil’ (or rock or dirt. Whatever they have at the bottom of the ocean.)
It’s my impression, and experience, that generally when you simply parametrize basic physics, you are always in danger of producing models that don’t represent the underlying physics. Tamino’s 2-box model as a physical representation of the real climate system comes to mind. That something pulled out of thin air would not be physical should have been unsurprising to him: anytime you replace something based on underlying physical principles with a kludge of those physics, just parameterizing “by hand” isn’t likely to yield a faithful representation of that physics.
(There are other issues with breaches of physical law related to introduction of spurious solutions by a too-coarsely discretized version of the equations and aliasing of positive wave numbers into negative wavenumber space. This is usually avoidable with linear systems of PDE, often unavoidable when the system is nonlinear.)
This has an exposition on wavenumber aliasing in nonlinear PDEs.
Carrick– Like “wiggles” in heat transfer problems? ( Classic problem is heat equation in a bar. Bad discretization can cause temperature in box ‘i’ to be unlinked from temperature in neighbor boxes. )
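For anyone who hasn't met the "wiggles", here is a minimal sketch of the classic example just mentioned: the explicit (FTCS) heat-equation update on a bar, run once inside and once beyond the stability limit, where grid-scale sawtooth noise grows out of nothing. This is the standard textbook demonstration, written by me for illustration.

```python
import numpy as np

def run(r, steps=50, n=21):
    """Explicit FTCS heat equation on a bar; r = alpha*dt/dx^2, ends held at zero."""
    T = np.zeros(n)
    T[n // 2] = 1.0  # a single warm cell in the middle
    for _ in range(steps):
        T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

print("r = 0.4 (stable):   max|T| =", abs(run(0.4)).max())  # smooth, bounded
print("r = 0.6 (unstable): max|T| =", abs(run(0.6)).max())  # grid-scale sawtooth blows up
```

The stability limit for this scheme is r = 0.5; beyond it, neighboring cells decouple into an alternating pattern that amplifies every step.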
I think with Tamino’s two-box, it just didn’t occur to him that if you don’t *explicitly* check the 2nd law, a two box model could violate it.
Also: with respect to kludges, sometimes, it’s known they don’t work in some limit. But one doesn’t necessarily care. The difficulty for climate models and leaks is that they actually *do* want to try to determine sensitivity of models after the model reaches equilibrium. They also want to know what happens over relatively “short” time scales– like decades.
In contrast, if it was a weather model where even a month would be a fairly long forecast, a small energy leak issue might not matter. In fact, it almost certainly wouldn’t.
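Here is a hedged sketch of the sort of explicit check alluded to above, using a generic two-box energy-balance model (my own parameter names and values, not Tamino's): after choosing or fitting coefficients, one can verify that heat always flows from the warmer box to the cooler one, a check that fails automatically if the exchange coefficient comes out negative.

```python
import numpy as np

def two_box(forcing, c1=7.0, c2=100.0, lam=1.2, k=0.7, dt=0.25):
    """Generic two-box model: mixed layer (T1) coupled to deep ocean (T2).
    c1, c2: heat capacities; lam: feedback parameter; k: exchange coefficient."""
    T1 = T2 = 0.0
    for F in forcing:
        q = k * (T1 - T2)  # inter-box heat flow
        # explicit 2nd-law check: heat must flow from the warm box to the cold box
        assert q * (T1 - T2) >= 0, "heat flowing from cold box to warm box"
        T1 += dt * (F - lam * T1 - q) / c1
        T2 += dt * q / c2
    return T1, T2

F = np.full(400, 3.7)   # sustained 2xCO2-like forcing, 100 years at dt = 0.25
print(two_box(F))       # with a fitted k < 0 the assert would trip
```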
Energy leaks could also result from improper numerical methods where small truncation errors can subsequently grow. In iterative modelling, this is an important consideration.
From Wikipedia
http://en.wikipedia.org/wiki/Numerical_analysis
Numerical stability and well-posed problems
Numerical stability is an important notion in numerical analysis. An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is ill-conditioned, then any small error in the data will grow to be a large error.
I think David Young (Comment #105940) and I are addressing the same issue.
AJ, ironically it’s Edward Lorenz, a meteorologist, who first showed this problem…
Tibor Skardanelli (Comment #105961)
Yes, I thought about using the butterfly effect as an example.
This seems to be an admission that modelers have known their models would be compared to 20th century data early on. So, early models were tuned to that.
A point that needs to be made again and again; the models are better trained than performing seals.
Tibor Skardanelli (Comment #105961)
Actually no, the butterfly effect is an initial-conditions problem. I’m talking about a poorly implemented algorithm returning incorrect results compared to a properly implemented one, although the math in both is correct. It’s been many (many) moons since I took Numerical Methods and I haven’t touched it since, but I seem to recall the example of calculating sine(x) through a series expansion, where x was outside of a certain range. One method led to proper results and the other to erroneous results.
Personally, I would think that this kind of error would become self-evident in the model runs, but you never know. There might be some seemingly unimportant calculations sitting in the bowels of the model that have been overlooked.
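A minimal sketch of the kind of failure being recalled here (my reconstruction of the classic textbook example, not anyone's actual coursework): summing the Taylor series for sin(x) directly at large x destroys accuracy through cancellation of huge alternating terms, while reducing x into a small range first behaves fine.

```python
import math

def sin_taylor(x, terms=60):
    """Naive Taylor series for sin(x), summed directly at x."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

def sin_reduced(x):
    """Same series, after reducing x into [-pi, pi] first."""
    return sin_taylor(math.remainder(x, 2 * math.pi))

x = 50.0
print("math.sin :", math.sin(x))
print("naive    :", sin_taylor(x))   # cancellation among ~1e21-sized terms: garbage digits
print("reduced  :", sin_reduced(x))  # agrees with math.sin
```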
AJ/Tibor—
With regard to ‘energy leaks’, the short-term/long-term issue isn’t the main difficulty. Even in problems where there is no ‘butterfly effect’ type issue, someone making a prediction might do an order-of-magnitude calculation and decide that the error due to ‘energy leakage’ isn’t going to have much of an effect on the answer for computations at short time scales.
There are all sorts of problems where one ignores an effect that is small *in the region of interest* even though that effect might be moderate or even large if you were doing a different problem.
Consider this: If you are trying to figure out the force you need to apply to push a car out of a ditch, you likely don’t consider the aerodynamic drag that arises once the car is moving. Heck, you might not even consider it if you were trying to figure out the time it takes to accelerate from 0-50 mph so that you can easily merge on the highway.
In contrast, if you were trying to figure out the gas-mileage on the highway, you will likely need to consider the aerodynamic drag.
Even though energy leakage sounds like a big fundamental problem, you still might not care if you were trying to predict the evolution of weather over the next 4 weeks. You likely won’t even care much about ocean heat uptake. (Meanwhile, the Lorenz problem will begin to interfere with predictability, likely long before the ‘energy leakage’ does.)
AJ, you are right, but the parallel was irresistible. If I remember correctly, Lorenz lost his data after the crash of a disk and re-entered them from the listing he had kept, but the truncated values on the listing gave completely unexpected results.
Lucia, absolutely yes, but it was worth mentioning. Besides, there is a deeper problem with chaotic systems: it is not only a rounding-error problem but an intrinsic one (the Bernoulli shift is the perfect example). We can legitimately wonder whether the models we use to represent natural phenomena could really be used for long-term prediction even if they were absolutely free of rounding errors. It took almost 50 years to produce the KAM theorem after the three-body problem was stated.
It is very important that we understand climate, but I’m dubious about the predictive power of our models, at least for these reasons.
And sorry to be off the subject; consider it a break.
lucia (Comment #105934)
“As good reliable data become available, it starts to be a target.”
Agreed and I’d be surprised if the ARGO data wasn’t being targeted. It’s probably just not mentioned in this paper. A couple of years ago I compared the ARGO data to 10 CMIP3 models for depths 0-1000M. IIRC, GISS-ER was an egregious outlier in terms of both absolute temperature and annual variability. My guess is that they will perform better in the CMIP5 simulations.
Tibor–
Chaos is interesting, but it doesn’t always mean what people think it means. The vast majority of fluid dynamics problems of engineering interest are chaotic. But we can predict things like pressure drop in pipe flow sufficiently well to size pumps. So the fact that the earth’s climate/weather is chaotic does not necessarily preclude our being able to predict some things.
For example: we know increased CO2 should and will lead to some warming. It has to do so. Saying it doesn’t is a bit like suggesting that the fact that the Navier-Stokes equations result in chaos will prevent water from flowing downhill. In fact, even with chaos present, water will flow downhill. Also: if you set up a little “channel” mock-up in a lab, water will tend to flow faster if you incline the channel more steeply. This problem is even pretty predictable.
And yet, if you were able to introduce a very, very small colored bead into the flow and tried to predict its path, that path would be just as unpredictable as weather, because the system is chaotic. It’s just that chaos doesn’t always mean everything is unpredictable. Some things are predictable. And the same is likely true of climate. There is no reason it cannot be.
Of course “can be predicted” is not the same as “people are predicting things correctly”. But merely saying “Lorenz” is not enough to imply climate cannot be predicted.
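To put a number on "predict pressure drop well enough to size pumps", here is a hedged sketch using the Darcy-Weisbach equation with the Haaland friction-factor correlation (standard textbook formulas; the pipe and flow values are made up). The flow is fully turbulent and chaotic, yet this bulk quantity is predictable to within a few percent.

```python
import math

def pressure_drop(q_m3s, d, length, rough=4.5e-5, rho=998.0, mu=1.0e-3):
    """Darcy-Weisbach pressure drop (Pa) using the Haaland friction-factor fit."""
    v = q_m3s / (math.pi * d**2 / 4)   # mean velocity, m/s
    re = rho * v * d / mu              # Reynolds number (turbulent here)
    inv_sqrt_f = -1.8 * math.log10((rough / d / 3.7)**1.11 + 6.9 / re)
    f = 1.0 / inv_sqrt_f**2            # Darcy friction factor
    return f * (length / d) * 0.5 * rho * v**2, re

dp, re = pressure_drop(q_m3s=0.01, d=0.1, length=100.0)  # hypothetical water pipe
print(f"Re ~ {re:.0f} (chaotic turbulence), dp ~ {dp / 1000:.1f} kPa")
```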
“we know increased CO2 should and will lead to some warming”
Except when it gets colder. The squiggly line goes down sometimes.
Andrew
Being a modeler, I know about energy leaks, boundary problems, numerical instability, and approximations to PDE physics. It is even worse when things like clouds are represented in the models (no physics tells you how to represent a cloud so it behaves right). Yet I can’t count how many times commenters at, for example, Judith’s have insisted that the models are “just physics” and are not tuned. It makes me crazy. Instead of increasing confidence in the models, such assertions merely reveal the person to be a cheerleader.
Craig–
I think we’ve all heard tons of people in comments make the equivalent of a foot-stomp and then insist models ‘aren’t tuned’ and that those who think they are tuned ‘just don’t understand’. Sometimes what one supposedly doesn’t understand is the difference between curve-fitting and climate models. Well.. yeah. Many of us understand the difference. But just because tuning is done differently, and climate models have a basis in phenomenology, does not mean they are not tuned.
The vast majority of physical models for complex processes are tuned in some way or another. Climate models are not an exception.
Well… now here’s a peer-reviewed article by a climate modeler who not only admits his model is tuned but discusses how it is tuned. Moreover, he even discusses the fact that, to some extent, the tuning is done over and over, so there is a tendency to not want to change things too much.
When someone insists models aren’t tuned, you can link the paper. 🙂
Lucia, I never said we cannot predict some things; I say there are things you inherently cannot predict, beyond any question of measurement or calculation error. I also say that homeostatic phenomena like cloud formation increasing albedo are at the moment beyond our knowledge; I therefore express doubts about our capability to predict the behavior of a system far more complex than a pump. I hope we’ll have more and more accurate models, but I wonder what things we can really predict concerning the impact of CO2 on climate; call this reasonable skepticism. BTW, gravitation is at the basis of the three-body problem.
Craig Loehle (Comment #105975)
Yes, I frequently see “just physics” written in conjunction with “first principles”. Kinda bugs me.
Tibor
I know. 🙂
I knew you know it was a joke 🙂
Concerning the use of absolute temperature versus temperature anomalies, it is surprising how often those who think of themselves as being scientifically informed on the topic of climate change don’t acknowledge the implications of working with one as opposed to the other.
For a simple example, several years ago at a company party, I asked a question of one of our self-advertised knowledge experts on the topic of climate science, “OK, what you are saying to us — in essence — is that the earth is running a fever. And so I ask this question: what is the earth’s normal operating temperature, stated in degrees Fahrenheit, and how long would we expect the earth’s normal operating temperature to remain stable in the absence of ever-increasing levels of carbon dioxide?”
So he asks, “What do you mean by ‘the earth’s normal operating temperature’?”
I responded, “Well, for example, we humans have a normal body temperature of 98.6 degrees Fahrenheit on average; plus or minus some range of temperature values depending upon the specific individual. But if most any of us experience a body temperature above 100 F for some length of time, we are thought to be running a fever.”
He obviously wasn’t going to go down the path where such a question might lead, and his response basically was that most all of the work done in climate science deals with temperature anomalies, not with absolute temperatures.
Craig Loehle (Comment #105975)
“Instead of increasing confidence in the models” .. what confidence? If a model leaks energy, it is plain wrong. It needs a lot of work. Meanwhile I would not call it even a work in progress. It is a work in its infancy. It is nice to be able to model the past, but modeling the future would be even better. The CAM5 model uses a constant latent heat of vaporization, a 2.5% error in energy transfer at 25 C. No one cares. You have yet to earn my confidence.
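A quick back-of-the-envelope check on that latent-heat number, using the common linear fit for the latent heat of vaporization of water (my own sketch, not CAM5's code; the fit and its coefficients are the usual textbook approximation valid roughly from 0 to 40 C):

```python
def latent_heat_vap(t_celsius):
    """Latent heat of vaporization of water, J/kg, common linear fit (~0-40 C)."""
    return 2.501e6 - 2361.0 * t_celsius

L0, L25 = latent_heat_vap(0.0), latent_heat_vap(25.0)
print(f"L(0 C)  = {L0 / 1e6:.3f} MJ/kg")
print(f"L(25 C) = {L25 / 1e6:.3f} MJ/kg")
print(f"error from holding L fixed at the 0 C value: {100 * (L0 - L25) / L25:.1f}%")
```

This gives roughly 2.4% at 25 C, consistent with the figure quoted in the comment.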
What I find interesting is that if we accept that the models deal only with anomalies, and that they have problems with overall energy balance, i.e. cannot predict absolute temperature, then we have to ask what the components of those imbalances are. Given that the anthropogenic component of the basic warming is only a very, very small fraction of the overall energy balance, surely these “small” imbalances will result in major errors in the model predictions (the old issue of the difference between two large numbers).
Off topic, but a topic that could be discussed here. Sandy.
http://www.theaustralian.com.au/news/breaking-news/sandy-worsened-by-climate-change-report/story-fn3dxiwe-1226509537406
You don’t have to just rely on the models.
bugs – Anthony has an “on topic” post for you to join in with:
http://wattsupwiththat.com/2012/11/02/next-time-somebody-tries-to-tell-you-hurricane-sandy-was-an-unprecedented-east-coast-hurricane-show-them-this/
“I can’t begin to speculate how energy leakage would estimates of climate sensitivity but I can say it’s sub-optimal in a model that is supposed conserve energy.”
you missed a word?
lucia (Comment #105954)
November 2nd, 2012 at 7:12 am
I am referring to a damping effect like a capacitor. It slows the rate of warming, but the warming still gets to the same position.
As for flaws in models, of course there are flaws, they are models.
RE: Craig
I see in Craig’s comment a summation of the issues I’ve been raising at various blogs. It has been frustrating to me how many people who are novices in mathematics and computation with PDE’s argue with well-known and in many cases provable statements. Whoever said that all models of complex phenomena are tuned is absolutely correct. And the phenomenon doesn’t have to be that complicated. Take high-Reynolds-number turbulent flow of a uniform compressible fluid in a channel with an interesting curvature, or flow over a backward-facing step. There are at least 50 turbulence models for this and none of them holds up very well, and ALL HAVE TUNABLE PARAMETERS, in some cases scores or hundreds!! They cannot be a priori right, so the tuning process is critical and should be absolutely front and center in any ethical scientific discussion of the issue. This is for pseudo-steady-state flows. In time-dependent flows, it’s a lot more challenging.
People like Andy Lakis are informative because they actually know technical details if you look past the rhetoric. I question whether Gavin Schmidt is as up front. I don’t know if he understands the issues, because his communications are not detailed enough to tell. By the way, I hope all you New Yorkers are well and safe.
Anyway, I applaud Lucia for bringing this paper to our attention.
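For readers who haven't seen what "tunable parameters" look like in a turbulence closure, here are the standard empirical constants of the widely used k-epsilon model (the Launder-Spalding values). They were calibrated against benchmark flows rather than derived from first principles, which is exactly the point being made above.

```python
# Standard (Launder-Spalding) k-epsilon closure constants -- all tuned, none derived.
K_EPSILON_CONSTANTS = {
    "C_mu": 0.09,      # eddy-viscosity coefficient
    "C_1eps": 1.44,    # production coefficient in the epsilon equation
    "C_2eps": 1.92,    # destruction coefficient in the epsilon equation
    "sigma_k": 1.0,    # turbulent Prandtl number for k
    "sigma_eps": 1.3,  # turbulent Prandtl number for epsilon
}

for name, value in K_EPSILON_CONSTANTS.items():
    print(f"{name:>9s} = {value}")
```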
Lucia writes “They are physics based. But somewhere in the model, there is some sort of mis-match that results in energy leaking in or out.”
I think you’re too kind Lucia. How much energy is “allocated” to parametrised clouds? That’s not a properly physics-based part of the model and simply can’t be energy neutral.
lucia (Comment #105977)
“When someone insists models aren’t tuned, you can link the paper.”
Lucia, Thank-you for this post. I think your point is what drew me to this discussion, but I decided to focus on the minutiae.
However, I still can’t decide what to trust. A GCM or a simple n’th box model?
Regards, AJ
BTW… Instead of linking the paper, I’ll just link this post.
Also BTW… Love your hair.
Decide for yourself …
Home experiment No.1
A plastic bowl in a 750 watt microwave oven is not heated by the high intensity radiation (photons if you like) whereas the same bowl in front of a 750 watt electric radiator is heated by a similar intensity of radiation. So the bowl “detects” the frequency difference. Many seem to think that would not be possible and that all photons are the same and all cause warming. The frequency of the microwaves is less than that of the spontaneous radiation emitted by the bowl itself at room temperature. But the frequency of the radiation from the electric radiator is greater. That’s all that matters. That is a simple demonstration of how a surface “pseudo scatters” radiation which has lower frequency than its own emissions, and is not warmed by such radiation. This is the whole point of Prof Claes Johnson’s “Computational Blackbody Radiation” paper. So I have provided at least one example of empirical evidence which is not in conflict with what he has said. There has never been any empirical evidence to disprove what he said, and never will be. I have explained more in the first five sections of my paper.
Home experiment No.2
Check the outside temperature just before, and then soon after low clouds roll in. Why is it warmer when there are low clouds? Water vapour radiates with many more spectral lines than carbon dioxide, so its radiation is more effective per molecule in slowing the rate of radiative cooling of the Earth’s surface. It is also much more prolific in the atmosphere, so its overall effect on this slowing is probably of the order of at least 100 times the effect of carbon dioxide. Hence it is not at all surprising that low cloud cover slows radiative cooling quite noticeably and, while it is present in that particular location, the rate of cooling by non-radiative processes cannot accelerate fast enough to compensate. But that is a local weather event, not climate. Over the whole Earth and over a lengthy period there will be compensation. In any event, what is being compensated for is almost entirely due to water vapour, with carbon dioxide having less than 1% of the effect on that mere 14% of all heat transferred from the surface which enters the atmosphere by way of radiation.
bugs:
Damping relates to resistors (reduces energy in system).
Capacitance is more like a spring (stores energy, does not reduce energy).
While we’re at it, “resists motion” is like a mass (electrical analog is inductor).
Sorry “damping effect like a capacitor” just made me chuckle.
Doug:
Ok, I will.
You are a nut-job.
Carrick (Comment #106010)
In fairness to Bugs,
ENSO does behave like a capacitor, storing energy during La Niña and releasing it during El Niño.
A resistor doesn’t ‘damp’ in the way I meant. For a signal that’s changing, the capacitor will slow down its rate of change, but it will get to the same level in the end. I’m having a chuckle. heh.
bugs, “damp” has a specific meaning in science, and it just isn’t used the way you’re trying to use it.
Chuckle if you want, ignorance is bliss and all that. You and Doug should get together and discuss “science” sometime.
Bugs
That’s only due to the impedance, which is not unique to the capacitor. Inductors and resistors also have impedance. In fact, if you tune an inductor and a capacitor, their impedances cancel out and there is no “damping”.
What makes a capacitor unique is its ability to store charge.
C=q/V
John:
I didn’t say “capacitor” was a bad word, I said “damp” was the wrong word to describe what a capacitor does.
(Minor quibble: The ENSO does other things besides store energy. )
For bugs, a reading link.
Note especially this table:
Analogous quantities:
force ⇔ voltage
velocity ⇔ current
displacement ⇔ charge
damper ⇔ resistor
spring ⇔ capacitor
mass ⇔ inductor
bugs:
Technically if the system is changing (AC signal) and you put a capacitor in there, but no resistor, it will have no effect.
You need a non-zero resistance to produce a lagged, reduced response in the circuit.
Look at Capacitively coupled. It’s used to isolate DC offsets between systems. If you read the wiki, low-frequency components get degraded, but this is due to the finite resistance of the system.
Most audio amplifiers are capacitively coupled, with the roll-off being around 1-Hz or so…usually due to the finite resistance of the wire in the speaker coil (which is typically a couple of ohms). This is done to protect the coils against heat generation when the impedance looks resistive, which it typically does below 10-Hz on a subwoofer.
Jesus, it was just a simple analogy. I know what a capacitor and circuits do. A capacitor is like a bucket: it can hold charge, but only so much. By doing that, in a properly designed circuit with other relevant components, it will slow a rising signal. Don’t you think that the capacity of the ocean to take energy down deeper and quicker than first realised would make warming slower now? But only slower; it will still get to where it’s going. There will be a lot of energy stored down there; it may help melt sea ice, and release more energy in the El Nino/La Nina cycles.
Tamino has a graph of the increasing weather events around the world. It was done by a real capitalist company that wants to make money and not lose it; it is watching these events and adjusting premiums accordingly. What price will home insurance be next year when it gets renewed along the low-lying New Jersey coast?
bugs:
which explains this:
You could have fooled me. It’s the finite resistance that causes the response of the system to be slowed (and attenuated btw).
Sounds like what you’re trying to make an analog to is a low-pass RC filter. Like this.
Set R=0 and the capacitor has no effect.
You also said earlier:
In science, words have assigned meaning, and you don’t get to choose what they mean.
Anyway, muddled explanations such as yours are evidence of muddled understanding.
Tamino would because he suffers from a worse case of confirmation bias than you do.
More muddled logic combined with severe confirmation bias. Insurance company findings aren’t proxies for scientific data.
You should look at actual US hurricane landings versus decade, or cost of hurricanes in 2012 dollars, or numbers of tornados, or droughts (if you also adjust for population it even looks more lopsided). All of these point to the 1920s through 1950s as being more severe.
Speaking of muddled, this is so very confused an explanation as I’ve seen in years:
If you put in a finite frequency signal (not DC), then how much energy gets stored in the thermal reservoir depends on the time constant of the system. High enough frequency fluctuations like ENSO, for example, don’t ever “eventually get where it’s going”, except at levels so low they are only of academic rather than practical interest. The same works in reverse: If the time constant is large, the heat energy exchange from the deep ocean to ENSO will also be negligible.
Adding CO2 to the atmosphere is not a DC signal (at some point we run out of fossil fuel and CO2 will return to a lower level, possibly not reaching pre-industrial levels due to hysteresis effects). The point of this being the frequency response of the oceans does matter for the impact of a finite duration signal like CO2 emissions.
If the time constant is longer, for the same frequency content, it does mean that you will get a smaller net response of the system.
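Here is a hedged sketch of that point with a generic one-box energy-balance model (my own toy, with made-up parameters): the same finite-duration "pulse" of forcing produces a smaller peak response when the box's time constant is long relative to the pulse.

```python
import numpy as np

def one_box_response(forcing, tau, sensitivity=0.8, dt=1.0):
    """Integrate dT/dt = (sensitivity*F - T)/tau explicitly; return the peak response."""
    T, peak = 0.0, 0.0
    for F in forcing:
        T += dt * (sensitivity * F - T) / tau
        peak = max(peak, T)
    return peak

pulse = np.concatenate([np.full(150, 3.7), np.zeros(850)])  # 150-yr forcing pulse, then zero
for tau in (5.0, 50.0, 500.0):
    print(f"tau = {tau:5.0f} yr  peak response = {one_box_response(pulse, tau):.2f} "
          f"(equilibrium for sustained forcing would be {0.8 * 3.7:.2f})")
```

With a short time constant the response nearly reaches equilibrium during the pulse; with a time constant much longer than the pulse it never gets close.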
The climate change signal is a step function, like turning on a light. If you put a capacitor in parallel with a light, it will absorb a finite amount of charge, slowing the time it takes the light to reach peak brightness. It’s a simple analogy.
The real signal is not a step function. Step functions are infinite duration signals, anthropogenic CO2 is very definitely finite duration.
It’s a wrong analogy.
Speaking of muddled language, how’s this?
/chuckle
A capacitor in parallel with a light.
Can you draw that diagram for me? [OK, you were being literal! Lol. Wow.] If you can’t explain something clearly, that means you don’t understand it yourself.
Here’s a calculation for bugs, take a 60-W bulb, assume 120-V power supply.
We have P = V^2/R so R = V^2/P = 120^2/60 = 240 ohms.
Take a typical capacitor value of C = 1 µF, then the time constant is τ = RC = 0.0024 sec.
Compare this to the amount of time it takes the filament to heat up, which is on the order of hundreds of milliseconds. You’re two orders of magnitude off being even interesting here.
(Note: Please don’t try this one at home, the capacitor, if it isn’t rated for 120 V, will explode when you turn the switch on, and it can lead to eye damage.)
I suspect that tuning a climate model is like tuning a lute. There is an old saying that a lute player spends half his time tuning the lute and the other half playing out of tune.
Not sure anybody but me would have noticed, but I was missing a zero:
τ = RC = 0.00024 sec.
So make that three orders of magnitude off. You’d need about 3mF capacitance to get a turn-on roughly equal to the time constant for a 60-W bulb. The largest capacitance I could find on Digikey that was even rated for 120V was 2.2mF.
Back to my thermal measurements. Laterz.
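For anyone wanting to reproduce the arithmetic above, a short sketch under the same assumptions (an ideal 60 W bulb on a 120 V supply, source impedance ignored):

```python
V, P = 120.0, 60.0
R = V**2 / P                 # bulb resistance when hot: 240 ohms
for C in (1e-6, 2.2e-3):     # the 1 uF "typical" cap vs the largest 120 V part cited
    tau = R * C
    print(f"C = {C * 1e6:8.1f} uF  ->  tau = R*C = {tau * 1000:8.3f} ms")

# Filament warm-up is on the order of a few hundred ms, so the 1 uF case is
# roughly three orders of magnitude too fast to matter; ~3 mF would be needed
# for a time constant comparable to the bulb's turn-on.
```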
I have a question. How is the man-made part of the CO2 increase assessed? The rate of increase of fossil energy consumption since 1960 is much higher than the rate of increase of atmospheric CO2 concentration over the same period.
Carrick (Comment #106041)
November 3rd, 2012 at 12:00 am
Not sure anybody but me would have noticed, but I was missing a zero:
τ = RC = 0.00024 sec.
So make that three orders of magnitude off. You’d need about 3mF capacitance to get a turn-on roughly equal to the time constant for a 60-W bulb. The largest capacitance I could find on Digikey that was even rated for 120V was 2.2mF.
Back to my thermal measurements. Laterz.
You spent far more time on the analogy, which is not supposed to be taken literally but as an idea, and then completely ignore the point I am making. Let’s pretend you can buy a 550 V, 200 farad capacitor, just for the sake of argument.
The analogy is more appropriate than you think, though, since in geological time this is happening in an extremely rapid time period.
Carrick (Comment #106037)
“Take a typical capacitor value of C = 1 µF, then the time constant is τ = RC = 0.00[0]24 sec.”
No, that’s the time constant appropriate if the capacitor is charging through the light bulb. In fact, it’s charging direct from the power supply, so what you need is the source impedance for that. Or more exactly, the source impedance in parallel with 240 Ω.
Hi Lucia
Hope to see your blog about Sandy in the near future…..
lucia: “we know increased CO2 should and will lead to some warming. It has to do so.”
While I’m inclined to believe that increasing CO2 concentration does indeed result in an average global temperature higher than it otherwise would be, I’ve heard no compelling argument as to why it “has to be so,” although the overwhelming majority of pundits, skeptic and warmist alike, seem to believe it.
It is surely the case that some of the numerous interacting processes that affect global temperature are not even monotonic, so it is conceivable that one of the relevant quantities could directly or indirectly respond to carbon-dioxide concentration as a tunnel diode’s current does to voltage. That being the case, it seems to me that “it has to be so” is a little strong.
But perhaps you’ve encountered the compelling argument that I’ve so far missed?
Analogous quantities:
force ⇔ voltage
velocity ⇔ current
displacement ⇔ charge
damper ⇔ resistor
spring ⇔ capacitor
mass ⇔ inductor
How about Ocean ⇔ battery?
Monty–
I have nothing much to say about Sandy. I guess I could mention my nephew lives in New Jersey but not on the shore. My sister sent me photos of his house after the storm. A young tree was uprooted and fell harmlessly parallel to his house. He lost a piece of flashing off his house. His in-laws were closer to shore and moved in with him; I don’t know how much damage their house sustained. It must be much more than the damage to Hank’s house.
Joe Born
I don’t think it is a little strong. I think it has to be so.
Lucia: “I don’t think it is a little strong. I think it has to be so.”
I was actually looking for an argument rather more compelling than ” I think it has to be so,” but I appreciate the input anyway.
lucia (Comment #105977)
I found this in the paper…
How is this different from curve-fitting?
Joe writes “perhaps you’ve encountered the compelling argument that I’ve so far missed?”
There are two arguments for CO2 warming the atmosphere.
1. CO2 has a probability of capturing radiation (and transferring it to the atmosphere, thus warming it) centred at a couple of wavelengths (4 and 15 µm if I remember correctly) and lower probabilities the further you get away from those optimal wavelength values. With more CO2 you simply have a higher probability of picking up the radiation that is further away from the optimal values, and so more energy is transferred to the atmosphere.
2. All things being equal with more CO2 in the atmosphere, the average height at which energy is radiated away from the earth (by the CO2, because CO2 both captures the radiation down low and radiates it away from up high) must increase and therefore it happens at a lower temperature due to the lapse rate. So then to radiate the same energy, the atmosphere at that height must warm up a bit and so everything below it warms too.
IMO 1 is a relatively valid argument but 2 is not at all because all things most definitely are not equal.
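A back-of-the-envelope version of argument 2, purely for illustration (representative textbook numbers; the rise in emission height is an assumed value, and the whole thing explicitly holds the lapse rate and everything else fixed, which is exactly the "all things being equal" objection above):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
OLR = 240.0        # outgoing longwave radiation, W m^-2
LAPSE = 6.5        # typical tropospheric lapse rate, K per km
T_SURF = 288.0     # rough global mean surface temperature, K

T_emit = (OLR / SIGMA) ** 0.25        # effective emission temperature, ~255 K
z_emit = (T_SURF - T_emit) / LAPSE    # effective emission height, ~5 km
dz = 0.15                             # assumed rise in emission height, km (illustrative)
print(f"T_emit ~ {T_emit:.0f} K, z_emit ~ {z_emit:.1f} km")
print(f"raising z_emit by {dz} km with a fixed lapse rate warms the surface "
      f"by ~{LAPSE * dz:.1f} K")
```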
Joe–
If you mentioned any specific mechanism for why temperature might drop with CO2, I might give a more specific counter argument. But I’m not going to attempt some general long treatise to address something utterly non-specific.
TimTheToolMan:
Thanks. I actually do understand the arguments for higher carbon-dioxide concentration’s tending to require a higher average surface temperature (or at least average surface temperature^4) to achieve the same radiation back out into space, so, yes, I’m inclined to believe a higher concentration will to some extent result in a surface temperature higher than it otherwise would be. It’s just that, given the myriad other mechanisms involved in climate, I have not found a reason for being certain of that.
Lucia:
You’re right: I have identified no particular mechanism. But, then, I haven’t heard an adequate proof that there isn’t one, either. Your comment above made me wonder whether you had perhaps encountered such a proof. I infer from your reply that you haven’t. Thank you.
Joe Born – even agreeing terms over springs and dampers needs to be treated carefully:
http://en.wikibooks.org/wiki/Engineering_Acoustics/Electro-Mechanical_Analogies
Joe–
I’m not trying to be argumentative for the sake of being argumentative. I have no idea what you think constitutes “proof”. If one simply says “X might (or might not) be true” and then demands “proof”, there are all sorts of things I cannot prove. Example:
1) I have no “proof” that the Harry Potter stories are not literally true.
2) I have no “proof” that global warming is not caused by leprechauns.
3) I have no “proof” the bottom of hobbits feet are (or are not) covered with hair.
As far as I can tell, you have vaguely stated no specific ideas: maybe CO2 doesn’t warm because… well… many things in the universe are not monotonic, so maybe this is a non-monotonic one too. And that’s about “it” as far as your notion goes.
That is so vague and general that there is no place to start, other than possibly the beginning of a whole two-volume series on what CO2 does, etc.
I have encountered things I consider “proof”, and I believe your inference is wrong. However, I have no idea what sort of thing you would even consider “proof”, or whether you would equally well suggest that Harry Potter is real and “infer” that I do not have one if I don’t tap out a 1,000-page treatise on why I think he is not.
Re: Joe Born (Nov 3 08:21),
And you likely never will. The logic that there isn’t one is inductive and therefore can never be bullet proof. But it’s in the same class as Santa Claus and flying saucers whose non-existence can’t be proven to everyone’s satisfaction either.
Lucia, DeWitt, you are unfair to Joe, I believe. I do not think Joe is challenging the greenhouse effect of CO2 but rather its real impact on Earth in the industrial age. When he says there is no proof of a decisive impact, I think he is right. The difference with the existence of Harry Potter (who is a close friend of mine) is that, like the existence of angels, it is an undecidable question, while with enough time we will know the real impact of CO2, or our children will.
His question is relevant, contrary to what you say, Lucia, and not so vague, because, as the paper you propose we read shows, our models are not very robust or accurate, and we can legitimately wonder what their real predictive power is. I understand it can be very difficult to explain the Navier-Stokes equations, the Clapeyron formula and the black-body principle in lay terms, and at some moment or another we have to trust (or not) science; the whole global warming debate is polluted by political considerations and unfortunately science is used as a weapon. You can admit that other people, like Richard Lindzen with his Iris effect, for instance, challenge the consensus with sound arguments, and this simple fact gives reason to Joe (and me, by the way): there is no decisive idea allowing us to assess the real impact of CO2 on temperature on Earth, even if we believe that pumps exist, apples fall and CO2 has a logarithmic greenhouse effect.
Nick:
Um, no, the “source impedance” is in series with the source voltage supply and the external load, see this.
It is typically small (an ohm or less), and at less than 0.5% of the value of the light bulb is much less than sample-to-sample tolerance of the bulb itself. In other words, for bugs little example it can be ignored.
Carrick, Nick Stokes,
Entertaining electrical analogs. 😉
Nick, the utility feed line has pretty low impedance. The effective impedance of most any large value (non-ceramic) capacitor across the AC line is usually closer to the equivalent series resistance of the capacitor, not the power company’s line impedance. This internal resistance generates internal ohmic heating and can easily lead to spectacular failure of the capacitor at higher AC voltages and higher frequencies (some people call that failure mode a ‘blue smoke failure’), even if the capacitor would happily withstand much higher DC voltage.
Tibor–
The vague aspect is that his argument amounts to “some phenomena are not monotonic, so maybe this one is too”. That is too vague to rebut. For example: if I said “If we turn the heat up on my oven, it will get warmer”, and he responded, “I’m unconvinced, because there are many complex physical phenomena and some physical phenomena aren’t monotonic”… OK, but that is too vague for me to start to figure out what sort of evidence to bring forward. Am I going to have to prove all phenomena are monotonic? Well, of course I can’t prove that. For example: plants in my garden grow fastest in a certain temperature range. If the temperature got hotter, they would fry. Colder, they would die. But that is irrelevant to the phenomenology of my oven. For my oven problem, will it be sufficient to show a simple heat transfer model for my oven and show it will warm if I turn the heat up? I have no idea what sort of evidence or proof might satisfy Joe because what he wrote amounts to a vaguely stated position of being dubious.
I think if Joe wants to hold that position, that’s OK. He gets to do that. I’m not going to lambaste him as stupid, lacking understanding and so on. But I am going to point out that if his position is just a vaguely stated “I doubt”, I’m obviously not going to hunt around for the “proof” that would address whatever it is that causes him to doubt. If he wants a “proof” or “evidence”, he needs to describe what makes him doubt more specifically. Meanwhile, if he has doubts, well, I can perfectly well sleep at night and/or do more interesting (to me) things than spend my time trying to pull out his more specific reasons. I am perfectly content just pointing out that his reasons are too vague to address.
dallas:
Technically a battery implies a conversion of energy from electrical to some other form of energy. Thermal energy is just a special form of mechanical, so I don’t think that would really apply.
Ah Lucia, you have very sharp thoughts, and you are obviously right! But your sarcasm is very sharp too. Anyway, thanks for the paper.
Tibor
Actually, if you believe CO2 has a logarithmic greenhouse effect, then you actually agree with me that CO2 causes some warming. Joe wrote this:
The literal meaning of Joe’s words is that CO2 might not have a logarithmic effect; rather, that there might be a region in which increasing CO2 results in decreasing temperature. This is not consistent with believing that CO2 has a logarithmic effect.
Lindzen does not posit any sort of “tunnel effect”. Nor, as far as I am aware, has anyone advanced any sort of argument why increased CO2 might result in cooling. Joe seems to want me to “disprove” any and all theories that might hypothetically suggest increased CO2 results in cooling, while leaving it entirely to my imagination to (a) first dream up the theory and (b) then rebut it.
This really is a bit like asking me to first dream up theories for how and where Santa Claus exists and hides out and then proving that any and all theories I might dream up for how and where Santa hides out are wrong. This is an impossible task.
If Joe has a reasonably specific theory of how CO2 can result in cooling, he needs to explain it. After that, people judge it. But in the meantime: No. I have never disproven whatever undisclosed theory he has for why CO2 could cause cooling.
Tibor–
I’m not trying to be sarcastic. But I think Joe’s writing
Does require an answer to explain how one ought to interpret failure to find a “proof” of the sort he seems to suggest one should seek.
I see your cats taught you “some things”, I do not play with the mouse I just train it.
for the angels I meant “an unfalsifiable assertion” and not “an undecidable question”
Tibor–
Believe it or not, that particular chipmunk was dropped and ran off. The orange cat loved to play “catch and release”. The current grey one kills rabbits and birds quickly. He’s not interested in “playing” with the bunnies.
I’ve toyed with the idea of showing the rabett carcasses. But it’s just too icky!
Icky perhaps but certainly tasty!
Carrick, conversion is conversion. The chemical is implied because most batteries involve chemical conversion. A solid-state battery, the holy grail of energy storage, would simply contain potential energy with low leakage. So if the oceans leak energy slower than they can be charged, they would be a battery of sorts.
A solar pond with limited convection thanks to salinity layering can charge to about 90 C degrees after all.
But now you would be challenging the 33C no GHG assumption and the Faint Sun Paradox. OOOH scary stuff! But since the no GHG Earth would not have 240Wm-2 equally distributed from the “true” surface, but more like 399 Wm-2 at the equator, you would have a trickled charged ocean with roughly 4 to 5 C “normal” no GHG temperature.
Tibor–
The cat usually makes a real mess out of these rabbits. He usually has a nice meal and leaves the remains in the yard. They are not at all appetizing by the time we find them. (Except for when he first adopted us. We didn’t feed him promptly when he begged. He then went out the cat door and in 10 minutes brought us an absolutely fresh “teen aged” bunny and put it right on the step. He then meowed very loudly. We think he was trying to show us he was useful and worthy of taking in. But now he just leaves dead ones ‘around’.)
When I want to cook rabbit, I’ll go to the store. Both my local Butera and the local Asian food store always have them in stock.
When my wife was pregnant, our cat used to bring her mice under the bed. I must confess they did not look tasty to us, but to her (the cat) they must have looked very tasty.
You want proof? Here’s proof that God exists:
http://www.artmusicdance.com/vaspi/documenta.htm
Hoi, it’s the complicated version of Cantor’s statement: “God invented the prime numbers, man did the rest.”
Oh, dear, this became unedifying while I was away.
Lucia:
I apologize if you took my inquiry as asking you to disprove the existence of a mechanism I had not identified. That’s not what I intended.
For my sins in a previous incarnation I was sentenced in this one to spending much of my career having experts explain aspects of physical systems to me. And it not infrequently happened that an expert would profess agnosticism about many specifics of a complicated system’s internals but could nonetheless state with certainty that the system as a whole would or would not exhibit a certain behavior. Often that certainty was based on, say, information theory or the conservation of energy. The level of certainty I no doubt erroneously read into your statement that “I think it has to be so” raised in my mind the question of whether you had happened upon a proof, based on some similar widely accepted law, that would apply independently of what the other internal processes—which, as you correctly observe, I did not identify—might be. I didn’t expect that you had, but, not foreseeing how my words might get misinterpreted, I didn’t think it would hurt to ask.
Anyway, I don’t know what all the processes are in the causality chain between carbon-dioxide concentration and average surface temperature (or, for that matter, in the causality chain between average surface temperature and carbon-dioxide concentration). On the other hand, I do know that many natural processes, from water’s density to temperature as a function of altitude to the rate my lawn grows, are not monotonic. So, for all I know, some (by me, unknown) processes in one or both of those causality chains may not be, either. Personally, therefore, I see some daylight between the certainty with which one can state (1) that the ultimate effect increased carbon-dioxide concentration has on average surface temperature is to increase it and (2) that global warming is not caused by leprechauns. Since my own level of certainty about the former proposition is not one that I would express with “it has to be so,” your expressing it in that way raised the above-mentioned question in my mind.
But you have answered that question adequately, and, again, I thank you.
DeWitt Payne:
Thank you for your input. I am edified by your discussion of inductive logic. Still, you may find my remarks above relevant.
curious:
I’m not sure I’ve grasped your meaning, but I actually have had occasion previously to familiarize myself with the physical analogies discussed in the page to which you linked. Thanks.
I still prefer this source as proof of model tuning.
http://www.realclimate.org/index.php/archives/2011/11/keystone-xl-game-over/
“The graph is from the NRC report, and is based on simulations with the U. of Victoria climate/carbon model tuned to yield the mid-range IPCC climate sensitivity.”
When will Tamino post about this? He once told me that models aren’t curve fitting, they represent everything we know about the physical world.
I think it would be great to have an article here explaining why a layman can be rationally skeptical about the importance of CO2 for Global Warming; there are many reasons to form such an opinion.
This is not disrespect or an a priori defiance of scientific work; it is just a question of democracy: a world ruled by wise people is the very contrary of our dearest principles.
Lucia, I think the “oven analogy” would be better if it were along the lines of …by putting the roast on the bottom shelf the oven gets warmer.
There is scope for the atmosphere to essentially not heat up due to CO2 because, unlike your oven analogy, there is no additional energy involved. You can’t arbitrarily dismiss the possibility that the atmosphere will behave slightly differently to compensate.
Joe
It is sometimes — in fact frequently– entirely possible to state the system as a whole must have certain specific feature without knowing everything about the system internals.
Tim–
Why do you think the analogy would be better? I think mine was better at illustrating the point I wanted to make which is that simply sometimes we can predict certain things.
Given the BEST team’s recent success in resolving the historical temperature record issue to everybody’s approval, maybe they could now have a go at building a climate model from scratch.
TimTheToolMan:
In a sense, when you increase the capacity of a system to retain heat energy, additional energy is involved.
Since solar radiation is measured in units of energy per unit of time per unit area, you have to factor in the “integration time” of the system (along with effective area of course) to compute the amount of radiative energy that is involved, with a longer time integration constant implying more available energy.
Since CO2 increases the capacity of the atmosphere to retain heat energy, it does require a theory with some novelty to explain how, with the additional thermal energy available, the mean temperature of the system does not increase.
Because at the heart of Joe’s commentary is an implication that some things can be predicted and some cannot. Turn up the oven and it will get hotter is predictable. Put the roast on the bottom shelf and the oven gets hotter is not.
Same goes for our atmosphere. Turn up the sun and the earth will warm up is predictable, however adding CO2 to the atmosphere is not one of those “it must be so” results.
On the issue of “energy leaks” in the models, one of the issues I’ve been trying to get a handle on is whether or not the models are accurately including sudden stratospheric warming events that occur every few winters or so, mainly in the NH, but also in the SH winter as well. Why this might be important is that SSW events are not well understood in the first place, yet do involve large amounts of energy being moved both toward the poles from the tropics and subtropics and upward from the troposphere into the stratosphere. Ultimately a great deal of this energy ends up going into the mesosphere and right out into space. These SSW events might represent enough energy to be consequential in terms of Earth’s energy balance. Does anyone know:
1) Are SSW events represented in the models?
2) If they are, how much energy do they show leaving the system during one of the larger ones? Here, for example, is a chart showing the EP Flux for the big 2009 SSW event:
http://i45.tinypic.com/wpttv.jpg
That’s a lot of energy leaving the system, and considering that it occurs over many days, it would be nice to get a handle on how many Joules it actually is. If anyone has even a remotely credible estimate, I’d love to hear it.
Tim–
My point is some things can be predicted. So I chose an analogy where something could be predicted.
CO2 doesn’t inherently increase the heat capacity of the system. Arguments about increased mass notwithstanding…
“Since CO2 increases the capacity of the atmosphere to retain heat energy, it does require a theory with some novelty to explain how, with the additional thermal energy available, the mean temperature of the system does not increase.”
CO2 is not the main factor in heat energy retention; water vapor is. When heat rises, the cloudiness structure changes along with atmospheric moisture. There is no novelty here: it has been Richard Lindzen’s theory for more than ten years, and recent studies seem to confirm his ideas.
Climatology is not my domain of expertise, but it seems obvious the emphasis has been put on CO2, which could well be a minor driver of warming. There is a strong ideological prejudice behind this emphasis: what happens is caused by man. I do not affirm anything, but I have doubts about the objectivity of these choices.
TimTheToolMan, just the opposite.
CO2 inherently does increase the capacity of the system to retain heat. That part is basic physics.
How much the system actually heats up isn’t basic physics.
Tibor:
You’re making arguments against statements I never made.
What I said was CO2 increases the capacity of the atmosphere to retain heat. I recognize that water vapor feedback plays an important role in putatively amplifying the amount of warming caused by CO2 (it may double the additional heat energy available, all other things held equal, but notice it’s a feedback: it multiplicatively increases the amount of heating generated by the increase in CO2).
If you add CO2 to the system, there is additional heat energy available. The novelty would be in explaining how, in spite of the additional heat energy available, the system does not heat up.
Your “non-novel” invocation of Lindzen and water vapor feedback doesn’t even address my original comment. You realize this, right?
It’s all about the feedbacks, not the forcing. Doubling CO2, all other things held equal, causes about a 1.2°C increase in global mean temperature. If you get a total gain of 2x from the feedbacks, that raises it to 2.5°C or so. You need more than a 3x increase from feedbacks to put us in “dangerous territory”.
Point being if you don’t have the initial 1.2°C forcing from CO2 increase, you never get any change in temperature (the feedbacks can only amplify the effects of changes in forcings of course).
There’s nothing political about starting with this issue and correctly identifying the response of climate to a CO2 increase as a key question, without having to get politically enmeshed over the outcome of such a discussion.
The politicization occurs over arguments about how much amplification of this initial change in forcing you can expect, and how much you need before you start getting catastrophic effects.
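For anyone who wants to see where the 1.2°C figure and the feedback gains above come from, here is a minimal Python sketch. The 5.35 ln(C/C0) forcing approximation and the roughly 3.2 W/m² per K Planck response are my own assumed round numbers; the comment itself only quotes the resulting temperatures.

import numpy as np

# Back-of-the-envelope version of the numbers above. The 5.35*ln(C/C0)
# forcing fit and the ~3.2 W/m^2 per K no-feedback (Planck) response are
# assumed round numbers, not values given in the comment.
dF = 5.35 * np.log(2.0)        # forcing from doubled CO2, about 3.7 W/m^2
planck = 3.2                   # assumed no-feedback response, W/m^2 per K
dT0 = dF / planck              # about 1.2 K before feedbacks
for gain in (1.0, 2.0, 3.0):   # the feedback multipliers discussed above
    print(gain, round(gain * dT0, 2))

With a 2x gain this lands near 2.3 K, in line with the “2.5°C or so” quoted above, and a 3x gain near 3.5 K.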
Carrick (Comment #106064)
“Um, no, the “source impedance” is in series with the source voltage supply and the external load, see this.”
No. The source/discharge pathway from the capacitor (unearthed end) is via the 240 Ω and the source impedance in parallel. Both are pathways to earth – ie fixed voltage, even if one is at 120V.
Suppose the light were removed – open circuit. Do you think the capacitor would take forever to charge?
SteveF,
Yes, the internal capacitor resistance is indeed in series and is probably larger. And could lead to blue smoke – but I think we’re talking DC power here.
“Your “non-novel” invocation of Lindzen and water vapor feedback doesn’t even address my original comment. You realize this, right?”
I think it addresses it, because it induces a negative feedback. I do not say it is useless to study the climate response to CO2; I say the emphasis has been put on it for ideological reasons, not political ones. The idea that human hubris will be punished exists across the whole political spectrum of Western culture.
Nick, you may want to check your engineering terms there.
Source impedance is a term of art: it refers to a resistance in series with the load impedance, with respect to the EMF.
Open circuit??? The leads of the capacitor are still hooked to the terminals of the voltage supply (otherwise it wouldn’t charge).
But let’s do the math.
τ = RC = 0.
That was difficult.
To say it in other words, Arrhenius thought this capacity of CO2 to absorb IR was a blessing; the ideology of his time was dominated by the ineluctability of entropy, and he only found it unfortunate that this capacity was logarithmic. I believe ideology influences our views no matter our efforts to be objective. There is no such thing as a man without prejudice; the problem is that we do not have the traditional scientific tool for rejecting our prejudices: experiment. We use computer simulations instead; it’s a very dangerous drift, allowing positive feedback for prejudices (it’s a joke).
Tibor:
Well only sort of. What I said was it would take a novel theory to not have additional warming from the increased forcing from more CO2.
Lindzen’s Iris theory argues that climate models exaggerate the amount of warming, which is an entirely different argument than there is no additional warming from the added CO2.
Carrick (Comment #106105)
It’s ancient theory – Thevenin (also Norton).
If R1=240Ω, R2=source impedance and R3=capacitor resistance, then the time constant you want is (R3+R1*R2/(R1+R2))*C
“which is an entirely different argument than there is no additional warming from the added CO2.” We agree; I did not want to dismiss your point, I wanted to temper it 🙂
I just fear that this focus on CO2 could be a waste of money and time. The answer is maybe trivial, but why has the rate of CO2 increase been so much less than the rate of fossil fuel consumption since 1960?
And thanks for the Lindzen paper; have a nice evening, it’s getting late here.
Nick, This is the standard diagram for a source impedance.
These resistors are in series.
Wow.
Alrighty then ….. here’s the full circuit diagram. This is the diagram for a resistor + capacitor in parallel, with a voltage source with a finite output impedance.
The impedance of a capacitor is Zc = 1/(2πifC1)
So the load impedance is:
1/ZL = 1/R1 + 2πifC1
Or
ZL = R1/(1 + 2πifC1 R1)
and
Zt = Rs + ZL = Rs + R1/(1 + 2πifC1 R1)
Note for f → 0, this is just Zt = Rs + R1 and for f → ∞, we have Zt → Rs.
Nick, this is classic electric circuit theory. The configuration shown is a classic low-pass RC filter in series with a source impedance.
This is really basic stuff, the sort of stuff you get taught in second-semester physics. I’m really hoping you are just having a brain fart on this one. Otherwise, “know thyself” and thy limits.
Having actually modeled electrical systems (as part of more complex electro-mechanical systems), and confirmed my ability to do so with measurement, I’m “pretty sure” I’m right on this one.
Real data.
Tibor:
Fair enough. Though what I claimed was fairly modest.
No problem. Have a good evening. Getting thunder boomers here… must be the CO2. 😉
Carrick “CO2 inherently does increase the capacity of the system to retain heat.”
That’s not an inherent part of CO2, nor is it basic physics (arguments of increased atmospheric mass notwithstanding again). CO2 doesn’t bring energy with it. You may argue that more CO2 means more energy is captured (as per my point 1 above), but the counter argument is that more energy is radiated away too.
Arguments that more CO2 means heat is retained in the atmosphere longer entirely rely on the rest of the atmosphere not responding to minimise that effect or increased DLR causing increased evaporation/clouds causing increased albedo.
All arguments for “definitely some heating” in the context of the atmospheric system rely on more than just CO2 being present.
Now I’m not actually saying the atmosphere won’t heat with more CO2, but what I am saying is that it’s not a given. Turn up the sun, then it’d be a given.
Carrick (Comment #106099)
“CO2 inherently does increase the capacity of the system to retain heat. That part is basic physics.
How much the system actually heats up isn’t basic physics.”
Technically, CO2 does not increase the system’s capacity to retain heat, i.e., delta Cp is negligible. CO2 increases the system’s capacity to absorb longwave radiation. But, since the incoming radiation is mostly shortwave and the outgoing is longwave, more outgoing is absorbed than incoming. Hence a net increase in energy. I’m sure that’s what you actually meant, but some may not realize that.
TimTheToolMan (Comment #106116)
The radiative physics is more subtle than that. The CO2 absorbs outgoing LW radiation, leading to a more energized state. At typical atmospheric pressures/densities the molecules are then more likely to transfer that energy to other surrounding molecules through conduction (molecular collisions) than to re-radiate it. At lower densities and pressures that would not be so. Because of this the surrounding molecules, mostly O2 and N2, heat up. As the atmosphere heats, the molecules either conduct heat until they reach equilibrium, or they radiate heat in all directions equally.
Radiation depends on the temperature of the radiating body only, and is independent of any receiving body.
TimTheToolMan:
You’re mixing up what I’m saying now. I wasn’t discussing CO2 in isolation, I was referring to the properties of an atmosphere with CO2 in it, and yes it is basic (radiative) physics. There just is nothing subtle about it.
Here you’re mixing up concepts of dynamic system behavior with basic properties of a system. At the risk of oversimplifying it, CO2 absorbs LWR emitted by the surface of the Earth, which has been heated by SWR from the Sun.
It is that fact that makes it behave like a GHG, and increases the total thermal energy available to the system by increasing the system’s integration time. Again this is pretty basic.
What I also said that wasn’t basic is how the system responds to an increased forcing associated with increased CO2 (it is in fact quite complex). But a more accurate way of putting what you said is “the idea that CO2 doesn’t increase temperature relies on the rest of the atmosphere responding to minimise that effect or increased DLR causing increased evaporation/clouds causing increased albedo.”
In other words, for this to happen requires a “conspiracy of nature”. These usually only show up when there’s some fundamental law creeping in. I don’t see any that apply here.
John Vetterling:
I think you’re mixing up concepts here.
I wasn’t discussing local properties of gases, or I would have stated I was using those. Instead I very carefully used the verbiage “capacity of the system to store heat energy.”
More technically, it increases the ability of the system to absorb and retain heat energy. If you don’t want to call that “increasing the capacity of the system to store heat energy”, be my guest, but it seems like an apt description to me, as long as one is careful to retain the term “system” in the descriptive text.
Carrick
We are just using capacity in different ways. To me, the capacity of the system does not change materially, while the process does. But the distinction is mostly semantic.
John, I agree. I was trying to not get bogged down in semantics, so perhaps I should have avoided using the word capacity entirely.
The way I categorize systems, I lump in the processes as part of “system behavior”.
If you just knew how many epistemological wars over terminology describing systems I’ve had the misfortune of participating in, you’d really understand my reluctance to engage here. 😐 I suspect you’ve been exposed to enough of your own to understand anyway.
Carrick (Comment #106113)
OK, with your circuit diagram, put a charge on C. What is the discharge pathway? Through Rs and Rl in parallel.
Total impedance to earth Zt:
1/Zt = 1/Rs + 1/Rl + iωC
Zero at i/ω= C*Rl*Rs/(Rl+Rs) … the time constant
Since 1/Rl is small, it scarcely contributes.
Carrick says “In other words, for this to happen requires a “conspiracy of nature”. These usually only show up when there’s some fundamental law creeping in. I don’t see any that apply here.”
But it does, Carrick. One of the fundamental effects in nature is that a temperature gradient (for example) will be minimised, and that can happen by the AGW-expected increased altitude of radiation leading to lower heating, OR by increased convection/evaporation/albedo etc., or both, obviously.
I know we’re talking past each other and basically fundamentally agree, but you did specifically say “CO2 inherently does increase the capacity of the system to retain heat.” and an important part of that and this discussion is “system”.
The system includes all sorts of mechanisms that will try to minimise that heating: convection, evaporation and so on. It’s not basic physics to look at a single component of that system and say it will dominate.
Nick, we’re examining the response of a system to the EMF from source, not what happens if the EMF gets replaced by a short.
The diagram I gave was for the system specified by bugs (the turn on of the system with an external voltage supply), and one you apparently understood until you didn’t, since you said:
Note “charge” not “discharge”.
You’ve completely changed the circuit when you remove the voltage source and replaced it with a short.
I suppose you realize this though.
Obviously the system will look differently with the source turned off, but in an ordinary circuit, you will remove the source impedance as well as the EMF if you turn off the source!
Here are the Kirchhoff loop equations (see this figure). Note I added a switch to illustrate what happens when you switch the power supply on/off. I also flipped the capacitor and resistor in the load (conceptually it’s easier for me to think about), but you’re free to do this because it retains the original network topology.
Anyway, here are the equations from Kirchhoff’s Laws:
i1 Rs + (i1 – i2) R1 == Vs ⇒ i1 (Rs + R1) == Vs + i2 R1
i1 Rs + i2 Zc == Vs
[You can use (i2 – i1) R1 + i2 Zc == 0, if you prefer. Same answer of course.]
So we have:
i2 == Vs/Zc - i1 Rs/Zc
So:
i1 (Rs + R1) == Vs + Vs R1/Zc - i1 R1 Rs/Zc
i1 [Rs (1 + R1/Zc) + R1] = Vs [1 + R1/Zc]
i1 [Rs + R1/(1 + R1/Zc)] = Vs
Or
i1 [Rs + R1 Zc/(R1 + Zc)] = Vs
Or
i1 Zt = Vs
where Zt = Rs + R1 Zc/(R1 + Zc) = Rs + R1/(1 + 2πifC1 R1), where as before Zc = 1/(2πifC1).
Note that Rs is in series with ZL = R1 Zc/(R1 + Zc).
Also, note that if you turn off the switch S1, you still get the same time constant. It’s only if you red-ink the original circuit and replace the EMF with a short (a damaged system) that you get radically different behavior.
Willing to admit your error yet?
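Purely as a check of the loop-equation algebra above, here is a short sympy sketch using the same symbol names; it says nothing about which time constant governs the charging, which is argued further below.

import sympy as sp

# Check only the loop-equation algebra above; which time constant governs
# the capacitor charging is a separate question argued later in the thread.
i1, i2, Rs, R1, Zc, Vs = sp.symbols('i1 i2 Rs R1 Zc Vs')
sol = sp.solve([sp.Eq(i1*Rs + (i1 - i2)*R1, Vs),
                sp.Eq(i1*Rs + i2*Zc, Vs)],
               [i1, i2], dict=True)[0]
Zt = sp.simplify(Vs / sol[i1])
print(sp.simplify(Zt - (Rs + R1*Zc/(R1 + Zc))))   # prints 0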
TimTheToolMan:
Fundamental as I would use it would refer to conservation of energy, or the Pauli Principle. This is more of a “rule of thumb” and again refers to dynamical system behavior not to constitutive behavior.
When you think of the CO2 absorbing LWR from the ground this is the sense in which I use “basic physics”. Issues related to how a complex system responds to that driving are not to me “basic physics”. They are d@mned complicated!
Anyway, I think this is semantics again.
Carrick writes “When you think of the CO2 absorbing LWR from the ground this is the sense in which I use “basic physics”. Issues related to how a complex system responds to that driving are not to me “basic physics”. They are d@mned complicated!”
I completely agree. I get the feeling many (forum) AGW supporters understand the first part and think no more on the matter.
Carrick (Comment #106125)
“Willing to admit your error yet?”
No, not at all.
Here it is in ode form. Let Vc be the instantaneous voltage on C, Q the charge.
Then C dVc/dt = dQ/dt = nett current = -Vc/Rl + (120-Vc)/Rs
= -(Vc-V0)(1/Rl+1/Rs) where V0 = constant = 120*Rl/(Rs+Rl)
V0 is the dc equilibrium value of Vc
So d(Vc-V0)/dt = -(Vc-V0)/τ
where τ = C*Rl*Rs/(Rl+Rs)
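As a quick numerical cross-check of the ODE as written, here is a small forward-Euler sketch with illustrative component values (the 120 V and 240 Ω from the thread, plus an assumed 10 Ω source resistance and 47 nF capacitor):

import numpy as np

# Forward-Euler integration of the ODE above, with illustrative values:
# the 120 V and 240 ohm load from the thread plus an assumed 10 ohm
# source resistance and 47 nF capacitor.
Vs, Rs, Rl, C = 120.0, 10.0, 240.0, 47e-9
tau = C * Rl * Rs / (Rl + Rs)        # analytic time constant from the comment
V0 = Vs * Rl / (Rs + Rl)             # DC equilibrium voltage on the capacitor

dt, t, Vc = tau / 1000.0, 0.0, 0.0
target = V0 * (1.0 - np.exp(-1.0))   # level reached after one time constant
while Vc < target:
    Vc += dt * (-Vc / Rl + (Vs - Vc) / Rs) / C
    t += dt
print(t, tau)                        # the two agree to a fraction of a percent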
Nick, start with Kirchhoff’s laws, not some nonsensical made-up “Nick’s Universe” version.
Do I need to review these for you too?
In differential equation form, they are:
q1′(t) R1 + (q2′(t)-q1′(t)) R2 = Vs(t)
q1′(t) Rs + q2(t)/C1 = Vs(t)
Replace q1′(t) with i1, q2′(t) with i2, q2(t) with i2/(iω), Vs(t) with Vs, and then let Zc = 1/(iωC1) and you will arrive at the equations I presented above.
I’d set up the physical circuit and measure the response, but you’d probably accuse me of cheating or something. Anything to avoid admitting you have a conceptual error.
My advice: Start with the fundamentals.
Kirchoff’s laws (law of current and law of voltage) are the fundamental relationships here, since they govern conservation of energy and conservation of charge. I skipped to the loop equation version, but if you want it broken out to that most basic level, I can do that for you too:
i1 Rs + i3 R1 = Vs
i1 Rs + i2 Zc = Vs
i1 = i2 + i3
i3 = i1 – i2
so
i1 Rs + (i1 – i2) R1 = Vs
i1 Rs + i2 Zc = Vs
These follow directly from Kirchoff’s Laws and hence conserve energy and charge. For this circuit what you have written does not.
What you have written is totally wrong for this circuit. It applies to a broken circuit where the EMF is replaced by a short.
Here’s another opinion.
See page 13.
In this circuit the capacitor is replaced with a resistor, but conceptually it is the same. He writes r = R1 + R2 // R3 (r = total resistance, R2 // R3 is value of R2 and R3 in parallel).
Apparently it’s a conspiracy. Everybody is trying to keep Nick down!
He also explains how to correctly apply Thevenin’s (Norton’s) theorem. My suggestion: Avoid short cuts until you are sure you have the fundamentals right.
And if there is a question, go back to the fundamentals, don’t just jump up and down and insist your “short-cut” was right, prove it was right by starting with the fundamentals. Nick will not be able to do this, because he is conceptualizing the problem wrong, and he will merely demonstrate himself in error.
Carrick
“What you have written is totally wrong for this circuit. It applies to a broken circuit where the EMF is replaced by a short.”
No, it is simply the equation for decay of a capacitor in disequilibrium, however caused. It has nothing to do with shorts.
There are a lot of undefined symbols in your Kirchhoff equations. But I think one of your currents has the wrong sign.
Carrick,
No, your p 13 formula is exactly the same as mine back at #106108, where I had (per SteveF) a resistance in series with the capacitor. What he’s saying is that his r is the effective source resistance. Put C across his terminals, and the time constant is rC, which is, if you check, exactly my #106108 formula, with a perturbation of numbering.
Nick:
Lots of undefined symbols? This is a bizarre claim given that the only new symbol is i3 and you’ve defined none of your own symbols.
Here’s the updated figure with that one new symbol.
I presume you know how to mentally relate q1 to i1 and q2 to i2.
Let me know if you still think there is a sign error.
This is utterly mushed up thinking. The decay depends on the load the capacitor sees. If you open the switch S1, it only sees R2. If you close S1, it doesn’t discharge, it charges. The only way you get your equation is by replacing the EMF by a short.
So you don’t believe me, and you don’t believe another source; prove me wrong starting with Kirchhoff’s laws, not with the initial step “then a miracle happens and Nick’s claim is true!”
Nick:
Bloody hell Nick, advice: stay away from electrical circuit design. You’re really awful at reading schematics!
It’s exactly the opposite of yours. The electromotive force is on the left as with my figure, R3 is equivalent to my capacitor in parallel with R2.
I’ll note once again (and this is becoming comical for me) that you keep talking about capacitors discharging, when what we were discussing was the charging of a capacitor by an external voltage source.
The discharge of the capacitor in the way that you keep insisting is correct only happens if you replace the EMF with a short. If you do the discharge correctly, first turn on the switch at S1. Let it fully charge (to within some “epsilon”), then open the switch.
Guess what? The time constant is still τ = R1 C1. The source impedance plays no role in the discharge in a normal circuit.
Carrick,
Have you checked that p13 equivalence? His R3 is my R1, his R1 my R2, and his R2 is my R3.
Carrick,
There’s no difference between charge and discharge when it comes to the time constant. It’s the same ODE.
What the p13 guy is saying is that his circuit behaves exactly as the Thevenin equivalent – a voltage source in series with r. Connect a capacitor across that and the circuit is exactly what we have. And the time constant is rC. What else could it be?
Nick, instead of arguing over this, how about writing out Kirchhoff’s laws as I suggested?
Start with the circuit that was originally proposed, which is an EMF on the left hand side, with a resistance and a capacitor in series.
See what you get. It takes five minutes.
Nick:
If the switch is open, it’s R1 C. If the switch is closed and you are charging it from the voltage source on the left, it’s still R1 C. It’s the impedance seen by the source that governs the charging of the capacitor, not the impedance seen by the capacitor.
Seriously, this is how circuits work. Take five minutes out of your life. Learn something new.
By the way, I’m willing to set up a circuit on Monday and measure this.
I’d propose Rs = 10 Ohm, R2 = 500 MOhm and Cs = 47 nF. I’ll use a battery as a source (e.g., 9V battery).
I have these values lying around.
I propose the loser dress in a pink tutu, have themselves photographed, and post it on this blog (ok, I’ll accept the loser just stating they were wrong).
Carrick,
My equation
C dVc/dt = dQ/dt = nett current = -Vc/Rl +(120-Vc)/Rs
is exactly a Kirchhoff current balance equation.
My equations can be taken to simply time the charging of the capacitor after the 120V is switched on, if you want to see it that way. I believe that was the original problem.
“I propose the loser dress in a pink tutu”
I don’t have one. I could shave my moustache 🙂
Nick, I was asking you to start with Kirchhoff’s law of current and law of voltage, not jump to your “Nick’s miracle just occurred” step.
Obviously I don’t find your argument acceptable because in my view you start by assuming the conclusion.
Nick:
Nah that’d be cruel.
What is your time constant for the circuit I’ve proposed?
Mine is 9.4 seconds.
Nick writes “I could shave my moustache”
…and start afresh for Movember. I know you’re an Aussie and it’s a big thing down here 😉
Carrick
Yes, my time constant is 0.47 μs – may be hard to measure. But it’s a contrast.
To be specific, when you switch on the battery in series with a 10Ω resistor, applied to a 500MΩ resistor in parallel with a 47 nF capacitor, the voltage across the capacitor will rise as 9-9*exp(-t/τ), τ=0.47 μs. Leave out the 500MΩ and you won’t be able to measure the difference.
That assumes the battery has internal resistance much less than 10Ω. You should check that the 9V is maintained.
TTTM,
Yes, Movember is big here. I’ve had my mo++ for 40 years, so I haven’t had a chance to impress there.
Carrick,
Make life easier – use a 10 kΩ resistor – time constant 0.47 millisec
I’ll use 1 MΩ, that’ll make it easy to measure if you’re right, which I’m thinking you are now.
I’m thinking that we had been talking past each other to some extent, which was my fault. I’ve been thinking of the amplitude/phase of the response with frequency, considering the input current “i1” into the system, while you’re considering the charge build-up on the capacitor.
I know that’s what you said from the start, but I had my head on other things, as in the transfer function between output and input. Really, that’s just giving the pole-zero structure of the circuit (yes they are related and I do know how to derive the time domain response—impulse response function—from the transfer function, but not at 3 AM my time).
You have to look at the solution of the ODEs if you want to work out the time constants…. and on reflection the Thevenin equivalent circuit does suggest you’re right on the time constants being really short.
It’s fun being wrong once in a while ’cause that means I learn something new. I figure I’m wrong, but I’ll go ahead and solve the ODE in the morning, and for fun do the little experiment on Monday.
Carrick,
I’ve found the Thevenin/Norton ideas of reducing to current/voltage sources with source impedance very useful in all sorts of contexts – for example, radiant heat transfer.
it was just an analogy.
Nick,
Comment 106108
I believe that R3 should be outside the brackets as the ESR of the capacitor is in series with the parallel resistance of R1 and R2.
Like you, I have always found the simplification that is possible using Thevenin/Norton equivalents is invaluable.
Carrick, Nick,
Too much discussion of simple RC circuit behavior. The time constant for R and C in series is just R*C. Adding a resistor in parallel with the capacitor reduces the voltage which the capacitor reaches at equilibrium, because the resistor pair in series is just a voltage divider. No need to argue about behaviors which are perfectly known and can be confirmed by measurement.
SteveF, it’s a question of which R to use, and actually Nick ended up being right. The “R” to use is Rs // R1.
I got stuck on what happened to the current (charge) leaving the EMF, which sees Rs + R1 // Zc. What I did, as far as I went, was correct; I just didn’t finish the problem. A case of confirmation bias at work there too.
Here’s the formula for q2(t), the charge on the capacitor.
link.
Carrick (106156),
I do all the circuit design for my company’s products. ‘Nuff said.
Well the only point of discussion was what resistance to use in the RC constant. You skipped over the only issue in debate in your comment, hence my comment.
Carrick 106158,
In this figure: http://dl.dropbox.com/u/4520911/Electronics/RC-Current-Kirchoff.jpg it is Rs, of course; time constant is Rs*Cl. And the ultimate DC voltage on the capacitor is Vs * Rl/(Rl + Rs).
Interestingly the time constant is actually (Rs || R1) * C1.
The charging time constant is (Rs//R1)*C1. The discharge time constant is R1*C1.
The applied voltage has no effect on the time constant which is determined only by the value of the capacitor and the resistance it ‘sees’ looking back into the circuit.
I don’t feel quite so bad now.
Anyway, Nick was correct and τ = (Rs || R1) * C1.
Sounds like I should go ahead with the experiment.
I predict this formula to be correct.
If you open S1, I agree the discharge time constant is R1*C1.
Here’s the figure for anybody who wants to play “What’s my time constant.”
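For the proposed bench test, here is a quick sketch of what the charging and discharging time constants come out to with the component values mentioned in the thread:

# Time constants for the proposed bench test, using the component values
# mentioned in the thread (10 ohm, 10 kohm and 1 Mohm source resistors,
# a 500 Mohm load resistor and a 47 nF capacitor).
def parallel(a, b):
    # equivalent resistance of two resistors in parallel
    return a * b / (a + b)

R_load, C1 = 500e6, 47e-9
for Rs in (10.0, 10e3, 1e6):
    tau_charge = parallel(Rs, R_load) * C1   # source connected
    tau_discharge = R_load * C1              # source switched out (S1 open)
    print(Rs, tau_charge, tau_discharge)

The first two charging values come out to about 0.47 µs and 0.47 ms, matching the figures quoted above; the 1 MΩ case gives roughly 47 ms, against a 23.5 s discharge through the 500 MΩ alone.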
2πi?
Whoops, no 2 pi i. Frequency domain attack (LaTeX transliteration error).
Better?
Carrick, RobWansbeck,
The charge rate for the capacitor deviates only very slightly from a simple exponential approach, with a time constant of C*Rs, to a final value of Vs * (Rl/(Rl + Rs)), so long as Rs is relatively small compared to Rl (Rs less than a few % of Rl). This is the situation I thought we were talking about.
Here is the difference between the true charge rate and the rate calculated from a simple exponential approach for Rs = 4% of Rl.
http://i45.tinypic.com/np3src.png
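For anyone without the plot, here is a rough Python reconstruction of the comparison SteveF describes (not his actual figure; the component values are arbitrary apart from Rs being 4% of Rl):

import numpy as np

# Rough reconstruction of the comparison SteveF describes (not his plot;
# the component values are arbitrary apart from Rs being 4% of Rl).
Rl, C, Vs = 240.0, 1e-6, 120.0
Rs = 0.04 * Rl

V_final = Vs * Rl / (Rl + Rs)          # voltage-divider end point
tau_exact = C * Rs * Rl / (Rs + Rl)    # exact charging time constant
tau_approx = C * Rs                    # the simple approximation

t = np.linspace(0.0, 5.0 * tau_approx, 1000)
v_exact = V_final * (1.0 - np.exp(-t / tau_exact))
v_approx = V_final * (1.0 - np.exp(-t / tau_approx))
print(np.max(np.abs(v_exact - v_approx)) / V_final)   # about 1.5% of the final value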
SteveF, we were originally talking about output impedance here, which can be considered to be very small compared to R1, and for that, your approximation was completely appropriate.
Nick and I started talking about how to measure it, and we quickly decided we’d need to make Rs a lot bigger, if we were to measure the actual rise time.
I have no objections to using approximations but they should be made clear especially when people are trying to understand a calculation. They should also be consistent. The argument applied to the time constant also applies to the final capacitor voltage yet it is calculated in full.
RobWansbeck,
“The argument applied to the time constant also applies to the final capacitor voltage yet it is calculated in full.”
I do not understand what you mean. The final capacitor voltage is set by the values for Rs and Rl.
The time constant is set by the values of Rs and R1 and C.
The final capacitor voltage is set by the values of Rs and R1 and V.
If R1 >> Rs then Rs//R1 ~ Rs and R1/(R1 + Rs) ~ 1
If you accept the approximation then R1 can be ignored in both cases.
I would like to thank Carrick, Nick, RobWansbeck, SteveF, bugs et al for illustrating how the linkage between language, math, and physics is so difficult to keep straight. Even when it is just an analogy.
I agree, Craig; it is challenging to convert descriptive models into mathematical ones and have them make sense. That’s the trouble with verbal analogies: you can pretty much make them say what you want, as long as you never have to write down analytic formulas.
RobWansbeck,
“If you accept the approximation then R1 can be ignored in both cases.”
The error in the final voltage can be relatively large if you don’t consider the Rl/(Rl +Rs) factor, while at the same time the influence on the rate of approach to that final voltage is relatively small. I sure wouldn’t ignore the Rl/(Rl+Rs) factor, but I probably would ignore the deviation of the approximation (simple exponential approach with rate constant RsC) from the true approach function.
Re: AJ (Comment #105959)
November 2nd, 2012 at 8:03 am
You wrote: “I think David Young (Comment #105940) and I are addressing the same issue.”
Just for the record, I do not think that you are addressing the same issue, and the difference here is very important.
David Young expressed the view that it was “a little disconcerting” that there is a lack of discrete conservation of energy. (I think that this is masterly understatement.) With respect, you are talking about something quite different.
Suppose I consider, as an example, modelling a highly unstable fluid displacement process. I have some choices to make when I formulate the governing equations for this problem; I then have some choices to make about how I discretize the problem in space and in time, and how I manage boundary values; finally I have some choices to make about how I solve the resulting system of discretized equations.
If I make bad choices in the third-step, and use a non-robust or inefficient solution algorithm, then I will see very slow convergence at each time-step – or I may not be able to find a solution at all. But let’s assume here that I make a good choice of solution algorithm i.e. I can get an accurate solution of the simultaneous equations I have formulated.
The choices I make at the second stage – discretization – must also include the numerical formulation. It is possible to discretize in time using an explicit, a semi-implicit or a fully implicit formulation (with increasing overhead). The choice is typically problem-dependent. If I make bad choices at the second stage, then I will have unacceptable truncation errors. In unstable systems, such as we are discussing here, these errors will propagate in time and space. However, these errors will manifest themselves as errors in the predicted DISTRIBUTION of the system properties being modelled. These are the types of errors which I think you are discussing.
David Young on the other hand is talking about the choices made at the first step – the governing equations and their numerical expression. In the case of my fluid transport example, my primary governing equations would probably be in the form of mass conservation equations. I would then ensure that the numerical expression of these governing equations also guaranteed mass conservation. Broadly speaking, this means that if you add up all of the simultaneous equations, you would be left with an expression for the total system which looks like:- mass in LESS mass out = mass accumulation
This would NOT mean that my solution would be free of truncation errors. However, it would mean that, despite truncation errors giving me some error in the distributed properties of the system, I would still have discrete mass conservation. This latter then works to bound the space of possible solutions.
Without such bounds, it is difficult or impossible to say whether a solution is driven by numerical “noise” or something else.
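Paul_K’s point about discrete conservation bounding the solution space can be seen in a toy example. The sketch below is my own 1-D upwind advection scheme in Python, nothing from a GCM:

import numpy as np

# Toy 1-D upwind advection in flux form. Because each cell's update is a
# difference of face fluxes, the interior fluxes cancel in pairs when the
# cells are summed, so the total "mass" is conserved to round-off even
# though the distribution itself suffers large truncation (diffusion) errors.
nx, nt = 200, 500
dx, u = 1.0, 1.0
dt = 0.5 * dx / u                                    # respects the CFL limit
x = np.arange(nx) * dx
q = np.exp(-0.5 * ((x - 50.0) / 5.0) ** 2)           # initial tracer blob
mass0 = q.sum() * dx

for _ in range(nt):
    flux = u * q                                     # upwind face flux, periodic domain
    q = q - (dt / dx) * (flux - np.roll(flux, 1))    # flux-form update

print(q.sum() * dx - mass0)                          # ~1e-12

The total is conserved to round-off even though the distribution is badly smeared by truncation error, which is exactly the distinction drawn above.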
Lucia wrote some time ago “Sometimes what one supposedly doesn’t understand is the difference between curve-fitting and climate models.”
If you consider an extreme curve fit where an arbitrary function is chosen to best fit the curve over the observation range and has no bearing on the physics behind the curve then that’s one thing and this is what (I believe) you would consider a curve fit.
The other extreme is a “precise” equation that describes the physical process. Let’s use a simple one: F = ma.
This is perfectly physics based and will work fine for some instances. Think of these instances as “weather”.
Now consider a starship where we used the perfectly good physics to determine the force needed to accelerate the starship to 0.999c.
Is that going to work? No, because F = ma is more of a curve fit to the physics when we’re considering relativistic situations. Think of this as “climate”.
That’s the way I see it anyway…
TimTheToolMan (Comment #106185)
“If you consider an extreme curve fit where an arbitrary function is chosen to best fit the curve over the observation range and has no bearing on the physics behind the curve then that’s one thing and this is what (I believe) you would consider a curve fit.”
While I have not been following this thread closely, I have skimmed through the linked paper that is the subject of this conversation. TTTM, your comment touched on some thoughts I have had which deal more with basic approaches in climate modeling and where those approaches lead. I am not so sure I follow your analogies here.
My thoughts are more specifically that I see that, in order to provide some foundation and credibility for the modelers’ climate models and their capability to predict future climates, the models must be able to represent climate in hindcasting. The tuning that is required, and the amount of that tuning, I would think is correlated with the modelers’ inability to represent past climate with straight physics. I would suppose that the parameters that are tuned, and how well those parameters can be represented by straight physics and tested independently, could represent a gray area with regard to over-fitting a model. The authors of the linked paper, “Tuning the climate of a global model”, would appear in my reading to admit to some necessarily arbitrary tuning. The authors show how the models can be tuned to fit the historical temperature records, but, of course, it is this tuning that detracts from the models’ capability to predict future temperatures.
I think I can appreciate the limitations that the climate modelers face and their rationale for the approaches they take, but what I do not understand is what the end game is. Can tuning become like an addiction where, instead of facing the limitations head on and attempting to resolve those problems more directly, the modelers, in their quest to provide information to policy makers and organizations like the IPCC, overuse the expediency of tuning? Does the end game produce models that do not require tuning in order to accurately hindcast, or is it more limited to perhaps producing parameters that are tuned based on a more complete physical understanding of those parameters?
More importantly do the modelers convey in a statistically comprehensible manner the uncertainty that tuning represents for the models results in predicting future climate?
Paul_K #106184,
I wonder if we could come up with a worse way to develop a climate simulator than:
1) Start with a complex, highly parameterized model meant to predict the weather over the next few days.
2) Try to back into conservation of mass etc.
Well, sure we could, but it’s fun to say.
Re: TimTheToolMan (Comment #106185)
Analogies are curve fits. 😉
Re: BillC (Comment #106187)
“1) Start with a complex, highly parameterized model meant to predict the weather over the next few days.”
The highly parameterized part is the only thing that makes it even remotely feasible, so you can’t complain about that too much.
Lucia,
A fascinating paper, and a great article.
I am left very puzzled by many elements in the paper. It has not improved my confidence in AOGCM’s.
One large question in my mind is the accommodation of the energy leakage. (I have a larger question about why it should be there in the first place, but I will leave that for now!) The authors seem happy to assume that it is a more-or-less constant value, although as far as I can see, it is only estimated at pseudo-steady-state after the pre-industrial control runs. It would make more sense to me, given the attributed cause, if it were a function of total flux imbalance and/or surface temperature.
Let’s assume however that there is a more-or-less constant leakage. Normally the estimation of ECS from AOGCMs entails making a plot of net flux vs average surface temperature from a long-term run of doubled CO2, and then extrapolating a line through the late datapoints to the point of interception on the temperature axis (net flux = 0).
However, if it is known that a particular model has a steady-state balance after 1000 years of pre-industrial control which corresponds to a positive net flux value of 1 W/m^2, say, then why is the ECS for that model not computed as the intercept with a horizontal line drawn at net flux = 1, and data after that point treated as inconsequential fluctuation or model error?
Oliver,
Sure. But won’t some of the parameterizations used to predict weather actually wind up harming the model’s ability to predict long term climate?
I don’t know, but I would guess so.
Paul_K does an excellent job explaining things. I agree with his post. It is generally accepted that discrete conservation improves accuracy a lot.
Re: David Young (Comment #106192)
November 5th, 2012 at 11:34 am
Thanks, David. I have seen from some comments you have made here and elsewhere that you are no novice yourself when it comes to numerical methods.
One of the things that went through my head when I read the Mauritsen et al. paper was just how naive the analysis was – and I do not mean to be unduly critical of a highly informative paper, which definitely should have been published.
A number of papers published in climate science have been justifiably criticised for not taking into account good, existing, developed knowledge in statistical methodology. It strikes me that the same criticism can be leveled at the “gifted amateur” approach adopted for the development of numerical models in CS.
A number of different industries have had to develop (over decades and hundreds of thousands of manhours) large-scale simulators to deal with complex coupled non-linear equations for dynamic processes in heterogeneous systems with variable boundary conditions. While not all of this developed knowledge is directly applicable, some of it can be lifted off-the-shelf or bought in from genuine experts.
As an example, the reason given for the energy leakage in the tested model:-
“We investigated the leakage of energy in MPI-ESM-LR of about 0.5 W m-2 and found that it arises for the most part from mismatching grids and coastlines between the atmosphere and ocean model components.”
Nested irregular grid definition? Dynamic local grid-refinement? These things have been implemented successfully for decades in commercial simulators. However, they may be limited in value if the governing equations do not contain discrete conservation of mass or energy!
I am sure that even after vacuuming up all the available knowledge, AOGCMs would still present some novel problems, but it seems to me incredible that after the massive collective man-time investment on their development, they are still subject to problems which were solved or at least evaluated by other disciplines and industries years ago. And that they fail to apply the rigorous testing procedures which have “protected” those other markets from duff products.
Re: Paul_K (Nov 5 14:54),
Aren’t most AOGCM’s academic exercises basically written by graduate students?
Paul_K #106195
“Nested irregular grid definition? Dynamic local grid-refinement?”
Opportunities are limited. The methods are pretty much explicit in time. So they have to resolve the acoustic wave equation horizontally. Typically the elements are about 100 km and the time step 1800 s; Courant number 1800*334/100000, which is about 6. Hard to reduce grid size.
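Just to make the arithmetic explicit (these are simply the numbers quoted in the comment):

# Courant number for the acoustic wave, using the figures quoted above
c = 334.0          # sound speed, m/s
dt = 1800.0        # time step, s
dx = 100e3         # horizontal grid spacing, m
print(c * dt / dx) # about 6; halving the grid spacing doubles it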
DeWitt,
“Aren’t most AOGCM’s academic exercises basically written by graduate students?”
No. They are outgrowths of the programs for numerical weather forecasting – a big coordinated European effort of the 1970’s. They have been built on over years, by institutions. I’m sure some of the work was done by grad students.
De Witt,
It would explain a lot.
Explicit methods do not conserve mass or energy exactly. An implicit method can enforce them by forcing the appropriate integrals to zero.
But explicit methods can calculate the discrepancy and correct. Mass conservation of course is not just total mass, but water etc. Energy can be conserved in the process of solving vertical transport. The program CAM3 describes that here. Mass and thermal energy conservation are enforced by observation and adjustment. The “fixers” are described here and here.
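To give the flavour of such a “fixer” (my own toy Python sketch, not the CAM code): after a step that is not exactly conservative, measure the change in the global integral and rescale the field to restore it.

import numpy as np

# Toy multiplicative "fixer": measure the conservation error in a global
# integral after a (deliberately non-conservative) step, then rescale the
# field to restore the original total.
rng = np.random.default_rng(0)
q = rng.random((64, 128))                  # some column-integrated quantity
w = rng.random((64, 128))
w /= w.sum()                               # hypothetical area weights

total_before = (w * q).sum()
q_step = q * (1.0 + 1e-3 * rng.standard_normal(q.shape))   # step with a small leak
total_after = (w * q_step).sum()

q_fixed = q_step * (total_before / total_after)
print((w * q_fixed).sum() - total_before)  # ~0 to round-off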
Nick writes “Energy can be conserved in the process of solving vertical transport.”
From the link for horizontal diffusion “However, Dn is modified to ensure that the CFL criterion is not violated in selected upper levels of the model. If the maximum wind speed in any of these upper levels is sufficiently large, Dn=1000 then in that level for all n>nc, where nc=adt/MAX|V|. This condition is applied whenever the wind speed is large enough that nc<K, the truncation parameter in (3.115), and temporarily reduces the effective resolution of the model in the affected levels."
Sorry about the lack of LaTeX, and perhaps I’m misinterpreting what’s going on here, but… The problem isn’t so much that there is a correction; the problem is that there needs to be one. If physics-based, how exactly does the wind speed get so large that it needs to be artificially truncated?
To me, somewhat physics-based doesn’t mean physics-based. Over millions of iterations, data moving into combinations of values never observed, and therefore poorly tuned when parameterised, means a curve fit.
TTTM,
They aren’t truncating the wind speed, but the resolution. At that stage, CAM 3 is working in a spectral domain – ie in Fourier coefficients of a spherical harmonics expansion. n is the index of the harmonic. So what they do is to truncate the higher coefficients in the expansion. Dividing by 1000 is effectively truncation. This, as they say, reduces the resolution. It is equivalent to expanding the spatial grid, which would also meet the CFL condition.
So putting it back into English, are you saying that at certain windspeeds, the associated energies completely cross a grid box in the timeslice? Hence to “fix” this they need to make the grid boxes bigger to make this a non-issue?
TTTM,
That’s the non-spectral equivalent, yes. And basically for that reason.
The CFL condition always limits the spatial resolution for a given timestep. If you can make the limitation flexible, doing better at lower windspeed, you’re gaining.
re:Nick Stokes (Comment #106201)
November 5th, 2012 at 6:33 pm
Nick,
Thanks for the references to the CAM3 description.
You wrote:-
“Explicit methods do not conserve mass or energy exactly. An implicit method can enforce them by forcing the appropriate integrals to zero.”
This makes me think we are talking at cross purposes. I can formulate an explicit solution which will conserve mass or energy, and I can formulate an implicit solution which does not.
The CAM3 formulation to which you referred me is based on a one-step predictor-corrector method which should approximately conserve energy and mass. The predictor step is largely explicit, and is NOT formulated to conserve energy – but it could be. The predictor step IS however formulated to enforce on each column (integral) a requirement that all of the internal energy components sum to zero. [It is worth looking at this, because this is the sort of approach that you might use to include full energy conservation at the predictor step.] The corrector step then looks at the difference between the energy gained in the atmosphere at that point (using the approximate solutions from the predictor step) and the energy which should have been gained, and calculates a constant value of temperature adjustment which will reconcile the two values. This latter step assumes that the distribution of pressure and temperature from the predictor step is correct, and makes an absolute adjustment to balance the books.
I am not sure that I would award any prizes for the methodology described here, but it should work numerically. But this is an atmosphere model. Now, do you want to speculate on what happens when you try to couple this with an ocean model, and have to calculate the surface fluxes to atmosphere rather than have them as “known” at the start of the timestep?
Paul_K
On explicitness, I meant the conventional explicit solution for v and P. Yes, you can reformulate to conserve energy, mass etc, though I think conserving each mass component might be a challenge.
Anyway, I think it’s moot, because I’ve been trying to work out exactly what CAM3 means by semi-implicit. It seems to all come down to 3.1.13, in the spectral domain. They solve for the vorticity component explicitly, but for divergence, pressure, T etc coupled. That probably gets rid of the need to fully resolve sound. They seem to suggest that it does, but the next fastest process – gravity waves at 100-200 m/s, is not much slower.
“Now, do you want to speculate on what happens when you try to couple this with an ocean model, and have to calculate the surface fluxes to atmosphere…”
Well, that’s set out in 4.10.2 Ocean exchange. Coupling the dynamics doesn’t seem to be the major problem, because they assume the ocean velocity is negligible relative to wind. Fluxes (gases, heat etc) across the interface were the big problem that made AOGCM’s a recent achievement only. But I don’t think the interaction with the atmosphere N-S dynamics is the main issue.
Nick,
The main point of my last post was to clarify the issue of explicit vs semi-implicit vs fully implicit VERSUS the question of express mass and energy conservation.
The range from explicit to implicit only deals with the reference point in time at which terms in the governing equations, and particularly the differential terms, are evaluated. Explicit = estimates based on start-of-time-step values. Implicit = end-of-timestep values. Semi-implicit = (normally) linearised estimates of the non-linear components updated to end-of-timestep estimates. This is about the choice of HOW to solve a set of equations, and impacts the solution accuracy of distributed properties, as well as computational overhead.
However, the decision to substitute into the governing equations mass and/or energy conservation terms is a question of the formulation of the governing equations i.e. what equations you choose to solve.
Anyway, I have probably belaboured the definitional point.
Figure 4 in the Mauritsen et al paper has left me in a state of shock. It appears to say that the CMIP3 models “equilibrated” after 1000 years pre-industrial control with net flux imbalances ranging from -3 W/m^2 to +4W/m^2. CMIP5 models do a little better. Whatever the individual subtleties of the formulations used, this looks like very bad news; the models are doing a poor job of energy conservation, which brings me back to the question I raised earlier: at what level of net flux imbalance should any specific model be declared to be in steady-state when estimating ECS? It does not look to me like net flux = 0.
PS I agree that conservation of all mass components would be totally impractical – and ultimately unnecessary.
Nick,
“Coupling the dynamics doesn’t seem to be the major problem, because they assume the ocean velocity is negligible relative to wind.”
I am sure that the latter assumption is valid. However, from the Mauritsen paper:-
“We investigated the leakage of energy in MPI-ESM-LR of about 0.5 W m-2 and found that it arises for the most part from mismatching grids and coastlines between the atmosphere and ocean model components.”
This does not look like a failure in the assumption that atmospheric and ocean velocities are very different.
Apologies if this is not on the narrow point under discussion but I’m not sure about the wind velocity = ocean velocity assumption. I recall that surface air velocity does play a part in evaporation rates. There is a handy online calculator here if you want to try some numbers:
http://www.engineeringtoolbox.com/evaporation-water-surface-d_690.html
I’d suggest these numbers are relevant at the type of flux levels under discussion?
Curious,
The assumption cited isn’t that wind velocity=ocean velocity. Actually, that’s a no-slip condition and dealing with that in turbulent flow is a whole other story. But the assumption cited here is just that for the purposes of air dynamics (horizontal) ocean velocity can be ignored.
But you’re right that near surface air velocity is important for gas (and heat) exchange. In CAM3 it’s a proportional factor in Eq 4.439 which governs this.
Thanks Nick – comment noted.
PS – FWIW I’ve just seen Judith Curry has a current post which discusses evaporation effects – not read it yet though:
http://judithcurry.com/2012/11/05/uncertainty-in-observations-of-the-earths-energy-balance/
Nice article up by Victor of Variable Variability.
Radiative transfer and cloud structure
Apparently, models that predict high climate sensitivity are more reliable. Not because they track temperature better, mind you. They’re better because they track humidity in the tropics better.
The interesting question is how well those models do on absolute temperature. Are they biased high or low? If they’re biased low then a high ECS still puts the equilibrium absolute temperature in the same range as models that are biased high with a low ECS. If they’re calculating a lower humidity, I’m betting they’re biased low on absolute temperature.
I admit I also find this quite a stretch (picking a somewhat arbitrary metric and deciding that whatever best simulates this in absolute terms produces the better estimate for future warming). I don’t have access to the paper (is it even out yet?), but 2 of the 3 CMIP3 models that match observations have ECS > 4 in that article’s figure, so they must come from IPSL-CM4 (4.4 K), MIROC3.2 (hires) (4.3 K), and UKMO-HadGEM1 (4.4K). MIROC3.2 (hires) is one for sure since it is slightly below the others in that figure. Unsurprisingly, these 3 “high sensitivity” models are among the worst offenders of overestimating the warming trend of the last 30+ years:
http://dl.dropbox.com/u/9160367/Climate/11-9-2012_3SensCMIP3.png
IPSL-CM4: 0.28 [0.24, 0.32] K/decade
HadGEM1: 0.25 [0.22, 0.29] K/decade
MIROC3.2 (hires): 0.35 [ 0.30, 0.39] K/decade
Compare this to the NCDC trend: 0.15 [0.12, 0.18] K/decade, and it is clear that this group has significantly overestimated the temperature trend, especially without the lower (“worse” according to the F&T study) models to bring down the mean. What’s more, if you look at the current TOA imbalance in these models from Loeb et al. (2012), you see that the MIROC3.2 and IPSL-CM4 are incompatible with current observations (HadGEM1 is not included in that study):
http://troyca.files.wordpress.com/2012/02/l12_s3.png
So on the one hand, these higher sensitivity models reproduce observations of the absolute relative humidity better in the tropics. On the other hand, these same models produce global temperature increases and global TOA imbalances that are incompatible with observations. I suspect the latter two quantities are far more important in determining the accuracy of future warming scenarios, but I could be wrong.
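For what it's worth, a quick check, using only the trend ranges quoted above, of whether each model's range overlaps the observed NCDC range at all:

```python
# Trends in K/decade as quoted above: (best estimate, lower, upper).
models = {
    "IPSL-CM4":         (0.28, 0.24, 0.32),
    "HadGEM1":          (0.25, 0.22, 0.29),
    "MIROC3.2 (hires)": (0.35, 0.30, 0.39),
}
obs = (0.15, 0.12, 0.18)   # NCDC

for name, (best, lower, upper) in models.items():
    overlaps = lower <= obs[2] and obs[1] <= upper
    print(f"{name:18s} {best:.2f} [{lower:.2f}, {upper:.2f}]  overlaps obs range: {overlaps}")
```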
Re Nick Stokes (Comment #106216)
Nick – CAM 3 has a 2.5% error in energy transfer by water vaporization at 25 degrees C (that’s approximately where most water evaporates from oceans) – it uses a constant latent heat of vaporization for all temperatures. Why use sophisticated equations when basic assumptions are off?
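For what it's worth, a back-of-envelope check of that figure, using the common linear fit for the latent heat of vaporization (a textbook approximation, not code from CAM):

```python
# Latent heat of vaporization: constant value vs. the common linear fit
# L(T) ~= 2.501e6 - 2361*T (T in deg C, L in J/kg). The fit is a standard
# textbook approximation, not anything pulled from the CAM source.
L_const = 2.501e6            # J/kg, the 0 deg C value used as a constant

def L_of_T(T_celsius):
    return 2.501e6 - 2361.0 * T_celsius

T = 25.0
err = (L_const - L_of_T(T)) / L_of_T(T)
print(f"relative error at {T} C: {100 * err:.1f} %")   # roughly 2.4 %
```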
BTW. there is also a CAM 5. Do you know if they are developed independently or if CAM 3 is now obsolete?
Curious George (Comment #106282)
CAM 3 is described as a community model. It is advanced as a model for teaching and testing purposes, rather than a front-line research model. I cite it primarily because of its good documentation in HTML format, so I can link to specific sections.
Yes, there is now a CAM 5. The equivalent document is this PDF. It’s similar in layout; there is a section near the front (1.1) which describes the history. And yes, CAM 5 is an advance on CAM 3, so you can say it is obsolete, though I haven’t seen anything in CAM 3 which CAM 5 would render incorrect. Still, there’s lots I haven’t looked at.
I haven’t seen a full description of latent heat in CAM 3, though 4.10.2 does suggest a constant latent heat. If that is used consistently, for evap and precipitation, it should conserve energy. And the temperatures at which the actual evaporation and condensation are occurring are hard to relate to the grid averages.
DeWitt Payne (Comment #106280)
November 9th, 2012 at 3:32 pm
“The interesting question is…” which ones best simulate *ALL* metrics. For example, my pet metric is the ability to replicate annual temperature variability in ocean temperatures at depth as per ARGO. For the CMIP3 models, I can make a case for ECHAM5 and MIROC and argue against FGOALS and GISS-ER.
But this is just one of many metrics.
Brandon Shollenberger (Comment #106279)
I wonder what would have happened if Trenberth had found that the models with lowest sensitivity best simulated the humidity signal. Would the paper have seen the light of day?
I’ll guess that he would have just moved on to the next metric until the “correct” answer was found?
I think that Trenberth and McIntyre have something in common. Both have experience in the mining industry. Mind you, Trenberth’s experience is specific to the mineral data-tonium.
Nick – thanks. Yes, the energy is conserved, but the rate of energy transfer is wrong. Climate change is all about the energy transfer. The energy would be conserved with ANY choice of a constant latent heat, including zero.
What does it teach me? It teaches me that there was an incredible amount of work going into the model, but that another incredible amount of work is needed to make its results reliable. Yes, evaporation is difficult to relate to grid averages .. can you estimate the impact of this approximation on the accuracy of results? I asked the UCAR team; they could not. Maybe we don’t yet have computer technology good enough to model systems of this complexity.
As for testing .. what exactly do you want to test?
Troy CA:
My understanding is the paper got published about a week ago, so it should be available. I don’t have access to the journal it’s in though, so I can’t actually check.
You’ve highlighted what I find silly about the paper. We’re primarily interested in temperature. When we check how these models do in regard to temperature, we find they do terrible. This causes us to suspect the models may have systematic biases. Then the authors of this paper come along and say that’s not true because those models get a different value right.
That ignores an obvious alternative. These models may simply be incapable of getting accurate results for multiple values. They may be able to “nail down” one value, but doing so causes them to get other values wrong.
Related to that, that paper shows model projections for humidity in the tropics are highly correlated with model projections for ECS. We can basically draw a sloped line and pick any point on it to get a pair of ECS and humidity values. However, no possible pair is “good”, as the slope itself is wrong. That suggests the models create an inaccurate relation between temperature and humidity. Rather than giving us confidence in certain model projections, this should make us more certain models are systematically biased.
AJ:
I’d wager there isn’t the slightest chance that result would get published. Examining humidity projections just highlights the inaccuracies of the models. There is no reason it should give us more confidence in any projections. It’s only by ignoring temperature data one can believe this paper’s results are valuable.
Well, that or you could believe there will be some crazy rapid warming in the near future. With emphasis on crazy.
Brandon Shollenberger (Comment #106306)
“Well, that or you could believe there will be some crazy rapid warming in the near future. With emphasis on crazy.”
My view is that the warming is, or at least was, accelerating. Combine that with a quasi-sinusoidal background and you get steeper warming periods and “cooling” periods going from slightly negative to slightly positive. In short, we should see temperatures rising in steps. So after this “cooling” period is over, we might see rapid warming.
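Purely as an illustration of that picture (all numbers invented), a gently accelerating trend plus a roughly 60-year quasi-sinusoid does produce step-like decadal rates:

```python
import numpy as np

# Illustrative only: accelerating trend + quasi-sinusoidal background gives
# steeper warming periods alternating with near-flat "pauses". Made-up numbers.
years = np.arange(1900, 2101)
t = years - years[0]
trend = 0.008 * t + 0.00002 * t**2              # K, gently accelerating
cycle = 0.12 * np.sin(2 * np.pi * t / 60.0)     # K, ~60-year background
T = trend + cycle

# rate of change over successive 20-year windows, in K/decade
for start in range(0, 200, 20):
    rate = (T[start + 20] - T[start]) / 2.0
    print(years[start], "-", years[start + 20], f"{rate:+.2f} K/decade")
```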
Brandon Shollenberger (Comment #106305)
“They may be able to “nail down” one value, but doing so causes them to get other values wrong.”
I agree. I want to see a model that gets *ALL* metrics (annual variability and long term trends) correct.
AJ, I don’t disagree about what patterns we may see in the warming. The key is it would already take a lot of warming to justify models with ECSs of 3C. It’s extremely hard to imagine getting enough warming to justify 4C+.
While that would be great, it’s a much higher standard than I’d set. I’d be content if the models could just avoid systematic biases. I’m fine with a lack of precision.
Paul_K (Comment #106184)
A little late, but thank-you.
AJ,
Thanks for the thanks. Always appreciated.
AJ, Brandon, DeWitt, Troy,
There used to be a TV comedy show back in the 60’s in the UK called “Never mind the quality, feel the width”.
In more formal model development processes, specifications on key metrics are generally defined up front. If there are k key metrics, typically these will be specified in the form of k vectors of length n (in time) with tolerances defined in n-space. The developer can’t go to the customer/end-user and say “Look, we have a great model that matches 2 of the k key variables” – unless he is dealing with a customer who is ignorant or stupid. There may however be some intelligent conversation between developer and client about modification of tolerances at some stage, with a careful evaluation of the consequences of doing that.
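A minimal sketch of the kind of up-front acceptance test I mean (metric names, targets and tolerances are all invented for illustration):

```python
import numpy as np

# For each key metric, the model's time series must stay within a specified
# tolerance of the target series; matching a subset of metrics is not enough.
def passes(model_series, target_series, tol):
    return bool(np.all(np.abs(np.asarray(model_series) - np.asarray(target_series)) <= tol))

specs = {
    "global_mean_T":   {"target": [287.9, 288.0, 288.2], "tol": 0.3},
    "toa_imbalance":   {"target": [0.5, 0.6, 0.7],       "tol": 0.2},
    "arctic_ice_area": {"target": [10.5, 10.1, 9.6],     "tol": 0.5},
}
model_output = {
    "global_mean_T":   [286.8, 286.9, 287.1],   # cold-biased: fails its tolerance
    "toa_imbalance":   [0.6, 0.7, 0.8],
    "arctic_ice_area": [10.4, 10.2, 9.9],
}

results = {k: passes(model_output[k], v["target"], v["tol"]) for k, v in specs.items()}
print(results)
print("accept" if all(results.values()) else "reject: fails at least one key metric")
```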
Perhaps the root of the problem here is that climate models have been produced as an amorphous, uncontrolled research effort for a faceless customer. There is no “smart-buyer” of climate models. Governments which sponsor the research and who seek to use the results of these models as a basis for policy-making should be assuming this role, but I don’t see it at present.
“Perhaps the root of the problem here is that climate models have been produced as an amorphous, uncontrolled research effort for a faceless customer. There is no “smart-buyer” of climate models. Governments which sponsor the research and who seek to use the results of these models as a basis for policy-making should be assuming this role, but I don’t see it at present.”
yup. the policy making customers are not really part of the model dev process in climate modelling. There are some coming to that realization just now.. you see it reflected in the efforts to do decadal projections and regional projections.
When people think they are supporting a long term global policy, one tends to focus on projections out to 2300 for the entire globe.
as we start to see the impacts of AGW the focus will shift to ‘how bad is it going to be for me’ and you’ll see shifts to shorter time periods and smaller areas.
interesting stuff on data assimilation and GCMs is being done..
RE: Paul_K
Your post on time marching methods is correct. However, a main point is that most modern discretization methods, such as finite element based methods, AUTOMATICALLY conserve discretely all the quantities whose continuous conservation is expressed in the equations of motion. For Navier-Stokes, that’s conservation of mass, momentum, and energy. Subgrid models can be formulated so as not to interfere with this discrete conservation. Seems to me this energy conservation problem should be easy to fix.
What I suspect is that these models are very large codes with a lot of “legacy” code that few people understand very well. Rewriting from scratch may be the only option for upgrading the methods, and people don’t want to do that, probably because doing so involves admitting that there is a problem. So far as I can determine, climate models DO NOT use modern numerical methods. They seem to be based on older finite difference methods and time marching schemes such as leapfrog from the 1960’s. Leapfrog has a lot of problems, including its tendency to induce nonlinear instability, a very bad thing. This is discussed in my graduate school text on numerical analysis, written circa 1975. It is well known and easy to find.
I have sent some references to Schmidt and got no response. Lacis insists that the models do conserve all the appropriate quantities. I find that hard to believe, based on what can be inferred from what little literature I’ve seen. The literature I’ve read seems to avoid the discussion of numerical methods and their merits and demerits, with a few exceptions. I think the team must be so consumed with not admitting any problems with the models that they are afraid to even discuss the issue with outsiders. I can think of no other reason for the silence. A little like a drug company not wanting to publish the negative results on VIOXX. It took a scandal for them to own up to the problems. It was in the FDA report, but never published. Does the FDA even read these things?
I have at this point given up trying to get people to pay attention. It is a very sad commentary on the state of this field in my opinion. Don’t even know how to proceed at this point, but am still interested in trying to get modelers to pay attention. Apparently Paul Williams has the same problem. He is walking a fine line when he shows numerical problems with climate models as he risks getting on the wrong side of the ruling class in climate science. And we all know how unpleasant that can be!!
Nick Stokes, Semi-implicit probably means predictor-corrector methods. These are not terrible methods, but they can suffer from time step limits based on stability. Fully implicit can be a lot better because you can do rigorous error control for the time errors, and the time step is determined by accuracy rather than stability. You can also use very high order discretizations in time. The best codes from the 1970’s go up to order 10, which means the errors are of order delta t to the 10th power. Pretty potent as delta t gets small. Among the explicit methods, Runge-Kutta seems to be among the best and is still used a lot. Leapfrog is the “best method of the 1960’s” and should be totally rooted out of all these codes. Fully implicit methods do require the solution of a large linear system at each time step and so are vastly more expensive on a per time step basis; however, in the metric of numerical error, they can be orders of magnitude more efficient.
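To illustrate the leapfrog point on a toy problem (this is the textbook behaviour for a purely dissipative equation, not a claim about any particular GCM):

```python
import numpy as np

# Leapfrog's parasitic computational mode on the damped equation du/dt = -u,
# whose exact solution decays to zero. Leapfrog is unstable for dissipative
# terms; a fully implicit (backward Euler) step is not.
dt, nsteps = 0.1, 400

# leapfrog: u[n+1] = u[n-1] - 2*dt*u[n], second level started from the exact solution
u_prev, u = 1.0, np.exp(-dt)
for _ in range(nsteps):
    u_prev, u = u, u_prev - 2.0 * dt * u

# backward (fully implicit) Euler: u[n+1] = u[n] / (1 + dt)
v = 1.0
for _ in range(nsteps):
    v = v / (1.0 + dt)

print(f"leapfrog after {nsteps} steps:       {u: .3e}   (spurious growth)")
print(f"backward Euler after {nsteps} steps: {v: .3e}   (decays as it should)")
```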
The problem is that you need a large investment in coding and debugging to get over the hump and start seeing the accuracy benefits. In an environment where there is constant pressure to make more runs, this investment is hard to do and takes a level of scientific discipline not needed if you just stick with your old methods.
David Young,
One of several papers from the early 70’s on numerical instability of leapfrog methods.
http://amath.colorado.edu/faculty/fornberg/Docs/MathComp_73_nonlin_instab.pdf
I don’t believe any commercial simulators have used such methods since then – for obvious reasons.
David Young,
I think all source code for GISS is publicly available. http://www.giss.nasa.gov/tools/modelE/
Dunno how difficult it would be to identify whether or not the numerical methods are stable and conserve what needs to be conserved, but I am betting it would be a major task.
David, Browning and Kreiss had several papers on the problem of ill-posed semi-implicit parameterizations in climate models. Dr. Browning was engaging Schmidt at RC on issues brought up at CA concerning exponential growth of error in models. In that discussion and others, some of the methods raised were hyperviscosity and convective adjustment. Also, for regional models, Dr. Browning discussed in a recent CA comment the problems of boundary conditions, initial values, and step-wise induced error, which indicate that semi-implicit boundary conditions cannot be resolved with the mesh size and timestep used.
John, thanks for the heads up. I think I met Browning many years ago and will try to find it. I assume Kreiss is the famous Kreiss. Which RC post was this?
I found the Browning Kreiss references. Both really good mathematicians who I trust. Their work is fully rigorous and honest. I met Browning 35 years ago at NCAR. Outstanding man!
David Young and other readers,
In the comments section of this CA article, you will find a number of Browning’s comments, including some which did not make it into RealClimate:
http://climateaudit.org/2008/05/10/koutsoyiannis-2008-presentation/
This is one of my favourite Gavin responses. Gavin is panning Frank’s “nonsense”, but then appeals to conservation of mass and energy. Somewhere in these threads is where Browning discusses one of Gavin’s papers where they have to reset calculations when the mass and energy in a cell go negative. TOO FUNNY. One of the other points mentioned here at Lucia’s is that models do not agree on the absolute temperature, thus affecting the magnitude of outgoing radiation. So how can you have a mass and energy balance, different absolute temperatures, and think you have described the actual planet we live on; have an initial value problem with time-step and mesh-size problems; and claim that you know and EXAMINE the errors for “whether the errors in the first 10 years, or 100 years are substantially different to the errors in after 1000 years or more”? What planet were they examining, and how did they do it? So the models get to set up boundary conditions for planets other than earth and still get the right answer. Then one adds CO2, a forcing, and claims the initial value and ill-posedness problems go away.
[Response: I have no desire to force poor readers to wade through Frank’s nonsense and since it is just another random piece of ‘climate contraian’ flotsam for surfers to steer around it is a waste of everyones time to treat it with any scientific respect. If that is what he desired, he should submit it to a technical journal (good luck with that!). The reason why models don’t have unbounded growth of errors is simple – climate (and models) are constrained by very powerful forces – outgoing long wave radiation, the specific heat of water, conservation of energy and numerous negative feedbacks. I suggest you actually try running a model (EdGCM for instance) and examining whether the errors in the first 10 years, or 100 years are substantially different to the errors in after 1000 years or more. They aren’t, since the models are essentially boundary value problems, not initial value problems. Your papers and discussions elsewhere are not particular relevant. – gavin]
http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/comment-page-7/#comments is one of the RC threads. A long read, both threads are long.
Schmidt, G.A., R. Ruedy, J.E. Hansen, I. Aleinov, N. Bell, M. Bauer, S. Bauer, B. Cairns, V. Canuto, Y. Cheng, A. Del Genio, G. Faluvegi, A.D. Friend, T.M. Hall, Y. Hu, M. Kelley, N.Y. Kiang, D. Koch, A.A. Lacis, J. Lerner, K.K. Lo, R.L. Miller, L. Nazarenko, V. Oinas, Ja. Perlwitz, Ju. Perlwitz, D. Rind, A. Romanou, G.L. Russell, Mki. Sato, D.T. Shindell, P.H. Stone, S. Sun, N. Tausnev, D. Thresher, and M.-S. Yao 2006. Present day atmospheric simulations using GISS ModelE: Comparison to in-situ, satellite and reanalysis data. J. Climate 19, 153-192.
page 158
>Occasionally, divergence along a particular direction might lead to temporarily negative gridbox masses. These exotic circumstances happen rather infrequently in the troposphere but are common in stratospheric polar regions experiencing strong accelerations from parameterized gravity waves and/or Rayleigh friction. Therefore, we limit the advection globally to prevent half of the mass of any box being depleted in any one advection step.
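For illustration, here is a cartoon of the kind of limiter that passage describes. This is a 1-D toy with made-up numbers, not the ModelE code:

```python
import numpy as np

# Cap the outgoing advective flux so that no more than half of a gridbox's
# mass can be removed in one advection step. A 1-D periodic chain of boxes.
def advect(mass, outflux, limit=True):
    if limit:
        outflux = np.minimum(outflux, 0.5 * mass)   # the limiter
    influx = np.roll(outflux, 1)                    # each box feeds its neighbour
    return mass - outflux + influx

mass    = np.array([1.0, 0.2, 1.0, 1.0])
outflux = np.array([0.1, 0.5, 0.1, 0.1])            # box 1 would lose more than it holds

print("unlimited:", advect(mass, outflux, limit=False))   # box 1 goes negative
print("limited:  ", advect(mass, outflux, limit=True))    # all boxes stay non-negative
```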
John, here’s my two cents on this: If you have errors that increase over time, they usually scale with the time discretization $latex \Delta\tau$ (to some power, more generally it is a polynomial relationship).
An estimate of the error can be made by running the program with several values of the time slice, then looking at the mean difference between models with a larger value of $latex \Delta\tau$ versus models with a smaller value of $latex \Delta\tau$.
Sometimes you can formally have an error that grows over time and either not be able to measure it, or just have the amount by which it has grown to be so small that you’re still dominated by ordinary discretization error, even for the largest time period you’re interested in running at.
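A minimal sketch of that refinement test on a toy problem (forward Euler on an invented ODE, not a climate model):

```python
# Run the same problem with dt and dt/2 and use the difference between the
# two solutions as an estimate of the time-discretization error.
def integrate(dt, t_end, y0=1.0):
    y = y0
    for _ in range(int(round(t_end / dt))):
        y += dt * (-y)          # forward Euler on dy/dt = -y
    return y

for dt in (0.1, 0.05, 0.025):
    coarse, fine = integrate(dt, 5.0), integrate(dt / 2, 5.0)
    print(f"dt = {dt:.3f}: solution {coarse:.6f}, error estimate {abs(coarse - fine):.2e}")
```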
That said, I’m not sure about Gavin making the distinction between initial condition versus boundary value problems. When you are dealing with the time evolution of a system I don’t see that this matters. I’d guess though that what he said about stability of the GCMs over time is accurate.
John F. Pittman (Comment #106397),
I read over much of that old RealClimate thread. In the end Gavin just says (more or less), you have to believe the models are accurate because they reasonably match the past. Which of course begs most all the issues G Browning raised, and worse, suggests Gavin believes that all the ‘tuning’ done with aerosols, clouds, etc. is perfectly valid. I can understand Gavin’s desire to defend the virtue of the models (it is his entire career, after all), but I think he was, and is, deluding himself more than a little bit. The truth is that the various kludges and ‘tunings’ are defined to generate pretty much the climate sensitivity that the modelers (and more importantly, their bosses) already believe is the ‘correct’ value. Hence the almost comical (and IMO, intellectually corrupt) repeated revision of historical aerosol offsets to maintain consistency with the evolving data on ocean heat uptake.
These guys need to take a deep breath, step back, and acknowledge that the adopted parameters (mostly on clouds) mainly control the diagnosed model sensitivity…. and these parameters are largely unknown. The CGCM’s just do not give very useful information on climate sensitivity, and climate sensitivity is the only thing that matters much for public policy. They have a product which is wholly unsuited to do what they want it to do: convince people climate sensitivity is high.
The sooner modelers can bring themselves to accept that reality, the sooner sane choices on public policy can be made. As things stand today, climate models only inhibit rational public policy choices, because they are demonstrably based on artificial/arbitrary parameter choices. There are reasonable arguments to be made for prudent policy steps, but climate models strike me as almost the worst and weakest possible basis for making those arguments.
Carrick, the problem I have is with the adjustment parameterizations such as sponge layers and hyperviscosity, when the reason the models match is that they are tuned to match the historical part. To me it is a circular argument wrt a real forcing such as CO2. This is because of the microscale acceleration by increased water vapour content and the poorly modelled cloud and precipitation. As one reads of the acknowledged noise, the measurements that are claimed to have been made, and the effect of hyperviscosity and the other methods used, I see no proof that such a system has been verified. In fact, the literature by the modellers themselves indicates it will take from 90 to 150 years to know whether a projection of 100 years, even if projected correctly, is close to being correct. Note that these methods are there to stop instabilities. So just what confidence can you assign that a model is stable, when it has been un-physically adjusted so that it is stable? I have little to none. These are not models with well defined conditions and dozens if not hundreds of independent experiments. There is but one temperature history. Worse, even with systems that do work, one has to be careful when extrapolating. These GCMs are designed for nothing else but an extrapolation, in the parameter that was defined as the driving mechanism.
I really don’t think hyperviscosity is the root problem here.
“I really don’t think hyperviscosity is the root problem here.”
OK. That was cryptic. Care to amplify?
Perhaps I could recast my cryptic comment as a question for John.
Given the (many) other reasonable objections to the way the model physics are handled, why the special interest in hyperviscosity (and gravity wave damping in general)?
David Young (Comment #106372)
“I think the team must be so consumed with not admitting any problems with the models that they are afraid to even discuss the issue with outsiders. I can think of no other reason for the silence.”
What you say here, I judge from my reading of the literature, also applies to many other climate science topics, with the same detrimental effect of slowing potential progress in the field. I have seen the same attitude in a few of the private enterprises with which I have been involved in the past, and the result has usually been either a change in attitude and an admission of mistakes (which usually requires first installing new management) or the inevitable failure resulting in bankruptcy. The attitude stems from a fear of bad PR, which unfortunately leads almost directly to worse PR. Most of what I saw in these enterprises was a result of not admitting to problems, even internally, but it was for the most part eventually self correcting.
If I am to apply what I saw in those enterprises to climate science and in the case at hand, modeling, and have some optimism about the possibility of a correction in the near future, I would want to know if the failure to admit to problems was only for external perception and that internally the scientists are busy attempting to correct the problems. Unfortunately I would tend to think that the denial is internal as well as external. I also do not see a self correcting mechanism in this field that would penalize that attitude. I suspect with modeling there does not exist some heroic individual or even group of individuals who have sufficient resources to go out on their own and change the field.
The question I ask here, and have asked before, is: what is the end game for climate modeling? I am personally in no position to pass judgment on the state of the art, as I continue to be amazed that the numerical methods used for solving nonlinear PDEs have not at least approached being a settled issue. Is climate modeling being held back by a failure to use more modern approaches to solving PDEs, by too much reliance on arbitrary parameters, by computer capabilities, or by some combination of these? In the meantime, are modelers in a position to attach rigorous uncertainty limits to their results? Are there theoretical limits to what climate models can tell us?
John, I understand your concerns. But from my perspective what Gavin was claiming was that “if you test the models, they are stable.”
I was just explaining one approach by which “that testing happens.” You don’t need the model to represent Earth climate before you conclude the model is insensitive to the length of the run time.
Whether the model is tuned or not is irrelevant to the stability question.
Regarding this:
You can test, for example using the method I described, to see if the model is stable.
I also don’t think the right term is “extrapolation” at least in the usual sense. The purpose of the models is to predict future climate based on assumed underlying physical principles and an assumed forcing scenario. I personally wouldn’t describe that as “extrapolation.”
Oliver, if it is non-physical, such as hyperviscosity, there needs to be empirical testing. Other choices besides hyperviscosity are made, and each has a similar non-physical implementation. Notice the contradiction with Carrick’s comment “The purpose of the models is to predict future climate based on assumed underlying physical principles and an assumed forcing scenario”: it is not physical. If it has no effect, then why use it? If it does have an effect, it must be accounted for, since it is unphysical.
That is why I say it is an extrapolation of the non-physical parameterization and the assumptions made wrt its effect on the physical. It is not interpolated, since we have not measured it. Since we do not have a doubling to compare it to, saying it is an extrapolation is better than just calling it an assertion. That it is stable using non-physical reasoning means that claiming it is stable is a bit of a circular argument, without proof that the non-physical effects do not escalate or increase error. Browning and Kreiss indicate that they do.
And Carrick if that was all that was claimed, that the model was stable, I would agree with you. But that is not what is claimed by far.
Carrick – right on! Numerical stability of (Gavin’s) models is only a necessary condition; should they start to oscillate wildly after 10 years they would be clearly useless. How well they represent the Earth’s climate is a totally different ballgame.
I suspect the representation is extremely tenuous; a 2.5% error in the energy transfer by water evaporation from tropical seas in the CAM 5 model (UCAR) is not even a concern for modelers; when notified of it, they just do not care.
Remember, “climate change” is all about energy transfer.
John Pittman, the purpose of filters is to remove non-signal components from a time series. It doesn’t matter if the filter is “unphysical” if that is all it accomplishes. It sounds to me like what they are implementing is an “absorbing layer”, which is a standard way to filter out upwards propagating gravity waves.
The “unphysical” nature of the filter would be a problem if it introduced unphysical behavior in the system in a sense that “mattered”. My models apply a low-pass filter in wavenumber space to prevent aliasing of large positive wavenumbers into negative wavenumbers.
Ironically, while the wavenumber filter is unphysical, it produces a much more physically realistic behavior than one where you didn’t filter the large wavenumbers. Allowing the aliasing of positive wavenumbers into negative ones, it turns out, is an equally unphysical, much more egregious sin than “unphysically” removing them.
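For concreteness, a minimal sketch of that kind of wavenumber-space low-pass filter, using the standard "2/3-rule" dealiasing cut (a generic spectral-method choice, not necessarily the exact filter in any particular code):

```python
import numpy as np

# Low-pass filter in wavenumber space: zero the highest third of wavenumbers
# (the "2/3 rule" used to prevent aliasing in pseudo-spectral codes).
def dealias(u):
    n = u.size
    uk = np.fft.rfft(u)
    k_cut = int((2.0 / 3.0) * (n // 2))
    uk[k_cut:] = 0.0                    # remove the aliasing-prone wavenumbers
    return np.fft.irfft(uk, n)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(3 * x) + 0.2 * np.sin(55 * x)   # smooth signal plus near-Nyquist junk
print(np.max(np.abs(dealias(u) - np.sin(3 * x))))   # high-wavenumber part removed
```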
“”It doesn’t matter if the filter is “unphysical” if that is all it accomplishes.””
That is where the argument lay: how does one demonstrate such a thing? Just stating that a model is stable does not preclude that the unphysical part mattered. WRT the discussion, there were other, similar unphysical methods in the GISS model that were discussed in particular.
For me to state more, I will have go back through and reread lest I make mistakes.
Carrick, if you let me add enough eddy viscosity, I can stabilize anything. Schmidt’s argument has no real bearing on Browning’s point. To test this, try running your model without added viscosity. Viscosity is the mortal enemy of accuracy.
I agree that stability isn’t the end all here. You can run the model without the filter and see what happens. Almost certainly it will blow up very rapidly.
The problem with gravity waves is you need to include fluid dynamic nonlinearities to keep their behavior physical. Gravity waves (of the sort considered in climate models) are typically produced by air flowing over mountain ranges. They then travel nearly vertically into the thermosphere (i.e., nearly parallel to the gradient of density).
But it turns out their amplitude scales as $latex 1/\sqrt{\rho(z)}$, so at very high altitudes their amplitudes become unphysically large unless nonlinearity is present. What the nonlinearity does is cause them to “ramp up” their amplitude much like a breaking ocean wave, and then they “crash”… breaking into smaller scale waves plus turbulent energy that gets converted into heat.
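A back-of-envelope illustration of that $latex 1/\sqrt{\rho(z)}$ scaling, assuming an isothermal atmosphere with an exponential density profile and a ~7 km scale height (both assumptions):

```python
import numpy as np

# Amplitude growth of an upward-propagating gravity wave relative to the
# surface, for rho(z) = rho0 * exp(-z/H). H and rho0 are assumed values.
H = 7.0e3        # scale height, m
rho0 = 1.2       # surface density, kg/m^3

def amplitude_factor(z):
    rho = rho0 * np.exp(-z / H)
    return np.sqrt(rho0 / rho)

for z_km in (0, 10, 50, 100):
    print(f"z = {z_km:3d} km: amplitude x {amplitude_factor(z_km * 1e3):.0f}")
```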
(These are technically “internal gravity waves”. They have a neighbor called the “Kelvin-Helmholtz” wave which produces this cool breaking wave cloud phenomenon.)
Anyway the physics that is important from a climate model perspective is the transfer of mechanical energy in the troposphere into thermal energy in the thermosphere. Getting the exact details of the mechanism isn’t all that important as long as you catch the salient features embodied by the gravity wave generation and eventual dissipation.
As I understand it, this is an ongoing topic in climate science, and whether climate models have properly captured all of the relevant physics is a different question than whether they have to use “ground up first-principles” equations at every step.
(If you had to do that, you’d have to treat ordinary molecular dissipation at a quantum mechanical level. Fortunately you don’t have to, since classic theory works pretty well, at least as long as you stay out of the thermosphere.)
[Speaking of other planets, we do get to study this phenomenon on other planets, mainly Jupiter.]
Also See this for more on gravity waves.
David Young, I’ll have to disagree on that one. It’s very standard to put an absorbing layer e.g. at the top of the layer being modeled in atmospheric and ocean (body) wave propagation using the parabolic wave approximation. Very well studied problem.
It’s used to enforce the “outward propagating wave” condition, and for this problem, there are less efficient but more exact methods for benchmarking the code, to verify that the absorbing layer has been implemented properly.
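Schematically, an absorbing ("sponge") layer is just a damping coefficient that is zero in the interior and ramps up smoothly near the top of the domain; the profile shape and numbers below are invented for illustration:

```python
import numpy as np

# Sponge-layer damping profile: applied as an extra term -sigma(z)*u in the
# tendency, it attenuates upward-travelling waves before they can reflect
# off the upper boundary. Quadratic ramp and parameters are illustrative.
def sponge_profile(z, z_top, thickness, sigma_max):
    s = np.clip((z - (z_top - thickness)) / thickness, 0.0, 1.0)
    return sigma_max * s**2

z = np.linspace(0.0, 80e3, 9)      # model levels up to 80 km
sigma = sponge_profile(z, z_top=80e3, thickness=20e3, sigma_max=1.0 / 3600.0)
for zi, si in zip(z, sigma):
    print(f"z = {zi/1e3:4.0f} km   damping rate = {si:.2e} 1/s")
```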
I’m with Oliver: there are things to criticize the models on, but this isn’t one of them.
Carrick #106414
Yea, you are repeating the standard doctrine that the “details don’t matter” as long as the gross effect is right. This is true only in the case of sufficient viscosity (real or artificial) to damp the disturbances. Kelvin-Helmholtz instability is important to get right. These mechanisms can give rise to turbulence in an otherwise laminar flow, which makes a difference on the large scales. Without capturing Kelvin-Helmholtz, your shear layer will be far too thin, and when it interacts with things downstream it will give wrong effects that can be important.
Google “NASA drag prediction workshop #4” and look at the presentation of Toni Sclafani toward the end and the discussion of spurious separation. This phenomenon arises from small differences in the subgrid model but makes a huge difference to the overall forces.
This doctrine that the “details don’t matter” is the questionable basis for turbulence models, which are notoriously inaccurate. At the very least, you need a truth model to test these models against. In fluid dynamics, you run fully resolved Navier-Stokes with vastly huge grids or you refer to test data. For climate models, what is the truth model?
On gravity waves, why not put in the nonlinear terms. They are just the convection terms in the Euler equations. It’s straightforward. Damping them artificially is usually a bad idea in simulations in my experience.
Carrick #106415
I’ve seen this idea of an artificial absorbing layer used in fluid dynamics and in electromagnetic scattering. It’s often used in the far field where you supposedly just need to damp outgoing waves. The problem of course is that the absorbing layer is never perfect, so some unrealistic backscatter happens. And this backscatter is grid dependent and parameter dependent. But in the ocean, for example, there is real backscatter from the interfaces!!! Why not just fix those things that can be fixed? However, we have found that an exact far field condition is vastly superior. You can refer to Young et al, Journal of Computational Physics, I think 2001 or so, where this is discussed. Trust me on this: in a solution adaptive context, it’s important to get these things as right as you can for the grid size you are using. For one thing, an exact discrete boundary condition requires no tunable parameters and no doctrine that “the details don’t matter.” I don’t get this whole line of argument you are using. Progress in simulation always results when spurious numerical effects (to the layman, “errors”) are eliminated.
Carrick #106408
You point to what I too found Schmidt saying. I called it the “doctrine of the attractor” and so far as I can tell has no scientific basis whatsoever. It is basically that “every time I run the model, I get something reasonable that seems to have qualitatively the right features.” There is no earthly way to tell if this is because the attractor is that strong or if artificial dissipation stabilizes things too much. All simulations have numerical dissipation, the coarser the grid, the more they have. I believe that climate models have a lot of this totally artificial stabilization. Talk of hyperviscosity makes me very nervous because it implies a totally artificial stabilization fudge factor. Schmidt would better spend his time figuring out how to minimize his numerical dissipation instead of “communicating” using totally unsupportable arguments.
David Young:
Actually it’s the upwards going wave, but I suspect that’s what you meant (I tend to think “backwards” as returning back towards the source as in “backscattered waves”). You’re absorbing the upwards traveling wave so it doesn’t end up being downwardly scattered by your “top of atmosphere” boundary condition, which looks like a great scatterer when you discretize in the vertical dimension.
Agreed, which is why it’s necessary to test it against other methods. This tends to be the most efficient way of dealing with the problem, but it’s also a bit more “cranky” than some other methods.
I’ll have to look it up.
We’ve gone over to using an eigenvalue approach based on a biorthogonal expansion (which addresses the problems with the non-self-adjoint nature of problem arising from attenuation). Of course this has the advantage that the boundary condition at the “top of atmosphere” can be exactly enforced.
The main trade off with that is finding all of the eigenvalues, and some technical issues associated with pseudo-modes when you have a continuum spectrum.
I do use a finite-element wave-number integration method that admits to an exact statement of the radiative condition. It’s very slow, but it’s based on a formulation that is an exact 1st order method for a horizontally stratified medium.
I’m still not totally sure here. There are different types of instabilities present in climate models. I think the inescapable one is gravity-wave related instabilities, and you have to do something “unphysical” to fix that. Parametrization of physical processes doesn’t have to be pejorative… after all that’s the entire basis of the bulk Maxwell equations.
The biggest difference here is you can sit down and derive the bulk equations from the microscopic equations, of course ending up with extra terms that are corrections to the phenomenologically based equations….
Carrick,
Your eigenvalue expansion sounds a lot like what is done in fluid dynamics where you do a local eigenvalue analysis of the 5 by 5 system and determine the outgoing characteristics and just remove those. Better than absorbing layers but still not as good as an exact discrete boundary condition.
As you point out, there are always terms that are neglected to derive continuum PDEs. For Maxwell and Navier-Stokes these terms are pretty small. By far the biggest errors are in the subgrid models required for realistic flows and in excessive numerical dissipation. The turbulence models are not realistically represented in the literature. People tend to add terms, tune their parameters, and publish a few cherry picked results that miraculously agree almost perfectly with data. There are a few negative results in the literature. Tom Hughes says that the turbulence models are empirical garbage. There are few alternatives, as you say. Hughes has something called multi-scale methods that offer the potential to be more rigorous. There is still handwaving, but they are formulated as finite element methods, so discrete conservation is guaranteed and you can use all the solution adaptive methodology to minimize numerical errors.
There is an AIAA paper by Venkatakrishnan et al in 2003 that has some remarkable computations showing that these methods are a lot better for controlling numerical dissipation than finite volume methods. We are in the process of publishing a paper comparing the common turbulence models with integral boundary layer methods. The scatter of results is surprising to most people because of the positive-results bias in the literature. Simpler models based on observationally based rules are often more accurate. In any case, these sources of error can be addressed, but it takes a concerted effort and careful mathematical analysis. There is an interesting DLR presentation at the recent NASA conference on “the future of CFD” that finally shows some rather shocking negative results with spurious separation for the common models. Their positive results look cherry picked to me, so you have to take it with a grain of salt. It does however show that subgrid model errors can have large macroscopic effects and make the results very wrong.
You didn’t explain why you can’t just include the nonlinear terms required to give physically realistic modeling of gravity wave behaviour. Seems like it should be pretty straightforward.
SteveF,
The task of analyzing the GISS model is a job for a team of people with a lot of resources. Typically, these models are a rats nest of legacy code and it can be very difficult to reverse engineer the actual mathematics of the numerical methods used. This is a job for someone like Gerry Browning who knows the codes much better than I do and has 35 years of experience.
David Young,
Rats nests of legacy code are a bit like entropy.. always increasing. 😉
Still, some sort of dedicated team with the task of cleaning up rats nest code and verifying accuracy would seem a requirement for complex systems like climate models if you want to be able to maintain the system and be in a position to upgrade with improved methodology. Scientific types are often not the best at developing maintainable code.
I think this illustrates one of the main bones of contention:
“”[Response: But Jerry, a) what are the high resolution models diverging from? b) the climate models don’t resolve mesoscale features, and c) what added forcing terms are you talking about? – gavin]
But Gavin, a) the models are diverging from each other in a matter of less than 10 days due to a small perturbation in the jet of 1 m/s compared to 40 m/s, as expected from mathematical theory; b) the climate models certainly do not resolve any features less than 100 km in scale, and features of this size, e.g. mesoscale storms, fronts, hurricanes, etc., are very important to both the weather and the climate. They are prevented from forming by the large unphysical dissipation used in the climate models; c) any added forcing terms (inaccurate parameterizations) will not solve the ill-posedness problem; only unphysically large dissipation that prevents the correct cascade of vorticity to smaller scales can do that.””
David, Dr. Browning also took Gavin to task on the problems you outline, such as legacy code, mesh size, grid size dependencies, and cherry picking, especially wrt tuning. I think the two responses by Dr. Browning below give insight into the contention.
“”Note that when I pointed out that exponential growth has been shown in NCAR’s own well posed (nonhydrostatic) models as proved by mathematical theory, Gavin did not respond. Unphysically large dissipation can hide a multitude of sins and is a common tool misused in many numerical models in many scientific fields.
If you look at the derivation of the oceanographic equations, they are derived using scaling for large scale oceanographic flows (similar to how the hydrostatic equations of meteorology are derived). The viscous terms are then added to the hydrostatic oceanographic equations in an ad hoc manner. There is no guarantee (and in fact we were unable to prove) that this leads to anything physically realistic. A better method starts with the NS equations and subsequently to a well posed system. ””
“”Jerry
I also see that you did not disagree with the results from the mathematical manuscript published by Heinz Kreiss and me that shows that the initial value problem for the hydrostatic system approximated by all current climate models is ill posed. This is a mathematical problem with the continuum PDE system .
Can you explain why the unbounded exponential growth does not appear in these climate models? Might I suggest it is because they are not accurately approximating the differential system? For numerical results that illustrate the presence of the unbounded growth and subsequent lack of convergence of the
numerical approximations, your readers can look on Climate Audit under the thread called Exponential Growth in Physical Systems. The reference that mathematically analyzes the problem with the continuum system is cited on that thread.””
John Pittman, relating to exponential growth, any solution to the wave equation involves a forward and reverse traveling wave.
Because of attenuation, the reverse traveling wave looks exactly like the problem described by Browning, namely an exponentially growing solution in time. It certainly seems that Browning has proven impossible something that is routinely solved accurately.
The way around this conundrum in our case is imposing on the solution the knowledge up-front that there isn’t a reverse traveling wave. This is extremely straightforward to do using any of the methods our group uses, namely the parabolic wave equation, wavenumber integration and biorthogonal wavenumber integration.
The one place where I know there are problems is the use of an explicit method on a hyperbolic equation (e.g., the wave equation), where you can show that exponential growth is unavoidable just using sensitivity analysis (see e.g. this). This is rectified by the use of fully implicit and semi-implicit methods (e.g., split step spectral methods).
A simple example of how this works is solving the hyperbolic equation in the frequency domain where the wave equation gets transformed into a Helmholtz equation that manifestly is stable, and imposing on the solution the requirement that it is zero at infinity (that explicitly removes any exponential growth in the system).
You then convolve your frequency domain solution with the Fourier transform of your source function to give you the response of the system in the time domain.
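A toy version of that frequency-domain synthesis, with the outgoing-wave (radiation) condition imposed by construction (1-D, invented parameters, a cartoon rather than anyone's production code):

```python
import numpy as np

# Transform the source to the frequency domain, apply a Helmholtz-domain
# propagator for an outgoing, weakly attenuated wave, then transform back
# to get the time-domain response at the receiver.
c, x = 340.0, 200.0                      # wave speed (m/s), receiver range (m)
n, dt = 2048, 1.0e-3
t = np.arange(n) * dt
source = np.exp(-((t - 0.1) / 0.01) ** 2)        # smooth pulse at t = 0.1 s

S = np.fft.rfft(source)
f = np.fft.rfftfreq(n, dt)
k = 2.0 * np.pi * f / c
H = np.exp(-1j * k * x) * np.exp(-1.0e-5 * f * x)   # outgoing, weakly attenuated
received = np.fft.irfft(S * H, n)

print("source peak at t   =", round(t[np.argmax(source)], 3), "s")
print("received peak at t =", round(t[np.argmax(received)], 3), "s  (~ x/c = 0.59 s later)")
```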
Going back and looking at CAM-3, it doesn’t appear that they are quite the noobs that Browning paints them to be. While there may be problems with exponential growth, the adoption of a semi-implicit method (which has been known since at least 1974) suggests to me that it isn’t a slam-dunk that the models will grow exponentially.
And as I mentioned above, it is definitely possible to test for exponential growth, not by comparing against an unknown exact solution, but by comparing the solution of the model with differing time steps and spatial discretizations. As I mentioned it is possible to accurately achieve a solution over the time scale of interest. For the response of the human cochlea to external stimulus the time scale of interest is no larger than a few seconds. You can adopt explicit methods, which are easier to solve in a nonlinear, active system like this, even though you know they are guaranteed to fail for long enough time integration, as long as you check for convergence of the solution over the maximum time period of interest.
Browning’s discussion of the treatment of turbulence is a very different topic than the one of stability. There are places where not properly characterizing turbulence leads to known problems in the model, for example, the effect of large-scale eddy turbulence on Hadley cells.
I would suggest a more physically based approach, such as that of Isaac Held or Tapio Schneider, is a vast improvement over the more fundamental math based arguments that I’ve seen Browning use.
The issue of whether the mathematically provable presence of an undesirable characteristic in an approximate solution method matters is a physics question, not a purely mathematical one.
All numerical solutions are by definition approximations. The word approximation itself doesn’t need to be seen as definitionally a bad thing either.
David Young, thanks for the interesting comments. I wish I had a chance to more fully respond. Deadlines approach!
The only quick issue I have is that yes, there are “rats nest” legacy codes out there, but I think you are giving the community too little credit here. There are efforts to develop first rate, open source codes. I’m sure there’s room for progress; because of the size of the codes there is a substantial amount of inertia between the development of new methods and their implementation in this field.
David Young (Comment #106416)
How do you plan to reproduce the details of the K-H instability in a GCM? All you can possibly hope for is to get the gross effects right. E.g., the shear layer problem you describe is sometimes handled by a parameterization which widens the shear layer to represent shear instability.
For idealized flows, maybe. Unrealistic for environmental flows.
The advection term is there, and it can cause an energy flux toward high (ultimately, unresolved) wave numbers. The nonlinear processes (turbulent mixing and certainly molecular viscosity) which would damp the excess kinetic energy are not captured because of the gridsize, no matter what nonlinear terms you include. Just turning up the viscosity would turn the flow into molasses. A hyperviscosity scheme attempts to provide sufficient damping while avoiding too much damping leaking into the resolved scales.
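To see why hyperviscosity is scale-selective, compare the damping rates of a Fourier mode under ordinary viscosity ($latex \nu_2 k^2$) and fourth-order hyperviscosity ($latex \nu_4 k^4$); the coefficients below are invented purely for illustration:

```python
import numpy as np

# Hyperviscosity hits the smallest resolved scales hard while barely touching
# the large scales; ordinary viscosity damps everything more indiscriminately.
k = np.array([1.0, 4.0, 16.0, 64.0])          # nondimensional wavenumbers
nu2, nu4 = 1.0e-3, 1.0e-3 / 64.0**2           # chosen so both rates match at k = 64

for ki in k:
    print(f"k = {ki:5.0f}: viscous rate {nu2 * ki**2:9.3e}, "
          f"hyperviscous rate {nu4 * ki**4:9.3e}")
```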
David Young:
Oliver drew my attention to this comment.
There is an ongoing effort to describe acoustic propagation using the full Navier-Stokes equations. I don’t know of an example yet that hasn’t resulted in full frontal complete humiliation on the part of the people attempting it.
Anyway, the Navier-Stokes equations themselves are approximate and break down under real-world situations, and they include at least one obvious parametrization… the absolute viscosity coefficient μ. This little parametrization hides a lot of sins.
Yes, computational acoustics is not mature. Linearized methods are good for propagation, but modeling sources is tough. The problem is that acoustic energy is usually much smaller than the energy of the baseline flow. There are lots of good linearized propagation codes. We built one. Can’t talk about it, but if you search carefully you can find some papers about it
It is possible to make the viscosity depend on local velocity. Look for quadratic constitutive relation. It is important in some situations but this needs a lot more work.
Oliver:
What you are saying to me says that climate models are totally reliant on subgrid models for most of the important physics. Doesn’t increase my confidence in them. I hope they have good data to calibrate them. Of course the data is very noisy. But, I digress. It’s all Ok because of the doctrine of the attractor.
Carrick, I may be too harsh about the codes. Some of the NCAR software has historically been very good.
David Young:
I’ve written several. 😉
Fluid dynamics .. a nuclear power plant is shut down in San Onofre, California because of an “excessive wear of steam generator tubes”, due to “computer modeling that miscalculated the velocity of water flowing through the steam generators”.
So much for “investment grade” fluid dynamics models. Mitsubishi Heavy Industries, the modeler, will have to pay a lot. Based on my inquiries at UCAR, I don’t believe that climate models are much better.
Are climate modelers responsible for anything?
http://nuclearstreet.com/nuclear_power_industry_news/b/nuclear_power_news/archive/2012/07/16/nrc-releases-further-details-of-san-onofre-nuclear-plant-steam-generator-wear-071602.aspx
George, Yes, turbulent fluid flow modeling has a lot of problems, not always acknowledged in the literature or to policy makers.
I doubt it was miscalculated water velocity. I would guess it is a miscalculation of the amount of cavitation. The engineers probably calculated zero. I might be wrong, but the steam generators most likely operate with laminar flow, not turbulent flow. At least that was the case with Westinghouse.
Cramer–
Possibly they miscalculated cavitation because they miscalculated velocity. After all, if the dynamic pressure is higher owing to higher-than-predicted velocity, the static pressure will be lower. If the static pressure falls below the vapor pressure (at local temperatures) you’ll get cavitation.
It’s a noobie mistake, but oddly also one experienced engineers make if they have forgotten the material covered in a typical ‘introduction to fluid mechanics’ course in a mechanical engineering curriculum. Using code may make things worse rather than better, since some will get in the habit of not engaging their brains.
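A back-of-envelope Bernoulli check of that point (all numbers invented for illustration; these are not the San Onofre operating conditions):

```python
# If velocity is higher than predicted, static pressure is lower and can drop
# below the vapor pressure, giving cavitation. Illustrative numbers only.
rho   = 958.0        # kg/m^3, hot water
p_tot = 4.0e5        # Pa, total (stagnation) pressure in the passage
p_vap = 2.0e5        # Pa, vapor pressure at the local temperature

for v in (15.0, 20.0, 25.0):                 # predicted vs "actual" velocities, m/s
    p_static = p_tot - 0.5 * rho * v**2      # Bernoulli, neglecting elevation
    print(f"v = {v:4.1f} m/s: p_static = {p_static / 1e5:5.2f} bar ->",
          "cavitation likely" if p_static < p_vap else "no cavitation")
```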
Actually laminar flow can be much worse than turbulent. Typical Navier-Stokes codes don’t do it well at all, because if you turn off the eddy viscosity they tend to produce a lot of nonphysical oscillations. And of course laminar separation is very touchy.
To my best knowledge, heat exchangers are always designed with a turbulent flow. It results in a much better heat transfer.
Just read the thread at CA and I must say Browning’s arguments resonate with me. We see similar things in RANS modeling of separated flows.
@David Young (Comment #106372)
The silence is all in your imagination. There is continual development of models because of the known limitations. The models are open for inspection. There are conferences at which models are discussed.
bugs – Did you miss this bit?:
“It is well known and easy to find. I have sent some references to Schmidt and got no response. Lacis insists that the models do conserve all the appropriate quantities. I find that hard to believe, based on what can be inferred from what little literature I’ve seen. The literature I’ve read seems to avoid the discussion of numerical methods and their merits and demerits, with a few exceptions.”
Do you have any references that you can point to which help the discussion?