Warning: I will not be explaining how Model E deals with heat from viscous dissipation. I don’t know. However, the topic came up in comments about some terms in Model E, which evidently were eventually explained by a comment that included the words “the CHANGE of KE by DISSIP”. Various people wondered whether such a thing could be physical, or whether it must be round-off error or some problem with gridding, etc.
So I did post something about viscous dissipation degrading mechanical energy to heat, which then appears in the energy equation in computational models. (My guess, and it is only a guess, is that DISSIP is a Fortran variable representing viscous dissipation, KE is a Fortran variable representing kinetic energy, and it is generally entirely appropriate for viscous dissipation to appear in the heat equation.)
RyanO and JohnV requested that I start a thread on this topic. So, what I am going to do is
a) try to find all the comments on this topic
b) move those comments here, and
c) let everyone talk about it.
I think Willis first introduced the issue. As I need to hit “publish” before I can move the comments, you might want to wait for the comment to appear before saying much.
If you know the numbers of comments I missed, let me know. However, I am going to the Lyric Opera and must leave by 4pm. So, any I don’t find within a half hour will likely not appear.
Update
I think I have to artificially back-date the post to make the previous comments appear! (The comments shifted last time I tried this, but they aren’t showing!)
Ryan O, you say
I didn’t, even when I looked just now. Ask again?
w.
Willis:
.
This interested me, but I’m afraid I don’t really understand the implication. I’m not a model guy, so I don’t know if I’m interpreting this incorrectly. If you are talking about an energy imbalance, I would have expected it to be put in terms of “+/- Joules per step” – an excess or deficiency of energy when the step completes.
.
I’m not sure how to interpret your statement, unless it means that the uncertainty of each step results in an artificial forcing of up to 2 W/m2 for the duration of the step. If this is the case, that’s freakin huge! And since it’s already in terms of W/m2, how is it possible to “spread it out”? It’s already in per-area form . . . unless they spread it out by volume through the atmosphere or oceans???
.
Sorry if these questions display my ignorance.
.
BTW . . . how long is each step?
RyanO–
I’ve heard Willis’ account before – from Dan Hughes. I haven’t looked at the code, but based on what Dan & Willis say, my guess is
1) It makes physical sense for “energy” to be lost when you set up a code that embodies only conservation of mass and conservation of momentum and don’t add what I would call a “heat equation” (i.e. the first law of thermo.)
2) What I don’t understand is why one would take all the “energy” lost over the full computational domain and then smear it over every grid cell.
I can elaborate a bit on 1 & 2, talking about engineering codes, which share many, many, many features with these climate models.
If we code up conservation of mass and momentum and include many realistic effects (like, for example friction), we can often use these alone to solve some problems of interest. We can do pipeflows, subsonic flow over airfoils.
Now, suppose we run the code to predict something and decide for some reason to now tally what happened to the sum of these contributions to energy: Potential energy (gh), kinetic energy (0.5 v^2). We will find these components of energy decrease.
Now, people who run codes using only conservation of mass and momentum are often applying an isothermal assumption. If we also compute the change in energy associated with temperature (cpT), and, at least in our heads, neglect heat transfer, we will also discover that our code violates the first law of thermodynamics!
What’s really happened? Well, for many problems of interest, the effect of temperature variations is so small that we ignore it. Depending on the specific problem, we can generally explain the issue in more detail. So, for example, in a pipeflow, what’s really happened is that frictional effects degraded what we call “mechanical energy” and turned it into heat. That heat might be conducted to the sides of the pipe and lost, and so the fluid in the pipe maintains a constant temperature.
So, the “loss” of this sort of energy is understood. I suspect this is what’s happening in Model E. It’s not roundoff error– it actually happens.
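To put a number on how small this effect typically is in a pipeflow, here is a minimal back-of-the-envelope sketch (the numbers are illustrative, not taken from any particular code):

# Back-of-the-envelope: how much would a water pipeflow warm up if all the
# frictional (head) loss stayed in the fluid as heat? Illustrative numbers only.
g = 9.81          # m/s^2
cp = 4186.0       # J/(kg K), liquid water
head_loss = 10.0  # m of head lost to friction over the pipe run

# Mechanical energy lost per kg of fluid is g*head_loss (J/kg); if none of it
# is conducted away, the adiabatic temperature rise is:
delta_T = g * head_loss / cp
print(round(delta_T, 3))  # ~0.023 K -- small enough that isothermal codes ignore it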
So now about the smearing
Now, as I said, in many engineering codes, this loss is insignificant. We know that there is no need to account for it in any detail.
However, when the losses are large or we do heat transfer problems, we often do account for this energy. If you flip open a graduate level text discussing computational modeling of convective heat transfer in viscous flows, you will find the energy equation, and it will likely include a term that describes a heat source that includes the viscosity. This may initially seem mysterious, because if you don’t know what’s going on, you will wonder why there is a source of energy, and the source is proportional to viscosity.
That energy represents the conversion of potential (gh) and kinetic (0.5 v^2) energy to heat.
But here’s the thing: In the differential-form energy equation for a continuum, that energy is not smeared. The heat source appears where the energy is lost.
If you find a text with the energy equation containing this term, that source is local.
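For reference, one common textbook form of that local source term (for an incompressible Newtonian fluid with constant properties) is

$$\rho c_p \frac{DT}{Dt} = \nabla \cdot \left( k \nabla T \right) + \Phi, \qquad \Phi = \frac{\mu}{2} \sum_{i,j} \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right)^{2} \ge 0 .$$

The dissipation function Φ is built from the local velocity gradients, so in the continuum equation the heating appears exactly where the mechanical energy is destroyed.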
What Willis is saying is that for some reason, in Model E, this energy is smeared all over the entire atmosphere.
Is this a problem? I don’t know. But it’s a fair question to ask. And if the energy is comparable to the heating due to CO2, then we need a pretty good answer before we can believe it’s ok to smear all the energy all over space.
Analogy time: Consider this ordinary problem.
Suppose you were concerned about frictional heating arising when a block slides down an inclined plane. You know that once the block of weight W reaches a steady velocity, heat is created at a rate Q = W·V, where V is the rate of vertical descent (so Q equals the rate at which potential energy is lost).
You know the heat is created at the block/plane interface. So, if you want to figure out how hot the interface gets, you want the computation to apply the heat there, and then account for conduction away from that point, unsteady effects, and ultimately convection (or radiation) from block surfaces not in contact with the inclined plane. (And of course, you may need to deal with heat transfer to the inclined plane, etc.)
Now, suppose you looked at someone’s analysis, and saw they said the rate at which heat was created was Q = W·V, but, as an approximation, they just smeared that over the volume of the entire block, instead of creating it at the surface. Then they run heat transfer calculations to estimate the temperature of the block and at the surface.
Will you get the right answer for the average temperature of the block as a whole? Will you get the right answer for the temperature at the interface?
I’ll leave those as rhetorical questions, because the full answer is: The approximation has an effect. The magnitude of the effect will depend on features of the block that I have not described in this comment.
What is true is this: It would be entirely fair to ask the modeler to estimate the magnitude of the error due to the approximation. It’s also fair to ask why he bothered to make the approximation. (Did it save computational effort? Do the computations lack the information needed to assign the heat where it really belongs? Etc.)
Anyway, that’s my input on the issue of the smearing!
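To get a feel for how much that sort of smearing can matter, here’s a minimal 1-D steady conduction sketch: the same total heating applied at the heated face versus spread uniformly through the volume gives quite different interface temperatures. This is an idealized slab with made-up numbers, not the actual block problem:

# 1-D steady-state comparison: heat released at one face of a slab versus the
# same total heating smeared uniformly through the slab volume.
# The far face is held at T0; the heated face is otherwise insulated.
q = 1000.0   # W/m^2, total heating rate per unit area of interface
L = 0.05     # m, slab thickness
k = 0.5      # W/(m K), thermal conductivity
T0 = 20.0    # C, temperature of the cooled (far) face

# Case 1: all heat released at the interface (surface flux q):
T_surface_heating = T0 + q * L / k            # 120 C
# Case 2: same heating smeared as a uniform volumetric source g = q/L:
T_smeared_heating = T0 + q * L / (2.0 * k)    # 70 C

print(T_surface_heating, T_smeared_heating)
# The smeared case badly under-predicts how hot the interface gets.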
Lucia: The modeling explanation makes sense. I had thought that Willis meant the energy imbalance was due to mathematical uncertainties and was, therefore, artificial. So my followup was going to be whether the imbalance was generally positive, negative, or neutral for each time step. I no longer have to ask that question. 🙂
.
It does seem to me, though, that the amount of energy lost should at least be tallied as the simulation progresses. Otherwise, it would seem difficult to even approximate how much the response is affected by how you choose to distribute the energy. An explanation for the choice should be provided. And it seems like this should be published information so that researchers who use the model understand where and to what degree their results may be affected.
.
Whether it has any great effect on the results or not, I am certainly not qualified to say . . . but even if the effect is small, it does bother me that the information does not seem to be available in published literature.
.
And as far as “whole” patterns go – yep. In this particular case, I just defined it as everything shown in Fig. 9.1 c) and f), since that was the context of what you presented in the initial post and it is easy to defend. But there’s no scientific or physical reason for either not including additional features to the response or not considering subparts of the response . . . as long as they can still be called identifying.
.
The problem comes when people strip enough features out such that what is left is no longer identifying, use a different definition of fingerprint than the original claimant, remove the context of the original claim by adding hypotheticals, and then proceed to argue that someone who clearly DID NOT do those things is making false claims. 🙂
Regarding the “lost energy” in the Model E code:
There are a couple of items of interest in Dan Hughes’ original article:
http://danhughes.auditblogs.com/2006/12/11/a-giss-modele-code-fragment/
.
First, if I read the code correctly, it appears that the units should be J/m2 — not W/m2. If the time step was 1 second, they would be equivalent. If the time step is 1 hour (pretty small for a climate model), then 2 J/m2 is equivalent to (0.00056 W/m2) over the complete hour. Since the variable is named ediff instead of pdiff, I tend to think it is energy in Joules, not power in Watts. Can anyone confirm the units?
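Just to make the arithmetic explicit (the 1-hour step is an assumption here, not a known value):

# Converting an energy imbalance per time step into an average power.
ediff = 2.0          # J/m^2 per step (if the units really are Joules)
step_seconds = 3600  # assumed 1-hour time step
power = ediff / step_seconds
print(f"{power:.5f} W/m^2")  # ~0.00056 W/m^2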
.
(In comments, Gavin uses W/m2 but that may have been a typo).
.
Second, Gavin said in comments that “in the printout it is the CHANGE of KE by DISSIP”. I have not confirmed that this is true, but it appears that there is a diagnostic available.
RyanO
It would be nice to see a published discussion somewhere. If this were a nuclear application, it would be contained in a big thick report so reviewers could access it. But, unless the result is unusually interesting, the information would not appear in a journal article.
One of the shortcomings in the field is that even though climate science now has important policy implications, they follow the academic method of documentation, i.e. almost exclusively journal articles. This means minutiae are not published and so not accessible to mere mortals who are puzzled by these things. Academics are not interested in these questions, and not rewarded for answering them. AGW advocates consider spending time answering them to be focusing on the wrong thing, and so avoid detailed answers.
It’s a dilemma. But.. well.. there you go!
And doing it by “placement” of information, rather than by statements that are outright untrue, taken out of context, is a particularly insidious way of doing it. Red herrings are used to distract because they work.
John V:
.
That was my first impression from Willis’ initial comment about it – it should be in Joules, not watts. However, initially in the post Dan couldn’t figure out how to get the units to work for that, and later down in the post, both Gavin and Dan repeat the W/m2 units and Dan claims to have seen a dissipation range of up to 15 W/m2.
.
To me, something doesn’t quite jive. What’s the timescale for each step? If it’s in days, then J/m2 is incredibly, incredibly small . . . much smaller than I would expect would be lost to friction in the atmosphere and at the atmosphere/surface interface (following Lucia’s explanation earlier). If it’s in W/m2, that’s absolutely freakin massive . . . and is there any physical mechanism by which KE can be converted so rapidly to heat in the atmosphere on a global basis?
.
Now I’m more confused. 🙁
JohnV
In my discussion above, the reason mechanical energy is “lost” is that it is converted to heat through the process of viscous dissipation. DISSIP looks suspiciously like heat. Whether the Model E code happens to record energy or power (energy/unit time) I can’t begin to guess.
+1 vote for the idea that the ModelE heat dissipation comments could have their own thread.
I created a thread for viscous dissipation turning into heat in model E here.
I found 9 of the comments. If there are others you’d like moved, that would be great. I know I skipped some because some discuss both the hotspot and the viscous dissipation issue.
Thanks, Lucia!
.
I made a post on Dan’s blog asking for clarification on the units (at least I think I did . . . the comment hasn’t cleared moderation yet).
I sent an email to Gavin asking for clarification as well. It’s probably best if we don’t all harass him. I’ll report back when I know more.
Thanks John.
Based on the small amount I read in comments, what this does could be perfectly reasonable, and in line with what we would do in engineering codes. The only bit that sounds slightly odd is the smearing over all space– but even that might have some explanation.
I’m guessing that if they do indeed smear, the answer will be that the term is so small they judge it’s not worth the computational bother. In which case, someone would probably want a back-of-the-envelope quality scaling analysis to say how the errors from this approximation might compare to other approximations. (After all, this would hardly be the only one! And so far, it doesn’t sound like any smoking gun to me.)
At least with my work in engineering, there usually are no smoking guns. There are lots of things people think are smoking guns, but individually they have a habit of not affecting the results much.
I received a response from Gavin and the units are W/m2.
To be clear, I was wrong when I thought they were J/m2.
.
The term is printed as a line in Model E diagnostics as “CHANGE OF TPE BY KE DISSIP”.
I haven’t looked at this for quite some time, and would like to get a much better handle on the issues. I have a post on the general subject, with a few literature citations, here.
I have not resolved what the coding in ModelE does. I strongly suspect the purpose is to force total energy conservation and that the lack of conservation is due completely to numerical artifacts. Otherwise, why isn’t there a simple declarative equation for the modeled viscous dissipation?
In the meantime, there are other ‘energy conservation’ issues that are equally important.
I’ll get back to this Real Soon Now.
Ryan–
If someone tries to reverse engineer the modeling assumptions from many codes, I suspect they will often think they found smoking guns. Of course, I may only think this because I know I would have a hard time working backwards like that.
I can say one thing for sure: I had no idea how efficient the atmosphere was at converting solar energy to wind energy. Even the low values on Dan’s blog were surprising to me.
.
As far as ModelE goes, since the units are W/m2 and you can obtain values as high as 15 W/m2 – or 10 times the assumed GHG forcing – then at first glance this would seem to pose a couple of problems.
.
First, the spread of the values is many times that of GHG forcing. It would be interesting to see a histogram showing the mean and standard deviation of the dissipation power (given the units, referring to it as energy doesn’t seem appropriate). If the spread is large, it would seem that it would contribute significantly to the variability in the long-term projections of an ensemble of runs (say 100 years) since the effects are cumulative.
.
Secondly, unless there is some data to place limits on the actual amount of dissipation occurring in the earth’s atmosphere, then it’s not possible to claim that the dissipation in the model is physical or an artifact because there’s nothing to compare it to.
.
Thirdly, when I read Dan’s comments about the viscous dissipation issue, it seems that the spread in the literature for the global values doesn’t help place reasonable limits on it (by that I mean limit it such that the uncertainty is less than the assumed GHG forcing) because he implied the spread is “very large”. I wish I could understand more of what he wrote, but frankly, a lot of it is over my head.
.
Lastly (for now), some of the abstracts provided on Dan’s page indicate that the conversion of solar energy to wind energy is strongly dependent on the distribution of heat in the atmosphere. This would indicate that simply distributing the dissipation power based on mass as ModelE has done could yield results that are unphysical even if the magnitude of the power is correct.
.
If I misconstrued anything, let me know. I’m not trying to draw erroneous conclusions; I’m trying to get at least a layman’s understanding of what this might mean and what questions require investigation.
OT
Lucia, I hope you finished that afghan and several more!!
http://www.accuweather.com/news-top-headline.asp?partner=accuweather&traveler=0&date=2009-01-09_21:55
Kuhnkat– I did finish the afghan. Yesterday, the 7-day forecast was “snow, snow, snow, snow, snow, snow, snow”. It was snowing last night as I drove home from the opera. It’s snowing now.
RyanO– Is it 15 W/m^2? I’d say more, but I’m going to go out and shovel. Jim already started.
Yes. Dan said he had as much as 15 W/m2 in a given time step.
My question is what is the equivalent viscosity to produce the dissipation seen in the models. I have seen comments by Gerald Browning at CA in the Exponential Growth threads that the viscosity in the models is orders of magnitude higher than the physical viscosity. It also has something to do with chaotic behavior, I think. If you include a dissipation term, you can manage or avoid chaotic behavior or exponential growth or something like that. Someone who actually knows something about this please jump in.
Dewitt–
GCM models share a feature with many, many engineering CFD codes: Because the grid dimensions are too large, they don’t resolve all scales of motion. Since most of the dissipation (and diffusion of momentum) happens at scales smaller than the grid, the diffusive behavior of the small scales is modeled. Approximating this behavior using a turbulent diffusivity has a long history. (It kinda works; it kinda doesn’t. Doing nothing is even worse.)
If I recall the specific magnitudes Gerry discussed, when Judy joined the thread, I asked her for some information on the velocity and length scales in the regions Gerry was worried about. For the magnitudes I found, I did a back-of-the-envelope calculation, and the turbulent viscosities he said Model E uses were of an appropriate magnitude based on these.
For the initial value problem Gerry discussed, the turbulent viscosity issue is a huge problem. It’s not clear to me it’s such a huge problem for climate projections. That said, when you apply conservation of momentum and conservation of mass in a continuum, we know this approximation is quite imperfect and can introduce systematic artifacts. (Now, if Tom Vonk is reading, he can explain chaos, problems with RANS, problems with other models, etc.)
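The sort of back-of-the-envelope estimate I mean is just a mixing-length style scaling, nu_t ~ u'·l. The numbers below are my illustrative guesses, not the values Judy provided and not the ones actually coded in Model E:

# Mixing-length style estimate of a turbulent (eddy) viscosity, nu_t ~ u' * l.
# The scales below are guesses for large unresolved atmospheric eddies.
u_prime = 1.0      # m/s, velocity scale of unresolved motions
length = 1.0e5     # m, length scale of unresolved motions (order of a grid cell)
nu_turbulent = u_prime * length   # ~1e5 m^2/s
nu_molecular = 1.5e-5             # m^2/s, kinematic viscosity of air

print(nu_turbulent / nu_molecular)  # ~7e9: many orders of magnitude above molecular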
The series of threads on Exponential growth in Physical Systems at Climate Audit are:
Exponential Growth #1
Exponential Growth #2
Exponential Growth #3
Dewitt– Rest assured, I read those. 🙂
Hi Lucia (or anyone), comment #11 here:
http://climatesci.org/2007/01/31/a-personal-call-for-modesty-integrity-and-balance-by-henkrik-tennekes/
quotes some numbers for Model E increasing near the poles. I was just wondering if these were within the “Lucia” ballpark?
I have posted additional discussion of the NASA/GISS ModelE ‘viscous dissipation’ here.
The situation continues to be very un-clear to me. Anyone else care to jump into the coding, or search for literature references?
Thanks
MikeN– I don’t know. There may be two separate things. There is the issue of how large the turbulent viscosity is generally (related to the Browning issue.) There is another issue of stuffing in turbulent viscosity at the poles for purely numerical reasons. I think that has to do with avoiding hitting the Courant condition where the grids are small.
It seems to me that focusing so intensely on GISS’ ModelE is unwise. As far as I’m aware, ModelE is a private enterprise used exclusively in house at GISS and for IPCC scenario runs; I wouldn’t expect to find highly detailed technical documents concerning the model published for public consumption.
You should switch gears and focus on NCAR’s Community Climate System Model. There is much more documentation available for this model, as it is intended for use by the wider climate science community. Much has been published on the model, and there are more people experienced with the model who might be able to help you find what you are looking for.
Counters–
This thread was opened at special request of those who do want to discuss this particular issue which happens to relate to Model E. As the issue of approximations in various models in general overlap, feel free to discuss any specific issues you have with the NCAR model.
Lucia,
I only made the suggestion because the CCSM has a much larger body of literature associated with it than the GISS ModelE does – particularly literature detailing the technical aspects of the model, since it is designed to be improved upon by the climate science community at large. For instance, I was able to find a paper which might shed light on your issue here: http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2F1520-0442(2003)016%3C3877%3AHAKEDI%3E2.0.CO%3B2
(warning: paper is not for the faint of heart. You’re going to need a math background to get through it, but it’s a hell of a lot easier than working backwards from raw source code)
Counters said…
“It seems to me that focusing so intensely on GISS’ ModelE is unwise. As far as I’m aware, ModelE is a private enterprise used exclusively in house at GISS and for IPCC scenario runs; I wouldn’t expect to find highly detailed technical documents concerning the model published for public consumption.”
I beg to differ. It is VERY wise to focus on Model E precisely because it has contributed to the IPCC and the recent reports upon which many of their conclusions about AGW are based.
I also disagree that Model E is private enterprise. Their work is based on government contracts and funding flowing into GISS from NASA. American taxpayers such as myself are funding these people. Heck, the code is even available online – how “private” is that??
I do agree, however, that you will not find any “highly detailed technical documents” – their documentation is a disaster! Almost no equations – just brief descriptions of the subroutines. Can someone – anyone – list the complete forms of the momentum and energy equations they are using? How about all those physical models – turbulence, aerosol transport, radiation, clouds and precipitation? And, oh by the way, isn’t there an ocean/ice model coupled with all that atmospheric modeling?
If clear, detailed documentation did exist for Model E, then questions such as the subject of this thread would be more easily addressed. As it stands, given the dearth of information that is available, all one can do is speculate about how model E deals with viscous dissipation…
I managed to butcher my link. You can find it by searching on either Google Scholar or Ams.Allenpress.Com for “Heating and Kinetic Energy Dissipation in the NCAR Community Atmosphere Model” by Boville and Bretherton.
Frank K:
The GISS ModelE is one of MANY models used in the IPCC reports. The CCSM is used right alongside it. Lucia actually comments on this indirectly in a post here: http://rankexploits.com/musings/2008/place-holder-table-of-runs-from-ar4/ . The ModelE is not nearly as extensively used in the climate science community as is the CCSM; if you peruse the pertinent literature, you’ll find that far more experiments are performed with the CCSM, the Hadley, or the GFDL than with the GISS model.
The ModelE is used almost exclusively by researchers at GISS. You’re just not going to find it used extensively in the modeling community at large. There are better models for that – the CCSM being one of them, since its entire purpose is to serve as large a community as possible. Whether this qualifies the ModelE as a “private enterprise” or not is irrelevant; my point was/is clear.
You’re not going to find a large amount of documentation of the ModelE because it’s not meant to be a publicly-consumed model. There are other models out there that serve this purpose. If you want equations, then allow me to link you to the technical documentation of the CCSM:
Main Project Page: http://www.ccsm.ucar.edu/models/ccsm3.0/
Scientific Description of the CAM: http://www.ccsm.ucar.edu/models/atm-cam/docs/description/
Scientific Description of the CLM: http://www.cgd.ucar.edu/tss/clm/distribution/clm3.0/TechNote/CLM_Tech_Note.pdf
Any other documentation you might be interested in can easily be found from the Main page, but if you need help finding something I’d be more than glad to lend a hand.
The bottom line is that I think you’re tilting at windmills here. It’s good to ask the questions being asked here, but you need to make sure you’re asking the right questions about the right things to the right people. The ModelE is just not a super-open, easy-to-access tool; it was never meant as such. It was built to serve a niche purpose at GISS and is one of many tools worldwide at many institutions serving the same purpose.
If you’re serious about tackling this problem, then the place to begin is the CCSM, where you’ll be able to find your answers very easily. I highly doubt that the physics in the ModelE are going to differ tremendously from what you’ll find in the CCSM.
Counters–
I wasn’t meaning to suggest you are wrong. I am not looking specifically into Model E or focusing on that model. The thread exists because there are several people who want to discuss that issue with each other, and this thread permits them to do that without introducing noise into other threads.
Model E is one of the models used in the IPCC but by no means the only one. Models share similarities and have some differences.
I don’t know whether this information will make Dan, Ryan, Willis, JohnV etc. lose interest in the specific issue of dissipation in Model E. But it’s welcome information, and if they want to switch to examining the NCAR stuff, they can talk about that here too.
Lucia,
I understood what you meant; I’m just trying to throw some information into this discussion to grease the wheels a little bit. I know that dissecting the ModelE would be fun because it’s developed by GISS, but short of finding one of the actual developers of the model and persuading them to come answer questions, it’s just not going to be possible. I strongly urge Dan, Ryan, Willis, JohnV and everyone to first dissect the CCSM before tackling the ModelE.
counters:
Thanks for the CCSM links. I’m primarily interested in the internals of climate models in general, not Model E in particular. I plan to review/study the CCSM documentation and see what I can learn.
Counters:
“I strongly urge Dan, Ryan, Willis, JohnV and everyone to first dissect the CCSM before tackling the ModelE.”
Why? The questions here concern Model E, not CAM 3.0 or CCSM. By the way, I have previously seen the CAM 3.0 documentation you linked above, and I have praised the folks at NCAR for their efforts at documenting their code. I haven’t tried downloading their source code, but I bet it’s a sight better than Model E.
Also, Counters, since it appears that you are familiar with both codes (and the climate modeling literature in general), and given what we know about the deplorable documentation and code practices associated with Model E, would you trust Model E results? Why or why not?
I think Counters suggestion of looking at CCSM and associated documentation is excellent. The documentation is indeed very good. The paper of Boville and Bretherton can be found here ftp://eos.atmos.washington.edu/pub/breth/papers/2003/CAM_diffusion.pdf
They do indeed deal specifically with the issue near the end of section 7. They say:
The dissipation of kinetic energy into heat D must be calculated explicitly and included in the heating Q in the first law of thermodynamics (2), in order to conserve energy. In Newtonian fluids, D is a positive definite quantity given by the product of the stress tensor and the velocity gradient, as discussed by Fiedler (2000) and Becker (2001). The spectral Eulerian core in CAM2 includes a biharmonic horizontal diffusion operator that cannot be represented by a symmetric stress tensor and therefore the kinetic energy dissipation cannot be correctly defined, as noted by Becker (2001). Instead, F_K is ignored and D = −∂K/∂t = −V · (∂V/∂t)_d, where (∂V/∂t)_d is the specific force from the diffusion process. Note that D < 0 (cooling) if (∂|V|/∂t)_d > 0.
I suspect symbols here will be mangled, so I’d suggest going to the original, and reading the two succeeding paras. The essence seems to be that calculating viscous stress dissipation cannot be done directly in the spectrally transformed framework, but they can add an equivalent forcing term. The dissipation is about 2 W/m2 (similar to the magnitude of AGW).
Nick–
Is Model E spectrally transformed? Assuming the answer is no,
do you happen to know if the dissipation is generally smeared?
Turbulent dissipation arises at small scales, which are going to be subgrid in a GCM. My impression (without delving into the documents) is that the GCMs don’t do things like carry along any extra equations we might find in a turbulence model. So, how would they estimate turbulent dissipation in a cell? (So basically, is the real problem a spectral issue, or is it larger, and due to grid resolution?)
Also, Counters, since it appears that you are familiar with both codes (and the climate modeling literature in general), and given what we know about the deplorable documentation and code practices associated with Model E, would you trust Model E results? Why or why not?
I still think you’re expecting too much from GISS on this one instance. Like I said earlier, ModelE wasn’t built to show off to the public what a climate model looks like. It’s a complex tool which is developed, maintained, and used by a relatively small number of people. It doesn’t have an enormous design team maintaining the code. This obviously makes huge impacts on how well the model is documented, but as for code practices, that’s going to be in the eye of the beholder. As a member of the new generation of programmers weaned on OOE and modern programming languages, I think that all Fortran is poor coding practice; unfortunately, that’s what we’ve got to work with, and that’s the way it’s going to be.
You’re right though: there isn’t good documentation for the ModelE and I wouldn’t want to backsolve its underlying equations from the code itself. Which is why we use the other models for experiments. Would it be great if ModelE was entirely transparent? Sure, but this is the first time I’ve seen people irked by the fact that it isn’t.
Now, would I “trust” ModelE’s results? Yeah, just as much as I and any other scientist would trust another model’s results – be it a weather model, a financial model, or a climate model. You take each individual result with a healthy dose of skepticism. We gain confidence in the results when, across a large number of models with a slightly different take on things, we tend to get converging results. It’s an interesting point to state explicitly: No single model result is trustworthy; it’s only when we have a large number of results together that we can start interpreting our data.
You can find some good inter-model comparisons floating out there in the major climate journals, and of course you can find an example of this analytical process back in IPCC AR4.
Counters said:
“Like I said earlier, ModelE wasn’t built to show-off to the public what a climate model looks like.”
Really? Then I would propose that NASA remove Model E from the GISS website and make it available only to the chosen few who may know what the heck it’s doing…
“As a member of the new generation of programmers weened on OOE and modern programming languages, I think that all Fortran is poor coding practice;”
Like you, I find languages other than FORTRAN better at achieving more modular and maintainable code. But, having said that, you can still write good code in FORTRAN – but you need to be willing to invest the time in providing copious comment blocks, defining your variables clearly, providing links to documentation, etc. By the way, I’ve seen a lot of poor code in more modern languages like C and C++…
“It’s an interesting point to state explicitly: No single model result is trustworthy; it’s only when we have a large number of results together that we can start interpreting our data.”
Are you stating that the underlying mathematical problem is ill-posed? That is, if one changes the initial conditions slightly, a large change in the solution results after some period of time? But if we run a large number of these solutions and we average them together, the “correct” solution to the differential equations will emerge? Well, I leave that for now as it gets us off topic from the original purpose of this post, which was to determine how Model E does viscous dissipation.
By the way, I do not share the same faith that you have in Model E solutions…
Frank, I think GISS’s documentation is a non-issue at this point. If it really upsets you that much, then I’m not going to be able to persuade you otherwise and I might recommend you contact GISS to speak with them about it.
Are you stating that the underlying mathematical problem is ill-posed? That is, if one changes the initial conditions slightly, a large change in the solution results after some period of time? But if we run a large number of these solutions and we average them together, the “correct” solution to the differential equations will emerge? Well, I leave that for now as it gets us off topic from the original purpose of this post, which was to determine how Model E does viscous dissipation.
I don’t think you’re looking at it the right way.
You seem to have spent enough time with the models to realize two key points: first, that the atmosphere is a chaotic system, and second, that models are just large ensembles of differential equations which are ‘solved’ (numerically approximated, since no closed-form solutions exist for many of them) over a complex solution space. It’s not a matter of perturbing the initial conditions and running an ensemble, then averaging the results – it’s about emergent behavior we see across a wide variety of models which have different ways to deal with some of the intrinsic issues in modeling the climate system.
When a half-dozen models developed by different people across the world with different parameterizations for different feedbacks and phenomena tend to yield the same result in “global warming” experiments (i.e., how will a stable system respond to a doubling of CO2), then that result is intriguing.
Since we are indeed off-topic at this point, I’ll leave it at this. It’s not about having “faith” in a model’s solutions, it’s about understanding what they are, what they try to accomplish, what difficulties they have, and what limitations they have.
Lucia,
I think lots of answers about Model E are in this 2006 paper of Gavin et al:
http://pubs.giss.nasa.gov/abstracts/2006/Schmidt_etal_1.html
Sec 3a starts with an explicit statement that all quantities are evaluated on a Cartesian grid, which indicates no spectral representation. But a few lines above is this statement:
All processes including the dynamics, cloud schemes, gravity wave drag, and turbulence conserve air, water, and tracer mass and energy to machine accuracy. All dissipation of kinetic energy through various mixing processes is converted to heat locally.
I think that clearly implies that viscous heating (momentum mixing) is treated locally.
As to the “global smearing”, I think there’s a suggestion in B&B as to what is happening. They suggest that in CCSM there is a residual numerical energy discrepancy of about 0.4 W/m2. I’m sure all codes check energy conservation. It’s not quite clear here, but I think they are saying that this discrepancy is compensated by a globally uniform addition. The amounts are fairly small, so the error caused by an inaccurate spatial distribution is smaller again.
“Frank, I think GISS’s documentation is a non-issue at this point. If it really upsets you that much, then I’m not going to be able to persuade you otherwise and I might recommend you contact GISS to speak with them about it.”
It is unscientific statements such as these that make people skeptical. People perceive that documentation is a real issue and then someone claims it’s not. Eventually, an average reader is going to go with their own senses, if someone continues to make assertions that contradict what people see. ‘I think’ is a statement of belief, which is not going to score any points unless there is evidence to back it up. The evidence says there is an issue.
Andrew ♫
Counters said…
“Frank, I think GISS’s documentation is a non-issue at this point.”
Really? OK – I thought the title of this thread was “How Model E deals with heat from viscous dissipation.” Perhaps we should change it to “How CAM 3.0 deals with heat from viscous dissipation.” Then all is better. After all, Model E is just like CAM 3.0, right? And we know this because…??
Counters also says…
“…first, that the atmosphere is a chaotic system,…”
“…how will a stable system respond to a doubling of CO2)…”
So, is it (the atmosphere) chaotic or stable??
Finally…
“It’s not about having “faith” in a model’s solutions, it’s about understanding what they are, what they try to accomplish, what difficulties they have, and what limitations they have.”
I agree with you here. But I don’t see how one can understand a model’s solutions without knowing what equations are being solved, and the techniques used in their solution. If you’re OK with black box codes, then I can understand where you’re coming from.
Counters– If you have any information to answer the question “How CAM 3.0 deals with heat from viscous dissipation.”, feel free to add that.
I think we can all admit that the question may not be important in the grand scheme of things. But, for reasons you may also consider unimportant, several people are curious to learn the answer.
If we learn the answer, then those who are puzzled may come to agree with you that the issue is unimportant. But until then, answers like “well, all the models agree to some degree” aren’t going to allay anyone’s qualms about this issue.
I for one am interested in other climate models as well . . . though we did start talking about ModelE first. One piece at a time, methinks.
Dan :
I strongly suspect the purpose is to force total energy conservation and that the lack of conservation is due completely to numerical artifacts. Otherwise why isn’t there a simple declarative equation for the modeled viscous dissipation.
You are surely partly right. Partly because there are ALWAYS 2 things in a numerical model of fluids:
1) First there is some kind of subgrid parametrization of viscous dissipation. This takes the form of either an explicit equation featuring velocities and some coefficients, or of a constant (normally in W/m^3) which is the result of some crude averaging.
2) Second is that no numerical model is able to conserve energy, or anything else for that matter. So there is always a check at the end of the time step and any energy difference (plus or minus) is eliminated by saying that it is friction heating (amusingly, sometimes it is friction cooling :)).
The same must of course be done for any conserved quantity so that, still amusingly, mass can be either created or destroyed. Etc.
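A schematic of what that end-of-step check and correction in (2) might look like is below. This is an illustration of the bookkeeping idea only, not ModelE’s actual code:

import numpy as np

def spread_energy_discrepancy(T, air_mass, cp, energy_before, energy_after):
    """Schematic 'conservation fixer': whatever global energy went missing during
    the step is put back as a uniform (mass-weighted) heating of every cell.
    Illustrative sketch only -- not ModelE's actual code."""
    discrepancy = energy_before - energy_after   # J "lost" by the numerics this step
    dT = discrepancy / (cp * air_mass.sum())     # one global temperature nudge
    return T + dT                                # applied everywhere, i.e. "smeared"

# Toy usage with made-up numbers:
T = np.array([250.0, 280.0, 300.0])       # K, cell temperatures
mass = np.array([1.0e9, 1.2e9, 0.8e9])    # kg of air per cell
print(spread_energy_discrepancy(T, mass, 1004.0, 1.0e15, 0.999e15))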
.
deWitt
Energy dissipation is one of the 3 necessary and (almost) sufficient conditions for a system to be chaotic.
The other 2 are energy supply and non-linearity.
Without energy dissipation you get Hamiltonian systems that conserve mechanical energy and behave in a qualitatively completely different way than chaotic dissipative systems.
The arguments about the “smallness” of the dissipation are red herrings.
There are still many people who haven’t understood that while “smallness” can be an argument for equilibrium or linear systems, it is irrelevant for chaotic systems.
That’s why neglecting a “small” dissipation in the weather/climate would lead to unphysical and divergent trajectories.
This point is related but not identical to G. Browning’s issue with unphysical viscous dissipation which is artificially used to keep an ill-posed problem “on track”.
.
Frank K
The atmosphere is a chaotic, out-of-equilibrium, unstable system.
There is no question and no doubt about it.
It is so notably because of the existence of energy dissipation.
Now despite the fundamental instability that can be directly derived from its chaotic behaviour, its trajectories are bounded in the phase space.
This result is less obvious and trivial, but it is what makes it possible to give climate “science” a foundation.
A good example is aeronautics where the turbulence at small scales is homogeneous and isotropic and the description can be stationary (time independent).
If you define the possible climates as an envelope (a volume called an attractor in chaos theory) of all trajectories allowed by the dynamics of the system AND if you admit that the system is ergodic (actually quasi-ergodic), then chaos theory authorises you to formulate statements about probabilities that the system “goes from this place to that place”.
Obviously the necessary and sufficient condition for such statements to be true is that you are able to describe the attractor, which includes but is not restricted to the accurate description of energy dissipation.
.
But if there is one single thing that everybody should keep in mind, it is that the climate “theory” can only be stochastic and under no circumstance deterministic.
And should the system be NON quasi-ergodic (which nobody knows so far), then even the stochastic approach is wrong and gets more wrong with increasing time.
Thanks Tom! Your explanations are in line with my thinking on these matters. I am familiar with Large Eddy Simulation (LES) in CFD, and it seems there is an analog with climate modeling (that is, solving the filtered N-S equations for an unsteady, stochastic solution which is in turn time-averaged to obtain a “steady” result such as lift or drag). I should note that LES requires very fine (highly resolved) meshes and small time steps to yield meaningful results, which is why steady RANS solutions are still dominant in CFD.
“But if there is one single thing that everybody should keep in mind, it is that the climate “theory” can only be stochastic and under no circumstance deterministic.”
Which is why, I suppose, the climate scientists have such a hard time predicting next year’s climate very accurately…
Here is the dilemma that I have WRT viscous dissipation in the Earth’s Climate Systems. I have been attempting to describe my dilemma for almost two years now without success; here and here, for examples. And I have always asked for clarification and corrections each time I’ve mentioned the subject. So far I have gotten neither acceptance nor rejection of the concepts. I hope to get some feedback on this thread.
Viscous dissipation, the conversion of mechanical energy into thermal energy, occurs in all fluid motions. In the case of the Earth’s Climate System, this includes the natural fluid motions that result from natural energy additions into the systems, motions induced by the actions of the motions and shape of the Earth, and motions induced by the actions of the inhabitants of the Earth. In the case of using the temperature as a metric, and the air in the atmosphere as the fluid, viscous dissipation always acts to increase that temperature.
The Earth’s Climate Systems, especially the atmosphere and oceans, have had their present general configuration for a few billion years (I think this statement is correct; all corrections will be appreciated). This means that viscous dissipation has been effective for all this time. So, whatever the magnitude of the conversion (in W/m^2, let’s say), this means the potential for appreciable increases in the temperature of the Climate System is not insignificant. Over a few billion years I would say that the probability for detectable consequences is 1.0.
We are informed that an imbalance in the Earth’s over-all radiative energy balance of a few W/m^2 is sufficient to result in a significant increase in some kind of ‘Global Mean Surface Temperature’ in as little as 100 years. And that the consequent outcomes of the temperature increase are simply almost too dire to contemplate. If it is the case that the viscous dissipation is of the same order of magnitude as the radiative imbalance, then why have not the same increases in temperature been observed over a period of a few billion years? I think it is correct to say that if 2.0 W/m^2 is sufficient to attain detectable consequences in 100 years, then 2.0/1.0×10^7 W/m^2 is sufficient to cause the same consequences over a few billion years. In this regard the values that I have seen in the peer-reviewed literature of up to about 18 W/m^2 seems to be totally out of the realm of possibilities.
How does all this relate to modeling of the Earth’s Climate System as used in GCMs? If viscous dissipation is of the same order of magnitude as the radiative imbalance, then it certainly must be correctly and accurately included in the equations used in GCMs. As ‘equilibrium states’ are approached for which 0.0 = 0.0, there are not small terms that can be ignored. Additionally, if it is of significant magnitude to be important in an energy balance for earth’s Climate System as an always-present volumetric energy source, then that System cannot never attain equilibrium. And if in fact all this is correct, then something is amiss in the understanding of the Earth’s energy balance.
Thanks for all corrections to incorrectos. Especially, let me know where I’ve screwed up in my thinking.
Dan Hughes (Comment#8552) January 12th, 2009 at 8:05 am
Rats, at least two typos:
(1) 0.0 = 0.0, there are not small terms
0.0 = 0.0, there are no small terms
(2) System cannot never attain
System cannot ever attain
Dan Hughes:
I’m not sure I’m qualified to answer your questions, but I’ll give it a try.
.
The important concept to realize is that viscous dissipation does not add energy to the system (the system being the earth and its atmosphere). It is simply a conversion of atmospheric kinetic energy to atmospheric heat due to friction. The source of the kinetic energy is primarily incoming radiation (the sun) causing temperature and pressure inhomogeneities that lead to wind.
.
If the incoming radiation stopped completely, or if the earth stopped rotating, the winds would stop. The atmosphere would reach a steady-state (after a potentially very long time). Viscous dissipation would no longer be an issue.
.
Back to the models. Excluding viscous dissipation from the calculations would effectively dismiss some of the incoming radiation that causes kinetic energy in the atmosphere. If the average viscous dissipation is 2 W/m2, then ignoring the term would be similar to reducing the incoming solar radiation by 2 W/m2. If you did so in the middle of a model run, you would see temperatures drop to a new level as the simulation progressed.
.
However, the viscous dissipation is not “turned off” during a model run. It is enabled throughout the spin-up stage where the model reaches quasi-equilibrium. It is enabled throughout the scenario runs. The inclusion or exclusion of viscous dissipation makes a slight difference to the long-term average equilibrium temperature, but has very little effect on the model sensitivity to forcings.
.
There is a potential for second-order effects. As a purely hypothetical example, *if* GHG-induced warming makes the world a windier place then viscous dissipation in a GHG-warmed world would be larger than in a non-warmed world.
.
I’m only in the middle of my morning coffee. I hope this makes sense.
Quick note on the second-order effect I mentioned above:
The additional windiness would be a consequence of incoming radiation being converted to kinetic energy. The viscous dissipation heating is a consequence of the kinetic energy being converted to heat. The net energy in the system is still unchanged. The additional windiness would actually make the system just a tiny bit cooler since more of the energy is in kinetic form (rather than in the form of heat).
Dan
If you are going to think of this in terms of billions of years, I think it’s better to think of this in terms of rates, and to consider a concept of pseudo-equilibrium. After all, to the extent that dissipation raises temperatures, heat will be convected or conducted away. Eventually, the warmer earth will lose the heat by radiation.
Analogy: Consider how dissipation of energy results in a slightly warmed-up mixing vessel.
We supply electrical power to a pump. The pump does work, inducing motion. As a result of the induced velocity and non-zero viscosity, kinetic energy dissipates to heat. The tank warms. Once the tank warms, heat is lost to the environment. The amount of heat lost at the outer surfaces of the tank is (likely) a monotonically increasing function of tank temperature.
Engineering problem: Estimate each feature. Apply the first law of thermo which requires that the tank temperature rises until the *rate* of heat loss at the tank walls balances the *rate* of addition from the pump. Solve for equilibrium temperature of vessel. (Strictly speaking there is no equilibrium, since mixing in tanks is often chaotic too. But people use the idea of pseudo-equilibrium all the time.)
The heat from dissipation doesn’t just get stuck and accumulate until the earth reaches an infinitely hot temperature. The temperature rises until we reach an equilibrium.
Yes. If we increase the rate of heat addition to the climate, it will warm. Models are predicting fairly large warming in 100 years.
Because in the problem involving eons, the temperature increase due to dissipation causes a higher radiating temperature, and the heat is lost to outer space. (It does in all problems.)
I think you should set up a rate equation, and do a simple model with an overall energy balance. The 2.0 W/m^2 is *in addition to* what pre-existed. So, if we go back to the tank analogy, it’s as if we took a tank at pseudo-equilibrium and turned the power up. Over time the tank will warm to a *new* equilibrium temperature, which will almost certainly be higher.
Or, to make this even more similar to the earth, it’s as if we took the tank and added insulation to the walls of the tank. The temperature of the contents of the tank will rise because, if we examine the function describing the rate at which heat is lost from the tank sides as a function of tank temperature, we now lose less heat at a given tank temperature. The tank temperature rises until the *rate* of heat loss at the tank walls balances the *rate* of addition from the pump.
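Here’s the sort of rate equation I have in mind, as a minimal lumped sketch (all numbers are made up purely for illustration):

# Minimal lumped energy balance for the stirred-tank analogy:
#   m*c * dT/dt = P_in - h*A*(T - T_env)
m, c = 100.0, 4186.0      # kg of water, J/(kg K)
h, A = 10.0, 2.0          # W/(m^2 K) loss coefficient, m^2 of tank surface
T_env = 20.0              # C, surroundings
P_in = 500.0              # W dissipated by the pump into the fluid

T = T_env                 # start at ambient
dt = 10.0                 # s
for step in range(20000): # integrate well past several time constants
    T += dt * (P_in - h * A * (T - T_env)) / (m * c)

T_equilibrium = T_env + P_in / (h * A)   # analytic pseudo-equilibrium
print(round(T, 2), T_equilibrium)        # both approach 45 C: heat in balances heat out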
This number 18 W/m^2 seems different from what Nick suggested, which is smaller. I don’t know which is correct. But I’m more concerned with whether this is really *smeared* or whether it’s applied locally. The paper Nick cited suggested it’s treated locally– but is it? Also, that paper didn’t give details about how this is treated locally. (I’m not suggesting it can’t be. But I don’t happen to know how it is dealt with.)
Some things you say strike me as correct; others I disagree with.
Here’s my current thinking (which could evolve):
1) If viscous dissipation is on the order of the radiative balance, then we would want it to be dealt with locally. The reason is that we don’t want the *errors* in how we deal with the dissipation to be larger than the imposed solar forcing.
2) Engineers ignore small terms when doing equilibrium problems all the time. If someone models heat losses from a house in an HVAC problem, they don’t do an exact solution of heat loss from the walls, an exact solution of heat losses through the concrete floor, etc. They look at the leading order terms, and try to capture these in the balance. Depending on the heat transfer problem, we might neglect radiation entirely; in another problem we might neglect convection entirely.
In some heat transfer problems, we entirely neglect the conversion of mechanical energy to heat due to viscous dissipation of fluids. (I suspect this would be done in most analytical solutions to estimate the heat loss through an old-fashioned air-filled double-pane window with a quarter-inch gap. You solve the convection problem without bothering about the dissipation because it’s small.)
3) At least in principle, the earth can reach a pseudo-equilibrium just as the air inside an old-fashioned double-pane window can reach a pseudo-equilibrium. For the earth, the temperature distribution reaches a value where the heat lost at the upper layers of the atmosphere balances the heat coming in (at least averaged over the year.) The existence of viscous dissipation doesn’t change this. All it can do is modify what the equilibrium temperature will be. It doesn’t make it impossible to achieve one.
John V, that’s the reason I said, “In the case of using the temperature as a metric, and the air in the atmosphere as the fluid, viscous dissipation always acts to increase that temperature.”
And I guess I should have included the fact that the radiative-equilibrium energy balances (EBMs) are always based on the temperature form of the energy balance equation.
Dan Hughes:
I’m not sure exactly what you’re responding to. I must not have explained very well, because your response does not seem relevant. The important point I was trying to make was that viscous dissipation is not a forcing since it does not change the energy balance. It is simply a conversion of kinetic energy to heat energy.
.
I agree with lucia’s more rigorous explanation. Hopefully it works better for you.
.
IMO, the question of local- vs global-application of the viscous dissipation is important (as lucia said).
John V, it was your response that was not relevant. I said nothing about any changes in the overall energy balance.
Additionally, I explained explicitly what the physical mechanism is for the conversion of fluid motions into thermal energy.
Dan Hughes:
If you’re looking for a fight, you’re not going to get one from me.
If you’re looking for “clarifications and corrections”, you’re not going to find them by picking fights.
JohnV
I think that must be good coffee, because the gist sounds right. (I’m not seeing anything to disagree with; what I mean is that no one can wordsmith blog comments the way they might wordsmith a formal lecture, etc.)
On this:
I’d figure failing to convert the dissipated mechanical energy to heat would necessarily lead to a cold bias. Tallying it up globally and smearing could lead to a mis-distribution of temperature. Maybe making the poles too hot and the equator too cool? That’s a guess. I can’t begin to guess what it would do vertically.
The mis-distribution would then have additional effects. This itself could ultimately affect estimates of the sensitivity– but I don’t know how much.
But do they smear? The paper Nick cited suggests no. Dan says the code says it smears. I’m not diving into the code myself, so I don’t know.
One of the difficulties is that much of the argument is over effects that could be called second order in some framework. E.g., do we mean an expansion around a small variable ‘dq’? So, dealing with dissipation incorrectly could make a difference in the estimate of sensitivity and the issue of feedbacks.
Right now, my only big questions are:
1) Is the viscous dissipation smeared all over the atmosphere as Dan suggests? Or is it treated locally as the text Nick found suggests? (In that case, the smeared stuff seems to be round off error.)
and
2) If anything is smeared, is there any agreement how large smeared term is relative to the applied forcing anomalies?
lucia,
In a comment on Dan’s blog, Gavin indicated that the code available online (which is the code that was used for IPCC runs) is about 4 years old if my memory serves me correctly (which it does less and less often it seems).
.
The paper that Nick Stokes linked above is from 2006. It’s possible that the viscous dissipation *was* applied globally in the version of Model E for which we have code, but *is* applied locally in the most recent version.
JohnV–
If you are correct, that would explain why Dan’s seeing smearing while the paper says no smearing.
It would be interesting to know how model E does deal with it locally, but I’ll admit I’m not putting digging for that answer on the top of my list.
Dan– Do the things I suggested change your thinking on the issue of the energy balance?
“The paper that Nick Stokes linked above is from 2006. It’s possible that the viscous dissipation *was* applied globally in the version of Model E for which we have code, but *is* applied locally in the most recent version.”
This, of course, brings us right back to the lack of appropriate documentation that GISS is famous for.
By the way, please count the number of equations in the 2006 document and let me know what you find…
“Dan says the code says it smears.”
I think I have not indicated the details of the manner in which ModelE, or any model / code, handles the dissipation.
Kindly let me know if and where I so indicated.
Thanks.
By the way, if you want to know how NASA employees such as Gavin Schmidt feel about code documentation, here are Gavin’s own words from Dan’s blog:
http://danhughes.auditblogs.com/2006/12/11/a-giss-modele-code-fragment/
“Creating good documentation takes time and effort, and unfortunately the scientists are not paid to do that. You won’t be impressed that the situation now is significantly better than it used to be, however, even perfect documentation does not impart understanding – that is always going to take work.”
—
So there you have it – climate scientists are not paid to document what they do – they are paid to blog …err…do science…
I guess I have not stated clearly and explicitly what I think the important issues are.
(1) What is the magnitude of the dissipation?
(2) Is that magnitude sufficient to be important relative to attaining quasi-equilibrium / equilibrium states at all time scales?
(3) Where have (1) and (2) been quantitatively addressed?
(4) If the physical effect is important relative to calculation of radiative quasi-equilibrium / equilibrium, has it been correctly and accurately incorporated into the GCM models / codes?
(5) If it is of sufficient magnitude to be important, what is its contribution to the calculated temperature increase over 100 years?
(6) If it is not important, why are such models / methods included in GCMs? All coding contributes equally to the potential for bugs, and the cost per line of code is the same for all lines of code. Unnecessary lines of code should never be included.
I think these need to have been investigated and I’m looking for papers and reports in which they have been.
Sorry Dan–
I think other people may believe you said the viscous dissipation is smeared, based on your reading of the code. Do they treat it locally?
That’s an issue that’s unclear to me– and what they do matters to whether or not I would say it seems obviously wrong, or whether I think the treatment falls in the class of reasonable modeling choices.
Heat dissipation is treated locally in ModelE. Look in the file ATMDYN.f and see the code following the comment:
C**** Add in dissipiated KE as heat locally
Dan Hughes
I have no idea. People have suggested a variety of numbers.
Whether the magnitude is large or small, this would not preclude the ability to attain quasi-equilibrium values at least in some statistical sense. However, the fact that it is not zero is one of the factors that results in the chaotic nature. So, even if we didn’t have annual cycles etc. we would never achieve a true honest to goodness equilibrium where all partial derivatives with respect to temperature were zero.
Good question. I have no idea. If someone knows, I’d be interested in reading the answer.
Good question. I not only don’t know if it’s been correctly or accurately incorporated into models, I have no idea what they do. I asked Nick if he might know. So far he hasn’t answered– but it’s night in Australia, so maybe he will. If someone knows, I’d be interested in reading the answer.
Who knows? But why should this lead to an accumulation of heat? If the overall dissipation rate has not changed, the accumulation due to this factor will be zero. If it’s decreased, the accumulation of heat will be negative. If it’s increased it will be positive.
So, getting it right matters in some sense. But without knowing the answers to (1) and (4) I could not begin to guess how this impacts any predictions of warming.
We know viscous dissipation is a real physical effect. We know, based on fundamentals, the term should appear as heat in the temperature equation. This is discussed in engineering texts. Some engineering models leave it in, some neglect it. Presumably, if the term is included in a GCM (like Model E), then the modeler (like Gavin) thinks it’s important enough to include.
I think you’ve been clear on these issues 1-6. However, when you said this
I thought you meant to suggest that if dissipation exists and it’s large, then the climate system can never attain equilibrium. I disagree with that statement. Viscous dissipation can be arbitrarily high. If it is constant over time, and all boundary conditions and heat sources are constant over time, a system can achieve an equilibrium temperature. I’d say the same thing if the viscous dissipation existed in my home blender.
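To make the blender example concrete, here’s a back-of-the-envelope sketch (purely illustrative numbers): treat the blender contents as a lumped mass losing heat to the room by Newtonian cooling. A constant dissipation just shifts the steady-state temperature; it doesn’t prevent a steady state from existing.

```latex
% Lumped energy balance: constant dissipation P, heat loss to the room at
% temperature T_inf through a conductance hA, fluid heat capacity C.
\[
  C\,\frac{dT}{dt} = P - hA\,(T - T_\infty)
  \qquad\Longrightarrow\qquad
  T_{\mathrm{eq}} = T_\infty + \frac{P}{hA}
\]
% e.g. (made-up numbers) P = 300 W and hA = 15 W/K give T_eq = T_inf + 20 K.
```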
The only problems I can see are: Is the dissipation treated reasonably well? Or not? Is the level of approximation any better or worse than for anything else treated approximately in GCM? I don’t know the answer to that.
Nick– I guess if I’d waited, I’d have read your answer! I should check time zones to see what hour it is in Australia.
The paper that Nick provided a link for (Comment#8495) describes a method of calculating viscous dissipation analogous to the mean field approximation used in solid state physics, or the local field in plasma physics. They do not calculate the combined effect of the flux gradient and the rate of change of the velocity tensor; they must assume the overall effect can be approximated by the velocity rate of change. When they run this through the models, it seems the outcome is very similar to a summed contribution from more or less interchangeable velocity distributions from individual grid cells, hence it is appropriate to approximate all such local (grid cell) velocity distributions by an average distribution. Then the mean field approximation comes in, where each grid cell essentially acts like an independent entity with a velocity distribution equal to the average. So no, this is not the same as smearing, but it says the explicit mathematics produces an equivalent answer to the mean field approximation.
Though I don’t know if they have shown examples of this.
This is slightly (haha) more elementary than the above discussion.
.
Since the apparent purpose of the viscous dissipation (ediff) term in Model E is to describe the conversion of KE/PE to heat, the question I have is: How accurately does Model E calculate the conversion of radiation energy to KE in the form of wind? There don’t seem to be any good physical bounds on this in the literature; many people have come up with many different theorized and measured values. Also, in the little I’ve read about wind since this topic has come up, it seems that the efficiency of the conversion from radiation to wind is strongly dependent on local differences in temperature (though I don’t know how fine “local” means).
.
If the known range of possibilities for radiation–>wind KE is large, then it does not seem possible (to me anyway) to determine how much of the value of ediff is actually compensation for mathematical artifacts, how much of the value actually represents viscous dissipation, and how much actually represents errors in the radiation–>wind KE result. If some of ediff represents the KE associated with the wind (in terms of deviations from the calculated values) then assuming the entire value to be heat could result in significant error.
.
Additionally, since many parameters (aerosol forcing, for example) are determined by fitting the models to the historical record, incorrectly calculating both the conversion of radiation–>wind KE and the rate of viscous dissipation would seem to result in obtaining the wrong values for these parameters.
.
That last one, to me, seems the most important. Unless the model somehow consistently adds/subtracts energy to the system, there shouldn’t be any net change in the total energy. But if the calculation is inaccurate, then the wrong values will be obtained for the unknown forcings . . . which would in turn manifest itself by the model diverging from the physical earth in future times.
Just to clarify: my point is that although ediff may force a conservation of energy for each time step, if any significant portion of it is really some quantity other than heat, then assuming it is all heat will result in an incorrect calculation of the inferred forcings, like aerosols.
.
The effect when running the models into the future would be analogous to using the wrong thermal conductivity when doing a heat transfer problem.
Just as a clarifying question about
“(1) What is the magnitude of the dissipation?”
Are we really asking about viscous dissipation as it is passed through turbulence dissipation of kinetic energy in our model?
OMS–
We are really talking about the sum of the turbulent dissipation you describe plus the much smaller amount that would be resolved at large scales.
But I think the GCM’s don’t have a turbulence model, so I don’t know how they estimate the viscous dissipation rate.
It appears questions about this have been introduced into blogs and forums because a dissipation term appears as a source in the temperature equation (i.e. first law of thermodynamics) in Model E specifically. The term appears to reflect the viscous dissipation, which, as we know, should appear in any exact treatment. (It does appear in texts like Bird, Stewart and Lightfoot.)
It would be worth resolving this because the question has come up on blogs before.
Oh– I should add– if by “our” model, you are speaking of a specific GCM, maybe you could elaborate about any turbulence model the GCM might contain. (Or, even speaking generally is fine if you don’t want to be a public known person at a blog!)
lucia — my apologies for not being more specific. First of all, “our” just referred to the royal “we.” 😉
My confusion comes about because there appear to be lots of subthreads in the current thread regarding the so-called “viscosity” term.
It does not seem that a term which has the form of viscosity at GCM scales can be anything but a model of turbulence dissipation. Whether the form is harmonic or biharmonic is somewhat beside the point. If we care whether this term is “correctly” included in the models from a physics sense, then we’re probably safe in assuming that it is not. The epsilon value is, at best, a guess, and at worst the form is completely wrong, since we have observations of such amusing things as upgradient transport (which we know cannot be modeled correctly even in very simple cases, such as the smoke plume, without a heat equation).
As to whether this epsilon can matter on climate scales… well if the notion is valid that large scale phenomena (e.g. the overturning circulation in the oceans) can be driven by very large energy inputs but are regulated by a balance of very small terms (local epsilon), then it would seem at least plausible that we need local, accurate treatments.
If we are concerned that ediff is simply enforcing energy balance by retaining all “numerically and otherwise lost” KE + PE as additional heat, then what matters is the units. 2 W/m^2 is of comparable order to all claimed anthropogenic radiative forcing and would be very suspicious.
Of course, I may be completely misreading the various points of discussion, and would welcome any corrections or clarification!
oms–
Yes– There are people discussing the turbulent viscosity in the momentum equation and also people discussing the actual dissipation itself and whether or not it should appear in the heat equation. The topic of this post is the second– but the first comes up because of a long thread that appeared elsewhere.
I’m sure dissipation is not dealt with absolutely correctly. It can’t be– as they don’t capture the small scales.
But it looks like Nick at least did look at the code to identify that whatever they do for dissipation, it’s local. (I’m glad to read that. Someone somewhere in the thread said it was smeared. It might have been Willis.)
Someone above said they looked at the paper Nick linked and dissipation is supposedly dealt with as in plasma physics. I’ll need to look at the paper to see what that means. (I know fluid mechanics, but I’m not familiar with what is ordinarily done in plasma physics.)
Scanning this paper, linked by Nick in (#8495):
ftp://eos.atmos.washington.edu/pub/breth/papers/2003/CAM_diffusion.pdf
The narrative on page 3879 discusses what appears to be
1) Calculating the global energy imbalance.
2) Adding this back in either by spreading uniformly over the entire computational domain (smearing) or using an ad hoc function.
This doesn’t resolve what Model E does, but it suggests that smearing is used by some models somewhere.
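For anyone who thinks better in code than in prose, here’s a toy sketch (Python, made-up variable names — this is not the Model E or CAM code) of the two bookkeeping choices just described: compute the KE+PE lost during a step, then either smear the global residual uniformly (by mass) over every grid cell, or put each cell’s own loss back where it occurred. Both versions conserve the global total; they differ only in where the heat ends up, which is the crux of the smearing question.

```python
import numpy as np

def add_back_residual(heat, ke_pe_loss, cell_mass, smear=True):
    """Re-insert lost KE+PE as heat, either globally smeared or locally.

    heat       : per-cell heat content at the end of the step (J)
    ke_pe_loss : per-cell KE+PE lost during the step (J)
    cell_mass  : per-cell air mass (kg), used to spread a global residual
    smear      : True  -> spread the *global* residual by mass ("smearing")
                 False -> put each cell's own loss back where it occurred
    """
    if smear:
        total_loss = ke_pe_loss.sum()                    # one global number
        return heat + total_loss * cell_mass / cell_mass.sum()
    return heat + ke_pe_loss                             # strictly local

# Toy example: 4 cells, with most of the loss concentrated in one of them.
heat = np.zeros(4)
loss = np.array([0.1, 0.1, 0.1, 3.0])
mass = np.ones(4)
print(add_back_residual(heat, loss, mass, smear=True))   # [0.825 0.825 0.825 0.825]
print(add_back_residual(heat, loss, mass, smear=False))  # [0.1 0.1 0.1 3. ]
```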
If the dissipation is large compared to the 2W/m^2, and it’s smeared, then one would definitely want to know why people think smearing is ok for computations where we want to figure out sensitivity to the effect of adding 2W/m^2 additional heating to the system. (It might not be important in other contexts, but in this one– it’s a valid question.)
I don’t plan to download the Model E code and try to reverse engineer it. Nick thinks it’s not smeared in there. I believe him, but obviously, I don’t know what they do.
Does this topic relate to Gerry Browning’s longstanding complaint about the models: http://www.climateaudit.org/?p=674
Raven–
It doesn’t really relate directly to Gerry’s longstanding complaint. OMS noticed that some comments here are mentioning the issue of introducing turbulent viscosity into a model (Gerry’s issue) and others are discussing the viscous dissipation and its appearance as a source term in the heat equation (the issue of this post).
Both are modeling issues; both use the term “viscous”. But otherwise, they aren’t particularly strongly connected.
Lucia,
Thanks. I thought Gerry’s “forced in a nonphysical manner” was referring to the smearing being discussed here.
Bingo. Obviously both you and I agree 1000% on this point.
In other fields, once the science is “settled”, the development of the concepts into something that works is mostly done by engineers, people who don’t worry about “original” articles in journals but do worry about details and documentation and everything hanging together. Maybe climate models have reached that point.
Steve–
Yes. People would expect the longer reports discussing specific nuts and bolts of how this is done to be documented, and journal articles would have lower priority relative to more massive documents.
These more massive, wordy documents would likely be NASA reports. (All national labs publish tons of these documents; NASA publishes these sorts of reports too. So, the information becomes available to the public, and is archived. Some is “original” in the sense of journal articles, but it need not be original in that sense to be included. It simply needs to explain what assumptions really are made and how things actually were predicted, in rather copious detail.)
I think it’s a shame these don’t exist. If they did, we could simply order the documents from NASA and read them. (They would also probably be cited in the journal article, the same way masters and Ph.D. theses are often cited as providing additional details.)
How’s this for a prediction? –>
The LAST THING before the world ends that you will see, is comprehensive documentation on how people know AGW is true.
When the giant asteroid approaches the earth and the seas are crashing into cities and the mountains are flattened and my house is on fire, among the debris and flying pieces of civilization there will be one good electrical connection, which a copier is plugged into, and one brave soul will be copying the Official AGW Documentation Report for distribution. 😉
Andrew ♫
Some nuggets out of the paper that Nick linked:
(Pages 5-6)
.
This would seem to imply that viscous dissipation cannot be calculated on a smaller scale than the grid size – since a major source of that dissipation cannot be calculated on that small of a scale. Since the article does say that the dissipation is calculated locally, and Gavin mentioned that the heat is distributed on a mass basis, then the calculated heat is most likely distributed by mass in each grid cell and layer. But the quantity ediff may not be that value – it may be something else entirely.
.
The following paragraph seems to imply that the quantity that Dan found (ediff) may not be the local dissipation:
.
If that statement is meant to describe ediff – which Dan initially thought was global (and it has no physics associated with it – it simply compares KE and PE and adds the difference back to the system), then the “small” qualifier needs some explanation. Dan stated he saw values ranging from about 2 W/m2 all the way up to 15 W/m2. While it may be “small” in terms of the total heat balance for the planet, it’s not “small” in terms of the forcing magnitudes the model is meant to discern.
Ryan– Yes. There seems to be quite a lot getting interlaced, even in the paper Nick linked.
* It appears some of the “lost” energy is due to real physics, some is due to numerics and round off. This is added back in.
* The ediff term Dan found does seem to be large. I think discussion of this is what made me think Dan was discussing something smeared. Is it? And does it represent round off or real physical dissipation? Both. Is it up to 15 W/m^2? If yes, then that seems large compared to the effect we are trying to understand.
* The discussion of viscous dissipation says what I would expect. It’s very difficult to do this locally. It is a simple fact that most real viscous dissipation happens at scales that are not resolved in a climate model. So, strictly speaking, there is no absolutely rigorous way to assign that heat to the precisely correct grid. You need to parameterize this somehow (see the sketch just after this list for the flavor of such a parameterization). How is this done in Model X (where X could be any model)? I don’t know.
* All this could be cleared up if there were more specific documentation. Many other fields do have more specific documentation. It’s either in MS and Ph.D. theses or in big thick agency reports. I wouldn’t go so far as to say everyone writes these. I know specific instances where people don’t. Failure to write these things often leads to inefficiency particularly when projects involve many people and staff changes.
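Here is the sketch promised above — one generic flavor of such a parameterization (a crude eddy-viscosity closure with an assumed nu_t; I am not claiming this is what Model E actually does):

```python
def dissipative_heating_rate(du_dz, nu_t, cp=1004.0):
    """Toy eddy-viscosity estimate of unresolved dissipation in one cell.

    du_dz : resolved shear of the horizontal wind across the cell (1/s)
    nu_t  : assumed eddy viscosity (m^2/s) -- a tunable model parameter,
            not something the resolved flow supplies
    cp    : specific heat of air at constant pressure (J/kg/K)

    epsilon = nu_t * (du_dz)**2 is the dissipation per unit mass (W/kg);
    dividing by cp gives the local temperature tendency (K/s).
    """
    epsilon = nu_t * du_dz ** 2
    return epsilon / cp

# e.g. (made-up numbers) 10 m/s of shear across a 1 km layer, nu_t = 1 m^2/s:
print(dissipative_heating_rate(10.0 / 1000.0, nu_t=1.0))  # ~1e-7 K/s
```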
Lucia: And different people code things differently, so given how long ModelE has been essentially “in development”, I wonder how many of the programmers actually understand everything that is in it.
.
I went back to Dan’s thread and found this reply from Gavin:
.
It would seem that Gavin is saying that ediff is indeed the global term, not the local one. If that’s the case, then Gavin’s explanation on Dan’s blog differs from his paper subtly. In the paper, he implies that the viscous dissipation is calculated locally and then a later correction to the energy balance of the system is applied globally. So it would seem to be impossible to extract any quantifiable viscous dissipation from ediff since ediff would inherently contain things other than simply viscous dissipation. It’s a global energy balance correction, not necessarily a viscous dissipation correction (though like you say, there’s probably a good deal of viscous dissipation in it).
.
I don’t really agree with Gavin’s justification on Dan’s blog for ediff always being viscous dissipation. I’m no model guy, but I can slog my way through simple math, and the way it’s calculated means it could be anything. The modelers are just assuming it’s viscous dissipation.
.
By the way, I tried checking the documentation for the CCSM as counters suggested. Unfortunately, the one piece of documentation that relates to what we’re talking about here – the documentation on the coupler module – does not exist. From the description of the function of the coupler:
.
So unfortunately the one piece of information about the CCSM that is pertinent to this discussion is missing. The individual modules (like CAM) may not need an analogous global correction like ediff, but it’s pretty clear when you put them all together, that there is some need for correction. Whether it’s handled like ModelE I don’t know because the documentation is unfortunately missing. 🙁
If the model calculates a parameterized dissipation at each gridcell (which would make the most sense and seems consistent with Gavin’s comments) and then applies a global correction for delta(KE + PE) , then this would seem to be more than “assuming” all residual change in KE + PE (due to everything, including numerics, as lucia pointed out) is due to viscous dissipation, but rather “assigning” it (by reinserting the difference term as heat).
OMS–
Yes. That description could be consistent with what people say they see in the code and what Gavin’s various comments say. In that case, the question isn’t entirely either/or. It’s “both”.
Man, that Boville and Bretherton paper about the NCAR models is dense. I’ll have another crack at the question:
For the semi-Eulerian core yes; they calculate viscous dissipation by using the rate of change of kinetic energy instead of the stress tensor. This tensor can’t be defined accurately in the model so they use what they have defined (namely V)
For the other cores, no; they work out the difference in the global integrals but then ‘average’, i.e. yes, smear, this over all grid cells, so no, they don’t really have as accurate a picture of individual cells’ contributions. As oms pointed out, they haven’t included model artefacts in the integral imbalance, so the value of 2W/m2 may not be correct.
What also does not seem to have been done is some attempt to create a dissipation profile (like neutral profiles again in plasma physics) and weight each ‘average’ factor by this. This however may be very hard to do (a range of ideal temperature profiles would have to be created and the model run to see the velocity distributions in space and simulation space) but it would be nice if the paper said this.
I posed another question to Dan on his blog. It relates to this post:
.
http://danhughes.auditblogs.com/2009/01/10/more-on-nasagiss-modele-viscous-dissipation-the-units/
.
Ediff appears to be calculated differently in this snippet of code. If DKE is a local variable, then this could be the version of ediff that Gavin says represents local viscous dissipation. If so, then the variable “ediff” takes on different roles in the code. Note at the beginning of the loop where ediff is calculated it first assigns it a value of 0 each time the parameters L, J, and I change . . . so ediff gets calculated explicitly for every combination of L, J, and I and that value is stored in array T at the end of each calculation. This would seem to indicate that this is a local function.
.
If this is true, then when talking about ediff, one would have to make it clear which ediff was being discussed.
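To make the structure I’m describing concrete, here is a schematic paraphrase in Python (hypothetical names and a hypothetical formula — this is not the actual Fortran, just the shape of the loop as I read it):

```python
import numpy as np

# ediff is reset and recomputed for every (L, J, I) cell, and the result is
# added to the temperature array for that same cell, i.e. applied locally.
n_layers, n_lat, n_lon = 2, 3, 4
dke  = np.full((n_layers, n_lat, n_lon), 5.0e13)   # KE lost per cell (J), made up
mass = np.full((n_layers, n_lat, n_lon), 2.5e13)   # air mass per cell (kg), made up
SHA  = 1004.0                                      # specific heat of air (J/kg/K)
T    = np.zeros((n_layers, n_lat, n_lon))          # temperature increments (K)

for L in range(n_layers):
    for J in range(n_lat):
        for I in range(n_lon):
            ediff = 0.0                                    # reset for this cell
            ediff = dke[L, J, I] / (mass[L, J, I] * SHA)   # cell's own KE loss as heat
            T[L, J, I] += ediff                            # stored locally, not smeared

print(T[0, 0, 0])   # ~0.002 K for these made-up numbers
```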
I have concluded that the process in the ModelE code isn’t viscous dissipation.
Thanks, Dan. I find it interesting that the papers you cite indicate the CCSM has a similar issue.
Ryan O,
The Boville and Bretherton paper alludes to viscous dissipation being handled by the dynamical core of the atmospheric model, not by the flux coupler. The flux coupler is used to help deal with issues arising between different modules of the CCSM – such as the CAM and the CLM – and from what I understand, it was completely re-coded for the imminent CCSM 4 release.
When looking through the scientific documentation on the dynamical cores, I found this section on horizontal diffusion corrections. Are these the manual corrections we’re looking for? If so, it should be very easy to locate the corresponding code in the CAM to have a baseline for searching through the ModelE.
EDIT TO ADD: Link messed up. It’s section 3.1.17 of that document.
A few comments. Don’t know if this has been mentioned, but there is a “How-To” file for the GISS Model E at
file:///Users/willis/Documents/%20Climate%20Models/modelE1_pub/doc/HOWTO.html
Here is a slightly commented version of the code in question, any assistance appreciated
From this, it appears that the units must be joules/m2.
Next, there is a file in the Model E dataset called “conserv.txt” which says
I liked the “should be conserved” as opposed to “will be conserved” ….
Finally, there is a slightly outdated list of what they’re doing to the GISS model at
http://www.giss.nasa.gov/~gavin/muddle.html
Loved the name …
w.
Counters,
.
Thanks. I hadn’t yet read anything on CAM/CLM and I misunderstood the quote. 🙂
.
What started this, though, was really a question on conservation of energy. The viscous dissipation part came out of the explanation by lucia on how a climate model wouldn’t naturally conserve energy unless viscous dissipation was accounted for.
.
So for my part, anyway, I’m a bit more interested in how energy is conserved in the CCSM than viscous dissipation itself (I did check out the part of your link on energy conservation, but it will require more time than I have at the moment). Given the way the models handle viscous dissipation and the fact that the models are numeric approximations, it would appear unlikely that energy would naturally be conserved. Unless I’m reading the blurb on the coupler incorrectly, it seems as though the coupler is responsible for making sure that energy is conserved across whatever modules are being run for a given experiment.
.
If I’m wrong on that assumption, any documentation you could point me to would be much appreciated. 🙂
My bad, I was looking at a file on my desk and not on the web. The HOWTO file mentioned above is part of the Model E tarball.
Also, someone commented that
I still don’t understand why they don’t add it back in where it occurred. Although the error may be small when averaged over the planet (remember Gavin said 2W/m2), it likely occurs in local areas due to local conditions, and there it is likely much larger.
Also, I don’t understand why there is no murphy gauge (a gauge designed to tell you when Murphy’s Law is making your program act up), not just at the global level, but at the local level as well. If it’s 0.4 W/m2 globally, how much is it where it is occurring? And why?
w.
Okay, I see where the discussion is coming from, Ray. The Scientific Description of the CAM that I linked to, then, should be precisely what you’re looking for (with respect to the CCSM, such documentation doesn’t exist for the ModelE). There was a special issue of the Journal of Climate which coincided with the last major release of the CCSM back in the summer of 2006 which will have great supplemental information, although the very specific stuff you need will likely come from the Scientific Description and the Boville paper.
As for the coupler, its portal can be found here, but the link to its pertinent documentation is dead; I’ve already e-mailed the webmaster and hopefully it’ll be fixed soon.
counters (Comment#8499)
(This is my first walk through this thread so what follows might have been discussed already. This comment is on the fly).
“It’s an interesting point to state explicitly: No single model result is trustworthy; it’s only when we have a large number of results together that we can start interpreting our data.”
Analogy time. It’s the Olympic games 1000m track event. There can be only 8 people in the final and one winner there. You are a selector from a country with a candidate. You model whether you should go to the expense of entering your country’s candidate. The results of the model are more accurately projected by analysis of past results, statistics on performance, rate of improvement, etc. It would make very little difference if the finals were run on a wider track with 16 finalists.
That’s how I read your comment. There is no strength in numbers if they are all wrong. Increasing the number of models usually refines “precision” (scatter) more than “accuracy” (close to the best answer), especially when the modellers rely on each others’ data and thought concepts.
It’s a fine point, but I am not impressed by frequent suggestions that getting more supercomputers and doing more simulations is going to improve accuracy. If concepts like sub-grid cell size events are included, they remain a problem intractable to replication.
Dan #8639 – I think you’re looking for the wrong thing – the sort of viscous stress that might appear in an equation of laminar flow. That isn’t in GCM equations – the Reynolds number is huge. Instead, as Lucia suggested above, you need some sort of eddy viscosity to express the diffusion of momentum, with consequent conversion of KE to heat. In an engineering application, you might use a turbulence model to get it. However, again this doesn’t work in atmosphere models; turbulence models generally assume isotropy, and on the GCM grid scale, the atmospheric flows are not at all isotropic.
So models generally use some estimated eddy viscosity. If you look way back at GISS in 1974
http://pubs.giss.nasa.gov/docs/1974/1974_Somerville_etal.pdf
you’ll see this discussed (sec 4), and they use an empirical eddy diffusivity of 0.1 m^2/s. No doubt this has been refined since.
Willis #8653
I still don’t understand why they don’t add it back in where it occurred. Although the error may be small when averaged over the planet (remember Gavin said 2W/m2), it likely occurs in local areas due to local conditions, and there it is likely much larger.
They do add discrepancies locally where possible, as in the code fragment that I referred to in #8579. The 0.4 W/m2 reflects what’s left. It’s true that it could be added back where it is found, and probably is. That isn’t necessarily where it “occurred”, which could lead to smearing. I think it is different from the 2 W/m2 that Gavin is talking about, which is KE loss due to (vertical) momentum diffusion, and is added in.
Incidentally, the code fragment I referenced above reappears (with a similar comment) in the subroutine DISSIP, which is authored by none other than Gavin. I’m guessing it has been inlined for computer speed. DISSIP doesn’t seem to be called.
Turbulence modelling in GISS Model E is actually more elaborate than was described in that 1974 model. They calculate a local mixing length, which is probably the most appropriate for the circumstance. This is all in the file ATURB.f, where the momentum diffusion ( sort of divergence of viscous stress) is added in to the momentum equation in subroutine atm_diffus().
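To give a flavor of what a mixing-length closure amounts to (toy numbers only; the actual scheme in ATURB.f is more elaborate), Prandtl’s prescription computes the eddy viscosity from the resolved shear and a mixing length, and the same shear then implies a dissipation rate:

```python
def mixing_length_viscosity(du_dz, mixing_length):
    """Prandtl mixing-length estimate of the eddy viscosity (m^2/s)."""
    return mixing_length ** 2 * abs(du_dz)

def dissipation_from_shear(du_dz, mixing_length):
    """Implied KE dissipation per unit mass, epsilon = nu_t * (du/dz)^2 (W/kg)."""
    return mixing_length_viscosity(du_dz, mixing_length) * du_dz ** 2

# Hypothetical numbers: a 30 m mixing length and 10 m/s of shear over 1 km.
shear = 10.0 / 1000.0
print(mixing_length_viscosity(shear, 30.0))   # 9.0 m^2/s
print(dissipation_from_shear(shear, 30.0))    # 9e-4 W/kg
```

For scale, simple arithmetic says a 10^4 kg/m^2 atmospheric column dissipating at an average of 2e-4 W/kg comes to about 2 W/m^2 — the order of the number quoted in this thread. Whether the real numbers actually work out that way is exactly the open question.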
Dan :
(4) If the physical effect is important relative to calculation of radiative quasi-equilibrium / equilibrium, has it been correctly and accurately incorporated into the GCM models / codes?
.
The answer is already contained in the question .
The effect is unimportant and it is unimportant how or whether it is included in the GCMs .
Why ?
Because they are precisely describing a system in quasi-equilibrium .
They do not solve (and can’t do so) time dependent PDEs describing multi phase radiating fluids out of equilibrium.
They only follow energy , mass and momentum conservation through a discrete and finite number of grid cells supposed each to be in quasi-equilibrium .
It is equilibrium thermodynamics to which you added the simplest form of the velocity field .
But in such a description viscous dissipation (or friction) is only an additive constant term.
So it is indeed unimportant for the equilibrium temperature as long as it is small and additive, because it is only a small constant correction to the KE.
If you neglect it you will get a very slightly cooler and windier Earth, and if you take it into account you get a very slightly warmer and less windy Earth.
.
You have perhaps noticed that I didn’t use your word “accurate” because that is a completely different matter .
Do the GCMs accurately describe the effect of the dissipation where accurate means relevant to the REAL Earth ?
Here the answer is a resounding no, because the real Earth is nowhere near any equilibrium or quasi equilibrium.
And it is far from it by huge amounts – the real Earth is neither a steady state turbulent flow in a pipe nor a plane wing .
The day half radiates much much less than it receives and the night half the opposite .
Neither is in equilibrium and of course a sum of non equilibriums doesn’t become an equilibrium .
It is not in pseudo equilibrium defined by integrating over a month or a year either, and if, as R. Pielke has been saying for ages, somebody calculated/measured the total internal energy of the Earth, he’d see that it varies significantly and chaotically on all time scales.
Viscous dissipation, like cloud cover, ocean currents and latent heat transfer, belongs to the many rather small imbalances that together achieve the miracle of keeping the Earth system within its chaotic attractor, which they have been successfully doing for the past some 4 billion years and to which we owe our presence here.
But as you said in the nice expression – if a system tries to do 0.0 = 0.0 , then every epsilon matters .
As the GCMs look at an Earth that doesn’t exist namely in quasi equilibrium where one has 0.0 = 0.0 all the time , they can’t see any of the real problems and transitories (on ALL time scales !) that occur and have a non epsilon impact for an epsilon variation of a control parameter .
.
Of course the problem is not the question of whether the range of locally dissipated power is somewhere between 1 and 4 W/m², but the question of what the system does when the dissipated energy varies wildly in time and space and is not an independent additive variable.
This question can’t be answered by GCMs because they can’t treat non equilibrium processes .
Not even mentioning true chaotic systems.
TomVonk (#8670)
What follows is a philosophical question because I know nothing, or next to, about non equilibrium studies.
When we do any analytic integration, we assume infinitesimally small volume elements over the variables. Some of the variables in this problem are x, y, z, t. For this infinitesimally small dxdydzdt, is it legitimate to assume equilibrium of the other variables? If not equilibrium, is it possible to quantify a vector of the non-equilibrium direction, i.e. a first order term, or even a second or third?
I am trying to ascertain whether one could solve the problem analytically, if one had infinite computing power, or the problem is inherently not accessible to analysis. Obviously nature does differentiate and integrate at the molecular level and up so I would expect that one could model that given enough computing power, but I may be wrong.
If the former, then the question becomes: to what fine structure in DxDyDzDt and other consequent variables one can go so that it might be a fair approximation of the infinitesimal, i.e. not in error.
anna,
Explicit, closed-form solutions are known for only a very few PDEs. Analytic solutions which exist for environmental flows are for fairly sanitized versions, e.g., hydrostatic, Boussinesq, isentropic, and (usually) incompressible and nondissipative. Above all, the system is linearized (throw out “unwanted” nonlinear terms).
If you want to include some nonlinear effects, you can go the weak nonlinearity route and perturb the (known, solvable) system by a small amount epsilon (try adding back in one or some of the missing terms). As long as the system is not “too nonlinear” and the epsilon is not “too big” then we can find a vector which some might choose to call a “sensitivity to forcing.” You may continue to ask for additional, higher-order terms ad infinitum, and these terms should matter less and less.
The difficulty is demonstrating that a real system of interest is not “too nonlinear.” If it turns out that your system is strongly nonlinear, then as TomVonk said, every epsilon matters. There are no clear first-, second-, third-, …-order, terms. In that case, your analytic computer of infinite power will show a solution which diverges further and further the more you compute in this way.
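In symbols, the weak-nonlinearity route looks schematically like this (not tied to any particular climate equation set):

```latex
% Expand the state about a known, solvable solution x_0 in powers of the
% small parameter epsilon (the added forcing or the re-inserted term):
\[
  x(t) = x_0(t) + \epsilon\, x_1(t) + \epsilon^2 x_2(t) + \cdots
\]
% Substituting into the governing equations and collecting powers of epsilon
% gives a linear problem at O(epsilon) -- its solution is what one might call
% the "sensitivity to forcing" -- plus higher-order corrections that should
% matter less and less, provided the system is not too nonlinear.
```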
Geoff, you stated that “There is no strength in numbers if they are all wrong.” Here and on CA where a parallel discussion is occurring (and, for that matter, the larger AGW debate), this assertion continues to be made.
With this elephant sitting in the room, no progress is going to be made. You can’t barge into a discussion of this technical nature with the assumption that the models are either right or wrong. The whole point of this discussion stems from the fact that we’re using formulations of basic principles (conservation of energy and mass, for instance) in our models, but out of necessity we’re solving these equations with approximations. The question is being posed whether or not these approximations (specifically with respect to how energy is being conserved or not conserved as a result of viscous motion) are introducing errors in the model. Furthermore, as Dan posed in Comment #8573, the issue of whether or not these errors dramatically affect the model results is a related albeit distinct question.
I’m not saying the models are perfect. I’m also not saying that perhaps within the coming generations of model evolution a major flaw might be found. But you can’t assume that the models are garbage right from the start. It’s wise to use the entire error bars associated with the models in discussing their solutions, but until someone demonstrates a fundamental flaw in the models (either a serious mis-coding of the equations it means to solve or a flaw in those equations themselves), it’s unwise to assume that the models are producing garbage results.
I agree with counters that it’s best to start from the assumption that the models are not just garbage.
Modeling is at least respectable in all fields. These climate models don’t look dramatically different from classes of models used in engineering. In some ways they deal with more complex flows– in other ways, not.
In fact, I would argue that, to some extent, the level of complexity of models in all fields tends to be comparable. In fields blessed with cleaner problems, codes use fewer approximations. In fields with more complicated problems, they use more approximations. In all fields we test.
Being the simple minded sort, my approach to testing is:
We say we can predict X.
We try to predict X, which we have not yet observed.
We collect data and see how well we predict.
(Of course, we also check other things. Does it hindcast etc. But no matter how well we do Y, we don’t say that proves we can do X. We check.)
With respect to the issue of viscosity: To some extent, I’m not convinced modelers have shown we can predict X. I know that none of the hindcasts are perfect, and the models show a cold bias for earth temperatures.
Therefore, at best, the models contain an approximation that leads to some noticeable biases. Dan and Willis uncovered this pesky dissipation issue— maybe it’s the dissipation issue. Who knows? (I don’t.)
But none of this is meant to suggest that if there is an issue it means the models are utter trash. They do get some things correct at least qualitatively. Or, to use common words that make people laugh, “semi-quantitatively”.
Counters, to be fair, though, the scientific process dictates that those who make predictions have the burden of proof. So while simply assuming the models are garbage is unfair, so is assuming they are accurate.
.
Until the effects of the errors on the results of model runs are determined, you can make no statement of confidence in the models either way.
.
Additionally, the amount of energy imbalance caused by mathematical artifacts from the numerical approximation (which was the original focus of the topic) is an entirely different question from how well the model physics replicates the actual earth.
.
So even if the effects of the mathematical artifacts are known, you still must show how well the model physics replicates the actual earth to truly be able to put confidence levels on the model predictions.
.
This is why I find statements like “likely” or “highly likely” in the context of model predictions as being misleading. They ultimately are not statements of confidence for the model matching reality; they are representative of the spread between model runs. The caveat “if the models are an accurate representation of the physical earth within a range of +/-X” should be added to the front of every such statement. Additionally, the confidence of the prediction should be expanded by the range of +/-X listed in the caveat.
.
And if you can’t put a number to that +/-X, then when it comes to the question of how well the models predict the behavior of Earth’s climate, you can not legitimately define any quantifiable confidence levels.
.
So I agree that making the a priori assumption that the models are garbage is unwise. By the same token, however, making the a priori assumption that they are not garbage is equally unwise.
.
The burden of proof is on the predictor. It is not the other way around.
Re Counters #8710
I do not recall using the word “garbage”. Indeed, some of the climate models can produce extremely precise results. Here is a cut and paste from an observation of a year ago –
From: “A comparison of tropical temperature trends with model predictions”, David H. Douglass, John R. Christy, Benjamin D. Pearson and S. Fred Singer
This compares the results from 22 models of the temperature change with altitude to the tropopause. I wanted to see how well the Australian CSIRO performed.
Here is the Table –
Table II(a). Temperature trends for 22 CGCM models with 20CEN forcing. The numbered models are fully identified in Table II(b). Trends are in milli °C/decade.
Each row below gives: model number, number of simulations, then the trend at each pressure level (hPa):
Surface 1000 925 850 700 600 500 400 300 250 200 150 100
1 9 128 303 121 177 161 172 190 216 247 263 268 243 40
2 5 125 1507 113 112 123 126 138 148 140 105 2 −114 −161
3 5 311 318 336 346 376 422 484 596 672 673 642 594 253
4 5 95 92 99 99 131 179 158 184 212 224 182 169 −3
5 5 210 302 224 215 249 264 293 343 391 408 400 319 75
6 4 119 118 148 175 189 214 238 283 365 406 425 393 −33
7 4 112 460 107 123 122 130 155 183 213 228 225 211 0
8 3 86 62 57 58 82 95 108 134 160 163 155 137 100
9 3 142 143 148 150 149 162 200 234 273 284 282 258 163
10 3 189 114 200 210 225 238 269 316 345 348 347 308 53
11 3 244 403 270 278 309 331 377 449 503 481 461 405 75
12 3 80 173 114 115 102 98 124 150 161 164 166 142 4
13 2 162 155 170∗∗ 182 225 218 221 282 352 360 340 277 −39
14 2 171 293 190 197 252 245 268 328 376 367 326 278 69
15 2 163 213 174 181 199 204 226 271 307 299 255 166 53
16 2 119 128 124 140 151 176 197 228 271 289 306 260 120
17 2 219 −1268 199 223 259 283 321 373 427 454 479 465 280
18 1 117 117 126 148 163 159 180 207 227 225 203 200 163
19 1 230 220 267 283 313 346 410 506 561 554 526 521 244
20 1 191 151 176 194 212 237 254 304 387 410 400 367 314
21 1 191 328 241 222 193 187 215 255 300 316 327 304 90
22 1 28 24 46 73 27 −26 −26 −1 20 24 32 −1 −136
Total simulations: 67
Average 156 198 166 177 191 203 227 272 314 320 307 268 78
Std. Dev. (σ) 64 443 72 70 82 96 109 131 148 149 154 160 124
Australia’s results are remarkably close to the mean. Australia’s CSIRO is model number 15 in the table. Starting from the lowest altitude and going upwards, with results in millidegrees centigrade per decade, Australia vs. mean is
Aust Mean
163 156
213 198
174 166
181 177
199 191
204 203
226 227
271 272
Then the figures start to diverge a little more.
307 314
299 320
255 307
166 268
53 78
The last digit in these columns is one thousandth of a degree C per decade.
As a person with a general understanding of measurements and of climate model variability (see the standard deviation given in the table), I would suspect that others would find this millidegree-level correspondence fascinating. Indeed, if I were an examiner, I would ask for the raw data and the calculations to be explained in detail. Then, if all checked out, I’d say “Well done, chaps, you are as good as the ROW”.
Is this the type of criterion you use, Counters, to say that a model performs well? Cheers, Geoff.
Dan Hughes (#8639),
As we have discussed on CA, the climate models purport to solve the viscous, forced NS equations under the large scale assumption of hydrostatic equilibrium. But that is not the case because
the natural cascade of enstrophy to smaller scales of motion is altered:
1) in the case of inappropriate water vapor parameterizations, by convective adjustment, i.e. the convective adjustment redistributes the overturning in a vertical column caused by the inaccurate parameterizations in order to force hydrostatic balance. This is an ad hoc attempt to maintain hydrostatic balance and there is no mathematical or numerical theory that supports such a gimmick. In other words the model is no longer accurately approximating the full viscous NS equations;
2) and because the dissipation in the model is orders of magnitude larger than in the real atmosphere, the cascade in the model is very different from the correct cascade. The models achieve a long term balance between the input and dissipation of energy by using forcing terms that are larger than they are in reality. This is why the forcings must be tuned each time the mesh size is decreased. If they were physically accurate, that would not be necessary.
The IVP for the hydrostatic system is ill posed and no numerical method will converge to the correct continuum solution. The unbounded exponential growth of numerical solutions was demonstrated by numerical convergence runs on CA, and a mathematical reference discussing the issue was cited (available on request).
Jerry
Jerry,
The climate model documentation often talks about a “control period” where models are run without forcings and are expected to be stable. Runs that do not remain stable are discarded.
Is this a normal practice with these kinds of models, or is it evidence of the “ill posed” nature of the models?
Re Counters #8750 (continued)
Having conceded you an example where strength in numbers (the average of 67 simulations) appears to be highly precise, I guess you have to say that I’m not an elephant sitting in the room. Personally, I want to see accurate models and I do not want to impede progress to their achievement. That is not the same as suppressing criticism where criticism is invited.
There are other problems for the numerically educated.
One problem is that there is a tendency to publish error terms of models simply based on the models put forward as best for a multi-model comparison. The error terms should be calculated for all model runs done with a particular model, unless there is an obvious error like a scribal error. Those models that go flying off to infinity should also be included, otherwise there is cherry-picking going on. Now, if you include the models that don’t look so good to the modellers for ill-defined, subjective reasons, and calculate error terms, you end up with such huge errors that a reasonable person would state that the basic assumptions are ill-posed (as Gerald Browning has done for a part of the models).
Even if you accept the error bounds calculated from the chosen models, then look at actual data, there are cases where the actual data spend as much time outside the confidence limits as inside and in some cases have a different trend slope sign. Some if not all models are simply wrong, despite their strength in numbers.
Numbers again, it is remarkable how many papers show fundamental changes to climate parameters commencing from the date of the publication. Even graphs (like the frequency of drought in Australia) can suddenly acquire a different texture in the publication year and following.
When models are at this early stage of maturity, it is simply wrong to publish them as “settled science” to frighten small children and Policy Makers. They should be published as preliminary investigations in the hope that others can suggest improvements.
AnnaV
.
OMS answered your question correctly, though only partly.
The PDEs (or even ODEs) that describe systems exhibiting chaotic behaviour have no analytical solution, by definition.
Those equations can’t be solved “accurately” numerically either, again by definition.
The reason for that is known – there is at least one positive Lyapunov exponent and it is the cause of exponential divergence of trajectories in the phase space.
If the trajectories were not bounded , the system would inevitably always blow up after a certain time .
It is possible to describe visually and intuitively what the system does owing to the fact that its trajectories are bounded .
You take a small sphere of points within the phase space of the system and observe how it deforms .
First it stretches fast in a certain direction transforming the sphere in a rugby ball .
Then the long axis reaches the maximum dimension allowed by the physics of the system (as if its expansion was stopped by walls) but as the stretching “force” is still there, it begins to fold on itself.
From there on it stops looking like any ball (round or elliptic) and undergoes infinite sequences of stretching and folding that smear the points in strange shapes all over a certain irregular volume that is called the chaotic attractor.
So for longer times the only thing you can do is to describe the geometry of the attractor and the dynamics of the stretching and folding, but you are no longer able, and never will be, to say where the points of the initial small sphere are and where they will be a bit later.
The energy dissipation is a major player in this process regardless of its relative magnitude compared with other forces .
So it is not that one is unable to say anything about the system, but one is unable to make any classical prediction of the form: “At time T the values of the dynamical parameters Xi will be Yi with an uncertainty of Deltai.”
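If you want to see this with your own eyes, here is a toy numerical experiment (the standard Lorenz system with a crude Euler integrator — nothing to do with any climate model): start two trajectories that differ by one part in a million and watch the separation grow roughly exponentially until it saturates at the size of the attractor.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the classic Lorenz system (toy integrator)."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])     # nearly identical initial condition

for step in range(1, 4001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(step, np.linalg.norm(a - b))
# The separation grows (positive Lyapunov exponent) even though both
# trajectories remain bounded on the attractor.
```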
.
Which brings us to the question of whether climate models are garbage.
Well, mathematics is one big tautology.
They say the same thing all over in different words .
The role of a proof is precisely to make sure that it is always the SAME thing .
So do the climate models .
If you write down equations describing an equilibrium system and code them in a model then the results will be hopefully consistent with the behaviour of an equilibrium system .
I believe that most climate models are not garbage in the sense that they produce results that are consistent with what the system would do if it was what the modellers believe it is .
Aka an equilibrium system .
Or a linear system with gaussian noise in it . Or even with non gaussian noise .
Whatever .
.
But now comes another , for me a much more important question : “Does the model describe the RIGHT system ?”
I even allow for a weaker variant : “Does the model describe a system that behaves APPROXIMATELY like the real system for time scales at which the study is done ?”
We all agree that the answer is no on the first question .
So everything is now in the second .
If the answer is no , then the models are indeed garbage in the sense that garbage is something you have no use for .
If the answer is yes then the model has a limited usefulness, where the limit is precisely the time scale at which the study is done.
At the right scales there are scraps of usable things within the garbage and at larger scales all is garbage but the added value is that we might understand WHY it is garbage .
.
The original thing with climate models is that they are garbage at small scales as well as at large scales because chaos dominates there .
However they are supposed to be at least partly useful within a window of scales that would go, say, from a couple of dozen years to a couple of hundred years, and only for variables spatially averaged over rather large areas.
I am extremely skeptical and consider it highly unlikely that such a window exists and is precisely situated on these scales .
It would be the first example of a chaotic system that miraculously offers a scale window where it behaves like your usual good old deterministic system with or without “noise” .
On top of that, I can’t begin to think of a physical reason why it would do so.
That’s why I have little confidence in the predictive virtues of climate models, this confidence tending to 0 as the time scales increase.
Tom Vonk #8757, Thank you for that. I have undergrad math, and had trouble visualizing this discussion in 3D space until this post.
Thanks Tom and OMS.
Another two questions.
1) Are you aware of the Tsonis et al. paper where they have used neural nets to approximate the chaotic system? If so, what do you think of it as a method of modeling? The paper can be found here http://www.nosams.whoi.edu/PDFs/papers/tsonis-grl_newtheoryforclimateshifts.pdf
2) I have asked whether one could make an analogue computer model of the climate. Analogue computers and computing, back in the 1960s, were competing with digital ones and were orders of magnitude faster than the digital ones in solving differential equations. The all-purpose usefulness of the digital computers won out and I have not heard of analogue computing recently. Somehow it seems to me that maybe a fresh look at the methodology might prove useful for chaotic systems. Somehow the Tsonis et al. paper reminded me of this computing method. One needs a good electrical and electronic engineering background though.
AnnaV
1)
I have always liked Tsonis’ papers and think in a very similar direction to his.
Neural networks are one of the methods to analyse certain chaotic systems.
Not the best one imho but I don’t know it enough to make a well informed comment .
2)
I am not aware that Navier Stokes equations have been (or even can be) treated analogically .
Perhaps Lucia knows more about that .
As for chaotic systems, well, some of them are precisely simple electronic circuits (a diode plus a capacitor, for example) 🙂
The problem with chaotic systems is that the set of non linear equations describing the chaotic system of interest generally cannot be found to also describe some electronic circuit.
Look even at the very simple Lorenz system – the non linearities are really particular and you won’t find a circuit emulating them.
Actually the neural network approach is a kind of analog computer approach, but you see that it is really far from the classical method that consisted of transforming equations (simple ODEs) into circuits where currents or voltages were the solutions.
Tom– I’m not aware of anyone running analog simulations of the NS.
I do remember my undergraduate text book gave some circuit diagram representations of pipe networks. I found it confusing, and asked why anyone would want to represent things that way. (The confusing thing about the representation is that it doesn’t make anything any easier to compute or understand.)
The professor at the time said it was because pre-calculator people used to set up analog circuits as approximations. The method was a pain in the neck to implement because you have to guess the correct size resistor to put in the circuit, take some measurements, and then adjust the resistance if your guess was off.
People don’t do this for pipe networks anymore, and the chapters discussing this vanished from undergraduate fluid mechanics texts, eliminating a lot of confusion.
That’s the only trivia I know about analog computer methods and fluid dynamics!
I suspect that whether or not analog computations might be useful, people might avoid them for lack of access to any sort of analog computer.
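For anyone curious, the old analog trick amounts to something like this toy digital paraphrase (made-up numbers): a pipe behaves like a resistor whose “resistance” depends on the flow through it, so you guess a resistance, solve the Ohm’s-law-like problem, and re-adjust — the same guess-measure-adjust loop, just done in software rather than by swapping resistors.

```python
def pipe_flow_by_analogy(head_diff, k_loss, q_guess=1.0, n_iter=8):
    """Solve head_diff = k_loss * Q**2 for Q by repeated linearization.

    Each pass treats the pipe as a resistor with R = k_loss * Q_guess (so that
    head ~ R * Q), solves the linear problem, and relaxes toward the answer.

    head_diff : driving head across the pipe (m), hypothetical
    k_loss    : pipe loss coefficient (s^2/m^5), hypothetical
    """
    q = q_guess
    for _ in range(n_iter):
        r = k_loss * q                    # current linearized "resistance"
        q = 0.5 * (q + head_diff / r)     # solve, then relax the guess
    return q

# e.g. head_diff = 4 m, k_loss = 1: the exact answer is Q = 2 m^3/s.
print(pipe_flow_by_analogy(4.0, 1.0))     # -> ~2.0
```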
Lucia, we did the same thing for physics simulations where mechanical systems were represented by an electrical circuit. The reason we did it, though, seems to be different than the reason you did it. The point in our case wasn’t to recommend doing simulations that way; the point was to show the limitations of drawing conclusions from an analogous or simplified system (though we weren’t told that at the outset).
.
In many cases, the outcome over a short enough time interval was close enough to be within instrument error of measurements of the actual mechanical system. Let the situation run long enough, though, and the answers always diverged significantly.
.
So we went through the sources of error – non-ideal behavior of electrical components, variations in the actual component values from the listed values, how clean or dirty the power source was, etc. etc. We’d discuss all these things, figure out how to quantify and fix them (or approximately fix them), run the simulation again, and so forth.
.
The interesting thing was most of the errors simply didn’t matter. Even after correcting for them, the simulation diverged from reality in about the same timeframe. Some did matter – correcting actual resistance values, or adding heat sinks to minimize resistive heating, sometimes helped improve the simulation, though not always.
.
The projects were meant to demonstrate the uncertainty principle and the problems with using simple physics (ideal RC circuits to represent oscillation, for example) to simulate complex systems by disregarding “small” contributors. We never abstracted the results to chaos theory or the particulars of numerical computer simulations because those were beyond the scope of the course.
.
The main points, though, were:
.
1. Many common-sense simplifications that work in one case may not work in other cases.
.
2. Sometimes even large errors don’t affect the simulation. Other times, incredibly small errors can. It is difficult (if not impossible) to tell this ahead of time.
.
3. Running simulations multiple times may not improve the predictive power of the simulation, especially in systems where oscillatory behavior is important, and this is difficult (if not impossible) to tell ahead of time.
.
4. The source of the divergence cannot always be found.
The end of Ryan O’s comment (Comment#8783) is very true for physics models and even when you have empirical data.
However, in the course of looking at viscous dissipation in the NCAR model (linked by counters earlier and at http://www.ccsm.ucar.edu/models/atm-cam/docs/description/ ) and from reading a familiar relationship in a post on CA from counters (don’t worry, I’m not picking on you) to the effect that 2 x CO2 approximates to 3 W/m2 of radiative forcing (similar to an old Hansen chestnut), I went and looked at the physics behind the absorptivity and emissivity relationships dealing with CO2, H2O and other constituents.
Now this may require a new thread, Lucia, so I’ll ask my question to the group: In section 4.9.1 of the model description about longwave radiation, there is a lot of maths about how overlapping spectra are dealt with. There are accommodations for N2O and CFCs, but there does not seem to be any serious discussion of the overlap at 15 microns for H2O and CO2. There is no obvious term in the calculations of (4.240) and (4.241) for CO2 and H2O. It is inherent in the N2O calculation, but the broad-band method appears to base this on the N2O 529 cm-1 band. So how does the CAM3 model deal with this?
It’s just that some of the discussions are about incorrect parameterisations and how energy imbalances may be cropping up and being assigned to certain processes when they may come from an incorrect definition of other sources and physics in the model. Again, this may be better in a new thread. It just struck me as odd that the NCAR description does not go into the obvious and well-known overlap of part of the 15 micron CO2 band with water vapour. It would be nice to see how this was attempted.
RyanO,
We never went so far as to set up any of these analog computers to solve pipe network problems. There wasn’t even any lecture discussing it. There was just a section in the textbook. It was mentioned briefly.
Maybe if I’d been 5 years older, we would have done more with it. If I’d been five years younger, the section would have been edited out!
My understanding was (and still is) that there was a time when people used slide rules, and they could get approximate steady-state solutions this way, and it wasn’t much more difficult than setting up the equations and solving iteratively. Given how many things weren’t known exactly, it may have been no more difficult than other ways. Still. . . no one does this these days.
All those points 1-4 apply to a sizeable number of fluid mechanics and/or heat transfer problems and applications.
Point 5 might be: Once people identify the source of a divergence and understand when it needs to be considered, suddenly everyone thinks the issue is obvious. They then forget there might still be yet another issue that will crop up in a slightly different problem!
Ryan (#8754),
> The climate model documentation often talk about “control period” where models are run without forcings and expected to be stable. Runs that do not remain stable are discarded.
Is this a normal practice with these kinds of models or is it evidence of the “ill posed” nature of models?
A numerical model based on the nonhydrostatic system (essentially the inviscid, compressible NS equations) can be run without forcing for short periods of time (a few days) and numerical solutions will converge with
decreasing mesh size (see CA for examples). The nonhydrostatic system is
a hyperbolic system (in the absence of vertical shear) and both the mathematical and numerical theory of such systems is well understood.
A numerical model based on the hydrostatic system (essentially the inviscid, compressible NS equations with the added assumption of hydrostatic equilibrium, i.e. the neglect of the vertical acceleration term
in the vertical momentum equation) will produce numerical solutions that will diverge in an unbounded exponential manner in a shorter and shorter time period as the mesh size is reduced (see CA for examples).
This is exactly the type of behavior expected of an ill posed system
and the behavior is hidden by gimmicks such as convective adjustment, excessively large dissipation, and coarse meshes.
No model based on either inviscid system can be run for long periods of time because of the nonlinear cascade of enstrophy (vorticity) to scales smaller than those resolved by a fixed mesh. Thus dissipation must be added to the systems to prevent the buildup of enstrophy at high wave numbers causing the model to “explode”. The problem with this approach is that the amount of dissipation used in the coarse mesh climate models
is orders of magnitude larger than in the real world, leading to an incorrect cascade. Large dissipation can stabilize (prevent a blowup of) any numerical model because one is essentially solving a heat equation instead of a hyperbolic system with very small dissipation. However, the impact on the accuracy of the numerical solution is substantial (see Browning and Kreiss, Math. Comp. 1986).
Let me know if this answers your question – I will be happy to expand.
Jerry
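A toy illustration of the kind of behaviour Jerry describes, using the textbook ill-posed problem (the backward heat equation) rather than the hydrostatic system itself: each Fourier mode grows like exp(k²t), the largest wavenumber a mesh can carry scales like π/h, so refining the mesh shortens the time it takes round-off-sized noise to reach order one. This is an editorial sketch, not anything taken from the climate models.

```python
import math

# Toy illustration of ill-posedness using the backward heat equation u_t = -u_xx:
# the Fourier mode exp(i*k*x) grows like exp(k**2 * t). A mesh of spacing h can
# carry wavenumbers up to about pi/h, so refining the mesh makes the fastest
# resolved mode grow faster and the blow-up time shrink.
eps = 1e-16                                      # round-off-sized perturbation
for h in [0.1, 0.05, 0.025, 0.0125]:
    k_max = math.pi / h                          # largest resolved wavenumber
    t_blowup = math.log(1.0 / eps) / k_max ** 2  # time for eps*exp(k^2 t) to reach 1
    print(f"h = {h:7.4f}  k_max = {k_max:8.2f}  blow-up time ~ {t_blowup:.2e}")
```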
Lucia:
.
#5 is quite true! 🙂
.
I think the advantage of the analog computer method is that you could easily perturb the system (by changing a resistor, say) to at least get a qualitative understanding of the impact over a reasonable time frame. You don’t have to go recalculate everything. With that being said, the results of such simulations were always viewed with a good amount of skepticism. They were guides, not predictions. They were also limited in what they could simulate because electrical components only have so many properties . . . which limits what you can model without needing a computer to design the circuit . . . and if you already have the computer, why build the circuit?
.
Regardless, I think sometimes people put too much stock in modeling as a predictive tool. Somehow “computing power” gets translated into “accuracy”. It does allow you to introduce more physics and finer scales, but this doesn’t necessarily mean the system produces more accurate results. The problem is that the source of the inaccuracy is often quite independent of the computing power, and in some cases, can be entirely unavoidable.
.
With that being said, I think modeling is incredibly important in helping to understand the physics. I’m just extremely skeptical when it moves beyond that realm unless there’s substantial experimental data to quantify the conditions under which the model will produce accurate results. In the case of the climate, unfortunately, such a data set does not exist.
lucia, you say:
Generally that is true. However, some caveats.
First, the global climate system is likely the most complex and least understood system we have ever tried to model. It is not only that the field is “more complicated” as you say. It is immensely, outrageously more complicated. It is also very poorly understood, and we have only a quarter century of trying to model it.
Second, I would distinguish between “approximations” and hand-waving. Putting in something just because it works is hand-waving.
Third, many of the quantities of interest in climate are inherently not measurable. Take albedo. Absent a fleet of satellites measuring every possible angle of escape of reflected light rays, we can only measure how much light is reflected back at the satellite. While this is of interest, it is not the total albedo. To calculate that single metric alone requires a separate computer model. So we end up with our computer being fed with and tested against the results of other computer models, not reality.
Fourth, climate is unique among the sciences in that its field of study is not directly observable. Climate is the average of weather over a sufficiently long timespan. This means we cannot weigh climate, measure it directly, or observe it. This makes the modeling even more difficult.
Fifth, nature tends to run at the edge of turbulence, at all times partway into the turbulent region. And turbulence is computationally extremely intractable.
The result of all of this is that our computer model estimates of, say, the climate sensitivity (two to four degrees per doubling of CO2) have not changed appreciably in the last quarter century. I know of no other field where this is true, and it is a testament to the great complexity and subtlety of the problem compared with our simplistic understanding of the system.
w.
Willis (#8802),
Well stated (as always).
Jerry
Thanks to everybody who replied to my analogue computer query.
My brush with them was back in my graduate student days, when every bit of research info was fascinating and the brain was young and fizzing (1962/).
My memory had logged the following:
They were built to the problem
Their main advantage, and the reason they were advocated at the time, was that they were orders of magnitude faster than digital calculations. (I do not know whether this would be true today, since circuits for digital calculations are very much faster now. On the other hand, the circuits used for analogue calculations would also be faster.)
I also had the precis (erroneous, judging from the present discussion?) that differentiation and integration circuits could be built Lego-like for any problem with coupled differential equations. If this were true and the “orders of magnitude faster” still holds, it would be one reason to explore them, even for neural nets I would suppose.
From Ryan O’s discussion I see inherent problems in this that would cripple a climate model even if my Lego picture holds .
Tom, I was thinking of modeling directly the involved differential equations at the thermodynamic level, not the meta level of chaos equations.
AnnaV
Tom, I was thinking of modeling directly the involved differential equations at the thermodynamic level, not the meta level of chaos equations.
.
This is done even if not analogically .
As chaotic regimes are intractable by the use of the continuous Navier-Stokes equations, people have been trying to understand what happens “bottom up” for dozens of years.
For example, the chaotic Rayleigh-Taylor flow (a heavy fluid layer initially in unstable equilibrium on top of a light fluid layer) has been simulated at atomic scales by using only the equations of motion of the individual atoms.
This simulation shows clearly how, when and why the flow goes into chaos (descending bubbles and “mushrooms”) and how the viscous dissipation works – something no Navier-Stokes approach can do.
.
However, the problem is obviously the huge need for computing power, which limits the size of the system to some 10^8 atoms and rather short times.
This will be cranked up to the mm scales one of these days.
But it will still stay forever far from the scales at which real climate flows operate (ocean or atmosphere currents rising or descending by gravity).
.
An amusing side effect is that in order to have chaotic structures at these nanoscales, it is necessary to have unstable wavelengths several times smaller than the size of the sample.
And as these wavelengths decrease with increasing gravity, it is necessary to apply astronomical g’s in order to see the chaos.
So the simulated sample corresponds physically more to a neutron star than to the earth’s atmosphere 🙂
.
You can read the results here :
http://www.thp.uni-duisburg.de/~kai/kadauRTPNAS.pdf
Tom – could you clarify what you mean by “how the viscous dissipation works what no Navier Stokes approach can do”?
Thanks!
oms
lucia,
Following up on your earlier remark about maintaining energy balance, wouldn’t it be better to start there and add complexity as needed? Following on from your success in matching some of the Model E results with a simple first-order model, how about this: with H the heat content of “the earth” (or maybe just the oceans)
…oops…to continue…
dH/dt =-kTH +mTS + P
where the first term on the right is heat loss, the second heat gained from TSI (S) and P is a term independent of H which includes everything else that I haven’t thought of (like pirates). k and m are parameters (independent of H) and T is transmissivity of the atmosphere (same in both directions; much the same if different). Now some modelling on how T is influenced by (say) clouds; assuming the simplest model gives T = a-bH where a and b have some physical meaning (eg T goes down as H increases due to clouds). The heat equation gives
dH/dt = -k(a-bH)H + m(a-bH)S + P
After a bit of rearranging and factoring the quadratic in H this becomes
dH/dt = kb(H-W)(H-I)
with W,I functions of all the parameters. Take W>I. With all the parameters constant this can be solved but without that assumption the general behaviour is easy to see.
For intermediate H (W>H>I) dH/dt is negative, so the earth cools. H eventually reaches I, at which point dH/dt = 0 and the cooling stops. OK so far: the earth is normally in an ice age with a roughly constant temperature.
For H=W, a warm phase with dH/dt = 0. So it looks stable, except that since the parameters W depends on are not constant, it’s at best metastable. A small change to increase W and we have H<W, and we are back in a cooling phase, heading back to I. Still OK, the earth spends significant but relatively short periods in warm phases before returning to an ice age. The less good news is that for H>W (a small change in something lowers W during a warm period), dH/dt is positive and we have runaway warming.
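A minimal numerical sketch of the behaviour just described, integrating the factored equation dH/dt = kb(H−W)(H−I) with made-up parameter values (nothing here is calibrated to anything physical):

```python
# Toy integration of dH/dt = k*b*(H - W)*(H - I), the factored form above.
# All values are arbitrary illustration choices, not calibrated to anything.
kb = 0.05            # the product k*b
W, I = 10.0, 2.0     # warm-phase and ice-age equilibria, W > I

def run(H0, dt=0.1, steps=2000):
    H = H0
    for _ in range(steps):
        H += dt * kb * (H - W) * (H - I)
        if H > 1e6:              # runaway: stop once the growth is clearly unbounded
            return float("inf")
    return H

print(run(6.0))     # starts between I and W: cools toward the stable equilibrium I
print(run(9.99))    # just below W: still relaxes back toward I
print(run(10.01))   # just above W: dH/dt > 0 and the solution runs away
```

The first two initial conditions relax back to I; the last one, nudged just past W, runs away, which is the metastability described above.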
OMS
Tom – could you clarify what you mean by “how the viscous dissipation works what no Navier Stokes approach can do”?
Yes .
Navier-Stokes is a set of continuous PDEs.
It contains empirical coefficients (viscosity, conductivity) and needs boundary conditions (for example the no-slip condition).
So it actually explains nothing about the causes and dynamics of energy dissipation .
.
When the system is resolved at molecular scales it is a “simple” Hamiltonian system. There are no empirical coefficients, no boundary conditions.
It is discrete and non-homogeneous, and its dynamics depends only on the nature of the molecules, translated into the intermolecular potential.
The transport coefficients (viscosity , conductivity) are derived by solving the motion equations and averaging .
The molecules slip at interfaces .
The dissipation is explicitly computed, and the viscosity varies both in time and in space.
That’s why this approach is better suited and much more accurate for studying the chaotic regimes where N-S is intractable.
.
So N-S is a non-equilibrium equivalent of equilibrium statistical thermodynamics.
It is the emergent continuous statistics of the discrete molecular interactions when the number of molecules is large.
N-S will partly break down at small scales and near boundaries, where the molecular processes have no easy statistical properties or where the assumptions don’t hold.
At large scales, of course, N-S holds, because it is nothing other than energy and momentum conservation correctly written for a continuous system with the help of a couple of empirical coefficients.
Tom- thanks for your response.
I understand that N-S is essentially the governing equations in continuum approximation.
I do have a question, though: is it your view that the molecular approach frees one completely from empirical coefficients, e.g. the Boltzmann constant is not empirical in the way the viscosity of a classical gas is empirical?
Also, I would make the point in the context of the model “viscous” dissipation that there is a substantial qualitative difference between the continuum approximation of MD, which includes a molecular viscosity which is correct in form at large scales, vs a “turbulent viscosity” parameterization of N-S, where the behavior is not really correct (even statistically) at any scales.
OMS
I do have a question, though: is it your view that the molecular approach frees one completely from empirical coefficients, e.g. the Boltzmann constant is not empirical in the way the viscosity of a classical gas is empirical?
.
Yes .
Until quantum mechanical corrections are needed, the set of considered molecules is completely described as a Hamiltonian system and treated like a classical N-body system.
The only approximations that happen are in the expression of the intermolecular potential, which is repulsive at short distances and attractive at large distances.
Theoretically the total force exerted on one molecule is the sum of the forces exerted on it by all the others.
However, that is computationally very expensive, so a cut-off must be made somewhere.
So the expression of the potential is not exactly accurate, and it must be proven that the system stays stable when such and such an approximation is made.
Not really obvious when one deals with a chaotic system where “details matter” but that is what the MD people do .
.
Also, clearly the problem becomes very difficult when the number of different molecular species increases, because then for each I-J interaction a specific potential Uij is needed.
I am not familiar with such extreme systems, but I’d guess that then some empirical coefficients could be used in the potential expressions in order to make the problem numerically tractable.
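For readers who have not seen molecular dynamics before, here is a bare-bones sketch of the kind of computation Tom describes: a handful of particles, a truncated Lennard-Jones pair potential, and velocity-Verlet time stepping. Everything in it (the LJ form, the cutoff, the integrator) is the standard textbook recipe, not anything taken from the Rayleigh-Taylor paper linked above.

```python
import itertools, random

# Bare-bones 2D molecular-dynamics sketch: truncated Lennard-Jones pair potential
# and velocity-Verlet time stepping. No transport coefficients appear anywhere;
# viscosity and the like would only emerge from averaging over such trajectories.
N_SIDE, dt, rcut = 4, 1e-3, 2.5     # 4x4 particles, time step, cutoff (LJ units)
random.seed(0)
pos = [[1.2 * i, 1.2 * j] for i in range(N_SIDE) for j in range(N_SIDE)]
vel = [[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in pos]

def forces(pos):
    f = [[0.0, 0.0] for _ in pos]
    for i, j in itertools.combinations(range(len(pos)), 2):
        dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
        r2 = dx * dx + dy * dy
        if r2 < rcut * rcut:
            inv6 = 1.0 / r2 ** 3                           # (1/r)^6
            fmag = 24.0 * (2.0 * inv6 * inv6 - inv6) / r2  # LJ force magnitude / r
            f[i][0] += fmag * dx; f[i][1] += fmag * dy
            f[j][0] -= fmag * dx; f[j][1] -= fmag * dy
    return f

f = forces(pos)
for _ in range(1000):                                      # velocity-Verlet loop
    for i in range(len(pos)):
        vel[i][0] += 0.5 * dt * f[i][0]; vel[i][1] += 0.5 * dt * f[i][1]
        pos[i][0] += dt * vel[i][0];     pos[i][1] += dt * vel[i][1]
    f = forces(pos)
    for i in range(len(pos)):
        vel[i][0] += 0.5 * dt * f[i][0]; vel[i][1] += 0.5 * dt * f[i][1]
print(pos[0], vel[0])    # one particle's final position and velocity
```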
Please excuse a little naivete but I should like some order of magnitude details and some other guidance.
I can see that it is important that energy must be conserved both globally and locally but I would have thought that another important question is whether the dissipation is correctly located.
Kinetic energy is created from pressure gradients that I suppose are ultimately the products of imbalances in radiance from and to space. In each locale there is a conversion from PE to KE. There and elsewhere KE is converted to heat. In the round this could, and probably must, produce a net transfer of energy across large distances. Surely the important point is that this is done correctly, as opposed to simply ensuring that dissipation conserves energy globally or locally?
By this I mean that if a dissipative term is introduced to ensure conservation alone, as opposed to correctly determining where the dissipation occurs, then the model would not violate conservation but would not be realistic.
Regarding magnitudes, the 2W/m^2 does not seem very large to me but then I imagine that it simply balances a production of KE of 2W/m^2 which does not seem large to me, given the energy budget of the atmosphere.
But I think that the atmospheric mass is about 10,000Kg/m^2 which at say 4m/s gives around 80000J/m^2 and hence a time constant of around 40000secs (~11hours). Now 4m/s is probably rather low for mean global wind speeds but even so in this context 2W/m^2 seems a little large, that is 11 hours seems rather short a time.
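Making the arithmetic in the previous paragraph explicit (these are Alexander's own numbers; as he notes in a later comment, the 4 m/s wind-speed guess turns out to be too low):

```python
# Back-of-envelope from the paragraph above.
mass_per_area = 1.0e4     # kg/m^2, rough mass of an atmospheric column
v = 4.0                   # m/s, the (low) guess at a mean wind speed
dissipation = 2.0         # W/m^2, the dissipation rate in question

ke_per_area = 0.5 * mass_per_area * v ** 2          # = 8.0e4 J/m^2
tau_hours = ke_per_area / dissipation / 3600.0      # ~ 11 hours
print(ke_per_area, tau_hours)
```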
Now I would have thought that mean global viscous dissipation in W/m^2 should be just the sort of thing that is known from nature.
I could find this: “The downward flux of kinetic energy averages ≈1.5 W·m-2 over the global land surface”
see: (http://www.pnas.org/content/101/46/16115.full)
which I think implies that this is the amount of KE dissipated by surface roughness or transferred into the ocean.
Question: is that the sort of dissipation that is being considered here?
From what I have read above it does not seem to be what is under discussion. Surely dissipation at the surface would be easy to attribute to its correct location.
Again I apologise if I have naively misunderstood the thrust of the debate.
Best Wishes
Alexander Harvey
Alexander Harvey,
The compressible NS equations are not solved near the planetary surface.
There is no way that a numerical model could resolve the ensuing turbulence from heating and surface irregularities. Instead the equations near the surface are replaced with one of the infamous physical parameterizations (planetary boundary layer formulations) that are completely ad hoc.
You might look at the manuscript by Sylvie Gravel et al. posted on Climate Audit. In that work we did a very thorough study of which physical parameterizations play the most crucial role during a short-term weather forecast, using the Canadian operational large-scale forecast model over the US (a relatively observational-data-rich region for comparison purposes). The planetary boundary layer parameterization was the most important one, but it was not crucial to the upper-level forecast for several days. And that is only because it slows down the flow near the surface, as might be expected from drag. On the other hand, the flow near the surface was so small that a much simpler parameterization could also be used with similar forecast results. And because both methods were not physical, it was possible to watch the forecast error rise vertically and soon destroy the forecast completely.
One must realize that the parameterizations are not physical, but tuning
mechanisms to obtain the results one wants, i.e. they are a trial and error
approach (not science).
Jerry
Alex,
There was detailed discussion of this on Dan Hughes’ blog. Unfortunately, this seems to be down at the moment. Dan propounded the view that viscous dissipation wasn’t done in the code. I pointed out line by line where it was done, but he didn’t seem interested. I’ll try to reconstruct it here.
An equation formally similar to the Navier-Stokes momentum equation is widely used in turbulent flow calculations. In true N-S, the viscous term represents the diffusion of momentum. In turbulent flow, momentum is transported at a much higher rate; however, there are much studied ways of modelling this, and the end result is still a diffusive term with an “eddy viscosity”.
In GCM’s, fluid flow is dominated by the rapid change in the vertical dimension relative to the horizontal, and this affects the way the space is discretised. However, in other respects the basic system solved is turbulent Navier-Stokes. Because the flow is anisotropic (vertical is different) the usual eddy viscosity model is based on mixing length, and this is what they use (lscale).
In Model E, this is done starting in the routine ATMDYN.f. Dan Hughes has a neat tool for visualising the code, but I can’t access it at the moment. The turbulent viscous drag is in ATURB.f (eddy viscosity km, see k_gcm()). There is a fixed relation in these calcs between dissipative terms in the momentum equation and loss of kinetic energy, so you don’t need to calculate that separately. You can, but as a matter of numerical algebra, it should be identical. So viscous dissipation, in the turbulent sense, is correctly computed locally.
Then there was discussion of how energy was conserved – the appropriate amount of heat added to the system. This is done with a call to the routine DISSIP in ATMDYN.f. The loss of KE (DKE) was computed locally, and an equivalent amount of heat is added. Again it is all localised.
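This is not Model E's Fortran, but a minimal sketch of the bookkeeping Nick describes: a drag/diffusion update reduces the resolved velocity in a cell, the implied KE change is computed from that same update, and exactly that amount is added back as heat in the same cell, so KE plus internal energy is conserved by construction. All names and numbers are illustrative assumptions.

```python
# Toy one-cell version of the "lost KE becomes local heat" bookkeeping.
# Names (drag_coeff, cp, ...) are illustrative, not Model E variables.
def apply_drag_and_heat(u, T, mass, dt, drag_coeff, cp=1004.0):  # cp ~ dry air, J/kg/K
    u_new = u / (1.0 + drag_coeff * dt)       # implicit drag update on the velocity
    dKE = 0.5 * mass * (u_new ** 2 - u ** 2)  # kinetic-energy change (negative)
    T_new = T - dKE / (mass * cp)             # the lost KE reappears as heat, locally
    return u_new, T_new

u, T = apply_drag_and_heat(u=20.0, T=250.0, mass=1.0, dt=600.0, drag_coeff=1e-4)
print(u, T)   # the velocity drops and the temperature rises by |dKE| / (mass * cp)
```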
Nick–
Thanks.
One question: Do they ever compute the amount of turbulent kinetic energy in the subgrid scale and estimate how that’s transported? (I anticipate the answer is no.)
If they don’t, then how can they correctly compute a local total KE budget? (TKE budget.) In particular, how can they compute it to get the viscous dissipation correct locally?
In most turbulent flows, the local production of turbulent kinetic energy doesn’t balance local dissipation. So, you can’t just do a balance on the large scales and be at all sure you got it right.
Thanks all,
Nick,
“In GCM’s, fluid flow is dominated by the rapid change in the vertical dimension relative to the horizontal, and this affects the way the space is discretised. ”
I take it the height-to-area ratio is much like that of a sheet of paper, or a playing card.
“There is a fixed relation in these calcs between dissipative terms in the momentum equation and loss of kinetic energy, so you don’t need to calculate that separately.”
So there is no risk that conservation of energy is violated in this calculation.
If that be the case then there is no “new energy” being generated that would spuriously heat the model directly, no matter how many W/m^2 of dissipation is modelled.
Also, as I understand it now, the overall magnitude of dissipation is in the right ballpark. However, the attribution of dissipation to the cells affects the transportation of heat around the globe and needs to be correct, but this is of a lower order than if energy were simply being incorrectly created or destroyed.
BTW the Dan Hughes blog is up and I was able to glean, amongst other things, that I was more than an order of magnitude out in estimating the KE of the atmosphere (the time constant for ~2 W/m^2 being 8 days, not my 11 hours). Which feels about right, I think.
Thanks.
Gerry,
I know what parameterisation is but not a lot about how it is used in GCMs. I take the naive view that best practice is carried out and that tuning does not imply cheating. My only justification for this is that in the CMIP2 outputs the models were not tuned to get the global temperature correct. (As I recall the difference between the hottest models and the coolest is >3K.)
Re: short-term weather forecasting, I vaguely recall reading in Paltridge & Platt, “Radiative Processes ….”, that you can leave out some boundary effects and still get the weather pretty close for a couple of days or so.
I will look for the Sylvie Gravel manuscript when I can.
Thanks.
Lucia,
As I understand it, in order to calculate the total KE you need the mean linear KE plus the bulk angular KE plus (as I now realise) the turbulent KE.
I am probably wrong in the detail, as it may be a wiki explanation, but I have read that the cell energy is characterised by the linear velocity (3D), the pressure, latent heat, density and temperature. That it did not list the biggest whorl (the bulk angular velocity) surprised me, and so now does failing to mention turbulent KE. Perhaps that is wiki-type overviews for you, but if you are right then failing to transport the correct amount of turbulent KE would seem strange in the first instance.
BTW My level of understanding of Fluid Dynamics is about the level of remembering the Big Whorls’ rhyme. I am not short of mathematics just not in this discipline.
Thanks,
Alex
Alex (ref Nick Stokes comment),
It has been shown mathematically and demonstrated with convergent numerical solutions that if the incorrect type of diffusion (e.g. hyperviscosity) or diffusion coefficient size (e.g. eddy viscosity) is used in a numerical model, then the converged numerical solution with either of these ad hoc methods is different from that of the NS equations with the correct type and size of diffusion coefficient (see the theoretical mathematical results by Henshaw, Kreiss, and Reyna and the convergent numerical demonstrations by Henshaw et al. – use Google Scholar to find the references). In addition to these ad hoc techniques, the climate models use the planetary boundary layer parameterization at the lower boundary (problems discussed above) and artificial damping near the top of the atmosphere. All of these ad hoc techniques are used to keep the numerical model from blowing up due to the model’s inability to handle the solution components that occur in reality. They are not based on any rigorous mathematics or physics.
What continues to amaze me is that even though the mathematical theory is now well understood, modelers continue to ignore the results and continue to embarrass themselves. 🙂
If you cannot find the references discussed above, please let me know and I can provide them.
Jerry
Alex–
Correct. But for a fluid, you compute the linear velocity at all points. Unlike a solid, you don’t count linear KE of the center of mass plus angular KE for the object rotating as a rigid body. So, in some sense, you don’t need to worry about linear and angular separately in many computations.
The turbulent KE would be subgrid. That’s why I asked Nick; he’s more familiar with the guts of what the models do. Typically, Reynolds-averaged-type engineering codes carry along a separate equation for TKE. (Or, if they get more detailed, for the entire tensor of Reynolds stresses. Then you can take the trace.)
If you don’t know the turbulent KE, then it seems to me the energy budget Nick describes goes:
Energy Loss at large scale in computational volume => Heat.
The correct thing is:
Kinetic Energy ‘lost’ at large scale in local computational volume => mostly to subgrid scales KE (i.e. turbulence).
Turbulent kinetic energy (TKE) can get transported to other volumes.
Kinetic Energy ‘lost’ at small scales => mostly heat.
If the model can’t move the TKE around properly, attributing the heat to the volume where the TKE is produced could put it in the wrong place. (Though maybe someone has made the argument that the approximation is OK and the heat ends up dumped more or less where it belongs. In which case, I’d like to read the argument stated explicitly.)
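A deliberately crude two-box illustration of the point above (numbers invented): if some of the TKE produced in box A is carried into box B before it dissipates, then charging the heat to the box where the resolved KE was lost conserves total energy but puts some of the heat in the wrong place.

```python
# Two-box illustration: resolved KE is lost (TKE produced) in box A, but a fraction
# of that TKE is transported to box B before it finally dissipates to heat.
production_A = 10.0   # J of resolved KE lost in box A during one step
transported = 0.4     # fraction of the TKE carried from A to B before dissipating

# Bookkeeping that heats the box where the resolved KE was lost (no TKE transport):
heat_local = {"A": production_A, "B": 0.0}

# Bookkeeping that follows the TKE before converting it to heat:
heat_transported = {"A": production_A * (1.0 - transported),
                    "B": production_A * transported}

print(heat_local, heat_transported)   # same total energy, different locations
```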
Alex,
On Climate Audit do a search on Sylvie. On the second page find the
heading “The Relative Contributions of Data Sources and Forcing Components …” Click on that pdf file. The manuscript is fairly short and easy to read. If you have any questions, just let me know.
For the theoretical math do a google scholar on Henshaw Kreiss and Reyna.
See “On the Smallest Scale Estimates for the Incompressible NS Equations”.
Note that these estimates were done for the full nonlinear NS equations,
i.e. no approximations.
To see how different types and coefficients of diffusion affect convergent numerical solutions, see “Comparison of Numerical Methods for Two-Dimensional Turbulence”, Browning and Kreiss, Math. Comp., 1989. The latter manuscript might be a better starting point than the theoretical math article. 🙂
Jerry
Alex,
Note that if the vertical component of vorticity is known, then all other
components of the slowly evolving in time solution (the one of interest) can be determined (references available on request).
Thus if a model dissipates enstrophy (essentially the norm of the vertical component of vorticity) at an incorrect (too fast a) rate, then the energy forcings (parameterizations) needed to balance that incorrect rate cannot be physically correct, i.e. they must necessarily be incorrect to overcome the incorrect dissipation.
Jerry
lucia,
Can you please explain the physical basis for convective adjustment, i.e. the adjustment in a vertical column of a climate model that has become convectively unstable (small scale process) and is readjusted to maintain hydrostatic balance (a large scale process)? Is there any rigorous scientific theory that relates these two processes?
Will a numerical solution of the inviscid hydrostatic equations (the inviscid incompressible NS equations with the assumption of hydrostatic equilibrium) converge as the mesh size is reduced?
Jerry
Jerry–
Why are you asking me those questions? I’ve made no claims about the convective adjustment.
If you wish to make a point you believe important and wish to make it, feel free to make your point.
Lucia #10383
I think you’re right that sub-grid KE is not included in the count. Their DKE is based on the Reynolds-averaged velocities.
In a two-equation model like k-epsilon, subgrid KE is explicitly part of the model. Here (in a mixing-length model) they do compute TKE, in e_gcm. The routine e_eqn solves a differential equation for TKE transport at the surface. It’s quite possible that TKE transport in the bulk fluid does not affect the energy balance much.
lucia,
If the viscous dynamical equations (with the correct type and size of dissipation) are being approximated accurately by a numerical scheme and the physical forcings are accurately described by any one set of parameterizations, why don’t all of the climate models produce the same result? Why the need for more than one model?
Along this line of reasoning, note that all of the climate models purport to
accurately approximate the continuum hydrostatic system. When Dave Williamson made a 1 bit error in the last place of the initial data, the model deviated from the control (unperturbed) run very quickly.
However when the parameterizations were shut off the deviation between the runs was much smaller. This indicates that the parameterizations are very sensitive to small errors. Because they contain switches (if tests), they lead to spatial and temporal discontinuities and the ensuing
instantaneous cascade of energy to smaller scales can only be controlled by excessively large dissipation. That dissipation controls the cascade in all of the models and the system behaves more like a heat equation than a hyperbolic system with small dissipation. And this leads to the fact that
slightly different parameterizations lead to different results, but that difference is being controlled by the dissipation.
Jerry
lucia (#10397),
Unless one understands exactly what is going on in the climate models, one cannot make statements like:
To be generous, if it is not cheating, it is certainly misleading. Even Gavin admits that the models are tuned and the only reason that is necessary is
because the dissipation is too large so the forcing must be adjusted
to make the spectrum appear to balance.
The two examples I stated are enough to make any mathematician or numerical analyst cringe. And these models are full of gimmicks like these.
Jerry
Nick
Maybe; maybe not. At this point, that seems to be the zillion-dollar question with regard to the climate-blog question of how lost KE is turned into heat and distributed in models.
I started this thread because the question is batting around “out there”. I don’t know that it’s a bigger issue than others, but it’s been batted around. It seems to me that we have gotten to this point:
1) There is a physical basis for turning the lost KE into heat. (I always knew this, and have been saying so whenever the issue comes up. The lost KE HAS to be turned into heat, and that’s what really happens in the real physical world.)
2) GISS Model E at least tries to attribute the heat to those computational volumes when and where the KE deficit appears.
3) But, in at least some parts of the flow, there may be errors in putting the heat where it belongs. This could be due, in part, to not keeping track of TKE. (Turbulent Kinetic Energy– or energy at subgrid scales, or whatever name any particular modeler wants to call it.)
4) We don’t know how much heat may be put in wrong places.
5) I, Lucia, have no idea if this approximation in the heat (i.e. energy) equation is a bigger deal than other approximations or equivalent to the many other parameterizations.
Jerry
We all know that parameterizations are approximations and have effects. I agree that those parameterizations are going to distort the transfer of energy to smaller scales. But this thread was started because of a viscosity issue that is distinct from the one you care so strongly about. The issue on this thread is: where and why does a heat source term appear in the heat equation?
On the topic that so fascinates you: I read your papers. I’ve gone around and around with you trying to discuss the issue that bothers you. But you always end up switching to trying to advance your case by asking questions rather than answering them. The result is I often don’t have a clue what precise argument you wish to advance. As far as I can tell, you have identified a big problem for weather models staying on track. It certainly has some consequences for climate models; the consequences may or may not be large. I don’t know. I don’t think you know either.
But, if you want to show you know the answer to whether or not this makes a difference, you are going to have to show it by answering questions, not subjecting others to a pop quiz from an introductory course in turbulence and believing your argument will make itself when someone says “Oh. Yeah! Viscous dissipation really does happen at the small scales! Just like I read in Batchelor!”.
lucia (#10397),
You still do not seem to understand the impact of the simple mathematical proof I gave at CA, namely that one can produce any solution one wants from a time-dependent system if one is allowed to choose (tune) the forcing. What would you call this type of manipulation?
Jerry
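For readers who have not seen the CA version, the construction Jerry appears to be referring to can be written in one line (this is a paraphrase of the argument, not a quotation of his proof): given any target trajectory, define the forcing as the residual that makes the model equations fit it.

```latex
% Take the model  du/dt = N(u) + F(t)  and any target trajectory u*(t).
% Define the forcing as the residual that makes u* fit the equations exactly:
\frac{du}{dt} = N(u) + F(t), \qquad
F(t) := \frac{du^{*}}{dt} - N\!\left(u^{*}(t)\right)
\quad\Longrightarrow\quad u(t) = u^{*}(t) \ \text{ solves the system when } u(0) = u^{*}(0).
```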
Alex #10390,
Yes, in 3D the grid is very flattened, which causes some well-known numerical issues.
Conservation of energy is enforced, because the dissipated KE is added in as heat (routine DISSIP). There’s no independent measure of the viscous heat source. The relation between the momentum eqn and KE is a matter of numerical algebra. If you multiply the momentum eqn (scalar prod) with velocity, you get an equation in energy density.
The Model E code browser that I referred to is actually on the GISS site, here.
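Spelling out the algebra Nick invokes – this is the standard textbook manipulation, nothing Model E specific: dotting the momentum equation with the velocity yields the kinetic-energy equation, and the sign-definite term that appears is the dissipation that must reappear as heat.

```latex
% Dot the momentum equation  rho Du/Dt = -grad p + div(tau)  with u
% (constant density assumed for brevity) to get the kinetic-energy equation:
\rho \frac{D}{Dt}\left(\tfrac{1}{2}\lvert\mathbf{u}\rvert^{2}\right)
  = -\,\nabla\cdot\left(p\,\mathbf{u}\right)
    + \nabla\cdot\left(\boldsymbol{\tau}\cdot\mathbf{u}\right)
    - \boldsymbol{\tau} : \nabla\mathbf{u}
% The last term, tau : grad u, is non-negative for a molecular or eddy viscosity
% and is the rate at which resolved kinetic energy is converted to heat.
```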
GerryB–
Of course one can get any solution to a time-dependent system if one is allowed to do whatever one wants to what you call forcing. (For others: Gerry uses this term in a different sense from forcing of the radiative heat balance for the earth.)
If you want to make a point beyond this, you will have to make it.
Lucia #10402
I think much of the blog conversation about this could be summarised – we can’t point to a concrete reason to believe anything is wrong with the treatment of viscous dissipation, but it’s GISS (Hansen, boo), so there must be something.
I also don’t know if the transfer of TKE needs to be computed everywhere (or even really whether they’ve done it). I believe that upscale energy cascade is important in the genesis of hurricanes, say, and that may be why they resolve boundary layer TKE transport carefully. I doubt if it matters for heat balance, because turbulent diffusivity for energy is much the same whether it is heat or TKE. In other words, the energy will end up as heat in the same place, regardless of when you deem the conversion to occur.
Lucia (#10405),
The point has been made. You just don’t want to admit that is what the climate modelers have done.
Jerry
Gerry–
What point do you mean when you refer to “the point”? The issue in your comment 10403 is “a point”. But… so? The difficulty is that you want to make much of this without ever relating it to the actual implications for climate models.
As far as I can tell, your point has no greater implications to climate models than others like “how do they deal with boundary conditions at the surface?” or “how do they deal with clouds” or many other questions.
Dear All,
Golly gosh, I have not had so much attention in years; it is appreciated, but it will take me a while to do you all justice.
Lucia,
Typically, I have understated the absolute temperature discrepancy of the models. See: http://www-pcmdi.llnl.gov/projects/cmip/overview_ms/control_tseries.pdf
from:
http://www-pcmdi.llnl.gov/projects/cmip/overview_ms/ms_text.php
I realise that this is for CMIP version 2 but it is a 5C spread.
Now I am a perverse (in the nice way) sort of cove. That they are prepared to publish with such a discrepancy gives me faith in their integrity, as I expect it makes them wince just a bit. They could easily just nudge albedo values up or down a bit and hey presto!
Now 5C is … well, a whole ice age, more or less. But ever the gent, I accept that the ground rules are: “can the models express what happens if we play silly boys with the WM-GHG content of the atmosphere”.
Strangely in their submissions for CMIP2 they all agree rather closely in the transient case:
http://www-pcmdi.llnl.gov/projects/cmip/overview_ms/tseries.pdf
even though they have “it is said” very different values for the sensitivity.
Here you say something that I wish others would express with equal force:
“That said: 3K is roughly 10% of the entire greenhouse effect. You can get within 33K without any understanding of the greenhouse effect. You can narrow that a lot using a simple 1 d radiative convective argument. So, it’s not clear to that AOGCM’s give us a big leg up on getting average surface temperatures. (They do help us learn some details, and I like them as gedunken experiments or heuristic tools. But, can they predict trends well? I don’t know. 1 d models are enough to tell us increasing CO2 will increase surface temperatures. Full AOGCM’s agree.)”
I have done a little (though it took me a long time) 1D modelling and along with a little MODTRANing it was enough to convince me of the basic argument.
Doubling of CO2 (or equivalents) gives around 1.2C (hold pressure) / 1.6C (hold RH), times a multiplier (these figures off the top of my head). The multiplier is probably at least 1!!!!
Basically that is about as far as anyone seems to have got since Arrhenius.
This is not because I am clever but because the problem seems so intractable.
I have enormous respect for the GCM modellers and I truly believe they will correctly characterise the crisis (or otherwise) while there is still time. But not, I fear, in my lifetime. My best guess is 2050 for a model an order of magnitude better (whatever that means; think 10 times the resolution in each of the 4D).
Many thanks to you and all other responders, as you will realise I have to confess that I have not the time to master fluid mechanics but it is flattering to get such answers.
Alex
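For what it is worth, here is the standard back-of-envelope that lands in the same ballpark as Alex's 1.2 C figure above, using the Myhre et al. simplified 2xCO2 forcing and a Stefan-Boltzmann linearization at the effective emission temperature – textbook ingredients, not anything taken from this thread:

```python
import math

# No-feedback back-of-envelope for 2xCO2 using standard textbook ingredients.
sigma = 5.67e-8              # Stefan-Boltzmann constant, W/m^2/K^4
T_eff = 255.0                # effective emission temperature of the earth, K
dF = 5.35 * math.log(2.0)    # ~3.7 W/m^2, Myhre et al. simplified 2xCO2 forcing

lambda_0 = 1.0 / (4.0 * sigma * T_eff ** 3)   # K per (W/m^2), no-feedback sensitivity
print(dF, lambda_0, dF * lambda_0)            # ~3.7, ~0.27, roughly 1 C per doubling
```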
Alex–
I’m a mechanical engineer (my PhD was on a topic in turbulence).
We take everyone’s questions seriously.
The problem of modeling and predicting is complicated and difficult. I do think the modelers set parameterizations in good faith. Moreover, the vast majority of modelers just work in their specialities and report what they get.
I started this blog more as musings while taking a break from my knitting blog. But, it turns out to be a good place to share information and learn some from my readers. There is a lot of information out there, and at a certain point, you sort of learn who answers based on having actually read things, running numbers and who …well… doesn’t.
I mostly let everyone speak as long as they don’t call each other names or act like trolls. (Very few trolls visit.)
Lucia (#10425),
You are starting to sound like Bill Clinton, “It depends on how you define __.” The simple mathematical proof about the use of forcing to obtain any solution one wants from any time dependent system is clear and concise. There can be no confusion in that proof or in the gimmicks that the climate modelers have used to force the solution they want (by Gavin Schmidt’s own admission). The game is already apparent in Sylvie’s manuscript.
Use trial and error to modify the solution even if the forcing is not physical.
Jerry
Jerry,
I think I understand what you are saying. Well mostly. I am sure you are right to have concerns about the scientific basis.
For me, all I can hope to comprehend is, at best, what differences it makes. That is, do the model outputs represent a plausible earth like climate.
Whether by luck or judgement.
I am old enough to remember when computer models were likely to be analogue as opposed to digital. How it is done makes little difference to me provided that it works, that we know how well it works, and which bits we should trust, and which bits we should not.
I don’t think that this is the right thread for me to continue in this vein as it has nothing to do with Model E or viscous dissipation.
Thanks
Alex
Alex (#10594),
Actually it has a lot more to do with viscous dissipation than you might guess. The primitive equations (inviscid NS equations modified by the assumption of hydrostatic balance) are an ill posed system (mathematical reference available on request). This has serious ramifications. The numerical solutions will not converge as one decreases the mesh size and, in fact, the numerical solutions grow in an unbounded exponential manner in a smaller and smaller time interval as the mesh size is reduced. (See Exponential Growth in Physical Systems at CA for actual numerical convergence tests of this system and the corresponding well posed system for comparison purposes). The climate models have hidden this problem by convective adjustment, low resolution, and unphysically large dissipation. But the problem has been seen in numerical runs using NCAR’s own models (reference available on request). The modelers have much to lose
if this is well known and they have downplayed the results as you might guess. If you read the abstract of the Exponential Growth thread, you will see that because of legitimate exponential growth in the NS equations near vertical shear layers, the models will always diverge from the continuum solution of the compressible NS equations, i.e. the numerical solutions will always be questionable.
Jerry