Are Models Science?

Carrick thought I should promote the answer to one of Cui’s questions to a post.

Lucia, I know what you mean, but then in what way are the models “science”?

People (including me) use sloppy language. Sometimes that’s fine.

But to answer your question: The truth is models in general are fundamental to science. Newton’s first law of motion is “a model”. So is the law of gravity. So is… well, everything. These more familiar models might be called foundational or “first principles” models. They are used as “building blocks”.

We also have things like “constitutive relations”, which are sometimes widely accepted under appropriate circumstances (e.g. the ideal gas law, the stress-strain relationship for Newtonian fluids, etc.) and sometimes merely empirical observations of things we might want to describe. Some fall in between. These are also models.

Scientists and engineers build more complicated models from “first principles” models and “constitutive relations”, along with a bunch of assumptions thought to apply to a particular problem. When creating these more complicated models they often make mathematical simplifications. Both the assumptions and the simplifications may be appropriate or inappropriate depending on the application. An extreme example of a situation where a well-known “fundamental” model might break down is this: the assumption that the ideal gas law applies works if one wishes to predict the behavior of air at Standard Temperature and Pressure. It totally doesn’t work for… well… water. It often happens not to work very well for gases at high pressures.
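A sketch of that kind of breakdown: the van der Waals constants for CO2 below are rounded textbook values, so treat this as an illustration of when a “fundamental” model stops applying, not a validated property calculation.

```python
# A sketch of where the ideal gas law works and where it breaks down.
# The van der Waals constants for CO2 (a, b) are rounded textbook
# values; treat this as an illustration, not a property calculation.
R = 8.314  # universal gas constant, J/(mol K)

def pressure_ideal(n, V, T):
    """Ideal gas law: P = n R T / V."""
    return n * R * T / V

def pressure_vdw(n, V, T, a=0.364, b=4.27e-5):
    """Van der Waals equation solved for P: accounts (crudely) for
    molecular volume (b) and intermolecular attraction (a)."""
    return n * R * T / (V - n * b) - a * n ** 2 / V ** 2

n, T = 1.0, 298.15  # 1 mol near room temperature
for V in (0.0244, 1e-4):  # ~1 atm molar volume vs. strong compression
    p_i, p_v = pressure_ideal(n, V, T), pressure_vdw(n, V, T)
    print(f"V = {V:g} m^3: ideal {p_i:.3e} Pa, vdW {p_v:.3e} Pa, "
          f"relative difference {(p_i - p_v) / p_v:+.1%}")
```

Near atmospheric conditions the two pressures differ by well under a percent; at the compressed volume they differ by a factor of a few, which is the sense in which the simpler model stops working at high pressures.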

General Circulation Models (GCMs) are a type of complicated model. Since they are based on scientific principles and created following the method all scientists and engineers use to create physically based predictive models, they are scientific.

Of course no model — not even a foundational one — is science itself. When people say “science” they can mean the collection of knowledge we have accumulated using a particular method, or they can mean the method itself, or they can mean both.

At its very base, the scientific method is to test ideas of how the world works against observations. In the physical sciences, ideas of how the world works are nearly always described using mathematical models (e.g. F=ma). The ideas that don’t match observations are either tossed or revised. (Phlogiston was pitched entirely. F=ma has been extended to account for what happens near the speed of light, and so on.)

As for invalidating a model: You might be able to ‘invalidate’ “a model” by showing its assumptions aren’t true, or by showing it doesn’t work in some limit (e.g. F=ma doesn’t work in the limit that velocity approaches the speed of light, so it has been revised).
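The F=ma example can be quantified: relativistic momentum is gamma·m·v, so the Lorentz factor below measures how far the Newtonian prediction drifts as velocity approaches the speed of light. (A minimal sketch; the chosen speeds are purely illustrative.)

```python
# A sketch of the limit where F = m a fails: relativistic momentum is
# gamma * m * v, so the Lorentz factor gamma measures how far the
# Newtonian prediction drifts as v approaches c.
import math

C = 299_792_458.0  # speed of light in vacuum, m/s (exact)

def lorentz_gamma(v):
    """Lorentz factor: 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for frac in (0.001, 0.1, 0.9, 0.99):
    g = lorentz_gamma(frac * C)
    print(f"v = {frac:5.3f} c: relativistic momentum is {g:.4f}x Newtonian")
```

At everyday speeds gamma is indistinguishable from 1, which is why the un-revised model served perfectly well for two centuries.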

Hypothetically, you could look at all the modeling assumptions in a GCM collectively and show that they violate the 2nd law and that the collective errors are material. (If you showed they violated the 2nd law when winds approached the speed of light, no one would care.) Attempting this sort of proof would likely require mathematics rather than comparison to data.

If you showed something like that, you might say you’ve invalidated “the model”. In that case, someone would correct it. The problem would likely lie in one of the many simplifying assumptions, or a constitutive relation (i.e. a “sub-grid parameterization”).

What I or others are generally trying to do when comparing output of a GCM (or the mean of GCMs etc.) is test a claim based on model output. That claim would be “the best estimate of temperature over time is X”. That’s not validating “the model”, “the models”, “the science” or “science”. We are testing one specific hypothesis out of many possible hypotheses.
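A minimal sketch of that kind of test. The synthetic series and the claimed trend are invented for illustration; a real comparison would use observed temperature series and account for autocorrelation in the residuals.

```python
# A sketch of testing one claim derived from model output: "the
# best-estimate trend is X". The synthetic series and the claimed
# trend are invented for illustration.
import random
import statistics

def ols_slope_and_se(x, y):
    """Ordinary least squares slope and its standard error."""
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    resid = [yi - (my + slope * (xi - mx)) for xi, yi in zip(x, y)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, se

random.seed(0)
years = list(range(30))
obs = [0.010 * t + random.gauss(0, 0.1) for t in years]  # 0.10 C/decade + noise

claimed = 0.020                     # hypothesis under test: 0.20 C/decade
slope, se = ols_slope_and_se(years, obs)
z = (slope - claimed) / se
print(f"observed trend {slope * 10:.3f} C/decade, claimed 0.20, z = {z:.2f}")
```

Rejecting the claim when |z| is large rejects that one hypothesis about the model output, not “the model”, let alone “science”.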

Returning to your question: GCMs are physically based models. Physically based models have a well-respected role in science, as does making predictions based on those models. It is correct to say these models are “scientific”, but not that they are “science” itself, nor are they “the science”.

Given the context in which your question was asked: Developing a successful new model can make someone famous. But the process of comparing the predictions of models to observations is even more respected than developing new models, since a model will only be adopted if its predictions agree with observations.

This is because the heart of science is comparing predictions from a model to observations, rejecting a model that fails to match observations, trying to develop newer, better models that do match observations, and starting the comparison cycle over again.

67 thoughts on “Are Models Science?”

  1. Everything about science is observation. Experiments are observations; theories are designed to account for our observations and predict what we’ll observe next if we look at X using Y. Models are meant only to give us something to compare observations against: a starting place to frame our observations and give us predictions to test next for more observations.

    On one side, some people take models to actually mean something real in and of themselves — they don’t. On the other side, some people take models to be wasteful and useless — they aren’t. It’s all just about giving us tools to acquire more empirical data; when put in that context, the whole issue is clarified for me, and it’s much easier to see how to interpret models and where to go next.

    As science is a process, a methodology, models are a part of that process, and thus they are one of the tools of science.

  2. Ged.

    The only problem there is that “observations” are theory laden.
    In short, your notion that there are things called “observations” over here and things called “models” over there is itself a model.

  3. Mosher,

    I disagree completely. Observations are not theory laden, other than that one could argue the act of observing can change the result. But that’s what replication/controls/uncertainty values are all about. Observations drive theory, and theory informs us where to look to find our next observations. But observation itself happens regardless of theory, which is why observations are actively pursued to disprove theories.

    So no, it is not itself a model. That’s getting into existentialisms and saying that the world is whatever we decide it is. No, reality is regardless of us; we just observe it. Taking a temperature is an observation that requires no models; it is what it is. Measuring the light from stars requires no models; it is what it is. One’s -interpretation- of the meaning of those observations may employ models, which are subject to observational testing in their own right; but observations themselves are a solid event not dependent on what we think or are.

    Therefore, I disagree, and it’s my opinion that trying to conflate the two — “models” (what we think of reality) and observations (which are measurements of reality) — is dangerous if not poisonous intellectually.

  4. Ged,

    After having read prolly thousands of Steven Mosher’s comments over the years, I’ve found that all the science-related ones are geared to promote the acceptance of Warmerism. You can view his comment in that context.

    Andrew

  5. When it’s dry it pours?!
    An example of testing a model prediction (hypothesis) against objective evidence is given by David Stockwell’s 2008 study Tests of Regional Climate Model Validity in the Drought Exceptional Circumstances Report. Stockwell showed that the CSIRO’s drought predictions were contrary to the historical evidence, using a hindcast/forecast test with half the data for tuning and the other half for prediction.
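That tuning/prediction split can be sketched generically. The numbers below are synthetic and the “model” is deliberately naive; Stockwell’s study used actual rainfall records and a real regional model.

```python
# A generic sketch of a hindcast/forecast split: tune on the first half
# of a series, score predictions on the held-out second half.
import statistics

series = [10.2, 11.0, 9.8, 10.5, 10.9, 10.1, 10.4, 10.8,   # tuning half
          13.5, 13.9, 14.2, 13.1, 13.8, 14.0, 13.6, 14.1]  # prediction half
half = len(series) // 2
train, test = series[:half], series[half:]

# A deliberately naive "climatology" model: predict the tuning-period mean.
prediction = statistics.fmean(train)
rmse = statistics.fmean([(x - prediction) ** 2 for x in test]) ** 0.5
print(f"hindcast mean {prediction:.2f}, out-of-sample RMSE {rmse:.2f}")
```

A large out-of-sample error, as here, is exactly the kind of result that counts against a model’s predictive claim.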
    Now we have: Counterintuitive, models wrong – rainfall more likely over drier soil

    “We have analyzed data from different satellites measuring soil moisture and precipitation all over the globe, with a resolution of 50 to 100 kilometers. These data show that convective precipitation is more likely over drier soils”, says Wouter Dorigo. . . . They found that the existing models do the wrong thing, triggering rain over wetter soils.

    See: Afternoon rain more likely over drier soils, Christopher M. Taylor, Richard A. M. de Jeu, Françoise Guichard, Phil P. Harris & Wouter A. Dorigo, Nature, DOI: 10.1038/nature11377, online 12 September 2012

    We show that across all six continents studied, afternoon rain falls preferentially over soils that are relatively dry compared to the surrounding area. . . . We find no evidence in our analysis of a positive feedback—that is, a preference for rain over wetter soils—at the spatial scale (50–100 kilometres) studied. In contrast, we find that a positive feedback of soil moisture on simulated precipitation does dominate in six state-of-the-art global weather and climate models—a difference that may contribute to excessive simulated droughts in large-scale models.

    At WUWT, rgbatduke observes:

    The problem is that the empirical observation does not explain why there is a higher probability of rain over parched soils. It at best offers an heuristic hypothesis. . . .Why didn’t the models get this right in the first place?. . .They have a whole chunk of climate physics dead wrong. Worse, the physics they do have has been tuned to what were clearly biased assumptions that were not founded on measurement at all!

    This might have been corrected earlier had researchers incorporated WJR Alexander’s findings that precipitation and runoff vary strongly with the 21 year Hale cycle but NOT with surface evaporation.
    Linkages between solar activity, climate predictability and water resource development* J. So. African Inst. Civil Engineering Vol 49 No 2, June 2007, Pages 32–44, Paper 659

    Now comes the challenge of applying the scientific method, discovering the missing physics, and correcting the models to match the evidence – rather than engaging in political alarmism.

  6. Ged–
    Existentialism can be involved.

    Taking a temperature is an observation that requires no models,

    Be careful or you’ll send Mosher off into ontological discourses beyond the ken of mere mortals! Oddly enough, it’s possible to consider the concept of temperature itself a model.

    That said, I think for most purposes, the notion that using a thermometer to measure temperature is an observation and Fourier’s law is a model is usually the more useful cognitive model when developing scientific models — especially practical ones.

  7. “the notion that using a thermometer to measure temperature is an observation”

    Aggregating the observations is a model.

  8. lucia (Comment #103366),

    Oddly enough, it’s possible to consider the concept of temperature itself a model.

    Do you have the equation for that?

  9. ged.

    Not existentialism. Just basic philosophy of science

    Taking a temperature does involve a model. What is a temperature?
    and why does a liquid in a glass tube measure it? think.

    replication? think about why you repeat observations? I measure
    the temperature: 12.4, 12.2, 12.4, 12.6, 12.8, 12.0.

    As you know, when you repeat observations you get different answers. what do you do with that? and what ‘theory of how things work’ allows you to do it? you average the results.. and you say the mean ( which is a mathematical construct and unobservable) is equal to 12.4 +-.
    Did you observe the mean? nope. then you have that pesky thing called an “error”. what’s that? is error an observable? Think about the theory that underlies the very procedure of taking multiple measures. Ya, we pretty much have to assume that the laws of the universe don’t change with time and that measurements of the “same thing” at “different times” will be slightly different because of this “thing” we call “error”.
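The averaging step described above, sketched with those readings. Note the mean and its standard error come from a statistical model (independent, identically distributed errors); neither quantity is itself observed.

```python
# Averaging repeated readings: the mean and its standard error are
# constructs of a statistical model, not observations themselves.
import statistics

readings = [12.4, 12.2, 12.4, 12.6, 12.8, 12.0]
mean = statistics.fmean(readings)
# Standard error of the mean: sample stdev divided by sqrt(n).
sem = statistics.stdev(readings) / len(readings) ** 0.5
print(f"{mean:.2f} +/- {sem:.2f}")
```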

    “To return to examples, then, even a straightforward statement such as “this lump of coal weighs one kilogram” is riddled with theory. Whether we include inference from prior experience (i.e. that the heaviness from lifting pieces of coal is conserved over time); the apparatus required to derive weights; the physical theories upon which the instruments and concepts like weight and mass are based; other theories that determine the effect (if any) on weight at different locations; and so on; we are very far indeed from a “basic” proposition.”

    http://www.galilean-library.org/site/index.php/page/index.html/_/essays/philosophyofscience/theory-ladenness-r72

  10. The use of computer models in science is widely accepted. In most fields of science, the computer model is used to make short work of complex calculations which would take the scientist a long time using a slide rule (I think they call it a slipstick in the US). In these situations the science is well understood and the computer model is based on the proven science. In climate science it’s the other way around, the science is based on the computer model. A computer model cannot produce science.

    Science is about theory, experimentation and observation. You can’t just dump a theory into a computer model and expect to produce anything of scientific value when the underlying science is not well understood.

  11. I get where Ged is coming from but broadly agree with Mosher. I would argue that models are explorations of hypotheses, as opposed to observations, which are tests of hypotheses. Model data are distinct from observational data in that model errors increase through model cycles (systematic bias), while observational data are generally improved via repetition/replication. What is dangerous to science (as I understand Ged’s argument) is the potential to mistake a model run for an empirical study. So while models have the potential to help us identify our *mis*understanding of science by juxtaposing their data product with observational data, there remains the temptation, which is harmful to science, to mistake a model run for an experiment.

    Ged might agree that models are not the embodiment of science; their product is not scientific data, but they are nevertheless useful on the road to good science. It might similarly be argued that over-reliance on models, or the elevation of models over or as a substitute for observational data collection, is certainly the road to bad science.

  12. Re: Skeptikal (Comment #103369)

    lucia (Comment #103366),
    Oddly enough, it’s possible to consider the concept of temperature itself a model.

    Do you have the equation for that?

    You might consider Fourier’s (Heat) Law to be “the” equation for that.

    Re: Skeptikal (Comment #103373)

    In most fields of science, the computer model is used to make short work of complex calculations which would take the scientist a long time using a slide rule (I think they call it a slipstick in the US). In these situations the science is well understood and the computer model is based on the proven science. In climate science it’s the other way around, the science is based on the computer model. A computer model cannot produce science.

    A climate model (or most any other numerical model) is a huge machine full of moving parts which do what roomfuls of “computers” (graduate students with slide rules) might have done in another day and age. The parts in isolation are supposed to behave in a way which is roughly consistent with proven science. It’s the behavior of the collection of parts which is unknown. The more subtle question is whether the (hopefully) small errors, whether known or unknown, either in the individual parts or in the way they’re connected together, lead to unrealistic behavior for the system, and even if so, have we still learned anything useful from the exercise? That’s what makes it science, right?

  13. Skeptical–

    (I think they call it a slipstick in the US)

    Where I come from we called them slide rules.

    I think it’s fairer to say that a computer model is just a model that one has decided to solve using a computer. The underlying system of equations would be the mathematical model. The basis for that mathematical model would be the simpler models (i.e. conservation of mass, momentum, and energy, conduction, constitutive models, sub-grid parameterizations, and so on).

    It’s true that if the choice of any of the simpler models is tenuous, we can’t necessarily expect the numerical predictions from the model to reproduce reality. Whether the numbers coming out have “scientific value” is another matter entirely. Sometimes even very crude attempts to do something have “value”. But the question is then: in what way are those numbers valuable? It might be that the value is that people get some qualitative understanding about what is going on, and they can use that better understanding to eventually create models that have predictive value.

    I do think it’s important to recognize that things can be “valuable” or “useful” in different ways. The reason is that otherwise, some try to deflect criticisms of models’ lack of accuracy or precision by changing the subject to whether or not they are “useful”. Useful is then defined in some way that doesn’t involve requiring predictive ability. And then suddenly, after proving it’s “useful” in this way, it’s decreed that “if they are useful, then they must have predictive ability”.

    Of course that doesn’t quite work– but the argument is diffuse and it is my impression that it is made.

  14. Skeptical

    Do you have the equation for that?

    I was thinking of statistical thermodynamics where temperature is related to the kinetic energy of a bunch of molecules.
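That kinetic-theory view can be sketched in a few lines (a minimal illustration for a monatomic ideal gas; the sample kinetic energy is just a round number):

```python
# A sketch of "temperature as a model": in kinetic theory, the
# temperature of a monatomic ideal gas is defined by the mean kinetic
# energy per molecule, <KE> = (3/2) k_B T.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact by SI definition)

def temperature_from_mean_ke(mean_ke):
    """Invert <KE> = (3/2) k_B T to get T in kelvin."""
    return 2.0 * mean_ke / (3.0 * K_B)

# A mean kinetic energy of ~6.2e-21 J per molecule corresponds to
# roughly room temperature:
print(f"{temperature_from_mean_ke(6.21e-21):.0f} K")
```

On this view, “the temperature” is a statement about a statistical population of molecules, which is part of why one can call the concept itself a model.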

  15. Doesn’t a model come from successful hypotheses, as per Francis Bacon? So are we mislabelling them as models (which explain and predict phenomena), and should they actually be called GCHs instead of GCMs until they are successful?

  16. MrE–
    F=ma is a model. The idea that “F=ma” can be used to predict acceleration if we know the force is a “hypothesis”.

    F=ma^2 is also a model. If someone formulated the hypothesis that it could be used to predict acceleration, that hypothesis would fail when tested against data.

    GCMs until they are successful?

    They are models whether or not they are successful.

    “Failed model” would not be an oxymoron.

  17. //A computer model cannot produce science.//

    A model is a model – and the logic of science effectively requires some kind of model (it doesn’t need to be a computer one or a mathematical one) to operate. Science isn’t pure mathematics, and it doesn’t operate from knowing some sort of a priori axioms of the universe from which everything can be deduced. Consequently some sort of hypothetical collection of propositions, interrelated by maths and logic, has to be proposed and tested against observations. Yes, the data collected has an element of objectivity (or rather independence), but the data that the model is compared against is still chosen in light of the model (and will itself be looked at through other models, including models of measurement).
    Can computer models produce science? That question can be rephrased as “Can computer models produce claims that can be TESTED against data produced from a source other than the model?” The answer is yes. Various climate models are tested in various ways on this very blog on a regular basis.
    What computers bring to scientific models is a level of complexity and a range of testable claims that are beyond what we could otherwise do by hand. Epistemologically that makes them MORE capable of “producing” science rather than less.
    Note none of that means any given computer model in climate science is correct. They could ALL be wrong – but that in itself is a big hint that they are science. They are capable of failing tests of their veracity.

  18. //Doesn’t a model come from successful hypotheses as per Francis Bacon?//

    A set of hypotheses that form some sort of coherent whole are a model.
    For example a basic model in early geology is that, all other things being equal, the order in which strata are laid down is a chronological order. It isn’t a very mathematical model (the maths is basically an elementary ordinal scale) but it is a model. As it stands it is a model that isn’t exactly true but it is the basis of a more complex model of geological strata.

  19. Lucia,
    I liked the promotion of the comment to a post, if only because it lays out a reasoned (and reasonable) intellectual basis for scientific inquiry; I think it would be difficult for someone to read your post and draw the conclusion that you (and most who comment here!) are in some way anti-science…. a claim that is (sadly) made too often.
    I would add only that science is and must be internally and externally consistent. Which is to say, each and every conceptual construction of science is based on a specific model of reality, and all of those models are consistent with each other. Even models which are later replaced by more accurate models usually remain ‘accurate’ over a restricted range of conditions. It is the simultaneous demand that scientific models be consistent with observations (AKA reality) and with each other which makes science a unique and uniquely useful human endeavor. The vast jigsaw puzzle fits together most everywhere; only at the limits of understanding do the pieces not fit together.
    Climate models are for sure physically based, except where they are not…. too many adjustable parameters that allow ‘consistency’ with observations. I remain unconvinced that GCMs can make the kinds of accurate predictions their developers claim; efforts like yours to regularly test model predictions against observations are the only way models will progress….. no matter how uncomfortable modelers and their apologists are with the results.

  20. Nyq–
    I agree with all that you say. Models need not be mathematical, and not all are – still, in the physical sciences I think most are in some sense. Even the one you described is mathematical – though the math isn’t complicated.

    SteveF– I agree that at least in principle, all accepted scientific principles must be mutually consistent. Of course there are times when inconsistencies get noticed, and in those situations lots of people work to figure out how two things, each of which seems right on its own, give the appearance of inconsistency. Is light a particle? Is it a wave? Eventually, that sort of question gets resolved. (It’s both!!)

    I remain unconvinced that GCMs can make the kinds of accurate predictions their developers claim; efforts like yours to regularly test model predictions against observations are the only way models will progress….. no matter how uncomfortable modelers and their apologists are with the results.

    I’m not sure all modelers claim the sort of accuracy that seems to be conveyed in… uhhhmmm “certain quarters”.

    I do think the current models are over predicting temperature trajectories and the mean is outside the bounds of earth trends. I think modelers want to avoid this question and talk only about whether the earth trend falls in the range of the models. But refusing to discuss both is not right. When results are handed off to others – like economists – it is important to know whether the mean from the models is biased high, biased low, or about right. Pretending that we can’t recognize bias merely because the temperature does fall inside the range of the ensemble is collectively boneheaded.

  21. Lucia,
    The results handed off to economists are even more removed from a grounding in reality (e.g. Stefan Rahmstorf’s claims of extreme sea level rise by 2100). The uncertainty of climate models is small next to the uncertainty in the claims of utter catastrophe that economists receive from climate science.

  22. I do think the current models are over predicting temperature trajectories and the mean is outside the bounds of earth trends.

    Er… what are the “bounds of Earth trends”?

  23. Skeptikal
    Re: “when the underlying science is not well understood.”
    Depends on what you mean by “underlying science”. Newton developed his inverse-square universal law of gravitation, which works extremely accurately. However, Newton had no idea as to what causes gravity.

    Even today, scientists have proposed “gravitational waves”. However, they are so small that they are still struggling with developing an experiment sensitive enough to detect a gravitational wave. See Ripples in SpaceTime

  24. lucia (Comment #103376)

    It’s true that if the choice of any of the simpler models is tenuous, we can’t necessarily expect the numerical predictions from the model to reproduce reality. Whether the numbers coming out have “scientific value” is another matter entirely. Sometimes even very crude attempts to do something have “value”. But the question is then: in what way are those numbers valuable?

    If the output from a computer model is junk, then the only “value” you’ll get from it is that you’ll be able to say to your friends… “Don’t use that model. I’ve already tried it and it doesn’t work”.

    Climate scientists built climate models that didn’t work. They took the “value” from those failures to build the next generation of climate models… that still didn’t work. The problem with this approach is that you can’t build a climate model hoping it will teach you how the climate works. You first have to learn how the climate works and then you can go and build a model.

  25. Scratch my last comments. They’re more like computer models. But computer models are not exactly the same as science models. However, computer models are tools that can certainly help science (or other fields).

  26. Ged (Comment #103363)

    “Observations drive theory, and theory informs us where to look to find our next observations. But observation itself happens regardless of theory, which is why observations are actively pursued to disprove theories.”

    Einstein’s theory of relativity surely must be a case where theory drove observations.

  27. Skeptikal (Comment #103391),
    “Climate scientists built climate models that didn’t work.”
    Most all such efforts start with baby steps. “Didn’t work” is a bit unfair; the models do reproduce certain features reasonably well, others rather poorly. I think a more nuanced view is more realistic: the predictions of climate models over the long term are more uncertain than is usually claimed, and the most critical issues (like the transient and equilibrium climate sensitivities diagnosed by GCMs) are precisely where climate models are the least certain…… and most contorted by kludges to be consistent with historical data.
    The fact that GCMs are not yet up to the task does not mean they can’t possibly be made more accurate. Whether they can or will become more accurate is as yet unclear; I rather suspect observational data will end up dragging the modelers (no doubt kicking and screaming, if it is the current crew!) toward more accurate models, but there is no way to tell how long they will cling to their kludges.

  28. MrE, perhaps you should replace the words “computer model” with “numerical model” or even “computational model”? I think that’s what you mean.

    A computer model would be e.g. a Mac Pro 3.5

  29. SteveF, an example of a “baby step” model would be a single column of air model, which you can use to predict the radiative effects of increasing CO2 (direct forcing) and feedback term from water vapor by assuming constant humidity (water vapor feedback), etc.

    Another type of “baby step” model would be a two-box model, with one being atmosphere (fast latency) and another being the oceans (slow latency).
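A two-box model of that kind is small enough to sketch in a few lines. Every parameter value below is a round illustrative number, not tuned to any dataset, so treat the output as qualitative only.

```python
# A sketch of the two-box idea: a low-heat-capacity "fast" box
# (atmosphere + mixed layer) coupled to a high-heat-capacity "slow" box
# (deep ocean), forced by a step in radiative forcing.
def two_box_step(forcing=3.7, lam=1.2, c_fast=8.0, c_slow=100.0,
                 kappa=0.7, years=200, dt=0.1):
    """Euler-integrate the coupled boxes.
    forcing: W/m^2 step; lam: feedback parameter, W/m^2/K;
    c_fast, c_slow: heat capacities, W yr/m^2/K;
    kappa: inter-box heat exchange coefficient, W/m^2/K."""
    t_fast = t_slow = 0.0
    for _ in range(int(years / dt)):
        exchange = kappa * (t_fast - t_slow)
        t_fast += dt * (forcing - lam * t_fast - exchange) / c_fast
        t_slow += dt * exchange / c_slow
    return t_fast, t_slow

t_fast, t_slow = two_box_step()
print(f"after 200 yr: fast box {t_fast:.2f} K, slow box {t_slow:.2f} K; "
      f"equilibrium for both would be {3.7 / 1.2:.2f} K")
```

Even this toy shows the qualitative point: the fast box warms within decades, but the coupled system takes centuries to approach the equilibrium value forcing/lambda.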

  30. “What is dangerous to science (as I understand it in Ged’s argument) is the potential to mistake a model run for an empirical study. ”

    Yes, however, we do this all the time. There is a silly notion that “science” is limited to empirical studies. How much does the moon weigh?
    Well, clearly one doesn’t go out and weigh the moon. Its mass is estimated by applying a set of equations to some data.

    http://newton.dep.anl.gov/askasci/ast99/ast99487.htm

    We cannot measure the effect of doubling CO2 by going out and doubling CO2 and seeing the result. We can’t do controlled experiments. Nevertheless we can use first principles to estimate the effect. If the first principles are well known and accepted (“laws”… a weird term to apply, in my mind) then the solutions are simple and easily performed in your head or on a piece of paper with a pen or pencil. So, using some first principles we can go out and estimate the mass of the moon and not look for a giant scale or a giant tape measure to measure the circumference. When the problems and systems get more complicated the math becomes harder to do by hand, like trying to figure out how to make an atomic bomb. And then you might want to take those governing equations and have a machine calculate them.
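The moon-mass point can be made concrete. The surface gravity and radius below are rounded textbook figures; Newtonian gravity does the rest.

```python
# A sketch of the "weigh the moon" point: nobody puts the Moon on a
# scale; its mass follows from applying Newtonian gravity to measured
# quantities.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
g_moon = 1.62      # lunar surface gravity, m/s^2 (measured)
r_moon = 1.7374e6  # mean lunar radius, m (measured)

# Surface gravity: g = G M / R^2, so M = g R^2 / G.
mass = g_moon * r_moon ** 2 / G
print(f"estimated lunar mass: {mass:.2e} kg")
```

The result lands within a fraction of a percent of the accepted value (~7.35e22 kg), even though no one ever “observed” the mass directly.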
    A physical law is nothing more than a symbolic expression that quantifies over entities posited to exist. F=MA posits that there is a thing called a force and a thing called mass and a thing called acceleration, and it expresses how they are related. That mathematical expression can be considered to be a form of data compression, such that a dataset of observations can be replaced by the symbolic expression. A model is nothing more than a collection of these symbolic expressions. Is a model evidence? Or is model output evidence? That’s a good question.
    The better question is can model output be used to form a warranted belief.
    I suppose if somebody ran a model of a nuclear explosion and concluded based on the “best understanding” of the “laws” of physics that the thing would work, I would take that as evidence and I would not stand next to a bomb that was designed using that model when they flipped the switch to test the model.
    I would imagine that if a computer model told me the plane could not fly, that I would not jump in the pilot seat to test the model.
    By practice we take model output as evidence to form warranted beliefs that guide our behavior. (And we can all point to examples where models identified bad data collection systems.) The bottom line is that if you are looking for “bright lines” to decide this question you are out of luck. There isn’t a bright line that says trust data over models or trust models over data, or a bright line that says you must do an experiment to have “knowledge” or “understanding” or a “reason to believe”.

  31. http://chem.tufts.edu/answersinscience/relativityofwrong.htm

    “when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”

    Something to think about when you say that models are junk:
    the relativity of wrong

  32. Carrick (Comment #103399),
    Sure. And those baby steps really need rock-solid confirmation with measurement data. The not-so-baby-steps of primary and secondary aerosol effects and the net influences of increasing absolute humidity on cloud types, geographical and seasonal distribution, and the resulting net change in radiative balance are the bigger issues. Few observations, lots of parameters and arm waves. Not at all convincing (at least, not to me 🙂 ).

  33. Still the baby steps tell us pretty convincingly (or me anyway) that a direct radiative effect exists, and that water vapor feedback plays an important role in amplifying it. There are qualitative aspects that we get to pretty easily.

    Getting from there to a model that is useful for policy making… a much, much different problem.

  34. The model of gravity that Newton invented was not based on first principles. If it were, he would have started with gravitons and worked his way up from there. We still don’t know anything about gravitons. The climate models are based on first principles.

  35. “Nevertheless we can use first principles to estimate the effect.” Yes, but apparently current modelers have some difficulty interpreting their modeled results.

    With the BEST press release it was noted that the diurnal temperature range was widening after decreasing. If you fit an RC time constant to the Tmin, you can see that Tmin in the Asian steppes is approaching a limit. I used the Asian steppes because I am a bit anal about land use being underestimated.

    http://redneckphysics.blogspot.com/2012/09/catch-wave.html

    My latest rant, but the reductions in variance of both the Tmax and Tmin (last graph before the desertification animation) are rather striking if you happen to be into non-linear dynamics.

    This reduction in variance, and the variance of sensitivity to different forcings in different locations, would be the reason to have complex numerical models. The BS deltaT = lambda*deltaF is for the peanut gallery.
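    The “RC time constant” fit mentioned above is just a first-order exponential approach to an asymptote, T(t) = T_limit − ΔT·exp(−t/τ). A minimal sketch of such a fit, using synthetic data, since the BEST station series is not reproduced in this comment:

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order ("RC") response: exponential approach to an asymptote.
def rc_response(t, T_limit, dT, tau):
    """T(t) = T_limit - dT * exp(-t / tau)."""
    return T_limit - dT * np.exp(-t / tau)

# Synthetic stand-in for a Tmin series; parameters chosen arbitrarily.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 200)                      # years
obs = rc_response(t, 2.0, 3.0, 30.0) + rng.normal(0.0, 0.1, t.size)

# Fit the three parameters; p0 is a rough initial guess.
popt, pcov = curve_fit(rc_response, t, obs, p0=(1.0, 1.0, 10.0))
T_limit, dT, tau = popt
print(f"fitted limit = {T_limit:.2f}, tau = {tau:.1f} yr")
```

    The fitted T_limit is the “approaching a limit” value the commenter refers to; with real station data the noise level and record length would dominate how well τ is constrained.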

  36. Re: bugs (Comment #103406)


    The climate models are based on first principles.

    From the Wikipedia entry on First principles:

    In physics, a calculation is said to be from first principles, or ab initio, if it starts directly at the level of established laws of physics and does not make assumptions such as empirical model and fitting parameters.

    If you agree with the Wiki definition, then the presence of empirical parameters in the climate models would suggest that climate models are not based on first principles, despite often-made statements claiming that they are.

  37. bugs (Comment #103406),
    “The climate models are based on first principles.”
    Plus parameterizations, estimates, kludges, and arm waves; they are not yet ready for prime time. Remember bugs, the IPCC models differ by a factor of >2 in diagnosed climate sensitivity. This ought to give even you some doubts about their ability to make accurate long term predictions.

  38. The question is: what is a GCM good for, and what is the alternative for that purpose?

    1. Recognizing that putting CO2 in the atmosphere is not a risk-free
    activity, we are left with the question of what the risk is.

    First question a policy maker might want to ask is

    A. How much warmer/colder will it be in 100 years?

    Estimate. Go ahead. What I find is that most “science types” hate guessing. You probably want to say “we can’t know”. Well DUH.
    That is why you have to estimate. Then comes the question: what’s the best way to estimate? Shrugging your shoulders and blathering about “natural variability” isn’t an estimate. So estimate and defend your estimate.

    This question is divorced from the policy question. Based on the best science you have today, the best data you have, estimate.
    That’s the job climate science has. Not to recommend policy, but to simply and clearly estimate a very hard thing to estimate, with full transparency and accountability. That might end up being
    3C ± 6C or some other “useless” projection, but it is what it is.

    Blather about models not being science, and blathering about how poorly models replicate the past, is beside the point. What’s the best estimate, and how do you defend it? If you think you have a better estimate, step up and defend it. If you want to say “we don’t know” then go do something else. The job is well defined. Estimate or hit the road.

  39. Underpinning and generating the alarming predictions being made by climate science are computer models. Whatever they’re predicting the climate will be like in a hundred years, there’s simply no experiment that can be done to verify or disprove the assertions being made. When the same models are used to make seasonal predictions, they’re hopeless, as evidenced by the UK’s Meteorological Office recently giving up on making seasonal predictions, in the light of the last five years of hopelessly inaccurate ones. Even if the short-term predictions had been accurate, it’s still no indication that the long-term predictions would be accurate as well.

    http://thepointman.wordpress.com/2012/07/13/is-climate-science-just-a-belief/

    Pointman

  40. @Oliver

    If you agree with the Wiki definition, then the presence of empirical parameters in the climate models would suggest that climate models are not based on first principles, despite often-made statements claiming that they are.

    They are; the limitation is the resolution of the models, because of hardware constraints, not because they don’t understand the physics. Newer hardware is much more powerful and will permit much better resolution with more realistic clouds.

    The performance with particles has been tested with volcanic events. IIRC, they are a ‘parameter’ in the sense that we can’t predict just how much will be up there, depending on what we do to produce them.

  41. @Pointman (Comment #103437)
    September 14th, 2012 at 3:13 am
    Underpinning and generating the alarming predictions being made by climate science, are computer models. Whatever they’re predicting the climate will be like in a hundred years, there’s simply no experiment that can be done to verify or disprove the assertions being made.

    A lot of climate scientists thought that attempting to predict the seasonal weather was doomed to fail. Climate models aren’t being used to predict the weather in 100 years. They are being used to estimate climate sensitivity.

  42. lucia (Comment #103394)

    I wouldn’t go so far as to call the output from AOGCM’s “junk”.

    I’ll admit that I live in a black and white world where shades of grey just don’t exist. In my world… either something works, or it doesn’t. You might be able to make a determination on what level of bad output is acceptable, but I can’t. To me, a bad output is junk.

    A good example of bad output is something that pointman has already mentioned…

    When the same models are used to make seasonal predictions, they’re hopeless, as evidenced by the UK’s Meteorological Office recently giving up on making seasonal predictions, in the light of the last five years of hopelessly inaccurate ones.

    You might choose to call it something like ‘less than perfect quality output’… but I call it junk.

  43. Skeptikal:

    I’ll admit that I live in a black and white world where shades of grey just don’t exist. In my world either something works, or it doesn’t.

    The blight of Western secular thinking….

    Most things work to a degree, and whether they meet some required standard depends on the whims of the people drawing up the standard. Because of initial uncertainty, the standard something meets iterates along with the technology aiming to meet it. (And not just so the technology can meet it, though I can privately point to an example where exactly that happened.)

    As you get data from the not-black-and-white, not-working-but-not-ideal instruments, it improves your understanding of what it needs to be able to measure, what it needs to be insensitive to, and so forth, and that is what drives iterations in standards.

  44. bugs:

    They are, the limitations are the resolution of the models because of hardware limitations, not because they don’t understand the physics

    Actually they don’t understand the physics. If you understood the physics yourself, you’d realize why this is too. Multiphase stuff is still very poorly understood (clouds, floating ice shelves, glaciers, etc.) Let alone the interactions between the biosphere and the physical/mechanical aspects of climate. That’s barely touched on.

    Being able to simulate the universe (or the Earth’s climate) from first principles in a numerical model isn’t particularly helpful in elucidating why it is the way it is. The best you can say of such models is that you can make them “noiseless”… that is, you get “perfect measurements” which, if you can really trust the model (in electronics design you often can), allow you to build analytic models that give you insight more rapidly.

  45. I don’t have a problem with the notion of a climate model as a scientific enterprise because its assumptions are expressly given and its quantitative output is reducible to testable statements about future events. [Note to Mosher: Admittedly ignoring the issue of “seeing as” versus seeing while pursuing an instance of intra-paradigmatic normal science.]

    What I have never understood is why the “model mean” is treated (rhetorically at least) as if it were a uniform set of testable assumptions, like a single model in which the relative success of any one somehow validates the rest.

    Obviously, some assumptions or combinations of assumptions comprising the respective models are less skillful than others, so at what point do we drop the losers from the ensemble?

    My conspiratorial assumption is that warmer, high-end, increasingly bogus models are permitted to borrow the legitimacy of the cooler, closer-to-reality products by association so that a political statement can be made about the scientifically-endorsed possibility of very high CO2 sensitivity.

    Why not have an ensemble of gravity models so that we can say that a brick falling from a fixed height will hit the sidewalk in 1.5 to 8 seconds based on an ensemble of different assumptions about the rate of acceleration due to gravity and then declare the model ensemble valid if the result is anywhere near that range?
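    The brick analogy is easy to make concrete: each ensemble member assumes its own g and predicts a fall time t = sqrt(2h/g). The g values below are invented purely to reproduce the commenter’s 1.5-to-8-second spread:

```python
import math

# An "ensemble" of gravity models, each with its own assumed g,
# predicting the fall time of a brick from height h via t = sqrt(2h/g).
# The spread of g values is invented for illustration.
h = 11.0  # metres; roughly a 1.5 s fall under standard gravity
g_ensemble = [0.35, 2.0, 5.0, 9.81, 12.0]  # m/s^2, deliberately wild

times = [math.sqrt(2.0 * h / g) for g in g_ensemble]
print([round(t, 2) for t in times])  # spans roughly 1.35 s to 7.93 s
```

    The observed 1.5 s fall “validates” this ensemble only in the vacuous sense that it lies somewhere inside the spread, which is exactly the commenter’s complaint.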

  46. Lucia – “This is because the heart of science is the comparing predictions from a model to observations…”

    This runs to the heart of it. How are these models tested? By hindcasting? Has anyone run a model forward since 1900 or 1950 and got results which match reality fairly well?

    Dr. Curry has a current post about the NRC’s new report on “Advancing Climate Modelling” (precis: more $$$, more powerful computers, and – direct from the report summary which I downloaded – “rewards for model advancement through well-paid career tracks [which] could help entice high caliber computer and climate scientists to become climate model developers” 🙂 ).

    Little on validation, she comments. It has a link to “Climate Modelling 101” (ie: GCMs for Dummies – http://nas-sites.org/climatemodeling/index.php ) which I figured would be about my level. I looked at the section on validation, which just said there was more info in the NRC report. The free summary of the report has nothing on verification, validation or testing.

    Sorry if I’m all over the place with this, but when somebody makes a prediction, the first thing I’d do is check their track record. Ditto for models. That a model doesn’t violate the laws of thermodynamics means it is consistent with physics; it doesn’t make it reliably predictive.

  47. “As for invalidating a model: You might be able to ‘invalidate’ “a model” by showing its assumptions aren’t true,”

    In the very elementary population genetics pencil-and-paper modelling class I took in grad school, we were taught that if the assumptions didn’t hold, the model was invalid – full stop. I continue to be surprised that this matter is never brought up regarding GCMs. What are the assumptions of the model, and how confident are we that the assumptions hold? Please note: this has nothing to do with the math. The equations can be solved perfectly, and the results can be nonsense if the assumptions of the model don’t hold. Every time I was required to write a model in class, I had to list all appropriate assumptions – why is this not done when looking at GCMs?

  48. Everyone, including scientists, receives sensory info that becomes the basis of our knowledge of the world.

    Those are the observations at the most fundamental level. That is not modeled.

    The human mind automatizes those into perceptions. It is not consciously done.

    Conception is where the volitional aspect of human mind function starts. Is the conscious and volitional act of conceptualization modeling, or a model, in any fundamental sense? I tend to think not.

    Science is fundamentally a systematic approach to the conceptualization of reality. Logic, symbolism, and mathematics are thus created as products of the conceptualization of part of reality.

    Is a mathematical construct a model? Yes, I tend to think so.

    To say science is modeling is to overgeneralize just one part of science into the whole of science.

    Just some philosophizing on a Friday afternoon in San Jose CA. : )

    John

  49. A “model”, broadly speaking, is a hypothesis about how something works, mathematically. Even more broadly speaking we could add “conceptual” models to that, or even call simple application of logic a “model”. Basically it is meant to represent how reality is thought to work. There is no deep philosophical problem with models (except in those that use mathematical formulas in the realm of human behavior, but excuse me for channeling the Underground Man); indeed their use is a fundamental part of science. That being said, it is always important to test whether a model is in fact “correct”, i.e. that it gives results sufficiently close to the “truth” – measurements, experimental results – that it cannot definitively be said to be wrong. When one repeatedly tests a model it may be elevated from “hypothesis” to “theory” about how the world pretty definitely works. Those are the kinds of models that we put a lot of trust in, because they have a proven track record and it would be pretty shocking if they suddenly failed. IMAO GCMs are a long way from rising to that level.

    Steven Mosher (Comment #103414) – I disagree that the job of “Climate Science” is to provide estimates for future changes. In many cases that is probably counterproductive. Rather, the “job” that would be more productive would be informing people about the full range of risks and hazards associated with natural variability and anthropogenic effects. Given that these can vary widely and be far larger than model predictions (based on paleo evidence and even observations), but are not necessarily predictable as yet, they represent risks society at large should be aware of. To clarify what I mean, I am talking about weather variations and hydrological variations on a local scale, i.e. at the “impacts” level of resolution. If all that “Climate Science” does is regurgitate model projections of this information, it is doing a severe disservice to people in misrepresenting the range of risks they may face in the future regardless of future anthropogenic changes.

  50. //lucia (Comment #103385) September 13th, 2012 at 2:18 pm Nyq–
    I agree with all that you say. Models need not be mathematical and not all are– still in the physical sciences I think most are in some sense. Even the one you described is mathematical– though the math isn’t complicated.//

    Yes – personally I think an operating definition of what maths is, is the stuff you can make models out of. However, for the purposes of this discussion that would be a bit circular and would also stray too far into an even more involved discussion about the nature of mathematics.

    A good example of a deep but basic model was Darwin’s initial sketch of the tree-of-life, i.e. that the species (past and present) can be seen as lines on a branching tree. We would now see a tree as an important data structure and appreciate that species-as-a-tree is a kind of mathematical model. However, because it isn’t a formulas-and-variables sort of model (at least not at first glance), people might not see that it is both a model and mathematical.

  51. Mmmm. I think it is perfectly legitimate to call output from the present AOGCMs “junk”.

    I worked on the development and application of various numerical simulators for a couple of decades.

    Back in the late 70’s, when I was a relatively young squib, I developed a (dynamic) simulation model from scratch, using then-state-of-the-art technology. It was for specific use to test the range of possible outcomes from a large prospective capital investment project.

    It cost me 4 months of my life, and the model was exquisite (I thought) – bright and shiny and beautiful. I was very proud. There was only one slight problem, which was that I could not get the model to match the limited amount of observational data we had to test against.

    I made a presentation to a bunch of grizzled engineers and commercial guys, wherein I showed that the governing equations had been individually tested in lab experiments, the conversion to numerical form checked out perfectly, the solution routine was clearly working well, etc.

    Everyone was very pleased until I showed the results of my various contorted attempts to match the observational data.

    One engineer, in summary, said something like: “OK, here are 3 things you can try. If you still can’t get a match, then junk the model and move on.” I was horrified, and tried to defend keeping the model for various (wrong) reasons, to which he responded not unkindly:
    “Look. If there is one thing we have learned it is this:
    if you CAN match historical data, it does not prove that you can make confident predictions; but if you CANNOT match historical data, it does prove that you are unable to make confident predictions.”

    Good advice then, and still good today.

    The AOGCMs do a fair-to-mediocre job of matching mean surface temperature trends in history, largely by using total net forcing as a matching parameter. Adjustment of aerosol data provides the degree of freedom required to do this. On the other hand, they do a terrible job of matching the frequencies of temperature variation. They do a terrible job of matching the temperature field variation regionally and vertically. They do a terrible job of matching regional precipitation. Water vapour data are matched only by arguing for expanded uncertainty bounds.

    They do a fair-to-mediocre job of matching total net flux, but a terrible job of matching SW and LW separately (critical(!) to understanding attribution). Shortwave heating through changes in TSI and clouds is replaced in the models by reduction of outgoing LW.

    Coarse gridding introduces numerical error and forces a number of requirements for simplifying assumptions and parameterisation steps, both for control of instabilities and for characterisation of sub-gridscale processes. Each step takes the model-build process one step further from the basic physics which (one hopes) are expressed in the governing equations before numerical conversion; for unstable fluid flow problems some of these steps are potentially enormous and notoriously difficult to quantify. It is simply invalid to point to an AOGCM result and say “that’s what the physics tells us”, and that is one reason why the match to ALL of the observational data is so critical to the credibility of the model.

    If anyone is wondering what happened to MY model: there was a happy ending. I tested the various suggestions, STILL failed to match the observational data, and with great reluctance then abandoned the model. I wrote up the results and moved on.

    Almost a year later, I was working on a completely different project, when I was approached by a very talented engineer who asked if I had considered the effect of grid orientation on my results. (I hadn’t.) He explained why he thought it could make a big difference. I spent a few days writing a conformal mapping package which allowed the model to simulate a non-cartesian coordinate system. Bingo, the model magically started to reproduce the observations, and was used as a critical prediction tool in bringing the investment project to sanction.

    So the physics in my previous model had been essentially correct. The problem was in the conversion of the physics to a numerical model. There may be a lesson there somewhere.
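    The lesson generalizes: even when the governing physics is exact, the discretisation can introduce spurious behavior. A standard toy example (not Paul_K’s simulator, which isn’t described in enough detail to reproduce) is first-order upwind advection, whose truncation error acts like artificial diffusion and smears a pulse that the exact equation would carry along unchanged:

```python
import numpy as np

# Pure advection preserves a pulse exactly; the first-order upwind
# scheme smears it.  The smearing is discretisation error, not physics.
nx, c = 200, 0.5          # grid points; Courant number u*dt/dx (stable for c <= 1)
u0 = np.zeros(nx)
u0[20:40] = 1.0           # initial square pulse

u = u0.copy()
for _ in range(200):      # advect the pulse 200*c = 100 cells downstream
    u[1:] = u[1:] - c * (u[1:] - u[:-1])   # explicit upwind update
    u[0] = 0.0                             # inflow boundary

print(f"initial peak = {u0.max():.2f}, peak after advection = {u.max():.2f}")
```

    The exact solution keeps the peak at 1.0; the numerical one visibly decays, even though every term in the scheme came straight from the “correct” governing equation. Grid orientation effects in multi-dimensional simulators are a cousin of the same problem.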

  52. Although, in a deeply ontological sense, every scientific idea is a conceptual “model,” it is customary in science to reserve that term for a simplified or incomplete schematization of the functioning of a system. It is precisely at this point that models depart from first-principle derivations of theory or scientific “law.” GCMs, in particular, are constructed in the absence of any proven comprehensive theory of climate and are flagrantly variable in their treatment of thermalization near the surface and the transfer of thermal energy through the atmosphere to space via conduction, convection, and ultimately radiation. Calling present-day GCMs “scientific” is a huge stretch.

  53. Nyq Only (Comment #103477)

    A good example of a deep but basic model was Darwin’s initial sketch of the tree-of-life. i.e. that the species (past and present) can be seen as lines on a branching tree…because it isn’t a formulas-and-variables sort of model (at least not at first glance) people might not see that is both a model and mathematical.

    Certainly it is true that not all mathematics involve algebra!

    Paul_K (Comment #103518)

    Mmmm. I think it is perfectly legitimate to call output from the present AOGCMs “junk”.

    The AOGCMs do a fair-to-mediocre job of matching mean surface temperature trends in history, largely by using total net forcing as a matching parameter.

    I think your assessment is too harsh. It isn’t clear how accurate the GMST surface trends are, but the models certainly have their uses. Certainly the models are now mature enough to generate “hypotheses” and predictions which guide observational experiments. Think of the Visible Man. I’m not saying that I can learn everything about the human body from opening him up… but as long as he’s accurate enough, I can see where it would be interesting to poke around in the real thing…

    So the physics in my previous model had been essentially correct. The problem was in the conversion of the physics to a numerical model. There may be a lesson there somewhere.

    There is a whole field in there somewhere!

  54. IMO there should be much more emphasis on measuring and reporting global accumulated energy, and models should be measured and judged against their ability to project accumulated energy, not by comparing against a meaningless “global temperature”.

    We might be ultimately interested in temperature, but it’s a poor performance indicator simply because it’s a poor measure in the first place.

  55. TTTM 103594,
    Sure, but the energy added from GHG and other forcings (positive and negative) is far too uncertain to allow this. The IPCC range of estimates for primary and secondary aerosol effects is huge, so you just don’t know how much energy is ‘supposed’ to accumulate. IOW, unless all forcings are very well defined, looking at accumulation of energy tells you zip about model accuracy.

  56. SteveF writes “you just don’t know how much energy is ‘supposed’ to accumulate.”

    Without accurate measurements of accumulated energy and modelling of accumulated energy, temperature projections must be little more than curve fits and not physically based, so I don’t see how it can be legitimately avoided.

  57. I agree with Sky that there is a difference between “model” and “law” in the sense that when one talks about first principles, one talks about laws that are supposed to be universal and invariant in the observable Universe.
    .
    To illustrate I would mention the famous Emmy Noether’s theorems.
    In them she linked (established an equivalence between) symmetries in the mathematical equations of the dynamics and conservation laws in physics.
    This extremely deep insight is clearly far beyond a mere “model”.
    In that sense there are actually very few first principles/laws while there is a (potential) infinity of models.
    In classical physics I would say that it is Hamilton’s equations and their symmetries/conservation laws, to which one adds the 2nd law of thermodynamics.
    In QM it is the correspondence principle and Schrödinger’s equation.

    For me a model is an ad hoc approximation of equations derived from first principles (e.g. the many approximations of the unique Navier–Stokes equations), and that’s why one uses a different word than law.
    As an ad hoc approximation it may work, i.e. give a reasonable and validated prediction for a limited and generally very accurately constrained set of cases.
    The fact that a model fails in many other cases is considered not very important, because it has been developed precisely with the knowledge that it will NOT cover those cases, which are deemed “uninteresting” for this particular model.
    The biggest sin one can commit with a model is then to apply it to cases where one should know that it must fail.
    Actually in many model cases these validity boundaries are not precisely known, so that people who tend to oversell their case apply the model to systems/configurations where there be dragons.
    .
    Of course if one were doing philosophy and not science, one could also say, in a strained way, that “laws” are “models” of reality – Plato already said so a long time ago.
    Obviously an equation describing a thing is not the thing itself, but this difference without a clear distinction doesn’t justify abandoning the distinction in physics between law and model.
