Miroc5 : Watanabe’s planet sure has noisy weather!

Last week I discussed
Watanabe’s paper, which attempted to explain that the recent 10 year long hiatus in warming might be consistent with hiatuses seen in models; the argument in that paper included some specific comparisons to runs of MIROC5. Kenneth prodded me to obtain the runs used in the AR5; over the weekend I downloaded those runs forced with the RCP4.5 scenario currently available. (Based on numbers mentioned in Watanabe, I think more remain to be uploaded.) Fortunately, 3 realizations of MIROC5 projections under RCP4.5 were available, and I can now show a plot comparing MIROC5 to observations.

For your enjoyment, the plot baselined to 1960-1989 (inclusive) is shown below:

[Figure: Observations and MIROC5]

Tentatively I wish to highlight a few things:

  1. Using the ordinary eyeball test, the current observed temperature appears to fall outside the range consistent with MIROC5 forced under RCP4.5. It’s not even close.
  2. The residuals to a straight-line fit are 30-60% larger in MIROC5 than in observations. I have long noticed the short term variability of models is higher than for observations. This is a rather bad sign, because observations include variability due to “earth weather” and observational errors, while variability in models is due to “model weather” only. If models are correct, short term variability for the earth should be greater than or equal to that in models. Moreover, the fact that the models’ short term variability is too high casts doubt on conclusions about the frequency of ‘short’ (i.e. 10 year) hiatuses in earth warming drawn merely because we see these in models whose short term variability is too high.
  3. While modelers may not wish to talk about how 10 year hiatuses are possible, I think that begins to be beside the point: in the first place, we are seeing 12 year hiatuses, and in the second place, we are seeing substantial divergences in 30 year trends, with model warming outstripping that of the earth. It is difficult to explain that in terms of 10 year hiatuses.
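For anyone who wants to check the sort of comparison behind point 2 themselves, the computation is just the standard deviation of the residuals about an ordinary least-squares straight-line fit. Here is a minimal sketch with synthetic series standing in for the model and the observations; the trend and noise levels are illustrative only, not the actual MIROC5 or observational values:

```python
import numpy as np

def residual_std(series):
    """Std. dev. of residuals about an OLS straight-line fit in time."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)
    return np.std(series - (slope * t + intercept))

rng = np.random.default_rng(0)
months = 12 * 30          # a 30-year window of monthly anomalies
trend = 0.015 / 12        # degC per month, illustrative only

# Synthetic stand-ins: "model" weather noise larger than "observed"
obs = trend * np.arange(months) + rng.normal(0.0, 0.10, months)
model = trend * np.arange(months) + rng.normal(0.0, 0.15, months)

ratio = residual_std(model) / residual_std(obs)
print(f"model/obs residual std ratio: {ratio:.2f}")
```

With the real series, one would simply substitute the MIROC5 runs and an observational index for the two synthetic vectors.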

I’m in the process of shoving the AR5 models into my old scripts and will be discussing variability of variability a bit more later. (Among other things, I think it’s better to discuss that looking at all the models rather than just this one.) I’m also going to be making graphs with runs for individual models sharing the same color, to highlight the degree to which the spread in the spaghetti in graphs like the one at Ed Hawkins’ blog is due to structural uncertainty in models rather than due to “weather” in models. (I anticipate we’ll see a lot of the spread due to structural uncertainty. We saw that for the AR4 models and I anticipate we’ll see it again.)

I’ll post a few things from time to time as I see “interesting” things. Meanwhile, I’m trying to make sure none of the downloaded files have obvious “boo-boos” in them.

28 thoughts on “Miroc5 : Watanabe’s planet sure has noisy weather!”

  1. Lucia,

    I have long noticed the short term variability of models is higher than for observations. This is a rather bad sign because observations include variability due to “earth weather” and observational errors while variability in models is due to “model weather” only.

    I guess you’ve explained this elsewhere but can you remind me why observational errors would be independent of earth weather?

  2. BillC–
    They aren’t necessarily independent, since they could be affected by the location of the stations, and station locations might be such that errors are affected by the weather pattern. But the residuals to straight-line fits could still be uncorrelated. (Bear in mind, systematic biases like “not detecting the poles” affect the computed trend line, but that’s different from affecting the magnitude of the residuals.)

    But why do you ask? Do you think errors will be such that they cancel out weather noise in a way that makes the residuals to a linear fit smaller? That is: when the earth is hot, the errors will show the earth cooler, and when it is cool, they will show it hotter?

  3. They may have higher noise amplitude because of better resolution. When we compute a global average for the Earth, we typically have to smooth on a scale of hundreds of km. That wipes out a fair spread of variation. Models can resolve this.

    Mosh posted a remarkable video showing the resolution that GFDL now achieves. For real measurements, this is all averaged over a 5×5° grid.

  4. MIROC5 may be a wee bit too sensitive. Of course, it has a lot of company, both among climate models and among those who happen to like model projections and dislike the divergence of those projections from reality. 😉
    .
    I await the day that data has the same ability to focus the minds of climate modelers as it has in most other scientific fields. There isn’t much apparent progress so far, so all one can do is hope for the best.

  5. Nick–
    I know you might have to spatially smooth, resulting in an error. But why would that reduce the rms around a linear trend in time? It seems to me it would not: an error is an error. The spatial smoothing error could result in over- or underestimating the global temperature. But those would just be errors. There is no reason for the spatial smoothing to countervail variations due to “weather noise”.

    Or is there some other type of smoothing going on? (And if yes, have you applied your algorithm to model data sampled at the station locations? If yes, did you get more or less “weather noise” in the surface temperature time series? If you have your algorithm and the model data, this is knowable. You just run the algorithm.)

  6. SteveF (Comment #117020)

    MIROC5 may be a wee bit too sensitive. Of course, it has a lot of company, both

    You’ll like tomorrow’s graph. 🙂

  7. Lucia,
    Spatial smoothing produces temporal smoothing, attenuating higher frequencies. To what extent that goes through to the monthly scale, I’m not sure.

    I haven’t tried running TempLS on model data. Part of the reason is that its main function is to handle irregularly spaced points (with missing months), and models give regularly spaced data. Of course, it can handle that, but it’s not so good for, say, 1° resolution. Then the overhead of gearing up for irregular spacing starts to hurt. Just a spatial integrator would do.

  8. Nick writes “Spatial smoothing produces temporal smoothing, attenuating higher frequencies. To what extent that goes through to the monthly scale, I’m not sure.”

    Just to be clear then, you’re suggesting that the spatial resolution we sample at in real life could produce something like an under- or overstatement of 30-60% relative to the true value if we had ideal sampling?

  9. Nick –
    I think I’m missing something fundamental here. The question is not the spectral content of the global average, but Lucia’s observation, “The residuals to a straight-line fit are 30-60% larger in MIROC5 than in observations. I have long noticed the short term variability of models is higher than for observations.”
    .
    Whatever smoothing-over-time effect spatial averaging may produce, oughtn’t it have the same effect upon observations as upon models? Or is there some reason to expect asymmetry?

  10. HaroldW–
    The models have less spatial smoothing. But I don’t see why spatial smoothing results in temporal smoothing. In fact, it seems to me it would not, and the opposite would occur. Spatial smoothing should result in temporal measurement noise, and that should increase noise in the time series.

    If Nick has a more sophisticated argument, or better, an evidence-based argument showing spatial smoothing results in temporal smoothing, I would be interested in that.

    OTOH: if he means they use temporal smoothing when creating the surface temperature series, that would be different. If so, it would be nice to see that the combination of temporal smoothing and spatial smoothing smooths the data. For now: I’m not sure I know his full argument. It mentions spatial smoothing, but I don’t see how that would result in temporal smoothing.

  11. DocMartyn (Comment #117031)

    I am sure the best way to compare them is via a cumulative sum control chart;

    Why do you think this? (I know why these can be handy for process control. But it strikes me as a bizarre, confusing way to compare models and observations.)

  12. Yes, there was an example of this model “overestimation” of short term fluctuations on Ed Hawkins’ blog. It appears in his chart that the models significantly overpredict the response to volcanoes. Ed himself, in response to my comment, confirmed this and said it’s a matter of ongoing research.

  13. HaroldW
    “Whatever smoothing-over-time effect spatial averaging may produce, oughtn’t it have the same effect upon observations as upon models?”
    Sorry, misunderstanding there. I’m saying that observations are smoothed; models are not, or less so. So models have more HF noise.

    TTFM,
    No, I’m talking about noise amplitude only, which is Lucia’s topic.

    Lucia,
    They are related. Most obviously, where wave motions are involved. That’s Nyquist, where the spatial sampling frequency puts a hard limit on time frequency via the wave speed.

    But you know it in numerical turbulence too. TKE dissipates partly because it cascades down to sub-grid.

    Here you can see it comparing Mosh’s GFDL video with satellite observation. The patterns are similar, but even though the sat is 1/4°, compared to about 5° for normal observation indices, you can see that fine scale variation is filtered out.
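The wave-motion version of the claim can be checked with a quick numerical sketch (all units arbitrary and illustrative): spatially averaging a traveling wave sin(kx − ωt) over a window of width L attenuates its amplitude by |sin(kL/2)/(kL/2)|, and because ω = c·k, higher temporal frequencies are exactly the higher wavenumbers that get damped the most.

```python
import numpy as np

c, L = 1.0, 1.0                      # wave speed and averaging window width
x = np.linspace(-L / 2, L / 2, 2001) # points across the spatial window
t = np.linspace(0.0, 20.0, 2001)     # several wave periods in time

def averaged_amplitude(k):
    """Temporal amplitude of sin(k*x - w*t), w = c*k, after spatial
    averaging over [-L/2, L/2]. Analytically |sin(k*L/2) / (k*L/2)|."""
    w = c * k
    series = np.sin(k * x[None, :] - w * t[:, None]).mean(axis=1)
    return series.max()

low = averaged_amplitude(1.0)    # low wavenumber, low temporal frequency
high = averaged_amplitude(10.0)  # high wavenumber, high temporal frequency
print(low, high)
```

The high-wavenumber wave comes through with a much smaller amplitude than the low-wavenumber one, which is the sense in which a spatial average acts as a temporal low-pass filter for propagating features.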

  14. Nick writes “No, I’m talking about noise amplitude only, which is Lucia’s topic.”

    I don’t think you are, Nick.

  15. Nick:

    Sorry, misunderstanding there. I’m saying that observations are smoothed. models not, or less so. So models have more HF noise.

    Actually, this is backwards for Miroc 5: It has less HF noise, and more low-frequency noise. (If you plot the time series, and zoom in on it, you get this really crazy looking regular oscillation.)

    Figure.

    The problem appears to boil down to the fact that Miroc5’s ENSO mechanism is way too active.

  16. Nick

    But you know it in numerical turbulence too. TKE dissipates partly because it cascades down to sub-grid.

    I know what you are saying with respect to whether models can capture HF patterns. Better resolution could permit the models to have higher frequency wave patterns. Whether this necessarily results in a higher total amount of TKE in the model is actually an interesting question, because it might not. But it might, and I at least understand what you are claiming for a model.

    But I’m not asking anything about the model. I’m asking about the observations.

    But TKE dissipation of real earth weather can’t be caused by spatial smoothing when you compute the global surface temperature. It can’t be affected by where you sample or how you process the local temperatures. The earth does what it does independent of measurements. So why would spatial smoothing of the measurements cause temporal smoothing of the observations?

    It has less HF noise, and more low-frequency noise.

    For what I’m describing above, the frequency of interest would be lower than the period for the trend. But I don’t necessarily mean monthly noise is high. So the low/high distinction depends on where one puts a cutoff. (I do agree that it’s ENSO that seems too high. But I haven’t looked at anything like PDO, and I mean ENSO looks too high “by eyeball”.)

  17. Nick

    Here you can see it comparing Mosh’s GFDL video with satellite observation. The patterns are similar, but even though the sat is 1/4°, compared to about 5° for normal observation indices, you can see that fine scale variation is filtered out.

    The video doesn’t explain why errors introduced by spatial smoothing of observations would result in temporal smoothing of the data. After all: smoothing of the thermometer data in January introduces error in January. But it does nothing physically to the weather. In February, you smooth the thermometer data again. That introduces error in February. But it does nothing to the weather. And so on. So why would these measurement errors “smooth” temporally? (If they do, you should be able to show that with synthetic data, creating a time series, not by showing a cool movie of the spatial weather patterns.)
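Here’s the sort of synthetic-data check I mean, as a sketch: give every “station” independent weather noise each month, then compute the global mean from dense versus sparse sampling. Sparse spatial sampling adds month-to-month noise to the time series; it doesn’t smooth it. (All numbers illustrative.)

```python
import numpy as np

rng = np.random.default_rng(1)
months, n_fine, n_coarse = 360, 2000, 50

# "True" global signal: a small trend plus slow (ENSO-like) variation
t = np.arange(months)
signal = 0.001 * t + 0.1 * np.sin(2 * np.pi * t / 60.0)

# Local weather noise, independent from month to month at each site
noise = rng.normal(0.0, 1.0, (months, n_fine))

fine = signal + noise.mean(axis=1)                   # dense sampling
coarse = signal + noise[:, :n_coarse].mean(axis=1)   # sparse sampling

def resid_std(y):
    a, b = np.polyfit(t, y, 1)
    return np.std(y - (a * t + b))

ratio = resid_std(coarse) / resid_std(fine)
print(f"coarse/fine residual std ratio: {ratio:.2f}")
```

Under these assumptions the sparsely sampled series has larger, not smaller, residuals about the linear fit, because the sampling error each month is independent of the error the month before.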

  18. Lucia,
    I think you’re right. Individual model points would be noisier than a 5×5 grid (SST), but global averaging is the ultimate spatial smooth, and will swamp all those frequencies.

  19. Nick, I would exercise care in interpreting the “colorful fluid dynamics” of high resolution ocean models. They are modeling a chaotic system and the errors will grow with time. We see these kind of pictures all the time usually shown by slick salesmen trying to sell you their code. Scratch the surface and the results are often wrong.

    On turbulence causing a cascade of energy to smaller scales: that is true, but if you don’t capture the dynamics of the unresolved scales, your simulation will be totally wrong. Generally, eddy viscosity turbulence models have very complex and highly nonlinear PDEs governing the evolution of the eddy viscosity. Just dissipating the eddies away will yield badly wrong results.

  20. I have recently been exploring the relationships between 4 sets of data: the global land temperature anomaly, the global SST anomaly, the global surface temperature, and the ocean heat content data. This ties into Watanabe’s objective of determining the ocean heat uptake efficiency.

    These 4 sets hold clues to how the heat is being apportioned. The global surface temperature is a combination of the land and SST temperatures averaged according to the proportion of ocean and land areas:

    T_G = Po * To + Pl * Tl

    From the ocean heat content data about ½ of the effective forcing is being sunk by the ocean depths. This is the uptake that Watanabe et al are referring to. They make the claim that the uptake is increasing over time.

    It is no secret that climate scientists observe that the ocean surface temperature anomaly is increasing at about ½ the rate of the land surface temperature. So we can say that

    To = f * Tl

    where f is set to ½ for the moment. This value of f is related to Watanabe’s kappa as the reciprocal of the ocean uptake. Smaller values of f indicate greater ocean uptake and a value of f=1 indicates no additional ocean uptake.

    We can then plug this in to the global surface equation and create a variational estimate of T_G=T_G’ based on a wandering value of f.

    T_G’ = ½ (Po + Pl/f)*To + ½ (Pl + f*Po)*Tl

    For all the points in the HadCRUT (T_G), HadSST (To), and CRUTEM (Tl) data sets we can minimize the error between T_G and T_G’ and estimate a value of f over time. The value is either calculated as a solution to the quadratic formula or we use the closest value based on setting the slope to zero.

    If we then say that the “effective” value of T_G is

    T_G = Po/f * To + Pl * Tl

    This is an effective measure because it represents the unrealized warming in the average surface temperature that would have occurred had the ocean depths not sunk the excess heat.

    Climate scientists need to start discussing the numbers this way because the important factor is the excess heat and not the anomalous temperature. No hiatus is seen in overall warming because the ocean heat uptake is increasing.

    The full analysis is attached to the end of this blog post:

    http://theoilconundrum.blogspot.com/2013/05/proportional-landsea-global-warming.html

    The question remains, and what Watanabe et al didn’t fully answer, is what causes this heat uptake increase.

  21. WHT,
    Seems to me the pattern of ocean temperatures is critical to understanding what is happening. If low latitude ocean temp is falling while high latitude is rising (or steady) that is consistent with greater overturning and greater heat transport from low to high latitudes. If high latitude ocean temps are falling and low latitude flat or rising, then that suggests less overturning and less heat transport from low to high latitudes.

  22. Lucia,
    Thanks for the link to Ed Hawkins’ blog. It is an interesting read, but it seems to me he is whistling past the graveyard.
    .
    His graphics of the models versus Hadley temperatures show that virtually every run of every model overestimates volcanic cooling, and on average they show a volcanic response about twice that of reality. It is hard for me to understand how someone like Hawkins can ignore the simplest explanation: the models are just MUCH too sensitive to forcing. Data will ultimately drag all, even those still kicking and screaming, to acknowledge what to most is increasingly obvious.

  23. SteveF–
    I had suggested to him that he show each model in its own color. Today’s graph will show those models with more than one run, each in its own color. 🙂

    (Ed and his group have admitted the models are too warm. I need to re-read that and see if I believe their method of correcting models makes sense to me. That said: if your models are the prior, they all disagree (so there’s a large spread), and you don’t have a lot of data; it’s going to take much more data than we’ll ever get for surface temperature to correct things toward right.)

  24. Lucia,

    Late to the party. I guess you answered me above, and the following discussion was helpful. From Carrick: if the model in question has a too-noisy ENSO, that explains a lot. I agree with your comment to Nick; we’re talking about the observations. But, you say:

    By why do you ask? Do you think errors will be such that they cancel out weather noise in a way that makes the residuals to a linear fit smaller? That is: When the earth is hot, they errors will show the earth cooler and when it is cool they will show it hotter?

    I think that p(what_you_said) >> 0. That is, I think there is a good chance that measurement error will systematically cancel out extremes. I would have to think about it more to come up with something more coherent by way of explanation.
