Hadley March anomaly 0.318C: Up!

Hadley’s March anomaly is in. The NH&SH anomaly is 0.318C, up from February’s value of 0.264C. The data and trends since 1980 are shown below along with the projections based on A1B SRES in the AR4:

Note that in addition to showing the red-corrected uncertainty intervals for Hadley’s trend, I have added the largest uncertainty intervals one can concoct by assuming the noise might be “ARIMA(p,0,q)” with p and q up to a value of 4. (This is not the best-fit ARIMA, which would give smaller uncertainty intervals. I’ve basically assumed that the time period is too short to identify the ARIMA and just show whichever possible ARIMA gives the biggest uncertainty in the mean trend.)
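
For those who want the flavor of that calculation, a minimal sketch (in Python, not the code actually used for the figure) looks like this; `anoms` stands in for the monthly HadCrut anomaly series:

```python
# A minimal sketch of the "widest ARMA" trend uncertainty described above:
# fit the same linear trend to monthly anomalies while letting the noise be
# ARMA(p, q) for every p, q <= 4, and keep whichever error model yields the
# largest standard error on the trend coefficient.  `anoms` is a hypothetical
# pandas Series of monthly anomalies indexed by date.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def widest_arma_trend_se(anoms: pd.Series, max_p: int = 4, max_q: int = 4):
    # time in decades, so the trend coefficient reads directly as C/decade
    t = pd.Series(np.arange(len(anoms)) / 120.0, index=anoms.index, name="decades")
    worst = None
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            try:
                res = ARIMA(anoms, exog=t, order=(p, 0, q)).fit()
            except Exception:
                continue  # some (p, q) pairs may fail to converge
            slope, se = res.params["decades"], res.bse["decades"]
            if worst is None or se > worst[2]:
                worst = (p, q, se, slope)
    return worst  # (p, q, largest trend SE, trend under that error model)
```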

Notice that the trend associated with the multi-model mean is outside the range consistent with the ARIMA uncertainty intervals for HadCrut. This suggests that either a) the multi-model mean is biased high relative to what is happening on the real honest to goodness earth and that bias is statistically significant, b) HadCrut is mis-measuring what is happening on the honest-to-goodness earth or c) we need to figure out some reason why we believe the uncertainty (in the sense of non-repeatability) in the linear trends fit to data should be larger than we could get by fitting with any and all possible choices of ARIMA up to (4,0,4).

As many are aware, when testing the ability of people to predict or project, I prefer to limit comparison to data observed after the method of projecting was frozen. I judge this to be very near Jan 2001. With that in mind, this is the data I would choose to test whether the multi-model mean projection is matching data:

It is worth noting that the projected trend is well outside the uncertainty intervals estimated using ARIMA; this is best evaluated by comparing the slope of the dashed green lines to the slope of the dashed black line. The mean of the observations is below the multi-model mean; this is best evaluated by noticing that the dashed black line lies outside of the confidence intervals at the central month. (That is, look at where the green and red dashed lines intersect. Then, look vertically to find the top dashed green line and notice that in that month, the multi-model mean projection lies outside those dashed green lines.)

Currently, these results indicate a fairly strong rejection of the hypothesis that the multi-model mean and HadCrut agree. This does not reject the notion that some individual models might be correct, but it strongly suggests that the mean over all models is high. That means: At least some models are biased high relative to HadCrut for the current period. We are not currently getting as strong results with GISTemp or NOAA, so it’s possible the problem lies with HadCrut… or not…

Update
This is a histogram showing where the current HadCrut Jan 2001-March 2011 trend falls relative to the distribution of trends from all runs in all models forced using SRES A1B:
Assuming the run trends are normally distributed (which they may not be), only 4% of run trends would be less than the HadCrut trend.
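
A minimal sketch of that tail calculation (with `run_trends` and `hadcrut_trend` standing in for the actual values, which are not reproduced here):

```python
# Assume the A1B run trends are normally distributed and ask what fraction
# would fall below the observed HadCrut trend.  `run_trends` (one Jan 2001-
# to-present trend per model run, C/decade) and `hadcrut_trend` are
# hypothetical inputs.
import numpy as np
from scipy.stats import norm

def fraction_below(run_trends, hadcrut_trend):
    mu = np.mean(run_trends)
    sigma = np.std(run_trends, ddof=1)   # sample standard deviation
    return norm.cdf(hadcrut_trend, loc=mu, scale=sigma)
```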

60 thoughts on “Hadley March anomaly 0.318C: Up!”

  1. Lucia,

    1. Is there a “most likely” among the projections?

    2. Would Theil’s U statistic tell you anything useful?

  2. And if the problem lies with HadCrut, then it also lies with UAH and RSS….probably not.

  3. Max_Ok–What do you mean by “most likely”? The multi-model mean is the average over the 22 individual model projections.

    Do you mean this: http://en.wikipedia.org/wiki/Theil_index
    As far as I am aware, I am not trying to determine the racial inequality in AOGCMs. If you think that U statistic could tell me anything useful, please suggest what you think we might learn by applying it.

  4. MaxOK
    1. The AR4 guides us to consider the multi-model mean as more reliable than any individual model. As far as I am aware, there has been no official pronouncement to suggest any one of the models is more likely to be better than others. As for my opinion, some of the models are clearly loopy, but that’s my opinion.
    2. I have no idea. Can you first suggest what I would pick as a “naive forecast?” That matters a lot if you are going to compute that statistic.

  5. Lucia,

    1. I wonder if the multi-model mean being considered more reliable has anything to do with swarm intelligence.

    2. When I think of naive forecasts, two kinds come to mind: the no-change extrapolation, and the trend (OLS) extrapolation. Perhaps there are others.

  6. Max_OK
    2) Sure. But the answer I’d get would differ depending on which naive forecast I pick. Relative to no-change, models would do ok over longer time periods.
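
    For concreteness, here is a hedged sketch of that kind of comparison, reading Theil’s U2 as the ratio of the forecast’s RMSE to the naive forecast’s RMSE; all the inputs are hypothetical:

    ```python
    # U2 < 1 means the forecast beats the chosen naive benchmark.
    # `observed`, `forecast`, and `naive` are hypothetical arrays of
    # anomalies over the verification period; nothing here is actual data.
    import numpy as np

    def theil_u2(observed, forecast, naive):
        observed, forecast, naive = map(np.asarray, (observed, forecast, naive))
        rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
        return rmse(forecast, observed) / rmse(naive, observed)

    # The no-change naive forecast would hold the last pre-2001 anomaly fixed;
    # a trend naive forecast would extrapolate an OLS line fit to earlier data.
    ```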

  7. Lucia,

    I too think forecasts from models are likely to beat no-change extrapolations over long periods, except in cases where the forecast got the direction of change wrong or grossly overstated the change.

    Is it useful when evaluating a forecast to determine if it’s more accurate or less accurate than a no-change extrapolation, and by how much? It seems to me it would, because it’s one answer to the “accurate compared to what” question.

  8. Another Ian or Anyone–
    Does anyone have Andrew Bolt’s address? I don’t work at Ames Laboratory and never have. My husband used to work there.

  9. Thanks Don B.
    I left a comment at his blog. I also emailed a brief note to make sure he knows I am not an atmospheric scientist and don’t work at Ames Laboratory!

  10. Max_OK

    Is it useful when evaluating a forecast to determine if it’s more accurate or less accurate than a no-change extrapolation, and by how much? It seems to me it would, because it’s one answer to the “accurate compared to what” question.

    People have compared to see if models have skill relative to “no change”. They do have skill relative to that — at least over longer term. I don’t have any particular interest in that because we know the answer: Relative to “no change”, the big, complicated, computationally intensive AOGCMs have skill. So does linear extrapolation. So does nearly any curve fit you want to do. So do simpler physical models like the ones used in the TAR.

    If you want to do the test to see if AOGCM’s have skill relative to no change, that’s fine. But I’m going to spend my time doing other things.

  11. Dear Lucia,

    Thank you very much for this clear and concise analysis of recent trends. The next time someone pompously asserts that temperature change is “comfortably within the error bars” of the models, I will refer them to your post.

    There is an excellent new book out on expert forecasting, and its many failures, which you might enjoy. It is “Future Babble” by Don Gardner. It quotes extensively from research done by Philip Tetlock.

    Tetlock found that experts can be regarded as “foxes” or “hedgehogs”.

    Foxes are very pragmatic. They take their data from many different sources and avoid grand theoretical constructions. Their predictions are hesitant, tentative, and carefully fenced around with many caveats about the uncertainties. Foxes are quick to revise their forecasts as new data becomes available.

    Hedgehogs typically develop one insight or generalization about their field and build a unifying theory around it. They seek out data which supports their grand theory and tend to ignore or belittle data which contradicts it.

    Hedgehogs are confident and clear about their predictions. When their predictions fail, they turn their formidable mental prowess to defending their theory with rationalizations rather than modifying the theory for fit the facts.

    According to Tetlock, “A long-term prediction from a hedgehog is almost certain to be wrong.”

    From your analysis above, it appears to me that the GCM model community has a large population of hedgehogs.

  12. Dale–
    HadCrut is not “comfortably within the error bars.” At a minimum, you should ask people to defend how they determine the size of the error bars.

    Some might make the case that GISTemp is, but that also depends on what you mean by “the error bars”. Usually to get the largest possible “error bars” people construct a range of “all weather in all models”. To be outside those error bars, most models would need to be incorrect individually. I’m only saying the multi-model mean is biased high.

    What I show doesn’t comment on individual models. Some individual models may be on track.

  13. Lucia,

    Some individual models may be on track.

    Humm… Do you calculate the average of all models from the individuals, or do you get that average of models from another source? If you are calculating the average, maybe it would be interesting to determine which (if any) of the individual model trends fall outside the +/- 2-sigma range of the pool. I mean, if you can show that some of the models are statistically unlikely to be “part of the same population” as the others, then maybe some models can be rationally excluded from the pooled model estimate of the trend, which should narrow the uncertainty in the average model trend.
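
    Something like this sketch is what I have in mind (`model_trends` is a hypothetical mapping of model name to trend in C/decade, not anything computed in the post):

    ```python
    # Pool the per-model trends, then flag any model whose trend falls
    # outside the +/- 2-sigma range of the pool.
    import numpy as np

    def flag_outlier_models(model_trends: dict, nsigma: float = 2.0):
        vals = np.array(list(model_trends.values()))
        mu, sigma = vals.mean(), vals.std(ddof=1)
        return {name: trend for name, trend in model_trends.items()
                if abs(trend - mu) > nsigma * sigma}
    ```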

  14. SteveF–
    I calculate the average from the individuals. I will be comparing to individual models, computing uncertainties by a variety of methods. Tests indicate the trends in models are different etc.

    So.. yes… you’ll be seeing these, but later.

    Right now, I’m trying to read up a bit more on the best way to consider fractional differencing because there is a test of “weather” in models I want to do. But… since I haven’t established what might be the most appropriate thing to do with fractional differencing, there is no post on that yet. (If I never figure out the most rational thing to do with fractional differencing, there will never be a post on that! 🙂 )
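
    For anyone wondering what fractional differencing means mechanically, here is a bare-bones sketch, not the method I will ultimately use; `d` is simply an assumed input rather than something estimated from data:

    ```python
    # Apply the (1 - B)^d operator to a series using the binomial-expansion
    # weights, for a non-integer d.  A long-memory ("weather") noise model
    # would estimate d; here it is just handed in.
    import numpy as np

    def frac_diff(x, d):
        x = np.asarray(x, dtype=float)
        n = len(x)
        # weights of (1 - B)^d: w[0] = 1, w[k] = w[k-1] * (k - 1 - d) / k
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - d) / k
        # y[t] = sum_{k=0..t} w[k] * x[t-k]
        return np.array([np.dot(w[: t + 1], x[t::-1]) for t in range(n)])
    ```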

  15. … once again, the problem is with the time period you are using and how the two volcanoes affected the first half of the record…

  16. Lucia you are going to be pulverized by the true believers for showing the above graph… be forewarned and prepared.

  17. lucia said in Comment #73953

    People have compared to see if models have skill relative to “no change”. They do have skill relative to that — at least over longer term. I don’t have any particular interest in that because we know the answer: Relative to “no change”, the big, complicated, computationally intensive AOGCMs have skill. So does linear extrapolation. So does nearly any curve fit you want to do. So do simpler physical models like the ones used in the TAR.
    ——————————–

    Lucia, you may be right, but Professor J. Scott Armstrong, who is supposed to be an authority on forecasting, has been telling Congress something different. The following quote is from his testimony to the Committee on Science, Space and Technology Subcommittee on Energy and Environment – March 31, 2011

    “We were unable to find any ex ante comparisons of forecasts by the alarmists. In the spirit of doing a systematic evaluation of forecasts, in 2007 I invited former Vice President Gore to join with me in a test as to the whether forecasts by manmade global warming alarmists would be more accurate than forecasts from a no-change model. Each of us would contribute $10,000 to go to the winner’s favorite charity. The period of the bet was to be 10 years so that I would be around to see the outcome. Note that this is a short time period, such that the probability of my winning is only about 70%, based on our simulations. Had we used 100 years for the term of the bet, I would have been almost certain to win. Mr. Gore eventually refused to take the bet (the correspondence is provided on theclimatebet.com). So we proceeded to track the bet on the basis of “What if Mr. Gore had taken the bet” by using the IPCC 0.03ºC per-year projection as his forecast and the global average temperature in 2007 as mine. The status of this bet is being reported on theclimatebet.com.”

    http://www.forecastingprinciples.com/images/stories/pdf/ags2011congress.pdf
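
    For what it’s worth, a sketch of how a bet like that could be scored; the observed series is a placeholder argument, and none of the numbers here come from the testimony itself:

    ```python
    # Compare the cumulative absolute error of a 0.03 C/yr warming forecast
    # against a no-change forecast, both anchored to a 2007 baseline.
    # `observed` is a hypothetical dict of year -> global anomaly.
    def score_bet(observed: dict, baseline_year=2007, warm_rate=0.03):
        base = observed[baseline_year]
        err_warm = err_flat = 0.0
        for year, anom in observed.items():
            if year <= baseline_year:
                continue
            err_warm += abs(anom - (base + warm_rate * (year - baseline_year)))
            err_flat += abs(anom - base)
        return {"warming_forecast_error": err_warm, "no_change_error": err_flat}
    ```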

  18. Max_OK,

    “What if Mr. Gore had taken the bet” by using the IPCC 0.03ºC per-year projection as his forecast and the global average temperature in 2007 as mine.

    When did the IPCC forecast 0.03 C/year?!?
    Maybe over some very long time– but it doesn’t make sense to make a 100 year wager in terms of a bet. Barring unforeseen advances in medicine, Gore will be dead within 40 years.

  19. Professor Armstrong believes he has a cinch with his no-change extrapolation if given enough time.

    “Had we used 100 years for the term of the bet, I would have been almost certain to win.”

    I don’t think he’s talking about the kind of no-change extrapolation I have in mind.

  20. Lucia,

    Thanks for adding the model histogram update. It looks like 58 or 59 total runs to me. Is that right? How many models are included in the average?

  21. SteveF–
    I think the tally said 58. I used to have it memorized. 🙂 I’ll double check in the morning.

    I’ve been downloading B1’s tonight. Evidently Hansen thinks the emissions have tracked B1. So, it will be worth comparing to those tomorrow. (There are fewer B1’s. :))

  22. Lucia, very informative post thanks. A small point – you say;
    “Evidently Hansen thinks the emissions have tracked B1.”

    I’m pretty sure emissions of everything but F-gases have exceeded A1. It’s the atmospheric concentrations that have fallen behind. Particularly, oceanic outgassing was way overestimated and CH4 mysteriously flat-lined.

    In a recent interview Hansen offered these graphs:

    http://www.columbia.edu/~mhs119/Temperature/T.2011.03/

  23. FergalR,

    Particularly, oceanic outgassing was way overestimated and CH4 mysteriously flat-lined.

    One small comment, there is a continuous and large net uptake of CO2 by the ocean, and a large net uptake by plants; there is never net out-gassing on an annual basis. I think what has fallen short is not out-gassing, but the expected drop in net uptake rate. As best I can tell from published estimates, both plants and the ocean have continued to remove CO2 from the atmosphere at a rate that is roughly proportional to the increase above 280 PPM. Unless current trends in absorption change markedly, the atmospheric CO2 concentrations will continue to fall below projected levels.

  24. SteveF,

    Heh, yeah I probably just made that word up. As I understand it, AR4 figured a 1°C warming would release ~40ppm from the Carbon cycle (primarily the oceans) but more recent analysis suggests a quarter of that.

  25. Lucia comments some models might be on track. The reality is all the models are wrong because the ‘two stream approximation’ aerosol optical physics [Sagan, Chandrasekhar etc.] fails to account for substantial direct backscattering at upper cloud surfaces.

    The monotonic theoretical albedo-optical depth curve fits apparent optical depth [determined experimentally from Beer-Lambert] but when used to predict albedo change for thicker clouds, the result [‘cooling’] goes the wrong way.

    You can easily prove it: as droplets in a thick cloud coarsen prior to rain, apparent optical depth increases as does albedo. However, the 1/r law for optical depth by just diffuse scattering predicts decreased optical depth, the opposite of that observed.

    So, climate science has a problem: when Lacis and Hansen introduced Sagan’s two-stream approximation to climate modelling in 1974 [ http://pubs.giss.nasa.gov/docs/1974/1974_Lacis_Hansen_1.pdf , eq 19], the cloud part of ‘global dimming’ came built in. By about 2003, there was no experimental evidence for it.

    NASA invented ‘surface reflection’: http://geo.arc.nasa.gov/sgg/singh/winners4.html. This substitution for Twomey’s correct Mie physics, which he warned couldn’t be extrapolated to thick clouds, sounded plausible, but it’s a fake widely believed in climate science, which seems to have generally poor physics knowledge.

    The bottom line is that the IPCC’s claim of high feedback is entirely dependent on imaginary ‘cloud albedo effect’ cooling in AR4. Correct the physics and it becomes heating for thicker clouds, another AGW. Net CO2-AGW could well be near zero.

    This completely overturns the subject.

  26. Fergal–
    Sorry-I shouldn’t read too late at night. In the document where he put those figures, Hansen wrote:

    We considered three scenarios for future greenhouse gas amounts. Figure 1 shows that the real world so far is close to scenario B. Temporary aside: there are two main reasons that greenhouse gas growth moved off the track of scenario A onto scenario B in the early 1990s, as shown in Figure 2: (1) the growth of CFCs (chlorofluorocarbons) was greatly diminished by successive tightenings of the Montreal Protocol, (2) the growth of methane slowed sharply.

    Here “B” is “B” in his testimony to Congress. I read “B” and thought IPCC-B. He does end with

    Observations (Figure 21 of Reference 3) show a linear warming rate over the past 50 years of 0.17°C per decade. Our climate model slows this down to about 0.15°C for the near future because of the change in GHG growth shown in Figure 2(b) above. That bet, warming of 0.15°C/decade would have a high probability of winning over a bet of no temperature change.

    Which suggests that he too doesn’t expect us to achieve the IPCC’s 0.2C/dec during the first few decades of this century.

  27. Alistair,

    You can easily prove it: as droplets in a thick cloud coarsen prior to rain, apparent optical depth increases as does albedo. However, the 1/r law for optical depth by just diffuse scattering predicts decreased optical depth, the opposite of that observed.

    I think the situation is a bit more complicated. It is true that the expected optical depth from diffuse scattering, at least for a thin cloud of fixed physical depth, increases as droplet diameter falls. However, in real rain clouds, the albedo at the top of the cloud is very high over a broad range of droplet sizes. That is to say: for thick clouds, most sunlight is reflected back to space, mostly independent of droplet size. Of course, when a cloud produces rain it can become optically thin enough to become less reflective, and you can indeed observe this as rain making clouds evolve. For a simpler example, consider two emulsions of oil in water, one with 5 micron droplets and the second with 10 micron droplets. If the two emulsions are physically deep (like deep convective clouds), then the surface albedo above the emulsion is almost the same (very high). It is only when the emulsion is both coarser and physically less deep that the droplet size begins to substantially influence albedo.

  28. Hansen is a Lukewarmer

    “Our climate model slows this down to about 0.15 degree C for the near future because of the change in GHG growth shown in Figure 2(b) above. That bet, warming of 0.15 degree C/decade would have a high probability of winning over a bet of no temperature change”

    Just kidding, but quote mining is fun

  29. steven–
    But Hansen does seem to be suggesting that he doesn’t expect warming of 0.2C/dec in the “near future”. It seems his models are predicting values close to linear extrapolation over the previous 30 years. And of course, linear extrapolation didn’t work so bad after Hansen’s testimony either.

  30. That’s his latest modelling? Anybody have a history of Hansen’s predictions – when he predicted and how much per decade/century?

  31. Looks like we are missing the anthropogenic component of GW to me… or at least its significance is approaching insignificance. Oh well, there is still ocean buffering to worry about, to motivate alarm. GK

  32. It seems to me that even during the strongest and most continuous warming periods (appr. 1910-1940 and 1970-1998) the trend has never reached 0.2C/decade?!

  33. Steve F: ‘However, in real rain clouds, the albedo at the top of the cloud is very high over a broad range of droplet sizes. That is to say: for thick clouds, most sunlight is reflected back to space, mostly independent of droplet size.’

    Agreed. However, it’s to do with the Power Laws in the optical physics and the frequency distribution of droplet sizes. The optical scattering cross section is proportional to r^6 so the few large droplets concentrate light dramatically, like narrow beam searchlights.

    When that beam hits another droplet, although only c. 3% is backscattered, it’s 3% of 10^7 at 15 microns, 10^9 at 45 microns, the reason for the high albedo.

    [The particle number effect is a reduction by a factor of c. 10 for a tripling of droplet size.]

  34. Alistair,

    The optical scattering cross section is proportional to r^6 so the few large droplets concentrate light dramatically, like narrow beam searchlights.

    That is correct for particles in the Rayleigh scattering size range, which in a gas (e.g. air) means particles smaller than about 0.05 micron; only those follow that sixth power law. Scattering by particles larger than ~0.1 micron is only accurately described by Mie theory, and the scattering profile for these size particles is a rather complex function of size, refractive index, and refractive index of the medium (not to mention, possibly, absorption).
    .
    The smallest droplets in clouds are on the order of ~3- 4 micron, far larger than the Rayleigh size range. These droplets scatter strongly in the forward direction (little back-scatter), and the average angle of deflection is small… but those angles are basically random, especially after the first scattering event. What makes clouds have high albedo is multiple scattering. Light that enters the cloud is very quickly (a few tens of meters at most) totally randomized in direction by multiple scattering. If you fly through a cloud in an airplane, you may note that the light inside the cloud is very uniform, and coming from all directions. There is no way to tell where the light originally came from… it has been randomized by multiple scattering.
    .
    If the cloud is physically deep, the probability of light escaping from the bottom of the cloud is low, and the greater the physical depth, the lower the probability of that escape from the bottom. So any thick cloud bank is remarkably high in albedo, almost independent of droplet size, because the probability of escape from the bottom of the bank is so low…. most all of the light ends up going back into space.
    .
    You may want to see: http://noconsensus.wordpress.com/?s=Steve+Fitzpatrick for more detail on the basics; there is much more than I describe above.

  35. The r^6, 1/lambda^4 relationship is correct for Mie scattering.

    I suggest you go to a Mie theory primer, e.g.: http://www.phy.duke.edu/~rgb/Class/phy319/phy319/node117.html

    The final equation shows the optical cross section for Mie scattering. NB, k is the wavenumber: invert the k^4 proportionality for wavelength.

    One thing I have noticed is that for some US course material, there appears to be an error in the direction of greatest scattering. This could account for the incorrect physics claimed by NASA in the reference I gave above, i.e. whoever wrote it believed there was a ‘surface reflection’-like process, in effect thinking the 97% forward scattering you get when water droplets scatter visible light is reversed.

    However, I think it was probably a fraud to purport a physical explanation supporting the incorrect prediction of substantial ‘cloud albedo effect’ cooling by faulty aerosol optical physics which has misled the climate modellers for nearly 40 years now.

    That has to be corrected before any of the climate models can predict climate. But first they must dump most CO2-AGW.

  36. Alistair,
    The link you provide shows the calculation of scattering from very small particles (Rayleigh size region); that is, particles much smaller than the wavelength of light in the medium where the scattering takes place. Note the title of the article you link to… it says “Small particles”.
    Scattering from larger particles (as in clouds) is not at all accurately described by Rayleigh scattering. For relatively large particles, substantially larger than the wavelength of light, the net scattering cross section (with total scattering measured far from the particle; the far field approximation) is approximately equal to twice the geometric cross section. See:
    http://en.wikipedia.org/wiki/Mie_theory

  37. The radius and wavelength dependence is the same for Mie and Rayleigh scattering. For the highly asymmetrically scattering aerosol droplets in clouds, forward scattering becomes a much sharper forward lobe as droplet size increases.

    This is why rain clouds have such high albedo: the backscattering at the next interaction is very high in absolute terms. It’s not predicted by the two-stream approximation aerosol optical physics in the climate models because the various formulations assume just one process, directed diffuse scattering.

    So, 0.7 W/m^2 median ‘cloud albedo effect’ cooling in Figure 2.4 of AR4 is plain wrong: it’s either heating or neutral depending on whether albedo has asymptoted to mainly symmetrical diffuse scattering. I think low level tropical clouds reached that point in about 2000 when Asian pollution became very high.

    I could be wrong but how else do you explain why thick cloud albedo increases as droplet size increases, when the models predict the reverse?

  38. Re: alistair (Apr 19 10:11),

    Are you referring to what the IPCC calls the aerosol indirect effect ? If so, then I agree it’s probably been highly overestimated. That doesn’t throw out the effect of CO2, though. It does reduce climate sensitivity because you can’t use it to offset radiative forcing. Aerosol forcing in general is a kludge to tune models to approximately hindcast the instrumental temperature record.

  39. Answer to DeWitt Payne:

    Yes, I am saying the aerosol [first] indirect effect is calculated wrongly. This is sometimes called the Twomey effect, which he predicted and observed, but once albedo gets near 0.5 [assuming symmetrical diffuse scattering], it saturates. Twomey, a good physicist who didn’t agree with Sagan, warned not to extrapolate his results and theory to thicker clouds.

    For thicker clouds, albedo can get very high, c. 0.9. The aerosol optical physics in the models claims this is because of biased diffuse scattering whereby the highly asymmetric optical scattering defined by the scattering asymmetry factor, g, magickally backscatters more radiation than is forward scattered.

    However, Mie calculated g for the boundary condition of a plane wave. By definition, diffuse scattering has no wave, so g = 0. The original mistake was made in the 1950s and was copied by the rest.

    In reality, there’s substantial direct backscattering at the upper cloud surface. At 0.9 albedo, 80% of the light doesn’t enter the cloud. At 0.6 albedo, it’s 20%. So, if you reduce droplet size, you get a dramatic reduction of direct backscattering, hence albedo.

    So, thick rain clouds have high albedo, thick polluted clouds not raining have much lower albedo, tending to 0.5. But if pollution increases light transmission, it’s another AGW.

    The claim of median 0.7W/m^2 ‘cloud albedo effect’ cooling, 44% of median net AGW in figure 2.4 of AR4 is purely theoretical from incorrect optical physics. By 2003, NASA knew there was no experimental evidence for it. They concocted a fake ‘surface reflection’ argument.

    Take it away and there’s no proof of high feedback. Add in another AGW, now saturated [Asian aerosol pollution reducing the albedo of low level tropical clouds] and you explain most recent AGW as a transient and you also have a better way of explaining palaeo-data.

    The bottom line is that there is no proof of any net CO2-AGW and it isn’t needed to explain palaeo-climate at the end of an ice age.

    I’ll accept there might be some but it’s probably controlled by the strong negative feedback that already reduces the theoretical 77K no-convection GHG warming to the real 33K. We call the mechanism weather.

  40. Re: Alistair (Apr 19 11:22),

    However, Mie calculated g for the boundary condition of a plane wave. By definition, diffuse scattering has no wave, so g = 0. The original mistake was made in the 1950s and was copied by the rest.

    I think you’ve gone a step too far there. I don’t think you can assert g=0 so casually. Sure Mie used Maxwell’s equations and a plane wave for single scattering, but there’s nothing in quantum mechanics that says Maxwell’s equations are wrong. A photon encountering a particle is still going to be scattered. The Monte Carlo method of tracking a single photon scattered randomly multiple times until it is absorbed or transmitted through the bottom or top of the cloud and repeating does not yield a value of g=0. It’s more like 0.85.
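
    A toy version of that Monte Carlo, just to illustrate the method; tau and g are assumed inputs, not a claim about any particular cloud:

    ```python
    # Track photons through a plane-parallel, non-absorbing cloud slab,
    # scattering each one off a Henyey-Greenstein phase function with
    # asymmetry g, and count the fraction that exits the top (the slab albedo).
    import numpy as np

    rng = np.random.default_rng(0)

    def hg_cos_theta(g):
        """Sample a scattering-angle cosine from the Henyey-Greenstein phase function."""
        if abs(g) < 1e-6:
            return rng.uniform(-1.0, 1.0)          # isotropic limit
        xi = rng.random()
        frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
        return (1.0 + g * g - frac * frac) / (2.0 * g)

    def slab_albedo(tau=20.0, g=0.85, n_photons=20000):
        reflected = 0
        for _ in range(n_photons):
            z, mu = 0.0, 1.0                        # optical depth below top, direction cosine
            while True:
                z += mu * -np.log(rng.random())     # free path in optical-depth units
                if z < 0.0:
                    reflected += 1                  # escaped the top of the cloud
                    break
                if z > tau:
                    break                           # transmitted through the bottom
                ct = hg_cos_theta(g)                # scatter: rotate the direction cosine
                phi = 2.0 * np.pi * rng.random()
                st = np.sqrt(max(0.0, 1.0 - ct * ct))
                mu = mu * ct + np.sqrt(max(0.0, 1.0 - mu * mu)) * st * np.cos(phi)
        return reflected / n_photons
    ```

    Running it with a large tau and g near 0.85 returns a high albedo: multiple scattering makes a thick slab bright even though each individual scattering event is strongly forward-peaked.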

  41. Dewitt Payne:

    Thank you for your insight. But Monte Carlo is a model, and I prefer observation.

    Please explain why, contrary to the aerosol optical physics in the climate models, clouds with large droplets give higher albedo than clouds with smaller droplets.

    Glider pilots know this very well from the almost reflection like flashes you see at the top of coarsening, convective cumulus in summer, so no ice!

  42. Alistair,

    The radius and wavelength dependence is the same for Mie and Rayleigh scattering.

    I think you are simply mistaken. The scattering cross section of a large particle (one much larger than the wavelength of light) is proportional to only the particle size squared. For example, the scattering cross section of 10 micron diameter cloud droplets (a typical average value for clouds) is essentially the same at both 450 nm and 600 nm wavelengths. There is very little wavelength dependence. See for example Van de Hulst’s approximate solution (from Wikipedia):

    Q = 2 - (4/p)*sin(p) + (4/p^2)*(1 - cos(p)),

    where Q is the efficiency factor of scattering, which is defined as the ratio of the scattering cross section and geometrical cross section πa^2;
    p = 4πa(n - 1)/λ has the physical meaning of the phase delay of the wave passing through the centre of the sphere; a is the sphere radius, n is the ratio of refractive indices inside and outside of the sphere, and λ is the wavelength of the light.
    Note how, as the radius becomes large relative to wavelength, Q approaches a constant value of 2. Which is to say, at large sizes there is no dependence of Q on size.
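
    You can see this numerically by plugging illustrative droplet radii into the approximation above (n = 1.33 for water droplets in air, 0.55 micron light; the radii are just examples):

    ```python
    # Evaluate the van de Hulst approximation quoted above to watch Q
    # approach 2 as the droplet becomes large relative to the wavelength.
    import numpy as np

    def q_scat(radius_um, wavelength_um, n=1.33):
        p = 4.0 * np.pi * radius_um * (n - 1.0) / wavelength_um   # phase delay
        return 2.0 - (4.0 / p) * np.sin(p) + (4.0 / p**2) * (1.0 - np.cos(p))

    for r in (0.5, 1.0, 5.0, 10.0):           # droplet radius in microns
        print(r, q_scat(r, 0.55))             # visible light, 0.55 micron
    ```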
    .
    There is, of course, an increase in total scattering, assuming the same total volume of condensed water per unit cloud volume, as droplet size decreases. The number of droplets (per unit weight of liquid water) is proportional to D^(-3), while the scattering cross section per particle is proportional to (for big droplets) D^2. So the total scattering cross section (per unit cloud volume) is approximately proportional to 1/D… photons travel shorter distances between scattering events in clouds with smaller droplets. But for deep clouds, the dependence of albedo on droplet size is quite weak, because most of the light falling on the top of the cloud is emitted from the top of the cloud, with only modest changes due to different droplet size. It is only for shallow clouds that the dependence of albedo on droplet size becomes large.
    .
    There is no reason I have seen to believe that calculated cloud albedo is very far from correct. As DeWitt says, there is good reason to doubt the magnitude of the claimed secondary aerosol effect (aerosol cloud albedo effect), but there is no reason to think people do not know how to accurately calculate cloud albedo for an assumed combination of droplet size distribution, droplet concentration, and cloud depth.

  43. Alistair,

    contrary to the aerosol optical physics in the climate models, clouds with large droplets give higher albedo than clouds with smaller droplets.
    Glider pilots know this very well from the almost reflection like flashes you see at the top of coarsening, convective cumulus in summer, so no ice!

    Developing convective cumulus clouds have relatively small droplet sizes (at least as they form) and high droplet concentrations. The rate of droplet nucleation is higher at high levels of supersaturation; that is, rapid rate of convective rise means rapid condensation and more droplets formed. Those same clouds become thin and wispy as rain falls out, the convection subsides, and/or the droplets just grow in size over time due to Ostwald ripening. The high albedo observed in developing convective clouds is indeed clear evidence of smaller droplets and/or higher droplet concentration, not larger droplets.

  44. That honk from the wild goose leading the Victory parade is: ‘Concatenation of Oceanic Oscillations’.
    ===============

  45. One thing I never understood about these scenarios. Why can’t the models be run with carbon dioxide levels (methane and other GHGs) as they have actually occurred since 2001?

  46. Buck–
    In 2007, they didn’t know all levels after 2001 yet. There is some monitoring but knowledge of the exact levels lags in time.

  47. What is the sensitivity of the model to the weightings?

    For example, if the model assumes a % contribution from the various forcings, how sensitive is the model to changes in these assumptions?

    If the models are sensitive to the assumed % contribution of the various forcings, then does this not allow for “cherry picking”.

    I could for example, in a sensitive model, come up with multiple models that would hindcast well, but would produce entirely different predictions for the future.

    The problem with this is it allows model builders to “cherry pick” those % contributions that match their expectations. You end up with the experimenter-expectancy effect common in animal training.

    In effect the model is not forecasting future climate, it is forecasting what the experimenter expects is a reasonable forecast for future climate.

  48. ferd–
    I haven’t explored the full sensitivity to weightings. However, if you examine the dashed grey lines, those represent the spread of model means. Mind you: some models have only 1 run, which means the grey dashed lines represent an odd thing which falls between the spread of model means and “model weather”.

  49. Don’t climate models approximate a linear model of the form:

    Temp(year1) = A1F1(year1) + A2F2(year1) + … + AnFn(year1)
    Temp(year2) = A1F1(year2) + A2F2(year2) + … + AnFn(year2)
    …
    Temp(yearm) = A1F1(yearm) + A2F2(yearm) + … + AnFn(yearm)

    where An = percentage contribution of each forcing, either observed or estimated within bounds,
    and Fn = forcing n, either calculated or estimated within bounds,
    and Temp(yearm) = Tm = average observed temperature in year m

    And thus we want to solve for An, Fn, within bounds, to deliver a best fit with Tm. What we used to call linear programming.
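
    Purely to illustrate the regression I am describing (and note lucia’s reply below: the GCMs themselves do not work this way), a least-squares sketch with hypothetical F and T:

    ```python
    # Given a matrix F of forcing series (one column per forcing, one row per
    # year) and observed temperatures T, ordinary least squares solves for the
    # weights A that best fit T = F @ A.  F and T here are hypothetical.
    import numpy as np

    def fit_contributions(F, T):
        F, T = np.asarray(F, float), np.asarray(T, float)
        A, residuals, rank, _ = np.linalg.lstsq(F, T, rcond=None)
        return A   # one fitted weight per forcing
        # (A constrained fit -- weights bounded, or summing to 100% -- would
        # need something like scipy.optimize.lsq_linear instead.)
    ```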

    Two things jumped out at me right away:

    1) by building a synthetic dataset of future temperature and splicing this to historic temperature, we could potentially solve AnFn to deliver just about any forecast we want for future temperature, and still maintain a reasonable fit for past temperature.

    2) climate scientists, by picking those model values that fit their expectations for future temperature, and discarding those that do not fit, are in fact building a synthetic dataset from their own expectation, as per point 1.

    This then gives rise to the experimenter-expectancy effect. Linear programming models are a form of machine learning, and as such they are at risk for the same types of contamination that arise in animal training studies.

  50. Don’t climate models approximate a linear model of the form:
    I’m not entirely sure what you mean but I think the answer is no. They don’t do this.

  51. How do you end up with the values for An, the percentage contribution for each forcing? How do you know that CO2 contributes X percent and that land-use contributes Y percent? Somewhere in the models these sorts of assumptions must exist. How were they derived, and how much range is allowed to achieve a fit?

    I’m not saying that the climate models actually do the programming to solve AnFn. More likely the values are tried out manually, because of the horsepower required to generate Fn using GCMs. What I’m saying is that mathematically, this is the solution for the relative contribution of all the forcings, assuming that forcings are linear as is assumed in climate models.

  52. So, say for example that F1 is CO2 and F2 is land use. Then for each year we would have a certain % contribution to temperature from CO2 and a different % contribution for land use. However, these percentages are at best estimates, no matter how precise our calculations, so they must be within bounds.

    Therefore for any year, the contribution of CO2 and Land-use is:

    Tm = A1mF1m + A2mF2m + everything else

    And if we hold everything else unchanged, then as A1m goes up, A2m must go down, as we cannot exceed 100% contribution

    So, to achieve a best fit, once we have F1m, F2m, we want to adjust the values of A1m, A2m within bounds to minimize the error between our calculated temperature and observed temperature.

    Now we could solve this manually through trial and error, but this is the sort of thing computers love to do. Try out different values of Anm over the solution space, to achieve the best fit to Tm.
