New thread. Mostly open, but no discussing:
- Mosher and Fuller’s book.
- Theories about Mosher’s ‘behavior’.
- Commas or grammar or theory of or about proper use of commas or grammar.
- Pretty much any of the spin-off topics related to reviewing Mosher and Fuller’s book
Those who wish to discuss those topics can continue to do so on the other thread where they have taken over. 🙂
Meanwhile, if anyone is having difficulties viewing the display (broken theme, etc.), let me know.
Well, I would like to hear people’s impressions of M&F (that’s Marotzke/Forster, not Mosher & Fuller…) As usual, I gain my impressions without actually studying any of the math, though I actually did some R programming for the very beginnings of this one (see climateaudit).
My impression:
1) The original claim that the paper is trivially circular and obviously invalid is not correct. That was a reasonable guess, based on a lack of information on how the paper had done its calculations, and became an obsolete claim when Climate Lab published M&F’s response.
2) They have apparently not released their data and calculations, so the climateaudit auditors can’t really do much with them. No one cares enough to spend ten years reverse-engineering their work as they did with Mann’s stuff.
3) They have apparently not released their data and calculations, in which case no one needs to pay any attention to their work anyhow. Why should they?
4) The current critique of their work is that the whole idea of their analysis is suspect, maybe “circular” (but not as circular as originally thought) and perhaps incapable of yielding anything useful. As near as I can tell, this is the position held by Pekka, RomanM (though the two of them continue to argue, it seems to be more about the definition of the word circular), and James Annan (http://julesandjames.blogspot.com/2015/02/that-marotzkeforster-vs-lewis-thing.html)
This was also my original position, based on ignorance (http://climateaudit.org/2015/02/05/marotzke-and-forsters-circular-attribution-of-cmip5-intermodel-warming-differences/#comment-750648 and comment on Annan’s post) and I think it has been echoed by Annan and Pekka. I personally don’t understand how dozens of models can have very different climate sensitivities and all more-or-less match the temperatures of the last century without tuning the other forcing responses to them. The idea is silly.
“theory of about proper use of commas”
Ungrammati….
Nick,
Long ago, I read a ‘rule’ about listserve. All comments discussing grammar or punctuation errors will themselves contain grammar or punctuation errors. 🙂
Another sane person exits the toxic swamp we call climate science.
https://theclimatefix.wordpress.com/2015/03/01/a-quick-guide-to-pielke-jr-on-climate/
Looks like witch hunts are effective.
I find it very telling that I cannot find even the slightest hint of the Pachauri resignation on any of the pro AGW sites. It’s as if he never existed.
Plenty of words written on the Soon affair but I guess alleged sexual harassment doesn’t rank up there with alleged conflict of interest.
Mind bottling.
MikeR (Comment #135582)
“I don’t understand how dozens of models can have very different climate sensitivities and all more-or-less match the temperatures of the last century without tuning the other forcing responses to them. The idea is silly.”
Each model has to have implicit back-tuning to agree with the past, like computer programs for picking stocks or greyhounds/horses.
If you input the stock history [hindcast] and run a prediction, the stock will always go up on May 3rd or whatever of that past year; the greyhound will always win that race against Sam or Goodie Two Shoes.
When you predict into the future, your “dozens of models” actually all have very similar climate sensitivities emerge.
People have already succinctly explained that models do not have climate sensitivities built in; they have emergent climate sensitivities.
That is, one can only detect hints of a climate sensitivity post facto, once the future has arrived.
All appear to be way too high.
Connolley actually had a good post on Pachauri a few days ago on Stoat
MikeR,
There certainly is at least some circularity in the M&F analysis. After all they extract an estimate of forcing history from each model using the model’s temperature history, then use that forcing to evaluate variability in the temperature history of those self-same models. Yes, the circularity is not total, because they introduce other parameters from the models in their calculation, but there remains an underlying circularity which appears to have not been properly considered as a source of uncertainty. I expect someone will write a formal paper covering the problem….. but I doubt it will be in Nature; too contrary to their political policy positions.
I think M&F go too far into the fantasy that ‘models are just like reality’ for my taste, but others may think differently.
Tom Scharf, “Looks like witch hunts are effective”.
I don’t think so. He makes it clear that
“This is a decision that I made long before the events of the past week.”
– which people following his blog/twitter will have noticed anyway.
MichaelS,
Just more evidence of the political nature of the movement. They do not care that Al Gore’s carbon footprint is the size of North Dakota, so long as he helps them advance their policy goals. The consistent tolerance of people whose personal behavior is either utterly contrary to the behavior they insist everyone else must adopt, utterly abominable, or even criminal, speaks volumes. Pachauri is but one more example.
Paul Matthews,
The witch hunt for Roger Jr has been going on for a very long time; I think it is pretty clear that it has had an impact on Roger. Zealots take no prisoners and don’t care about the people they injure.
A Congressional investigation sponsored by a Democrat is unlikely to go anywhere in the current Congress. But it’s still annoying. People like Pielke, Jr. and Bjorn Lomborg are considered heretics to the creed by the zealots. Heretics are always singled out for special consideration, as they are potentially far more dangerous than infidels. See, for example, Leon Trotsky.
DeWitt – perhaps too soon, but see also ISIS. Now, I won’t bring them up again, I promise; hopefully Lucia doesn’t have to amend her list.
Bill_C,
Oh, there are lots of recent possible examples; the murders of Boris Nemtsov, Alexander Litvinenko and Anna Politkovskaya come to mind. And there’s always the old classic, the Spanish Inquisition. Another is the massacre at Béziers in 1209 (“Caedite eos. Novit enim Dominus qui sunt eius” – “Kill them all, for the Lord knoweth them that are His,” 2 Tim. ii. 19); the list goes on and on.
RE:http://www.realclimate.org/index.php/archives/2015/02/climate-oscillations-and-the-global-warming-faux-pause/
Have I mis-remembered, or did Kevin Trenberth (among others) not only acknowledge the ‘pause’ but also state, since then, that it was expected? (i.e., since the travesty of not finding the expected heat.)
There seems to be less consensus about the (perhaps) current hiatus. I’ve been reading many articles, from those who deny there ever was/is a pause to the many now trying to explain it.
The strange thing is that Bjorn and Roger uphold *** almost *** all the tenets of climate change advocates. For some reason this seems to make them even more of a target than the cranks. This is some interesting psychology. Possibly advocates are very worried that some who don’t toe the consensus line will be viewed as reasonable objectors, and they find this the bigger threat as they work so hard to paint a picture that all skeptics are knuckle draggers.
I’m having a hard time determining if this represents a positive or negative development. Is it a “win” if you successfully shout down all objections with career threats and other bullying tactics? When the knives come out in a science debate, it isn’t science anymore. It’s more than a little disheartening to see that it is academia that is the first to draw the knives in many cases lately. It drags the integrity of the entire science sector down. I ask myself whether other science sectors are like this; I hope not. I’m sure they all have plenty of whizzing matches, but the overt suppression of dissent is getting out of hand.
Tom Scharf: “For some reason this seems to make them even more of a target than the cranks.”
.
But the tenets on which they don’t agree have prime importance to alarmists: disasters and economics.
.
Disasters are great PR tools for the climate change narrative. Disasters that don’t happen, i.e. the historical lack of major US hurricane landfalls, and the lack of a trend toward increasing disasters, are highly inconvenient to this “Must Act Now!” narrative.
.
On the economics we have people like Stern “demonstrating” that the costs of dealing with climate change are insignificant compared to the damages, while people like Pielke and Lomborg point out how the preferred policy costs lots of money, provides little benefit and is harmful now to those it purports to protect in the future.
MikeR (Comment #135582)
“(but not as circular as originally thought)”
Which could be something like: “but not as pregnant as originally thought”. There is more wrong with the M&F paper than circularity – like assuming linear trends, using overlapping trends and treating the residual from putting a straight line through a nonlinear curve as natural variability.
My point, which I continue to make at CA, is that there are simpler and more straightforward methods to attempt to extract a deterministic and nonlinear trend from these model (and observed) time series. The results might not be the obscure ones desired in this era of pause-waiting-for-an-explanation, but I think they show there are alternative approaches.
I actually think there may be a climate signal in the inactivity of landfalling hurricanes. Too bad the consensus bet on the other horse.
To date, we have one betting thread gone amuck, and now two betting analogies here.
The truth of the age old battle between man and nature is being revealed.
Tom,
If there is a signal in reduced landfalls of tropical cyclones, the message of that signal is significantly different than the consensus hype.
My take is that there is no significant change either way, climatologically. That is, if we look at a climatologically significant period of time, the change is effectively nil.
The consensus marketing effort is committed to parsing and cherry picking, like many other disreputable marketing campaigns.
Tom: “I actually think there may be a climate signal in the inactivity of landfalling hurricanes.”
.
Maybe, but demonstrating beneficial effects of climate change would be “unhelpful”.
MikeR: M&F15 claim (in the abstract) “Using a multiple regression approach that is physically motivated by surface energy balance, we isolate the impact of radiative forcing, climate feedback and ocean heat uptake on GMST—with the regression residual INTERPRETED as internal variability.” Below their regression, they say: “We INTERPRET the ensemble spread of the regression result … as the deterministic spread and the spread of the residuals … as the quasi-random spread.” IMO, both of these interpretations appear to be unwarranted – no matter how Lewis’s circularity argument is resolved.
Consider dT = dF/(a+k). On earth, dF is a deterministic term, while dT has both deterministic and unforced components. Unforced variability arises because k (ocean heat uptake efficiency) varies chaotically. El Nino is one example. (The climate feedback parameter (a) also varies, rapidly due to changes in humidity, clouds and lapse rate and slowly due to changes in surface albedo. The fast feedbacks may equilibrate on a yearly time scale.)
M&F are doing an “inverse analysis” of model output. They have calculated an average a and k for each model from its output. These fixed terms can no longer produce unforced variability. Unforced variability MUST come from the dF term; now called effective radiative forcing (ERF) because it is deduced from dT (which has a chaotic component). Confusion arises from using the same symbol, dF, for both a change in forcing and in effective radiative forcing.
The change in ERF (dF) contains both deterministic and unforced components and is present in the allegedly deterministic portion of the regression equation. M&F have not separated the deterministic and unforced components of warming. Whether or not M&F’s regression is statistically meaningless, their interpretation of that regression is wrong.
The assertion that the regression residuals can be interpreted as unforced variability appears equally absurd. If one does a linear regression of a system with a quadratic behavior, the regression residuals will reflect the inadequacy of the regression equation being used – in addition to measurement error and any unforced variability that may be present. dT = dF/(a+k) + e is a decent, but imperfect, model for energy balance. M&F’s linear expansion ignores higher order terms (approximating 1/(1+x) as 1-x when x is as big as 0.5). The expansion requires beta2 and beta3 to be equal, so their regression equation contains an inappropriate degree of freedom. Their regression equation is not rigorously derived; it is “suggested” (their term) by flawed mathematics. The regression residuals do contain some of the unforced variability (some is in dF), but the residuals also contain systematic errors associated with the regression equation itself. These errors artificially inflate the confidence interval obtained from regression of model output, reducing the discrepancy between the model ensemble and observations.
The discrepancy between models and observation can’t be explained by a flawed regression.
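For a sense of scale on the truncation error described above, here is a quick R check of the 1/(1+x) vs. (1 - x) approximation. The x near 0.5 comes from Frank's comment; nothing here uses M&F's actual data or regression.

```r
# Size of the error from approximating 1/(1+x) by (1 - x),
# the first-order expansion discussed in the comment above.
x <- seq(0, 0.5, by = 0.1)
exact  <- 1 / (1 + x)
approx <- 1 - x
rel_err_pct <- 100 * (approx - exact) / exact

round(data.frame(x, exact, approx, rel_err_pct), 3)
# At x = 0.5 the approximation gives 0.5 instead of 0.667,
# i.e. roughly a -25% error, which ends up somewhere in the "residual".
```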
Frank,
No matter the content of the abstract or body of the paper, IMO M&F15 has only one objective: to offer another excuse for divergence between models and reality. I doubt this contribution will make any real difference in the long term… after another decade more of divergence, this paper, and many others that attempt to provide an excuse for the divergence, will become irrelevant….. and ignored. Reality can’t be den!ed forever. Eventually, the models’ sensitivities will be scaled back…. but only after much kicking and screaming.
SteveF: I personally don’t think it is useful to speculate about the authors’ motivations. Their introduction does an excellent job of describing the problem: On their histograms, the observed 1998 15-year trend lies just below the envelope from 114 model runs. The observed 1927 15-year trend lies above all but 4 of 114 model runs. Since the histograms are roughly normally distributed, the 1998 trend approaches three standard deviations below the ensemble mean for 1998 and the 1927 trend is about two standard deviations above the ensemble mean. With about 100 starting years to choose from, how surprised should we be that one can cherry-pick years when observations and models disagree substantially? I don’t know how to statistically analyze the extent of this “cherry-picked disagreement”. I think M&F were simply trying to identify an appropriate confidence interval that can be used for any starting date by making full use of the data from roughly 100 model runs and 100 starting dates. Unfortunately, they used a regression “suggested” by flawed mathematics. And that regression didn’t separate deterministic (forced) warming from unforced variability.
Unfortunately, the reality that “can’t be denied forever” includes unforced variability. There isn’t any reason why the rapid warming of 1975-1998 can’t occur again starting this year and further postpone the recognition of reality. Fortunately, it will take several decades of even faster warming to compensate for the hiatus.
Frank,
“There isn’t any reason why the rapid warming of 1975-1998 can’t occur again starting this year and further postpone the recognition of reality.”
Sure, and I am not suggesting otherwise. What I am suggesting is that the more rapid warming between 1975 and the early 2000’s is just the flip side of the current situation with relatively little warming.
Please point to papers published before the ‘pause’ became a subject of intensive climate science research which suggested the earlier warming was in part due to internal variability, and that estimates of high climate sensitivity were not ‘confirmed’ by that earlier rapid warming. Hell, point to any paper by mainstream climate scientists, whenever published, which says that. I very much doubt such papers exist. When warming was faster, it was embraced by climate science as complete confirmation of high climate sensitivity. Now that the situation has changed, NOBODY in mainstream climate science is publishing papers saying that the best estimate of sensitivity should be lowered. Where are the papers that critique the highest-sensitivity models, which are comically inconsistent with measured reality? I do not think they exist.
Perhaps you think it is unfair to suggest the authors have motives which ‘guide’ the substance of their research. If so, I disagree. In fact, I can see no other plausible explanation: the entire field is guided by certain policy views, and in particular, by the desire to drastically reduce fossil fuel use, independent of the true future rate of GHG driven warming.
The field has not in any meaningful way faced the reality of obvious divergence, and instead offers weak, endless, and often silly arm waves and excuses. When they do face reality, we will see projections of future warming substantially reduced. I am not holding my breath.
Frank (Comment #135623)
“IMO, both of these interpretations appear to be unwarranted – no matter how Lewis’s circularity argument is resolved.”
I agree. They have not made the case for their interpretations.
Frank,
Sure there is. There is reason to believe that the 1975-1998 rapid warming had the same contributing factor as the rapid warming from 1910-1940, changes in the AMOC. The AMO index is currently near its peak and shows signs of starting to decline. The 21 year moving average of the AMO index has a minimum in 1913 and a maximum in 1945. There was another minimum in 1976. Somehow I don’t think that’s a coincidence. Hence, there’s good reason to believe that there won’t be rapid temperature increase for at least another ten and possibly 20 years.
A sine wave fit to the 21 year moving average has its next minimum in 2044, but the maximum contribution to the rate is at the mid point in 2027. My canary in the coal mine is the PIOMAS Arctic ice volume anomaly, which has been increasing since the minimum in 2012.
By the way, since all the models, as far as I know, use variation in aerosols to explain the hiatus between ~1945-1975, there is no way that their output can be used to extract multidecadal variation from the temperature record. It’s classic GIGO.
There are times when I think the failure of the GLORY launch was intentional. If the effect of aerosols were shown to be far less than is modeled (and nearly every model has different aerosol sensitivity), the climate sensitivity of all models would have to be reduced.
“There is reason to believe that the 1975-1998 rapid warming had the same contributing factor as the rapid warming from 1910-1940, changes in the AMOC.”
Steinman et al. claims that “internal climate variability instead partially offset global warming” during that time.
HaroldW,
From your link:
Not if the models all mask out most multidecadal internal variability with aerosols.
They’re correct that the AMO has been fairly flat recently, but again, it’s due to go negative real soon now. I’m betting they’ve overdone the importance of the PDO too. If the PDO is strongly negative, how come sea ice in the Sea of Okhotsk has been below average since about 2004 and there isn’t much of a trend at all in Bering Sea ice for the entire satellite era? One would think both of those would be sensitive to the state of the PDO.
HaroldW,
It’s begging the question. They’re trying to prove that model sensitivity is correct by assuming that CMIP5 model ensemble sensitivity is correct.
SteveF wrote: “Perhaps you think it is unfair to suggest the authors have motives which ‘guide’ the substance of their research.”
All scientists have motives and biases: Hoping our favorite hypothesis turns out to be correct. Publish or perish. Playing the devil’s advocate. Hoping for results that support a preferred policy. We are all supposed to suppress these motives and search for the truth no matter where the data or analysis leads. Do I think climate scientists on the average let their motives interfere with the pursuit of scientific truth more than ordinary scientists? Absolutely. On both sides. However, I don’t think it is USEFUL to assume that any particular flawed paper was caused by non-scientific motivations. Such assumptions make it more likely that my biases (which I suspect are similar to yours) will cause me to miss the truth. Furthermore, Nic Lewis’ post says: “I have a high regard for Piers Forster, who is a very honest and open climate scientist.”
DeWitt wrote: “There is reason to believe that the 1975-1998 rapid warming had the same contributing factor as the rapid warming from 1910-1940, changes in the AMOC. The AMO index is currently near its peak and shows signs of starting to decline.”
I’d enjoy reading a good paper on this subject. At the moment, I don’t see how we can have any confidence in our understanding of an oscillatory phenomenon (particularly its amplitude) from only two cycles, one of which we were monitoring via open containers on a moving ship. Suppose we had only two cycles of ENSO to analyze. What are the chances we’d arrive at the same understanding we have from more than 20 cycles?
“There are times when I think the failure of the GLORY launch was intentional.”
I hope Lew isn’t listening.
SteveF wrote: “Please point to papers published before the ‘pause’ became a subject of intensive climate science research which suggested the earlier warming was in part due to internal variability, and that estimates of high climate sensitivity were not ‘confirmed’ by that earlier rapid warming. Hell, point to any paper by main stream climate scientists, whenever published, which says that. I very much doubt such papers exist.”
Try: Recent global-warming hiatus tied to equatorial Pacific surface cooling. Kosaka&Xie, Nature 501, 403–407. (This is the paper that constrained SSTs in the Eastern Equatorial Pacific so they matched observation and let the rest of the model planet respond to rising GHGs normally.) It’s right there in black and white:
“For the recent decade, the decrease in tropical Pacific SST has lowered the global temperature by about 0.15 degC compared to the 1990s (Fig. 1b), opposing the radiative forcing effect and causing the hiatus. Likewise an El-Nino-like trend in the tropics accelerated the global warming from the 1970s to late 1990s (Extended Data Table 1).”
One sentence. No numbers. (Constraining SSTs caused 0.20 degC decrease during the hiatus and 0.14 degC increase of earlier warming.) Data buried on the last page of the supplemental material. Earlier acceleration of warming not apparent in the main Figure. No mention of earlier acceleration of warming in the abstract.
(Did I chide you earlier today for speculating about motives? I’ll take it all back.)
Frank,
I do not suggest bias has to even be conscious… there can be bias that is completely unconscious (the mother of the axe murderer honestly thinks her dear son is innocent!). There is always reluctance to accept you were mistaken about something important… and modeled diagnosed high sensitivity is the most important ‘finding’ of climate science, at least with respect to public energy policy. The field’s reluctance to face reality is understandable, more than a little amusing, but very bad for formulation of good public policy.
WRT Kosaka&Xie, they forced the tropical Pacific to be much cooler than what the models predict, and show (shockingly!) that the model then tracks reality better than when the tropical Pacific was not so constrained. The point of the paper as I understood it was to show the model is RIGHT about high climate sensitivity, not that climate sensitivity is lower than the model diagnosed. The effect of holding the tropical Pacific cooler than what it would have been in the model world was to throw vast model heat into a black hole, never to be seen again. IMO, it is another of the silly excuse papers from climate science. What they should have been asking was “why is the model trapping more heat than reality?”
There used to be a list of characteristics of bad science floating around. Nearly every one of the traits correlated strongly with how supporters of the climate consensus conduct and promote research.
This is as close to the list as I can find:
http://quackfiles.blogspot.com/2005/07/seven-warning-signs-of-bogus-science.html
“1. The discoverer pitches the claim directly to the media.” Climate hype promoters take this to the next level: How many times do we hear about climate conferences whose purpose is to develop better ways to pitch climate fear? And of course the media itself now enhances and determines what the public hears about climate consensus claims.
“2. The discoverer says that a powerful establishment is trying to suppress his or her work.” From Mann to Gore to the floor of the Senate climate catastrophe believers claim that “big oil”, the “Koch brothers”, “the fossil fuel industry” are involved in a grand conspiracy to suppress the truth about evil CO2.
“3. The scientific effect involved is always at the very limit of detection.”
This is my personal favorite because its corollary is the dominant area of discussion in the public square: If the climate crisis were real it would not require such dubious, convoluted and failed methods to tease out the evidence. Instead we have, for just the pause, literally dozens of frequently conflicting excuses. We find that historical data has to be “corrected” to show the dangerous changes. We are told there are dangerous changes by way of graphs adjusted to dramatically enhance trivial changes… and told to ignore the reality that famine, flood, and suffering are actually flat to reduced.
And there are more, some of which the climate obsessed have evolved far past.
Frank,
I would like to read that paper too. I seriously doubt it would get published in a mainstream journal.
The standard hypothesis that aerosols caused the slowdown in warming between 1945 and 1975 is on even less solid ground than the cyclic nature of the AMOC. Attribution of a substantial contribution to warming to GHGs is based primarily on roughly 20 years of data, 1979-1998. The rate of warming is similar between 1910-1940 and 1979-1998.
Here’s a plot of the annual average of the AMO index, a sine fit to that data and the 21 year moving average of the data. That doesn’t look like an accident to me. Peaks in the AMO index are also correlated to severe droughts. There were severe droughts in the Continental US in the late 19th century. The sine fit to the AMO index peaked in 1878. There was the Dust Bowl in the 1930’s and another severe drought in the 1950’s. The AMO index peaked in 1944. Things aren’t exactly very wet now with the entire state of California suffering drought conditions in 2014. The AMO index recent peak is in 2011.
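For anyone who wants to reproduce the kind of fit described above, here is a minimal R sketch. It assumes an annual AMO series in a data frame `amo` with columns `year` and `index` (hypothetical names; the NOAA unsmoothed AMO series would do), and the roughly 65-year starting period comes from this thread rather than from the code.

```r
# Sketch: 21-year moving average of an annual AMO index plus a fitted sine.
ma21 <- stats::filter(amo$index, rep(1/21, 21), sides = 2)   # centered 21-yr mean

# Fit index ~ A*sin(2*pi*(year - phase)/period) + offset with nls();
# starting values near a ~65-year period so the fit has a chance to converge.
fit <- nls(index ~ A * sin(2 * pi * (year - phase) / period) + offset,
           data  = amo,
           start = list(A = 0.2, phase = 1945, period = 65, offset = 0))

plot(amo$year, amo$index, type = "l", col = "grey",
     xlab = "Year", ylab = "AMO index")
lines(amo$year, ma21, col = "blue", lwd = 2)        # 21-yr moving average
lines(amo$year, predict(fit), col = "red", lwd = 2) # fitted sine
coef(fit)   # fitted period and phase; past/future extrema follow from these
```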
DeWitt,
“If the effect of aerosols were shown to be far less than is modeled (and nearly every model has different aerosol sensitivity), the climate sensitivity of all models would have to be reduced.”
.
Yes, and the hilarity of ignoring this seems completely lost on climate scientists and their camp followers. ‘Aerosol experts’ in AR5 reported substantial reductions from AR4 in the best estimates of aerosol effects (though still very uncertain). Are the climate model assumed aerosol effects consistent with these “best estimates”? Well, maybe the individual models have assumed offsets that fall within the AR5 stated (very wide) uncertainty range, but it does seem odd that the model ensemble on average is way higher in aerosol effects than the IPCC’s best estimate….. and AFAIK, no models fall on the low side of the AR5 uncertainty range for aerosols. Odd in the same way that nearly all the runs of all the models project warming which is higher than has been observed. One might be tempted to speculate that those two oddities are not independent of each other.
.
As Lucia commented a few days ago: this sort of thing must be giving the modelers the heebie-jeebies… and it should. What I want to see is a few well known models “re-tuned” through their parameterizations to match historical temperatures, but with lower assumed historical aerosol effects. I suspect the diagnosed sensitivity would then fall in line with much lower empirical estimates like Lewis & Curry’s. Which is why I am sure it will not be done. Climate modeling is the most intellectually corrupt science I have ever encountered.
“The field has not in any meaningful way faced the reality of obvious divergence, and instead offers weak, endless, and often silly arm waves and excuses. When they do face reality, we will see projections of future warming substantially reduced. I am not holding my breath.”
If we confine our judgments about motivation getting in the way of presenting good science to individual published papers, I think we can use those judgments to increase our vigilance in analyzing future work by the authors in question, or by those who use that work as a basis for further study. Frank makes a good general critique of the M&F paper, and it points to what I sometimes view as a problem with criticisms at these blogs being understood in a wider context. The blog post by Nic Lewis was aimed at a single feature of the M&F paper – as any published critique would be. The problem with having to focus on a single feature of the paper is that the defenders of the work, in my view anyway, often appear to conclude that the remainder of the paper is not problematic. Blogging has the flexibility to present both the focused and the more general analyses of these papers – although I think sometimes the more general analysis, and its importance in judging the overall quality of the work, gets lost in the shuffle.
Given the general criticisms of the weaknesses and wrongheaded approach of the M&F paper, such as those presented here by Frank, and the fact that it got published in the prestigious journal Nature, I think one has to consider motivations in this instance. I consider some of these weaker works to be motivated by scientists and publications who feel that the end result is already apparent from the consensus of other authors’ work, which they may or may not fully comprehend. Their mission then becomes presenting evidence, no matter how weak, to support the consensus conclusions.
The authors of M&F were not so much interested in showing that the internal variations in the models present an obstacle to demonstrating a clear difference between observed and modeled historical temperatures during the pause – as many here would admit – but rather that the deterministic trends in the models were not sufficiently varied to obscure the model/observed difference during the recent pause. Given that there are other, simpler approaches to attempting to separate the deterministic trend from the noise (internal variability), why would the authors go this route? I think a prominent reason is that this more complex route provides a uniqueness of approach that is much more likely to be published in a prestigious journal than simpler ones; further, since it might require considerable reviewing time to truly comprehend, it can get past that hurdle through a lack of attention; and finally, the end result lands in the consensus domain with no potential ruffling of status-quo feathers.
I think a bigger question coming out of the M&F paper is how the models handle the aerosol levels in the atmosphere and how much effect that has on the historical temperature series, given that these models have a wide range of TCR and ECS values. The CMIP5 models were all presented with the same levels of aerosols, and the differences between models are determined by how those same levels get manifested in different forcings, as evidenced by differences in Aerosol Optical Depth. Some models have the capability to handle secondary cloud effects from aerosols, but many do not. I was surprised to learn, when I went looking for an aerosol proxy to regress the SSA deterministic trend against (along with the TCR values), that only 6 CMIP5 models had historical scenario runs that would allow estimation of an aerosol effect.
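For readers not familiar with the SSA trend extraction mentioned above, a bare-bones R sketch of singular spectrum analysis is below. The window length and the number of components kept are arbitrary illustrative choices, and `gmst_annual` is a hypothetical input series; the actual procedure used by the commenter may well differ.

```r
# Minimal singular spectrum analysis (SSA): embed the series in a trajectory
# (Hankel) matrix, take an SVD, keep the leading component(s), and average
# back along anti-diagonals to get a smooth "deterministic" trend.
ssa_trend <- function(x, L = 30, keep = 1:2) {
  N <- length(x)
  K <- N - L + 1
  X <- sapply(seq_len(K), function(i) x[i:(i + L - 1)])   # L x K trajectory matrix
  s <- svd(X)
  Xr <- s$u[, keep, drop = FALSE] %*%
        diag(s$d[keep], nrow = length(keep)) %*%
        t(s$v[, keep, drop = FALSE])                       # low-rank reconstruction
  rec <- numeric(N); cnt <- numeric(N)
  for (i in seq_len(L)) for (j in seq_len(K)) {            # anti-diagonal averaging
    idx <- i + j - 1
    rec[idx] <- rec[idx] + Xr[i, j]
    cnt[idx] <- cnt[idx] + 1
  }
  rec / cnt
}

# e.g. trend <- ssa_trend(gmst_annual, L = 30, keep = 1:2)
# with x - trend then treated as the "noise" / internal variability.
```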
SteveF,
The problem, of course, is that if the effect of aerosols in the models is reduced, they won’t be able to hindcast the instrumental record even with reduced ghg sensitivity. The surface temperature record from 1910 to 1975 is a major stumbling block. If it isn’t aerosols, then they would have to admit that internal unforced variability at multidecadal time scales is much larger than was thought. That reduces ghg sensitivity even more. Pretty soon, the catastrophe goes away. That would be heresy. It will take the proverbial 2×4 applied to the mule’s head to get its attention for that to happen. Another ten years of slow temperature rise might be enough, but I doubt it.
DeWitt: I presume you know about the latest on this subject from Steinman, Mann et al. Science (2015) 988-991.
DOI: 10.1126/science.1257856
http://www.meteo.psu.edu/holocene/public_html/Mann/articles/articles/SteinmanEtAlScience15.pdf
They reference earlier work on the subject that might be of interest.
“Some recent work (18, 19, 21, 22, 25) has attributed a potentially large proportion of observed regional and hemispheric temperature changes to multidecadal internal variability related to the so-called ‘AMO’ and/or ‘PDO.’”
IMO, all attempts to demonstrate that oscillations are an important component of climate appear to be inherently circular. (Ice ages may be an exception.) First, you need a way to define what would have happened if the oscillation didn’t exist. You can assume a flat baseline, a linear trend (AMO), temperature elsewhere (global SST for PDO), or output from climate models with or without fudge factors (Steinman et al fudge). Then you calculate the difference between what should have happened and what did happen and call that unforced variability. If you used models, you then claim that unforced variability explains almost all of the difference between model hindcasts and observations – proving your models are correct. (:()
No matter what you do, the resulting “unforced variability” will always contain all of the errors arising from your choice of “what would have happened”.
If you are lucky, the difference will appear to have a dominant frequency (like the AMO, but not PDO). For long oscillations like AMO that appear to be regular, it will take centuries or millennia to prove whether they are periodic, occasionally periodic (common in chaotic systems), or random.
The paper below reports the first identification of the AMO. The same technique (singular spectral analysis) that identified the AMO was applied to a variety of regions (Figure 2). Some regions bordering on the Atlantic follow the AMO, but the rest of the world looks quite different. It is not surprising that GMST varies somewhat with the AMO, because the AMO is the dominant pattern of variability for a non-trivial fraction of the world, but it looks irrelevant elsewhere.
http://lightning.sbs.ohio-state.edu/geo622/paper_amo_Schlesinger1994.pdf
Fudge factors,
Aerosols would seem to be number 2 with a rocket from the above discussion.
Has anyone done a ranking of the fudge factors, also known as reasons for the pause?
How lucky we are to have 43 reasons that each account for at least 25% of the pause; otherwise we would currently be 10 degrees warmer and cooked.
Would anyone care to say how many can be safely discarded?
Climate Sensitivity is clearly much higher than anyone imagined.
Why are there no scientists putting forward 43 causes of warming other than GHG?
SteveF wrote: WRT Kosaka&Xie, they forced the tropical Pacific to be much cooler than what the models predict, and show (shockingly!) that the model then tracks reality better than when the tropical Pacific was not so constrained. The point of the paper as I understood it was to show the model is RIGHT about high climate sensitivity, not that climate sensitivity is lower than the model diagnosed. The effect of holding the tropical Pacific cooler than what it would have been in the model world was to throw vast model heat into a black hole, never to be seen again.
Actually they constrained a portion of the Eastern Equatorial Pacific (8.2% of the earth’s surface) to follow SSTs that were observed in that region: colder than normal during the hiatus due to more strong La Ninas than usual and warmer than usual for the previous decades due to more strong El Ninos than usual. The rest of the model evolved under historic forcing. I don’t know if the amount of heat added or removed from the mixed layer was significant on a global scale. (I would have conserved energy by distributing it evenly everywhere else in the mixed layer.) As I see it, the rationale was that GCMs can’t reproduce the extreme variability in SST seen in this region, especially the decadal variation in a preference for strong La Ninas or strong El Ninos. Models certainly can’t cause these preferences to occur in the correct decade. If a model starts with a modest, but highly variable, portion of the ocean with the correct temperature, what will happen elsewhere on the planet to temperature and precipitation?
Answer: Compared with the unconstrained control runs, the warming was more rapid from 1971-1997 and negligible from 2002-2012. Only the latter result was publicized – as an explanation for the hiatus. Since the decadal variability of the Eastern Equatorial Pacific is not understood, nothing has been “explained”.
Frank,
For the purpose of determining the climate sensitivity, we don’t need to know if the AMOC influenced the GMST in the past before the instrumental record or will continue in the future to exhibit a sinusoidal variation with a period of ~65 years. The question is whether aerosols somehow managed to produce a sinusoidal signal with a period of ~65 years in the GMST or whether it was unforced variability in the system. IMO, the unforced variability, which is reflected in the AMO index, but may or may not have been caused by changes in the AMOC, is the simpler explanation. In that case, one removes the sinusoidal variation and looks at the residuals. Those residuals look remarkably like what one would expect from the increase in atmospheric ghg’s with very little influence from aerosols.
Looking at the residuals from subtracting the CMIP5 ensemble results, which have been tuned with aerosols to mostly remove the sinusoidal variation, from the GMST record to look for unforced variability as in Steinman is not productive. As I said elsewhere, it’s begging the question. If the PDO is the major factor, as in Steinman, why don’t we see a correlation with Arctic sea ice extent in the Bering Sea and the Sea of Okhotsk with the PDO? Arctic sea ice on the Atlantic side appears to be strongly correlated with the AMO index.
If it is sinusoidal unforced variability, we also don’t need to know if it has existed for, or will continue for, millennia, only for the next 50-100 years. We should know within 20 years or less if that’s likely to be the case.
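A minimal sketch of the decomposition described in the comment above: fit a logarithmic CO2 term plus a ~65-year sinusoid to GMST and inspect what each term explains. The data frame `d` (columns year, gmst, co2) is a hypothetical input, and the 65-year period comes from the AMO discussion in this thread rather than from this fit.

```r
# Fit GMST as a logarithmic CO2 response plus a fixed-period sinusoid.
fit <- nls(gmst ~ a + b * log(co2 / 280) +
                  A * sin(2 * pi * (year - phase) / 65),
           data  = d,
           start = list(a = 0, b = 2, A = 0.1, phase = 1945))

summary(fit)
# b * log(2) is a crude transient response per CO2 doubling implied by this
# fit; comparing fits with and without the sine term shows how much apparent
# GHG sensitivity the sinusoid absorbs.
plot(d$year, d$gmst, type = "l")
lines(d$year, predict(fit), col = "red")
```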
DeWitt,
Yes, using aerosols as a kludge is indeed a big issue. Setting aerosol influences at lower levels would make it impossible for the models to hindcast historical temperatures. If you remove a sinusoidal AMO signal, then lower sensitivity to GHG forcing becomes the most plausible explanation… I don’t think even a “2 by 4” will convince the modelers of this.
.
Considering that aerosol effects dominate the uncertainty in net man-made forcing, I sometimes wonder why there is no loud clamoring in climate science to replace the failed Glory satellite. In my darker moments, I wonder somewhat less.
Frank,
Seems to me you answered your own question:
“Only the latter result was publicized – as an explanation for the hiatus.”
That is what the paper intended to do from the outset. Every one of the “explanation” papers avoids the most plausible explanation: the models are far too sensitive to GHG forcing. No paper examines the possibility of substantially lower historical aerosol effects combined with changes in cloud parameterizations to compensate for the higher net GHG forcing. The GISS models are all based on the assumption of ~50% of all historical human forcing offset by aerosol effects (I won’t speculate on where that number was pulled from, but James Hansen almost certainly was involved)… contradicting the best estimates of offsets (AR5). Substitute the AR5 aerosol estimate, and the GISS models would run far too warm. As I said: climate modeling is an intellectually corrupt exercise.
SteveF: “The GISS models are all based on the assumption of ~50% of all historical human forcing offset by aerosol effects…”
The earliest reference I can find (quickly!) is Hansen et al., Earth’s energy imbalance and implications (2011). The caption for figure 1 states: “Forcings … are the same as used by Hansen et al. (2007), except the tropospheric aerosol forcing after 1990 is approximated as -0.5 times the GHG forcing.”
Hansen et al. (2007) had 2003 aerosol forcing at -1.37 Wm-2 (relative to 1880). Using the 2011 approach, aerosol forcing is about 10% larger (around -1.5 Wm-2 for 2003). By 2010 it’s around -1.6 Wm-2.
AR5 gives a central estimate for 2011 aerosol forcing of -0.9 Wm-2, with the 17th percentile at -1.52 Wm-2. The central observational estimate is only 80% of that (-0.73 Wm-2).
The lack of self-doubt or critical review of assumptions in the offered explanations so far indicates strongly that they are little more than post-hoc arm waving and excuse making. To be off this far and this long under any realistic program is a call to check assumptions thoroughly and critically. Not one of the consensus promoters has done this, as far as I can tell. The level of smugness implicit in that behavior is in itself reason to doubt the consensus. If physicists were getting results that differed from expected results this much for this long we would not see such sorry excuse making as we see in climate science.
I see two positive points in M&F 2015.
1) They point out that the recent divergence is not a new problem; models have been equally bad at hind-casting, even when they knew what the answer was supposed to be, i.e. “they’ve always been this bad, what’s the fuss”. It’s unclear why this should make us trust the models.
2) The 62y trend analysis actually DOES show two clearly separate groups emerging by the end of the record: see their figure 3b.
https://climategrog.files.wordpress.com/2015/02/mf2015_fig3b.png?w=662
Note the clear bifurcation into two groups of results, with white space in between.
The down side is they failed to notice this and analyse whether the two groups correspond to high and low TCS. My bet is that they do.
The negative side is that a sliding trend is a crude low-pass filter, but one with nasty negative lobes that invert part of the data. In the case of a 15y sliding trend, this inverts 10y variability. Now with two major volcanoes 11y apart at the end of the 20th century, this is a problem. Having inverted a significant part of the signal, they find no “traceable impact” of TCS. Well, you can’t prove a negative, and if you scramble the data it may not be surprising, or significant, that you don’t find anything.
Now if I use a gaussian to filter the deviation of model dT/dt from that of HadCRUT4, which they chose as observational reference, I do find a divergence that relates to high and low sensitivity models.
https://climategrog.wordpress.com/?attachment_id=1315
What that graph also shows is that, as M&F point out, the recent divergence is not new. The biggest problem that *all models* have in the last half century that they have been tweaked to fit is around 1970. 1960 is close behind.
Now they under-estimate during the strong 1960 solar peak and over-estimate during the very weak 1970s solar peak.
This looks like a clear indication that *all models* are under-estimating the impact of the solar signal and that this is their biggest problem.
If they under-scale solar they will probably have to over-estimate the effects of volcanic forcing which, as luck would have it, often happens just after a sudden drop in solar, leading to the risk of false attribution:
https://climategrog.wordpress.com/?attachment_id=1322
A model which is over-sensitive to volcanic forcing will also be very likely to be over-sensitive to AGW. When both are present this may more or less work. When one is absent we will get …. a divergence.
Temperature Adjustments
NOAA will calculate a monthly average for GHCN stations missing up to nine days worth of data (see the DMFLAG description in ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/readme.txt). Depending on the month’s length, GHCN averages will be calculated despite missing up to a third of the data.
Calculating a monthly average when missing 10-11% of the data can produce a result with questionable accuracy.
If the reasons for missing data include problems accessing the data, which are most likely to occur on the coldest days in higher latitudes [*fact], is there not a TOBS-like bias warming the data present?
Given that the colder stations drop out more often, does this not also warm the homogenization done?
They play catch up over the year. That is the number is continually recalculated as more stations check in.
HaroldW,
Yup, and the justification is…. well, that James Hansen just knows climate sensitivity is high. I have seen no other justification offered. Put in the central estimate from AR5, and re-tune model parameters to match history, and model sensitivity has to drop by a factor of ~ (3.15 – 1.6)/(3.15 – 0.73)= 1.55/2.42 = 0.64. Now considering that the GISS models diagnose a sensitivity of ~3C per doubling…… a more reasonable aerosol offset would yield a bit under 2C per doubling, virtually the same as Lewis & Curry’s recent empirical estimate. Which is clearly not sufficiently alarming. The best thing about climate modeling is that you get to use whatever kludges you need to get the answer you want. Diagnosing climate sensitivity with climate models is worse than nonsense, it’s rubbish.
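The arithmetic above, spelled out in R for anyone who wants to vary the inputs. The 3.15 W/m^2 non-aerosol figure and both aerosol numbers are the ones quoted in this thread, not independently sourced.

```r
# Ratio of net forcing with the GISS aerosol offset to net forcing with the
# AR5 central observational aerosol estimate, and the implied rescaled ECS.
non_aerosol  <- 3.15   # W/m^2, non-aerosol anthropogenic forcing (as quoted above)
giss_aerosol <- 1.6    # W/m^2, GISS aerosol offset (HaroldW's number)
ar5_aerosol  <- 0.73   # W/m^2, AR5 central observational estimate

scale <- (non_aerosol - giss_aerosol) / (non_aerosol - ar5_aerosol)
scale        # ~0.64
3 * scale    # a ~3 C/doubling model rescales to a bit under 2 C/doubling
```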
“If reasons for missing data include problems accessing the data which are most likely to occur on the coldest days in higher latitudes [*fact] is there not a TOBS bias for warming the data present.”
No. The default way of dealing with missing data is to average the remaining data for that step. This in effect attributes that average value to the missing.
The first averaging step is over the month for each station. That attributes that month’s average, for that place, to the missing days. There is no reason to expect those days to be colder than others in the month.
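For concreteness, a toy R illustration of the mechanics both sides are arguing about; the numbers are synthetic, not GHCN data.

```r
# Averaging the days that remain is equivalent to infilling the missing days
# with that month's mean, so a bias appears only if the missing days are
# systematically colder (or warmer) than the rest of the month.
set.seed(1)
tmean <- rnorm(31, mean = -5, sd = 6)      # daily means for one winter month

# Case 1: days missing at random
miss_random <- sample(31, 5)
mean(tmean)                                # true monthly mean
mean(tmean[-miss_random])                  # average of reported days (unbiased on average)

# Case 2: the 5 coldest days are the ones that go unreported
miss_cold <- order(tmean)[1:5]
mean(tmean[-miss_cold])                    # warm-biased monthly average
mean(tmean[-miss_cold]) - mean(tmean)      # size of the warm bias
```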
Au contraire Nick.
I outlined the reasons for the missing days to be colder than the others in each month.
Eli backed up this line of reasoning [thanks Eli] when he said
“That is the number is continually recalculated as more stations check in.”
He knows that stations do report in late [when the observer can get through to the station gauges to get the data to send back], which is also the time when data is most likely to be permanently lost.
Zeke has confirmed it in his explanation of the reasons why data is incomplete [older staff; older stations not replaced when volunteer observers get old, die, or drop out; and placement of stations in colder, harder-to-reach areas that are more affected by problems].
You know about this, as you have read his explanation.
If the colder days are the ones more often dropping out, it is possible to run checks for this.
And if this is the case, there is an adjustment needed for the bias in using an average of the month based on the data you have, rather than an average based on the historical records of how much colder those missed cold days would have been.
Like a TOBS adjustment, only cooler.
SteveF: “…considering that the GISS models diagnose a sensitivity of ~3C per doubling…”
Actually the GISS models are at the low end of the CMIP5 ensemble. From AR5 WG1 Table 9.5;
GISS-E2-H has ECS=2.3 K & TCR=1.7 K.
GISS-E2-R has ECS=2.1 K & TCR=1.5 K.
[Difference is the ocean model: HYCOM (H) vs. Russell (R). Source ]
I think Hansen’s estimates are no longer tied to GISS’s GCMs. His more recent papers focus more on feedback coefficients, fast and slow.
In examining the distorting effects of the 15y sliding trend used by M&F, I used a comparable-length gaussian filter on dT/dt that does not produce negative lobes or invert the 10y variability.
Above I linked a plot showing that both high and low sensitivity models have large divergences around 1960 and 1970, greater than the current divergence.
Selecting a group of high-sensitivity runs and comparing their mean divergence from HadCRUT4 (chosen by M&F as the observational reference), that divergence bears a strong resemblance to SSN, suggesting they are under-estimating solar effects.
https://climategrog.wordpress.com/?attachment_id=1321
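The “negative lobes” claim about the 15y sliding trend is easy to check empirically with synthetic sinusoids (no model output involved). The sketch below measures the gain of a 15-year sliding OLS trend relative to the true derivative at a handful of periods; the set of periods is an arbitrary illustrative choice.

```r
# Empirical gain of a 15-year sliding OLS trend applied to a pure sinusoid,
# relative to the sinusoid's true time derivative. Negative gain means that
# variability at that period comes out inverted.
sliding_trend <- function(x, w) {
  t <- seq_len(w) / 12                        # time in years (monthly data)
  sapply(seq_len(length(x) - w + 1),
         function(i) coef(lm(x[i:(i + w - 1)] ~ t))[2])
}

w       <- 15 * 12                            # 15-year window
yrs     <- seq(0, 200, by = 1/12)             # 200 years of monthly time steps
periods <- c(5, 8, 10, 12, 15, 20, 30, 60)

gain <- sapply(periods, function(P) {
  y    <- sin(2 * pi * yrs / P)
  dy   <- (2 * pi / P) * cos(2 * pi * yrs / P)   # true derivative
  tr   <- sliding_trend(y, w)
  dmid <- dy[seq_along(tr) + w %/% 2]            # derivative at window centres
  coef(lm(tr ~ dmid))[2]                         # gain of the trend "filter"
})
round(data.frame(period_yr = periods, gain = gain), 3)
# Periods near 8-10 years come out with negative gain (the inverted lobe);
# the gain approaches 1 at long periods.
```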
In a brief exchange with SteveF about his interesting regression of exponential responses, he said he found a time-constant of 15mo when allowing different scaling of volcanic and solar forcing.
http://rankexploits.com/musings/2013/more-on-estimating-the-underlying-trend-in-recent-warming/#comment-117258
At the time I said this was too short in relation to the values coming out of models. For some reason I thought this cast doubt on his regression rather than on the models.
I recently did a similar analysis but concentrating just on the climate reaction shown in ERBE TOA after Mt Pinatubo.
http://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/
I found a time-constant of 8mo for the tropics, which shows a very low sensitivity. It would almost certainly be longer for the global case, which Steve was doing.
HaroldW,
I had seen the lower values for GISS Model E versions once before; I believe these come from Forster et al (along with a calculated ‘effective forcing’). If correct, these are a big reduction from earlier sensitivity values for Model E. My recollection is that Gavin talked about 2.8C per doubling for ECS (itself a significant reduction from Hansen’s earlier ECS values). The very low values seem suspect to me, because they appear inconsistent with best estimates of forcing and heat accumulation.
.
The AR5 table (from Forster et al) says these models have an equilibrium sensitivity of 0.6C per Watt/M^2. The observed warming over pre-industrial is ~0.85C, so that implies about 1.42 watts/M^2 of forcing is currently being ‘used’ to maintain that ~0.85C higher temperature. ARGO data suggest a current rate of ocean heat uptake for the first 2000 meters of ~0.45-0.55 watt/M^2. If you add a bit for heat accumulation below 2000 meters, a bit more for net ice melt and heat accumulation on land, you might reach 0.55 to 0.65 watt/M^2. So, between the two, we need ~2 watts/M^2 of current forcing, net of aerosol effects, to be consistent with a sensitivity of 0.6C/Watt/M^2.
.
AR5 (figure SPM.5) puts the best estimate of forcing, before aerosol effects, at ~3 watts/M^2. But my understanding is that GISS puts current aerosol offsets (direct and indirect) at ~1.6 watts/M^2… leaving only ~1.4 watts/M^2, not nearly the ~2 watts/M^2 needed. Of course if GISS is much higher in non-aerosol forcing than AR5’s best estimates, much lower in heat accumulation than ~0.6 watt/M^2, or some combination of the two, that could resolve the discrepancy. But I would look first at the credibility of the sensitivity values from Forster et al.
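The bookkeeping in the comment above, as a short R calculation; all inputs are the values quoted in the comment, none are derived here.

```r
# Forcing needed to be consistent with a 0.6 C per W/m^2 sensitivity,
# versus the forcing left over after the GISS aerosol offset.
lambda         <- 0.6                    # C per W/m^2 (GISS-E2 ECS per the AR5 table)
warming        <- 0.85                   # C, warming over pre-industrial
forcing_used   <- warming / lambda       # ~1.42 W/m^2 to sustain the warming
heat_uptake    <- 0.6                    # W/m^2: ocean 0-2000 m plus deep ocean, ice, land
forcing_needed <- forcing_used + heat_uptake   # ~2 W/m^2 net forcing required

non_aerosol  <- 3.0                      # W/m^2, AR5 SPM.5 forcing before aerosol effects
giss_aerosol <- 1.6                      # W/m^2, GISS direct + indirect aerosol offset
non_aerosol - giss_aerosol               # ~1.4 W/m^2 available: short of the ~2 needed
forcing_needed
```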
Greg Goodman,
Thermal stratification of the ocean is strongest in the tropics, with a well mixed layer which is relatively thin (~50 meters on average?) and which has a fairly sharp transition to much cooler temperatures below. So I guess it is not surprising if you find the tropics have a faster response to applied forcing. The summer seasonal well mixed layer at higher latitudes can be very shallow (really just a skin of fairly warm water), but the depth of the colder permanent well mixed layer is much greater than in the tropics; ARGO profiles for different seasons at ~40 degrees latitude and higher show how a thin layer of warm water forms in the summer on top of the deeper permanent well mixed layer, but disappears with surface cooling (and convective overturning) in autumn.
SteveF (#135664) –
Looking at Table 1 of Forster et al., GISS-E2-R has very nearly the highest cloud radiative effect feedback, at +0.48 Wm-2/K. [GISS-E2-H is close behind at +0.47.] I don’t quite know how to relate that to the aerosol forcing value, but I suspect that GISS-E2 aerosol forcing is well north of the -1.6 Wm-2 given by Hansen’s “0.5 times GHG forcing” formula.
Thanks for the reply Steve.
“Thermal stratification of the ocean is strongest in the tropics, with a well mixed layer which is relatively thin (~50 meters on average?) ”
Relative to what? You then go on to say extra-tropics is even thinner, so a better description of the tropics would seem to be “relatively thick” mixed layer in the tropics.
IMO the short time-constant should be interpreted as it is elsewhere: to indicate insensitivity to radiative forcing. High sensitivity models have long time-constants.
Could you clarify something about your previous analysis? It makes sense to have an exponential (linear relaxation) response for the volcanic and solar forcing, but was the ENI treated in the same way? I.e., was the 7mo lag a simple temporal lag on top of the 15mo exponential, an additional exponential ‘lag’, or a simple time lag with no exponential applied?
It seems that my 8mo exp for the tropics (with a temporal lag of 13mo between peak forcing and peak response) may be compatible with what you found, at least in the case of separate, non-fungible solar and volcanic.
Harold, have a look at my article over at Judith’s.
http://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/
All these numbers are fudge factors that get tweaked up and down in an attempt to balance the books. In 1992 Hansen’s group had volcanic aerosol forcing at AOD x 30 W/m^2; in 2001 they had dropped it by 50% in order to reconcile model output whilst maintaining high sensitivity in the models. This is little more than adjusting the data to fit the models rather than the other way around.
I do not want to be too cynical about their motivations but there does seem to be a mindset for “correcting” data to agree with what they “expect” it should be. Thus the models become an exercise in bias confirmation.
Here’s another curiosity about ENSO as seen in NINO3.4 regional SST.
https://climategrog.wordpress.com/?attachment_id=1325
In the year following each major stratospheric eruption there was a positive SST anomaly in this region. I’m working on some other indications that as well as being “internal variability” ENSO may be acting as a negative feedback.
If this is the case subtracting it should enhance correlation of volc and solar but this may be somewhat tricky if it is also subtracting a feedback.
Greg Goodman,
The “well mixed layer”, which mostly defines the regional heat capacity, is almost constant in depth over time in the tropics, so the tropical heat capacity is reasonably constant, and it is equivalent to ~50 meters. At higher latitudes, there is a deeper “permanent” mixed layer, which is much deeper than the mixed layer in the tropics, the depth of which is set by wintertime convective overturning, and a VERY shallow (and very temporary) summertime surface stratification. So the apparent heat capacity (at least WRT to short term changes in forcing) varies over the year at higher latitudes. Response to longer term changes in forcing (like a major volcano) is going to be dominated by the deeper permanent mixed layer depth… and so overall should be slower than in the tropics.
The ENI is essentially the sum of exponentially decaying past monthly Nino 3.4 indexes, such that the most influential Nino 3.4 value at any time is that for the previous month, the next most influential is the month before that, and so forth. The rationale is that the gain/loss of heat associated with ENSO takes time to be distributed, and that there will always be some ‘memory’ of past ENSO states at any moment. That memory should decay over time. The fit of tropical surface temperatures to the ENI is in fact a little better than the fit to a simple 3 or 4 month time-lagged Nino 3.4 index. I suspect that part of the improved fit is due to smoothing of ‘noise’ in the monthly Nino 3.4 index (multiple months contribute to the index, so short term variation is attenuated), and part is due to a better representation of what is happening physically. I did not spend much time looking, but I imagine one could create a better functional fit of Nino 3.4 to global temperatures by examining how ENSO influence propagates from the tropics (within the Hadley circulation) to higher latitudes, as a separate physical process.
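A minimal sketch of an exponentially-decaying ENSO index of the kind described above; the decay constant and the exact weighting are assumptions made for illustration, and `nino34` is a hypothetical monthly input series.

```r
# Each month's ENI is a weighted sum of past Nino 3.4 values, with the
# previous month weighted most and older months decaying geometrically.
eni <- function(nino34, tau = 4) {        # tau = assumed e-folding time in months
  a   <- exp(-1 / tau)                    # per-month decay factor
  out <- numeric(length(nino34))          # out[1] has no past values, so it stays 0
  for (t in 2:length(nino34)) {
    out[t] <- nino34[t - 1] + a * out[t - 1]   # recursive form of the decaying sum
  }
  out
}

# e.g. plot(eni(nino34), type = "l"), then regress tropical temperatures on
# this index instead of on a simply lagged Nino 3.4 series.
```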
Thanks Steve, I see better what you meant now. That is in qualitative agreement with what I found by stacking the SST response to six major eruptions, aligning dates of eruption as zero.
I found tropics to be much more stable and quicker to settle. Here is a series of graphs for NH, SH for SST and HadCRUT. They are all interlinked in the text.
https://climategrog.wordpress.com/?attachment_id=285
So if your additional ENI ‘lag’ was in the exponential time const, then it should be additive with my 8mo tc for the tropics.
8mo + 7mo = 15mo, a remarkable agreement, if I’m correctly following what you did.
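As a check on that additivity idea, here is a small R sketch (assumed numbers only) showing that when two first-order exponential lags are cascaded, the mean (centroid) lag of the combined impulse response is the sum of the two individual mean lags, which is one sense in which an 8mo and a 7mo lag could combine to roughly 15mo.

# Sketch: cascade two first-order (exponential) lags and compare centroid lags.
impulse  <- c(1, rep(0, 599))                                  # unit impulse, 600 months
exp_lag  <- function(x, tc) as.numeric(stats::filter(x, exp(-1 / tc), method = "recursive"))
centroid <- function(w) sum((seq_along(w) - 1) * w) / sum(w)   # mean lag in months
r8  <- exp_lag(impulse, 8)     # 8-month exponential response
r87 <- exp_lag(r8, 7)          # followed by a 7-month exponential response
c(lag_8 = centroid(r8), lag_7 = centroid(exp_lag(impulse, 7)), lag_combined = centroid(r87))
# lag_combined matches lag_8 + lag_7; each centroid sits about half a month
# below its nominal time constant because of the discrete monthly grid.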
I’m currently exploring SSN in the tropics and am seeing indications of a much longer time-const of about 24-30mo, that would be closer to model values. I suspect the two responses are very different ( non-fungible ), this may be due to depth of penetration of short wave solar. For the moment I’m concentrating on analysing the data rather than trying to explain why.
DeWitt and SteveF: I certainly recognize the possibility that modelers have used high sensitivity to aerosols to overfit the hiatus in warming during the ’50s and ’60s – which might be due to unforced variability such as the AMO. Both hiatuses came after earlier periods of rapid warming. That earlier hiatus now looks much like today’s hiatus, but this wasn’t clear when modelers first chose their parameters for aerosol forcing, and AR5’s consensus that sensitivity is likely weaker than previously believed wasn’t available in time for the CMIP5 models. (That evidence had probably been accumulating for some time, and was discounted because it made it harder for their models to fit the historical record. Now they have no reason not to respond by doing some work with models of lower sensitivity to aerosols.)
Kosaka’s work was done with the GFDL 2.1 model, which has a relatively low TCR (1.5 degC) – the climate sensitivity parameter most relevant to the study. (ECS is 3.4 degC.) A TCR of 1.5 degC is in reasonable agreement with Otto (2013) and Lewis and Curry (2014). So this model might not use as much aerosol cooling to bring it into line with observations of warming as you think. Kosaka doesn’t “explain” the hiatus. It simply tells us that if models had the ability to hindcast observed natural variation in SSTs in the Eastern Equatorial Pacific, the hindcast for the rest of the planet would show the hiatus and increased warming in earlier decades. Both the authors and the climate science community deserve significant blame for suppressing the enhancement of warming.
DeWitt: If I understand correctly, recent results from energy balance models use weaker estimates of aerosol forcing. However, unforced variability like AMO will cause error in their results. That is why Lewis and Curry (2014) chose 65- and 130-year periods even though there is substantial uncertainty in aerosol forcing over this period. Otto (2013) used shorter periods where the net change in aerosol amount was relatively small, minimizing the influence of the uncertainty in how to convert aerosol amount to aerosol forcing, but exposing themselves to greater error from unforced variability.
From Stainforth’s perturbed parameter ensemble of climate models, it is obvious that it is easier to find a climate model with high climate sensitivity than with low (ECS = 2 degC). Tuning parameters one at a time (usually against a single aspect of climate, like albedo) doesn’t lead to an optimum set of parameters that is independent of the tuning strategy. When trying to simultaneously fit eight aspects of climate, Stainforth found that no subset of parameter space was clearly inferior or superior at reproducing many aspects of climate at the same time. With no clear direction in a relatively flat wilderness with multiple local optima (and confirmation bias), it is not surprising that modelers have produced models with high climate sensitivity that needed to be cooled by aerosols. The question is: “What will happen now?” There is plenty of other evidence that all models do a lousy job reproducing the ANNUAL changes in OLR and reflected SWR from clear and cloudy skies that we have been monitoring accurately from space for more than a decade. And they are lousy in many different ways, demonstrating most are full of compensating errors.
If you have ever done molecular mechanics trying to find the global optimum conformation for a molecule with rotatable bonds (or read about protein folding), you will recognize their need to systematically explore parameter space.
https://climategrog.wordpress.com/?attachment_id=1315
GFDL was in my group of low sensitivity models. They do better at the end of the record and are not too far off with the pause. However, both groups of models are nearly as bad as each other in the 60s and 70s.
This corresponds to under-estimating the strong 60s cycle and under-estimating the dip caused by the weak 70s cycle.
The major problem all models have seems to be in getting solar right. This may be due to trying to make all radiative forcings equal.
Parts of the solar spectrum can penetrate very deep below the surface and affect a large heat sink. Volcanic aerosols modulate solar but not in the same way as solar output itself varies over the cycle.
Re-emitted, downward LW only affects microns of the surface film.
It seems obvious that these will have very different time-constants and sensitivities.
If the early 20th c. rise was solar induced, the reason it’s not dropping more now may be because AGW is propping it up. That may be a blessing in one way. I wish they’d get their fingers out and make a wider range of runs available for inspection instead of trying to rig data to fit expectations.
I’m sure someone at Hadley has done some “what if” runs but the results are safely locked in a vault somewhere.
” If I understand correctly, recent results from energy balance models use weaker estimates of aerosol forcing.”
They keep winding down aerosol forcing in an attempt to maintain high sensitivity. My study of tropical feedbacks suggested the Lacis et al figures from 1992 were pretty good. That means at least 50% stronger scaling than commonly used now (AOD x 30 W/m^2), with strong negative feedbacks and low sensitivity to volcanic forcing.
In 1992, they were simply doing their best to model the physics, without trying to make the result fit what they “expected” it should be or tweaking it to reconcile model output.
Hansen’s 2002 paper states fairly directly that it’s all a case of what droplet size distribution you choose, and you can get whatever sensitivity you want by parameter tweaking.
Frank,
I think you are correct about energy balance estimates of climate sensitivity using somewhat lower central estimates of aerosol effects; Lewis & Curry use the AR5 best estimates. The advantage of a long period (starting in the late 19th century) is that you can be reasonably sure the initial human aerosol influence was very small, so the dominant uncertainty is in the current aerosol influence, where we at least have some empirical data and a probability distribution from AR5. Natural variability is a confounding factor in energy balance estimates, since you can’t be certain how natural variability influences ‘starting’ and ‘ending’ temperatures. But once again, a longer analysis period reduces the uncertainty.
.
I am skeptical that climate models can ‘get it right’ any time in the foreseeable future; too many moving parts, too many parameters, too much potential for bias (conscious or otherwise) to influence key parameters…. with assumed aerosol influences being a glaring example. If I had to bet, I would place my money on improved empirical estimates (data are constantly improving in both quantity and quality!) defining climate sensitivity within a fairly narrow range long before modelers can find their way to more reasonable estimates. In fact, I think better empirical estimates will end up guiding the modelers toward more realistic models, just as the models have been historically MIS-guided to high sensitivity by silly estimates from ice ages and “expert opinion” (AKA James Hansen’s fantasies). The models probably do have some (limited) utility, but that does not include guiding public policy with estimates of future warming; it is a clear case of software that is not fit for purpose.
Greg Goodman,
Lowered estimates of aerosol offsets make high sensitivity much less plausible, not more. To the extent that aerosol estimates have been lowered (as in AR5 versus AR4) the probability of climate sensitivity being high has been reduced.
.
WRT downwelling long wave radiation and depth of penetration: This is a subject which causes much discussion, little of which conveys an understanding of the processes involved. It is true that liquid water absorbs all IR in a very thin skin. It is also true that this skin is almost always cooler than the underlying water due to surface evaporation. The net heat flux at the surface is from below the surface to the air (mostly in the form of latent heat). The cooler skin at the surface is constantly being replaced by warmer water from below, due to a combination of convection, wave action, and wind shear.
.
It is also true that a great deal of solar energy (in the blue, violet, and UV wavelengths) is deposited far below the surface, and some light reaches 200 meters or more, especially in the open ocean where the water is very low in suspended particulates and so very clear. It is the deposition of solar energy at depth which is mainly responsible for the depth of the well mixed layer in the tropics. The transition from the well mixed layer to the thermocline is close to the depth where the energy content of remaining downwelling solar energy is balanced by the rate of upwelling cold water; upwelling cold water (about 4 vertical meters per year on average in the tropics) is warmed by solar energy that penetrates below the bottom of the well mixed layer. In places where the upwelling rate is higher, the well mixed layer will be more shallow, because more solar energy is being “used” to warm more rapidly upwelling cold water, and a greater fraction of solar energy is absorbed below the bottom of the well mixed layer. This means that less solar energy is effectively available to warm the well mixed layer, and so the well mixed layer is cooler where upwelling is greater.
.
An increase in downwelling infrared will cause the skin to warm (very slightly, remember it is constantly being replaced), but that just means the underlying water in the well mixed layer will have to gradually warm to maintain the same upward flux of heat through the surface at equilibrium. The temporal response to an increase in downwelling IR will be set mainly by the depth of the well mixed layer, not by the depth of the absorbing skin.
I have noted that there is much discussion at these blogs concerning the use of aerosol forcing to “adjust” the model historical temperature series to better match the observed series. The aerosol levels used in the historical part of the CMIP5 model runs are the same for all model runs. What is different for the individual models is that the forcing derived from those same levels can vary significantly. Except for six CMIP5 models, we have no more or less direct way of quantifying the model differences in aerosol effects.
https://troyca.wordpress.com/2014/03/15/does-the-shindell-2014-apparent-tcr-estimate-bias-apply-to-the-real-world/
I have been correlating singular spectrum analysis (SSA) derived trends from CMIP5 model series against published TCR values for the models and arriving at correlations in the range of 0.70 to 0.80 plus, depending on the time period used and the window length applied in SSA. I have been attempting to find a decent proxy for estimating the differences between CMIP5 models for aerosol effects. The only one that practically comes to my mind is the ratio of the warming of the NH and SH during the period of significant GHG and aerosol emissions – or a ratio using a more aerosol affected region of the NH versus a much less affected one in the SH. I am planning to look at these ratios today and apply those values in a multiple regression of SSA trends versus TCR and NH/SH warming. If the aerosol tuning of the models is relatively straightforward, I would think that an improvement in my SSA trend versus TCR regression should be achieved by adding in the NH/SH warming ratio.
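To make the shape of that calculation concrete, here is a toy R sketch with made-up numbers; ssa_trend, tcr and nh_sh_ratio are hypothetical per-model values, not the actual CMIP5 data.

# Toy sketch: does adding a crude NH/SH aerosol proxy improve the fit of
# SSA-derived trends to TCR? All values below are simulated placeholders.
set.seed(1)
n <- 33                                      # number of RCP4.5 models considered
tcr         <- runif(n, 1.1, 2.6)            # placeholder TCR values (degC)
nh_sh_ratio <- runif(n, 0.9, 1.6)            # placeholder NH/SH 1950-2005 warming ratios
ssa_trend   <- 0.10 * tcr - 0.05 * nh_sh_ratio + rnorm(n, sd = 0.02)  # toy trends
fit1 <- lm(ssa_trend ~ tcr)                  # TCR alone
fit2 <- lm(ssa_trend ~ tcr + nh_sh_ratio)    # TCR plus the crude aerosol proxy
summary(fit2)$coefficients                   # sign and p-value of the ratio term
c(R2_tcr_only = summary(fit1)$r.squared, R2_with_ratio = summary(fit2)$r.squared)

A negative and significant coefficient on the ratio term, together with a higher R^2, would be the kind of result described above.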
Steve F,
Is it fair to look at “well mixed” as a dynamic distribution of mixing and light penetration that varies greatly over time and geography?
Hunter,
Not sure I understand what you are asking.
Kenneth, thanks for the reference on CA to Frank’s M&F analysis here a few days ago and Frank for making it.
So this is where you guys hang out. I read everything above and would like to ask Frank or anyone a simple question as a relative newbie. How can F be deterministic and alpha and kappa not, if they are related directly in the equation being analyzed? If F were 2 and alpha and kappa were each 1, wouldn’t the sum of alpha and kappa be exactly as deterministic as F? Without a full lesson in statistics, is there a logical explanation?
BTW, I think I understand Frank’s analysis of M&F to be that the equation is not reversible, which means that while the Forster 2013 values may be valid in a model with no unforced variability, they fail otherwise. This to me is so simple a HS student should have seen it, but with the paper written in a form that only a few experts could fully comprehend, and the debate being mostly about the validity of the statistical analysis, it was lost until now. But Frank, if a consensus can be built on your analysis, by perhaps Nic and others, I would like to see it posted on CLB.
Do we want the blogosphere to have any impact on trying to clean up the science? Looking at the amount of time we all have devoted I believe following through on M&F is worthwhile. I mean well composed final critiques posted on CLB.
I used the NH to SH warming ratio for 33 CMIP5 RCP4.5 models in a regression of SSA derived trend versus TCR and NH/SH ratio to compare to the same regression with just SSA trend versus TCR. I used the period from 1950-2005 for each model to obtain the NH to SH warming ratio. My thought was that any improvement in the regression correlation using a very crude proxy for the aerosol effect like the NH to SH warming ratio would show that the model sensitivity to aerosols can affect the trend predicted by the model TCR values. I did regressions for the five time periods used previously. The regression correlations and p.values for the warming ratio coefficients are listed in the linked table.
The correlation increased for all time periods and had p.values for the ratio coefficient less than 0.05 for the 2 historical periods used. The ratio coefficient was negative in all cases as would be expected if the aerosol effect was greater in the NH than in the SH. I might attempt to extend these comparisons by using NH and SH global regions that are expected to have greater differentiated aerosol effects and confine the NH to SH ratio to all land or all ocean in an attempt to mitigate the land to ocean area differential between NH and SH.
http://imagizer.imageshack.us/v2/1600x1200q90/911/zrn5TX.png
R Graf,
“Do we want the blogosphere to have any impact on trying to clean up the science? ”
.
Lucia frowns on rhetorical questions which are not immediately and clearly answered by the person who asked them. Best to avoid rhetorical questions…. unless you offer a clear answer immediately afterward.
R Graf (Comment #135688)
Who has said that kappa and alpha are not deterministic?
“Do we want the blogosphere to have any impact on trying to clean up the science? Looking at the amount of time we all have devoted I believe following through on M&F is worthwhile. I mean well composed final critiques posted on CLB.”
Blogs are a convenient venue for analyzing published science papers and allow more generalized treatment than that normally available from peer reviewed papers. There is a lot more chaff than in peer review but that is a price worth paying.
Why is it apparently so important to you that there be critiques at the Climate Lab Book blog? I’ll ask you one question (and it is not rhetorical) about the discussion at CLB. Did anyone refer to instances and examples of circular regressions in the literature and what that might do to the derived R^2 values?
In the regressions I just did, which included an independent variable of the ratio of NH to SH warming as a proxy for aerosol effects, I could have used a variable something like (NH-SH)/((NH+SH)/2) and I would have obtained even better regression correlations, but on inspection something akin to the dependent variable then occurs on both sides of the equation, i.e. (NH+SH)/2.
R Graf asked how can F be both deterministic some of the time and contain a chaotic component the rest of the time? M&F are using the same symbol for two different concepts: radiative forcing and effective radiative forcing. For simple GHGs, radiative forcing is calculated by inputting absorption coefficients from laboratory studies of GHGs into a model of the atmosphere (temperature, pressure, humidity and cloud cover change with altitude) and calculating how much an increase in the GHG will reduce OLR (assuming no change in the atmosphere). Effective radiative forcing calculates a change in forcing from dT = dF/(a+k), given dT, a and k. a and k are the AVERAGE values deduced from model output, but they aren’t constant during model runs because the fluids that carry heat from the surface move chaotically.
A bad analog: Imagine an object falling through the atmosphere. Its position with time can be calculated knowing g, its surface area and its coefficient of drag. Everything appears to be deterministic. Now imagine the object is tumbling. There will be a deterministic and a chaotic component to the change in position vs time. We calculate an average surface area and coefficient of drag (a and k) from some experiments. From position vs time, we could use these averages to calculate an effective g that varies chaotically with time.
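To make that point concrete, here is a toy R sketch (all numbers invented, not from F13 or M&F) in which the ‘true’ forcing is a smooth ramp, a + k wanders chaotically, and the effective forcing diagnosed with the fixed average a + k inherits the unforced wiggles from dT.

# Toy sketch only -- the numbers are invented for illustration.
set.seed(42)
yr     <- 1:150
F_true <- 0.03 * yr                                     # deterministic forcing ramp (W/m^2)
ak     <- 2 + 0.3 * sin(yr / 7) + rnorm(150, sd = 0.1)  # chaotically wandering a + k
dT     <- F_true / ak                                   # warming, including unforced wiggles
F_eff  <- mean(ak) * dT                                 # "effective" forcing diagnosed with the fixed average a + k
# The diagnosed forcing is no longer smooth: the unforced variability in dT
# has been pushed into it rather than removed from it.
summary(F_eff - F_true)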
Frank (Comment #135707)
M&F were using climate resistance as deterministic, with, I would suppose, stated reservations.
Frank, I stand corrected.
I thank you guys for being really patient with me. And, I think if nothing else I may be of service as a test subject for how understandable these issues are to the average scientific public. I own and run a business that is based on my own inventions in electrochemistry. I am regularly able to communicate with computer techs, lawyers and professors, but I think because of the political nature here there is a particular tendency to talk past one’s audience. For example, Pekka agreed with the most important point about M&F: their conclusion was unfounded. That pretty much does a paper in right there. But, because of a competition in pride and credentials, the communication broke down. I was guilty of this frustration too.
Let me tell you all what I understand now. I get that the models are attempting to validate what mix of forcings and feedbacks are at play and at what magnitude and variability. Putting aside why the computer modelers did not place internal analysis and output of the hard-to-determine parameters, I know there have been several methods adopted in order to estimate them from the temperature outputs. I believe Forster and Taylor in 2006 were the first to utilize OLS on the outputs of abruptly forced 4XCO2 runs. And CMIP5 has a group of these runs done ready for that same technique to be employed again, which Forster did in 2013. All three M&F variables, ERF, alpha and kappa were diagnosed at the same time in 2013 by OLS. I believe the ERF was the y intercept (T) just after CO2 initiation with the assumption that all radiative feedback adjustments were immediate and observed. The delta T of the temp rise (y) over the next period, (days or weeks I assume,) represented alpha. The value on y intercept at the estimated termination of alpha was kappa. All values were relative to a zero pre-industrial CO2 and some average temperature (maybe 1850). Assuming I didn’t already botch this, Forster then assumed that 1850 kappa was close enough to equilibrium for government work and that all these values, although not constant, were intrinsic model properties that would emerge through time regardless of internal noise.
Common sense would dictate the next experiment would be to apply Forster’s values to historical model simulations for those same models to see if his values would emerge with time as predicted. From here I am mostly lost and ask the following questions:
Briefly, how is OLS able to break out from one T data stream what is caused by ERF versus alpha or kappa or noise?
Why were M&F’s conclusions about real-world climate conforming to the models, rather than just about validating their diagnosis method and, if it worked, about the variances in behavior of the models from differing parameters? I mean legitimate reasons. If there are none, please say so.
Whereas all feedbacks, but kappa more so than the other variables, are set to a fixed pre-industrial calibration T, mustn’t delta T in the M&F equations, to give a valid kappa, be delta T relative to pre-industrial T, not to the beginning of the selected period?
Whereas kappa drifts based on the lag of prior delta T, doesn’t that need to be factored in, especially with 50-plus-year runs?
Whereas ERF’s radiant feedbacks are diagnosed and not empirical forcing, doesn’t the feedback portion belong with the other climate resistance terms in the denominator? Frank, is this what you were talking about with M&F using F for both radiant forcing and ERF?
I think there are many others in my boat who are immensely interested but are not math whizzes. I think the problems can be broken down. Frank seems to be onto an explanation that, if true, invalidates all of Forster’s work, to the extent that internal variability cannot be separated out independently and then added to the end of the equation, not as statistical error, but as part of the energy balance.
Ken,
I am further behind in comprehension than you are giving me credit but I very much want to catch up.
First, on deterministic alpha and kappa, didn’t M&F conclude that alpha and kappa had little power to determine temperature trend?
On blogs, obviously I wholeheartedly agree that blogs are a quantum leap in the synergistic processing of science. CLB happens to be the authors’ chosen turf, which they believe has a trusted filter for chaff and keeps the discussion focused and productive. It is thus the best live journal tool we have access to in order to reach climate scientists, IMO.
On circular regressions, I do not recall any illustrations or examples of such, or tests proposed on CLB to determine such. I am very poorly qualified in this, but I believe flawed logic precedes flawed math setups. If we can find the bug in the logic, most all can understand it; and if we don’t, no math tool will work (if my logic is correct).
I am of the opinion that circularity is at play, along with possibly other false assumptions. I do not understand Nic’s exact accusation, because the equation is all self derived from observations of the behavior of T, therefore T can be substituted in anywhere; so substitution by itself does not make it circular. It is how the equation is used that is of concern. If you are using output that is also contained in your input then you are circular. GHG forcing must be considered the original input since CO2 is being added to the system that is assumed to be at equilibrium. Alpha, kappa and T react to forcing. If you use T in order to diagnose F, as Frank pointed out, you are on shaky ground unless you are absolutely certain about alpha, kappa and the lack of any other potential confounding influences, like chaotic variability.
It is known that the models had randomly added variability to simulate PDO. Could that have confounded the calibration runs?
Ken, on your NH/SH maybe if you educate me on the other questions I will be a better audience to respond.
R. Graf,
Minor nitpick: The system is not at equilibrium or at least not thermodynamic equilibrium. It’s at steady-state. At thermodynamic equilibrium, there is no net energy flow in, out or within.
Re: equilibrium vs. steady state.
I have heard this brought up before and thought about it and decided that for our purposes equilibrium is better at implying a reference point for pre-industrial forcing. It is a good point that kappa is not zero in the calibration. It indeed is not at equilibrium.
Regarding the M&F conclusion, were they saying, or could they have meant to say, that since alpha’s effects are so immediate it has no pending effects that should be anticipated; it’s all done? And that kappa is so slow to respond that its effects are all still coming in the future and therefore it has little effect in a 62-year period?
And I believe they are saying the observed historical variance from the model plotted mean is such that today’s variance is not out of place. Right? So the analysis risked showing today’s hiatus as being out of place by an arbitrary or unprecedented amount and it was determined scientifically not to be? I just want to be sure I am getting the same message as you are, because such conclusions are scientifically lame on their face IMO. And I would think there would be much more discussion in the paper about the implications of such claims or further suggested research.
“No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” A. Einstein
Although I disagree with Einstein, because in his case he was famously proven right by predicting the lensing effect seen next to the sun during an eclipse, I think his quote could be appreciated in regards to M&F.
What climate science needs is more ingenuity in designing efficient tests. Ken, you have the right idea about making tests. I just have to figure out what it is you are proposing.
Actually it’s more like a criminal trial. Not guilty does not mean innocent. Not wrong is not equivalent to true. An incorrect prediction would have proved him wrong. A correct prediction does not make his theory true. It only makes it not wrong. In fact, we’re reasonably sure General Relativity is wrong because it’s not quantized. It’s just a very good approximation. It’s also not at all clear that we can ever achieve conditions that will show that it’s not accurate.
DeWitt,
Okay, I agree with you that General Relativity does nothing to incorporate quantum mechanics. I also agree with your idea that you are a tad nit-picky. I wouldn’t mind being wrong like Einstein.
What do you have to say about the ability for an energy balance to self-cleanse circularity? This was ATTP’s argument on CLB supporting M&F. Thinking more about it, the only relevance at all of the equation being an energy balance would be if there were observations coming from separate sources that needed to be reconciled. When you have all values being diagnosed from one observation everything is automatically related in balance.
I believe equilibrium requires reversibility, while a steady state changes entropy and thus cannot reverse. That is significant if one can find that it resulted in a false assumption.
MikeR,
The original claim that “the regression is trivially circular and obviously invalid” has not gone away by any stretch of the imagination. It represents only one of multiple problems that the paper has. See my longer post to R Graf below.
Frank,
I think that you may be misunderstanding the argument of M&F for their calculation of forcing. Within the GCMs, the temperature series from any run of course contains both unforced and forced variation. Both of these components influence the restorative flux via surface temperature change, which F13 and M&F15 purport to capture as alpha*DeltaT; hence both components influence the Net Flux, but they do not influence the exogenous forcing. The M&F argument is that at any point in time,
Net Flux = Forcing – alpha*(DeltaT(forced) + DeltaT(unforced))
Thus (they argue) the inversion of this expression should yield the forcing. They then use the derived forcing value to estimate and isolate just the forced temperature response.
The method is circular and has more holes than a piece of gorgonzola, which I will discuss in a slightly later post, but I think you may be unfairly misconstruing this part of their logic.
“The method is circular and has more holes than a piece of gorgonzola, which I will discuss in a slightly later post, but I think you may be unfairly misconstruing this part of their logic.”
Paul_K, as layperson getting my feet wet in this area of climate science, I look forward to your post and general expose’ of M&F.
My gorgonzola does not have holes while my Swiss does.
Kenneth “My gorgonzola does not have holes while my Swiss does.”
Maybe Paul_K is thinking of the air channels that are made to allow the mold spores to grow into the cheese 🙂
It’s all circular. 😉
Paul_K, trying to be devil’s advocate here, perhaps M&F were thinking that there was no unforced variability in the pre-industrial calibration period model runs. But if the PDO simulation or any other programming was changed for the calibration it is no longer a true calibration.
While eagerly anticipating the cheese, I am still welcoming a logical explanation for what M&F were doing with F13. I still read that F13 approximated all values, F, AF, N, a and k, by their defined determinate relationship with dT. Then M&F15, based on dT in the same models’ runs simulating history, conclude that k and a have little determinate relation with dT. I am no longer even asking about the statistical argument. The models are either accurate or not based on their degree of conformity with the plotted observed dT from 1850 to present, period. This is why the reporters assumed that M&F’s clean bill of health for the models meant they could announce to the world the models are right on track. Although I am pretty sure M&F did some invalid math somewhere, I again confess I do not even know how they purport to prove their conclusions. I am only 99% certain they meant that k and a in real life have little determinate effect, with still a 1% chance they meant it’s a model flaw they found.
The purpose of M&F15 was to provide evidence that the models are bias free. The critics apparently have been saying the models are just thrown-together combinations of forcings and feedbacks tuned to approximate the recorded GMST dT from 1850-1998, and this is becoming apparent considering the lowest theoretical forcing was in 1850 and the highest is at present, yet the last 16-year trend is flat. Okay then, how did M&F think they had the tools to even address this, having only in hand exactly the facts the critics point to: random models matching recorded dT to 1998? Any model conformity pre-1999 proves nothing unless one can prove the modelers were blind to the record.
If you tell me I’m wrong I’ll still try a bite of your cheese.
SteveF (Comment #135680)
March 8th, 2015 at 7:38 pm
Thanks for your knowledgeable comments on penetration depth etc., most informative. Not sure that I’m convinced by the discussion of downward IR, but that is contentious and poorly understood IMO, so I won’t divert into that.
However, I don’t follow your logic about sensitivity. If AOD scaling is higher (30 W/m^2), as I estimated from ERBE and as Lacis et al derived from detailed physical modelling in 1992, then this implies stronger feedbacks and less sensitivity. The effect on temperature is known, so a stronger forcing producing the same effect implies less sensitivity.
How do you arrive at the opposite conclusion?
Greg,
Aerosol forcing offsets ghg forcing by increasing albedo. Reduced aerosol forcing means the total forcing is larger for the same ΔT, i.e. lower sensitivity.
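Illustrative arithmetic only, with invented numbers, for the point being made here: hold the warming and the GHG forcing fixed and vary the assumed aerosol offset.

# Toy numbers (not from any paper): same warming, same GHG forcing,
# two different assumed aerosol offsets.
dT     <- 0.8                                   # observed warming (K), assumed
F_ghg  <- 2.3                                   # GHG forcing (W/m^2), assumed
F_aero <- c(strong_aerosols = -1.0, weak_aerosols = -0.5)
F_net  <- F_ghg + F_aero
dT / F_net   # implied sensitivity parameter (K per W/m^2);
             # the weaker (less negative) aerosol case gives the lower value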
David Whitehouse discusses a new paper, Smith et al., “Near-term acceleration in the rate of temperature change”, Nature Climate Change. From its abstract, “The rate of global-mean temperature increase in the CMIP5 archive over 40-year periods increases to 0.25 ± 0.05 °C (1σ) per decade by 2020, an average greater than peak rates of change during the previous one to two millennia.” Of course, current rate of increase using that same metric is pretty close to that, yet the observed trend is notably less.
I’m trying to figure out what makes the article worthy of publication. We already knew that the models’ forecast rates are far higher than their hindcast rates. What’s novel about the information presented?
Lucia – did you reduce the editing window time?
At any rate, the comment above should be modified to indicate that “the shorter-term observed trend is notably less.” Over a 40-year window, the observed OLS rate is lower than the models’, ~0.15 K/decade, but not as far below as over (say) the last 15 years.
I was astonished to read in the review that the 40 year window had been selected because:
lifetime of houses? buildings?
j ferguson,
Seems like a crazy low lifetime. 80 to 100 years is closer to reality. I once talked to a bridge engineer about expected lifetime. He told me they designed roadway bridges (non-suspension) for 100 years or more service life.
DeWitt,
Yup, man made aerosols offset GHG forcing… to some very uncertain extent. 😉
Hi SteveF,
I used to design wastewater treatment plant buildings for 100 year useful lives. This was a requirement of the federal portion of the funding in those days.
In 1986, I drove my new wife (2nd marriage) up to Wisconsin to see the first project I had had design authority over. (1968).
I couldn’t find it. I eventually remembered exactly where it should have been. It had been demolished 2 years earlier due to its process no longer being required.
Alas.
Has anyone seen comment by Spencer, Christy, Curry or another non-warmer about M&F or other bad papers? Is there clearly a chill on criticizing personal works, even if clear objective errors are found? Is there any limit to how unfounded, uninformative or non-innovative a paper has to be before it can be criticized by anyone who is employed in the field? If there is not, the system is broken. The problem with Nature is not a lack of astrophysicist referees; their editor-in-chief is one.
Paul_K: I agree with everything you wrote about M&F’s derivation. I’ll leave the circularity issue to Nic Lewis and others with more experience in statistics. My main disagreement with M&F involves several aspects of their interpretation of their regression.
The residuals cannot be interpreted as unforced variability and used to determine the error bars for their ensemble of models. There are systematic errors in their regression equation and those errors are being interpreted as unforced variability.
Suppose I analyze an object – a ping pong ball might be a good example – falling at terminal velocity through the atmosphere under the force of gravity. At first glance, the system appears purely deterministic. However, the turbulence left behind means that the drag will have a chaotic component. Or, we can imagine chaotic convection influencing the motion. So we do a linear regression of position vs time and interpret the residuals as unforced variability in the motion. However, I have forgotten that drag increases as the object falls into denser air. The residuals from my regression would contain systematic errors in addition to unforced variability because my regression equation didn’t include all of the relevant physics.
The physics in M&F’s regression is seriously flawed. The error bars derived from it appear meaningless. The residuals contain systematic error in addition to unforced variability. The allegedly deterministic variability is computed from a term that is calculated from dT, which has unforced variability.
In a traditional definition of radiative forcing, deltaF is the instantaneous change in TOA flux. The retained energy will cause surface warming (deltaT), until the increased radiation to space and increased heat transport into the deep ocean equal the forcing. (Conservation of energy). If we postulate that these fluxes are linearly proportional to warming (an approximation not derived from fundamental physics), we get
deltaF = (a+k)*deltaT
Unforced variability in warming arises because a+k varies chaotically. El Ninos occur when accumulated heat in the Western Pacific stays on the surface by moving east rather than being transported deeper into the ocean.
In the real world and in AOGCMs, forcing is deterministic and chaotic variation in a+k adds an unforced component to temperature change. F13 calculates an average value for a and k for each model. M&F use these fixed averages to calculate an effective radiative forcing, which – unlike traditional forcing – contains the unforced variability found in deltaT. This happens because M&F15 treats a+k as an invariant quantity.
Since few of the reliable commenters here and at CA have indicated approval, perhaps I have made a mistake. As best I can tell, I understand what M&F have done: They interpret their regression equation so as to produce the deterministic and unforced components of the error bars they show. This interpretation seems inappropriate.
Frank,
I agree with almost everything in your last comment. My promised comment to R Graf is going to try to identify the sources of error in the regression equation, and explain more fully why the inferences drawn from the residuals are not supportable.
Frank, I agree with everything you write except: “M&F use these fixed averages to calculate an effective radiative forcing, which – unlike traditional forcing – contains the unforced variability found in deltaT.”
My reading of F13 and M&F is that they regressed fixed numbers for ALL three variables. Then in M&F they add (e) to the equation to take up the difference between their values and the model dT, and assume it is unforced variability.
I believe the flaw is assuming that k is not sensitive to change over time in a trend of its own, and that a is not sensitive to logarithmic change as T increases, forming a stiff ceiling of resistance (which I believe there is paleo-record evidence of). But as I mentioned above, I feel the fundamental flaw is not in the math, it’s in the logic. One cannot make a determination of their derivations simply by reversing the process to see if your values come back to closure. That only proves you did not make a math error; it does not prove your model is simulating correctly.
Frank,
I also agree with almost everything in your last comment, except for this part: “El Ninos occur when accumulated heat in the Western Pacific stays on the surface by moving east rather than being transported deeper into the ocean.”
There does not appear to be much association of ENSO and changes in total ocean heat content (see 0-700 meters here: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/ ) or transport to deeper water. When warm water moves eastward from the west Pacific warm pool, the depth of the (warm) well mixed layer in the west decreases and the geographical distribution of ocean heat changes, but the total ocean heat content looks not much different. Heat loss from the west Pacific warm pool is (I think) more due to surface currents, especially the flow from the warm pool to the Kuroshio (https://cimss.ssec.wisc.edu/sage/oceanography/lesson3/images/ocean_currents2.jpg). I would not be surprised if the rate of surface transport does change with ENSO.
I just did a simulation of what I suspect the M&F regression variables had to look like in the time series from whence those variables came. When taking overlapping trend variables as M&F did for their regression – or just about any overlapping variable – a time series will have very high autocorrelation in the residuals when regressing the variable versus time. In the case of M&F I estimated it to be around ar1 = 0.89. In a previous post on this matter I speculated about whether regressing two such variables from a common time series, each with highly autocorrelated residuals versus time, against one another would also give high autocorrelation in the regression residuals. In my simulation the autocorrelation in the residuals remained nearly as high, with a reduction of ar1 from 0.89 to 0.82.
If this is the case with the M&F regression residuals, such high autocorrelation would have to be taken into account when determining confidence intervals for the regression coefficients and whether those coefficients are different from 0.
If M&F were to release their data as asked by SteveM, the residuals could be readily tested for autocorrelation. I am thinking that I should email the corresponding author of M&F and ask for a copy of the residuals, or whether they tested the residuals for autocorrelation and what the result was.
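A small R sketch of this kind of check, on made-up data: estimate the lag-1 autocorrelation of the regression residuals and inflate the coefficient standard error with the usual effective-sample-size correction n_eff = n(1 - r1)/(1 + r1). The variable names and the crude SE inflation below are assumptions for illustration, not the actual M&F procedure.

# Toy check of residual autocorrelation and its effect on coefficient uncertainty.
set.seed(7)
n <- 75
x <- cumsum(rnorm(n))                           # toy predictor drawn from a common time series
y <- 0.5 * x + arima.sim(list(ar = 0.85), n)    # toy response with AR(1) noise
fit <- lm(y ~ x)
r1  <- acf(resid(fit), plot = FALSE)$acf[2]     # lag-1 autocorrelation of the residuals
n_eff <- n * (1 - r1) / (1 + r1)                # effective number of independent points
se_naive    <- summary(fit)$coefficients["x", "Std. Error"]
se_adjusted <- se_naive * sqrt((n - 2) / (n_eff - 2))   # crude inflation of the standard error
c(r1 = r1, n_eff = n_eff, se_naive = se_naive, se_adjusted = se_adjusted)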
I have been playing with finding a proxy for the individual CMIP5 RCP4.5 climate models for the forcing from aerosols. I have concentrated on differences in NH and SH temperatures and have looked for an increase in the regression R^2 of model temperature trends (derived from the SSA first 2 principal components and not linear ones) versus TCR and the aerosol effect.
From these attempts I am beginning to think that the mechanism for parameter tuning used to keep even high TCR models within a reasonably wide range of the historically observed temperature series is not related to how these models differently handle the same aerosol levels in the historical period but rather comes from some other source of parameter tuning. I think we may get on this track because the models in general may be using too high aerosol levels in the historical scenario compared to the observed and/or having in general too high a sensitivity to the scenario levels.
It is difficult for me to believe, but I know that many CMIP5 models have out-of-balance TOA energy budgets given the incoming short wave and outgoing short wave and long wave radiation and the change in ocean heat content. We are told that the model tuning concentrates heavily on getting the TOA budget right, but we are also told that the CMIP5 models do not have the drift problems of the CMIP3 models and yet these models do have problems getting closure on the TOA. I am looking at the question of: could the regressions of SSA temperature trends for various time periods against TCR be improved by adding in a variable related to the energy (out of) balance?
DeWitt Payne (Comment #135740)
You are making unstated assumptions. You are assuming that CO2 forcing is correct. The reason that the VF (fudge factor) was reduced was to reconcile climate model outputs whilst retaining high CO2 sensitivity. One implication of the original Lacis et al figure, which matched what I found from ERBE, is that “effective” CO2 forcing will likely need to be reduced too. That forcing is only as high as it is because of the *assumed* positive feedbacks, which are yet more guesstimates and fiddle factors without any observational evidence.
This is essentially an over-fitting problem: because they have so many poorly constrained parameters, there is an infinite number of solutions that will provide equally good (or rather equally poor) fits to the data.
One thing we do learn (or rather are reminded of) from M&F is that the current divergence problem is not new. There have been worse divergences in the hindcast period, when the right answer was already known.
Models fail to reproduce the early 20th century rise, and have an even larger divergence in the 1960s and 70s than post-2000.
https://climategrog.wordpress.com/?attachment_id=1321
Another thing that M&F shows, which is contrary to their published conclusions, is that their 62y results DID show two distinct groups in the last 5 years of their analysis. There is a clear bifurcation with white space between the two groups.
https://climategrog.wordpress.com/?attachment_id=1319
Greg, I imagine that M&F were not that concerned with observing that the 16 models broke into two groups of behavior, which you’re right is kind of interesting.
Frank, I know less than Steve about the consensus on how ENSO is supposed to affect GMST theoretically, but I developed my own theory while working on the ice age question before M&F. I believe that the degree of global temperature homogeneity is in itself a forcing. The reason is simple: the efficiency of radiant emission is not linear in temperature, as can be seen here: http://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Emissive_Power.png/640px-Emissive_Power.png
Therefore the more the temperature is smoothed out, the less efficient its black body emission. I believe ocean currents are the largest influence on GMST through their large and chaotic redistribution of heat from the equator toward the poles. I believe the ice age runs away when the thermohaline conveyor (TC) short-circuits to a northern termination off Labrador rather than making it all the way between Greenland and Iceland. The resulting concentration of heat at the tropics increases evapo-transpiration and convection at the tropics, increasing tropical TOA temperature slightly, which keeps the TOA in budget while a disproportionate polar TOA surface area is allowed to drop in temp. The larger surface temp gradient between the poles and tropics means large snowfalls that cannot completely melt in summer, so albedo climbs and the feedback loop drops GMST until Earth reaches its normal glaciated state. That state is broken temporarily when a slight polar summer warming (aided by Milankovitch orbital obliquity) collapses large amounts of ice, and the increased water pumped into the TC knocks it back up to Greenland, causing more runaway melting, which of course lowers albedo. The interglacials likely end when glacier melt, added to annual snowfall melt, can no longer be enough to keep the TC going in its excited state. We were likely nearing this tipping point in the LIA and got reprieved with industrial GHG. ENSO is doing the same thing as the TC but in a smaller and more chaotic fashion.
Perhaps the models’ inability to gain closure has to do with needing variables for SST and TOA homogeneity.
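Toy numbers only, to illustrate the T^4 nonlinearity behind that argument: two regions at different temperatures radiate more in total than the same two regions smoothed to their common mean.

# Sketch with invented temperatures; sigma is the Stefan-Boltzmann constant.
sigma    <- 5.67e-8                      # W m^-2 K^-4
T_uneven <- c(260, 300)                  # a cold region and a warm region (K)
T_even   <- rep(mean(T_uneven), 2)       # same mean temperature, homogenised
mean(sigma * T_uneven^4)                 # ~359 W/m^2
mean(sigma * T_even^4)                   # ~349 W/m^2 -- smoothing the field reduces total emission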
“Greg, I imagine that M&F were not that concerned with observing that the 16 models broke into two groups of behavior, which you’re right is kind of interesting.”
Well, when their main conclusion was that there was no “traceable impact” of model TCS in the results, not reporting the two groups in fig. 3 and failing to diagnose whether that split corresponds to model TCS is a serious omission which could invalidate the paper.
SteveF: “There does not appear to be much association of ENSO and changes in total ocean heat content (see 0-700 meters ”
Interesting paper by Allen Clarke et al, analyses ENSO as two oscillatory components in heat anomaly in OHC above the thermocline.
http://journals.ametsoc.org/doi/pdf/10.1175/JPO3035.1
The temperature anomalies usually shown in an equatorial cross-section are mainly from zonal influx and outflux of warm water not vertical movement of warm water along the equatorial line.
My personal hypothesis is that the underlying cause is tidal. How it then develops and the magnitude of each event is a complex interaction of surface and atmospheric conditions.
“the failure to diagnose whether that split corresponds to model TCS is a serious omission which could invalidate the paper.”
As I have admitted, I do not know how the paper is supposed to be valid to start with. Regardless of whether F13 used a 1-part or 21-part method to diagnose the values, it was all based on assumptions about the simulated effect on dT, thus they can’t make any conclusion about the validity of their diagnosis with only the use of dT. They could use the historical recorded dT to validate the models only if the modelers were blind to the effects of their programming, prohibited from trial runs, etc., but I believe the opposite was true; it was an open book test. Therefore the only validation can come from the degree of correlation with future recorded GMST. I believe this is part of the motivation for not wanting to re-tune the models. If temp swings up rapidly the models could remain on the cusp of plausible validation for some time, but only if not touched.
That validation is not susceptible to “innovative methods,” tuning or trickery is what makes it scientifically valid. (Sorry so circular. Did I make anyone nauseous?)
Greg, another serious flaw in the equation dT = AF/(a+k) + e
is that if dT is negative in an otherwise rising trend, the climate resistance should shrink. This then shrinks or inverts (e) in reality, so if the analyst assumes (a) and (k) are constant, they are overestimating (e), the unforced variability.
I think this is a separate flaw from using the equation in reverse to diagnose F, a and k in a steady-state equation where thermodynamic reversibility must not be assumed.
Thank you DeWitt.
If we can all come to a consensus on most of the flaws, I propose we apply to M&M for a guest post on CA of a paper we jointly compose, and ask for opinions and co-sponsorship for submission to Nature.
The point is that if their results show something that is unreported and undiagnosed, that there are two groups emerging from their processing, then their conclusions are INVALID.
It is not even necessary to show what these two groups are (although it is not hard to guess) for their conclusions to be invalid.
It then becomes unnecessary to continue arguing whether it is circular, and all the other alpha/kappa crap. The paper falls even if one tentatively accepts their non validated “innovative” method.
Greg, I agree that one clear contradiction is enough. But I need you to lay out more thoroughly the conflict. Also, a good lawyer does not stop at one airtight point since nothing is airtight or immune to denial.
Thinking more on the reversibility: gravity can be determined from a falling object in a vacuum since no work is being done. But you cannot determine gravity accurately from the output of a waterwheel, even if the flow is a steady state, because work is being done and entropy changes (if I remember my Physical Chem, AKA fluid dynamics). If this line of argument pans out, they were breaking the Second Law. That is a good invalidation too.
Liang et al 2015, interesting study on diffusion in and out of deep ocean
http://www.mit.edu/~xliang/resources/liang2015a.pdf
R Graf (Comment #135764)
While your suggestion has an admirable motivation I am sure, it is not very realistic. Even those who have authored papers dealing with this subject, like Nic Lewis and Judith Curry, would have a difficult time getting a counter paper published in Nature. Besides in order for a paper of this kind to be readily published it would have to be narrow in its scope. You also would never get much agreement among blog posters on the content of proposed paper.
I do not know how well you have followed the analyses of climate science papers at these blogs, but there are some very basic errors pointed out in these analyses that are not even acknowledged by many of those active in the field and certainly not by journal editors or reviewers. Changes at these more basic levels will probably have to come from within the climate science community even though currently I see the problem there as one of thinking the ultimate answer is fairly well already known and bounded about AGW and the work is merely providing one-sided evidence for that foregone conclusion.
My remarks are perhaps biased by what I intend to reap from these blog discussions and that is obtaining a better understanding of the science from a theoretical point of view and then finding and analyzing the evidence in support of these theories for my own self satisfaction and enjoyment.
If one is more concerned about government attempts to mitigate the supposed effects of AGW, that effort, in my mind, should be oriented towards the political process and, at a fundamental level, political philosophy. Government AGW mitigation attempts will be decided not so much by the scientific facts involved – which is nearly never how these matters are settled – but rather by intellectual arguments about how much power governments should wield.
Off topic, but funny in a way. This article discusses a new paper on deforestation:
So a paper with entirely different results “should not be seen as contradicting.” A novel variation on “not inconsistent with”.
Kenneth Fritsch (Comment #135768),
For sure. The often heard argument from advocates: “Reducing fossil fuel use is the right thing to do anyway” is consistent with political views about the proper role and scope of government which are based on a belief that government control of private activity is an unmitigated force for good, not a necessary evil which ought to be held at the minimum practical level. The fundamental disagreement has never been about ‘the science’, it has always been about using ‘the science’ to advance a specific political philosophy… ever more political control of private activities.
To my new friend, and apparently fellow libertarian, Steve: I believe all agree that power corrupts. It’s just that about 35% believe corporate power is the most self-interested and corrupt, while 40% realize, like Jefferson, Madison and Franklin, that private enterprise is checked by governments and automatically by competition, while governments with all power would squash individuals’ spirits and aspirations at the very least. And the very top of a central government would see laws or restraints of any type as not applicable to themselves.
To my friend Ken, I feel all have inherent social and civic responsibilities. The protection of truth and science is everyone’s business. One just needs to be sure of one’s facts. As Davy Crockett said, “…then go ahead.”
On that score I am doubting now that reversibility was a significantly flawed assumption since there is no net entropy change in the system drawn, the biosphere. Although the solar radiation in vs. infrared out is an entropy changing non-reversible steady-state, the modulations of the escaping infrared are reversible. For example, TOA imbalance would not be able to distinguish a change in ocean surface temp from a change in GHG. Right? And, if this is true it would then be proper to describe the dF as being in an equilibrium. Imagine a ping pong ball attached to a spring hooked on a stationary pole in a river. The changes in the spring force can’t tell the difference between a speed up of the stream force or an increase on weeds catching on the ball to increase drag. It is irrelevant that the stream itself is a non-equilibrium, non-reversible steady-state. The spring is.
“Unlike the satellite evaluation, he explained, other deforestation estimates, such as the FAO’s 2010 assessment, are based on ground based surveys of trees”
.
Hence the term, “Can’t see the forest for the trees” :roll eyes:.
Harold, aka Wiley Coyote, I saw your comment on Clive Best. I had to ask about M&F. I hope he answers my comment: http://clivebest.com/blog/?p=6418#comment-6922
Clive Best replied he will study M&F and respond on CLB.
Greg, I re-read your last replies on CLB regarding the bifurcation of the model groups according to TCS. Did the low TCS groups also have a reciprocally higher forcing as one would expect? If so I would agree that your plot blows M&F out of the water since your most responsive, most divergent from observed trends, and hottest in the end (most divergent from accurate destination), is the group of higher sensitivity with presumed lower forcing. I believe Ken got much the same results plotting TCR.
Hopefully Clive will read all the comments including yours and has enough confidence in following M&F’s logic to see the flaw(s).
Reading M&F saying “The claim that climate models systematically overestimate the response to radiative forcing…” Maybe they meant just not every one of them.
I went ahead and did my final post of CLB to let Clive have the last word.
Greg Goodman,
Feedbacks have nothing whatsoever to do with forcing. CO2 forcing is defined as the change in radiative imbalance at the tropopause after the stratosphere is allowed to equilibrate to a step change in CO2, but the troposphere and the surface temperature profiles are unchanged. Feedbacks are the result of changing temperature of the surface and the troposphere in response to the forcing and therefore have no effect on the calculation of ghg forcing.
Climate sensitivity, OTOH, is all about feedbacks. If aerosols do not offset as much of the CO2 and other ghg forcing, then feedbacks have to be reduced to match the instrumental period. The anthropogenic ghg contribution to forcing remains unchanged.
I understand how the concept has come about ( try and solve the static situation before the dynamic ) but after reproducing the Myhre results, I’m wondering if the RF concept even has relevance. ‘Holding’ the tropospheric temperatures fixed also implies a motionless atmosphere. That’s important because the capacity of the earth-atmosphere to emit is not at a fixed maximum, but is determined also by the motion ( convection ) of energy in the atmosphere. By using both the gcm results for prediction ( even though we know motion is the unpredictable portion of modeling ) as well as invoking ‘RF’ it appears the IPCC is tacitly admitting unpredictability.
Climate Weenie,
You have to have a standard reference state for comparison purposes. The standard state is not physically realistic as it’s not possible to have an instantaneous 2X step change in CO2. But it’s relatively easy to calculate. And yes, changes in convection are important. There’s a wide variation in the models as to rate of change of total precipitation with temperature. Needless to say, the models with the highest rate of change have the lowest climate sensitivity.
Clive Best just wrote a post on his blog analyzing M&F here:
http://clivebest.com/blog/?p=6440
I happen to be in an M&F debate with Greg Laden? Anyone ever heard of him?
http://scienceblogs.com/gregladen/2015/02/07/andrew-weaver-wins-law-suit/#comment-620096
R Graf
waste of time
Greg is a nice guy who believes in AGW and will not debate you.
Your persistence is admirable. Your choice of venue is not.
R Graf,
He is an unscrupulous CAGW advocate who willfully misrepresents what people say to advance his cause.
http://wattsupwiththat.com/2013/01/16/greg-laden-liar/
IMO, based on his writing, he is an intellectually shallow… perhaps even stupid… person.
Regarding stupidity, politics only works because people are irrational ( even the smart ones ).
.
This wouldn’t seem to be an issue for ‘climate change’ which we think of as a scientific matter.
.
But the more I read about Maurice Strong, the UN and the Club of Rome, the more I consider that the whole thing has been political from the get go.
.
Now, warming from CO2 has long been a postulate (Arrhenius, Callendar, Manabe) with some validity. But the political usage of this fact to scare people about unlikely to impossible stuff is toxic.
.
Because there was always a political agenda, people stake out positions, develop identities, and generally speak and behave unscientifically, regardless of what they know.
.
It’s depressing.
Remember Greg’s analysis of the deviation of the CMIP5 mean from observed that I thought looked like a sine wave? Clive Best attributes a lot of the observed natural variability to a 60-year period PDO/AMO. If his analysis is correct the natural low will be in 2030 and then natural forcing will increase again for 30 years. This is independent of ENSO, which is a smaller 5-10-year oscillation within the PDO/AMO.
I believe CMIP5 has a random ENSO emulator but has no PDO/AMO equivalent. If Clive is right, CMIP5 should be reprogrammed to match Clive’s plot, putting the year 2000 as the high. Clive is either wrong or this is a huge deal that was under everyone’s nose, right in the Hadcrut signal, all along. I believe he used a Fourier transform. All I know is that this is a diagnostic for oscillating signals and I have read about it being used in paleo-reconstructions. Could something like this have been missed until now? Is that possible? I love Clive but…
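I have not tried to reproduce Clive’s actual method, but to make the idea concrete, here is a minimal R sketch that pulls a 60-year component out of a synthetic series (a made-up linear trend plus a 60-year sine plus noise, not HadCRUT):

    set.seed(1)
    yr <- 1860:2014
    tt <- yr - mean(yr)
    # synthetic "temperature": slow trend + 60-year cycle + noise (illustrative only)
    y  <- 0.005 * tt + 0.12 * sin(2 * pi * yr / 60) + rnorm(length(yr), sd = 0.1)
    # regress on a linear term plus sine/cosine terms at a 60-year period
    fit <- lm(y ~ tt + sin(2 * pi * yr / 60) + cos(2 * pi * yr / 60))
    sqrt(sum(coef(fit)[3:4]^2))    # amplitude of the fitted 60-year cycle, ~0.12

Of course, with only about two and a half cycles in the record, a fit like this cannot tell a genuine oscillation from red noise that happens to look cyclic.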
R Graf (Comment #135798),
Not sure what the thrust of that comment was. It is pretty clear there is evidence of oscillation, but with only a couple of cycles of historical records, it is speculative to suggest that the cycles are real. What is real is that the models are comically high in warming as judged against reality. This could change, but if I were forced to bet, I would go with equilibrium sensitivity near 2C per doubling of CO2. Of course, based on available data, there is no certainty. That will only come when a purported cyclical warming/cooling goes astray of measured warming over a decade or two.
SteveF (Comment #135799)
Clive shows three peaks and three troughs since 1860. It looks like a good fit. Of course I know better now about what choice of filter can do. Take a look: http://clivebest.com/blog/?p=6440
Regarding 2C, you are a high lukewarmer then. My current number is 1.5C. If Clive’s plot is backed up I’d move to 1.3C.
Anyone interested in working with Clive to co-author a paper? He is ready to do it if we can muster support. He also wants to submit his posted method for diagnosing the PDO/AMO signal in Hadcrut4. You can contact me at rongrafs (period) home at gmail.
Lucifer,
“But the more I read about Maurice Strong, the UN and the Club of Rome, the more I consider that the whole thing has been political from the get go.”
.
I read the Club of Rome rubbish in the early 1970’s. It was obvious even then that it was 100% Malthusian/leftist politics. Nothing has changed over 40+ years, except that ‘global warming’ has replaced ‘rapid population growth’ and consequent ‘inevitable famine’ as the ultimate human bug-bear. When it is clear that ‘global warming’ from CO2 will not be a total catastrophe, they will move on to something else…. the political objective never changes, only the justification. When the Club of Rome types are called on their past errors (Paul Ehrlich being the best example), they simply refuse to admit reality.
R Graf,
The 2C ECS I mention is the median value from empirical studies like Lewis and Curry 2014. The mode value (most likely value) is lower, at about 1.75C; the skewed shape of the ECS probability distribution is the result of how the forcing probability distribution translates into an ECS distribution.
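To see how that skew arises mechanically, here is a minimal R sketch: a symmetric uncertainty in the forcing change turns into a right-skewed ECS distribution because the forcing sits in the denominator. The numbers below are made up for illustration, not the Lewis and Curry values:

    set.seed(42)
    F2x <- 3.7                                # forcing for doubled CO2, W/m2
    dT  <- 0.8                                # warming over the period, K (illustrative)
    dQ  <- 0.5                                # ocean heat uptake, W/m2 (illustrative)
    dF  <- rnorm(1e5, mean = 2.0, sd = 0.5)   # forcing change, symmetric uncertainty (illustrative)
    ecs <- F2x * dT / (dF - dQ)               # energy-budget style ECS estimate
    ecs <- ecs[ecs > 0 & ecs < 10]            # drop the unphysical tail for display
    median(ecs)                               # ~2
    d <- density(ecs); d$x[which.max(d$y)]    # mode, noticeably lower than the median

Dividing by an uncertain forcing is what pushes the long tail out to the right.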
SteveF (Comment #135803)
“the political objective never changes…”
We have millions of church backsliders with a huge guilt complex. Reverend Gore went to school to learn how to soothe people’s need for absolution. Reminds me there are people tugging at his shirt cuffs right now for him to get his presidential game hat on.
” Reminds me there are people tugging at his shirt cuffs right now for him to get his presidential game hat on.”
I can imagine Gore under real pressure, not protected by fawning media helping him hide from tough questions or issues:
https://www.youtube.com/watch?v=ZlV3oQ3pLA0
Nic Lewis has a post up at CA about a paper that he can use to reduce the range of aerosol effects on determining TCR and ECS and how this in turn can dramatically reduce the upper limits of the TCR and ECS probability distributions.
“Reminds me there are people tugging at his shirt cuffs right now for him to get his presidential game hat on.”
He got screwed last time, he won’t play again. The bitterness of that experience is what drives his war on fossil fuels.
DeWitt:
What is this nonsense? Holding the surface, letting the stratosphere equilibrate?!
You have raw forcings. CO2 is fairly non-controversial; volcanic is a huge fiddle factor based on various estimations of the scaling of AOD to flux.
Then you have “effective” forcings after feedbacks have taken their effect. In the real climate that only happens once and only has one meaning. God does not put the surface on hold and see what the stratosphere does. The rest is model-related garbage.
My study of ERBE around the Pinatubo eruption suggested much stronger direct volcanic forcing (x30 rather than x20). These values were also derived by Hansen’s team in 1992, so they are at least possibly acceptable to the mainstream back when they were still mainly doing science and not politics.
Now if the direct forcing is stronger, feedbacks must be more negative. Assuming that there is not a totally different regime of feedbacks to CO2 radiative forcing than there is to AOD forcing, then CO2 feedbacks must be more negative too. Thus “effective” CO2 forcing is less and the models will have less overshoot.
This is totally in agreement with what we find by comparing deviations of low and high sensitivity models from the surface record they are supposed to be reproducing.
https://climategrog.wordpress.com/?attachment_id=1315
In that graph we see that high sensitivity models under-estimate warming around 1960 and over-estimate warming post 2000. That looks a lot like an AGW “finger-print”, and it is one that fingers where the error lies.
Off-topic — well, unrelated to previous posts, anyway — is a NY Times article about the bowdlerization of campus speech, or “self-infantilization” as it’s been called.
P.S. My version was adorned with an Amazon ad for Mann’s “The Hockey Stick and the Climate Wars: Dispatches from the Front Lines” and Oreskes’ “Merchants of Doubt.” 🙂
“The safe space, Ms. Byron explained, was intended to give people who might find comments “troubling” or “triggering,” a place to recuperate. The room was equipped with cookies, coloring books, bubbles, Play-Doh, calming music, pillows, blankets and a video of frolicking puppies, as well as students and staff members trained to deal with trauma. Emma Hall, a junior, rape survivor and “sexual assault peer educator” who helped set up the room and worked in it during the debate, estimates that a couple of dozen people used it. At one point she went to the lecture hall — it was packed — but after a while, she had to return to the safe space. “I was feeling bombarded by a lot of viewpoints that really go against my dearly and closely held beliefs,” Ms. Hall said.”
Harold, please tell me this is a parody from a very funny comedy. Otherwise I am headed to my room for cookies and coloring.
For anyone who may be interested, WebHubTelescope provided me this invite, which may be more appropriate for others here than for me:
“Ron Graf, If you want to seriously discuss your ideas, get yourself a user login from the Azimuth Project Forum http://forum.azimuthproject.org. Currently, we are working on modeling ENSO and on estimating GHG concentrations. Lot more efficient than participating in a blog comment section.”
R Graf,
I wouldn’t advise anyone to even try engaging WHT in a blog conversation, never mind get involved with a ‘project’ he is associated with. His ‘theories’ on climate are nothing short of bizarre: mindless curve-fit ‘models’ with many free parameters, refusal to consider forcing from all known sources, wild-eyed ideas about causation without any physical rationale. Worst of all, he is an extreme Malthusian CAGW advocate, who for years insisted that the age of petroleum was coming to an end based on “peak oil” theories… and we know just how well that prediction worked out. “Intractable thinking in the face of contrary reality” is how I would describe it. IMO, someone to avoid.
Tom Scharf had an excellent comment on RealClimate. Kerry Emanuel guest posted in Cyclone Pam. Tom hit it out of the park here: http://www.realclimate.org/index.php/archives/2015/03/severe-tropical-cyclone-pam-and-climate-change/comment-page-1/#comment-627184
Tom pointed out, among other facts, that it was Emanuel’s contribution to the 2011 Pulitzer Prize winning series in the Sarasota Herald Tribune that doubled everybody’s insurance rates, based on models predicting mega-storms at high frequency.
It is curious that when a cluster of events pointed one way, as it did several years ago with tropical storms and hurricanes, scientists like Kerry Emanuel (whose theories, as I recall, had storm energies and wind speeds increasing with the cube of increasing SST) and their papers received immediate attention in the MSM. In the intervening period, with fewer storms, I have heard little from Emanuel.
Back in the day we had many interesting discussions about tropical storms and hurricanes and paper analyses at CA. The statistics at that time pointed as much to better sighting and measurement of these storms as to some permanent increase in activity and energy. When I heard about these energetic storms to which Emanuel refers, I wondered when we would see more talk about a connection between global warming and hurricanes/cyclones.
My recollection from those discussions back in the day is that the consensus among those climate scientists without a horse in the race was that global warming of the tropical seas would result in fewer storms (due to greater shear forces) but storms with slightly greater energy. I have not heard any of those scientists furthering their case based on the short interim period of observations – and that speaks well for them.
Ken, Frank, others, have you stopped by Clive’s site. I noticed Paul_K and Greg are continuing their work on M&F there.
Ken, perhaps you could ask Clive about your autocorrelation question. He is very interested in M&F and other conclusions being made on model output. He will reply to you via email if you like; go to his about page and ask him to, while subscribing to his blog so he gets your address.
Steve, great exchange with Rice and Carrick on CE. U posted his freaking resume! LOL. I noticed I learned my physics, as a Chem E major, at the U of Delaware lab Rice visited in 2001.
R Graf,
I grow tired of the ‘holier than thou’ types like Ken Rice. He runs a blog that is little more than a green-rant echo chamber. I posted a link to his publicly available resume so that people would know who they are dealing with….. and to show that he is no more formally qualified to pontificate on climate science than most of those he so frequently criticizes. I also found it interesting that his very first job, before grad school on planetary physics, was with the South African environmental agency. Which is perfectly consistent with his deep green world view.
R Graf,
Another interesting fact: one of the attack dogs at the ATTP blog that Ken Rice mentioned is named Tom Curtis, also part of the Skeptical Science attack squad: Here is some background:
“Mr. Thomas Curtis is the Managing Director and Global Co-Head of DB Climate Change Advisors. He has been the Global Head of Strategic Planning and Communications, Head of Business Development, and Member of Global Operating Committee at Deutsche Asset Management Inc. As Global Head of Business Development from 2004 to 2009, and prior to that, as the Head of Corporate Strategy for Deutsche Bank Americas from 2000, Mr. Curtis has been responsible for initiating and executing a wide variety of acquisitions, dispositions, joint ventures, and similar transactions on Deutsche Bank’s behalf and the sale of the firm’s businesses in the United Kingdom, Italy, and Australia. Mr. Curtis has over 20 years of experience in principal investing, mergers and acquisitions, and corporate finance. Before joining Deutsche Bank, he spent nine years as an Attorney at Cleary, Gottlieb, Steen, and Hamilton. Mr. Curtis is a graduate of Duke University and holds a law degree from the Columbia University School of Law. He is also a recipient of a J. William Fulbright Scholarship.”
.
So a corporate lawyer, with no technical background or experience, who runs a consulting firm that focuses on climate change, figures he should lecture scientists and engineers about climate science. Putting aside for a moment that promoting alarm is what he does for a living, and how that may put just a tiny bias in his ‘scientific thinking’, that he pretends to be a climate science expert is simply laughable on its face. That Ken Rice puts faith in the ‘climate science’ pronouncements of Tom Curtis speaks volumes about Ken Rice.
A bit more information:
https://www.responsible-investor.com/home/article/dbcca_e/
Seems the climate change alarm business isn’t all that it was cracked up to be.
And still more (reduced size) company information: http://www.bloomberg.com/research/stocks/private/snapshot.asp?privcapId=84655488
R Graf, that ended up being an entertaining thread, didn’t it? 😉
Now for something completely different.
Via BishopHill, Watts has a post on ocean overturning.
My guess is they are really seeing changes in the circulation, but it doesn’t mean what they say it means.
Here’s another paper on the subject.
Steve, Carrick, when I said LOL I meant I was falling out of my chair doubled over. Here Judith was hosting a social to get at what people want out of a blog, and of course it’s to get informative and challenging threads. On top of that you have a world-renowned expert willing to talk to you in plain English who is humble enough to think to ask for criticism.
In walks the man in a cape and mask with his chest puffed out, as it always is, and says “what’s this about special sauce?” Springer points out that he just had legitimate comments wiped out (and they were; I read them) and immediately a fight breaks out. Steve pulls out his resume and Don starts reading it out loud, pulling off his mask and dropping it in the fish tank.
“So, it turns out you’re really just a salesman from Cincinnati.” (No offense to anyone. I’m actually from Ohio)
I don’t know if ATTP will be dropping in anymore where his super-powers are useless.
Carrick, Thanks for the links. I am very interested in ocean currents.
R Graf (Comment #135819)
I have been spending more time researching and less time in discussions at these blogs. Like, I think, Carrick and SteveM concluded at Judith Curry’s blog, these discussions will probably not change minds. I engage primarily to learn, and particularly in areas where I have an interest and some basic knowledge already – and do it for rather selfish reasons and motivations.
My interest in TCR was primarily in relating it to deterministic trends that can be extracted analytically from modeled and observed temperature series using methods like SSA. I now have to determine whether I have reached a limit in obtaining regression correlations much better than 0.8 by adding independent variables to the regression. I have to go back and see if anyone has put any confidence limits on the TCR derivations.
As an aside, I obtained the necessary data from Jochem Marotzke to attempt to replicate the M&F regression. When I finally went through what I thought M&F were doing with their regression, I soon discovered I was off on the wrong track. I now have down the general approach they used and have run through my first attempt at the regression. There are no overlapping trends used in the regression as I first thought, and thus autocorrelation of the regression residuals is not a problem. What I have noticed about the 98 regressions for the 15 year periods is that the coefficients for the independent variables from the regression are sometimes barely significant (p.value less than 0.05) and sometimes not. The p.value for the overall regression is not significant for a goodly number of 15 year periods and the R^2 values are nearly always less than 0.1. M&F used the temperature data from multiple runs of a model with only one corresponding ERF series. This situation suggests strongly to me that the multiple-run series need to be averaged, thus reducing the number of degrees of freedom. That is my next effort with the regression.
Interesting that since the independent variables, alpha and kappa, in the regression are time invariant, it limits how a regression can be carried out, and that is what clued me in that my original approach had to be wrong.
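For anyone wanting to follow along, the regression I am describing is, schematically, an across-model fit for each start year of the 15-year temperature trend on the ERF trend plus the time-invariant alpha and kappa. A minimal R sketch with placeholder numbers (not the actual M&F data or code), showing what gets tested for each period:

    set.seed(7)
    n_mod      <- 18                                  # placeholder number of models
    alpha      <- runif(n_mod, 0.6, 1.8)              # feedback parameter, placeholder values
    kappa      <- runif(n_mod, 0.4, 1.2)              # ocean heat uptake efficiency, placeholder
    erf_trend  <- rnorm(n_mod, 0.3, 0.1)              # 15-year ERF trend per model, placeholder
    gmst_trend <- 0.5 * erf_trend + rnorm(n_mod, 0, 0.15)  # 15-year GMST trend, placeholder
    fit <- lm(gmst_trend ~ erf_trend + alpha + kappa) # one such regression per start year
    summary(fit)$coefficients[, "Pr(>|t|)"]           # per-coefficient p.values
    summary(fit)$r.squared                            # typically small, as noted above
    # repeat over all start years and tabulate which coefficients ever reach significance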
As a further aside, I like Judy’s blog and Lucia’s because of the wide range of discussion, both technical and political, that is allowed. Judy’s blog changes topics too frequently for me, but I have a problem with that even at blogs where the changes are considerably less frequent. I am just now getting around to fully understanding what M&F did with their regression and that thread is pretty much dead now.
Judith Curry’s approach to AGW and the state of climate science has, to my mind, undergone a transformation. I have always wondered what triggered or caused that transformation. I have to agree with a lot she now says about the state of climate science with regards to AGW, but in the past at CA we had some rather antagonistic discussions about the past and future predicted frequency of tropical storms. She claimed then that she was more or less a libertarian and I told her it was way on the less side. Now I think she is on the spectrum.
not the same Tom Curtis. The SKS Tom Curtis http://bybrisbanewaters.blogspot.ca/ seems to be unemployed due to ill health.
Steve McIntyre,
Thanks for that information. Two people named Tom Curtis, both non-scientists, both closely associated with global warming alarm. Wow.
Tom Curtis the lawyer: My apologies for assuming you were the author of abusive comments on climate blogs.
And now that the real Tom Curtis has been identified, if you need a good eye-roller, read this little rant: http://bybrisbanewaters.blogspot.ca/search/label/Welcome
.
It has a bit of everything: global collapse of civilization, human population dropping to under 2 billion, a return to the economic level of the 1800’s, with no possibility of ever recovering, collapse of all Earth’s ecosystems, and maybe extinction of Humans in the very near future…. it is actually one of the funniest documents I have read in a long time. If I had to write a parody poking fun at the most delusional of the loony green fringe, then I would be hard pressed to do better than that document.
.
Sort of explains why he can’t stop acting horribly in blog comments; he really believes the end is near unless we immediately repent of our sins against Gaia (Repent! Repent I say!). When the world is coming to an end unless you win the great battle against the forces of evil, then nothing else matters…. and the ends justify any means.
Ken Rice should be a lot more careful about the company he keeps.
Kenneth Fritsch
That is not at all surprising if you look at the frequency response of the “sliding-trend” as a filter. It has a massive negative lobe centred around 10y, i.e. it INVERTS the variability close to that frequency.
Thus anything around the 9y lunar influence, the 11y solar cycle and the roughly 11y interval between El Chichon and Mt P gets, quite literally, turned upside down.
Now what do you imagine that will do to the p-values etc. ?
What they essentially did was mangle the data then conclude that the absence of any “traceable” signal due to some particular factor was proof that there was no effect. Nonsense.
They may get away with their 62y sliding-trend since there’s not much frequency content around 40y, where the neg. lobe lies in that case.
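If you want to see the inversion concretely, here is a quick R sketch (purely synthetic data, nothing from M&F): apply a 15-year sliding OLS trend to a pure 10-year sinusoid and compare it with the true rate of change. The correlation comes out strongly negative, i.e. variability near that period gets flipped.

    # 15-year sliding OLS trend applied to a pure 10-year cycle (synthetic illustration)
    tm <- 1:200
    x  <- sin(2 * pi * tm / 10)
    n  <- 15
    sliding_trend <- sapply(1:(length(x) - n + 1),
                            function(i) coef(lm(x[i:(i + n - 1)] ~ seq_len(n)))[2])
    centres    <- (1:(length(x) - n + 1)) + (n - 1) / 2       # window centres
    true_slope <- (2 * pi / 10) * cos(2 * pi * centres / 10)  # actual rate of change
    cor(sliding_trend, true_slope)                            # ~ -1: this period is inverted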
Ken, Do you have the right to share the M&F Data? If so, Greg, Clive, Paul_k and Frank might be interested. If not, I imagine they would still be interested that M&F were responsive to you so that they might approach also.
W/r/t evolving views, there is no way not to evolve if one is analyzing as one goes rather than being on a jihad. W/r/t blog frequency and wanting to find something conclusive before moving on, I think it goes to my prior sentence.
An individual who will remain unnamed (but his picture is right above me here) thinks I’m too impulsive and get baited or diverted (mucking up threads). Perhaps so. And at the same time I burrow in too long and deep for most’s taste. But I’m learn-n da etiquette. My apologies.
SteveF,
A fair to middlin’ number of climateteers have made more than a small fortune off of climate hype. As the market thins out they will attenuate their pitch more and more until it becomes a squeal and finally ranges into something suitable for bats.
SteveF,
That’s a fairly accurate description of what would happen if there were an effective immediate ban on any use of fossil fuels.
“Ken, Do you have the right to share the M&F Data? If so, Greg, Clive, Paul_k and Frank might be interested. If not, I imagine they would still be interested that M&F were responsive to you so that they might approach also.”
I see no reason why Jochem Marotzke would not share the ERF data with others. He put no stipulations on my passing it on to others. He did not provide the temperature data for the models involved in his regressions but that is readily available in good form from KNMI. I could make all these data available if I hear of anyone that is interested.
My primary interest was to determine whether the regression coefficients were significantly different than zero and whether the overall regression p.values were less than 0.05. I have currently finished the 15 year regressions using the average of the model temperature series where models had multiple series, thus matched to the corresponding single ERF series available for each model used in M&F. Most of the regression coefficients and overall p.values show that the M&F regression model for 15 year periods is not appropriate and that no conclusions should be drawn from it. What it more directly says is that model differences in ERF trends, alpha and kappa cannot be used to predict differences in model temperature trends over 15 year periods.
I will do the 62 year trends as carried out in M&F and then report my findings over at CA on the initial Nic Lewis thread.
Right now I suspect this will all get back to using something like 40 year trends from 1975-2014 to show differences between model and observed temperature trends. Using SSA derived non-linear trends, I already know that almost all of the CMIP5 model temperature trends are higher than the observed trends. Curious that climate scientists never use that 40 year period, and, for that matter, skeptics and luke-warmers seldom do.
Ken, Thanks. Clive and Paul are working on this also right now. If you would please forward the data to clive dot best at gmail . com
R Graf (Comment #135835)
I plan to put all my data and results into an Excel file accessible from my Dropbox location when I finish my analysis – which should be shortly.
Great! When done, here is a good place to leave a link to it: http://clivebest.com/blog/?p=6440
SteveF
The Tom Curtis of SKS is biased on the warm side of the debate. But given facts he has shown a willingness to make concessions. He’s a rare bird in the blogowars. While he tends to enter the conversation guns ablazin’ he has an ability to moderate his tone as the circumstance warrants.
In the discussion surrounding the Moon Landing paper he eventually acknowledged that Cook did not post a link at SKS to the survey. It took some time. He has also been good about criticizing the folks on his side of the debate when their behavior crosses certain lines.
See here for an example http://climateaudit.org/2013/04/03/tom-curtis-writes/
According to SMc’s post above Mr. Curtis is not in good health. Here’s hoping he recovers quickly.
DGH,
Biased? Concessions? Good grief.
.
It is clear to me that Tom Curtis acts very much like an a$shole in most of his blog comments, and has, IMO, the intellectual perspective of a nematode. He is a self proclaimed ‘philosopher’, with, AFAICT, no technical training and no technical experience, but feels he should hector scientists and engineers over global warming. This is something of a stretch for someone with his background.
.
His behavior is consistently abusive and obnoxious, as you might expect for someone who is both unaware of his limitations and outside his range of competence. I care not a whit about his poor health, since it seems clear to me that his contribution to the debate is, if anything, wholly negative. But for his family’s sake, I hope he recovers.
I grant that he thinks he is ‘doing the right thing’, but he is grotesquely mistaken about that, and not technically competent enough to see that.
For everybody’s information, I think this string is getting way too interesting for people besides the ones actually commenting. Is anyone game for having a private blog with a login? Another idea would be an email pool. Clive or Lucia might help organize. Just ideas…. Thoughts?
I’ve been reading it and find it interesting
This string, particularly my synopsis of the “doxing” incident is generating more comments on ATTP’s post on aerosols than aerosols ever could have. Is it secret sauce run amuck or kindergarten?
KF:
Firstly, I don’t think “trends” are a valid model with which to analyse climate data but if your objective is simply to reproduce M&F and look at statistical tests, that is valuable. It is one thing they obviously should have done themselves before publishing conclusions based on their failure to find some or other result using a “novel” method.
I sent some comments about the flaws in the paper and got a reasonably lengthy reply from Prof M. Not very convincing but at least an open and detailed reply.
In relation to the bifurcation that I pointed out in their figure 3b, he suggested that the 78 samples used were not sufficient to define two groups, which is rather odd since that undermines the whole foundation of their paper: if the method is not capable of detecting two separate groups, then not finding a “traceable impact on GMST” is NO indication of whether model sensitivity is a factor.
Very odd response from the main author.
As a result I asked him to send me the distribution of the results for the last five years of the 62y trend analysis so that I could look at the distribution cross-section in actual numbers rather than in the coarse-grained colour coding used in their figure 3.
That was about a week ago and no reply so far. He was rather slow to reply the first time and I guess he’s a busy man, so we’ll see if he does get around to providing that.
Now if you manage to reproduce their analysis this would be something you could look at.
https://climategrog.files.wordpress.com/2015/02/mf2015_fig3b.png?w=662
What I’m interested in is the end where we see two distinct groups, one showing positive trends and another showing neg. trends.
He informed me that the white space separating the two was not actually zero, just a small value. Again it would be a lot better to see this in numbers.
I would suggest not grouping individual runs as you describe, since you will not be plotting comparable data if you plot this alongside single runs of other models. They used 78 sets of data, you should probably try to do similar.
I posted my M&F results at the original Nic Lewis thread at CA. It is currently in moderation. I added a post asking for it to be removed from moderation.
In case people haven’t seen it, there is a new paper critical of Lewandowsky’s work on conspiracy ideation. As I happened to think Lew had badly bolloxed it up, this fits well with my prior beliefs.
But I think this new paper is very well done, well sourced, and contains a very good explanation of how you evaluate the quality of these sources of surveys for testing questions of this sort.
If Lewandowsky had any degree of self-awareness he would have a sense of being disemboweled about right now. I like the term “climatebots” for people like Lew, who keep trudging along, never looking for any feedback to see whether the course they are on is leading over a cliff or not.
Conspiracist Ideation as a Predictor of Climate-Science Rejection by Dixon and Jones.
Carrick,
“If Lewandowsky had any degree of self-awareness ”
.
He doesn’t, he is an unconscious imbecile.
“I once read an interesting comment by a climate modeler about a paper that suggested little or no positive climate feedback for CO2, based upon physical arguments about the atmosphere. The modeler said that the paper was clearly wrong, and cited several other papers which he claimed showed this. I took the time to read the abstracts of these other papers, and they all turned out to be descriptions of the results of different climate models. So the modeler obviously thought anything that conflicts with model results must be wrong!” SteveF May 7, 2009
http://wattsupwiththat.com/2009/05/06/the-global-warming-hypothesis-and-ocean-heat/#comment-128236
I am trying to learn about the lost heat and found the Pielke Project. Is there still a mystery of where the energy imbalance is getting stored? It seems so elementary that the ocean surface would be a natural heat gauge. I am wondering, first, why it had to be a novel idea to look. And second, how can one go further until one accounts for the heat?
R Graf (Comment #135849)
If the observed or modeled energy budget at TOA does not balance, most of that heat has to be going into the ocean. If the imbalance is small, one could perhaps look at dumping the excess into the land, but that could only account for small imbalances. If we can measure the ocean heat content and SST reasonably well and that does not account for the net heat, then the heat is either missing or the result of not having accurate measurements – for the observed, that is.
In the case of the models, the ocean and land heat content, I think, and the net TOA can be determined, and a mismatch would not be caused by inaccurate measurements.
In the case of the observed the heat budget problem can be reasonably blamed on measurement errors. In the case of models that is not the case and would indicate that the model is not yielding realistic results. Models are tuned to give first and foremost an energy balance and if the models fail after tuning I would think that would indicate a major problem with that model.
Ken, thanks so much for trying to explain this, but remember this is all new to me. Are you saying the models’ oceans have been warming and the real ones not? Are you saying it is still currently believed, since the Pielke Project, that the oceans’ surfaces have been cooling since the late 1990s and there is no accounting for it?
Great job on the paper BTW. I took the liberty of posting a link to it on Clive’s and ATTP’s. Pekka was on the string at the latter. I hope he takes a look. Cheers!
R Graf,
Prior to extensive data from ARGO being available, the ocean parts of the GCM’s were largely unconstrained, and IIRC, most models had quite large ocean heat uptake. As ARGO data became available, it became evident that the rate of uptake was less than the models calculated. But no worry, aerosol offsets could always be adjusted to maintain model balance. But just as patriotism is the last refuge of scoundrels, aerosols are the last refuge of high sensitivity models: take that away, and there is no escaping that the models are simply too sensitive to forcing. Which is why there is some ‘resistance’ (hysteria) when anyone suggests that the upper plausible range of aerosol influence is actually lower than the IPCC suggests (as Bjorn Stevens did in his recent paper). It may not be possible to develop your own GCM, but if a simple heat balance can invalidate the GCM’s then you don’t have to…. they are demonstrably too sensitive.
” but if a simple heat balance can invalidate the GCM’s then you don’t have to…. they are demonstrably too sensitive.”
They made the mistake of creating a position that could be proven false, to paraphrase Karl Popper.
BTW, it seems I am now an ex-blog commenter on ATTP, and have been for some time. I swear I didn’t use any language other than to question the integrity of climate science, on a blog about spreading the word of the 97% to an unbelieving public.
I asked ATTP if he believed M&F’s Max Planck Institute press release (“Skeptics who still doubt anthropogenic climate change have now been stripped of one of their last-ditch arguments”) was a pertinent conclusion.
In ATTP’s defense I think it was his bedtime at 9EST and he wanted to kick out anyone who could not be trusted to lock up the store as they left.
I give Jochem credit for sharing his data with Ken. Karl Popper might be smiling though.
http://www.mpg.de/8925360/climate-change-global-warming-slowdown
From the Max Planck press release:
Impressive indeed. They used a “novel” method without checking whether it worked, were unaware that there are inverting, negative lobes in the filter they used, and failed to notice a separation into two groups in their own results that goes contrary to their main conclusion.
When questioned about this, Prof. Marotzke suggested that the number of samples was too small for this to be significant. An argument that, if true, would invalidate his “impressive” paper’s results. If there are not enough samples to detect a separation, not finding one tells us nothing. The whole study disappears in a puff of smoke.
re: (Comment #135850) Kenneth Fritsch said, “Models are tuned to give first and foremost an energy balance and if the models fail after tuning I would think that would indicate a major problem with that model.” Excellent summary.
Instead we have climate concerned people blaming the data, massaging the data, blaming the critics, ignoring the skeptics and yelling louder.
R Graf,
Ken Rice does not appreciate it when someone points to clear evidence of the field being very green/left oriented and so subject to bias; it takes away from the credibility of the central meme the field relies on: “the science demands” certain public policies that they happen to like. Consider his choice of blog name…. just another reference to the same “science demands” meme.
.
The good news is that extreme predictions of warming will be overtaken by reality. The Chinese, Indians, Brazilians, and a host of others are not going to restrict growth in their use of fossil fuels over the next couple of decades, and human forcing (net of aerosols) will by then be near 3 watts/M^2. Absent dramatic warming over the next two decades, the claims of (possibly) high sensitivity will no longer be credible, and so ignored by nearly everyone.
R Graf (Comment #135851)
I finally skimmed through the article from the Watts blog to which you linked above. That article, I think, relates to the observed measurement versus modeled ocean heat to which SteveF refers in his post above.
My point was somewhat different and that is the proposition that the TOA net energy has to be balanced primarily by a change in the ocean heat – otherwise you have a destabilized climate. The models have emergent properties (temperature change for the ocean mass and net TOA radiative energy) that allow one to determine if the earth’s climate conserves energy in the model output. In the observed case one knows in the real world that there must be a conservation of energy and thus if there is a contradiction between the net TOA and ocean heat then one or both measurements or calculations must be wrong. I do not know off the top of my head whether there is a contradiction in the observed values or even the plus/minus limits on these energy calculations. The general point I am making here is that we can obtain sufficient information from climate models to determine whether the model output makes sense based on fundamental physics whereas with the observed we can determine whether there is a contradiction in measurements.
I did some calculations recently on CMIP5 models to determine whether the total global sea water temperature change in an individual model was matching the net TOA radiative energy. I found, as I recall, about 3 models that matched well with many others that did not and could not even with drastic changes in the ratio of heat going into the ocean versus the land. I excerpt and link from the write-up of that study below.
“I have determined N by N=rsdt-(rsut+rlut) where the radiation values are in watts/m2 and are global means. The GSWT values for the CMIP5 model runs were obtained from DKRZ in global mean form. These GSWT changes should track the ocean heat content, and further, the slope from regressing the GSWT versus N should be around 2.6E-03 K/(watts/m2), assuming a global sea water volume of 1.37 billion km3, sea water density of 1025 kg/m3, sea water heat capacity of 3985 J kg-1 K-1 and that 95% of N is absorbed by the global sea water. All values were taken for the period 2006-2100 and are for RCP4.5 CMIP5 model runs. The link here explains a similar approach by Palmer and McNeall:”
http://iopscience.iop.org/1748-9326/9/3/034016/pdf/1748-9326_9_3_034016.pdf
“It can be readily seen that the GSWT versus accumulated N slopes differ across individual models but not much over model runs for the same model. This result should provide a clue as to how the model to model differences occur – and it would apparently not be from internal variability. The slopes for 3 model/model runs come close to my expectations from above and are namely CMCC, GFDL, and IPSL-CMSA-MR. Looking at the rest of the table results shows that the differences between models arise mainly from differences in N. Overall the remaining puzzle to me is that over this time period from 2006-2100 those models that stray significantly from the expected slope must not have an established energy balance unless heat is being stored in some very different manner. This apparent lack of balance is even more surprising to me given the literature that notes that TOA balance, with consideration for OHC I would assume, is of prime importance when tuning these climate models.”
http://imagizer.imageshack.us/v2/1600x1200q90/537/PAhjXA.png
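As a sanity check on the expected slope, here is a back-of-envelope R calculation using the constants quoted above. It assumes the accumulated N is expressed as W/m2 over the whole Earth surface held for one year; small differences in the exact area and year length move the answer by a few percent:

    area_earth <- 5.10e14            # Earth surface area, m2
    sec_yr     <- 3.156e7            # seconds per year
    vol_ocean  <- 1.37e9 * 1e9       # global sea water volume, m3
    rho        <- 1025               # sea water density, kg/m3
    cp         <- 3985               # sea water heat capacity, J/(kg K)
    frac_sea   <- 0.95               # fraction of N absorbed by the global sea water
    heat_cap   <- vol_ocean * rho * cp              # J needed to warm the sea water by 1 K
    frac_sea * area_earth * sec_yr / heat_cap       # ~2.7E-03 K per (W/m2 of N held for a year)

which lands in the same ballpark as the ~2.6E-03 K/(watts/m2) quoted in the write-up.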
It has been suggested that perhaps the TOA values coming out of the CMIP5 models need to be adjusted by TOA values that are derived from the pre-industrial model runs. These so-called pi control runs are the long period spin-up of the model under pre-industrial climate conditions. I think that adjustment might apply if the pi runs were showing a steady drift in the net TOA. While some models show some drift in pi runs, most show a more or less constant offset from zero. I have read comments from papers claiming that the CMIP5 models, in contrast to the CMIP3 models, do not show much drift but rather have problems with closure on TOA – whatever that means. I need to go back and look at pi control run adjustments in the future, but I would ask why such an adjustment would be required of a climate model that is supposed to be emulating a real climate.
SteveF (Comment #135852)
Steve, since the CMIP5 models all used the same levels of aerosols for the historical runs and the historical parts of the RCP scenario runs, the model to model difference, as noted in experiments with 7 of the models, has to be in differences in converting those same levels to different forcings. I think, but am not certain, that the point you have made at these blogs numerous times is that the CMIP5 scenario level for aerosols is higher than what would be observed.
My particular interest in the aerosol effects on models is the difference that it might have in producing the differences in the deterministic temperature trends seen from model to model. I have been correlating model SSA derived trends with model TCR values and wanted to determine whether the correlations of around 0.70 for the period 1975-2014 and 0.80 for the period 1975-2100 could be increased by accounting for the aerosol effect difference. So far I have not found a way of doing this and am now looking at the CIs for determining the TCR values for models.
Ken, thanks for the clarifications. I remember reading somewhere of problems with achieving closure in CMIP5 and thinking, wow, they couldn’t make it a programmed requirement that they follow the First Law? At every turn it seems it’s “worse than we thought.”
I noticed in CE that Rud Istvan and Matthew R Marler (http://judithcurry.com/2015/02/02/questioning-the-robustness-of-the-climate-modeling-paradigm/#comment-670894) are up on models and might be good for answers or to bounce ideas off of, besides the others here. (But I’m sure all likely knew this.)
Isaac M Held seems to be one of the most noted authorities on models.
http://gfdl.noaa.gov/bibliography/results.php?author=1054
Re: R Graf (Comment #135860)
You generally want these large-scale (i.e., coarse-resolution) models to function with some physically motivated moving parts which also work together in what looks to be a reasonable way. Then if mismatches arise (e.g., drift, TOA imbalance, etc.) you try and diagnose where they are coming from. This may allow you to place additional physical constraints on “underdetermined variables” in the system, but it usually isn’t obvious how to actually incorporate these constraints. Simply programming a TOA balance requirement would likely not be the right way to do it.
Oliver (Comment #135861)
“Simply programming a TOA balance requirement would likely not be the right way to do it.”
This makes sense if you are using the lack of energy balance to troubleshoot the model assumptions. But once you are putting a good-to-go seal on a model to be used as a reference for actual simulation, I would think it needs to have closure. “Parameterization,” I think they call it, should be done at an early stage, not as a response to data from the real world or other models. Doing it late, IMO, runs the risk of too much bias, which is called “tuning,” I believe.
Oliver,
Long time no see. WRT models, seems to me the overriding requirement is that they do a reasonable job of making projections. In that respect, I believe the evidence is that they are at least highly suspect, if not plainly wrong. Of course, recent slow warming due to greater than expected ‘internal variability’ could be making the models look particularly bad over the last decade. But then, internal variability probably made them look particularly good prior to the early 2000’s. Seems to me the most reasonable interpretation is a lower underlying secular trend than the models diagnose (AKA lower sensitivity), combined with significant multi-decadal natural variation.
.
BTW, IIRC, we had an exchange some years back about the economic sense (or in my view, nonsense) of buying a Prius instead of a comparable conventional car. The gasoline pump price today where I live is $2.45 per gallon. Do you think this impacts the justification for buying a Prius (I recall you suggesting $4.00/gallon and higher would make the Prius a reasonable investment)?
I’ve posted on my experience retracing the steps of calculating RF after the Myhre paper (IPCC).
.
Have a look if you’re interested.
.
This is in various papers and old hat for most, but I felt enlightened after actually executing the radiative model for myself. I’ll have some more controversial calculations to share next week.
SteveF,
It’s not just the price of gasoline, you have to factor in the type of driving. I would say that unless the large majority of your driving miles are low speed in high density stop and go traffic, it makes no sense to even consider a hybrid regardless of the price of gasoline.
A natural gas powered hybrid bus, for example, probably makes sense. However, I’m not sure that carrying something like 1,000 lbs of batteries in a ~4,000 lb car, for which you pay a premium price but don’t gain all that much more efficiency, ever makes sense for an individual. A hybrid taxi in a big city might work.
The large amount of battery mass is necessary for extended battery life. Your average hybrid using nickel/metal hydride batteries never uses more than 10% of the total battery capacity (45-55% charge).
I don’t think I followed the previous exchange on the value of a Prius, but as a person who has driven one quite a bit, I have to disagree with DeWitt Payne. Strongly.
I get over 40 MPG while driving a Prius under just about any conditions. I’ve had some weeks where I’ve averaged over 50 MPG. At the same time, I’ve never had a situation where the Prius couldn’t do something I wanted to do which another car could have done. I also haven’t found the maintenance to be notably worse than any other car. All in all, I’ve found the Prius to be a great car (and a surprisingly roomy one).
I don’t know what the arguments are for whether or not hybrids are good for the environment, but as a consumer, I’d say they can be a great investment.
Re: Prius. Here (Japan) they are replacing the classic Toyota Crown taxis. You also see a lot of them in the outside lane on the expressways, doubtless company cars being driven by reps. I’m sure the fleet managers have done their arithmetic and the numbers make sense.
As for purpose, there’s no way a Prius can meet my needs. I drive a 2 litre wagon, and when my wife attends sales fairs, we pack it to the roof with stock and sales paraphernalia. Also I maintain a factory, two shops and a home. I regularly need to carry bulky and/or long loads. The Prius may be fine if it’s only people going shopping and commuting. For non-civilians, not so much.
Toyota makes good stuff but 40-50 mpg is not much of a difference from what I’m getting, 30-35mpg, with my Sonic at a fraction of the cost. I’m looking forward to Tesla if Musk can get the battery factory finished and mass production cranked to bring down the cost.
BTW, Brandon, if I ever need a defense lawyer I hope he/she has half your skills.
Re: R Graf (Comment #135862)
Parameterizations usually are created at an early stage, and in fact they are supposed to be created based on data from the real world or other models.
If one were to adjust the knobs in order to enforce energy balance of the entire model, including its many (unrelated) parameterizations, then that would be the tuning that we’re worried about.
Re: SteveF (Comment #135863)
Howdy, SteveF. I’ve been following the threads on and off but have been pretty busy lately, so I haven’t jumped in.
Just to clarify, I’m not endorsing the models. My point to R Graf was only that enforcing energy conservation programmatically wouldn’t be a solution.
It certainly impacts the economics, no question about it. I don’t expect pump prices will stay as low as they are, but there’s that and the fact that a lot of conventional cars are achieving much better fuel economy than they were a few years ago. As a result, a Prius no longer makes much sense for most people. AFAIK, sales are way down, so I think people are figuring this out.
Also, I don’t know if I gave the wrong impression in our convo a while back, but the only reason I would consider a Prius would be the economics. Personally, a Prius would not be my first choice for a daily driver. 😉
Re: DeWitt Payne (Comment #135865)
There are a few other operating patterns that could probably benefit from a hybrid. For example, my old job required me to go down a huge long hill (a few miles long) from my home, then climb another huge long hill to get to work. Repeat on the way home. But sure, stop and go traffic is the place where the hybrid excels.
Diesel is a nice way to cover long highway commutes, but then again diesel is relatively expensive right now, compared to gasoline.
Oliver (Comment #135869)
“Parameterizations usually are created at an early stage, and in fact they are supposed to be created based on data from the real world or other models.
If one were to adjust the knobs in order to enforce energy balance of the entire model, including its many (unrelated) parameterizations, then that would be the tuning that we’re worried about.”
I think parameterization, even at a low level, to match the real world or other models empirically amounts to tuning (just more sophisticated). Even if adjustments are soundly based on theory (not fudging), they are not immune from tuning bias. After all, the real world record is their test for getting out of the gate.
Isaac Held, the most distinguished authority, believes that keeping parameters independent of being outcome-based is currently a problem. I. Held, “The gap between simulation and understanding”.
The power of prediction is the only sound test for validation.
R Graf, parameterizations are used because we (often) encounter situations in real-world physics where it is just not possible to base a model on first principles. The parameterization must be tuned to match real-world behaviors.
The “model tuning” that people such as Pielke Sr. have been complaining about for years is when you put together a model with many parameterizations that have been “independently tuned” (as they should be), and then you go back and re-tune the parts to get a “more desirable” aggregate model behavior.
I have read the article by Held that you linked. I am not sure which part specifically you are citing, but overall I find his viewpoints quite useful.
On the other hand, with all due respect to Held, I would not hold him up as “the” distinguished authority on this problem. 😉
Oliver,
“Just to clarify, I’m not endorsing the models. My point to R Graf was only that enforcing energy conservation programmatically wouldn’t be a solution.”
.
Well, that is a good thing; people endorsing the models are likely to be disappointed.
Future prices for petroleum are impossible to know, of course. Still, the “peak oil” based projections seem, putting it delicately, utterly delusional. Prices are not going crazy any time soon.
Oliver,
BTW, Held’s recent post on “catastrophic increases in dew point” pretty much convinced me he too is more than a little nuts. There is no possibility that vast stretches of habitable land are going to suddenly become uninhabitable. Shame on Held for such nonsense.
The link below is quite open about “tuning” models and refers to TOA here:
“During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth’s climate system. These desired properties are observables, such as the radiation balance at the top of the atmosphere, the global mean temperature, sea ice, clouds and wind fields. The tuning is typically performed by adjusting uncertain, or even non-observable, parameters related to processes not explicitly represented at the model grid resolution. The practice of climate model tuning has seen an increasing level of attention because key model properties, such as climate sensitivity, have been shown to depend on frequently used tuning parameters. Here we provide insights into how climate model tuning is practically done in the case of closing the radiation balance and adjusting the global mean temperature for the Max Planck Institute Earth System Model (MPI-ESM). We demonstrate that considerable ambiguity exists in the choice of parameters, and present and compare three alternatively tuned, yet plausible configurations of the climate model. The impacts of parameter tuning on climate sensitivity was less than anticipated.”
http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/full
This article has been a center piece of tuning discussions at the Blackboard.
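To make the “closing the radiation balance” idea concrete, here is a toy zero-dimensional R sketch. It is nothing like how a real GCM is tuned, but it shows the sense in which an uncertain parameter gets adjusted until the TOA budget balances at the observed temperature:

    sigma  <- 5.67e-8                  # Stefan-Boltzmann constant, W/(m2 K^4)
    S0     <- 1361                     # solar constant, W/m2
    albedo <- 0.30                     # planetary albedo
    T_obs  <- 288                      # observed global mean temperature, K
    # TOA imbalance as a function of an uncertain "effective emissivity" parameter
    toa_imbalance <- function(eps) S0 * (1 - albedo) / 4 - eps * sigma * T_obs^4
    eps_tuned <- uniroot(toa_imbalance, c(0.5, 1))$root   # ~0.61
    toa_imbalance(eps_tuned)           # ~0 W/m2 by construction: the budget is "closed" by tuning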
” I’m looking forward to Tesla if Musk can get the battery factory finished and mass production cranked to bring down the cost.”
Beware that the Tesla business model depends on huge tax credits to the wealthy buyers of that car. No wonder Wall Street is in love with the company. There might well be some great ideas and innovations here but I would much prefer that the free market place determine the success of the implementations.
http://www.forbes.com/sites/patrickmichaels/2013/05/27/if-tesla-would-stop-selling-cars-wed-all-save-some-money/
Kenneth Fritsch (Comment #135876) Re: Tesla Motors
I’m a small business owner. Nobody’s more a believer in free markets. Elon has my admiration because he is using his skills and fortune to bridge humanity into the future, leaping into and hopefully over problems that humanity needs to solve: electric cars and space commercialization (which will lead to colonization). Our ultimate goal is short-loop cycling of resources, the same technology that will both save the planet and free us from it. Compared to government research grants, tax incentives for future needed markets are a much lesser evil. Even with help it’s no easy ride. Tesla stock is down now.
R Graf,
Government research grants are much more of an evil than tax incentives for rich persons’ electric sports sedans?
Oliver:
Gas prices definitely don’t help Prius sales, but I think the point you make is probably the most important one. As hybrids became more popular, fuel efficiency in other cars became a higher priority. That push caused the Prius’s advantage to shrink.
On the other hand, the Prius has been out for quite a while. We’re likely looking at the tail end of its product life cycle. That means we will likely see significant improvements in the Prius line soon. I know I’ve heard talk we may even be seeing 60 MPG in a new one before too long. If that happens and gas prices go back up, I expect the Prius (or other hybrids) to rack up a lot of sales.
Speaking of which, I wouldn’t buy a new hybrid now with the talk of what’s to come. Why buy one now if you can wait 18 months and get a significantly better model? If you’re getting it used though, it could still be a good competitor.
R Graf:
Thanks!
I recall hearing that all earlier models failed to close the energy budget and that this was finally worked around by introducing a spurious “correction”, since it is absolutely essential that the energy budget balances.
I suspect it may have been one of the lectures by Dr Christopher Essex.
Sorry I don’t have a proper ref.
Oliver (Comment #135878)
“Government research grants are much more of an evil than tax incentives for rich persons’ electric sports sedans?”
I can’t be too against gov research grants since my oldest is about to make his living off of them as a biochemist. (He is also very warmist after a long campus influence.) I am also not against “the rich” as long as they got their money by providing valuable contributions. I think that incentivizing the people with power and resources to apply them toward desired technology is in some cases justified against the evil of government-knows-best tinkering. My business’s customer and vendor list has included the whole spectrum of organizations, from one-office operations to the US government. It has given me a unique vantage point to judge clear trends in culture and dynamics as related to size of organization and profit motive. My experience is that there are both efficient and inefficient large private enterprises. There are no inefficient small ones. The best government run operation is about half as efficient as the least well run large private one. I don’t know who else here has had a view of both sides but it would be interesting to hear. The obvious conclusion is that the more that can be done by incentivizing private enterprise rather than by direct government control, the more bang for the buck. On top of that you preserve liberty, which is not insignificant.
I’ve reached much the same conclusion from a different observation. He’s obviously far more educated and knowledgeable than I’ll ever be on the topics, including the Hot Spot. But when he does handstands to convince himself that the observations (both RAOB and MSU) are wrong, only in the tropics and only at 300mb, to defend the model output, then I have no doubt he’s fallen in love with the hypothesis. And as we know, love makes one do crazy things.
I came to a similar conclusion recently. He’s far more educated and experienced with climate and models than I’ll ever be, including considerations of the hot spot. But when I read him doing handstands to discount the observations to defend the model output, I knew he’s in love with the hypothesis. And love makes people do crazy things.
Has anyone here ever heard about a Planck effect differential related to the homogeneity of the global TOA grid? For example, 10C higher puts out more energy at the equator TOA than the energy lost from that 10C lowering at the polar TOA, due to the non-linear black body radiance-to-temperature curve. The more evenly heat is dissipated, the less efficiently it radiates. If tropical convection cells can transfer great amounts of latent heat into sensible heat at the tropical TOA in supercells that are smaller than the model grid resolution, could this perhaps be unaccounted-for outgoing heat?
R Graf,
The thing is, you won’t get a uniform global temperature increase. Polar amplification is not a unique property of ghg warming. The poles will warm, or cool, faster than the equator. So the fact of increased heat transfer with increasing temperature is known. Whether it’s properly accounted for in the computer models is a different question.
DeWitt, is “polar amplification” due to the troposphere being about half as thick at the poles as at the equator, as I believe I read? What I am not sure I said clearly is that the TOA can have a higher output of radiant heat when there is poor distribution of heat. So if a supercell makes a hotspot the outgoing TOA radiation shoots up more than just linearly. This then statistically lowers the whole planet’s GMST. Conversely, ocean currents that spread heat toward the poles are raising GMST by diffusing heat and lowering TOA radiative efficiency. I am sure this must be known but I am wondering if the models have the resolution to properly simulate it.
R Graf,
IMO, polar amplification is primarily a result of the Clausius-Clapeyron relationship and the huge heat of vaporization of water. Assuming constant relative humidity, adding one joule to the tropical atmosphere produces a much smaller temperature increase than it would at the high latitudes. That’s because a much smaller amount of water must be converted to vapor at lower temperature to maintain constant RH for a given ΔT. But that’s just my opinion.
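A rough R illustration of that point (my own back-of-envelope sketch, assuming the Magnus approximation for saturation vapor pressure and a fixed 80% relative humidity; the numbers are illustrative, not taken from any model):

# Saturation vapor pressure (hPa), Magnus approximation; Tc in deg C
esat <- function(Tc) 6.1094 * exp(17.625 * Tc / (Tc + 243.04))
# Saturation specific humidity (kg water vapor per kg air) at pressure p (hPa)
qsat <- function(Tc, p = 1013.25) 0.622 * esat(Tc) / (p - esat(Tc))
# Moist enthalpy per kg of air (J/kg) at fixed relative humidity RH:
# sensible part (cp * T) plus latent part (L * RH * qsat)
h <- function(Tc, RH = 0.8) 1005 * (Tc + 273.15) + 2.5e6 * RH * qsat(Tc)
dT <- 0.01
dh_dT <- function(Tc) (h(Tc + dT) - h(Tc)) / dT
dh_dT(30)    # ~4300 J/kg per K at 30C: most of an added joule goes to latent heat
dh_dT(-20)   # ~1100 J/kg per K at -20C: most of an added joule goes to sensible heat

So the same added energy per kg of air produces roughly three to four times as much temperature rise at polar temperatures as at tropical ones, which is the direction of the amplification DeWitt describes.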
DeWitt,
That is certainly part of it. The other big factor is that the net total radiative loss at high latitudes is much lower than at low latitudes, especially in winter, when the temperature is low. An added watt per square meter of GHG forcing requires a greater increase in tropospheric temperature to increase radiative loss by one watt when the temperature is lower.
I think this is consistent with warming being greater at night than in the day and greater in winter than summer.
R Graf:
It might help to remember that there is a net gain in energy in the tropics versus a net deficit in the polar regions.
http://earthobservatory.nasa.gov/GlobalMaps/view.php?d1=CERES_NETFLUX_M
“Averaged over the year, there is a net energy surplus at the equator and a net energy deficit at the poles. This equator-versus-pole energy imbalance is the fundamental driver of atmospheric and oceanic circulation.”
re: Clausius-Clapeyron, I believe that simply relates to the ratio of sensible heat to latent heat. My point is only about what happens at the TOA, where I believe 99% of the vapor has already been removed, which I guess is the basic premise for the enhanced GHE: CO2 is the major player there.
re: Warming greater at night and winter. I believe this is because in the direct sunlight the stratosphere is highly inverted in temperature so that the lapse rate, and perhaps GHE, is also inverted. In addition there is a small absorption of sunlight by CO2 at the 1.8 and 2.9 nm wavelengths, which heats the TOA improving radiative efficiency. This scenario would explain why we would see most of the EGHE at the poles where it is only inhibiting heat escape and not shading incoming.
Re: Imbalance, my point is that the more imbalance, the more efficiently the TOA can radiate, whether on a global scale or so local as to be under model resolution. I believe the Milankovitch obliquity is seen as the M-cycle with the most ice-age correlation not just because of the polar melting but because of its effect of spreading sunlight more evenly over the globe, thereby decreasing outgoing radiative efficiency.
Temp (K) to Black body Radiant Emittance (W/m-2)
295– 429.5
300– 459.3
305– 490.7
(490.7+429.5)*.5 - 459.3 = 0.80 W/m-2 for a 10K overall mixing of temperature around the globe. But the effect is even greater if the temperature hot spots are more localized. I believe that black carbon, for example, at or above the TOA, with its high emissivity and no convection to cool it, could form red-hot micro cinders, which are extremely radiant.
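The arithmetic above follows directly from the Stefan-Boltzmann law. A minimal R check (unit emissivity assumed, purely illustrative):

sigma <- 5.670374e-8                 # Stefan-Boltzmann constant, W m-2 K-4
M <- function(T) sigma * T^4         # blackbody emission at temperature T (K)
round(M(c(295, 300, 305)), 1)        # ~429.4, 459.3, 490.7 W m-2 (cf. the table above)
mean(M(c(295, 305))) - M(300)        # ~0.8 W m-2: half the area at 295 K and half
                                     # at 305 K emits more than a uniform 300 K

Because of the T^4 curvature, any uneven distribution of a given mean temperature radiates more than a uniform one, which is the effect being described.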
R Graf,
Yes, the amount of latent heat per kg of air increases exponentially with temperature at constant RH, but the sensible heat increases linearly. The consequence is as I stated above.
Radiation is measured at the TOA, but it doesn’t come from the TOA. For every given wavelength there is an effective height of emission (also the height of maximum absorption of incident radiation). That height is where the optical path is equal to one measured from the top down. In the window from 8-14 μm, a large fraction of the outgoing radiation at the TOA is emitted directly from the surface. I recommend you purchase a copy of Grant Petty’s A First Course in Atmospheric Radiation, $36 direct from the publisher. The relevant figure, 7.10 on page 188, can be viewed on Amazon using the ‘look inside’ feature.
Another place to start would be the series Visualizing Atmospheric Radiation at Science of Doom. Part 1 is here.
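For the "optical path equal to one" idea, a toy R sketch (my own idealization: a single well-mixed absorber falling off exponentially with an assumed 8 km scale height, not any particular gas or band):

H <- 8                                    # assumed scale height, km
tau_above <- function(z, tau0) tau0 * exp(-z / H)   # optical depth above height z
# Effective emission height: where the top-down optical depth equals 1;
# if the whole column is optically thin (tau0 < 1), emission comes from the surface
z_emit <- function(tau0) ifelse(tau0 > 1, H * log(tau0), 0)
z_emit(c(0.5, 5, 50))                     # 0, ~12.9, ~31.3 km
tau_above(z_emit(50), 50)                 # check: equals 1 at the emission height

In a transparent window the emission to space comes from the surface, while in a strongly absorbing band it comes from high (and cold) in the atmosphere, which is the qualitative picture behind the figure DeWitt mentions.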
R Graf,
Another useful tool found on the net is the Archer MODTRAN site. You can quickly calculate long wavelength emission spectra from 100-1500cm-1 looking up or down from any altitude. Reciprocal centimeters or wavenumber is effectively frequency and is standard for plotting IR spectra.
DeWitt, thanks for the learning references. I will check them out, but I knew and agree with all that you wrote. (I meant 1.8μm and 2.9μm, not "nm", for CO2’s highest relevant absorption bands.) I understand the large buildup of latent heat in tropical ocean atmospheres, but my point is that above the condensation level it’s all back to sensible heat. I agree the closer the effective radiation height for 8-14 μm is to the condensation level, the higher the radiative efficiency, as the atmospheric temperature is raised at just the point where its radiation can effectively escape the atmosphere. But even with the CO2 12μm band impeding that escape, the efficiency still benefits from the localized nature of the thermalization. My point is that if models are dispersing this heat from a thunderhead over a 100-square-mile grid cell, the physics is not correct. I am asking if anyone knows how models handle this.
R Graf,
If the grid points are a certain size, then basically everything has to be spread over that size. That includes heat/mass flux associated with deep convection in clouds, rainfall, turbulent winds, lifting of flows over mountains (even the mountains themselves). Everything. You can incorporate corrections for the fact that a certain phenomenon is only covering some fraction of the grid cell, but that’s it. The physics are not “correct” in some absolute sense, but we already knew that when we decided to use a model.
R Graf, DeWitt, et al.
You may be interested in a blog entry I just posted:
Temporal and Spatial Variation of Radiative Forcing. You can see that the RF at the tropopause is much lower in the polar regions than at the Equator. On the other hand, the RF at the surface is roughly the same as the tropopause for the polar regions ( around 2 W/m^2 ) but is near zero in the tropics.
.
Also, the value for RF, at least as defined, appears to be 4.1 not 3.7 W/m^2.
.
Arctic Amplification seems to have many mothers, but in the post you can see how RF may or may not be one.
R Graf (Comment #135884)
March 29th, 2015 at 4:47 pm
"Has anyone here ever heard about a Planck effect differential related to the homogeneity of the global TOA grid?"
We have a “Plonk” effect in Australia, especially since winning the world cup in cricket.
"For example, 10C higher puts out more energy at the equator TOA than the energy lost from that 10C lowering at the polar TOA, due to the non-linear black body radiance-to-temperature curve. The more evenly heat is dissipated, the less efficiently it radiates. If tropical convection cells can transfer great amounts of latent heat into sensible heat at the tropical TOA in supercells that are smaller than the model grid resolution, could this perhaps be unaccounted-for outgoing heat?"
You cannot put out more heat than comes in, so for every supercell putting out three times as much energy, the surrounding areas will have to put out correspondingly less.
Another way to look at it is that it cannot stay hot for very long, as it is not a heat generator, just a heat transporter. The heat it has is a localized build-up which should have dissipated elsewhere to radiate out. So other areas are already colder and will radiate correspondingly less heat.
I hesitate to engage, as my views on heat in equaling heat out have been trashed numerous times; they preclude any build-up of heat in the system [missing heat, for example].
No doubt the surface atmosphere can get hotter with CO2 increase in that surface atmosphere, whether the rest of the system heats up is a moot point. As a solid planet at our distance[s] from the sun we can only sustain our type of atmosphere in a limited range.
No runaway superhot atmosphere, there is not enough heat in.
Oliver (Comment #135898)
“If the grid points are a certain size, then basically everything has to be spread over that size.”
.
For sure. Which brings us back to model parametrizations for sub-grid critical processes, especially clouds… formation, lifetime, albedo, rain potential, and influence of human aerosols on cloud properties.
.
So long as models remain… err… arbitrarily parameterized… on multiple critical factors for sensitivity, their rational credibility will remain low. Andy Lacis’s recent YouTube video (his recent presentation in Germany on climate sensitivity) is a perfect example… Paraphrasing:
Reminds me a bit of the next to last scene in the Wizard of Oz. No matter how many times you say it, it does not impact reality: clouds are most likely net neutral to negative in feedback. Otherwise, we would be seeing the very rapid model predicted warming we have not been seeing.
SteveF, I have been reading a CE thread from 12-2010 on radiative transfer, and it seems Jeff Glassman says there are models of the micro-dynamics that are used to create the parameters. It is a high-level discussion with Vaughan, Pekka, DeWitt and others. http://judithcurry.com/2010/12/21/radiative-transfer-discussion-thread/
The GHE theory looks sound but there’s a flaw in it or in the models, clearly one or the other.
R Graf
The issue of cloud feedbacks is separate from the influence of GHGs on radiative heat transfer. Clouds are both opaque to all infrared wavelengths and highly (though variably) reflective in the visible to UV wavelengths. A cumulus cloud at noon in the tropics is strongly cooling because it reflects so much solar energy. The same cloud at midnight is warming, because it reduces radiative loss through the "atmospheric window". All cloud behavior is "sub grid" in scale and completely "parameterized" behavior. The size of potential error in cloud parameterization is almost unlimited, no matter what Andy Lacis says.
SteveF (Comment #135908)
Your comment seems to ignore the point of a sub-grid parameterization, which is that it reduces the error relative to having no representation at all of (missing) sub-grid processes.
Oliver,
The road to Hades is paved with good intentions. Of course the sub-grid processes are designed to improve accuracy, and I was not suggesting otherwise. The issue is that all such parameterizations add degrees of tuning freedom, moving the model away from being "fundamental physics" based and toward being "kludge" based. The broad range of climate sensitivities diagnosed by different models tells us there is a lot more than physics that goes into the control of sensitivity in the models. My frustration is the apparent reluctance of the "modeling community" to come to grips with reality: whatever the models diagnose for sensitivity, they are so far from measured reality that in their present state they are very unlikely to provide useful guidance about climate sensitivity. I keep thinking that the modelers will re-tune the kludges to better match reality… but it is not happening. Makes me think of Einstein’s working definition of insanity.
oliver (Comment #135911)
“… it reduces the error relative to having no representation at all of (missing) sub-grid processes.”
When science puts out a finding of extreme political implications, especially when it happens to support the political goals of the power regime, there should be an expectation to provide not only extraordinary proof but extraordinary transparency. This "trust us, we’re scientists" approach is fine for 99% of scientific findings. The 1% is when we have to surrender huge powers to government, as in energy.
The claim of understanding of the temperature regulation knobs of our planet needs to be proven in models that not only have the power to predict but do so on a scientifically sound basis at EVERY step. If there is one step that uses a fortune cookie the whole enterprise fails.
My solution, and that of any non-scientist, would be to make micro models of the micro-dynamics, prove them, plug the results into mid-size models handling the area of typical grid points, prove those against observation, plug the data in to fill grid points of the global models, and wait 5 years. All the while one could continue to study paleo, volcanic emission vs. forcing vs. response vs. recovery, etc.
In medicine the precautionary principle is followed in evaluating validation protocols before taking action with the patient. In climate science the precautionary principle seems to be turned around as a dispensation from that requirement due to importance and urgency. It is just my opinion here that we do in fact have time to do the science right.
Re: R Graf (Comment #135922)
And that’s exactly why I said earlier that simply enforcing energy balance of the aggregate system programmatically would probably not be the right way to do this. You probably want the model to conserve energy on its own, or else you want to know why. You don’t want to just change it and pretend it never happened.
You might be surprised: we scientists do make models of sub-grid processes and "validate" them against observation as best we can! I don’t know why 5 years should be the exact waiting period, but anyhow it usually takes much longer than 5 years for a particular scheme or parameterization to gain acceptance.
And there are people doing all these things! 😉
Oliver,
But they seem to be doing so at great cost and little or no meaningful results. As long as the climate crisis paradigm is the pre-determined answer all of that hard work is more like make work.
R Graf,
If one considers the complexity of the climate system it seems more likely that the CO2 control knob as *the* control knob is not much more than magical thinking.
SteveF:
At least with boundary layer meteorology, the sub-grid scale models are constrained to match known physics and observations. Echoing Oliver’s point, the process of spatial discretization introduces systematic errors in the estimate (your underlying physics equations have been effectively modified by the discretization process), and in general, it is a much larger error to simply ignore sub-grid scale effects.
Unlike climate science, in boundary layer meteorology there is a plethora of empirical data which the models are required to match. So there is very little freedom left to tune the model, nor is there much motivation to do so. After all, if you argue something that is empirically wrong, it is just a matter of time before other people figure it out.
The way I see it, sub-grid scale models with tunable parameters amount to little more than a characterization of the null-space of the problem (in this case, the portion of the model that is not constrained by empirical measurements or fundamental physical relationships).
So the range of solutions compatible with known theory and measurement simply represents a statement of our uncertainty in the solution, and the free parameters of the SGS model are just a way to explore that uncertainty.
If I have any criticism of climate modelers, it is they spend too little time exploring the full range of solutions compatible with known measurements and theories, and it’s probably the case their bias affects the range of the parameter space that they actually study.
What that means in practice is the reported variability in model solutions probably understates the total uncertainty in the models, or at least biases the central value of that estimate towards the direction of the bias of the researchers. This is a problem for people trying to interpret the variability in models as a measure of their uncertainty.
At the moment though, it’s not obvious to me the entirety of the problem with the apparent bias in ECS is researcher bias in SGS parameter selection. It may be simply that the SGS models don’t accurately reflect the underlying physics, for which the only remedy is higher resolution models.
My guess is, in fact, they are getting models that are systematically "too hot" because they’re not accurately representing all of the continuum physics via a too-coarse grid resolution.
Washington, April 1 (AF) –
The Federal Emergency Management Agency (FEMA) announced today the release of the Local Information for Emergencies (LIFE) chip. The LIFE chip is inserted subcutaneously near the ear, and upon activation by authorities, will automatically tune to the government information channel, and announce government directives. All citizens will be expected to have these devices implanted by the end of the year. Justin Fun, the FEMA spokesperson, explained that the chip’s purpose is safety. “Think of the children. If you love your children, you will give them LIFE,” said Fun.
Fun further explained that there will be no criminal penalty for failing to comply, as that would exceed the government’s constitutional authority. However the government will impose a $1 million per capita tax, as well as a $1 million credit for those with a LIFE chip implanted. “As we know, all taxes are constitutionally valid,” Fun added.
Winston Smith, a reporter, noted that the chip is also capable of transmitting information and thereby might be used to spy on American citizens. Fun replied that a Presidential appointee would ensure that this did not occur. He directed that Mr. Smith be led from the briefing room by Secret Service personnel and be provided with a demonstration of the effectiveness of Administration procedures.
HaroldW (Comment #135928)
I see no links or references so I assume this post is your original thought. It hits a bit too close to what could occur given the past few years of increasing government control for the MSM to publish it – even as an April Fools comment.
Carrick (Comment #135927)
“If I have any criticism of climate modelers, it is they spend too little time exploring the full range of solutions compatible with known measurements and theories, and it’s probably the case their bias affects the range of the parameter space that they actually study.”
As I get more involved with this area of climate science I tend to see this lack in the same light. Also, there is a lack of analytical detail in some of the papers I have read.
Carrick,
"My guess is, in fact, they are getting models that are systematically 'too hot' because they’re not accurately representing all of the continuum physics via a too-coarse grid resolution."
.
Well, if so, then a simple test of the hypothesis is to compare short runs of two models with different grid sizes: Model A with 3 * 3 * 3 = 27 times more three-dimensional cells than model B; initialize both to the same starting conditions and run them both for a short time (a few years of "model world" time). If the fine-grid model is capturing the basic processes any better/more accurately, then the two should have clear differences in how they evolve and how they respond to seasonal forcing.
.
But I suspect the parameterized processes that are important (cloud evolution, optical properties, lifetime, rain formation, etc.) are so far below grid scale that they can’t be modeled even with a 10-fold reduction in grid scale. We are probably stuck with parameterizations of key processes indefinitely. Which is why I strongly favor empirical estimates of sensitivity.
SteveF,
IIRC, the current models blow up at finer resolution. It’s not just a matter of lack of CPU cycles. A restricted higher resolution sub grid model with the external boundaries defined by a coarser grid GCM is an exercise in futility, considering the near total lack of regional skill demonstrated by current models.
Kenneth Fritsch (Comment #135929): “I see no links or references so I assume this post is your original thought.”
Original, yes. Thought, not so much.
Oliver: “And there are people doing all these things! ;)”
As I believe most here are reading from scientific training and professional experience, my comments are all in full appreciation and support of science. In every other field I can think of, I can read a scientific journal article or even a news article and trust there was diligence against bias and that the conclusions are likely valid within the stated claims. My alarm sounded when I read a WSJ article last year that was 180 degrees opposite the MSM reports. I then heard Mark Steyn on Fox mention he was being sued for daring to speak out against mainstream climate science. This is when I started taking time out to look closer at what was going on. I learned of Climategate from Steynonline four months ago. There has not been a day since then that I have not learned a new and disturbing piece of information. I was physically shaken by the unabashed bias in the M&F paper and its reporting. One can read the Max Planck press release and ask oneself what is going on here: "The claim that climate models systematically overestimate global warming caused by rising greenhouse gas concentrations is wrong," says Jochem Marotzke. And the release was made even worse when the scientific editors of the largest online science news daily reprinted it, stating: "Climatologists have been fairly correct with their predictions."
I think we all agree the problem is not "bad people" making bad science. It is only that the positive feedback loop is letting many scientists who should know better feel that bias is justified. Bias is never justified.
If the climate scientists out there feel that I should have nothing to fear, it is not because they believe the IPCC assessment is based on science. I can now see for myself that the statement we hear from the president and the media that "the science has been settled" is not true. Individuals who state otherwise are not equivalent to those who claim the Holocaust was a big lie. You must be aware as a scientist that the public may be mostly ignorant, but they are not the enemy. Truth is never an enemy. The ends rarely justify deception. Even successful black ops like the 1953 Shah of Iran coup are not successful in a longer historical context. America’s finest were involved in the Shah affair, including Norman Schwarzkopf Sr. and Kermit Roosevelt. Group-think was the problem.
Re: DeWitt Payne (Comment #135932)
Well, one obvious problem is that many parameterizations in current models are scale dependent, so if you change the model resolution then you must also change the parameters. There’s nothing “wrong” with that, it means that the simplified physics at scale X were no longer a reasonable approximation at X/4 or whatever.
That’s kind of a blanket statement. How high-resolution is your fine mesh model, how big and good is your coarser grid, and skill for what? The degree of “futility” is going to be along some continuum, dependent on the answers to those questions.
Re: Carrick (Comment #135927)
You explained it much better and clearer than I did, Carrick. Cheers!
DeWitt:
While it is true that certain solution methods for hyperbolic equations such as those governing GCMs (notably explicit methods) are unstable and become more so as you improve the resolution of the model, AFAIK there is no blanket reason one would expect "the current models" to blow up at finer resolution. So I think this is an exaggeration on your part of the problems with the current art in this case.
As Oliver pointed out, when you increase the resolution, there is a certain amount of tuning that is required to recover the same physics that you had with lower resolution models. Unfortunately, in complex models, some of this tuning of the physics is done on a cut-and-try basis.
Now the run time increases by roughly a factor of 10 for each factor of two improvement in resolution (link), which means that it becomes exponentially more difficult, given the same computer speed, to perform the parameter tuning as you’ve increased the resolution.
In any case, historically there has been steady progress in improving the resolution of the models, with the current state of the art being about 0.1°x0.1° (ocean/ice) and 0.5°x0.5° (land): this particular model would be GFDL CM2.6.
I am aware of papers that suggest that models become unstable at higher resolution. One that I’ve seen quoted is Grotch and MacCracken 1991. I don’t think it’s accepted that this result is anywhere universal, nor does the steady progress made by modelers indicate there is any veracity to that speculation.
There are very good physics based reasons for wanting to improve the resolution of the models. This isn’t like we’re just improving the precision of the answer, the physics we’re able to capture with the model changes as the resolution improves.
So, this is definitely a case of size does matter.
SteveF, as I mentioned above, the trade off is a bit worse than just the resolution to the 3rd power.
My guess, from my own model experience, is you’ll see the model gradually become less sensitive with higher resolution, simply because you’ve greatly increased the number of modes that can get excited by your external forcing (so the forcing gets repartitioned over a larger number of modes as the resolution increases).
At some resolution, and I don’t know that anybody can put a finger on it yet, you will cease to see substantial improvements in the climate models as you continue to increase the resolution of the model. So what resolution is needed is itself an empirical question.
For regional scale problems, as you are probably aware, you can use the GCM output to drive mesoscale models that allow you to study regional scale climate science (this is called "downscaling"). For the GCM output to be useful for boundary value solutions to mesoscale weather models, again there’s going to be some minimum resolution requirement that would have to go into this sort of application.
And similarly, the mesoscale output can be used to drive e.g. boundary layer meteorology models. This is the application that I am most familiar with here.
So it’s not the case fortunately that we have to in general capture all scales in a single model.
Oliver–thanks!
‘AFAIK there is no blanket reason one would expect "the current models" to blow up at finer resolution.’
The resolution issue is this. Explicit solutions of Navier-Stokes have to resolve sound waves in the horizontal. That is how near-incompressibility is enforced. So there is a Courant-type limit that says a sound wave can’t cross more than a certain fraction (less than half, I would expect) of a grid cell in a time step. There may be ways of improving that a bit. You can reduce the cell size as much as you wish, but you have to reduce the time step in proportion, which hurts.
Sound takes about 5 min to travel 100 km.
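To put numbers on that Courant-type limit, a minimal R sketch (the 340 m/s sound speed and the Courant number of 0.5 are my illustrative assumptions, not anyone's model settings):

c_sound <- 340                        # m/s, rough speed of sound
dx <- c(100e3, 50e3, 25e3)            # horizontal grid spacing, m
dt_max <- 0.5 * dx / c_sound          # allowed time step, s, at Courant number 0.5
round(dt_max)                         # ~147, 74, 37 s: halving the cell halves the step

That extra halving of the time step on top of the 2^3 increase in cell count is where a roughly 2^4 cost per doubling of resolution comes from.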
So, is there any results-based parameterization allowed? Or are model parameters limited by what can be reproduced in regional-scale models? If yes, are the regional-scale models prohibited from tuning? Is there a protected chain of data custody? I think this is what everybody outside the modeling community wants to know.
Nick,
Doubling the resolution in three dimensions would then require 2^4 times as many calculations for the same modeled period. Maybe Carrick’s factor of ten per doubling of resolution means one dimension (probably vertical) is not increased as much as the others.
Carrick,
A resolution of 0.1 angular degree is still a cell more than 10 km on a side in the tropics. The behavior and properties of clouds are controlled by processes that take place on a scale of tens of meters (or maybe less!). So while reducing grid scale may help, crucial properties will remain parameterized, not calculated. The wide range of diagnosed sensitivity among the models, with virtually all runs of all models greater in warming than measured reality, is a confirmation that there remain important factors which are introducing inaccuracy, and the current implementations are almost certainly biased warm.
.
As I said above, my frustration is the apparent insistence by modelers (and many others!) that the models can make reasonable predictions of future warming, when a rational comparison with reality says just the opposite. IMO, empirical determinations are the way forward if we want to have reasonable estimates of future warming.
It is not clear from Taylor, Stouffer, and Meehl (2012) whether ENSO simulation is an intrinsic character of the model physics or the result of a random signal generator program. On page 494, 2nd col., it states: "…historical runs are initiated from an arbitrary point of a quasi-equilibrium control run, so internal variations (even if they were perfectly predictable) would not be expected to occur at the same time as those found in the observational record…"
On page 495 they write about down-scaling. In the last paragraph they advise against taking the output of a grid cell as useful for regional forecast for crops. Ahh shucks. 😉
I just searched AR5 Chapter 9 Flato and Marotzke for ENSO and it is mum on the point of how models actually simulate ENSO. If the models produced ENSO as an emergent manifestation of physics I would think they would be crowing about it, and it would be independent of the historical initiation point as it would be organically unpredictable. I notice the papers talking about needing to remove the ENSO signal in order to assess CO2 forcing. If it was artificially added, why put it in?
Carrick,
Perhaps I overstated, but the point I was trying to make is that it is unlikely that you could take any of the current models and simply run them at a finer resolution without changing anything else, as suggested above by SteveF, and get reliable results.
I still think downscaling at the current state of GCM development is pointless.
Before I can get very excited about the potential of future higher resolution climate models, I would want to understand how the current CMIP5 models provide such different estimates of GHG and aerosol forcings and why there is not a more critical view of these differences in the science literature and why some kind of evaluation is not available to determine, at least, whether some of these models are way off the tracks. Why do scientists and modelers talk about distributions of the results of a large ensemble of models where there is good possibility, in my view at least, that some of these models are simply wrong?
The recent paper of Marotzke and Forster is a prime example of an effort to create sufficient (and unaccounted for, in my view) noise in the CMIP5 model results in order to avoid talking about differences in model results. The authors make an obviously true statement that a 15 year period is too short a time to see statistical significance between model and observed temperature trends. That proposition does not address what happens when one goes to longer time periods for this comparison, where the effort to compare the deterministic parts is not befogged by novel methods that, while having a better chance of getting published in a prestigious journal, allow some very subjective assumptions to be used in the methods and conclusions.
R Graf (Comment #135947)
I think the authors are merely pointing out that while the cyclical appearing changes in the modeled climate can occur as chaotic noise, the models cannot time those changes to coincide with the observed cycles. That noise is what makes a 15 year comparison of model and observed climate difficult to reach statistical significance.
A better comparison is the deterministic part of the observed and modeled climate which after all is influenced by the generation of GHGs and aerosols and as such is the part of the climate signal that man has the most apparent control over.
By the way, there’s this guy Nick Stokes who wrote an article on Navier-Stokes for climate models here. Given his background, I wouldn’t argue with that guy over what is required for exact Navier-Stokes solutions.
But anyway, climate models are a long way from trying to solve Navier-Stokes exactly. Which is a good thing, because exact solution of Navier-Stokes remains intractable for most (all?) large-scale real world problems ATM.
In climate models, near-incompressibility is enforced by the Boussinesq approximation, which basically assumes that density perturbations are small compared to the local average. This approximation eliminates acoustic modes in the system.
But even when there aren’t stability issues, there are still issues with numerical convergence. You certainly have to reduce the time step when you increase the resolution to get an improvement in accuracy from the increase.
So the cost is necessarily going to be worse than 2^3 for a doubling of the resolution of the model, but, depending on the approximations made, not necessarily as bad as 2^4.
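In symbols, the step Carrick describes is the standard textbook one (stated generically here, not as any particular model’s formulation): write the density as $latex \rho = \rho_0 + \rho'$ with $latex |\rho'| \ll \rho_0$, retain $latex \rho'$ only in the buoyancy term $latex -g\,\rho'/\rho_0$, and treat the velocity field as divergence-free, $latex \nabla \cdot \mathbf{u} = 0$. Dropping the compressibility removes the acoustic modes, which is why the sound-speed Courant limit no longer applies.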
Carrick,
Your link to Nick’s insights is dead.
SteveF, try this.
I agree…improving the resolution isn’t likely to directly help you with cloud physics. But as I pointed out above, if you want to use GCMs to drive mesoscale weather models, you’ll get more accurate boundary conditions with the higher resolution GCM model. This in turn could yield improved models of the mesoscale physics for use in the GCMs.
Anyway, there are plenty of examples of atmosphere-terrain interactions and ocean-shore interactions that improved resolution would help with. Just looking at the effect that reducing the resolution has on surface boundary conditions should give one pause before advocating against improving the resolution of the models.
R Graf, as far as I know, all of the models produce an ENSO. You can see this in e.g. Figure 9.5.
The issue is whether the ENSOs are “Earth-like”.
AR4 has nice coverage of that issue.
Thanks Carrick. I noticed the AR4 writes as if ENSO being successfully organically created is an assumption. The problem highlighted is improving predictive power (which is still zippo, apparently) and determining the degree of knowability by which to gauge success. But this still does not explain why, as quoted in my prior comment, the CMIP5 ENSO was necessarily out of phase in successive realizations due to being "initiated from an arbitrary (control) point." My question could be answered by testing individual models to see if they produce a different ENSO based on physical chaos even with identical starting conditions.
I have finished estimating the CIs for TCR determination from the 1% CO2 experiment used with 32 CMIP5 models. My approach involved obtaining a model for the CO2 model response series and in turn modeling the residuals of that model with an ARMA fit. A degree-2 polynomial model fit to the CO2 series had R^2 values of 0.903 to 0.997 with an average of 0.989. The ARMA fits – which varied between (1,0), (1,1) and (2,0) – in turn produced residuals that were independent series as determined with a Box.test p.value that averaged 0.80. I used the residuals with 10,000 Monte Carlo simulations per model to estimate the CIs. Those CIs ranged from plus/minus 0.14 to plus/minus 0.49, with an average of 0.23.
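For anyone who wants to see the shape of that workflow, here is a minimal R sketch on a synthetic series (my own illustration of the steps just described: degree-2 polynomial fit, ARMA fit of the residuals, Box test, then Monte Carlo CIs; it is not the actual code or data, and the numbers are made up):

set.seed(1)
yr <- 1:140                                    # 140 years of a notional 1% CO2 run
temp <- 0.02 * yr + 5e-5 * yr^2 +              # smooth forced response (made up)
        arima.sim(list(ar = 0.5), n = 140, sd = 0.1)   # AR(1) "weather" noise

fit <- lm(temp ~ poly(yr, 2, raw = TRUE))      # degree-2 polynomial fit
res <- residuals(fit)
arfit <- arima(res, order = c(1, 0, 0), include.mean = FALSE)     # ARMA(1,0) fit
Box.test(residuals(arfit), lag = 10, type = "Ljung-Box")$p.value  # independence check

# Monte Carlo: add simulated ARMA noise to the fitted response, refit, and take the
# implied warming at year 70 of the run (roughly where TCR is evaluated)
sims <- replicate(10000, {
  noise <- arima.sim(list(ar = coef(arfit)["ar1"]), n = 140, sd = sqrt(arfit$sigma2))
  f <- lm(fitted(fit) + noise ~ poly(yr, 2, raw = TRUE))
  predict(f, data.frame(yr = 70)) - predict(f, data.frame(yr = 0))
})
quantile(sims, c(0.025, 0.975))                # 2.5%-97.5% CI on the TCR-like metric

With real CMIP5 series the same steps would be applied per model, with the ARMA order chosen per model as described.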
My interest in TCR CIs was to determine whether the extent of the residuals from my regression correlations of a deterministic Singular Spectrum Analysis (SSA) trend using the RCP4.5 scenario time series versus TCR was limited by the limits within which TCR could be determined from the 1% CO2 series. My intent here was to relate the SSA trends to a deterministic variable of the CMIP5 climate models and gain credibility for the SSA-derived trend indeed being of deterministic origin. While the regression correlation for that relationship is around 0.70 for the period 1975-2014, there remains considerable variation around a linear relationship, even after considering the TCR CIs, which strongly suggests that the negative forcing in these models in the historical period varies considerably from model to model. See the first link below. The only negative forcing of sufficient magnitude for that offset is from aerosols. I have attempted to find proxies for the variation that might be imposed by the models’ varied reactions to the same level of aerosols used in the historical part of the RCP scenarios and have failed to this point in my investigation to accomplish that.
I wanted to look in more detail at how the estimated CMIP5 model temperature trends in the 1975-2014 period varied according to the predicted levels from the 1% CO2 experiment and the actual temperature series for that period. To that end I used the degree-2 polynomial model of the CMIP5 model temperature series that would correspond to the 1975-2014 time period and compared it to the SSA-derived trend of the actual CMIP5 model series for that same time period. As it turns out, and according to my calculations, historical GHG CO2-equivalent concentration levels in the atmosphere did increase at only a very slightly greater rate than 1% per year during that time period, and thus the 1% CO2 experiment fits that period well. In the second and third links below are plots of the 1% CO2 CMIP5 model series showing the polynomial degree-2 model fit.
The fourth and fifth links below show the actual CMIP5 model SSA derived trend lines in red with the 1% CO2 model trend lines in black for the period 1975-2014. I offset the trends so that the difference was zero at 1975.
http://imagizer.imageshack.us/v2/1600x1200q90/538/0jGZcm.png
http://imagizer.imageshack.us/v2/1600x1200q90/540/uspAdt.png
http://imagizer.imageshack.us/v2/1600x1200q90/673/OOfqVk.png
http://imagizer.imageshack.us/v2/1600x1200q90/910/5XTc50.png
http://imagizer.imageshack.us/v2/1600x1200q90/540/HX7vvD.png
It is obvious on viewing these plots that the CMIP5 models vary greatly in their response to the same levels of GHGs, and that the adjustments made in attempts to return the temperature reasonably close to the observed levels require large variations in responses to the same concentrations of negative forcing agents. What is disconcerting to me is that the offsetting trend of negative forcing needed to reduce the 1% CO2 temperature trend to that of the model historical/RCP4.5 temperature series trend would by itself have to produce a steady downward temperature trend over the 1975-2014 period. That does not appear realistic to me.
I don’t have time for a detailed response. Stokes’ post is OK but rather elementary. The problems in CFD (colorful fluid dynamics) are broad and deep. Isotropic eddy viscosity (Boussinesq) is just the most obvious problem. The literature on this subject is unreliable.
Of course, it was aerosols in the early 1970s that were thought to be driving us into an early ice age. Nice work, btw. The logical flaw that you re-highlight here is that all the models, with different physical assumptions, all arrive near the same endpoint. For M&F that is confirmation of EGHE. For skeptical observers it is confirmation of endpoint based anchoring. It will be interesting if Stevens gets traction; then all the models are wrong.
Update: Stevens recants any challenge his work poses to alarm.
“The logical flaw that you re-highlight here is that all the models, with different physical assumptions, all arrive near the same endpoint.”
I should have better expressed in my post above that the levels of GHGs and aerosols used by all the CMIP5 models were the same, and the differences in deterministic trends in the models are due to differences in the models. It is those differences that are of primary interest to me currently, and they make me wonder why the modeling world appears to want to minimize those differences.
If the attempt is made to find and analyze the deterministic part of the model temperature series, the model differences can no longer be shielded by the chaotic noise in the model series. And this points to another difference in the models, and that is noise. Notice that the noise from the 1% CO2 experiment had an excellent ARMA fit that varied among (1,0), (1,1) and (2,0). I did not report it above, but the residuals from the ARMA model (from the CMIP5 model 1% CO2 series) had a standard deviation that varied from 0.052 to 0.136 and averaged 0.092.
I should also note that CIs from the above post were from 0.025 to 0.975.
Ken, do you think that models with high CO2 sensitivity also had high aerosol sensitivity? And, if so, one must ask if that is because the radiative physics for those two mechanisms go hand-in-hand, or is it by chance, or is it anchoring?
UAH anomaly for March, 2015 is 0.26°C. The ‘pause’ continues.
RGraf,
The B. Stevens genuflection to CAGW hysteria is neither surprising nor technically meaningful; it is a political apology.
.
When Nic Lewis used the Stevens estimates to rework Lewis & Curry (2014), showing that high sensitivities become very improbable if Stevens is right, I commented that Stevens must feel very secure in his job. Apparently he is less secure than he thought. I can only imagine the pressure he must have come under from the climate jihadists to issue such a silly disclaimer. The wagons remain tightly circled, the mullahs inside still issue fatwas, and apostates are still quickly punished. It is a sick and sorry field. Defunding is the only answer.
DeWitt,
Roy is not to be trusted; there is not and there never was a ‘pause’. Just ask a reliable authority like….. um… Tamino!
Kenneth,
” that the levels of GHGs and aerosols used by all the CMIP5 models were the same and the differences in deterministic trends in the models are due to differences in the models. ”
.
Do you know what the assumed aerosol history was?
"Ken, do you think that models with high CO2 sensitivity also had high aerosol sensitivity? And, if so, one must ask if that is because the radiative physics for those two mechanisms go hand-in-hand, or is it by chance, or is it anchoring?"
Please explain how model sensitivity to GHGs is related to model sensitivity to aerosols.
I am currently looking at papers describing aerosol and all negative forcings used in some CMIP5 models. All I have to date is that the aerosol forcings and/or levels appear too level after 1980 or so to affect model trends, while I have seen a paper that indicates that an ensemble average of all negative model forcings might have the right trending to make a difference in the 1975-2014 period. Individual model negative forcing trends varied a lot and were not all that smooth.
I had a reply about Stevens very much in the vein of that posted by SteveF, but it appears to have gotten lost in the cloud. I am thinking that it might have been from my use of the word that rhymes with mathetic that I used to describe Stevens’ need to write what he did.
SteveF (Comment #135966)
I believe the reference is from this paper linked below.
http://www.atmos-chem-phys.net/10/7017/2010/acp-10-7017-2010.pdf
Kenneth,
Thanks. If I am reading correctly, what was prescribed was aerosol loading and distribution (and other things). Each model can assume whatever forcing level the modelers want (e.g., the influence of aerosols on cloud properties). There appears to be no meaningful constraint on the models’ aerosol effects. Which is what I had figured was the case.
Carrick, Stokes’ post is OK but superficial. Boussinesq isn’t the least of the problems. We have a paper on some of this coming soon. Resolution sometimes helps, but there are lots of other large sources of error.
After all the jargon and appeals to authority, Schmidt’s ultimate justification for GCMs was that every run gave a "reasonable" climate. No better example of confirmation bias can be imagined.
Trenberth: “A jump is imminent.”
RGraf,
Trenberth is 70+ YO. I’m betting on his retirement arriving long before his predicted catastrophic warming does. Like James Hansen, he is long past his ‘best by’ date. It seems a bit of ‘a travesty’ he can’t see that now would be a good time to cash in his chips.
David Young, the Courant-Friedrich-Levy condition that Nick was discussing relates to stability, as you know, not numerical error.
Nick is correct, if you leave in the acoustic sector, you end up with a stability condition where $latex c \Delta t$ scales as $latex \Delta x$ and you’d expect a $latex 2^4$ increase in run time for doubling of spatial resolution. The Boussinesq approximation is applied to remove acoustic modes from the system, and helps with stability of the resulting model and with the accuracy of the reduced primitive equations.
Resolution is a big deal for these large scale simulations because you won’t capture the mode structure associated with atmospheric-ocean oscillations with too coarse of a grid size. This in turn IMO is a big deal—accurately modeling how much mechanical energy gets exchanged between the atmosphere and ocean is critical in accurately modeling the transient climate response of the system. Also, you have a rough boundary condition, too crude of a boundary also leads to unrealistically large interactions (the blocks associated with the Alps end up having large vertical gradients, even if you are using a finite element method with smoothing).
Unless you’ve examined the effects of the improved resolution (fortunately you don’t need to run the model for 150-years simulation time to study this question…there are researchers who use grid sizes down to 6.5km x 6.5 km for example to study this problem), you can’t really say which approximations associated with sub-grid scale physics are dominating your error budget.
It’s been my experience that researchers who are used to smoothly varying (averaged) wavenumber spectra and smooth boundary conditions often underestimate the effects of resolution on the accuracy (and even stability) of the resulting model solution, where there are typically discrete numerical modes present.
There’s a discussion here of the effects of improved resolution in an atmosphere-ocean general circulation model. Staniforth 2012 is another paper which discusses the issues. One of the more surprising things about that paper is I didn’t realize many models are still using latitude-longitude grids. That doesn’t exactly strike me as state-of-the-art.
SteveF (Comment #135970)
Yes, the CMIP5 models use the same levels of GHGs and aerosols but can have very different resultant forcings from those same levels. For aerosols some of the models can account for direct effects only or others an additional first indirect effect and some even account for a second indirect effect. See link below for details.
http://iopscience.iop.org/1748-9326/8/2/024033/article
This linked paper is of additional interest in that while it talks about the effects of aerosols on the recent warming hiatus, the representation it gives the aerosols shows for the most part a leveling of the model forcing due to aerosols. Also, the paper uses Ensemble Empirical Mode Decomposition (EEMD) to consider multi-decadal scale non-linear trends. EEMD provides results much like those obtained from Singular Spectrum Analysis (SSA). The residuals and the second-to-last mode extracted from EEMD are often considered an estimate of the secular trend. It kind of works in reverse of SSA.
In my analysis of the 1975-2014 time period, using the 1% CO2 CMIP5 experiment as a proxy for the resultant model temperature trend due to GHG forcings, the comparison with the SSA-derived trend from the historical model runs leaves a required monotonic, smoothly downward trend to bring the 1% CO2 series in line with the historical series. While the amount of compensation required from model to model varies significantly, all models require some compensation of nearly the same shape. The paper linked above would strongly indicate that aerosol negative forcing is pretty much flat during the 1975-2014 period. Further, aerosol negative forcing appears to be the only forcing agent with a sufficient magnitude to provide the compensation noted here, but if it is flat it cannot provide the downward trend required.
Now there could be some complex interactions between GHG and aerosols effects on forcing that are buried deep in the black box model contents which in turn would question the wisdom of using the 1% CO2 experiment to determine an emergent climate model variable like TCR. Another explanation could be that the various model runs like the 1% CO2 runs and the historical runs were tuned differently. I would certainly assume and hope that is not the case.
I plan to continue my search here for a reasonable explanation and will provide links and excerpts to sources giving historical and model scenario aerosols levels during the 1975-2014 period.
Ken, it seems to me they need to provide the keys to the black box somewhere. After all, there is no other way to transparently confirm there is real physics at every step. Without such continuity the programmer is essentially a puppeteer. Van Ypersele, the heir apparent to chair the IPCC, says the "IPCC mandate is to assess, in the most rigorous, inclusive, and transparent way…" I’d be happy to write Van to suggest publishing all model parameters.
FYI, here is the most downloaded modeling paper, in full pdf: atmospheric grid configuration on the climate and sensitivity of the IPSL-CM5A
DeWitt Payne (Comment #135717)
“The system is not at equilibrium or at least not thermodynamic equilibrium. It’s at steady-state. ”
Steady-state thermodynamics implies entropy flux. It so happens that a paper by Christopher Essex (1984) was the first to show that the Earth’s energy budget includes entropy. Wu, Liu, and Wen (2011) point out that differing theoretical assumptions on TOA entropy can make up to a 0.23 W m-2 K-1 difference. In the troposphere, latent heat transfer involves entropy on about that same scale. One wonders if the sub-grid assumptions are properly accounting for tropical storm heat and also the increased entropy.
Reading the IPSL-CM5A paper (which is no picnic), I am surprised that higher resolution leads to warmer SST. My thought has been that less homogeneity leads to better radiative efficiency at the TOA (about 0.8 W m-2 for 10K of mixing at around 300K, for example).
Here is the Essex 1984 Paper
The most comprehensive published data analysis of the GHG and aerosol/non-GHG forcing and net result comes from the Forster link below:
http://onlinelibrary.wiley.com/doi/10.1002/jgrd.50174/full
In that link in Figure 2 it is shown that an ensemble mean of CMIP5 model series from historical non-GHG forcing is nearly flat in the 1975-2005 period, while the resultant temperature ensemble mean has a slight downward slope. The ensemble slopes of the trends of the historical and historical GHG CMIP5 models shown in that same figure appear very nearly the same for the period 1975-2005.
By contrast, the slope that I derive from the CMIP5 1% CO2 experiment (used for the purpose of estimating model TCR values), and that I used as a proxy for the GHG-resultant temperature series, is greater than that of the corresponding model historical (1975-2005) and RCP4.5 (2006-2014) series for all models, with a difference that varies significantly from model to model. That is a major disconnect with what the linked paper shows and motivates me to determine the reason why.
Here is the link to the presentation view-slides from the recently concluded ‘climate sensitivity’ workshop in Germany:
http://www.mpimet.mpg.de/en/science/the-atmosphere-in-the-earth-system/ringberg-workshop/ringberg-2014/talks.html
.
I have looked at almost all of them and concluded the workshop was inappropriately named. A more accurate name would have been: “Why we know Nic Lewis is wrong and ECS is probably very high”. The explanations (and contortions) are at once interesting and amusing, if only because there are so many of them, and because many conflict with each other. One presentation actually noted that an empirical energy balance approach was sensible only when we were less confident of what the true ECS is, but now, with more confident (model based?) estimates, we have to discount such observationally constrained estimates. Bjorn Stevens suggested the most likely range for ECS is 2 – 3.5C, discounting very high sensitivity values, which probably made some at the conference unhappy.
.
The message was loud, clear, and consistent in most presentations: “No matter how the measured temperature evolves over the coming decades, we will NEVER accept that climate sensitivity is unlikely to be high.” Of course, reality does not care about what those folks think.
SteveF, I’ve always had a few issues with Nic Lewis’s (and others) work:
First, I don’t see how you can accurately measure ECS (which is the response you would only see in the system after a few thousand years) using a much shorter interval of measurement.
Secondly, I think it’s a big mistake to rely on temperature data from the 18th century. Shortening the interval to a period where you have more accurate estimates of global mean temperature just makes issues with the first point even more awkward to address, but I would guess the error associated with including the 18th century data greatly inflates the true (rather than admitted) uncertainty in the measurement.
Note that neither of these are arguing that Lewis’s numbers are too low. Rather I’m saying the method itself is not a reliable method for estimating ECS.
I think the best you can do is nail down some variation on TCR (but since the system is nonlinear, the frequency composition of the signal you are measuring the response to affects your estimate of TCR too).
Carrick,
I think it is fair to say that any estimate of multi-thousand year response is bound to be quite uncertain. But that may not be so important: atmospheric CO2 level will likely be falling by near 2100, and equilibrium response will never be even approached. Some estimate of how much temperatures will increase between now and when the ocean has absorbed most all the CO2 from fossil fuels is all that is needed. Nic’s efforts make perfect sense to me because they underline just how inconsistent GCMs are with measured reality. The temporal scale Nic works with (century) is in the range that ‘matters’ for developing reasonable public policies. IMO, the kinds of draconian energy policies which are commonly justified with scary GCM projections will likely do more damage (economic, political, human) than warming from CO2.
.
I am 100% for empirical sensitivity estimates, since they can be no worse than the silly GCM projections (which are comically bad), and are arguably much better. If I had to bet on what temperatures will be in 50 years, then a bet based on empirical sensitivities is how I would bet. While neither of us has to worry about a personal wager of 50+ years duration, humanity will have to. I just want the bet to be a rational and prudent one.
Ken, I believe that Paul_K and Clive may be still working on statistical analysis of the models and M&F results. I would email Clive or post your question on his M&F post here.
There is a new article from Sandia Lab regarding improving model time resolution here.
SteveF, I agree with your comments that the shorter response time is more useful for policy, and that the empirical estimates are likely more reliable than the model-based ones.
I would suggest that a better interval to choose is 1910-2015 (I think 1910 is currently about as far back as you can safely go before you start getting significant coverage bias).
My main criticism is that it’s just not a good metric to compare against the models’ estimates of ECS: the two numbers aren’t inter-comparable.
(As it happens, there’s room to criticize how the models arrive at their estimate of ECS, since they don’t typically run until they’ve achieved equilibrium themselves.)
Carrick (Comment #135994) :
“I think it’s a big mistake to rely on temperature data from the 18th century.”
I don’t think any of Lewis’ analyses go that far back. Lewis & Curry(2014) uses a base period of 1859-1882. Otto et al. (2013) uses a base period of 1860-1879.
HaroldW, sorry I meant the 1800’s, not 18th century. I don’t think it’s justified for anybody to use data from that era due to the poor geographical coverage.
In my continuing effort to make sense of the Forster et al. (2013) paper linked above I have been looking at the following statements in the paper:
” [23] A model’s historical temperature trend depends on forcing, climate sensitivity, and ocean heat uptake. As aerosol forcing and climate sensitivity are uncertain, modeling centers could be modifying their controlling factors to reproduce the observed globally averaged 20th century temperature trends as well as possible. There was some evidence of a trade off between climate sensitivity and forcing in CMIP3 and earlier generations of models [Kiehl, 2007; Knutti, 2008]. Figure 7 reproduces Figure 1 of Kiehl [2007] for CMIP5 models and finds considerably smaller correlation than in either the CMIP3 analysis of Knutti [2008] or the older model analysis of Kiehl [2007] that are reproduced as blue and red symbols, respectively. The R2 fit in CMIP5 models is slightly smaller than in CMIP3 models and is not significant. The green squares show a subset of the CMIP5 models that match the observed century-scale linear temperature trends (0.57 to 0.92 K increase over 1906–2006, IPCC [2007]). This subset reproduces the Kiehl [2007] fit almost perfectly. The CMIP5 models that are not in this grouping tend to have a larger positive AF compared to those that match observations and thereby overestimate the observed temperature trend. Variation in the magnitude of the CO2 AF affects both the AF in 2003 and the equilibrium climate sensitivity (ECS). Figure 8 shows that both AF in 2003 and the 2xCO2 AF are positively correlated with α [see also Andrews et al., 2012b]. This means that models with smaller climate feedbacks (i.e., higher sensitivities) tend to also have smaller CO2 AFs which would act to converge models towards similar Historical temperature responses.”
What that excerpt is saying – in a not so direct manner – is that a compensation in the forcing of a CMIP5 model for its sensitivity could, in effect, be evidence that a modeling group was using controlling factors in forcing to push the resultant temperature series toward the observed. If all the climate models’ adjusted forcing from 1861-2003 (AF) is plotted against equilibrium climate sensitivity (ECS), the correlation gives an R^2 = 0.19 with a negative correlation, but if one uses the subset of those models that have long term trends of global mean temperature reasonably close to the observed (90% uncertainty range), the correlation yields an R^2 = 0.74, again with a negative correlation that would indicate a compensation. All this means, in my view, is that there is evidence of compensation in the forcing by the modeling groups to push the temperature series toward the observed, and that those modeling groups that did not make the effort, or did not do it sufficiently well, nearly all had temperature series with higher trends.
In another analysis the authors show that for all these models an excellent correlation can be obtained for AF/(alpha+kappa) versus the long term temperature change to 2003, with an R^2 = 0.76. The alpha and kappa values can be derived from the 1% CO2 experiment, or alternatively, as was the case with the above correlation, the alpha values can be derived from the abrupt 4XCO2 experiment, which were “very similar” to those from the 1% CO2 experiment according to the authors. This correlation would not be unexpected, since in the M&F paper, discussed here and on other blogs, the authors start with the relationship (alpha+kappa)*(deltaT) = deltaF, or in the case of the current regression, deltaF/(alpha+kappa) = deltaT.
My question now is what these correlations, taken together, tell us about the potential source of tuning for those models that come closest to matching the observed temperature trends. I think the AF/(alpha+kappa) correlation [taken together, alpha + kappa is referred to as the climate resistance and called rho] is merely confirming that alpha and kappa values taken from different CMIP5 experiments, and then applied to predicting temperature change for the historical temperature series, show consistency in the emergent constants alpha and kappa across these different model runs. The upshot of that consistency would be an indication that any tuning in the model runs is probably the same.
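For anyone who wants to see what that kind of check looks like mechanically, here is a minimal sketch (my own, in Python, using made-up ensemble numbers rather than the actual Forster et al. table) of regressing each model’s deltaT on AF/(alpha+kappa) and reporting an R^2. It is only meant to show the bookkeeping, not to reproduce their 0.76.

# Minimal sketch (not M&F's or Forster et al.'s actual code): regress each model's
# 1861-2003 temperature change on AF/(alpha+kappa), i.e. deltaT ~ deltaF/rho,
# using made-up ensemble values purely to illustrate the calculation.
import numpy as np

rng = np.random.default_rng(0)
n_models = 23
alpha = rng.uniform(0.6, 1.8, n_models)    # feedback parameter, W/m2/K (illustrative)
kappa = rng.uniform(0.4, 1.2, n_models)    # ocean heat uptake efficiency, W/m2/K (illustrative)
AF    = rng.uniform(1.2, 2.8, n_models)    # adjusted forcing 1861-2003, W/m2 (illustrative)

x = AF / (alpha + kappa)                   # energy-balance prediction of deltaT
dT = x + rng.normal(0.0, 0.1, n_models)    # "actual" model deltaT = prediction plus scatter

# ordinary least squares fit and R^2
slope, intercept = np.polyfit(x, dT, 1)
resid = dT - (slope * x + intercept)
r2 = 1.0 - resid.var() / dT.var()
print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r2:.2f}")

Swapping the published per-model AF, alpha and kappa values in place of the random draws would show the kind of fit the paper reports.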
None of this sheds much light on what I have seen as the apparent disconnect, noted above, between my results and the results in this paper’s Figure 2 plot, where I do not see the parallel association of the Historical and Historical GHG series that the paper shows. I need to take a more refined look at the paper’s data and my data to determine whether that might explain some or all of the difference that I currently think I see.
Carrick (#135999),
No problem; I suspected that’s what you meant but wasn’t 100% sure.
Lewis&Curry(2014) show an alternate calculation with base period of 1930-50 and final period of 1995-2011. TCR best estimate comes out the same, but (naturally) the pdf is wider.
Ken, you are such a nice guy in your analysis of Forster et al., who throws CMIP3 under the bus in a couple of sentences and then tries to contort some mumbo jumbo about CMIP5. If there were massive improvements in the physical understanding and dynamics, Forster would have mentioned it. Instead, the bottom line is that CMIP5 consists of those that tuned and those that overshot.
CE’s current post has some really good bottom line blurbs on a bunch of modeling and sensitivity papers and conferences.
http://i62.tinypic.com/2nqb12p.jpg
.
With age wisdom sometimes comes…. at least outside climate science.
http://www.vancouversun.com/technology/Conversations+that+matter+Earth+actually+growing+greener/10944052/story.html
In order to obtain the detail required to better determine how each model handles, ignores, or only partially handles the tuning required to get temperature trends within the 90% uncertainty range of the observed temperatures, I am in the process of downloading the Historical GHG-only, Historical Natural-only and Historical-all (extended from 2005 to 2012) series for the CMIP5 models. I have had a bit of a problem locating these data but have found them here:
http://cera-www.dkrz.de/WDCC/ui/EntryList.jsp?acronym=ETHhg
http://cera-www.dkrz.de/WDCC/ui/EntryList.jsp?acronym=ETHhn
http://cera-www.dkrz.de/WDCC/ui/EntryList.jsp?acronym=ETHhx
My eyes tend to glaze over when I see all the series presented together in spaghetti diagrams. Those plots could be presented just as efficiently by showing the individual models, and that is what I intend to do here after I finish. There are more data now available than I was led to believe from reading papers that were published perhaps before more data became available.
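For what it is worth, the “one panel per model instead of spaghetti” idea is just bookkeeping. Here is a minimal Python sketch, assuming (my assumption, not the CERA archive’s actual layout) that each model’s global-mean series has been exported to a CSV named <model>.csv with “year” and “tas” columns.

# Minimal sketch of one-panel-per-model plotting; file names and layout are assumed,
# not the archive's actual format.
import glob
import pandas as pd
import matplotlib.pyplot as plt

files = sorted(glob.glob("historicalGHG/*.csv"))
ncols = 4
nrows = (len(files) + ncols - 1) // ncols
fig, axes = plt.subplots(nrows, ncols, figsize=(12, 3 * nrows), sharex=True)

for ax, path in zip(axes.ravel(), files):
    df = pd.read_csv(path)
    ax.plot(df["year"], df["tas"], lw=0.8)
    ax.set_title(path.split("/")[-1].replace(".csv", ""), fontsize=8)

# hide any unused panels
for ax in axes.ravel()[len(files):]:
    ax.set_visible(False)

fig.tight_layout()
plt.show()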
Ken, I’ve felt the only reason one would use a large ensemble mean as the reference for analysis, as M&F did, would be to create a basis for generalization about the fitness of CMIP5. Clearly, this was a political aim, not a scientific one. If you can create a statistical diagnostic that confirms Forster (2013), that would be interesting. Then it would be neat if you could identify a signal for ENSO with some sort of transform. Canceling out the ENSO, a model’s output becomes a line with temporary reactions to volcanoes. You would be providing a tool for skeptical analysis to see things like the degree to which ENSO is an organic (emergent) feature. As I said before, if it is an emergent property there is no reason for it to be dependent on initial conditions, as stated in Taylor (2012).
R Graf (Comment #136007)
My purpose in my current project is to gain insight into how the models are tuned in attempts to emulate the observed temperature series. The chaotic noise only gets in the way of that mission. That noise is also related, I think, to why spaghetti graphs are so popular for portraying model results. The CMIP5 Historical temperature series for some models with multiple runs show very different trends, even over longer time periods. Other models with multiple runs show trends with much narrower ranges, even for shorter time periods. It raises the question of why these models/model runs would ever be averaged together. The individual models are very different and tell completely different stories.
On the other hand, the Historical GHG-only series appear to have, thus far in my investigation, much narrower ranges of temperature trends for models with multiple runs. I think that I can get much more insight into the deterministic part of the model outputs (and tuning) by looking at the 1% CO2 runs and GHG-only runs, and perhaps the Historical runs of those models that produce multiple runs with narrow trend ranges.
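A minimal sketch of the “trend spread across runs” check, again assuming hypothetical per-run CSVs named <model>_<run>.csv with “year” and “tas” columns; the point is just the bookkeeping of fitting a trend per run and looking at the min-to-max range per model.

# Trend-range-per-model sketch; file naming is an assumption for illustration only.
import glob
from collections import defaultdict
import numpy as np
import pandas as pd

trends = defaultdict(list)   # model name -> list of trends (K/decade)
for path in sorted(glob.glob("historical/*_*.csv")):
    model = path.split("/")[-1].rsplit("_", 1)[0]
    df = pd.read_csv(path)
    sel = (df["year"] >= 1975) & (df["year"] <= 2012)
    slope = np.polyfit(df.loc[sel, "year"], df.loc[sel, "tas"], 1)[0]
    trends[model].append(10.0 * slope)   # per-year -> per-decade

for model, t in sorted(trends.items()):
    print(f"{model}: n_runs={len(t)}, trend range = {min(t):.3f} to {max(t):.3f} K/decade")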
I also have in the interim concluded that while the long term trends in the Historical CMIP5 models might well use aerosol effects for tuning to better match the observed, I do not think that is the case for the period 1975-2014.
Carrick, I was on vacation and am just now back. I reread Nick Stokes’ post you link to and it contains nothing that is not at least 35-40 years old. It goes back to Chorin’s projection method for the Navier-Stokes equations. So it’s correct as far as it goes. Implicit methods are now getting more and more standard for a lot of reasons, but obviously they have vastly superior stability properties. And Nick is right that preconditioning becomes the main issue and is where a lot of recent progress has happened. Newton-Krylov is one version of implicit methods we helped get started in the 1980’s.
As to the resolution issue, improved resolution will help up to a point and atmospheric simulations are probably very coarsely resolved at the moment. My skepticism is twofold: 1. resolution will not help you at all if your eddy or numerical viscosity is large. 2. Just because the solution looks more realistic is not a good norm to use. You need some kind of careful analysis or assessment of numerical and modeling errors to draw conclusions. Perhaps that is in the Stainforth paper, as I haven’t had time to look at it yet.
In aeronautical flows, there is a huge increase in complexity in going to RANS or even LES and increasing your boundary resolution, but the answers are not that much better than simple boundary layer theory with some extensions to separated flows, at least for many flows of strong interest. That’s what our recent paper was about.
Hi David,
I agree that Nick’s version could use some updating. 😉
LES works very well for surface layer meteorology. In this case we have a plethora of data to compare against (and the tuning of the subgrid models to match turbulence data almost certainly contributes to that).
If I remember right, the resolution requirement here is set by the nocturnal simulations, because the mean size of the vortices in the source region is on the order of centimeters across, rather than meters. So you need a pretty fine scale to accurately resolve the nocturnal boundary layer.
For GCMs, I go mostly on what people in the field say. For example, in the introductory lecture to this series on global atmospheric models, Peter Lauritzen says:
I think this is one of the clearer expositions of why resolution matters for these models.
I can’t imagine that downscaling is going to work very well, if you aren’t accurately modeling the core physics. So even if your cloud physics model is seriously broken, for example, I think you have no hope of getting that right until you have adequate spatial resolution for your models.
You say:
Yes, this is one thing that really bothers me with Gavin’s expositions. He shows pretty pictures where you clearly have “earth like” climate, without apparently spending any energy testing quantitatively whether his models are reproducing Earth’s climate or some other non-terrestrial planet.
I could imagine that you could find planets with ECSs anywhere from 1.0 to 5.0 °C/doubling, for example. But simply because you qualitatively get the same physics you see on the Earth doesn’t mean your ECS number has anything to do with the Earth’s.
So for me, we’re back to the question “Does the right answer really matter to these people?”. Because if it did, I think they’d be a bit more vigorous in their efforts to test their models.
Carrick (#136010):
Well, Nick is convinced, anyway, because models produce the correct (earth-like) patterns.
My view is that TCR/ECS values just aren’t verifiable at this point, simply because we don’t have sufficient high-quality data yet. Signal-to-noise too low. Gavin Schmidt’s Ringberg presentation seems to throw some dust on the entire concept of “a” TCR, in that he suggests that different factors provoke different relative responses. Makes one wonder if the entire attempt to convert all climatic causes to a “common currency” of an equivalent forcing in Wm-2 is a step in the wrong direction.
David Young,
“it contains nothing that is not at least 35-40 years old”
.
Not too much of a surprise, Nick is no adolescent (nor am I). 😉
.
But yes, out-of-date research conclusions can be misleading, especially in an immature field that changes rapidly (like say… climate science).
“Nick is no adolescent (nor am I)”
Whereas David is forever young.
I was actually writing an explanatory note on GCM’s and their basis in Navier-Stokes, not a research paper. Some GCMs are 35-40 years old. They deal with low Mach number flows of no particular boundary complication (re flow). The science of 35-40 years ago could handle that quite well. Advances since may have improved speed. GCM designers have focussed on the things that are peculiar to their problem – the rapid change in the vertical relative to horizontal (terrain coords etc), air/sea interaction, radiation and the many things that require sub-grid scale modelling. I don’t think Boeing-style CFD advances would help much there.
However, I was glad to be pointed in the direction of papers like this. Sorry I missed it at the time. I became a fan of the inexact Newton-Raphson viewpoint following the later papers of Bramble.
Nick, I just said at ATTP that when I retire I want to imitate you or Nic Lewis and really get into climate. Imitation is the sincerest form of flattery. Old methods can indeed be very good. 🙂
David Young
You made a very good run at aTTP.
Yes, some astronomers don’t know that without an external forcing (engines) there is no lift and you never get off the ground. Oh well, we all have our blind spots. 🙂
Nick, I don’t know what you mean by “Boeing style” CFD. It is all Navier-Stokes simulation and the principles and phenomena are essentially the same. Gerry Browning and I were just discussing his view that the lack of adequate planetary boundary layer resolution was a big issue for GCM’s. Turbulent boundary layer modeling is a staple of aerodynamic CFD and it’s gotten quite good in the last 30 years or so. There is a good paper by Keyes and Knoll in JCP, from about 2004, on Newton-Krylov methods and how they can be applied over the range of fluid simulations, including atmospheric modeling.
I’ve heard a million times that “my problems” are special or require expert knowledge or “specialist judgment.” Usually, it means “I don’t really understand what you are saying and I don’t have time to look into it.”
One thing that is disturbing for example is lack of discrete conservation in atmospheric models. It should be easy to fix and it almost always improves accuracy.
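For readers who have not met Jacobian-free Newton-Krylov methods, here is a minimal SciPy example (a toy of my own, not anything from a production CFD or GCM code) that solves a small nonlinear boundary-value problem without ever forming the Jacobian explicitly; real solvers add the preconditioning David mentions.

# Toy Newton-Krylov demo: solve u'' = exp(u) on (0,1) with u(0)=0, u(1)=1 on a uniform grid.
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    # discrete second derivative with Dirichlet boundary values folded in
    n = len(u)
    h = 1.0 / (n + 1)
    d2u = np.zeros_like(u)
    d2u[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / h**2
    d2u[0]    = (u[1] - 2*u[0]) / h**2            # u = 0 at the left boundary
    d2u[-1]   = (u[-2] - 2*u[-1] + 1.0) / h**2    # u = 1 at the right boundary
    return d2u - np.exp(u)                        # nonlinear source term

u0 = np.zeros(50)                                 # initial guess
sol = newton_krylov(residual, u0, method='lgmres', verbose=False)
print("max |residual| at solution:", np.abs(residual(sol)).max())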
David,
I did say “Boeing-style CFD advances“. I’m thinking of advances in, say, adaptive meshing. Or transonic flow, or fluid-structure interaction, or tip vortices etc. Not much relevant to a low mach number flow with a regular grid.
Maybe turbulent boundary layers. They have had a lot of attention in GCM development, because so much else goes on there as well – gas exchange, surface heat loss etc. And people were thinking about them a lot 35-40 years ago.
Nick,
I remember Pielke Sr. and others suggesting that models don’t handle surface boundary layers very well and that could explain some of the discrepancy between modeled lower tropospheric temperature trends and satellite/balloon measurements. IIRC, Pielke Sr. was saying that the models’ lower tropospheres are tuned (more or less) to the surface record, but the surface record is expected to be different from the lower troposphere because of boundary layer effects. Are you saying that GCMs actually have an accurate treatment of the surface boundary layer?
Yes Nick, You may be right about grid adaptivity even though it might be worth exploration. Adaptivity might be perhaps more relevant for surface features like mountains.
Low Mach number flow is a different animal as you say. There are some incompressible CFD codes out there but not many. There is some recent work however on an “all speed” version of the SUPG finite element method.
I’m actually more a fan of intermediate complexity models these days. They can be dramatically faster and very accurate.
SteveF,
” Are you saying that GCMs actually have an accurate treatment of the surface boundary layer?”
I haven’t analysed how well it works. But they seem to have thought about it in the usual way, and rather thoroughly. Here is a CAM 3 discussion. One issue, common enough in CFD, is the parameterization of surface roughness. But even insofar as that is imperfect, it’s hard to see how it would invalidate a GCM’s ability to represent global climate.
Nick, This looks like an eddy viscosity method but I need more time to look at it in detail. True resolution of a turbulent boundary layer requires 30 points across the boundary layer. Computation of the boundary layer height seems to be part of the method and this is very difficult and perhaps ill posed unless that height is one of the variables in an integral boundary layer method.
I doubt this atmospheric method has anywhere near the history or test-data basis of modern aerodynamic boundary layer methods, but I could be wrong. There has been a lot of progress here, by the way, over the last 20 or so years, with the SA and SST models and Drela’s integral boundary layer method showing dramatic improvements over older algebraic models.
Both sides might benefit from some discussion of methods.
Nick, I asked 6 questions of Nic and 7 questions of Mosher on CA here. Care to weigh in? I think they kinda miss you on CA.
Ron,
On PDO, I don’t know any more than Wiki. It’s derived as an empirical orthogonal function, and various observed effects have been associated. AFAIK, there’s no theory that gives an a priori 60 year frequency.
I don’t have any special knowledge of cloud feedback.
I think currents vary GMST by moving heat around. El Nino brings warm water to the surface. Extra poleward transport does affect TOA loss, but only secondary to temperature variation. You might say that it has a more lasting effect.
I don’t know about talk at Ringberg. As to the pause, I think it is mainly just a run of La Ninas, which will probably revert to normal. Maybe there is a PDO effect. I think 2015 will be warm, probably more than 2014. The pause will continue to fade.
The other Qs were snipped.
Nick, thanks so much for your candor. I get snipped on CA due to dangerous questions a lot. Here they are if you’re game:
1) Does the 3.7 W/m2 refer to the increase in heat required to maintain radiant flux steady state with a doubling of CO2 ? If not, what exactly is it a measure of? Does the consensus assume CO2 forcing is interchangeable with aerosol forcing and solar forcing? Realizing there is a radiative entropy flux from the incoming vector plane of high energy solar photons to the multiplied number of outgoing, low energy, omnidirectional photons, is that entropy flux part of the CO2 calculation?
2) Whereas CO2 has small near-infrared absorption bands and whereas solar radiation includes a small portion in these bands, and thus the TOA will be more thermally responsive to incoming radiation to that extent, is that part of the 3.7? If not then isn’t the 3.7 number only referring to night and polar TOA?
3) Whereas the high angle sun produces a huge temperature inversion in the stratosphere wouldn’t the increased effective height of the 15um band be just as radiant, if not more-so, at the higher altitude? Is this part of the 3.7?
4) If CO2 has the above caveats and perhaps others that together create an amplified effect in polar regions vs. tropical shouldn’t this be very relevant since one could have low TCR and still have glacial melt?
5) If glacial melt weakens the AMOC, and tropical heat transport weakens with it, is it not possible to have increased TOA radiative efficiency (due to poor polar transport), NH cooling, and increased albedo? Is this not a type of feedback, especially as it is a favored theory for ice age initiation?
6) Should ECS be assumed constant at all GMSTs? What about in the depth of the last ice age 23ka bp?
7) If CO2 measured at prior peaks of 320-350 ppm in periods of GMST similar to today’s, how did the GMST slip into ice ages dozens of times? Granting that the Milankovitch cycles are responsible for eventually throwing the switch in many cases, there is still not a compelling explanation for a mechanism fitting the paleo-reconstructions.
8) What is your feeling on the odds that the Maunder Minimum (1645-1715), the lack of sunspot activity, stands as evidence of diminished insolation, and that this was responsible for the Little Ice Age, if in fact there was such a dip?
R Graf, I understand why Steve does it to keep things on topic but it can be annoying too. Sometimes blogs are means of communication too even if off topic.
Ron,
These are good questions, but I’d better take it in sections. Others may want to chip in.
“1) Does the 3.7 W/m2 refer to the increase in heat required to maintain radiant flux steady state with a doubling of CO2 ? If not, what exactly is it a measure of? Does the consensus assume CO2 forcing is interchangeable with aerosol forcing and solar forcing?”
The AR4 SPM says (below Fig SPM 2):
“Radiative forcing is a measure of the influence that a factor has in altering the balance of incoming and outgoing energy in the Earth-atmosphere system and is an index of the importance of the factor as a potential climate change mechanism.”
They are usually back-calculated from the performance of a climate model. They are quantified as the amount of extra (or less) incoming radiation you would need to have the same effect.
Fig SPM2 is an important figure, and does say that you can add different forcings.
” Realizing there is a radiative entropy flux from the incoming vector plane of high energy solar photons to the multiplied number of outgoing, low energy, omnidirectional photons, is that entropy flux part of the CO2 calculation?”
I wrote about the entropy budgeting here. But it’s really a comparison of states rather than identifying a physical flux. The equilibrium energy outflux is determined by solar; the entropy flux depends on the emission temperature, or more particularly its variation. More CO2 raises the TOA emission level (colder); that means the surface has to emit a greater share (warmer) through the atmospheric window. That’s one version of GHE. If you export heat at uniform temp, that gives maximum entropy outflow (for that flux), so more CO2 does reduce entropy export.
But I think it is a consequence, not a part of the calc of CO2 effect.
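A small numerical check of Nick’s “uniform emission temperature maximizes entropy export for a given flux” point, using the blackbody entropy flux (4/3)*sigma*T^3 per unit area and holding the area-mean OLR fixed (numbers are illustrative):

# Compare entropy export for a uniform emitter vs a warm/cold split with the same mean OLR.
sigma = 5.67e-8   # W/m2/K^4

def entropy_flux(T):
    return (4.0/3.0) * sigma * T**3          # W/m2/K

OLR = 240.0                                   # W/m2, fixed area-mean outgoing flux
T_uniform = (OLR / sigma) ** 0.25

# split emitter: half the area warmer, half colder, same area-mean OLR
T_warm = 280.0
T_cold = (2*OLR/sigma - T_warm**4) ** 0.25

print("uniform:", entropy_flux(T_uniform))                       # larger
print("split  :", 0.5*(entropy_flux(T_warm) + entropy_flux(T_cold)))  # smaller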
Ron G
“2) Whereas CO2 has small near-infrared absorption bands and whereas solar radiation includes a small portion in these bands, and thus the TOA will be more thermally responsive to incoming radiation to that extent, is that part of the 3.7? If not then isn’t the 3.7 number only referring to night and polar TOA?
3) Whereas the high angle sun produces a huge temperature inversion in the stratosphere wouldn’t the increased effective height of the 15um band be just as radiant, if not more-so, at the higher altitude? Is this part of the 3.7?”
I think these are based on the idea that RF (forcing) is directly measured. The details are set out in AR4 here. I wasn’t quite right saying that they are all back-calculated; AR4 says that GHG RF is calculated directly from radiative models (though I think Forster at least back-calcs). They give some cases that do bear on your Q – eg the stratospheric-adjusted RF in Fig 2.2.
On 3) I don’t think it is the high sun that produces the inversion exactly – it is because above the tropopause there is more heat being absorbed (UV and O3) than emitted as IR (GHGs are sparse). The effective emission layer is still below the tropopause, and its rising will be to a cooler level.
Thanks Nick! And, everyone is welcome to chime in.
The Pope is going to call for climate action, foiling the bible clutching deniers, (so says the consensus). Papal Encyclical to Urge Global Action on Climate Change
The inversion in the stratosphere has nothing to do with sun angle. It’s all about UV absorption by oxygen and by ozone that is produced by the UV absorption by oxygen. The result is that the ratio of the average absorption coefficient of incoming solar radiation to the average emission coefficient from CO2 and ozone (mainly) is greater than one. In the troposphere, the ratio is less than one. When the ratio is less than one, the temperature decreases with altitude and increases with altitude when the ratio is greater than one.
Again, I highly recommend that you purchase a copy of Petty’s A First Course in Atmospheric Radiation. This would answer the question in detail.
Folks, the inversion strength of the tropical tropopause is largely a function of the greater thickness of the tropical troposphere:
http://www.goes-r.gov/users/comet/tropical/textbook_2nd_edition/media/graphics/vertpro_temp_prof.jpg
I recently replicated the process of calculating RF and found it a very worthwhile exercise. One of the realizations is that RF is and will always be a hypothetical immeasurable value. That doesn’t mean it’s not occurring, but it is, by definition, for an atmosphere at rest ( which doesn’t happen ) and any amount of warming tends to restore balance. Warming is an indirect effect, so we can think of some RF occurring, but only to the extent that we could distinguish CO2 forcing as occurring from other processes ( albedo ) that we can’t measure well enough ( neither absorbed solar, nor OLR ).
The process of arriving at the 3.7W/m2 is:
1. calculate the radiative flux for three atmospheric profiles ( temperature, humidity, clouds ) with well mixed pre-industrial CO2
2. calculate the radiative flux for the same three atmospheric profiles but with twice the well mixed pre-industrial CO2
3. take the difference in the net up/down short/long wave radiance at the tropopause between the two cases, for each of the three atmospheres
4. use the average of these three values for the global mean
This yields ( depending on the assumptions of the cloud amounts ) approximately the 3.7 result. When one reproduces the same process for a much finer resolved atmosphere ( 1° by 1° ), one gets a value of about 4.1 W/m2, so the 3.7 value may be an underestimate. However, it may also be an overestimate of what actually occurs because the calculation is before the atmosphere moves things around and dynamics can alter the efficiency with which earth radiates.
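For comparison, and not the line-by-line calculation described above: the widely quoted simplified fit deltaF ≈ 5.35 ln(C/C0) from Myhre et al. (1998), itself a fit to radiative transfer calculations, lands on essentially the same number for a doubling.

# Simplified CO2 forcing fit (Myhre et al. 1998); a cruder route to ~3.7 W/m2 than the
# profile-by-profile calculation described in the comment above.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate CO2 radiative forcing in W/m2 relative to c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(560.0))   # doubling: ~3.71 W/m2
print(co2_forcing(400.0))   # ~1.9 W/m2 relative to pre-industrial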
Radiative Forcing is formally calculated by allowing the stratosphere to radiatively equilibrate, i.e. cool off, after the step change in CO2, but no change is allowed in the troposphere. Otherwise, the downward LW radiation from the stratosphere at the tropopause would increase. The time to achieve a new steady state in the stratosphere is quite rapid, on the order of weeks to months.
So the consensus on 1) seems to be that the 3.7 is from models. The only problem is that models are not great at showing their assumptions. Nick points out that Forster found ERF to average 2.2 W/m2, and these are the models that might have aerosols over-cooling (Stevens 2015) and are still running hot despite M&F (2015).
DeWitt: “The inversion in the stratosphere has nothing to do with sun angle.” Well I would think the higher the sunlight angle the more radiation per sq meter and thus more heating of UV absorbing stratospheric/TOA ozone. Eddie’s diagram is better than the ones I’ve seen since it breaks out the polar, mid-lat and tropical plots. I would be also interested in seeing night, morning and mid-day plots. I have been truly meaning to check out DeWitt’s book as I think well-explained high level text is golden.
R Graf:”So the consensus on 1) seems to be the 3.7 is from models.”
Forster et al.(2013) examined 23 models and came up with adjusted forcing from doubled CO2 as 3.44 +/- 0.84 Wm-2. Min was 2.59, max 4.31.
That’s only approximately true. A gas does not have a surface, so the cos(θ) correction doesn’t apply the way it does at an opaque surface. You’re looking at W/m³. In fact, at low sun angle, the absorbing path length is longer, so at wavelengths where the atmosphere isn’t opaque, more radiation is absorbed than at high angle. Where it’s opaque, there is no difference. Also, at the poles during the solstice, there is more total incident daily average radiation than at the tropics, over 525W/m², because the sun is above the horizon for 24 hours.
Figure 10.7 in Petty shows heating rates in °C/day for cloud free tropical atmosphere at different solar zenith angles. The difference between 15 degrees and 30 degrees is barely visible. Even at 60 degrees, the heating rate at 30km altitude only drops from ~2.5 to ~1.5 °C/day.
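A toy numerical illustration of the path-length point (my own, with illustrative optical depths, not measured band strengths): for a plane-parallel layer of vertical optical depth tau, the fraction of a direct beam absorbed along the slant path is 1 - exp(-tau/cos(zenith)), which barely changes with angle where the band is opaque and changes a lot where it is optically thin.

# Beer-Lambert along a slant path for a few illustrative optical depths and zenith angles.
import numpy as np

zeniths = np.array([15.0, 30.0, 60.0, 75.0])            # solar zenith angles, degrees
for tau in (0.05, 0.5, 5.0):                            # thin, intermediate, opaque
    absorbed = 1.0 - np.exp(-tau / np.cos(np.radians(zeniths)))
    print(f"tau={tau}: " + ", ".join(f"{z:.0f}deg -> {a:.3f}" for z, a in zip(zeniths, absorbed)))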
R Graf,
It’s not from GCM’s. It’s calculated by averaging the forcing calculated by line-by-line radiative transfer theory using the average of several atmospheric profiles, clear sky and cloudy. The rather wide range of forcing for doubled CO2 from the GCM’s is yet another reason to take their calculated climate sensitivities with a large amount of salt. GCM’s don’t use lbl RT or even band RT. It’s too computationally intensive.
So, I’m trying to understand this better. I’m looking through GISS ModelE2 but it’s not particularly well documented and while neat, contains lots of cryptic 6 letter variable names, typical of old FORTRAN. Do you have a good schematic or doc detailing the implementation of radiance in gcms?
” Do you have a good schematic or doc detailing the implementation of radiance in gcms?”
If you’re looking at Model E code, this older manual may help. It has only a few words on radiation, but gives references.
Here is the CAM 3 description.
Thanks Harold. I should have thought to look at Forster 2013 instead of mis-reading Nick’s referral to “Fig 2.2”. I am a little surprised that the models’ output allows for such a varied ERF, considering 3.7 seems to be the point of most agreement. One would think that other parameters would be constrained to keep it in line. Also, in studying the table I notice the huge variations in climate resistance, P, and in ocean uptake in particular, K. M&F pointed this out, but I am still confused how they claimed P showed no impact when one can see the impact by just studying the table and comparing P variances to ECS at similar AF.
I see that although cloudy sky feedback is almost a wash, clear sky feedback is still strongly positive in the models. This I assume is water vapor below the TOA impeding upward LW. I am guessing the biggest debate is whether the cloud feedback is more negative and whether the vapor leads to more clouds. This, along with the ocean rate of uptake being a complete roll of the dice, makes it easy to appreciate the overall problem of nailing down TCR and ECS.
Anyone have a favorite theory on ice age or LIA?
RGraf,
If I remember right, the most clear difference between models that diagnose high sensitivity (>3.5) and models that diagnose lower sensitivity (<2.5) is their treatment of clouds, with sensitive models setting clouds strongly amplifying, and lower sensitivity models setting clouds slightly amplifying. I think there is more consensus (among models!) for clear sky moisture feedback that sets clear sky sensitivity near 2C per doubling, though my personal guess is that the assumption of constant relative humidity with warming surface temps will turn out to be a bit higher than correct, and the canonical 2C for clear sky also a bit higher than correct.
R Graf,
Ice ages seem to be due in part to low GHG levels (compared to most of Earth’s history), combined with hysteresis from isostatic adjustment with massive northern ice sheets. The Holocene looks to me (based on the Greenland ice core temperature reconstruction) like it was coming to a close before GHGs started to rise. The temperature in Greenland in 1700 (bottom of the little ice age and start of industrialization) was the lowest it had been for more than 10,000 years. Glaciers were advancing in the Alps and elsewhere…. and ice ages are associated with lots more ice in the northern hemisphere. The Holocene had already passed the duration of the previous interglacial…… but in spite of ongoing cooling, atmospheric GHGs had not been falling (nor had they been falling at the end of the last interglacial!).
Of course changing solar intensity in the northern hemisphere is no doubt also involved, but it is albedo feedback and the slow rate of isostatic adjustment as ice sheets build that “locks in” a buildup of ice over ~100 Kyears, and leads to sudden collapse of the ice sheets and relatively brief interglacials like the holocene.
Another ice age seems improbable if CO2 is above 400 ppm.
Thanks Steve. I just read Wikipedia on ice ages. It is amazingly comprehensive, including CO2 as a prominent factor. But one will not find mention of ice core resolutions showing delta CO2 following dT by several hundred years. (But Wiki also speaks of Mann’s Nature Trick as having been meant in a complimentary fashion, as in finding a neat, innovative trick for solving a hard problem.)
Wiki says orbital cycles are thought to be favorable for extending our interglacial 50 ka, and in another paragraph it starts AGW with the start of civilization rather than industrialization. So the Wiki writers are sure to get any idea of CO2 usefulness clearly out of your thoughts.
R Graf,
If you want a critical review of the literature on ice ages that’s a bit more rigorous than Wikipedia, try the Science of Doom series Ghosts of Climates Past. There are nineteen posts so far. Part I is here.
R Graf,
First the caveat – of course CO2 is a radiative gas and has an effect, and did during its variation from the glacials.
Now the claim – the ice ages would have taken place without any effect from CO2. It ( CO2 effect ) is a thought virus for those seeking to elevate the significance of CO2.
The ice ages are not a function of global average temperature. They are a function of local (polar) summertime temperature ( freezing – ice accumulates, melting – ice recedes ) which is strongly influenced by the well known orbital variations. The fact that glaciations started some 800 years before CO2 even budged should tell you something. The Glacier Girl aircraft was found fifty years after crashing under 260 feet of ice. That’s more than 100m of ice accumulation per century. Now, the beginning of the last glacial advance may not have had the same accumulation, but if it did, for eight centuries, that’s more than a kilometer of ice – without CO2! And the orbital variations lasted closer to eight millennia, not eight centuries, so the ice accumulation would have continued ( that’s why they call them ice ages, because of the ice, not the temperature ).
It’s also important to remember how much was different. At the last glacial max, sea level was 120m lower, so the atmosphere was thicker over the oceans, and by the equation of state, a little warmer. Un-glaciated land masses were 120m higher, by the equation of state, a little bit cooler. But it didn’t end there. The glaciated land masses were as much as two kilometers higher. Like spilling milk on a table, cold air masses spilled off the high ice sheets of the north and Antarctica at very high wind speeds, which is why the dust levels pick up during glacials ( a truly nasty time ). It is the winds that came after the ice built up that caused the CO2 to decrease ( greater mixing into the cold waters that could absorb CO2 and descend to the ocean bottom ).
The best global temperature proxy we have is the CLIMAP oxygen isotope analysis:
http://upload.wikimedia.org/wikipedia/commons/1/19/CLIMAP.jpg
The warm tropical oceans are part of the ‘paradox of the tropics’. Some people argue about the gcms in terms of this, but that’s not my point. To be sure, these are proxies ( though well understood proxies ) and have uncertainty. But remember that the orbital variations don’t change the amount of sunshine at the top of the atmosphere, they just shift the sunshine load from the poles to the tropics ( and back again ). When the poles stopped receiving as much summer sunshine, which starts the glacials, the tropics received what the poles did not. The tropics received more incoming sunshine during the glacials than they did in the interglacials. Combined with the fact that the oceans were lower and the atmosphere over the oceans was thicker, the warming ‘makes sense’. It is not that tropical warming confounds our understanding, but rather to the point that the glacials were nuanced and not just a matter of global temperature variation.
We are not in a glacial advance, but we are, of course, still in an ‘Ice Age’ – Greenland and much of Antarctica are buried under kilometers of ice. It is instructive to remember how that ice persists, in spite of the fact that the Holocene optimum shined some 50W/m^2 more sunshine on the ice than present. The answer is that a mountain of ice and no ice at all are two of the quasi-stable states.
If a mountain of ice exists near the poles, the top rises a kilometer or two above sea level which lowers the average summer surface temperature to below freezing such that ice accumulates on top. This may exceed the amount that the lower edges lose. The elevation of the ice enables further ice gain.
If no such mountain of ice exists, summer temperatures are higher, perhaps above freezing and no ice accumulation occurs. The lack of elevation of the surface ensures that no accumulation takes place. Orbital variation simply changes the local level at which this occurs.
If you visit this tool, you can calculate the solar forcing of the ice ages.
I’ve done so here:
http://climatewatcher.webs.com/InsolationComparison.png
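For anyone who wants to play with the numbers offline, the standard daily-mean TOA insolation formula (the kind of thing such tools evaluate) is easy to code up. The sketch below sets the Sun-Earth distance factor to 1 and takes the declination as an input, so obliquity changes enter directly through the declination; these simplifications are mine.

# Daily-mean top-of-atmosphere insolation: Qbar = (S0/pi)*(H*sin(lat)*sin(dec) + cos(lat)*cos(dec)*sin(H)),
# with H the sunrise hour angle, cos(H) = -tan(lat)*tan(dec), clipped for polar day/night.
import numpy as np

S0 = 1361.0   # W/m2, present-day solar constant

def daily_mean_insolation(lat_deg, dec_deg):
    lat, dec = np.radians(lat_deg), np.radians(dec_deg)
    cosH = np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0)
    H = np.arccos(cosH)    # H = pi -> polar day, H = 0 -> polar night
    return (S0 / np.pi) * (H * np.sin(lat) * np.sin(dec) + np.cos(lat) * np.cos(dec) * np.sin(H))

# 65N near summer solstice: moving the declination from 22.1 to 24.5 degrees (the obliquity
# range) shifts this by roughly 40 W/m2, which is the sort of forcing being plotted above.
print(daily_mean_insolation(65.0, 22.1), daily_mean_insolation(65.0, 24.5))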
CO2 surely changes radiance, but the same forces which started the glacial advances without any change in CO2 persisted long after CO2 began to fall.
The glacials would occur without CO2 at all and the significance of CO2 is overstated.
Turbulent Eddie,
Your story about the airplane comes under what I call the “dog that did not bark”. I think it is important to recognize that there are a lot of things that should not be if there was in fact dangerous climate change occurring. A plane getting buried under tens of meters of snow in a few decades is one of those things.
Vostok ice core data show unequivocally that snow accumulates much faster on the Antarctic Plateau during interglacial periods than during glacial periods. This is expected, as the saturation vapor pressure of water decreases exponentially with temperature (the Clausius-Clapeyron relationship), and it’s much colder on the Antarctic Plateau during glacial periods.
http://i165.photobucket.com/albums/u43/gplracerx/VostokIcecoreaccumulationrateandice.png
The black dots represent the accumulation rate in m/year based on layer thickness, uncorrected for compression. Note that it takes nearly 50 years to accumulate 1 m during the current period. The pink dots are the difference between the gas age in trapped bubbles and the age of the surrounding ice. At the last glacial maximum, it took nearly 7,000 years to accumulate enough snow to seal the gas from the atmosphere. The minimum time during interglacials is on the order of 2,000 years, as it takes about 60m of snow before the trapped gas is completely isolated from the atmosphere.
Snow accumulates much faster near the coast and at lower altitudes.
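A quick numerical check of the Clausius-Clapeyron point, using a Magnus-type approximation for saturation vapor pressure over ice (the coefficients are commonly quoted values; treat them as approximate). At Antarctic Plateau temperatures the holding capacity of the air drops by roughly a factor of three to four for every 10 C of cooling, hence far less snowfall during glacials.

# Approximate saturation vapor pressure over ice (Magnus-type fit; coefficients approximate).
import math

def esat_ice_hPa(t_celsius):
    return 6.112 * math.exp(22.46 * t_celsius / (272.62 + t_celsius))

for t in (-50.0, -60.0, -70.0):
    print(f"{t:.0f} C: e_sat ~ {esat_ice_hPa(t):.4f} hPa")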
The factors start simply enough. Raising temp brings moister wind – cold allows precipitation. Accumulation is favored where they meet below freezing: the low but rising elevation of arctic coastline.
However, from here on, nowhere is the point-counterpoint more playful. Starting with CO2 (because that is the main issue, let’s face it): from the 1890s to the 1990s it was the dominant player, in theory. Although it’s more established than ever that CO2 has 3-4 W/m2 of warming power, it’s also established that CO2 followed dT, not led it. So CO2 is a positive paleo-feedback, along with ice albedo, which was always, and still is, a weak prospect for the leader. So the consensus view is that it’s Milankovitch cycles: steady, predictable and natural (unlike bad, unnatural AGHGs).
Not so fast. Ice core dT shows anything but an M-cycle fingerprint. Sure, I know the Holocene started with a peak in obliquity (24.5 deg vs. 23.5 deg today, on a 41 ka cycle), but there was no interglacial 41 ka before that, or 41 ka before that. Yes, but there was one 41 ka before that! It’s true that the last five interglacials coincided roughly with every second or third obliquity peak. There is virtually no correlation to precession or eccentricity, and it’s hard to even be sure how they would work. Does the distribution of insolation from low eccentricity give weaker summer melts or weaker winter albedo? Whichever one it is, the more the eccentricity is aligned with solstice-timed aphelion/perihelions (from precession), the more amplified the effect. But, again, interglacials are only quasi-correlated with obliquity.
The big question is why do interglacials pop out of a deep glacial maximum like a tennis ball dragged to the bottom of a pool and let go? After all, CO2 was higher 80 ka earlier at the first obliquity max, GMST was higher, and ice albedo lower. So what was overwhelmingly unleashed 12 ka bp, as well as at the other glacial recessions? What energy was banked? I have thought about the potential energy of cascading ice melt pushing polar downwelling to kick the global ocean conveyor into high gear. Eddie points out that there is more ice height, more potential energy delta over sea level, the longer into a glacial cycle. Another theory I have heard points out that the glacier bottoms can reach a super-critical liquid phase once a threshold of pressure is reached, thus making them prone to rapid movement, like being on ice skates. So glacial collapse drives excited currents, which bring an otherwise overheated thermocline to the edge of the glaciers and perhaps summer rain, leading to more collapse and self-sustaining runaway warming. Then the interglacial is vulnerable, after the ice is mostly gone and obliquity is off peak, to any cooling event (supervolcano, cosmic collision, solar minimum or a combination) to trigger a re-glaciation spiral.
Looking at DeWitt’s chart, I notice a trough at the last obliquity peak 50 ka bp. I wonder if Meteor Crater, said to be from an asteroid collision 50 ka bp, had anything to do with that. If so, we might have had the Holocene 50 ka ago. Where would we be today? Would NYC be under a glacier? (The movie rights are taken.)
Just because temperature led CO2 at the beginning of the Holocene doesn’t mean it didn’t contribute to the warming. It just didn’t cause it. Absent fossil fuel burning, CO2 is a feedback like water vapor.
The Science of Doom articles include this one on ice sheet dynamics.
DeWitt:
CO2 release from very large scale volcanic eruptions also acts as a significant forcing for climate change.
Carrick,
Yes, and whatever caused the PETM as well and possibly the wrong sort of comet. I should have said absent a major release of carbon into the atmosphere…
I suspect that given a comet impact, the release of carbon would be the least of our problems.
To follow up on DeWitt’s comment, it’s important to recognize that any feedback mechanism can turn into a forcing mechanism if that mechanism is being parametrically modulated by exogenous factors.
Put another way, just because something (more technically a component of that “something”) is being treated as an exogenous forcing here, it doesn’t automatically follow that that same “something” always acts as a forcing.
Just because temperature led CO2 at the beginning of the Holocene doesn’t mean it didn’t contribute to the warming.
Ya –
it’s just that carbon dioxide enthusiasts glommed on to the correlation because it fit the narrative they were looking for while ignoring that orbitals drove both temperature and CO2.
This is a distortion from the localized nature of the glacials – it wasn’t global temperature but local temperature during the melt season only that mattered.
Consider the start of the last glacial:
http://climatewatcher.webs.com/StartLG.png
The majority of the glacial cooling ( around 9C from the Vostok data ) occurred with nearly zero CO2 change.
This was over twelve thousand years. Over the next 100,000 years or so, further cooling of around 2C more closely corresponded with CO2 declines ( likely driven by the katabatic winds off of the ice sheets ).
Even if we chalk up the 2C completely to CO2, it is a bit player in the glacials.
Eddie, What are the details of your thoughts on CO2 ocean uptake? Why do katabatic winds or any coastal winds have more effect on CO2 uptake than other winds?
Realizing that the cold arctic waters are less gas saturated, since cooling improves dissolved gas capacity but warming improves kinetics, are tropical waters CO2 saturated? Do you think that any CO2 can get sequestered naturally by downwelling to currents that flow deeper than the super-critical point of CO2, where it would liquefy and settle into the depths? I know this is far-fetched. But such sequestered CO2 would be neat, not lowering pH at all. Of course, CO2 mixed into the depths even as a gas would be mostly ineffective at lowering pH anyway, due to dilution. It’s the thermocline that’s the worry. But if the kinetics are not favored, the reaction will be ineffectively slow.
Science future list: genetically engineer carbon-fixing algae (that don’t need iron, the current limiting nutrient) that float through life but sink upon death without decaying (or being edible), encapsulated and self-entombing. Make their coatings heavy-metal absorbent, like egg shells, to clean up mercury as a bonus.
R Graf,
Interesting ideas. I wonder about the karma of unintended consequences. Creating an algae that would leach heavy metals out of the oceans could have significant negative impacts. Concentrating algae into algae eaters far down the food chain comes to mind. As to sequestration not limited by iron, what about an algae not limited by iron becoming oceanic kudzu……
Hunter, I said non-edible. Iron is the limiting factor in oceans for life which is why steel shipwrecks start artificial reefs. I changed my mind on making the algae sink. How about they float on death but have reflective skins, raising albedo of oceans? Of course we create different strains with different growth rates and viruses unique to each so we have complete control of the knobs via kill offs before we unleash any.
We could also collect up the algae and their leach with robotic skimmers.
We also could engineer this reflective algae to grow on ice, insulating glaciers from sunlight with a reflective crust. This would take care of the sea level rise issue without freezing the buns off the people in Montreal.
Well, the glacials were much windier, as evidenced by the dust levels in the ice cores from both Greenland and Antarctica:
http://eo.ucar.edu/staff/rrussell/climate/paleoclimate/images/ice_core_graph_vostok.gif
That’s pretty amazing for Antarctica because it’s surrounded by ocean, so the global atmosphere was very dusty in the ice ages. How much windier is open to conjecture. Part, maybe most(?), of that windiness may have been due simply to the increase of the pole to equator temperature gradient ( much colder poles but slightly warmer tropics = much greater gradient = much greater wind speeds ). But part of the windiness also came because there were very high ice sheets which generated great masses of cold air. And like milk spilling off the table, these cold air masses came crashing down off of the ice sheets. There was an increase in windiness in part because there was a greater amount of topography ( the Northern ice sheets ) for the katabatic winds to flow off. LeRoux speculated about the effects of the ice sheets and the ideas are worth bearing in mind, at least as far as recalling the nature of the ice sheet topography:
http://climatewatcher.webs.com/LGM.png
Seems as if the oceans are taking up an ever increasing amount of CO2:
http://csas.ei.columbia.edu/files/2014/12/Fig.-2.jpg
And the part of the oceans that take up the most ( Henry’s Law ) are the polar regions. And this takes the CO2 to the bottom:
http://www.drroyspencer.com/wp-content/uploads/Ocean-temperature-vs-depth.png
The deeper waters probably always have a higher concentration of CO2, in part because of the propensity of the oceans to collect colder waters which contain more CO2, in part because the upper levels include the tropic warm pool near the surface, and in part because phytoplankton consume the CO2, only to deposit it in the form of fish poop, fish bones, and other detritus:
http://www.pnas.org/content/106/30/12235/F2.large.jpg
In any event, increasing uptake and flat emissions ( for 2014, anyway ) would seem to mean decelerating warming. Since the warming has been mild and since temperature ( outside of state change such as the glacials ) just isn’t that important to weather or climate, I would hope reason would prevail that neither restrictions nor sequestration schemes are necessary. A hundred years from now, no one will care about any of this.
“And the part of the oceans that take up the most ( Henry’s Law ) are the polar regions.”
Henry’s Law applies to the dissolved CO2 to the extent it is not in a carbonic acid, bicarbonate or carbonate anionic complex, whose equilibria are all pH (and to some degree temperature) dependent. Uptake saturation of CO2 is thus first determined by the pH/temperature equilibrium of the anionic complexes in relation to equilibrium with dissolved CO2 gas. Then one can apply the Henry’s Law constant for CO2 partial pressure vs. temperature.
Then you have found the limits. Kinetics involved in approaching those limits is another problem to solve.
I think it would be easier to empirically measure the results of a large experimental matrix of sea water and CO2 injected test tubes, agitated at different rates at varying temperature and pH, and plot the results. This likely has been done, but it would also make a cool HS lab or college experiment.
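Before doing the test tubes, the equilibrium side can be sketched on the back of an envelope. The following is only an order-of-magnitude illustration: the constants are rough surface-seawater values I am assuming, not a carbonate-system model, and it holds pH fixed rather than letting it respond, which is where the Revelle buffering comes in.

# Rough carbonate speciation: Henry's law for dissolved CO2, then the first and second
# dissociation constants for bicarbonate and carbonate.  Constants are approximate,
# illustrative values only, and pH is held fixed (in reality it shifts as CO2 is added).
KH = 3.0e-2    # mol/(L*atm), Henry's constant for CO2 (approx.)
K1 = 1.0e-6    # first apparent dissociation constant (approx.)
K2 = 8.0e-10   # second apparent dissociation constant (approx.)

def speciation(pco2_atm, pH):
    h = 10.0 ** (-pH)
    co2aq = KH * pco2_atm      # Henry's law governs only the un-ionized fraction
    hco3  = K1 * co2aq / h     # bicarbonate
    co3   = K2 * hco3 / h      # carbonate
    return co2aq, hco3, co3

for pco2 in (280e-6, 400e-6):
    co2aq, hco3, co3 = speciation(pco2, 8.1)
    dic = co2aq + hco3 + co3
    print(f"pCO2={pco2*1e6:.0f} ppm: CO2(aq)={co2aq:.2e}, HCO3-={hco3:.2e}, CO3--={co3:.2e}, DIC={dic:.2e} mol/L")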
RGraf,
Thanks for the clarifications and additions of your ideas. Please don’t think I was attacking you or even the ideas. I was merely trying to consider some of the costs/benefits. It looks like a really cool science fiction concept that will probably join the growing long list of science fiction ideas that become reality.
“Please don’t think I was attacking you or even the ideas.”
Hunter, please do attack ideas. We’re here to make each other’s ideas better.
Forget ice algae. It will be too vulnerable to weather. Ice-water algae to keep polar cool in summer by reflective properties would be best. The toxin eating algae is a separate idea. We can test that off shore at Atlantic City.
R Graf,
Thanks. Distinction noted.
Reflective vegetation seems like a contradiction in terms. The metabolism of photosynthesis relies on *absorbing* light. My recollection is that sequestered carbon is raining down through the oceans continuously as dead or digested plankton, algae, diatoms, etc. I have wanted to see a serious iron fertilization experiment for years. I am not so concerned about sequestering CO2, since I believe the CO2 obsession is not supported by reality. But I am interested in discovering whether fertilizing the oceans with more iron could lead to a plankton increase that would lead to an increased food chain yield: plankton – krill – small fish – larger fish – tuna – dinner for humans. If a side effect is to tie up some CO2 and relieve the anxieties of the CO2 obsessed, so be it.
My take on the mercury issue is that the concerns regarding it are wildly overstated. But if we can remove some biologically then that would be ok. Maybe charge the costs to coal producers and users…. but that would be a creative solution and climate/enviro fanatics are not about creative solutions. They are about gaining and exercising power.
I believe experiments have been done on iron. The most dramatic were the unintentional ones, shipwrecks for example, as I mentioned. Fishermen have known for years that old dumping areas are great fishing spots. (Maybe that explains some of the mercury in fish ;-) My brother’s college professor’s proudest brag was his satellite imaging analysis of sea dumping, which led to the stopping of Dupont’s barges of spent ferric chloride (from making TiO2 pigment) 50 miles off the coast of NJ. The images showed a day by day snapshot of the “pollution” coming back toward shore. My brother, taking the professor’s class, realized that ferric chloride in parts per billion or trillion would be invisible to imaging. He did the experiment in a fish tank and produced an algae plume with a little ferric chloride. He also did some research and found that the fishing industry off the coast of NJ withered after the dumping stopped. He showed Dupont (this was 20 years after the last barge); they laughed, but I’m sure nobody even entertained the idea of approaching the Delaware authorities to bring back fishing via dumping. I remember years later reading an article about an NSF study of ironing out the oceans as a CO2 mitigation by Dr. Leon Zaborsky. I’m guessing they concluded that the bio-decay would place the CO2 right back into the atmosphere. Or, they didn’t want to have to approach Dupont for help. 😉
On the reflective algae, how about we let the living ones absorb light and use it to make thin iridescent shells with trapped oxygen so that they float after death instead of sink?
For my statistician friends you might be interested in Jim2’s cited article that the statistical P value is getting smacked down and maybe pushed out. http://judithcurry.com/2015/04/18/week-in-review-science-edition/#comment-694834
Lower average vegetation cover, leading to an increase in dust concentration, is a much better explanation, in my opinion, for the increase in dust in the ice cores towards the end of a glacial epoch than an increase in windiness. Sea level dropping exposes the continental shelves, for example.
Nic Lewis has his final installment, Ringberg III, presentation is up here.
DeWitt: I believe the consensus is synergy between more available source material and higher winds to get the 3-orders of magnitude increases in dust concentration in Antarctica that blew over from Patagonia.
R Graf,
The reflective algal husks could be cool.
But your anecdote on DuPont and iron fertilization is interesting: How enviros actually time after time hurt the cause of more life and better yields for humans.
I imagine a time when fishing rights to big areas of the oceans are leased out and farmed like we would a forest or crop land today. Iron in the south…I don’t recall what the limiting mineral is in the northern oceans but it is not iron for some reason (if I recall correctly). Either way most of the CO2, I think, would end up on the bottom, not the surface. Dead things tend to sink, and so does used food.
http://marinebio.org/oceans/deep/
” First, most of the deep seafloor consists of mud (very fine sediment particles) or “ooze” (defined as mud with a high percentage of organic remains) due to the accumulation of pelagic organisms that sink after they die.”
[Pelagic refers to the open water column (as opposed to the bottom), btw]
“In the absence of photosynthesis, most food consists of detritus — the decaying remains of microbes, algae, plants and animals from the upper zones of the ocean — and other organisms in the deep.”
The CO2 sequestration would be a side issue, from my view (and I bet that of most people). But it might help calm our climate-obsessed friends. The point would be good, yummy yields of fish and crustaceans.
“R Graf,
The reflective algal husks could be cool." (No pun intended? 😉)
Dupont – “Better living through chemistry.”
R Graf,
Wow, I completely missed that one.
Wonder what the statistically skilled among Lucia’s readers think about the journal Basic and Applied Social Psychology recently banning the use of p-values. See http://app3.scientificamerican.com/article/scientists-perturbed-by-loss-of-stat-tool-to-sift-research-fudge-from-fact/
As one not very skilled in the use of statistics, my basic take from the article is that many people who purport to use the statistics don’t understand them– seems like something that also happens in the field of climate science.
JD
Bjorn Stevens: Clouds, circulation and climate sensitivity 11-2014 free access with registration.
Paywalled Nature link to:Thorsten Mauritsen & Bjorn Stevens: Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models
Link to:Pekka vs. Lewis, a 12 round bout begins
ATTP vs. Harold, Eddie, AP and me on ocean uptake of CO2 now and after fossil fuel end. The debate starts mid-thread and ends with my comment at the bottom of the thread above it. There will definitely be another debate with ATTP on this. He is wrong, IMO, that uptake will slow due to "Henry's Law saturation" and that CO2 will remain in the atmosphere after the transition from fossil fuel because it is trapped in the biosphere "fast cycle." I think he is confused by the fact that pre-industrial CO2 was a function of the fast cycle being able to pull CO2 out of the ocean, which in my mind was much less aggressive when CO2 was 150-300 ppm. When CO2 is above 400 ppm the ocean becomes a fiercer competitor with life, and the biosphere a weaker competitor due to satiation of its CO2 needs. If I'm wrong and CO2 is a better fertilizer than we thought, then the oceans would have had to be a fiercer competitor during pre-industrial times. But if that is true, the ocean will be even better able to take up CO2 at >400 ppm. Either way, it makes no sense for CO2 to maintain its concentration after emissions stop. PA supplied a lot of good data and math I will play with. I think this is a central question that can provide a lot of answers, from paleo CO2 behavior to what the leveling threshold of atmospheric CO2 is under RCP8.5.
R Graf,
Are you considering the biosphere that inhabits the oceans?
They are not sleeping through all of this.
Assuming that ATTP is actually a physicist, he should have paid more attention in his Chemistry classes. For one thing, the equilibration time for the world ocean is on the order of thousands of years. His ‘Henry’s Law saturation’ concept is a non-starter too. [Note: I haven’t read the linked comments, nor do I intend to].
David Archer, who is responsible for the Web MODTRAN calculator, also has a GEOCARB Geologic Carbon Cycle calculator. When fossil fuel emissions cease, whether the atmospheric concentration is 500 or 1000 ppmv, the atmospheric concentration will decrease. The negative δ13C perturbation in the PETM was back to normal in 100,000-200,000 years.
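For anyone who wants to play with the shape of that post-emissions drawdown, here is a minimal sketch using the multi-exponential impulse-response fit of Joos et al. (2013); the coefficients are approximate, this is my own toy calculation rather than Archer's GEOCARB, and a single-pulse response ignores the state dependence a full carbon-cycle model captures.
import numpy as np

# Approximate multi-model-mean impulse-response fit (Joos et al. 2013):
# fraction of an instantaneous CO2 pulse still airborne after t years.
A   = [0.2173, 0.2240, 0.2824, 0.2763]   # the first term is the multi-century tail
TAU = [np.inf, 394.4, 36.54, 4.304]      # e-folding times in years

def airborne_fraction(t_years):
    """Fraction of a CO2 pulse remaining in the atmosphere after t_years."""
    return sum(a * np.exp(-t_years / tau) for a, tau in zip(A, TAU))

for t in (10, 50, 100, 500, 1000):
    print(f"after {t:4d} yr: {airborne_fraction(t):.2f} of the pulse remains")
Roughly 40% of a pulse is still airborne after a century in this fit, but the concentration does fall once emissions stop, which is the point being made above.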
DeWitt, he’s technically an astrophysicist. Since, at times, he struggles with basic physics concepts, he should have paid more attention to his physics courses too.
Carrick and DeWitt,
Follow the link here to Ken Rice’s personal home page for his CV: http://www.ph.ed.ac.uk/people/ken-rice
My observation is that Rice gets a lot of fairly obvious things pretty wrong. He may have been a good student, but he doesn't seem to 'have a feel' for the kinds of physical processes involved in climate science, and especially how important those processes are for the credibility of model projections. He strikes me as someone who just wants fossil fuel use cut drastically, and he doesn't give a hoot about the costs, downsides, damage, unintended consequences, and the rest; he has made up his mind, and he isn't really interested in thinking much more about it. When someone points out that the costs of mitigation are high and the return on that investment likely very low (since climate sensitivity is likely on the low side of the IPCC range), he always pretty much says… "Yes, but what if sensitivity is actually very high and we ruin the Earth?" IMHO, he is a remarkably self-unaware simpleton.
A new article by Lomborg that somebody posted at Judy's led to this comment:
“Salvatore del Prete | April 22, 2015 at 1:59 pm | Reply
Iris hypothesis From Wikipedia, the free encyclopedia
The iris hypothesis is a hypothesis proposed by Richard Lindzen et al. in 2001 that suggested increased sea surface temperature in the tropics would result in reduced cirrus clouds and thus more infrared radiation leakage from Earth’s atmosphere. His study of observed changes in cloud coverage and modeled effects on infrared radiation released to space as a result supported the hypothesis.[1] This suggested infrared radiation leakage was hypothesized to be a negative feedback in which an initial warming would result in an overall cooling of the surface.”
Lucia, you were absolute on the point that feedbacks to a positive forcing cannot make the net response negative.
Is it time to open a new general thread or would you be able to put one up to discuss climate sensitivity and feedbacks specifically again and reinforce your message which seemed compelling at the time?
angech,
Can you clarify your comment about positive feedback some?
Thanks,
angech,
Do you mean that a positive forcing can never result in cooling? 100% negative feedback means no change results from a forcing.
angech (Comment #136083)
Is this how the ice age is gonna sneak up on us? 🙂
Rice seems to me to be first of all an apologist for science, which is after all his day job. He is posting so much that a lot of BS slips in. This is a learning experience for him, which is fine. What is not fine are the very direct and wrong attacks on other scientists, and also the usual ignorant activists whom he takes seriously.
Seriously, the accelerated polar melt, or exhaustion of polar ice, either one could disrupt the AMOC and trigger global cooling. This is a plausible example of a positive forcing leading to a more-than-100% negative reaction.
That said, even a slight negative feedback from forcing is a reversal of the IPCC-accepted positive water vapor feedback.
David Young,
You suffer fools more gladly than I do.
.
If Ken Rice had taken the time to look at climate science critically BEFORE drawing conclusions about its credibility, I would be less critical. As far as I can tell, his acceptance of the fetid rubbish of the field is complete.
Steve, forgive me for not resisting hitting the ball teed up on CA. The bias is so thick in climate science one can cut it with a knife. I did not realize until watching JC's talk at George Mason last year that she's semi-retired. She said she felt the pressures firsthand and described an assumed loyalty to the field to remain united, not to allow findings or remarks to give fodder for reporters to stretch nuances into flaws. She said in 2007 a reporter took something she said and made it bigger than it was; she felt threatened and retreated from press contact afterward. This is what Stevens is going through right now. When Stevens' unScientific American interview trashed Nic this week, she cautioned him on CE that she had not been pulled out of the club, but "pushed."
Nic had not been contacted by SciAm before publication. If Stevens does not agree to write a joint letter of correction with Nic, I think Nic should send his own correction via a defamation attorney to both Stevens and SciAm.
Lucia, what do you think?
R Graf,
Scientific American is anything but scientific. It is run by a bunch of political hacks. Of course they would never contact Nic… their objective is NOT to produce accurate reports, it is to advance a policy agenda. I stopped reading their tripe 20+ years ago when it became obvious they had become a political mouthpiece. Sad, since in my youth (1960’s) SciAm was a good source of dumbed-down but accurate science…. about what I needed at 13 to 16 to broaden my scientific perspective. Now SciAm is pure, foul smelling, rubbish.
.
Bjorn Stevens is in the belly of the politico-climate science beast. To save himself, he will have to continue recanting…. and that most certainly means no joint letter with Nic to complain about the accuracy of the SciAm article. Like always in climate science, it is mainly politics, and I am more than a little puzzled that Stevens did not seem to appreciate the problems he would face when he published a paper that supports a quite low equilibrium sensitivity value. He is either more naive or braver than most.
SteveF: “He is either more naive or braver than most.”
I think this has been on people's minds since his aerosol paper came out. The first question on CE was "Is he retiring?" His photo looked kinda young (which fits with his recant letter). That letter in itself is amazing. There is a current post on CA about this now. I suggested Nic should call Stossel or Megyn Kelly (after he talks to an attorney). I would do both: have the attorneys write and, at the same time, go on Stossel to await the reply.
Stevens must have submitted his "Iris Effect" paper (published this week) before the reaction to his aerosol paper and the apparent visit he got from the Climatariat Illuminati, as Don M put it.
R Graf (Comment #136092)
I would hope that Nic stays above the political fray and personalizing any of these issues. Nic has done a great job of presenting his own work and critiquing that of others. Regardless of the science outcome of these analyses, Nic has certainly sharpened the science discussion.
Ken, David just got clobbered by Goliath. I agree. Are you saying he would be best to stay down to avoid further injury?
I think he could get his case into the airwaves that matter without appearing to be a reckless aggressor. People do not know this is a David and Goliath issue. Only people here do. The public is told there is no scientific debate, and that the oil companies are the Goliaths holding back congress from acting.
From Rabett Run(?), from A. Lacis:
"It would seem more appropriate to assign 'wickedness' to problems that are more specifically related to witches. The climate problem, while clearly complex and complicated, is not incomprehensible. Current climate models do a very credible job in simulating current climate variability and seasonal changes. Present-day weather models make credible weather forecasts – and there is a close relationship."
I interpret this comment as a misogynist attack on Judith in the vein of Tamino et al in the past.
There is nothing directly in the term wickedness to associate with witches.
It reminds me of the witchhunt against Australian Prime Minister Julia Gillard.
Andy Lacis getting in a free kick at Judith in such a snide manner should be called to account.
Well, the main point about weather and climate models is pretty vacuous too. What is a “credible” job? Andy should know better.
If they didn't do a credible job on seasonal changes, everybody would know they were useless. They don't do a credible job on variability, however. They're useless for regional modeling, and the noise power distribution with frequency is, if I remember correctly, not much like the real climate's.
Which means you can’t trust climate models for more than ten days in the future other than it will be cold in the winter and warm in the summer. The whole initial conditions vs boundary conditions blather is just hand waving.
David Young,
More vacuous yet is the suggestion that the models are A-OK because they do a reasonable job of simulating seasonal changes. Let's think about that for a moment. At 40N the late-June solar intensity times daylight hours delivers ~11 kWh TOA per day per square meter, while in December it is more like ~3.7 kWh TOA per day… so a drop in solar heating of more than 2/3. Gee, it is a shock the models can predict significant wintertime cooling….. I mean, with a tiny difference of only ~300 watts per square meter TOA, it is difficult to imagine significant wintertime cooling. 😉 Is Lacis really that unaware?
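Those ~11 vs ~3.7 kWh figures are easy to check with the textbook daily-mean TOA insolation formula; here is a minimal sketch (my own, with the small Sun-Earth distance correction ignored):
import numpy as np

S0 = 1361.0  # W/m^2 solar constant; eccentricity correction ignored

def daily_toa_kwh(lat_deg, decl_deg):
    """Daily-mean TOA insolation in kWh per m^2 per day."""
    lat, dec = np.radians(lat_deg), np.radians(decl_deg)
    h0 = np.arccos(np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0))  # sunset hour angle
    q = (S0 / np.pi) * (h0 * np.sin(lat) * np.sin(dec)
                        + np.cos(lat) * np.cos(dec) * np.sin(h0))   # 24-h mean, W/m^2
    return q * 24.0 / 1000.0

print(daily_toa_kwh(40.0,  23.44))  # ~12 kWh/m^2/day, June solstice at 40N
print(daily_toa_kwh(40.0, -23.44))  # ~3.7 kWh/m^2/day, December solstice at 40N
The June value comes out a bit above ~11 because the Sun is actually slightly farther away in June; either way, the seasonal swing works out to roughly the ~300 W/m^2 mentioned above.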
I think Lacis is looking for a justification for the political solution he wants. He works on models, so he knows the problems. His statements are very weasel worded so as to be technically hard to disprove but really say nothing that is scientifically significant. I actually feel sorry for him. If he were less honest, he wouldn’t have to adopt this hedge but could just outright misrepresent the science. I do give him some credit. Better a weasel than a liar.
Why is everyone blind to the comment
"global warming has just somehow stalled simply because there has been only a little rise in global surface temperature since the prominent peak in 1998. There was no comparable 'pause' in the rate of atmospheric CO2 increase during this time period. Instead, the global energy imbalance of the Earth increased as the heat energy that would have been warming the ground surface was being diverted toward heating the ocean"
Andy gives no scientific justification for the energy he imagines warming the “ground” surface suddenly being diverted toward heating the ocean.
It doesn’t just happen because there is a pause and we have to find somewhere to put the heat.
Further, if the heat is put into the ocean it has to go in at the top first, not straight into the depths to come out years later.
And for the heat to go into the top layer of the sea, the atmosphere would have to be hotter anyway.
But it has paused.
There is no scientific excuse for making up missing heat that is only invoked when a pause occurs.
When I heard JC use the term "wicked problem" I assumed it had a sociological definition, and indeed it takes only a second to Google it. Nonetheless, since sociological idioms are not common knowledge, I think she needs to elaborate on the definition or skip using the term. I do find it interesting that the term was coined (in the 1960s) prior to the global warming alarm.
W/r/t missing heat, it seems there is good evidence that ocean currents are a primary factor. But the Mann-led consensus is now pushing the notion of a slowing AMOC. Shouldn't a slowing of overturning lead to heightened SST stratification and higher GMST? I would think the cooling would not begin until the slowing led to polar ice increase (and its albedo), which would take decades if not centuries.
The problem is that a pause has developed (acknowledged by the fact that Andy has had to find a reason for it), and he comes up with the widely used excuse that the heat that was going into the ground is now going into the ocean.
This gives the comic idea of the ground heat packing up its bag and going for a holiday at the seaside for as long as the pause lasts.
No scientific reason is given for why the heat has decided to escape to the sea, just that it must because a pause is happening.
Then when it gets to the sea it heats the sea, which was already being heated anyway.
Despite this extra heat the SST has no noticeable change.
Why does the heat suddenly move, Andy??
A scientific reason please.
The AMOC hasn’t slowed. The more rapid than predicted melting of Arctic sea ice is because the North Atlantic Gyre shifted north. If there is indeed a multidecadal AMOC cycle, it’s due to shift back south about now. In fact, the Arctic sea ice anomaly has been flat since about 2005. One could take that as the first indication that the shift has started. And it has nothing to do with greenhouse gases.
An increase in total energy going into the ocean would have consequences. A change in the rate of sea level increase would be one. I’m still waiting for the model predicted acceleration of sea level increase, but I’m not holding my breath.
The North Atlantic Gyre, like all ocean gyres, is driven by the wind. The AMOC may contribute to the flow in the North Atlantic, but it is not the principal driver. There is very little overturning circulation in the North Pacific relative to the North Atlantic, yet there is still a strong North Pacific Gyre. We know the overturning circulation is low because the 14C age of the deep water in the North Pacific is much older than the deep water in the North Atlantic.
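Since the 14C "age" of deep water comes up a lot in this context, here is a minimal sketch of how it is computed; the conventional age just uses the Libby mean life, and the depletion fractions below are rough illustrative values, not measurements I am quoting.
import math

LIBBY_MEAN_LIFE = 8033.0  # years, used by convention for radiocarbon ages

def radiocarbon_age(fraction_modern):
    """Conventional 14C age in years from the measured fraction of modern carbon."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# Rough illustrative depletions: deep North Pacific ~0.78 of modern 14C,
# deep North Atlantic ~0.93 (real profiles vary with depth and location).
print(round(radiocarbon_age(0.78)))  # ~2000 "years old" -> sluggish ventilation
print(round(radiocarbon_age(0.93)))  # ~600 "years old"  -> more recently ventilated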
David Young,
Of course Lacis wants to support a specific public policy of forced reduction in fossil fuel use….. we are talking about climate science after all. Weasel words or not, he refuses, like all activist/green climate scientists, to accept any information which says ‘it’s not as bad as we thought’, since accepting that would diminish the justification for draconian reductions in fossil fuel use. Those reductions are the reason for most of the silly work done in the field…. like the blizzard of papers which all offer ‘explanations’ for divergence of models from reality, without ever entertaining the most obvious: the sensitivity to GHG forcing diagnosed by the models is far too high.
I don’t consider Andy’s pronouncements especially honest. Any honest climate scientist who even hints that warming will be less than the model projections (eg Bjorn Stevens) is quickly beaten into recanting by the climate jihadists. Andy is one of the climate mullahs.
Dewitt,
“The AMOC hasn’t slowed.”
There are conflicting studies. There may be a slight (few percent) long-term reduction (over ~50 years), but that is by no means certain, and probably won't be for a few decades. But I completely agree about Arctic sea ice… it is likely to have some rebound over the next couple of decades, which will be at least entertaining when the climate mullahs have to explain that trend… since everyone understands that increasing Arctic sea ice is inconsistent with GHG-driven warming.
Steve,
There is a reason that most, if not all, models diverge from reality: cloud feedback. Clouds are a negative feedback. It's very simple to demonstrate that clouds make the Earth cooler than it would be otherwise. I believe that alone is sufficient to prove that cloud feedback must be negative. What got me started on this is that someone posted a comment at Science of Doom about how Venus has clouds and it's hot. Consider how much hotter Venus would be if its albedo were lower and more sunlight reached the surface. Any model with a positive cloud feedback is not only wrong (all models are wrong), it's not useful.
DeWitt, I think this is the most I’ve seen you come out against the models. (You’ve kept your cards close to the chest.)
Anyone: do you agree with my sort-of paradox that a slowing in overturning would lead to warming through increased stratification but at the same time set the table for polar ice expansion? And if so, wouldn't that mean warming first and cooling later, or would the kinetics be simultaneous, with cooling winning via increased albedo?
I realize this is a fundamental part of the ice age puzzle but I ask opinions.
R Graf,
You haven’t been around long enough.
A shifting south of the current in the North Atlantic has been associated in the past with lower global average temperature, not higher. See the period from the mid 1940’s to the early 1970’s for example. The only question is whether the underlying long term trend will be increasing faster than the reduction from the negative phase of the AMOC cycle or not.
R Graf, the mildest way I can put it is that nobody who is a regular here (and simultaneously mentally competent) is a big defender of the GCMs, in terms of the models' prognostication ability.
By the way, Judith Curry has a post up on "making (non)sense of climate denial," run by John Cook.
It's about an online course with our favorite loonies, including Stephan Lewandowsky, Naomi Oreskes and Michael E. Mann.
There is even a RealClimate thread dedicated to this amazing contribution to the scientific corpus.
Applause is certainly needed to herald the arrival of such fertile material.
DeWitt,
I agree that the prima facie evidence is that clouds must have a cooling influence, and so have a negative feedback. The arguments that I have read supporting a positive cloud feedback are mostly based on a substantial change in the type and distribution of clouds with warming (eg, more high cirrus in the tropics and subtropics which are almost opaque to IR but let most visible light pass, more wintertime clouds at high latitudes due to higher atmospheric moisture when there is little sunlight to reflect, etc.) I am not saying those arguments are correct, only that those arguments are made. But ultimately I expect the cloud feedback arguments will be overrun by reality: continuing improvements in aerosol measurements (satellite based data is clear, there has been no significant increase in global AOD for as long as satellites have been measuring) and ocean heat uptake will ultimately narrow and lower the empirical sensitivity PDF until modelers have little choice but to change their cloud parameterizations…..or be laughed out of the room. The only doubt I have is how many wasteful and counterproductive regulations will be promulgated between now and then based on the overstated projections of models.
Steve,
It still sounds like hand waving.
I know my analysis is crude, but the models don’t do very well at reproducing the current cloud distribution, so why should we believe their prognostications of cloud distribution at higher temperature?
I don’t think you can get a temperature inversion above a cloud top like you can at the surface, so more clouds at high latitudes in the winter will, if anything, increase total energy radiated to space.
DeWitt:
I had asked Nic Lewis a series of questions two weeks ago and the only one he passed on answering was this:
I am saying that if clouds are a net negative radiative influence, they must be strongly negative in the tropics in daytime, because everywhere else they must be positive, including at night and under low-angled sun (high latitudes). Night thunderstorms would be a different matter.
Well, clouds come in lots of categories, constituencies, optical depths, heights, and droplet/crystal particle distributions. Unless they’re very thick, high clouds don’t reflect a lot of SW.
But, low stratus over the oceans do reflect a lot of SW and emit at pretty close to the surface temperature.
Here is the CONUS water vapor. Notice the area just south of Baja California ( very high water vapor emissive temperature – sending lots of energy to space ):
http://weather.rap.ucar.edu/satellite/displaySat.php?region=US&itype=wv&size=small&endDate=20150429&endTime=-1&duration=0
Now look at the same area in the visible. Notice the low clouds, typical of the 'maritime inversion' in the subtropics ( lots of low clouds reflecting any daytime SW ):
http://weather.rap.ucar.edu/satellite/displaySat.php?region=US&itype=vis&size=small&endDate=20150429&endTime=-1&duration=0
Uncompensated change in the area of these conditions will change the global energy balance.
Carl Sagan was wrong. Venus is not hot because of clouds; it's just the overall atmospheric pressure. If the oceans are near-black bodies and even a tiny change in albedo is important, that explains a lot of instability and ice-age triggering. It seems that once we understand the parameters driving ocean currents better, mankind's best solution would be to use albedo as a control knob and leave the CO2, which helps agriculture, alone. Here are two more anthropogenic albedo ideas:
1) GMO sea-surface organisms to produce a surfactant that will foam from wave activity. Spume already occurs naturally.
2) Drone floating factories making reflective mylar-type closed-cell foam plastic that is biodegradable within several years.
If the Earth starts to cool we can turn them off.
Clive Best found the pre-1990, pre-adulterated global temperature database GHCN and has it for download on his new post.
Carl Sagan was wrong about a number of things. Remember Nuclear Winter?
As far as the Venusian atmosphere, you can’t have a near adiabatic lapse rate absent an energy flow. Sunlight does reach the surface of Venus. The Russian Venera probe took photographs using available light. That solar energy is sufficient to maintain the high surface temperature, given that the Venusian atmosphere is almost totally opaque to IR radiation from a ~700+K surface. The holes in the CO2 spectrum are filled by water vapor and sulfuric acid.
R Graf:
No it’s not.
OK, it's hard to say anything about the greenhouse effect without being oversimplified, but here goes my student-level shot:
1) The thicker the atmosphere (higher pressure), the larger the temperature gradient (and lapse rate?) between the planet's surface and the TOA.
2) The larger the differential between SW transparency and LW opacity, the higher and more extreme the gradient (also the lapse rate?) and the least chance of inversion.
3) The higher the gravity, the higher the temperature gradient due to a higher lapse rate, everything above being equal.
4) Clouds have the same effect as GHGs to the extent that they transmit SW and block LW. The more SW-blocking they are and the higher they are, the more negative they are to the GHE.
5) Aerosols can be both reflective and emissive. Emissive aerosols are more GHE-negative the closer they are to the TOA.
6) All effects are exaggerated near the TOA, since that is the active end of the gradient (the in/out system portal). This goes for GHGs, clouds and aerosols.
How did I do Carrick? Did I pass?
R Graf, I was mainly addressing your comment that it’s the pressure that does it.
Pressure is just the stress on the atmosphere associated with the weight of the column of air above it. There are no direct and only a few indirect, albeit weak, contributions to the lapse rate from pressure.
The adiabatic lapse rate is given by $latex g/c_p$.
Pressure doesn't directly enter into this, just the acceleration of gravity and the specific heat capacity at constant pressure. It's true that $latex c_p$ does depend on pressure… it is about 20% larger for Earth's atmospheric composition at Venus's surface pressure.
That’s a mild effect, which actually reduces the temperature gradient—so the physical effect is opposite to what you predicted with #1.
In the absence of forced convection, the actual lapse rate is less than or equal to the adiabatic lapse rate. That is, the adiabatic lapse rate is just an upper bound on the change of temperature with altitude, before you get forced convection. Near the surface, the actual lapse rate can exceed the adiabatic lapse rate because during the daytime there can be enough thermal energy to produce forced convection.
You can see how this plays out by looking at a plot of radiative versus adiabatic lapse rates:
Figure.
In the lower atmosphere, the radiative lapse rate is larger in magnitude than the adiabatic one. As a result, we’d expect to see the lapse rate approach the adiabatic limit in the lower atmosphere, and more or less track the radiative lapse rate well above where the two curves cross.
That’s about as much time as I have to discuss this. But mainly, it’s not the pressure, at least not the way you were conceiving of it.
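For what it's worth, the $latex g/c_p$ point is easy to put numbers on; here is a minimal sketch (my own figures, and the hot-CO2 $latex c_p$ for Venus is only a rough value, a separate issue from the pressure dependence Carrick mentions for Earth-composition air):
def dry_adiabatic_lapse_rate(g, cp):
    """Dry adiabatic lapse rate in K/km: g (m/s^2) over c_p (J/kg/K), times 1000."""
    return 1000.0 * g / cp

# Earth: g ~ 9.81 m/s^2, c_p of dry air ~ 1004 J/kg/K  -> ~9.8 K/km
# Venus: g ~ 8.87 m/s^2, c_p of hot CO2 ~ 1150 J/kg/K  -> ~7.7 K/km (rough)
print(dry_adiabatic_lapse_rate(9.81, 1004.0))
print(dry_adiabatic_lapse_rate(8.87, 1150.0))
So despite ~90 bar at the surface, Venus's adiabatic lapse rate comes out a little smaller than Earth's; the very high surface temperature comes from that gradient acting over a very deep, very IR-opaque troposphere, not from the pressure per se.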
Thanks Carrick. I’ll blame Carl Sagan and will re-read your comment about 8 more times and commit it to memory.
The moral here is, as Roseanne Roseannadanna used to say, there's always something.
Carrick, in case I didn't convey it correctly: thanks to you, DeWitt and others for taking the time for me. You rightly brought to my attention that figuring out the dynamics behind the actual lapse rate on a planet is one of the more complex topics. Science of Doom's simplified version is still going to take a re-read or two.
R Graf, unfortunately it is even more complex than I suggested, because I left out weather, which brings with it vertical wind shear and changes of state of water (e.g., cloud formation, rain).
I found this diagram to be helpful in visualizing what the real lapse rate does, as opposed to time-averaged versions of it.
There is a good discussion on Nick’s blog about calculating the actual (measured) lapse rate. (Nick calls this the “environmental lapse rate”, but I think in practice this means something slightly different than the measurement lapse rate.)
I think Steve Fitzgerald did a great job summarizing it:
[Why he says “must” is explained by the comments above this excerpted paragraph.]
Thanks Carrick, but I have not yet changed my name, though it sometimes seems like that would make things easier. Still FitzPATRICK. 😉
LOL, sorry Steve!
UAH anomaly April, 2015 +0.07
I think that’s the MT anomaly, not the LT. From the data page, the April, 2015 global anomaly is +0.16, still lower than the previous month.
DeWitt,
From here, UAH TLT for April is 0.06 K (relative to 1981-2010), down from 0.14 in March. That’s using their latest version, 6.0. Don’t know why Spencer’s website says 0.07 K.
Version 5.6 has April at 0.16 K, down from 0.25 in March.
Version 6 now has Beta 2, so still a little bit of ‘flux’ going on.
Why has UAH been diverging from HadCRUT and GISS the past two years, besides those being surface temps? I mean, they were in sync for most of two and a half decades; why a conflict lately?
Well, you kinda answered it: they're not measuring the same thing. And the recent UAH version 6 is excluding some of the polar regions, which it wasn't before, though those are not really sampled well at the surface, either.
.
But beyond all of that, the satellite-era (since 1979) trend of surface temps (average of data sets) is about 1.5 K per century. For the same period, the mean of UAH and RSS LT is about 1.3 K per century. Further, that 1.3 K per century is the same trend as the RATPAC raob for 850 mb, which LT should (and does) more closely match:
http://climatewatcher.webs.com/Lukewarming.png
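Since trends "per century" get quoted a lot here, a minimal sketch of how one is typically computed from a monthly anomaly series (the series below is made-up toy data; swap in UAH/RSS/RATPAC monthly anomalies to reproduce the comparison above):
import numpy as np

def trend_per_century(monthly_anomalies):
    """Ordinary least-squares trend of a monthly series, in K per century."""
    t_years = np.arange(len(monthly_anomalies)) / 12.0
    slope_per_year = np.polyfit(t_years, monthly_anomalies, 1)[0]
    return 100.0 * slope_per_year

# Toy series: a 0.013 K/yr trend plus noise over 1979-2014 (432 months).
rng = np.random.default_rng(0)
toy = 0.013 * (np.arange(432) / 12.0) + rng.normal(0.0, 0.1, size=432)
print(round(trend_per_century(toy), 2))  # ~1.3 K/century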
Discussion at ATTP. He said:
"The OHC has not slowed, which is the fundamental indicator of AGW."
I replied:
"The ocean heat content is a slippery, nebulous beast. Not one which I would prefer to use as 'the fundamental indicator.'
The main objections being the slow time frame, the difficulty in measuring a change of 0.1 of a degree over decades, and the slow response time to changes in CO2.
We point to various indicators of global warming.
Surface temperature suffers from the same problem [a slippery, nebulous beast], has more natural variation and a much faster reaction time.
It is directly affected by GHG factors like CO2 increase and water vapor and has a very quick response time.
Given the laws of physics one should be able to say CO2 level x, temperature y.
If we understood clouds, currents, volcanoes, coastlines and seasons better we 'could' take the natural variation out and should have two related trends.
But we do not.
In which case either we need to improve our science or consider the possibility of lower climate sensitivity or negative feedbacks.
Point-blank dismissal of either concept is poor science; I hope we all accept that.
Romantic reasoning of 'the natural variability did it' just emphasizes my point that the science is not good enough yet to be making dogmatic statements."
What value is OHC compared to satellite data?
How long would it take to show a meaningful change?
angech,
Dr. Pielke Sr several years ago made some statements that implied something you are getting at in your (Comment #136163):
That climate does not lend itself to being described by physics so much as it requires an engineering systems approach, that GCM’s are not so much like physics models but are rather more like engineering models.
There is an odd thing about the consensus believers grasping OHC now, well into the pause. Years ago some of the same consensus believers ridiculed Pielke, Sr. for promoting the use of OHC as the best metric for measuring AGW.
hunter,
I don’t think Pielke, Sr. was saying that the climate is not described by physics, so much as he was saying that the reality of GCMs is that they are “engineering code,” i.e., they are designed and implemented with an eye to results rather than first principles.
Re: angech (Comment #136163)
Just a comment about difficulty in making OHC measurements — if it’s a question about 0.1 deg, then let me point out that we typically measure millidegrees in the ocean. We have to, because that’s how small the gradients are in the deep ocean.
If it’s the time and space scales that one finds difficult to cover, then the problem isn’t unique to OHC. It’s practically definitional for measuring climate changes.
oliver,
You made my point more clearly than I did. Thank you.
How do you feel about how Dr. Pielke was treated by his peers?
Please tell us more, if you care to, on the work you do.
The change in ocean heat content is a measure of the radiative imbalance at the top of the atmosphere. Warming is a product of the radiative imbalance and the climate sensitivity. The extreme warmers cannot claim that increasing OHC proves anything about climate sensitivity. The evidence is still that climate sensitivity is most probably near the low end of the range. Sufficiently low that internal unforced variability can mask it for at least a decade and probably longer. Also, the current rate of change of ocean heat content is below the predicted rate.
Measuring changes of millidegrees per year in the ocean isn’t all that difficult technically. The major question is whether there is some bias in the sampling.
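That "warming is a product of the imbalance and the sensitivity" bookkeeping is essentially the energy-budget arithmetic behind estimates like Nic Lewis's discussed upthread; here is a minimal sketch with round, purely illustrative inputs (not anyone's published values):
def energy_budget_ecs(dT, dF, dN, F2x=3.7):
    """
    Simple energy-budget sensitivity estimate: ECS ~ F2x * dT / (dF - dN),
    with dT the warming (K), dF the forcing change (W/m^2), and dN the change
    in TOA imbalance, i.e. mostly ocean heat uptake (W/m^2).
    """
    return F2x * dT / (dF - dN)

# Round illustrative inputs: ~0.75 K warming, ~2.3 W/m^2 forcing, ~0.65 W/m^2 imbalance.
print(round(energy_budget_ecs(0.75, 2.3, 0.65), 1))  # ~1.7 K per CO2 doubling
The same ocean heat uptake is consistent with quite different sensitivities depending on the forcing and warming it is paired with, which is the point being made above.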
oliver:
Though to be fair, it’s much easier to measure a vertical temperature change to millidegree accuracy than it is to measure multi-decadal temperature changes to this same accuracy.
Right – the 'physics' are the same for all models.
But that’s a great reminder that parameterizations are not physics.
The physics of a cumulonimbus (thunderstorm) are not the same as guessing whether a thunderstorm will occur within a 1 degree by 1 degree grid cell based on parameterization of a single grid point.
Temperature, one would think, would be the most accurate prediction, but I like this depiction of the variance.
It looks like a Navajo Rug:
http://www.ipcc.ch/report/graphics/images/Assessment%20Reports/AR5%20-%20WG1/Chapter%2009/Fig9-37.jpg
And that’s just modeling the past.
The uncertainty of projections may reflect more the uncertainty of models than the variance of nature.
Re: Carrick (Comment #136169)
Perhaps so, but I think (and I suspect you will agree) that it wouldn’t be ridiculously difficult to pick some particular point in the ocean and take an accurate temperature measurement, and then go back decades later and take another, sufficiently accurate, reading to conclude that yes, the temperature is slightly different between the measurements. The difficulty would be in interpreting the difference or the trend or whatever it is that you think you have observed. Did the ocean as a whole warm or cool slightly? Did the warm water move a little to the left this season? Or what?
Re: hunter (Comment #136167)
Dr. Pielke, Sr. can of course speak for himself, but no, a lot of what was going on in the web-o-sphere wasn’t pretty by any stretch.
These days I do mainly measurements of small-scale ocean physics, mixing parameterizations, that sort of stuff.
Oliver,
There are currently more than 3,600 ARGO floats. Each one measures a temperature profile about every ten days. The current distribution of floats looks like this. That works out to about 360 profiles every day. So for at least the last 8 years, we have a pretty good idea what’s happening in the upper 2,000m of the world’s oceans. As I said, there are sampling issues. Some areas are over sampled and some areas have few, if any, floats.
I think it’s safe to say, however, that we know beyond a reasonable doubt that the heat content in the upper 2,000m of the world’s oceans has been increasing for at least the last 8 years. That also means there must be a radiative imbalance at the top of the atmosphere, as expected from the increase in ghg concentration.
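To put a number on "energy being deposited in the ocean," here is a minimal back-of-envelope sketch converting a 0-2000 m warming rate into a planetary-average imbalance; the 2 mK/yr rate is purely illustrative.
RHO_SEAWATER = 1025.0    # kg/m^3
CP_SEAWATER  = 3990.0    # J/(kg K)
OCEAN_AREA   = 3.6e14    # m^2
EARTH_AREA   = 5.1e14    # m^2
SEC_PER_YEAR = 3.156e7

def imbalance_w_per_m2(warming_K_per_yr, layer_depth_m=2000.0):
    """Planetary-average imbalance (W/m^2) implied by warming of the 0-2000 m layer."""
    joules_per_yr = RHO_SEAWATER * CP_SEAWATER * layer_depth_m * warming_K_per_yr * OCEAN_AREA
    return joules_per_yr / (EARTH_AREA * SEC_PER_YEAR)

print(round(imbalance_w_per_m2(0.002), 2))  # ~0.4 W/m^2 for a 2 mK/yr warming rate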
oliver:
In principle, yes.
In practice, it’s just much more difficult problem to measure trends accurately.
The vertical gradient is relatively easy because you’re measuring it at the same latitude and longitude at (nearly) the same time.
Going back and performing as accurate repeat measurements is tougher, when you are talking decades or longer periods.
The sorts of issues that you encounter are:
• the equipment ages and the scale offset drifts (assuming the buoy is left in place),
• you change equipment: Remember that simply because it’s the same part number doesn’t mean that it is metrologically equivalent (as the manufacturer makes changes in his manufacturing process), calibration methodology changes purposefully or by accident (including how it gets calibrated),
• subtle differences are made in how the scale offset alignment of the instruments is tested (we used incandescent bulbs before; now we are using LED bulbs in the room where the testing is being done).
I should point out that scaling multiplier accuracies much better than 0.001°C are achievable with RTD sensors (it’s routine to use RTD devices to take measurements to 10 ppm for example). Absolute temperature measurement is all about controlling the voltage offsets in the RTD preamplifier and digitizer, and effects of aging in the sensor itself.
My point is that vertical measurements performed at nearly the same time are just inherently simpler to perform: you can't just reverse the logic and say that because I can measure vertical temperature changes to mK or better, I can necessarily, or as easily, measure temporal variations over multiple decades to the same resolution.
Given that the temperature range you’re measuring is small and close to 0°C, it might well be easy to maintain a stated accuracy of better than 0.001°C at these depths. But the logic would be different than “we can achieve a vertical spatial resolution of better than 0.001°C”.
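To make the RTD point concrete, here is a minimal sketch using the standard IEC 60751 (Callendar-Van Dusen) coefficients for a PT100 above 0 °C; the take-away is that a millikelvin is only about 0.4 milliohm on a ~100 ohm element (a few ppm), so the hard part is exactly the offset and drift bookkeeping Carrick describes, not the conversion itself.
# IEC 60751 (Callendar-Van Dusen) coefficients for a PT100, valid for T >= 0 degC
R0 = 100.0       # ohms at 0 degC
A  = 3.9083e-3
B  = -5.775e-7

def pt100_resistance(t_degC):
    """PT100 resistance in ohms for t >= 0 degC."""
    return R0 * (1.0 + A * t_degC + B * t_degC ** 2)

def sensitivity_ohm_per_K(t_degC):
    """dR/dT in ohms per kelvin (numerically equal to milliohms per millikelvin)."""
    return R0 * (A + 2.0 * B * t_degC)

t = 4.0  # degC, a typical deep-ocean temperature
print(pt100_resistance(t))       # ~101.56 ohm
print(sensitivity_ohm_per_K(t))  # ~0.39 ohm/K -> ~0.39 milliohm per millikelvin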
I agree this is a further confounding problem. It leads to you needing a 3-d array of measurements with high enough resolution that you can overcome these issues. It's similar to the problems on land when land-use changes occur (urbanization, change in crop cover, irrigation, etc.).
So, how sound is the theory that the oceans will take up decadal or centennial atmospheric heat imbalance?
I know that in the seasonal variation, the oceans store heat and shed heat, but longer term?
It’s pretty clear that over the longer term, the oceans are far more given to storing the cold waters formed at the poles than at diffusing tropical heat deeper into the oceans as seen in the IPCC cartoon. Still, the cartoon depicts the squiggled lines of diffusion, and squiggles would seem right because for that to happen, the diffusion would be fighting buoyancy the entire way.
How much of the OHC change is due to bottom water formation rates? How much of any change in bottom water formation is due to temperature? to sea ice change?
http://www.ipcc.ch/report/graphics/images/Assessment%20Reports/AR5%20-%20WG1/Chapter%2003/FigFAQ3.1-1.jpg
TE,
Bottom water formation displaces cold water upward everywhere, more in some places than others. Solar energy diffuses downward by eddy diffusion, much faster than classic diffusion. Where the two meet is the thermocline. If bottom water formation exceeds diffusion downwards, the thermocline will decrease in depth and conversely. IIRC, the average depth of the thermocline hasn’t changed much over the years.
Since the heat capacity of the ocean is some 4,000 times that of the atmosphere, one can see the true Earth surface temperature in the oceans.
On the other hand, one must have a measurement of the ocean 4,000 times more precise to perceive a delta trend.
Proposed solution: add a telescoping whip (antenna) sensor to the top of ARGO floats to get an accurate air temperature to add to the data stream (if they don't do this already).
Add Argo to large inland lakes like the Great Lakes to get full global representation (and fewer temperature-confounding abysses and circulation currents).
De Witt,
REF: (Comment #136173)
“I think it’s safe to say, however, that we know beyond a reasonable doubt that the heat content in the upper 2,000m of the world’s oceans has been increasing for at least the last 8 years. ”
So a pause of 18+ years in surface and satellite temps is unimportant or dismissed, but a move in OHC over 8 years, if it is even accurate, is proof? I have learned that one thing to seriously look for is the range of temps and historical context. Eight years without any context does not do much.
Re: Carrick (Comment #136174)
The measurement issues you bring up are pretty standard. People who do these kinds of measurements aren’t going to go to the same location in successive decades (as in your extreme example) and call it an accurate repeat measurement simply because they a) used the same part number, or b) used the same instrument which has been left to drift for decades.
Also, while vertical gradients are easier to measure than absolute temperatures, I wasn’t arguing that we can get the second just because we can get the first. I was saying that we already seek very accurate absolute temperatures because the gradients (not just vertical!) are so small. We need this kind of accuracy to, for example, map the height of temperature/density surfaces.
DeWitt,
One smallish objection to your description: The thermocline does indeed have a temperature profile that results from opposing downward energy flux and upward motion of underlying water. But the bottom of the well mixed layer (typically ~60 meters in the tropics) is close to the balance point between downward solar energy flux (remaining after passing through the well mixed layer) and the heat required to warm the rising cold water from upwelling. The well mixed layer is well mixed because solar energy absorbed below the surface increases the temperature and causes convective overturning as that heat escapes to the atmosphere. The base of the well mixed layer is the depth above which sensible heat is moving toward the surface (on average), and below which energy is moving downward via a combination of sunlight and eddy diffusion. In the open ocean a significant amount of light reaches to 100-200 meters (deep blue, violet, and UV). The contribution of eddy diffusion vs solar flux of course changes with increasing depth, and this is evident in the shape of the upper part of the thermocline (e.g., 100 or 150 meters below the well mixed layer). Below that depth it is nearly all eddy-driven downmixing.
Of course wave action and wind shear contribute to the surface mixing, but even completely absent wind and waves, a well mixed layer would still form.
A lot of people here in Brownian motion.
If one can measure the temperature of an aliquot of water to less than a millidegree, I would defy anyone to reproduce exactly the same temperature at the same place [depth and position] one minute later, never mind 10 years later. That degree of precision becomes unrepeatable in a fluid system subject to currents and pressure changes all the time.
Mind, I am happy they can make them; they just had better surround such figures with a large area of uncertainty, easily more than 0.2 degrees. Hence the uncertainty itself is larger than the posited changes of 0.01-0.05 degrees one could expect to see in a decade.
OHC or ocean temperature changes are good for the big picture, but virtually useless, because of their ambiguity and limited accuracy, on the time spans and in the calculations humans apply to climate change.
Oliver, the issue is we do care about the change in temperature over multiple decades, so this is not an extreme example; it's exactly what we want to measure:
Temperature changes for shorter intervals are going to get heavily influenced by natural variability in the upper layers of the ocean and the atmosphere.
I don’t doubt the people responsible for the measurements are both bright and competent. But regardless of how well they understand it, when you are talking about relatively small temperature changes occurring over multiple decades, preventing the loss of low frequency information via calibration drift remains a very challenging problem.
Carrick,
Perhaps I didn’t make myself very clear. I’m not saying that “attempting to measure across decades” is an extreme example. What I consider an extreme example is what you described, which was “attempting to measure across decades while using the same instrument which hasn’t been calibrated in decades” or “using two different instruments with the same part number and assuming they behave exactly the same.”
It's a challenge, but so is taking many of these measurements already, and techniques already exist for doing the point measurement "right." It isn't clear what the "right" techniques are for interpreting said measurements.
angech (Comment #136183)
Suppose you have a perfect-model thermometer, with instantaneous time response and zero drift in time, and start taking measurements at some point in space. You are surprised to find that you keep reading different successive temperatures. Is this a problem with the thermometer?
Great engineering and scientific planning are just half the keys to success. “Lucky Lindy” flew with three compasses and plenty of excess fuel. There is also elegance to the brute force of system redundancy.
hunter,
Stop putting words in my mouth that I didn’t say and try actually reading what I wrote.
The continuing increase in OHC is proof that there is a radiative imbalance at the top of the atmosphere. I never said that the rate of change of surface and near surface temperature was unimportant. They are two different things.
This is part of what I actually wrote:
I don’t believe that qualifies as dismissing the recent surface temperature record.
DeWitt Payne (Comment #136188)
May 11th, 2015 at 6:09 am
“The continuing increase in OHC is proof that there is a radiative imbalance at the top of the atmosphere.”
No, it is not proof.
It is highly indicative but not proof of radiative imbalance.
The OHC has gone up as sea levels have risen if the measurements are correct [sorry, I know that is a stupid argument].
Let us suppose that there is no radiative imbalance for discussion sake.
e.g., the CO2 has gone up but the sun is radiating less,
or there is more cloud cover or aerosols causing a higher, neutralizing albedo.
The OHC would stay the same.
Alternatively, the increase in sea level [volume] could be due to the general heat increase of the last 300 years, in which case it could just be ice melting without any increase in OHC at all.
You will not agree with these ideas, but the fact that I can sensibly put them up says your statement of proof is not strict enough to be valid. Helpful, but not proof.
Oliver (Comment #136186)
“Suppose you have a perfect-model thermometer, with instantaneous time response and zero drift in time, and start taking measurements at some point in space. You are surprised [why?] to find that you keep reading different successive temperatures. Is this a problem with the thermometer?”
Exactly.
The problem is with what you do with the thermometer information.
You have the perfect world-average sea temperature for that one locus in the world.
Unfortunately it is meaningless in giving anything like a correct figure for the world, due to the vagaries of its siting in the first place.
It is still miles better than nothing.
But relying on microdegrees of information, when you cannot place a thermometer with the accuracy you described and when the milieu is in constant change, means the information must be treated with respect.
Re: angech (Comment #136190)
I’m not surprised. You said you’d “defy anyone” to reproduce exactly the same temperature in two successive minutes, so I figured it must be surprising to someone.
angech,
You’re right. There is no proof in the logical sense in science.
I would say, however, that it's a near certainty, beyond a reasonable doubt, that there is a radiative imbalance, and slightly less certain, but highly likely, that it is caused by the increase in atmospheric ghg concentration since the middle of the 19th century.
DeWitt Payne:
Proof of a TOA imbalance proves only that the current global mean surface temperature is greater than the millennial global mean surface temperature, nothing more.
Am I wrong? School me.
Appreciatively,
Ron
Here’s a good one.
“Forcing agents such as carbon dioxide can directly affect cloud cover and precipitation, without any change in global temperature.”
R Graf,
You can have a positive or negative imbalance. If it were a positive imbalance, where the system was emitting more energy than it receives because it’s warmer, then OHC would be decreasing, not increasing. A negative imbalance means something is causing the system to retain energy, to emit less than it receives. That retained energy must cause a further increase in temperature somewhere in the system. The most likely reason by a long way is the increase in the atmospheric concentration of CO2 and other molecules that absorb and emit radiation in the 5-50μm wavelength range.
An increase in solar radiation could cause a negative imbalance, but we have satellites, SOHO in particular, that measure the output of the sun, and it hasn’t changed enough.
TE,
Do you have a link for that statement? It sounds like something Al Gore would say. In case you’re wondering, I am of the opinion that Al Gore makes a bucket of rocks look like a genius.
Well, I teased it on purpose.
The source? The IPCC of course!
See slide 4 from this Steven Sherwood (IPCC) slide set.
Part of the AR5 "Effective" RF definition. The Sherwood slide set goes on: "AR5 recognizes this for the first time, and designates the total forcing including these effects as the effective radiative forcing. (The effects themselves are called rapid adjustments.)"
They then go on: "Significance: humans can drive some climate changes very quickly and these can be significant no matter how small the climate sensitivity. Solar radiation geoengineering will not avoid all climate changes."
That last part appears to me to be political spin. The takeaway I get is this:
“No one knows how much of radiative forcing is actually realized as forcing because RF is calculated for a static atmosphere but dynamics may occur that lessen the static calculation”.
So, I guess Al Gore was a close guess – it went to his Nobel collaborators instead.
DeWitt Payne:
I think we are agreeing now that imbalance is not evidence of the GHE but is compatible with the GHE. The current imbalance is simply a function of the trailing, chronologically weighted average of the prior 1,000 years of imbalance. If the Little Ice Age was a global event and caused by a forcing, like the Maunder Minimum, then the oceans could have been warmer than the steady-state temperature for many of the, say, 250 years of the LIA, erasing the Roman-period retained heat, for example. The confusing part is that if the GMST is high due to a year of poor ocean turnover, ocean and air temps are diverging, unlike when GMST is high due to radiative effects. How fast the oceans adjust I have heard called "ocean diffusivity." Sorting out the values obviously will depend on ever more precise Argo data and past reconstructions of both oceans and atmosphere.
Why is Antarctica having record sea ice if the oceans have never been so warm?
No, it’s not. You’re hand waving.
And the current imbalance is evidence for the GHE, not merely consistent with it.
As far as Antarctic sea ice, that’s still a good question. I asked that years ago and still haven’t received an explanation that didn’t amount to hand waving. My hand waving explanation is the polar see-saw. The poles normally seem to be out of phase for temperature changes. It’s hand waving because no one has come up with a good physical explanation of why it should happen. It’s somewhat like continental drift before plate tectonics.
DeWitt,
How is imbalance evidence of anything but imbalance? The causes include anything that makes the steady state implied by current atmospheric forcing and ocean turnover differ from the one implied by the trailing several hundred years of forcing and turnover. If the Roman Warm Period was caused by solar forcing, the imbalance they had would be identical to ours. Right? BTW, I have heard debate as to whether there is in fact a statistically measurable trend in OHC. But I guess we are assuming there not only is warming but fast warming. How fast is fast? Relative to what?
@ DeWitt Payne (Comment #136188)
Please know I was not trying to put words in your mouth.
However, you did not answer my question: at what point would you question the assumptions and parameters of the consensus?
And your response raises this question:
Is the *only* explanation of the increasing OHC a TOA imbalance?
I do like your quote and thank you for putting it up again, by the way. For what it is worth, I agree: the climate is responding at most in the lower range of sensitivity predictions.
It is pretty clear that, like nearly every apocalyptic movement, there is a small kernel of reality heavily veneered in layers of hype and fear.
R Graf,
The excess energy going into the ocean must come from somewhere. A change in solar irradiance is unlikely because we see little variation in solar irradiance. There is, in fact, no evidence that solar irradiance changed significantly even in the Maunder minimum. That’s speculation. We can measure the Earth’s albedo by measuring the reflected light from the Moon. That doesn’t seem to have changed either. We can also calculate the expected imbalance using radiative transfer theory, which is pretty much bullet proof. Calculated emission and absorption spectra are a very close match to measured spectra. Guess what. The amount of energy being deposited in the ocean is close to the calculated imbalance from RT.
Sure, something completely different might be happening. However, the simplest explanation is a decrease in emission caused by increasing ghg’s in the atmosphere. Unless you can come up with hard evidence of another mechanism, I’ll go with the simplest explanation. Your alternative mechanism also has to explain why RT calculations are wrong.
hunter,
Which assumptions and parameters do you have in mind? I’ve had serious reservations about the high end of the IPCC climate sensitivity range for years. As I wrote above, RT calculations are about as close to bullet proof as you can get given the limitations on the accuracy of the atmospheric profiles needed. A net negative feedback for climate is highly unlikely. Even a near term drop in GMST would only confirm my opinion that multi-decadal cycles are in play, that the underlying secular trend is greater than or equal to 0.1 degrees/decade and likely to increase somewhat in the future since humans are not going to stop burning fossil fuels at a high rate for many decades.
Regarding sea ice, remember that sea ice is of course multi-factorial. Motion can have a big effect. In the Arctic, if sea ice moves out past the Fram Strait, it is lost to warmer waters. If Arctic sea ice remains in the Arctic, it can spin around and accumulate. In the last decade, the Fram Strait exit is predominating:
http://psc.apl.washington.edu/northpole/pngs/Allyears_buoy_drifts.png
And as Rigor and Wallace ( http://iabp.apl.washington.edu/pdfs/RigorWallace2004.pdf ) assess, more than half of the Arctic sea ice decline was due to this motion.
Global temperatures are indeed rising, and CO2 is a usual ( likely? ) suspect. But folks may be jumping the gun by attributing recent Arctic Sea Ice decline to AGW. In fact, sea ice variation leaves a distinct fingerprint behind, even when the ice has gone. That is in the temperature record. When ice is thick, it acts as a good insulator ( except for the wind forced ‘cracks and creases’ ) so little additional freezing takes place. But, if ice is missing or thin, the latent heat of freezing releases heat to the environment. So thin or missing ice causes a maximum anomaly during the cold season ( fall through spring ). And thick ice causes a minimum anomaly during the cold season, because less than usual latent heat is released. Manabe noticed this in the 1980 paper ( worth a read when you have time – note he was looking not at a 2x scenario but a 4x scenario, http://www.gfdl.noaa.gov/bibliography/related_files/sm8001.pdf ).
It turns out we may have been here before. The pattern of thin ice in the Arctic appears in the temperatures of:
1910-1945
http://climatewatcher.webs.com/Stadium1910.gif
as well as 1975-present
http://climatewatcher.webs.com/Stadium1975.gif
The pattern in between ( 1945-1975 ) indicates a thick ice scenario ( anomalously colder in winter ):
http://climatewatcher.webs.com/Stadium1945.gif
The meme is that amplified warmth in the Arctic is causing the sea ice loss. The converse is at least partly true: the lack of sea ice in the Arctic is causing additional warmth. That doesn’t obviate CO2 forcing, but the dynamic change in sea ice is a factor.
Now, you asked about the Antarctic. Similar dynamics occur there too, though the Arctic is an ocean surrounded by land, while the Antarctic is a continent surrounded by ocean.
DeWitt,
CO2 emissions appear likely to continue; however, the key measure is radiative forcing ( neglecting for the moment how much is actually “effective” radiative forcing a la AR5 ).
Hansen’s associate Sato updates forcing agents on his page.
This is the latest update of RF:
http://www.columbia.edu/~mhs119/GHGs/dF_GHGs.gif
I shared my view of this with ATTP recently. In 1979, things looked grim. Population was growing exponentially, as was radiative forcing:
http://climatewatcher.webs.com/Forcing1.png
http://climatewatcher.webs.com/Forcing2.png
http://climatewatcher.webs.com/Forcing3.png
Forcing is growing at a rate lower than in the IPCC B1 scenario, and at a rate below the peak rate, which occurred somewhere around 1979. A lot of that is from the regulation of CFCs. Some more of the reduction came from reduced methane ( improved efficiency and fewer leaks? ), and the remainder came from a deceleration of CO2 accumulation ( perhaps increased uptake and greater efficiency ).
Now consider where we are. Population looks to be decelerating rapidly toward the low end scenario. Here is a plot I made of global total fertility rate from the UN 2012 population outlooks ( lo,med,hi ) and the CIA Factbook:
http://climatewatcher.webs.com/TFR.png
Changes in death rate and shapes of populations keep population growth from matching the fertility rate, but after a decade or two, population will follow TFR. We are on the brink of falling population! Much of my interest in this is from this article by David Merkel:
http://www.valuewalk.com/2015/04/municipal-db-pension-problem-and-human-fertility-part-4/
Reflect on that in combination with the unchanged emissions in 2014, and it seems we may be witnessing positive but declining rates of forcing going forward.
In this vein, I believe we will see continued warming but at declining rates, plus or minus the substantial variability.
2014 Emissions:
http://www.iea.org/newsroomandevents/news/2015/march/global-energy-related-emissions-of-carbon-dioxide-stalled-in-2014.html
TE,
Those date ranges for thick and thin ice also match the descending and ascending phases of the AMO index. If the index actually reflects changes in circulation patterns and there continues to be a quasi-periodic oscillation, then it’s likely we’re headed into a thicker ice phase as the smoothed AMO index should have peaked in 2011.
There are already indications that may be happening. PIOMAS ice volume has recovered somewhat from its all-time low in 2012. The Cryosphere Today Arctic sea ice area anomaly has been essentially flat (with a lot of noise) since 2005. Extent has been near record lows this year, but the volume is not as low: averaged over the year so far, seven years have had lower volumes and four years have had lower area.
Area and extent have been affected by low ice coverage of the Bering Sea and the Sea of Okhotsk this winter. That tends to fluctuate a lot.
TE,
As far as emissions, the world economy isn’t exactly booming. In fact, it’s looking much like an asset bubble is getting ready to pop. This one could be a lot uglier than 2008.
Ya – that dovetails with population trends.
GDP growth = productivity * number of producers(consumers)
When number of producers declines or stagnates…
China appears to be hitting the wall now, which goes with their downhill slide of working age population:
http://foreignpolicyblogs.com/wp-content/uploads/chinas-working-age-population.jpg
Kind of uncharted waters for the global economy.
Eddie, great research, as usual. I am fascinated by both facts: that the population growth rate is decelerating faster than thought (stasis by 2040), and that 2014 CO2 emissions were flat relative to 2013 without a slower world economy.
DeWitt, I highly respect that you know 10X as much about climate physics as I do (make it 100X). But I urge you to use your broader wisdom to factor in that the consensus is self-rewarding and self-policing. The NASA website, for example, has time-lapse imaging animations for both Arctic sea ice and land ice decline but neither for the Antarctic. The most trusted scientific agency in the USA, with public funds to impartially inform the public on climate change, stands brazenly as a propaganda tool. Their information is literally one-sided. Wikipedia is the same story.
The daily Yahoo News blast on climate today announced a Nature paper just published finding accelerating sea level rise. I read the first paragraph of the actual abstract and found it is challenging the prevailing view that the rise had paused with the hiatus. This relevant fact went unmentioned in Yahoo’s article. I assure you Lewis and Curry (2014) made no headlines in Yahoo or the NYT or Huff Post. I don’t even think Fox noticed it. On the other hand, Mann’s March paper (still using strip-bark bristlecone pines) diagnosing a pause in the AMOC made a huge story, complete with video from “The Day After Tomorrow” (with disclaimers that the movie was just a dramatization).
This means an ECS of at least 1 is bulletproof. Wait a minute: Lindzen and Choi (2011) say no, it’s less than one due to negative feedbacks.
I am not claiming that there is no EGHE. I think ECS is likely 1.5-2.0 and TCR 1.1-1.4. What I am certain of beyond a reasonable doubt is the state is hiding evidence from the defense and jury. I think we can all see that. The question is only do the ends justify the means?
An actual readable post on WUWT.
Richard Betts’ response to Lewandowsky’s latest puerile nonsense.
R Graf,
Can you possibly get it through your head that radiative forcing and climate sensitivity are not the same thing? I believe this is the third time I’ve had to say this to you. The forcing from an instantaneous doubling of CO2, 3.7W/m², is bullet proof. The TCR and ECS are not. Lindzen and Choi are outliers, by the way. They may be correct, but I and a lot of others doubt it. According to Feynman, you should be most skeptical of findings that confirm what you already believe.
Cryosphere Today, once they get their current server problems fixed, has daily images of the South Pole sea ice going back to 1980. A few days are occasionally missed. Your conspiracy theory is unwarranted.
TE,
China has a bigger problem than population decline, leverage. They’ve been making loans to less and less productive projects to keep the boom going. The longer they delay deleveraging, the more painful it’s going to be.
DeWitt, if you search for 3.7 with Ctrl-F in this thread you will find it first mentioned by Eddie, who actually calculated it as 4.1 W/m². The next instance is my questions to Nick Stokes about different RT and feedback scenarios here. Your reply to my comment was nothing that I didn’t obviously already know from my question, but I simply thanked you for the suggested reading of Petty’s Guide. BTW, you said, “The inversion in the stratosphere has nothing to do with sun angle…” I let that slide too because I knew what you meant. You thought I didn’t understand that ozone has a strong UV band, but I thought everyone knew about our protective ozone layer by now and didn’t need to waste time informing you of that.
Today you are saying not only do I not understand the science but I am a conspiracy buff. First of all, bias and the anchoring effect are the reasons for the scientific method. It is not a conspiracy. Despite all scientists knowing this, Feynman’s remarks are poignant because he realized, as a scientist, that scientists as a whole never seem able to fully accept that they too are part of the problem that the scientific method and ethical protocols exist to address. Feynman repeats what we all know: the ends do not justify slanting the evidence or the analysis, or appealing to authority to dismiss challenge. The IPCC was making the hockey stick its logo in 2001. No scientist bothered to verify MBH98 until two non-climate scientists, on their own dime, decided to check it in MM2003. They blew it away, yet the lead author of MBH98 is still a top authority. These facts should paint a pattern even to one who is 100% trusting. Six months ago, when I began, I Googled “NASA climate change.” There was only the North Pole imagery then, and when I checked about a month ago it was still the same. It’s not a server problem.
Petty’s is still on my list. I do thank you.
Try reading what I actually wrote. The photos are at Cryosphere Today http://arctic.atmos.uiuc.edu/cryosphere/ and they currently do have a server problem as you would have found out if you clicked on the link in my post. If you want monthly data on Antarctic sea ice extent and area, NSIDC has it here: ftp://sidads.colorado.edu/DATASETS/NOAA/G02135/ Arctic data is first, you have to scroll down to find the Antarctic data. No one is trying to hide the data, though they may be ignoring it.
Just because Climate Science has become politicized doesn’t mean that every aspect of it is wrong.
Your Google fu is also lacking. Using just one search phrase is not likely to find what you want.
DeWitt,
Trust me. I believed you on the server glitch. And trust me, that was not the “problem” I was commenting on. I also believe that you believed you were making a fair and valuable point about RT being bulletproof as the logic explaining OHC.
I apologize if you felt that everybody should already have been aware that the EGHE presupposes a strong positive feedback to get from the weak RT-only ECS to the IPCC-range ECS. There is limited space and time to compose a point, and I thought it could have been missed. I felt the need to clarify your point, trusting that you would reply respectfully.
Back to the “problem”: psychosis to some degree is normal. Climate science simply has all of the exacerbating features that make it a problem. It is itself illuminating that psychologists are starting to weigh in, and that scientists can agree on 95% of the facts yet reach vastly different conclusions simply because each side assumes some kind of ill intent based on political or religious affiliation.
I assure you I do not bear ill will toward anyone personally, even Michael Mann, whom I think is simply misguided (both in fudging his data and in suing his critics).
I am sure we both agree that history is chock full of misguided good intentions, from Reefer Madness to enhanced interrogation, Eugenics, Japanese internment and the Holocaust.
There will be plenty more examples to come. Climate science will surely be one of them if the “highly likely” dangerous warming turns into not-so-bad warming. They will then say, once again, that honesty and trust in debate would have been the best policy.
De Witt,
Thanks for outlining some of the factors you consider in this. If I read you correctly, you are pretty much skipping over clouds as a factor. I don’t recall that the question of clouds has been resolved yet.
The lunar brightness is one I have not heard of before.
Only a very small part of lunar brightness is due to earthshine. I don’t think atmospheric clarity correlates to CO2 content. Is there a reference on this? I did a quick search and did not see one.
The secular trend of 0.1 per decade is definitely on the low end of the consensus. How much variability does it take to raise the assumption of a secular trend?
Obviously you don’t measure earthshine where the Moon’s surface is exposed to direct sunlight. A new moon, however, is lit only by earthshine. You can also block out the lit side of the moon in a telescope and measure earthshine on the dark side.
I’ve stated elsewhere that I think clouds are a net negative feedback. Negative feedbacks, however, are only 100% in electric circuits. We don’t know what cloud cover will do with increasing temperature. Climate models don’t get the current distribution of cloud right so it’s laughable to take their extrapolations of future behavior seriously.
Right now, we are on track to reach 560ppmv atmospheric CO2, double the pre-industrial level, in about 2060. There is no way that won’t cause a significant temperature increase. The up side of that is that descent into an ice age becomes extremely unlikely.
Al Gore doesn’t seem to be worried about sea level rise. He just bought an expensive mansion on the beach in Montecito.
I’d agree we’re ( global humanity ) going to continue to emit CO2 for some time and that will likely force additional warming.
But let’s look at how much CO2 and how much warming.
The preliminary IEA says emissions were flat in 2014. Accumulation was about 2ppm for 2013:
http://csas.ei.columbia.edu/files/2014/12/Fig.-2.jpg
If 2014 is really the peak of emissions, it would appear unlikely that accumulations would exceed 2ppm per year, a rate that would give us 560ppm about 2095. But population will be falling before then, so accumulation rates may well be even lower.
Reaching the 2xCO2 level of 560 ppm from here ( 400 ppm ) would mean, from the simplified expression,
5.35*ln( 560/400 ) ≈ 1.8 W/m² additional forcing.
At the same correlation we’ve observed with previous GHG build up:
http://climatewatcher.webs.com/TCR.png
That amounts to a warming rate around 1K per century, somewhat slower than the 1.6K per century we’ve observed since 1979.
Natural variability will continue to provide fluctuations, but I think Al Gore is safe for the rest of his days on the beach in California.
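For anyone who wants to check the 560 ppm and 1.8 W/m² arithmetic above, here is a minimal R sketch. It is not anyone’s published calculation; it just assumes a constant 2 ppm/yr accumulation starting from 400 ppm and the simplified forcing expression 5.35*ln(C/C0), the same figures quoted in the comment.

# Back-of-envelope check of the numbers above (R).
# Assumptions: constant 2 ppm/yr accumulation, starting from 400 ppm,
# and the simplified forcing expression F = 5.35 * ln(C/C0).
c_now <- 400   # ppm today
c_2x  <- 560   # ppm, double the pre-industrial level
rate  <- 2.0   # ppm per year, assumed constant

years_to_560  <- (c_2x - c_now) / rate      # ~80 years, i.e. roughly 2095
extra_forcing <- 5.35 * log(c_2x / c_now)   # ~1.8 W/m^2

cat(sprintf("Years to 560 ppm: %.0f\n", years_to_560))
cat(sprintf("Additional forcing: %.2f W/m^2\n", extra_forcing))

At the roughly 1 K per century pace quoted above, that works out to something like 0.8 K of additional warming by the time 560 ppm is reached, under these assumptions.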
TE,
It’s a real stretch to say that 2014 is the peak of carbon emissions. I think you’re in the same boat as Kenneth Deffeyes, who thought that oil production peaked in 2005. A substantial fraction of the world’s population isn’t using fossil fuels at all. That could still change. Per capita use of fossil fuels in China, India and Brazil is still below that of the developed world. Hydraulic fracturing is getting less expensive.
I also see no evidence in the MLO CO2 measurements that the year over year increase in CO2 concentration has stopped increasing. The linear fit to the difference plot had a value of 2.0ppmv/year in 2006. Currently, it’s 2.22ppmv/year.
Wind and solar are still much more expensive than fossil fuels and too variable for a grid that has 24/7 demand. They would require substantial fossil backup capacity, which is far less efficient than a base load fossil plant. Nuclear could displace fossil if the permitting process didn’t raise interest charges so much. I doubt that’s a problem in China. But there’s no evidence that China has slowed the rate of construction of new fossil fueled power plants. Their statements of future plans to that effect probably have little relation to their actual plans.
Nothing is a lock and time will tell, of course.
But let me lay out the case:
Flat 2013 to 2014.
Decrease in emissions from China,
which was the largest source of growth in global emissions.
Also in that graph, a decades-long decline in developed world emissions.
For global emissions to even remain constant, emissions from India and Africa will have to increase.
As for CO2 annual increases, they do fluctuate a bit, but since 2000 are just a little over 2ppm/year.
Also, year-over-year latest month:
1.97ppm, which could, of course, be noise, but it’s consistent.
In addition to the population slowdown, ageing is probably also a factor.
I ran across this which shows a world full of old farts is likely to emit less CO2.
It remains to be seen what happens longer term, but there’s also this evidence that China’s overcapacity is coming home to roost.
Of course, if this is correct,
India will be making up for China in coal imports.
TE,
After detrending the data, the standard deviation of the year-over-year differences, ignoring probable serial autocorrelation, is 0.58 ppmv. A value of 1.97 ppmv is then less than one standard deviation from the expected 2.2 ppmv change. As I said, there is no evidence yet that the trend of increasing year-over-year concentration has changed. Correcting for serial autocorrelation would increase the estimated variance. I might get around to plugging the data into R, calculating a noise model and associated 95% confidence limits, but don’t hold your breath.
I haven’t run the numbers, but even if emissions were flat from now on, it would probably take a decade or more for the difference in the CO2 concentration time series from steadily increasing emissions to be significant.
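For anyone who wants to try that sort of check themselves, here is a minimal R sketch, not DeWitt’s actual calculation. It assumes you have the Mauna Loa annual-mean CO2 series as a table with year and ppm columns; the file name below is a placeholder, and the residual standard deviation ignores autocorrelation, as noted above.

# Sketch of the year-over-year growth-rate check described above (R).
# The input file is a placeholder: any annual-mean Mauna Loa CO2 series
# with columns year, ppm (and an uncertainty column) will do.
mlo <- read.table("co2_annmean_mlo.txt", col.names = c("year", "ppm", "unc"))

growth <- diff(mlo$ppm)         # year-over-year increase, ppm/yr
yr     <- mlo$year[-1]

fit      <- lm(growth ~ yr)     # linear trend in the growth rate
resid_sd <- sd(residuals(fit))  # ignores serial autocorrelation

expected <- predict(fit, newdata = data.frame(yr = max(yr)))
latest   <- tail(growth, 1)

cat(sprintf("Expected %.2f ppm/yr, latest %.2f ppm/yr, residual SD %.2f\n",
            expected, latest, resid_sd))
cat(sprintf("Departure from trend: %.1f standard deviations\n",
            (latest - expected) / resid_sd))

A departure of less than one standard deviation would be consistent with DeWitt’s point that the latest value is not yet evidence the trend has changed.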
One more thing to add to the calculation: other sciences are moving ahead. Nano-technology and colloidal chemistry are behind the beginning of electrical storage technology that we are seeing from Tesla. There have been some false starts in the engineering pathways for the electrolyte gels but there is nothing like a multi-billion-dollar market to help get the kinks out. Tesla’s mega-factory is just the first out of the gate.
On the other side, there are improvements in everything from building insulation to LED lighting that runs at 10% of the power of incandescent bulbs. Many parts of the developing world are going to skip over electrical grids and go right to local wind, solar and storage combinations.
In seventy years we will look at using coal and oil for power the way we now look at using whale oil for lighting.
R Graf,
You know even less about battery technology than you do about climate science. Elon Musk, IMO, is a modern P.T.Barnum. There’s more hype than reality in his supposed advances in battery technology. Except Barnum didn’t rely on government subsidies. You can make solar cells with less and less silicon. You can’t do that with batteries. There are limits set by the chemistry on specific energy, Wh/kg, and energy density, Wh/L. Nanotechnology and colloidal chemistry won’t change that, although it makes for a good press release.
Musk’s 7 kWh wall of batteries might run a laptop for quite a while, but it won’t run your heat pump for more than a very few hours when it’s very hot or very cold. Cycle life and efficiency are also inversely proportional to charging and discharging rate. The batteries in a hybrid vehicle last a long time because only about 10% of the total battery capacity is ever used.
DeWitt, I’m an electrochemist. The main hangup for batteries has been kinetics, not storage density. Colloidal chemistry gets you many multiples of active surface area over which charge and discharge can occur, which translates to high power and high electrochemical efficiency.
Me too, actually. Although I ended up doing atomic spectroscopy because electro-organic synthesis never lived up to the hype and an x-ray or plasma emission spectrometer could do elemental analysis much faster with more sensitivity than any electroanalytical technique.
I was looking at a lithium ion car battery this weekend: 600 CCA, weighing 3 lbs and about 1/4 the size of a lead battery. It costs $600. You can justify that sort of expense in a race car. I remain unconvinced, however, that there are significant economies of scale in battery manufacture such that we’ll see the lead battery being replaced with lithium ion in your average street car any time soon.
Faster charging and discharging doesn’t change the energy density or specific energy.
Very cool that you’re a chemist too. Clearly, though, even electrochemistry, like physics, can cover a surprisingly broad area. For those who aren’t chemists, the challenges of electrical banking can be viewed as similar to currency banking. Man dealt with the value-density issue by using copper, silver and gold coin to concentrate value. Electrical storage density, by contrast, is set by nature, not by man, so we have limits based on electrochemical potential and chemical stoichiometry.
As with money, the other important issues are efficiency of access, security of storage, and perishability. If you were limited in the amount you could deposit or withdraw per day, that could be a big problem. Also think about an 80% service fee on deposits and a 10% per month storage fee.
All of these problems can be improved greatly by colloidal chemistry, which by itself is probably as complicated as life. Actually, that complexity is likely responsible for allowing life to originate in the soup. Nature uses the power of colloids a lot, starting with mother’s milk. With nanotechnology we should be able to improve on nature, just as in genetic engineering. But it’s advancing only slowly now.
R Graf, in my opinion, the reason that battery technology is not advancing is market-based government subsidies:
If you can make the same money (due to subsidies) with existing technologies, there is no reason to invest in improvements on what you have.
R Graf,
I think hydrogen fuel cells are more likely to power vehicles in the future than batteries. But I’m not holding my breath on that one either. I had just finished grad school (UT Austin, Al Bard) when Bockris proposed the Hydrogen Economy. Back then it was thought that nuclear power was going to be so cheap that it wouldn’t need to be metered and it could be used to generate hydrogen electrolytically.
Methane reforming with steam is still the least expensive method of generating hydrogen by far, especially with cheap methane from hydraulic fracturing. Methane reforming also lends itself to carbon sequestration as you have a pure stream of CO2 coming out.
Batteries for transportation have two fundamental problems: low range and long recharge time. Let me know when you think there will be battery technology that will give a family-sized car a range of 400+ miles at 70 mph while running the heater or air conditioner and can be recharged in less than ten minutes. Hydrogen fuel cells could at least conceivably do that.
I was less than impressed by this article about driving a Tesla P85D from San Francisco to LA. Fifty miles of driving and then 30 minutes of recharging is not my idea of a practical vehicle.
DeWitt, I don’t know whether fuel cells will beat batteries to dominate car powering. I have been working with DuPont’s Nafion fuel cell membrane people and U of DE fuel cell Chem-Es, doing the platinum plating for their testing apparatuses, and I chat with them. There have been a lot of breakthroughs, but I think batteries’ lead is going to be tough to overtake.
I root for both.
I agree charging time is a challenge, but that is exactly what nanotechnology can improve. The Tesla Power Wall is basically their lithium ion car battery with a different shell and software. It comes in two models: one for daily charging and discharging for power shifting, and the other for periodic use as a backup for power outages or rainy days. Both can be arrayed up to 9 units. The periodic (weekly cycling) one is 10kw vs. 7kw for the daily one. Both are claimed to have a 10-year guaranteed life and to be 92% efficient round trip. I doubt the latter spec lasts for 10 years (think laptop battery).
Both items are add-ons to a home or business solar setup. The Power Wall adds to the solar economic equation through its backup utility, and also for those who can contract with their utility for discounted off-peak electricity.
It’s a start. I see Elon Musk more as a Steve Jobs rather than a PT Barnum. Gore on the other hand?…
I think you mean 7 kWh, not kW. And I suspect that it’s the loss of capacity that will limit the lifetime. A full home stand by generator needs to be rated at 16-20kW. Even 9 units at 63kWh and a price of $31,500 probably isn’t enough to be comparable, not to mention the solar arrays to charge them.
Notice also that HVAC isn’t included in the list of electricity using items in the home.
“HVAC isn’t included in the list of electricity using items in the home.”
Yes. This is not for Al Gore’s power backup.
I remember when only movie theaters had air conditioning. It is possible to live without it. But most places still need heat in the winter, the H in HVAC. My grandmother who lived in San Francisco had a wood fired stove and coal grates and a kerosene heater in the living rooms. Hot water bottles were a requirement for the beds when we went to visit for Christmas.
We’re not talking about Al Gore, we’re talking about anyone living north of about Florida. In fact, it snowed in Jacksonville once when we were living there. We were in St. Augustine at the time, so I missed it. The no frost line is somewhere south of Orlando.
Cold kills.
There are places, the Palm Springs area comes to mind, that are not livable for about half the year without AC.
I found a note on a heat pump to be installed in the Pittsburgh area. The specs include a 15kW electric heater for when the outside temp is too low for the heat pump, about 20°F. That’s about four hours for the 63kWh battery pack, not including any other load.
The note was from someone who wanted to do the install himself to save money, hardly Al Gore.
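The runtime arithmetic in that example, as a trivial R sketch. The 63 kWh pack and 15 kW resistance heater are the figures quoted above; the baseline household load is just an illustrative guess, not a measurement.

# Rough backup-runtime arithmetic for the heat pump example above (R).
pack_kwh  <- 9 * 7   # nine 7 kWh Powerwall units = 63 kWh
heater_kw <- 15      # resistance backup heat when the heat pump cuts out
other_kw  <- 1       # assumed baseline household load (illustrative only)

cat(sprintf("Heater alone: %.1f hours\n", pack_kwh / heater_kw))    # ~4.2 h
cat(sprintf("Heater plus other load: %.1f hours\n",
            pack_kwh / (heater_kw + other_kw)))                     # ~3.9 h

Even before adding any other load, that is only a few hours of heating, which is the point of the comparison with a 16-20 kW standby generator made above.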
People can mostly manage even in Florida heat without AC, as evidenced by the power outage from Hurricane Jeanne, which made landfall on 9-26-04. Millions from West Palm to Vero Beach went without power for an average of two weeks, including my elderly parents.
People can also do without heat most of the time as long as they have shelter, sweaters and blankets. Most homes use gas or oil heat for the main energy source. Gas and wood fireplaces are a good backup too. The draw from a gas or oil furnace is just the fan and relay, which is from 1/4 to 3/4 HP, or 0.19 kW to 0.57 kW.
R Graf,
But the whole point is to not use fossil fuels. If fossil fuels are allowed, a gas (methane or propane, not gasoline) or diesel powered backup generator with a small, cheap battery pack to keep things running until the generator is up to speed makes more sense than a large battery pack. And both are far more expensive than just relying on the grid.
If you really want to get me going, let’s talk about government-subsidized roof top solar arrays and forced utility purchase of excess solar power from them.
DeWitt,
I’m talking about average suburban households being able to supplement their grid power with solar while reaping the benefit of hardening their homes against grid outages. This is becoming affordable for those in the same market and mindset as buyers of a Prius, Leaf, Volt or Tesla. I think a smart solar contractor would market their proposed consumer system with one daily Power Wall and a second as backup. Remember, in an outage one could run the large appliances while the sun is shining and get 100% efficiency, while saving the battery power for night.
DeWitt – See this from the UK’s new Energy Secretary.
HaroldW,
*sigh*
And this is the Tory minister!
England gets a lot of solar energy in the summer when they don’t need it and not so much in the winter when they do. Or at least they do when it isn’t raining. It can be cloudy there for months on end. I was in London for Christmas a few years back and, as I remember, the sun came up at about 10AM and went down about 2PM.
R Graf,
If someone wants to spend their own money installing solar and batteries or buying a hybrid or fully electric vehicle, that’s fine with me. Asking me to pay for it isn’t. Government subsidies and forced sale of excess power to utilities at or near retail are asking someone else, and by extension me, to pay for it. That goes double for the electric vehicle subsidies. Wind turbines still get subsidies!
lucia,
How’s the novel going?
DeWitt, R Graf,
The interesting development is that solar cells have dropped in price enough that they would be very competitive with grid power, at least in sunny regions, save for the cost and lifetime of batteries, which remain prohibitive. If Mr Musk can reduce the lifetime-weighted cost of battery storage to under $0.10 per kWh stored, then I think there will be substantial migration toward rooftop solar in sunny areas. It is also true that you can reduce required battery capacity somewhat by running heavy power users in the day, but this may not be convenient for some. Sunny regions tend to be warmer regions, so any practical solar system would have to be sized based on the need for air conditioning.
The big bug-bear is that most existing houses in warmer areas are not very well insulated; a very well insulated house could use 1/3 the A/C power that most existing houses in warm climates use, which by itself would make a big impact on the required size of a solar system for power. The existing stock of houses was not designed with energy efficiency in mind, at least not in warm climates.
Battery lifetime costs and improved insulation seem to be the big hurdles for distributed solar power.
SteveF:
That and we’ve got an enormous amount invested in a central grid already. For countries like Africa, where you have to bear the full burden of implementing a grid to deliver centralized power, the equation works out a bit different.
(Especially when you include the vulnerability of the grid to hostile action.)
SteveF,
In theory he’s close. 7kWh/day for 10 years at $3500 is $0.137/kWh. I seriously doubt that the capacity curve for 100% charge/discharge for 10 years is flat, though. If that were so, then the lifetime would be longer than ten years.
That would also be 36,500 cycles. I’m not sure there is a battery on the market that can do 10% of that. I have seen a claim that a battery with more than 10,000 cycles to 80% capacity could be manufactured. That remains to be seen.
The $3500 battery does not include the solar panels or wind turbine. But compared to the price of building a grid it’s a bargain. The third world is not going to be the market for the Tesla. They will be using lead acid cells for some time. They usually have plenty of sun or wind, and that energy can be banked, by an ice-maker for example. They need power for the village store(s), infirmary, hotel and school. As the community is slowly powered, its economy will improve even from that modest gain in infrastructure. At the same time the costs for renewable tech should come down.
Having the UK government push for solar-topped suburbs as the norm is very encouraging. The same could happen in the USA very quickly. Once a few in a community put up the very visible advertisement of solar panels, they could become the norm amazingly fast, and the price would come down as well. Just as many panned the internet start-ups that went under in the early 2000s, don’t count solar out yet. It just takes a time ramp, as eBay, Google and Amazon understood.
DeWitt,
I think you mean 3,650 cycles. There are lead acid batteries which will currently do more than half of that, if the depth of each cycle is limited to about 50% of total capacity. But you are correct that Mr Musk’s batteries are very, very unlikely to last 10 years if the cycle is deep. What I have read is that Mr Musk’s lieutenants are carefully discounting the possibility of individuals “cutting the cord” to the grid based on rooftop solar cells (or a wind turbine) and Mr Musk’s batteries. Perhaps they know something we do not.
Carrick,
“For countries like Africa, where you have to bear the full burden of implementing a grid to deliver centralized power, the equation works out a bit different.”
Sounds a little like Sarah Palin. I suspect you mean ‘regions like Africa’ or ‘countries in Africa’. 😉
SteveF, yes, I meant countries in Africa.
Re: DeWitt Payne (Comment #136424)
Why is that necessarily an argument against? As a society, we agree to pay for lots of things that we don’t use personally, in exchange for support for some things that we do use personally.
In the case of energy subsidies, it’s quite likely that the newer subsidies effectively go to displace other subsidies, either explicit or implicit, that are already in place.
Oliver,
If you think that tax subsidized solar installations and wind mills, and legally required buy-back of excess solar and wind power at retail rates (rather than the applicable wholesale rates) are subsidies which only replace other existing subsidies, then please specify what those other subsidies are.
.
Seems to me these things are simply a transfer of money from some people to others, with no offsetting changes. I don’t see a lot of difference from import duties on ethanol to protect corn farmers and the corn-ethanol industry, or duties on sugar to protect sugar farmers. A strong argument can be made that all these things cost society a great deal of money with no sensible return.
SteveF,
Yes, 365 * 10 = 3,650, not 36,500. I’m pretty sure the convention for discharge cycles is 100%. That is also taken to mean that two 50% cycles equal one 100% cycle. Obviously, depth of discharge is a factor in calculating required capacity.
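For reference, the arithmetic behind the $0.137/kWh and 3,650-cycle figures, as a minimal R sketch. It uses the same optimistic simplifications already flagged above: one full-depth cycle per day and no capacity fade over ten years.

# Arithmetic behind the lifetime-cost and cycle-count figures above (R).
# Assumes one full-depth cycle per day and no capacity fade for ten years,
# the same simplifications already flagged as optimistic.
price_usd   <- 3500
kwh_per_day <- 7
years       <- 10

cycles       <- 365 * years                 # 3,650 full cycles
kwh_lifetime <- kwh_per_day * 365 * years   # 25,550 kWh of throughput
cost_per_kwh <- price_usd / kwh_lifetime    # ~$0.137 per kWh stored

cat(sprintf("Cycles: %d, cost per kWh stored: $%.3f\n", cycles, cost_per_kwh))

If capacity fades over the ten years, the delivered kWh shrink and the effective cost per kWh stored rises, which is the caveat raised above.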
Oliver,
I second SteveF. I can’t think of any subsidy that would be displaced by subsidizing roof top solar installations, especially not net metering at retail.
Oliver,
Your argument extended indefinitely would mean that the government should subsidize everything regardless of value. Any subsidy has to stand on its own merit. IMO, the cost/benefit ratio of subsidizing roof top solar arrays does not justify it. The costs are too high and the benefits to society are minimal.
DeWitt,
“Your argument extended indefinitely would mean that the government should subsidize everything regardless of value.”
Indeed, we could just pay each other to dig holes with shovels and immediately fill them back in to ensure full employment and economic growth. The id!otic corn ethanol fuel law is the perfect example of that bizarre logic. We use as much petroleum to make the ethanol as we save when it is blended with gasoline, but substantially increase the final cost to the consumer. On an energy-equivalent basis, corn-based ethanol currently costs ~40% more than a gallon of gasoline at wholesale (and that is a historically low price…over the past decade, it has averaged almost double gasoline). We could just pay the farmers to grow nothing on their land, and spend all their time at a condo in Florida, if that is politically required to end this madness, and it would still end up being MUCH cheaper than subsidizing an entire wasteful infrastructure for producing, distributing, and blending ethanol with gasoline….. and we’d stop enriching politically connected companies like ADM. People who think government is an effective way to address problems live in a dream world of fantasies.
Oliver,
It is quite unlikely that the wind and solar subsidies are replacing previous subsidies on energy.
In fact I doubt if you can document any evidence at all to support your assertion.
And please do not turn the tables and ask those of us who doubt you to prove you wrong.
You have made an unusual assertion. Please prove it.
Re: hunter (Comment #136499):
I’m not sure what you are getting at.
It doesn’t seem at all unusual to me that one might assert that the established energy industries (e.g., coal and gas) receive subsidies. “Alternative energy” subsidies (along with “efficiency” subsidies) are there in part to head off investments into replacement or expanded capacity.
Are you asking me to provide documentation that coal and gas industries receive subsidies, both explicit and implicit, or are you disputing that wind and solar installations displace demand that would have otherwise been served by coal and gas, or something else?
Re: DeWitt Payne (Comment #136487)
I haven’t argued that my argument should be extended indefinitely. All I have asked is why having to pay for someone else’s subsidy is necessarily an argument against said subsidy.
Re: SteveF (Comment #136489)
Steve, that’s a great point. By suggesting that solar and wind subsidies could go to displacing existing energy subsidies for other sources, I have essentially been arguing that we should dig holes and fill them in again.
Oh wait, that’s what we already do when we mine for coal.
:facepalm:
Oliver,
That is the logical extension, not necessarily the specific case of wind and solar subsidies. Somewhere between subsidized research on rare childhood diseases and subsidized hole digging, subsidies do become absurd. What subsidies you think absurd depends a bit on your political inclinations.
.
You have not yet pointed out any “displaced subsidies”; I for one would like to know where you see there are natural gas subsidies.
.
By the way, sarcasm doesn’t suit you well.
A report on subsidies to the nuclear power industry.
A report on historical energy subsidies for various energy sources.
(apologies in advance for formatting issues below)
Re: SteveF (Comment #136508)
I did in the post above the one you replied to. Subsidies given to one portion of the energy industry do, in part, take away business from other sources which are themselves subsidized.
Decades of federal investment into developing the current technology of shale gas fracking would be an obvious place to start.
A long history of large tax credits given to the energy industry, including to natural gas suppliers, would be another place to look.
Add to that the less easily quantifiable, but certainly present, environmental impacts from fracking, coal mining, or practically any other fossil fuel extraction.
Okay, how would you prefer that I respond to specious hole-digging analogies?
Oliver,
So you have nothing but rhetoric.
Thanks for clearing that up.
Now I am traveling to a surprise birthday party for my father’s 80th for Memorial Day.
Best wishes to all for a great Memorial Day weekend and holiday!
SteveF:
Do you actually think there aren’t natural gas subsidies, or are you quibbling over whether the substantial federal & state assistance to natural gas exploration, refinement and transportation qualify as “subsidies?”
I admit I get a bit amused seeing the arguments against subsidies for solar & other alternative energies which completely ignore the enormous investment that’s been made to create and perpetuate our gigantic energy grid.
For some people, apparently it is benevolent space aliens who built and now maintain the grid for us. It certainly doesn’t cost us a thing to keep this enormous engineering project functioning or to expand it, no sir.
hunter,
It’s well known that the fossil fuel industries have long received substantial government subsidies. “Alternative” energy subsidies represent a potential alternative to fossil fuel subsidies.
I don’t see anything unusual or rhetorical about either of the above statements. If there is, then I’d appreciate it if you’d point out the specific parts that seem rhetorical. If you want/need documentation, then RB has very kindly provided some sources in the posts above. More sources can be found if you are genuinely interested.
Either way, have a great Memorial Day weekend!
From the link in #136510 regarding other indirect subsidies
Carrick,
What I think is that what some people call subsidies for natural gas are very different in nature from the subsidies for renewable energy. Yes there are very big investments in infrastructure for natural gas and electricity distribution, but those are pay-as-you-go investments which benefit all the users in many different regions by making natural gas and electricity more available, more affordable, and more reliable. Compare that with the mandated use of ethanol in gasoline… a costly boondoggle of gargantuan proportions. What does that ‘investment’ accomplish? Mandated retail feed-in of solar and wind energy simply shifts the cost of maintaining the distribution grid and required available (non-renewable) generating capacity away from some users and onto other users; there is little common benefit, but instead tremendous benefit for a few at the expense of many. It is closer to sugar import tariffs to protect and enrich a few sugar farmers at the expense of everyone else than to an ‘investment’ in something with net public good.
.
I have no objection to subsidizing research on alternative energy (nor for that matter research on coronary disease), since benefits, if any, will be shared by all. I object to “subsidies” which reward a few at the expense of many. And don’t get me started on taxpayers subsidizing wealthy people to drive around in a $130,000 electric sports car for which they pay only $85,000. Do you think that is a sensible subsidy?
Oliver,
I presume I am not prohibited from using the results of federal research on fracking; anyone can use it… you see, public benefits come from that “subsidy”, they are available to all. The issue of depletion allowances is a real subsidy, and arguably one that is less than uniformly beneficial. So there may be a reasonable case to be made against depletion allowances. But even there the argument is about who pays and who benefits; for sure the class of individuals who benefit (which includes to some degree all users of petroleum and natural gas) is huge, and represents a large portion of the population, even if the benefits are not completely uniform. Compare that to a handful of people who install rooftop solar panels (subsidized by the taxpayer) so that they can shift the cost of the energy grid to others via mandated feed-in rates. The benefits are highly concentrated, and the costs very diffuse. I note that concentrated benefits and diffuse costs are a characteristic of nearly all boondoggles.
The argument that because subsidies exist and many are useful to society, therefore all are useful and unobjectionable, doesn’t hold water. Steve and I are not objecting to subsidies in general, so giving examples of subsidies that have a societal benefit, like the power grid, is irrelevant to whether subsidizing roof top solar panels is a good thing and should be continued or expanded.
DeWitt’s argument (Comment #136520) is also the argument given by energy utilities. However, the counter-argument is that benefits still go to a few whether it is from net metering of rooftop installations or from utility companies passing on costs to ratepayers so as to be sufficiently profitable for their investors.
RB,
Oh puhleeze. Nonprofit and receiving no funding does not mean unbiased. See for example, Consumer Reports.
Also, anybody can buy utility stocks if they think they’re a good investment, although they’ve underperformed the market for at least the last five years. In fact, most people are investors, whether they know it or not, through pension funds and insurance. Installing a roof top solar array is not available to everyone. Besides the high initial capital investment, the market penetration is capped for good reason.
Not to mention that not all utilities are investor owned. I live in TVA land, for example.
Yes. Some states have little to no significant potential for alternative energy sources. So a federal subsidy of those things does represent a wealth transfer of sorts for which they receive no benefit but pay in additional tax burden.
Problems common to all subsidies:
1) Empowering government to perturb market parameters promotes corruption, undermining justice and the common good.
2) Forcing markets to factor in anticipated or unanticipated government interventions undermines business planning and leads to less enterprise.
3) For the very reason of not disturbing the markets (or alienating constituencies) subsidies often linger far beyond the point that is commonly believed to be in the public good. Repeal is hard.
4) Subsidies presuppose future market outcomes are more predictable than events allow them to be. Laws have little flexibility to maneuver in changing environments, unlike business planning.
5) They block unforeseen innovations, since there is little incentive to innovate when competing against a subsidy.
Alternatives:
1) Government research labs – problem: low productivity.
2) Subsidized research – problems: low productivity and susceptibility to corrupting influences.
3) Privately funded research – problem: insufficient rewards after government regulatory scrutiny and taxable gains.
Best solution: Privately funded research with for profit motive in a competitive market with minimum necessary regulation in a low tax environment. See The Men Who Built America.
You may have read claims that the US government subsidizes fossil fuels more than renewable energy. There is a CBO report written after the Obama administration made changes that contradicts this conclusion, but the usual citations came from a 2009 report from the Environmental Law Institute before those changes.
https://www.eli.org/sites/default/files/eli-pubs/d19_07.pdf
The ELI calls it a US subsidy when energy-extracting companies are allowed full tax credits for paying high foreign tax rates (up to 85%), far above the prevailing business tax rate. Foreign governments are allegedly applying high tax rates in place of royalty payments. This represents 20% of all subsidies.
Competitively bid deep-water oil and gas leases produce less government revenue as a percentage of sales than other government oil and gas leases. The ELI calls this a subsidy – 10% of all subsidies. They ignore the fact that risks of deepwater operations explains the lower revenue. (See BP and Deepwater Horizon.)
The inadequate excise taxes imposed by Congress on the coal industry to support the Black Lung Disability Trust Fund represent another subsidy. Making disability payments to individuals free of income tax represents a subsidy.
Providing funds to the Highway Trust Fund, which is short of money due to inadequate fuel taxes, represents a subsidy.
The cost of the Strategic Petroleum Reserve represents a subsidy.
Assistance to low-income homeowners to pay for winter heating is a subsidy.
There are some real subsidies. 20% of subsidies go to the Credit for Production of Nonconventional Fuels – shale oil, shale gas, tar sands. Since the US is now close to “energy independence”, perhaps these subsidies were useful.
That’s indeed a real subsidy, especially if heating is by oil or propane, as is the sale of gasoline below cost in, for example, Venezuela at something like $0.05/gallon. In fact, globally, artificially low fuel prices are the major form of government fossil fuel subsidy.
My proposal is to prepare a group document for those who are interested in the science and solutions and do not believe they are already self-apparent. This would give people something to sign onto as an alternative to AR5. It could be updated bi-annually through the same process, which I would suggest as follows: allow a 60-day period for 11 committees to discuss and draft 500-1000 words on their sections. The first two weeks would be open for volunteers and then closed after that. The committees would submit their chapters to a blog for general audit and debate, then go back to the committee for two weeks for a final draft.
Suggested Committees:
1. Paleo Data: Temp and CO2 Proxies, Natural Variance, Ice Ages
.
2. Temp Data: Hadley, GISS, BEST, UAH, RSS, Argo, TCR/ECS
.
3. Atmospheric Physics of CO2 and GHGs, clouds, vapor, aerosols
.
4. Anthro CO2 vs Natural: Emission, Sinks, Projected Peak CO2
.
5. Ocean Physics: AMO/PDO, AMOC, ENSO, Stadium Wave
.
6. CAGW: Sea Levels, Droughts, Floods, Storms, Acid Ocean, Biodiversity, Crops
.
7. Alternative Energy: Efficacy, Polices for Research and Markets
.
8. Global Policies: Carbon Taxes, Treaties, IPCC, IPCC Models
.
9. Klimatariat: IPCC, Mann, Cook 97%, Ethics (papers to blogs)
.
10. Politics and Psychology: Cut off of debate, deniers, religion
.
11. Introduction and Conclusion: Most important points from each topic and overall.
Currently the only organized voices for climate are Green groups and governments. I propose we define another opinion instead of having it defined for us by expert panels calling us infantile simpleton science-phobes.
Lucia, sorry about the duplicate post but I did not realize that when the upload fails that means one needs to check email for a re-authentication.
.
Are you supportive of the notion of some sort of representative lukewarmers’ position paper? It seems the CAGW alarmists have no trouble defining our position if we do not. Committee compromise is the foundation of democracy.
.
If committees split on a position they could draft two ranges of positions and offer them both to the whole of the committees for a choice. The site SignUpGenius would be a good tool for committee signups. I believe one could stay anonymous through the whole process.
.
I am interested in all opinions pro and con. If organization is too much work that is understandable. But I think use of the web is an exciting way to extend the voice of people, and by extension motivate informed debate resulting in informed action.
R. Graf,
I’m not sure there is much value in a ‘position’ paper. To be useful, we would need an official club/group/organization etc to exist. None does. So before one could write the paper, they first have to create some sort of official organization. (Will it have dues? Will it file for some sort of ‘club’ status with the IRS? Will it meet regularly? Will it have officers? What are the rules for joining? And so on.) Creating one would be a lot of work– it hasn’t been done.
If an organization existed, then the people who are in the official organization could decide whether they wanted their organization to have a position paper.
As for my opinion on whether having such a thing would achieve any goal: if the goal is to ‘force’ CAGW alarmists to pay attention and recognize what the lukewarmer position is, I don’t think the paper would do any good. I don’t think the CAGW alarmists would feel remotely inclined to admit that the lukewarmer position is what it logically must be: the position lukewarmers actually hold. A position paper wouldn’t really change their irrational behavior on this front.
I agree the web is a good way to extend voices. Not sure how that interacts with the notion of “position paper” though.
Lucia, as Groucho Marx would say, I wouldn’t belong to any club that would have me as a member. Likely some here are also very independent. However, I do support causes in which I agree. My thought was that the web could be a place where one did not have to join a club or pay dues to be able to form behind a position. I resented the American Chemical Society for taking my dues and posting a position paper supporting CAGW. I let my membership lapse this year.
.
I don’t know if “position paper” or “petition” are the right words. Neither fit. This is something new. It would need a name.
.
As for the rules, clearly there must be an individual to write the first rules and within them how they get changed. If the rules follow democratic principles among sincere participants I don’t see how there is a problem with acceptance. Here are some ideas:
1) Break the subject up into 8-12 pieces for committees to submit equal-sized chapters. We could follow AR5 and add on a couple more.
.
2) Perhaps a blog post could be dedicated to each committee. Debate could spill off by linkage to individuals’ WordPress sites,
where they could have their own working drafts and comments.
.
3) At the end of 60 days the committee would vote on their favorite individual draft, first for the top 10, then top 2, then those two could be open for posts and debated by anyone for a week. Then the final vote is available to the committee only to choose the final draft after revisions are made by their respective authors.
.
4) The final document would be posted for comments limited to 5 per individual, each being snipped after 15 words or for flagged language.
.
5) Before the voting, committee members could put forth names for removal from the committee on grounds of lack of participation or counter-productive participation. This would be the guard against trolls.
.
6) Rule changes would be by petition of 10 committee participants, after each process had completed and up to 30 days before the beginning of a new process, which could be annual. The up-or-down vote on the rules would be available only to previous committee members.
.
The hardest part is establishing voting committee membership, but I believe trolls will not want to be in the middle of ridicule and subject themselves to removal. The other big issue is that the process needs a sponsor. That is where you would come in, Lucia. If you put out the request for interest, the worst that can happen is the idea falls flat and no change occurs. The best case is that it attracts a lot of attention and becomes a highly regarded open event. It certainly would add a new space for scientific debate to coalesce outside of government and its funded institutions.
R. Graf,
If someone other than me wants to organize this, I have no objection. But I’m not the person to sponsor this.
As you’ve noticed, I’ve been blogging lightly. I have certain irons in the fire and I do not want to work on committees to create working documents. Even if I had nothing else to do, I hate being involved in committees that intend to write anything remotely like a “position” paper. It’s the sort of committee assignment that screams “decline any invitation for that committee” to me. So, I’m just not the person to sponsor this. It has to be someone who likes working on these sorts of things.
Lucia,
I think your most valuable potential contribution to such an effort would be as the detached moderator, whose duties would include rule poster, vote counter, petition administrator and deadline enforcer.
Lucia the Enforcer!
:>
Seriously though, sounds like work to me.
I think the best thing to do for the proposed highly complicated and fairly massive project is to put me in charge of the whole thing. Then nobody would have to worry about it anymore.
Mark Bofill,
It sounds like work to me too. More specifically, the type of work I dislike. Rule poster? Vote counter? Administrator (of petitions?) Enforcer? Ouch!
I’m sure people who like doing those things exist; I am not one of them. If you give me those jobs– especially unpaid– you can be 1000% sure they will not get done.
My experience is that the productivity of a committee is inversely proportional to the number of people on said committee, especially if there is any doubt/disagreement about goals and means. Individuals generally produce creative and insightful analyses; committees generally don’t.
Yes. I have been on committees. The harder the contribution, the larger that individual’s fingerprint. The rest trade credit in exchange for the authority of their endorsement. I was hoping that an open dialogue among people who are already doing this work, but to little effect, would be a novel idea. I hope somebody else re-invents the idea some day.
.
The catch-22 for individualism is that it often gets steamrollered by those much less imaginative or diligent but better organized. So lukewarmers will just have to live with 99.9% of the public not knowing anything but what gets reported, mostly political speeches bashing deniers. Fine, I have a thick skin.
.
Lucia, BTW I am having trouble making posts. My email will not verify. I will try changing user name.
Lucia,
Can’t say I blame you, me too. 🙂
R Graf,
I’m happy to participate in open dialog. We are all doing that and I suspect we will all continue to discuss things in the open.
Lucia, you revealed a benefit I never mentioned: a collaborative effort via blog would be immune to paranoiac concerns about backroom ulterior motives. That, along with producing something that the many (an informed jury), lacking professional conflicts, could attach their names to, might make it more worthy of a news story as a tangible group view, perhaps a majority view.
.
But I don’t know if it would work. It would need to reach a critical mass, and there may not be enough interest. But with so many investing so much time in their extensive personal research, I honestly thought that many would have liked an opportunity to create something with it, short of trying to get published individually. There are many lay experts (I feel) in climate who are generally better informed than paid experts. Specialization in such a broad field puts blinders on the larger picture that those interested in the entire topic can see, like jurors hearing expert testimony from many fields.
.
I completely understand your kind pass on this. I particularly like your site because I like to try out ideas. A blackboard was a great idea.
Cheers
The problem with a ‘Lukewarmer’ manifesto is that ‘Lukewarming’ is not part of an ideology but a demonstrable result of low end sensitivity and forcing, and likely low end future forcing.
.
If there were a good reason to believe climate response was greater than what we observe, or forcing rates were going to increase exponentially, or population growth was accelerating toward high end scenarios, then ‘Lukewarmer’ would not be appropriate.
.
We can repeatedly demonstrate that there is luke-warming for good reason and that this rate of warming will likely decline going forward.
.
But this is because of the existing observations, physics, and demographics, not because of a distinct philosophy.
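.
To put very rough numbers on that claim, here is a minimal back-of-the-envelope sketch in Python. The transient-response and forcing values in it are illustrative assumptions, not results from any particular study; they are only meant to show how low-end inputs produce a lukewarm answer.

    # Back-of-the-envelope transient warming: delta_T ~= TCR * delta_F / F_2xCO2.
    # All numbers below are illustrative assumptions, not outputs of any model or paper.

    F_2XCO2 = 3.7        # W/m^2, canonical forcing for a doubling of CO2
    TCR_LOW = 1.35       # K, a low-end transient climate response (assumed)
    TCR_MID = 1.8        # K, roughly mid-range of the IPCC AR5 likely interval (assumed)

    def warming(delta_forcing_wm2, tcr_k):
        """Transient warming (K) implied by an additional forcing, for a given TCR."""
        return tcr_k * delta_forcing_wm2 / F_2XCO2

    DELTA_F_FUTURE = 2.0  # W/m^2 of assumed additional forcing over the coming decades

    for label, tcr in [("low-end TCR", TCR_LOW), ("mid-range TCR", TCR_MID)]:
        print(f"{label}: ~{warming(DELTA_F_FUTURE, tcr):.1f} K additional transient warming")

With the low-end inputs the additional transient warming comes out well under 1 K, which is the sense in which ‘lukewarm’ follows from the inputs rather than from a creed.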
“But this is because of the existing observations, physics, and demographics, not because of a distinct philosophy.”
Indeed. Lukewarmer Science is one and the same pile of doo doo that Catastrophist Science uses.
Andrew
Andrew,
I believe you are saying that you are not even a tepid lukewarmer.
Do you think you could sign your name to the same document that Mosher could sign? I believe he is on the warm end of lukewarmer.
.
Maybe the answer to better definitions could be to break each topic into three to five classes of degree, from cold to hot. Then people could sign onto that position or change position once a year.
.
For example, you, Andrew, could be on the same page as Mosher regarding the problems with CMIP5 but on opposite ends of the scale with regard to problems with BEST.
R Graf,
Andrew_KY is not a warmer at all, luke- or otherwise. To be a lukewarmer, you have to believe that the fundamentals of the IPCC WG-1 report are correct. I’m pretty sure he doesn’t believe that.
R Graf,
DeWitt is correct. I am not any kind of Warmer.
Andrew
R Graf,
His input would not be useful for a “lukewarmer” manifesto. He is a cooler.
As far as I can imagine, you are describing a collection of documents all of which would be ‘self-published’ by each committee or whatever club the committees belong to. Or did you think it could be submitted to some other body? If so, you might need to look into who would publish it– because this says “self-publish” all over it to me.
“He is a cooler.”
There is a certain element of the Warmer movement that likes to apply ‘Denier’ to someone who they claim denies the science, which is essentially accurate in my case.
I’m a Denier. And in my book, a superior one. lol
Andrew
Andrew,
Yeah, me too. I prefer the term dirty no good gosh darn denier myself, but that’s just personal preference.
Andrew_KY,
We are all pretty much using the same science. It is the applications and implications that we are discussing, at least for most of us.
Not to mention the Lysenkoists and their need to determine the answer before the work is performed, to ensure proper support for their enlightened political goals.
re: “Or did you think it could be submitted to some other body?”
.
Not only would it be self-published, but the drafting process would be a transparent part of it. I think that if there were enough participation, especially by a few on the inside of the science, it would be a newsworthy item and, if really successful, a cited reference.
.
But now I can see there would be resistance from both warmers and coolers. As there are vast differences even among Lukewarmers on many topics, like alternative energy policy or a carbon tax, perhaps a Chinese-menu approach should be considered, organized along the lines of WG1, WG2, and WG3.
.
If Lukewarmers expect their knowledge to have a chance of affecting policy, then there need to be documents of some authority. Right now there are none.
.
Rand Paul is a Lukewarmer, for example, but I doubt he would draft a 50-page climate science and policy document, so he will be forced to fumble and be labeled a “denier” (like the rest of the GOP). As it stands, one is either for the IPCC summary or against it. BTW, I think Lukewarmers might favor a petition to abolish US support for the IPCC.
R Graf,
Aside from the true green-loons, I suspect there is a lot more sympathy for the reasoned arguments of “lukewarmers”, even among those who support policies that most lukewarmers don’t. For example, Nic Lewis and others continue to effectively argue that empirical estimates of climate sensitivity, equilibrium or transient, lie at the low end of the IPCC range. Good scientists (not the crazy ideologues) surely recognize that the GCMs are rapidly going off the rails, and so have been looking for rational explanations for the models’ glaring failures. At some point the tide will shift, not just because of the efforts of people like Nic Lewis, but also because of the many reasoned arguments offered by a host of individuals who are willing and able to point out the weaknesses in the arguments (scientific and political) which are continuously parroted by the ‘settled science’ ideologues. In the end, reality will demand substantial changes in GCMs to lower sensitivity, and then reasoned discussion about public policy can start.
.
We are not yet there, but there are encouraging signs: the recent paper by Mauritsen and Stevens (Nature Geoscience) about a physical mechanism in support of Lindzen’s iris theory is a very good sign that a shift has started (hard to imagine that paper being published 5 years ago). The paper points out that the proposed mechanism ALSO resolves the question of why the ‘tropospheric hotspot’ is missing in measurements of the tropical troposphere. For physical scientists, that kind of ‘two birds with one stone’ explanation is the sort of thing that rings true in a profound sense: you set out to explain one issue and find that the same explanation settles other outstanding questions. Such coincidences don’t happen often by chance, and so fairly well scream that you are on the right track.
.
So if reasonable people can resist implementation of the kinds of mindless and id!otic policies that the US loon-in-chief insists on for another decade or so, I am reasonably confident that climate alarm will gradually fade in importance, while rational consideration of GHG-driven warming comes to dominate policy discussions.
SteveF
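For readers who have not followed the Lewis-style work SteveF mentions, the ‘empirical estimates’ are essentially energy-budget ratios of observed warming to observed forcing change (TCR ~ F_2x * dT / dF, ECS ~ F_2x * dT / (dF – dQ)). Below is a minimal sketch of that arithmetic; the observational inputs are placeholders chosen for illustration, not numbers quoted from Lewis or anyone else.

    # Standard energy-budget estimators (the general form used in
    # Otto et al. / Lewis & Curry style analyses):
    #   TCR ~= F_2x * dT / dF
    #   ECS ~= F_2x * dT / (dF - dQ)
    # The observational inputs below are placeholders, not quoted results.

    F_2X = 3.7  # W/m^2, forcing from doubled CO2

    def tcr_estimate(dT, dF):
        """Transient response implied by warming dT (K) over a forcing change dF (W/m^2)."""
        return F_2X * dT / dF

    def ecs_estimate(dT, dF, dQ):
        """Equilibrium sensitivity; dQ is the change in planetary heat uptake (W/m^2)."""
        return F_2X * dT / (dF - dQ)

    # Illustrative inputs (assumptions): changes between an early-industrial and a recent period.
    dT, dF, dQ = 0.75, 1.95, 0.35

    print(f"TCR ~ {tcr_estimate(dT, dF):.2f} K")      # ~1.4 K with these inputs
    print(f"ECS ~ {ecs_estimate(dT, dF, dQ):.2f} K")  # ~1.7 K with these inputs

Most of the actual argument is over which periods, forcing series, and heat-uptake estimates go into those ratios, which is why the published central values move around while staying toward the low end of the IPCC range.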
I mostly agree, Steve. But the GCMs should not be used as GMST crystal balls at all (IMHO). Use them as a science tool, yes; as a policy tool, no. If we have to correct them against observations before they get around the first turn from the gate, they cannot see the track.
.
Second, I believe there is value in being proactive in correcting bad science before Mother Nature does. One of the main concerns about the IPCC (and world progressives) is that they are damaging our society by degrading science. Science needs to be out of bounds as a political tool to be spun. Currently, the science is being corrupted (I feel) like everything else that stands in the way of ideological quests.
.
My ideology is not to take extreme measures (suspending ethics) in the name of ideology. We all remember Herr Adolf and Comrade Joe’s 100-year or 1000-year plans.
.
We should want leaders who inspire hard work and sacrifice toward sustainability, but not by sacrificing ethics and vilifying anyone. We need leaders to reveal bad science, not just wait for nature to discredit it (and us). I agree we should have faith that things will correct themselves, but at the same time our ethic should be to do all we can individually too. I know this is stupidly obvious, so forgive me.
.
One does not need to be particularly insightful to see that Heartland and the IPCC cannot both be right. What the IPCC does not realize is that the consequences of having Western science discredited are as large as those of a late start on mitigation (if there is a sound mitigation policy).
.
Your paragraph dot idea is catching on.
R Graf,
“Currently, the science is being corrupted (I feel) like everything else that stands in the way of ideological quests.”
.
So it has always been. Ideologues have always used whatever means available to achieve their political goals. Climate science just happens to be a useful camel in this case to carry the load for green nutcakes and committed leftists (though too many climate scientists seem themselves to be green nutcakes and committed leftists!). People who insist they are absolutely correct are always a danger to humanity.
And, since the law of increasing irony demands it, those who are corrupting science to achieve their own ends claim their opponents are anti-science.
DeWitt,
I Googled “law of increasing irony” and got not a single hit. You have made a unique contribution to humor. 😉
SteveF,
Normally I state it as: The fundamental principle of human behavior, irony increases. The law of increasing irony is a bit more concise way to phrase it.
DeWitt, I know you are particularly interested in RT. There is a CE post on fresh papers; one explains the cooling of the Antarctic. http://judithcurry.com/2015/07/06/new-research-on-atmospheric-radiative-transfer