To keep the Kimoto thread clear for conversation about Kimoto’s assumptions in his paper, I’m creating this thread for other people to discuss tangents. Most of Dallas’s comments are being moved here. It’s an open thread; discuss what you like.
Well, this crappy box model is just as crap as the traditional crappy box models.
Why not just measure the damned thing directly?
You know, do an actual experiment.
Arthur, don’t be a pillock, look at his slab model.
The ‘average’ incoming radiation is 342 W/m2. 77 W/m2 bounces off the clouds back into space and 30 W/m2 bounces off the ground and off into space.
This leaves 235 W/m2. Of this 235, 67 W/m2 is absorbed by clouds and the rest, 168 W/m2, by the Earth’s surface.
The Earth’s surface gets 168 W/m2 of sunshine and 324 W/m2 from the clouds, 492 W/m2 in total.
Normally this would raise the temperature to about 305K, but 102 W/m2 is transferred to the clouds by water evaporation (78) and conduction (24).
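A minimal check of that arithmetic with the Stefan-Boltzmann law (a Python sketch; the only added assumption is that the surface radiates as a blackbody, which the slab model implies):

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_temperature(flux_wm2):
    """Temperature (K) of a blackbody emitting the given flux (W/m^2)."""
    return (flux_wm2 / SIGMA) ** 0.25

surface_input = 168.0 + 324.0        # absorbed sunshine + back radiation = 492 W/m2
non_radiative = 78.0 + 24.0          # evaporation + conduction = 102 W/m2
print(bb_temperature(surface_input))                  # ~305 K, as stated above
print(bb_temperature(surface_input - non_radiative))  # ~288 K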
He then has the clouds absorb the majority of the long-wave radiation and be the main emitter of long-wave radiation.
It is nice to see that someone has thought of the work done by raising water into the atmosphere for once.
It is complete crap though, but slightly less crap than the majority.
Attempting to describe a cyclical steady state by averaging the system and treating it as an equilibrium thermodynamic puzzle is a waste of time.
The system needs to be directly measured.
Doc Martyn,
Can I just ask if you are involved in measuring things directly and if so why don’t you share them?
Dave Andrews, the experiment is very simple.
You need to be able to measure the radiative influxes and effluxes at ground level and above the clouds, and the temperature.
Then you need a very large object, in orbit, with the size/distance characteristics to move in front of the sun and completely block incoming sunlight for 15 minutes or so.
The best thing would be to have your huge orbiting sun shade make a track across the sky so that a long swath of the Earth’s surface was covered.
The big huge shiny thing in the night sky is called the Moon, and it does this sort of thing all the time.
If you fly at Mach 2 a solar eclipse lasts about 70 minutes, so all NASA has to do is get a couple of the early mark F-15’s from the bone yard, fit them with conformal tanks and give them look-up/look-down diode-array spectrophotometers. Use a look-down microwave radar to examine the cloud levels and you are made.
Measure the thing directly. At steady state influx = efflux. Block the sun, measure the rate that T drops and measure the down-welling IR from the sky.
Very simple of course, and you would actually have to do some work, so a bit of a no hoper I’m afraid.
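For what it is worth, the logic of that measurement reduces to a few lines. In this sketch the slab heat capacity C and the down-welling flux are illustrative assumptions, not measurements:

SIGMA = 5.67e-8   # W m^-2 K^-4

def cooling_rate(T, F_down, C):
    """dT/dt (K/s) of a thin surface slab radiating SIGMA*T**4 while receiving F_down."""
    return (F_down - SIGMA * T**4) / C

# Rearranged, F_down = C*dT/dt + SIGMA*T**4: measure T and dT/dt with the
# sun blocked and the down-welling IR falls out, which is the point above.
C = 2.0e5         # J m^-2 K^-1, assumed heat capacity of the responding surface layer
print(cooling_rate(288.0, 324.0, C) * 900.0)   # ~ -0.3 K over a 15-minute eclipse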
be polite now
This is just another of these ridiculous games where deniers find two numbers, divide them into each other, find a result that’s smaller than 3.0 and claim that therefore the scientists have massively overestimated the climate sensitivity. The division of those two numbers does not give a meaningful quantity.
My numbers are a little different, 0.79 versus 0.5. But 31C, not 33C, appears to be a closer estimate, due to the 30% albedo assumption including an error from the conductive and convective tropopause altitude (~2500 to 3800 meters with no radiant flux considered). The 0.79 also allows for increasing downward opacity in the CO2 spectrum from the average altitude of back-radiation impact.
With feedbacks, the Antarctic temperatures and pressures appear to cause complex feedbacks as CO2 has a non-linear impact on Ft, peaking at -20C.
The equation kicks butt, by the way.
Dallas-
You haven’t been able to explain how your number juggling relates to physics. I am hoping Kimoto answers Arthur Smith.
Oh, comparison of NASA to K&T indicates ~24Wm-2 of atmospheric absorption not included in the K&T. Several papers have documented issues with unaccounted-for atmospheric absorption. Search Minimum Local Emissivity Variance; most are paywalled, but http://www.ssec.wisc.edu/library/turnerdissertation.pdf is a thesis by David Turner, who was also a coauthor on several paywalled papers.
Lucia,
This may help explain the number juggling.
http://redneckphysics.blogspot.com/2011/10/oh-antarctica.html
It is like testing a black box with known inputs and outputs, closing only one input circuit at a time to estimate maximum response.
anal sex with leopards!
Sex!!
Dallas, I have never understood why these things are never measured. The actual spectroscopy is trivial: a WWII barrage balloon and a gyro-stabilized weather station/diode array would measure up/down-welling radiation. Measure the diurnal cycle in a flat area in Texas with fields of sand, concrete, stones, grass, sunflower, corn, water, etc.
Essentially, measure all you can measure, 24 hrs a day over the course of a year. It’s not rocket science.
Even spraying water on a dry field would give you a huge amount of information. If one were smart one would have fields in concrete tanks, on scales, and have them continuously weighed. No one has actually done a good measure of bio-productivity like this.
In case anyone is crazy enough to wonder what I am on about: I used similar methods to K. Kimoto, only I did not assume the K&T cartoons were accurate. Using the NASA budget instead, which includes the 20+ Wm-2 of atmospheric absorption of OLR not included in the K&T, my numbers reflect what I believe is a more accurate energy budget.
The 0.79 value for sensitivity I calculated is equivalent to the 1.2 no-feedback estimate, allowing for changes in density and water vapor impacting infrared transmittance (increasing opacity to DWLR with decreasing altitude).
This indicates that CO2 has significantly different impacts on different regions: most in the NH extent, little in the tropics, and variable feedback in the SH extent.
Yes, my methods are unorthodox. The “effective” value for radiant flux was estimated using the “benchmark” value proposed on Science of Doom. Initial reference values for the coefficients for conductive flux and latent flux were based on the assumed steady-state global averages. The Antarctic value I am working on is based on the thermal conductivity and kinematic viscosity of air at -49C. Still have some work to do on the Arctic.
The goal is a simple model using temperature, pressure and potential temperature, which are the most accurate observational data, to calculate variations in flux. It looks promising, but hey, I am just a fisherman 🙂
Doc,
“Dallas, I have never understood why these things are never measured.” The paper I referenced shows some of the issues with direct measurement of infrared flux. Because of the dynamic state of the atmosphere, accuracy is limited to a percent of total range under near-ideal conditions, and errors are greater than 80 Wm-2 (~20K) in the polar regions.
Above the tropopause, measurements are more accurate because there is less conductive flux interaction and less water vapor blurring. K. Kimoto’s approach is quite valid, though using K&T seems to have impacted his results considerably.
Lucia, if I may,
Fs is a significant portion of Fb; atmospheric shortwave absorption is 61Wm-2 versus 51Wm-2 via OLR, per NASA.
Fb is dependent on both surface and atmospheric absorption of sw, with surface temperature more dependent on sw absorption to maintain a relatively constant Fb than Fb is to maintain surface temperature.
You may note that the ratio of sw atm/surface absorption is the moderator of Fb, using the NASA budget which is much clearer.
Dallas–
1) Your comment does not clarify. It just makes claims that would require me to ask you to provide physics to justify each subsidiary claim.
2) I want that thread clear to read Kimoto’s answers.
3) For that reason, I’ve moved it here.
The goal is to learn how Kimoto justifies it.
Fair enough,
1) The radiant flux of a body is proportional to its temperature, and its temperature is proportional to the sum of the energy it absorbs and its initial temperature. I.e., if the atmosphere were a perfect black body in space next to a larger black body at 288K, it would have a temperature proportional to the energy absorbed from the black body at 288K. The atmosphere absorbs ~51Wm-2 of radiant energy from the surface, ~79Wm-2 of latent energy from the surface and ~24Wm-2 of conductive energy from the surface, out of a total of 390Wm-2 emitted by the surface. So the atmosphere absorbs 154Wm-2 from the Earth’s surface. In addition, the atmosphere absorbs ~61Wm-2 from the sun, for a total of 154+61=215Wm-2 absorbed. The solar sw absorbed directly is 61/215 or 28% of the total, and the surface radiant energy directly absorbed by the atmosphere is 51/215 or 24% of the total energy absorbed by the atmosphere.
Should you prefer the K&T values: 390Wm-2 minus 40Wm-2 through the atmospheric window yields 350Wm-2 absorbed by the atmosphere; minus 320Wm-2 estimated back radiation yields 30Wm-2 of net radiant energy absorbed from the surface.
That net of 30, plus 24 conductive and 79 latent, yields 133Wm-2; plus 61 sw absorbed equals 194Wm-2, yielding 31% by sw and 15% by surface radiant flux absorbed.
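The bookkeeping in those two paragraphs reduces to a few lines; this sketch only reproduces the quoted W/m2 values and takes no side on which budget is right:

def atmos_fractions(radiant, latent, conductive, solar_sw):
    """Total atmospheric absorption and the fractions from solar SW and surface IR."""
    total = radiant + latent + conductive + solar_sw
    return total, solar_sw / total, radiant / total

print(atmos_fractions(51.0, 79.0, 24.0, 61.0))                  # NASA-style: 215, ~28%, ~24%
print(atmos_fractions(390.0 - 40.0 - 320.0, 79.0, 24.0, 61.0))  # K&T-style: 194, ~31%, ~15%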
If we could switch the sun off for an experiment and convert all the surface thermal flux to just radiant, 390Wm-2 would be emitted by the surface and 240Wm-2 emitted at the top of the atmosphere; the difference, 390-240, would be 160Wm-2, roughly the classic value of the atmospheric effect. If you flipped the sun back on quickly, 61Wm-2 would be absorbed by the atmosphere, yielding 160 plus 61 or 221Wm-2 of total absorbed energy in the atmosphere. That would be the atmospheric effect. That is the basic physical description of the atmospheric effect.
Perhaps you would like to explain with physics how 215 to 221 was inflated to 320Wm-2? I believe energy must still be conserved, despite computer modeling.
Oddly, that 215 to 221 just happens to agree with the potential energy of the atmosphere: the hydrostatic effect, considering the total mass of our atmosphere and the gravity exerted on it by the combined mass of the Earth and its atmosphere.
If you were to model the atmosphere as a thin sphere with a temperature equivalent to a flux of 215 to 220Wm-2, at an altitude equivalent to 600mb pressure, the potential temperature would be 288K should that thin sphere collapse to the surface and undergo the heat of compression from 600mb to 1 bar.
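That compression claim is easy to check with the Poisson relation theta = T*(P0/P)^(R/cp), taking R/cp ~ 0.286 for dry air and taking the 220Wm-2 shell at face value:

SIGMA = 5.67e-8

T_shell = (220.0 / SIGMA) ** 0.25            # ~249.6 K, blackbody temperature of the shell
theta = T_shell * (1000.0 / 600.0) ** 0.286  # bring 600 mb air adiabatically to 1000 mb
print(T_shell, theta)                        # ~249.6 K -> ~289 K, close to the claimed 288 K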
Other than this basic physics, I got nothin’
How did it get to be 320 Wm-2 again?
Dallas
Oh? Not T^4? Let’s stop here. Then we can continue.
Proportional does not have to be linear. I could use the full S-B relation, 5.67×10-8(T)^4, assuming emissivity equal to one. One of the elegances of the derivative of flux with respect to temperature is that it is ~4F/T. Should I provide that derivation?
If you like, it can all be done the long way. And as long as you don’t miss about 21Wm-2, you will get the same answers I get.
Just curious: what would happen should one confuse 21K with 21Wm-2?
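For anyone who wants the derivation offered above, it is one line. Since $F = \sigma T^4$,

$$\frac{dF}{dT} = 4\sigma T^{3} = \frac{4\sigma T^{4}}{T} = \frac{4F}{T},$$

which holds exactly at any temperature; at $T = 288\,\mathrm{K}$, where $F \approx 390\ \mathrm{W\,m^{-2}}$, the slope is about $5.4\ \mathrm{W\,m^{-2}\,K^{-1}}$.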
Dallas–
1) Consult wikipedia for the usage of “proportional to”.
2) The derivative, particularly expressed in that form, is not relevant to the question asked.
3) Yes, you should answer specific questions the long way, and your answer should apply to the specific question. Also note that your expression of the derivative would be wrong if a forcing really is proportional to temperature.
Okay, over the range of 255K to 288K the actual finite-difference approximation would be ΔF/ΔT ≈ 4.55 Wm-2 per K.
So let’s rephrase and say that the black body receiving the energy from the 288K black body receives a 220Wm-2 flux for long enough to reach equilibrium. The body absorbing 220Wm-2 attains a temperature equivalent to 220Wm-2, or (220/5.67e-8)^(1/4) ≈ 249K; of course we are assuming the 288K black body maintains its temperature and the cooler black body continuously absorbs 220Wm-2. Then the back radiation felt by the 288K black body from the cooler body would be some percentage of the 220Wm-2 emitted by the cooler body.
If the 288K body were a solid sphere, and the cooler body at 249K were a sphere centered around the solid 288K body, the core 288K body would receive less than the 220Wm-2 emitted by the outer sphere. If the distance from the surface of the core body to the outer sphere were small, the core would receive more of the energy emitted by the outer sphere. If there were a third sphere containing the core and the first sphere, with an effective temperature greater than 249K, then the core sphere would receive more energy from the cooler, closer sphere, due to the closer sphere receiving energy from the new outer sphere.
At no time will the core sphere receive more energy than the closer inner sphere can emit based on the closer sphere’s effective temperature, given the total energy that it absorbs.
Now let’s place a gas between the core and the closer sphere, with a thermal coefficient of heat transfer. The temperature of that gas will decrease from the temperature of the core to the temperature of the closer sphere. Some portion of the energy received by the closer sphere is now conductive, not radiant, energy. The temperature of the gas will never exceed the temperature of the inner core or be less than the temperature of the closer sphere. We add gas between the closer sphere and the larger outer sphere. At no time will that gas have a temperature less than that of the closer sphere or greater than the temperature of the outer sphere.
If we add an infinite number of spheres matching the temperature profile of the atmosphere, we will find that the radiant energy emitted by the inner core never exceeds its 288K flux value of 390Wm-2 minus the conductive flux minus the latent flux, or 390-24-79=287Wm-2. If all of that 287 is absorbed by the closest inner sphere, only a portion of that energy would be emitted back to the inner core. Since apparently 240Wm-2 is emitted to the outer spheres, we have 287-240=47Wm-2 absorbed in the vicinity of the closest inner sphere. Unless we select a much closer inner sphere at a temperature of 274K, equivalent to 320Wm-2, we cannot have a back radiation equal to or greater than 320Wm-2, which would require more radiant energy than the 287Wm-2 of radiant energy emitted by the surface.
Which is fine by me; then the back radiation would warm any place on the globe that ever attempted to go below zero degrees C. I am sure that my new back-radiation nocturnal absorbers will make me a fortune. I am pretty sure there would not be much problem measuring that amount of energy.
But since back radiation is not back conduction, I kinda doubt I should invest in nocturnal collector inventory just yet.
With our infinite number of spheres we can pick any one we like. For some reason, I think Angstrom’s number and the potential temperature of the atmosphere equaling 288K make more sense. Everything else is just an arbitrary temperature.
By definition, since that seems to be needed.
http://redneckphysics.blogspot.com/2011/10/what-heck-is-downwelling-long-wave.html
Arthur Smith said,
“That simplifies analysis there because there are no convective or latent heat fluxes to worry about. When translated down to the surface, a radiative flux change of a small amount at the tropopause becomes a much larger amount at the surface, with surface flux changes a factor of 4 or more larger after feedbacks.”
And there is the issue. The tropopause is a temperature inversion. That greatly complicates the calculation of the impact of forcing. Using the surface as a frame of reference, or the true TOA, simplifies the calculation. The no-greenhouse-gas Earth would have a tropopause, which would not have a temperature equivalent to the 33C; it would be colder than that. Exactly how much colder would require knowing the percentages of surface and atmospheric albedo. Using the TOA would give a correct answer, but it would not be very informative; we already know that the atmospheric effect is approximately 33C or 155Wm-2. Using the surface we can approximate the combined surface and solar energy absorbed by the atmosphere, then determine what the lapse rate should be from the surface frame of reference.
This results in the inverted pyramid showing how the atmospheric effect is balanced by the surface energy flux, the inverse of increasing energy magnitude from a fictitious value assumed to be at the tropopause. Energy is conserved.
“that simplifies analysis there because there are no convective or latent heat fluxes to worry about”
Indeed. It gives one a qualitative description of a complex non-equilibrium steady-state system by averaging the various thermodynamic parameters and treating the system as an equilibrium. The real question is: when can we use equilibrium thermodynamics to describe a non-equilibrium system?
Here is a little thought experiment, we make a model of a planet, we take a steel cannon ball and place it on a rod. At the equator, we use a blow torch to have localized heating.
With a non-rotating ball we have an asymmetrical heat distribution around all axes.
If we rotate the ball rapidly enough we have a symmetrical temperature along all the latitudes, as the heat input around the equator is uniform.
So, at what rotation speed do we get a switch from asymmetrical to symmetrical equatorial temperature?
LOL, Welcome back to the crackpot thread Doc 🙂
http://redneckphysics.blogspot.com/2011/10/hows-that-choice-of-temperature.html
Here is a little quick read before you re-enter the fray 🙂
DeWitt Payne scores in the big boy thread! 0.8 would be the minimum surface temperature change with 3.7Wm-2 of increased CO2 forcing. The inverse square law is alive and well. Now with a little fluid dynamics maybe we can estimate water vapor feedback?
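A sketch of where a figure like 0.8 K can come from, assuming pure blackbody behavior: the no-feedback response to a forcing dF is roughly dF/(4*sigma*T^3), and the answer depends on which temperature is plugged in.

SIGMA = 5.67e-8

def no_feedback_dT(dF, T):
    """First-order temperature response (K) to a forcing dF (W/m^2) at temperature T (K)."""
    return dF / (4.0 * SIGMA * T**3)

print(no_feedback_dT(3.7, 255.0))   # ~1.0 K at the 255 K effective emission temperature
print(no_feedback_dT(3.7, 288.0))   # ~0.7 K at the 288 K surface temperature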
Re: DocMartyn (Comment #84609)
October 26th, 2011 at 6:07 am
Is this a trick question? I would say none, as it never becomes symmetric.
ursubtitche2
You are moderated under both your current name and your previous one. I’ve let these through so people can see the quality of comments, but in future I will not be letting them through.
Oliver, you will get a uniform heat gradient, high at the equator and low at the poles, if you rotate the ball rapidly enough. If it isn’t rotated, or is rotated too slowly, then the ‘equator’ near the heat source will be hotter than the dark side.
It used to be believed that Mercury was gravity-locked in its solar orbit, which would mean that one face was always facing the sun and the other facing away. This turned out to be wrong, but the diurnal temperature range would have been something to write home about.
Dallas–
It’s not “the crackpot thread”. I want to be fair to Kimoto, whose native language is Japanese, who is not accustomed to blogs, who is not familiar with html and so on. I also want very much to read his justification for his analysis.
You will notice that in a recent comment he writes “Probably, my English ability is not enough for this theme.” I think that if the thread gets quickly diverted to your theory, or to Doc’s question (“The real question is when can we use equilibrium thermodynamics to describe a non-equilibrium system?”), it will prevent us from learning what assumptions Kimoto made.
It is clear he made an assumption about equilibrium thermo. We all know he did it. It’s a very common one. I don’t want discussion of that to get in the way of our discovering what he did after making that assumption.
Re: DocMartyn (Comment #84649)
October 26th, 2011 at 10:39 am
As you increase the rotation rate, you will approach this condition, but you will never achieve it. That’s why I asked whether it was a trick question when you asked “at what rotation speed do we get a switch…”
Suggest we be pleasant, polite and most of all patient with what may seem to us gaijin to be indirect argument but which really isn’t. You will be rewarded. And you will make a really good friend.
That is based on my 40 years of experience in Japan on technical and analytical discussions.
John
Humor, Lucia; that’s why the smilies. 🙂 The issue, though, is communication, as it is with most of climate science. My “proportional,” for example, comes from being an old guy who has used quite a bit of log paper to linearize functions. To me, proportional is in the eye of the grapher. Frame of reference, though, will always be key. K. Kimoto’s frame and initial equilibrium values are equally important.
Lucia I missed your comment in the original Monckton post that you just referenced.
From my perspective the equation is very flexible.
Ft + Fl + Fr + Fs are basic divisions, and Fb is a function of all atmospheric input fluxes.
I found that Fl, the latent, is better considered as F(l+s), to include both the latent and sensible portions of latent cooling, approximately a 5% to 10% difference. Fr can also be written as Fra + Frs, where Fra is the portion of the spectrum that interacts with GHGs most significantly and Frs the atmospheric window, which still has some interaction in the atmosphere, but a much smaller amount.
In the Antarctic, Ft, the thermal term, is better written as Fc + Fv, for conductive and convective, since the conductivity and the kinematic viscosity are much different in the Antarctic than anywhere else on the globe.
The equation allows for simplification or near-infinite expansion. With the coarse global energy balance, the simplified form better matches the quality of the data and is sufficient for the point he is attempting to make.
That point: from a true TOA frame of reference, the equation can match the system performance. From a surface frame of reference, the equation matches the TOA values and provides more information about the system. And from the tropopause frame of reference, nothing matches anything, which is valuable information.
Actually, Oliver, you will get a uniform temperature.
The temperature achieves true homogeneity when the rate the ball spins reaches the conductance rate at the circumference.
It is actually simple to model.
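A toy version of that model (a construction for illustration, not DocMartyn’s own code): the equator as a conducting ring of cells, heated at a point fixed in space while the ring rotates, with linear heat loss. The hot-to-cold spread shrinks steadily with rotation rate; whether it ever reaches exactly zero is the point under dispute.

def ring_spread(omega, N=120, k=0.2, cool=0.01, q=1.0, dt=0.1, steps=5000):
    """Max-minus-min temperature of a rotating conducting ring heated at a fixed point.

    omega is the rotation rate in cells per unit time; k is conduction between
    neighbors, cool is linear heat loss, q is heater power.  Units are arbitrary,
    so the result is qualitative only.
    """
    T = [0.0] * N
    pos = 0.0
    for _ in range(steps):
        pos = (pos + omega * dt) % N          # the ring turns under the torch
        Tn = list(T)
        for i in range(N):
            lap = T[(i - 1) % N] - 2.0 * T[i] + T[(i + 1) % N]
            Tn[i] = T[i] + dt * (k * lap - cool * T[i])
        Tn[int(pos)] += q * dt                # deposit heat under the torch
        T = Tn
    return max(T) - min(T)

for omega in (0.0, 0.3, 3.0, 30.0):
    print(omega, ring_spread(omega))          # the spread falls as rotation speeds up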
Hey Doc,
Remember sarcasm is the lowest form of wit. I was just asking if you had done any modelling and if so would you share the results with us.
DocMartyn,
Are you saying you spin the ball so that the torch moves around the circumference just as quickly as the heat is conducted along the circumference?
Dallas–
I “got” the smilies. Nevertheless, I wanted to explain my purpose in having a side thread. I generally don’t create them. But I sometimes do.
SteveF, “Your calculation of a climate sensitivity well below Stefan-Boltzmann is just not credible… regardless of the methodology used.”
Think about that SteveF
If the Earth were a perfect black body, ~4F/T. Using the S-B standard 0.926 factor for an average black body, ~3.7F/T. Because the Earth is a gray body it is less than 3.7; how much? Approximately 3.3F/T is the benchmark value. That is an average for the globe and does not allow for regional variance, so we don’t even know if it is linear.
If I am at the TOA and 4 Watts of energy from the surface is attenuated to ~3.3, why should I believe that if I shine 4 Watts down, it won’t be attenuated at a similar rate?
Like Arthur and the T(z) nonsense, energy flux meets impedance in all directions in the atmosphere. Be careful what you assume.
BTW,
If you want, start with a model of Earth with only the solar input and at an initial temperature of approximately zero. Then figure out what happens as the temperature approaches 288K. Then compare that to a model with the surface at 288K; turn the sun off and watch it cool. Those represent real relationships, and there will be no perpetual motion.
That would make a nice article, a back to the basics kinda thing.
Dallas,
“Think about that SteveF”
I assure you that I have, and quite a lot. The atmosphere has very different optical properties at visible and infrared wavelengths. The “impedance” is most certainly not uniform at different wavelengths….. which is of course why the atmosphere warms the planet surface.
I do not at all understand what you are trying to convey with comment 84688.
The variation in opacity impacts longwave differently looking up versus looking down. So 100Wm-2 at the surface would be seen roughly as 80Wm-2 at 4000 meters. The inverse is approximately true for that short distance. So the “effective” emissivity is approximately 0.8 (it is actually closer to 0.825). That same source at 12,000 meters is roughly 71 Watts seen, effective emissivity 0.71. At the top of the atmosphere, the same source is seen as 61 Watts, emissivity 0.61. So depending on your frame of reference you get different values. Kimoto’s 0.5 is low because of an error in the K&T paper in addition to the one he is exploring, so his value may be off, but not necessarily his methods.
Lucia,
“t1 (z) = Ts + integral( lapse rate with respect to z)
Right there, we see direct functional dependence on Ts.”
What is the flux value associated with Ts for the dry adiabatic lapse rate?
F(t1(z)) ~ 4(aFt + eFr), where a ~ 0.33 and e ~ 0.825. Of course you need the more correct values provided by NASA. But you are right, things need to be looked at again.
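Taking the lapse-rate question literally, a sketch that ignores emissivity and transmittance entirely (arithmetic, not radiative transfer): integrate the dry adiabatic lapse rate g/cp ~ 9.8 K/km down from a 288 K surface and convert each temperature to a blackbody flux.

SIGMA = 5.67e-8
GAMMA = 9.8e-3    # K/m, dry adiabatic lapse rate g/cp

def T_of_z(z_m, Ts=288.0):
    """Temperature along a dry adiabat anchored at surface temperature Ts."""
    return Ts - GAMMA * z_m

for z in (0.0, 2000.0, 4000.0, 11000.0):
    T = T_of_z(z)
    print(z, T, SIGMA * T**4)    # altitude (m), temperature (K), blackbody flux (W/m^2)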
Dallas–
What you are asking is irrelevant to the discussion about Kimoto’s assumption. The issue cuts to whether one can assume Fb is not a function of Ts and whether he is making mutually inconsistent assumptions.
This has nothing to do with “the Flux value associated with Ts for the dry adiabatic lapse rate?” so the answer to your question doesn’t have anything to do with my point. Whatever you are asking seems to be related to some argument you are having in your own head.
LOL,
Yes, I know it seems that way. All it takes though is one incorrect assumption to snowball into major issues.
The 33C, for example, is only valid at the true TOA and/or the surface. Not at the tropopause, because that is a temperature inversion that would exist in a world without GHGs.
Just using the mass of the atmosphere, gravity and the thermal conductivity of nitrogen, 0.024, the surface temperature would not be the same as the tropopause, low though it may be. Using the tropopause, or what is assumed to be the TOA, is the first error. Even Eli Rabett is aware of that, and oddly quite recently.
So how much error is possible if someone picks a wrong frame of reference for a fairly complex thermodynamic problem? Quick answer, ~2C relating TOA to surface.
So nothing I say will make any sense as long as 33C is assumed as the surface temperature change with no greenhouse effect. It can be assumed as the total atmospheric effect, but that involves thermal flux, radiant flux and the interaction of thermal and radiant fluxes. Then one can begin to consider latent transfer.
If you were to assume that K&T were incorrect, then you would notice that 321Wm-2 DWLR would be the approximate value if the source of that flux was at the true TOA, not at the surface. Then you could build a simple model using the inverse square law of flux propagation (you probably have another term).
It is an interesting problem, communicating why the first things taught in thermodynamics are KISS, Frame of Reference and ASSUME. Until that is understood, nothing I say will make sense and nothing the atmosphere does will make sense.
Steady state, the impact is rather small. Dynamically, it is much more interesting.
Re: Dallas (Oct 27 09:38),
In a perfectly transparent atmosphere there would be no tropopause and no stratosphere. The temperature inversion occurs because oxygen and ozone absorb strongly in the UV. The tropopause is defined by the beginning of the inversion. The kinetics of ozone formation are such that it occurs at high altitude and causes the stratosphere to warm with altitude. In fact, the single-layer gray-atmosphere toy model predicts that an atmosphere that absorbs SW radiation more strongly than it absorbs LW radiation will warm with altitude. If you can come up with a mechanism that would cause warming with altitude in a perfectly transparent atmosphere, I would very much like to see it.
Dewitt Payne,
That is true, I have never met perfection though and it is a dangerous assumption.
http://redneckphysics.blogspot.com/2011/10/330-watt-weirdness.html
This addresses part of the issue. If you like, I can use the supposedly negligible N2 and O2 low-energy photon absorption, plus the conductivity of ~0.024 with estimates of the kinematic viscosity at 255K, to estimate the no-GHG lapse rate, plus the UV-O2 interaction to estimate the tropopause inversion. Seems like a lot of extra work, but if that is what is required?
For Arthur, SteveF, Dewitt, Jeff Id and Vice Admiral Lucia
The Planck response is nothing more than the “effective” emissivity. It is well known that the S-B relationship is less than perfect. The “effective” emissivity of any object and the transmittance of the path between objects must be considered. Effective emissivity simplifies the solution by dealing only with the radiant impact, not the individual spectral intensities of windows.
For Earth, dF/dT @ 238K is ~4 and dF/dT @ 288K is ~5.44. This is very important to remember when attempting to determine atmospheric relationships. Using 238K as a reference temperature is automatically equivalent to selecting either the TOA or tropopause frame of reference. If the tropopause is not stable, that frame of reference is very complicated. As the surface of the Earth can only emit or absorb in one direction, it is the most ideal frame of reference to select.
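A quick check of those slopes: for a blackbody, dF/dT = 4*sigma*T^3, which is identically 4F/T at any temperature.

SIGMA = 5.67e-8
for T in (238.0, 255.0, 288.0):
    print(T, 4.0 * SIGMA * T**3)   # ~3.1, ~3.8 and ~5.4 W m^-2 K^-1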
While we can quibble over whether or not a no-greenhouse-gas Earth might have a tropopause, simply that selection as a frame of reference is potentially heartbreaking, embarrassing, less than intelligent… select your own term.
If one can accept that frame of reference may be important, then atmospheric relationships make much more sense. Though stubbornly defending a weak position can be fun, it is not very productive.
Then holding Fb constant can give an indication of the impact on the surface, which can then be used to determine a new equilibrium condition. That can then be used to determine a new “effective” emissivity. And, properly considered, a new reference altitude if one wishes to explore the radiant impact versus conductive, convective and latent changes in flux.
Remember that a solution in one frame of reference, if correct, can be used to determine the solution in other frames of reference, sometimes with enlightening results.
Re: SteveF (Comment #84821) on the Kimoto thread:
“I noted during my recent flight from Tokyo to San Francisco”
Is there any value in using commercial flights for atmospheric data collection? I’d guess most of the variables of interest are being recorded. I’d also guess it would be a minor job to add any particular sensors of interest?
Anybody know if this would be a viable data source and if it is done? TIA.
This is a quick explanation of the solar and high-DWLR impact on clouds for my significant other, who is not well versed in physics and loves to talk during football games.
http://redneckphysics.blogspot.com/2011/10/im-little-photon-short-and-stout.html
“DeWitt Payne
In a perfectly transparent atmosphere there would be no tropopause and no stratosphere. The temperature inversion occurs because oxygen and ozone absorb strongly in the UV.”
And if your Aunt had balls she would be your Uncle.
The atmosphere has the composition it does because of the biota; before water splitting evolved there was little free oxygen and the atmosphere was highly reducing. At the moment the atmosphere is highly oxidizing.
How fortunate that the switch from a reducing to oxidizing atmosphere didn’t cause a major temperature imbalance.
Doc scores!
For those of you following along at home, 75% of the mass of the atmosphere is located below 11km. In ~1905 this guy Einstein came up with this crazy equation, e=mc^2, which was interesting. The point where 50% of the mass of the atmosphere lies might be a good reference layer for atmospheric energy impact. Let’s see, 1905 is nearly 10 years after 1896 and Arrhenius’ carbonic acid paper. If he shifted his reference temperature for the radiant layer from 255K to the 50% mass/energy layer, what would that be?
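A rough answer, assuming a hydrostatic atmosphere with an 8 km scale height and the standard 6.5 K/km tropospheric lapse rate (both assumptions, not numbers from the thread):

import math

H = 8000.0                          # m, assumed scale height
z_half = H * math.log(2.0)          # ~5.5 km: half the air mass lies below this level (~500 mb)
T_half = 288.0 - 6.5e-3 * z_half    # ~252 K with the assumed lapse rate
print(z_half, T_half)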
Re: DocMartyn (Oct 30 08:04),
Your completely irrelevant, incompetent and immaterial comment should have been directed to Dallas, not me. He’s the one who postulated an atmosphere without greenhouse gases. Ozone is a greenhouse gas. Nitrogen and oxygen are also greenhouse gases because collision induced absorption causes them to absorb/emit weakly in the thermal IR range. In the absence of the more strongly absorbing ghg’s like water vapor and carbon dioxide there would still be a small greenhouse effect for a nitrogen and oxygen atmosphere.
[my emphasis]
“Not at the tropopause, because that is a temperature inversion that would exist in a world without GHGs.”
As Dewitt said, N2 and O2 are weak GHGs, though they are assumed negligible with respect to water vapor, CO2 etc. That was my point with the “no GHG Earth”, which is nonsense of course. It is intended to illustrate the danger of using 255K as the reference temperature while assuming it is at the tropopause or top of the troposphere. The ideal reference temperature for determining a change in DWLR impact is between the surface and the tropopause, i.e. the layer of the average energy of the atmosphere. It is the change in the average energy of the atmosphere that is the change in the atmospheric effect.
Surface temperature is a handy reference since there is only thermal flux out of and into the surface to contend with. So variation of the average energy layer intensity and altitude with respect to the surface reference, simplifies the problem.
While the TOA 255K can be used, it does not provide much information about what is happening internal to the system.
Re: Dallas (Oct 30 12:03),
I would call it a thought experiment rather than nonsense. There are lots of concepts in physics that have no counterpart in the real world. They are still useful. For example, there is no such thing in the real world as a blackbody with emissivity identical to 1 at all wavelengths, a perfect reflector with reflectivity identical to one or a perfectly transparent body with transmissivity identical to one. That doesn’t make the concepts nonsense.
I agree, as long as it remains a useful standard, like a Carnot engine for example. One of the problems I have had is people believing there would be a “real” no-GHG Earth with a tropopause plastered to the surface. Which creates misconceptions like DWLR directly warming the surface.
DWLR is a reflection of atmospheric temperature. While a few photons may run the gauntlet and be absorbed directly by the surface from a CO2 molecule in the upper atmosphere, the odds are pretty low.
Most don’t seem to understand that the impact of DWLR is felt through replacement of convective air currents. If the replacement air is warmer than it would otherwise be due to additional CO2, the net effect is surface warming.
That is why the distinction between upper-level convection initiated by CO2 forcing and by solar absorption needs to be considered. The lower in the atmosphere the convection is initiated, the greater the impact of warming felt at the surface.
The basic model I am building uses ideal black body standards for comparison with the actual temperatures and pressures to estimate the total effect. Then the known portions of the effect can be used to indicate how much unknown there is to consider. Pretty simple really. Only problem is 330Wm-2 is not the DWLR value. The potential temperature of air at ~600mb is the standard reference (~249K at 600mb would increase to 288K at the surface via compression). Because of the satellite bands, though, 500mb will have to do. That means the value for DWLR is ~220Wm-2. That is lower than Kimoto’s, because the K&T is missing approximately 20Wm-2 absorbed by the atmosphere as compared to the NASA values.
No big deal, the cartoon just happens to be wrong. Then again, it always has been just a cartoon.
Cont.
BTW, upper-level convection involving clouds is a good example of the enthalpy loop that can be created: basically, an atmospheric heat pipe. If you look at actual tropopause data you can see temperature drops to -95C, and some satellite spectra of deep convection clouds show that the atmospheric window is what the cloud tops, water and ice spectra, are seeing. Solomon et al.’s stratospheric water vapor puzzle appears to be due to the atmospheric heat pipe.
Re: Dallas (Comment #84858)
255 K is nowhere near the top of the atmosphere.
Julio, yes, 238Wm-2 would have an equivalent temperature of ~255K at the TOA. It is a reference number used to determine the estimated atmospheric effect. It has no meaning in the atmosphere. Arrhenius, however, appears to have used it as a reference in his first paper on carbonic acid in the atmosphere. It seems to have stuck in some cases.
The old guy did meticulous work. However, since he retracted his estimate and grudgingly stated that the carbonic acid effect was only 1.6 (2.1 with water vapor), never publishing the revised work, it appears he believed he had found a mistake in his original paper, possibly the use of 255K as his reference layer for CO2 forcing.
1.6 (2.1) seems to be remarkably accurate, by the way; he should have published.
Thought this might be of interest:
http://www.suite101.com/news/new-satellite-data-contradicts-carbon-dioxide-climate-theory-a394975
April fools in October from JAXA??
Here is an excellent technique to test climate models.
http://www.scientificamerican.com/article.cfm?id=finance-why-economic-models-are-always-wrong
Calibration–a standard procedure used by all modelers in all fields, including finance–had rendered a perfect model seriously flawed. Though taken aback, he continued his study, and found that having even tiny flaws in the model or the historical data made the situation far worse. “As far as I can tell, you’d have exactly the same situation with any model that has to be calibrated,” says Carter.
http://redneckphysics.blogspot.com/2011/11/power-series-or-opposing-forces.html
Is energy conserved?
Re: Dallas (Oct 30 13:24),
90% of DWLR comes from within 100m of the surface, not the 600 mbar level. It’s easy to tell this because DWLR does not have a blackbody spectrum over the full frequency range. In the parts that do look like a blackbody, the temperature is only slightly cooler than the surface. Here’s a figure from Grant Petty, A First Course in Atmospheric Radiation (an excellent reference, btw) that shows observed spectra from the surface looking up with Planck curves calculated from the surface temperature superimposed. But that’s clear sky. When the sky is covered with low level clouds, the surface sees almost exactly the same radiative flux as it’s emitting.
Re: ferd berple (Oct 31 08:50),
Along those lines, there’s also this as reported on Pielke, Sr.’s site.
Even though the various surface data sets are not all that different, they still lead to significant differences in climate sensitivity when used to calibrate the MIT Integrated Global Systems Model.
Dewitt,
“90% of DWLR comes from within 100m of the surface”
Yes, so DWLR is 390Wm-2 minus 0.8, or 389.2Wm-2, a meaningless number as I have said before. Where would 90% of the CO2 forcing come from? Oh, that’s right, within 100m of the surface, resulting in 6Wm-2. Why is that not happening? Could it be that there is temperature due to conduction? Transport of heat via latent transfer?
Why would I use 90% when 50%, the mean, is a realistic reference? Condensation doesn’t start at 100m; where does it start on average?
DeWitt,
Maybe 90% of DWLR measured at the surface comes from within 100 m of the surface, but doesn’t that first 100 m depend on the next 100 m and so on?
Dallas,
This “convection” which you speak of. I believe I may have heard it mentioned once or twice before.
Re: Dallas (Nov 1 15:52),
The atmosphere isn’t a gray or blackbody. DWLR is not 390 W/m² for clear sky conditions, and it isn’t some arbitrary, meaningless mathematical construct. It can be and is measured. Did you even look at the graph I linked?
No, the CO2 forcing is due to reduced emission at the top of the atmosphere. Here’s another link you probably won’t look at. See the big dip at 667 cm-1 for a, c and d? That’s because the effective altitude for CO2 emission to space is near the tropopause, at a temperature of 200-220K. That turns out to be warmer than the surface of Antarctica in the winter because of the massive temperature inversion that forms during the winter and the high elevation of the top of the ice sheet. More CO2 means the dip gets wider and less radiation is emitted to space; the difference between the radiation emitted to space at xCO2 and after an instantaneous increase to 2xCO2 is the forcing. The 6 W/m² isn’t a forcing, it’s the increase of DWLR after the system again reaches more or less steady state and the surface and the atmosphere above it have warmed, and it is mainly a result of the warming. The increased DWLR before the atmosphere warms is much smaller and is largely counterbalanced by increased absorption of solar radiation in the atmosphere.
Re: Oliver (Nov 1 16:03),
Obviously. If there were no ghg’s above 100m, it would be a lot colder. But you don’t see the photons emitted at higher altitudes. They’re absorbed before they get to the surface. The intensity and structure of the spectrum depends on the temperature where the optical density measured from the surface upward reaches 1 (absorptivity = 1-1/e). At the center of the CO2 band, that’s only a meter or two, if that. Or if it doesn’t reach 1, the intensity comes from the total column emission with decreasing contribution from higher, colder altitudes. That can be quite small for relatively low humidity. Emission through the ‘window’ on calm, clear and cool nights is the reason for the formation of dew or frost on exposed, low heat capacity surfaces like windshields or grass blades.
If you really want to understand this stuff you need to do your homework and at least read Caballero. Better would be to buy the Grant Petty textbook. That’s what I used. If you want to play with absorption spectra of the various molecular gases in an absorption cell, you can do limited ‘experiments’ for free at http://www.spectralcalc.com .
Dewitt, I have seen both your linked spectra and quite a few more.
One question those don’t answer, and no one has been able to provide a link that will answer, is what the spectra look like looking up and down from the mean altitude of the DWLR. CO2 is not the only molecule that absorbs and emits. CO2 is estimated to contribute between 5 and 30% of the GH effect. And the GHGs are what percentage of the total atmospheric effect? Lots of not very well answered questions.
Why am I concerned with the 50% source equivalents? As you know, CO2 is a more potent GHG higher in the atmosphere, where it can emit and absorb with less competition from water vapor and collisional transition.
The impact that CO2 has on the surface from that point, relative to the 50% contribution of water vapor, the 50% contribution of conduction/convection and the 50% contribution of latent, are all factors that need to be considered to determine the net impact of CO2 doubling on the surface, as that information helps determine the magnitude of potential feedback.
The latent shift I am referring to is one of the more important thermal boundary layers. Below that point CO2 has little radiant impact; it appears to have a significant conductive impact though. It absorbs at the surface but has to transfer that energy by collision. That enhances conductivity, and enhanced conductivity can impact latent.
Above the latent boundary, CO2 can warm water, not water vapor, via collision and increasingly by emission as molecular density decreases.
What appears to be key to understanding the full relationship is Minimum Local Emissivity Variance, or how local emissivity varies with the dynamics of the atmospheric system. In other words, spectra from the surface looking up versus spectra from the TOA looking down do not adequately inform us of the system dynamics: the clouds question.
By applying basic thermodynamics, you can get a better idea of what is unknown that really needs to be known to explain why the system is performing the way it is performing: MLEV. Since MLEV is still in question, despite your saying that because the paper I referenced was published in 2005 “you” are sure it has been resolved by now, it apparently has not been resolved.
Do you have an emissivity profile of the atmosphere accurate enough not to lose 20 to 100Wm-2 here and there? The Arctic and Antarctic appear to be a bit problematic. Could that account for 333Wm-2 being downgraded to 254, and my implying that 220 is more realistic based on conservation of energy and basic thermodynamics? I am still quite confident in my understanding of the problem.
Re: DeWitt Payne (Comment #84979) 

Whoa, hold on there. I’ll admit I was sleeping sometimes during atmospheric radiative physics, but it’s a bit harsh to accuse me of not doing my own homework…
Re: Dallas (Nov 1 18:45),
Go to the MODTRAN page. Pick an atmospheric profile. Let’s say tropical atmosphere. Calculate from an observation height of 0 km looking up. Iout = 348.226 W/m². Now increase altitude until Iout = 174.113 W/m², which happens to be 4.252 km. So there’s your 50 % spectrum looking up. Switch to looking down. Iout = 357.646 W/m². Or start with 0 km looking down. Iout = 417.306 W/m². Set altitude at 100 km. Iout = 287.812 W/m². The average of the two is then 352.559 W/m², which is pretty close to the power for 50% looking up. As close as I can get is 4.68 km looking down gives 352.622 W/m². Looking up from that altitude gives Iout = 161.993 W/m²
But it’s not at all clear to me that’s what you’re looking for. In fact, the rest of your post is as clear as mud to me. You never did provide an answer to scienceofdoom’s reply to you. As he noted, the Arctic and Antarctic are small fractions of the global surface area. Data from the high latitude sites he provided showed observed DLR to be higher than calculated, not lower. Your contention that global DLR is ~2/3 of the K&T number is still a completely unsupported assertion.
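For anyone following along, those MODTRAN fluxes convert to equivalent blackbody temperatures in one line each:

SIGMA = 5.67e-8
for flux in (348.226, 417.306, 287.812, 352.559):
    print(flux, (flux / SIGMA) ** 0.25)   # ~280, ~293, ~267 and ~281 K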
Dewitt, consider the Antarctic. Surface pressure can change from ~1020mb to ~950mb in a day or two. That is not a radiant response; that is convective. If you use the Poisson relationship for potential temperature, Theta = T(Po/P)^(R/Cp), you have what I am looking for. It is a non-trivial atmospheric effect not directly associated with radiant forcing.
To separate the radiant response, which is what we are looking for by definition (i.e. climate sensitivity is the temperature response due to a doubling of CO2), you have to separate the non-radiant response from the radiant.
Since the mix of radiant gases does not change, only the apparent temperature, it would not be obvious in spectral analysis without correction for dynamic changes in density. It is as simple as that: you can’t see it from space. You can model it, though. Are the polar vortices properly modeled for their impact on DWLR measurement? No. That is why UWM and other institutions are attempting to calculate MLEV with, get this, best-guess and brute-force calculations. My thermo approach appears to provide a decent “best guess”, by using a more stable reference layer globally, the 600mb @ 249K average.
It’s all physics.
Dewitt or anyone else, this is a paywalled paper not in my budget,
http://www.jstor.org/pss/91063
As I mentioned before, the conductive impact of CO2 and carbonic acid appears to be underestimated, particularly in the Antarctic. I would love to have access to this paper and others of relevance. I have an accurate enough solution for MLEV at the moment, but not for the conductive impacts in the Antarctic lava lamp.
Re: Dallas (Nov 2 07:00),
Why would you care about thermal conductivity? Convective heat transfer is many orders of magnitude greater. So much so that meteorologists assume thermal conductivity doesn’t happen, except at the surface. The tiny amount of CO2 in the air isn’t going to significantly affect the thermal conductivity of the air.
Why? Well, because the assumption of the meteorologists is perfectly fine anywhere except the poles. CO2 has a non-linear thermal coefficient that peaks at -20C. With the amazing conditions in Antarctica, lots of things are possible. The huge pressure changes, equivalent to Cat 1 hurricanes, with the changing kinematic viscosity of air and non-linear conductivity, seem to be creating an unanticipated conductive/convective feedback. As I said, the only part of this theory that may be mine is the conductive part.
Improved conductivity reduces heat transfer to the air, allowing more efficient cooling as temperatures increase. A rather amazing phenomenon, heretofore overlooked to the best of my knowledge. That is kinda exciting to a physics geek, don’t cha know 🙂
Since it provides a better explanation of Venus’ isothermal atmosphere, it gets real exciting. Enough to pursue, in my opinion.
http://redneckphysics.blogspot.com/2011/11/consider-venus-since-i-am-taking-on-big.html
Re: Dallas (Nov 1 18:45),
I looked up MLEV ( Huang, et. al., 2003 Minimum Local Emissivity Variance Retrieval of Cloud Altitude and Effective Spectral Emissivity—Simulation and Initial Verification ). As far as I can tell it has absolutely nothing to do with DLR. It’s an algorithm for determining the atmospheric pressure at the cloud top and the fraction of cloud cover when looking at ULR from space or very high altitude over a relatively narrow frequency range for a field of view that’s partly cloud covered.
You still haven’t provided anything other than handwaving to support your contention that global average DLR is over 100 W/m² less than Fasullo, Kiehl and Trenberth have reported.
You also haven’t looked very hard for a free copy of Jenkin’s paper. I found one here:
http://freeroyalsociety.com/the-thermal-properties-of-carbonic-acid-at-low-temperatures/
It’s about the thermal properties of liquid CO2 and the saturated vapor above it. That doesn’t really seem applicable to the thermal properties of CO2 in the atmosphere.
Dallas,
Can you give an order of magnitude comparison between the effects on heat transfer of the following:
1. changing CO2 thermal coefficient,
2. changing kinematic viscosity of air,
3. “huge pressure changes” and associated winds?
Oliver,
1) and 2) combined: the Antarctic is roughly twice as efficient conductively/convectively as the global average. You can’t really consider one without the other. Another issue is that the surface/air boundary layer has to be considered. CO2 has a noticeable impact on the conductivity of the surface (several ice core studies reference the change in conductivity with CO2 concentration) and of the atmosphere.
3) Not yet. Ballpark, the global average rate of surface pressure change is about 20% of the Antarctic average, as a guess. The Antarctic data is pretty sparse and not all that accurate as best I can tell. I have to fine-tune my estimates of Antarctic effective conductivity and effective emissivity against a more stable temperature and pressure reference because of the low-level dynamics. 100mb may work, but 600mb is too chaotic to use. At least in the Antarctic, latent is near zero, which somewhat simplifies the problem.
Dallas, I didn’t mean a comparison of 1–3 separately between Antarctica and elsewhere so much as a comparison of the relative importance of 1–3 in Antarctica.
Thanks Dewitt, I had tried a couple of times but never made it to the full paper. I have it downloaded now. It is more applicable to surface ice conductivity than to the atmosphere. The thermal coefficient of the surface-air boundary is fairly important.
For MLEV, yes, it is mainly clouds, which are an important consideration as a feedback. It also seems to be associated with water vapor and density changes.
For the K&T, consider that 333Wm-2 is roughly the value of the entire atmosphere: 0.61*390 = 238. Once you add thermals, 24, and latent, 79, you get 341Wm-2. If the tropopause did not exist, that would be the DWLR. The tropopause does exist. Other than the thermodynamic explanation, there is little other than handwaving either way.
You can do a thought experiment and replace the sun with an internal heat source to maintain a 288K surface temperature. Assuming latent and thermal remain the same, you have Ft=24, Fl=79 and Fr=287Wm-2, which would give you the DWLR due solely to surface flux. We are looking for the DWLR due to a 288K surface, since the ratio of solar absorbed by atmosphere/surface can change. While you can pick nearly any number for DWLR, the values from that thought experiment provide more information on the system’s internal dynamics. For example, Ft and Fl do not change as DWLR changes, but because of other thermal impacts created by the change in DWLR. A much simpler visualization of the problem.
Oliver
For example, looking at the UAH trends, South Pole mid-trop versus strat you have -0.13 vs -0.55; for the North Pole, +0.25 vs -0.23. The North Pole is responding as advertised and the South Pole is the opposite. Data quality and coverage are of course a concern; still, that is a bit odd.
I’m assembling the pieces to try to replicate Wood’s experiment and Vaughan Pratt’s attempt at replicating Wood’s experiment. Well, actually, to do a lot better, I hope. The temperature (~65 C) achieved by Wood and by Vaughan Pratt for his IR-transparent covered box is way too low. For clear-sky conditions, unless the sun is very low in the sky and/or it’s very cold outside, the internal temperature of the box should be over 100 C. Pratt’s boxes don’t look to be insulated much at all, and there are also almost certainly big losses by convection/conduction at the window, even with the double film. Pratt also doesn’t seem to have a clue about thermal conductivity coefficients and how to calculate the thermal conductivity of glass and acrylic sheets. The details of Wood’s experiment are also lacking. I’m planning on using R-30 fiberglass batting and putting at least three layers of thin plastic film spaced at 3/4″ over the window.
A quick calculation says that putting a glass cover on the box should raise the internal temperature ~50 C if convective heat loss to the atmosphere from the outer glass surface can be minimized. The outer surface of the glass should reach nearly the same temperature as the surface of the box when its cover is IR transparent.
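That estimate can be sketched as a balance between absorbed sunlight, radiation out of the IR-transparent window and conduction through the walls. The absorbed flux, ambient temperature and R-value below are assumptions for illustration, and the answer is quite sensitive to them:

SIGMA = 5.67e-8
S = 1000.0      # W/m^2 absorbed solar flux, assumed
T_AMB = 288.0   # K ambient temperature, assumed
R_WALL = 5.3    # m^2 K/W, roughly R-30 in SI units

def net_flux(T):
    """Absorbed solar minus radiative loss out the window minus wall conduction."""
    return S - SIGMA * T**4 - (T - T_AMB) / R_WALL

lo, hi = T_AMB, 600.0               # bisect for the equilibrium temperature
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if net_flux(mid) > 0.0 else (lo, mid)
print(lo - 273.15)                  # ~90 C with these assumed numbers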
DeWitt – FWIW I thought VP’s experiment was very poorly specced. I think convective losses off the front sheet could be surprising. Long time since I did anything with it but I think Nusselt coeff is relevant – you might want to consider some internal perpendicular divisions (baffles) of your 3/4″ spacing between the window sheets. Also thought VP’s attempt to cool using the fan looked likely to be futile – I think choking effects between the sheets would dominate. I liked Anthony’s recent experiment with his calibration efforts. One thing I’d like to see in a Woods type test is a box response calibration with a known heat source within it with the box at its in use orientation. From the brief look I’ve had at Wood’s experiment I think replication is a bit of a stretch as his details seemed so patchy. Looking forwards to hearing about your work – should be fun! 🙂 – what are you aiming to investigate/demonstrate?
Dewitt, that’s a good idea. I have not seen R values properly considered; Vaughan at least made an attempt, but I agree with you.
If you look at Angstrom’s method of determining DWLR, it is not that complicated. I was thinking of an experiment with three layers with three gas concentrations. That way, conduction and convection might have less impact, since you are comparing three layers. Thin plastic could separate the layers with a low but known R value, so the radiant impact should be noticeably different. I doubt it will show anything much different from what can be calculated, but it may be a better experiment.
The surface-air boundary is still my biggest question mark. Sea water, for example, can transfer 1000 times more Wm-2 to air than the reverse. An imperceptible change in that boundary’s efficiency would have a significant impact. That balance would seem to have the most impact on the Southern Hemisphere. The Antarctic ice cores show changes in conductivity, which may be enough of a clue to determine whether what should be a negligible impact actually is not.
Re: curious (Nov 3 15:56),
I’m quite certain that convective losses were substantial even when he added a second layer of film. Unfortunately, he didn’t give the temperature drop across the glass cover and he only gave the temperature drop across the acrylic sheet for the non-box experiment. I’m getting six thermocouples so maybe I can get a better handle on non-radiative heat loss.
A single sheet of glass has an R value of about 1 only because you get to count the insulating properties of the boundary layer air. The glass itself has very low insulating properties because it’s so thin.
A Greenhouse Effect
Quite a few people have experimented, and intend to experiment, with how the greenhouse effect works in simulated greenhouses. Wood did his experiment. A Vaughan Pratt did his experiment. MythBusters did their experiment. I think even Al Gore commissioned an experiment. What do all these experiments have in common? None of them were actual greenhouses.
While a greenhouse gets its warmth from the sun, it is the retention of that warmth that makes a greenhouse useful. The warmth it retains is from the surface or ground. In addition to proper greenhouses, there are also cold frames and a variety of other methods used to retain surface heat. Plants need sunlight to survive, so a greenhouse can’t use highly efficient insulation to retain heat; it has to use less efficient transparent materials. In very cold climates, these transparent materials need to include double glazing and, if one can afford it, triple glazing, to prevent excess heat loss to the environment.
Greenhouses are also equipped with ventilation systems to prevent overheating in the day, which damages the plants; watering and misting systems to maintain humidity; and, in some cases, CO2 injection to improve plant growth rate.
So if someone wants to study a greenhouse, why not use a greenhouse?
Simple experiment number 1. Build two greenhouses of identical construction. In the first, till the soil and plant directly in the ground. In the second, insulate the ground and provide a moisture barrier. Then plant in pots or use hydroponics. Since the solar input is roughly the same, why is the second so much less efficient? There ya go, the surface is the source of the heat we are trying to retain.
Since some labs have limited space, how would you simulate a greenhouse accurately on a small scale?
Tip one: not by shining heat lamps on poor unsuspecting items, but by simulating a surface source with an upper sink and measuring how conduction, convection and radiant heat flow are all impacted. The percentage of each is the greenhouse effect, just as it is in our atmosphere.
Since our atmosphere has layers, why not include layers in your greenhouse effect experiment.
It’s an analogy. Just like in high school when they told us atoms are like little balls. They actually aren’t.
Bugs,
It is a pretty good analogy.
Re: Dallas (Nov 4 05:12),
I don’t care about actual greenhouses. I care that Wood and Vaughan Pratt get different results when making similar measurements, and that the Wood experiment is frequently cited as proof that IR trapping doesn’t exist. My calculations show that Pratt is more correct than Wood, but I need better data.
As far as heat loss at night from a greenhouse, Roy Spencer has demonstrated that, as expected, having an IR transparent cover on a heavily insulated box with the inside painted with high emissivity paint caused the internal temperature to drop below ambient at night.
Dewitt, I understand and agree that DWLR exists and has an impact, as Roy’s experiment shows. It also shows that the surface temperature, dew point and box temperature, or radiant relationships, change significantly as the night progresses. That would indicate conductive flux has a significant impact in stabilizing surface temperature.
The three layer box would help determine the relative impacts of surface conduction versus DWLR, something I have not seen studied much, other than by greenhouse builders.
Re: dallas (Nov 5 04:34),
Of course it does. The profile of the temperature inversion that forms on clear, calm, cool nights that results in dew or frost is the result of thermal diffusion. Initially, heat is lost from the surface by radiation faster than it can diffuse downward. That causes a temperature gradient to form. The surface temperature drops until the gradient is large enough that heat transfer from the atmosphere by eddy diffusion (convection doesn’t happen because an inversion is stable) balances heat lost by radiation. Because the heat capacity of air is low, the profile extends higher with time. Pielke, Sr. published an article on how this can bias temperature trends for minimum temperature and (min+max)/2.
Dewitt, exactly. What my calculations are showing is that the conductive impact is changing slightly. Basically, conductive, solar and cloud cover extent are synchronously changing to offset a larger portion of the radiant impact. That is right now about 20 to 30% less warming than expected, with potentially nearly twice that with a prolonged solar minimum. Should the AMO synchronize with the PDO, much more is possible of course.
The conductive change in the southern oceans may be the PDO trigger, which appears to have started circa 1994. Comparing mid-trop rates of change to strat rates of change seems to show that something unusual is happening. Interesting.