Introduction
The cloud feedback is one of the largest sources of uncertainty in climate science, which in turn leads to uncertainty in climate sensitivity. Last year, Science published a paper by Andrew Dessler, “A Determination of the Cloud Feedback from Climate Variations over the Past Decade”, which concluded that observations did not support the idea of a large negative cloud feedback. It caused some ripples through the blogosphere, with posts at Skeptical Science and RealClimate. Dr. Spencer posted several of his exchanges with Dessler, and his new paper argues that the method is not an effective way to diagnose the cloud feedback.
Nonetheless, I’ve become interested in the sensitivity of this method to the datasets chosen. In this post, I will be going over the shortwave component of this feedback.
Data and Methods
The basic idea is to calculate the Cloud Radiative Forcing (CRF) by finding the difference between the clear-sky and all-sky TOA fluxes. We then see how this CRF changes in response to variations in temperature to get a basic estimate of the cloud changes/feedback. Of course, it is complicated by the fact that clouds can mask some of the other feedbacks, so that changes in CRF are not 100% due to cloud changes: they may be due to albedo, water vapor, and other changes, which also correlate with temperature.
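The core calculation can be sketched in a few lines. This is a minimal illustration with made-up numbers, not the actual CERES fluxes: the flux magnitudes, noise levels, and temperature anomalies are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not real CERES data): monthly global-mean
# reflected SW fluxes in W/m^2 and temperature anomalies in K.
n_months = 120
temp_anom = rng.normal(0.0, 0.15, n_months)          # monthly temperature anomalies
flux_clear = 53.0 + rng.normal(0.0, 0.5, n_months)   # clear-sky reflected SW
flux_all = 99.0 + rng.normal(0.0, 0.5, n_months)     # all-sky reflected SW

# Shortwave cloud radiative forcing: clear-sky minus all-sky flux
# (negative, since clouds reflect more SW than the clear sky does)
crf = flux_clear - flux_all

# Remove the mean for the sketch; the real analysis uses
# deseasonalized anomalies.
crf_anom = crf - crf.mean()

# Least-squares slope of CRF anomaly on temperature anomaly gives the
# apparent feedback in W/m^2/K.
slope, intercept = np.polyfit(temp_anom, crf_anom, 1)
r = np.corrcoef(temp_anom, crf_anom)[0, 1]
print(f"dCRF/dT = {slope:.2f} W/m^2/K, r^2 = {r**2:.3f}")
```

Here the slope is pure noise by construction; with real data, the slope and its correlation with temperature are the quantities of interest.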
The CERES website provides observations of clear-sky and all-sky fluxes for both longwave and shortwave radiation. One thing that got me curious about the topic is that the Dessler paper uses the all-sky fluxes from the CERES observations, but not the clear-sky fluxes. Instead, he uses a calculated value for clear-sky fluxes based on the temperature and water vapor distributions in reanalysis data. My concern would be that even if the calculations and reanalysis data are accurate on an absolute level, the CRF anomalies we use to detect the cloud feedback are much smaller than the fluxes themselves, so even small errors in the calculated clear-sky fluxes could matter.
Dessler10 cites potential bias in the CERES clear-sky TOA fluxes as the reason for avoiding the measured data. The paper he references mentions that the bias arises because clear-sky CERES estimates tend to be taken under conditions with less water vapor, so that the clear-sky minus all-sky difference in outgoing longwave radiation includes the effect of water vapor, not just clouds. Of course, this would mean that the overall CRF is actually larger than estimated (since the smaller LW forcing is in the opposite direction of the SW forcing). Furthermore, I did not find any mention of how this would bias the SW results, which is what we’re focusing on here. And regardless, it is interesting enough to see what results we get using the satellite-measured clear-sky fluxes along with those all-sky fluxes.
It’s also worth noting that the temperature series used in Dessler10 comes from the reanalysis data as well. For my part, I’ll be using the GISS, HadCRUT, NOAA, UAH, and RSS temperature series here, in addition to the ERA-interim reanalysis skin temperature (more details on how that was calculated here), which should match that of the Dessler paper.
The CERES data I’ll be using in this post is available for download here.  I’ve chosen SSF1deg to match up with the Dessler paper, then selected global mean, monthly, Terra, full time range, with the TOA fluxes. I’ll also note that the site automatically downloads version 2.6 now instead of 2.5; I did some of the earlier analysis in 2.5 and there are some tiny differences. I’ve also included the NC file with the script and ERA-interim temp data here.
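Before any regressions, the monthly global-mean series need to be converted into deseasonalized anomalies (subtracting each calendar month's mean), since the seasonal cycle would otherwise dominate. A minimal sketch of that step, using a synthetic series rather than the actual NC file:

```python
import numpy as np

def monthly_anomalies(series):
    """Subtract each calendar month's mean from a monthly time series."""
    series = np.asarray(series, dtype=float)
    anoms = np.empty_like(series)
    for month in range(12):
        idx = np.arange(month, len(series), 12)  # all Januaries, all Februaries, ...
        anoms[idx] = series[idx] - series[idx].mean()
    return anoms

# Ten years of a fake flux with a pure seasonal cycle; the anomalies
# should come out as (numerically) zero once the cycle is removed.
months = np.arange(120)
flux = 100.0 + 5.0 * np.sin(2 * np.pi * months / 12)
anoms = monthly_anomalies(flux)
print(anoms[:12])
```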
Shortwave CRF Change Results
The best correlation between CRF and temperature comes with the satellite temperature datasets, as seen here (I should note that these are measuring TLT rather than surface temperatures):
The r^2 value reported in the Dessler10 paper for the net ΔR_cloud vs. reanalysis temperature is 0.02, although I don’t see one reported for SW specifically. However, he mentions that “[the smaller uncertainty in LW] means that the long-wave component of ΔR_cloud correlates more closely with ΔT_s than the short-wave component”, so presumably it was less than that.
Accounting for Albedo changes
We still have yet to account for the non-cloud influences on the SW dCRF that correlate with temperature. For instance, the biggest impact for the shortwave component will be changes in surface albedo — as temperatures increase, we expect ice and snow to melt and therefore decrease surface albedo. Since clouds already reflect some of the incoming SW radiation, the change in surface albedo will be more pronounced in the clear-sky scenario, thus increasing the difference between reflected SW radiation in clear-sky vs. all-sky scenarios. This makes the change in CRF vs. temperature in the above scenario look more negative than it should be, since the increase in the measured CRF in this specific case is not due to changes in cloud properties, but rather the surface albedo.
The method in the Dessler paper to adjust for these non-cloud effects on flux is using radiative kernels. The radiative kernels represent what each climate variable at each specific grid point (some variables include a vertical pressure “coordinate” as well) contributes to the TOA flux. He then multiplies each of these kernels by the anomalies of those climate variables (from the reanalysis data), to see what the non-cloud variables contribute to the TOA flux. More on this (hopefully) in the next part.
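The kernel idea itself is simple to sketch: a kernel gives the TOA flux sensitivity per unit change of a variable at each grid point, so multiplying by that variable's anomaly field and taking an area-weighted sum estimates its flux contribution. The shapes and numbers below are toy values purely for illustration, not an actual kernel dataset:

```python
import numpy as np

# Toy radiative-kernel calculation on a 1-D latitude grid.
lat = np.linspace(-87.5, 87.5, 36)
weights = np.cos(np.deg2rad(lat))   # crude area weighting by latitude
weights /= weights.sum()

kernel = np.full(36, 0.3)           # dFlux/dX in W/m^2 per unit anomaly, per cell
anomaly = 0.1 * np.ones(36)         # a uniform anomaly field (made-up units)

# Area-weighted sum of kernel * anomaly = this variable's contribution
# to the global-mean TOA flux.
contribution = np.sum(weights * kernel * anomaly)
print(contribution)
```

A real calculation does this per month on a lat/lon (and, for some variables, pressure) grid, but the multiply-and-sum structure is the same.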
Fortunately, since we’re concentrating only on the shortwave radiation in this part, I’ve attempted to simply use the CERES observations again (in this case all-sky albedo and clear-sky albedo) to estimate how much surface albedo feedback contributes to our apparent cloud feedback in the various temperature indices, and thereby remove the bias.
We want to calculate $latex \displaystyle \frac{dC_{RF.al}}{dT}$, and suppose we simplify our equations as follows (all equations below refer to only the shortwave component):
(1) $latex \displaystyle \frac{dC_{RF.al}}{dT} = \frac{\partial C_{RF}}{\partial a_{surf}} * \frac{da_{surf}}{dT}$
(2) $latex \displaystyle albedo_{all-sky} = albedo_{clouds} + (1 - albedo_{clouds}) * albedo_{surf}$
(3) $latex \displaystyle albedo_{clear-sky} = albedo_{surf}$
(4) $latex \displaystyle C_{RF} = (1365 / 4) * (albedo_{clear-sky} - albedo_{all-sky})$
(5) $latex \displaystyle \frac{\partial C_{RF}}{\partial a_{surf}} = (1365 / 4) * (albedo_{clouds})$
From (2) and (3) it follows that:
(6) $latex \displaystyle albedo_{clouds} = \frac{albedo_{all-sky}-albedo_{clear-sky}}{1-albedo_{clear-sky}}$
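A quick numerical check, with made-up albedo values, that equation (6) does recover the cloud albedo implied by equations (2) and (3):

```python
# Assumed illustrative values, not observations.
a_cloud_true = 0.20   # cloud albedo
a_surf = 0.15         # surface albedo

# Equations (2) and (3): build the two observable albedos
a_clear = a_surf
a_all = a_cloud_true + (1 - a_cloud_true) * a_surf

# Equation (6): invert for the cloud albedo
a_cloud = (a_all - a_clear) / (1 - a_clear)
print(a_cloud)  # ≈ 0.20, matching a_cloud_true
```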
We can then calculate $latex \displaystyle \frac{da_{surf}}{dT}$ based on the regression of albedo_clear against dT. It’s worth noting that this is not an effective way to calculate long-term albedo feedbacks, since the ice melt is expected to be slower and respond more to long-term temperature changes than monthly ones. However, it is an effective means to determine the impact of surface albedo biasing our cloud feedback results for a particular temperature set, as the overall temperature trend during the time period is dwarfed by monthly and annual variations.
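Putting equations (1)–(6) together, the correction is: regress clear-sky albedo on temperature to get da_surf/dT, multiply by (1365/4) times the mean cloud albedo per equation (5), and subtract that albedo-driven term from the raw dCRF/dT slope. A sketch with synthetic numbers (all values invented; in the synthetic data the cloud albedo has no temperature dependence, so the corrected slope should come out near zero):

```python
import numpy as np

rng = np.random.default_rng(1)

S = 1365.0 / 4.0                  # mean incoming solar flux, W/m^2
n = 120
dT = rng.normal(0.0, 0.2, n)      # synthetic monthly temperature anomalies, K

# Assume surface albedo falls slightly with warming, plus noise;
# cloud albedo varies only randomly (no temperature dependence).
a_surf = 0.15 - 0.02 * dT + rng.normal(0.0, 0.002, n)
a_clear = a_surf                                # equation (3)
a_cloud = 0.20 + rng.normal(0.0, 0.002, n)
a_all = a_cloud + (1 - a_cloud) * a_surf        # equation (2)

crf = S * (a_clear - a_all)                     # equation (4)

# Raw apparent feedback, the albedo-driven part, and the corrected slope
raw_slope = np.polyfit(dT, crf - crf.mean(), 1)[0]
dasurf_dT = np.polyfit(dT, a_clear, 1)[0]       # regression of clear-sky albedo on dT
albedo_term = S * a_cloud.mean() * dasurf_dT    # equations (1) and (5)
corrected = raw_slope - albedo_term

print(raw_slope, albedo_term, corrected)
```

Since the only temperature-dependent signal here is the surface albedo, the raw slope is strongly negative while the corrected slope is close to zero, which is exactly the bias the adjustment is meant to remove.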
It’s also worth noting that the water vapor feedback has a slight impact on the shortwave portion as well. The bulk of the water vapor effect on dCRF/dT will be in longwave radiation, however, and for the shortwave component it actually makes our cloud feedback appear slightly more positive (due to water vapor absorption “interfering” with cloud reflectivity). I have ignored this for now because of its estimated smaller impact, because it will likely require more than the top-level CERES observations, and because it cannot account for an underestimate of the negative cloud feedback (since adjusting for this should make our results more negative).
Conclusions
Here is the figure (3C) showing SW radiative feedback in the Dessler paper, along with those of the models:
Contrast that with the calculations using CERES observations for clear-sky as well and the different temperature indices:
If you went by the indices with the best correlation (UAH and RSS), it would appear that the models could generally be underestimating the negative SW feedback. On the other hand, the surface indices give a result closer to those simulated in models, though it still appears that the models on average have a more positive feedback than the observations (especially if we were to take into account the positive shortwave bias in the observed CRF from water vapor increases).
Of course, if there is no correlation between the short-term cloud feedback and long-term cloud feedback (which Dessler10 mentions based on the models), or if you believe this entire method to be ineffective for diagnosing feedback vs. forcing (as Dr. Spencer does), the results are somewhat moot. But I think it may be hard to rule out a large negative cloud feedback based on the recent decade of observations and the Dessler10 paper.



