HadCRUT NH-SH: Temperature Rose in November.

Now…. with no exclamation mark: HadCRUT, the third major group computing the surface temperature anomaly, also reports that the November Global Mean Temperature Anomaly was higher than the October anomaly. Inspired by Tamino’s claim that ‘For some reason “the Blackboard” has an obsession with trends over the most recent 10-year period.’, I modified my EXCEL spreadsheet to show something I very rarely show: trends over the most recent 10-year period. Here’s HadCRUT NH&SH over the most recent 10-year period. All November anomalies are circled, and the OLS fit to the HadCRUT anomalies from Dec. 2000-Nov. 2010 is provided, as is the nominal trend of 0.2 C/decade suggested by the wording of projections in the IPCC AR4.


As noted in the blog post title, Nov.’s anomaly was higher than October’s. However, this month’s anomaly does not set a record for Nov. HadCRUT NH&SH anomalies. I’ll let those with good eyes note the magnitude of the trend. No error bars are provided today because I’m thinking through a variety of tests applied to model data which I’m hoping to discuss…. soon. After that, error bars will return.
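
For anyone who wants to reproduce the OLS calculation, here is a minimal sketch in R. It assumes the monthly anomalies are already in a CSV with a date column and an anomaly column; the file name and column names are placeholders, not the actual HadCRUT file layout.

```r
# Minimal sketch: OLS trend over the most recent 10 years of monthly anomalies.
# The file name and column names ('date', 'anomaly') are placeholders.
had <- read.csv("hadcrut_monthly.csv")
had$date <- as.Date(had$date)

# Keep Dec 2000 through Nov 2010 (120 months)
sub <- had[had$date >= as.Date("2000-12-01") & had$date <= as.Date("2010-11-30"), ]

# Express time in decades so the OLS slope comes out directly in C per decade
t_dec <- as.numeric(sub$date - min(sub$date)) / 365.25 / 10
fit   <- lm(sub$anomaly ~ t_dec)
coef(fit)[["t_dec"]]   # trend in C/decade
```

The slope from a fit like this ignores autocorrelation in the residuals, which is one reason no error bars accompany the trend in today’s post.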

Now, turning to something I have been curiously monitoring all year and posting about from time to time: the magnitude of the 12-month lagging average relative to the multi-model mean for models driven with the A1B SRES scenario, with both series placed on the 1980-1999 baseline.

Though not necessarily of interest to others, at the onset of El Nino, I wondered whether the 12-month average temperature would manage to cross above the multi-model mean. Though this is not a formal test, it’s worth noting that if the multi-model mean were unbiased relative to the earth’s temperature (or even pretty close), then we would expect the earth’s temperatures to oscillate above and below the 12-month average temperature anomaly from the multi-model temperatures. Instead, we continue to see the observations oscillate in a region below the multi-model mean, just grazing the multi-model mean at the top of each El Nino.

Meanwhile, I know plenty of others are more interested in whether each or any of the various sets will set records for the annual average temperature. While some groups will, it’s clear that HadCRUT will not beat the record annual average it set in 1998. I think GISS will– but you never know. If you ask me to guess for NOAA, I’ll flip a coin.

152 thoughts on “HadCRUT NH-SH: Temperature Rose in November.”

  1. Although mathematically the last 10 years gives a 0.2K/decade rise, eyeballing the data suggests that there is no significant trend. Is there any bias in linear trend data analysis to get a trend even when one doesn’t exist, i.e. at low trend potentials?

    Ten years is not enough time to reveal global trends, I know, when we have 30 year up and down half-cycles. Still, we are at the bleeding edge of decisions and any current trend has social significance. A non-trend during increasing CO2 surely must be countered by an “excess” heating rate once the warming trend is reestablished – if the CAGW hypothesis is correct. (Which, at least in magnitude, would be hard to credit after 22 years of equivocal data.)

  2. Although mathematically the last 10 years gives a 0.2K/decade rise,

    How do you get this? Applying ordinary least squares to HadCRUT NH&SH gives a trend of -0.1 C/decade. The nominal predicted trend was 0.2C/decade. That’s not based on observations.

    Is there any bias in linear trend data analysis to get a trend even when one doesn’t exist, i.e. at low trend potentials?

    No. And as you see, OLS gives nearly no trend– but over the past 10 years, with HadCrut, the computed trend happens to be very slightly negative.

  3. What is interesting is the ‘huge’ difference between GISS NASA and HADCRUT for November.

    NASA 0.74C
    HADCRUT 0.43C

    And the two satellite monitors are 0.31C and 0.38C.

    I know GISS takes in more area (with no weather stations) than HADCRUT and they use different base years for anomalies. But this just makes the whole global temp exercise meaningless.

  4. Ian George (Comment#64094)
    Why is “the whole global temp exercise” meaningless because baselines differ?

  5. “Now…with no explanation mark.”

    How will we know what grade we earned with our explanation?

  6. Don B– I could blame my lack of proof reading on lack of coffee, a dishwasher repair guy being here, and getting ready for a meeting. But… y’all know I can’t proof read.

    Still, I was tempted to not fix that one. I like the idea of “explanation” marks.

  7. Ian George (Comment#64094)

    Ian,

    I agree that the numbers, at first glance, seem to imply the global temp exercise could be considered meaningless. However, if you look at a comparison of a running average, (such as the site linked below), the 5 anomaly sets actually appear more or less similar. I believe this makes the exercise slightly less futile than it may appear at first. See what you think. Select ‘global temperature’ from the left hand column, then select ‘recent land surface temperature’. It’s a useful site. The different baselines are noted below each graph for the non-satellite data.

    http://www.climate4you.com

    Regards,

    AB

  8. I think Doug Proctor misread the orange line in your first figure as the OLS line. I had to do a double take myself.

  9. However, if you look at a comparison of a running average, (such as the site linked below), the 5 anomaly sets actually appear more or less similar.

    I’ve got three of them on the graph of the running 12 month average above. HadCrut, GISS and NOAA are largely similar. GISS is currently high relative to HadCRUT and NOAA. But if I added RSS and UAH, they would be the highest. RSS and UAH tend to swing more during El Nino and La Nina.
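
    In case it helps to see why the different baselines don’t matter much for this kind of comparison, here is a minimal R sketch that puts two monthly anomaly series on a common baseline and takes the 12-month lagging average before plotting. The vector names and start date are assumptions, not the actual data files.

    ```r
    # Sketch: put two monthly anomaly series on a common 1980-1999 baseline and
    # compare their 12-month lagging averages. 'giss' and 'hadcrut' are assumed to
    # be numeric vectors of monthly anomalies starting in Jan 1980 (placeholders).
    rebase <- function(x) x - mean(x[1:240])     # first 240 months = 1980-1999 here

    lag12 <- function(x) as.numeric(stats::filter(x, rep(1/12, 12), sides = 1))

    plot(lag12(rebase(hadcrut)), type = "l", ylab = "12-month mean anomaly (C)")
    lines(lag12(rebase(giss)), lty = 2)   # on a common baseline the series track closely
    ```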

  10. Hi Lucia,

    Glad to know you are alive… I was a bit worried that R had killed you.

    The link to the high resolution version of the first graph seems broken.

  11. Speaking of proofreading, you’re missing a close-quotation on the href to HadCrut. This messes up the link to the high-res version of your graph, at least in Firefox and IE. Also hides a couple of lines of your text.

  12. SteveF–
    It isn’t just R itself. 🙂 As long as I’m using R, I’m checking various methods of testing things, and also checking various things with synthetic data. It’s slowing down posting, but there is a particular thing I want to discuss, and I’d like to do it in a way that is simultaneously not too confusing to explain, right, maximizes statistical power and gets a robust result. It’s a result I tested using t-tests, ranksum, F tests etc. in EXCEL. They all tend to give the same result– but each test is sub-optimal in some way. (Or, it is unless the F test, applied to the thing I am actually applying it to, is ok. So, I’m doing synthetic data for that.)

    That was all probably confusing because I haven’t explained what I’m testing. But I do a few things, realize why a particular way is suboptimal, and try to think of a way around that. (The answer never changes…. but it’s better to get an answer the right way than to get the same answer a whole bunch of not-so-defensible ways.)
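
    For what it’s worth, the sort of synthetic-data cross-check described above can be sketched in a few lines of R. Everything below (the AR(1) noise model, the offset, the series length) is made up purely for illustration, not the series actually being tested.

    ```r
    # Sketch: apply several tests to synthetic data and see whether they agree.
    # The noise model (AR(1)), the offset and the lengths are illustrative assumptions.
    set.seed(42)
    x <- arima.sim(list(ar = 0.5), n = 120)          # "observations": AR(1) noise
    y <- arima.sim(list(ar = 0.5), n = 120) + 0.15   # "models": same noise, shifted mean

    t.test(x, y)$p.value        # parametric test of equal means
    wilcox.test(x, y)$p.value   # rank-sum (nonparametric) analogue
    var.test(x, y)$p.value      # F test comparing variances

    # Each test answers a slightly different question and each is sub-optimal for
    # autocorrelated data, which is why checking them against synthetic series with
    # known properties is useful.
    ```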

  13. So HadCrut will NOT set a record for 2010, and the trend (according to Lucia) is going DOWN. At a glance the hottest HadCrut month still is February 1998 with 0.756.
    I cannot see any warming during the recent past, especially not the IPCC 0.2°C per decade.

  14. I am lazy, and I would not mind if the posting mentioned the actual figure HadCrut published. Then I could see that, compared to UAH and GISS, Hadley stayed more or less constant.

  15. “the trend (according to Lucia) is going DOWN.”

    I’m not sure what you mean by this. My guess based on the continued existence of La Nina conditions is over the short term, the GMST will decline. But, if I had to make a longer term guess: During the next El Nino, which should be within the next 2-5 years, HadCrut will likely break the record set in 1998. That’s my guess– but of course I may turn out to be wrong on that.

    I’m very confident HadCRUT won’t set a record this year. Jan-Nov temperatures are already in, and the Dec. temperature would have to be very, very high for HadCRUT to break the 1998 record. So… not going to happen.
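
    The arithmetic behind “very, very high” is simple enough to sketch; the record value and the Jan-Nov figures below are placeholders, not the actual HadCRUT numbers.

    ```r
    # Sketch: what December anomaly would HadCRUT need for 2010 to tie the 1998
    # annual mean? The values here are hypothetical placeholders.
    record_1998 <- 0.55              # hypothetical 1998 annual mean anomaly (C)
    jan_to_nov  <- rep(0.47, 11)     # hypothetical 2010 Jan-Nov anomalies (C)

    dec_needed <- 12 * record_1998 - sum(jan_to_nov)
    dec_needed                       # December anomaly required just to tie the record
    ```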

  16. Lucia,

    “That was all probably confusing because I haven’t explained what I’m testing.”

    Yup, that is accurate. Still, I am impressed by your obvious intensity of focus.

  17. lucia (Comment#64112)
    “The trend is going down” refers to the thin blue line in your figure. What the trend will be doing in the future, we will see. But I should have said: “The trend has been going down”.

  18. lucia (Comment#64112) December 21st, 2010 at 5:06 pm

    “But, if I had to make a longer term guess: During the next El Nino, which should be within the next 2-5 years”

    You do realize NASA has adjusted their prediction for max sunspots in SC24 from 140 down to 90 and more recently to 64, which would be the lowest in at least a century (SC14). David Archibald is sticking with his prediction that SC24 will mimic SC5, the Dalton Minimum.

    TSI is looking a tad anemic as well.
    http://www.acrim.com/

    In any case, the opportunity to measure the impact of a really lousy solar cycle is upon us.

  19. lucia,

    The AMO index should be dropping now that the El Nino is past. 2010 is looking to be the third highest annual average for the AMO index behind 1878 and 1998 (barely). The AMO has a strong effect on NH temperature, which in turn dominates the global average. If the index goes negative any time soon, NH temperature will be flat to down and Arctic sea ice will stage a recovery. That’s a fairly big if, though.

  20. A large share of the positive contribution to the anomalies comes from the arctic desert of northeastern Canada. This has made me wonder if there is any point in calculating average global temperature anomalies at all, since the added heat (enthalpy increase) of e.g. a +20 C anomaly is significantly smaller in the dry cold climate than in warmer and moister climates. Even when comparing to Sahara, the positive anomalies of the arctic Canada correspond to less added heat: Going from -20 C to 0 C, even at 90% RH, corresponds roughly to going from 25 C to 40 C at 15% RH.
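
    A rough way to check a comparison like this is to compute the moist enthalpy per kilogram of air at each temperature and relative humidity. The R sketch below uses the Magnus approximation for saturation vapour pressure and round-number constants; it is a back-of-the-envelope estimate, not a rigorous thermodynamic calculation.

    ```r
    # Sketch: compare the moist-enthalpy change of two warming steps.
    # Magnus approximation for saturation vapour pressure (hPa); constants are approximate.
    e_sat <- function(T_c) 6.112 * exp(17.62 * T_c / (243.12 + T_c))

    # Moist enthalpy per kg of air (J/kg): h = cp*T + Lv*q, with q from e and pressure p (hPa)
    moist_h <- function(T_c, rh, p = 1000) {
      e <- rh * e_sat(T_c)
      q <- 0.622 * e / (p - e)        # specific humidity (kg/kg), approximate
      1005 * T_c + 2.5e6 * q
    }

    # Arctic step: -20 C -> 0 C at 90% RH; warm step: 25 C -> 40 C at 15% RH
    moist_h(0, 0.9)   - moist_h(-20, 0.9)    # roughly 2.7e4 J/kg
    moist_h(40, 0.15) - moist_h(25, 0.15)    # roughly 2.5e4 J/kg, i.e. comparable
    ```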

  21. Alexej Buergin (Comment#64110) December 21st, 2010 at 5:00 pm

    So HadCrut will NOT set a record for 2010, and the trend (according to Lucia) is going DOWN. At a glance the hottest HadCrut month still is February 1998 with 0.756.
    I cannot see any warming during the recent past, especially not the IPCC 0.2°C per decade.

    The trend is up, the current temperature is going down. The ice is at a record low at the moment, for this time of the year.

    http://www.ijis.iarc.uaf.edu/en/home/seaice_extent.htm

    No sign of a ‘recovery’ yet. The long term trend for sea ice extent is down.

  22. Lucia

    rather than paraphrase the IPCC, why not just quote them:

    “For the next two decades a warming of about 0.2°C per decade is projected for a range of SRES emissions scenarios. ”

    So one decade is not sufficient to ‘falsify’ the projection?

    It’s also interesting that the A2 scenario more closely follows the present trend than the A1B… Wonder why? Is there some reason you specifically look at A1B? Surely our emissions are more closely following the A2 track.

    Also, given that their statement
    “For the next two decades a warming of about 0.2°C per decade is projected for a range of SRES emissions scenarios. ”
    refers directly to that figure, could you not determine a better estimate of expected trend from the figure?

  23. “‘For some reason “the Blackboard” has an obsession with trends over the most recent 10-year period.’,”

    I believe he is referring to your old-style ‘falsifying the IPCC’ posts. Now we all know they didn’t deal EXACTLY with ten year trends, but that’s not really important is it…

  24. So Nathan, is your point that we should all just shut up until “the next two decades” are done?

    I guess then we’d be limited to talking about Hansen’s 1988 scenarios.

  25. DeWitt

    If the index goes negative any time soon, NH temperature will be flat to down and Arctic sea ice will stage a recovery. That’s a fairly big if, though.

    In which case, my guess will be wrong, right?

  26. Nathan

    So one decade is not sufficient to ‘falsify’ the projection?

    The answer is yes. Absolutely.

    Whether a particular decade does falsify a projection depends on what temperatures are observed.

    Is there some reason you specifically look at A1B?

    They were the pre-processed set publicly available at the Dutch Site. Out of curiosity, why do you think A2 better follows the data?

    I believe he is referring to your old-style ‘falsifying the IPCC’ posts. Now we all know they didn’t deal EXACTLY with ten year trends, but that’s not really important is it…

    If so, he is wrong. These were never based on 10 year trends nor even approximately ten years. His statement is based on having no idea what I really do do. That’s not surprising since my impression is he doesn’t read my blog. But it’s odd to read him relating what I am “obsessed” about! 🙂

  27. “For the next two decades a warming of about 0.2°C per decade is projected for a range of SRES emissions scenarios. ”
    refers directly to that figure, could you not determine a better estimate of expected trend from the figure?

    Sure. I have in the past. If you get that figure and sketch a line for the HadCrut trend on it, it will fall outside those error bars. Go ahead, get Figure 10.4 from the AR4 and slap the negative trend on there.

    BTW: The “uncertainty” range in Figure 10.4 is a smoothed version of the 1sd range in my annual average figure above. So, if you put the temperature on there, you’ll get something that looks like the right hand side of my final figure in the blog post. But, to make it “like” Figure 10.4, remove the outer (magenta) uncertainty intervals. You’ll see the realization of observed temperature strayed outside the range on Figure 10.4, but currently is near the multi-model mean.

  28. The long term trend for sea ice is way, way up you MORON! The polar ice caps are an anomaly! They haven’t existed throughout most of earth’s history. I can taste vomit in my mouth from these disgraceful distortions. Also, everyone please just shut up about the damn trends. I submit that anyone with a brain knows that the trends demonstrate global warming to be false. Millions of years ago, atmospheric CO2 dropped by 1,000 parts per million and the average temperature went way up. Let’s hear Bugs or Tamino explain that one. If you guys wanna talk about trends then explain how the hell that happened.

  29. Something interesting to note: the next IPCC report (AR5) will focus a lot more on decadal projections and modeling. It will be interesting to see how their decadal projections differ from the AR4 Figure 10.4 (which didn’t really highlight the short-term projections anyhow).

  30. Dear Lucia,

    Comparing observations with the projection associated with the A1B emission scenario is an interesting exercise. It seems that the current temperature trend is running low with respect to this scenario, though perhaps not yet statistically significantly so. It is somewhat unsatisfying to just stop with this observation; I really would like to know what the possible explanations are here.

    1. can it be explained by natural variability (e.g. ENSO)?
    2. is the proposed emission scenario still valid; were the projected emissions of CO2, CH4, SO2, … etc. realistic? There are studies that suggest that SO2 has gone up lately.
    3. Are the models inaccurate? Is the sensitivity lower (long term feedback through H2O/clouds weaker), and/or is the warming effect of CO2/CH4/… weaker and/or is the cooling effect of SO2 stronger?
    4. Or are the thermometers underestimating the current warming trend? e.g. it has been suggested that ARGO data is not homogeneous with pre-ARGO data.
    5. …

  31. Re: Zeke (Dec 22 11:51),

    Zeke, what does “decadal” mean in this (AR5) context? Is it that the results of projections and modeling will be discussed more intensely on a decade-by-decade basis, for the next century or two? In other words, are “decadal projections” shorter-term or longer-term than what was typical for AR4?

    .

    Re: Dr. Shooshmon, phd. (10:49am and 10:57am) —

    Golly, you seem to get peeved at people who disagree with you. And, recently, there’s been a lot of that (meaning “disagreement”) going around, on things climate-related.

    I think other commenters have figured out ways to disagree that aren’t so, er, disagreeable.

  32. MP

    It is somewhat unsatisfying to just stop with this observation,

    Agreed. I just don’t happen to be saying anything about significance in this particular post, and said I am deferring discussion of error bars or statistical significance for the time being. The reason is I’m trying to look at something in a different way from how I did previously, and I’m hoping to say more relatively soon.

    So, this is a question I find very interesting.

    I can comment on the bullet points a bit:

    1. can it be explained by natural variability (e.g. enso)?

    Assuming it needs explanation, what we are seeing can’t be “explained” by ENSO, because if the difference between the multi-model mean and the observations were due only to ENSO, the earth’s temperatures would be higher than the multi-model mean at the top of El Nino (i.e. now) and lower at the bottom of La Nina. Instead, if we look at temperatures, we happen to see oscillations below the multi-model mean.

    However, other oscillations exist. So, not being explained by ENSO doesn’t fully answer the questions about natural variability. We have been in a period of low solar activity. That’s another form of natural variability. (Modelers, mysteriously, mostly chose to omit the solar variability from the projections even though they should have expected the solar constant to decline at the beginning of the century following the more-or-less expected solar cycle.)

    is the proposed emission scenario still valid, were the projected emissions of CO2, CH4, SO2, … etc realistic. There are studies that suggest that SO2 has gone up lately.

    This question is a bit orthogonal to the one under “1”. We only need to ‘explain’ the discrepancy between observations and projections if we agree the difference is statistically significant. In that case, we would be trying to identify the cause.

    But, at least in my opinion, it is useful to make the determination of whether the observed differences are statistically significant a separate exercise from figuring out whether we have an explanation for the difference.

    3&4 are also good questions. But, of course, we can’t really address them unless we get agreement that the difference between models and observations is statistically significant. If it’s not, we don’t really need to worry about 3&4.

    My general thoughts are: The model results as a collection are biased high. I could be wrong about this– but it is what I think is true.

    What I don’t know is whether the bias is owing to some models having excessively high sensitivities, or because the forcings selected by the “collective” who decided on scenarios were higher than we are actually experiencing, or because modelers, for some reason, chose to omit the expected decline in solar forcing at the beginning of the century. (They could be forgiven for not foreseeing an extended solar minimum, but not including the solar cycle at all was an odd modeling decision.)

    But what I would point out is that whichever reason above is true, those modeling decisions are decisions and represent part of the full modeling process. The final projections result from the full group of decisions by modelers. In the end, I find the question of whether the projections being communicated to the public are generally high or low to be of some interest.

    it has been suggested that ARGO data is not homogeneous with pre ARGO data.
    5. …

    I’m not using ARGO data. But the question of the accuracy of any set of observations is a good one.

  33. Lucia,

    Thanks for your answer.

    The way I see it is as follows: if we see a deviation between models and observations, it is helpful to explore different explanations and estimate how realistic these explanations are. Otherwise models will not become more reliable and future projections are not very useful. Deviations may be explained by different causes or a combination thereof. My bullet list was just a first attempt: natural variability, model assumptions (physics and/or the assumed emission scenarios that are fed into the model (not sure if those data are available)), and finally possible issues with temperature measurements (ARGO was just one possible example of that. It is used in some of the SST products?).

    It may be interesting to discuss these aspects in a separate post in more detail sometime in the future. Even if a deviation is not statistically significant, it may still be possible to come up with a reasonable explanation.

  34. “Please stop using phrases like “your full of shit”.”

    And especially learn to use English contractions correctly: “You’re (you + are = you’re) full of..” is the correct form.

  35. Layman question: more than 70% of the Earth is covered by water, most of it ocean. How accurate are the ocean temperatures? I understand that the skin temperatures (1 mm) are measured by satellites, and the rest is measured by buoys and ships. These can hardly cover all of the oceans, so how accurate is that?
    Cheers
    HP

  36. We are now experiencing a ‘warm’ La Nina. That is, we have had global warming for a few decades now, and now a La Nina. It is interesting to speculate whether the recent flooding around the globe is associated with a La Nina that is occurring with higher global temperatures. I know Spencer likes to tell us that the consequence of a warmer atmosphere that can hold more moisture is that we will see more rain, which he depicts as a gentle shower over the sea. The flooding we are seeing is a little more dramatic and damaging than he was hoping for.

  37. So we see that the Antarctic is growing. The south pole is much bigger than the north pole so Bugs can go pound sand. The Northern extent is down 1.363, o my god bugs its all going to melt, what a vast and significant number you loser.

    So we see that the Antarctic is growing. The south pole is much bigger than the north pole so Bugs can go pound sand. The Northern extent is down 1.363, o my god bugs its all going to melt, what a vast and significant number you loser.

    IIRC, the Antarctic is a huge desert, because most of it is too cold for precipitation. Warming will allow more snow, and hence more ice.

    This graph is the global sea ice extent.

    http://arctic.atmos.uiuc.edu/cryosphere/IMAGES/global.daily.ice.area.withtrend.jpg

    It has an obvious trend line going down as well. WUWT is being misleading with his reference to ‘southern comfort’, as the SH does not increase more than the NH is decreasing.

  39. It may be interesting to discuss these aspects in a separate post in more detail sometime in the future. Even if a deviation is not statistically significant, it may still be possible to come up with a reasonable explanation.

    All those factors have been discussed from time to time at the blog. But, currently, I’m focusing on statistical significance, so there won’t be a major blog post on the others for a period of time. Of course, if you’d like to discuss what you know about those, you are welcome to do so.

    Sometimes, I even let people who know write guest posts. So, if you have detailed information, you could let us know, and maybe that could eventually be a guest post. 🙂

  40. AMac,

    By decadal projections I meant highlighting various model projections for the next decade or two, rather than only the century projections that the AR4 focused on.

  41. Lucia,

    Still, it would be nice to have a coherent series of dedicated posts on this topic; of course I would not want to divert you from the statistical analysis.

    Not sure if I am the right person for producing guest posts on this, if proven to be worthy of course 🙂

    Anyway, thanks for your response.

  42. “why do you think A2 better follows the data? ”

    Just look at it in the figure in AR4, it’s below the A1B curve. So if you were to plot the 12 month average over the A2 curve, it would appear to be a better match. I believe that guy who does the ‘Climate Denial Crock of the Week’ has already compared all the data sets to the A2 curve.

    “Whether a particular decade does falsify a projection depends on what temperatures are observed.”

    How can a ‘particular decade’ falsify the projection? This implies that you think one decade is sufficient.

    As usual, the problem with these data explorations of yours is that they stop short of anything useful.

    For example:
    “Though this is not a formal test, it’s worth noting that if the multi-model mean was unbiased relative to the earth’s temperature (or even pretty close), then we would expect the earth’s temperatures would oscillate above and below the 12-month average temperature anomaly from the multi-model temperatures.”

    This is a truly strange thing to say, why would it need to oscillate above and below the MM-mean for A1B for the period 2000 to 2010? What if it oscillated above and below for every other decade this century? This again suggests you think that 10 years is sufficient to test the projection.

    Yet again it’s an investigation that goes nowhere and doesn’t actually investigate anything. If you think the models or projections are wrong or inaccurate or whatever, shouldn’t you then test that in other ways? For example, by using this decade’s temps to say the models are too warm you are implying that the climate sensitivity is too high, so why not test your implied claim about climate sensitivity? It would fit with your ‘Lukewarmer’ beliefs too.

    I bet what you will find is that your implied climate sensitivity will be too low to accommodate paleoclimates.

    On Tamino’s claim:
    “If so, he is wrong. These were never based on 10 year trends nor even approximately ten years. ”
    He didn’t say “Ten year trends” he said “Trend over ten years”. Which is pretty close to your trend since 2001 posts you used to do every few months.

  43. Just look at it in the figure in AR4, it’s below the A1B curve. So if you were to plot the 12 month average over the A2 curve, it would appear to be a better match. I believe that guy who does the ‘Climate Denial Crock of the Week’ has already compared all the data sets to the A2 curve.

    I actually have all the A2s because Chad did them– but I don’t want to pre-post stuff Chad is working on. There is really barely any difference– which you could easily see by eyeballing figure 10.4.

    Do you have a link to whatever it is you think you saw by the guy who does denial crock of the week?

    How can a ‘particular decade’ falsify the projection? This implies that you think one decade is sufficient.

    Implies? I’m saying flat out one decade could be sufficient. If after a projection is made, the data that come in are far enough off from the projection then it will falsify the projection. What is so difficult about this concept?

    This again suggest you think that 10 years is sufficient to test the projection

    Yes. Once again, 10 years can be sufficient.

    What if it oscillated above and below for every other decade this century?

    Uhmm… if the projections are right, that’s what ought to happen. Without analysis we don’t know how long the projections can remain below or above, but… uhmm… yah… if the projections of the mean tendency are right, then a noisy realization ought to oscillate above and below.

    If on the other hand, the observation stays consistently above or below– the projection will have been… uhm…. biased.

    Yet again it’s an investigation that goes nowhere

    You are a very confused person. This post is not an “investigation”.

    Which is pretty close to your trend since 2001 posts you used to do every few months.

    Used to? And how long were those trends at the time? Not ten years. Go back and you will find posts discussing “trends over 10 years” quite rare.

  44. Posted by Lucia (64124) “If so, he is wrong. These were never based on 10 year trends nor even approximately ten years. His statement is based on having no idea what I really do do. That’s not surprising since my impression is he doesn’t read my blog.”

    I’ve been lurking and reading your blog for about a year now, and I’m still not 100% sure what you really do! That said, the quality of discussion here is head-and-shoulders above any other climate blogs or sites – absolutely no contest.

    Posted by Nathan(64144)
    “(Quoting Lucia) “Whether a particular decade does falsify a projection depends on what temperatures are observed.”

    How can a ‘particular decade’ falsify the projection? This implies that you think one decade is sufficient.”

    Read the part you quoted: – “It depends on what temperatures are observed.” If the temperatures absolutely plummeted over a decade, Day After Tomorrow style, that would certainly falsify the projection and one decade WOULD be sufficient.

  45. Seriously, the first ten years is sufficient to falsify a century long projection? Wow… Well that’s amazing.

    Oh wait, no you’re saying that if a particular 10 years was way out then you could falsify… Not that this particular ten year period says anything. OK, well I suppose we are in agreement that the first ten years doesn’t say much but that if some future 10 year period was way out then it could falsify the projection.

    Here’s the link: http://www.fool-me-once.com/2010/09/temperatures-are-below-projections.html

    Wasn’t the climate denial crock guy, was someone else.

    “if the projections are right, that’s what ought to happen. Without analysis we don’t know how long the projections can remain below or above, but… uhmm… yah… if the projections of the mean tendency are right, then a noisy realization ought to oscillate above and below. ”

    Couldn’t the temps stay below the MM-mean and within the 2SD range the whole time and the models still be considered correct? There’s nothing special about the MM-mean. There’s no reason the temps need to oscillate above and below the MM-mean for any period in the 100 years.

    “Used to? And how long were those trends at the time? Not ten years. Go back and you will find posts discussing “trends over 10 years” quite rare.”

    Well they were from Jan 2001 to whatever month you were dealing with at the time. So they would be trends over 7-9 years… Now we know that’s not exactly 10… But it’s boring and pointless to say that. Let’s say he’s accurate to one significant digit.

    “You are a very confused person. This post is not an “investigation”. ”
    I know it’s not an investigation of anything, and I said as much. What I was hoping was that you would turn it into an investigation… But as usual you won’t.

    Can you not continue this ‘non-investigation’ and explore it further? Say by introducing some aspect of climate sensitivity or by using it in your Lukewarmer Hypothesis? Maybe even create a proper Lukewarmer Hypothesis?

  46. The radiative forcings used in the models do not cover the last 10 years; it would be quite informative to see what the hindcast is when data for the last 10 years are added. This would help to separate uncertainties in the proposed emission scenario from other factors associated with the climate models, global temperature estimates and natural variability.

  47. Nathan provided the link: http://www.fool-me-once.com/20…..tions.html

    I think the link is excellent and provides a clear explanation of the A2 projection as compared with the 5 current global temperature metrics.

    When Lucia said “I wondered whether the 12-month average temperature would manage to cross above the multi-model mean. Though this is not a formal test, it’s worth noting that if the multi-model mean were unbiased relative to the earth’s temperature (or even pretty close), then we would expect the earth’s temperatures to oscillate above and below the 12-month average temperature anomaly from the multi-model temperatures. Instead, we continue to see the observations oscillate in a region below the multi-model mean, just grazing the multi-model mean at the top of each El Nino,” I thought she asked an interesting, relevant question worthy of looking into. The Fool Me Once video did just that (but with A2).

  48. Nathan–
    I answered the question you actually asked. I didn’t discuss the question you didn’t ask.

    Couldn’t the temps stay below the MM-mean and within the 2SD range the whole time and the models still be considered correct?

    Well… that would depend on what you think “the models still be correct” means. Certainly, the multi-model mean would be biased and the only way that could happen is if at least some models are wrong or all models are wrong.
    If you mean: couldn’t one of the models be correct– sure. The ones on the low end could be correct. But that’s not the same as “the models (collectively) are correct.” If only the lowest ones are correct, that would mean the models, as an ensemble, are biased high. That is: wrong on the high side. So, if someone is communicating that we should expect the mean or the high end to be meaningful– well… uhmm.. no. If only the low models are correct, that’s something we would need to know.

    So they would be trends over 7-9 years… Now we know that’s not exactly 10…

    Uhmm… well 7 is clearly not 10.

    I know it’s not an investigation of anything, and I said as much.

    No. Not quite. You wrote “Yet again it’s an investigation that goes nowhere”, which is a) calling it an investigation, and b) criticizing it as if it was represented as such.

    But then, accuracy or clarity of expression has never been your strong suit.

    As for the link– doesn’t seem to ever get around to showing what you claimed. But if it does, maybe you can give a time stamp when you think it discusses what you think?

  49. it would be quite informative to see what the hindcast is when data for the last 10 years are added.

    Yes. And I have done that– but not today.

    But even at that, testing pure hindcasts has the difficulty that each modeling group used different forcings during the 20th century.

  50. The Fool Me Once video did just that (but with A2).

    The Fool Me Once video also seems to compare annual averaged projections to monthly (noisier) data.
    The monthly data does go above the multi-model mean.

  51. Hoi Polloi (Comment#64137) December 22nd, 2010 at 2:53 pm
    asks:

    ” … How accurate are the ocean temperatures? I understand that the skin temperatures (1 mm) are measured by satellites, the rest is measured by buoys and ships. These can hardly cover all of the oceans, so how accurate is that?”

    Much of the ocean is measured from the surface to around 700 meters depth by the Argo network. See http://www.argo.ucsd.edu/

    This data is then used to calculate things such as the change in the ocean heat content. This is a very interesting measure since it is a very direct indication of the warming or cooling of the earth, since the heat capacity of the ocean is much greater than the land. See http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/ for 3 month running average and yearly average OHC.

    Since 2003, the OHC has been primarily determined using Argo data, and OHC has been relatively constant.

  52. Re: Nathan (Dec 22 04:35),

    Is one decade enough to falsify a century-long projection of 0.2C per decade? Sure. Absolutely.
    If the average temps fell 5C, then certainly something would be amiss. And if we had a abrupt sudden warming of 5C we would certainly be talking about the IPCC getting it wrong.

  53. Lucia

    “Well… that would depend on what you think “the models still be correct” means. Certainly, the multi-model mean would be biased and the only way that could happen is if at least some models are wrong or all models are wrong.
    If you mean: couldn’t one of the models be correct– sure. The ones on the low end could be correct. But that’s not the same as “the models (collectively) are correct. If only the lowest ones are correct, that would mean the models, as an ensemble are biased high. That is: wrong on the high side. So, if someone is communicating we should expect the mean or the high end are meaningful– well… uhmm.. no. If only the low models are correct, that’s something we would need to know. ”

    This is vague nonsense.
    If the temp stayed within 1SD of the MM-mean, but only below the MM mean, it wouldn’t mean “only the low models are correct”. There isn’t a selection of low models that only stay below the mean and ones that stay above.

    There’s nothing special about the MM-mean that the Global temps need to hover near it, or above and below it.

    And if the temps stayed between the MM-mean and 1SD below the mean, it wouldn’t say anything at all about the models being ‘biased high’.

    If you really think this, do a post and prove it. Make up some data that stays between the MM-mean and 1SD below, then compare the trend to the MM-mean and to the individual model runs. My guess is that there will be an insignificant difference.

  54. Lucia
    “Uhmm… well 7 is clearly not 10. ”

    In this context that is just trivial. Claiming that you have never done an analysis of trends over ten years when you have done perhaps 20 or more on trends over 7-9 years is the most trivial rebuttal.

    I note yet again you ignore any request to quantify or explore the Lukewarmer Hypothesis.
    It doesn’t exist does it?

  55. actually Lucia, I saw a really interesting presentation by Tim Palmer where he “falsified” century long climate model projections by doing data assimilation 6 hours into the century long forecast.

    http://sms.cam.ac.uk/media/1083628?format=flv&quality=high&fetch_type=stream

    minute 57.

    of course the runs that were eliminated were all scary with very high sensitivity numbers. Does that make him a luke warmer?
    Henceforth anybody who even works on the problem of sensitivity and rules out higher numbers in the ranges shall be deemed a lukewarmer.
    And that Bayesian who had his estimate of 2C per doubling. Lukewarmer.
    Maybe anybody who even dares suggest a test of models against observations should be called a luke warmer.

    http://onlinelibrary.wiley.com/doi/10.1002/qj.23/abstract

  56. I note yet again you ignore any request to quantify or explore the Lukewarmer Hypothesis.
    It doesn’t exist does it?

    ######

    Luke warmer is a DESCRIPTIVE term. For the most part we believe:

    1. that data should be open.
    2. that code should be open
    3. that GHGs cause warming
    4. That man is increasing GHGs
    5. That the sensitivity falls in the lower half of the IPCC range, say 1.5 to 3C

    It’s a big tent; I’m sure we wouldn’t kick anybody out who thought 1C was the right number. Oh, and we are all oil shills and creationists.

  57. If the temp stayed within 1SD of the MM-mean, but only below the MM mean, it wouldn’t mean “only the low models are correct”.

    Yes, it would mean only models whose trajectories stay low– below the mean– can be correct. These could be called “low models”.

    There isn’t a selection of low models that only stay below the mean and ones that stay above.

    Uhhm….. only models with lower trajectories have any reasonable probability of exhibiting a trajectory that stays consistently below the mean.

    Also, I did not exclude the possibility that all models would be incorrect. Either a) one or more low model could be correct or b) no models are correct. Having data fall consistently below the multi-model mean, forever and always is a situation where we would know the models with higher trajectories are wrong. We could take our choice of tests, but they could not be right.

  58. Nathan

    I note yet again you ignore any request to quantify or explore the Lukewarmer Hypothesis.

    Ordinarily, out of politeness, I would merely continue to ignore this request tacked on to the end of a multi-paragraph comment filled with questions.

    If you insist on attention, I will reveal this: I decline your request because it is a stupid one that reveals your ignorance of the meaning of the term “Lukewarmer”. It seems to me you don’t want to even try to understand it so you can continue to ask stupid questions.

  59. Charlie A (Comment#64154)

    Cheers Charlie.
    So if I got it right, Argo only started in 2000. The NODC graph starts in 1955 and uses Argo since 2003 (where I see quite a big jump, coincidentally?). All in all not too much history to compare with, is it?
    .
    What strikes me is that the mission text is rather biased. It does appear that its mission is to substantiate AGW:

    We are increasingly concerned about global change and its regional impacts. Sea level is rising at an accelerating rate of 3 mm/year, Arctic sea ice cover is shrinking and high latitude areas are warming rapidly. Extreme weather events cause loss of life and enormous burdens on the insurance industry. Globally, 8 of the 10 warmest years since 1860, when instrumental records began, were in the past decade.

  60. I would assume, if climate science is like other sciences, that the predictive models for average global temperatures will get progressively better as we learn more about the factors affecting climate and as we refine our understanding of the interplay of linked and competing feedbacks. I never expected these, what can only be considered early-stage, climate models to be perfect. Did anyone (except perhaps Nathan)? I am, however, rather impressed by the quantitative accuracy they have shown to date.

  61. Interesting thought Owen. However, I think the opposite may be true. I think the models may become progressively worse for the short term at least because the more factors they find that influence the climate, the harder the calculations to find the GAT will be. A good comparison is oil reserves. How is it that Saudi Arabia’s supplies have only increased? It’s because they keep finding more oil. People keep talking about oil running out while more and more is being discovered at the same time.

    And I’m not trying to hate down the models and suggesting we do away with them all together. Recently, there was a write-in comment to a local newspaper where I live and the person identified themselves as a college teacher. She was rattling off all of the supposed benefits to ratifying the clean air act. In one example, she claimed that 15,000 lives or so would be saved in my state alone by passing this bill… I think it takes a real idiot to listen to anything she has to say. You can’t make even a remotely accurate prediction like that, it is all based on assumption. It is similar to the hoax claims about second hand smoke. It is pretty much impossible to test the effects of second hand smoke.

  62. If the multi-mean runs just a bit under the actual temps, then presumably some models run well lower, while others run equal or a little bit higher.
    .
    Has anybody looked at which models showed higher or lower trends, and whether there seems to be common factors? Say, something like “models not including detailed solar forcings run much too low”, etc.

  63. Owen,#64166

    So you agree the models are in their infancy and need refinement over coming years.

    The problem is that the IPCC and politicians are demanding that action be taken NOW based upon the very models which you agree are inadequate.

  64. I’m probably nowhere near expressing this in the appropriate statistical terms, but is there a test to determine the probability of the data falling to one side of the model means over a period of time? In other words, can we look at the length of time the models appear to over/under estimate the data to determine if this is significant? What I’m thinking of is some test that determines the probability distribution of excursions of different lengths to each side of the mean, to see if the present excursion represents something that is statistically reasonable or if it represents a state change of the data.
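
    There are at least two simple ways to frame that question; here is a minimal R sketch of both. The counts, the AR(1) parameters and the 0.1 C threshold below are placeholders, and neither approach deals properly with overlap or autocorrelation.

    ```r
    # Sketch 1: sign test -- if observations were equally likely to fall above or below
    # the model mean each year, how unusual would k years below out of n be?
    # (The 9-of-10 figure is a placeholder; this ignores autocorrelation entirely.)
    binom.test(x = 9, n = 10, p = 0.5, alternative = "greater")$p.value

    # Sketch 2: Monte Carlo with AR(1) "weather noise" (parameters are assumptions):
    # how often does the mean of a 10-year (120-month) noise series sit 0.1 C low?
    set.seed(1)
    decade_means <- replicate(10000, mean(arima.sim(list(ar = 0.6), n = 120, sd = 0.1)))
    mean(decade_means < -0.1)   # fraction of synthetic decades at least 0.1 C below zero
    ```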

  65. Thanks Steven

    “3. that GHGs cause warming
    4. That man is increasing GHGs
    5. That the sensitivity falls in the lower half of the IPCC range, say 1.5 to3C”

    Ok, so according to this you just think that Climate Sensitivity is slightly lower… Why?

  66. Lucia

    “If you insist on attention, I will reveal this: I decline your request because it is a stupid one that reveals your ignorance of the meaning of the term “Lukewarmer”. It seems to me you don’t want to even try to understand it so you can continue to ask stupid questions.”

    What?
    I am admitting my ignorance of the term… Mostly because the people who use it won’t define it.

    But Steven tried above. So I will assume that your ‘Lukewarmer’ hypothesis is the same.

    Why do you think that climate sensitivity is between 1.5 and 3 C?

  67. Lucia
    “If you insist on attention…”

    ? I barely post here. Ever.

    Remember… 10 does not equal 7!!!!

  68. Re BarryW:

    “In other words, can we look at the length of time the models appear to over/under estimate the data to determine if this is significant?”

    I was thinking along similar lines, except that we don’t just care about the length of time the observations spend below (or above) the model (or in this case the model mean), but the magnitude of the difference as well.

    One thought I had was that if we believe that in a graph like this (HADCrut vs. Model Mean):

    http://dl.dropbox.com/u/9160367/Climate/12-23_ModelsVsHadCrut.jpg

    the model mean represents the climate trend around which the observations oscillate due to weather, then we essentially have a century of data with which to determine what we should expect for weather noise. So if the discrepancy between the 21st century projections and observations deviates beyond anything we’ve seen from the 20th century hindcast, it might signal that either our forcing projections are wrong, or that the model mean does not actually represent the climate trend.

    Here’s a graph of differences (absolute value) between 10 year averages of model estimates vs observations:

    http://dl.dropbox.com/u/9160367/Climate/12-23_10YearDiffModelsVsHadCrut.jpg

    By my calculation, the difference for this first 10 years of projection (2001-2010) comes to 0.148. If we look back to the 20th century, out of the 1081 different (but overlapping) 10-year periods, 61 of them have a greater discrepancy between model estimate and observation, or about 5.6%. All 61 of those periods overlap and deal with the model underestimate between the 1930-1950 period. This current 10 year period is the largest OVERestimate seen. However, it should be noted that the magnitude of these differences is sensitive to the baseline we choose to match them up against…here I’ve used 1901-1950 based on the IPCC climate models chapter.

    So, IF those 10-year periods were all independent (which they are definitely not), and the 20th century gave the full sample of weather noise, it seems like we might be able to formulate something along these lines: If our Model Mean represents our true climate trend, the chances that the first 10 years would show a deviation of 0.148 or more from this mean are about 6%.

    Perhaps somebody more statistically inclined could suggest a more correct formulation, or maybe using the 20th century obs vs. models to determine the significance of 21st century divergence is simply a dead end…
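
    For anyone who wants to reproduce that kind of count, the rolling-window comparison takes only a few lines of R. The sketch below assumes monthly ‘obs’ and ‘model_mean’ vectors covering 1901-2000 on a common 1901-1950 baseline (not provided here); the 0.148 threshold is the figure quoted above.

    ```r
    # Sketch: count overlapping 10-year windows in which the model-mean/observation
    # discrepancy exceeds the 2001-2010 value. 'obs' and 'model_mean' are assumed to be
    # monthly vectors over 1901-2000 on a common baseline (placeholders, not supplied).
    window    <- 120                                   # 10 years of monthly data
    roll_mean <- function(x, w) as.numeric(stats::filter(x, rep(1/w, w), sides = 1))

    diff_10yr <- abs(roll_mean(model_mean, window) - roll_mean(obs, window))
    diff_10yr <- diff_10yr[!is.na(diff_10yr)]          # drop the first 119 incomplete windows

    threshold <- 0.148                                 # the 2001-2010 discrepancy quoted above
    sum(diff_10yr > threshold)                         # number of 20th-century windows exceeding it
    sum(diff_10yr > threshold) / length(diff_10yr)     # as a fraction (overlapping, not independent)
    ```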

  69. Nathan (Comment#64173) December 23rd, 2010 at 7:29 pm

    Thanks Steven

    “3. that GHGs cause warming
    4. That man is increasing GHGs
    5. That the sensitivity falls in the lower half of the IPCC range, say 1.5 to3C”

    Ok, so according to this you just think that Climate Sensitivity is slightly lower… Why?

    ######

    1. I found the papers cited by the IPCC that support these numbers convincing, first and foremost.
    2. Those elements admittedly left out by the models (small scale sub grid processes) appear to be negative feedbacks.
    3. The range for sensitivities determined by observation tends to be lower than those that emerge from models. Ray P had a nice presentation on this at AGU. I tend to choose observationally based analysis over computer models, but I’m still a model lover.
    4. There appears to be a subtle bias in the field to higher sensitivities because of people’s risk-averse nature. They don’t want to rule out high values. I would not, however, have an issue using the consensus figure of 3 for PLANNING, but at the end of the day I’ll bet the under-3 bet.
    5. I think the land record is biased upward slightly by UHI, not much, at most .1C or so.
    6. The projections of AR4 are based on models that for the most part used a level solar forcing (the exception being modelE). Since the value was taken at the high point of TSI, I think the models prolly run slightly warm.
    7. I just saw a really cool Bayesian approach that put it at 2C. Nobody in the room suggested he was a knuckle-dragging anti-science oil shill.
    8. You really want more reasons why I fall squarely in the range of IPCC approved numbers?
    9. The actual number will most decidedly either fall above the mean or below it. I’ll take the under bet.

    However, if the climate reconstructions showed a higher MWP I might bet the upper end of the distribution.

    Any more stupid questions? I’ve pretty much laid out all of these over the past years.
    Why do you reject the science that says it could be less than 3? And when are you going to call for those scientists to be banned?

    (lucia, how’s that for my imitation of stupid nathan questions??)

  70. Dave Andrews (Comment#64170)

    I don’t think I implied that climate models are inadequate. I just didn’t expect them to be spot on, and still don’t as we move into the future. However, I really don’t see a downside to working hard to reduce dependence on fossil fuels. The negatives of annually oxidizing many billions of tons of sequestered carbon are pollutions of various types, monetary support of states that are hostile to the US, under-the-table support of terrorists, possible economic blackmail, and limited long-term quantities of economically accessible reserves. We could go on, perhaps, for another 50-100 years before the costs of increasingly scarce oil and natural gas become prohibitively high.
    Already, many countries (including economic superpowers Japan and Germany) use far less fossil fuel per person than do we, and without economic collapse. If there were a national will to do so, we could make great strides over the next 30-50 years, with the attendant economic growth that could accompany a great structural changeover.
    And I didn’t even mention the CO2 mediated enhanced greenhouse effect in the above discussion, although this may be the best reason of all to forge the badly needed national resolve.

  71. Nathan;

    “I wam admitting my ignorance of the term… Mostly because the people who use it won’t define it.”

    Are you retarded? From the first time we coined the term we did a definition.
    Seriously, Adderall, mate.

  72. Nathan,

    Let me add a bit to Mosher’s take.

    I would describe myself as a ‘lukewarmer’. I think the sensitivity to doubling of CO2 will be almost certainly below ~2.5 and probably below 2. Why? Because that is the sensitivity that seems most consistent with:
    1. Measured warming
    2. Known GHG gas concentrations and consequent radiative forcing
    3. The measured lack of dramatic ocean heat accumulation following the rapid temperature rise from the late 1970’s through 2000, strongly suggesting the effective ocean lag constant is well under 10 years (probably 5-6 years is about right) and this is incompatible with high sensitivity.
    4. Inconsistency of extreme aerosol forcing with measured temperature increases… in the absence of very long ocean lags.

    In short, the high climate sensitivity story just does not ring true. Too many measured values have to be wrong for very high sensitivity to be right.

    BTW, I suspect that if you actually listen, you will find people who identify themselves as ‘lukewarmers’ are not usually creationists, nor in the pay of oil companies. They are skeptical of technical nonsense from any quarter. You probably will find that they have invested a lot of time and effort in understanding climate science, and that most have some sort of technical background.

    Very few drag their knuckles when walking. 😉

  73. SteveF (Comment#64180) : “The measured lack of dramatic ocean heat accumulation following the rapid temperature rise from the late 1970′s through 2000, strongly suggesting the effective ocean lag constant is well under 10 years (probably 5-6 years is about right) and this is incompatible with high sensitivity.”

    Steve, what is an “effective ocean lag constant” and why would a value of 5-6 years be incompatible with high sensitivity?

    Thanks.

  74. Owen–

    Steve, what is an “effective ocean lag constant” and why would a value of 5-6 years be incompatible with high sensitivity?

    If the time constant is fast, and the sensitivity is high, then the earth would already be warmer than we are seeing. So, people who think the time constant is fast and observe the temperature rise we’ve observed will tend to think the sensitivity can’t be high.
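
    One way to see this is with a toy one-box model in which the equilibrium warming is sensitivity times forcing (relative to the forcing for doubled CO2) and the realized warming relaxes toward equilibrium with a single time constant. The sketch below is purely illustrative; the forcing ramp and parameter values are made-up round numbers, not estimates of anything.

    ```r
    # Toy one-box model: T relaxes toward T_eq = S * F(t)/F_2x with time constant tau (years).
    # All numbers are illustrative assumptions.
    run_box <- function(S, tau, years = 120) {
      F2x <- 3.7                               # W/m^2 per CO2 doubling
      F   <- seq(0, 2.5, length.out = years)   # hypothetical smooth forcing ramp (W/m^2)
      T   <- numeric(years)
      for (i in 2:years) {
        T_eq <- S * F[i] / F2x
        T[i] <- T[i - 1] + (T_eq - T[i - 1]) / tau   # one-year Euler step
      }
      T[years]                                 # warming realized by the end of the ramp
    }

    run_box(S = 3.0, tau = 5)    # high sensitivity, short lag: most warming already realized
    run_box(S = 1.5, tau = 5)    # lower sensitivity, short lag: about half as much
    run_box(S = 3.0, tau = 30)   # high sensitivity, long lag: much warming still "in the pipeline"
    ```

    With a fast time constant the realized warming at the end of the ramp is close to the equilibrium value for whatever sensitivity is chosen, which is the sense in which a fast time constant plus high sensitivity would imply more warming than has been observed.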

  75. Owen,
    “Already, many countries (including economic superpowers Japan and Germany) use far less fossil fuel per person than do we, and without economic collapse. If there were a national will to do so, we could make great strides over the next 30-50 years, with the attendant economic growth that could accompany a great structural changeover.”

    Whoa friend. I have spent a fair amount of time in both those countries. They have lower energy use for several reasons, among which include:
    1. Housing that is tightly located near (and mostly within) cities; so much less commuting.
    2. Extreme government restrictions on use of private land (restricted/prohibited housing development).
    3. Mostly modest size and very expensive apartments rather than spacious houses on inexpensive land… cheaper to heat.
    4. Heavy taxes on anything that uses electricity in a way the government deems ‘undesirable’… check prices on small window air conditioners in those countries.
    5. Over-the-top taxes on auto fuels to make sure people take trains instead of cars (and of course, to increase government taxes as a fraction of GDP).

    The German and Japanese ‘models’ would be an uncomfortable fit in much of the USA. A political non-starter right now. Some people in the USA may think this is the proper role of government, but I think most do not. I sure as heck don’t.
    .
    Investments only produce wealth if those investments make economic sense. If you mandate returns on energy investments (eg, green energy tax subsidies), then you reduce wealth, not increase it. Investing in energy research to make non-fossil energy more economically attractive probably does make some sense (or at least does minimum harm), but having government pick winners and losers is sure to make the nation less wealthy, not more. Why do you think the Germans are pulling subsidies for solar and wind? (hint: think about how much they cost).
    .
    Consider the biggest boondoggle in my lifetime, corn ethanol: a) no net energy production, b) huge expense, c) reduced worldwide food supplies, d) disruptive tariffs to keep out competitive ethanol from sugar cane, and e) winners (farmers, ethanol producers) and losers (gasoline buyers, poor people in the third world) selected by government fiat.

  76. Originally posted by Nathan:

    “Why do you think that climate sensitiity is between 1.5 and 3 C?”

    One of the best arguments for low sensitivity has been put by Steve Fitzpatrick, on this site and elsewhere. In brief, the IPCC range of values for climate sensitivity arises from models where many of the inputs (the effects of black carbon, sulphate aerosols and the amount of heat energy going into the sea) are not well known, and you can justifiably use a wide range of values. The models use assumed values for these inputs and the various models use DIFFERENT assumed values. This means that all the models “hindcast” the climate fairly well despite having different climate sensitivities – the variability in their assumed aerosol forcing allows them to force a fit.

    Assuming the sulphate aerosols mask a lot of warming and the sea takes in a lot of heat gives a model with a high climate sensitivity. Assuming otherwise gives a model with a low climate sensitivity. The more recent sulphate and ocean heat content data suggest that the assumptions behind the models with a low climate sensitivity are probably more valid. The ARGO buoys are giving us better sea temperature data, and the GLORY satellite due for launch in February should give us better sulphate data.

    For more detail and hard figures you really should visit this thread:
    http://rankexploits.com/musings/2010/hot-pepper-haiku-nh-ice-open-thread/ and jump to comment 51774.

    For the background discussion to that post, start at comment 51602 (Julio) and check out the graph there. Then read SteveF’s comment at 51624, Julio’s response at 51638, jump to 51758 and follow their exchange from then on.

  77. Owen,
    “Steve, what is an “effective ocean lag constant” and why would a value of 5-6 years be incompatible with high sensitivity?”

    Lucia’s reply is right, but there is a bit more to it. If there is a very long ocean lag, then that means any change in ocean surface temperature must lead to a very large increase in (measurable) ocean heat content. And that accumulation must physically lag behind the surface temperature. In other words, if there is a surface temperature increase for 20 years in a row, then the ocean surface temperature is reasonably flat for 5-10 years (and this has actually happened), then we would expect to see continued large ocean heat accumulation for a long time. We didn’t. ARGO data suggests a relatively short lag constant. If the lag constant is short, then there is little “warming in the pipeline”, since we are not gradually accumulating heat in the ocean and masking the warming that would otherwise take place. With a short lag constant, the actual measured surface temperature represents most all of the warming for the current level of forcing.

  78. Re: Dr. Shooshmon, phd. (Dec 23 12:09),

    A good comparison is oil reserves. How is it that Saudi Arabia’s supplies have only increased? It’s because they keep finding more oil. People keep talking about oil running out while more and more is being discovered at the same time.

    Saudi and other OPEC nations’ stated reserves have a serious credibility problem. Their production quotas are based on their declared reserves. They all magically went up by a factor of about two some years back.

    The global rate of oil discovery peaked in the 1970’s. More is being discovered, but nowhere near enough for production to expand. We’ll be lucky to maintain the same level of production for the next few decades.

    To quote the Red Queen from Through the Looking Glass:

    “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”

    The problem is, of course, that we can’t run twice as fast.

  79. SteveF (Comment#64185) December 23rd, 2010 at 9:35 pm

    Owen,
    “Steve, what is an “effective ocean lag constant” and why would a value of 5-6 years be incompatible with high sensitivity?”

    Lucia’s reply is right, but there is a bit more to it. If there is a very long ocean lag, then that means any change in ocean surface temperature must lead to a very large increase in (measurable) ocean heat content. And that accumulation must physically lag behind the surface temperature. In other words, if there is a surface temperature increase for 20 years in a row, then the ocean surface temperature is reasonably flat for 5-10 years (and this has actually happened), then we would expect to see continued large ocean heat accumulation for a long time. We didn’t. ARGO data suggests a relatively short lag constant.

    The lag of the southern hemisphere compared to the north, as predicted, indicates otherwise.

    Ocean heat content.

    http://www.skepticalscience.com/news.php?p=2&t=57&n=56

  80. SteveF: “ARGO data suggests a relatively short lag constant.”

    Bugs: “The lag of the southern hemisphere compared to he north, as predicted, indicates otherwise.”

    Bugs, if you by “lag” mean that surface temperatures in the southern hemisphere follow temperatures in the northern hemisphere I don’t really see it:
    http://cdiac.ornl.gov/trends/temp/jonescru/graphics/glnhsh.png

    The southern hemisphere seems to follow along nicely except perhaps in the 1990’s. Since 1950, temperatures have risen by approximately the same amount, it seems. And if the southern hemisphere lagged the northern hemisphere, you would expect the most recent dip to be more pronounced in the northern hemisphere, wouldn’t you?

  81. SteveF (Comment#64183)
    December 23rd, 2010 at 9:26 pm
    “Whoa friend. I have spent a fair amount of time in both those countries. They have lower energy use for several reasons,”

    I lived in Germany for a summer, and my son lived in Japan for several years in a small apartment (we visited). In both cases people went everywhere by public transportation – convenient and efficient. I see the higher (via taxation) gasoline prices as being a good thing in that 1) it gets auto manufacturers to produce far more efficient cars (ca 50% of European cars are diesel getting around 50 mpg on the autobahn) and 2) it encourages the development and use of public transportation.

    The idea of government limiting land use (discouraging sprawl, creating public recreation) is also a good one (in my mind, anyway). In the issue of individual rights versus the common good, I often tend to come down on the side of the latter.

    My experience in Germany showed me that the quality of life, which was quite high in Deutschland, was enhanced by excellent public transportation and the idea of walking to get places.

    I agree with you by the way on corn ethanol – completely.

    The alternative, as I see it, is to wait for market forces to emend our behavior. Perhaps high oil prices will drive efficiency more efficiently than government regulation. Perhaps, but with a longer lag constant?

  82. Bugs,

    The NOAA graphic of ocean heat content at the page you linked is long out of date; errors in merging the pre and post ARGO data were fixed. Here is the updated graphic:
    http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/index.html
    This page is updated every 3 months. If you compare the ocean heat graphic to the global surface temperature graphic posted by Niels A Nielsen above, you can see that there is only a very modest lag between the surface temperature trend and ocean heat. Certainly nothing like the 20+ year lag that would be consistent with the IPCC mid-range sensitivity value.

    Now some might argue that the pre-ARGO data is too uncertain to draw detailed conclusions; which may be true. It is also true that there may be a very slow accumulation of heat below 700 meters which does not show up in the measured trend, but which, if present, would have little or no impact on climate for several hundred years or more. (Adding 0.1 watt per square meter to the bottom 3000 meters of ocean warms that water by only 0.00025 degree per year, so even a deep accumulation of 0.3 watt per square meter would warm the bottom 3000 meters by only 0.075C per century.)
    The key point is that the measured heat trend for the top 700 meters is nothing like what we would expect for very long ocean heat lag, and this is inconsistent with the middle to upper parts of the IPCC climate sensitivity range.
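
    The arithmetic behind that parenthetical, as a quick R check (assuming round numbers of ~4000 J/kg/K and ~1025 kg/m^3 for seawater):

    # warming rate (C/yr) of a 3000 m deep layer absorbing a given flux (W/m^2)
    deep_warming <- function(flux_wm2, depth_m = 3000, cp = 4000, rho = 1025) {
      joules_per_year <- flux_wm2 * 3.156e7       # seconds in a year
      heat_capacity   <- depth_m * rho * cp       # J per K per m^2 of column
      joules_per_year / heat_capacity
    }
    deep_warming(0.1)         # ~0.00026 C per year
    deep_warming(0.3) * 100   # ~0.077 C per century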

  83. Owen (Comment#64178)

    Already, many countries (including economic superpowers Japan and Germany) use far less fossil fuel per person than do we, and without economic collapse.

    A quick sanity check easily shows that simplified arguments lead to dubious conclusions. Transportation is a significant factor in fossil fuel use. Population density is clearly a contributor to the amount of transportation needed.

    Population Density ranking per square kilometer:
    22nd Japan – 337.13
    35th Germany – 229.00
    141st United States – 32.19
    There are numerous other factors to consider such as climate that affect heating and cooling energy usage and the economic output per unit of energy usage.

    If there were a national will to do so, we could make great strides over the next 30-50 years, with the attendant economic growth that could accompany a great structural changeover.

    “National will” being a top down approach of simplified solutions for a complex problem that have no chance of working. Somehow solutions that consume vast amounts of resources inefficiently are supposed to lead to economic growth.

    The graph SteveF links to is updated to September 2010. The trend has been flat for almost eight years – since the ARGO data became available in 2003. Where is the “heat in the pipeline”?

  85. Owen,

    Thanks for your thoughtful reply.

    I hope we can simply agree to disagree about the proper role of government (public good versus private, etc.). It is a political question which will be resolved via elections.

    My experience is that there are lots of people in Europe and Japan who chafe under the imposed limitations on private economic activity. I have also seen a Japanese engineer working in Brazil literally weep when called back to Japan after 5 years in Brazil… leaving a 250 square meter apartment with panoramic ocean views to live in a cramped small apartment in Tokyo (twice the price, with no central heat and no air conditioning) was more than he could bear to think about. I guess people have different ideas about what constitutes good ‘quality of life’.

    By the way, it is only recently that the technology for small diesel engines would allow compliance with USA emissions standards (especially particulates and NOx). I think there are still some issues with USA regs for diesels in cars, but I could be mistaken. These low-emissions diesel cars do have much better fuel efficiency, but are extremely expensive. There is no way that fuel costs in the USA would ever justify the added cost. These only become a logical option with huge fuel taxes on gasoline (and much lighter taxes on diesel). I submit that this application of capital does not enrich any country in an economic sense.

  86. Sorry, but I am still having problems understanding what the effective ocean lag constant is. Is it related to the rate constant for the transfer of thermal energy from atmosphere to ocean? Does a long lag imply a slow rate of heat transfer?

    Why is it related to climate sensitivity which I understand as the (long-term measurable) response of average global temperature to the doubling (or some general change) in atmospheric CO2 (e.g., dT/dCO2)? Is it a feedback of some type?

    Thanks.

  87. SteveF (Comment#64195)

    These low-emissions diesel cars do have much better fuel efficiency, but are extremely expensive. There is no way that fuel costs in the USA would ever justify the added cost. These only become a logical option with huge fuel taxes on gasoline (and much lighter taxes on diesel).

    The ‘if we switched from gasoline to diesel’ argument is another dubious argument based on ignorance of crude oil and what refining accomplishes. Crude oil is a mixture of hydrocarbon molecules and refining is the process by which the different hydrocarbons are separated. Gasoline and diesel come from different hydrocarbons and the crude oil mostly determines what you get. Think of it as a barrel of colored balls all mixed together. Separating (refining) the colored balls doesn’t change how many of each color you get.

  88. Owen,
    .
    You are asking a rather complicated question.
    .
    Suppose you were to suddenly increase radiative forcing by 5 watts per square meter (a huge change!), and suppose we know (with God-like certainty) that the temperature change from this increase in forcing would ultimately be 2.5C. If the ocean were very shallow (say 50 meters everywhere) then we would see the temperature respond to the change over a period of a few years, with a warming in ocean temperature that was at first reasonably fast, but gradually slowing according to an exponential approach function. So the result of the applied forcing would be quickly apparent, and we could quickly see the true ‘climate sensitivity’ of 0.5 degree per watt/M^2.
    .
    The real ocean is far more complicated, but does in fact consist (over much of its surface) of a relatively thin, relatively warm surface layer (called the ‘well-mixed layer’, but which could more accurately be called the ‘convective mixing layer’) where the temperature remains almost constant with depth. The thin surface layer floats on top of a much colder and deeper layer where the temperature falls rapidly with depth, reaching low single digit temperatures by ~1200-1500 meters, called the ‘thermocline’. Below the thermocline is the very deep ocean, which averages ~4000 meters depth, where the temperature is always within a couple of degrees of freezing, and which is very slowly (1000+ years) replaced by the sinking of cold surface water at high (cold) latitudes. The volume of sinking cold water must (of course) be equaled by a volume of upwelling water at lower latitudes; this is called the ‘thermo-haline circulation’, and leads to about 1.2 cm per day of upwelling on average, but which is not at all uniform globally… there are regions with relatively fast upwelling (like off the west coast of tropical South America) and regions with very little upwelling (like the western Pacific tropics). The sinking water at high latitudes is replaced by warm surface currents (like the Gulf Stream) which continuously carry tropical heat poleward. The shape of the thermocline is determined by the relative rates of cold water up-welling and the turbulent mixing of heat down from the bottom of the warm surface layer. The thermocline temperature also has an exponential decay shape.
    .
    The effective lag constant depends on how much depth of water (and over what period) is subject to changes in temperature as a result of warming (or cooling) at the surface. If there is a deep layer involved in significant warming, then the lag constant is long, and the full response of the surface temperature to changes in forcing will be delayed for a long time as heat accumulates in the ocean. If a lot of water is subject to warming, then the true climate response to a change in forcing will only be apparent many years (multiple decades) after a forcing is applied. For example if the effective lag constant is 20 years, then 5 years after applying a sudden change in forcing, we would have only seen a change in surface temperature of about 22% of the ‘ultimate’ climate response.
    .
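    That 22% figure is just the exponential approach evaluated at 5 of 20 years; in R:

    1 - exp(-5 / 20)    # ~0.22, i.e. ~22% of the ultimate response after 5 years
    1 - exp(-20 / 20)   # ~0.63 after one full lag constant
    .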
    If you think about the system for a bit, you will see that there is not a single lag constant, but a range, with the surface layer approaching equilibrium within a few years, the thermocline approaching equilibrium over several years to centuries (depending on depth), and the very deep ocean approaching a new equilibrium only 1000+ years in the future. Fortunately, for practical purposes the effect of such very slow warming of the deep is tiny (not much heat is being added each year), and does not mask the ‘true sensitivity’ very much. Which is why most of the focus in ocean heat measurements is on the top 700 meters; this is the range of depths where the effective lag is going to be apparent. The real issue is how that top 700 meters responds to changes in surface temperature…. ARGO provides that information. If the lag constant is long, we have a lot of “warming in the pipeline”; if it is short, we do not.
    .
    So far, it looks like the heat accumulation is most consistent with a shortish lag constant.
    .
    An interesting (elegant, but a little mistaken) related analysis of how lag constant constrains climate sensitivity is:
    Heat capacity, time constant, and sensitivity of Earth’s climate system. Schwartz S. E. J. Geophys. Res. , D24S05 (2007)

    This generated a firestorm of critiques, to which Schwartz replied with a revised calculation:
    Reply to comments by G. Foster et al., R. Knutti et al., and N. Scafetta on “Heat capacity, time constant, and sensitivity of Earth’s climate system”. Schwartz S. E. J. Geophys. Res. 113, D15105 (2008)

    At the end of which Schwartz concludes that the best estimate for the Earth’s sensitivity to a doubling of CO2 is 1.9 ± 1.0 K.

  89. Greg F (Comment#64199)

    I am quite familiar with crude oils, and have characterized dozens of crude samples myself. A significant fraction of diesel fuel can be collected from most crude oils.

  90. Owen, if the ocean lag is short then there is little “heat in the pipeline” ie if we stopped emitting CO2 today we would see little extra warming.

    The concentration of CO2 in the atmosphere is currently about 390 ppm and without the ocean lag we would already have realised 40% of the global warming we will see if the CO2 concentration doubles from the preindustrial level. If the sensitivity is 3C we would already have seen a warming of about 1.2C due to CO2.

    We haven’t seen that much warming but it is believed that we have only seen part of the committed warming yet because of the vast heat capacity of the oceans. If we stopped emitting CO2 today we would still be warming considerably. There is “heat in the pipeline”.

    If the lag is short there is little “heat in the pipeline” and we are closer to radiative equilibrium. If 40% of the doubling is causing 0.5 C of warming or even 1C then a full doubling would create less than 3C of warming.

    If there is a long ocean lag and we are far from radiative equilibrium a long period of non-warming is at least remarkable.
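
    A rough back-of-envelope version of that argument in R (the realized-forcing fraction and the warming attributed to CO2 are inputs you can vary; the exact fraction depends on baselines and how much non-CO2 forcing you fold in):

    # implied equilibrium sensitivity if we are close to radiative equilibrium
    implied_sensitivity <- function(warming_so_far, frac_of_doubling = 0.4) {
      warming_so_far / frac_of_doubling
    }
    implied_sensitivity(0.5)   # ~1.3 C per doubling
    implied_sensitivity(1.0)   # ~2.5 C per doubling
    # for comparison, the pure CO2 forcing ratio at 390 ppm vs a 280 ppm baseline:
    log(390 / 280) / log(2)    # ~0.48 of a doubling's forcing, before other adjustments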

  91. Steven and Steve have made a good case for the lukewarmist position, which as described by Mosher is certainly a very broad tent.

    It should be added, perhaps, that even the GISS model E “only” predicts an equilibrium climate sensitivity of about 2.4-2.8 C (see http://pubs.giss.nasa.gov/docs/2006/2006_Schmidt_etal_1.pdf , section 6).

    The “transient climate response” one gets from a simple regression of the (time-averaged) observed temperature to the GISS forcings is much lower, about 1.5 C (depending on the averaging interval). If the lag time (“heat in the pipeline”) is short, then the equilibrium climate sensitivity should be closer to this value.

    My understanding is that the arguments for much larger climate sensitivities are mostly paleoclimatic, and concern various models for how we got in and out of ice ages. Given the difficulty we have understanding in detail the current Earth energy balance, when we have tons of precise measurements available, my personal bias would be to not place much stock on highly conjectural paleoclimate reconstructions.

    Happy holidays, everybody!

  92. Hi Julio,

    Glad to see you are still around.

    I think the current GISS version of Model E gives an ultimate sensitivity of 2.85C per doubling, most all of which takes place in the first 60 years or so.

    I completely agree with you about paleo reconstructions. Perhaps I am too cynical, but these studies seem to me to be tilted by a little bit of knowledge of the ‘expected’ result. And as you say, the data they are based on is, well, let’s just say ‘speculative’.
    .
    Happy holidays to you and yours.
    .
    BTW, what is the size range of the quantum dots you have worked on?

  93. SteveF (Comment#64200)
    December 24th, 2010 at 10:09 am

    Steve, thanks for your very complete response. I am impressed that you could formulate such a detailed response so quickly. You obviously know your stuff. I had just been looking at an older PowerPoint by Schwartz, but it concluded that the time constant for the Earth’s climate system was EITHER 5 +/- 1 years OR 16 +/- 3 years. He said that that issue needed to be resolved. Apparently it has.

    What about feedbacks due to humidity, albedo, clouds, methane? Aren’t they also big potential players in climate sensitivity?

  94. Niels A Nielsen (Comment#64202)

    Thanks for the response. I like the pipeline analogy. The picture that I am getting is one of the ocean as a heat buffer that can either slowly or more rapidly absorb and release thermal energy. Seems to make sense.

  95. Owen #64205,

    “What about feedbacks due to humidity, albedo, clouds, methane? Aren’t they also big potential players in climate sensitivity?”
    .
    Of course, humidity, albedo, clouds, and aerosols are all included in the “ultimate climate sensitivity”. The “no-feedback” case (it is only a theoretical construct, not real) sets a lower limit of ~1.1 to 1.2 C per doubling of CO2; positive feedbacks increase this, negative feedbacks reduce it.

    Methane is a separate (and mostly unrelated) source of radiative forcing. Methane is slowly (half life ~14 years) converted to CO2 via oxidation in the atmosphere, and methane is many times stronger in radiative forcing than CO2. Reducing methane emissions would therefore give a relatively rapid reduction in total radiative forcing. Other important non-CO2 GHG’s are N2O and halocarbons. These combine with CO2 and a bit of tropospheric ozone to give a current total forcing of ~3 watts/M^2… about 81% of a simple doubling of CO2. Methane and N2O have historically tracked CO2, although methane has recently diverged from CO2 trends and has increased little in recent years. Halocarbons (‘Freon’ refrigerants and related products) are restricted in use (Montreal Protocol) and are slowly declining from their peak concentration of about 10 years ago.

    The issue of ocean lag is only one part of the puzzle. Aerosols (which reduce GHG forcing) are the other. Were both known with reasonable accuracy, we would know the true climate sensitivity pretty well.
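
    The ~81% figure follows from the commonly used simplified forcing expression dF ≈ 5.35 * ln(C/C0) for CO2; a quick check in R:

    F2x <- 5.35 * log(2)   # ~3.7 W/m^2 for a doubling of CO2
    3.0 / F2x              # ~0.81: a total forcing of ~3 W/m^2 is about 81% of a doubling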

  96. Hi Steve,

    I’ve been out of the quantum dot business for a while, and never was that much in it in the first place–it was part of an abortive collaboration with “the experimentalists down the hall.” It yielded exactly one paper (in Physical Review B, back in 2006). It has not been cited much, which may be just as well.

    Anyway, they were supposed to be self-assembled InAs quantum dots, and I was told they might be about 4 nm across. I never personally saw one 🙂

    Happy holidays to you too!

  97. SteveF (Comment#64201)

    A significant fraction of diesel fuel can be collected from most crude oils.

    I never said otherwise, as that wasn’t the point. The point was that gasoline is an inevitable byproduct of oil refining. This alone puts limits on the ratio of diesel to gasoline powered cars worldwide. The case for switching the U.S. to diesel cars (and I am not saying you are advocating this) is based on this fundamental error. The fact is the European reliance on diesel for motor fuels results in a surplus of European gasoline which is mostly exported to the U.S. market.

  98. Julio,

    4 nm humm.. pretty small. Do you know if measuring the size of these is of any interest to anybody?

  99. Greg F,

    Well, I guess then more demand for diesel would… drive down gasoline prices! 🙂

    I’m not going to sell my gasoline powered car just yet.

  100. SteveF (Comment#64201)

    If the lag time (“heat in the pipeline”) is short, then the equilibrium climate sensitivity should be closer to this value.

    Using plumbing to explain basic electrical theory is quite common but falls apart quickly when considering AC systems. I think the same applies here. The time constant for the absorption of incoming solar radiation is going to be very short compared to the loss of heat from the oceans to the atmosphere. As an electrical analogy the ocean would be a capacitor with a small series resistor for incoming solar radiation and a much larger resistor draining the capacitor. If I recall almost all the solar radiation is absorbed in the first 200 meters so 700 meters should be adequate. The ocean, like a capacitor, acts as an integrator or low pass filter. If the ocean temperatures are flat then I would assume the system is already in equilibrium. It would seem to me that the ocean temperatures should be a pretty good leading indicator of what is in the pipeline.
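
    A minimal sketch of that electrical analogy in R (a single one-box “RC ocean” integrating an annual sinusoidal forcing plus a small step; all values are arbitrary, just to show the low-pass behavior):

    # discrete one-box (RC) ocean: d(resp)/dt = (forcing - resp) / tau
    rc_ocean <- function(forcing, tau, dt = 1/12) {   # tau in years, monthly steps
      resp <- numeric(length(forcing))
      for (i in 2:length(forcing)) {
        resp[i] <- resp[i - 1] + dt * (forcing[i] - resp[i - 1]) / tau
      }
      resp
    }
    yrs <- seq(0, 30, by = 1/12)                      # 30 years, monthly
    f <- sin(2 * pi * yrs) + ifelse(yrs > 10, 0.5, 0) # annual cycle plus a step at year 10
    plot(yrs, rc_ocean(f, tau = 5), type = "l", xlab = "years", ylab = "response")

    With tau = 5 years the annual cycle is damped to a few percent of its amplitude while the step emerges slowly, which is the integrator/low-pass behavior described above.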

  101. Re: SteveF (Comment#64210)

    Probably not… they have their own scanning tunneling microscope in their MBE facility, so they can grow and characterize their dots “in situ”…

  102. Greg F (Comment#64213)

    “If the lag time (“heat in the pipeline”) is short, then the equilibrium climate sensitivity should be closer to this value.”

    You got the wrong guy… that was Julio, not me.

    Happy Holidays.

  103. Owen #64178,

    Believe me, the models are inadequate!

    I am not antipathetical to the idea of reducing dependance on fossil fuels, but on a world-wide scale this is not in any way immediately practicable. You cannot just flick a switch or click a button in the real world to make things happen. So despite all the plusses you indicate could result it just aint going to occur.

    And I haven’t even mentioned the problems with the CO2 theory of warming.

    The southern hemisphere seems to follow along nicely except perhaps in the 1990’s. Since 1950, temperatures have risen by approximately the same amount, it seems. And if the southern hemisphere lagged the northern hemisphere, you would expect the most recent dip to be more pronounced in the northern hemisphere, wouldn’t you?

    Look at the temperatures: the SH is lower than the NH. It will catch up, but it is taking a long time to do so. There is a distinct long term lag, as distinct from the short term response that tracks the immediate temperature changes.

  105. Re: bugs (Dec 24 18:41),

    That’s an interesting point. Since the SH has more ocean than the NH, the lag time between the NH and SH could possibly be used to determine the ocean time constant(s).

  106. bugs (Comment#64225),

    Merry Christmas Bugs.

    Differences between northern and southern hemispheres are not a good estimate of heat accumulation in the oceans. There are too many complicating issues, including long term cyclical effects (north-south ping-pong) and sparse data for the Southern hemisphere for much of the record. The ARGO coverage is very complete and is the best measure of current heat accumulation. Now, if there is a big difference in north and south trends in ARGO data, that would be interesting. I have not heard anything about that. Have you?

  107. DeWitt,

    Possibly, but there are lots of complicating factors. For example, if the GHG forcing were uniform, the extra heat over land would likely be mainly transferred to the northern hemisphere oceans. More land area means more northern warming, independent of the ocean lag constant.

    In addition, aerosols complicate things; aerosols have historically been produced mainly in the north, and are mostly removed from the atmosphere before they can be distributed to the south. Were aerosols off-setting a large fraction of forcing, then this would tend to reduce the warming in the north more than in the south. Huge aerosol off-sets and long ocean lags are both required to hit the middle of the IPCC sensitivity range.

  108. I agree with you, SteveF, but when you write,
    “Now, if there is a big difference in north and south trends in ARGO data, that would be interesting. I have not heard anything about that. Have you?”,
    I’m curious what you would conclude from such a difference in sea temperature trends?

  109. Niels,

    Not temperature trends, heat content trends. A significant difference might give some insight about what is driving changes in heat content. The albedo of land is higher than ocean, but its heat capacity is much lower, so surface temperature increase from applied forcing should be higher in the north, even while total accumulated heat (not per square meter, but total joules) in the northern ocean should be substantially less than total joules in the southern hemisphere oceans. After all, the albedo of the ocean is much lower than land, the north is where most of the aerosols are, and sunlight is ~7% more intense in the southern summer than in the northern summer… so everything says the southern hemisphere oceans should accumulate much more total heat. Do they? I don’t know.

  110. Espen,

    Thanks. Twice as much heat per square meter accumulated in the northern hemisphere since 1955. I suspect this is not counting the Arctic ocean, since it is mostly covered with ice and so not available for measuring ocean heat.

    The ratio of land area to water area in the two hemispheres is:
    Southern – 0.25 Km^2 land per Km^2 water
    Northern – 0.67 Km^2 land per Km^2 water
    .
    The effective difference may be even greater, since the south polar cap counts as ‘land’ while the north polar cap counts as ‘ocean’, even though both are mostly covered by ice, at least for most of the year. So ‘land driven warming’ in the northern hemisphere should be >2.67 times greater than ‘land driven warming’ in the southern hemisphere.

    The relative total heat for the northern hemisphere ocean is about 2 (heat per square meter) * 0.6 (ocean fraction of the hemisphere) = 1.2 (somewhat less if the Arctic ocean is not included), while it is 1 * 0.8 = 0.8 for the southern hemisphere. The oceans in the northern hemisphere have accumulated more total heat than oceans in the southern hemisphere. In spite of the (likely) substantial difference in aerosol effects, which should be considerably higher in the northern hemisphere, and in spite of probably lower overall albedo in the southern hemisphere.

    There must be other important factors involved.

  111. Bugs #64254,

    If so, then this should be evident in the ARGO southern hemisphere data. The problem is not that heat could not possibly be mixed more in the southern hemisphere; it is that measurements (by ARGO) do not show that mixing.

    Recent estimates of total ocean heat accumulation below 2000 meters are under 0.1 watt per M^2.

  112. Re: SteveF (Dec 25 12:08),

    total accumulated heat (not per square meter, but total joules) in the northern ocean should be substantially less than total joules in the southern hemisphere oceans. After all, the albedo of the ocean is much lower than land, the north is where most of the aerosols are, and sunlight is ~7% more intense in the southern summer than in the northern summer

    Maybe and maybe not. The average annual TOA insolation for NH and SH hardly differs, 341.3 W/m2 NH and 341.9 W/m2 SH. Any extra accumulation in the SH summer will likely be radiated away in the SH winter. In fact, the OHC graph I posted at The Air Vent shows exactly that. The 90 degree phase lag between OHC and insolation is indicative of a time constant of years rather than months. The source of the graph is Antonov et al., Climatological annual cycle of ocean heat content, GEOPHYSICAL RESEARCH LETTERS, VOL. 31, L04304, doi:10.1029/2003GL018851, 2004.

  113. DeWitt Payne (Comment#64258),

    Maybe we are seeing different things in the graphic you posted. The ~90 degree lag seems consistent with a pretty wide range of lag constants. The southern hemisphere maximum and minimum heat contents are at ~2 months after the maximum and minimum of southern hemisphere insolation (January, July). As to why the total heat content swing (maximum to minimum) is greater in the southern hemisphere: is that not just a result of having a lot more ocean surface area in the southern hemisphere?

    Even though the annual range of heat content is greater in the southern hemisphere, the overall trend (long term) is much greater in the north. Maybe an informative comparison would be the northern versus southern hemisphere heat per square meter annual change. It seems to me that with the much greater ocean area in the southern hemisphere, the per square meter change in annual heat content in the south would have to be considerably lower than in the north, especially if the Arctic ocean is excluded.

  114. Re: SteveF (Dec 26 13:50),

    Insolation peaks at the solstices, just past the middle of December and June, so it’s closer to a 3 month or 90 degree lag than a two month or 60 degree lag, assuming the monthly data is the average for the month, which corresponds to the middle of the month. A near 90 degree lag puts an upper limit on the cutoff frequency at least an order of magnitude below the signal frequency of 1/year, i.e. a time constant of 10 years or longer.

  115. Dewitt,

    The solstice is Dec 21, but the peak of solar intensity comes January 3. The peak of net insolation (TOA) is probably very close to January 1, though changing cloud cover during the spring/summer transition could certainly have a strong influence on what arrives at the surface, and hence on when the peak in ocean heat arrives.

    I’m not sure I see the calculation of the time constant the way you do; there can be no single time constant. But I will think about it.

  116. Re: SteveF (Dec 26 20:53),

    That’s for the planet. That doesn’t count when you’re looking at hemispheric data. For the NH and SH, the max and min insolation are at the solstices. When you sum NH+SH, there’s a small phase shift away from the solstices. The phase for the hemispheric OHC data listed in Table 1 of the reference cited is 3 months for the SH, 8.9 months for the NH and 3.1 months for the World Ocean. Any way you slice it that’s about a 90 degree phase shift.

  117. DeWitt,

    I do not think the angle of lag is as sensitive an indicator of lag constant as you suggest. Here are three synthetic curves with three different assumed lag constants:
    http://i54.tinypic.com/eflbis.jpg
    There is only about a 10 degree difference (that is, ~10 days difference in maxima or minima of ocean heat content) for 3 years versus 10 years of assumed lag. I suspect that it would be very difficult to use this small difference to figure the correct ocean lag constant, since other factors could change the location of maxima and minima of heat content by more than 10 days.

    These curves used only a single assumed lag constant. The real ocean/atmosphere would be more complicated.

  118. Re: SteveF (Comment#64277)

    Nice demo, Steve.

    Mathematically, it’s the factor of 2*pi that’s led DeWitt astray. The condition he wants is not T >> 1/f, (where T is the time constant and f the driving frequency), but T >>1/(2*pi*f), which is a much more relaxed one.
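
    In an idealized single-box model the phase lag of the response behind a sinusoidal forcing is arctan(2*pi*f*tau), which saturates near 90 degrees very quickly for an annual cycle; a quick check in R:

    phase_deg <- function(tau_years, f = 1) atan(2 * pi * f * tau_years) * 180 / pi
    phase_deg(0.16)   # tau near 1/(2*pi*f): ~45 degrees
    phase_deg(3)      # ~87 degrees
    phase_deg(10)     # ~89 degrees -- only a couple of degrees more than tau = 3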

  119. By the way, I just installed R yesterday, and in no time at all I was plotting cross-correlations, like this one:

    http://i56.tinypic.com/vsj9sn.png

    That’s the hadcrutv3 annual temperature series for the northern hemisphere, cross correlated with the southern hemisphere. The best correlation is clearly at zero lag. (Given my null experience with statistical analysis of time series, that’s about all I can say for sure :-))
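
    For anyone wanting to try the same thing, a minimal R sketch (the file name and column names here are placeholders — adjust to however you have the hemispheric annual series stored):

    # two columns of annual anomalies over the same years, read from a local file
    had <- read.csv("hadcrut3_nh_sh_annual.csv")   # hypothetical file with columns nh, sh
    ccf(had$nh, had$sh, lag.max = 10,
        main = "cross-correlation, NH vs SH annual anomalies")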

  120. Hi Julio,

    You are right about the 2Pi factor, of course.

    “I just installed R yesterday, and in no time at all I was plotting cross-correlations”.

    Based on comments by others, I assumed R had a pretty steep learning curve. Maybe it is not as bad as I imagined. Was the muttering of swear words involved in getting started? 😉

  121. Hi Steve,

    I too was a bit intimidated by the posts by Lucia and Mosher regarding R, but the thing went surprisingly smoothly (no swearing required!). Of course, I just wanted a “quick and dirty” result; customization, to get things to look exactly right, could potentially take an infinite amount of time…

    A printed manual would help, though. Also, I cannot really tell, given any function, which arguments are required and which are optional. And the use of <- as an assignment operator, instead of the traditional =, annoys me. I'm sure I could nitpick all day, but so far, I have not found it any less friendly than most other open-source software (each reader should decide for him- or herself whether this is a recommendation or a warning!)

  122. Re: SteveF (Dec 26 22:13),

    This is a typo?

    No. In the table the phase refers to the location of peak OHC wrt the beginning of the year. So SH OHC peaks in March as does global OHC and NH OHC peaks in September. Relative to peak insolation, the phase lag is ~3 months for both SH and NH.

    Determining time constants from the seasonal signal is not what I was suggesting. The seasonal signal only puts a lower limit on the time constant. It would take a signal with much lower frequency to be able to calculate the actual time constant(s). The global signal is nearly useless for this purpose as well because you lose too much information when you sum NH and SH. See the Tamino two box model discussions, for example. You can get a fairly good fit for a wide range of time constants.

    The phase lag for satellite LT temperature is much shorter (~30 days) than for OHC, as would be expected for the lower thermal mass.
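
    For reference, a generic two-box sketch in R (not necessarily the formulation discussed at Tamino’s; the weights and time constants are arbitrary). The step response is a weighted sum of a fast and a slow exponential, which is why quite different slow time constants can fit the data almost equally well:

    two_box <- function(t, tau_fast, tau_slow, w_fast = 0.6) {
      w_fast * (1 - exp(-t / tau_fast)) + (1 - w_fast) * (1 - exp(-t / tau_slow))
    }
    yrs <- 0:50
    plot(yrs, two_box(yrs, 4, 30), type = "l", xlab = "years", ylab = "fraction of equilibrium")
    lines(yrs, two_box(yrs, 4, 80), lty = 2)   # a much slower deep box looks similar for decades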

  123. DeWitt,

    “The seasonal signal only puts a lower limit on the time constant. It would take a signal with much lower frequency to be able to calculate the actual time constant(s). ”

    Agreed. That lower limit is somewhere under 5 years, maybe 3 to 4 years. A slower signal like the 11 year solar cycle does in fact seem to show up in the temperature history. If you assume that the solar output changes by ~0.0017 watt per sunspot over the last ~120 years, the best regression fit suggests an ocean lag right around 5 years… but there is a lot of uncertainty of course, due to the temperature variation from other sources. Still, that is the best fit.

  124. DeWitt,
    Let me add to the above comment: the best fit is about 5 years lag with very low aerosol off-sets. With substantial aerosol offsets (say up to ~40% of current GHG forcing), then the best fit increases from ~5 years to ~10 years… suggesting a sensitivity ranging from ~1.3 C to about 1.9 C per doubling.

  125. SteveF,

    There are 3000 Argo buoys, are there not? They supposedly measure the OHC over 70% of the Earth’s surface.

    How can they do that adequately?

  126. SteveF,

    OK let’s put it another way. Do you think Argos adequately measures the temperature of the oceans?

  127. Re: Dave Andrews (Dec 28 14:41),

    ARGO probably measures the ocean temperature profiles far better than weather station thermometers measure the 2 m surface air temperature. The ARGO thermometers have much higher precision, for example. Of course they need it because they’re measuring temperature changes on the order of a few millidegrees. The geographical distribution is better too because it’s more uniform. Water temperature is a direct measure of heat content. For air, you have to factor in humidity so it’s not usually done. The year over year change in ocean heat content is also a direct measure of radiation imbalance, although it won’t tell you what is causing the imbalance without other information such as cloud cover.

  128. Dave Andrews (Comment#64376)
    “OK let’s put it another way. Do you think Argos adequately measures the temperature of the oceans?”

    ARGO measures the profile of temperature with depth… AKA heat content. That is not the same as temperature of the ocean.

    For measuring heat content, ARGO is for sure a lot better than earlier measures. The number of measurements, their precision, and their geographical dispersion all suggest good coverage. Are they perfect? Heck no, but they are certainly pretty good… probably better than satellite measurements of tropospheric temperatures, which are themselves quite good. I am still not sure what you are trying to get at. Compared to most measurements related to climate science, ARGO seems to be the best. Are you suggesting that ARGO is not accurate?
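
    The profile-to-heat-content step is just an integral of the temperature anomaly over depth; a minimal R sketch (the density and heat capacity are round numbers, and the example profile is made up):

    # column heat content anomaly (J/m^2) from a profile of temperature anomalies
    ohc_anomaly <- function(depth_m, dT, rho = 1025, cp = 4000) {
      dz <- diff(c(0, depth_m))          # layer thicknesses
      sum(rho * cp * dT * dz)
    }
    # e.g. a 0.02 C anomaly over the top 700 m:
    ohc_anomaly(depth_m = c(100, 300, 700), dT = c(0.02, 0.02, 0.02))  # ~5.7e7 J/m^2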

  129. Some of the known shortcomings of ARGO: the duration of the record, particularly since the number of floats did not “fill out” until more recently. Also, the density is somewhat uneven, with notable gaps in some areas of climate interest such as the Southern Ocean and many western boundary currents where you tend to lose floats. Finally, as everyone knows, the sampling depth is not by any means the full ocean depth.

    But yes, ARGO is a useful observing system and much better than what we have had in the past.

  130. Oliver,
    .
    I find it quite amusing that ARGO data is routinely discounted when the data suggest the climate sensitivity is significantly lower than the middle of the IPCC range.
    .
    ARGO data clearly show very little ocean heat accumulation since 2004, no matter how much that displeases some people. The number of ARGO floats in service reached >2000 by 2003 and nearly 3000 by 2004. Coverage has been good since at least 2004. Recent measures of heat accumulation in the deep ocean (below ARGO) are very modest (~0.1 watt per square meter). In any case, the best estimate we have of ocean heat accumulation (that is, Earth’s current energy imbalance) is ARGO. The measurement of the rate of heat accumulation is completely independent of the length of the record… either heat is being accumulated at a high rate, or it is not; and clearly, it is not.

  131. Also, I cannot really tell, given any function, which arguments are required and which are optional. And the use of <- as an assignment operator, instead of the traditional =, annoys me.

    Simple:

    julio <- function(x,y,z) {}

    All are required.

    julio <- function(x=14,y,z) {}

    x has a default of 14; y and z are required. You can still override x.

    julio <- function(x=14242,y=1,z=64) {}

    ALL have defaults, so you can call julio like this: julio() and the parameters are
    set to their defaults.

    AND you can use = as opposed to <-

    It’s a hot topic on the help list.
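
    A quick way to see this at the console (args() and formals() are standard base R):

    julio <- function(x = 14, y, z) NULL
    args(julio)            # shows the argument list and which arguments have defaults
    formals(julio)$x       # 14
    julio(y = 1, z = 2)    # x silently takes its default of 14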

  132. SteveF,

    The believability of a “rate” measurement does depend upon the record length and coverage, since the ARGO measurement (like any other observation) is not noise-free. Meanwhile, measurements below the ARGO coverage have to be taken as extremely sparse, at best.

    I do agree that people tend to be biased when it comes to “admissibility” of one data set vs. another.

  133. SteveF,
    I’m not suggesting that Argo is not accurate in what it does, nor that it is not better than systems we have had before. Rather, how well does it represent the ‘true’ picture?

    As Oliver mentioned above coverage is far from ideal and problems arise at boundary currents and no doubt elsewhere.

    So it is an improvement on the past but does it give us the whole story?

  134. Oliver,

    While I would agree that variation in the system being measured (like weather patterns and natural pseudo oscillations.. ENSO and others) could cause the rate of accumulation of heat to be lower or higher than the long term trend, I think it is important to separate that variation from uncertainty due to variation in the measurements. In the case of ARGO, the measurement is quite good, that is, the nil to low measured heat accumulation of the last 6-7 years is almost certainly real. It is possible that this lack of heat accumulation is the result of natural variation, and that over longer periods a different trend will become evident; I would argue that the Hansen et al (2005) conclusion that the Earth’s long term imbalance is very large (~0.8 watt per sq meter) is a perfect case of wrongly assuming that a short measured trend is representative of the longer term.
    .
    That does not mean that the current ARGO measurements are in error; there is no evidence that they are. “Earth’s energy imbalance” as James Hansen likes to describe it, is currently very low.

  135. Dave Andrews,
    I do not agree that there are significant coverage issues.
    http://dapper.pmel.noaa.gov/dchart/index.html shows the most recent float distribution. Outside of the ice covered ocean regions, (Arctic ocean and the sea-ice regions in the Southern ocean) the coverage appears to be quite good. Whatever deficiencies someone might imagine exist, they are tiny compared to what was available before. If you are suggesting that a lot of heat is accumulating but somehow hiding from ARGO measurements, then you need to do more than suggest that is the case.

  136. Re:steven mosher (Comment#64404)
    Ah, this helps. Nice to know I can use =, also. Now that I have taken the plunge, I should really head over to your blog for some tutorials. I’m just being lazy and having too much fun playing with my Christmas toys. 🙂

  137. steven mosher (Comment#64404) December 28th, 2010 at 10:56 pm

    Also, I cannot really tell, given any function, which arguments are required and which are optional. And the use of <- as an assignment operator, instead of the traditional =, annoys me.

    If you go back further in tradition, “=” means equals, it does not mean assignment. The two meanings are completely different. The use of “<-" for assignment is more correct.

  138. Bugs,

    The use of ‘=’ as the assignment operator goes back to at least Fortran. You can argue that a different assignment operator is better, but that does not mean a different operator is “traditional”. The use of ‘=’ is the traditional method.

  139. Re: bugs (Comment#64441)

    If you go back further in tradition, “=” means equals, it does not mean assignment.

    I guess it depends on your tradition. According to Wikipedia, “In mathematics, the equals sign is used for both assignment (ex: let x = 2) and for testing or denoting equality (ex: if x = 2 then x/2 = 1).” This dual usage is certainly older than any programming language.

    The question is how to distinguish the two usages in a program, and different languages address this differently. But, as Steve points out, the oldest tradition in scientific programming that we are acquainted with (that is, FORTRAN) is to use “=” for assignment and something else (“.eq.” in FORTRAN, “==” in C) for logical statements.

    Other languages, like Pascal, make a different choice, and so, from the perspective of that tradition, perhaps using “:=” as an assignment operator would have made some sense. But there is nothing traditional, as far as I can tell, about the “reverse structure selection operator” <-. I see what they were trying to do. It still annoys me anyway.
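
    For completeness, the three forms side by side in R:

    x <- 2      # assignment, the idiomatic R form
    x = 2       # also assignment (allowed at the top level and inside braces)
    x == 2      # equality test, returns TRUE
    2 -> x      # the rarely used rightward assignment also exists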

    Such good questions and thoughtful answers, especially Steve F (64200). My question regards the climate sensitivity of 1.5-3 degrees C which Mosher cites and which other “lukewarmers” here agree is about right or even slightly high. We have been recovering from the LIA with about a 0.7C per century temperature increase. In the absence of a good understanding of the forcings, including those forcings responsible for the 300 year recovery from the LIA, might one justifiably assume a continued 0.7 C warming per century plus the additional forcing of CO2? If that is true, then the “lukewarmers’” CO2 contribution would be 1.5-3C minus approximately 0.7C, wouldn’t it, or am I misunderstanding the argument?

  141. Doug Allen,

    That is a good question. I guess the answer is, nobody knows for sure. If the Earth is still “recovering” from the LIA, then there has to be a mechanism involved in that recovery. I have never heard a credible mechanism proposed. That is not to say a long and continuing recovery from the LIA is not possible, just that if it is true, no known mechanism exists.

    One thing that does seem to be true (based on ice core records in both Antarctica and Greenland) is that the Holocene has on average had a gradual cooling trend (in spite of century-long “ups” and “downs”) for ~8000 years. The warming of the last century is comparable to or greater than the largest of the “ups” during the last 8000 years, which suggests that the rise in GHG’s during the last century could very well be involved, even if not wholly responsible.

  142. Julio:

    Other languages, like Pascal, make a different choice, and so, from the perspective of that tradition, perhaps using “:=” as an assignment operator would have made some sense. But there is nothing traditional, as far as I can tell, about the “reverse structure selection operator” <-. I see what they were trying to do. It still annoys me anyway.

    I’d be remiss not to point out the -> used in C/C++, and of course “=” for assignment and “==” for testing equality.

Comments are closed.