Ocean Heat Content Kerfuffle

Bob Tisdale wrote to ask my opinion on the Ocean Heat Content Kerfuffle, which involves Bob's blog and Tamino's blog and is continuing in comments at Bob's blog, where a commenter called "kdkd" is posting analyses that seem to contain a confusing mix of correct and incorrect claims.

I told Bob I don't generally follow Ocean Heat Content posts, so I don't have firm opinions on many issues. However, I could comment on how I might estimate the uncertainty in observed trends using quarterly data. (One can resort to using annual average data, but that will tend to result in unnecessarily large uncertainty intervals. That is: using similar assumptions about the data, using quarterly data will tend to give smaller intervals without increasing type I error.)

I also explained what I consider to be the main question mark in his post: How in the heck can we test the modelE projections after 2003 when we don’t actually have any?

Although I don't want anyone to lose sight of the main difficulty with testing Model E projections, in today's post I'll set aside that question and just show how one might estimate the uncertainty due to "weather noise" in the observed trend.

My relatively modest scope:

  1. Pick this data set: OHC levitus data on a quarterly basis.
  2. Suggest a type of statistical model for the “weather noise”: I’m going to suggest we pick a linear trend with ARMA residuals. (Some people might suggest exploring fractional differencing, but I’m not going to give that a try.)
  3. Compute best fit trend since 1993 based on the chosen model.
  4. Report uncertainty intervals based on that model.
  5. Make a few comments.
  6. Free the code.

The organization of this post is "here's what I did; here's a graph."

Step 1: Download the OHC levitus data on a quarterly basis. Store in a folder on my Mac.

Step 2: Write an R script which includes the “guts” and all the extra R stuff I need for reasonably nice titles, labels etc.

Analysis step 1: Chop off all data prior to Jan-March 1993. Find the ordinary least squares fit to the data since Jan 1993 and look at the residuals of the fit.
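(A minimal sketch of this step in R, assuming the quarterly CSV has been read into a data frame named ohc with columns yearFrac and OHC; these names are my assumptions, not necessarily those in the posted script:)

ohc93 <- subset(ohc, yearFrac >= 1993)      # drop everything before 1993 Q1

# ordinary least squares fit of OHC against time
ols.fit <- lm(OHC ~ yearFrac, data = ohc93)

# look at the residuals for serial autocorrelation
plot(ohc93$yearFrac, residuals(ols.fit), type = "b",
     xlab = "Year", ylab = "OLS residuals (10^22 Joules)")
acf(residuals(ols.fit))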

The residuals to the ordinary least squares fit exhibit serial autocorrelation. Since my scope is to do something quick, I used "auto.arima" to suggest the best model for the residuals. It suggested the residuals look AR1.
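(For anyone following along: auto.arima comes from the forecast package. Applied to the OLS residuals, the call looks roughly like this; the order limits are my choice, kept small because there are only ~70 quarterly points:)

library(forecast)   # provides auto.arima()

# let auto.arima pick an ARMA structure for the OLS residuals
auto.arima(residuals(ols.fit), stationary = TRUE, seasonal = FALSE,
           max.p = 3, max.q = 3)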

I then used arima() to find the best fit model under the assumption the residuals are AR1. This best fit linear model is illustrated by the bold red line in the figure below:

In the figure above: The solid red line is the best fit trend and is an attempt to convey the expected magnitude of the "underlying trend"; the red dashed lines indicate the uncertainty in the best fit trend. The circles are quarterly OHC data; the dashed black line is the uncertainty associated with predicting the individual observation (i.e. weather) data points if we were able to rewind the earth back to 1993, tweak the initial conditions and watch "weather" evolve again.

The slope of this trend line is m = 0.58 * 10^22 Joules/year; the 2σ uncertainty range spans [0.47 * 10^22 to 0.69 * 10^22 Joules/year]. If, as Bob suggests, models project a trend of 0.7 * 10^22 Joules/year since 1993, that trend is slightly outside these intervals. Note however that the method I am using gives slightly undersized uncertainty intervals. (Exactly how much too small would require me to run some Monte Carlo tests, which, so far, I have been too lazy to do for this specific case. I will continue to be too lazy to do it until such time as I think we actually know what some models projected for the period from 1993-2011.)
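(For anyone reproducing the fit: the trend-plus-AR(1) model can be estimated in a single arima() call by passing the time axis as an external regressor. A sketch, continuing with the assumed variable names above; the coefficient name "yr" simply comes from the regressor as I defined it:)

yr <- ohc93$yearFrac

# linear trend with AR(1) residuals, fit by maximum likelihood
ar1.fit <- arima(ohc93$OHC, order = c(1, 0, 0), xreg = yr)

# trend in 10^22 Joules/year, with its (slightly undersized) 2-sigma range
slope    <- ar1.fit$coef["yr"]
slope.se <- sqrt(diag(ar1.fit$var.coef))["yr"]
c(lower = slope - 2*slope.se, trend = slope, upper = slope + 2*slope.se)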

Naturally, I examined the histogram of the residuals and the correlogram.
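(The diagnostics are one-liners on the innovations from the AR(1) fit sketched above:)

ar1.res <- residuals(ar1.fit)    # innovations from the trend + AR(1) fit

hist(ar1.res, main = "Histogram of AR(1) fit residuals")
acf(ar1.res,  main = "Correlogram of AR(1) fit residuals")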

I'm not sure I'm convinced the residuals look Gaussian, but they may be. We don't have very many residuals to work with, so I didn't apply any test. It appears the residuals to the ARIMA fit may well be white. So, other than the slight tendency of the ARIMA fit to give slightly undersized uncertainty intervals, the assumptions underlying the calculation of the trend and uncertainty interval in the figure may be fine.

Based on this: the trend in OHC based on the Levitus series appears to exclude a trend of 0.7 * 10^22 Joules/year at the 95% confidence level. But as I said: I don't know what this tells me about model projections. For all we know, the trend from 1993-2003 was slightly elevated because the earth cooled after the eruption of Pinatubo, and consequently that period is a recovery period. If so, the model-mean projections may have shown a bit of deceleration after the recovery was more-or-less complete. And if so, when someone actually digs up OHC data from models and computes the multi-model mean trend from 1993-2011, it may turn out to be less than 0.7 * 10^22 Joules/year.

(Also: There are other OHC observational series. Some show higher trends than Levitus.)

For those who want to discuss observed trends in OHC since 2003, I'll be willing to discuss those, show various figures, and talk about whether or not a start date of 2003 is fair. But bear in mind: I don't think we can test any projection until someone actually drags up the OHC data from the models and computes the projections from Model E or the multi-model mean.

R Script: Probably badly written, but here it is: OceanHeatContentStatSig

58 thoughts on “Ocean Heat Content Kerfuffle”

  1. Regarding the Monte Carlo tests, I recently did something similar on my blog, so I was able to whip up a script real quick:

    http://dl.dropbox.com/u/9160367/Climate/OHC_BlackboardPost_5-16.R

    Basically, I use the lag1 correlation (.487) and white noise sd (1.786) from the “ar” function to simulate the AR(1) noise (assuming normal distribution for the white component), then add this to the fitted linear trend for each MC run. Run 10000 times, I get .457 and .738 for the 2.5% and 97.5%.

    Playing with the seed will change the results slightly, however…I’d need to leave it running for a while to get a lot more runs in.
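    In rough outline (a minimal sketch of the procedure described above, not the linked script; the series length and trend value are assumptions based on the post, and each simulated series is simply refit with OLS here):

    set.seed(1)

    n     <- 73        # quarters, 1993 Q1 onward (assumed length)
    phi   <- 0.487     # lag-1 AR coefficient quoted above
    sd.wn <- 1.786     # white-noise sd as quoted above (see the correction further down the thread)
    b     <- 0.58/4    # trend per quarter, from the post's 0.58e22 J/yr
    tq    <- 1:n

    sim.slope <- replicate(10000, {
      y <- b*tq + arima.sim(model = list(ar = phi), n = n, sd = sd.wn)
      4*coef(lm(y ~ tq))["tq"]     # refit the trend; convert back to J/yr
    })

    quantile(sim.slope, c(0.025, 0.975))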

  2. I would say the residuals aren’t terribly white as the data from about 2003 on has a run of 8 points above the trend and another run of 10 points below the trend, if I counted correctly. As I remember, the probability of that happening for n = 80 with gaussian distributed residuals is vanishingly small.
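    (A quick way to check a run-length claim like this is simulation; a sketch for the i.i.d. Gaussian case, using n = 80 and the run lengths counted above:)

    set.seed(2)

    # estimated probability that 80 i.i.d. Gaussian residuals contain both
    # a run of >= 8 points above zero and a run of >= 10 points below zero
    hit <- replicate(20000, {
      r <- rle(rnorm(80) > 0)               # consecutive same-sign runs
      any(r$lengths[r$values] >= 8) && any(r$lengths[!r$values] >= 10)
    })
    mean(hit)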

  3. Troy — Thanks. That was a bit larger than I expected. That said: I developed my "intuition" on the bias in the standard errors from runs using 20 years of monthly data. Quarterly is a lot less data. I knew ARIMA underestimates the uncertainty intervals.

    If you fiddle, I think you’ll find that for AR1, the ratio of {error in the standard error in the trend}/{standard error in the trend} is a function of (AR1 coefficient, number of samples in the time series) but not sd of the actual errors of the individual cases.

    The magnitude of {standard error in the trend} is linearly proportional to sd.

    DeWitt: But the residuals to a straight line aren’t white. That’s why you need to look at the correlogram of residuals to the AR1 fit. Those are in the thumbnail graph to the right of the histogram. I’m not sure the residuals look Gaussian– but we don’t have many. So I’m not testing.

  4. It really is a shame that the data for the ensemble members or model means for the OHC runs of the GISS Model-ER and Model-EH aren’t posted anywhere.

  5. Re: lucia (May 16 14:37),

    Ok, for an AR(1) process with a coefficient of 0.5, the probability of having n points in a row above or below the mean is going to be higher than for an i.i.d. process, but two runs of that length would still be strongly indicative of a trend shift. Of course you're not looking specifically for a trend shift, but it appears to be there.

  6. Bob–It’s possible they may never have been calculated.

    DeWitt– There may or may not be a trend shift. I'm just doing this based on a statistical model that assumes the trend is constant.

    Steve Mosher– He wants ocean heat content integrated over the upper 700 meters. It's not one of the series already processed at KNMI. It may be that people have to get data from PCMDI and process it themselves (I've never checked to see if data for the whole ocean are deposited!)

  7. steven mosher asked: “Bob Tisdale (May 16 16:52), what data do you want. exactly?”

    Thanks for asking.

    Ideally? Model-ER (Russell ocean) and Model-EH (HYCOM ocean) Ocean Heat Content ensemble members:
    A. monthly or annual time-series data from 1955 to however far into the future they’ve gone (based on the 20c3m forcings?).
    B. On a Global basis would probably be best and simplest
    C. 0-700meters depth
    D. In terms of 10^22 Joules, but I’m sure I could find the conversion factors for other forms.

    Have you been rummaging around the IPCC models at the PCMDI webpage?

    If this helps, I believe the Model-ER was presented in Hansen et al (2005), and both the Model-EH and -ER were discussed in Sun and Hansen (2003).

  8. Lucia, a couple of points on the R-script, which looked good to me.

    Windows users need to have the following to handle quartz (Mac specific). The script here shouldn't affect Macs.

    if(.Platform$OS.type == "windows") {
      quartz <- function() windows()
    }

    Also R is really good at downloading data directly and this makes the runs turnkey without having to refer to internal directories. For example:

    loc <- "ftp://ftp.nodc.noaa.gov/pub/data.nodc/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/DATA/basin/3month/ohc_levitus_climdash_seasonal.csv"
    download.file(loc, "temp.csv")
    ohcOrig <- read.csv("temp.csv", header=FALSE)

    If you are downloading an excel file or a zip file or a nc file, then:
    download.file(loc,"temp",mode="wb")

    the package xlsReadWrite will read excel sheets into R, which is quite handy.

  9. The ocean temperature data needed to compute ocean heat content is on the PCMDI server. Not all of the ensemble members for each model are available however, because

    For the 3-d ocean fields, it is likely that storage space constraints will limit relatively quick access to output from only a single member of each ensemble, so in prioritizing your processing, consider initially sending PCMDI the 3-d ocean output from only 1 member of the ensemble.

    However, it appears that all of GISS ER/EH’s ensemble members are present (at least for 20c3m and a1b).

  10. Steve

    "the package xlsReadWrite will read excel sheets into R, which is quite handy."

    On the Mac you don't have a binary distro for this. So she would have to install from source; not sure why the Mac package has no binary…

  11. Chad–

    I’m downloading the data now. I’ll look at it tomorrow.

    Wow! That’s just got to be one s***load of data!

    Steven/Steve–
    On the Mac, if I have an Excel spreadsheet, I just save as csv and then read into R. Turnkey scripts that someone can just source are nice in principle, but I don't mind doing one 1-minute task.

  12. Lucia

    "I just save as csv and then read into R"

    The idea is to write a script that others can source without having to download and save source data files. It makes it easier for others to reproduce and follow your work.

    It also has the advantage of not having to save a bunch of extra files that you may or may not recognize later. By having the data link in your script, you can go directly to the latest version of the data set.

  13. Lucia, Bob Tisdale,
    The transition from pre-Argo to Argo data continues to look suspicious (a substantial step up at the transition). The statistics post-Argo look to my eye different from pre-Argo… no surprise considering better coverage and more consistent data; the total variance should be lower.
    .
    In light of this, I wonder if statistics for the combined data pool really tells us much.

    Tamino's tirade about cherry picking a starting point is a bit hollow in light of this known issue with the data. Given the suspicious-looking step at the transition, the claim of cherry picking is weakened.

  14. Kelly O’Day,
    I understand the idea. I just don’t consider it as high a priority as some other people do.

    By having the data link in your script, you can go directly to the latest version of the data set.

    Sometimes this is precisely what you don’t want. For an archived paper, you want people to go to the version you used not the latest version of the data set. Also, NOAA, Hadley etc. sometimes change their version.

    Some scripts I'm writing and running download directly; some don't. It's just a choice.

  15. Wow! That’s just got to be one s***load of data

    I only downloaded GISS ER's data. It's about 11.2 GB for 20c3m and a1b. Thankfully, GISS ER data doesn't have high spatial resolution!

  16. SteveF–
    I agree the data post-Argo look different from pre-Argo. That's one reason I've stayed away from OHC. I figure the Argo people need to sort things out even now.

    I also think Tamino’s accusation of cherry picking is weak. I believe Bob’s version of how he picked 2003. The evidence does show he picked it before it was revised upwards and he picked it way back when Pielke blogged about OHC, and Pielke’s post talks about testing things starting in 2003. (This was because Pielke was working off knowledge through the end of 2002.)

    At the same time, I do think we have to worry a bit about picking any start date near or during the Argo transition precisely because big steps in any direction could just be uncertainty associated with the transition and/or problems with the system getting implemented. The fact that there is such a huge jump from 2002 to 2003 is also problematic.

    So, while I don’t think much of Tamino’s rant, I also think we need a heck of a lot more work before anyone would be able to say the model projections for OHC are out of whack.

    Bob–
    You're correct about the data at the page I linked. That page seems to have been created when NOAA wrote some climate report in 2010. It appears to have data they could get when they wrote that report. It's not an auto-updating resource. But you can see that there are other OHC data sets, some with higher and some with lower trends.

    I'm not trying to make a subtle dig by this. My main thinking is only that it's a bit difficult to test model projections when we don't have real projections. If we did get real projections, we would want to try to get up-to-date data from as many of the published observational data sets as we could. It seems to me that right now, we'll get different trends and uncertainties depending on which data set we select.

    Also, the fact that measurement methods did transition during the period makes me scratch my head a bit about how to interpret any linear trend.

  17. Lucia (#76006),

    I agree there is much to be done in understanding the OHC data. A couple of things stand out:
    1. The heat capacity of the well mixed (convective) surface layer (about 55 m global average) is high enough that ~1/3 of total accumulation ought to be accounted for just by looking at the change in the average surface temperature of the ocean (but this is NOT the case, see below); a rough back-of-envelope version of this arithmetic is sketched after point 4 below.
    2. Heat accumulation down the thermocline ought to be calculable based on the historical variation in ocean surface temperatures and the average eddy diffusion (down-mixing) rate.
    3. But there appears to be a small negative correlation (with a slight off-set) between OHC and surface temperature changes that are driven by the ENSO. This is visible by overlaying the ocean surface temperature history, the Nino 3.4 index, and the Levitus et al OHC data on the same graph (appropriately scaled). For the ENSO driven surface temperature change to be negatively correlated with a change in ocean heat content is counter-intuitive, since 1/3 of the total change in OHC ought to be in the surface layer. This suggests that we need to think of the ENSO as mostly a redistribution of surface heat content, combined with a concurrent slight modulation of heat loss/gain. (e.g. cyclical variation in the depth of the Western Pacific warm pool during ENSO)
    4. I suspect that if the influence of the ENSO on both ocean surface temperature and OHC were quantified and removed from the data, the relationship between the evolution of average ocean surface temperature and OHC would be a lot easier to understand; the influence of ENSO on average ocean surface temperature and OHC is making everything very confused. A quick eye-ball-only look tells me that the odd OHC step at the Argo transition would become even more obvious if this were done.
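    (A back-of-envelope check of point 1, with round-number physical constants of my own choosing rather than anything from this comment; it works out to roughly 0.8 × 10^22 J per 0.1 °C of mixed-layer warming:)

    area  <- 3.6e14   # global ocean area, m^2 (rough)
    depth <- 55       # assumed mixed-layer depth, m (from the comment)
    rho   <- 1025     # seawater density, kg/m^3
    cp    <- 3990     # seawater specific heat, J/(kg K)

    dT <- 0.1         # warming of the mixed layer, deg C
    (area * depth * rho * cp * dT) / 1e22   # heat change, units of 10^22 J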

    Yup, a lot to do to try to understand what is happening.

  18. lucia (#76008):

    Thanks Lucia… anything is possible. If I get a chance I will do so tonight.

  19. AJ,
    Very interesting graphs. I am not sure what you mean when you compare the individual years to the “climatology”. Can you clarify?

  20. SteveF (#76011)

    I’ve used two APDRC datasets. One has Argo data by Climatology mean (12 months) and Monthly mean (2005-present). My “individual years” would be a separate line for each calendar year.

  21. Lucia,

    The OHC data is pretty inconsistent. I was playing around with it for a climate bet thing. The seasonal fluctuations in the ARGO data look odd. Using just the last half year of the ARGO I get a cooling slope. Probably just noise, but odd.

  22. Lucia,

    There is no need to do the read-in in two steps.

    read.csv will work with a URL

    if you want a local copy, then read from url and write.csv local

    like so

    url_OHC <- "ftp://ftp.nodc.noaa.gov/pub/data.nodc/woa/DATA_ANALYSIS/3M_HEAT_CONTENT/DATA/basin/3month/ohc_levitus_climdash_seasonal.csv"

    # read the file directly using the url.
    # set header = false, strings as factors = false
    ohcData <- read.csv(url_OHC, header=FALSE, col.names=c("Date","OHC"), stringsAsFactors=FALSE)

    # turn the text into numeric values
    Year <- as.numeric(substr(ohcData$Date,1,4))
    Month <- as.numeric(substr(ohcData$Date,6,8))
    # middle of the quarter
    yearFrac <- Year + (Month - 1.5)/12

    # create a data frame with the new vars.

    ohcData <- cbind(ohcData, Year=Year,
    Month=Month,
    yearFrac=yearFrac)

    # re-order the columns
    ohcData <- ohcData[,c(1,3,4,5,2)]

    # write.csv for a local copy (file name here is arbitrary)
    write.csv(ohcData, "ohc_local.csv", row.names=FALSE)

  23. And oh, Lucia and other folks using R. If you are not using Rstudio as your IDE, then I would highly recommend it. On windows, mac, or linux it is the IDE of choice (for those who don't use emacs, that is). The dev team has been at work for 2-1/2 years on the app and they do lovely work. I had a chance to meet up with them this past week. The best thing is they are at work on including debugging and "source control" and then package dev tools.

  24. I had a fun time with one guy who was convinced that he had found an error in some database. Of course, once I determined that he had downloaded a file by hand into excel, and then read that excel file into a program, it was pretty clear what the source of the error was. His "handiwork". Of course there may be times when a manual download is necessary. The other situation which occurs with excel has to do with character sets. After you work a while downloading excel stuff from other countries (like station names) you will come to hate excel, the people who wrote it, and any living creature that uses it. hehe. Anyway, I think it's useful to show folks the various ways of getting data into R. For some folks' taste the documentation is not quite up to snuff.

  25. SteveF:

    This suggests that we need to think of the ENSO as mostly a redistribution of surface heat content, combined with a concurrent slight modulation of heat loss/gain.

    Good point; yet another reason global average temperature is not a good metric since it can change simply by redistribution whether the total energy content does or not; in other words: it’s not a conserved quantity. RPSr is right, energy makes much more sense for a lot of reasons.

  26. Steve,

    I’ve used Rstudio. It was pretty nice at first. I like that it kept my images in an easily accessible window for me to throw away or save. Unfortunately, you need (at least Ubuntu users) to install the KDE desktop. On top of that, there are these weird KDE windows that popup, don’t respond and disappear. That’s pretty normal if you have any KDE-dependent applications installed. I didn’t like the interface too much. To each his own.

    I’m like that guy in the Dos Equis commercials: I don’t always use R, but when I do, I use the terminal.

  27. Chad (Comment #76020) “I don’t always use R, but when I do, I use the terminal.”

    I’m a cargo-cult R user. Learnt how to get data in and learnt how to attach it and then recite magic incantation found in book or downloaded. Unlike most cargo-cults the planes do actually arrive.

  28. SteveMosher-

    for some folks taste the documentation is not quite up to snuff.

    It’s not up to snuff. But then, what do you expect for free?

  29. jstults (Comment #76019),

    RPSr is right, energy makes much more sense for a lot of reasons.

    No doubt.
    However, keep in mind that there does appear to be a negative Nino 3.4 versus OHC correlation. So even 0-700 meter OHC may not be a perfect metric for measuring imbalance if it too is subject to natural pseudo-cycles. OHC could also be influenced by longer term natural pseudo-oscillations like the AMO. The available surface temperature data suggests a ~70 year cycle with a peak-to-trough difference of ~0.2C.
    .
    I suspect there are no substitutes for a more complete understanding of both the short and long term physical processes that generate these variations.

  30. ‘Tamino’ and ‘tirade’ are synonymous. Perhaps he should change his moniker:-)

  31. Dave Andrews (Comment #76024),
    I view Tamino as mostly an “enforcer”… the heavy who goes out to beat up anyone who raises a legitimate doubt about the consensus POV. I find that he is usually worse with cherry picking and misrepresentation of factual data than most of the people he attacks. IMO, he is usually not worth reading. I don’t think I am alone in that view.
    http://traffic.alexa.com/graph?&w=400&h=220&o=f&c=1&y=r&b=ffffff&n=666666&r=3m&u=tamino.wordpress.com&&u=rankexploits.com&u=climateaudit.org

  32. SteveF: ” But there appears to be a small negative correlation (with a slight off-set) between OHC and surface temperature changes that are driven by the ENSO. ”
    ———-
    Does this comment mean that as measured surface (or tropospheric) temps increase during an El Nino, that the ocean heat content decreases?
    If so, isn’t that exactly what we would expect and not counter intuitive at all? During El Nino a thin surface layer of very warm water from the Indian Ocean races eastward toward South America producing a net transfer of thermal energy from the ocean to the atmosphere, cooling the former and heating the latter. Vice versa for La Nina.

  33. Re: SteveF (May 17 14:39),

    You need to look at year over year changes in OHC because you expect seasonal variation, if nothing else because the Earth’s orbit is elliptical. The top 700m probably isn’t enough either as any changes in the overturning circulation will redistribute heat between the upper 700m and the rest of the ocean. Changes in the heat content of the atmosphere should be small by comparison even if the temperature difference is fairly large.

    Year over year, heat transfer down at the thermocline is balanced by upward flow of cold water caused by the overturning circulation. We know this because the average depth of the thermocline doesn’t show a secular trend.
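    (With quarterly data like the Levitus series in the post, the year-over-year differences suggested here are just a lag-4 diff; a sketch reusing the assumed ohc93 data frame from the sketches above:)

    # change from the same quarter one year earlier, which removes the
    # seasonal cycle without smoothing
    yoy <- diff(ohc93$OHC, lag = 4)

    plot(ohc93$yearFrac[-(1:4)], yoy, type = "h",
         xlab = "Year", ylab = "Year-over-year change in OHC (10^22 J)")
    abline(h = 0, lty = 2)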

  34. With regard to the step-function pre-Argo/Argo in 2003:

    When the Argo data came in in 2003, was it as well distributed as in 2008, so that the data distribution causes the warmer start? Like the warm/cool station data changes since 1990 discussed by Watts et al.

    Yes, 2003 is not a start-point problem if it is the beginning of a legitimate longer term pattern (>8 years). But we have many 8-year patterns that are very high that we discount, so we should be cautious with this one, especially considering the recognized ARGO start lurch.

    Doug Proctor (Comment #76030): "When the Argo data came in in 2003, was it as well distributed as in 2008…"

    Nope. By 2003, ARGO was providing a significant contribution to the samplings at depth. The coverage of the Southern Hemisphere oceans started to improve in 2003, and by 2004 they were well sampled, but nowhere close to what they were in 2008. I've got a few gif animations that show maps of the sampling on an annual basis in the following post:
    http://bobtisdale.wordpress.com/2011/03/25/argo-era-nodc-ocean-heat-content-data-0-700-meters-through-december-2010/

  36. Owen (Comment #76028),
    There is a big average ocean surface temperature change (up or down), but a barely visible change in OHC. The counter-intuitive part (for me) is that so much could happen to average ocean surface temperature and so little to OHC. As I noted, long term warming of the convective layer represents a substantial fraction of total change in OHC, yet large changes in average ocean surface temperature driven by ENSO over a short period hardly make a dent. Sure, a big physical shift of warm water from the (deep… often 200 meters) west Pacific warm pool eastward across the Pacific is consistent with the observation of little change in heat. I am just surprised at how little.

  37. DeWitt Payne (Comment #76029) May 17th, 2011 at 4:33 pm

    You need to look at year over year changes in OHC because you expect seasonal variation, if nothing else because the Earth’s orbit is elliptical.

    The average sea surface temperatures (I looked at Hadley data) are seasonally adjusted AFAIK. A 12 month rolling average OHC (what I looked at) automatically removes seasonal effects.

    Year over year, heat transfer down at the thermocline is balanced by upward flow of cold water caused by the overturning circulation. We know this because the average depth of the thermocline doesn’t show a secular trend.

    For sure. The global average cold upwelling rate is on the order of 3.5-4 meters per year. Since the horizontal mixing/diffusivity along isopycnal lines is at least several orders of magnitude greater than the vertical mixing/diffusivity, the thermocline is remarkably uniform in shape over much of the global ocean. I believe most of the transfer of heat to below 700 meters is not by diffusion down the thermocline but by changes in deep convection at high latitudes, especially in the Southern Ocean. Physical mixing down the thermocline is remarkably slow. If I remember correctly, Josh Willis from NASA JPL told Roger Pielke Sr. that the heat accumulation in the top 400 meters alone is expected to represent most all warming over decade-long periods and represents a reasonable estimate of the current global imbalance.

  38. SteveF: “Sure, a big physical shift of warm water from the (deep… often 200 meters) west Pacific warm pool eastward across the Pacific is consistent with the observation of little change in heat. I am just surprised at how little.”
    ————————-
    The heat capacity differences between ocean and atmosphere do not explain it?

  39. AJ (Comment #76012),
    I've looked at your graphs some more, and the interesting thing is the phase shift with depth. The 70 degree shift at the surface is at least reasonably consistent with variation in solar forcing, but the gradual change from 70 degrees to ~140 degrees over the top 300 meters does suggest different mechanisms are involved. Seasonal solar heating should be pretty much gone by a few hundred meters depth. Your suggestion of seasonal changes in upwelling rate sounds reasonable, since the further down the thermocline you go the smaller the temperature change for a variation in upwelling should be.

  40. Owen (Comment #76035),
    I don't think so. The surface temperature I looked at was the average ocean surface temperature, not the combined land/ocean temperature. Any ocean surface warming that takes place during the El Nino phase of the ENSO beyond the tropical Pacific (say in the north tropical Atlantic) represents real ocean heat accumulation in those regions, which has to be pretty closely balanced by heat loss from the tropical Pacific, since total OHC doesn't change very much. Like I said, I was a bit surprised.

  41. SteveF (Comment #76036)
    .
    The phase difference is shown in days. It would be better if someone who did this for a living commented on our musings. I’ll guess that the upper levels are dominated by radiation and mixing and that after about 200M it’s mostly conduction. Also note that heat is not only moving vertically, but also latitudinally and longitudinally.
    .
    My comment about upwelling was specific to a wedge that appears in the crest of the wave after about 250M. It’s only idle speculation on my part, it could be just noise. The climatology dataset I sampled only includes six full years… so time will tell if the wedge persists. If I sampled another ARGO product, it might not even be there.

  42. AJ,

    Thanks for clarifying days versus degrees… though it does not matter too much; 360 degrees and 365 days in a year 😉

    My comment about variation in upwelling was that it makes sense to me because the changing slope of the thermocline would automatically reduce the size of the annual oscillation the deeper you go.

    The rate of thermal conduction (absent eddy mixing) is too slow to account for the shape of the thermocline, even at great depth.

  43. There is almost no change in Ocean SSTs between 2003 and today. 2002 to 2006 was dominated by El Ninos and a warm North Atlantic. After a few La Ninas and a little cooling, I would say the Ocean SST level is even a little cooler in 2011 versus 2003.

    But the Oceans were warming somewhere around 0.1C per decade (maybe a little less) in the period before this stagnation (although there is a big difference in this number between the different datasets). The last 7 years are flat or cooling.

    Here is the weekly Global SST since 1981 and what I consider to be the ENSO and AMO influence on that trend (under the assumption that both are true ocean cycles that will balance out to Zero in the long-run, might not be true but more than likely given this is needed to explain the short-term bumps and longer-term cycles in Global SSTs).

    http://imageshack.us/m/94/8290/ensoamoglobalsstmay1111.png

  44. SteveF (Comment #76040)
    Yes, you're right, not much difference between 360 and 365. I wouldn't be surprised if that was why 360 degrees was chosen in the first place. It's close to the number of days in a year and is one of those numbers that can be divided evenly by a lot of other numbers. I'll have to google it.

  45. Me:

    I get .457 and .738 for the 2.5% and 97.5%.

    Ack, I found my script was using the variance output from ar as the sd in rnorm for the white component in the MC runs. Correcting this, I get a confidence interval of [.477, .702].

    It seems your intuition was correct, Lucia 🙂

Comments are closed.