
# Blog - El Niño project (part 6)

Here is a draft of a post by the statistician Steve Wenner:

Please read it and criticize it! It's actually quite a strong attack on Ludescher's work, though it's phrased in a perfectly polite and pleasant way. It may get some counterattacks. So, we should try to make sure it contains no obvious mistakes... though of course nobody except Steve is "responsible" for his claims.

It's quite interesting.

1.
edited August 2014

I see one weakness that we should try to fix.

The most standard definition of El Niño uses the **Oceanic Niño Index** (ONI), which is the running 3-month mean of the Niño 3.4 index. An **El Niño** occurs when the ONI is over 0.5 °C for at least 5 months in a row. A **La Niña** occurs when the ONI is below -0.5 °C for at least 5 months in a row.

Ludescher *et al* use a nonstandard, less strict definition. They say there's an El Niño when the Niño 3.4 index is over 0.5 °C for at least 5 months.

Wenner goes further in this direction. He defines an **El Niño initiation month** to be one where the Niño 3.4 index is over 0.5 °C.

Perhaps we should make it clear that this is using Ludescher's definition of El Niño, not the standard one.
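The standard criterion above translates directly into code. Here is a minimal sketch (the function name and interface are my own, not anyone's official implementation) that scans a monthly ONI series for sufficiently long warm runs:

```python
def elnino_episodes(oni, threshold=0.5, min_run=5):
    """Return (start, end) month-index pairs where `oni` stays above
    `threshold` for at least `min_run` consecutive months -- the
    standard El Nino criterion.  Flip the comparison (or negate the
    series) for La Nina."""
    episodes, start = [], None
    for i, x in enumerate(oni):
        if x > threshold:
            if start is None:
                start = i          # a warm run begins here
        else:
            if start is not None and i - start >= min_run:
                episodes.append((start, i - 1))
            start = None
    if start is not None and len(oni) - start >= min_run:
        episodes.append((start, len(oni) - 1))  # run continues to end of data
    return episodes

# e.g. a 5-month run starting at index 2 is an episode:
print(elnino_episodes([0.1, 0.2, 0.6, 0.8, 0.9, 0.7, 0.6, 0.3]))  # [(2, 6)]
```

Applying the same scan with Ludescher's definition just means feeding it the raw Niño 3.4 index instead of the ONI.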
2.

I will also ask Steve to write a short paragraph introducing himself and mentioning his qualifications.

3.

Perhaps I should also write a short post addressing the definition of El Niño. It would be nice to see a graph like this:

<img src="http://www.azimuthproject.org/azimuth/files/ludescher-replication-v2.png" alt=""/>

but with the Oceanic Niño Index replacing the Niño 3.4.
4.
edited July 2014

Ludescher *et al* have supplementary material which should be read before criticising. In particular, there are zoomed-in portions of the graph for the borderline decisions.

Where can I download data for the Oceanic Niño Index? And how many flavours does it come in, and which one would you like?

The link in comment 1 doesn't work. Hope this does: http://www.azimuthproject.org/azimuth/show/Blog+-+El+Ni%C3%B1o+project+%28part+6%29
5.

> Perhaps I should also write a short post addressing the definition of El Niño.

Dear John (I know this is a tall order),

Could you kindly write these definitions in mathematical notation, as you do in your regular physics publications? Possibly matrix notation for grid data. Also, could you kindly explicitly provide the links to the data? There are too many varying versions out there.

I'd like to start issuing the machine learning forecasts + wavelet analysis on a daily basis. Practice makes perfect is what is needed for machine learning training :)

What do I mean by a machine learning forecast: this is a non-cognitive machine forecast, free of any human interpretation and inference, based solely upon the well-known algorithms for adaptive non-linear approximation of multivariate functions from one Banach/functional space to another, which you define as a base for weather conditions, e.g. El Niño.

Dara
6.

John,

In cooperation with WebHubTel and others, we could then issue you interim computations and symbolic expressions using Mathematica and other tools, so you could review the actual history of computations to fine-tune the forecast algorithms.

I am thinking these interim computations will be in the form of tech-note reports with live code and data; I will post some samples later on.

Dara
7.

The link in the first comment seems to be to a page that hasn't been created yet.

8.

Todd, see comment 5 - the page exists. I don't understand why the link from comment 1 doesn't work.

9.
edited July 2014

Thanks, Todd! I left out the word "Blog - ". So, folks, please read this and criticize it:

* [[Blog - El Niño project (part 6)]]

I'm sorry to have taken so long to reply to this and other comments. I had some other kinds of work to do: 3 papers of mine suddenly got accepted for publication, some with corrections required. I'm also trying to finish off another: [Operads and phylogenetic trees](http://math.ucr.edu/home/baez/phylo.pdf).

I asked Steve Wenner to add an introduction but he never replied to me. I'll ask again now, since I want to post this article fairly soon.
10.
edited July 2014

Dara wrote:

> Could you kindly write these definitions in mathematical notation, as you do in your regular physics publications? Possibly matrix notation for grid data. Also, could you kindly explicitly provide the links to the data? There are too many varying versions out there.

I won't write my blog post like a physics paper. But it should be completely clear and precise.

I gave a definition that seems fairly clear and precise to me:

> The most standard definition of El Niño uses the **Oceanic Niño Index** (ONI), which is the running 3-month mean of the Niño 3.4 index. An **El Niño** occurs when the ONI is over 0.5 °C for at least 5 months in a row. A **La Niña** occurs when the ONI is below -0.5 °C for at least 5 months in a row.

There are just two questions:

1) Where do we get our Niño 3.4 index?

2) When we define the "running 3-month mean" of a function $f(t)$ (where $t$ is the time in months), do we define it by the formula

$$\langle f(t) \rangle = \frac{1}{3} (f(t-1) + f(t) + f(t+1))$$

or perhaps

$$\langle f(t) \rangle = \frac{1}{3} (f(t) + f(t-1) + f(t-2))$$

Answers:

1) The US National Weather Service provides a file of the monthly Niño 3.4 index here:

* [http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/detrend.nino34.ascii.txt](http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/detrend.nino34.ascii.txt)

Unlike some other files, this data takes global warming into account! The Niño 3.4 index is in the column "ANOM".

2) It seems the US National Weather Service computes the 3-month running mean this way:

$$\langle f(t) \rangle = \frac{1}{3} (f(t-1) + f(t) + f(t+1))$$

You can check this by looking at their [ONI table](http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml).

Let me check it! They give the Niño 3.4 index for January, February and March 1950 as

$$-1.42, \quad -1.31, \quad -1.04$$

If we take the mean of these we get

$$\frac{1}{3}(-1.42 - 1.31 - 1.04) = -1.2566...$$

So, I predict their ONI for February 1950 will be about -1.2566... Looking at their [ONI table](http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml), they say -1.3. That's okay, since they just give 2 digits.

You could check more examples, but I think this is how the ONI is defined. And that gives the definition of El Niño.
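This arithmetic check takes only a few lines to reproduce (a sketch; the three data values are the ones from the NWS file quoted above):

```python
def centered_mean3(series, t):
    # ONI at month t = (f(t-1) + f(t) + f(t+1)) / 3, the centered form
    return (series[t - 1] + series[t] + series[t + 1]) / 3

nino34 = [-1.42, -1.31, -1.04]   # Jan, Feb, Mar 1950, "ANOM" column
oni_feb_1950 = centered_mean3(nino34, 1)
print(round(oni_feb_1950, 4))    # -1.2567, which the ONI table shows as -1.3
```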
11.

Thank you, John.

Here is one last question. Say I coded a forecast algorithm:

Forecast : TODAY ---> {index1, index2, index3, index4, index5}

It takes the date today and issues the forecast for the next 5 dates.

Since the definition is 5 months in a row, how could I tell whether the algorithm predicts an El Niño? E.g. is TODAY the beginning of the 5 months, or the middle? And then count with the future values in mind?

Dara
12.

Hi Dara, I don't think I understand the question. 5 months of 3-month rolling averages need 7 monthly values?

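Jim's count can be checked directly: each centered 3-month mean reaches one month on either side, so a 5-month run of ONI values draws on seven monthly Niño 3.4 values.

```python
# An ONI value at month t is the mean of Nino 3.4 at months t-1, t, t+1,
# so ONI values at months t .. t+4 draw on Nino 3.4 at months t-1 .. t+5.
run_length = 5
first_month_needed = -1                    # relative to the start of the run
last_month_needed = (run_length - 1) + 1   # one past the end of the run
months_needed = last_month_needed - first_month_needed + 1
print(months_needed)  # 7
```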
13.

Hello Jim,

The question is: I run the forecast algorithm TODAY, and it predicts say 5 values of the index in the coming 5-7 months. So how would I report a forecast of whether there is an El Niño in effect as of TODAY? By checking the forecast numbers for the next 5-7 months?

This is a reporting issue, since there is a range of numbers that constitutes the El Niño.

Dara
14.

I don't think I've seen:

Ludescher, J., Gozolchiani, A., Bogachev, M. I., Bunde, A., Havlin, S., and Schellnhuber, H. J. (2014). Very Early Warning of Next El Niño, PNAS 111, 2064 (doi:10.1073/pnas.1323058111)

http://www.pnas.org/content/111/6/2064.abstract

> Abstract: The most important driver of climate variability is the El Niño Southern Oscillation, which can trigger disasters in various parts of the globe. Despite its importance, conventional forecasting is still limited to 6 mo ahead. Recently, we developed an approach based on network analysis, which allows projection of an El Niño event about 1 y ahead. Here we show that our method correctly predicted the absence of El Niño events in 2012 and 2013 and now announce that our approach indicated (in September 2013 already) the return of El Niño in late 2014 with a 3-in-4 likelihood. We also discuss the relevance of the next El Niño to the question of global warming and the present hiatus in the global mean surface temperature.

I see no reason not to email one of the authors (who include H. J. Schellnhuber, founding director of PIK) with any questions or criticisms before publishing.
15.

One can always issue a forecast that is great for the next 2-3 *specific* units of time; this is possible even by flipping coins.

When we say forecast, at least in the computing field, you run a *backtest* of the forecast against historical data and issue a *confidence level* or *mean squared error* of some kind over a long period of time.

For example, if I do a forecast for El Niño, I go back over the past 40 years, test my algorithms on each year and month of the year, and see how accurate the algorithm was.

Somehow I do not see this done by the authors of that paper.

D
16.
edited July 2014

It's some time since I read it, so I'll have to re-read the paper.

PS: enclosing a term in stars (as in the comment source) *highlights* it, without needing the capitals.
17.

Let me give a real-life example to make my point. Generally speaking, stocks are moving upwards in the US stock markets, so if I issue forecasts for their time series the results look terrific! And they are so not because the forecast algorithm is so great, but because the movement is quite predictable even by the naked eye, looking at the price charts.

Therefore, to avoid short-term forecasts which could be deceptive, forecasters are asked to run *backtest* algorithms: run the forecast algorithm (in my case, from 1997 to the present), issue a forecast on every unit of time, and see how far off it was from the actual past value.

So I am planning to write some code to forecast some of the indices here; obviously I have data all the way from the 1950s, or even the 1800s! I will then backtest my algorithm and issue the error analysis.

Then John and the other researchers here can look at the results and see how good the algorithm was. One way or the other, new ideas spring up to improve or explain the results.

Dara
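The backtesting loop Dara describes can be sketched generically. Here `forecast` is a stand-in for whatever algorithm is under test; the persistence rule at the end is just an illustrative baseline, and all the names are my own:

```python
def backtest(series, forecast, window, horizon):
    """Walk-forward backtest: at each time t, hand the model only the
    trailing `window` points, ask for a `horizon`-step-ahead prediction,
    and score it against the value that actually occurred."""
    sq_errors = []
    for t in range(window, len(series) - horizon + 1):
        pred = forecast(series[t - window:t], horizon)
        actual = series[t + horizon - 1]
        sq_errors.append((pred - actual) ** 2)
    return sum(sq_errors) / len(sq_errors)   # mean squared error

# baseline rule: "next month equals this month"
persistence = lambda history, h: history[-1]
```

Comparing a candidate algorithm's backtest error against such a naive baseline is precisely the check Dara says is missing from the paper.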
18.

Hello Jim, I was not being critical about what you noted, or this odd paper.

We need to present a methodology for forecasting.
19.

No problem. Just a matter of perceived style.

I agree, I'd expect *any* forecasting algorithm to be backtested. I don't know how Steve's concerns about sensitivity to parameter settings and different methods could be answered.
20.

> Just a matter of perceived style.

I have gone back to full-time coding and communicating with really sharp, fast guys, so please excuse my abrupt manner; it gets worse around 4am GMT ;)

I have to say I do not know what forecasts these climatologists are boasting about (recall what the fellow told John). Please point me to actual atmospheric forecasts that are anything but guessing where the curve goes next, so I have an idea of the prior art.

Dara
21.

> Please point me to actual atmospheric forecasts that are anything but guessing where the curve goes next, so I have an idea of the prior art.

Sorry, I can't help; perhaps somebody else can.

Best wishes
22.

I am thinking of coding several forecasts (SVR, NN and kNN) like this:

a few months:

Forecast : TODAY ---> {index1, index2, index3, index4, index5}

or a year:

Forecast : TODAY ---> {index1, index2, index3, index4, index5, ..., index12}

TODAY = {month, year}, month mod 12

And then compare that to the past and issue an error analysis.

Then we start adding new parameters, e.g. the average equatorial temperature over some number of nodes, or whatever:

Forecast : {TODAY, param1, param2, ...} ---> {index1, index2, index3, index4, index5}

and see if the forecast accuracy increases; by trial and error we examine a small set of candidate parameters.

Dara
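A minimal sketch of one of these, the kNN variant (SVR or NN forecasters would slot into the same interface), assuming the forecast is a pure function of the recent history; the function name and defaults are my own:

```python
import numpy as np

def knn_forecast(series, k=3, window=12, horizon=5):
    """Analogue (kNN) forecast: find the k historical windows closest to
    the most recent `window` months and average the `horizon` months
    that followed each of them."""
    series = np.asarray(series, dtype=float)
    query = series[-window:]
    # only windows whose `horizon` successors are fully observed qualify
    starts = range(len(series) - window - horizon)
    dists = sorted((np.linalg.norm(series[s:s + window] - query), s)
                   for s in starts)
    neighbours = [series[s + window:s + window + horizon] for _, s in dists[:k]]
    return np.mean(neighbours, axis=0)
```

On a strictly periodic toy series (e.g. a 12-month sinusoid) the nearest windows repeat the query exactly, so the forecast reproduces the true continuation; real indices will of course behave less kindly, which is what the backtest is for.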
23.

Dara said:

> I have to say I do not know what forecasts these climatologists are boasting about (recall what the fellow told John). Please point me to actual atmospheric forecasts that are anything but guessing where the curve goes next, so I have an idea of the prior art.

I don't know enough to guide you, but the references here could be a good starting point: http://www.cpc.ncep.noaa.gov/products/precip/CWlink/MJO/enso.shtml#references
24.

Thanks, Graham. I will upload some relevant references. So far I have not seen any backtesting on any of these related forecasts.
25.

I have a good example of backfitting and backtesting here: http://contextearth.com/2014/01/22/projection-training-intervals-for-csalt-model/

This is what Dara is referring to: using historical data as a means of testing models. Any part of the historical time series can be used as a training interval to test projections of the other parts of the time series.

With the CSALT model, which provides a multivariate estimate of the average global temperature, I need an estimate of the ENSO factor to be able to project the natural variability of temperature. That is actually what got me started on the ENSO El Niño kick. Skeptics argued that the CSALT model was not that good because it needed an accurate forecast of ENSO, but all I had was historical data. So I applied backfitting to demonstrate how well it could work on later intervals, absent any ability to know the future.

Perhaps this is being too pedantic, but it is important to consider backtesting due to the lack of a controlled system to experiment with. In other words, use the available information in as many ways as you can creatively dream up.
26.

We need backtesting for the new GPM and TRMM data from satellites. The forecasts will be daily if not hourly, and they require serious examination. In some periods the forecast algorithm needs to be shut off due to high errors, and we can measure those errors with backtesting.

Otherwise it will be all wild claims, conjectures and politics.

Dara
27.
edited July 2014

If there's going to be a contingency table analysis in the paper, I think a Bayesian counterpart ought at least to be included, or to replace the analysis, such as section 14.1.10 of Kruschke (2011). I am happy to help with that, and do it. I'll need to read the article more carefully, and will grab the contingency table when it appears stable.

If Kruschke (2011) is not available, see http://stats.stackexchange.com/questions/90668/bayesian-analysis-of-contingency-tables-how-to-describe-effect-size (sorry, John, the "Help" for the markup below was giving a "404"), or, better still, Bill Press' https://www.youtube.com/watch?v=bHK79WKOX-Y (sorry again, John). The only quibble, from Kruschke himself, is that Press still ends up with p-values.

To clarify: the number of times an El Niño initiated or did not, or the number of times the Ludescher *et al* algorithm would indicate arrows or not, are not fixed by experiment but are, rather, random counts, hence random variables. Accordingly, these are *draws* from distributions presumably having means of the kind "x" and "N-x". Whether a Binomial is a good representation, or whether "x" is Poisson, is a detail beside the point. Another time, another sample, the margins might be quite different. A proper calculation of the probability of getting the particular counts that were gotten should consider these uncertainties, as should any check of whether a credible interval for each cell contains the observed count. Such consideration treats these margins as *nuisance parameters*, as Press teaches.

The Bayesian approach to such tables is the standard hierarchical model, using a Poisson model for cell counts, where the means have priors that are Exponential (link functions, in GLM terms) of combinations of factors unique to each cell, obeying multiplicative independence within each cell, but not necessarily across cells. In other words, the model is a Poisson ANOVA.

Accordingly, one good set of hyperpriors for the Exponentials are Normals. In Chapter 22, Kruschke recommends hyperpriors of folded-*t* densities for their *precisions*, but I've seen him yield to Gelman's recommendation of Gammas for these in another context. We'd need to experiment to see what works best (in terms of Gibbs convergence, for example). Kruschke also has *R* and *JAGS* code accompanying his text which goes along with this, and that's where I'd start.
28.

I agree Dara. A good example of a significant error is with temperature measurements during WWII. A warming bias was definitely introduced during the years from ~1940 to 1945, which becomes evident when trying to fit the entire series. http://contextearth.com/2013/11/16/csalt-and-sst-corrections/

This image shows how the war resulted in significant patches in spatial coverage, particularly in the ENSO regions of the Pacific:

![spatial coverage](http://img585.imageshack.us/img585/5273/y6w.gif)

So during WWII, we have the problem of missing data and instrumental bias as military ships took over from commercial vessels in performing the SST measurements.

29.

Hi Jan,

Ian Ross sent me Berliner et al on hierarchical Bayesian EOF analysis of El Ninos

http://ro.uow.edu.au/cgi/viewcontent.cgi?article=9833&context=infopapers

I'm trying to write a summary of Ian's thesis if you've got the mileage to comment.

Cheers

30.

Jim,

Not sure exactly what you are asking: Commenting on your summary? On Ian's thesis? On Berliner, Wikle, Cressie? But happy to help within your timeframe. Not sure how quickly I can turn around reading a thesis, though. Happy to read your summary and comment from what I know of Berliner, et al, though.

In a couple of weeks soon enough?

31.

I shouldn't have left it ambiguous. I'd appreciate any comments on Berliner's approach. I think Ian has done a lot of grunt work running many methods against many models in his thesis so I'm using that as a baseline evaluation of the field. I think it's necessary background (with a very good summary of proposed El Nino mechanisms) but not many people will read it so I've started a short summary which will still take me a bit of time.

32.

Sure, I'd be happy to, Jim.

Hans Berliner is The Man (see http://bayesian.org/video/bayesian-mechanistic-statistical-modeling-examples-geophysical-settings). And Noel Cressie wrote definitive works on spatial statistics, like kriging, although, until recently, did not adopt Bayesian approaches. The paper is from 2000, so it is interesting to consider how Cressie's work since the collaboration developed. I don't know Wikle's work before this introduction, although I probably should have.

33.

Re: the Berliner paper

I think that determining whether the essential behavior behind ENSO follows a stochastic red noise process or some complex deterministic oscillation is a critical question.

This is a more subtle distinction than one might first imagine. The stochastic model referred to as red noise, aka an Ornstein-Uhlenbeck process, is well described by a random walk within a potential well, with the deepest excursions experiencing a correspondingly greater drag. This behavior results in an erratic oscillation with a clear reversion-to-the-mean tendency, which is well suited to describing ENSO. This is a schematic figure I drew up that describes the various regimes (A and D are higher drag, B and C are lower drag):

![O-U](http://1.bp.blogspot.com/-wOBXO75tVAI/TsHCF5lXkRI/AAAAAAAAAm0/Q5s1-a_Ir7k/s1600/ornstein-uhlenbeck.gif)
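The red-noise picture can be made concrete with a few lines of simulation. Below is a minimal sketch (not from the thread; the parameter values are arbitrary illustrative choices) of an Ornstein-Uhlenbeck process via Euler-Maruyama, showing the erratic but mean-reverting walk described above:

```python
import numpy as np

def simulate_ou(theta=0.5, sigma=1.0, dt=0.01, n=50_000, seed=42):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck SDE
    dx = -theta * x dt + sigma dW: red noise, i.e. a random walk
    with a restoring force (the "potential well")."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    kicks = rng.normal(0.0, sigma * np.sqrt(dt), size=n - 1)
    for i in range(1, n):
        # drift back toward the mean, plus a random kick
        x[i] = x[i - 1] - theta * x[i - 1] * dt + kicks[i - 1]
    return x

x = simulate_ou()  # erratic but mean-reverting trajectory
```

Plotting `x` shows the schematic's regimes: large excursions get pulled back harder than small ones, while the noise keeps the oscillation erratic.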

In contrast, a deterministic oscillation within a potential well, such as what happens with a wave set in motion on a periodic basis, can also show erratic swings similar to red noise. This can occur as a result of non-linear effects, often described by a Mathieu equation representing the sloshing dynamics of a volume of liquid. When there are multiple periodic forcing functions, the swings will only become more erratic, and possibly approaching unstable or chaotic regimes. This will also show reversion-to-the-mean characteristics as long as an unstable regime does not occur.
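For contrast, here is a sketch of the deterministic side: numerically integrating a Mathieu equation, x'' + (a - 2q cos 2t) x = 0, with scipy. The parameter values are arbitrary illustrative choices sitting in a stable region of the Mathieu stability chart, not a fitted sloshing model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only: a = 2.5 with small q lies between the
# instability tongues of the Mathieu chart (which emanate from a = 1, 4).
A, Q = 2.5, 0.3

def mathieu(t, y):
    """Mathieu equation x'' + (a - 2 q cos(2 t)) x = 0 as a first-order system."""
    x, v = y
    return [v, -(A - 2.0 * Q * np.cos(2.0 * t)) * x]

sol = solve_ivp(mathieu, (0.0, 100.0), [1.0, 0.0], max_step=0.05)
```

Moving `A` into a tongue (say near 1) with a larger `Q` should make the same integration grow without bound, which is the kind of unstable regime mentioned above.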

So the big questions are (1) how to determine if a waveform is stochastic or deterministic, and perhaps verging on chaotic and (2) if the ENSO is deemed to be red noise, does that mean we are faced with an evolutionary dead-end?

The latter resolution would state that the best we could do is make probabilistic predictions of future El Nino events, with correspondingly lower confidence the further out we go in time. That's what happens if a random walk element is introduced, like it or not :)

However if the ENSO shows signs of determinism such as might be revealed by a long-term sloshing dynamic governed by periodic forcing functions, we have a better chance of making high-probability predictions and for a longer term projection interval. This is the essential call that I am making while pursuing this topic. If there is a possibility that something deterministic is happening, governed by tidal forces or some other well-characterized forcing, then it will have a huge payoff in terms of improved predictability.

34.

Dear WebHubTel

> This is a more subtle distinction than one might first imagine. In the stochastic model referred to as red noise – aka an Ornstein-Uhlenbeck process – it is well-described by a random walk within a potential well

Now this is tech talk, really interested to understand these sorts of models and theories.

> When there are multiple periodic forcing functions, the swings will only become more erratic, and possibly approaching unstable or chaotic regimes.

Excellent, love to study such systems.

I suspect that the weather system in question has no stochastic model of any kind! I believe that approach is a waste of time (no reference to anybody's comments here). There is no such model even for a man-made stock, which is a simple time series; imagine, then, the weather system of El Niño.

The only use of statistical or stochastic models is, IMHO, for the postmortem accuracy/error analysis of forecast algorithms, and at that for backtesting, as we discussed. The statistical models could explain the erratic or robust behaviour of a family of forecast algorithms, but I cannot believe that they could describe or model such large ACTUAL atmospheric-oceanic systems.

> However if the ENSO shows signs of determinism such as might be revealed by a long-term sloshing dynamic governed by periodic forcing functions

I thought about your sloshing ideas, and I was wondering if the sloshing needs solid barriers, e.g. coastal solid matter. Why couldn't sloshing happen between warm waters and cold waters crashing into each other?

Dara

35.

Dara, Possible regarding the sloshing. Right now I am approaching it from a phenomenological POV. There is a basic equation, which includes gravity, diameter, depth, etc., that gives rise to the characteristic sloshing frequency of a given geometric volume, but this is nothing but a guess when one considers that the curvature of the earth is also involved with a body of water as large as the Pacific.

The thesis by Ian Ross that was referred to by Jim S actually has a very good section describing all the delayed action oscillator models. "Nonlinear dimensionality reduction methods in climate data analysis" -- http://arxiv.org/abs/0901.0537

36.

Dear WebHubTel,

Some previous reported work on the Mathieu proposition is reported here: http://contextearth.com/2014/05/27/the-soim-differential-equation/, with a comment from Professor Peter Webster, co-author with V. E. Toma of Tropical Meteorology and Climate (2014), earlier (1988) with Professor J. Curry of Thermodynamics of Atmospheres and Oceans. (Professor Webster's c.v. is available here: http://webster.eas.gatech.edu). Context Earth also has a lot of material which may be pertinent to the Azimuth ENSO project.

There is another possibility on the deterministic side: the path of the ENSO through state-space might involve LONG secular excursions, so much so that within some non-trivial confine they are not ergodic, and so have no comparability to the stochastic at all. We may not have sufficiently dense observations to distinguish between that case and sloshing dynamics, and whatever similarity we might see to a stochastic process could be due to undersampling. Professor Webster's comment at the reference above seems to rule this out. His description sounds like a deterministic slosher in the short term, one which gets reset from ENSO to ENSO, and there's little or no understanding of how that reset occurs.

There are other models: http://journals.ametsoc.org/doi/pdf/10.1175/2008JCLI2387.1

Finally, a stochastic model, like the model offered by Berliner, Wikle, and Cressie, can be descriptive and predictive without the underlying process being fundamentally stochastic. Berliner, Wikle, and Cressie make this point at the outset of their model. Similarly, arbitrarily complicated physics can enter into the sampling density in Bayesian inversions, since there is some variability to be accounted for anyway.

Berliner makes this illustration in several other cases in the lecture previously cited: http://bayesian.org/video/bayesian-mechanistic-statistical-modeling-examples-geophysical-settings

37.

+1 WebHubTel : context-earth : ).

38.
edited July 2014

I think Ian's thesis (I've only read it once and have a terrible short-term memory) concluded something like this: the first 10 EOFs, selected because they explain 70+% of the variance in SSTs, do not require nonlinearity. Somebody will shoot me if I'm wrong.

39.
edited July 2014

Jim wrote:

I don’t think I’ve seen:

Ludescher, J., Gozolchiani, A., Bogachev, M. I., Bunde, A., Havlin, S., and Schellnhuber, H. J. (2014). Very Early Warning of Next El Niño, PNAS 111, 2064 (doi/10.1073/pnas.1323058111)

That's too bad! That's the main paper we've been discussing, starting here:

* [El Niño Project (Part 3)](http://johncarlosbaez.wordpress.com/2014/07/01/el-nino-project-part-3/), Azimuth Blog.

There's a link to a free version of this paper right near the start of the blog article.

40.
edited July 2014

Hey, guys! Could you please use this thread to talk about the actual topic of this thread? This thread is about Steve Wenner's article:

* [[Blog - El Niño project (part 6)]]

and the aim is to criticize and correct this article before publishing it. If you want to talk about the definition of El Niño, the CSALT model, and various other random fascinating things, please do so in existing threads on these topics - or if there aren't any, start your own.

The idea of this forum is that people looking for information on various topics should be able to find it by searching for relevant threads. If every thread becomes a mishmash of random stuff, that's bad.

41.
edited July 2014

Steve Wenner added an introduction to

* [[Blog - El Niño project (part 6)]]

explaining who he is. I added a remark hinting that we know the definition of El Niño being used is not the standard one. And I added an explanation of p-values, as follows:

> I used [Fisher's exact test](https://en.wikipedia.org/wiki/Fisher%27s_exact_test) to compute some p-values. Suppose (as our 'null hypothesis') that the occurrence of an El Niño is no more or less likely when one is predicted by Ludescher et al than it otherwise would be. What's the probability that their predictions are as successful as they are (or more so)? Just 0.032. This was, by the way, the most significant of the five p-values for the alternative rule sets applied to the learning series.
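For anyone who wants to reproduce this kind of calculation, Fisher's exact test is available in scipy. The counts below are purely hypothetical (not Steve's actual table), but the mechanics are the same:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (prediction yes/no vs. El Nino yes/no);
# these counts are made up for illustration, not Steve's data.
table = [[8, 2],
         [6, 14]]

# One-sided test: are hits over-represented among predicted months?
odds, p = fisher_exact(table, alternative="greater")
```

Here `odds` is the sample odds ratio ad/bc and `p` is the one-sided p-value under the null hypothesis of independence.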

I'm checking with him to see if this is okay. But apart from that, it looks ready to go...

Next I think it will be useful to do a blast of "Exploring climate data" posts, to publicize some of the graphs and things you folks have been creating.

42.
edited July 2014

Perhaps my comment #5 got lost among the random fascinating things, or perhaps I confused you by talking about supplementary material which belongs to their 2013 paper. Anyway, it is here http://www.pnas.org/content/suppl/2013/06/26/1309353110.DCSupplemental/sapp.pdf and Figures 5 and 6 seem very relevant to Steve's post.

43.

Okay, thanks! I forgot to bring this to Steve's attention. I'll do that now.

44.

Reading Steve Wenner's criticism of the Ludescher paper has highlighted the seeming arbitrariness of their conclusions, based on rather loose subjective associations.
Is this a problem with PNAS papers in general, in that they are not heavily peer-reviewed?

Some evidence that PNAS papers go straight to publication with little review:

http://pipeline.corante.com/archives/2008/08/28/pnas_read_it_or_not.php

http://occamstypewriter.org/stevecaplan/2011/10/23/peer-review-and-the-ole-boys-network/

I remember looking at the PNAS route a while back and seeing the publication fees as the only hurdle.

It is possible that with Wenner's criticisms the paper would have been sent back for a redo. I am trying to rationalize Wenner's findings, which do look rather convincing. All it takes is a sharp eye for detail to find these exceptions, and Wenner did just that.

45.
edited July 2014

PNAS is a fairly prestigious journal, so you don't get in there just by paying publication fees - you have to impress the referees that you're doing something cool.

However, this is not the same as doing something correct.

The journals with the highest impact factor (a measure of prestige) also have the highest retraction rates. This is not necessarily because they do a worse job of refereeing - see the link for various other possible explanations. But it means that you can't trust a paper to be right just because it's in a prestigious journal.

I see that in 2012, PNAS had an impact factor of 9.7, which is low compared to Nature's 36.3, but a lot higher than one of the 2 most prestigious math journals, Inventiones Mathematicae, which came in at 2.3. Of course this is because more people cite papers about stem cells than, say, étale cohomology of $p$-adic schemes.

46.
edited July 2014

From WebHubTel's first link: "Track I papers are identified as 'Contributed by' the member". The 2013 Ludescher paper is "Contributed by Hans Joachim Schellnhuber". Track I is the unconventional route to publication.

47.
edited July 2014

Okay, I've done the analysis of Steve Wenner's contingency table. I don't know the JMP facility well enough to tell how they use the Fisher-Exact interface, but there may be a misunderstanding of its result involved. Given the Bayesian analysis, the output the Fisher-Exact interface in JMP is giving is a significance-test p-value, not a probability. Someone familiar with JMP would need to check this. If so, that 0.0318 in Steve's report is the p-value of the data, and thus indicates that the pattern of counts is unlikely to be due to random variation. I'm not going to defend or explain significance tests or the Fisher-Exact, apart from saying the idea is to enumerate all possible 2x2 tables having marginals agreeing with the given table, and to see what fraction are "consistent with" the given table. Another way to look at it is to say that, if rows and columns are independent, the cell values should be well estimated by the total number of counts in the table times the products of the row and column densities. Thus, given the table, and assuming fixed marginals, the "independence version" of the table is 15 and 7 in the first row, and 6 and 2 in the second row. I'll leave more to others.

Now, I'm callous about the Fisher-Exact because I've studied it, and the idea of fixed marginals is inconsistent with reality here. In some situations, such as controlled experiments, the marginals are indeed fixed, and analysis should proceed conditioning on them. But here, for example, Steve did not know in advance that the total number of "Author Arrows = No" would be 22, nor that the total number of the counts would be 30. Thus, these could have been nearly anything, and the results of analysis depend in part upon considering the possibility that they, too, are subject to random variation.
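The "independence version" computation quoted above (expected cell count = row total × column total / grand total) is easy to check. Here is a sketch in Python, using a hypothetical observed table chosen only to match the stated margins (row totals 22 and 8, grand total 30); it is not necessarily Steve's actual table:

```python
import numpy as np

def independence_expected(table):
    """Expected cell counts if rows and columns are independent:
    (row total * column total) / grand total."""
    table = np.asarray(table, dtype=float)
    return np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()

# Hypothetical observed counts chosen only to match the quoted margins
# (row totals 22 and 8, grand total 30); not necessarily Steve's table.
obs = [[17, 5],
       [4, 4]]
expected = independence_expected(obs)
# expected = [[15.4, 6.6], [5.6, 2.4]], which rounds to the "15 and 7 in
# the first row, and 6 and 2 in the second row" quoted above
```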

Moving on to the Bayesian version, which is done using the code available, the spread of possible counts for the cells in this table, without assuming fixed marginals, is shown in the figure:

Wenner-Ludescher--EtAl/Wenner-Ludescher--etAl--CellPosteriorDensities--20140722.png:pic

That's quite a spread. It reflects the small number of overall events.

Nevertheless, it's possible to consider, given the data we have, the probabilities of various scenarios. In particular, for the odds ratio of having an ENSO activation given a Ludescher forecast to not having one, P(AI=Yes|AA=Yes)/P(AI=No|AA=Yes), we get the posterior density:

Wenner-Ludescher--EtAl/AIYgAAYdbyAINgAAY--20140722.png:pic

Essentially, the odds go from about unity to almost 5. That's a big range, attributable to the low number of successful instances, but it hardly dismisses the Ludescher, et al algorithm as poor.

The other odds ratios (sometimes loosely called "relative risks") are shown below:

Wenner-Ludescher--EtAl/AIYgAAYdbyAIYgAAN--20140722.png:pic

Wenner-Ludescher--EtAl/AIYgAANdbyAIYgAAY--20140722.png:pic

Wenner-Ludescher--EtAl/AINgAAYdbyAINgAAN--20140722.png:pic

Wenner-Ludescher--EtAl/AINgAANdbyAINgAAY--20140722.png:pic

Consider especially Wenner-Ludescher--EtAl/AIYgAAYdbyAIYgAAN--20140722--annotated.png:pic

That last one, which is the odds of an ENSO starting given that Ludescher, et al, have predicted it, to it starting without their having predicted it, agrees, more or less, with the odds ratio of the Fisher-Exact, being 3.4, but the Fisher-Exact markedly understates the possible range of control the Ludescher, et al, algorithm has over the future, given the uncertainties and limited data. Indeed, that the Fisher-Exact says this is 3.4 is why I think any interpretation of that 0.0318 p-value as being dismissive of the Ludescher, et al, results may come from misreading the JMP documentation.

The entire package is available for download as a gzipped tarball. If you are interested in re-running the example, you'll need R and JAGS installed, and the code is currently configured to use 4 cores of a system. Final results and diagnostics are available.

If you want to reproduce this and need help, give a yell. I love to help people learn about the modern methods of Bayesian analysis, including things like JAGS.

Some technical details ... The counts were modeled using Poisson densities whose means are exponentials of the sum of a factor common to all cells, factors common to each column, factors common to each row, and factors specific to each cell. Uniform priors were used in lieu of the sometimes recommended Normal priors. The posterior densities of the odds ratios were obtained from the posterior densities of the estimated cell means (the Poisson means), using Bayes Rule, and were calculated along with the rest of the Gibbs sampler run in JAGS. Gelman-Rubin PSRF convergence statistics were computed over 10 separately initialized chains. There were 20,000 burn-in steps and 10,000 adaptation steps, and 750,000 primary steps were done in each chain, thinning to every 15th sample to reduce the autocorrelation in the results for the factors. Execution took 3.7 minutes on a 4-core 3.2 GHz AMD 64-bit system, with essentially no memory constraints, running under Windows 7 Home Premium.
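A much-simplified stand-in for that model can convey the idea without JAGS: if each cell count is treated as Poisson with its own mean under a flat prior, each posterior mean is Gamma(count + 1, 1) by conjugacy, and the posterior of the cross-ratio of cell means can be simulated directly. This Python sketch drops the shared row/column factors of the actual log-linear model, and the counts are hypothetical placeholders, not the actual table.

```python
import numpy as np

# Hypothetical placeholder counts; NOT the actual table from the post.
counts = np.array([[17, 5],
                   [4, 4]])

rng = np.random.default_rng(0)

# Flat prior on each Poisson mean => posterior is Gamma(count + 1, 1).
# Draw posterior samples of all four cell means at once via broadcasting.
mu = rng.gamma(shape=counts + 1, scale=1.0, size=(100_000, 2, 2))

# Posterior of the cross-ratio (odds ratio) mu11*mu22 / (mu12*mu21).
cross = (mu[:, 0, 0] * mu[:, 1, 1]) / (mu[:, 0, 1] * mu[:, 1, 0])
lo, med, hi = np.percentile(cross, [2.5, 50, 97.5])
print(f"posterior median cross-ratio: {med:.2f} (95% interval {lo:.2f} to {hi:.2f})")
```

Because nothing here conditions on the marginals, the totals are free to vary, which is the key difference from the fixed-marginals assumption of the Fisher-Exact.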

Comment Source:Okay, I've done the analysis of Steve Wenner's contingency table. I don't know the JMP facility well enough to tell how they use the Fisher-Exact interface, but there may be a misunderstanding of its result involved. Given the Bayesian analysis, the output the Fisher-Exact in JMP is giving is a *significance* *test* *p*-*value*, not a probability. Someone familiar with JMP would need to check this. If so, that 0.0318 in Steve's report is the p-value of the data, and thus indicates the pattern of counts is unlikely to be due to random variation. I'm not going to defend or explain significance tests or the Fisher-Exact, apart from saying the idea is to enumerate all possible 2x2 tables having marginals agreeing with the given table, and seeing what fraction are "consistent with" the given table. Another way to look at it is to say that, if rows and columns are independent, the cell values should be well estimated by the total number of counts in the table times the products of the row and column densities. Thus, given the table, and assuming fixed marginals, the "independence version" of the table is 15 and 7 in the first row, and 6 and 2 in the second row. I'll leave more to others. Now, I'm callous about the Fisher-Exact because I've studied it and the idea of fixed marginals *is* *inconsistent* *with* *reality* *here*. In some situations, such as controlled experiments, the marginals are indeed fixed, and analysis should proceed considering them. For example, Steve did not know that the total number of "Author Arrows = No" would be 22, nor that the total number of the counts would be 30. Thus, these could have been nearly *anything* and the results of analysis depend in part upon considering the possibility that they, too, are suffering random variation. 
Moving on to the Bayesian version, which is done using the [code available](http://azimuth.ch.mm.st/Wenner-Ludescher--EtAl/WennerContingencyTable.R), the spread of possible counts for the cells in this table, *without* *assuming* *fixed* *marginals*, is shown in the figure: [[Wenner-Ludescher--EtAl/Wenner-Ludescher--etAl--CellPosteriorDensities--20140722.png:pic]] That's quite a spread. It reflects the small number of overall events. Nevertheless, given the data we have, it's possible to work out the probabilities of various scenarios. In particular, if we take the odds ratio of having an ENSO activation given a Ludescher forecast to not having an ENSO activation given that same forecast, or P(AI=Yes|AA=Yes)/P(AI=No|AA=Yes), we get the posterior density: [[Wenner-Ludescher--EtAl/AIYgAAYdbyAINgAAY--20140722.png:pic]] Essentially, the odds go from about unity to almost 5. That's a big range, attributable to the low number of successful instances, but it hardly dismisses the Ludescher, et al algorithm as poor. The other odds ratios (sometimes loosely called "relative risks") are shown below: [[Wenner-Ludescher--EtAl/AIYgAAYdbyAIYgAAN--20140722.png:pic]] [[Wenner-Ludescher--EtAl/AIYgAANdbyAIYgAAY--20140722.png:pic]] [[Wenner-Ludescher--EtAl/AINgAAYdbyAINgAAN--20140722.png:pic]] [[Wenner-Ludescher--EtAl/AINgAANdbyAINgAAY--20140722.png:pic]] Consider especially [[Wenner-Ludescher--EtAl/AIYgAAYdbyAIYgAAN--20140722--annotated.png:pic]] That last one is the odds of an ENSO starting given that Ludescher, et al, have predicted it, relative to it starting without their having predicted it. It agrees, more or less, with the odds ratio of 3.4 from the Fisher-Exact, but the Fisher-Exact markedly understates the possible range of control the Ludescher, et al, algorithm has over the future, given the uncertainties and limited data. 
Indeed, that the Fisher-Exact also says this is 3.4 is why I think any interpretation of that 0.0318 p-value as dismissing the Ludescher, et al, results may result from misreading the JMP documentation. The entire package is available for download as a [gzipped tarball](http://azimuth.ch.mm.st/Wenner-Ludescher--EtAl/Wenner-LudescherEtAl--Bayesian20140722.tar.gz). If you are interested in re-running the example, you'll need *R* and *JAGS* installed, and the code is currently configured to use 4 cores. Final [results and diagnostics are available](http://azimuth.ch.mm.st/Wenner-Ludescher--EtAl/20140722WennerContingencyTableRun.Rhistory). If you want to reproduce this and need help, give a yell. I love to help people learn about the modern methods of Bayesian analysis, including things like JAGS. Some technical details ... The counts were modeled using Poisson densities whose means are exponentials of the sum of a factor common to all cells, factors common to each column, factors common to each row, and factors specific to each cell. Uniform priors were used in lieu of the sometimes recommended Normal priors. The posterior densities of the odds ratios were obtained from the posterior densities of the estimated cell means (the Poisson means), using Bayes Rule, and were calculated along with the rest of the Gibbs sampler run in JAGS. Gelman-Rubin PSRF convergence statistics were computed over 10 separately initialized chains. There were 20,000 burn-in steps and 10,000 adaptation steps, and 750,000 primary steps were done in each chain, thinning to every 15th sample to reduce the autocorrelation in the results for the factors. Execution took 3.7 minutes on a 4-core 3.2 GHz AMD 64-bit system, with essentially no memory constraints, running under Windows 7 Home Premium.
• Options
48.

Thanks Jan. Although I do Bayesian inference a lot, I've never needed to deal seriously with contingency tables. I have a couple of issues with what you've done.

1. Surely we should be looking at the performance on the test data, i.e. 1981-2012. The performance on the training data has been optimized by choosing $\theta$.

2. I think P(AI) is known quite accurately, because we have data going back to 1870, and so we know El Niños occur about 1 year in 4. A fixed value for this may be closer to the truth than a distribution derived from the table. But better still... You (I think it was you) pointed to a talk on a Bayesian approach to contingency tables, and there was an assumption of a beta prior, with a=b=0. That could be replaced by a prior based on what we know from 1870-1950.
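The second suggestion can be sketched quickly: an informative Beta prior built from the historical record amounts to adding pseudo-counts before updating on the table. The numbers below (in Python, not anyone's actual analysis) are illustrative: if El Niños start in roughly 1 year in 4, 80 years of pre-1950 data act like about 20 "successes" and 60 "failures".

```python
import numpy as np

# Hypothetical pseudo-counts from the 1870-1950 record: ~1 initiation
# in 4 years over ~80 years. Purely illustrative numbers.
prior_a, prior_b = 20, 60

# Hypothetical initiation/non-initiation counts from the table period.
obs_yes, obs_no = 8, 22

# Beta-Binomial update: just add the counts to the pseudo-counts.
post_a, post_b = prior_a + obs_yes, prior_b + obs_no

rng = np.random.default_rng(1)
p = rng.beta(post_a, post_b, size=100_000)
print(f"posterior mean P(AI) approx {p.mean():.3f}")
```

The historical pseudo-counts dominate the small table, which is exactly the intended effect of replacing the improper a=b=0 prior with one grounded in the longer record.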

Comment Source:Thanks Jan. Although I do Bayesian inference a lot, I've never needed to deal seriously with contingency tables. I have a couple of issues with what you've done. 1. Surely we should be looking at the performance on the test data, i.e. 1981-2012. The performance on the training data has been optimized by choosing $\theta$. 2. I think P(AI) is known quite accurately, because we have data going back to 1870, and so we know El Niños occur about 1 year in 4. A fixed value for this may be closer to the truth than a distribution derived from the table. But better still... You (I think it was you) pointed to a talk on a Bayesian approach to contingency tables, and there was an assumption of a beta prior, with a=b=0. That could be replaced by a prior based on what we know from 1870-1950.
• Options
49.

Hi Graham!

My "look" at Steve's draft blog post was limited to examining the role of the contingency table in his write-up. I am not looking at Ludescher, et al any more than that. I simply do not have the time.

As I originally indicated, I thought using the Fisher Exact in this case was mistaken, and proposed a Bayesian assessment of the contingency table in its stead. I offered to provide that. I more or less have.

The priors I used were uniform, within wide intervals. I'm not sure, as I wrote, about the JMP result, and indicated I thought it might be being misinterpreted. That is because, as far as the Bayesian analysis goes, Ludescher, et al, isn't doing too badly at all with respect to the arrows and initiations. The basic message is that there hasn't been enough of a track record to get a sharper result.

Sorry if this disappoints. I am also on the hook to review Berliner, et al for this discussion. Other than that, however, that's about all I can devote to the Azimuth ENSO project.

Comment Source:Hi Graham! My "look" at Steve's draft blog post was limited to examining the role of the contingency table in his write-up. I am not looking at Ludescher, et al any more than that. I simply do not have the time. As I originally indicated, I thought using the Fisher Exact in this case was mistaken, and proposed a Bayesian assessment of the contingency table in its stead. I offered to provide that. I more or less have. The priors I used were uniform, within wide intervals. I'm not sure, as I wrote, about the JMP result, and indicated I thought it might be being misinterpreted. That is because, as far as the Bayesian analysis goes, Ludescher, et al, isn't doing too badly at all with respect to the arrows and initiations. The basic message is that there hasn't been enough of a track record to get a sharper result. Sorry if this disappoints. I am also on the hook to review Berliner, et al for this discussion. Other than that, however, that's about all I can devote to the Azimuth ENSO project.
• Options
50.
edited July 2014

Thanks for your comments, Jan! Since Steve Wenner ran out of time for work on his article (his day job has become demanding for a while), I went ahead and posted it as it stood:

• El Niño project (part 6), Azimuth Blog: http://johncarlosbaez.wordpress.com/2014/07/23/el-nino-project-part-6/

I didn't see any easy way to improve Wenner's article based on your comments. But I think your comments would be great as comments on this blog article! You've got some other ways of studying Ludescher's work, and comparing those ways with Wenner's could help trigger some interesting discussions. Could you copy them over to the Azimuth Blog? Or I can do it if you prefer.

I have some comments on your comments, but again I think it would be better if more people read them over on the Azimuth Blog.

Comment Source:Thanks for your comments, Jan! Since Steve Wenner ran out of time for work on his article (his day job has become demanding for a while), I went ahead and posted it as it stood: * [El Niño project (part 6)](http://johncarlosbaez.wordpress.com/2014/07/23/el-nino-project-part-6/), Azimuth Blog. I didn't see any easy way to improve Wenner's article based on your comments. But I think your comments would be great as comments on this blog article! You've got some other ways of studying Ludescher's work, and comparing those ways with Wenner's could help trigger some interesting discussions. Could you copy them over to the Azimuth Blog? Or I can do it if you prefer. I have some comments on your comments, but again I think it would be better if more people read them over on the Azimuth Blog.