Part of the abstract for John's NIPS paper talks about critically analyzing the technique used by Ludescher et al.

I'm starting this thread as a place for us to bat around ideas about this. Perhaps this will help John in his thinking about the subject.

Sorry I don't have much concrete to contribute here; I was busy with other things and lost track of the flow of what was going on.

Here are some possible points:

* The definitions are more complex than they need to be; Graham showed that the same results can be achieved with simpler ones.

* A better goal would be to predict a continuous El Nino index, rather than a binary event/no-event call.

* There are questions about the statistical significance of their results. Someone posted about this, but I forget who or where. (A rough permutation check along these lines is sketched after this list.)

* Is a network of correlation strengths really a meaningful entity, one that can form the basis of empirical predictions? Are there other cases where such correlation networks have led to predictive results? What are the underlying physical bases? (A sketch of how such a network is built follows this list.)

* They put themselves out on a limb by predicting an El Nino in 2014, but now it appears to be less likely...
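
To make the correlation-network question concrete, here is a minimal sketch in Python of the flavor of the construction, with white noise standing in for SST anomaly series. It only loosely follows Ludescher et al.: the 200-day lag range and the 2.82 alarm threshold are from their paper, but the grid sizes, normalization, and windowing here are illustrative, not theirs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for SST anomaly series: a few grid points "inside"
# the El Nino basin and a few "outside" it.
n_inside, n_outside, n_days = 3, 5, 365
inside = rng.standard_normal((n_inside, n_days))
outside = rng.standard_normal((n_outside, n_days))

def link_strength(x, y, max_lag=200):
    """Ludescher-style link strength: how sharply the maximum of the
    time-lagged cross-correlation stands out above the other lags."""
    cc = []
    for lag in range(max_lag + 1):
        cc.append(np.corrcoef(x[: len(x) - lag], y[lag:])[0, 1])
    cc = np.array(cc)
    return (cc.max() - cc.mean()) / cc.std()

# The network signal S is the mean link strength over inside-outside pairs.
S = np.mean([link_strength(i, o) for i in inside for o in outside])
print(f"mean link strength S = {S:.2f}")
# Ludescher et al. raise an El Nino alarm when S crosses a threshold
# tuned on historical data (2.82 in their paper).
```

One thing this makes plain is how many arbitrary choices (lag range, window, threshold) sit between the raw correlations and the alarm, which is part of why the "meaningful entity" question matters.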
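
On the statistical-significance point, one rough way to ask whether an alarm scheme's hit rate could be luck is a permutation test: scramble the alarm years and see how often random placement does as well. This is a minimal sketch with made-up event and alarm records (not their data), and it ignores lead times and year-to-year autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up 30-year record: True where an El Nino started that year,
# plus the years in which a hypothetical alarm scheme fired.
events = rng.random(30) < 0.25
alarms = events.copy()
alarms[rng.integers(0, 30, size=4)] ^= True  # add some misses / false alarms

def hit_rate(alarms, events):
    # Fraction of event years in which the alarm fired.
    return np.mean(alarms[events])

observed = hit_rate(alarms, events)

# Null model: the same number of alarms, scattered over random years.
null = np.array([hit_rate(rng.permutation(alarms), events)
                 for _ in range(10_000)])
p_value = np.mean(null >= observed)
print(f"observed hit rate {observed:.2f}, permutation p-value {p_value:.3f}")
# A large p-value would mean the apparent skill is compatible with chance --
# a real worry when the record contains only a handful of El Nino events.
```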

Perhaps those of you who have been more active in the El Nino project can fill in some of the ideas here, contradict them, or add other points.

If 2014 turns out to be a non-El-Nino year, and this raises doubts about the Ludescher et al. approach, then part of your talk could be cast as a post-mortem: a return to the blackboard, and a drumming-up of more speculative ideas that take climate network theory in new directions. The trouble is that your talk occurs too soon to tell.

But now I read that the probability of an El Nino this winter has been [reduced to 58%](http://www.vox.com/2014/11/8/7177709/el-nino-2014-forecast-weakening). That means there is a 42% chance that a strong prediction made by the Ludescher et al. theory is incorrect. Doesn't that in itself cast some doubt on their "theory"?
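
To put a number on that 58%/42% arithmetic, one standard way to score probabilistic forecasts is the Brier score: the squared error between the stated probability and the 0/1 outcome. The 0.90 below is my own stand-in for the near-certainty of the Ludescher et al. alarm, since their prediction is essentially categorical rather than probabilistic.

```python
# Brier score: (p - outcome)**2, lower is better.
def brier(p, outcome):
    return (p - outcome) ** 2

noaa_p = 0.58       # the revised probability cited above
ludescher_p = 0.90  # assumed stand-in: their alarm reads as near-certainty

for outcome, label in ((1, "El Nino"), (0, "no El Nino")):
    print(f"{label}: NOAA {brier(noaa_p, outcome):.3f}, "
          f"Ludescher {brier(ludescher_p, outcome):.3f}")
# If no El Nino occurs, the near-certain forecast takes the bigger penalty
# (0.81 vs 0.34) -- though one forecast is a sample of size one, so by
# itself it weakens the theory rather than refuting it.
```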