Thanks for your long comment, David... especially given that it was eaten by the ether the first time 'round!

> One of the things that makes me a bit hesitant to look at neural nets, decision forests, etc, at this point is that I don’t understand those well enough to sparsify them effectively without essentially needing to have a training, test and validation set which means it’d be looking at a division of the data into 3 parts, so that it’d be more difficult to compare performance directly.

I guess I see two goals:

1) Get something done by December 1st for the NIPS talk.

2) Do something really interesting.

with 1) as a kind of warmup for 2). It would be amazing if we could do something that simultaneously met goals 1) and 2), but I'm not counting on that.

For 1), I was imagining some "quick and dirty" ways of doing something _very much like_ what Ludescher _et al_ did, but slightly different, to begin to see how good their approach is. This would let me give a talk about climate networks, their paper, and a kind of critique or evaluation of it.

For 1), a first baby step would be to take any method like neural nets, random forests, etc., and use it to predict the El Niño 3.4 index _starting from the average link strength computed by Ludescher et al_ (and available [here](https://github.com/johncarlosbaez/el-nino/blob/master/R/average-link-strength.txt)). This is supposed to be easy, since it's just predicting one time series from another; no sparsification needed (right?). It would not test whether "average link strength" is a sensible predictor in the first place, only Ludescher _et al_'s particular way of using it.
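
Just to make that concrete, here is a rough sketch (in Python, using scikit-learn's random forest) of the sort of thing I have in mind. Everything about the data handling is an assumption on my part: I'm pretending the link-strength file parses into a single column of numbers, that there's an aligned Niño 3.4 series sitting in a hypothetical file `nino34.txt`, and the lags and lead time are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def load_series(path):
    """Load a whitespace-separated file and return its last column as a 1-D series.
    (The actual file formats may differ -- this is just a placeholder.)"""
    data = np.loadtxt(path)
    return data if data.ndim == 1 else data[:, -1]

# Average link strength from Ludescher et al, and a (hypothetical) aligned Nino 3.4 series.
link_strength = load_series("average-link-strength.txt")
nino34 = load_series("nino34.txt")
n = min(len(link_strength), len(nino34))
link_strength, nino34 = link_strength[:n], nino34[:n]

# Predict Nino 3.4 roughly `lead` steps ahead from the previous `n_lags` link-strength values.
n_lags, lead = 12, 6
X = np.array([link_strength[t - n_lags:t] for t in range(n_lags, n - lead)])
y = nino34[n_lags + lead : n]

# Chronological split: train on the early part, test on the later part (no shuffling,
# so the model never peeks at the future).
split = int(0.7 * len(X))
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"Out-of-sample RMSE: {rmse:.3f}")
```

The chronological split is the one thing I'd insist on even in a quick-and-dirty version: with time series, a random train/test split would let information leak backwards in time and make the method look better than it is.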

Of course if you have limited time it makes sense for you to tackle a type 2) project while someone else (maybe even little old me) tries this simpler thing.