> Are you talking about predicting the El Niño 3.4 index at some time given just previous values of this index, or - more demanding but potentially much more rewarding - given previous values of the temperature at a grid of points in the Pacific?
Paul Pukite wrote:
> I interpret what John is saying as follows: we can treat this in either of two ways – or both, for that matter:
> * Predict ENSO at the current time (or at historical times) based on correlated events from other measures, spatial or otherwise.
> * Predict ENSO at future times based on some sort of physical model or heuristic model that we can glean from past history.
This is not the distinction I was highlighting. I was talking about
1) predicting the El Niño 3.4 index at time $t$ given just values of the El Niño 3.4 index at times before $t - \Delta t$ for some $\Delta t$,
2) predicting the El Niño 3.4 index at time $t$ given values of many other quantities at times before $t - \Delta t$ for some $\Delta t$.
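For concreteness, task 1) amounts to an autoregressive fit: predict the index at time $t$ from a window of its own past values, ending $\Delta t$ before $t$. Here is a minimal sketch using plain least squares on a synthetic stand-in for the index (the function name, the lag/gap choices, and the synthetic series are my own illustration, not anything from Ludescher _et al_):

```python
import numpy as np

def make_lagged_matrix(series, n_lags, gap):
    """Build (X, y) so that row t predicts series[t] from the window
    series[t - gap - n_lags + 1 ... t - gap] (gap plays the role of Delta t)."""
    rows, targets = [], []
    for t in range(gap + n_lags - 1, len(series)):
        rows.append(series[t - gap - n_lags + 1 : t - gap + 1])
        targets.append(series[t])
    return np.array(rows), np.array(targets)

# Synthetic stand-in for the El Niño 3.4 index (illustration only):
# a slow oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(600)
index = np.sin(2 * np.pi * t / 48) + 0.1 * rng.standard_normal(600)

X, y = make_lagged_matrix(index, n_lags=12, gap=6)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least-squares AR fit
pred = X @ coef
```

Any regression method (neural nets, random forests, etc.) could replace the least-squares step; the point is only the shape of the problem: a target value at time $t$, features drawn from the same series strictly before $t - \Delta t$.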
Ludescher _et al_ do 2). Until December (at least) I want to focus on doing things similar to what they do. But I imagine it's quite standard to use machine learning techniques to do tasks like 1). In [my reply to Daniel Mahler](http://forum.azimuthproject.org/discussion/1382/el-nino-project-thinking-about-the-next-steps/?Focus=12521#Comment_12521), I proposed a version of 2) that I hope will be easy:
3) predicting the El Niño 3.4 index at time $t$ given the "average link strength" at times between $t - \Delta_1 t$ and $t - \Delta_2 t$, for some numbers $\Delta_1 t$ and $\Delta_2 t$.
This is a problem of predicting one time series given another, presumably much less data-intensive than predicting one time series given _many_ others, like the temperatures at all these grid points.
I don't want to focus on physical models just yet. That's a very interesting challenge, and I know it's what you like to do. But I don't think I can do it before December! After December I will feel more free to tackle big ambitious projects.