

I'm starting this thread to talk about the direction of our El Niño work in summary terms that are more qualitative than technical.

Here are some of the basic facts:

- John is looking for material for an upcoming NIPS talk.
- We studied the Ludescher *et al.* paper.
- Graham reproduced their results (essentially).
- We decided to focus on predicting a continuous El Niño index.
- We are focusing our attention on machine-learning-based inferences from temperature grid data.
- John has blogged about all of the above.
- He now wants to blog about climate networks.

## Comments

Nad, Dara, Paul, John, David Tweed, can you add any bullet points to summarize what you have been doing, or trying to do, or thinking about, with regard to El Niño research? If I've missed anyone else here (Nick, Rafael?) please chime in with some bullet points.


Correct me if I'm wrong, but I gather that we're not at the point of having any formulated hypotheses? Or am I wrong about this?


Rather, our goal is to conduct data investigations in search of relationships that could help predict the El Niño index. Can anyone *boil down* some of the main lines of these investigations that we are undertaking, or considering?

Unfortunately I am a little too tired now to summarize everything, but currently it amounts to this: I plan to write an email to DLR tomorrow and ask whether there is a free core nutation of the sun (a so-called "sun magnetic pole wobble") which exceeds 7 degrees. I should probably go to bed.


David Tanzer wrote:

> Can anyone *boil down* some of the main lines of these investigations that we are undertaking, or considering?

I'm really glad you've started this thread, since right now I'm running around like a headless chicken: teaching a course on network theory and a course on real analysis, directing four grad students, putting together some grant proposals, and trying to get my NIPS talk ready... without knowing exactly what it will be about!

You, more than most people in the Azimuth gang, are good at organization - by which I don't mean bossing people around, but simply talking about the big picture and our goals, and thinking about how we can accomplish something where the whole is more than the sum of the parts.

Here is one strand of investigation:

- One of the main things Graham has already done is begin to simplify the work of Ludescher *et al*, stripping it of complexity without robbing it of predictive power.
- One of the main things I'd like to do is take a more investigative approach. Ludescher *et al* published a paper that basically claims climate networks are good at El Niño prediction. This is a good way to get newspapers to pay attention, but instead of trying to "beat the competition" and predict El Niños better than the last guy, I'd really like to *understand* climate networks and *figure out ways to measure* how good they are at predicting El Niños.
- David Tweed has emphasized that instead of treating an El Niño as a binary on-off condition as Ludescher *et al* did, it's wiser to try to predict a continuously variable quantity. There are some great candidates: the Niño 3.4 index, and its time-averaged versions.
- I like the idea of using fairly simple machine learning procedures to study "how well X can predict Y", where X might be something like the "average link strength" in a climate network, and Y might be something like the Niño 3.4 index. This would be simplest if X is just a single time series, or a few. Then it's up to us to ponder which time series to use! Using the "average link strength" amounts to making a hypothesis about what's important for El Niño prediction. An alternative approach is to let X be a huge pile of time series, like the temperatures at hundreds of grid points. Then it's up to the machine learning algorithm to formulate its own hypotheses. As an old-fashioned scientist, I sort of like the idea of formulating hypotheses and testing them myself. But the two approaches are not mutually exclusive! They could go well together.
- Very important is this: I *don't* think the goal here is to become the world's experts on El Niño prediction. I think the goal is to have new ideas about climate networks, prediction, machine learning and other quite general things.
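The "how well X can predict Y" idea above can be sketched with a toy lagged regression. Everything here is illustrative and hypothetical: the synthetic series stands in for a predictor like an "average link strength", the target stands in for the Niño 3.4 index, and the 3-step lag is made up; this is not the group's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: x is a single predictor series (e.g. an
# "average link strength"), y is the target (e.g. a Nino 3.4 index).
# Here y is synthesized to lag x by 3 steps, plus noise.
n, lag = 500, 3
x = np.sin(np.linspace(0, 40, n)) + 0.3 * rng.standard_normal(n)
y = 0.8 * np.roll(x, lag) + 0.2 * rng.standard_normal(n)
y[:lag] = 0.0  # discard wrapped-around values

# Fit y[t] ~ a * x[t - lag] + b by ordinary least squares.
X = np.column_stack([x[:-lag], np.ones(n - lag)])
coef, *_ = np.linalg.lstsq(X, y[lag:], rcond=None)
pred = X @ coef

# Score the fit with the correlation between prediction and truth.
r = np.corrcoef(pred, y[lag:])[0, 1]
print(f"slope={coef[0]:.2f}  corr={r:.2f}")
```

The same scaffold works whether X is one series or a pile of grid-point series; only the design matrix changes.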

My path has been to treat ENSO as a standalone behavior, driven by the massive inertia of the Pacific Ocean. What set me in this direction was the number of references to *sloshing* of the ocean waters, but very few discussions of the dynamics of this behavior. Sloshing is a well-accepted technical term in the hydrodynamics literature, but has only been used as a hand-wavy explanation for ENSO. [This page](http://contextearth.com/sloshing-quotes/) I put together contains many of the layman explanations of ENSO, all of which use the term sloshing to describe the behavior.

By the same token, it is rare to find research that applies the quasi-periodic oscillations in sea level as an index for ENSO. This is a direct measure of sloshing, so I assume the connection, both theoretically and empirically, to ENSO is not widely known.

So my plan is to continue to search for patterns in climate measures that show the mathematical signature of sloshing. These are the forum threads that I have started -- either dealing with sloshing, pertinent climate measures, or possible ENSO forcings:

- <http://forum.azimuthproject.org/discussion/1480/tidal-records-and-enso/>
- <http://forum.azimuthproject.org/discussion/1497/nino-3-and-seasonal-alignment/>
- <http://forum.azimuthproject.org/discussion/1471/qbo-and-enso/>
- <http://forum.azimuthproject.org/discussion/1492/multivariate-enso-index-mei/>
- <http://forum.azimuthproject.org/discussion/1451/enso-proxy-records/>

and this recent Questions thread that I am partly hijacking:

- <http://forum.azimuthproject.org/discussion/1498/is-there-an-exact-biannual-global-temperature-oscillation/>

Before squatting on Azimuth, I posted more half-baked ideas on the [Context/Earth](http://ContextEarth.com) blog, and continue to use that to summarize progress. [This page](http://contextearth.com/2014/09/13/azimuth-project-on-el-ninos/) I wrote to advertise what we are trying to do with the Azimuth El Niño prediction project -- that was my own interpretation of the different directions that people were going in, so when reading it remember that YMMV.
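For readers unfamiliar with the "mathematical signature of sloshing": in the hydrodynamics literature, sloshing under periodic forcing is classically modeled by a Mathieu-type equation, $y'' + (a - 2q\cos 2t)\,y = 0$, whose solutions are quasi-periodic and amplitude-modulated. A minimal numerical integration is sketched below; the parameter values are chosen arbitrarily for illustration and are not fitted to any climate series.

```python
import numpy as np

def mathieu(a, q, t_max=100.0, dt=0.01):
    """Integrate y'' + (a - 2q*cos(2t)) y = 0 by central differences.

    a, q are the standard Mathieu parameters; the values used below
    sit in a stable region, giving bounded quasi-periodic solutions.
    """
    n = int(t_max / dt)
    t = np.arange(n) * dt
    y = np.empty(n)
    y[0], y[1] = 1.0, 1.0  # simple initial conditions, y'(0) ~ 0
    for i in range(1, n - 1):
        # central difference: y[i+1] = 2y[i] - y[i-1] + dt^2 * y''[i]
        y[i + 1] = 2 * y[i] - y[i - 1] \
            - dt**2 * (a - 2 * q * np.cos(2 * t[i])) * y[i]
    return t, y

t, y = mathieu(a=1.5, q=0.3)  # illustrative parameters only
print(f"amplitude range: {y.min():.2f} to {y.max():.2f}")
```

Choosing parameters inside one of the Mathieu instability tongues (e.g. near a = 1) instead gives exponentially growing oscillations, which is the parametric-resonance face of the same equation.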

Thanks to David Tanzer for helping to organise this discussion. As has been mentioned, I'm currently a bit flaky, so I would really advise against any plans that depend upon me delivering certain results in a certain timeframe. However, to add to the bullet points:

- I'm currently working on some software for doing sparse linear/bilinear regression against medium-large feature vectors. I hope to get this completed and run it against a big collection of min/max correlations between various "measurement points" at different temporal offsets. This is mainly exploratory, attempting to use (bi-)linear relationships to provide some ideas for more detailed, physically based models; as many people have observed, El Niño behaviour is definitely not just a simple linear phenomenon. (The kind of thing I'm thinking of is, say, that a positive correlation between SF bay and the sea around Japan at the same time is important, and so is a negative correlation between the areas at some distance around the El Niño 3.4 box and the points within the box 3 months later. This might be plausible because, say, due to energy conservation, behaviour outside the box has to move towards the mean as the area inside the box moves away from the mean. But a goal is to avoid making too many assumptions and just explore the data.) The code is being put up as I'm writing it [on github](https://github.com/davidtweed/multicoreBilinearRegression), and anyone is welcome to do anything they wish with it (especially if I complete it).

A while back Dara asked whether this code could be used for doing non-linear fitting, and I didn't get around to answering. To address that, the code assumes that:

1. you've got a prediction function $f$ of a multivariate parameter $p$ such that $$f(p) = \sum_{j=1}^{K} f_j(p \cap P_j),$$ i.e., the prediction can be broken into a simple sum of predictors that depend only on some particular subdivision of $p$ into subsets;

2. to optimise $f_j(p \cap P_j)$ you get reasonable results by optimising over each scalar element of the parameters in turn for multiple cycles until there's no change. (This is true for things like linear models, but you can imagine predictors where the influences of the different variables are so deeply intertwined that optimising along one co-ordinate without also simultaneously considering the others will bounce around forever without converging.)

As such, it could be used for fitting against a *known in advance* set of non-linear functions, provided they aren't so non-linear that assumption 2 no longer holds. I really hope to finish this software, at least to the point where I can provide some interesting plots for the blog article, hopefully further.

----

Also, just to note that I'm not against binary classification: for classifiers that are inherently based upon a binary decision (e.g., SVMs, random hyperplane trees, etc.) you really want a binary output to be trying to estimate. I'm just a little wary of any technique that takes a "real-number prediction model" and then makes it binary by applying some form of sigmoid function to the output (e.g., a logistic function in logistic regression).
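Assumption 2 above amounts to cyclic coordinate descent: optimise one scalar parameter at a time, holding the rest fixed, and sweep until nothing changes. Here is a toy version for a linear least-squares predictor (a case where assumption 2 does hold, since each one-dimensional subproblem has a closed-form optimum). The data are made up for illustration; this is not the github code, just a sketch of the idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up design matrix and target with a known linear relationship.
X = rng.standard_normal((200, 5))
true_p = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
y = X @ true_p + 0.01 * rng.standard_normal(200)

# Cyclic coordinate descent on squared error: for each coordinate j,
# minimise over p[j] alone, holding the other coordinates fixed.
p = np.zeros(5)
for sweep in range(100):
    p_old = p.copy()
    for j in range(5):
        # Residual with coordinate j's contribution removed.
        residual = y - X @ p + X[:, j] * p[j]
        # 1-D least-squares optimum for p[j].
        p[j] = X[:, j] @ residual / (X[:, j] @ X[:, j])
    if np.max(np.abs(p - p_old)) < 1e-10:  # "no change" stopping rule
        break

print(np.round(p, 2))
```

For a convex objective like this, the sweeps converge to the global least-squares solution; for strongly coupled non-linear predictors, the same loop can cycle without converging, which is exactly the caveat in assumption 2.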

I look forward to seeing your work, David! (Tweed, that is.)
