
Crunch time

Okay, it's "crunch time" - I've got to spend the next month preparing my talk on "Climate Networks" for the Neural Information Processing Systems (NIPS) conference in Montreal on Wednesday December 10th. I've never been so terrified of giving a talk before. Adding to the stress is that on that Monday, Arindam Banerjee and Claire Monteleoni are giving a 2-hour tutorial on Climate Change: Challenges for Machine Learning. I'll enjoy attending this and learning a lot, but it will also make it harder for my talk to impress anyone.

It would have been much, much smarter to give a talk about category theory and networks - something I know a lot about. But I feared this would be too abstract to interest the audience, and I overestimated my ability to do something really exciting in a new subject. Oh well: it's too late to change topics now, since my title is on the posters! So, I'm going to have to dive in and try to prepare an interesting talk on climate networks.

I will try to avoid thinking about anything else for the next month. I'll try to post drafts of my talk as soon as possible. And, I'll try to write some blog articles briefly summarizing all the interesting papers on climate networks that I know.


Comments

  • 1.

    Would it be possible to make a subtle shift in emphasis, which would give you the opportunity to talk about some possible theoretical bases for climate network theory?

    Perhaps you could survey the existing applications in climate network theory, find some common themes, and abstract from them. Any kind of abstraction or generalization could lead you towards grounds that you are more familiar with, so at least part of your talk could involve sharing ideas from network theory which could potentially be relevant in the context of climate networks.

  • 2.
    edited November 2014

    Let's look at the contents of your abstract, and try to brainstorm about how to work within this framework:

    The El Niño is a powerful but irregular climate cycle that has huge consequences for agriculture and perhaps global warming. Predicting its arrival more than 6 months ahead of time has been difficult. A recent paper by Ludescher et al caused a stir by using ideas from network theory to predict the start of an El Niño toward the end of 2014 with a 3-in-4 likelihood. We critically analyze their technique, related applications of network theory, and also attempts to use neural networks to help model the Earth's climate.

  • 3.
    edited November 2014

    Maybe part of the line could be: this just in, doubts about El Niño in 2014, which raises further questions about the Ludescher et al approach. Summarize their approach. What's missing?

  • 4.

    It would also be great to find some applications of climate network theory that have strands of connections to a physical hypothesis.

  • 5.

    The phrase "related applications of network theory" is open to a world of possible interpretations. That is good!

  • 6.

    Also you didn't promise any hot new results, only a critical review of existing work. This is also good.

  • 7.
    edited November 2014

    Here is a meditation idea: imagine writing a paper called "Climate network theory: prospects and mathematical foundations."

    Then think about how to transplant elements of that paper into the talk that you will be giving.

  • 8.
    edited November 2014

    I believe that you will be able to give an engaging and informative talk. The emphasis may need to deviate somewhat from the tone of your abstract, but they should understand that you are a mathematician, and are bringing a new perspective to bear on the matter, no?

    Good luck!

  • 9.
    edited November 2014

    Thanks for the helpful comments, David. I'm in the funny position of having decided to talk about "climate network theory" and Ludescher's paper, and now being rather skeptical of both. But I think the best approach is to be honest at the start, and say something like:

    Instead of talking about what I was probably invited to talk about - abstract aspects of network theory - I decided to learn about climate networks and talk about those. My colleagues in the Azimuth Project and I put some work into this subject, and this is what I've learned so far.

    You write:

    Perhaps you could survey the existing applications in climate network theory,

    I'm not sure there are "applications" yet, except for Ludescher et al's attempt to use them for El Niño prediction. This is part of my dissatisfaction.

    So far it seems people have mainly been using climate networks to take a new look at climate data. They've made some mildly interesting discoveries, which I would not call "applications".

    My vague plan is to:

    1) Explain the ENSO - the El Niño Southern Oscillation - and why it's important.

    2) Talk about the rather fascinating and important idea of [teleconnections](https://en.wikipedia.org/wiki/Teleconnection): roughly, patterns of highly correlated weather between distant locations. The most famous teleconnection is ENSO, but there are others.

    3) Summarize some attempts to objectively search for teleconnections in weather data. This is a pattern recognition question - something NIPS people will like. In many of these attempts, the ENSO shows up as the most powerful teleconnection. I should mention some of the runners-up.

    4) Explain the main ideas of network theory as used by researchers in climate networks. This is the sort of network theory that people use when talking about "complex networks" - it's basically the analysis of statistical properties of large weighted graphs: graphs with positive numbers labelling their edges. (It's not the same as what I usually mean by network theory, though my idea of network theory includes this.)

    5) Talk about attempts to use network theory to find teleconnections. There's an idea called the [backbone of the climate network](http://arxiv.org/abs/1002.2100), the sub-network consisting of sites having the strongest link strengths.

    6) Talk about the idea that El Niños "break climate links" around the world.

    7) Describe and critique Ludescher et al's attempt to predict El Niños by looking for increased link strengths between the El Niño basin and other parts of the Pacific.

    This is certainly enough stuff for an hour. It will take work to make it really clear and exciting. But it seems like a reasonably interesting subject even if nobody quite knows what to make of it yet. I'm not going to try to "sell" climate networks.
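    The "backbone" construction in item 5 is easy to sketch. Here is a toy stand-in (my own illustration, not the construction from the backbone paper): link strength is just the absolute Pearson correlation between made-up site time series, and the threshold is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy anomaly time series at 4 grid sites, 200 time steps.
# Sites 0 and 1 share a common signal (a "teleconnection"); the rest is noise.
common = rng.standard_normal(200)
series = {
    0: common + 0.3 * rng.standard_normal(200),
    1: common + 0.3 * rng.standard_normal(200),
    2: rng.standard_normal(200),
    3: rng.standard_normal(200),
}

# Link strength between two sites: absolute Pearson correlation.
# (The climate network papers use lagged, normalized cross-covariances;
# this is only a stand-in.)
def link(i, j):
    return abs(np.corrcoef(series[i], series[j])[0, 1])

sites = list(series)
links = {(i, j): link(i, j) for i in sites for j in sites if i < j}

# Backbone: keep only the sub-network of links above a strength threshold
threshold = 0.5
backbone = [pair for pair, s in links.items() if s > threshold]
print(backbone)  # [(0, 1)]: only the teleconnected pair survives
```

    The real construction works with thousands of grid sites rather than four, but the thresholding step is the same idea.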

  • 10.

    I wrote:

    I’m not sure there are “applications” yet, except for Ludescher et al’s attempt to use them for El Niño prediction.

    I was forgetting what Nathan Urban said:

    I ran across a paper I thought could be of interest to Azimuth climate network people: ["Deep ocean early warning signals of an Atlantic MOC collapse"](http://onlinelibrary.wiley.com/doi/10.1002/2014GL061019/abstract). It uses a climate network approach, using methods developed in earlier work (["Interaction network based early warning indicators for the Atlantic MOC collapse"](http://onlinelibrary.wiley.com/doi/10.1002/grl.50515/abstract) and ["Are North Atlantic multidecadal SST anomalies westward propagating?"](http://onlinelibrary.wiley.com/doi/10.1002/2013GL058687/abstract)).

    Open versions [here](http://www.climatelinc.eu/fileadmin/UG_ADVANCED/Publications/Qingyi-Dijkstra-2-FAMOUS_grl52004.pdf), [here](https://www.pik-potsdam.de/members/kurths/publikationen/2013/mheen_grl50515.pdf), [here](http://www.climatelinc.eu/fileadmin/UG_ADVANCED/Publications/QINGYI-and-Dijkstra--Are_North_Atlantic_multidecadal_SST_anomalies_west_propagating.pdf).

  • 11.
    edited November 2014

    I'm continuing some conversations related to my NIPS talk that began on another thread.

    I wrote:

    This file has the average link strength, called S, at 10-day intervals starting from day 730 and going until day 12040, where day 1 is the first of January 1948. (For an explanation of how this was computed, see Part 4 of the El Niño Project series.)

    Daniel wrote:

    I think I am confused by the above definition. What is the date of first and last day for which data actually appears in the average-link-strength.txt file?

    Here's another way of saying what I said. The first day in the file is 729 days after January 1st, 1948, and the last day is 12039 days after January 1st, 1948. Please don't ask me to calculate those dates!

    If the actual dates really matter to you, I should warn you of this: climate scientists pretend that the day February 29 on leap years does not exist. For them, every year has 365 days. So, you have to take that into account.
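    Under that convention, converting a day offset to a date is just integer arithmetic. A quick sketch (my own helper, not code from the project):

```python
def noleap_to_date(offset, start_year=1948):
    """Map a day offset (0 = January 1 of start_year) to (year, day-of-year),
    using the climate-science convention that every year has 365 days."""
    return start_year + offset // 365, offset % 365 + 1

# First day in the file: 729 days after January 1st, 1948
print(noleap_to_date(729))    # (1949, 365), i.e. the last day of 1949
# Last day in the file: 12039 days after January 1st, 1948
print(noleap_to_date(12039))  # (1980, 360)
```

    In the actual calendar, leap days push these dates slightly; the no-leap answer is what matters for indexing into the data files.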

  • 12.

    Daniel wrote:

    I have run an [ExtraTreesRegressor](http://scikit-learn.org/dev/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html), a random forest variant, to predict the anomaly directly from the whole raw temperature map 6 months before. The results are [here](https://www.googledrive.com/host/0B4cyIPgV_Vxrb2wxUnFteXVwWHM). Out of the 778 months available I trained on the first 400 and tested on the remaining 378. The sources for this are also in the same directory.

    That's great! Can you explain some things?

    You've got a graph produced by

    zz=xcorr(test.predicted, test.true, maxlags=None)

    with a big spike at zero. What is this? Something like a correlation between what you predicted and the true value, as a function of... what?

  • 13.
    edited November 2014

    For each time point I make a prediction based on the temperature distribution 6 months earlier, giving a vector of predicted ENSO values. The line you mention computes the cross-correlation between the predicted and true temperature. A sharp peak around zero indicates that there is reasonable correlation between the 2 signals, but it drops off sharply if the signals are shifted relative to each other. The x axis is the relative shift between the signals.
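    A minimal numpy sketch of this computation, with synthetic series standing in for `test.predicted` and `test.true` (matplotlib's `xcorr` normalizes the same way when `normed=True`):

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.standard_normal(378)                    # stand-in for the true signal
predicted = true + 0.5 * rng.standard_normal(378)  # noisy "prediction"

# Cross-correlation as a function of relative lag, after removing the means
# and normalizing so that perfect alignment gives 1.
a = true - true.mean()
b = predicted - predicted.mean()
c = np.correlate(a, b, mode="full") / np.sqrt(np.dot(a, a) * np.dot(b, b))
lags = np.arange(-len(a) + 1, len(a))

# A prediction aligned with the truth peaks at lag 0 and falls off elsewhere
print(lags[np.argmax(c)])  # 0
```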

  • 14.
    edited November 2014

    I am trying to make sure that the two data sets are aligned correctly. If I understand correctly the data in the new file starts around 1-1-1950 and ends near the end of 2013.

  • 15.
    edited November 2014

    I have added to and updated the analyses; the overall directory is [here](https://www.googledrive.com/host/0B4cyIPgV_VxrX0lxSUxHU2VLN28).

    For the [temperature](https://www.googledrive.com/host/0B4cyIPgV_VxrX0lxSUxHU2VLN28/temp-anom-predict.html) and [pressure](http://www.googledrive.com/host/0B4cyIPgV_VxrX0lxSUxHU2VLN28/pressure-anom-predict.html) analyses I have added plots showing which locations on the map contribute most to the prediction. Interestingly, different locations are important when using temperature. These are the last images in each file. The second-to-last image in each file is a sample input map.

    I also updated the [link strength analysis](https://www.googledrive.com/host/0B4cyIPgV_VxrX0lxSUxHU2VLN28/link-anom.html) to use the extended data set. It still does not look to me like there is a meaningful connection between link strength and the enso3.4 signal.

    You can also download the corresponding notebooks and play around with them.

  • 16.
    edited November 2014

    Here are the [results](http://www.googledrive.com/host/0B4cyIPgV_VxrX0lxSUxHU2VLN28/window-pressure-anom-predict.html) of training a model using a 12-month window of past pressures to predict 6 months into the future. It is a little better than using just one month, but not massively so. Looking at the results I noticed that all models predict very conservatively; this was particularly noticeable in the scatterplots, which had a slope of around 0.5. This is primarily due to the strong normalization required for training on more features than inputs.

    I have gone through and consistently multiplied all the model outputs by a factor of 2. This primarily affects the appearance of the side by side plots. Correlation values are not affected by scaling.

    This is a modification after looking at the results, but given that it is just uniform scaling I think it is justifiable.
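    The windowed setup can be sketched as follows; `window_features` is my own illustration of the idea, not the notebook's code:

```python
import numpy as np

def window_features(x, window=12, lead=6):
    """Use the previous `window` values of a monthly series as features
    to predict the value `lead` months ahead."""
    X, y = [], []
    for t in range(window, len(x) - lead):
        X.append(x[t - window:t])   # the past 12 months
        y.append(x[t + lead])       # the target 6 months ahead
    return np.array(X), np.array(y)

x = np.arange(100, dtype=float)  # stand-in for a monthly pressure series
X, y = window_features(x)
print(X.shape, y.shape)  # (82, 12) (82,)
```

    Each row of X then goes into the regressor as one training example, which is why the feature count grows 12-fold compared to the one-month model.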

  • 17.

    Daniel, your predicted curve is oscillating too much IMHO; you are better off working with a delta transform of the data rather than the original data. You might be losing a huge amount of accuracy because of using the raw original data itself. The delta data should boost your accuracy.
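    By "delta transform" I mean first-differencing: train on the month-to-month changes, then integrate the predicted deltas back to levels. A minimal sketch:

```python
import numpy as np

x = np.array([0.1, 0.5, 1.2, 0.9, 0.3])  # toy monthly anomaly series

# Delta transform: model the changes rather than the levels...
delta = np.diff(x)
# ...then recover levels from (predicted) deltas by cumulative summation
levels = x[0] + np.cumsum(delta)
print(np.allclose(levels, x[1:]))  # True
```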

  • 18.

    Hello John

    I travel Nov 20th, so I am out of commission for 1 day; other than that, I am here coding for you as best I can until you deliver your paper.

    I will publish the SVR today, and I hope the NN in the next 24 hours. Then I would like to do kNN regression if there are good results.

    Dara

  • 19.

    Daniel, Is the hot spot for the largest pressure contribution in red? That puts it fairly close to Tahiti, which is one-half of the Southern Oscillation dipole. The other half of the dipole is Darwin, which would be a strong negative pressure correlation.

  • 20.
    edited November 2014

    Daniel, Is the hot spot for the largest pressure contribution in red? That puts it fairly close to Tahiti, which is one-half of the Southern Oscillation dipole. The other half of the dipole is Darwin, which would be a strong negative pressure correlation.

    Yes, red means important/informative. Negative correlations would also be red in this plot, since they are also informative. Also, white or whitish means moderately important.

  • 21.

    The importance heat maps now have superimposed land outlines. It looks like for the pressure-based model there is a minor hotspot on Darwin as well as a few more NW of Australia, but there is a much bigger hotspot in the Midway-Hawaii region, though not as big as the Fiji hotspot. Has anybody looked into a Fiji-Hawaii teleconnection?

    The most influential hotspot for the temperature model is around the Galápagos, off the coast of Ecuador. Is that known to be significant in other respects?

  • 22.
    edited November 2014

    Daniel wrote:

    If I understand correctly the data in the new file starts around 1-1-1950 and ends near the end of 2013.

    The start date sounds correct, but not the end date. The file average-link-strength-1948-2013.txt contains average link strengths starting 729 days after 1-1-1948 and ending 23369 days after 1-1-1948. The end seems to be shortly after the start of 2012.

    The reason I wrote "2013" in the file name is that it's computed from data going from 1948 to 2013. The link strengths in the file run over a shorter range of dates since the link strength on a given date is computed from data at times before and after that date.

  • 23.

    Daniel, I recall you asking in a recent post whether we are using the same data as Ludescher et al. But I can't seem to find that message. Well, whether you asked it, or I imagined that you asked it, it's an important question which shouldn't get lost in the shuffle -- especially if we are to make claims about weaknesses of their method.

    Since I wasn't actively involved in the analysis, I can only relate in a very qualitative way what I saw as an observer here.

    There was something wrong with their notation for the running means, which Nad picked up on. She wrote to the authors to ask about this, but they didn't respond. Graham ended up making an assumption about what they actually meant. On that basis, he closely replicated their results. Then he simplified their measures, and still obtained similar results.

    Graham and John, can you say how much these issues bear on the comparison between our data and theirs? And how much do they limit our ability to make a deeper critique of their methodology?

    I had posted a draft of a letter to Ludescher et al to address some of these questions, and discussed it on the forum [here](http://forum.azimuthproject.org/discussion/1412/letter-to-ludescher-et-al/).

    John, if you haven't done so already, I suggest that you write this letter, because you are going to know how to put the question most clearly and in a professional academic way.

  • 24.

    Graham ended up making an assumption about what they actually meant. On that basis, he closely replicated their results. Then he simplified their measures, and still obtained similar results.

    That is interesting. We then need to reconcile Graham's results with my analyses, since I am seeing no meaningful relationship between link strength and nino34. Was Graham using the link strength numbers from our github file?

  • 25.
    edited November 2014

    I was only concerned with reproducing their calculation of link strength, and calculating simpler variants of link strength, not using that link strength to predict NINO34. The way that Ludescher et al go from link strength to El Niño predictions is pretty convoluted, and involves using NINO34 as well as the link strength.

    There does seem to be a meaningful relationship between link strength and nino34 in the sense that link strength tends to decrease during El Nino events.

  • 26.

    Graham,

    Were you using the nino34 from the group github file or were you calculating it yourself? Did you look at the analysis I posted? I see no correlation between nino34 and link strength in my analyses. What I do see is that nino34 has a larger variance when link strength is low. I would like to compare our calculations more closely to see what is going on.

    thanks Daniel

  • 27.
    edited November 2014

    I used https://github.com/azimuth-project/el-nino/blob/master/R/grj/nino34-anoms.txt

    I looked at some analyses, but there's too much going on to keep track, so I don't know if I have seen what you want me to. I didn't see a plot of nino34 and link strength together so that "strength tends to decrease during El Nino events" could be assessed visually.

    Comment Source:I used https://github.com/azimuth-project/el-nino/blob/master/R/grj/nino34-anoms.txt I looked at some analyses, but there's too much going on to keep track, so I don't know if I have seen what you want me to. I didn't see a plot of nino34 and link strength together so that "strength tends to decrease during El Nino events" could be assessed visually.
  • 28.

    "There does seem to be a meaningful relationship between link strength and nino34 in the sense that link strength tends to decrease during El Nino events."

    I am trying to understand this. Normally the ocean waters around a region change temperature in unison -- the temperatures would gradually move up or down as with seasonal changes or some other slowly changing factor. However, an El Nino event would punctuate that scenario, creating a delta function in the hotspot NINO34 region.

    I would venture that comparing a delta function against a surrounding area of gradual change would be a strong negative correlation. So this would intuitively decrease the link strength during an El Nino event.

    Perhaps this is not surprising after all. Unless I am missing something obvious.

    Comment Source:> "There does seem to be a meaningful relationship between link strength and nino34 in the sense that link strength tends to decrease during El Nino events." I am trying to understand this. Normally the ocean waters around a region change temperature in unison -- the temperatures would gradually move up or down as with seasonal changes or some other slowly changing factor. However, an El Nino event would punctuate that scenario, creating a delta function in the hotspot NINO34 region. I would venture that comparing a delta function against a surrounding area of gradual change would be a strong negative correlation. So this would intuitively decrease the link strength during an El Nino event. Perhaps this is not surprising after all. Unless I am missing something obvious.
  • 29.

    Normally the ocean waters around a region change temperature in unison

    Paul, you yourself showed me the Himalayas' sink for temperature change (other sea surface regions as well) in my Laplacian flux plot video, which makes your statement above not true in general.

    It seems from the animated plots of surface temperature flux that there are regions quite dormant (little variation in flux) and suddenly in the middle a sink of some kind appears!

    Laplacian Surface Temps

    Daniel asked me to normalize using dt:

    Laplacian/Dt 2013

    Laplacian/Dt 2010

    Dara

    Comment Source:>Normally the ocean waters around a region change temperature in unison Paul, you yourself showed me the Himalayas' **sink** for temperature change (other sea surface regions as well) in my Laplacian flux plot video, which makes your statement above not true in general. It seems from the animated plots of surface temperature flux that there are regions quite dormant (little variation in flux) and suddenly in the middle a sink of some kind appears! [Laplacian Surface Temps](https://www.youtube.com/watch?v=P8nKYVsBfgg) Daniel asked me to normalize using dt: [Laplacian/Dt 2013](https://www.youtube.com/watch?v=RH_euUseQCQ) [Laplacian/Dt 2010](https://www.youtube.com/watch?v=QftphMFNMCI) Dara
  • 30.

    Paul then I suspected my animated plots, and decided to see if there are actually boundaries or ridges in these flux regions:

    Laplacian Ridge Filter

    Therefore again I believe your statement is not generally true, unless I have a grossly incompetent view of the thermodynamics of the planet (please don't tell me to read some school books).

    Dara

    Comment Source:Paul then I suspected my animated plots, and decided to see if there are actually boundaries or **ridges** in these flux regions: [Laplacian Ridge Filter](https://www.youtube.com/watch?v=nE71SZCb1Tk) Therefore again I believe your statement is not generally true, unless I have a grossly incompetent view of the thermodynamics of the planet (please don't tell me to read some school books). Dara
  • 31.

    What further complicates this is the contributions of dipoles. These are the regions that are strongly anti-correlated to begin with. If the sign is maintained, then this may also decrease the average link strength during an El Nino event.

    A significant amount of climate research is dedicated to searching for these dipoles:

    [1] J. Kawale, S. Liess, A. Kumar, M. Steinbach, A. R. Ganguly, N. F. Samatova, F. H. Semazzi, P. K. Snyder, and V. Kumar, “Data Guided Discovery of Dynamic Climate Dipoles,” presented at the CIDU, 2011, pp. 30–44.

    Comment Source:What further complicates this is the contributions of dipoles. These are the regions that are strongly anti-correlated to begin with. If the sign is maintained, then this may also decrease the average link strength during an El Nino event. A significant amount of climate research is dedicated to searching for these dipoles: [1] J. Kawale, S. Liess, A. Kumar, M. Steinbach, A. R. Ganguly, N. F. Samatova, F. H. Semazzi, P. K. Snyder, and V. Kumar, “Data Guided Discovery of Dynamic Climate Dipoles,” presented at the CIDU, 2011, pp. 30–44.
  • 32.

    Damn them. The Kawale paper is on ResearchGate, to which we non-academics don't have access - any other links?

    Comment Source:Damn them. The Kawale paper is on ResearchGate, to which we non-academics don't have access - any other links?
  • 33.

    Jim, She has a few papers on this topic of dipole discovery. Here is another one: http://www-users.cs.umn.edu/~ksteinha/papers/KDD12.pdf Kawale, Jaya, et al. "Testing the significance of spatio-temporal teleconnection patterns." Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2012.

    I will get you the one that I referenced.

    Comment Source:Jim, She has a few papers on this topic of dipole discovery. Here is another one: <http://www-users.cs.umn.edu/~ksteinha/papers/KDD12.pdf> Kawale, Jaya, et al. "Testing the significance of spatio-temporal teleconnection patterns." Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2012. I will get you the one that I referenced.
  • 34.

    Thanks again Paul. I'm at jims.a.stuttard on gmail.

    Comment Source:Thanks again Paul. I'm at jims.a.stuttard on gmail.
  • 35.

    Even better, the linked Kawale paper says:

    [limita]tions associated with EOF and other types of eigenvector analysis; namely, it only finds a few of the strongest signals and the physical interpretation of such signals can be difficult due to the orthogonality of EOFs, whereas signals in climate are not necessarily orthogonal to each other.

    I was going to ask what the limits of EOFs were and now I know :)

    Comment Source:Even better, the linked Kawale paper says: > [limita]tions associated with EOF and other types of eigenvector analysis; namely, it only finds a few of the strongest signals and the physical interpretation of such signals can be difficult due to the orthogonality of EOFs, whereas signals in climate are not necessarily orthogonal to each other. I was going to ask what the limits of EOFs were and now I know :)
  • 36.
    I used https://github.com/azimuth-project/el-nino/blob/master/R/grj/nino34-anoms.txt

    Did you use https://raw.githubusercontent.com/johncarlosbaez/el-nino/master/R/average-link-strength-1948-2013.txt at all? I am wondering if that data is the same in your work, Ludescher and my analyses.

    Comment Source:> I used https://github.com/azimuth-project/el-nino/blob/master/R/grj/nino34-anoms.txt Did you use https://raw.githubusercontent.com/johncarlosbaez/el-nino/master/R/average-link-strength-1948-2013.txt at all? I am wondering if that data is the same in your work, Ludescher and my analyses.
  • 37.

    Did you use https://raw.githubusercontent.com/johncarlosbaez/el-nino/master/R/average-link-strength-1948-2013.txt at all ? I am wondering if that data is the same in your work, Ludescher and my analyses.

    I wrote the R script that generated that data (and made a shorter set of data myself). It is similar to, but not identical to Ludescher et al's link strength. See John's Blog post

    Comment Source:> Did you use https://raw.githubusercontent.com/johncarlosbaez/el-nino/master/R/average-link-strength-1948-2013.txt at all ? I am wondering if that data is the same in your work, Ludescher and my analyses. I wrote the R script that generated that data (and made a shorter set of data myself). It is similar to, but not identical to Ludescher et al's link strength. See John's [Blog post](http://johncarlosbaez.wordpress.com/2014/07/08/el-nino-project-part-4/)
  • 38.
    edited November 2014

    I have figured out the main reason why I was getting only negligible correlation between link strength and nino34 anomaly, even though Graham was able to reproduce the Ludescher results. Matplotlib's xcorr and acorr functions do not subtract the means from the signals before doing the rolling dot products. This was fine for the nino34 anomaly since that is mean 0 by design, but link strength is all positive so it is badly affected by this.

    Once the mean is subtracted, the 0 time lag correlation remains negligible, but there is a small but noticeable peak in the xcorr plot at -4, corresponding to a correlation with the anomaly 4 months after the corresponding link strength. The correlation with 6 months later is almost the same, but it is still lower than the 6-month lag autocorrelation of the anomaly itself and significantly lower than the other models I have posted here.

    I have updated the notebook to subtract the mean prior to the analysis.

    Comment Source:I have figured out the main reason why I was getting only negligible correlation between link strength and nino34 anomaly, even though Graham was able to reproduce the Ludescher results. Matplotlib's xcorr and acorr functions do not subtract the means from the signals before doing the rolling dot products. This was fine for the nino34 anomaly since that is mean 0 by design, but link strength is all positive so it is badly affected by this. Once the mean is subtracted, the 0 time lag correlation remains negligible, but there is a small but noticeable peak in the xcorr plot at -4, corresponding to a correlation with the anomaly 4 months after the corresponding link strength. The correlation with 6 months later is almost the same, but it is still lower than the 6-month lag autocorrelation of the anomaly itself and significantly lower than the other models I have posted here. I have updated the [notebook](https://www.googledrive.com/host/0B4cyIPgV_VxrX0lxSUxHU2VLN28/link-anom.html) to subtract the mean prior to the analysis.
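    The effect described above can be sketched in plain numpy with made-up series (not the actual link-strength data): when one series has a large positive mean, the un-demeaned normalized cross-correlation is squashed toward zero by the mean term in the denominator, even when the series are genuinely correlated.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
anom = rng.standard_normal(n)                      # mean-zero anomaly-like series
link = 5.0 + anom + 0.5 * rng.standard_normal(n)   # all-positive series, strongly correlated with anom

def norm_xcorr0(x, y, demean):
    """Lag-0 normalized cross-correlation, with optional mean subtraction
    (the un-demeaned version mimics rolling dot products on raw signals)."""
    if demean:
        x = x - x.mean()
        y = y - y.mean()
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))

raw = norm_xcorr0(link, anom, demean=False)  # squashed toward zero by the offset
dem = norm_xcorr0(link, anom, demean=True)   # recovers the strong correlation
```

    The offset of 5.0 and the noise level are arbitrary; the point is only that `dem` comes out much larger than `raw` for the same pair of series.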
  • 39.
    edited November 2014

    -

    Comment Source:-
  • 40.
    edited November 2014

    Here are regression analyses between nino34 and link strength. The code and output snippets are in R. This is on the full 1950-2013 data.

    • Link strength vs nino34, no lag
      • although statistically significant (p=.0002) the correlation only explains about 1.6% of the variance.
        > summary(lm(d$ANOM ~ d$link))
        
        Call:
        lm(formula = d$ANOM ~ d$link)
        
        Residuals:
            Min      1Q  Median      3Q     Max 
        -1.9899 -0.5747 -0.0789  0.5044  2.4438 
        
        Coefficients:
                    Estimate Std. Error t value Pr(>|t|)    
        (Intercept) 0.005587   0.029435   0.190 0.849523    
        d$link      0.350309   0.094849   3.693 0.000237 ***
        ---
        Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
        
        Residual standard error: 0.8157 on 766 degrees of freedom
        Multiple R-squared:  0.0175,    Adjusted R-squared:  0.01621 
        F-statistic: 13.64 on 1 and 766 DF,  p-value: 0.000237
    
    • Link strength vs nino34, with 6 month lag
      • statistically significant (p < 2.2e-16) and explains about 8.7% of the variance.
        > summary(lm(d$ANOM[7:nrow(d)] ~ d$link[1:(nrow(d)-6)]))
        
        Call:
        lm(formula = d$ANOM[7:nrow(d)] ~ d$link[1:(nrow(d) - 6)])
        
        Residuals:
             Min       1Q   Median       3Q      Max 
        -2.01307 -0.54718 -0.03153  0.48593  2.33848 
        
        Coefficients:
                                Estimate Std. Error t value Pr(>|t|)    
        (Intercept)              0.01621    0.02834   0.572    0.567    
        d$link[1:(nrow(d) - 6)]  0.78163    0.09113   8.577   <2e-16 ***
        ---
        Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
        
        Residual standard error: 0.7822 on 760 degrees of freedom
        Multiple R-squared:  0.08826,   Adjusted R-squared:  0.08706 
        F-statistic: 73.57 on 1 and 760 DF,  p-value: < 2.2e-16
    
    • nino34 vs nino34, with 6 month lag
      • statistically significant (p<2.2e-16) and explains about 16.1% of the variance.
        > summary(lm(d$ANOM[7:nrow(d)] ~ d$ANOM[1:(nrow(d)-6)]))
        
        Call:
        lm(formula = d$ANOM[7:nrow(d)] ~ d$ANOM[1:(nrow(d) - 6)])
        
        Residuals:
             Min       1Q   Median       3Q      Max 
        -1.92681 -0.45092 -0.03014  0.45603  2.25653 
        
        Coefficients:
                                Estimate Std. Error t value Pr(>|t|)    
        (Intercept)              0.01162    0.02717   0.428    0.669    
        d$ANOM[1:(nrow(d) - 6)]  0.39918    0.03295  12.115   <2e-16 ***
        ---
        Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
        
        Residual standard error: 0.75 on 760 degrees of freedom
        Multiple R-squared:  0.1619,    Adjusted R-squared:  0.1608 
        F-statistic: 146.8 on 1 and 760 DF,  p-value: < 2.2e-16
    
    Comment Source:Here are regression analyses between nino34 and link strength. The code and output snippets are in R. This is on the full 1950-2013 data. + Link strength vs nino34, no lag + although statistically significant (p=.0002) the correlation only explains about 1.6% of the variance. <pre> &gt; summary(lm(d$ANOM ~ d$link)) Call: lm(formula = d$ANOM ~ d$link) Residuals: Min 1Q Median 3Q Max -1.9899 -0.5747 -0.0789 0.5044 2.4438 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.005587 0.029435 0.190 0.849523 d$link 0.350309 0.094849 3.693 0.000237 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.8157 on 766 degrees of freedom Multiple R-squared: 0.0175, Adjusted R-squared: 0.01621 F-statistic: 13.64 on 1 and 766 DF, p-value: 0.000237 </pre> + Link strength vs nino34, with 6 month lag + statistically significant (p &lt; 2.2e-16) and explains about 8.7% of the variance. <pre> &gt; summary(lm(d$ANOM[7:nrow(d)] ~ d$link[1:(nrow(d)-6)])) Call: lm(formula = d$ANOM[7:nrow(d)] ~ d$link[1:(nrow(d) - 6)]) Residuals: Min 1Q Median 3Q Max -2.01307 -0.54718 -0.03153 0.48593 2.33848 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.01621 0.02834 0.572 0.567 d$link[1:(nrow(d) - 6)] 0.78163 0.09113 8.577 &lt;2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.7822 on 760 degrees of freedom Multiple R-squared: 0.08826, Adjusted R-squared: 0.08706 F-statistic: 73.57 on 1 and 760 DF, p-value: &lt; 2.2e-16 </pre> + nino34 vs nino34, with 6 month lag + statistically significant (p&lt;2.2e-16) and explains about 16.1% of the variance. <pre> &gt; summary(lm(d$ANOM[7:nrow(d)] ~ d$ANOM[1:(nrow(d)-6)])) Call: lm(formula = d$ANOM[7:nrow(d)] ~ d$ANOM[1:(nrow(d) - 6)]) Residuals: Min 1Q Median 3Q Max -1.92681 -0.45092 -0.03014 0.45603 2.25653 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.01162 0.02717 0.428 0.669 d$ANOM[1:(nrow(d) - 6)] 0.39918 0.03295 12.115 &lt;2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.75 on 760 degrees of freedom Multiple R-squared: 0.1619, Adjusted R-squared: 0.1608 F-statistic: 146.8 on 1 and 760 DF, p-value: &lt; 2.2e-16 </pre>
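    The "explains X% of the variance" figures above are just squared correlations, since each model has a single predictor. A numpy sketch on a synthetic AR(1) series (the coefficient 0.9, the lag of 6, and the series itself are illustrative, not the nino34 data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 768  # roughly the number of monthly points in 1950-2013

# Synthetic AR(1) stand-in for an anomaly series (phi = 0.9 is made up)
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()

lag = 6
y, xlag = x[lag:], x[:-lag]

# OLS fit of y on xlag, mirroring lm(d$ANOM[7:nrow(d)] ~ d$ANOM[1:(nrow(d)-6)])
b, a = np.polyfit(xlag, y, 1)
resid = y - (a + b * xlag)
r2 = 1.0 - resid.var() / y.var()  # R-squared, the "% of variance explained"
```

    For a one-predictor regression with an intercept, this `r2` equals the squared Pearson correlation between predictor and response, which is a handy cross-check against xcorr-style analyses.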
  • 41.
    edited November 2014

    Hello Daniel

    Your Out[53] in your iPython notebook in #39 here clearly shows a lump, not a linear dependency, between the link and anom.

    However, in your ANOVA analysis in #41 you are using lm's simple linear link ~ anom, which is fully linear regression.

    Let me know if I am misunderstanding something. I generally and easily misguide myself with ANOVA, so please have that consideration.

    Support Vector Regression (SVR) does what you are trying to do here (if I understand it correctly) but in the non-linear or curved space: it uses the kernel to map, non-linearly, into a linear space, then does linear regression there, then maps back non-linearly, producing a non-linear regression. I attached the math in my SVR forecast.

    Thank you for this work, I know how much hard effort it is.

    Dara

    Comment Source:Hello Daniel Your Out[53] in your iPython notebook in #39 here clearly shows a lump, not a linear dependency, between the link and anom. However, in your ANOVA analysis in #41 you are using lm's simple linear link ~ anom, which is fully linear regression. Let me know if I am misunderstanding something. I generally and easily misguide myself with ANOVA, so please have that consideration. Support Vector Regression (SVR) does what you are trying to do here (if I understand it correctly) but in the non-linear or curved space: it uses the kernel to map, non-linearly, into a linear space, then does linear regression there, then maps back non-linearly, producing a non-linear regression. I attached the math in my SVR forecast. Thank you for this work, I know how much hard effort it is. Dara
  • 42.
    edited November 2014

    Here is an R package for SVR

    R package for SVR

    It uses the kernLab package:

    R package for kernLab

    I developed these in Mathematica since I am using wavelet kernels and other kernels soon which are not readily available in these packages nor in Scikit

    Dara

    Comment Source:Here is an R package for SVR [R package for SVR](http://cran.r-project.org/web/packages/LinearizedSVR/LinearizedSVR.pdf) It uses the kernLab package: [R package for kernLab](http://cran.r-project.org/web/packages/kernlab/kernlab.pdf) I developed these in Mathematica since I am using wavelet kernels and other kernels soon which are not readily available in these packages nor in Scikit Dara
  • 43.

    Hello Daniel

    Something I remembered from our earlier Scikit experience, they like their APIs (most of them) to have N(0,1) distribution for arguments, so one needs to normalize the parameters into the range [-1,+1]. I also suspect that in some of their code they do that internally which is a disaster to interpret the results. One key reason why we decided to code our own algorithms.

    For example for Neural Networks, our algorithm works the best for normalized inputs, but that changes the learning completely, so I added a scalar the programmer could use to scale the input and output to fine tune, same for SVR.

    I also noted that some of the plots for stat packages and scientific packages smooth the data without notifying the programmer, which again is another disaster to deal with.

    Just to be cautious about the interpretations of the results and forming too quick a conclusion.

    Dara

    Comment Source:Hello Daniel Something I remembered from our earlier Scikit experience, they like their APIs (most of them) to have N(0,1) distribution for arguments, so one needs to normalize the parameters into the range [-1,+1]. I also suspect that in some of their code they do that internally which is a disaster to interpret the results. One key reason why we decided to code our own algorithms. For example for Neural Networks, our algorithm works the best for normalized inputs, but that changes the learning completely, so I added a scalar the programmer could use to scale the input and output to fine tune, same for SVR. I also noted that some of the plots for stat packages and scientific packages smooth the data without notifying the programmer, which again is another disaster to deal with. Just to be cautious about the interpretations of the results and forming too quick a conclusion. Dara
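    The two normalizations mentioned above, as one-liners on a toy vector; whether a given package wants zero-mean/unit-variance inputs, a [-1, +1] rescaling, or does one of these silently is exactly what needs checking in its docs rather than assuming:

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 10.0])  # toy feature values

# Standardize to zero mean, unit variance (the "N(0,1)"-style input)
z = (x - x.mean()) / x.std()

# Rescale into the range [-1, +1]
r = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
```

    Note the two are not equivalent: standardized values are not bounded, and min-max-rescaled values are generally not zero-mean, so swapping one for the other changes what a learner sees.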
  • 44.

    Daniel said:

    " still lower than the 6-month lag autocorrelation of the anomaly itself "

    This means that an anomaly analysis, as in "dead-reckoning" based on its past history, works better for prediction than trying to extract something based on external link strengths.

    This makes sense if the ENSO phenomenon is more an internal process fed by equatorial waters and winds than something triggered by external links from regions spatially separated from the equatorial Pacific.

    That works as a kind of a null hypothesis. Unless one can find a link that is stronger than the autocorrelation of the ENSO signal itself, it will be hard to get buy-in that it is a better predictor.

    Comment Source:Daniel said: > " still lower than the 6-month lag autocorrelation of the anomaly itself " This means that an anomaly analysis, as in "dead-reckoning" based on its past history, works better for prediction than trying to extract something based on external link strengths. This makes sense if the ENSO phenomenon is more an internal process fed by equatorial waters and winds than something triggered by external links from regions spatially separated from the equatorial Pacific. That works as a kind of a null hypothesis. Unless one can find a link that is stronger than the autocorrelation of the ENSO signal itself, it will be hard to get buy-in that it is a better predictor.
  • 45.

    This is how I think about this: in order to forecast something we need a certain amount of information, and in our case the information is within the heatmaps. I assume the link strength is also obtained from the same heatmaps, therefore no new information is added to the forecast; generally speaking then, not just for a few of the sample cases, heatmap-related information cannot aid the forecast.

    However, non-heatmap-related data, e.g. upper atmosphere radiation quantities or something completely unrelated, adds to the amount of information, possibly allowing for a better forecast.

    Example 1: If I am riding my bike, the past history of riding a bike could enable me to balance and follow a path; however, at the next bend in the road there is a wall and a large occluded hole beside it. No data I add from the history of the bike ride will allow me to forecast a navigation to circumvent the hole, but if someone shouted "Watch out, hole behind the wall", that information could easily help to modify the forecast.

    Example 2: I write forecast algorithms for a stock's price; if I use the same stock price history and make up a new variable, it might not greatly improve my forecast since I added no new information. So what is proposed nowadays is to add data from the text of the company's report releases, e.g. 10k filings; then I could add a new variable to the input vector with added information, which could increase the forecast accuracy.

    Comment Source:This is how I think about this: in order to forecast something we need a certain amount of information, and in our case the information is within the heatmaps. I assume the **link strength** is also obtained from the same heatmaps, therefore no new information is added to the forecast; generally speaking then, not just for a few of the sample cases, heatmap-related information cannot aid the forecast. However, non-heatmap-related data, e.g. upper atmosphere radiation quantities or something completely unrelated, adds to the amount of information, possibly allowing for a better forecast. Example 1: If I am riding my bike, the past history of riding a bike could enable me to balance and follow a path; however, at the next bend in the road there is a wall and a large occluded hole beside it. No data I add from the history of the bike ride will allow me to forecast a navigation to circumvent the hole, but if someone shouted "Watch out, hole behind the wall", that information could easily help to modify the forecast. Example 2: I write forecast algorithms for a stock's price; if I use the same stock price history and make up a new variable, it might not greatly improve my forecast since I added no new information. So what is proposed nowadays is to add data from the text of the company's report releases, e.g. 10k filings; then I could add a new variable to the input vector with added information, which could increase the forecast accuracy.
  • 46.
    edited November 2014

    The Sig volumetric data clearly shows north-south trends, and its Laplacian clearly shows leakage at the Himalayas, the coast of Chile and the polar regions; therefore I am doubtful that any simple computation of averages of heat data from the same Sig data will improve any forecasts for the El Nino index, assuming the link computations are heat related.

    Comment Source:The Sig volumetric data clearly shows north-south trends, and its Laplacian clearly shows leakage at the Himalayas, the coast of Chile and the polar regions; therefore I am doubtful that any simple computation of averages of heat data from the same Sig data will improve any forecasts for the El Nino index, assuming the link computations are heat related.
  • 47.

    I agree with Dara's argument and examples. In particular, even if one does find external linkages, unless one can forecast those behaviors, it won't help much.

    That is why I am more satisfied with the approach that I am taking; the SOI sloshing model has 3 external forcing factors that are at least quasi-predictable -- i.e. quasi-biennial oscillations, Chandler wobble, and solar cycles. Each of these has a "watch out what's coming" predictability.

    Comment Source:I agree with Dara's argument and examples. In particular, even if one does find external linkages, unless one can forecast *those* behaviors, it won't help much. That is why I am more satisfied with the approach that I am taking; the SOI sloshing model has 3 external forcing factors that are at least quasi-predictable -- i.e. quasi-biennial oscillations, Chandler wobble, and solar cycles. Each of these has a "watch out what's coming" predictability.
  • 48.

    In reply to WebHubTel, #29 and #32:

    Ludescher et al's link strength is a highly derived quantity, and it is not at all easy to relate it back to physics. I believe that for correlations between slowly varying signals, it works in the opposite direction to what you'd expect, decreasing as the correlation increases.

    Suppose the daily temperatures at two places are $$ k t + N_1(t)$$ and $$ k t + N_2(t)$$ where $t$ is time, $k$ a constant, and the $N_i$ are independent daily fluctuations, and this holds for 365 + 200 days up to the present time. If $k$ is large, all the correlations at different time lags will be near 1, so the max/average computation they perform results in a link strength a little above the minimum of 1. (You might get .99/.97 $\approx$ 1.02.) If $k$ is tiny, the noise dominates, there is greater variation between correlations at different time lags, and max/average will be bigger.

    So far as I can see, the highest values for the link strength occur when there is correlation on very short time scales, e.g. if the two signals are $N_1(t)$ and $N_1(t+\tau)$. Then you get a correlation of 1 at one particular time lag, but very small correlations at other time lags, so max/average is large.

    Comment Source:In reply to WebHubTel, #29 and #32: Ludescher et al's link strength is a highly derived quantity, and it is not at all easy to relate it back to physics. I believe that for correlations between slowly varying signals, it works in the opposite direction to what you'd expect, decreasing as the correlation increases. Suppose the daily temperatures at two places are $$ k t + N_1(t)$$ and $$ k t + N_2(t)$$ where $t$ is time, $k$ a constant, and the $N_i$ are independent daily fluctuations, and this holds for 365 + 200 days up to the present time. If $k$ is large, all the correlations at different time lags will be near 1, so the max/average computation they perform results in a link strength a little above the minimum of 1. (You might get .99/.97 $\approx$ 1.02.) If $k$ is tiny, the noise dominates, there is greater variation between correlations at different time lags, and max/average will be bigger. So far as I can see, the highest values for the link strength occur when there is correlation on very short time scales, e.g. if the two signals are $N_1(t)$ and $N_1(t+\tau)$. Then you get a correlation of 1 at one particular time lag, but very small correlations at other time lags, so max/average is large.
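    The two cases above can be checked numerically with a simplified stand-in for the link strength -- max over lags of the Pearson correlation divided by the mean absolute correlation over lags. Ludescher et al's actual normalization subtracts a mean and divides by a standard deviation, so treat this only as a sketch of the qualitative behavior:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 565  # 365 + 200 days, as in the comment above

def pearson(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

def link_strength(x, y, max_lag=200):
    """max over lags divided by mean absolute lagged correlation --
    a simplified sketch, not Ludescher et al's exact definition."""
    cs = np.array([pearson(x[lag:], y[:len(y) - lag]) for lag in range(max_lag + 1)])
    return cs.max() / np.abs(cs).mean()

t = np.arange(n, dtype=float)

# Case 1: shared strong trend k*t plus independent noise -> correlations near 1
# at every lag, so max/average stays close to 1
s_trend = link_strength(t + rng.standard_normal(n), t + rng.standard_normal(n))

# Case 2: pure noise, with x an exact 50-day-lagged copy of y -> one spike of
# correlation 1 at lag 50 and near-zero correlations elsewhere, so the ratio is large
noise = rng.standard_normal(n + 50)
s_lagged = link_strength(noise[:n], noise[50:])
```

    The mean absolute correlation is used in the denominator purely to avoid dividing by a near-zero (possibly negative) mean in the noise case; with any reasonable variant, the trend-dominated pair scores near the minimum while the lagged-noise pair scores far above it.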
  • 49.

    Graham, sorry I don't follow that. I understand how a dipole is detected, which is looking for correlations that approach -1, but not the link strength. I hope I am not leading people astray by my confusion.

    I keep thinking that correlations on very short time scales are indistinguishable from autocorrelation. Pick a location just outside the NINO34 region and any autocorrelation in the NINO34 region can easily leak into the outer regions. But how do you exclude that or determine a cutoff region?

    Comment Source:Graham, sorry I don't follow that. I understand how a dipole is detected, which is looking for correlations that approach -1, but not the link strength. I hope I am not leading people astray by my confusion. I keep thinking that correlations on very short time scales are indistinguishable from autocorrelation. Pick a location just outside the NINO34 region and any autocorrelation in the NINO34 region can easily leak into the outer regions. But how do you exclude that or determine a cutoff region?
  • 50.

    WebHubTel,

    We may be at cross-purposes here. I am not convinced that anybody, including Ludescher et al, understands what it is that their link strength actually measures. Whether it should be called a link strength is dubious. There's no compelling reason why you should be interested in their definition, but if you are, I just wanted to warn you that it's weird! I put some preliminary tests on the wiki.

    I keep thinking that correlations on very short time scales are indistinguishable from autocorrelation. Pick a location just outside the NINO34 region and any autocorrelation in the NINO34 region can easily leak into the outer regions.

    What might be interesting is that the degree to which the correlations leak out of the basin (or more likely leak in, given the prevailing winds) could vary over time. Maybe steady trade winds lead to strong correlations of daily fluctuations between regions up to, say, 2000 km apart and a time lag of up to a few days. When the trade winds falter, these correlations drop.

    Comment Source:WebHubTel, We may be at cross-purposes here. I am not convinced that anybody, including Ludescher et al, understands what it is that their link strength actually measures. Whether it should be called a link strength is dubious. There's no compelling reason why you should be interested in their definition, but if you are, I just wanted to warn you that it's *weird*! I put [some preliminary tests](http://www.azimuthproject.org/azimuth/show/Experiments+with+varieties+of+link+strength+for+El+Ni%C3%B1o+prediction) on the wiki. > I keep thinking that correlations on very short time scales are indistinguishable from autocorrelation. Pick a location just outside the NINO34 region and any autocorrelation in the NINO34 region can easily leak into the outer regions. What might be interesting is that the degree to which the correlations leak out of the basin (or more likely leak in, given the prevailing winds) could vary over time. Maybe steady trade winds lead to strong correlations of daily fluctuations between regions up to, say, 2000 km apart and a time lag of up to a few days. When the trade winds falter, these correlations drop.
Sign In or Register to comment.