
## Comments

Jim, it works for me. The context web server is here: http://entroplet.com/. This doesn't use any XSLT. The XML is RDF.

It seems to me that if category theory is to be applied to managing a graph database of knowledge, one must first take the baby steps of using the ideas of the semantic web à la Berners-Lee and conventional ontologies. Triple store beats relational, and it is in combat with free-form search (i.e. google-like search engines). I am not even sure how category theory plays into this spectrum.


Jim, I tried pushing an Excel subdirectory into the azimuth-project/el-nino repo and got a "Request failed" error. (If I mess up the password I get "Authentication failed" instead.) So it looks like I may not have permissions yet.

I added jimstutt as a collaborator to my pukpr/context repo:

https://github.com/pukpr/context/


Sorry, I remember that I've failed to give somebody the appropriate permissions before. I'll have to relearn how to do it and get back to you.


The GitHub docs say:

> Invitations are sent via email and can be accepted at https://github.com/azimuth-project

and

> If the user is not a member of your organization, they'll receive an email inviting them to the organization. They will need to accept the invitation before becoming a member of the organization.

Did you get an email invite? I don't know what that would have told you to do. I still can't remember what I had to do before :(.


You are correct, sir. I don't look at my email enough. I tested by placing an Excel subdir under el-nino.


+1


Jim, I have a spreadsheet for the DiffEq expansion technique in the azimuth-project/el-nino/Excel directory.

You need to run the Solver (evolutionary mode only) against the CC entry. Then you can get results as in [Comment #44](http://forum.azimuthproject.org/discussion/comment/14498/#Comment_14498). It is set up conceptually very similarly to Eureqa, but not everyone has access to Eureqa, so if you want to see what you can do with it ...


Great, I'll give it a go! I'm not sure applying the AIC to parameterisations of your formula counts as comparing different models. I sort of expected a comparison with Graham Jones's Ludescher et al. R code in the github repo.


Jim, I don't really know what is going on with regard to approaching this as a formal statistical evaluation, and I have never approached science in that way.

For ENSO, all I am trying to do is apply elementary formulations of fluid dynamics to try to understand what is going on in terms of the apparently erratic oscillation. This approach may be naive but I think it is worth a shot, especially considering that I have found little evidence of anyone trying this path in the literature.

So when I found out right away that a sinusoidal forcing function with the same frequency as the QBO gives a decent correlation, that was the opening to go for. Adding the frequency of the other hypothesized forcing, that of the angular-momentum Chandler wobble, only improves the agreement. The TSI also has an important contribution to make.

The essential statistical question that remains is to ask whether this correlation is simply a staggering coincidence.

In other words, I have no idea of how to provide a statistical likelihood that this is a realistic behavioral model of the driving physical mechanism behind ENSO.

> "I sort of expected a comparison with Graham Jones's Ludescher et al. R code in the github repo."

This is a good question but I just don't know how to answer it.


Since the SOI model as formulated in this thread is completely guided by physics considerations, it makes sense to revisit certain assumptions. One consideration that I originally started with, eventually left, and am now coming back to is using a lower-altitude measure of QBO as a forcing function.

The lower altitude QBO that I originally evaluated was the 70 hPa measure, but I eventually switched over to the 20 hPa data set because it appeared less noisy.

As it turns out, that is not a good enough reason: the lower altitude measure is actually more realistic, since it is nearer to the ocean's surface and thus better placed to generate the forcing necessary to push the ocean's volume in a periodically prevailing direction.

The outcome of the switch to the 70 hPa measure is that the long-term period remains the same at 2.33 years, but the jitter in the waveform is markedly greater. By modeling the 70 hPa QBO as a frequency-modulated time series, the wave-equation fit becomes much better. This is expected, as jitter in the waveform expresses itself as a temporarily increasing or decreasing frequency, which can produce a significant response in the resonant wave-equation formulation.

![Model fit](http://imageshack.com/a/img537/9164/oH6zXU.gif)

The bottom panel is the modeled QBO, where I tried to achieve at least a CC of 0.7 between the model and the data. The top panel is the data with the fit, where the yellow filled regions show discrepancies between the two sets of data (diffs shown in the middle panel).

Don't be alarmed by this, as the dipole agreement between -Tahiti and Darwin shows similar deviations, even though they should be dipole replicas of each other. The issue we are always dealing with is the presence of nuisance noise. So if we map -Tahiti onto Darwin, the agreement is there, but it also shows significant nuisance noise.

![T-D](http://imageshack.com/a/img540/7214/0o5ifP.gif)

Dealing with dipoles in an electrical engineering lab environment is much easier, as the noise can be minimized, but, hey, this is the real world and Nature does not want to give up its secrets easily :)


> The essential statistical question that remains is to ask whether this correlation is simply a staggering coincidence.

> In other words, I have no idea of how to provide a statistical likelihood that this is a realistic behavioral model of the driving physical mechanism behind ENSO.

The usual way to do this is to split the data into training and testing sets. (Really one should split into training, validation, and test sets, but train & test will do here.) Use the training set to fit your model, and then use the fitted model to make predictions on the test set. If the predictions are still good, then there are grounds to say the correlations are real.

The simplest way to do this is to pick a date such that there is enough data on either side of it. Fit using the data before that date and evaluate on the data after it.
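Here is a minimal sketch of that train/test protocol in Python, using a synthetic series in place of the real SOI data. The 2.33-year forcing period is borrowed from the discussion in this thread; everything else (amplitudes, the clutter term, the record length) is made up purely for illustration:

```python
import math

def fit_sinusoid(t, y, freq):
    """Least-squares fit y ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t) for a known
    frequency; on a long, evenly sampled grid this reduces to projection."""
    n = len(t)
    a = 2.0 / n * sum(yi * math.sin(2 * math.pi * freq * ti) for ti, yi in zip(t, y))
    b = 2.0 / n * sum(yi * math.cos(2 * math.pi * freq * ti) for ti, yi in zip(t, y))
    return a, b

def correlation(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / math.sqrt(vx * vy)

# Synthetic "SOI-like" record: a 2.33-yr cycle plus unrelated clutter.
freq = 1.0 / 2.33                     # cycles per year
t = [i / 12.0 for i in range(1200)]   # 100 years, monthly samples
y = [1.5 * math.sin(2 * math.pi * freq * ti) + 0.3 * math.sin(5.0 * ti) for ti in t]

split = 600  # train on the first 50 years only
a, b = fit_sinusoid(t[:split], y[:split], freq)
pred = [a * math.sin(2 * math.pi * freq * ti) + b * math.cos(2 * math.pi * freq * ti)
        for ti in t[split:]]
cc = correlation(pred, y[split:])
print(f"out-of-sample CC = {cc:.2f}")
```

If the out-of-sample CC collapses relative to the in-sample fit, the training correlation was likely fitting noise rather than a real forcing.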


Daniel, I did a training interval example in [comment #43](http://forum.azimuthproject.org/discussion/comment/14498/#Comment_14498).

![ti](http://imageshack.com/a/img909/2582/YJfdLh.gif)

You have to remember that the validation results for typical El Nino projections are only good for a few months to a year or two out. Or at least that is the impression I get. Yet here is a case in which the back-prediction is coherent in phase to 100 years prior to the start of the training interval!

I am beginning to think that this is all toy physics, with the caveat that toy physics usually doesn't work so well in real life. In this case it works out because no one ever thought to solve the wave equation with the known forcings. IMO, this is no longer a statistical question but a question in the philosophy of climate science modeling. Are these kinds of simple first-order models not acceptable by the standards of the climate science establishment? Just asking because I don't have a clue.

Last year a buddy of mine who works in the CompSci department at the local U got us hooked up with a climate scientist who is working with his former thesis advisor. This is one of those data-mining research projects whereby they plow through satellite data looking for correlations and patterns. Over lunch at a local restaurant, I explained to him what my ENSO model was all about, and he seemed interested in it. This scientist has written several papers on QBO and ENSO, so we thought it might go somewhere in the next several months. After I kept him informed of the progress and of what I was doing on the Azimuth Forum, we have completely lost contact. No replies to any emails.


Isaac Held outlined the use of simple models in ["the fruit fly of climate models"](http://www.gfdl.noaa.gov/blog/isaac-held/2012/05/25/28-the-fruit-fly-of-climate-models/). hth


Jim, yes, I saw that post recently, and I have been applying the fruit-fly idea in trying to rationalize what I am finding.

Along this note, I posted a comment to RealClimate.org after Hank Roberts mentioned the work that we are doing here. He thought the suggestions that John offered up for getting published were interesting. [I responded](http://www.realclimate.org/index.php/archives/2015/04/a-scientific-debate/comment-page-1/#comment-628435) as follows:

> "That's me that Hank Roberts is referring to in #8. My name is Paul Pukite and I am in no way, shape, or form a climate science insider. I have published in several other physics and engineering disciplines but harbor no illusions that I can easily get published in an earth sciences field such as climate science. The key word there is "easily". That's why I am in no hurry and continue to spend a few hours a week working on these "fruit fly" models of climate, as Isaac Held refers to them.
>
> Funny story is that I actually received a response to a climate science paper that I submitted explaining that it was rejected because one of the reviewers said it looked like one of those "AI generated" research papers.
>
> This is real life and stuff like that happens. If you think you are doing interesting and challenging work, you plow forward."

It is absolutely true that I got a response to a paper I submitted to a climate science conference saying that the paper looked as if it was written by an AI algorithm -- I assumed [like this one](http://pdos.csail.mit.edu/scigen/). Without credentials, you apparently have to develop a thick skin.


The ENSO model I use is very similar to an electrical circuit made of resistors, inductors, and capacitors. Because the motion of the ocean appears to be strongly [inviscid](http://en.wikipedia.org/wiki/Inviscid_flow), we ignore the resistor. So we have a resonant LC circuit, as described at http://en.wikipedia.org/wiki/LC_circuit

![LC](http://upload.wikimedia.org/wikipedia/commons/thumb/b/b2/LC_parallel_simple.svg/220px-LC_parallel_simple.svg.png)

The input terminal corresponds to the forcing function voltage v(t) and the output terminal to the current i(t). The current is essentially what we measure as a response to the forcing function coupled with the resonant circuit. This configuration reduces to a second-order differential equation, which forms the basis of the oceanic sloshing model.

The twist is that the forcing also causes a temporal modification in the characteristic frequency of the resonant circuit. This is further described in the sloshing literature as a Mathieu (single sinusoid) or Hill (multiple sinusoids as a Fourier series) differential equation.

So just figure out the forcing function (QBO ...) and the characteristic frequency (see Clarke), tweak the parameters of the Hill perturbation, and one can get an excellent fit to an ENSO measure such as SOI. We create error margins on the signal by looking at the excursions of the oppositely polarized dipole (the noisy Tahiti and Darwin signals) and fill the inner envelope with yellow. The bottom panel shows a representation of the actual signal, and the region filled in yellow now represents where the model goes outside the error margins.

![BEST SOI](http://imageshack.com/a/img537/7091/ThY7SC.gif)

That's what is called an elevator pitch aimed at an engineer: create a representative analogous model to the fruit-fly model and pitch that.
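For concreteness, the second-order equation behind that circuit analogy can be integrated directly. This is a toy sketch under assumed parameters: the characteristic ω² ≈ 2.2 and the 6.5-year Mathieu period echo values mentioned later in this thread, while the modulation strength q, the unit forcing amplitude, and the step size are my own illustrative choices, not the fitted model:

```python
import math

def rk4_step(f, t, state, h):
    """One classical 4th-order Runge-Kutta step; state is (x, v)."""
    k1 = f(t, state)
    k2 = f(t + h / 2, [s + h / 2 * k for s, k in zip(state, k1)])
    k3 = f(t + h / 2, [s + h / 2 * k for s, k in zip(state, k2)])
    k4 = f(t + h, [s + h * k for s, k in zip(state, k3)])
    return [s + h / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

omega2 = 2.2                 # characteristic frequency squared (1/yr^2)
q = 0.2                      # Mathieu modulation strength (assumed)
T = 6.5                      # modulation period, yr (Chandler wobble beat)
w_qbo = 2 * math.pi / 2.33   # QBO-like forcing frequency (rad/yr)

def rhs(t, state):
    # x'' + omega2 * (1 + q*cos(2*pi*t/T)) * x = sin(w_qbo * t)
    x, v = state
    accel = math.sin(w_qbo * t) - omega2 * (1 + q * math.cos(2 * math.pi * t / T)) * x
    return [v, accel]

h, t, state = 0.01, 0.0, [0.0, 0.0]
series = []
while t < 100.0:             # integrate 100 years starting from rest
    state = rk4_step(rhs, t, state, h)
    t += h
    series.append(state[0])

print(f"peak |x| over 100 yr: {max(abs(x) for x in series):.2f}")
```

Even with purely periodic inputs, the time-varying coefficient keeps the response from settling into a simple periodic waveform, which is the "erratic oscillation" point made above.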


The white paper I wrote on the Southern Index Oscillation Model, arXived [here](http://arxiv.org/abs/1411.0815), is probably in need of an edit.

This is actually good news. To create the model as described in the paper, I applied 4 factors: the QBO, the Chandler wobble, the TSI, and an extra boundary condition defined by the Pacific Ocean climate shift event of 1976/1977.

But are all these factors necessary? Is it possible to reduce the number of factors so that only the primary factors contribute and still maintain fidelity to the data?

The two I felt were the weakest and most in need of a scientific rationale were the TSI and climate shift factors. So I removed these and juggled only the QBO and CW factors. Sure enough, the model fit retained most of the erratic structure and the correlation coefficient was only a bit lower. Part of the reason that I think this works is that the more detailed QBO wave-form that I have been recently applying compensates for the extra degrees of freedom that the TSI data provided.

And as far as the climate shift is concerned, this factor vanished.

The implication of removing these factors is that the model should have better predictive power, given the higher determinism of QBO and CW. Sudden climate shifts are impossible to predict, so that factor would have added uncertainty to the forecast capability. The same goes for TSI, as its period varies around 10 to 11 years.

So as far as predictive power, we rely on the stationary aspects of the QBO (2.33 year average period) and of the Chandler wobble (6.5 year beat period). These each show slight jitter about the mean frequency, but they also show stationarity in terms of exhibiting coherence over long time scales -- in other words if you advance the QBO by many multiples of a 2.33 year period, you will know where you are on the waveform (minus the jitter).

Now, why did I take this detour in the first place? If you go back to blog posts from last year you will see that I started with only the CW and QBO factors. But then I somehow became motivated to achieve a higher correlation coefficient than the two factors could produce. Yet, as I have become more and more convinced, chasing a high CC is pointless when the data itself has noisy characteristics. The fear is that the extra fitting is simply fitting the noise excursions.

This is actually a good outcome. There are only 2 factors now to cover the year-over-year fluctuations of ENSO SOI, with a slow sinusoid of a 50-60 year period to capture the multi-decadal envelope.

To add a caveat, some of the residual still appears to contain a TSI factor, so this term may still have some significance, but not when the quest is to arrive at the least complex solution.


So after 91 years everything repeats?


Graham asks:

> "So after 91 years everything repeats?"

Checking ... there are 3 cycles of 2.33 years in 7 years and 2 cycles of 6.5 years in 13 years, so the repeat is 7*13 = 91 -- gotcha. (Even though the Chandler wobble is not exactly 6.5.)
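That arithmetic generalizes: treating the two periods as exact rationals (7/3 and 13/2 years, which is itself an idealization), the common repeat is the least common multiple of the two fractions. A quick sanity check in Python:

```python
import math
from fractions import Fraction

def lcm_periods(p1, p2):
    """LCM of two rational periods: lcm of the numerators over
    gcd of the denominators (Fraction keeps both in lowest terms)."""
    return Fraction(math.lcm(p1.numerator, p2.numerator),
                    math.gcd(p1.denominator, p2.denominator))

qbo = Fraction(7, 3)   # 2.33... yr QBO period, idealized as exactly 7/3
cw = Fraction(13, 2)   # 6.5 yr Chandler wobble beat period
print(lcm_periods(qbo, cw))  # → 91
```

With the real, slightly jittery periods there is of course no exact repeat; the 91 years only holds for the idealized rationals.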

I am applying a Hill/Mathieu equation, so the repeat period is much longer in that situation. One has to figure out the repeat of this kind of waveform:

![MF](http://imagizer.imageshack.us/a/img819/3857/239c.gif)

That's why sloshing is erratic, and it is one of the reasons that this problem has not easily been cracked.

My latest model combining the factors:

![soim80](http://imageshack.com/a/img673/9976/OX8MUE.gif)

I have gained a weird insight into deciphering these waveforms after playing with them off and on over the last year. It probably helped that I stared at 2D and 3D diffraction patterns for hours on end during my grad school daze :)


> "I am applying a Hill/Mathieu equation so the repeat period is much longer in that situation."

In particular, the solution of a [Mathieu equation is often not periodic](http://en.wikipedia.org/wiki/Mathieu_function), and that's one of the things which I found hadn't really been addressed in your type of reasoning.


Thanks, WebHubTel!


nad said:

> "... that's been one of the things which I found hadn't been really addressed in your type of reasoning."

I think the confusion may be in where the periodicities lie. For the forcing functions, there are certainly strict periodicities at play, and these involve factors such as seasonal cycles, QBO, the Chandler wobble, tidal cycles, etc. However, in the *response* functions, these periodicities can be quickly obscured.

One way for this to happen is if there are many forcing cycles at work and they combine to form a tangled mat of beat frequencies. Ordinarily, a Fourier series decomposition can straightforwardly isolate these frequencies. This is obviously not happening with ENSO, otherwise somebody would have discovered the specific series for ENSO long ago!

The second way for the time series to diverge from a periodic characteristic is if the transfer function shows non-linear effects. In the case of a Mathieu or Hill DiffEq, this is a special case of non-linearity in only the temporal modification of the DiffEq coefficients (not in the differential-order terms). It is important to note that a Fourier series decomposition via a Fourier transform will not easily isolate the characteristics of the response, as the spectral lines will show multiple bifurcations.

IMO, the second way is operational here, and I have been pushing this since I first wrote a blog post on ENSO [last year](http://contextearth.com/2014/02/10/the-southern-oscillation-index-model/). Read that post and you will understand why I too thought that using conventional analyses to find the underlying periodicities was going to be a challenging task. It took me a while to understand the sloshing literature, where they too discovered the same kinds of challenges in trying to deconvolute sloshing dynamics.

![slosh](http://imageshack.com/a/img661/4453/b03SI2.gif)

*Thuvanismail, Nasar, Sannasiraj Sannasi, and Sundar Vallam. "A numerical study: liquid sloshing dynamics in a tank due to uncoupled sway, heave and roll ship motions." Journal of Naval Architecture and Marine Engineering 10.2 (2013): 119-138.*

So over the last year, I have developed quite an arsenal of techniques to infer the behavior. What is most interesting is that in that first post I inferred these characteristics of a Mathieu equation:

The $a$ term is $\omega^2$ which I have as ~2.2 right now. The $q$ shows a similar relative amplitude now as then, and the $T$ term is now close to the Chandler Wobble of ~6.5 years. The progress has been slow but it is gradually converging.

Does that address your concern?


I just found this relatively obscure article in the [Volume 6 No. 1, March 2001 Newsletter of the Climate Variability and Predictability Programme (CLIVAR)](http://nora.nerc.ac.uk/119292/1/ex19.pdf):

*"On the Role of a quasiperiodic Forcing in the interannual and interdecadal Climate Variations"*, D.M. Sonechkin, N.N. Ivachtchenko, Hydrometeorological Research Center of Russia

> "Mention in this context the so-called Chandler wobble in the Earth’s pole motion. The mean period of this wobble is about 14 months (1.2 year), and the ratio of its frequency to the frequency of the annual period seems to be similar to the “worst” irrational number Y=0.8393... derived from the above root of the cubic equation (Sidorenkov and Sonechkin, 1999). It is well known that the Chandler wobble excites a pole tide in the atmosphere. For certain, this pole tide forces the equatorial gravity and Kelvin waves that are known in the Lindzen–Holton theory to be the direct drivers of the QBO of the lower-stratospheric equatorial zonal winds. Although the magnitude of this pole tide is very small a nonlinear mechanism of its force enhancing like the so-called parametric resonance may be supposed. Some similar enhanced effects of the pole tide may also be supposed for the atmospheric variations within extratropics.

> Thus the atmosphere turns out to be forced by a quasi-periodic manner. Therefore, the power spectra of the atmospheric variations must reveal some, possible subtle, peaks at both annual and Chandlerian frequencies and their combinational harmonics. In particular, a sequence of some peaks must exist at the difference frequency Z=1-Y=0.1607... (the oscillation of the about 6.5 years period), Z^2 (the oscillation of the about 40 year period) etc. as a consequence of the above-mentioned self-similarity of the power spectra of the strange nonchaotic attractor. This sequence can be also shifted along the frequency axis as a whole, so that all of the underlain periods are doubled, tripled, or quadrupled. For example, it is well known that the spectrum of the equatorial QBO reveals the main peak at the frequency of the doubled Chandlerian period (of the about 28 months, i.e. 2.4 years). The doubled period corresponding to Z (of the about 13-14 years) is also observed (Vlasova et al. 1987). Unfortunately, the length of the equatorial lower-stratospheric zonal wind record is too short to admit an accurate estimation of the energy of the wind oscillations at the frequency Z^2."

I am applying precisely this forcing configuration to the ENSO model. In the residual to the model fit, I was seeing a small but very distinct spectral component at a 3.65 year period. Now that makes sense because it is the difference frequency, $CW-QBO$.

I am beginning to think that perhaps ENSO is an analogous manifestation of QBO, but instead of the resonance operating in the atmosphere, it is manifesting as an oceanic variation.

The citations to the Sonechkin article are sparse, but there is this:

G. Fuhrmann, “Non-Smooth Saddle-Node Bifurcations of Forced Monotone Interval Maps I: Existence of an SNA,” arXiv preprint [arXiv:1307.0347](http://arxiv.org/pdf/1307.0347.pdf), 2013.

> "Quasi-periodic forcing and SNAs play an important role in a large class of models for real life systems: The Harper map is a mathematically well-understood dynamical system related to a certain kind of quasi-periodic Schrödinger equations (see below); there is numerical evidence for the existence of SNAs in the physiologically relevant Izhikevich Neuron Model [17]; [22-*Soneckin*] motivates that not just in order to get a complete description of the tides -as the result of the gravitational interaction between the Earth, the Moon, and the Sun- but even to predict interdecadal atmospheric variations, strange non-chaotic attractors have to be considered."

Hill or Mathieu equations show up in Earth-Moon-Sun orbital perturbation theory.

The following paper essentially calls out the deterministic nature of these oscillations -- perhaps no different in concept to first-order tidal theory, but hampered by the non-linearities in the hydrodynamics.

S. Hameed, R. G. Currie, and H. Lagrone, “Signals in atmospheric pressure variations from 2 to ca. 70 months: Part I, simulations by two coupled ocean—atmosphere GCMs,” International Journal of Climatology, vol. 15, no. 8, pp. 853–871, 1995.

> "The two sets of parents and 33 tones vary in period from 2 to ca. 70 months and demonstrate that the spectrum of climate on these time-scales is ‘signal-like’ rather than ‘noise-like’ as traditionally believed"

I do not think I see any noise at all with the latest ENSO model fit.


I just found [this](http://nora.nerc.ac.uk/2761/) which seems to have some interesting numbers, but I don't have easy access to the book.

> Jevrejeva, S.; Moore, J. C.; Grinsted, A. 2007. ENSO signal propagation detected by wavelet coherence and mean phase coherence methods. In: Tsonis, Anastasios A.; Elsner, James B. (eds.), Nonlinear Dynamics in Geosciences. New York, Springer-Verlag, 167-175.

> "We present observational evidence of the dynamic linkages between ENSO and Northern Hemisphere (NH) ice conditions over the past 135 years. Using Wavelet Transform (WT) we separate statistically significant components from time series and demonstrate significant co-variance and consistent phase differences between NH ice conditions and the Arctic Oscillation and Southern Oscillation indices (AO and SOI) at 2.2, 3.5, 5.7 and 13.9 year periods. To study the phase dynamics of weakly interacting oscillating systems we apply average mutual information and mean phase coherence methods. Phase relationships for the different frequency signals suggest that there are several mechanisms for distribution of the 2.2-5.7 year and the 13.9 year signals. The 2.2-5.7 year signals, generated about three months earlier in the tropical Pacific Ocean, are transmitted via the stratosphere, with the Arctic Oscillation (AO) mediating propagation of the signals. In contrast the 13.9 year signal propagates from the western Pacific as eastward propagating equatorial coupled ocean waves, and then fast boundary waves along the western margins of the Americas to reach both polar regions, and has a phase difference of about 1.8-2.1 years by the time it reaches the Arctic."


Thanks, Jim

I have always thought that the mix of frequencies provides hints as to what is going on.

Incidentally, I ran another wavelet scalogram and the amount of agreement between model and data looks on the surface quite remarkable. There are no breaks in the model, and so I did not have to invoke a "climate shift" event at any point in the time series, like I did earlier at 1980.

![wavelet](http://imageshack.com/a/img537/6551/tVNMMu.gif)

A wavelet scalogram is like a diffraction pattern in that it is a concise way of showing all the contributing frequencies. But it does take time and practice before one can start to intuitively interpret the meaning at a glance.

I still need to find a metric that, when applied to the scalogram, quantifies this agreement.


> "Does that address your concern?"

no, unfortunately not, but I am currently too tired to explain that.

by the way I am currently a bit irritated about your QBO oscillations in your diagram at https://forum.azimuthproject.org/discussion/comment/14521/#Comment_14521

Are these really the wind speeds at 70hPa height?


nad points out:

> "by the way I am currently a bit irritated about your QBO oscillations in your diagram at https://forum.azimuthproject.org/discussion/comment/14521/#Comment_14521 Are these really the wind speeds at 70hPa height?"

Sorry, I don't place units on these because I treat the values as Arbitrary Units when doing time-series analysis.

Does that address your concern?


> "Sorry I don't place units on these because I treat the values as Arbitrary Units when doing time-series analysis. Does that address your concern?"

No. I am not concerned about the speed units but about the height.


nad, take a look at the raw data: http://www.geo.fu-berlin.de/met/ag/strat/produkte/qbo/singapore.dat

> Monthly mean zonal wind components ( 0.1 m/s) at Singapore (48698), 1N/104E

So when you see values that reach 100, they should actually be multiplied by 0.1 to get wind speed in m/s.

Or is it confusion about what height means? In meteorology, atmospheric height is often described in terms of an equivalent pressure reading, called the geopotential height. So you get a quick feel for how thin the air is. Of course, the thinner the air is, the faster it can blow since it carries less momentum. So at the 70 hPa level the pressure is 0.07 of the standard sea-level reading of 1000 hPa.


As I wrote [here](https://forum.azimuthproject.org/discussion/comment/14529/#Comment_14529), I am sort of still checking whether the ugamp page shows a longitude-independent or averaged image. Looking at your oscillations for Singapore (approximately at the equator) at 70hPa, it looks as if easterlies and westerlies last approximately the same time at 70hPa. That's different from [fig 1 of the ugamp page](http://www.ugamp.nerc.ac.uk/hot/ajh/qbo.htm). So it seems one has to find out what a "UKMO assimilated dataset" is. argh.


Latest NOAA blog entry on ENSO:

http://www.climate.gov/news-features/blogs/enso/challenges-enso-today%E2%80%99s-climate-models

This post touches on the characterization of data via models and what counts as a suitable metric for fitness:

> "Two types of metrics help us to do this: “performance metrics” and “process-based metrics.” The first type is based on statistics of some key indicators, such as impacts related to ENSO or SSTs. One example is to measure the variance or standard deviation of SST in the eastern equatorial Pacific. Up until recently this metric was the primary focus of modeling groups to evaluate ENSO in their model development process. But detailed studies have shown that the performance metric can look correct, but result from the wrong balance of processes or compensating errors (as seen in the last figure). A so-called “right answer” for the wrong reasons. Hence the recent advent of process-based metrics. The much harder challenge for modeling groups is now to get the right statistics (performance metrics) for the right reasons (process-based metrics). The synthesis of ENSO model evaluation provided in the latest IPCC report for example shows that if the latest generation of models (CMIP5) exhibits improved performance metrics, the process-based metrics are still an issue."

It kind of got lost in the shuffle, but besides tuning with respect to the correlation coefficient, I also minimize the difference between the summed variance of the two time series when I plot the agreement, so that the following two data sets have essentially the same variance:

![enso](http://imageshack.com/a/img673/9976/OX8MUE.gif)

Many of the machine learning algorithms such as Eureqa are starting to apply "hybrid" optimization criteria which apply both "shape" fitting a la CC and "scaling" error minimization such as mean-squared error. This is easy enough to do by minimizing the product $(1-CC) \cdot MSE$.
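A minimal sketch of that hybrid criterion, in pure Python (the sample series is a placeholder, not real ENSO data):

```python
# Minimal sketch of the hybrid criterion described above: minimize
# (1 - CC) * MSE, combining "shape" agreement (Pearson CC) with
# "scaling" agreement (mean-squared error). Pure Python; the sample
# series below is a placeholder, not real ENSO data.
from math import sqrt

def pearson_cc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def mse(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def hybrid_objective(model, data):
    return (1.0 - pearson_cc(model, data)) * mse(model, data)

data = [0.1, -0.4, 0.9, -0.2, 0.5]
perfect = hybrid_objective(data, data)   # 0.0: CC = 1 and MSE = 0
```

Minimizing the product forces the optimizer to get both the waveform shape and the amplitude right at once.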

I wrote more here: http://contextearth.com/2015/04/28/climate-science-is-just-not-that-hard/

The right answer for the wrong reason is the bigger challenge to overcome. Is the process over-fitting? Enough compensating factors can be combined to force a good fit. That is essentially where the Akaike Information Criterion comes in, as it penalizes a model for having too many parameters.
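For reference, the least-squares form of the Akaike criterion can be sketched as follows (the RSS values and parameter counts are made-up numbers for illustration):

```python
# The Akaike Information Criterion mentioned above, in its
# least-squares form: AIC = n * ln(RSS / n) + 2k. Extra parameters
# (k) must buy a real drop in the residual sum of squares (RSS) to
# pay their way. The numbers below are made up for illustration.
from math import log

def aic_least_squares(rss, n, k):
    return n * log(rss / n) + 2 * k

n = 1000   # number of samples
simple = aic_least_squares(rss=50.0, n=n, k=4)
complex_ = aic_least_squares(rss=49.5, n=n, k=12)
# Eight extra parameters shaved only 1% off the RSS, so the simpler
# model gets the lower (better) AIC score.
```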


Since this thread is devoted to revisiting previous discussions of ENSO, I thought to revisit [this Azimuth Forum thread](http://forum.azimuthproject.org/discussion/1451/enso-proxy-records#latest) on ENSO proxies.

Early on I discovered that there is a wealth of coral proxy data that has been calibrated to match 20th century SOI and NINO34 measures. The climate scientists thus have the ability to go back hundreds of years to estimate the strength and timing of the southern ocean oscillations.

As a substantiation for the SOI model that I have been analyzing, it is straightforward to get reasonable agreement to the proxy data sets by assuming roughly periodic QBO and CW as forcing factors. The following figure was a fit from last year to the coral proxy data set referred to as the Universal ENSO Proxy (UEP):

![full](http://imageshack.com/a/img905/1551/syejKr.gif)

More recently, I thought to use an optimized model fit (see msg #80 above) for the last 130 years to the NINO34 and SOI and then apply that *unmodified* to the coral proxy data extending back to 1650. This is the unadorned match from 1880 to 1970 (where the proxy records end):

![latest](http://imageshack.com/a/img537/4126/iJN0Zi.gif)

The correlation coefficient is 0.48, which should be considered in the context that the data only has yearly resolution. This is essentially a calibration confirmation that the model fit to NINO34 transitively applies to the UEP data.

Now let's take a different 90 year slice, this one starting from 1700 and extending to 1790.

![earlyA](http://imageshack.com/a/img538/1631/8tCI1o.gif)

In essence, the 1880 to 1970 fit was used as the training interval, and we solved the DiffEq in reverse, by integrating backwards in time. The fit is very good considering that we are going back in time almost 200 years before the start of the 20th century. The histogram shows the number of excursions that match in sign between model and data. A nearly 2-out-of-3 predictive count is certainly outside of chance, and is actually *greater* than for the calibrated training set.

In practice, it is difficult to maintain coherence over this long a time range, as small deviations in the modeled periodicities will amplify the further back one goes. Due to the imprecision in an estimated period, such as for QBO or the Chandler wobble, one might expect the agreement to potentially go in and out of phase.
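As a rough plausibility check of the "outside of chance" claim (assuming, purely for illustration, ~90 yearly excursions and a 2-out-of-3 sign-match rate; the actual counts are in the histogram), a coin-flip binomial model puts such a match count far outside chance:

```python
# Rough plausibility check (illustrative numbers: assume ~90 yearly
# excursions and a 2-out-of-3 sign-match rate; the actual counts are
# in the histogram, which is not reproduced here).
from math import comb

n = 90   # yearly points in the interval (assumed)
k = 60   # sign matches, i.e. roughly 2 out of 3 (assumed)

# One-tailed binomial probability of >= k matches by coin-flip chance
p_chance = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
# well under 1%, i.e. far outside what chance alone would produce
```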

And what do you know ... the fit for the 90 year interval from 1790 to 1880 appears out-of-phase with the model.

I have been evaluating a machine learning run on this data and it is indeed finding the 2.33 year QBO periodicity, but perhaps closer to between 2.34 and 2.35 years over the complete interval, which might be enough to lose a cycle over a 100 year time span. And this missing cycle, as well as other possible phase slips, would be enough to generate an interval that loses phase coherence.
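A back-of-envelope check of that phase-slip argument: a period uncertainty of 0.02 years accumulates roughly a third of a cycle of phase over a century, enough to move a fit from in-phase toward out-of-phase.

```python
# Back-of-envelope check: how far apart do a 2.33 yr and a 2.35 yr
# cycle drift over a 100 year span?
span = 100.0                              # years
drift_cycles = span / 2.33 - span / 2.35  # about a third of a cycle
drift_degrees = 360.0 * drift_cycles      # roughly 130 degrees
```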

What is also interesting is that I have seen coral proxy records that extend back to 1150 AD and find the same coherence over intervals ranging from 70 to 100 years (none of the records extend for extremely long periods, so I am using these intervals as is). A blog post describing this is [here](http://contextearth.com/2014/06/25/proxy-confirmation-of-soim/). Again, the same basic DiffEq model was used, but in this case I fit the individual data sets independently without using the present-day training interval. What would be nice to have is a continuous (i.e. unbroken) set of data that extends back 1000 years, in which case the SOI model could really be put through the paces.

But then again, consider that the science of El Nino prediction is still in the primitive stage of only being able to predict a few months to a year in advance. I think it is understood that there is a definite distinction between the ENSO quasi-cycles and having a full-blown El Nino, and I imagine some of this has to do with getting the strength accurately predicted, not just predicting the peak at the correct time.

In any case, I am going with the premise that working the model against historical data can only help the short-term predictions.

And the agreement that we do see between model, proxy data, and modern-day data is well beyond being a fortuitous match. So what more will it take to establish this as a standard model of ENSO?


I'm wowed like your correspondents over on contextearth and am greatly looking forward to some statistician somewhere checking it out so you can get version 2 of the paper published :). I can't see too much else you can do.

I found this [site](https://sites.google.com/site/medievalwarmperiod/Home/historic-el-nino-events) for evidence from historical documents going back to 1500. Everybody has selection bias, but I doubt that historical researchers have been trying to justify some 4.3 year number.

I stripped out the data [here](https://db.tt/LFXDYjCR) but haven't thought through how to compare it with your results - there weren't any going back that far last week, if you get what I mean.

Anyway, this continues to be a very educational, steep learning curve and a really enjoyable project.


Jim, Thanks for reminding me of the other historical data. I will have to do a higher resolution hindcast on the model and note where the strong El Nino spikes are and then compare against your list.

And you talk about a steep learning curve! As of 2013, when I started getting interested in ENSO, I was actually [writing commentary](http://moyhu.blogspot.com/2013/09/more-on-global-temperature-spectra-and.html) on Nick Stokes' blog that it was likely a Red Noise behavior. Nick and other commenters [dissuaded](http://moyhu.blogspot.com/2013/09/more-on-global-temperature-spectra-and.html?showComment=1381372335393#c6934134888318331683) me of this notion, but now that we may be getting somewhere, I wish they would get drawn back into the discussion! Something has to spark the interest of Dr. Stokes, who I know is a fluid mechanics/statistics whiz and is also a member of the Azimuth Forum. He, more than anybody at our level, may be able to ascertain whether this model is all wet.

I will also keep trying to get the paper published somewhere. Nothing is published yet, and the submission is still languishing on arXiv.


So:

> jim@nixos:~]$ cat ElNinoEvents1500-2015.dat | wc -l
> 123

Except this is misnamed, as the list is from 1500-2009 :).

(2009-1500)/123 = 4.1382


Hmm, the nominal resonance condition in the sloshing model has a period of 4.25 years. This is close to the average separation between El Nino events.


509/4.25 = 119.7547 so if there were 3 false positives then you'd be bang on the nail.


Webhubtel wrote:

> Thanks John, I checked for spellings of El Nino in the paper and didn't find anything odd.

It's spelled "[El Niño](https://en.wikipedia.org/wiki/El_Ni%C3%B1o)", not "El Nino".

In Spanish the "ñ" is pronounced differently than "n", and that affects how you pronounce "[El Niño](http://dictionary.cambridge.org/us/pronunciation/british/el-nino)" - click the link to hear it.

Thanks!

Here is another analysis in Fourier (frequency) space

Consider the sloshing differential equation

$ f''(t) + (a + q(t)) f(t) = h(t) $
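As an aside, the time-domain equation can also be integrated directly, which is a useful sanity check on the frequency-domain reasoning below. Here is a minimal sketch with a hand-rolled RK4 stepper; the coefficients `a`, `b`, `w0` and the forcing `h(t)` are illustrative placeholders, not fitted ENSO values:

```python
import math

# Direct time-domain integration of f''(t) + (a + b*cos(w0*t)) f(t) = h(t)
# with a simple RK4 stepper.  All coefficients here are illustrative
# placeholders, not fitted ENSO values.
a, b, w0 = 1.0, 0.2, 2.5

def h(t):
    return 0.1 * math.cos(0.5 * t)   # placeholder forcing

def deriv(t, f, fp):
    # state is (f, f'); returns (f', f'')
    return fp, h(t) - (a + b * math.cos(w0 * t)) * f

def integrate(t_end=100.0, dt=0.01):
    t, f, fp = 0.0, 0.0, 0.0
    while t < t_end:
        k1 = deriv(t, f, fp)
        k2 = deriv(t + dt / 2, f + dt / 2 * k1[0], fp + dt / 2 * k1[1])
        k3 = deriv(t + dt / 2, f + dt / 2 * k2[0], fp + dt / 2 * k2[1])
        k4 = deriv(t + dt, f + dt * k3[0], fp + dt * k3[1])
        f += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        fp += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return f

f_end = integrate()
print(f_end)   # bounded response for these off-resonance parameters
```

With these placeholder values the modulation is weak and off-resonance, so the response stays bounded; in a Mathieu instability band it would grow instead.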

in frequency space this would be

$ (\omega^2 + a) F(\omega)+ Q(\omega) \ast F(\omega) = H(\omega) $

The tricky part is in the second term, which is a convolution.

The Hill or Mathieu factor $q(t)$ is generally a sinusoid. The Fourier transform of a sinusoid is:

$ \mathcal{F}[q(t)] = \mathcal{F}[b \cos(\omega_0 t)] = b \frac{\delta(\omega-\omega_0) + \delta(\omega+\omega_0) }{2} $

doing the convolution

$ Q(\omega) \ast F(\omega) = b \frac{\delta(\omega-\omega_0) + \delta(\omega+\omega_0) }{2} \ast F(\omega) = b \frac{F(\omega-\omega_0) + F(\omega+\omega_0) }{2} $

What this does is bifurcate, or split, the spectral peaks.

$ (\omega^2+ a)F(\omega) + b \frac{F(\omega-\omega_0)+F(\omega+\omega_0)}{2} = H(\omega) $

This is an unusual-looking equation because of what look like "shifted" frequency terms. Ordinarily this is evaluated recursively, and for a delta forcing, a Mathieu function results for *F* when *q(t)* is a single sinusoid. Otherwise, one can picture in your mind that for a RHS of a QBO frequency corresponding to $2\pi/2.33$ rads/year and an $\omega_0$ corresponding to the Chandler wobble of $2\pi/6.43$, two sidebands will appear at the sum and difference of these two values.

![modulation](http://imageshack.com/a/img537/7186/mTFAd5.gif)

Note the two new spectral components at approximately 44 months and 21 months. Well, it seems that, in particular, 44 months appears in the power spectra of the ENSO metrics such as SOI and NINO34.
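The sideband arithmetic is easy to verify directly. A quick sketch, using the 2.33-year QBO and 6.43-year wobble periods quoted above:

```python
# Sideband check: modulation at the Chandler-wobble frequency splits the
# QBO forcing peak into sum and difference components.
# The 2.33 yr and 6.43 yr periods are the values quoted in the text.
f_qbo = 1 / 2.33          # QBO forcing frequency, cycles/year
f_cw = 1 / 6.43           # Chandler wobble modulation, cycles/year

lower = f_qbo - f_cw      # difference sideband
upper = f_qbo + f_cw      # sum sideband

print(12 / lower, 12 / upper)  # ~43.8 and ~20.5 months
```

So the difference and sum sidebands land at roughly 44 and 21 months, matching the spectral components noted above.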

The physicist Lubos Motl just rediscovered this fact last week: http://motls.blogspot.co.uk/2015/04/geomagnetic-44-month-cycle-seen-in.html And so did somebody named "willis" on the WUWT blog, as Lubos pointed out.

It is also found in the [examples documentation for Matlab](http://www.mathworks.com/help/curvefit/custom-nonlinear-enso-data-analysis.html) where they isolate the spectral components of ENSO:

> "In conclusion, Fourier analysis of the data reveals three significant cycles. The annual cycle is the strongest, but cycles with periods of approximately 44 and 22 months are also present. These cycles correspond to El Nino and the Southern Oscillation (ENSO)."

What do you know -- the spectral decomposition agrees with the model fit shown in comment #80 and explanations for the emergent significant frequencies are to be had. Now you understand why the 2.333 year = 28 month period is not observed directly in the ENSO power spectrum -- it becomes bifurcated. No wonder everyone was mystified by this missing link, as only a sloshing formulation will reveal this subtle transformation from a forcing frequency to a seemingly unrelated response frequency.

I think all the moons are in phase and the flowers are [blooming parsimoniously](http://books.google.com/books?id=iVkugqNG9dAC&pg=PA277&lpg=PA277&dq=blooming+parsimoniously).


If you want to be taken seriously, quoting obsessive, right-wing ideological bigots whose hangup is to deny that human activity is dangerously warming the planet won't get you very far with impartial scientists. You must have realised that they spend their time looking for equations which can remove human behaviour from the record. The correct conclusion is surely that there is no evidence that ENSO has anything to do with long-term climate trends.

Things go both ways though so, apart from looking at Atlantic dipoles as a next step, I'd expect that your model can be used to subtract from the record and expose the factors behind persistent warming at the climate scale.


Sorry Jim. I am simply looking for people that are doing similar work. On that particular blog that I referred to, there is a fellow in the comments who claims he can predict ENSO [accurately](http://www.global-warming-and-the-climate.com/images/ENSO-solar-tidal-impact.jpg). I looked at his approach and it looks very similar to the neural network training that Dara was working on within the last year. Dara thinks this fellow's approach is standard NN and also doubts that he can use it to predict with any accuracy. I also can't tell whether he is on the level. Is he over-fitting? Heck, am I over-fitting? Who knows, maybe he is doing something similar to me.

Yet I am not going to put my head in the sand and simply ignore what someone has to say because they have a different political outlook than I do. I subscribe to the theory that they can score "own goals" and unwittingly contradict their own case. This eliminates a path of inquiry that I don't have to go down.

My personal issue is that I have a difficult time trying to find anybody else working on the ENSO problem outside of the inner climate-science circles. I have to admit that I am naturally curious and can't help checking out what others are working on.

BTW, Atlantic dipoles are very weak and I don't think they are worth looking at.

> in frequency space this would be
>
> $ (\omega^2 + a) F(\omega)+ Q(\omega) \ast F(\omega) = H(\omega) $

That looks like a [Volterra integral equation](http://en.wikipedia.org/wiki/Volterra_integral_equation) to me.

Right.

> "Such equations can be analysed and solved by means of Laplace transform techniques."

so this part

$ Q(\omega) \ast F(\omega) $

was very straightforward to interpret via application of a Fourier transform instead of a Laplace -- it's a matter of taste which way to go. The convolution is trivial because sine waves turn into delta functions, and those are easy to deal with in frequency space.

The Wikipedia page on Volterra integral equations indicates that these find application for viscoelastic materials, which incidentally I came across the other day in the context of the Mathieu equation. Much like with sloshing, the induced deformation of the material creates an inertial change in the elastic modulus that requires such a formulation. In other words, the *q(t)* deformation response will have a similar shape as the *h(t)* forcing due to a frequency dependence of the elastic modulus. With time constants and periods as long as we are dealing with, this can easily occur, as the material can "catch up" to the forcing and thus mimic its shape. If the forcing is too fast, the elastic modulus would never have a chance to respond, and so would not be important.

This is what happens with the sloshing of ocean waters, as the inertial response due to angular momentum changes (i.e. the Chandler wobble) and wind shear (i.e. QBO) induces a similar deformation as a viscoelastic material would show. It makes the math harder of course, but that's how nature works. And dealing with this extra complexity is what computers are good for.

I came across Volterra integral equations in a completely different context: branching theory applied to macroevolution. Wikipedia mentions renewal theory, which is close. In my application, the equation could be solved iteratively. I think this notation

$ Q(\omega) \ast F(\omega) $

is confusing and will use

$ (Q \ast F)(\omega) $.

The idea is to write

$ F(\omega) = (\omega^2 + a)^{-1} [ H(\omega) - (Q \ast F)(\omega) ], $

take a guess at $F(\omega)$, say $F_0(\omega)=0$, and iterate like

$ F_{i+1}(\omega) = (\omega^2 + a)^{-1} [ H(\omega) - (Q \ast F_i)(\omega) ] .$

Numerically, the $F_i$s are evaluated at a finite number of evenly spaced $\omega$ values. In my case this procedure was guaranteed to converge.
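The iteration above can be sketched numerically in a few lines. This is a minimal illustration only: the values of $a$, $b$, the modulation frequency, and the forcing spectrum $H(\omega)$ below are placeholders, not fitted to any data.

```python
import numpy as np

# Fixed-point iteration F_{i+1} = (w^2 + a)^(-1) [H - (Q*F_i)] on a discrete,
# evenly spaced frequency grid.  a, b, w0 and the forcing spectrum H are
# illustrative placeholders.
w = np.linspace(-5.0, 5.0, 1001)
dw = w[1] - w[0]
a, b, w0 = 2.0, 0.5, 1.0

H = np.exp(-w**2)                      # placeholder forcing spectrum
Q = np.zeros_like(w)                   # Q = (b/2)[delta(w-w0) + delta(w+w0)]
Q[np.argmin(np.abs(w - w0))] = b / (2 * dw)
Q[np.argmin(np.abs(w + w0))] = b / (2 * dw)

F = np.zeros_like(w)                   # initial guess F_0 = 0
for _ in range(50):
    conv = np.convolve(Q, F, mode="same") * dw   # discrete (Q * F)(w)
    F = (H - conv) / (w**2 + a)

print(float(F.max()))
```

Because $(\omega^2 + a)^{-1}$ is bounded by $1/a$ here and the modulation amplitude is small relative to $a$, the map is a contraction and the iteration settles quickly.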


Paul, I apologise if that came across too harshly (0300hrs BST) but I don't want any excuses for not taking your work seriously.

I agree with your other points: I'll take evidence from anywhere; I've just learned about periodicity analysis from WUWT, but what he and Lubos appear to have done with Mathematica doesn't seem close to your results. I had to wonder whether they just copied you without credit :P ? Glad to see sunspots disappearing from the scene.


Jim, I did coax Per to come over to this forum and share his neural network model. So something good may come out of my excursion to foreign territory :)

I've found that straightforward periodicity analysis does not work for ENSO and certainly does not give a huge amount of insight, apart from potentially isolating the strongest signals.

I am thinking that instead of periodicity analysis, we need something more like a bifurcation analysis. What Graham Jones is suggesting may give hints at a potential direction. With these nonlinear equations such as Mathieu and Hill, the periods can bifurcate and then provide feedback to the forcing. I have found once the fit starts locking in, it gets better when the modulating factors are duplicated as forcings.


In [comment #81](https://forum.azimuthproject.org/discussion/comment/14573/#Comment_14573), I used the historical ENSO coral proxy data as a validation test against an ENSO model fit of post-1880 direct measurements.

The fit was reasonable, but it went out of phase over one interval. I used a QBO period of 2.333 years for that model.

I have since tried again, this time using a QBO period of 2.366 years. Again the training period is restricted to post-1880, and the correlation coefficient for the entire span is a little over 0.42. The yellow areas show the divergence; it looks slightly worse the farther back it goes, but it never shows a negative CC over any interval.

![eup](http://imageshack.com/a/img537/696/IRZpCd.gif)

What is interesting about this value for the QBO period is that once I noticed where it was headed, I set it exactly to twice the value of the Chandler wobble period of 432.2 days:

2*432.2/365.25 = 2.366 years

The CW period beats against the yearly cycle and produces approximately a 6.5-year angular momentum change cycle, which is the other forcing factor in the model. (There is also a period of 50 years to model multi-decadal changes, and a few non-forcing factors such as TSI fitting the non-ENSO residual.)

I think there may be something inherent in the relation between the QBO period and the CW period. The QBO reflects the oscillation of the upper atmosphere, while the Chandler wobble could possibly be an angular momentum response of the Earth's mantle and ocean. How the period doubling (or frequency doubling, depending on the causal direction) arises is a mystery, making it potentially nothing more than a coincidence.

Azimuth has more than its share of people interested in topological puzzles, so perhaps someone has ideas outside of the chaotic-bifurcation explanation. Yet even that may yield some insight.
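The period-doubling and beat arithmetic above is simple to check. A quick sketch using only the 432.2-day Chandler wobble figure:

```python
# Beat of the Chandler wobble (432.2 d) against the annual cycle: the
# difference frequency gives the roughly 6.5-year cycle mentioned above.
cw_years = 432.2 / 365.25            # Chandler wobble period in years
qbo_like = 2 * cw_years              # doubled CW period -> ~2.366 years
f_beat = 1 / 1.0 - 1 / cw_years     # annual minus CW frequency, cycles/yr
print(qbo_like, 1 / f_beat)          # ~2.37 years and ~6.5 years
```

So the doubled wobble period reproduces the 2.366-year QBO value, and the annual beat lands near the 6.5-year forcing cycle used in the model.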

I'm sorry to say I don't know what MW=0.1253 is. I haven't got the Clarke paper to hand, so neither do I know what CharFreq = 0.42 represents.

Jim, The CF is the characteristic or resonant frequency factor of the mean-valued wave equation (aka the *a* term in comment #89). Clarke said this was about 4.25 years, and I am using 4.36 years for this fit.

The 0.42 actually represents the correlation coefficient between the model and the data.

I indicated that there appears to be a 50-year cycle to represent multi-decadal changes. The MW of 0.1253 is ~2$\pi$/50. This is a well-known variation in the Pacific ocean and perhaps the same mechanism behind the long-term Pacific decadal oscillation.

There is also an observed long-term precession in the earth known as the [Markowitz wobble](http://hpiers.obspm.fr/eop-pc/models/PM/Markowitz.html) of a few decades. But now they are saying this is an artifact of measurement, so I doubt the 50-year cycle is due to that.

I was looking again at the data behind the UEP and it is quite noisy. Below I show the raw data (which has the opposite sign of what I am fitting to). The standard deviation fluctuates quite a bit and has about the same magnitude as the signal. This means that it is quite difficult to get a significant variance reduction against a model fit, as one is battling the noise while trying to isolate the signal.

![UEP](http://imagizer.imageshack.us/a/img539/5597/CEAViP.gif)

I know this isn't quite the same as trying to extract signals from SETI data, but as the combination of applying machine learning and narrowing possible physical correlations improves, it gets more interesting. Why isn't anyone else doing this? SETI essentially hijacks thousands of computers to try to find something that may not exist, yet there is more than likely something hiding in the ENSO signal that will have some practical benefit if we can just isolate it.

Thanks for the explanation and extra work. I might be able to post more when I've gone back through the PDO and the Russian ref. on arctic QBOs using EOF and wavelets papers I read yesterday.
