Pulling together various time-series of historical ENSO Proxy records.
The following link is a post on modeling the McGregor Unified ENSO Proxy reconstruction
http://contextearth.com/2014/08/26/soim-fit-to-unified-enso-proxy/
The following is an example of a Mathieu equation fit (via Mathematica) to the ENSO proxy over a span of 300+ years
$ y''(t) + b \cdot y'(t) + [a + q \sin(\omega t)] \, y(t) = k \cdot \sin(\nu t) + c_0 + c_1 \cdot t $
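As a minimal sketch of what setting this up in Mathematica looks like, here is a forward solve of the stated equation with NDSolve; every numeric coefficient below is an illustrative placeholder rather than a fitted value:

```mathematica
(* forward solve of the Mathieu-type equation above; all numeric values
   are illustrative placeholders, not the fitted coefficients *)
sol = NDSolve[
    {y''[t] + 0.1 y'[t] + (1.0 + 0.5 Sin[1.537 t]) y[t] ==
       0.3 Sin[2.67 t] + 0.001 t,
     y[1650] == 0, y'[1650] == 0.1},
    y, {t, 1650, 1976}];
Plot[Evaluate[y[t] /. sol], {t, 1650, 1976}]
```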
Comments
You could use ParametricNDSolve[] and parametrize y with additional parameters, maybe even using a or b, and then subtract the solution from the actual data and NMinimize the squared sum of that subtraction.

This is a least-squares fit method adapted for differential equations, attempting to fit actual data.

Dara
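A minimal sketch of that recipe, with placeholder data and frequencies (not the actual proxy fit): expose the Mathieu parameters through ParametricNDSolve, build the squared-sum residual, and hand it to NMinimize, which conveniently offers a "DifferentialEvolution" method:

```mathematica
(* placeholder "observations" standing in for the proxy series *)
data = Table[Sin[0.76 t] + 0.1 Sin[2.7 t], {t, 0, 100}];

(* expose a and q as free parameters of the solution *)
psol = ParametricNDSolve[
    {y''[t] + 0.1 y'[t] + (a + q Sin[1.5 t]) y[t] == Sin[2.7 t],
     y[0] == 0, y'[0] == 1}, y, {t, 0, 100}, {a, q}];

(* squared-sum residual between the parametric solution and the data *)
err[a_?NumericQ, q_?NumericQ] :=
  Total[(Table[y[a, q][t] /. psol, {t, 0, 100}] - data)^2];

NMinimize[{err[a, q], 0 <= a <= 4, 0 <= q <= 2}, {a, q},
  Method -> "DifferentialEvolution"]
```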
My general observation is that these differential equations have difficulty finding values that deal with a global max or min; in particular, f'=0 mostly gives you a local max/min. Therefore any differential equation, no matter how sophisticated and well thought through, is not adequate by itself for a min/max problem, e.g. fitting against data while minimizing the error.

Just like Newton-Raphson: you estimate a solution, but that is not enough; you need another step to move the estimate towards a final solution, which ends up being a minimization algorithm.

To solve this in general, Differential Evolution has been devised by many researchers, which allows for a large number of fast iterations to get to a GLOBAL min/max solution.
Dara captured the optimization issues perfectly, thanks for that.

I have tried using Mathematica's ParametricNDSolve in the past along with a search minimizer and found the results disappointing. It grinds away and finds solutions that are no better than what you can find by a search with systematic manual selection of parameters.

It would be great if I (or Dara or someone else) could crack this nut and figure out how to incorporate a Differential Evolution solver.

The problem for me is that it seems an overwhelming task to get up to speed on what Differential Evolution entails.
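For a rough idea of what the algorithm entails, here is a minimal sketch of classic DE (rand/1/bin) in Mathematica on a toy objective; F and CR are the usual DE control parameters, and nothing here is the planned GPU implementation:

```mathematica
f[x_] := Total[x^2];                       (* toy objective to minimize *)
np = 20; dim = 3; F = 0.8; CR = 0.9;       (* population size, dims, controls *)
pop = RandomReal[{-5, 5}, {np, dim}];      (* random initial population *)

(* one DE generation: mutate with scaled differences, crossover, greedy select *)
step[pop_] := Table[
   Module[{a, b, c, mutant, trial},
    {a, b, c} = RandomSample[Delete[Range[np], i], 3];
    mutant = pop[[a]] + F (pop[[b]] - pop[[c]]);          (* mutation *)
    trial = MapThread[If[RandomReal[] < CR, #1, #2] &,    (* crossover *)
      {mutant, pop[[i]]}];
    If[f[trial] <= f[pop[[i]]], trial, pop[[i]]]],        (* selection *)
   {i, np}];

pop = Nest[step, pop, 100];
First[SortBy[pop, f]]   (* best candidate found *)
```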
> It would be great if I (or Dara or someone else) could crack this nut and figure out how to incorporate a Differential Evolution solver.

That is on my plate; I am planning to do it in C on GPU servers in September, and obviously you could test and use it.

Paul, we need such algorithms to push your wonderful ideas to the next level.
Another set of ENSO proxy records is the Palmyra coral data reconstruction.

[1] Cobb, Kim M., Christopher D. Charles, Hai Cheng, and R. Lawrence Edwards. "El Nino/Southern Oscillation and Tropical Pacific Climate during the Last Millennium." Nature 424, no. 6946 (2003): 271–76.

Blogged here: <http://contextearth.com/2014/06/25/proxy-confirmation-of-soim/>

This set is actually one of the 10 reconstructions that comprise the Unified ENSO Proxy at the top of this thread. So if the data is available for all 10 sets, there are lots of possibilities for analyses. Dara, I hope the automated tools can help out.

It's like knitting or whittling as it is -- kind of relaxing doing DiffEq optimization by hand, but not really a productive use of time :)
> Dara, I hope the automated tools can help out.

Let me know; I am completely lost in this field of endeavour and its data management....
Paul, could you enhance your differential equations by adding more Taylor-series-like terms, t^2 and so on?
Dara, that is a good idea. I have the constant and first-order terms, which are meant to capture the offset and long-term trend. Adding the t^2 and t^3 terms would enable it to capture the slower fluctuations in the time series without having to add a specific sinusoidal term; see the extended equation below.
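Spelled out with the same symbols as the equation at the top of the thread, the suggested extension would read:

$ y''(t) + b \cdot y'(t) + [a + q \sin(\omega t)] \, y(t) = k \cdot \sin(\nu t) + c_0 + c_1 \cdot t + c_2 \cdot t^2 + c_3 \cdot t^3 $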
I have found these particular differential equations very sensitive to such Taylor expansions, which is good for modeling a complex system.
With your permission, I'll steal your code above and play around with it later tonight.
Paul, could you help me figure out the data for Fig. 3 and 4 in that ref paper above? I'd like to code them and do region plots accordingly.

I am curious whether those 2D probability plots could then be passed to machine learning algorithms.
This is an interesting fit to the Unified ENSO Proxy time-series using Eureqa, which uses a search algorithm somehow related to Differential Evolution.

The higher-complexity sinusoidal formulation it finds on the Pareto front (red dot) is bizarre:

$ UEP = \sin(7.824 \cdot Year + \sin(1.812 \cdot Year)) \cdot \cos(0.01399 \cdot Year + \sin(25.36 \cdot Year)) $

The correlation coefficient is still just 0.42.

Finding solutions of sine waves modulated by sine waves leads to the idea (via the chain rule) that a differential equation parameter may have a sinusoidal dependence on time. Of course the solution is iterative, as the chain rule doesn't converge. But iterating further is what gives rise to the Mathieu basis functions.

So there are tantalizing hints that a pattern exists.
Have you thought about using other metrics in place of correlation? I have a feeling that the correlation, used in such a raw way, might mislead you or divert our attention from a significant pattern of solutions.

Canberra and Cosine distance are good candidates to explore.

John might know a lot of other metrics which we could code; as you know, Mathematica allows for programmable distance functions.

Dara
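A small sketch of those candidates in Mathematica: CanberraDistance and CosineDistance are built in, and any function of two vectors can be plugged in as a programmable metric (the model and obs series below are placeholders):

```mathematica
(* placeholder model output and noisy observations *)
model = Table[Sin[0.76 t], {t, 0, 50}];
obs = Table[Sin[0.76 t] + RandomReal[{-0.2, 0.2}], {t, 0, 50}];

CanberraDistance[model, obs]
CosineDistance[model, obs]

(* a programmable distance: mean absolute deviation *)
myDist[u_, v_] := Total[Abs[u - v]]/Length[u];
myDist[model, obs]
```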
Dara, when you ask about data for Fig. 3 and 4, which paper is that referring to? The McGregor paper, the Cobb paper, my blog post, or some other one?
Quasi-resonant circulation regimes and hemispheric synchronization of extreme weather in boreal summer
Don't waste too much time; just point me in the right direction.
Eureqa has a number of metrics listed here:

<http://formulize.nutonian.com/documentation/eureqa/general-reference/error-metrics/>

The only reason that I like the correlation coefficient is that I don't have to fiddle with scaling, but I suppose I can do that with any metric as long as I scale each of the input data sets by its variance?

I think I understand what you mean by distance metrics. These somehow use both X (the amplitude) and t (time) in devising an error metric. The error is not set by X alone, so that slight phase shifts with respect to time are not penalized so severely.

Is that right? I have been searching for something like that to use.
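On the scaling point, a tiny sketch: Mathematica's Standardize puts a series on zero mean and unit variance before a metric is applied (the series below is a placeholder):

```mathematica
(* zero mean and unit variance, so scale-sensitive metrics compare fairly *)
series = Table[Sin[0.76 t] + RandomReal[{-0.3, 0.3}], {t, 0, 100}];
scaled = Standardize[series];
{Mean[scaled], StandardDeviation[scaled]}  (* ~0 and 1 *)
```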
Oh, that boreal summer paper is in the other thread: <http://forum.azimuthproject.org/discussion/1446/scatter-plot-laplacian-time/?Focus=12185#Comment_12185>

I will try to answer it there.
This is a good explanation of the various error metrics, including the ones that Dara mentions:

<http://numerics.mathdotnet.com/docs/Distance.html>
> These somehow use both X (the amplitude) and t (time) in devising an error metric.

Yes, and this error metric need not be correlation nor Euclidean.
> http://numerics.mathdotnet.com/docs/Distance.html

EXACTLY! If you find others that are not in that list, we could code them and provide them as a .m file.
I have been cranking on the Unified ENSO Proxy records (top of this thread) some more. This is over 300 years' worth of ENSO proxy data taken at yearly intervals.

In the modern-day non-proxy ENSO records, I [found three significant frequencies](http://contextearth.com/2014/05/27/the-soim-differential-equation/) in the context of a [Mathieu-like DiffEq](http://mathworld.wolfram.com/MathieuDifferentialEquation.html) formulation.

There is the nonlinear modulation of about 8.3 years, and a combined forcing frequency of a 6-year period and a 28-month = 2.33-year period. The latter 28-month period is common to the [Quasi-Biennial Oscillation](http://en.wikipedia.org/wiki/Quasi-biennial_oscillation) of stratospheric winds, where the periodicity is quite striking. The 6-year period has no obvious connection, but similar periods occur when looking at [periodic jerks](http://phys.org/news/2013-07-pair-year-oscillations-length-day.html) in the rotation of the earth, the [Chandler wobble](http://www.technologyreview.com/view/415093/earths-chandler-wobble-changed-dramatically-in-2005/), and the beat frequency between the anomalistic and draconic [lunar months](http://en.wikipedia.org/wiki/Lunar_month).

This plot is where I broke up the UEP time series into two approximately equal intervals, with the break point at the year 1818. Since this is a yearly sample, I did not filter the data any further.

* The 1.537 number is the Mathieu equation modulation in rads/year on the LHS
* The 1.02/1.01 numbers are the first and second half of the ~6-year forcing period on the RHS
* The 2.67/2.68 numbers are the first and second half of the QBO forcing period on the RHS

I took the liberty of trying to modulate the QBO with another small cyclic term, to model what appears to be a variation in the QBO forcing itself.

I believe that even though the correlation coefficient is "only" 0.42, this is a deceptively good fit, and it is consistent with the model I fit to the ENSO SOI data. I am not sure how much further I can tweak the fit, as it seems to be close to converging.

The noise in the data seems to be a factor, as the UEP is an ensemble of 10 different records, and there is considerable variance in the data from the different records. There is thus likely a "ceiling" to the correlation coefficient even if the fundamental underlying waveform is discovered. This ceiling may in fact be as low as 0.6 -- a guess based on what I have seen in the past with such a busy waveform.

The remaining issue is that there may be other combinations of parameters that provide an even better fit. Dara's evolutionary strategy may help out here.

Paul Pukite
Hello Paul,

Impressive work!

> Dara's evolutionary strategy may help out here.

I hope to start the preparations this weekend, if you do not mind being involved in some of the discussion, though I am not asking you to code ugly C CUDA code ;)

But it will help if we could provide the global optimizers for John and the group here; lots and lots of new horizons open up!

Hold on to these equations you are forming; these are computationally pointing in the right direction.

Dara
The importance of the Differential Evolution global optimizer is that it does not require a closed form for the function being optimized! In other words, you could feed it the solutions from your diff eq, which might not have any closed form (interpolations or lists of numbers).
Dara, I recently discovered that Microsoft Excel has an evolutionary search algorithm in its Solver utility.

It may be instructive for others to try it out if they have time to experiment.

I tried it out on a parametric version of a differential equation, whereby x''(t) was approximated in place as double differences between adjacent cells. Then I set up the DiffEq error between the LHS and RHS and minimized this error with the solver. For now I used the sum of abs(error).

It does get close to what I get with the full DiffEq solution, but I will have to keep experimenting with it.
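The same double-difference trick is easy to mimic in Mathematica. A sketch under placeholder data, frequencies, and parameter bounds (a, q, k are the free parameters):

```mathematica
(* placeholder sampled series standing in for the spreadsheet column *)
series = Table[Sin[0.76 t] + 0.1 Sin[2.7 t], {t, 0., 100.}];
n = Length[series]; dt = 1.0;

(* central-difference approximation of x''(t) at sample i *)
x2[i_] := (series[[i + 1]] - 2 series[[i]] + series[[i - 1]])/dt^2;

(* summed |LHS - RHS| mismatch of the Mathieu-type equation *)
resid[a_?NumericQ, q_?NumericQ, k_?NumericQ] :=
  Sum[Abs[x2[i] + (a + q Sin[1.5 (i - 1) dt]) series[[i]] -
      k Sin[2.7 (i - 1) dt]], {i, 2, n - 1}];

NMinimize[{resid[a, q, k], 0 <= a <= 4, 0 <= q <= 2, -2 <= k <= 2},
  {a, q, k}, Method -> "DifferentialEvolution"]
```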
> It does get close to what I get with the full DiffEq solution, but I will have to keep experimenting with it.

This approach is most promising: come up with some approximation of the LHS and RHS and then minimize the sum of absolute errors (you do not necessarily need squares).

You could use SVR's internal computational model on normed vector spaces, modify what is passed to the global optimizer, and make many variations of SVR in which the regressor actually fits differential equations instead of polynomials.

This is within our reach today! (I know I am going to regret these words by the end of September.)

Dara
What is more difficult about evaluating the error between the LHS and RHS of a differential equation's discrete-difference expansion is that both sides are heavily parameterized, so there is no fixed yardstick, in contrast to a data-vs-model error.

For example, in the DiffEq expansion the RHS forcing may be strong with lots of structure, or very weak with little structure, which has ramifications for how much regression the LHS requires. In the data-vs-solved-model case, the LHS=RHS combination is solved and that solution is compared to the data, whose shape is permanently fixed apart from a scale factor.

I think this makes solving the DiffEq the more robust approach, with the expansion serving as a quick-and-dirty approach.

Does that make sense?

I am "fishing" on this because of the possibility of using the two together, as in a directed search, where the DiffEq expansion is used to generate a preferred gradient of search for the full model solution. Or it may be that this has been done before, but I am not sure what the algorithm is called.
I ran Eureqa on the pure UEP dataset looking for periodicities, ignoring any DiffEq framework:

UEP = f(t)

The solution reveals a strong factor along the Pareto front (lower right) of 7.84 rads/year (from a modulated 7.82 with a side-band of 0.014). Since the data is only sampled once per year, this is likely an aliased period of an intra-annual component. IOW, anything above ~6.28 rads/year would be a sub-annual period.

The [anomalistic tidal month](http://en.wikipedia.org/wiki/Lunar_month) of 27.55455 days corresponds to 83.285 rads/year. But subtracting 12 * 2π from this gives 7.887 rads/year, which is very close to 7.84 rads/year; the arithmetic is spelled out below.

The reason this may be important is that a tidal month period can "beat" in sync with a particular time of year or season, and this may be amplified enough to be detectable via enough yearly samples.
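The folding arithmetic, as a two-line check:

```mathematica
(* aliasing check: the anomalistic month folded against once-a-year sampling *)
wMonth = 2 Pi*365.242/27.55455  (* 83.28 rads/year *)
wAlias = wMonth - 12*2 Pi       (* 7.89 rads/year, close to Eureqa's 7.84 *)
```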
Paul, thanks! Please, if you do not mind, keep a journal of all this so that I can join you very shortly to try the diff eq fitting.

BTW, it does not have to be differential equations; we could use any other system of equations, but for now that is the best I know.

CosxSin is a great term; how about we also add an exponential?
Dara, I like the way that Mathematica allows one to create a journal, mixing math with a narrative.

> CosxSin is a great term; how about we also add an exponential?

I am not sure how an exponential by itself will add much. The x'' factors seem much stronger than the damping x' factors. But feel free to add that if you like.
> Dara, I like the way that Mathematica allows one to create a journal, mixing math with a narrative.

Exactly! An amazing amount of time saved.

> I am not sure how an exponential by itself will add much.

You could control the shape of the solution more than with just the sin/cos... Of course it is just a thought, but we need to experiment with actual code... very very soon.
Here is another fascinating Eureqa search result. Recall that we found a 2.671 rads/year periodicity in the RHS forcing for the model of ENSO proxy data displayed in comment #23 above. You can also see that the 2.671 frequency has a slight modulation in it that is also sinusoidal over time. That is, it looks like:

$ \sin(\omega t + k \sin(\nu t + \ldots)) $

The base value for $\omega$ also matches the 28-month periodicity of the QBO. The following Eureqa result decomposes the QBO into that same period.

But notice also how Eureqa is decomposing the period into several layers of modulation. That is, it results in:

$ \sin(2.67 \cdot t + \sin(0.389 \cdot t + 1.77 \sin(0.16 \cdot t + \ldots))) $

There is also an enclosing sin modulation that captures the form of the peaks, which appear to be more rounded at the top.

This is a very important result IMO, as it potentially points to a significant forcing for ENSO. The modulation appears in both waveforms, and I will next try to see if this particular QBO decomposition improves the ENSO fit.

The QBO data only goes back to the early 1950s, but the ENSO proxy results go back to 1650, so if this fit improves the model correlation significantly, it may have some implications for making El Nino predictions.
This is a really, really good match! Congrats, Paul.

This solution is perhaps better used as a SIGN model, i.e. it is positive when the original signal is positive and negative the same way. That is very hard to come by for any time-series.

D
Dara, I will have to look at what the SIGN model entails. Is this what is also referred to as a +/- signed-bit model? So all you know is the change of sign? This approach is often used in qualitative reasoning models.

Then one can assign a correlation of unity if all (+)×(+) or (-)×(-) give a +1 value, and 0 if random. The amplitude is out of the picture, so one doesn't have to worry about slight non-stationary amplitude variations caused by noise, etc.

Good idea!

---

I was thinking about what the modulated sinusoid means. Say that a function is described as:

$ f(t) = \sin(\omega t + \sin(\nu t)) $

The first derivative of this is

$ f'(t) = (\omega + \nu \cos(\nu t)) \cos(\omega t + \sin(\nu t)) $

i.e., f phase-shifted by $\pi/2$ inside the sinusoid and multiplied by a time-periodic coefficient, which indicates that a time-modulated interaction may be generating the resulting modulation, a la a Mathieu or [Hill differential equation](http://en.wikipedia.org/wiki/Hill_differential_equation).

This gets back to my original premise that these perturbed differential formulations may be very common in understanding climate to first order.
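The chain-rule step is quick to verify symbolically:

```mathematica
(* verifying the derivative of the modulated sinusoid *)
f[t_] := Sin[w t + Sin[v t]];
Simplify[f'[t] == (w + v Cos[v t]) Cos[w t + Sin[v t]]]  (* returns True *)
```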
> So all you know is the change of sign?

Yes. They are very hard to come by and most precious; actually, many people prefer them to magnitude forecasts.
Great. The Hill differential equations are a good start for studying periodic functions as coefficients. You could replace the Fourier expansion with wavelet expansions that minimize an error function; I had given you a sample in MML which you could download.

So f(t) in the Hill equation could be the SVR regressor! In wavelet form, that gives you two things: a) local periodicity, b) minimization of some error function.

I would actually like to experiment with and code some of your ideas with this Hill thingy you found.

Dara
So my idea is: let's use SVR or wavelets or some other algorithm to estimate the coefficients of the diff eq (basically global minimization of some function) as the first step; then we solve the resulting equation and minimize its error function. Hence Differential Evolution is a great minimizer for these ideas.

My basic problem is to collect enough code and functionality so that we have an environment to start playing around with. D
Sounds good, Dara. I have the dogged persistence to pursue some of this the old-fashioned way by grinding it out, but I am sure anything you can do with SVR will leave me in the dust.

---

The Mathieu function (which I use more frequently) is a special case of the Hill equation, where only one periodic modulation term is allowed, versus any number for the more general Hill formulation; the two forms are compared below.

So, to recap and avoid future confusion, there appear to be modulations in both (1) the periodic forcing function and (2) the parameters of the differential equation, the latter being what makes it a Mathieu or Hill equation.

The complexity of the model is increasing, but that is OK as long as we can get the machine learning performance to scale accordingly.
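For reference, the standard forms: the Hill coefficient is an arbitrary periodic function (here written as a Fourier cosine series), of which Mathieu keeps only the first harmonic:

$ y''(t) + \left[ \theta_0 + 2 \sum_{n=1}^{\infty} \theta_n \cos(2nt) \right] y(t) = 0 \quad \text{(Hill)} $

$ y''(t) + [a - 2q \cos(2t)] \, y(t) = 0 \quad \text{(Mathieu)} $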
> I have the dogged persistence to pursue some of this the old-fashioned way by grinding it out, but I am sure anything you can do with SVR will leave me in the dust.

Woof!

All I know that could help you is the global minimizer, which would open up your horizons for computing and developing new equations (or inequalities).

D
More interim results on the Unified ENSO Proxy (UEP) DiffEq fitting.

1. The RHS forcing is getting closer to the QBO modulation as per #32. This 2.673 rads/yr (highlighted in yellow) is very close to the mean QBO period of 28 months.
2. I added a slight modulation term to the LHS nonlinear Mathieu sin() factor.
3. For the other RHS forcing, I modulated the 6.2-year period (the yellow-highlighted 1 rads/yr) slightly as well. I am calling this a Chandler wobble beat factor because that is what it is closest to in value.
4. (edit) I also filtered out a slight low-frequency background modulation to see if that would help it lock in; see the first line in the figure below.

The correlation coefficient is almost 0.42.

Recall that this is a 300+ year time-series, and the coefficients are very close to those used in the SOI fit covering only the last century -- see [Figure 2 here](http://contextearth.com/2014/07/17/correlation-of-time-series/). So the ENSO years 1650 to 1900 revealed by proxy have a similar structure to the non-proxy results post-1900, as far as the best-fit DiffEq structure is concerned.

I am becoming more confident that this DiffEq structure describes a significant underlying pattern in the dynamics of ENSO, and the trail is leading us closer to unraveling one important piece of the puzzle.

I was also thinking this is a good example of the training-set/test-set approach described in the latest blog post by David Tweed: <http://johncarlosbaez.wordpress.com/2014/09/03/science-models-and-machine-learning/>. The training set could be the ENSO SOI, and the test set the ENSO proxy data. Or vice versa.

Comments encouraged, as Feynman warned that it is easiest to fool oneself.

thanks, Paul Pukite
Paul, I am mostly freed up now, so I will start street-fighting with your equations.
It is a very good fit.
For the Unified ENSO Proxy data, I am finding that a tool such as Eureqa has a hard time generating solution fits much above a correlation coefficient of 0.4. And any solution that nears 0.5 has a very high complexity factor. My understanding is that the proxy records contain significant noise, and whatever is in the remaining 0.5 correlation fraction verges on unfittable white noise.

The amount of noise is estimated by looking at the variation in the data sets that go into the UEP. They do a good job of averaging across the ensemble reconstructions, but the standard deviation is high -- about the same value as the mean excursion.

Dara, good luck seeing what you can do, as this is a real signal-processing challenge -- to extract meaning out of what looks like randomness. If it was easy, someone would probably have already done the task :)

But remember that we have a somewhat solid baseline to compare against (both in the SOI and in the ENSO proxy data) and some physical rationale for what may be happening. I would recommend not seeding any prior solutions, to see what an unbiased differential evolution strategy will produce.
> Dara, good luck seeing what you can do

That is why I said street fighting is required ;)

We need multidimensional data and more than one variable for the diff eq.

Also, we need to work with the denoised signal; it might give a much better indication of the underlying equations than the raw signal.

D
If you want to denoise the signal, it might be best to use the modern-day ENSO data such as SOI and SST, as those have monthly resolution.

With the UEP ENSO proxy, all that is provided is yearly data, so it is difficult to tell how much of the noise fluctuates within a year and whether it is visible with respect to the sampling rate. That seems to be a recipe for generating noise that can't easily be eliminated, because the spectral composition is folded on itself; see <http://en.wikipedia.org/wiki/Temporal_anti-aliasing>

You may have better ideas about this than I do. Perhaps look at the residual and see if the frequency spectrum is flat? That would indicate white noise.
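A sketch of that whiteness check; the residual below is a synthetic stand-in for the actual UEP-minus-model series:

```mathematica
(* if the residual's spectrum is roughly flat, what remains is near-white noise *)
residual = RandomVariate[NormalDistribution[0, 1], 326];  (* stand-in residual *)

Periodogram[residual, PlotRange -> All]

(* a crude flatness figure: spectral peak relative to mean power *)
power = Abs[Fourier[residual]]^2;
Max[power]/Mean[power]
```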
I am juggling three primary data sets: the Southern Oscillation Index (SOI) time-series from 1880 to the present, the QBO time-series from 1952 to the present, and the Unified ENSO Proxy (UEP) records from 1650 to 1976.

The connection so far is:

1. The QBO provides the RHS forcing function to the LHS sloshing-dynamics Mathieu differential equation.
2. One DiffEq solution fits the UEP time series.
3. One DiffEq solution fits the SOI time series.

This is an experiment to see how well a fit to the UEP time-series works with the SOI time-series, using essentially the same RHS forcing function, apart from any initial conditions required to align the two series in amplitude and phase. They also share the same LHS DiffEq.

I used the UEP model fit from [comment #41 above](http://forum.azimuthproject.org/discussion/1451/enso-proxy-records/?Focus=12376#Comment_12376), repeated below, as the baseline. This is optimized over a 300+ year time span, with no favored interval.

## UEP results ##

Then, using the same RHS forcing and LHS DiffEq as in the UEP fit, we can apply it to the SOI fit. The parameters are kept constant except for the initial conditions for y' and y, and the constant and linear forcing terms, which are necessary to account for offset and drift.

## SOI results ##

The SOI fit is deceptively good considering that the UEP fit ends at 1975, while the SOI data continues to 2013. Both cases have a correlation coefficient slightly above 0.42.

This indicates that the UEP data provides an extended training interval, over which we can establish a periodic pattern for the RHS forcing and a stable LHS featuring a slightly modulated Mathieu factor. That solution, when applied to the SOI, generates a very good fit -- considering how difficult it is to map to such an erratic waveform. (It is very difficult to get a CC much above 0.6 using an arbitrary Fourier series of a handful of terms.)

*We are getting so close, I can taste it.*

One qualifier is that the UEP data is calibrated to the SOI for recent years. However, this calibration is not perfect, and the fit puts more emphasis on the 200 years prior to 1880.

The statistics question is how much of the correlation is due to real deterministic behavior and how much could be due to a random alignment of signals; one way to probe that with surrogates is sketched below. I have a feeling that the statistical significance of it being a real correlation is overwhelming, and that this behavior effectively describes the erratic dynamics of ENSO.

Feedback appreciated, because as usual, I don't want to be fooling myself.
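A sketch of that surrogate test: compare the achieved correlation against correlations with phase-randomized surrogates that share the data's power spectrum but have scrambled phases (the model and data series below are synthetic placeholders, not the actual fits):

```mathematica
(* synthetic stand-ins for the fitted model and the observed series *)
data = Table[Sin[0.76 t] + RandomReal[{-0.5, 0.5}], {t, 0., 300.}];
model = Table[Sin[0.76 t], {t, 0., 300.}];

(* quick-and-dirty surrogate: same spectral magnitudes, scrambled phases *)
surrogate[x_] := Module[{mag = Abs[Fourier[x]], ph},
   ph = Exp[I RandomReal[{0, 2 Pi}, Length[x]]];
   Re[InverseFourier[mag ph]]];

cc = Correlation[model, data];
null = Table[Correlation[model, surrogate[data]], {1000}];
{cc, Quantile[null, 0.99]}  (* a real signal should sit far above the null *)
```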