
## Comments

Christopher Rackauckas is a bit of a pied piper with his Julia work. What appeals to me is that the DSL looks pretty slick for certain classes of problems. From https://sciml.ai/roadmap/ this model of a chemical reaction network, which is the way I think about the problem domain:

<pre>
rs = @reaction_network begin
  c1, S + E --> SE
  c2, SE --> S + E
  c3, SE --> P + E
end c1 c2 c3
p = (0.00166, 0.0001, 0.1)
tspan = (0., 100.)
u0 = [301., 100., 0., 0.]  # S = 301, E = 100, SE = 0, P = 0

# solve ODEs
oprob = ODEProblem(rs, u0, tspan, p)
osol = solve(oprob, Tsit5())

# solve JumpProblem
u0 = [301, 100, 0, 0]
dprob = DiscreteProblem(rs, u0, tspan, p)
jprob = JumpProblem(dprob, Direct(), rs)
jsol = solve(jprob, SSAStepper())
</pre>

Or this statement from https://tobydriscoll.net/blog/matlab-vs.-julia-vs.-python/

> If you believe that V.conj().T@D**3@V is an elegant way to write \\(V^*D^3V\\), then you may need to see a doctor.
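For comparison, here is a rough plain-Python sketch of the same enzyme-kinetics system, with the mass-action ODEs written out by hand and integrated with a fixed-step RK4 instead of the SciML solvers. The rate constants and initial conditions are taken from the snippet above; everything else is my own scaffolding.

```python
import numpy as np

# Mass-action ODEs for the S + E <--> SE --> P + E network above
c1, c2, c3 = 0.00166, 0.0001, 0.1

def rhs(u):
    S, E, SE, P = u
    bind, unbind, cat = c1 * S * E, c2 * SE, c3 * SE
    return np.array([
        -bind + unbind,        # dS/dt
        -bind + unbind + cat,  # dE/dt
        bind - unbind - cat,   # dSE/dt
        cat,                   # dP/dt
    ])

u = np.array([301.0, 100.0, 0.0, 0.0])  # S, E, SE, P
dt, t_end = 0.01, 100.0
for _ in range(int(t_end / dt)):
    k1 = rhs(u)
    k2 = rhs(u + dt / 2 * k1)
    k3 = rhs(u + dt / 2 * k2)
    k4 = rhs(u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

S, E, SE, P = u
print(u)  # substrate S is largely converted to product P by t = 100
```

The mass balances (S + SE + P and E + SE) are conserved exactly by the scheme, which is a handy correctness check.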

The goal is to use other approaches to model a time-series as detailed as this:

![](https://imagizer.imageshack.com/img922/2166/TnJl2B.png)

Curious, what do you think of Mathematica?

I should complete my assessment of "why R". In short, it is at heart a functional programming language. (Yes, Python has functional subsets. Julia and Scala are also FPLs. At least I believe Julia is.) But the overwhelming reason for me is the [ecosystem of R packages](https://cran.r-project.org/web/packages/available_packages_by_name.html), which now number over 16,000, covering both the basics and the dark corners of statistics and numerical work. Understanding them is an education in itself. For example, I did not know there was a body of non-linear numerical optimization work, pretty classically posed, which relies upon evolutionary computation to do the optimizing. See the package [*nloptr*](https://cran.r-project.org/web/packages/nloptr/vignettes/nloptr.pdf).

Jan, Most of my opinions of programming languages are based on feel and trust, which means it's all subjective. I don't care for Mathematica in terms of its brittleness -- when I was using it regularly it would freeze my PC. When I looked at it closely, it was actually running in some sort of kernel mode -- I'm assuming just to get that last ounce of performance. That's a bit beyond my comfort level. It is impressive because they do go for the neat bells & whistles whenever they can. The syntax is a bit too read-only for me.

I get what you are saying about R. In the past I have R integrated into another executable just to get a specific library call. But then I noticed that it was leaving a never-ending chain of defunct processes, which wasn't so good :( Overall, I don't like the feel of the R syntax either.

When I look at a language I first try to decide whether it is procedural, functional, or declarative, or some mix. Julia is a mix, while a language such as Haskell is more purely functional. I am at the point of being more interested in solving problems and trying to pick the right programming approach to apply.


Since long-period tidal forcing is essentially a multiplicative expansion of 3 fundamental lunar cycles and the annual cycle, I set up a combinatorial expansion of the 3 sine waves synced to an annual impulse.

![](https://imagizer.imageshack.com/img924/7915/l33Rsl.png)

The fit was a result of 126 binomially expanded terms, with some more important than others. The best way to rank their impact is from the power spectra:

![](https://imagizer.imageshack.com/img923/2899/zxFLZ2.png)

These follow essentially the same amplitude ranking as expected ... Mf is always strongest, Mm is next, with the first cross-term Mf' following. The 27.09 day ostensibly [evective](https://en.wikipedia.org/wiki/Evection) term is fascinating in that it is also a complementary satellite sideband of the Mm to Mf peak via the 8.85 year perigee cycle. So when the tropical sinusoid of 2*Mf=27.32 days is multiplied by the 8.85 year cycle, it will create both the Mm=27.55 day (prograde) cycle and the 27.09 day (retrograde) cycle. And then the 2 satellite sidebands around Mm at 27.44 and 27.66 days are due to the 18.6 year nodal cycle. That's why there are 126 terms, instead of the 35 expected from just the annual + 3 lunar tidal expansion to the 4th power -- adding the longer cycles introduces the necessary satellite terms to the Mm factor.

Compare against this, which is obtained from the power spectrum of the Earth's length of day (LOD):

![](https://imagizer.imageshack.com/img924/7623/lo3vJB.png)

> "Similarly, the stabilized AR-z spectrum sees the two Mm tidal signals at 13.20 cpy (~27.67 days) and 13.48 cpy (~27.09 days) (see inset zoom-in) that are far from resolved in the Fourier spectrum."

Application of Stabilized AR‐z Spectrum in Harmonic Analysis for Geophysics, Hao Ding, Benjamin F. Chao, https://doi.org/10.1029/2018JB015890

Described at another level of detail in this blog post: https://geoenergymath.com/2020/08/02/combinatorial-tidal-constituents/
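As a sanity check, the 35 and 126 term counts quoted above follow from multiset combinatorics: a 4th-power expansion of a sum of n cycles has C(n+4-1, 4) distinct monomials, so 35 corresponds to 4 interacting cycles and 126 to 6 — consistent, if I'm reading it right, with the two long satellite-producing cycles entering as extra factors.

```python
from math import comb

def monomial_count(n_cycles, power):
    # number of distinct monomials in (x1 + ... + xn)**power,
    # i.e. the multiset coefficient C(n + k - 1, k)
    return comb(n_cycles + power - 1, power)

print(monomial_count(4, 4))  # annual + 3 lunar cycles -> 35 terms
print(monomial_count(6, 4))  # with two more long cycles -> 126 terms
```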

Watching a few of the Steven Strogatz lectures on Nonlinear Dynamics and Chaos (at 1.75X speed) and he is recommending graphical representations to understand the patterns.

https://youtu.be/ERzcine5Mqc

The practical problem with phase-space plots: the first two are clearly similar, but the 3rd is a wave composition that throws an obvious scaling pattern for a loop.

![](https://pbs.twimg.com/media/Ef846f3XkAICQ2W.png)
![](https://pbs.twimg.com/media/Ef85BU5XYAIfi54.png)
![](https://pbs.twimg.com/media/Ef85Q4PXoAEZtdk.png)

The LTE modulation fitting I am doing is essentially working backwards from the latter to the former, so there is quite a bit of cycle mixing that gets in the way. What I may try is phase-space plots on shorter intervals, which I can then display similar to a wavelet scalogram -- perhaps extending the plot into a depth dimension.
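A minimal numpy illustration of the problem (with made-up periods): a single sinusoid traces a closed ellipse in the (x, dx/dt) plane, so its normalized phase-space radius is essentially constant, while a two-component composite wave makes that radius wander all over the plot.

```python
import numpy as np

t = np.linspace(0.0, 200.0, 20001)
dt = t[1] - t[0]

def phase_radius(x, dt):
    # normalized "radius" of the (x, dx/dt) phase-space trajectory,
    # with each axis rescaled by its own standard deviation
    v = np.gradient(x, dt)
    return np.hypot(x / x.std(), v / v.std())

pure = np.sin(2 * np.pi * t / 7.0)                 # single cycle: closed ellipse
mixed = pure + 0.8 * np.sin(2 * np.pi * t / 3.1)   # incommensurate composite

r_pure = phase_radius(pure, dt)
r_mixed = phase_radius(mixed, dt)
print(r_pure.std(), r_mixed.std())  # the composite wanders far more
```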

Following from the previous comment, this is the phase-space (Lissajous) pattern of ENSO model amplitude vs. dAmplitude/dt. This only features the primary dipole LTE modulation of ~2.57, so it pushes the modulation "loop eyes" toward the center, as indicated by the yellow.

![](https://imagizer.imageshack.com/img923/5120/oj24vE.png)

The shape is a parallelogram, which may reflect hysteresis via the lag after the seasonal impulse. This is substantiation of a charge-discharge type mechanism, where the lag after the seasonal impulse is the slow discharge, and the tidal force at the impulse defines the charge.

This is what the equivalent looks like for QBO, which features a lower LTE modulation since the QBO is closer to a 0 wavenumber (monopole behavior instead of dipole).

![](https://imagizer.imageshack.com/img922/3835/F6Yfjt.png)

See this for a plasma charge-discharge: https://electronics.stackexchange.com/questions/122829/gas-discharge-v-q-lissajous. The chart below is easier to understand because it doesn't have the non-linear Mach-Zehnder-like LTE modulation.

![](https://i.stack.imgur.com/XhxSo.png)

There are other examples of this hysteresis-like behavior involving stiction (sticky friction that gets released) and in seismology, with the same sticky release on faults but w/o the cyclic behavior. One reason to keep up-to-date with earthquake-related research.

There's much structure in the ENSO model phase plots. By multiplying the points by the strength of the annual impulse at that time, the phase-plots give away the source of the alignment.

Leading month pulse:

![](https://imagizer.imageshack.com/img924/6004/xXSmHF.png)

Trailing month pulse:

![](https://imagizer.imageshack.com/img924/1685/9jsZGd.png)

There's also a 3-D quality to these plots, akin to a skewed perspective view of a box. This exercise is essentially one of extracting a geometric order out of what was once a seemingly erratic time-series.

. . .

The erratic translation of the squared contours is due to a square wave that's staggered by a primary tidal cycle which is incommensurate with the annual impulse. IOW, if the sine wave and the annual impulse were the same period, a lagged impulse response would create a perfect square wave, not one that looks like this:

![](https://imagizer.imageshack.com/img924/9279/yXZ6UM.gif)

These are great, Paul.

A little rambling conversation below.

Something which I struggled with for the SARS-CoV-2 deaths time series with these is coming up with uncertainty drapes to put over the phase traces. Do you know what your uncertainties are? Are they thinner than the lines themselves? I needed to figure out a way of displaying uncertainties in one derivative and the next in two dimensions as well.

I have a solution now, for mine, but a couple of things have kept me from finishing the post at my blog, the latest being that my mainstay workstation suffered a blow from an errant update that killed my UAC privileges. I got those back after a struggle, but somehow in the process I have corrupted USB entries (I believe) in the Device Manager, and I cannot get my keyboard or mouse to work. I am trying a few things, but I may end up needing a new system. I hate that but I don't know what else to do.

I *should* have allowed a remote login on the system, but I was reluctant for security reasons. Still, I'd consulted deep expertise. (Ed Nisley, a PE and all-around fab engineer I know from IBM days, writes a blog, [The Smell of Molten Projects in the Morning](https://softsolder.com/), and he has a couple of ideas. He used to write for *Dr Dobbs Journal*.) We'll see. It's tough because since March (2020) I'm now forcibly semi-retired, and so there's a budget.

I'll get to it one of these days. The worst thing is the interruption in my process.

Nothing important is lost, since the system is deeply backed up. It's just inconvenient to move from a 4 core 64 GB RAM system down to a Chromebook. Here I'll need to run RStudio, probably in the Cloud.

I try to deal with uncertainties by routinely going to the higher-resolution time series, such as the 5-day MJO and daily SOI data sets. So the fits become multi-scale and the uncertainties can potentially be narrowed down.

High-resolution (5-day) MJO model fit:

![](https://imagizer.imageshack.com/img924/1622/uqsYIo.png)

Back extrapolation to historical SOI:

![](https://imagizer.imageshack.com/img923/7056/2tDqvq.png)

The forcing for SOI and high-res MJO is aligned, not degrading at all in regions that are outside the training interval (prior to 1980):

![](https://imagizer.imageshack.com/img922/1471/xyAvMw.png)

The LTE modulation over the entire span (above following) and just over the post-1979 MJO data interval (below following):

![](https://imagizer.imageshack.com/img922/1140/Xw2U8O.png)

The power spectrum of the forcing shows the expected 13.66 day tropical fortnightly signal, but nearly inseparable from the next strongest, the 27.55 day anomalistic monthly signal. With the annual impulse mixing, these show up as 3.795 year and 3.917 year periods, which thus gives rise to a long beat frequency of 121 years, which is likely the low-frequency peak.

![](https://imagizer.imageshack.com/img923/1636/CmbPVu.png)

There's a whole world of signal processing approaches available that has barely been scratched.
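The aliased periods and the long beat can be checked directly: an annual impulse folds a fast tidal cycle to the distance of its cycles-per-year count from the nearest integer, and the beat of the two aliased periods is 1/(1/T1 - 1/T2). A quick arithmetic check, using the standard values 13.660791 days (tropical fortnightly, half the tropical month) and 27.55455 days (anomalistic month):

```python
YEAR = 365.242  # days per tropical year

def aliased_period_years(T_days):
    # annual sampling folds the per-year cycle count of a short tidal
    # period to its distance from the nearest integer
    cycles_per_year = YEAR / T_days
    folded = abs(cycles_per_year - round(cycles_per_year))
    return 1.0 / folded

t1 = aliased_period_years(13.660791)  # tropical fortnightly -> ~3.795 years
t2 = aliased_period_years(27.55455)   # anomalistic monthly  -> ~3.918 years
beat = 1.0 / abs(1.0 / t1 - 1.0 / t2)
print(t1, t2, beat)  # beat comes out at roughly 121 years
```

The beat period is sensitive to the fourth decimal place of the input periods, which is one reason the long-period peak is hard to pin down from short records.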

> "Could u explain why no asymmetry please?" — olympic (@aoyunxue0822), August 29, 2020, https://twitter.com/aoyunxue0822/status/1299531875153661960

1. Climate variability simulation predictions don't work
2. Fluid dynamics contains difficult math
3. When the math does work, can't understand fully why

*phenomenological*: "A phenomenological model is a scientific model that describes the empirical relationship of phenomena to each other, in a way which is consistent with fundamental theory, but is not directly derived from theory."

This blog post by Tao provides an indication of how difficult the math is to solve in a generalized, fully 3D Navier-Stokes formulation (as defined per the Clay prize). Interesting that these points that Tao makes are all elements of the LTE solution for ENSO:

https://terrytao.wordpress.com/2007/03/18/why-global-regularity-for-navier-stokes-is-hard/

Even if one gets close, you may not be that close, unless you choose wisely:

> "... so there is basically no chance of a reduction of the non-perturbative case to the perturbative one unless one comes up with a highly nonlinear transform to achieve this (e.g. a naive scaling argument cannot possibly work)."

Strategy 2:

> "... we are thus left with Strategy 2 – discovering new bounds, stronger than those provided by the (supercritical) energy. This is not a priori impossible, but there is a huge gap between simply wishing for a new bound and actually discovering and then rigorously establishing one. Simply sticking in the existing energy bounds into the Navier-Stokes equation and seeing what comes out will provide a few more bounds, but they will all be supercritical, as a scaling argument quickly reveals. The only other way we know of to create global non-perturbative deterministic bounds is to **discover a new** conserved or monotone quantity. In the past, when such quantities have been discovered, they have always been connected either to geometry (symplectic, Riemannian, complex, etc.), to physics, or to some consistently favourable (defocusing) sign in the nonlinearity (or in various "curvatures" in the system)."

Later:

> "Strategy 2 would require either some exceptionally good intuition from physics, or else an incredible stroke of luck."

So these "blue-sky" long shot approaches (elaborated more in Tao's blog post):

1. Work with ensembles of data, rather than a single initial datum.
2. Work with a much simpler (but still supercritical) toy model.
3. Develop non-perturbative tools to control deterministic non-integrable dynamical systems.
4. Establish really good bounds for critical or nearly-critical problems.
5. Try a topological method.

In conclusion,

> "while it is good to occasionally have a crack at impossible problems, just to try one's luck, I would personally spend much more of my time on other, more tractable PDE problems than the Clay prize problem, though one should certainly keep that problem in mind if, in the course on working on other problems, one indeed does stumble upon something that smells like a breakthrough in Strategy 1, 2, or 3 above. (In particular, there are many other serious and interesting questions in fluid equations that are not anywhere near as difficult as global regularity for Navier-Stokes, but still highly worthwhile to resolve.)"

then, in response to a question in the comments section,

> "From a physical viewpoint, it may well be that one of these modified equations is in fact a more realistic model for fluids than Navier-Stokes. But for the narrow purposes of solving the Clay Prize Problem, we're stuck with the original Navier-Stokes equation :-) ."

Tao says:

> "Self-promotion of one's own papers is against the stated comment policy of this blog. If you wish to discuss your own research papers, please do so using another venue, such as your own personal web pages."

---

When it comes down to it, a not-well-understood model that obeys Navier-Stokes and that will phenomenologically match the data in a parsimonious and plausible manner may be just what's needed to proceed.

I've followed Gell-Mann's work on complexity over the years and will now try my hand at using his approach to describe the simplicity of the models developed in this long thread.

![](https://pbs.twimg.com/media/Eg8i9OvXYAEsR8z.png)

Each model fits the data applying a concise algorithm -- the key being its conciseness, but not necessarily subjective intuitiveness.

Here's a quick breakdown:

#1. Say I was doing tidal analysis and fitting a model to a SLH tidal gauge time-series. That's essentially an effective complexity of **1** because it just involves fitting known sinusoid amplitudes and phases.

![](https://imagizer.imageshack.com/img922/8716/LClDrl.png)

#2. Same effective complexity of **1** for the dLOD, as it is straightforward additive tidal cycles.

![](https://imagizer.imageshack.com/img924/748/uIBabl.png)

#3. The Chandler wobble model that I developed has an effective complexity of **2** because it takes a single monthly tidal forcing and multiplies it by a semi-annual nodal impulse (one for each nodal pass). Just a bit more complex than #1 or #2, but evidently too difficult for geophysicists to handle in this day and age.

![](https://imagizer.imageshack.com/img924/9381/BnYSgd.png)

#4. The QBO model that I developed is also estimated at an effective complexity of **2**, as it is impulse-modulated by nearly the same mechanism as for the Chandler wobble of #3. Instead of a bandpass filter for #3 (Chandler wobble), it uses an integrating filter to create more of a square-wave-like time-series. Again, this is apparently at the breaking point of understanding for the atmospheric physicists.

![](https://imagizer.imageshack.com/img923/7210/7FQPAA.png)

#5. The ENSO model that I developed is an effective complexity of **3** because it adds the nonlinear Laplace's Tidal Equation (LTE) modulation to the square-wave-like fit of #4 (QBO), tempered by being calibrated by the tidal forcing model for #2 (dLOD). Of course this additional level of "complexity" is certain to be above the heads of ocean scientists and climate scientists, who are still scratching their heads over #3 and #4.

![](https://imagizer.imageshack.com/img922/9074/17Yhbw.png)

By comparison, most GCMs of climate behaviors have effective complexities much greater than this because (as Gell-Mann defined it) the shortest algorithmic description would require pages and pages of text to express. To climate scientists, perhaps the massive additional complexity of a GCM is preferred over the intuition required for enabling incremental complexity.

---

Since I started with a Gell-Mann citation, may as well stick one here at the end:

![](https://imagizer.imageshack.com/img923/7452/PfQefQ.png)
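The complexity-1 step really is this small: once the constituent frequencies are fixed, fitting sinusoid amplitudes and phases is an ordinary linear least-squares solve. A minimal sketch on synthetic data (the periods, amplitudes, and noise level here are illustrative stand-ins, not actual tidal constants):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 400.0, 2000)  # synthetic time axis, in days

# Synthetic "gauge" series with known amplitudes/phases plus noise
y = 1.5 * np.sin(2 * np.pi * t / 13.66 + 0.4) \
    + 0.7 * np.sin(2 * np.pi * t / 27.55 - 1.1) \
    + 0.1 * rng.standard_normal(t.size)

# Design matrix: one sin and one cos column per known constituent period;
# amplitude and phase then drop out of the linear solve, since
# a*sin(wt) + b*cos(wt) = A*sin(wt + phi) with A = hypot(a, b).
periods = [13.66, 27.55]
cols = []
for P in periods:
    w = 2 * np.pi / P
    cols += [np.sin(w * t), np.cos(w * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

results = []
for i, P in enumerate(periods):
    a, b = coef[2 * i], coef[2 * i + 1]
    results.append((np.hypot(a, b), np.arctan2(b, a)))
    print(f"P={P}: amplitude {results[-1][0]:.2f}, phase {results[-1][1]:.2f}")
```

The fit recovers the planted amplitudes (1.5, 0.7) and phases (0.4, -1.1) to within the noise.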

Solitons or cnoidal waves appear closely related to the LTE solution.

[Shallow Water Waves and Solitary Waves](https://arxiv.org/pdf/1308.5383.pdf)

In their periodic form, they appear sinusoidal but contain higher harmonics and can show folding or wave breaking. The transcendental functions *cn* and *sn* are the Jacobian elliptic functions. They are sinusoids for small alpha and break as it nears 1.

https://demonstrations.wolfram.com/ApproximatingTheJacobianEllipticFunctions/

Fortunately the LTE Mach-Zehnder-like solution is much simpler.

Look at this buoyancy experiment. An inverted immiscible oil layer is forced to suspend by applying a vibration to an injected air layer underneath it. The vibration prevents a Rayleigh–Taylor instability (i.e. glob dripping) from collapsing the layer.

https://youtu.be/gAsDcS-QW_U

from this article: [Floating under a levitating liquid](https://www.nature.com/articles/s41586-020-2643-8)

Have to look at the liquid stabilization more closely -- the formulation looks close to a Mathieu equation, which has long been used to describe sloshing in a liquid, among other behaviors. From the [supplementary doc](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-020-2643-8/MediaObjects/41586_2020_2643_MOESM1_ESM.pdf):

![](https://pbs.twimg.com/media/EhsKHuCXcAcJsXw.png)

[Mathieu equation](https://en.wikipedia.org/wiki/Mathieu_function)

\( \frac{d^2 y}{dt^2} + (a - 2q \cos 2t)\, y = 0 \)

The solution to the Mathieu equation (the Mathieu function) is known to have stable and unstable regimes for specific parameters, which reveal themselves as a harmonic-rich spectrum. With the sustained forcing, the Rayleigh–Taylor instability is restricted to these ordered harmonics, thus preventing collapse?
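The stable/unstable regimes can be mapped numerically with the standard Floquet test (a sketch, not the paper's own formulation): integrate two independent solutions over one period π of the coefficient and check the trace of the resulting monodromy matrix -- the solution stays bounded iff |trace| ≤ 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mathieu_stable(a, q):
    """Floquet test for y'' + (a - 2 q cos 2t) y = 0.

    Propagate the two basis solutions over one period (pi) of the
    coefficient; bounded solutions correspond to |trace| of the
    monodromy matrix <= 2.
    """
    def rhs(t, y):
        return [y[1], -(a - 2 * q * np.cos(2 * t)) * y[0]]

    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, np.pi), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    monodromy = np.column_stack(cols)
    return abs(np.trace(monodromy)) <= 2.0

# q = 0 is plain simple-harmonic motion (stable); a near 1 with a
# modest q sits inside the first parametric-resonance tongue (unstable).
print(mathieu_stable(2.0, 0.0))
print(mathieu_stable(1.0, 0.5))
```

Scanning `mathieu_stable` over a grid of (a, q) reproduces the familiar tongue diagram of the Mathieu stability chart.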


A climate scientist thinks reduced effective gravity is somehow weird: [tweet](https://twitter.com/airscottdenning/status/1306818499105316871)


https://twitter.com/tim_dunkerton/status/1306003350777860096

Not sure what this QBO finding is but will keep track.


Mostly unfiltered SOI ENSO time-series model. The high-frequency cycling is likely not noise but high-wavenumber standing waves or traveling waves that are solutions to LTE.

![](https://imagizer.imageshack.com/img923/5503/eWyosP.png)

The high frequencies are filtered out with the averaged response of the NINO34 ENSO time-series.

![](https://imagizer.imageshack.com/img924/7915/l33Rsl.png)
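The effect of that averaging can be sketched on a synthetic series (illustrative data only, not the actual SOI): a centered moving average suppresses the fast cycling while keeping the multi-year swing.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(600)  # months
slow = np.sin(2 * np.pi * t / 48)          # multi-year ENSO-like swing
fast = 0.5 * np.sin(2 * np.pi * t / 3)     # high-frequency cycling
noisy = slow + fast + 0.1 * rng.standard_normal(t.size)

# Centered 12-month moving average: the 3-month cycle averages to zero
# over the window, while the 48-month swing passes nearly unattenuated.
window = np.ones(12) / 12
smoothed = np.convolve(noisy, window, mode='same')

print(np.std(noisy - slow), np.std(smoothed - slow))  # residual shrinks
```

The window length matters: choosing it as a multiple of the fast period nulls that component exactly, which is why a 12-month average is a common choice for sub-annual cycling.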


@#468 is truly remarkable science. Wish all sources had the theory of their phenomena and the patient analysis so well developed!

On the other hand, I bet if there wasn't such a rush to application in some of these fields, we might collectively be farther along with understanding.

My system is back, by the way, so I look forward to cranking away on my phase plane plots for COVID-19 *with uncertainty clouds about*.

By the way, regarding #457, #458, and #459, the pitfalls of phase plane plots seen as projections on basis vectors corresponding to derivatives have an analogy with projections of data upon singular vectors or eigenvectors. Some of the time (I'm not sure how one says "most of the time") these separate out independent behaviors, say, when depicting those derived from spectral decompositions of circulants of (time) series. But once in a while there's a phenomenon where two basis vectors are needed to see the actual behavior, and they act as a couplet. Presumably, for some series, there might be a need for three or four. Else it's like "shadows on the wall of the cave".

So it's possible that for complicated systems, some behaviors are irreducible to the phase plane.

I'm not sure if this phenomenon is well known in the PCA world.
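The couplet phenomenon is in fact well documented for propagating patterns: a traveling wave decomposes into exactly two paired singular vectors of equal strength, and neither alone shows the propagation. A minimal sketch (synthetic field, illustrative only):

```python
import numpy as np

# Space-time field of a traveling wave: sin(kx - wt).
x = np.linspace(0, 2 * np.pi, 80)
t = np.linspace(0, 20, 400)
X = np.sin(3 * x[None, :] - 1.7 * t[:, None])

# sin(kx - wt) = sin(kx)cos(wt) - cos(kx)sin(wt): exactly rank 2,
# with the two modes carrying (nearly) equal energy -- a "couplet".
s = np.linalg.svd(X, compute_uv=False)
print(s[:4])  # s[0] ~ s[1], the rest ~ 0
```

Any PCA/EOF of such a field returns two quadrature patterns that must be read together, which is presumably the "two basis vectors acting as a couplet" situation described above.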


Jan said:

> "there's a phenomenon where two basis vectors are needed to see the actual behavior and they act as a couplet."

In wave dynamics, triads also appear prevalent as a collective behavior. This is explainable as a triad is required to exchange energy during wave bifurcation. See these two recent blog posts:

https://geoenergymath.com/2020/04/06/triad-waves/

https://geoenergymath.com/2020/05/17/double-sideband-suppressed-carrier-modulation-vs-triad/

![](https://imagizer.imageshack.com/img924/6263/Y0h3kC.png)

> Annotated triads from Davis et al showing how the triad identity (equation on the right) is equivalent to a mirror folding about 1/2 of the carrier forcing frequency (g), so that the side-band peaks (c) and (f) are mirror-folded about (g/2).


This is a bit of an aside, but as a methodological question, do you do your spectra as FFTs on weighted data? Or do you use multitaper methods? Although clearly trained in the former, with actual datasets, I tend to use the multitaper methods.
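For readers unfamiliar with the multitaper approach: it averages periodograms taken over a small set of orthogonal DPSS (Slepian) tapers, trading a little resolution for much lower variance. A minimal sketch using SciPy's `dpss` window generator (parameter choices NW = 4, K = 7 are conventional defaults, not anything prescribed by this thread):

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.1 * t) + rng.standard_normal(n)

# Multitaper estimate: average the periodograms obtained with K
# orthogonal DPSS tapers (time-bandwidth product NW = 4, K = 7).
tapers = dpss(n, NW=4, Kmax=7)
psd = np.mean(np.abs(np.fft.rfft(tapers * x[None, :], axis=1)) ** 2, axis=0)
freqs = np.fft.rfftfreq(n)

print(freqs[np.argmax(psd)])  # peak near the true frequency 0.1
```

Because the tapers are orthogonal, the K periodograms are approximately independent estimates, so the averaged spectrum is far less erratic than a single windowed FFT of the same record.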


Jan, nothing extra, because I usually compare the data against the model, so the same weighting on both time-series would not add discriminating power.

Example: the Fourier amplitude spectrum comparison below may not reveal anything different if the same weighting or tapering were applied to both model and data.

![](https://imagizer.imageshack.com/img922/1339/p1qc8l.png)

Yet it would be interesting if it did, so if you have some ideas, please share.


Paul, do you have an example dataset I could play with? A Github directory? I'd like things that need spectra made of them.

I probably won't get seriously into it until the week of the 19th, because we are going away for the holiday weekend and then I have a 3-day Data Science Annual Meeting online which will consume the rest of the week when we return.


Jan, This is one that I have used

https://github.com/pukpr/GeoEnergyMath/blob/master/nino34_soi.txt


The best advice to give someone trying to work geophysical fluid dynamics models is not to get stuck over absolute quantitative estimates but instead work on the relative behavioral patterns. As an example, consider the path of understanding that conventional tidal analysis took as the mathematical approaches matured. Initially, many scientists thought one could make absolute predictions of the sea-level-height change due to tides. Yet for reasons tied into self-gravitational pull and specific boundary conditions, it quickly became obvious that the absolute values derived from first-principles could just as well be calibrated via measurement and then all subsequent computations could be made relative to the calibration. But this only works if you know the parametric behavioral pattern to work from. So when you find a geophysics or climate paper that spends way too much time trying to calculate the absolute value of a particular measure, it means that they likely don't have a working pattern either.

Along these lines, there is the most intense discussion over approximately solving fluid dynamics equations that I have yet to encounter, focused on this submitted paper: ["Quasi-hydrostatic equations for climate models and the study on linear instability"](https://gmd.copernicus.org/preprints/gmd-2020-146/#discussion). This consists of over a dozen rounds of reviewer/author interaction (with a few pointed accusations, and a referee trying to cool things down).

Since there is no comparison to data, the long back-and-forth discussion amounts to arguing how best to close a set of fluid dynamics equations with selected approximations and reductions. Anything is possible by manipulating the math equations, but as I said in my initial review, all that matters is what works to describe the climate behaviors observed. Until that occurs -- as I said in a subsequent comment :

*"Otherwise, in the absence of a real-world context, there is no end in sight"* -- and the back-and-forth argument will continue.

Next, go read a recent blog page I wrote -- https://geoenergymath.com/the-just-so-story-narrative/ -- where I explain how marketing hype may get ahead of the quality of the science. There are still unresolved issues in climate science, many of these having to do with solving the prickly Navier-Stokes equations. As I said, the solutions may not be as chaotic as many are led to believe, but neither do they have the necessary patterns to apply to get out of this mess. What's left is that many of the papers are reduced to qualitative musings (the [just-so stories of Rudyard Kipling](https://en.wikipedia.org/wiki/Just-so_story)) that may sound plausible but don't lead to anything quantitative.


An "extra" dimension helps to show what happens when the solution to Laplace's Tidal Equations along the equator develops greater modulation -- equivalent to a breaking wave wrapping around a torus.

![](https://imagizer.imageshack.com/img922/9703/V7RV2c.png)

The wave can curl in the left- or right-hand sense, but since it's topologically confined along the equator there's no preference in direction. The lack of chirality allows the two curls to cancel, leaving only the z-projection.

https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch12


From a recent extensive [twitter thread](https://twitter.com/henrifdrake/status/1314893144312299520) started by a grad student in oceanography:

![](https://imagizer.imageshack.com/img924/3582/NqMNZS.png)

Is the math required all that advanced or is the issue that the constraints are not sufficiently bounded? If the math doesn't go anywhere, I can understand how students can get frustrated learning about equations that can't be straightforwardly applied.

The typical research paper I come across features a section where they lay out the partial DiffEq of geophysical fluid dynamics but barely go beyond that.

But when a paper does go beyond that, look at the "intense discussion" linked in comment #475 -- the back-and-forth soap opera has now reached a breaking point.

![](https://imagizer.imageshack.com/img922/9032/DTMfS6.png)

This is a bizarre scientific field -- everyone seems to be overmatched and outwitted by the data. The flailing about is a sign of their inability to resolve issues in the fundamental models. The curse of Navier-Stokes?


Well, at least in physical oceanography there is a plethora of special cases the student and scholar are expected to remember. Sure, these are all corollaries of Navier-Stokes, but their peculiar evolution depends upon the boundary and other conditions of the ocean medium. There are presumably similar phenomena on "waterworlds" in the Solar System, and on the gas giants, but their boundary conditions are different and, so, produce qualitatively different results. Another subfield is the study of wave phenomena.

There are disciplines which rely primarily upon memory for their expertise, Medicine being one. But in Geophysics, theory seems elevated, yet in practice and in the end there are a large number of special cases which demand memorization. It is not surprising to me at all that machine learning is helping make substantial progress here, because case analysis with hundreds of thousands of cases is precisely the kind of thing ML is good at. Nevertheless, I have heard ML dissed by some Big Names, and they have, to me, expressed how they wished people and students approached them and the People Who Know These Things with greater humility. My take is that it is a true priesthood. And I don't pretend I understand any of this, surely not as thoroughly as Paul or you, Dr Drake, or my friend Ray Pierrehumbert, or Prof Mark Jacobson. But, frankly, with this kind of attitude, apart from Ray's great book on climate, or Mark's *Atmospheric Modeling*, why *would* I want to learn? There are *so* many other interesting fields which are approachable and more egalitarian. I'm not special, and I'm not pretending I can make any contribution here, but if I feel this way, why wouldn't students?

Jan, That is so well put. Yes, all of that case-by-case domain expertise captured in a ML knowledgebase could be just the trick. The guru priesthood can then finally be codified.


Pertaining to this [Roundy article](https://journals.ametsoc.org/jcli/article/28/3/1148/106735) about applying EOF ([empirical orthogonal functions](https://en.wikipedia.org/wiki/Empirical_orthogonal_functions)) and PCA ([principal component analysis](https://en.wikipedia.org/wiki/Principal_component_analysis)) to ENSO and MJO, the figure below shows a set of two principal components that Roundy found. On the right is the phase relationship pattern between the two. Note that it doesn't fill up space very well -- and also that the orthogonality is somewhat in question for the two factors, as many points fall along a proportional x ~ y line.

![](https://imagizer.imageshack.com/img924/2791/wvVV0B.png)

It is then instructive to look at how orthogonality applies to solutions of Laplace's Tidal Equations. The fact that the solution factors are orthogonal is trivially true for the Mach-Zehnder-like solutions to LTE. Each solution -- sin( k1⋅f(t)), sin( k2⋅f(t)), etc -- is automatically orthogonal for different values of k, which are essentially different standing-wave patterns showing an average cross-correlation of zero over all time.
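That orthogonality claim is easy to spot-check numerically: the product sin(k1·f)·sin(k2·f) equals ½[cos((k1−k2)f) − cos((k1+k2)f)], which averages toward zero whenever f(t) sweeps through many cycles. A sketch with an illustrative forcing (not the thread's actual ENSO tidal forcing):

```python
import numpy as np

t = np.linspace(0, 500, 100_000)
f = t + 0.2 * np.sin(0.3 * t)   # illustrative forcing, assumed for this demo
k1, k2 = 3.0, 25.0              # wavenumber ratio ~ 8.3, as in the fit below

a, b = np.sin(k1 * f), np.sin(k2 * f)
corr = np.mean(a * b) / np.sqrt(np.mean(a**2) * np.mean(b**2))
print(corr)  # ~ 0: the two standing-wave patterns are effectively orthogonal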

Let's look at one canonical fit to the ENSO time series, which features two standing-wave M-Z LTE patterns where the ratio between the two values of k is approximately 8.3.

Below is the fit, with the tidal forcing input in the top panel and the decomposition of the two superposed M-Z LTE solutions in the lower panel. For the strong El Nino events in 1982 and 1998, the superposition of the peaks is constructive (explaining their large amplitude).

![](https://imagizer.imageshack.com/img922/3299/vlZVfK.png)

The charts below illustrate the phase relationship between the two components, revealing apparent fragments of a [Lissajous pattern](https://en.wikipedia.org/wiki/Lissajous_curve), which comes about from graphing a pair of parametric equations. Note the similarity to a pure non-fragmentary Lissajous curve created from two sine waves with the same relative amplifying factor of 8.3 in the lower graph. The unusual aspect of this comparison is that time is not the parameter of the upper curve, as you can see from the apparently random sizing of the points, thus explaining its fragmentary character.

![](https://imagizer.imageshack.com/img923/9828/6gyaPs.png)

In contrast to my results which clearly show orthogonality with an ergodic space-filling character, Roundy's results appear at most a rough initial heuristic. In other words, one approach reduces complexity in an elegant manner and the other one doesn't.
