
## Comments

That list is very useful. I've been happy to realise that it's a sort of curriculum for the project I've been trying to follow over the past few years. Even better, rather than a single journal article reporting negative results, it gives lots of hypothetical approaches you've tested. As digital real estate is almost costless, I think there's no good reason for academic publishers not to expand their journal T&Cs to accept similar descriptions of rejected theories as background to any proposed theory. I'm sure that would contribute to anybody's learning process. I think Lakatos might have liked this :).

Jim, yes. Scientific blogs could be renamed public [engineering notebooks](https://www.cusd80.com/cms/lib6/AZ01001175/Centricity/Domain/6705/engineeringnotebook1.pdf).

> "An engineering notebook is a book in which an engineer will formally document, in chronological order, all of his/her work that is associated with a specific design project."

Read the linked doc, and the only thing different is the typical proprietary nature of a notebook. All pages are

* Numbered
* Dated
* Signed by the designer
* Signed by a witness
* Include a statement of the proprietary nature of the notebook

OTOH, if one is trying to solve a health or environmental crisis, the idea of profiting from it is of questionable ethics.

The other bit is this:

* If you make a mistake, draw a line through it, enter the correct information, and initial the change.

Mistakes are important to document so as to prevent others from repeating them, equivalent to errata on a published document. That's part of the learning process that you mentioned.

I presented to the [International Conference on Learning Representations](https://iclr.cc/Conferences/2020/Schedule?showEvent=1306) 2020 a few days ago. The workshop was on [Integration of Deep Neural Models and Differential Equations](https://openreview.net/group?id=ICLR.cc/2020/Workshop/DeepDiffEq). The math/physics applied is at the level of Hamiltonian and Lagrangian approaches, and climate science is a significant application area, but it will take a while before they catch up.

https://youtu.be/PD-nzaWJgK0

Great. Here I'm pasting in your abstract from the conference site:

> "Key equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes. This in spite of recent research showing growing evidence of predictable behavior. This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator — i.e. no Coriolis force and a small angle approximation. The solutions to the partial differential equations are highly non-linear related to Navier-Stokes and only search approaches can be used to fit to the data."

Thanks David, I off-and-on get some feedback on these geophysical fluid dynamics models but occasionally get people that just rage. One PhD fellow who is considered an AGW skeptic and works for Boeing in wing design claims that it's pointless to do computational FD on climate problems because all flow is turbulent. I often wonder if these people are on to something based on their credentials and experience -- i.e. doing engineering aerodynamics has to count for something, right?

But then over time I notice that these same skeptics are essentially contrarian about anything that hints at social progress towards the common good.

For example, this is the same Boeing aero PhD that [has been a thorn in the side of climate scientists doing CFD](http://theoilconundrum.blogspot.com/2013/03/climate-sensitivity-and-33c-discrepancy.html) for years now, and it doesn't surprise me that he would say something like this (over at a climate science blog):

https://imagizer.imageshack.com/img924/3603/CpR8OR.png

And then you have people that stick up for these cultists; they delete comments that point out this stuff:

https://pbs.twimg.com/media/EXwxTZwWoAMCyod.png

https://pbs.twimg.com/media/EXwyhBCXsAIE3x2.png

I attended several of the virtual EGU sessions and took notes and captured all my online comments here:

https://geoenergymath.com/2020/05/10/egu-2020-notes/

The topics I concentrated on were ENSO, QBO, Chandler wobble, and geophysical fluid dynamics.

If you have a Copernicus account, you can still comment on all the presentations until the end of the month:

https://meetingorganizer.copernicus.org/EGU2020/meetingprogramme


@WebHubTel wrote:

> "One PhD fellow that is considered an AGW skeptic and works for Boeing in wing design claims that it's pointless to do computational FD on climate problems because all flow is turbulent."

Wouldn't that call for exploring computational models of turbulence, rather than giving up?

For example, cursory searching on "computational fluid dynamics turbulence climate" turned up:

* [New technique for modeling turbulence in the atmosphere](https://www.sciencedaily.com/releases/2018/08/18080717105), U.S. Army Research Laboratory, 2018.

and a book chapter on [Computational Fluid Dynamics in Turbulent Flow Applications](https://www.intechopen.com/books/numerical-simulation-from-brain-imaging-to-turbulent-flows/computational-fluid-dynamics-in-turbulent-flow-applications).

I don't have the expertise to evaluate these ideas, but if nothing else it sounds like an important research area. It ties in with the general theme of stochastic modeling.

David said:

> "Wouldn't that call for exploring computational models of turbulence, rather than giving up?"

I think turbulence needs to be evaluated if and when it occurs. As I hinted, the Boeing fellow is looking for an excuse to marginalize the research of others.

Here is a somewhat detailed follow-up analysis I did in the past few days, based on one of last week's EGU presentations, entitled *"Transition from geostrophic flows to inertia-gravity waves in the spectrum of a differentially heated rotating annulus experiment"*:

https://geoenergymath.com/2020/05/14/characterizing-wavetrains/

What I found was how non-turbulent the waves were in their experiment. They were looking for some turbulence to help explain Kolmogorov's theory, but I found lots of order. The spectrum below is highly ordered, at least until it gets to the high wavenumbers, which are very low in kinetic energy anyway.

![](https://imagizer.imageshack.com/img922/5749/hOHcgk.png)

**Algorithm for conventional tidal analysis:**

1. Select N major tidal constituents, fixing the period for each but allowing amplitude and phase to vary.
2. Create a linear superposition of the N constituents.
3. Iterate the 2N amplitude+phase parameters over the training period to minimize the error against prior data.
4. Use that set of parameters to make a prediction.

Since it's a linear superposition, a multiple linear regression algorithm can be used in step 3 instead of an iterative solver.

**This is the algorithm for the LTE-based ENSO analysis:**

1. Select N major tidal constituents, fixing the period for each but allowing amplitude and phase to vary.
2. Create a linear superposition of the N constituents.
3. Multiply by an annual impulse aligned at a fixed time of year, and calculate a lagged integral response (IIR).
4. Modulate the result with M LTE transfer functions, each containing a Mach-Zehnder-like amplitude+phase.
5. Iterate the 2(M+N) amplitude+phase parameters over the training period to minimize the error against prior data.
6. Use that set of parameters to make a prediction.

For the ENSO model, a multiple linear regression algorithm can't be used and the iterative solution has to grind away. The two extra transform steps shown below are simple to implement but make it much more computationally intensive than the conventional tidal analysis.

![](https://imagizer.imageshack.com/img924/8215/ycXSL8.png)

For either case, one can minimize over the frequency domain instead of the time domain. This makes it amenable to pure digital signal processing techniques, with discrete monthly or daily time steps depending on the resolution of the data.
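Step 3 of the conventional tidal algorithm above can be sketched in a few lines: because each constituent A·sin(ωt+φ) rewrites as a·sin(ωt)+b·cos(ωt), the fit is linear in (a, b) and ordinary least squares recovers amplitude and phase directly. This is an illustrative sketch in Python; the two constituent periods are placeholders, not a real tidal set.

```python
import numpy as np

# Hypothetical constituent periods in days (placeholders for illustration)
periods = np.array([0.5175, 0.5000])

def fit_tidal_constituents(t, y, periods):
    """Fit amplitude+phase for fixed-period constituents by ordinary
    least squares. Each A*sin(w t + phi) becomes a*sin(w t) + b*cos(w t),
    which is linear in (a, b), so no iterative solver is needed."""
    w = 2 * np.pi / periods
    # Design matrix: sin and cos columns for every constituent
    X = np.column_stack([f(wi * t) for wi in w for f in (np.sin, np.cos)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    a, b = coef[0::2], coef[1::2]
    amplitude = np.hypot(a, b)       # A = sqrt(a^2 + b^2)
    phase = np.arctan2(b, a)         # phi = atan2(b, a)
    return amplitude, phase

# Synthetic check: recover a single known constituent
t = np.linspace(0, 30, 3000)
y = 1.5 * np.sin(2 * np.pi / 0.5175 * t + 0.3)
amp, ph = fit_tidal_constituents(t, y, periods)
```

On noiseless synthetic data the first constituent's amplitude and phase are recovered essentially exactly, and the unused constituent's amplitude comes out near zero.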

@WebHubTel, hello, can you recommend books and study material for learning about climate change? Thanks.


I did the short explainer in comment #409 above because I scrolled through the ([infamous](https://www.telegraph.co.uk/technology/2020/05/16/neil-fergusons-imperial-model-could-devastating-software-mistake/)) Imperial College source code for modeling epidemics [here](https://github.com/mrc-ide/covid-sim/blob/master/src/Sweep.cpp), as I wanted to get a feel for the complexity levels. Neil Ferguson of Imperial built his model on thousands of SLOCs of intricate C++ decision logic mixed in with compiler pragmas for multiprocess speedup. I don't understand the rationale behind this. It's difficult enough to get buy-in for something as conceptually simple as the physics-based ENSO model, but to spend that much effort on a model easily defeated by sociopolitical game-theory machinations makes no sense.

It's revealing that the gang of climate bloggers at places such as ATTP are infatuated with Ferguson's contagion model, as most climate models are equally complex. For my ENSO model, the basic functionality written in verbose Ada takes up 100 [semicolons](https://en.wikipedia.org/wiki/Source_lines_of_code#Measurement_methods) and takes 7 milliseconds to run as an executable on my laptop.
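The "100 semicolons" figure uses the logical-SLOC convention of counting statement terminators rather than raw text lines. A trivial illustration of that count (the Ada snippet below is a made-up example, not the author's code):

```python
def count_semicolon_sloc(source_text):
    """Approximate logical SLOC for Ada (or C-family) code by counting
    statement-terminating semicolons, per the SLOC measurement method
    linked above. Crude: ignores semicolons inside strings and comments."""
    return sum(line.count(';') for line in source_text.splitlines())

# Hypothetical Ada fragment: two statements, so two semicolons
ada_snippet = """procedure Hello is
begin
   null;
end Hello;
"""
```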

Continuing on from the last comment #411, I decided to merge the ENSO model written in Ada with a threaded optimization inspired by my recent contribution to the ongoing [Petri net discussion](https://forum.azimuthproject.org/discussion/comment/22259/#Comment_22259).

What I will do is encapsulate the ENSO search algorithm in N Ada threads (I have N=8 CPUs on my PC, so will go for that) and then use a protected resource to keep track of the best-fit metric, which is at present a correlation coefficient. The metric will be stored in an Ada [protected object](https://learn.adacore.com/courses/intro-to-ada/chapters/tasking.html#Protected_objects), and the object's logic will decide whether a task thread submitting a candidate correlation goes to the top of the list.

Although not exactly the synchronization semantics I have in mind, the following Petri net for two threads (**R1** and **R2**) competing for a protected resource token (held in the protected object labelled **L**) is close to the idea:

![pn](https://www.researchgate.net/profile/Stephane_Lafortune3/publication/220476441/figure/fig1/AS:393910439432197@1470926976043/Petri-net-Petri-nets-are-bipartite-directed-graphs-containing-two-types-of-nodes-places_W640.jpg)

If a task is stalled in a local maximum and is unable to make any progress, it will re-initialize with a new seed. The task containing the best correlation will continue running. So there will always be one task thread in the lead, and N-1 tasks trying to catch up. IOW, the protected object will be the monitor deciding which task is running the best-fitting model.

This is untested code for the monitor.

<pre>
package Optimization_Resource is
   protected Monitor is
      procedure Check (Metric : in Float; Best : out Boolean);
   private
      -- Value of current best metric stored internally
      Best_Metric : Float := 0.0;
   end Monitor;
end Optimization_Resource;

package body Optimization_Resource is
   protected body Monitor is
      procedure Check (Metric : in Float; Best : out Boolean) is
      begin
         if Metric >= Best_Metric then
            Best_Metric := Metric;
            Best := True;
         else
            Best := False;
         end if;
      end Check;
   end Monitor;
end Optimization_Resource;
</pre>

So easy to do this. One might be tempted by the function-based approach below, but it won't compile: Ada gives protected functions read-only access to the protected object, precisely because it follows the paradigm that functions should not have side-effects -- and the side-effect here is that the internal state changes.

<pre>
package Optimization_Resource is
   protected Monitor is
      function Is_Best (Metric : in Float) return Boolean;
   private
      -- Value of current best metric stored internally
      Best_Metric : Float := 0.0;
   end Monitor;
end Optimization_Resource;

package body Optimization_Resource is
   protected body Monitor is
      function Is_Best (Metric : in Float) return Boolean is
      begin
         if Metric >= Best_Metric then
            Best_Metric := Metric;  -- illegal: read-only in a protected function
            return True;
         else
            return False;
         end if;
      end Is_Best;
   end Monitor;
end Optimization_Resource;
</pre>
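For readers without Ada at hand, the compare-and-update semantics of the protected-object monitor can be mimicked with Python's threading primitives. This is an illustrative analogue, not the author's code: a lock plays the role of the protected object's mutual exclusion.

```python
import threading

class Monitor:
    """Thread-safe best-metric tracker, analogous to the Ada protected
    object above: check() atomically compares a candidate metric against
    the best seen so far and updates it if the candidate takes the lead."""
    def __init__(self):
        self._lock = threading.Lock()
        self._best_metric = 0.0

    def check(self, metric):
        # Returns True iff this candidate becomes (or ties) the new best.
        with self._lock:
            if metric >= self._best_metric:
                self._best_metric = metric
                return True
            return False

monitor = Monitor()
```

Worker threads would call `monitor.check(correlation)` after each optimization cycle and reset themselves when repeatedly told they are not in the lead.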

This is exciting. By yesterday, I got the full multicore-processing version of the Ada ENSO modeling source code running and pegging the system at nearly 100% on my 8-CPU PC (it's easy to tell if it's working; otherwise top only shows ~13% = 100/8).

The contention among the threads for an optimal metric works perfectly, and I can monitor the battle as one thread trades the lead back and forth with another as each follows its own gradient-descent path. The exciting part is how fast it approaches a solution, in contrast to the Excel Solver that I have been using off and on. Excel Solver also uses all 8 CPU cores so it should be comparable, yet it appears to use a slower search approach. The Excel Solver is very persistent though, not easily getting stuck in local minima, which is something I haven't verified about my algorithm.

Now, the part of the optimization algorithm I am battling with is how to reset a computational thread that is falling behind the leaders. I have it set right now so that a thread will reset after (1) a certain number of cycles **and** (2) its metric lags the best value by a certain percentage. The issue is in defining these thresholds. Since the best thread's metric keeps getting better, the percentage threshold is a moving target. What I think the protected-object monitor needs is a "best trajectory" that can be used as a threshold. This gets stored and revised as the best metric evolves, so a poorly performing thread can be abandoned with the knowledge that it wouldn't be able to catch up to the lead thread. (This may of course not cover the case where the thread is a late bloomer, following a path with a tough stretch followed by a steep descent.)

Probably should have done this multi-processing code project long ago, but with the lock-down in place, I have a lot more time :/
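The two-condition reset rule just described can be stated compactly. In this sketch the cycle budget and lag fraction are hypothetical placeholders for the thresholds being tuned, and `best_metric` is the moving target supplied by the monitor:

```python
def should_reset(cycles, metric, best_metric,
                 max_cycles=100_000, lag_fraction=0.05):
    """Reset a search thread only when BOTH conditions hold:
    (1) it has exhausted its cycle budget, and
    (2) its metric trails the current best by more than lag_fraction.
    Since best_metric keeps improving, the lag test is a moving target."""
    lagging = metric < best_metric * (1.0 - lag_fraction)
    return cycles > max_cycles and lagging
```

A "best trajectory" refinement would replace the fixed `lag_fraction` with a threshold derived from how the leading metric has evolved over time.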

Another possible strategy is to have the individual threads optimize according to a **training interval** while the monitor keeps track of the best fit to an orthogonal **test interval**. This might add a sense of robustness to the model fit, as the model can also be cross-validated as a result. The key here is to suspend the processing of the thread with the best metric while the other threads try to catch up. This won't require a moving-target threshold, as the lead thread can't build up an advantage by continuing to compute while the others are starting from scratch.

I remember the symbolic reasoning solver Eureqa having a training & test option but couldn't quite figure out how it was used. It may actually be implemented in a way similar to this, as that was also a multi-threaded tool.
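The training/test-interval idea amounts to scoring each candidate model on two disjoint stretches of the same series: the thread optimizes on one and the monitor ranks on the other. A minimal sketch of that scoring, using the correlation-coefficient metric mentioned earlier (the split point and toy series are illustrative):

```python
import numpy as np

def train_test_correlation(y_model, y_data, split):
    """Score a candidate model output separately on the training interval
    [0, split) and the held-out test/validation interval [split, end),
    as in the suspend-the-leader strategy described above."""
    def cc(a, b):
        # Pearson correlation coefficient between two series
        return np.corrcoef(a, b)[0, 1]
    return (cc(y_model[:split], y_data[:split]),
            cc(y_model[split:], y_data[split:]))

# Toy check: a model that matches the data up to an affine scaling
t = np.linspace(0, 10, 200)
train_cc, test_cc = train_test_correlation(np.sin(t), 2.0 * np.sin(t) + 1.0, 140)
```

The monitor would then accept a candidate only if its test-interval correlation beats the current best, which is what makes the fit cross-validated for free.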

Daniel asked:

> "@WebHubTel, hello, can you recommend books and study material for learning about climate change? Thanks."

I would recommend this paper by climate scientist Raymond Pierrehumbert, who incidentally recently became a [Royal Society fellow](https://www.oxfordmail.co.uk/news/18420126.oxford-university-researchers-become-royal-society-fellows/):

* [The Myth of "Saudi America"](https://slate.com/technology/2013/02/u-s-shale-oil-are-we-headed-to-a-new-era-of-oil-abundance.html)

> "However, if oil analysts such as those speaking at the American Geophysical Union are right, almost all of this oil will remain inaccessible. In that case, coal—which certainly contains enough carbon to bring us to the danger level and probably much beyond—remains the clear and present threat to the climate, and the fight to leave as much coal as possible in the ground remains the front line in the battle to protect the climate. This does not mean the threat posed by the carbon pool in unconventional oil can be completely ignored. The case against oil abundance seems persuasive, but I’d hate to bet the planet against the ingenuity of future oil engineers, which is why I feel that some rearguard actions that inhibit development of unconventional oil are warranted, notably in the case of the Keystone XL pipeline, which taps into Canada’s Athabasca oil sands."

Hi @WebHubTel / Paul, Do you have anything written which gives a summary overview of your ENSO modeling logic, from a computational perspective? Or could you post a few paragraphs here.


> "Hi @WebHubTel / Paul, Do you have anything written which gives a summary overview of your ENSO modeling logic, from a computational perspective? Or could you post a few paragraphs here."

From a few days ago there is this comment #409: https://forum.azimuthproject.org/discussion/comment/22250/#Comment_22250

Computationally, all it involves is calculation of sin functions and 3-point filtering.

The first stage is understanding how to do tidal analysis (https://undergrad.research.ucsb.edu/2017/01/introduction-tidal-harmonic-analysis/), which is essentially guessing a superposition of known sine waves of unknown amplitude and phase.

The next steps involve a simple [IIR filter](https://en.wikipedia.org/wiki/Infinite_impulse_response#Transfer_function_derivation) and [Mach-Zehnder modulation](https://en.wikipedia.org/wiki/Electro-optic_modulator#Amplitude_modulation), which is essentially a sin function applied to the amplitude.

![](https://pbs.twimg.com/media/EZIMgHjX0AY5Lei.jpg)

Nothing much more other than a correlation coefficient, which is a library call if needed. I mentioned in comment #411 that it's only about 100 lines of code, so there's not a lot of complexity you can fit into that space. Eventually the complexity is driven by the gradient-descent search algorithm chosen, because unlike tidal analysis on its own, the complete response is a non-linear superposition, so a multiple-linear-regression algorithm won't work.

---

It just occurred to me that I could easily generate this output from op-amp circuitry. It would involve several sine-wave generator sources, followed by a [Dirac comb/impulse train](https://en.wikipedia.org/wiki/Dirac_comb) with a [sample-and-hold](https://en.wikipedia.org/wiki/Sample_and_hold), and then the Mach-Zehnder would be an op-amp with a sine-wave modulation in the feedback loop. I made something similar to the latter years ago, in the form of a square-root [compander](https://en.wikipedia.org/wiki/Companding) used for CX audio noise reduction.

![](https://imagizer.imageshack.com/img921/9230/hSyCO2.gif)
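A toy numerical sketch of that computational pipeline: tidal sinusoid superposition, annual impulse gating, a lagged one-pole IIR integration, then sinusoidal (Mach-Zehnder-like) LTE modulation. All constituent values and constants below are hypothetical placeholders, not the fitted model:

```python
import numpy as np

def enso_forward(n_months, tidal, lte, alpha=0.95, impulse_month=0):
    """Sketch of the LTE-based forward calculation on a monthly grid.
    tidal = [(amplitude, angular_freq_per_year, phase), ...]
    lte   = [(amplitude, wavenumber, phase), ...]   (all hypothetical)"""
    t = np.arange(n_months) / 12.0                       # time in years
    # 1. Superpose the tidal constituents
    forcing = sum(A * np.sin(w * t + p) for A, w, p in tidal)
    # 2. Gate by an annual impulse at a fixed month of the year
    gated = np.where(np.arange(n_months) % 12 == impulse_month, forcing, 0.0)
    # 3. Lagged integral response: one-pole IIR, y[n] = alpha*y[n-1] + x[n]
    y = np.zeros(n_months)
    y[0] = gated[0]
    for n in range(1, n_months):
        y[n] = alpha * y[n - 1] + gated[n]
    # 4. LTE modulation: sinusoid applied to the integrated amplitude
    return sum(B * np.sin(k * y + q) for B, k, q in lte)

out = enso_forward(120, tidal=[(1.0, 2 * np.pi * 13.3, 0.0)],
                   lte=[(1.0, 2.0, 0.0)])
```

The fitting problem then iterates the tidal and LTE amplitude+phase parameters until the output correlates best with the NINO34 data, which is the non-linear search the comments above describe.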

This link helps to explain the training vs validation split: http://formulize.nutonian.com/forum/discussion/555/training-validation-and-test-sets/p1

This question is what bothered me as well:

> "I just started experimenting with Eureqa and I'm a little confused with the validation process. When training a model I would normally define a training, validation (parameter optimization) and the final test set (which is used at the end). Using only a training and validation will result in a bias as both data sets are involved in the model creation and really need to have a final independent test. ..... Not sure how this is dealt with in Eureqa, are the two sets ultimately used in model creation? If so, how would I be able to add a test set and compare final result with that?"

When I used the tool, the validation interval always fit very well, which I thought was hard to believe unless it was involved in the model creation, i.e. during the fitting process.

Pointing to this in the Eureqa user's guide:

> "By default, Eureqa will randomly shuffle your data and then split it into training and validation data sets based on the total size of your data. Training data will be taken from the start of data set and validation data will be taken from the end (after shuffling)."

So it looks as if it does pick the best validation results out of an ensemble of training runs.

As a first experiment, I let the multiprocessor ENSO model run overnight and had the Petri net monitor decide which randomly seeded solver running on a **training interval** had the best results on a **test/validation interval**. The trainer would only optimize until it took the lead as best validation metric, and it would give up and reset after 100,000 cycles if it couldn't take the lead. I was impressed by the results in that the leader at the end had a training-interval correlation coefficient of 0.6 and a validation-interval CC of 0.4.

These are the sliding correlation results, with the red dotted line showing the best correlation possible (considering noise) by comparing NINO34 against SOI.

![](https://imagizer.imageshack.com/img922/5104/VPfVYE.png)

I will likely place the source code on my GitHub soon. So anyone who wants to evaluate the ENSO model will get introduced to the best software engineering language that mankind has yet to devise.
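The monitor's selection logic -- rank randomly seeded candidates by their metric on the held-out interval rather than the interval they were tuned on -- can be sketched as a toy loop. The red-noise "solvers" and interval lengths below are stand-ins, not the actual Petri-net monitor or solver code:

```python
import numpy as np

def make_series(n, seed):
    """Red-noise stand-in for a solver's output or the target index."""
    x = np.random.default_rng(seed).standard_normal(n)
    return np.convolve(x, np.ones(12) / 12.0, mode="same")

target = make_series(480, seed=999)            # 40 years of monthly "data"
train, valid = slice(0, 360), slice(360, 480)  # training / validation intervals

def cc(candidate, interval):
    return np.corrcoef(candidate[interval], target[interval])[0, 1]

# The monitor keeps whichever randomly seeded candidate leads on the
# *validation* interval, not on the training interval it was tuned on.
best_cc, best = -np.inf, None
for seed in range(100):
    candidate = make_series(480, seed=seed)
    cc_valid = cc(candidate, valid)
    if cc_valid > best_cc:
        best_cc, best = cc_valid, candidate
```

Note the caveat raised in the quoted Eureqa question applies here too: once an ensemble is ranked by the validation metric, that interval is no longer fully independent, which is why a third untouched test interval is the stricter check.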


I am adapting the multiprocessing software to make it more general.

This is the fit to the QBO data after only a few minutes of computation, using largely the input parameters from the ENSO model. During the computation the solver adjusts the amplitudes from primarily the tropical lunar cycle to the nodal cycle. In this case the CC is near 0.7, while the Excel Solver struggles to get to 0.6, hmmm.

![](https://imagizer.imageshack.com/img923/7210/7FQPAA.png)

After a bit more adaptation it should work for any climate index, and also for tidal analysis and perhaps the Chandler wobble cyclic behavior (which doesn't use LTE).

Lots of stuff I can get done under lock-down conditions ;)


Initial release of multiprocessor LTE simulator for ENSO and QBO models : https://github.com/pukpr/GeoEnergyMath

This will work for the following climate indices:

* [ENSO](https://github.com/pukpr/GeoEnergyMath/wiki/ENSO) : NINO34, SOI, etc.
* [QBO](https://github.com/pukpr/GeoEnergyMath/wiki/QBO) : each of the stratified altitudes 10 hPa, 30 hPa, 70 hPa, etc.
* [IOD](https://github.com/pukpr/GeoEnergyMath/wiki/ENSO) : aka DMI
* [NAO](https://github.com/pukpr/GeoEnergyMath/wiki/NAO)
* [AMO](https://github.com/pukpr/GeoEnergyMath/wiki/ENSO)
* [PDO](https://github.com/pukpr/GeoEnergyMath/wiki/ENSO) : aka IPO
* [AO](https://github.com/pukpr/GeoEnergyMath/wiki/NAO) : aka NAM
* [AAO](https://github.com/pukpr/GeoEnergyMath/wiki/NAO) : aka SAM
* [PNA](https://github.com/pukpr/GeoEnergyMath/wiki/PNA)
* [MJO](https://github.com/pukpr/GeoEnergyMath/wiki/MJO)

It will also work for any [tidal analysis](https://github.com/pukpr/GeoEnergyMath/wiki/Tides), if configured for days instead of months. And I'm fairly certain it will work for the [Chandler wobble](https://github.com/pukpr/GeoEnergyMath/wiki/CW) and for modeling [dLOD variations](https://github.com/pukpr/GeoEnergyMath/wiki/LOD).


Just one data point, but this paper reports a better correlation coefficient and lower RMS error for transfer learning cf. the original data, superiority to PCA or kriging, and "restores a missing spatial pattern of the documented El Niño from July 1877". Christopher Kadow, David Matthew Hall & Uwe Ulbrich, [Artificial intelligence reconstructs missing climate information (2020)](https://www.nature.com/articles/s41561-020-0582-5)


> "Just one data point"

Thanks Jim. A [couple](https://doi.org/10.1038/s41598-020-59128-7) more [papers](https://doi.org/10.1038/s41598-020-65070-5) on ENSO-specific machine learning recently, both from Nature Scientific Reports.

The people at the ATTP blog don't like anyone discussing it, though: https://andthentheresphysics.wordpress.com/2020/06/06/mitigation-adaptation-suffering/#comment-177225

> Willard says:
> June 9, 2020 at 12:36 am
> "But Enso" drive-by done.
> Thanks.

I don't know why I continue to comment there. Bleeding gatekeepers.


New Phil. Trans. Royal Soc. paper: ["Climbing down Charney's ladder: Machine Learning and the post-Dennard era of computational climate science"](https://arxiv.org/pdf/2005.11862.pdf). I was hired by IBM Research to investigate high-speed materials and shared an office with Robert Dennard before he retired. Cut to today, and IMO it's the mathematical physics algorithm, NOT the computational speed available, that will provide the breakthrough needed.

![](https://imagizer.imageshack.com/img923/987/EjDNnw.png)

The Balaji paper is mainly insights as to what direction climate science will take. The following is likely true -- you can't keep throwing computational horsepower at a problem that is only obscurely understood and tended to by gatekeepers of "ever more elaborate models". This is timely, as there is an ongoing discussion about complexity in software and poor documentation in regards to contagion modeling and GCMs.

> "The current juncture in computing, seven decades later, heralds an end to ever smaller computational units and ever faster arithmetic, what is called Dennard scaling. This is prompting a fundamental change in our approach to the simulation of weather and climate, potentially as revolutionary as that wrought by John von Neumann in the 1950s. One approach could return us to an earlier era of pattern recognition and extrapolation, this time aided by computational power. Another approach could lead us to **insights that continue to be expressed in mathematical equations**. In either approach, or any synthesis of those, it is clearly no longer the steady march of the last few decades, continuing to add detail to ever more elaborate models."

Note that machine learning applied to climate science is fairly dumb -- it's not implying any particular physical insight.

> "AI, or artificial intelligence, is a term we shall generally avoid here in favour of terms like machine learning, which emphasize the statistical aspect, **without implying insight**."

So why would it even matter whether the software is understandable if it can give the right answer? The necessary pattern-matching mechanism could have been added accidentally and no one would be the wiser (and no one would know exactly what it was that made the difference). The same thing happens with machine learning -- no one has any idea why it works when it does "just seem to work".

> "One conceives of meteorology as a science, where everything can be derived from the first principles of classical fluid mechanics. A second approach is oriented specifically toward the goal of predicting the future evolution of the system (weather forecasts) and success is measured by forecast skill, by any means necessary. This could for instance be by creating approximate analogues to the current state of the circulation and relying on similar past trajectories to make an educated guess of future weather. One can have understanding of the system without the ability to predict; **one can have skilful predictions innocent of any understanding**."

Balaji references a NOAA paper claiming that *"with little additional effort ... anyone can be a climate forecaster"*! The "model-analog" approach is that you dig up an old model run from the archives, check to see if it matches recent data (such as ENSO), and then extrapolate.

> ![](https://pbs.twimg.com/media/EaP29U4XQAAR98F.png)

Unless there are simpler models available, no one will build on what is there -- only established teams with tribal knowledge will build on their GCMs. And if there is no scientific curiosity or drive to want to do better, there the elaborate models will sit, and the stasis will continue.

Concluding challenge in the Balaji paper:

> "If ML-based modeling needs a manifesto, it may be this: to learn from data not just patterns, but simpler models, climbing down Charney's ladder. The vision is that these models will leave out the details not needed in an understanding of the underlying system, and learning algorithms will find for us underlying 'slow manifolds', and maybe the basis variables in which to do the learning. That is the challenge before us."

The observation is that climate science may turn into a pure machine-learning exercise unless something simpler comes along, or unless machine learning reveals it.


At this point, it doesn't have to be perfect, just better than the alternatives.

This is how they present the machine learning results from the #422 comment:

> ["Niño 3.4 index and SOI reanalysis data from 1871 to 1973 were used for model training, and the data for 1984–2019 were predicted 1 month, 3 months, 6 months, and 12 months in advance."](https://doi.org/10.1038/s41598-020-65070-5)

They essentially do a running fit, with the "validation" part matching only 3 months ahead. So the following fit is deceptive -- by using a 3-month **running** projection to train the machine learning algorithm, it can always catch up and then refit for the next interval. It would be horrible if they ended the training in 1984 and let it project to the current time.

![](https://pbs.twimg.com/media/Eajtph1WkAAfDn9.png)

By that token, I let the validation interval extend for years:

![](https://imagizer.imageshack.com/v2/1132x597q90/r/923/RvgE11.png)

Trying to sell this stuff as being superior is tricky. You can fool others, but as Feynman said, the one person you don't want to fool is yourself, and that's why I am always looking for better cross-validation approaches.
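Why a short-lead running fit flatters a model can be shown with a toy that has no real predictive content at all: a persistence forecast on red noise. The AR coefficient, record length, and freeze point below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) toy "index": persistent red noise with no deterministic signal
n = 600
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()

# Running 3-step-ahead evaluation: the forecaster restarts from the
# latest observed value at every step (persistence), so it always
# "catches up" -- the correlation stays high by construction.
lead = 3
cc_running = np.corrcoef(x[:-lead], x[lead:])[0, 1]

# Long-holdout evaluation: freeze at t=300 and extrapolate the AR(1)
# conditional mean over the remaining 300 steps, with no refitting.
horizon = np.arange(1, n - 300 + 1)
frozen = x[299] * 0.9 ** horizon
cc_holdout = np.corrcoef(frozen, x[300:])[0, 1]
```

For this AR coefficient `cc_running` is typically around 0.9³ ≈ 0.73 despite the "model" knowing nothing, while the frozen long-horizon extrapolation decorrelates quickly -- which is the point of letting the validation interval extend for years.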


“On the forcings of the unusual Quasi-Biennial Oscillation structure in February 2016” — http://oceanrep.geomar.de/47636/1/acp-20-6541-2020.pdf

This paper plots an interesting time series called a “horizontal Rossby wave momentum flux” which comprises a rapid gravity wave component along a horizontally stratified layer.

The data is at an altitude of 40 hPa, which is congruent with the horizontal 40 hPa QBO stratospheric layer.

![](https://imagizer.imageshack.com/img924/5479/vNE2sZ.png)

What I also plotted is a fit using the parameters for the ENSO model but allowing a larger proportion of the high wavenumber LTE modulation in comparison to that applied to the oceanic ENSO. This makes sense because the atmosphere has a much faster inertial response, so can accommodate the high-K solutions. Yet, a priori there is no way that this should fit to this degree (CC=0.68) unless this is actually what is happening -- i.e. that this particular measure is actually of atmospheric LTE dynamics.

(The lower panel is a sliding windowed CC showing where the match is better or worse)

Furthermore, what must be riding along with this is the monopole K~0 QBO solution, which consists of pure reversals of wind direction encircling the globe. So the Rossby waves, I am thinking, are essentially perturbations over a regional spatial extent, and so can respond to the localized tropical forcing.

The geophysics here is so far ahead of the current climatology all I can do is shake my head.
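A sliding windowed CC of the kind shown in the lower panel takes only a few lines to compute. The series below are synthetic placeholders, not the Rossby-flux data, and the window length is an arbitrary choice:

```python
import numpy as np

def sliding_cc(a, b, window):
    """Correlation coefficient of a against b over a sliding window."""
    out = np.empty(len(a) - window + 1)
    for i in range(out.size):
        out[i] = np.corrcoef(a[i:i + window], b[i:i + window])[0, 1]
    return out

# Toy monthly series: a model trace and "data" = model + noise
t = np.arange(1200) / 12.0                       # 100 years, monthly
model = np.sin(2 * np.pi * t / 3.8)              # arbitrary ~3.8-year cycle
data = model + 0.5 * np.random.default_rng(7).standard_normal(t.size)

cc_window = sliding_cc(model, data, window=120)  # 10-year windows
```

Plotting `cc_window` against time shows where the match is better or worse, exactly as in the lower panel described above.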


Interesting paper that further supports the Milankovitch model of glacial cycles:

["Detection of significant climatic precession variability in early Pleistocene glacial cycles"](https://sci-hub.tw/10.1016/j.epsl.2020.116137)

![](https://pbs.twimg.com/media/EauSNRiWkAAKx3J.png)

The nature of the orbit is connected to natural variability in climate at every scale. Think about it: the null hypothesis for ANY natural climate variation should exclude orbital forcings first.

* Orbital daily => diurnal climate cycle
* Orbital annual => seasonal climate cycle
* Orbital multi-annual => erratic ENSO climate cycle
* Orbital millennial => Milankovitch climate cycle

There's also obviously the diurnal and semi-diurnal tidal cycle, and possibly the thermohaline meridional overturning cycles that have orbital influences. So essentially the orbital periods of 1 day, 365.242 days, 27.321 days, 27.212 days, 27.554 days, 365.256 days, and 365.2596 days pretty much generate all the possibilities.
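To illustrate how this handful of short orbital periods generates much longer-period behavior, here is a quick Python check of two well-known beat (envelope) periods hidden in the monthly values above:

```python
# Orbital periods (days) from the list above
tropical_month    = 27.321
draconic_month    = 27.212
anomalistic_month = 27.554
tropical_year     = 365.242

def beat_period(p1, p2):
    """Period of the slow envelope formed by two nearby cycles."""
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

# Draconic vs tropical month -> the 18.6-year lunar nodal cycle
nodal_years = beat_period(draconic_month, tropical_month) / tropical_year

# Anomalistic vs tropical month -> the ~8.85-year lunar apsidal cycle
apsidal_years = beat_period(anomalistic_month, tropical_month) / tropical_year
```

Two monthly periods differing by only a fraction of a day thus produce decadal-scale envelopes, which is the sense in which a few orbital periods "generate all the possibilities".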


Probably the most direct real-world example of the Laplace's Tidal Equation MZ-like modulation is described here: https://geoenergymath.com/2020/06/29/the-sao-and-annual-disturbances/

The modulation is dependent on amplitude, so it will have a certain signature. For a sinusoidal waveform the peak will bifurcate as below:

![](https://imagizer.imageshack.com/img922/8757/B1TOct.png)

This is what happens with the temperature signal at 1 hPa at an upper latitude:

![](https://imagizer.imageshack.com/img922/3818/qDeKxw.png)

This is not difficult to model and fit **at all**. It's essentially a modulated annual signal.
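The bifurcation signature follows directly from the sine-of-a-sine form: once the modulation amplitude exceeds π/2, the inner waveform overshoots the outer sine's peak and the single annual maximum splits in two. A toy check, with arbitrary amplitudes chosen on either side of π/2:

```python
import numpy as np

def lte_modulated(t, amp):
    """Sine-of-sine (MZ-like) modulation of an annual carrier."""
    return np.sin(amp * np.sin(2 * np.pi * t))

def count_maxima(y):
    """Count strict interior local maxima."""
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

t = np.linspace(0.0, 1.0, 2001, endpoint=False)  # one carrier cycle

# amp < pi/2: the annual peak survives intact
peaks_weak = count_maxima(lte_modulated(t, amp=1.0))

# amp > pi/2: the inner wave sweeps past pi/2, so the peak bifurcates
peaks_strong = count_maxima(lte_modulated(t, amp=3.0))
```

With `amp=3.0` the positive peak splits into two maxima (at the two crossings where the inner phase equals π/2), reproducing the signature seen in the 1 hPa temperature signal.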

Kerry Emanuel opinion piece on climate science, read this thread https://twitter.com/WHUT/status/1279434134876835841


[New study detects ringing of the global atmosphere](https://phys.org/news/2020-07-global-atmosphere.html)


Great comment on Emanuel, Paul. Also, I'm not sure that I'd call what is assembled as geophysical knowledge for fluids at the level of atmosphere and oceans a "theory". What it is is a bunch of special cases, each with their own setup and boundary conditions. It is not unified. It may be too simple a comparison, because the physics is ultimately simple, but it is very far removed from anything like a Maxwell's Laws unification. Indeed, its structure is more like Economics: there's an underlying theory which appears to work in the micro, and then economists go off and try to find instances where the theory actually achieves a prediction of something.

Surely geophysical concepts and science are far better than Economics, if only because the data are so much better, but what is the purpose of the Emanuel Project? Is it to demand all atmospheric and ocean scientists first achieve a mastery of fluids and their geophysical manifestations? Do they need to conceptually memorize Kundu and Cohen? Is it to work on a great unifying principle? Is it to eschew looking at the surprises which numerical models sometimes produce and looking for explanations? I would suggest the very failure Professor Emanuel points to, neglecting "subgrid-scale turbulence on surface heat fluxes in the far western Pacific, where the model-resolved surface winds are often light", demonstrates the importance of that effect. Similar things can be said for neglecting the conversion of "dissipated kinetic energy back into heat".

I also wonder if this re-emphasis upon currently understood theory is wise given the long arc of the history of Physics. Look at the conceptual back and forth which attended the evolution of the Planck Effect, or Brownian motion, or Blackbody. There was a set of mutually contradictory notions at the time. It was difficult to judge one superior to another, and I would argue the reason was a lack of good experiments.

I thought the point of computational physics in this area was to try to work these physics _ab initio_, even though the computational engines aren't there to do that with the "speed that is required". The latter is, by the way, driven by applications, not development of science. If you had a coupled atmosphere-ocean-ice-sheets engine that took 120 days to complete a run and your purpose was understanding, so what? How long does it take the LHC to complete a series of runs producing data worth analysis? How about LIGO?

I could argue that some people in fluid physics come up short understanding numerical mathematics, too. I never quite understood why the Lorenz "chaos" got such big play when that was a well-known phenomenon in numerical analysis and computational methods for decades, and never mind Mandelbrot. But what's the point of that? Not everyone can know everything.


Very cool, Daniel.


Re: [The atmospheric ringing paper](https://journals.ametsoc.org/jas/article/77/7/2519/347483/An-Array-of-Ringing-Global-Free-Modes-Discovered) => The dispersion appears linear (wavenumber proportional to frequency) and Figure 7 shows lots of harmonics of the daily cycle. The following excerpt of their Fig. 7 is exactly what the LTE model predicts. All the red ticks are harmonics of the daily tide.

![](https://imagizer.imageshack.com/img924/135/Kvz0SP.png)

I will generate an equivalent chart and plot it here shortly.
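The harmonics-of-the-daily-cycle prediction is easy to reproduce in miniature: pass a pure daily forcing through an amplitude-sine (MZ-like) nonlinearity and integer harmonics appear in the spectrum. The sampling rate, record length, and modulation depth below are arbitrary toy values:

```python
import numpy as np

fs, days = 24, 64                      # hourly sampling for 64 days
t = np.arange(fs * days) / fs          # time in days

# A nonlinear (sine-of-amplitude) response to a pure daily forcing
forcing = np.sin(2 * np.pi * t)        # 1 cycle/day
response = np.sin(2.5 * forcing)       # modulation depth 2.5 is arbitrary

spec = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(response.size, d=1.0 / fs)  # cycles per day

# The strongest spectral lines land on integer multiples of 1 cycle/day
top3 = np.sort(freqs[np.argsort(spec)[-3:]])
```

The odd-harmonic comb comes from the Bessel-function expansion of sin(z·sin θ); a single forcing frequency fans out into the ladder of tick marks seen in the excerpted figure.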


Jan, It will be interesting if the Kerry Emanuel paper generates further discussion.


Paul, I wonder if it will, at all. I wonder if it's an echo of a wish that things be once as they were. I heard a prominent climate scientist say, at a conference, in person, that maybe our current situation with widespread disbelief in climate science wouldn't have happened if [Charney](https://en.wikipedia.org/wiki/Jule_Gregory_Charney) had lived longer. I have the greatest respect for this scientist and have _tons_ of respect for them, but _seriously_?

It's interesting that the biological sciences seem to be well ahead of these areas of Physics in settling these perspectives. Perhaps that's because they've been so disconnected from mathematical applications to their fields for so long, and relish the contributions computation and _the attitudes and perspectives it brings_ to their fields; and perhaps it's because, well, geophysical fluid dynamics has gotten stodgy and high-priest heavy. Perhaps it's because the biological sciences, notably biopharma and bioinformatics, have been better funded.

There are cultural tells. The geophysical fluid dynamics people really _look down_ on solid Earth geophysicists, which I think is _completely_ unwarranted. The biological sciences have added to the terms _in vivo_ and _in vitro_ the term _in silico_, given their change in mindset.

Naturally, I find the problems and data in the biological sciences refreshing relative to either Physics or much of Engineering. But I gotta admit, landing boosters returning from space, perceived as a Control Theory problem, gives me goosebumps every time I witness it. It's _amazing_.

Jan, the thing that perplexes me about geophysical fluid dynamics is that they have special names for every type of wave -- Rossby, Kelvin, etc. When I learned about waves, a wave was a wave and was essentially described numerically, and perhaps differentiated by whether it was a standing wave, traveling wave, or harmonic, instead of, like you said, by a "high priest" naming convention.

You mentioned Charney -- what is even more bizarre to muse about is that Richard Feynman wrote on his last blackboard before he died that he was going to learn more about fluid dynamics. On the right side in the middle, it says "non-linear classical hydrodynamics" right below the 2-D Hall effect (which has analogs in climate topologies).

"What I cannot create, I do not understand"from:

https://aboatmadeoutoftrash.wordpress.com/2012/01/19/feynmans-last-blackboard/

Perhaps the atmosphere-ringing paper will help straighten out some of the fundamental understanding. It is a cool paper.


Yeah, to me, the most interesting thing about the atmosphere ringing at different frequencies is the ramifications of beat frequencies.


This part is confusing to me:

> "**Some** peaks represent astronomically forced tides, but we show that **most** peaks are manifestations of the ringing of randomly excited global-scale resonant modes, reminiscent of the tones in a spectrum of a vibrating musical instrument."

One forced tide is the daily tide, which they show as being extremely strong in Figure 4, shown below:

![](https://imagizer.imageshack.com/img923/4194/EP8Hq7.png)

All the red vertical lines align precisely with harmonics of the daily tide. The yellow highlights show clear symmetric double-sidebands, which are obviously some modulation of the daily tide -- based on the relative spacing, I can guess that it's due to a beat frequency (as Jan mentioned) with the main fortnightly lunar tide. Interesting that the double-sidebands only appear around the *even harmonics* of the daily tide. Overall, the odd harmonics are lower in amplitude, indicating that the daily waves have more of an asymmetric *sawtooth* impulse character.

So what is left are the peaks indicated by the blue circles. These appear to be the main focus of the paper, but why they are the main focus is puzzling to me. They say *"**Some** peaks represent astronomically forced tides"*, whereas it should be *"**MOST** peaks represent astronomically forced tides (or their harmonics)"*. As is typical of research work, they may be burying the fundamental findings by trying to uncover some other odd stuff. The reason they do this, I am sure, is that some reviewers said the atmospheric tide aspect is nothing novel -- even though IMO it is. As they said in the opening statement of the abstract, this is all due to having *"newly available ERA5 hourly global data"*, which wasn't possible before.

This is the model spectrum for the "ringing atmosphere":

![](https://imagizer.imageshack.com/img924/1039/560kq5.png)

It appears that the even/odd pattern in intensity might be due to a 1/2 day (semi-diurnal) modulation in the daily tide, and the double satellite side-bands are highly likely due to the 13.66 day fortnightly lunar tide modulating the diurnal forcing. The fact that the sidebands don't show up on the odd-harmonic spectra might be due to the fact that the odd peaks are closer to the background noise.

The other peaks are not modeled but are circled in green. The fact that these peaks are broader likely means that they are stimulated by a stochastic resonance -- the first one corresponds to ~33 hours (the article calls it the "33-h Kelvin wave"), the second to ~9.4 hours, and the weaker third one to ~7.2 hours. The 33-hour and 9.4-hour waves were identified in this paper from 1999: https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/1999JA900044 . The combination of these three waves fulfills the condition for a [triad](https://geoenergymath.com/2020/04/06/triad-waves/).

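The triad (three-wave resonance) condition is that the two lower frequencies sum to the third, f1 + f2 ≈ f3. A quick numerical check against the three approximate periods quoted above:

```python
# Triad condition for three-wave resonance: f1 + f2 ≈ f3.
# Periods quoted above, in hours (approximate values).
p1, p2, p3 = 33.0, 9.4, 7.2
f1, f2, f3 = 1 / p1, 1 / p2, 1 / p3   # frequencies in cycles/hour

rel_err = abs((f1 + f2) - f3) / f3
print(f"1/{p1} + 1/{p2} = {f1 + f2:.4f} cyc/h, i.e. a period of {1 / (f1 + f2):.1f} h")
print(f"relative mismatch with the {p3}-h wave: {rel_err:.1%}")
# The sum corresponds to a period of ~7.3 h, within ~2% of the quoted 7.2 h.
```

Given that the 33-h and 9.4-h periods are themselves only approximate, a ~2% closure is consistent with the triad condition.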

with a 1% spread in the extra frequencies:

![](https://imagizer.imageshack.com/img924/8336/NoTd6o.png)

The marketing behind the "atmosphere ringing" paper is absurd. You can't hear any of the ringing with your ears, since, as you can see, the fundamental frequency has a period of one day. It's like someone claiming that you can hear the cycling of sunrise and sunset. I wish I could be so blatant with the marketing of any of the actually interesting stuff presented on this forum, but I at least retain a modicum of integrity :)

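To put the "you can't hear it" point in numbers -- a sketch, taking 20 Hz as the conventional lower limit of human hearing:

```python
import math

# Fundamental of the "ringing": one cycle per day, expressed in Hz.
f_fundamental = 1 / 86400              # ≈ 1.16e-5 Hz
octaves_below = math.log2(20 / f_fundamental)
print(f"fundamental ≈ {f_fundamental:.2e} Hz, "
      f"about {octaves_below:.0f} octaves below the ~20 Hz hearing limit")
```

Roughly 21 octaves below audibility -- so "ringing" is a spectral-analysis metaphor, not anything an ear could detect.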

LOL, here is the latest on modeling the QBO at the 70 hPa layer -- this is actually an interesting mix of tidal factors, which also clearly shows the impact of Laplace's Tidal Equations (LTE) modulation. Without the LTE modulation, with only the tidal forcing, the waveform would appear squared off instead of jagged.

![](https://imagizer.imageshack.com/img924/7407/WZUtbI.png)
