QBO and ENSO

Comments

  • 401.

    That list is v. useful. I've been happy to realise that it's a sort of curriculum for the project I find I've been trying to follow over the past few years. Even better is that rather than a single journal article reporting negative results it gives lots of hypothetical approaches you've tested. As digital real-estate is almost costless I think there's no good reason for every academic publisher not to expand what they accept in their journal article T&Cs to include similar descriptions of rejected theories as background to any proposed theory. I'm sure that would contribute to anybody's learning process. I think Lakatos might have liked this :).

  • 402.

    Jim, yes. Scientific blogs could be renamed public [engineering notebooks](https://www.cusd80.com/cms/lib6/AZ01001175/Centricity/Domain/6705/engineeringnotebook1.pdf).

    "An engineering notebook is a book in which an engineer will formally document, in chronological order, all of his/her work that is associated with a specific design project."

    Read the linked doc, and the only thing different is the typical proprietary nature of a notebook.

    All pages are

    • Numbered
    • Dated
    • Signed by the designer
    • Signed by a witness
    • Include a statement of the proprietary nature of notebook

    OTOH, if one is trying to solve a health or environmental crisis the idea of profiting from it is of questionable ethics.

    The other bit is this:

    • If you make a mistake, draw a line through it, enter the correct information, and initial the change.

    Mistakes are important to document so as to prevent others from repeating them, equivalent to errata on a published document. That's part of the learning process that you mentioned.

  • 403.
    edited April 28

    I presented at the [International Conference on Learning Representations](https://iclr.cc/Conferences/2020/Schedule?showEvent=1306) 2020 a few days ago. The workshop was on the [Integration of Deep Neural Models and Differential Equations](https://openreview.net/group?id=ICLR.cc/2020/Workshop/DeepDiffEq). The math/physics applied is at the level of Hamiltonian and Lagrangian approaches, and climate science is a significant application area, but it will take a while before they catch up.

    https://youtu.be/PD-nzaWJgK0

  • 404.

    Great. Here I'm pasting in your abstract from the conference site:

    Key equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes. This in spite of recent research showing growing evidence of predictable behavior. This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator — i.e. no Coriolis force and a small angle approximation. The solutions to the partial differential equations are highly non-linear related to Navier-Stokes and only search approaches can be used to fit to the data.

  • 405.
    edited May 14

    Thanks David, I off-and-on get some feedback on these geophysical fluid dynamics models, but occasionally I get people who just rage. One PhD fellow who is considered an AGW skeptic and works for Boeing in wing design claims that it's pointless to do computational FD on climate problems because all flow is turbulent. I often wonder if these people are on to something based on their credentials and experience -- i.e. doing engineering aerodynamics has to count for something, right?

    But then over time I notice that these same skeptics are essentially contrarian about anything that hints at social progress towards the common good.

    For example, this is the same Boeing aero PhD that [has been a thorn in the side of climate scientists doing CFD](http://theoilconundrum.blogspot.com/2013/03/climate-sensitivity-and-33c-discrepancy.html) for years now, and it doesn't surprise me that he would say something like this (over at a climate science blog):

    https://imagizer.imageshack.com/img924/3603/CpR8OR.png

    And then you have people who stick up for these cultists; they delete comments that point out this stuff:

    https://pbs.twimg.com/media/EXwxTZwWoAMCyod.png

    https://pbs.twimg.com/media/EXwyhBCXsAIE3x2.png

  • 406.

    I attended several of the virtual EGU sessions and took notes and captured all my online comments here:

    https://geoenergymath.com/2020/05/10/egu-2020-notes/

    The topics I concentrated on were ENSO, QBO, Chandler wobble, and geophysical fluid dynamics.

    If you have a Copernicus account, you can still comment on all the presentations until the end of the month

    https://meetingorganizer.copernicus.org/EGU2020/meetingprogramme

  • 407.
    edited May 14

    @WebHubTel wrote:

    One PhD fellow that is considered an AGW skeptic and works for Boeing in wing design claims that it's pointless to do computational FD on climate problems because all flow is turbulent.

    Wouldn't that call for exploring computational models of turbulence, rather than giving up?

    For example, cursory searching on "computational fluid dynamics turbulence climate" turned up:

    • [New technique for modeling turbulence in the atmosphere](https://www.sciencedaily.com/releases/2018/08/18080717105), U.S. Army Research Laboratory, 2018

    and a book chapter on [Computational Fluid Dynamics in Turbulent Flow Applications](https://www.intechopen.com/books/numerical-simulation-from-brain-imaging-to-turbulent-flows/computational-fluid-dynamics-in-turbulent-flow-applications).

    I don't have the expertise to be able to evaluate these ideas, but if nothing else it sounds like an important research area. Ties in with the general theme of stochastic modeling.

  • 408.
    edited May 15

    David said:

    "Wouldn't that call for exploring computational models of turbulence, rather than giving up?"

    I think turbulence needs to be evaluated if and when it occurs. As I hinted, the Boeing fellow is looking for an excuse to marginalize the research of others.

    Here is a somewhat detailed follow-up analysis I did in the past few days based on one of last week's EGU presentations entitled "Transition from geostrophic flows to inertia-gravity waves in the spectrum of a differentially heated rotating annulus experiment"

    https://geoenergymath.com/2020/05/14/characterizing-wavetrains/

    What I found was how non-turbulent the waves were in their experiment. They were looking for some turbulence to help explain Kolmogorov's theory, but I found lots of order. The spectrum below is highly ordered, at least until it gets to the high wavenumbers, but that's very low in kinetic energy anyway.

    https://imagizer.imageshack.com/img922/5749/hOHcgk.png

  • 409.

    Algorithm for conventional tidal analysis:

    1. Select N major tidal constituents, fixing the period for each but allowing amplitude and phase to vary
    2. Create a linear superposition of the N constituents.
    3. Iterate the 2N amplitude+phase parameters over the training period to minimize the error against prior data.
    4. Use that set of parameters to make a prediction

    Since it's a linear superposition, a multiple linear regression algorithm can be used in step #3 instead of an iterative solver.
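
    For concreteness, here is a minimal sketch (in Ada, not the actual model code; the constituent values are made up) of why step #3 can use multiple linear regression: each constituent A*sin(w*t + phi) expands into two terms that are linear in unknown sin/cos coefficients, so the whole superposition is linear in 2N parameters.

    with Ada.Text_IO;
    with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

    procedure Tidal_Superposition_Sketch is
       type Constituent is record
          W, A, Phi : Float;  --  fixed frequency (rad/month), free amplitude and phase
       end record;
       --  Hypothetical constituents, for illustration only
       C : constant array (1 .. 4) of Constituent :=
         ((0.230, 1.0, 0.0), (0.241, 0.5, 0.3), (0.496, 0.3, 1.1), (0.061, 0.2, 2.0));

       function Forcing (T : Float) return Float is
          Sum : Float := 0.0;
       begin
          for I in C'Range loop
             --  A*sin(W*t + Phi) = (A*cos Phi)*sin(W*t) + (A*sin Phi)*cos(W*t),
             --  i.e. two linear regression coefficients per constituent
             Sum := Sum + C (I).A * Sin (C (I).W * T + C (I).Phi);
          end loop;
          return Sum;
       end Forcing;
    begin
       for Month in 0 .. 11 loop
          Ada.Text_IO.Put_Line (Float'Image (Forcing (Float (Month))));
       end loop;
    end Tidal_Superposition_Sketch;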

    This is the algorithm for the LTE-based ENSO analysis:

    1. Select N major tidal constituents, fixing the period for each but allowing amplitude and phase to vary
    2. Create a linear superposition of the N constituents.
    3. Multiply by an annual impulse aligned at a fixed time of year, and calculate a lagged integral response (IIR).
    4. Modulate the result with M LTE transfer functions, each containing a Mach-Zehnder-like amplitude+phase.
    5. Iterate the 2(M+N) amplitude+phase parameters over the training period to minimize the error against prior data.
    6. Use that set of parameters to make a prediction

    For the ENSO model, a multiple linear regression algorithm can't be used and the iterative solution has to grind away. The two extra transform steps shown below are simple to implement but make it much more computationally intensive than the conventional tidal analysis.

    https://imagizer.imageshack.com/img924/8215/ycXSL8.png

    For either case, one can minimize over the frequency domain instead of the time domain. This is amenable to pure digital signal processing techniques, with discrete monthly or daily time steps depending on the resolution of the data.
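
    As a rough illustration of steps 3 and 4, here is a sketch (again Ada, not the repository code; the impulse month, lag constant, and LTE amplitude/wavenumber values are placeholders) of the annual impulse, the lagged integral response as a one-pole IIR filter, and the sinusoidal LTE modulation:

    with Ada.Text_IO;
    with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

    procedure LTE_Chain_Sketch is
       Months : constant := 120;
       type Series is array (1 .. Months) of Float;

       --  Placeholder tidal forcing; in practice this is the constituent superposition
       function Tidal (T : Float) return Float is
       begin
          return Sin (0.230 * T) + 0.5 * Sin (0.241 * T);
       end Tidal;

       Impulse_Month : constant Integer := 11;   --  assumed time-of-year alignment
       Lag           : constant Float   := 0.9;  --  assumed IIR decay constant
       B             : constant Float   := 1.0;  --  assumed LTE amplitude
       K             : constant Float   := 4.0;  --  assumed LTE wavenumber factor

       Forcing, Response, Model : Series;
    begin
       for I in Series'Range loop
          --  Step 3a: annual impulse -- forcing passes through one month per year
          Forcing (I) := (if I mod 12 = Impulse_Month then Tidal (Float (I)) else 0.0);
          --  Step 3b: lagged integral response (IIR): y(n) = x(n) + Lag * y(n-1)
          Response (I) := Forcing (I) +
            (if I > Series'First then Lag * Response (I - 1) else 0.0);
          --  Step 4: LTE (Mach-Zehnder-like) modulation -- a sinusoid of the level
          Model (I) := B * Sin (K * Response (I));
       end loop;
       Ada.Text_IO.Put_Line ("last value: " & Float'Image (Model (Months)));
    end LTE_Chain_Sketch;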

  • 410.

    @WebHubTel, hello, can you recommend books and study material for learning about climate change? Thanks.

  • 411.

    I did the short explainer in comment #409 above because I scrolled through the ([infamous](https://www.telegraph.co.uk/technology/2020/05/16/neil-fergusons-imperial-model-could-devastating-software-mistake/)) Imperial College source code for modeling epidemics [here](https://github.com/mrc-ide/covid-sim/blob/master/src/Sweep.cpp), as I wanted to get a feel for the complexity levels. Neil Ferguson of Imperial built his model on thousands of SLOCs of intricate C++ decision logic mixed in with compiler pragmas for multiprocess speedup. I don't understand the rationale behind this. It's difficult enough to get buy-in for something as conceptually simple as the physics-based ENSO model, but to spend that much effort on a model easily defeated by sociopolitical game-theory machinations makes no sense.

    It's revealing that the gang of climate bloggers at places such as ATTP are infatuated with Ferguson's contagion model, as most climate models are equally complex. For my ENSO model, the basic functionality written in verbose Ada amounts to 100 [semicolons](https://en.wikipedia.org/wiki/Source_lines_of_code#Measurement_methods) and takes 7 milliseconds to run as an executable on my laptop.

  • 412.
    edited May 27

    Continuing on from the last comment #411, I decided to merge the ENSO model written in Ada with a threaded optimization inspired by my recent contribution to the ongoing [Petri net discussion](https://forum.azimuthproject.org/discussion/comment/22259/#Comment_22259).

    What I will do is encapsulate the ENSO search algorithm in N Ada threads (I have N=8 CPUs on my PC so will go for that) and then use a protected resource to keep track of the best-fit metric, which is at present a correlation coefficient. The metric will be stored in an Ada [protected object](https://learn.adacore.com/courses/intro-to-ada/chapters/tasking.html#Protected_objects) and the object's logic will decide whether a task thread submitting a candidate correlation will go to the top of the list.

    Although not exactly the synchronization semantics I have in mind, the following Petri net for two threads (R1 and R2) competing for a protected resource token (held in the protected object labelled L) is close to the idea:

    https://www.researchgate.net/profile/Stephane_Lafortune3/publication/220476441/figure/fig1/AS:393910439432197@1470926976043/Petri-net-Petri-nets-are-bipartite-directed-graphs-containing-two-types-of-nodes-places_W640.jpg

    If a task is stalled in a local maximum and is unable to make any progress, it will re-initialize with a new seed. The task containing the best correlation will continue running. So there will always be one task thread in the lead, and N-1 tasks trying to catch up. IOW, the protected object will be the monitor that decides which task is running the best-fitting model.

    This is untested code for the monitor.

    package Optimization_Resource is
    
      protected Monitor is
         procedure Check (Metric : in Float;
                            Best : out Boolean);
      private
         --  Value of current best metric stored internally
         Best_Metric: Float := 0.0;
      end Monitor;
    
    end Optimization_Resource;
    
    
    package body Optimization_Resource is
    
      protected body Monitor is
         procedure Check (Metric : in Float;
                            Best : out Boolean) is
         begin
            if Metric >= Best_Metric then
               Best_Metric := Metric;
               Best := True;
            else
               Best := False;
            end if;
         end Check;
      end Monitor;
    
    end Optimization_Resource;
    

    So easy to do this. One could also try the following approach, but it is not considered good style, as Ada tries to follow the paradigm that functions do not have side-effects -- the side-effect being that the internal state changes. (In fact, a protected function only gets read-only access to the protected data, so the assignment to Best_Metric below would be rejected by the compiler; the procedure form above is the workable one.)

    
    package Optimization_Resource is
    
      protected Monitor is
         function Is_Best (Metric : in Float) return Boolean;
      private
         --  Value of current best metric stored internally
         Best_Metric : Float := 0.0;
      end Monitor;
    
    end Optimization_Resource;
    
    
    package body Optimization_Resource is
    
      protected body Monitor is
         function Is_Best (Metric : in Float) return Boolean is
         begin
            if Metric >= Best_Metric then
               Best_Metric := Metric;
               return True;
            else
               return False;
            end if;
         end Is_Best;
      end Monitor;
    
    end Optimization_Resource;
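
    For illustration, a minimal sketch of how N worker tasks might use this monitor (the task structure and the placeholder metric update are hypothetical stand-ins, not the actual solver):

    with Ada.Text_IO;
    with Optimization_Resource;

    procedure Solver_Sketch is
       task type Worker;

       task body Worker is
          Metric  : Float := 0.0;
          Leading : Boolean;
       begin
          for Step in 1 .. 1_000 loop
             --  A real worker would perturb model parameters here and compute
             --  the resulting correlation coefficient; placeholder update:
             Metric := Metric + 0.0001;
             Optimization_Resource.Monitor.Check (Metric, Leading);
             if not Leading then
                --  Fell behind the best thread; the real solver would reseed
                --  only after enough stalled cycles, not immediately
                Metric := 0.0;
             end if;
          end loop;
       end Worker;

       Workers : array (1 .. 8) of Worker;  --  one task per CPU core
    begin
       Ada.Text_IO.Put_Line ("workers launched");
    end Solver_Sketch;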
    
  • 413.

    This is exciting. As of yesterday, I have the full multicore-processing version of the Ada ENSO modeling source code running and pegging the system at nearly 100% on my 8-CPU PC (easy to tell if it's working; otherwise top only shows ~13% = 100/8).

    The contention among the threads for an optimal metric works perfectly, and I can monitor the battle as one thread trades back and forth with another as they each follow their own gradient-descent path. The part that's exciting is how fast it approaches a solution in contrast to the Excel Solver that I have been using off and on. The Excel Solver also uses all 8 CPU cores, so it should be comparable, yet it appears to use a slower search approach. The Excel Solver is very persistent though -- not easily getting stuck in local minima, something that I haven't yet verified about my algorithm.

    Now, the part of the optimization algorithm I am battling with is how to reset a computational thread that is falling behind the leaders. I have it set right now so that a thread will reset when (1) it has run a certain number of cycles and (2) its metric lags the best value by a certain percentage. The issue is in defining these thresholds. Since the best thread's metric will keep getting better, the percentage threshold is a moving target. What I think the protected object monitor needs is a "best trajectory" that can be used as a threshold. This gets stored and revised as the best metric evolves, and so a poorly performing thread can be abandoned in the knowledge that it wouldn't be able to catch up to the lead thread. (This may of course not cover the case where the thread is a late bloomer and is following a path with a tough part followed by a steep descent.)
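
    A sketch of that reset test (thresholds are illustrative only; Min_Cycles and Lag_Fraction are made-up names and values):

    with Ada.Text_IO;

    procedure Reset_Policy_Sketch is
       Min_Cycles   : constant Natural := 100_000;
       Lag_Fraction : constant Float   := 0.10;  --  assumed 10% lag threshold

       --  Reset only if the thread has run long enough AND still trails the
       --  best metric by more than the fraction (a moving target, as noted)
       function Should_Reset
         (Cycles : Natural; My_Metric, Best_Metric : Float) return Boolean is
       begin
          return Cycles >= Min_Cycles
            and then My_Metric < Best_Metric * (1.0 - Lag_Fraction);
       end Should_Reset;
    begin
       Ada.Text_IO.Put_Line (Boolean'Image (Should_Reset (150_000, 0.45, 0.60)));
    end Reset_Policy_Sketch;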

    Probably should have done this multi-processing code project long ago, but with the lock-down in place, I have a lot more time :/

  • 414.

    Another possible strategy is to have the individual threads optimize according to a training interval while the monitor is keeping track of the best fit to an orthogonal test interval. This might be able to add a sense of robustness to the model fit as the model can also be cross-validated as a result. The key here is to suspend the processing of the thread with the best metric while the other threads try to catch up. This won't require a moving target threshold as the lead thread can't build up an advantage by continuing to compute while the others are starting from scratch.

    I remember the symbolic reasoning solver Eureqa having a training & test option but couldn't quite figure out how it was used. It may actually be implemented in a way similar to this, as that was also a multi-threaded tool.

  • 415.

    Daniel asked:

    "@WebHubTel, hello, can you recommend books and study material for learning about climate change? Thanks."

    I would recommend this article by climate scientist Raymond Pierrehumbert, who incidentally recently became a [Royal Society fellow](https://www.oxfordmail.co.uk/news/18420126.oxford-university-researchers-become-royal-society-fellows/):

    • [The Myth of "Saudi America"](https://slate.com/technology/2013/02/u-s-shale-oil-are-we-headed-to-a-new-era-of-oil-abundance.html)

    "However, if oil analysts such as those speaking at the American Geophysical Union are right, almost all of this oil will remain inaccessible. In that case, coal—which certainly contains enough carbon to bring us to the danger level and probably much beyond—remains the clear and present threat to the climate, and the fight to leave as much coal as possible in the ground remains the front line in the battle to protect the climate. This does not mean the threat posed by the carbon pool in unconventional oil can be completely ignored. The case against oil abundance seems persuasive, but I’d hate to bet the planet against the ingenuity of future oil engineers, which is why I feel that some rearguard actions that inhibit development of unconventional oil are warranted, notably in the case of the Keystone XL pipeline, which taps into Canada’s Athabasca oil sands."

  • 416.

    Hi @WebHubTel / Paul, Do you have anything written which gives a summary overview of your ENSO modeling logic, from a computational perspective? Or could you post a few paragraphs here.

  • 417.
    edited May 28

    "Hi @WebHubTel / Paul, Do you have anything written which gives a summary overview of your ENSO modeling logic, from a computational perspective? Or could you post a few paragraphs here."

    From a few days ago, there is this comment #409: https://forum.azimuthproject.org/discussion/comment/22250/#Comment_22250

    Computationally, all it involves is calculation of sin functions and 3-point filtering.

    The first stage is understanding how to do tidal analysis (https://undergrad.research.ucsb.edu/2017/01/introduction-tidal-harmonic-analysis/), which is essentially guessing a superposition of known sine waves of unknown amplitude and phase.

    The next steps involve a simple [IIR filter](https://en.wikipedia.org/wiki/Infinite_impulse_response#Transfer_function_derivation) and [Mach-Zehnder modulation](https://en.wikipedia.org/wiki/Electro-optic_modulator#Amplitude_modulation), which is essentially a sin function applied to the amplitude.

    https://pbs.twimg.com/media/EZIMgHjX0AY5Lei.jpg

    Nothing much more is needed other than a correlation coefficient, which is a library call if needed. I mentioned in comment #411 that it's only about 100 lines of code, so there's not a lot of complexity that you can fit into that space. Eventually the complexity is driven by the gradient-descent search algorithm chosen, because unlike tidal analysis on its own, the complete response is a non-linear superposition, and so a multiple-linear-regression algorithm won't work.


    Just occurred to me that I could easily make this output from op-amp circuitry. It would involve several sine-wave generator sources, followed by a [Dirac comb/impulse train](https://en.wikipedia.org/wiki/Dirac_comb) w/ a [sample-and-hold](https://en.wikipedia.org/wiki/Sample_and_hold), and then the Mach-Zehnder would be an op-amp with a sine-wave modulation in the feedback loop. I made something similar to the latter years ago in the form of a square-root [compander](https://en.wikipedia.org/wiki/Companding) used for CX audio noise reduction.

    https://imagizer.imageshack.com/img921/9230/hSyCO2.gif

    Comment Source:> "Hi @WebHubTel / Paul, Do you have anything written which gives a summary overview of your ENSO modeling logic, from a computational perspective? Or could you post a few paragraphs here." From a few days ago there is this comment #409 : https://forum.azimuthproject.org/discussion/comment/22250/#Comment_22250 Computationally, all it involves is calculation of sin functions and 3-point filtering. The first stage is understanding how to do tidal analysis https://undergrad.research.ucsb.edu/2017/01/introduction-tidal-harmonic-analysis/, which is essentially guessing a superposition of known sin waves of unknown amplitude and phase. Next steps involves a simple [IIR filter](https://en.wikipedia.org/wiki/Infinite_impulse_response#Transfer_function_derivation) and [Mach-Zehnder modulation](https://en.wikipedia.org/wiki/Electro-optic_modulator#Amplitude_modulation), which is essentially a sin function applied to the amplitude. ![](https://pbs.twimg.com/media/EZIMgHjX0AY5Lei.jpg) Nothing much more other than a correlation coefficient, which is a library call if needed. I mentioned in comment #411 that it's only like 100 lines of code, so there's not a lot of complexity that you can fit in to that space. Eventually the complexity is driven by the gradient descent search algorithm chosen, because unlike tidal analysis on its own, the complete response is non-linear superposition and so a multiple-linear regression algorithm won't work. --- Just occurred to me that I could easily make this output from op-amp circuitry. It would involve several sine-wave generator sources, followed by a [Dirac comb/impulse train](https://en.wikipedia.org/wiki/Dirac_comb) w/ a [sample-and-hold](https://en.wikipedia.org/wiki/Sample_and_hold), and then the Mach-Zehnder would be an op-amp with a sine-wave modulation in the feedback loop. I made something similar to the latter years ago in the form of a square root [compander](https://en.wikipedia.org/wiki/Companding) used for CX audio noise reduction . ![](https://imagizer.imageshack.com/img921/9230/hSyCO2.gif)
  • 418.

    This link helps to explain the training vs validation split: http://formulize.nutonian.com/forum/discussion/555/training-validation-and-test-sets/p1

    This question is what bothered me as well:

    "I just started experimenting with Eureqa and I'm a little confused with the validation process. When training a model I would normally define a training, validation (parameter optimization) and the final test set (which is used at the end). Using only a training and validation will result in a bias as both data sets are involved in the model creation and really need to have a final independent test. ..... Not sure how this is dealt with in Eureqa, are the two sets ultimately used in model creation? If so, how would I be able to add a test set and compare final result with that?"

    When I used the tool, the validation interval always fit very well, which I thought was hard to believe unless it was involved in the model creation, i.e. during the fitting process.

    Pointing to this in the Eureqa user's guide:

    "By default, Eureqa will randomly shuffle your data and then split it into training and validation data sets based on the total size of your data. Training data will be taken from the start of data set and validation data will be taken from the end (after shuffling)."

    So it looks as if it does pick the best validation results out of an ensemble of training runs.

    As a first experiment, I let the multiprocessor ENSO model run overnight and had the Petri net monitor decide which randomly seeded solver running on a training interval had the best results on a test/validation interval. The trainer would only optimize until it took the lead as the best validation metric, and it would give up and reset after 100,000 cycles if it couldn't take the lead. I was impressed by the results in that the leader at the end had a training-interval correlation coefficient of 0.6 and a validation-interval CC of 0.4. These are the sliding correlation results, with the red dotted line showing the best correlation possible (considering noise) by comparing NINO34 against SOI.

    https://imagizer.imageshack.com/img922/5104/VPfVYE.png
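
    For reference, a minimal sketch of the Pearson correlation coefficient used as the fitting metric; a sliding CC just applies this over successive windows of the two series. (Not the repository code; the sample arrays are made up.)

    with Ada.Text_IO;
    with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

    procedure Correlation_Sketch is
       type Series is array (Positive range <>) of Float;

       --  Pearson correlation coefficient of two equal-length series
       function Correlation (X, Y : Series) return Float is
          N : constant Float := Float (X'Length);
          Sx, Sy, Sxx, Syy, Sxy : Float := 0.0;
          J : Positive := Y'First;
       begin
          for I in X'Range loop
             Sx  := Sx + X (I);
             Sy  := Sy + Y (J);
             Sxx := Sxx + X (I) * X (I);
             Syy := Syy + Y (J) * Y (J);
             Sxy := Sxy + X (I) * Y (J);
             if I < X'Last then
                J := J + 1;
             end if;
          end loop;
          return (N * Sxy - Sx * Sy) /
                 (Sqrt (N * Sxx - Sx * Sx) * Sqrt (N * Syy - Sy * Sy));
       end Correlation;

       A : constant Series := (1.0, 2.0, 3.0, 4.0, 5.0);
       B : constant Series := (1.1, 1.9, 3.2, 3.8, 5.1);
    begin
       Ada.Text_IO.Put_Line ("CC = " & Float'Image (Correlation (A, B)));
    end Correlation_Sketch;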

    I will likely place the source code on my GitHub soon. So anyone who wants to evaluate the ENSO model will get introduced to the best software engineering language that mankind has yet devised.

  • 419.

    I am adapting the multiprocessing software to make it more general.

    This is the fit to the QBO data after only a few minutes of computation, using largely the input parameters from the ENSO model. During the computation the solver adjusts the amplitudes from primarily the tropical lunar cycle to the nodal cycle. In this case the CC is near 0.7, while the Excel Solver struggles to get to 0.6, hmmm.

    https://imagizer.imageshack.com/img923/7210/7FQPAA.png

    After a bit more adaptation it should work for any climate index, and also for tidal analysis and perhaps the Chandler wobble cyclic behavior (which doesn't use LTE).

    Lots of stuff I can get done under lock-down conditions ;)

  • 420.
    edited July 17

    Initial release of multiprocessor LTE simulator for ENSO and QBO models : https://github.com/pukpr/GeoEnergyMath

    This will work for the following climate indices:

    • [ENSO](https://github.com/pukpr/GeoEnergyMath/wiki/ENSO) : NINO34, SOI, etc.
    • [QBO](https://github.com/pukpr/GeoEnergyMath/wiki/QBO) : each of the stratified altitudes 10 hPa, 30 hPa, 70 hPa, etc.
    • [IOD](https://github.com/pukpr/GeoEnergyMath/wiki/ENSO) : aka DMI
    • [NAO](https://github.com/pukpr/GeoEnergyMath/wiki/NAO)
    • [AMO](https://github.com/pukpr/GeoEnergyMath/wiki/ENSO) (see [this thread](https://forum.azimuthproject.org/discussion/comment/22395/#Comment_22395))
    • [PDO](https://github.com/pukpr/GeoEnergyMath/wiki/ENSO) : aka IPO
    • [AO](https://github.com/pukpr/GeoEnergyMath/wiki/NAO) : aka NAM
    • [AAO](https://github.com/pukpr/GeoEnergyMath/wiki/NAO) : aka SAM
    • [PNA](https://github.com/pukpr/GeoEnergyMath/wiki/PNA)
    • [MJO](https://github.com/pukpr/GeoEnergyMath/wiki/MJO)

    It will also work for any [tidal analysis](https://github.com/pukpr/GeoEnergyMath/wiki/Tides), if configured for days instead of months. And I'm fairly certain it will work for the [Chandler wobble](https://github.com/pukpr/GeoEnergyMath/wiki/CW) and for modeling [dLOD variations](https://github.com/pukpr/GeoEnergyMath/wiki/LOD).

  • 421.

    Just one data point, but this paper reports a better correlation coefficient and lower RMS error for transfer learning cf. original data, superiority to PCA or kriging, and "restores a missing spatial pattern of the documented El Niño from July 1877": Christopher Kadow, David Matthew Hall & Uwe Ulbrich, [Artificial intelligence reconstructs missing climate information (2020)](https://www.nature.com/articles/s41561-020-0582-5?utm_source=ngeo_etoc&utm_medium=email&utm_campaign=toc_41561_13_6&utm_content=20200609&sap-outbound-id=25F98A1363FE1F2806A4A29B6DD8F8997B839D7F).

  • 422.
    edited June 10

    "Just one data point "

    Thanks Jim. A [couple](https://doi.org/10.1038/s41598-020-59128-7) more [papers](https://doi.org/10.1038/s41598-020-65070-5) on ENSO-specific machine learning appeared recently, both in Nature Scientific Reports.

    The people at the ATTP blog don't like anyone discussing it though https://andthentheresphysics.wordpress.com/2020/06/06/mitigation-adaptation-suffering/#comment-177225

    Willard says:

    June 9, 2020 at 12:36 am

    “But Enso” drive-by done.

    Thanks.

    I don't know why I continue to comment there. Bleeding gatekeepers.

    Comment Source:> "Just one data point " Thanks Jim. A [couple](https://doi.org/10.1038/s41598-020-59128-7) more [papers](https://doi.org/10.1038/s41598-020-65070-5) on ENSO-specific machine learning recently, both from Nature Scientific Reports. The people at the ATTP blog don't like anyone discussing it though https://andthentheresphysics.wordpress.com/2020/06/06/mitigation-adaptation-suffering/#comment-177225 >Willard says: >June 9, 2020 at 12:36 am >“But Enso” drive-by done. >Thanks. I don't know why I continue to comment there. Bleeding gatekeepers.
  • 423.

    New Phil. Trans. Royal Soc. paper: ["Climbing down Charney's ladder: Machine Learning and the post-Dennard era of computational climate science"](https://arxiv.org/pdf/2005.11862.pdf). I was hired by IBM Research to investigate high-speed materials and shared an office with Robert Dennard before he retired. Cut to today, and IMO it's the mathematical physics algorithm, NOT the computational speed available, that will provide the breakthrough needed.

    https://imagizer.imageshack.com/img923/987/EjDNnw.png

    The Balaji paper mainly offers insights into what direction climate science will take. The following is likely true -- you can't keep throwing computational horsepower at a problem that is only obscurely understood and tended to by gatekeepers of "ever more elaborate models". This is timely, as there is an ongoing discussion about complexity in software and poor documentation in regards to contagion modeling and GCMs.

    "The current juncture in computing, seven decades later, heralds an end to ever smaller computational units and ever faster arithmetic, what is called Dennard scaling. This is prompting a fundamental change in our approach to the simulation of weather and climate, potentially as revolutionary as that wrought by John von Neumann in the 1950s. One approach could return us to an earlier era of pattern recognition and extrapolation, this time aided by computational power. Another approach could lead us to insights that continue to be expressed in mathematical equations. In either approach, or any synthesis of those, it is clearly no longer the steady march of the last few decades, continuing to add detail to ever more elaborate models."

    Note that machine learning applied to climate science is fairly dumb -- it's not implying any particular physical insight.

    "AI, or artificial intelligence, is a term we shall generally avoid here in favour of terms like machine learning, which emphasize the statistical aspect, without implying insight."

    So why would it even matter if the software is understandable if it can give the right answer? The necessary pattern matching mechanism could have been added accidentally and no one would be the wiser (and no one would know exactly what it was that made the difference). Same thing as happens with machine learning -- no one has any idea why it works when it does "just seem to work".

    "One conceives of meteorology as a science, where everything can be derived from the first principles of classical fluid mechanics. A second approach is oriented specifically toward the goal of predicting the future evolution of the system (weather forecasts) and success is measured by forecast skill, by any means necessary. This could for instance be by creating approximate analogues to the current state of the circulation and relying on similar past trajectories to make an educated guess of future weather. One can have understanding of the system without the ability to predict; one can have skilful predictions innocent of any understanding"

    Balaji references a NOAA paper claiming that "with little additional effort ... anyone can be a climate forecaster"! The "model-analog" approach is that you dig up an old model run from the archives, check to see if it matches recent data (such as ENSO), and then extrapolate.

    https://pbs.twimg.com/media/EaP29U4XQAAR98F.png

    Unless there are simpler models available, no one will build on what is there -- only established teams with tribal knowledge will build on their GCMs. And if there is no scientific curiosity or drive to want to do better, there the elaborate models will sit, and the stasis will continue.

    Concluding challenge in the Balaji paper:

    "If ML-based modeling needs a manifesto, it may be this: to learn from data not just patterns, but simpler models, climbing down Charney’s ladder. The vision is that these models will leave out the details not needed in an understanding of the underlying system, and learning algorithms will find for us underlying “slow manifolds”, and maybe the basis variables in which to do the learning. That is the challenge before us."

    The observation is that climate science may turn into a pure machine learning exercise unless something simpler comes along, or if machine learning reveals it.

  • 424.
    edited June 15

    At this point, it doesn't have to be perfect, just better than the alternatives.

    This is how they present the machine learning results from the #422 comment:

    "Niño 3.4 index and SOI reanalysis data from 1871 to 1973 were used for model training, and the data for 1984–2019 were predicted 1 month, 3 months, 6 months, and 12 months in advance."

    They essentially do a running fit, with the "validation" part matching only 3 months ahead. So the following fit is deceptive -- by using a 3-month running projection to train the machine learning algorithm, it can always catch up and then refit for the next interval. It would be horrible if they ended the training in 1984 and let it project to the current time.

    https://pbs.twimg.com/media/Eajtph1WkAAfDn9.png

    By that token, I let the validation interval extend for years:

    https://imagizer.imageshack.com/v2/1132x597q90/r/923/RvgE11.png

    Trying to sell this stuff as being superior is tricky. You can fool others, but as Feynman said, the one person that you don't want to fool is yourself, and that's why I am always looking for better cross-validation approaches.

  • 425.

    “On the forcings of the unusual Quasi-Biennial Oscillation structure in February 2016” — http://oceanrep.geomar.de/47636/1/acp-20-6541-2020.pdf

    This paper plots an interesting time series called a “horizontal Rossby wave momentum flux” which comprises a rapid gravity wave component along a horizontally stratified layer.

The data is at the 40 hPa pressure level, which coincides with the horizontally stratified 40 hPa QBO layer.

    What I also plotted is a fit using the parameters for the ENSO model but allowing a larger proportion of the high wavenumber LTE modulation in comparison to that applied to the oceanic ENSO. This makes sense because the atmosphere has a much faster inertial response, so can accommodate the high-K solutions. Yet, a priori there is no way that this should fit to this degree (CC=0.68) unless this is actually what is happening -- i.e. that this particular measure is actually of atmospheric LTE dynamics.

    (The lower panel is a sliding windowed CC showing where the match is better or worse)
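For reference, a sliding-window CC like the one in the lower panel is nothing more than this kind of computation; the window length and the toy series below are arbitrary choices, not the actual model and data.

```python
# Sketch of a sliding-window correlation coefficient between a model series and
# a data series on the same time grid (window length and toy series are
# arbitrary choices here).
import numpy as np

def sliding_cc(model, data, window=60):
    """Pearson CC over a centered window of `window` samples."""
    half = window // 2
    out = np.full(len(data), np.nan)
    for i in range(half, len(data) - half):
        out[i] = np.corrcoef(model[i-half:i+half], data[i-half:i+half])[0, 1]
    return out

t = np.linspace(0, 40, 480)
data = np.sin(2.7*t) + 0.4*np.random.default_rng(1).standard_normal(t.size)
model = np.sin(2.7*t)
cc_track = sliding_cc(model, data, window=60)
print("mean windowed CC: %.2f" % np.nanmean(cc_track))
```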

Furthermore, what must be riding along with this is the monopole K~0 QBO solution, which consists of pure reversals of wind direction encircling the globe. So the Rossby waves, I am thinking, are essentially perturbations over a regional spatial extent, and so can respond to the localized tropical forcing.

    The geophysics here is so far ahead of the current climatology all I can do is shake my head.

    Comment Source:“On the forcings of the unusual Quasi-Biennial Oscillation structure in February 2016” — http://oceanrep.geomar.de/47636/1/acp-20-6541-2020.pdf This paper plots an interesting time series called a “horizontal Rossby wave momentum flux” which comprises a rapid gravity wave component along a horizontally stratified layer. The data is at an altitude of 40 hPa, which is congruent with the horizontal 40 hPa QBO stratospheric layer. ![](https://imagizer.imageshack.com/img924/5479/vNE2sZ.png) What I also plotted is a fit using the parameters for the ENSO model but allowing a larger proportion of the high wavenumber LTE modulation in comparison to that applied to the oceanic ENSO. This makes sense because the atmosphere has a much faster inertial response, so can accommodate the high-K solutions. Yet, a priori there is no way that this should fit to this degree (CC=0.68) unless this is actually what is happening -- i.e. that this particular measure is actually of atmospheric LTE dynamics. (The lower panel is a sliding windowed CC showing where the match is better or worse) Furthermore, what must be riding along with this is the monopole K~0 QBO solution, which are pure reversals of wind direction encircling the globe. So the Rossby waves I am thinking are essentially perturbations along a regional spatial extent, so can respond to the localized tropical forcing. The geophysics here is so far ahead of the current climatology all I can do is shake my head.
  • 426.
    edited June 17

Interesting paper that further supports the Milankovitch model of glacial cycles

    "Detection of significant climatic precession variability in early Pleistocene glacial cycles"

The nature of the orbit is connected to natural variability in climate at every scale. Think about it: the null hypothesis for ANY natural climate variation should be to rule out orbital forcings first:

    • Orbital daily => diurnal climate cycle
    • Orbital annual => seasonal climate cycle
    • Orbital multi-annual => erratic ENSO climate cycle
    • Orbital millennial => Milankovitch climate cycle

There's also obviously the diurnal and semi-diurnal tidal cycle, and possibly the thermohaline meridional overturning cycles that have orbital influences. So essentially the orbital periods of 1 day, 365.242 days, 27.321 days, 27.212 days, 27.554 days, 365.256 days, and 365.2596 days pretty much generate all the possibilities.
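To make that concrete, the long-period tidal constituents and their annual aliases drop out of just those few periods with a couple of lines of arithmetic. The period relations themselves are standard; treating them as aliased by an annual impulse is the modeling assumption.

```python
# Minimal arithmetic: the synodic month and the annual aliases of the
# long-period (fortnightly/monthly) tidal constituents follow from the
# orbital periods listed above.  Treating them as aliased by an annual
# impulse is the modeling assumption; the period relations are standard.
import numpy as np

year = 365.2422               # tropical year (days)
month_tropical = 27.3216
month_draconic = 27.2122      # kept for reference
month_anomalistic = 27.5546

# synodic month from 1/Tsyn = 1/Tmonth - 1/Tyear (sidereal values differ negligibly)
month_synodic = 1.0 / (1.0/month_tropical - 1.0/year)

def annual_alias_years(period_days, sampling_days=year):
    """Alias period (in years) of a fast cycle sampled once per year."""
    f = sampling_days / period_days          # cycles per sampling interval
    frac = abs(f - round(f))                 # fold to the nearest integer
    return np.inf if frac == 0 else 1.0 / frac

for name, p in [("Mf  (half tropical month)", month_tropical / 2),
                ("Mm  (anomalistic month)",   month_anomalistic),
                ("Msf (half synodic month)",  month_synodic / 2)]:
    print("%-26s %7.3f d -> annual alias ~ %.2f yr" % (name, p, annual_alias_years(p)))
```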

    Comment Source:Interesting paper that further supports Milankovitch model of glacial cycles ["Detection of significant climatic precession variability in early Pleistocene glacial cycles"](https://sci-hub.tw/10.1016/j.epsl.2020.116137) ![](https://pbs.twimg.com/media/EauSNRiWkAAKx3J.png) The nature of the orbit is connected to natural variability in climate at every scale. Think about it: the null hypothesis for ANY natural climate variation should exclude orbital forcings first * Orbital daily => diurnal climate cycle * Orbital annual => seasonal climate cycle * Orbital multi-annual => erratic ENSO climate cycle * Orbital millennial => Milankovitch climate cycle There's also obviously the diurnal and semi-diurnal tidal cycle, and possibly the thermohaline meriodinal overturning cycles that have orbital influences. So essentially the orbital periods of 1 day, 365.242 day, 27.321 day, 27.212 day, 27.554 day, 365.256 day, 365.2596 day pretty much generates all the possibilities.
  • 427.
    edited July 1

Probably the most direct real-world example of Laplace's Tidal Equation MZ-like modulation is described here: https://geoenergymath.com/2020/06/29/the-sao-and-annual-disturbances/

    The modulation is dependent on amplitude so that it will have a certain signature. For a sinusoidal waveform the peak will bifurcate as below

    This is what happens with the temperature signal at 1 hPa at an upper latitude

    This is not difficult to model and fit at all. It's essentially a modulated annual signal.
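A minimal numerical sketch of that peak bifurcation, using a plain sin-of-sin modulation; the modulation-index values are arbitrary, chosen only to straddle pi/2.

```python
# Sketch of how a sin-of-sin (LTE-style) modulation splits a sinusoidal peak
# in two once the modulation index exceeds pi/2.  Purely illustrative values.
import numpy as np

t = np.linspace(0, 1, 2001)                  # one annual cycle
carrier = np.sin(2*np.pi*t)                  # plain annual sinusoid

for a in (1.0, 2.5):                         # modulation index below / above pi/2
    y = np.sin(a*carrier)
    # count local maxima within the first half-cycle
    interior = (y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]) & (t[1:-1] < 0.5)
    print("a = %.1f -> %d peak(s) in the half-cycle" % (a, interior.sum()))
```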

    Comment Source:Probably the most direct real-world example of the Laplace's Tidal Equation MZ-like modulation is described here https://geoenergymath.com/2020/06/29/the-sao-and-annual-disturbances/ The modulation is dependent on amplitude so that it will have a certain signature. For a sinusoidal waveform the peak will bifurcate as below ![](https://imagizer.imageshack.com/img922/8757/B1TOct.png) This is what happens with the temperature signal at 1 hPa at an upper latitude ![](https://imagizer.imageshack.com/img922/3818/qDeKxw.png) This is not difficult to model and fit **at all**. It's essentially a modulated annual signal.
  • 428.

    Kerry Emanuel opinion piece on climate science, read this thread https://twitter.com/WHUT/status/1279434134876835841

    Comment Source:Kerry Emanuel opinion piece on climate science, read this thread https://twitter.com/WHUT/status/1279434134876835841
  • 429.
    Comment Source:[New study detects ringing of the global atmosphere](https://phys.org/news/2020-07-global-atmosphere.html)
  • 430.

Great comment on Emanuel, Paul. Also, I'm not sure that I'd call what is assembled as geophysical knowledge for fluids at the level of atmosphere and oceans a "theory". What it is is a bunch of special cases, each with their setup and boundary conditions. It is not unified. It may be too simple a comparison, because the physics is ultimately simple, but it is very far removed from anything like a Maxwell's Laws unification. Indeed, its structure is more like Economics: There's an underlying theory which appears to work in the micro, and then economists go off and try to find instances where the theory actually achieves a prediction of something. Surely geophysical concepts and science are far better than Economics, if only because the data are so much better, but what is the purpose of the Emanuel Project? Is it to demand all atmospheric and ocean scientists first achieve a mastery of fluids and their geophysical manifestations? Do they need to conceptually memorize Kundu and Cohen? Is it to work on a great unifying principle? Is it to eschew looking at the surprises which numerical models sometimes produce and looking for explanations? I would suggest the very failure Professor Emanuel points to, neglecting "subgrid‐scale turbulence on surface heat fluxes in the far western Pacific, where the model‐resolved surface winds are often light", demonstrates the importance of that effect. Similar things can be said for neglecting the conversion of "dissipated kinetic energy back into heat".

I also wonder if this re-emphasis upon currently understood theory is wise given the long arc of the history of Physics. Look at the conceptual back and forth which attended the evolution of the Planck Effect, or Brownian motion, or blackbody radiation. There was a set of mutually contradictory notions at the time. It was difficult to judge one superior to another, and I would argue the reason was a lack of good experiments.

    I thought the point of computational physics in this area was to try to work these physics ab initio, even though the computational engines aren't there to do that with the "speed that is required". The latter is, by the way, driven by applications, not development of science. If you had a coupled atmosphere-ocean-ice sheets engine that took 120 days to complete a run and your purpose was understanding, so what? How long does it take the LHC to complete a series of runs producing data worth analysis? How about the LIGO?

    I could argue that some people in fluid physics come up short understanding numerical mathematics, too. I never quite understood why the Lorenz "chaos" got such big play when that was a well known phenomenon in numerical analysis and computational methods for decades, and never mind Mandelbrot. But what's the point of that? Not everyone can know everything.

    Comment Source:Great comment on Emanuel, Paul. Also, I'm not sure that I'd call what is assembled as geophysical knowledge for fluids at the level of atmosphere and oceans a "theory". What it is is a bunch of special cases, each with their setup and boundary conditions. It is not unified. It may be too simple a comparison, because the physics is ultimately simple, but it is very far removed from anything like a Maxwell's Laws unification. Indeed, it's structure is more like Economics: There's an underlying theory which appears to work in the micro, and then economists go off and try to find instances where the theory actually achieves a prediction of something. Surely geophysical concepts and science are far better than Economics, if only because the data are so much better, but what is the purpose of the Emanuel Project? Is it to demand all atmospheric and ocean scientists first achieve a mastery of fluids and their geophysical manifestations? Do they need to conceptually memorize Kundu and Cohen? Is it to work on a great unifying principle? Is it to eschew looking at the surprises which numerical models sometimes and looking for explanations? I would suggest the very failure Professor Emanuel points to, neglecting "subgrid‐scale turbulence on surface heat fluxes in the far western Pacific, where the model‐resolved surface winds are often light" demonstrates the importance of that effect. Similar things can be said for neglecting the conversion of "dissipated kinetic energy back into heat". I also wonder if this re-emphasis upon currently understood theory is wise given the long arc of the history of Physics. Look at the conceptual back and forth which attended the evolution of the Planck Effect, or Brownian motion, or Blackbody. There was a set of mutually contradictory notions at the time. It was difficult to judge one superior to another, and I would argue the reason was lack of good experiments. I thought the point of computational physics in this area was to try to work these physics _ab_ _initio_, even though the computational engines aren't there to do that with the "speed that is required". The latter is, by the way, driven by applications, not development of science. If you had a coupled atmosphere-ocean-ice sheets engine that took 120 days to complete a run and your purpose was understanding, so what? How long does it take the LHC to complete a series of runs producing data worth analysis? How about the LIGO? I could argue that some people in fluid physics come up short understanding numerical mathematics, too. I never quite understood why the Lorenz "chaos" got such big play when that was a well known phenomenon in numerical analysis and computational methods for decades, and never mind Mandelbrot. But what's the point of that? Not everyone can know everything.
  • 431.

    Very cool, Daniel.

    Comment Source:Very cool, Daniel.
  • 432.

    Re: The atmospheric ringing paper => The dispersion appears linear (wavenumber proportional to frequency) and Figure 7 shows lots of harmonics of the daily cycle. The following excerpt of their Fig 7 is exactly what the LTE model predicts. All the red ticks are harmonics of the daily tide.

    I will generate an equivalent chart and plot it here shortly.

    Comment Source:Re: [The atmospheric ringing paper](https://journals.ametsoc.org/jas/article/77/7/2519/347483/An-Array-of-Ringing-Global-Free-Modes-Discovered) => The dispersion appears linear (wavenumber proportional to frequency) and Figure 7 shows lots of harmonics of the daily cycle. The following excerpt of their Fig 7 is exactly what the LTE model predicts. All the red ticks are harmonics of the daily tide. ![](https://imagizer.imageshack.com/img924/135/Kvz0SP.png) I will generate an equivalent chart and plot it here shortly.
  • 433.

    Jan, It will be interesting if the Kerry Emanuel paper generates further discussion.

    Comment Source:Jan, It will be interesting if the Kerry Emanuel paper generates further discussion.
  • 434.
    edited July 9

Paul, I wonder if it will, at all. I wonder if it's an echo of a wish that things were as they once were. I heard a prominent climate scientist say, at a conference, in person, that maybe our current situation with widespread disbelief in climate science wouldn't have happened if Charney had lived longer. I have the greatest respect for this scientist and their work, but seriously?

It's interesting that the biological sciences seem to be well ahead of these areas of Physics in settling these perspectives. Perhaps that's because they've been so disconnected from mathematical applications to their fields for so long, and relish the contributions of computation and the attitudes and perspectives it brings to their fields, and perhaps it's because, well, geophysical fluid dynamics has gotten stodgy and high-priest-heavy. Perhaps it's because the biological sciences, notably biopharma and bioinformatics, have been better funded.

There are cultural tells. The geophysical fluid dynamics people really look down on solid Earth geophysicists, which I think is completely unwarranted. The biological sciences have added the term in silico to the older terms in vivo and in vitro, reflecting their change in mindset.

Naturally, I find the problems and data in the biological sciences refreshing relative to either Physics or much of Engineering. But I gotta admit, landing boosters returning from space, perceived as a Control Theory problem, gives me goosebumps every time I witness it. It's amazing.

    Comment Source:Paul, I wonder if it will, at all. I wonder if it's an echo of a wish that things be once as they were. I heard a prominent climate scientist say, at a conference, in person, that maybe our current situation with widespread disbelief in climate science wouldn't have happened if [Charney](https://en.wikipedia.org/wiki/Jule_Gregory_Charney) had lived longer. I have the greatest respect for this scientist and have _tons_ of respect for them, but _seriously_? It's interesting that the biological sciences seem to be well ahead of these areas of Physics in settling these perspectives. Perhaps that's because they've been so disconnected from mathematical applications to their fields for so long, and relish the contributions computation and _the_ _attitudes_ _and_ _perspectives_ _it_ _brings_ to their fields, and perhaps it's because, well, geophysical fluid dynamics has gotten stodgy and high priest heavy. Perhaps it's because the biological sciences, notably biopharma and bioinformatics, have been better funded. There are cultural tells. The geophysical fluid dynamics people really _look_ _down_ on solid Earth geophysicists, which I think is _completely_ unwarranted. The biological sciences have added to the terms _in vivo_, and _in vitro_ the term _in silico_, given their change in mindset. Naturally, I find the problems and data in the biological sciences refreshing relative to either Physics or much of Engineering. But I gotta admit, landing boosters returning from space, perceived as a Control Theory problem gives me goosebumps every time I witness it. It's _amazing_.
  • 435.

Jan, The thing that perplexes me about geophysical fluid dynamics is that they have special names for every type of wave -- Rossby, Kelvin, etc. When I learned about waves, a wave was a wave and was essentially described numerically, and perhaps differentiated by whether it was a standing wave, traveling wave, or harmonic, instead of, as you said, by a "high priest" naming convention.

You mentioned Charney -- what is more bizarre to muse about is that Richard Feynman wrote on his last blackboard, shortly before he died, that he was going to learn more about fluid dynamics. On the right side in the middle, it says "non-linear classical hydrodynamics" right below 2-D Hall effect (which has analogs in climate topologies).

    "What I cannot create, I do not understand"

    from:

    https://aboatmadeoutoftrash.wordpress.com/2012/01/19/feynmans-last-blackboard/

    Perhaps the atmosphere-ringing paper will help straighten out some of the fundamental understanding. It is a cool paper.

    Comment Source:Jan, The thing that perplexes me about geophysical fluid dynamics is that they have special names for every type of wave -- Rossby, Kelvin, etc. When I learned about waves, a wave was a wave and was essentially described numerically, and perhaps differentiated by whether it was a standing wave, traveling wave, harmonic instead of like you said, by a "high priest" naming convention. You mentioned Charney -- what is more bizarre to muse about was that Richard Feynman wrote in his last blackboard lecture before he died that he was going to learn more about fluid dynamics. On the right side in the middle, it says "non-linear classical hydrdynamics" right below 2-D Hall effect (which has analogs in climate topologies). *"What I cannot create, I do not understand"* from: ![](https://aboatmadeoutoftrash.files.wordpress.com/2012/01/feynmanlastboard1.gif) https://aboatmadeoutoftrash.wordpress.com/2012/01/19/feynmans-last-blackboard/ Perhaps the atmosphere-ringing paper will help straighten out some of the fundamental understanding. It is a cool paper.
  • 436.

Yeah, to me, the most interesting thing about atmosphere-ringing at different frequencies is the ramifications of beat frequencies.

    Comment Source:Yeah, to me, the most interesting thing about atmosphere-ringing at different frequencies are the ramifications of beat frequencies.
  • 437.
    edited July 9

    This part is confusing to me:

    "Some peaks represent astronomically forced tides, but we show that most peaks are manifestations of the ringing of randomly excited global-scale resonant modes, reminiscent of the tones in a spectrum of a vibrating musical instrument."

    One forced tide is the daily tide, which they show as being extremely strong in Figure 4, shown below

All the red vertical lines align precisely with harmonics of the daily tide. The yellow highlights show clear symmetric double-sidebands, which are obviously some modulation of the daily tide -- my guess, based on the relative spacing, is that it's a beat frequency (as Jan mentioned) with the main fortnightly lunar tide. It is interesting that the double-sidebands only appear around the even harmonics of the daily tide. Overall, the odd harmonics are lower in amplitude, indicating that the daily waves have more of an asymmetric sawtooth impulse character.

    So what is left are the peaks indicated by the blue circles. It appears that these are the main focus of the paper. But why these are the main focus is puzzling to me. They say "Some peaks represent astronomically forced tides" whereas it should be "MOST peaks represent astronomically forced tides (or their harmonics)".

As is typical of research work, they may be burying the fundamental findings by trying to uncover some other odd stuff. The reason they do this, I am sure, is that some reviewers said the atmospheric tide aspect is nothing novel -- even though IMO it is. As they said in the opening statement of the abstract, this is all due to having "newly available ERA5 hourly global data", which wasn't possible before.
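A quick signal-processing sanity check of the sideband interpretation: amplitude-modulating a daily cycle with a 13.66-day envelope puts lines at 1 ± 1/13.66 cycles/day around the daily line. The modulation depth and record length below are arbitrary choices, not values taken from the paper.

```python
# Quick check: amplitude-modulating a daily cycle with a 13.66-day (fortnightly)
# envelope puts sidebands at 1 +/- 1/13.66 cycles/day around the daily line.
# Hourly sampling over one year; modulation depth is an arbitrary choice.
import numpy as np

dt = 1.0/24.0                                   # days
t = np.arange(0, 365, dt)
f_day, f_fort = 1.0, 1.0/13.66                  # cycles per day
signal = (1 + 0.3*np.cos(2*np.pi*f_fort*t)) * np.cos(2*np.pi*f_day*t)

spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=dt)           # cycles per day
top = np.argsort(spec)[-3:]                     # three strongest spectral lines
print(np.round(np.sort(freqs[top]), 3))         # ~[0.93, 1.0, 1.07] cpd
```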

    Comment Source:This part is confusing to me: > "**Some** peaks represent astronomically forced tides, but we show that **most** peaks are manifestations of the ringing of randomly excited global-scale resonant modes, reminiscent of the tones in a spectrum of a vibrating musical instrument." One forced tide is the daily tide, which they show as being extremely strong in Figure 4, shown below ![](https://imagizer.imageshack.com/img923/4194/EP8Hq7.png) All the red vertical lines align precisely with harmonics of the daily tide. The yellow highlights show clear symmetric double-sidebands, which are obviously some modulation of the daily tide -- I can guess that based on the relative spacing, it appears that it's due to a beat frequency (as Jan mentioned) with the main fortnightly lunar tide. Interesting that the double-sidebands only appear around the *even harmonics* of the daily tide. Overall, the odd harmonics are lower in amplitude, indicating that the daily waves have more of an asymmetric *sawtooth* impulse character. So what is left are the peaks indicated by the blue circles. It appears that these are the main focus of the paper. But why these are the main focus is puzzling to me. They say *"**Some** peaks represent astronomically forced tides"* whereas it should be *"**MOST** peaks represent astronomically forced tides (or their harmonics)"*. As is typical of research work, they may be burying the fundamental findings with trying to uncover some other odd stuff. The reason they do this is because I am sure some reviewers said that the atmospheric tide aspect is nothing novel -- even though IMO I think that it is. As they said in the opening statement of the abstract this is all due to having *"newly available ERA5 hourly global data"*, which wasn't possible before.
  • 438.
    edited July 9

    This is the model spectrum for the "ringing atmosphere"

    It appears that the even/odd pattern in intensity might be due to a 1/2 day (semi-diurnal) modulation in the daily tide, and the double satellite side-bands are highly likely due to the 13.66 day fortnightly lunar tide modulating the diurnal forcing. The fact that the sidebands don't show up on the odd-harmonic spectra might be due to the fact that the odd peaks are closer to the background noise.

    The other peaks are not modeled but are circled in green. The fact that the peaks are broader likely means that they are stimulated by a stochastic resonance -- the first one corresponds to ~33 hours (the article calls it the the “33-h Kelvin wave") and the second ~9.4 hour. The weaker third one is at ~7.2 hours. The 33 hour and 9.4 hour waves were identified in this paper from 1999 https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/1999JA900044 . The combination of these three waves fulfills the condition for a triad.

    Comment Source:This is the model spectrum for the "ringing atmosphere" ![](https://imagizer.imageshack.com/img924/1039/560kq5.png) It appears that the even/odd pattern in intensity might be due to a 1/2 day (semi-diurnal) modulation in the daily tide, and the double satellite side-bands are highly likely due to the 13.66 day fortnightly lunar tide modulating the diurnal forcing. The fact that the sidebands don't show up on the odd-harmonic spectra might be due to the fact that the odd peaks are closer to the background noise. The other peaks are not modeled but are circled in green. The fact that the peaks are broader likely means that they are stimulated by a stochastic resonance -- the first one corresponds to ~33 hours (the article calls it the the “33-h Kelvin wave") and the second ~9.4 hour. The weaker third one is at ~7.2 hours. The 33 hour and 9.4 hour waves were identified in this paper from 1999 https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/1999JA900044 . The combination of these three waves fulfills the condition for a [triad](https://geoenergymath.com/2020/04/06/triad-waves/).
  • 439.

With a 1% spread in the extra frequencies:

    The marketing behind the "atmosphere ringing" paper is absurd. You can't hear any of the ringing with your ears since as you can see the fundamental frequency has a period of one day. It's like someone claiming that you can hear the cycling of sunrise and sunset. Wish I could be so blatant with the marketing of any of this actually interesting stuff presented on this forum, but I at least retain a modicum of integrity :)

    Comment Source:with a 1% spread in the extra frequencies ![](https://imagizer.imageshack.com/img924/8336/NoTd6o.png) The marketing behind the "atmosphere ringing" paper is absurd. You can't hear any of the ringing with your ears since as you can see the fundamental frequency has a period of one day. It's like someone claiming that you can hear the cycling of sunrise and sunset. Wish I could be so blatant with the marketing of any of this actually interesting stuff presented on this forum, but I at least retain a modicum of integrity :)
  • 440.
    edited July 14

LOL, here is the latest on modeling the QBO at the 70 hPa layer -- this is actually an interesting mix of tidal factors, which also clearly shows the impact of Laplace's Tidal Equations modulation. Without the LTE modulation and only with the tidal forcing, the waveform would appear squared off instead of jagged.

[Fit plots shown for the 1953-1980 interval, the other fitting intervals (1980-2000 and 2000-2020), and the entire interval]

What is interesting is that the tidal forcing is essentially stationary -- equivalent across each interval -- and only the LTE modulation varies (and this only slightly).

The model is structurally sensitive to the nonlinear LTE modulation, which appears to have a very high wavenumber, likely similar in scale to the tropical instability waves (TIW) along the equatorial Pacific.

    There also doesn't appear to be any anomalous QBO behavior around 2016, which from the tidal forcing levels simply appears to be a consequence of a slight broadening and lower level of the forcing peak.

    Comment Source:LOL, here is the latest on modeling the QBO at the 70 hPa layer-- this is actually an interesting mix of tidal factors, which also clearly shows the impect of Laplace's Tidal Equations modulation. Without the LTE modulation and only with the tidal forcing, the waveform would appear squared off instead of jagged. ![1953-1980](https://imagizer.imageshack.com/img924/7407/WZUtbI.png) other fitting intervals ![1980-2000](https://imagizer.imageshack.com/img923/6763/M3ewtU.png) ![2000-2020](https://imagizer.imageshack.com/img922/625/8Df8Hi.png) entire interval ![all](https://imagizer.imageshack.com/img923/1921/i4GFXp.png) what is interesting is that the tidal forcing is stationarily equivalent across each interval and only the LTE modulation varies (and this only slightly). ![](https://imagizer.imageshack.com/img923/5782/GTLSzy.png) The model is structurally sensitive to the nonlinear LTE modulation, which appears to have a very high wavenumber and likely similar in scale to the tropical instability waves (TIW) along the equatorial Pacific. There also doesn't appear to be any [anomalous QBO behavior around 2016](http://oceanrep.geomar.de/47636/1/acp-20-6541-2020.pdf), which from the tidal forcing levels simply appears to be a consequence of a slight broadening and lower level of the forcing peak.
  • 441.
    edited July 16

    This is the annual impulse of ENSO. Finally it's been identified.

    "New Indices for Better Understanding ENSO by Incorporating Convection Sensitivity to Sea Surface Temperature"

    https://journals.ametsoc.org/jcli/article/33/16/7045/348273/New-Indices-for-Better-Understanding-ENSO-by

    "Because of the seasonally varying SST and EOT, FACT contains the influence from the seasonality of SST, implying a nonlinear interaction between ENSO and seasonal cycle (e.g., Stuecker et al. 2013). "

What I use as an input forcing to the ENSO model is the tidal-reinforced annual impulse, which aligns with the longitudinal extent of the Pacific warm pool

After the LTE modulation is applied, the ENSO cycles are reproduced. This longitudinal extent may be the key missing ingredient that the mathematical model required as an intermediate stage of computation. The longitudinal extent thus acts as a lever arm as the erratic tidal-thermocline torque is applied. And the nonlinear LTE modulation does the rest, in terms of creating the standing-wave modes necessary to match the Pacific-wide physical behavior.
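For readers following along, here is a schematic of that pipeline: a tidal signal gated by a sharp annual impulse, integrated to a lagged response, then passed through a sinusoidal LTE-style modulation. Every constant below (constituent periods, impulse timing and width, wavenumbers) is a placeholder, not the fitted model.

```python
# Schematic of the pipeline described above: a tidal signal gated by a sharp
# annual impulse, integrated to a lagged (thermocline-like) response, then
# passed through a sinusoidal LTE-style modulation.  Every constant here
# (constituent periods, impulse timing/width, wavenumbers) is a placeholder.
import numpy as np

dt = 1.0/365.25
t = np.arange(1880, 2020, dt)                          # time in years, daily steps
tide = (np.cos(2*np.pi*t*365.25/13.66)                 # fortnightly constituent
        + 0.7*np.cos(2*np.pi*t*365.25/27.55))          # monthly (anomalistic) constituent

impulse = np.exp(-0.5*((t % 1.0) - 0.9)**2 / 0.01**2)  # narrow impulse late in each year
forcing = tide * impulse

response = np.cumsum(forcing) * dt                     # integrated, lagged response
response = (response - response.mean()) / response.std()

# LTE-style modulation: a low-wavenumber and a high-wavenumber standing-wave term
enso_like = 0.7*np.sin(2.0*response) + 0.3*np.sin(15.0*response)
print(enso_like[:5])
```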

    Comment Source:This is the annual impulse of ENSO. Finally it's been identified. ![](https://pbs.twimg.com/media/EdAzMDbWAAUoF2C.png) "New Indices for Better Understanding ENSO by Incorporating Convection Sensitivity to Sea Surface Temperature" https://journals.ametsoc.org/jcli/article/33/16/7045/348273/New-Indices-for-Better-Understanding-ENSO-by > "Because of the seasonally varying SST and EOT, FACT contains the influence from the seasonality of SST, implying a nonlinear interaction between ENSO and seasonal cycle (e.g., Stuecker et al. 2013). " What I use as an an input forcing to the ENSO model is the tidal-reinforced annual impulse , which aligns with the longitudinal extent of the Pacific warm pool ![](https://imagizer.imageshack.com/img924/3770/kSgufq.png) After the LTE modulation is applied, then the ENSO cycles are reproduced. This longitudal extent may be the key missing ingredient that the mathematical model required as an intermediate stage of computation. The longitudinal extent thus acts as a lever arm as the erratic tidal-thermocline torque is applied. And the nonlinear LTE modulation does the rest, in terms of creating the standing wave modes necessary to match the Pacific-wide physical behavior.
  • 442.

    Here is a rationale for continuing the research angle -- there is a contingent that believes that climate science is already well-understood and all that is important to do is communicate the consensus views:

Concentrating on the communication side makes it too easy to slide into the role of a gatekeeper -- trying to maintain a status quo of consensus, and not allowing science to advance in potentially novel ways.

    Comment Source:Here is a rationale for continuing the research angle -- there is a contingent that believes that climate science is already well-understood and all that is important to do is communicate the consensus views: ![](https://pbs.twimg.com/media/EdZXwNMXYAAx9XA.png) Concentrating on the communication thing makes it too easy to slide into the role of a gate-keeper -- trying to maintain a status-quo of consensus, and not allowing science to advance in potentially novel ways.
  • 443.

Oh, I'm not a climate scientist or a geophysicist, but the idea that pedagogy is more important than research in climate science is poppycock. On the face of it, even. Pedagogy is about policy. Frankly, it continues to be supremely important to develop our collective understanding of how the Earth system works and, in this respect, particularly how climate works. It's not at all like it's solved. We don't sufficiently understand many things, including the radiation physics of water condensation, or couplings of ice sheet dynamics with oceans and atmosphere, or local-global couplings of ocean heat transfers. We know a lot. But when people in policy ask for timetables on how climate disruption will evolve in terms of SLR or precipitation impacts, even if they give us an emissions profile, we are stuck, with uncertainties which are too broad to be actionable.

I'd say, realistically, you can't do pedagogy until your science is sound, because when you do, the presentation is strongly conditional upon the state of knowledge you have. When that state changes, of course your presentation should change. But the people to whom you have spoken don't understand the conditional part, and the change seems to them like two-facedness or opportunism or political winds.

    I think it would be far better to teach dealing with uncertainty.

    Comment Source:Oh, I'm not a climate scientist or a geophysicist, but the idea that pedagogy is more important than research in climate science is poppycock. On the face of it, even. Pedagogy is about policy. Frankly, it continues to be supremely important to develop our collective understand on how the Earth system works and, in this respect, particularly how climate works. It's not at all like it's solved. We don't sufficiently understand many things, including radiation physics of water condensation, or couplings of ice sheet dynamics with oceans and atmosphere, or local-global couplings of ocean heat transfers. We know a lot. But when people in policy ask for timetables on how climate disruption will evolve in terms of SLR or precipitation impacts, even if they give us an emissions profile, we are stuck on that, with uncertainties which are too broad to be actionable. I'd say, realistically, you can't do pedagogy until your science is sound, because when you do, the presentation is strongly conditional upon the state of knowledge you have. When that state changes, of course your presentation should change. But the people to which you have spoken don't understand the conditional part, and the change seems to them like two-facedness or opportunism or political winds. I think it would be far better to teach dealing with uncertainty.
  • 444.

Jan, That's definitely a better comeback than I can make. I had to look up pedagogy to make sure I knew what it meant; Wikipedia gives the example of the Socratic Method as a teaching policy. That may better suit a scientific discipline where there is still lots of uncertainty, but I doubt that's what "mtobis" meant.

    Comment Source:Jan, That's definitely a better comeback than I can make. Having to look up pedagogy to make sure I knew what it meant, Wikipedia gives the example of the Socratic Method as a teaching policy. That may better suit a scientific discipline where there is still lots of uncertainty, but I doubt that's what "mtobis" meant.
  • 445.

    Scientists have had enough of Richard Lindzen, who came up with the consensus QBO explanation nearly 50 years ago. This is one of the latest from a twitter thread labelled "Dick L, who was not just wrong, but very very confident about it" by Prof Andrew Dessler :

    https://twitter.com/AndrewDessler/status/1286107941955870720

    "I hope people don't forget about him because there's a good lesson in his crashing and burning. Science needs iconoclasts and point out problems with the field's ideas. That's how you strengthen & firm up the paradigms (or destroy them)."

    "But the one thing that science will not tolerate are people who will not give up. Feel free to question the paradigm, but you have move on when it's clear that the paradigm is right. Lindzen painted himself into a corner and could not get out."

    "Importantly, you never have to admit you're wrong. You just have to start stop talking about. If Lindzen and just moved on in the early 2000s and started working on other things, people would have completely forgetten he was ever a skeptic."

    That is a harsh assessment and not the only one. This YouTube video on Lindzen was from Prof Pierrehumbert :

    "If you're wrong in an interesting way, that advances the science. It's great to be wrong. And Richard Lindzen has made a whole career out of being wrong in interesting ways."

    Interesting that Lindzen essentially gave up on his QBO model, declaring victory in 1974. From his review paper "On the Development of the Theory of the QBO", he said:

    "Following Holton and Lindzen (1972), I concluded that there was little point in refining the theory of the QBO until we had a better handle on the nature and generation of the upward-propagating waves, as well as some observational details of the wave-mean-flow interaction. An observationally based attempt in this direction is described in Lindzen and Tsay (1974). That was my last direct contribution tothe study of the QBO. "

    Yet Lindzen did not give up on his arguments countering the consensus models of global warming. I think the problem is that a contrarian such as Lindzen starts with the intent to work on a topic until people relent and agree with him, instead of the conventional scientific process of formulating the science according to the evidence, and updating understanding as necessary. It's not a question of giving up nor of persisting -- everyone that understands science would agree that the scientific process is never-ending with no clear beginning and no definitive end.

    Comment Source:Scientists have had enough of Richard Lindzen, who came up with the consensus QBO explanation nearly 50 years ago. This is one of the latest from a twitter thread labelled **"Dick L, who was not just wrong, but very very confident about it"** by Prof Andrew Dessler : https://twitter.com/AndrewDessler/status/1286107941955870720 > "I hope people don't forget about him because there's a good lesson in his crashing and burning. Science needs iconoclasts and point out problems with the field's ideas. That's how you strengthen & firm up the paradigms (or destroy them)." > "But the one thing that science will not tolerate are people who will not give up. Feel free to question the paradigm, but you have move on when it's clear that the paradigm is right. Lindzen painted himself into a corner and could not get out." > "Importantly, you never have to admit you're wrong. You just have to <strike>start</strike> stop talking about. If Lindzen and just moved on in the early 2000s and started working on other things, people would have completely forgetten he was ever a skeptic." That is a harsh assessment and not the only one. This [YouTube video](https://youtu.be/RICBu_P8JWI?t=2146) on Lindzen was from [Prof Pierrehumbert](https://en.wikipedia.org/wiki/Raymond_Pierrehumbert) : > "If you're wrong in an interesting way, that advances the science. It's great to be wrong. And Richard Lindzen has made a whole career out of being wrong in interesting ways." Interesting that Lindzen essentially gave up on his QBO model, declaring victory in 1974. From his review paper *"On the Development of the Theory of the QBO"*, he said: >"Following Holton and Lindzen (1972), I concluded that there was little point in refining the theory of the QBO until we had a better handle on the nature and generation of the upward-propagating waves, as well as some observational details of the wave-mean-flow interaction. An observationally based attempt in this direction is described in Lindzen and Tsay (1974). That was my last direct contribution tothe study of the QBO. " Yet Lindzen did not give up on his arguments countering the consensus models of global warming. I think the problem is that a contrarian such as Lindzen starts with the intent to work on a topic until people relent and agree with him, instead of the conventional scientific process of formulating the science according to the evidence, and updating understanding as necessary. It's not a question of giving up nor of persisting -- everyone that understands science would agree that the scientific process is never-ending with no clear beginning and no definitive end.
  • 446.
    edited July 24

    My custom search optimizer already works faster than the Excel solver https://forum.azimuthproject.org/discussion/comment/22337/#Comment_22337

I'm thinking that since the last stage of the ENSO model is a sum of LTE modulations, I can add a multiple linear regression to calculate the optimal set of modulation coefficients for a given tidal forcing. This should remove some of the stiffness of the solution if it gets stuck in a local minimum at the higher-level modulations. The key to these kinds of optimizers is to re-weight as many of the coefficients as possible at each search step. Thus, the multiple regression serves as a "mini" gradient-descent search for that set of coefficients.

    The speed-up due to this mod could be substantial.
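The regression step itself is straightforward: for a fixed forcing r(t), the LTE output is linear in the sin/cos modulation coefficients, so they can be solved directly by least squares at each search iteration. A sketch, where the toy forcing, wavenumbers, and target are placeholders rather than the actual fitted model:

```python
# Sketch of the "mini" regression step: for a fixed forcing r(t), the LTE
# output is linear in the sin/cos modulation coefficients, so they can be
# solved directly by least squares at each search iteration.  The toy
# forcing, wavenumbers and target below are placeholders.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 140, 140*365)                         # daily steps over 140 years
gate = np.abs((t % 1.0) - 0.5) < 0.02                    # ~2-week annual window
r = np.cumsum(np.cos(2*np.pi*t/0.0374) * gate)           # toy integrated tidal forcing
r = (r - r.mean()) / r.std()

target = np.sin(2.1*r) + 0.4*np.sin(13.0*r) + 0.1*rng.standard_normal(t.size)

wavenumbers = [2.1, 5.0, 13.0]                           # candidate LTE wavenumbers
B = np.column_stack([g(k*r) for k in wavenumbers for g in (np.sin, np.cos)])
coef, *_ = np.linalg.lstsq(B, target, rcond=None)
print(np.round(coef, 2))                                 # fitted weight for each term
```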


    On a related note, the machine learning front keeps pushing forward, from yesterday “A Novel Framework for Spatio-Temporal Prediction of Climate Data Using Deep Learning” https://arxiv.org/abs/2007.11836

    ” Specifically, we show how spatio-temporal processes can be decomposed in terms of a sum of products of temporally referenced basis functions, and of stochastic spatial coefficients which can be spatially modelled and mapped on a regular grid, allowing the reconstruction of the complete spatio-temporal signal.”

Things are converging: https://geoenergymath.com/2020/07/17/el-nino-modoki/comment-page-1/#comment-1897

    A little context to show that they may be rearranging/recomposing EOFs similar to the way that standing-wave modes are superimposed.

    "Randomly chosen examples of prediction map and time series are shown and compared with the true spatio-temporal field in Fig. 3. The predicted map recovers the true spatial pattern, and the temporal behaviours are fairly well replicated too. Figure 4 shows the spatio-temporal semivariograms of the simulated data, of the output of the model and of the residuals — i.e. the difference between the simulated data and the modelled one. All semivariograms are computed on the test points. The semivariogram on the modelled data shows how the interpolation recovered the same spatio-temporal structure of the (true) simulated data, although its values are slightly lower. This imply that the model has been able to explain most of the spatio-temporal variability of the phenomenon. However, it must be pointed out that even better reconstruction of the spatio-temporal structure of the data could be recognizable in the semivariograms computed on the training set, similarly to how the training error would be lower than the testing one. Finally, almost no structure is shown in the semivariogram of the residuals, suggesting that almost all the spatially and temporally structured information — or at least the one described by a two-point statistic such as the semivariogram — has been extracted from the data. It also shows a nugget corresponding to the noise used in the generation of the dataset."

    Comment Source:My custom search optimizer already works faster than the Excel solver https://forum.azimuthproject.org/discussion/comment/22337/#Comment_22337 I'm thinking that since the last stage of the ENSO model is a sum of LTE modulations, that I can add a multiple linear regression to calculate the optimal set of modulation coefficients for a given tidal forcing. This should remove some of the stiffness of the solution if it gets stuck in a local minimum at the higher level modulations. The key to these kinds of optimizers is to re-weight as many of the coefficients as possible at each search step. Thus, the multiple regression servers as a "mini" gradient-descent search for that set of coefficients. The speed-up due to this mod could be substantial. --- On a related note, the machine learning front keeps pushing forward, from yesterday **“A Novel Framework for Spatio-Temporal Prediction of Climate Data Using Deep Learning”** https://arxiv.org/abs/2007.11836 > ” Specifically, we show how spatio-temporal processes can be decomposed in terms of a sum of products of temporally referenced basis functions, and of stochastic spatial coefficients which can be spatially modelled and mapped on a regular grid, allowing the reconstruction of the complete spatio-temporal signal.” Things are converging : https://geoenergymath.com/2020/07/17/el-nino-modoki/comment-page-1/#comment-1897 A little context to show that they may be rearranging/recomposing EOFs similar to the way that standing-wave modes are superimposed. > "Randomly chosen examples of prediction map and time series are shown and compared with the true spatio-temporal field in Fig. 3. The predicted map recovers the true spatial pattern, and the temporal behaviours are fairly well replicated too. Figure 4 shows the spatio-temporal semivariograms of the simulated data, of the output of the model and of the residuals — i.e. the difference between the simulated data and the modelled one. All semivariograms are computed on the test points. The semivariogram on the modelled data shows how the interpolation recovered the same spatio-temporal structure of the (true) simulated data, although its values are slightly lower. This imply that the model has been able to explain most of the spatio-temporal variability of the phenomenon. However, it must be pointed out that even better reconstruction of the spatio-temporal structure of the data could be recognizable in the semivariograms computed on the training set, similarly to how the training error would be lower than the testing one. Finally, almost no structure is shown in the semivariogram of the residuals, suggesting that almost all the spatially and temporally structured information — or at least the one described by a two-point statistic such as the semivariogram — has been extracted from the data. It also shows a nugget corresponding to the noise used in the generation of the dataset." ![](https://pbs.twimg.com/media/Edtm6pfWoAEygFT.png) ![](https://pbs.twimg.com/media/EdtnDb9XoAQphBV.png)
  • 447.
    edited July 29

I will likely look into using the Julia scientific programming environment at some point. This is a very good marketing/educational video: https://youtu.be/QwVO0Xh2Hbg

    And there is an online Julia meeting going on right now https://live.juliacon.org/live

Watching this right now -- "Machine Learning will have a big impact on fluid dynamics computation": https://youtu.be/og6aE3sYdHg

    Comment Source:I will likely look into using the Julia scientific programming environment at some point. This is a very good marketing/educational video https://youtu.be/QwVO0Xh2Hbg And there is an online Julia meeting going on right now https://live.juliacon.org/live Watching this right now -- "Machine Learning will have a big impact on fluid dynamics computation" https://youtu.be/og6aE3sYdHg
  • 448.

There was some enthusiasm for Julia back at Akamai among data scientists, at least before I left. That's a little odd, since Scala seems to be the thing in the Spark universe. Never mind, though, I'm sticking with R accompanied by a little bit of Python 3 now and then. It's difficult to imagine I'd pick Python to use by choice any more, though.

    Comment Source:There was some enthusiasm for Julia back at Akamai among data scientists, at least before I left. That's a little odd, since Scala seems to be the thing in the Spark universe. Never mind, though, I'm sticking with R accompanied by a little bit of Python 3 now and then. It difficult to imagine I'd pick Python to use by choice any more, though.
  • 449.

    Jan, It's also more of a MATLAB replacement

    Comment Source:Jan, It's also more of a MATLAB replacement
  • 450.

    Hmmm. I mean R is based upon the same numerics model as MATLAB. In the beginning I preferred R over Python because the numerics in Python's numpy and scipy are a bit dirty. Example: If you repeatedly calculate acos(cos(x)) for arbitrary numerical x on [0, 2\pi), R will be totally happy. Every once in a while, though, Python 3 and numpy will produce a cos(x) value which is close to unity or negative unity, but just slightly bigger than unity in magnitude, enough for acos(.) to choke. That would never happen in MATLAB.
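For what it's worth, the standard NumPy guard for that is to clip the value into [-1, 1] before taking arccos, so round-off can't push the argument out of the domain. I haven't verified whether a plain np.cos call alone can produce an out-of-range value, but composed expressions that should algebraically be cosines certainly can, and the clip costs nothing.

```python
# The usual NumPy guard: clip into [-1, 1] before arccos so floating-point
# round-off cannot push the argument outside the domain.
import numpy as np

x = np.linspace(0.0, 2*np.pi, 1_000_001, endpoint=False)
c = np.cos(x)                              # or any composed expression expected to be a cosine
safe = np.arccos(np.clip(c, -1.0, 1.0))
print(np.isnan(safe).any())                # False -- no domain errors after clipping
```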

    Comment Source:Hmmm. I mean R is based upon the same numerics model as MATLAB. In the beginning I preferred R over Python because the numerics in Python's numpy and scipy are a bit dirty. Example: If you repeatedly calculate acos(cos(x)) for arbitrary numerical x on [0, 2\pi), R will be totally happy. Every once in a while, though, Python 3 and numpy will produce a cos(x) value which is close to unity or negative unity, but just slightly bigger than unity in magnitude, enough for acos(.) to choke. That would never happen in MATLAB.