
QBO and ENSO


Comments

  • 351.

    Wrote a blog post last month on double-sideband suppressed carrier modulation https://geoenergymath.com/2019/10/24/autocorrelation-in-power-spectra-continued/

This explains how an annual impulse forcing doesn't show up at its fundamental frequency in the response, but instead generates only satellite peaks corresponding to the frequencies that the annual impulse interacts with.
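For a concrete sense of why the fundamental disappears, recall the standard product-to-sum identity (ordinary trigonometry, not specific to the blog post):

\( \cos(2\pi f_a t) \cdot \cos(2\pi f_m t) = \tfrac{1}{2}\cos(2\pi(f_a - f_m)t) + \tfrac{1}{2}\cos(2\pi(f_a + f_m)t) \)

An annual impulse contains harmonics at \( f_a = 1, 2, 3, \ldots \) cycles/year, so each modulated frequency \( f_m \) appears only as satellite pairs at \( f_a \pm f_m \); because the modulated signal has no DC component, nothing survives at the annual harmonics themselves.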

  • 352.

After working the ENSO model part-time over an extended period of time, it has become clear that the model is a good counter-example to a structurally stable system: https://en.m.wikipedia.org/wiki/Structural_stability

    Although the model has an equivalent number of degrees of freedom to a conventional tidal analysis model or to the differential length-of-day (dLOD) model, the non-linearity of the solution makes it exceedingly sensitive to the value and phase of the tidal factors.

    What this means is that a 90% accurate estimate of the tidal factors will do a quite adequate job for a conventional tide estimate, but that provides just a starting point for the ENSO model. All the 2nd order factors that don't add much to improving a tidal analysis fit appear to be critical for an ENSO fit. This is just a consequence of the Laplace's Tidal Equation solution in the ENSO regime, where the non-linear multiplier on the forcing will completely change the dynamical trajectory for small perturbations.

But what does this say about the confidence of the results? If it's that sensitive, could matching results simply be fortuitous? The reason I don't think this is the case is that the number of DOF with respect to the set of parameters is as limited in the ENSO model as in the tidal or LOD models. It's just that arriving at the set of values is more difficult with ENSO. For example, one can't simply apply a multiple linear regression algorithm to the ENSO LTE solution like one can with the linearized conventional tidal analysis solution. I am still at the point where I have to exhaustively iterate to get a matching solution.

But after the solution is attained, the primary constituent forcing factors found don't differ that much from the dLOD factors or from representative conventional tidal factors. The strongest factor is always the tropical fortnightly term at 13.66 days, with the next largest being the anomalistic terms at 27.55 & 13.77 days, and then the tropical monthly at 27.32 days. These of course are synched to the annual impulse. The rest of the terms are under an order of magnitude less in spectral power (amplitude * frequency), but because of the sensitivity of the LTE nonlinear multiplier, these need to be iterated to obtain each contributing amplitude.

But with the knowledge that the dLOD results provide a good 1st-order starting point, it's no longer a shot-in-the-dark, trial-and-error approach to iterating on a final solution. The approach I am now using is to take these terms and compact them into an inverse cube law that, when Taylor-series expanded, will generate the primary first-order terms along with all the second-order cross terms. Applying physics to the problem makes it a more plausible model, and the reduction in the number of DOFs after doing this makes it more immune to over-fitting.
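As a sketch of that compaction (my notation; the actual parameterization may differ in detail): write the lunar distance as \( r(t) = r_0(1 + \epsilon(t)) \) with \( \epsilon(t) = \sum_k a_k \cos(\omega_k t + \phi_k) \) and \( a_k \ll 1 \), so that the inverse-cube forcing expands as

\( \frac{1}{r^3} = \frac{1}{r_0^3}(1+\epsilon)^{-3} \approx \frac{1}{r_0^3}\left(1 - 3\epsilon + 6\epsilon^2 - \cdots \right) \)

The linear term carries the primary constituents, while the \( \epsilon^2 \) term automatically generates the cross terms at \( \omega_j \pm \omega_k \), leaving only one amplitude and phase per primary cycle as free parameters.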

  • 353.

    Elsewhere Jan said:

    "This is about decision making and statistics, signal detection theory, and all that, responding (finally) to the comment here. "

A couple of months ago I had an exchange with noteworthy climate scientist Michael Mann about the signal detection algorithm that we applied to ENSO:

    https://twitter.com/MichaelEMann/status/1179147055924731906

    He seemed to assume that this was nothing new and concluded "There are a million things you can do w/ spectral estimation. Some of them are instructive, others not so much. We choose our battles. Godspeed..."

Bottom line: I think that there's a world of mathematical analyses applicable to climate and earth sciences that has remained unexplored. Physics too.

  • 354.

I think a [link to the comment referenced](https://andthentheresphysics.wordpress.com/2019/11/30/tipping-points-elements/#comment-166602) would be helpful. BTW, I do find it annoying that in Wordpress, unlike here, you cannot edit comments you've made after submitting them. This means, effectively, that for anything complicated, you write it up in your own Wordpress, check it, and then re-paste it. But, alas, the set of rules for comments is more restrictive than the rules for posts, particularly regarding LaTeX. That seems to be true here, as well. It's why I'm exploring [Overleaf](https://www.overleaf.com/learn) as a blogging medium, at least for technical things.

    Moving on ....

    The tendency to use spectral estimation for geophysical inference is to me a real problem. I'm not questioning its appropriateness or power. I'm questioning its opacity. In that respect it is no better than using convolutional neural networks for calculating or estimating things. Unless all the code is offered, with documentation, the results appear as if by magic and are correct only based upon trust in the skill and self-integrity of the scholars.

    Still, I've seen spectral methods abused. I'm not saying Mann or anyone did this. But I've seen people try to do Fourier transforms where multi-taper methods are needed. Professor Mann pioneered using EOFs and eigen-decompositions of data (sometimes referred to, inappropriately, as PCA). Still, it is both difficult to follow, unless scrupulous documentation of steps is taken, and doing an uncertainty analysis is difficult.

In fact, I'd say spectral methods are limited because it is not at all clear how to do such an uncertainty analysis in a way that means something in the time domain rather than in the frequency domain.

Mann is not the only one. [Donnelly, et al (2015) of WHOI](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2014EF000274) did the same for determining recurrence rates for hurricanes. I'm not saying there's anything wrong with it. I'm saying that if there were something wrong with it, they don't have the benefit of the community looking in and trying to do an independent assessment. There are independent ways of doing this. But with the energy barrier to understanding what Donnelly, et al did being so high, who is going to invest the effort?

I do disagree with Professor Mann, whom I greatly respect, that choice of method is so stylistic. I think there are objective ways of going about this, particularly with a Bayesian approach. And I think Professor Mann's [defense of the hiatus](https://www.nature.com/articles/nclimate2938) was a demonstrated weakness in whatever means of inference he chooses to pursue. (There is no evidence, in hindsight, that it was at all real.)

    That there is "a world of mathematical analyses applicable to climate and earth sciences" which "has remained unexplored" does not mean these are worthwhile approaches. The same critical criteria need to be applied to them as all others, and, as in the case of models, those methods which are opaque are less useful than ones which are transparent. This is why, for example, despite their limitations, methods of random forests are so attractive to those using them: They can be interpreted.

  • 355.

    Jan, I can no longer reliably comment at ATTP, as the moderator takes capricious delight in deleting my contributions, no matter how benign they are. That extract was from an ATTP comment that you had made recently and which I couldn't reply to at that blog.

Agree, I am not into whatever kind of signal processing fu Mann is attempting. The stuff I am analyzing should be more than obvious to anyone who took a signal processing course in college. Maybe the difference is that I was doing stuff like building FM transmitters, depth finders, analog companders, etc., while I was a teenager, so I have an intuitive feel for working with signals that's distinct from the math. The point I was trying to get across to Mann is blindingly obvious to a comms engineer, but it apparently went right over his head, and I wasn't going to generate any bad blood by pushing it beyond that, so I dropped it and just thanked him for responding.

    Out of curiosity, have you had one-on-one interactions with earth scientists or geophysicists?

  • 356.

    Paul, yes I have: Ray Pierrehumbert, Kerry Emanuel, Scott Doney (more an oceanographer), and people at Woods Hole Oceanographic Institution who are in a variety of fields, but touch on climate-related things from time to time, e.g., the Jeff Donnelly work cited above. Why?

  • 357.

    Why?

Next week is this year's fall meeting of the AGU, and I decided not to go after attending the last 3 years. If my interactions with attendees had been more productive, I might have gone again, but it's mostly one-way NIH (not-invented-here) discussions.

  • 358.
    edited December 2019

Yes, I don't travel to conferences and such. WHOI is conveniently close, accessible by bus, and Claire and I go there twice a year for other functions (1930 Society, Associates Afternoon of Science, and Fye Society). I like to catch symposia there from time to time, but I have been really busy this year, both on the Planning Committee of the Boston Chapter of the American Statistical Association, as Chair of our UU parish Green Congregation Committee (generally this Autumn getting activists onto the streets), and as advisor to committees of the Town of Westwood regarding risks and resiliency planning. (These are all pro bono, of course.) I do catch seminars at MIT from time to time. ([There's one about decarbonizing the electricity sector tomorrow](https://climate.mit.edu/symposia/electricity/). These are all Webcasts, too.) I met Ray for the first time at an EAPS conference at MIT, and I've spoken with Kerry a couple of times. At WHOI, I'm interested in both physical oceanography and marine ecology. I'm supposed to visit with Joe Pedlosky some time, but I feel I need to make more progress with his book before I do that.

Some academics go all over the place. Although he's in a spell now where he is remaining close to home, [my son, Jeff, professor at UCL](http://www.homepages.ucl.ac.uk/~ucahalk/index.html), went to China, Australia, Argentina, Chicago, California, and France (and a couple of other places I forget) in the last couple of years.

  • 359.

Here is another perspective on the quite obvious signal detection algorithm that Mann casually dismissed. This is simple in that it is a straightforward Fourier series of the ENSO signal. One can take either the Sine expansion or the Cosine expansion, as they give complementary symmetries. Below is the Cosine series, with the frequencies between 0.5/year and 1/year mirror-folded back and then compared to the spectrum between 0 and 0.5/year.

![](https://imagizer.imageshack.com/img921/3143/wQqQ70.png)

    The arrows indicate the mirror-image folding and the gray rectangular box is a guide-to-the-eye in how to compare the amplitudes.

This deconstruction is called double-sideband suppressed carrier modulation, and according to the research literature it has never been described for climate dipoles. All the frequencies below half the carrier frequency constitute the lower sideband, while the frequencies above half the carrier frequency are the upper sideband.

    The spectrum is symmetrical with a Cosine series and polarity-reversed anti-symmetrical with a Sine series. What gets suppressed is the carrier frequency, which amounts to an annual impulse that likely drives the ENSO behavior but is not seen directly in the spectrum because the underlying modulated signal is purely sinusoidal, i.e. there is no DC component.
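A minimal numerical sketch of this folding symmetry (synthetic series, not the actual ENSO data or model): multiply an annual impulse train by a zero-mean sinusoid and check that mirror-paired sidebands appear about 0.5/year while nothing appears at the 1/year carrier.

```python
import numpy as np

# Synthetic monthly series spanning 100 years
t = np.arange(0, 100, 1/12)                # time in years
f_m = 0.74                                 # hypothetical modulating frequency, cycles/year

carrier = (np.arange(t.size) % 12 == 0).astype(float)  # annual impulse train
signal = carrier * np.cos(2 * np.pi * f_m * t)         # DSB-SC modulation

spec = np.abs(np.fft.rfft(signal))
freq = np.fft.rfftfreq(t.size, d=1/12)     # cycles/year

# Equal sidebands at f_m and 1 - f_m; essentially zero at the 1.0/yr carrier
for f0 in (f_m, 1 - f_m, 1.0):
    print(f"|X({f0:.2f}/yr)| = {spec[np.argmin(np.abs(freq - f0))]:.1f}")
```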

  • 360.

This sentence was just published in Nature Climate Change:

[Latest climate models confirm need for urgent mitigation](https://www.nature.com/articles/s41558-019-0660-0)

![](https://pbs.twimg.com/media/EK4mxgVWsAAGur8.png)

    They actually sound proud of the fact that all the models diverge. Perhaps they are promoting the idea that different groups of climate scientists should try different approaches. Every model but the true model is considered imperfect, but we don't know what the true model is, so they are all imperfect to some extent.

  • 361.

This is the aim of ensemble forecasting: Each model uses slightly different physics and takes a different riff on the compromises inevitable in any big numerical engine. To the degree they vary, slightly different versions of Earth and of physics are modelled. And the hope is that by using an ensemble, a median behavior and an uncertainty cloud emerge, akin to the spaghetti plots of hurricane forecasts, except the CMIPx trajectories are through climate state space.
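A toy version of that aggregation step (synthetic trajectories standing in for CMIP runs, not real model output): the ensemble median gives the central estimate, and a percentile band gives the uncertainty cloud.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2020, 2101)

# 20 synthetic "model runs": a shared warming trend plus model-specific drift
runs = (1.2 + 0.03 * (years - 2020)
        + 0.1 * rng.normal(size=(20, years.size)).cumsum(axis=1))

median = np.median(runs, axis=0)               # central ensemble behavior
lo, hi = np.percentile(runs, [5, 95], axis=0)  # 90% "uncertainty cloud"
```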

  • 362.

Yup, I realize that. The weird aspect is that they use the term "celebrated". What I would celebrate is if someone determined the model for ENSO that leads to the variability, with no ensembles required. They can do that for tides without resorting to ensembles, so why not ENSO?

  • 363.

Jan, here's a grad student from MIT/WHOI whom I roped into a Twitter thread today.

    https://twitter.com/henrifdrake/status/1202043296123674624

    Fascinating to see how it shakes out.

  • 364.

Cool, Paul. (I don't do Twitter. Had it. Deleted it.) But, yes. It's interesting that climate physics is "narrow". Is Ray Pierrehumbert's stuff "narrow"? It seems to consider a range of planets and things. It's interesting Drake mentions eddies. For life in moving fluids, eddies and swirls serve as transient energy stores, and they can be tapped or pushed against. Because eddies occur at a whole range of scales, I wonder how much total energy, manifested locally as fluid motion, can be stored in these things. As the total amount of energy goes up, do we see more eddies at all scales? There is Jansen, et al (2019), "Towards an energetically consistent, resolution aware parameterization of ocean mesoscale eddies", which says most of the energy goes into the mesoscale. But it seems we don't need to intelligently speculate, because of Martinez-Moreno, et al (2019), "Kinetic energy of eddy-like features from sea surface altimetry", which suggests there are empirical constraints available.

  • 365.

Jan, concerning eddies (vortices), I think the fundamental aspect worth modeling is the behavior known as Tropical Instability Waves. These always form along the equator with a fixed wavelength of 1100 kilometers, which makes for a vortex train. This is at the top range of the mesoscale that you mention, where horizontal dimensions generally range from around 5 kilometers to several hundred kilometers.

![](https://imagizer.imageshack.com/img921/8417/n0dxVF.gif)

https://www.climatescience.org.au/sites/default/files/Holmes_23May2017.pdf

    Regularity beats spontaneity when seeking an understanding. Mr. Drake should be looking at this if anything.

In the ENSO model, adding a TIW feature is a real parameterization, with a wavenumber added to the mix which is 15 times that of the main ENSO dipole. As you say, eddies can occur at a whole range of scales, and any scaled standing wave that fits into the equatorial waveguide has a possibility to exist -- this is classical wave physics for cavity resonance from way back.

![](https://imagizer.imageshack.com/img924/9229/Iibg14.png)
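To illustrate the idea in the LTE solution form (my reading of the model, with illustrative coefficients): the TIW term is the same sinusoidal modulation of the forcing, evaluated at 15 times the primary wavenumber.

```python
import numpy as np

def lte_modulation(forcing, k=1.0, a_tiw=0.2):
    """Toy LTE-style output: primary dipole modulation plus a TIW term
    at 15x the primary wavenumber (k and a_tiw are illustrative)."""
    forcing = np.asarray(forcing, dtype=float)
    return np.sin(k * forcing) + a_tiw * np.sin(15 * k * forcing)
```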

  • 366.

Cavity resonance is interesting. I originally picked up JoeP's text because I was fascinated by standing wave patterns evidenced by "mackerel sky" clouds (cirrocumulus) high in the troposphere. The idea of having a stratification or layer in the atmosphere bounded above and below, sufficient to trap such waves and cause localized condensation, is/was amazing to me. I figured I had no intuition for what was going on. My knowledge of all this is still quite weak.

  • 367.

    Jan, The frustrating part of attending the AGU and having discussions with presenters is that they don't seem to appreciate that the focus should be on the behavioral regions with the lowest dimensionality and the fewest degrees of freedom. I wouldn't know where to start on some region that is constantly undergoing changing regimes.

  • 368.

> Although he's in a spell now where he is remaining close to home, my son, Jeff, professor at UCL went to China, Australia, Argentina, Chicago, California, and France (and a couple of other places I forget) in the last couple of years.

Very impressive work. Looking at his CV, what also caught my eye is that your son may have worked with the son of Carl Wunsch, who is one of the godfathers of ocean sciences at MIT. Small world.

  • 369.

Paul, I checked with Jeff. His co-author is indeed Professor Carl Wunsch's son.

  • 370.

Jan, cool. I have read many of Carl Wunsch's articles because he's an expert on ocean circulation and one of the few who believe lunar tidal forces may have a strong impact on climate via changes in that circulation. This is a Nature article of his from 2000:

[Moon, tides and climate](https://www.nature.com/articles/35015639)

    Much earlier than that he wrote about long period tides

    "The equilibrium hypothesis was tested by computing peridograms for the fortnightly and monthly tides on islands of the Pacific. Despite high noise levels, substantial deviations from equilibrium were found, with fluctuations over distances of about 3000 km. An approximate analytic solution to the Laplace tidal equations shows the measurements to be consistent with the hypothesis that the tides excite the Rossby wave modes of the ocean" The Long-period Tides (1967)

I essentially solved the Laplace tidal equations along the equator and applied the tidal cycles to get where I'm at with the ENSO model. Alas, Prof Wunsch is at the emeritus stage and is unlikely to be active in this area any longer.

  • 371.

I got into a Twitter discussion with this denier. If we can warn Prof. Wunsch not to waste too much time on him :)

> "I have respect for Prof. Wunsch/late Prof. Walter Munk. In fact I've corresponded recently with Prof. Wunsch on a technical issue. Doesn't change the fact that he doesn't make a logical argument against the Frank paper. Spencer has coherent arguments but https://t.co/GA7nNAiPmd" -- Geoff Smith (@geoffsmithsmind), [December 13, 2019](https://twitter.com/geoffsmithsmind/status/1205490018296352769)

  • 372.

I have been using the delta length-of-day (dLOD) measurements to calibrate the forcing for the ENSO model. The change in LOD is almost entirely due to tidal forcing, insofar as every LOD cycle corresponds to a well-understood lunar cycle. The dLOD is resolved enough that the overall envelope even follows the 18.6-year nodal cycle of maximum declination in the moon's orbit. Yet I've found that the 18.6-year modulation is less critical for fitting ENSO than getting the tropical fortnightly cycle (13.66 days) and perigean anomalistic cycle (27.55 days) matched. So the timing of the cycles appears to be the critical aspect of the forcing. This makes sense in terms of the tropical/synodic cycle being critical for the localized nature of ENSO, whereas the 18.6-year modulation impacts the entire earth as the longitude of maximum declination roams around. A metric that I use for timing or zero crossings is what I call an [excursion matching criterion](https://contextearth.com/2017/10/25/improved-solver-target-error-metric/) (EMC), defined as:

\( EMC = \frac{\sum_i x_i \cdot y_i}{\sum_i |x_i| \cdot |y_i|} \)

The value of the EMC when calculated for an ENSO tidal forcing which is calibrated to dLOD is above 0.99 (maximum = 1). This agreement is the top panel in the following figure, where you can see the red trace matches the dLOD in phase, but not necessarily in amplitude. So whereas the EMC is above 0.99 (which means the sign of the excursion almost always matches), the correlation coefficient is around 0.9 (indicating the missing amplitude of the excursions).

![](https://imagizer.imageshack.com/img922/568/0uN5cm.png)
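The EMC is trivial to compute; a minimal implementation directly following the formula above:

```python
import numpy as np

def emc(x, y):
    """Excursion matching criterion: approaches +1 when the two series
    agree in sign at nearly every sample, weighted by excursion size."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.sum(x * y) / np.sum(np.abs(x) * np.abs(y))
```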

On closer inspection, over-riding the 18.6-year nodal cycle for the ENSO model is the 19-year [Metonic cycle](https://en.wikipedia.org/wiki/Metonic_cycle), which essentially follows the pattern of eclipses (where the moon and sun are maximally aligned, coinciding as a common multiple of the solar year and the synodic lunar month). This can be seen by sliding the ENSO forcing model by 57 years, or 3 times the Metonic cycle, which is a particularly strong alignment of the Metonic cycle.

![metonic](https://imagizer.imageshack.com/img922/3576/VEotBv.png)

This works out to be a subtle effect, as the differences between an 18.6-year nodal cycle and a 19-year Metonic cycle may not always be easily distinguishable. The same goes for the 18-year-11-day Saros cycle, which is the well-known eclipse cycle independent of time of year. More information is on the [NASA eclipse page](https://eclipse.gsfc.nasa.gov/SEsaros/SEperiodicity.html).
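The near-coincidence of these periods is easy to check from the mean cycle lengths (standard astronomical values, my arithmetic): the Metonic cycle gives \( 235 \times 29.5306 \,\mathrm{d} \approx 6939.7 \,\mathrm{d} \) versus \( 19 \times 365.2422 \,\mathrm{d} \approx 6939.6 \,\mathrm{d} \); the Saros is \( 223 \times 29.5306 \,\mathrm{d} \approx 6585.3 \,\mathrm{d} \approx 18.03 \,\mathrm{yr} \); and the nodal period is \( 18.61 \,\mathrm{yr} \approx 6798 \,\mathrm{d} \). All three sit within a few percent of one another over a single cycle, so a long record is needed to separate them.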

And I should add that this interpretation is based on what the best fit is showing. In other words, calibrating the forcing to the 18.6-year modulation of the dLOD will provide a reasonable ENSO fit, but not as crisp as shown above. What is reduced in amplitude is the 13.63-day (Mf') cycle, which arises from the multiplication of the 27.32-day tropical cycle with the 27.21-day draconic cycle. In the dLOD fit this is about half the amplitude of the 13.66-day (Mf) cycle, but it is much reduced in the ENSO fit.
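For reference, the 13.63-day figure follows directly from combining the two monthly rates: \( 1/27.3216 + 1/27.2122 \approx 0.07335 \,\mathrm{d^{-1}} \), a period of about 13.63 days, while Mf is half the tropical month, \( 27.3216/2 \approx 13.66 \) days.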

  • 373.

These kinds of non-linear measures are pretty interesting. I have a book sitting on a shelf by Kantz and Schreiber called Nonlinear Time Series Analysis (2nd edition). I haven't really looked at it. But almost all of conventional statistics, frequentist, information theoretic, or Bayesian, is linear. And tucked in amongst the new algorithms of machine learning are nonlinear approaches which we don't entirely understand and which are pretty powerful, notions like boosting, for which no one has done a thorough theoretical analysis, although there have been promising approaches. Same for random forests and other tree-based methods. And then there are hybrids or extensions ... I bet somewhere out there lurks a quasi-tree-based approach which uses continuous measures across supports, akin to Bayesian membership scores, instead of ensembles of discrete splits. Don't know how that goes, but it would be an interesting thing to develop and explore.

  • 374.
    edited December 2019

    Thanks Jan. Way back early in the El Nino Azimuth Project discussion, Dara O'Shayda was doing studies using the random forest approach, for example in this discussion thread

    https://forum.azimuthproject.org/discussion/comment/13797/

    It looks like the files are gone from his old website but they may have all moved here: http://files.untiredwithloving.org/random_forrest_nino34.pdf

This is an example of one of Dara's plots:

![](http://imageshack.com/a/img921/5581/aJFtI4.gif)

Perhaps we can pursue some paths by looking at link strengths, etc., especially now that we think we know what the candidate non-linearities are. Correct me if I am wrong, but I am under the impression that one of the issues with deep learning is explaining what the connections mean once they have been discovered. In this case, we may be able to guide the exploratory parameters to reveal the underlying patterns. If we can get a random forest approach to reproduce the patterns, it would boost the model's credibility, as it's likely a less biased criterion for establishing a causal correlation.
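As a sketch of what such a check could look like (hypothetical feature set; standard scikit-learn API): fit a random forest mapping candidate tidal-forcing features to an ENSO index, then inspect the out-of-bag skill and feature importances.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder arrays: the columns of X would be candidate tidal-forcing
# constituents (13.66 d, 27.55 d, etc., aliased to monthly sampling) and
# y a monthly ENSO index such as NINO3.4 -- substitute the real series.
rng = np.random.default_rng(1)
X = rng.normal(size=(800, 6))
y = rng.normal(size=800)

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=1)
rf.fit(X, y)
print(rf.oob_score_)            # out-of-bag estimate of predictive skill
print(rf.feature_importances_)  # which constituents carry the signal
```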

AGU meeting session on Machine Learning: https://youtu.be/xH3tbCOd_oQ

  • 375.
    edited December 2019

Some deep-learning algorithm out there will find the ENSO pattern eventually, but perhaps not anytime soon. This is how badly a neural network does on natural variability, as reported at last week's AGU meeting:

![](https://pbs.twimg.com/media/EMFAXFvWoAAPjOQ.png)

It's so bad that they punt and admit beforehand that it won't work on natural variability, yet later in the presentation they show that it does work on man-made forced variability.

![](http://imageshack.com/a/img923/3290/npNE0D.gif)

    This is rather obvious and it would be a wonder if it didn't work since all they have to do is train the NN on the CO2 emission time series -- i.e. the warming trend and CO2 rise are already well correlated.

    But unless the NN knows the math behind the nonlinear LTE solution and the fact that tidal forcing plays a role, it will never get the ENSO natural variability correct. In other words, it can't train on itself if the pattern is deeply obscured by unknown forcing factors further modulated by an unknown nonlinear transfer function.

The presentation is here: https://youtu.be/O67El-fqA9c

    An ArXiv paper is here: https://arxiv.org/abs/1912.01752 "Physically Interpretable Neural Networks for the Geosciences: Applications to Earth System Variability"

    " Network interpretation techniques have become more advanced in recent years, however, and we therefore propose that the ultimate objective of using a neural network can also be the interpretation of what the network has learned rather than the output itself. We show that the interpretation of a neural network can enable the discovery of scientifically meaningful connections within geoscientific data. By training neural networks to use one or more components of the earth system to identify another, interpretation methods can be used to gain scientific insights into how and why the two components are related. In particular, we use two methods for neural network interpretation. These methods project the decision pathways of a network back onto the original input dimensions, and are called "optimal input" and layerwise relevance propagation (LRP). "

  • 376.

It's looking like this is one of the only remaining forums that allows free-flowing discussion of climate science. Jan, the ATTP blog is fading fast -- on Alexa its rank is 5,622,000, down around 4,000,000 places from a few months ago. The majority of my comments are deleted by the moderator, and all are held up for moderation. FYI, I only saw this one on my RSS feed before it was deleted.

![](https://imagizer.imageshack.com/img921/9233/pJuYaG.gif)

  • 377.

No problem, Paul. ATTP has consumed more and more of my time of late. And my comment about the trolls is based upon historical evidence that, in the past, major funders of climate denial offered bounties for various kinds of anti-climate-change actions, ranging from patrols of comments on posts related to climate in major news outlets, to writing papers for scientific publication. There have been people who confessed that they received US$2000 per such accepted paper. The purpose of these is to challenge standard science, sow doubt in the readership and, most importantly, distract, engage, and otherwise occupy scientists and advocates for climate change mitigation and renewable energy in opposing these views rather than pursuing more worthwhile objectives.

I know this sounds paranoid, but actually there's evidence now. And, indeed, some of the connections are pretty far up the food chain, including via a shady outfit called "The $CO_2$ Coalition", involving William Happer (a part of 45's administration for a time), Don Easterbrook, and David Burton. I tussled with Burton at my blog for a time. I [recounted the experience here](https://667-per-cm.net/2019/10/15/the-co2-coaliation-a-cabal-of-digital-denial/). In particular, Burton opposes Prof Rob Young in North Carolina when Rob tries to convince shoreline towns that building dikes and doing beach refurbishment is a waste of money, and that the best thing that can be done is oversee managed retreat.

    And this extends to paying people who do not live nearby to show up at public hearings on licenses and permits for land-based wind energy to oppose placement of turbines, and even of solar farms. It doesn't take many loud voices to get the media to pick up the story, and that's the point.

So, while I will still engage from time to time, if people are numerous enough and harness the otherwise good intentions of moderators to embrace "a wide variety of views are accepted here" or "we need to hear from all sides", it's a losing cause. The strongest way I can protest is by withholding the things I bring to the discussion, which you so kindly pointed out.

    There are lots of other places to go and things to help with. In fact, I'm working with Tamino at his blog on some sea level rise stuff.

    By the way, regarding interpretable neural networks, there's a lot of recent work by Cynthia Rudin of Duke, formerly of MIT and who I know, regarding interpretable ML and other systems. A primary paper for them is:

Angelino, Larus-Stone, Alabi, Seltzer, Rudin, "[Learning certifiably optimal rule lists for categorical data](http://www.jmlr.org/papers/volume18/17-716/17-716.pdf)", JMLR, 18 (2018) 1-78

    I don't know how the techniques transfer to continuous or measure variables, if indeed they do.

  • 378.

    More to the point here, there is this paper,

    I. D. Haigh, M. D. Pickering, J. A. M. Green, et al, "The Tides They Are a‐Changin’: A comprehensive review of past and future non‐astronomical changes in tides, their driving mechanisms and future implications", Reviews of Geophysics, https://doi.org/10.1029/2018RG000636, December 2019

    which I have not yet read, but want to. I corresponded extensively with Dr Haigh some time ago. I sent him congratulations on publication of this paper.

    Comment Source:More to the point here, there is this paper, I. D. Haigh, M. D. Pickering, J. A. M. Green, _et al_, "The Tides They Are a‐Changin’: A comprehensive review of past and future non‐astronomical changes in tides, their driving mechanisms and future implications", _Reviews of Geophysics_, https://doi.org/10.1029/2018RG000636, December 2019 which I have not yet read, but want to. I corresponded extensively with Dr Haigh [some time ago](https://667-per-cm.net/2014/04/21/comment-on-timescales-for-detecting-a-significant-acceleration-in-sea-level-rise-by-haigh-et-al/). I sent him congratulations on publication of this paper.
  • 379.

    Thanks. I wasn't aware of the direct evidence for payola -- it's unfortunate that sincere commenting gets associated with this.

    The review paper on tide variation will be useful over the millennial timeframe I would imagine.

    Comment Source:Thanks. I wasn't aware of the direct evidence for payola -- it's unfortunate that sincere commenting gets associated with this. The review paper on tide variation will be useful over the millennial timeframe I would imagine.
  • 380.
    edited December 2019

    I was always wondering how to do a Fourier series on the LTE modulation without having to do a numerical sort on the ordinal. Well, I should have realized that it's not necessary as long as the input series is discrete. Here is the Fourier amplitude spectrum of the LTE modulation:

    This provides a direct representation of the wavenumber dispersion. The largest spike is the primary ENSO dipole, and the peak at 15x the ENSO wavenumber lies at the same value as the equatorial Tropical Instability Wave (TIW). The remainder consists of the much higher wavenumber dynamics that apparently generate the almost daily variability in ENSO indices such as the SOI. These high wavenumbers are, I think, due to the details in the step response of the LTE modulation.
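
    To illustrate the idea (a minimal Python sketch; the wavenumbers and amplitudes below are assumed stand-ins, not the fitted model values), the wavenumber content of an LTE-style modulation sin(k·F) can be read off with a plain FFT by evaluating it over a uniform grid of forcing levels F:

    ```python
    import numpy as np

    # Sketch: LTE modulation g(F) = sin(k*F) as a function of forcing level F.
    # Evaluating on a uniform grid in F exposes the wavenumber content directly,
    # with no need to sort the time series by amplitude first.
    F = np.linspace(-1.0, 1.0, 4096, endpoint=False)  # uniform grid of forcing levels
    k_enso, k_tiw = 2.0, 30.0                         # assumed: TIW at 15x the ENSO wavenumber
    g = np.sin(k_enso * np.pi * F) + 0.3 * np.sin(k_tiw * np.pi * F)
    spec = np.abs(np.fft.rfft(g)) / len(F)            # amplitude spectrum vs wavenumber
    print(np.argsort(spec)[-2:])                      # the two dominant peaks: bins 30 and 2
    ```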

    I have spent some time trying to fit to the daily variations and there may be some hope here. Some variation of long-range order appears to exist based on the fit above, but it may be obscured by other sources of noise.

    daily

    The daily SOI does not show as clear a Double-Sideband Suppressed Carrier modulation as the monthly time series, which may be the result of the Darwin-Tahiti dipole not capturing the sharp delineation of a higher wavenumber standing wave. In other words, the low wavenumber of the primary ENSO dipole allows uncertainty in the geographic location of the dipole nodes. The hope may be in trying to force a DSSC symmetry by independently processing the Tahiti and Darwin time series, rejecting points that aren't mirror-symmetry aligned, and subsequently combining them.
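
    One possible form of that rejection rule, sketched below with an entirely hypothetical symmetry criterion and tolerance (`tol`), would be to keep only days where the Tahiti and Darwin anomalies are close to mirror images of each other:

    ```python
    import numpy as np

    # Hypothetical dipole-symmetry filter: keep points where T ~ -D, i.e. the
    # anti-nodes move in opposition, and reject (NaN out) everything else.
    def enforce_dipole(tahiti, darwin, tol=0.25):
        antialigned = np.sign(tahiti) == -np.sign(darwin)
        symmetric = np.abs(tahiti + darwin) < tol * np.abs(tahiti - darwin)
        mask = antialigned & symmetric
        return np.where(mask, tahiti - darwin, np.nan), mask
    ```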

    As with Navier-Stokes, the higher the resolution one seeks in understanding the fluid dynamics, the less certainty and the more DOF one understandably encounters.

    I should add that the inferred LTE modulation from the daily SOI time-series shows a very interesting impulse train that appears to be due to the TIW. The modulation is inferred in that it uses the tidal forcing level as a (sorted) x-coordinate and plots the actual SOI amplitude at that coordinate. The impulse train is indicated by the upward arrows below, very regularly spaced. As it is an impulse train, it must carry all the higher harmonics of the fundamental LTE TIW modulation. It may have something to do with a specific tidal cycle creating a resonance, and I am hoping it's not some computational artifact.
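
    For concreteness, here is a minimal sketch of that sorted-coordinate construction (the `forcing` and `soi` series are synthetic stand-ins, not the real data):

    ```python
    import numpy as np

    # Sketch of the inferred LTE modulation: use the tidal forcing level as a
    # (sorted) x-coordinate and plot the observed SOI amplitude at that coordinate.
    rng = np.random.default_rng(0)
    t = np.arange(3000.0)
    forcing = np.sin(0.23 * t) + 0.5 * np.sin(0.46 * t)              # stand-in tidal forcing
    soi = np.sin(6.0 * forcing) + 0.2 * rng.standard_normal(t.size)  # stand-in daily SOI

    order = np.argsort(forcing)   # order the samples by forcing level, not by time
    x, y = forcing[order], soi[order]
    # Plotting (x, y) would trace out the underlying modulation g(F) = sin(6*F).
    ```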

    tiw

    Comment Source:I was always wondering how to do a Fourier series on the LTE modulation without having to do a numerical sort on the ordinal. Well, I should have realized that it's not necessary as long as the input series is discrete. Here is the Fourier amplitude spectrum of the LTE modulation: ![](https://imagizer.imageshack.com/img922/7793/W0B6JJ.png) This provides a direct representation of the wavenumber dispersion. The largest spike is the primary ENSO dipole and the peak at 15x the ENSO wavenumber lies at the same value as the equatorial Tropical Instability Wave (TIW). The remainder consists of the much higher wavenumber dynamics that apparently generates the almost daily variability in ENSO indices such as SOI. These high wavenumbers are I think due to the details in the step response of the LTE modulation ![](https://imagizer.imageshack.com/img921/9246/Dl9HE4.png) I have spent some time trying to fit to the daily variations and there may be some hope here. Some variation of long-range order appears to exist based on the fit above, but it may be obscured by other sources of noise. ![daily](https://imagizer.imageshack.com/img924/3492/FRviZi.png) The daily SOI does not show as clear a Double-Sideband Suppressed Carrier modulation as the monthly time series, which may be the result of the Darwin-Tahiti dipole not capturing the sharp delineation of a higher wavenumber standing wave. In other words, the low wavenumber of the primary ENSO dipole allows uncertainty in the geographic location of the dipole nodes. The hope maybe in trying to force a DSSC symmetry by independently doing the Tahiti and Darwin time series and rejecting points that aren't mirror-symmetry aligned and subsequently combining them. As with Navier-Stokes, the higher the resolution detail that one seeks in understanding the fluid dynamics the less certainty and more DOF one understandably encounters. Should add that the inferred LTE modulation from the daily SOI time-series shows a very interesting impulse train that appears due to the TIW. The modulation is inferred as it uses the tidal forcing level as a (sorted) x-coordinate and the actual SOI amplitude is plotted as the amplitude at that coordinate. The impulse train is indicated by the upward arrows below, very regularly spaced. As it is an impulse train, it must have all the higher harmonics of the fundamental LTE TIW modulation. It may have something to do with a specific tidal cycle creating a resonance, and am hoping it's not some computation artifact. ![tiw](https://imagizer.imageshack.com/img923/9632/AWWhpU.png)
  • 381.

    Paul, I'd be interested in your thoughts about proper means to do spectrum analysis on real data. I've found using outright Fourier transforms to be tricky and have in some cases preferred to use multitaper methods, which clearly have Fourier mathematics built into them but focus upon managing spectral leakage and aliasing. Barbour, Parker, "Normalization of Power Spectral Density estimates". Barbour, Parker, "psd: Adaptive, sine multitaper power spectral density estimation for R", Computers & Geosciences 63 (2014) 1–8.

    Comment Source:Paul, I'd be interested in your thoughts about proper means to do spectrum analysis on real data. I've found using outright Fourier transforms to be tricky and have, with some, preferred to use _multitaper methods_ which clearly have Fourier mathematics built into them, but focus upon managing spectral leakage and aliasing. Barbour, Parker, "Normalization of Power Spectral Density estimates". Barbour, Parker, "psd: Adaptive, sine multitaper power spectral density estimation for *R*", Computers & Geosciences 63 (2014) 1–8.
  • 382.

    Jan, I don't know how much is required to get at the primary spectral components.

    This paper applies the technique to the same kind of geophysical time-series data
    https://www.researchgate.net/publication/318821331_Tidal_Analysis_Using_Time-Frequency_Signal_Processing_and_Information_Clustering

    The multitaper method does reduce the noise background and so might lift out some of the weaker spectral lines. However, it doesn't do much with respect to the strongest ones.
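
    For reference, a bare-bones multitaper estimate is only a few lines of Python (a schematic average of Slepian eigenspectra; this is not the adaptive sine-taper weighting of the Barbour & Parker psd package):

    ```python
    import numpy as np
    from scipy.signal import windows

    # Schematic multitaper PSD: average the eigenspectra from K Slepian (DPSS)
    # tapers, trading a little resolution for lower leakage and variance.
    def multitaper_psd(x, nw=3.0, k=5, fs=1.0):
        n = len(x)
        tapers = windows.dpss(n, NW=nw, Kmax=k)                  # (k, n) array of tapers
        eig = [np.abs(np.fft.rfft(taper * x))**2 for taper in tapers]
        return np.fft.rfftfreq(n, d=1.0/fs), np.mean(eig, axis=0) / fs
    ```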

    This appears to be the most impressive approach I have come across "Application of stabilized AR-z spectrum in harmonic analysis for geophysics" https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018JB015890

    Perhaps I should be using the multitaper method to recover the time-series? The plain Fourier series works well for me, so I don't know what else it will add at this level. For example, I already know that the signal is heavily aliased, but this is due to a physical aliasing not related to sampling aliasing. The spectral leakage caused by having signals with periods of 27.2122, 27.3216, and 27.5545 days so closely separated may not be a factor if the time series is long enough. In any case, the physical aliasing separates these into periods of 2.37, 2.715, and 3.91 years, which are much more widely separated. I think that is what is causing all the confusion in the first place -- the fact that no one is aware that nonlinear physical aliasing is occurring. Maybe get that point across first and then we can use more advanced spectral techniques. I don't think I am missing anything, but who knows.
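
    The aliasing arithmetic itself is easy to make explicit (a quick check; the annual impulse effectively samples each lunar cycle once per year):

    ```python
    # Physical aliasing of the lunar monthly periods against an annual impulse:
    # a cycle at f cycles/year sampled once per year reappears at |f - round(f)|.
    year = 365.242
    for name, days in [("draconic", 27.2122), ("tropical", 27.3216), ("anomalistic", 27.5545)]:
        f = year / days                 # cycles per year
        alias = abs(f - round(f))       # residual frequency after annual sampling
        print(name, round(1.0 / alias, 2), "years")
    # -> draconic ~2.37, tropical ~2.72, anomalistic ~3.92 years
    ```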

    Comment Source:Jan, I don't know how much is required to get at the primary spectral components. This paper applies the technique to the same kind of geophysical time-series data https://www.researchgate.net/publication/318821331_Tidal_Analysis_Using_Time-Frequency_Signal_Processing_and_Information_Clustering The multitaper method does reduce the noise background so therefore might lift out some of the weaker spectral lines. However it doesn't do much with respect to the strongest. ![](https://imagizer.imageshack.com/img922/1386/mSHsSq.gif) This appears to be the most impressive approach I have come across "Application of stabilized AR-z spectrum in harmonic analysis for geophysics" https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/2018JB015890 Perhaps I should be using MM to recover the time-series? The plain Fourier series works well for me so I don't know what else it will add at this level. For example, I already know that the signal is heavily aliased, but this is due to a physical aliasing not related to sampling aliasing. The spectral leakage caused by having signals of 27.2122, 27.312, 27.554 so closely separated may not be a factor if the time series is long enough. In any case, the physical aliasing separates these as 2.37, 2.715, and 3.91, which are much more widely separated. I think that is what is causing all the confusion in the first place -- the fact that no one is aware that nonlinear physical aliasing is occurring. Maybe get that point across first and then we can use more advanced spectral techniques. I don't think I am missing anything but who knows.
  • 383.
    edited January 3

    The SOI essentially measures the dipole extent of Tahiti minus Darwin (T-D) atmospheric pressure. If these two are perfect anti-nodes, then the correlation coefficient (CC) between the two should be -1. For unfiltered monthly data, the CC is almost -0.6 (and approaches -0.8 if the respective time series are smoothed). This means that either the SOI does not represent a perfect dipole or there may be some non-dipole noise contaminating the data (or both). Yet it remains a good enough measure, since the T-D difference removes much of the common-mode noise by subtraction.

    For the daily SOI measure (available from Aus BOM from 1991 to the present), the CC between Tahiti and Darwin hovers around 0.0 if left unfiltered. This is not good, for the same reason (noise) as the monthly set, only that the daily noise seems to be even worse. Again, only when filtered over a two-week window does the CC start approaching -0.5.

    Since I am interested in finding out how much I can push the daily data to isolate the higher-order wavenumbers that seem to contribute to the standing wave, I started experimenting with biased estimating techniques. One rather obvious biased approach is to compare the model point-by-point against T-D, 2T, and -2D and select the value that is closest to the model fit. This will always reduce the error of any fit to the data because, unless we have a perfect dipole where T-D = 2T = -2D, the selected value's deviation will always be smaller than or at most equal to that of T-D.

    The overall rationale for trying this is similar to the rationale for using a 3-point median filter -- the value closest to the model value is less likely to be an outlier impacted by a spurious noise excursion. As the following fit shows, the technique does work, by transforming the mix of uncorrelated and correlated signal towards a more correlated time-series aligning well with the model. Evident as a background fuzzy envelope, the possible maximal excursions for 2T and -2D can be gauged -- these are often much greater than T-D in absolute value, but get chosen wherever the model also deviates too far from the T-D value.
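
    A minimal sketch of that point-by-point selection (array names hypothetical; `model`, `tahiti`, and `darwin` are assumed to be aligned series):

    ```python
    import numpy as np

    # Biased triplet selection: at each time step keep whichever of
    # (T-D, 2T, -2D) lies closest to the model value; the two rejected
    # values set the extent of the error bars.
    def select_closest(model, tahiti, darwin):
        cand = np.stack([tahiti - darwin, 2.0 * tahiti, -2.0 * darwin])  # shape (3, N)
        pick = np.argmin(np.abs(cand - model), axis=0)                   # closest of the three
        return cand[pick, np.arange(model.size)]
    ```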

    This may not be an optimal approach, but given the poor anti-correlated properties of the daily SOI pair values there may not be any other options. One other approach I can think of is to maximize the Double-Sideband Carrier Modulation in the Fourier series of the raw data.

    EDIT: After letting this gestate for a day or two, I'm not sure what value this approach is providing, as the addition of a massive number of DOF will obviously improve the fit. One could just as easily introduce two additional random numbers for each point and, by cherry-picking for the best fit, the CC will improve. But then again, consider this recent paper in PNAS (https://phys.org/news/2019-12-el-nio-event-year.html) that claims that an El Nino is more likely to occur after a year of "high disorder". The issue is that disorder has a broad distribution, while strict order doesn't. That doesn't seem a strong argument -- welcome back to the world of climate science at the bleeding edge!

    This is what the CC improvement looks like on the SOI monthly time-series (CC=0.65 improved to 0.85). The error bars indicate the extent of the (T-D, 2T, -2D) triplet values not chosen. In other words, one of these is closest to the model value and thus selected as the "most valid" data point, and the other two are rejected, replaced by error bars.

    Comment Source:The SOI is essentially measuring the dipole extent of Tahiti minus Darwin (T-D) atmospheric pressure. If these two are perfect anti-nodes, then the correlation coefficient (CC) between the two should be -1. For unfiltered monthly data, the CC is almost -0.6 (and almost to -0.8 if the respective time series are smoothed). This means that either the SOI does not represent a perfect dipole or there may be some non-dipole noise contaminating the data (or both). Yet, it remains a good enough measure since the T-D filter removes much of the common-mode noise by subtraction. For the daily SOI measure (available from [Aus BOM](https://www.longpaddock.qld.gov.au/soi/soi-data-files/) from 1991-current), the CC between Tahiti and Darwin hovers around +/- 0.0 if left unfiltered. This is not good for the same reason (noise) as the monthly set, only that the daily noise seems to be even worse. Again, only if filtered over a two week time, does the CC start approaching -0.5. Since I am interested in finding out how much I can push the daily data to isolate the higher-order wave-numbers that seem to contribute to the standing wave, I started experimenting with biased estimating techniques. One rather obvious biased approach is to compare the model point-by-point against T-D, 2T, and -2D and select the value that is closest to the model fit. This will always reduce the error of any fit to the data because unless we have a perfect dipole where T-D=2T=-2D, the selected value will always be smaller or at most equal to T-D. The overall rationale for trying this is similar to the rationale for using a 3-point median filter -- the value closest to the model value is less likely to be an outlier impacted by a spurious noise excursion. As the following fit shows, the technique does work, by transforming the mix of uncorrelated and correlated signal towards a more correlated time-series aligning well with the model. Evident as a background fuzzy envelope, the possible maximal excursions for 2T and -2D can be gauged -- these are often much greater than T-D in absolute value, but get chosen wherever the model also deviates too far from the T-D value. ![](https://imagizer.imageshack.com/img923/6806/CCGjq1.png) This may not be an optimal approach, but given the poor anti-correlated properties of the daily SOI pair values there may not be any other options. One other approach I can think of is to maximize the Double-Sideband Carrier Modulation in the Fourier series of the raw data. EDIT: After letting this gestate for a day or two, I'm not sure what value this approach is providing as the addition of a massive number of DOF will obviously improve the fit. One could just as easily introduce two additional random numbers for each point and by cherry-picking for the best fit, the CC will improve. .... But then again consider this recent paper in PNAS (https://phys.org/news/2019-12-el-nio-event-year.html) that claims that an El Nino is more likely to occur after a year of "high disorder". The issue is that disorder has a broad distribution, while strict order doesn't. That doesn't seem a strong argument -- welcome back to the world of climate science at the bleeding edge! This is what the CC improvement looks like on the SOI monthly time-series (CC=0.65 improved to 0.85). The error bars indicate the extent of the (T-D, 2T, -2D) triplet values not chosen. 
In other words, one of these is closest to the model value and thus selected as the "most valid" data point, and the other two are rejected, replaced by error bars. ![](https://imagizer.imageshack.com/img924/6312/5DRu4z.png)
  • 384.
    edited January 5

    Of note: Sippel, et al (including Knutti), "Climate change now detectable from any single day of weather at global scale", Nature Climate Change, 2020.

    There's an important supplement, by the way.

    Comment Source:Of note: Sippel, _et_ _al_ (including Knutti), <a href="https://www.nature.com/articles/s41558-019-0666-7">Climate change now detectable from any single day of weather at global scale</a>", _Nature_, 2020. There's <a href="https://static-content.springer.com/esm/art%3A10.1038%2Fs41558-019-0666-7/MediaObjects/41558_2019_666_MOESM1_ESM.pdf">an important supplement</a>, by the way.
  • 385.

    Thanks, that will take a while to digest. If it's some type of fingerprint extraction from the data, that sounds like an advance.

    The fingerprint of AGW as causing the recent Australia heat is tricky because there are several climate dipole modes that impact the region -- IOD to the NW, ENSO to the NE, and SAM/AAO to the south (and it is summer down there of course). If these occur independently, these could sum up to create huge heat spikes, as this paper from 2016 explains:

    "The importance of interacting climate modes on Australia’s contribution to global carbon cycle extremes" -- https://www.nature.com/articles/srep23113

    There is also a strong SSW event that occurred. John mentioned it in his tweet below: https://twitter.com/johncarlosbaez/status/1212032690372796416

    This is the SA paper: https://blogs.scientificamerican.com/observations/australias-angry-summer-this-is-what-climate-change-looks-like/

    Comment Source:Thanks, that will take a while to digest. If it's some type of fingerprint extraction from the data, that sounds like an advance. The fingerprint of AGW as causing the recent Australia heat is tricky because there are several climate dipole modes that impact the region -- IOD to the NW, ENSO to the NE, and SAM/AAO to the south (and it is summer down there of course). If these occur independently, these could sum up to create huge heat spikes, as this paper from 2016 explains: "The importance of interacting climate modes on Australia’s contribution to global carbon cycle extremes" -- https://www.nature.com/articles/srep23113 ![](https://imagizer.imageshack.com/img922/3561/nH921K.gif) There is also a strong SSW event that occurred. John mentioned it in his tweet below: https://twitter.com/johncarlosbaez/status/1212032690372796416 This is the SA paper: https://blogs.scientificamerican.com/observations/australias-angry-summer-this-is-what-climate-change-looks-like/
  • 386.
    edited January 9

    The latest Michael Mann paper is out in Nature Communications and is entitled “Absence of internal multidecadal and interdecadal oscillations in climate model simulations”. This is under some scrutiny because Mann's apparent claim is that the longer cycles of the AMO (and PDO) are more likely noise, or forced by anthropogenic sources such as aerosols, than intrinsic to the ocean.

    I may have misinterpreted what he is saying, but Mann coined the term AMO, so I have to take it seriously. What I did yesterday was post the results of modeling the AMO using essentially the same forcing as ENSO (the same forcing applies to the PDO).

    https://geoenergymath.com/2020/01/08/the-amo/


    Added:

    Take a look at the first comment for the PDO equivalent.

    Comment Source:The current Michael Mann paper is out in Nature and is entitled [“Absence of internal multidecadal and interdecadal oscillations in climate model simulations”](https://www.nature.com/articles/s41467-019-13823-w). This is under some scrutiny because Mann's apparent claim is that the longer cycles of AMO (and PDO) are more likely noise or forced by anthropogenic sources such as aerosols than intrinsic to the ocean. ![](https://imagizer.imageshack.com/img922/4916/Ktk2FC.gif) I may have misinterpreted what he is saying, but Mann coined the term AMO so have to take it seriously. What I did yesterday was post the results of modeling AMO using essentially the same forcing as ENSO (the same forcing applies to PDO). https://geoenergymath.com/2020/01/08/the-amo/ --- *Added:* Take a look at the first comment for the PDO equivalent.
  • 387.

    The strong requirement that an annual impulse interacts with the tidal forcing suggests that the 18-year Saros cycle of eclipse occurrences is operable for ENSO modeling, as it is a longitudinally regional (i.e. Pacific ocean) cycle. In contrast, the entire earth reacts to global tidal forces, which explains why the modulation in LOD is closer to the 18.6 year nodal cycle, which ignores the regional eclipse cycle.

    Since we are only using the annual cycle along with the 3 primary lunar monthly cycles in modeling ENSO, one can see how there may be an 18 year cycle observable in the model, given this from eclipse.gsfc.nasa.gov:

    "One Saros is equal to 223 synodic months. However, 239 anomalistic months and 242 draconic months are also equal to this same period (to within a couple hours)!

            223 Synodic Months        = 6585.3223 days   = 6585d 07h 43m
            239 Anomalistic Months    = 6585.5375 days   = 6585d 12h 54m
            242 Draconic Months       = 6585.3575 days   = 6585d 08h 35m 
    

    "

    The chart below is a typical tidal model fit to ENSO data with a given LTE modulation and daily amplitude spectrum. Below that in the middle panel is the baseline tidal cycle plotted at a sub-monthly scale, overlapped with the time-series shifted by close to 9 years, which is half the Saros cycle (equivalent to a solar-eclipse-to-lunar-eclipse alternation cycle). The repeat pattern is readily apparent here and points to how much of the information content is likely found in a single 9-year interval (horizontal dotted arrow). Hint: one actually has to look closely to observe the slight discrepancies between the two.

    TIW1 TIW1-daily

    I experimented with an independent model fit that iteratively evolved with a slightly different tidal factor pattern as well as a different set of LTE modulation levels. As the LTE modulation can only create harmonics of the original forcing, it's perhaps not surprising that it takes effort to find the optimal modulation. Yet even so, the information content of the sub-monthly tidal forcing again shows a clear repeat pattern following the 9-year half-Saros cycle (middle panel). The comparison of the two models' frequency spectra is shown in the lower panel.

    even1a even1a-daily

    The bottom line is that even though the 9-year and thus 18-year period is clearly evident in the model, it is not exact. If it were indeed exact and all 3 of the lunar cycles and the annual cycle precisely aligned on that interval, then the ENSO cycle would also repeat on a 9-year interval, since the LTE modulation can't do anything to the fundamental period (i.e. it can only change a waveform from a sinusoid to a misshapen sinusoid with the same period). But since there is a slight drift and weave to the cyclic components, and any error will accumulate over time, the fitted ENSO model emerges lacking a clear 9-year repeat modulation.
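
    That period-preserving property of the LTE modulation is simple to demonstrate (a sketch; the modulation index k is an assumed value):

    ```python
    import numpy as np

    # LTE-style modulation g(t) = sin(k*sin(w*t)) keeps the input period 2*pi/w;
    # only the harmonic content (the waveform shape) changes with k.
    w, k = 2.0 * np.pi / 9.0, 4.0          # 9-unit input cycle, assumed index k
    t = np.arange(0.0, 36.0, 0.01)         # four input periods
    g = np.sin(k * np.sin(w * t))
    assert np.allclose(g[:900], g[900:1800], atol=1e-9)  # repeats every 9 units
    ```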

    Comment Source:The strong requirement that an annual impulse interacts with the tidal forcing suggests that the [18-year Saros cycle of eclipse](https://eclipse.gsfc.nasa.gov/SEsaros/SEsaros.html) occurrences is operable for ENSO modeling, as it is a longitudinally *regional* (i.e. Pacific ocean) cycle. In contrast, the entire earth reacts to *global* tidal forces, which explains why the modulation in LOD is closer to the 18.6 year nodal cycle, which ignores the regional eclipse cycle. Since we are only using the annual cycle along with the 3 primary lunar monthly cycles in modeling ENSO, one can see how there may be an 18 year cycle observable in the model, given this from eclipse.gsfc.nasa.gov: > "One Saros is equal to 223 synodic months. However, 239 anomalistic months and 242 draconic months are also equal to this same period (to within a couple hours)! > > 223 Synodic Months = 6585.3223 days = 6585d 07h 43m > 239 Anomalistic Months = 6585.5375 days = 6585d 12h 54m > 242 Draconic Months = 6585.3575 days = 6585d 08h 35m > " The chart below is a typical tidal model fit to ENSO data with a given LTE modulation and daily amplitude spectrum. Below that in the middle panel is the baseline tidal cycle plotted at a sub-monthly scale, overlapped with the time-series shifted by close to 9 years, which is half the Saros cycle (equivalent to a solar-eclipse-to-lunar-eclipse alternation cycle). The repeat pattern is readily apparent here and points to how much of the *information content* is likely found in a single 9-year interval (horizontal dotted arrow). Hint: one actually has to look closely to observe the slight discrepancies between the two. ![TIW1](http://imageshack.com/a/img924/9788/LtVWBv.png) ![TIW1-daily](http://imageshack.com/a/img921/3858/zDOUNO.png) I experimented with an independent model fit that iteratively evolved with a slightly different tidal factor pattern as well as a different set of LTE modulation levels. As the LTE modulation can only create harmonics of the original forcing, it's perhaps not surprising that it takes effort to find the optimal modulation. Yet saying that, the information content of the sub-monthly tidal forcing again shows a clear repeat pattern following the 9-year half-Saros cycle (middle panel). The comparison of the two models frequency spectrum is shown in the lower panel. ![even1a](http://imageshack.com/a/img921/7721/MQEdrw.png) ![even1a-daily](http://imageshack.com/a/img921/1922/gvk03r.png) The bottom-line is that even though the 9 and thus 18-year period is clearly evident in the model, it is not exact. If it was indeed exact and all 3 of the lunar cycles and annual cycle precisely aligned on that interval, then the ENSO cycle will also repeat on a 9-year interval, since the LTE modulation can't do anything to the fundamental period (i.e. it can only change a waveform from a sinusoid to a misshapen sinusoid with the same period). But since there is a slight drift and weave to the cyclic components and any error will accumulate over time, the fitted ENSO model emerges lacking a clear 9-year repeat modulation.
  • 388.

    The ENSO forcing without the annual impulse can be used as the basis for the North Atlantic Oscillation (NAO) forcing. As in the prior comment, the 9-year repeat can be discerned by eye.

    The big distinction is that (like the QBO) the NAO model requires a semi-annual impulse modulation rather than the annual impulse of ENSO. This, together with the differing LTE modulation, allows a good fit for essentially the same lunar forcing. In other words, very few DOFs are required, and even with over-fitting on a short training interval (shown in yellow below) the model still matches the data outside that interval.

    I would suggest that future research should be focused on cross-validating the various climate indices using a common lunar forcing.


    Climate scientist James Hansen sent out another one of his regular mailing list briefings recently

    http://www.columbia.edu/~jeh1/mailings/2020/20200203_ModelsVsWorld.pdf

    According to Hansen, 40 years ago one parameterization of a global climate model took "a few years" to complete before the results were published.

    Yet there is now this finding reported by the mainstream news: Climate Models Are Running Red Hot, and Scientists Don’t Know Why. As the title says, the scientists interviewed can't explain why the models are running hot, even though they should be able to systematically run sensitivity tests to isolate the root cause. This is how NASA's chief climate scientist explains the difficulty:

    https://twitter.com/ClimateOfGavin/status/1224452096663023616

    My take is that, 40 years on, the complexity of the models appears to be increasing at such a rate that they can't take advantage of the orders-of-magnitude improvement in computational power to isolate the causative factors.

    It seems that they may need to take a step back and simplify their models -- unless they are content in following the lead of neural network models, which seem to work in spite of our not understanding why https://cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf

    Comment Source:The ENSO forcing without the annual impulse can be used as the basis for the North Atlantic Oscillation (NAO) forcing. As in the prior comment, the 9-year repeat can be discerned by eye. ![](http://imageshack.com/a/img923/3945/SWlEIa.png) The big distinction is that (like the QBO), the NAO model requires a semi-annual impulse modulation rather than the annual impulse of ENSO. This together with the differing LTE modulation allows a good fit for essentially the same lunar forcing. In other words, very few DOFs required and even with over-fitting on a short training interval (shown in yellow below) the model still matches the data outside that interval ![](http://imageshack.com/a/img922/779/ZSoZ8K.png) I would suggest that future research should be focused on cross-validating the various climate indices using a common lunar forcing. --- Climate scientist James Hansen sent out another one of his regular mailing list briefings recently > ![](http://imageshack.com/a/img923/5522/SZz66C.gif) >http://www.columbia.edu/~jeh1/mailings/2020/20200203_ModelsVsWorld.pdf According to Hansen, 40 years ago one parameterization of a global climate model took "a few years" to complete before the results were published. Yet, there is now this finding reported by the mainstream news : [Climate Models Are Running Red Hot, and Scientists Don’t Know Why](https://www.bloomberg.com/news/features/2020-02-03/climate-models-are-running-red-hot-and-scientists-don-t-know-why). As the title says, the scientists interviewed can't explain why they are running hot, even though they should be able to systematically run sensitivity tests to isolate the root cause. This is how NASA's chief climate scientist explains the difficulty: > ![](http://imageshack.com/a/img922/194/eayfVN.png) > https://twitter.com/ClimateOfGavin/status/1224452096663023616 My take is that 40 years on, the complexity of the model appears to be increasing at such a great rate that they can't take advantage of the orders-of-magnitude improvement in computational power to isolate the causative factors. It seems that they may need to take a step back and simplify their models -- unless they are content in following the lead of neural network models which seem to work in spite of understanding why https://cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf
  • 389.

    One of the maddening aspects of climatology as a science is its cavalier treatment of data, and in particular the potential loss of information through filtering. A group of scientists at NASA JPL (Perigaud et al) have pointed out how reckless it is to remove what are considered errors (or nuisance parameters) in time-series by assuming that they relate to known tidal or seasonal factors and so can be safely filtered out and ignored. The problem is that this is only safe IF those factors relate to an independent process and don't also cause non-linear interactions with the rest of the data. So if a model predicts a linear component and a non-linear component, it doesn't help to hide the linear portion from the analysis.

    This extends to filtering annual data. I just found out how the NINO3.4 data is filtered to remove the annual signal, and the filtering is over-zealous in that it removes all annual harmonics as well. Worse yet, the weighting of these harmonics changes over time, which means that other parts of the spectrum, unrelated to the annual signal, are being removed. Found in an "ensostuff" subdirectory:

    This makes me cringe now that I take a look at the portion of the filtered data (which I independently extracted, shown below) and notice how well it matches to the annual impulse I am applying in the ENSO model. The impulse, which is required to amplify the tidal cycles, is now clearly phase correlated to the observed annual temperature cycling.

    This may sound like an innocent error correction, but it eliminates the possibility of tracking correlations, which is the core of any science that does not allow experimental control or laboratory experimentation. Perhaps an example of climate scientists shooting themselves in the foot!


    Without controlled experiments available, advances in the earth sciences are glacial in their progress, so you have to have patience. Here is how Murray Gell-Mann described the process in an interview:

    "Battles of new ideas against conventional wisdom are common in science, aren't they?"

    "It's very interesting how these certain negative principles get embedded in science sometimes. Most challenges to scientific orthodoxy are wrong. A lot of them are crank. But it happens from time to time that a challenge to scientific orthodoxy is actually right. And the people who make that challenge face a terrible situation. Getting heard, getting believed, getting taken seriously and so on. And I've lived through a lot of those, some of them with my own work, but also with other people's very important work. Let's take continental drift, for example. American geologists were absolutely convinced, almost all of them, that continental drift was rubbish. The reason is that the mechanisms that were put forward for it were unsatisfactory. But that's no reason to disregard a phenomenon. Because the theories people have put forward about the phenomenon are unsatisfactory, that doesn't mean the phenomenon doesn't exist. But that's what most American geologists did until finally their noses were rubbed in continental drift in 1962, '63 and so on when they found the stripes in the mid-ocean, and so it was perfectly clear that there had to be continental drift, and it was associated then with a model that people could believe, namely plate tectonics. But the phenomenon was still there. It was there before plate tectonics. The fact that they hadn't found the mechanism didn't mean the phenomenon wasn't there. Continental drift was actually real. And evidence was accumulating for it. At Caltech the physicists imported Teddy Bullard to talk about his work and Patrick Blackett to talk about his work, these had to do with paleoclimate evidence for continental drift and paleomagnetism evidence for continental drift. And as that evidence accumulated, the American geologists voted more and more strongly for the idea that continental drift didn't exist. The more the evidence was there, the less they believed it. Finally in 1962 and 1963 they had to accept it and they accepted it along with a successful model presented by plate tectonics...."

    This was the telling passage of the Gell-Mann interview, which is easily missed on first read -- "The more the evidence was there, the less they believed it"

    Did he really mean that? The more the accumulation of evidence, the stronger the resistance? (The full interview is available on Science News to subscribers).

    Comment Source:One of the maddening aspects of climatology as a science is in the cavalier treatment of data and in particular the potential loss of information through filtering. A group of scientists at NASA JPL (Perigaud et al) have pointed out how reckless it is to remove what are considered errors (or *nuisance parameters*) in time-series by assuming that they relate to known tidal or seasonal factors and so can be safely filtered out and ignored. The problem is that this is only safe **IF** those factors relate to an independent process and don't also cause non-linear interactions with the rest of the data. So if a model predicts a linear component and non-linear component, it's not helping to hide the linear portion from the analysis. This extends to filtering annual data. I just found out how NINO3.4 data is filtered to remove the annual data, and that the filtering is over-zealous in that it removes all annual harmonics as well. Worse yet, the weighting of these harmonics changes over time, which means that they are removing other parts of the spectrum not related to the annual signal. Found in an "ensostuff" subdirectory: ![](http://imageshack.com/a/img921/8803/GKja1e.png) This makes me cringe now that I take a look at the portion of the filtered data (which I independently extracted, shown below) and notice how well it matches to the annual impulse I am applying in the ENSO model. The impulse, which is required to amplify the tidal cycles, is now clearly phase correlated to the observed annual temperature cycling. ![](http://imageshack.com/a/img921/192/JPiLyf.png) This may sound like an innocent error correction but this eliminates the possibility of tracking correlations, which is the core of any science that does not allow experimental control or laboratory experimentation. Perhaps an example of climate scientists shooting themselves in the foot! --- Without controlled experiments available, earth sciences advancements are glacial in progress, so you have to have patience. How Murray Gell-Mann described the process [in an interview](https://scienceblogs.com/pontiff/2009/09/16/gell-mann-on-conventional-wisd): > "Battles of new ideas against conventional wisdom are common in science, aren't they?" > "It's very interesting how these certain negative principles get embedded in science sometimes. Most challenges to scientific orthodoxy are wrong. A lot of them are crank. But it happens from time to time that a challenge to scientific orthodoxy is actually right. And the people who make that challenge face a terrible situation. Getting heard, getting believed, getting taken seriously and so on. And I've lived through a lot of those, some of them with my own work, but also with other people's very important work. Let's take continental drift, for example. American geologists were absolutely convinced, almost all of them, that continental drift was rubbish. The reason is that the mechanisms that were put forward for it were unsatisfactory. But that's no reason to disregard a phenomenon. Because the theories people have put forward about the phenomenon are unsatisfactory, that doesn't mean the phenomenon doesn't exist. But that's what most American geologists did until finally their noses were rubbed in continental drift in 1962, '63 and so on when they found the stripes in the mid-ocean, and so it was perfectly clear that there had to be continental drift, and it was associated then with a model that people could believe, namely plate tectonics. But the phenomenon was still there. 
It was there before plate tectonics. The fact that they hadn't found the mechanism didn't mean the phenomenon wasn't there. Continental drift was actually real. And evidence was accumulating for it. At Caltech the physicists imported Teddy Bullard to talk about his work and Patrick Blackett to talk about his work, these had to do with paleoclimate evidence for continental drift and paleomagnetism evidence for continental drift. And as that evidence accumulated, the American geologists voted more and more strongly for the idea that continental drift didn't exist. **The more the evidence was there, the less they believed it.** Finally in 1962 and 1963 they had to accept it and they accepted it along with a successful model presented by plate tectonics...." This was the telling passage of the Gell-Mann interview, which is easily missed on first read -- *"The more the evidence was there, the less they believed it"* Did he really mean that? The more the accumulation of evidence, the stronger the resistance? (The full interview is available on [Science News to subscribers](https://www.sciencenews.org/article/interview-murray-gell-mann)).
  • 390.

    On tectonics, it wasn't only Bullard and Blackett; it was, famously, J Tuzo Wilson and then Marie Tharp and Bruce Heezen (although I have my doubts about how active Heezen was, apart from the fact that he was male and therefore more believed than Tharp, a telling condemnation in itself). There remain a small cadre of geologists in the U.S. who still try to refute evidence for tectonics, despite, now, deep seismic imaging of the mantle which reveals descending plates and microquakes along them.

    Comment Source:On tectonics, it wasn't only Bullard and Blackett, it was, famously, J Tuzo Wilson and then Marie Tharp and Bruce Heezen (although I have my doubts about how active Heezen was, apart from he was male and therefore more believed than Tharp, a telling condemnation in itself). There remain a small cadre of geologists in the U.S. who *still* try to refute evidence for tectonics, despite, now, deep seismic imaging of the mantle which reveals descending plates and microquakes along them.
  • 391.

    In a recent discussion called "feedbacks, runaway, and tipping points" I linked to a concise derivation of how climate set-points are reached based on what we published last year.

    This is a fun derivation for anyone interested in reducing well-known relations to a quadratic equation, with the two solutions giving the low-T snowball-earth regime and the high-T current regime. In other words, the positive feedback of CO2-catalyzed warming does not lead to a thermal runaway.

    There's a geologist in the comments who apparently called it a curve-fit, and the discussion went downhill from there. I wouldn't dare talk plate tectonics on that blog :)

    Comment Source:In a recent discussion called ["feedbacks, runaway, and tipping points"](https://andthentheresphysics.wordpress.com/2020/02/01/feebacks-runaway-and-tipping-points/) I linked to a concise derivation of how climate set-points are reached based on what we published last year. This is a fun derivation for anyone interested in reducing well-known relations to a quadratic equation, with the two solutions giving the low-T snowball-earth and the high-T current regime. In other words, the positive feed back of CO2-catalyzed warming does not lead to a thermal runaway. ![](https://pbs.twimg.com/media/EPjcnuwWAAABWMd.png) ![](https://pbs.twimg.com/media/EPjcnu3X4AQKjHU.png) ![](https://pbs.twimg.com/media/EPjcnuuWsAIV21Z.png) ![](https://pbs.twimg.com/media/EPjcnuvXsAAXfxO.png) There's a geologist on the comments that apparently called it a curve-fit and the discussion went downhill from there. I wouldn't dare talk plate tectonics on that blog :)
  • 392.
    edited February 22
    Bezos to spend $10,000,000,000 to stop climate change https://arstechnica.com/tech-policy/2020/02/jeff-bezos-pledges-10-billion-to-stop-climate-change/

    EDIT: see the new thread here => https://forum.azimuthproject.org/discussion/2484/bezos-earth-fund

    Comment Source:Bezos to spend $10,000,000,000 to stop climate change https://arstechnica.com/tech-policy/2020/02/jeff-bezos-pledges-10-billion-to-stop-climate-change/ EDIT: see the new thread here => https://forum.azimuthproject.org/discussion/2484/bezos-earth-fund
  • 393.

    This is a wild correlation https://geoenergymath.com/2020/02/21/the-mjo/

    There's ENSO, which is a standing-wave oscillation, and there's the Madden-Julian Oscillation (MJO), which is a traveling wave that propagates eastward a bit like an erratic sequence of solitons.

    I found a correlation between the two by analyzing the high-resolution daily Southern Oscillation Index (SOI) of ENSO and then duplicating the amplitude and phase of the MJO by shifting the SOI forward by ~21 days. In other words, the SOI leads the MJO by 21 days.

    No one in the climate research literature has reported this as far as I can tell, even though it is blatantly obvious.

    The mechanism is straightforward to explain -- transitions in the ENSO forcing (including El Nino/La Nina transitions) are the trigger for the MJO traveling wave, and it likely takes at least several days for the traveling wave to fully propagate to the extent it can be measured. Perhaps think in terms of seeing a lightning strike and then hearing the thunder, which propagates at about 5 seconds per mile; in this case the MJO traveling wave is only moving at ~5 meters per second, so a 21-day lead corresponds to roughly 9,000 km of propagation.
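
    A straightforward way to test such a lead relationship (a sketch; `soi` and `mjo` are assumed to be aligned daily series):

    ```python
    import numpy as np

    # Scan lead times: correlate the SOI shifted forward against the MJO index
    # and look for a correlation peak near a ~21-day SOI lead.
    def lead_correlation(soi, mjo, max_lag=60):
        lags = np.arange(1, max_lag + 1)
        cc = np.array([np.corrcoef(soi[:-lag], mjo[lag:])[0, 1] for lag in lags])
        return lags, cc
    # lags[np.argmax(cc)] would sit near 21 if the SOI leads the MJO by ~21 days.
    ```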

    This paper explains how a standing wave and traveling wave can be isolated https://atmos.washington.edu/~oliverwm/publications/Watt-MeyerKushner2015a.pdf

    The left column is the combined waveform and the two on the right are the SW & TW decomposition.

    Comment Source:This is a wild correlation https://geoenergymath.com/2020/02/21/the-mjo/ There's ENSO, which is a standing-wave oscillation and there's the Madden-Julian Oscillation (MJO), which is a traveling-wave that propagates eastward a bit like an erratic sequence of [solitons](https://en.wikipedia.org/wiki/Soliton). Found a correlation between the two by analyzing the high-resolution daily Southern Oscillation Index (SOI) of ENSO and then duplicated the amplitude and phase of the MJO by shifting it forward by ~21 days. IOW, the SOI leads the MJO by 21 days. No one in the climate research literature has reported this as far as I can tell, even though it is blatantly obvious. The mechanism is straightforward to explain -- transitions in the ENSO forcing (including El Nino/La Nina transitions) are the trigger for the MJO traveling wave, and it likely takes at least several days for the traveling wave to fully propagate to the extent it can be measured. Perhaps think in terms of a visual lightning strike and then hearing the thunder, propagating at 7 seconds per mile, but in this case the MJO traveling wave is only moving at ~5 meters per second. This paper explains how a standing wave and traveling wave can be isolated https://atmos.washington.edu/~oliverwm/publications/Watt-MeyerKushner2015a.pdf The left column is the combined waveform and the two on the right are the SW & TW decomposition. ![](https://imagizer.imageshack.com/img922/3351/q2SSgm.gif)
  • 394.

    I'm currently tooling on G. W. Imbens, D. B. Rubin, Causal Inference for Statistics, Social, and Biomedical Sciences, Cambridge, 2015. I am enthusiastic about using causal inference and tools like propensity scores for scientific purposes, as I noted in a comment at Ewan's blog post, "Code for causal inference: Interested in astronomical applications".

    Comment Source:I'm currently tooling on G. W. Imbens, D. B. Rubin, *Causal Inference for Statistics, Social, and Biomedical Sciences*, Cambridge, 2015. I am enthusiastic about using causal inference and tools like *propensity scores* for scientific purposes, as I noted in a comment at Ewan's blog post, "[Code for causal inference: Interested in astronomical applications](https://astrostatistics.wordpress.com/2020/01/23/code-for-causal-inference-interested-in-astronomical-applications/)".
  • 395.
    edited November 23

    Speaking of solitons there's a recent paper in PRL, which I just caught because it referenced an old paper of ours for a key finding.

    Hafke, B. et al. Thermally Induced Crossover from 2D to 1D Behavior in an Array of Atomic Wires: Silicon Dangling-Bond Solitons in Si(553)-Au. Physical Review Letters 124, (2020).

    This is fascinating in how the solitons are triggered by increasing the thermal energy applied to the step-edge dangling bonds. The “standing waves” of the bonding arrangement release a soliton (the equivalent of a traveling wave), which is then free to slide along the 1D wire defined by the step edge.

    A couple of points here. One is how similar this arrangement is to the 1D behavior of the standing-wave ENSO and the traveling-wave MJO. In the solid-state situation, the (N-S/Euler/Laplace) wave equation of a fluid is replaced by Schrödinger's wave equation of the lattice, considering the electron occupancy level of unsaturated dangling bonds. The stable low-T state shows a 3-fold periodicity in occupancy (analogous to the main ENSO dipole), but at higher T it starts triggering solitons that fuzz up the periodicity. This is where they incorporate our analysis for detecting order/disorder transitions in periodicity.

    Another point is that concepts from condensed-matter physics, in particular ideas based on topological insulators of low-dimensional structures, are useful here. This is a la the work of Delplace, Marston, and Venaille, who claim that equatorial waves have a topological origin, borrowing ideas from the quantum Hall effect. This is such cool stuff, of which most climate scientists, save for these guys and what we are working on here, are unaware. So there is hope.


    1D molecular chain

    https://newscenter.lbl.gov/2020/11/12/charges-cascading-molecular-chain/

    Comment Source:Speaking of solitons there's a recent paper in PRL, which I just caught because it referenced an old paper of ours for a key finding. Hafke, B. et al. [Thermally Induced Crossover from 2D to 1D Behavior in an Array of Atomic Wires: Silicon Dangling-Bond Solitons in Si(553)-Au](https://www.researchgate.net/publication/338522710_Thermally_Induced_Crossover_from_2D_to_1D_Behavior_in_an_Array_of_Atomic_Wires_Silicon_Dangling-Bond_Solitons_in_Si553-Au). Physical Review Letters 124, (2020). This is fascinating in the how the solitons are triggered by increasing thermal energy applied to the step-edge danging bonds. The “standing waves” of the bonding arrangement release a soliton (the equivalent of a traveling wave) which makes it free to slide along the 1D wire defined by the step edge. ![](https://imagizer.imageshack.com/img924/2396/YUzMNx.gif) A couple of points here. One is in that how similar this arrangement is to the 1D behavior of the standing-wave ENSO and the traveling wave MJO. In the solid state situation, the (N-S/Euler/Laplace) wave equation of a fluid is replaced by Schrödinger's wave equation of the lattice considering the electron occupancy level of unsaturated dangling bonds. The stable low-T state shows a 3-fold periodicity in occupancy (analogous to the main ENSO dipole) but at higher T it starts triggering solitons that fuzz up the periodicity. This is where they incorporate [our analysis for detecting order/disorder transitions in periodicity](https://www.sciencedirect.com/science/article/pii/0039602885907277). Another point is in the concept that condensed matter physics and ideas in particular based on topological insulators of low-dimensional structures are useful. This is *a la* the work of [Delplace, Marston, Venaille](https://science.sciencemag.org/content/358/6366/1075.abstract), who also claim that equatorial waves have a topological origin, borrowed from ideas of the quantum Hall effect. This is such cool stuff and in which most climate scientists, save for these guys and what we are working on here, are unaware. So there is hope. --- 1D molecular chain https://newscenter.lbl.gov/2020/11/12/charges-cascading-molecular-chain/ ![](https://newscenter.lbl.gov/wp-content/uploads/sites/2/2020/11/STM-1D-array-graphene-1200x800-1-628x419.png)
  • 396.
    edited February 29

    There isn't a way to validate Lotka-Volterra-type predator-prey models other than that they cycle in a fashion approximating that of observations. A more realistic model may take into account seasonal and climate variations that control populations directly. The following is a recent paper by a wildlife ecologist who has long been working on the thesis that seasonal/tidal cycles play a role (one paper that he wrote on the topic dates to 1977).

    Archibald, H. L. Relating the 4-year lemming (Lemmus spp. and Dicrostonyx spp.) population cycle to a 3.8-year lunar cycle and ENSO. Can. J. Zool. 97, 1054–1063 (2019).

    These are his main figures:

    fig1 fig2

    This directly agrees with the ENSO model driven by the fortnightly tropical cycle (13.66 days) described in this thread (see the middle right pane in the figure below):

    The tidal forcing square wave aligns with the cyclic peak lemming populations. The 3.8-year cycle derives directly from the annual alias of the fortnightly tropical cycle: 1/(27 - 365.242/13.6608) = 3.794 years.

    There's also a derivation of Lotka-Volterra in the Azimuth Wiki -- https://www.azimuthproject.org/azimuth/show/Lotka-Volterra+equation

    Which model is a better example of Green Math?

    EDIT: This is in no way a validation either way, but here is another view of the tidal forcing used in the climate index model in comparison to the peak years in lemming population. The dotted lines are a guide to the eye.

    Comment Source:There isn't a way to validate [Lotka-Volterra-type predator-prey](https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equations) models other than that they cycle in a fashion approximating that of observations. A more realistic model may take into account seasonal and climate variations that control populations directly. The following is a recent paper by a wildlife ecologist that has long been working on the thesis that seasonal/tidal cycles play a role (one paper that he wrote on the topic dates to 1977). > Archibald, H. L. [Relating the 4-year lemming ( Lemmus spp. and Dicrostonyx spp.) population cycle to a 3.8-year lunar cycle and ENSO](https://tspace.library.utoronto.ca/bitstream/1807/97104/1/cjz-2018-0266.pdf). Can. J. Zool. 97, 1054–1063 (2019). These are his main figures: > ![fig1](https://imagizer.imageshack.com/img923/2978/48k1uU.png) > ![fig2](https://imagizer.imageshack.com/img924/4862/Clalwp.png) This directly agrees with the ENSO model driven by the fortnightly tropical cycle (13.66 days) described in this thread (see the middle right pane in the figure below): ![](https://imagizer.imageshack.com/img921/8176/Vb5hJl.gif) The tidal forcing square wave aligns with the cyclic peak lemming populations. The 3.8 year cycle derives directly from 1/(27-365.242/13.6608) = 3.794 years. There's also a derivation of Lotka-Volterra in the Azimuth Wiki -- https://www.azimuthproject.org/azimuth/show/Lotka-Volterra+equation Which model is a better example of Green Math? EDIT: This is in no way a validation either way but here is another view of the tidal forcing used in the climate index model in comparison to the peak years in lemming population. The dotted lines are a guide to the eye ![](https://imagizer.imageshack.com/img924/9279/yXZ6UM.gif)
  • 397.
    edited March 13

    In case anyone needs some inspiration:

    In 1665, the University of Cambridge temporarily closed due to the bubonic plague. Isaac Newton had to work from home, and he used this time to develop calculus and the theory of gravity. https://t.co/EA98WDihJA

    — Martin Kleppmann

    ... and a principia: https://geoenergymath.com/2020/03/12/groundbreaking-research/

    Comment Source:In case anyone needs some inspiration: > In 1665, the University of Cambridge temporarily closed due to the bubonic plague. Isaac Newton had to work from home, and he used this time to develop calculus and the theory of gravity. <a href="https://t.co/EA98WDihJA">https://t.co/EA98WDihJA</a></p>&mdash; Martin Kleppmann ... and a principia : https://geoenergymath.com/2020/03/12/groundbreaking-research/
  • 398.
    Abram, N. J., et al., Coupling of Indo-Pacific climate variability over the last millennium (2020) https://go.nature.com/2wVMDee

    Comment Source:Abram, N. J., et al., [Coupling of Indo-Pacific climate variability over the last millennium (2020)](https://go.nature.com/2wVMDee)
  • 399.
    edited March 20

    "Suddenly everyone believes in models" -- noted by many in social media circles over the last few days

    and this:

    "A warning: with quarantines, travel bans, bailouts, money printing, and more…we have entered the time of the unlimited government.

    The unlimited government is limited not by law or precedent but by physics." -- @balajis

    Comment Source:"Suddenly everyone believes in models" -- noted by many in social media circles over the last few days and this: *"A warning: with quarantines, travel bans, bailouts, money printing, and more…we have entered the time of the unlimited government.* *The unlimited government is limited not by law or precedent but by physics."* -- @balajis
  • 400.
    edited April 11

    https://medium.com/@frd.prost/technè-epistémè-et-praxis-sont-dans-un-bateau-1b6eb09ab05e

    "The ethical imperatives in terms of knowledge building are for doctors to convey the conditions and results of their actions as precisely as possible. It is only after the battle that the necessary sorting work will take place. Right now medical teams around the world are trying things. They communicate their experiences with each other through new means of communication: through social networks, using new means of communication (various messaging). Hundreds of articles have been published on a disease that did not exist three months ago. For example on BiorXiv where the work has not been validated before publication, as is the case in scientific journals. Are these works useless because they have not received the seal of peer accreditation? No, you just have to take them for what they are: partial information on which you have to exercise a critical eye. Peer validation has not disappeared; it applies after publication. It's a new way of collaborating that didn't exist before because the act of publishing was expensive: you had to print physical books, distribute them, etc. The cost of withdrawal in the event of an error was high, it is essentially zero today (we can update the information). This way of validating new knowledge, which goes through width rather than depth, is faster and takes more effort. You have to exercise critical thinking all the time, but it is also the promise of faster progress than we have ever known. It must also encourage reflection, more than reflexes, ethics because new questions arise."

    This is essentially a decision to "sort it out later".

    This relates to the ENSO & QBO effort. Even though they were informative, some of the false starts I have taken on this long thread include:

    • Climate shifts -- phase reversals or other fudge factor events used to explain anomalous changes
    • Mathieu equation -- a fluid dynamics formulation to describe sloshing that held some initial promise until the LTE analysis approach was settled on
    • Delay differential equations -- applying ideas from the simple recharge oscillator ENSO models used by others
    • Biennial modulation -- a diversion based on clues from DDE and Mathieu period doubling, until it was obvious that an annual modulation worked more parsimoniously and plausibly
    • Nodal vs Tropical -- Predominantly nodal (draconic) tidal forcing worked so well for QBO and the Chandler wobble that I assumed it was the same for ENSO. Now it's obvious why tropical applies to ENSO.

    Some of the branches I avoided were the idea of teleconnections (clearly a case of a dog chasing its own tail), full GCMs (not when simplicity is the guiding principle), and the dead end of chaos mathematics.

    In fact, the entire research path of applying recharge oscillator models to ENSO may itself have been a "sort it out later" diversion. None of these non-linear models, such as Zebiak-Cane (https://www.azimuthproject.org/azimuth/show/ENSO#ZCModel), has ever been fit adequately to the data. Nor have they been contextualized against the Navier-Stokes basis of the typical GCM formulation, because, like the toy Lorenz model of chaos, they don't actually attempt to model the underlying fluid dynamics. In contrast, the LTE approach described in this thread aligns directly with a GCM, as both LTE and GCMs derive from the Navier-Stokes fluid dynamics.
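
    To make that lineage concrete, here is a standard textbook form of the linearized rotating shallow-water equations, i.e., Laplace's tidal equations in Cartesian coordinates (a sketch for orientation, not copied from this thread): they follow from Navier-Stokes under the same hydrostatic, thin-layer approximations that GCM dynamical cores build on. Here u and v are the horizontal velocities, η the surface displacement, D the mean depth, f the Coriolis parameter, and F_x, F_y the tidal forcing:

$$
\begin{aligned}
\frac{\partial u}{\partial t} - f v &= -g\,\frac{\partial \eta}{\partial x} + F_x \\
\frac{\partial v}{\partial t} + f u &= -g\,\frac{\partial \eta}{\partial y} + F_y \\
\frac{\partial \eta}{\partial t} &= -D\left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\right)
\end{aligned}
$$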

    So whether or not models are peer-reviewed, eventually they all still need to be sorted out over time.


    Latest on triad waves: https://geoenergymath.com/2020/04/06/triad-waves/

    EDIT: How the triad waves relate is that they help resolve the period doubling: the general triad for energy transfer obeys the wavenumber relationship K = Ka + Kb, and when Ka = Kb the two daughter waves each carry half the wavenumber, so the triad reduces to a period doubling from small scale to large scale.
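
    Spelled out (my notation, a sketch of that special case rather than anything from the linked post):

$$
K = K_a + K_b, \qquad K_a = K_b \;\Rightarrow\; K_a = \frac{K}{2} \;\Rightarrow\; \lambda_a = \frac{2\pi}{K_a} = 2\cdot\frac{2\pi}{K} = 2\lambda,
$$

    so the two equal daughter waves have twice the spatial period of the parent wave, which is the period doubling that transfers energy from the small scale λ to the large scale 2λ.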
