This is related to comments #344 & #345. Given that it will take many decades to validate any ENSO model via prediction, the only techniques available in the interim are various flavors of cross-validation on existing data. The following chart uses a training interval that is short (~18 years, the nodal repeat cycle) but constrains the fit by applying a stringent calibration to the angular momentum forcing as measured by length-of-day (LOD) changes -- see the middle panel.
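The cross-validation idea can be sketched numerically: fit on a short window and score the fit on the withheld remainder. This toy uses a single fixed-period sinusoid plus noise as a stand-in series -- it is not the actual LTE/ENSO formulation, just an illustration of the short-training-window procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 140, 1 / 12.0)                # ~140 years at monthly steps
# stand-in "ENSO-like" series: one known-period cycle plus noise
signal = np.sin(2 * np.pi * t / 3.8) + 0.3 * rng.standard_normal(t.size)

train = slice(0, 18 * 12)                      # short ~18-year training window

# least-squares fit of the fixed-period sinusoid on the training window only
A = np.column_stack([np.sin(2 * np.pi * t / 3.8), np.cos(2 * np.pi * t / 3.8)])
coef, *_ = np.linalg.lstsq(A[train], signal[train], rcond=None)
model = A @ coef                               # extend the fit across the full record

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_train = corr(model[train], signal[train])    # in-sample skill
r_test = corr(model[18 * 12:], signal[18 * 12:])  # out-of-sample skill
```

Because the stand-in model has the right structure, the out-of-sample correlation holds up despite the short window -- which is exactly the property one hopes to demonstrate for the real model.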


The correlation coefficient is a demanding 0.95 over both the training interval and the calibration interval. The calibration is fit to long-period tidal constituents, so it can be extended outside the training interval as an applied ENSO forcing -- note that LOD measurements are only high-precision back to 1962, so the forcing must be extrapolated for the 82 prior years. The completely unknown factor is the LTE modulation (shown in the upper inset), which is kept constant across the entire interval.
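The calibrate-then-hindcast step looks roughly like this: fit long-period tidal harmonics to an LOD-like series over the well-measured post-1962 era, then evaluate the fitted harmonics over the earlier years. The two periods (18.613-yr nodal and 8.85-yr perigee cycles) are real tidal constituents, but the synthetic "LOD" series and its coefficients here are purely illustrative.

```python
import numpy as np

yrs = np.arange(1880, 2021, 1 / 12.0)          # monthly samples, 1880-2020
periods = np.array([18.613, 8.85])             # nodal and perigee cycles (years)

def design(tt):
    """Design matrix of sin/cos pairs for each tidal constituent."""
    cols = []
    for P in periods:
        cols += [np.sin(2 * np.pi * tt / P), np.cos(2 * np.pi * tt / P)]
    return np.column_stack(cols)

# synthetic LOD-like series with made-up constituent amplitudes
true_coef = np.array([0.4, -0.1, 0.2, 0.05])
lod = design(yrs) @ true_coef

modern = yrs >= 1962                           # high-precision LOD era only
coef, *_ = np.linalg.lstsq(design(yrs[modern]), lod[modern], rcond=None)
hindcast = design(yrs) @ coef                  # forcing extrapolated back to 1880
```

Because tidal constituents are deterministic, a clean fit over the modern era pins down the harmonics and the hindcast follows mechanically; the real difficulty is that actual LOD data is noisy and the constituent set is larger.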

The lower panel is an amplitude spectrum across the entire extrapolated model range, which maintains a resemblance to the spectrum of the ENSO data in spite of the short training interval applied. Recall that, because of the LTE modulation, the positions of the Fourier peaks become scrambled, similar to what occurs with Mach-Zehnder modulation (MZM), described in comment #336.
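The peak-scrambling is easy to demonstrate: pass a single-frequency forcing through a sin(k·f) modulation (the same transfer function that appears in MZM) and compare spectra. The energy at the input frequency is redistributed into odd harmonics with weights that depend on the modulation depth k; the frequencies and k value below are arbitrary.

```python
import numpy as np

t = np.linspace(0, 100, 4096, endpoint=False)
f = np.sin(2 * np.pi * 0.5 * t)            # forcing: single peak at 0.5 cycles/unit
g = np.sin(3.0 * f)                        # LTE/MZM-style modulation, depth k = 3

freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
amp_f = np.abs(np.fft.rfft(f))             # spectrum of the forcing
amp_g = np.abs(np.fft.rfft(g))             # spectrum of the modulated output

i1 = np.argmin(np.abs(freqs - 0.5))        # bin of the original peak
i3 = np.argmin(np.abs(freqs - 1.5))        # third harmonic created by modulation
```

The forcing spectrum has essentially no energy at 1.5 cycles/unit, while the modulated output has a third-harmonic peak comparable to its fundamental (the weights follow the Bessel-function expansion of sin(k·sin θ)) -- a single input line has been "scrambled" into several output lines.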


This has been a challenging modeling problem, partly because of the lack of structural stability in the non-linear aspects of the LTE solution. Here is a recent article, titled "Escape from Model Land", which relates structural stability to the Hawkmoth effect and distinguishes it from the (what I consider inapplicable) Butterfly effect.

> "It is sometimes thought that if a model is only slightly wrong, then its outputs will correspondingly be only slightly wrong. The Butterfly Effect revealed that in deterministic nonlinear dynamical systems, a “slightly wrong” initial condition can yield wildly wrong outputs. The Hawkmoth Effect implies that when the mathematical structure of the model is only “slightly wrong” then one loses topological conjugacy (with probability one), and even the best formulated probability forecasts will be wildly wrong. This result, due to Smale in the early 1960’s holds consequences not only for the aims of prediction but also for model
> development and calibration, and of course for the formation of initial condition ensembles. Naïvely, we might hope that by making incremental improvements to the “realism” of a model (more accurate representations, greater details of processes, finer spatial or temporal resolution, etc.) we would also see incremental improvement in the outputs (either qualitative realism or according to some quantitative performance metric). Regarding the realism of short term trajectories, this may well be true! It is not expected to be true in terms of
> probability forecasts. And it is not always true in terms of short term trajectories; we note that fields of research where models have become dramatically more complex are experiencing exactly this problem: the nonlinear compound effects of any given small tweak or addition to the model structure are so great that calibration becomes a very computationally-intensive task and the marginal performance benefits of additional subroutines or processes may be
> zero or even negative. In plainer terms, adding detail to the model can make it less accurate."

This is why it is so difficult to decrypt MZM-encoded optical transmissions: not only is the modulation unknown to a hacker, but slight variations in the modulation produce completely different results. That helps explain why so much effort needs to be spent on cross-validating the LTE model for ENSO before it comes close to becoming a practical prediction tool, so re-read comment #345 in that context.
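That sensitivity can be shown numerically: run the same forcing through a sin(k·f) modulation at two slightly different depths and correlate the outputs. All of the waveforms and the depth values here are arbitrary stand-ins.

```python
import numpy as np

t = np.linspace(0, 200, 8192, endpoint=False)
# two-constituent forcing with incommensurate periods (illustrative values)
f = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 0.303 * t)

g1 = np.sin(100.0 * f)                     # "true" modulation depth
g2 = np.sin(105.0 * f)                     # depth misestimated by just 5%

r = np.corrcoef(g1, g2)[0, 1]              # near zero: the outputs decorrelate
```

A 5% error in a single modulation parameter yields a time series that is essentially uncorrelated with the true one -- the heart of the decryption difficulty, and of the cross-validation burden.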