There is a neat technique called [frequency-domain cross-validation](http://www.diva-portal.org/smash/get/diva2:315809/FULLTEXT02) (FDCV) that can validate a certain class of nonlinear models.

> "In the frequency domain cross validation is easily performed by dividing the frequency measurements in two disjoint sets : estimation data and validation data. [...] A model is then estimated using the estimation data only. The quality of the model is assessed by comparing the estimated transfer function with the validation data set and a proper model order can then be inferred."

If the sets are non-overlapping and the model fit to one set is able to predict the amplitude and phase spectrum of the other set, there must be a fundamental process linking the profiles of the two frequency intervals. This can happen when the frequencies are correlated, as is the case with harmonics. Consider modeling an unknown spectrum with square-wave components when the estimation data contains only high-harmonic frequencies. If the *actual* signal is a square wave, then the high-harmonic fit naturally implies the lower-frequency fundamental component. The cross-validation check then succeeds because the underlying model identifies the necessary fundamental implicitly.
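
The square-wave example can be sketched numerically. This is a minimal illustration with hypothetical numbers (the amplitude `A_true` and harmonic cutoff are made up): fit the single square-wave amplitude parameter using only the high harmonics, then check that the fitted model predicts the fundamental it never saw.

```python
import numpy as np

# A square wave of amplitude A has Fourier sine coefficients 4A/(pi*k)
# for odd harmonics k. If we assume a square-wave model and fit its
# amplitude A using ONLY the high harmonics (k >= 3, the "estimation"
# set), the fitted model automatically predicts the held-out
# fundamental (k = 1, the "validation" set).

A_true = 2.0                            # hypothetical amplitude of the true square wave
odd_k = np.arange(1, 20, 2)             # odd harmonic indices 1, 3, 5, ...
coeffs = 4 * A_true / (np.pi * odd_k)   # true spectral amplitudes

k_est = odd_k[odd_k >= 3]               # estimation set: high harmonics only
estimation = coeffs[odd_k >= 3]

# Least-squares fit of the one parameter A against the model c_k = A * 4/(pi*k)
basis = 4 / (np.pi * k_est)
A_fit = np.dot(estimation, basis) / np.dot(basis, basis)

# The model's prediction at k = 1, compared against the held-out coefficient
fundamental_pred = 4 * A_fit / np.pi
```

Because the harmonics are all tied to the one amplitude parameter, the high-frequency fit pins down the low-frequency fundamental exactly.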

That's a trivial and perhaps contrived example, but it illustrates that this technique won't work on behaviors with independent signal components; it remains effective only if some modulation links the bands. For example, it would be impossible to predict that a 60 Hz hum existed in a signal if all you could process were frequency components of 1 kHz and above. That would be possible only if some behavior in the model forced the hum to appear simultaneously in both the low and high bands through a cooperative interaction -- say, a nonlinear interaction or frequency modulation -- but the model would have to account for that. In fact, this is why FM and AM demodulation work: even though the measured signals are embedded, i.e. nonlinearly mixed within a high-frequency carrier, the model is able to extract the low-frequency signal. The "validation" that the model works is that a high-fidelity tune is recognizable when played on a radio. Our brain is sophisticated enough to recognize the tune, and failing that, something like Shazam could do the identification.

With ENSO, the nonlinear mixing is due to the biennial (alternating-year) modulation of the cyclic lunar gravitational tidal pull. This biennial modulation is the mixing factor that allows a frequency-domain cross-validation to work. We simply tune the tidal factors to fit the frequency spectrum of one interval (the estimation part), and then check whether they also pop out on the orthogonal part of the frequency spectrum with a high correlation against the modeled results (the validation part).
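
A toy spectrum shows why such a modulation links disjoint frequency intervals. The numbers here are illustrative (a stand-in tidal frequency of 0.75 cycles/year, not an actual lunar alias): multiplying the tidal cycle by a period-2-year square-wave modulation spreads its energy into sidebands on both sides, so one set of tidal parameters produces peaks in two disjoint bands at once.

```python
import numpy as np

fs = 24.0                            # samples per year (roughly semi-monthly)
t = np.arange(0, 200, 1 / fs)        # 200 years of synthetic data
f_tidal = 0.75                       # cycles/year, illustrative stand-in
tidal = np.cos(2 * np.pi * f_tidal * t)

biennial = np.sign(np.cos(np.pi * t))   # +1/-1 flip each year (2-year period)
mixed = biennial * tidal                # the nonlinear (multiplicative) mixing

spec = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)   # in cycles/year

# A square-wave biennial modulation (fundamental 0.5/yr) puts the
# strongest sidebands at f_tidal +/- 0.5: peaks at 0.25 and 1.25 cycles/yr,
# one on each side of a 1 cycle/yr dividing line.
peak_low = freqs[np.argmax(spec * (freqs < 1.0))]
peak_high = freqs[np.argmax(spec * (freqs >= 1.0))]
```

Fit the sideband pattern below 1 cycle/year and the same parameters necessarily predict the sideband above it; that is the leverage FDCV exploits.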

The only way this approach would fail as a validation is if the extra degree of freedom (DOF) afforded by the biennial modulation somehow improved the correlation more than any other modulation we could pick (such as triennial or biannual). However, this is wildly implausible, as it is well known that a strong biennial modulation operates in the Pacific ocean's dynamics. In other words, this is a very weak DOF and functions more as a constraint than a free parameter. Like the tidal pull (which **certainly** exists, with only its strength debatable), the biennial modulation is highly likely to exist. It's just a matter of pulling the signal out from whatever noise might exist.

Two other elements are necessary for the FDCV: (1) the solution to Laplace's tidal equations, and (2) taking the derivative of the ENSO data to equalize the ENSO frequency spectrum, thus allowing for a balanced comparison between the orthogonal intervals.
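
The bookkeeping side of this procedure can be sketched as follows. The tidal model itself (the Laplace's tidal equations solution) is out of scope here; the function names, band edges, and helper structure are all hypothetical, showing only the differentiate-split-compare skeleton.

```python
import numpy as np

def fdcv_split(series, dt, f_lo=0.5, f_hi=1.0):
    """Differentiate to equalize the spectrum, then split FFT bins into
    an estimation band [f_lo, f_hi) and its orthogonal complement.
    Frequencies are in cycles per unit of dt (years, for ENSO data)."""
    d = np.diff(series) / dt                   # derivative acts as a whitener
    spec = np.fft.rfft(d)
    freqs = np.fft.rfftfreq(len(d), dt)
    est = (freqs >= f_lo) & (freqs < f_hi)     # estimation interval
    val = ~est                                 # disjoint validation interval
    return spec, freqs, est, val

def band_correlation(model_spec, data_spec, mask):
    """Correlation between model and data spectral amplitudes on one band."""
    a, b = np.abs(model_spec[mask]), np.abs(data_spec[mask])
    return np.corrcoef(a, b)[0, 1]
```

In use, the tidal factors would be tuned to maximize `band_correlation` on the estimation mask, and the validation score on the complementary mask would be computed only afterward, untouched by the fit.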

The first training interval uses only the amplitude spectrum between 0.5/year and 1/year. The fit is extremely aggressive, reaching a correlation coefficient of 0.99, yet the validation interval still appears to match.

The second training interval uses the complementary set of frequency components.

![cv](https://imageshack.com/a/img921/3531/zdeVAa.gif)

These are the time-domain fits corresponding to the frequency-domain fits:

![r](https://imageshack.com/a/img922/7796/5HI1mL.gif)
