A couple of weeks ago I did the Tahiti-Darwin correlation coefficient calculation that John is talking about here:

The number I get using Mathematica is -0.55, in line with John's interpretation of it being a negative number because the dipole is anti-correlated.
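The shape of that calculation can be sketched in Python. The series below are synthetic stand-ins (a shared dipole oscillation plus noise), not the actual Tahiti and Darwin station records, so the exact value will differ from -0.55; the point is only that the Pearson CC comes out negative for anti-correlated series, and that negating one series flips the sign.

```python
import numpy as np

# Hypothetical monthly pressure-anomaly series standing in for the real
# Tahiti and Darwin records (assumption: the real data would be loaded
# from the station files instead).
rng = np.random.default_rng(0)
t = np.arange(600)                       # 50 years of monthly samples
dipole = np.sin(2 * np.pi * t / 45)      # shared dipole oscillation
tahiti = dipole + 0.8 * rng.standard_normal(t.size)
darwin = -dipole + 0.8 * rng.standard_normal(t.size)  # anti-correlated pole

# Pearson correlation coefficient between the two series.
cc = np.corrcoef(tahiti, darwin)[0, 1]
print(cc)            # a moderate negative value for anti-correlated series

# Negating one series flips the sign of the CC.
cc_flipped = np.corrcoef(-tahiti, darwin)[0, 1]
print(cc_flipped)
```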

In the figure below, I take the negative of the Tahiti measure to make it positive (I also multiply the CC by 100 so the value can be read directly off the chart).


It is possible that Dara is doing something similar by way of taking the complement and just losing track of the sign.

A value of -0.55 does not look particularly strong, and it is a weaker anti-correlation than the -0.58, but this number is also extremely sensitive to the amount of high-frequency noise in the time-series data. The higher frequencies can rapidly reduce the anti-correlation because any slight amount of phase shifting causes the two waves to rapidly lose alignment according to the CC formula.
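This sensitivity is easy to demonstrate: for a sinusoid shifted against itself, the CC falls off roughly as the cosine of the phase offset, so the same absolute misalignment in time costs far more correlation at a short period than at a long one. A minimal illustration (synthetic sinusoids, not the SOI data):

```python
import numpy as np

def cc_after_shift(period, shift, n=1000):
    """Pearson CC between a sinusoid and a time-shifted copy of itself."""
    t = np.arange(n, dtype=float)
    a = np.sin(2 * np.pi * t / period)
    b = np.sin(2 * np.pi * (t - shift) / period)
    return np.corrcoef(a, b)[0, 1]

shift = 3.0  # identical misalignment, in samples, for both cases
slow = cc_after_shift(period=120, shift=shift)  # slow wave: CC stays near 1
fast = cc_after_shift(period=8, shift=shift)    # fast wave: CC collapses
print(slow)
print(fast)
```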

I was hoping to find some other measure that considers the amplitude error caused by slight phase shifts and takes this into account for a "goodness of fit" criterion. In other words, the error would be the distance in amplitude and time, instead of just the amplitude difference at a particular time. I am not sure that such a beast exists; perhaps it is related to a variation of a Mahalanobis distance measure?
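One existing measure in this spirit is dynamic time warping (DTW), which lets each point in one series align with a nearby point in time in the other, so a small phase shift is cheap while amplitude mismatches still accumulate. Whether it is the right tool here I can't say, but a minimal sketch (on synthetic sinusoids, not the SOI data) shows the intended behavior:

```python
import numpy as np

def dtw_distance(a, b):
    """Minimal dynamic-time-warping distance: each sample of one series
    may align with a nearby sample of the other, so small phase shifts
    cost little while amplitude differences still accumulate."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.arange(200, dtype=float)
wave = np.sin(2 * np.pi * t / 20)
shifted = np.sin(2 * np.pi * (t - 2) / 20)  # slight phase shift

# Pointwise amplitude error is large; the warped distance is small,
# because the shift can be absorbed by the time alignment.
pointwise = np.abs(wave - shifted).sum()
warped = dtw_distance(wave, shifted)
print(pointwise)
print(warped)
```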