

in Chat

I've just been trying to unpick the algorithm in this paper. I had to skip this equation on the first page due to lack of maths fu.

$R \mathbf{e} = \mathbf{e} \lambda \qquad (4)$

where R is a symmetric, orthogonal matrix and e is a (presumably column) vector.

I only know right and left eigenvectors:

$R \mathbf{e} = \lambda \mathbf{e}$ (right)

$\mathbf{e} R = \lambda \mathbf{e}$ (left)

so I don't know what (4) represents.

Will somebody please give me a clue? I might add that my ignorance extends to only ever having used right eigenvectors, which are apparently convenient for most applications (including all those I've come across). When would I use a left eigenvector?

Tia

## Comments

Jim, I think you are reading too much into it: $\lambda$ is a scalar, so you can write it on either side.

$\lambda \mathbf{e} = \mathbf{e} \lambda$

I think Kutzbach is just indulging his free spirit by writing the scalar on the right.

A left eigenvector is just a right eigenvector of the transposed matrix, since $(A B)^T = B^T A^T$ for all multiplication-compatible matrices, including column and row vectors. For symmetric matrices left and right eigenvectors are the same, except left eigenvectors are row vectors. The main reason to use left eigenvectors is presumably to avoid writing the transpose sign. :)
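To make that concrete, here's a small check in Python on a made-up symmetric matrix (not one from the paper): the same vector works as a right eigenvector when used as a column and as a left eigenvector when used as a row, and the scalar commutes either way.

```python
import numpy as np

# A made-up symmetric matrix standing in for R (not from Kutzbach's paper).
R = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

vals, vecs = np.linalg.eig(R)    # columns of vecs are right eigenvectors
e, lam = vecs[:, 0], vals[0]

print(np.allclose(R @ e, lam * e))    # True: right eigenvector, R e = lambda e
print(np.allclose(e @ R, lam * e))    # True: the same vector as a row is a left eigenvector
print(np.allclose(e * lam, lam * e))  # True: scalars commute, so e lambda = lambda e
```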


Thanks Daniel,

A spot-on diagnosis of over-reading on my part. Free-spirited math notation is anathema if you're trying to code the stuff.


Daniel wrote:

> I think Kutzbach is just indulging his free spirit by writing the scalar on the right.

Yes, that's a good way of putting it. I don't know why the hell someone would want to write the scalar on the right, but it's allowed, and it's the same as scalar multiplication on the left - unless our scalars are something wacky like quaternions instead of real or complex numbers!


I would really like to find a paper that starts with _global climate data_ and deduces teleconnections using principal component analysis - like what Kutzbach is doing on a much smaller scale, using the limited computing resources of his day. Someone should have done this - but if nobody has, maybe we should someday.

By the way, climate scientists usually speak of "empirical orthogonal functions", but that's the same as [principal component analysis](https://en.wikipedia.org/wiki/Principal_component_analysis), so if Jim wants to learn about this technique he might try the Wikipedia article I just linked to.
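For what it's worth, the whole EOF/PCA computation fits in a few lines of Python these days. Here's a sketch on synthetic data (the array shapes and values are invented, just to show the mechanics): center the space-time anomaly matrix, take its SVD, and read off spatial patterns, time series, and explained variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "climate" data: 120 monthly time steps x 50 grid points (made up).
X = rng.standard_normal((120, 50))

# EOF analysis = PCA: remove the time mean at each grid point...
X = X - X.mean(axis=0)

# ...then take the SVD of the anomaly matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

eofs = Vt                              # rows are spatial patterns (EOFs)
pcs = U * s                            # columns are the corresponding time series (PCs)
variance_frac = s**2 / np.sum(s**2)    # fraction of variance explained by each mode

# Sanity check: the leading EOF is an eigenvector of the covariance matrix X^T X.
cov = X.T @ X
print(np.allclose(cov @ eofs[0], s[0]**2 * eofs[0]))  # True
```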


That's kind of what I was doing earlier when I found the interference-like patterns, except I was mostly using ICA and NMF instead of PCA, but all those are just different flavors of matrix factorization. Also, the importance heat maps from the models I did recently for ENSO identify critical regions, but by a rather different method.
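Since all of these are matrix factorizations $X \approx W H$ with different constraints, here is a minimal NMF sketch using the classic Lee-Seung multiplicative updates on synthetic non-negative data (nothing here is from the actual climate analyses):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((60, 40)))  # non-negative data matrix (made up)
k = 4                                      # number of components to extract

# Random non-negative initial factors.
W = rng.random((60, k))
H = rng.random((k, 40))

# Lee-Seung multiplicative updates: each step keeps W, H non-negative
# and monotonically decreases the Frobenius error ||X - W H||.
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(err < 1.0)  # True: the rank-4 factorization approximates X
```

PCA would instead impose orthogonality on the components, and ICA statistical independence; only the constraint changes, not the $X \approx W H$ shape of the problem.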


> ICA and NMF instead of PCA, but all those are just different flavors of matrix factorization.

I hated PCA 50 years ago because that was the method Eysenck used to detect racial differences in some generalised supposed IQ. I don't know what method Jensen used.

Paul, could you elaborate on the limits of EOF analysis? I've managed to get the cc and ccnef functions going in common component network analysis (CCN), which apparently Blake Pollard might be doing something with.

OT: There was a book by Jacques Vallee which influenced my IT gang, and which I'm fairly sure was called 'The information routing group solution'. It's great to see all its futurology happen in open public research like all these models people have come up with here.

Cheers.


Jim, if you are asking me about the limitations of EOF analysis, I would say the main issue is that it replaces one empirical time series with two or more empirical time series. If the new time series are just as erratic as the one they replace, then you have that many more underlying physical models to work out.

And if the EOFs are not independent in the end, or turn out to be spurious combinations, then you will have to start over.

One of the strangest EOF-like combinations that I have seen is on this thread http://forum.azimuthproject.org/discussion/1498/is-there-an-exact-biannual-global-temperature-oscillation/?Focus=12857#Comment_12857

A machine-learning algorithm "discovered" that the QBO at altitude of 40 hPa is a *multiplication* of the QBOs at 30 hPa and 50 hPa. I still don't know what this means, and I am not sure if it is just some artifact of improper data manipulation or perhaps a reanalysis gone haywire, i.e. they tried to interpolate 30 and 50 hPa to get 40 hPa, but did a multiplication instead of an addition to average.

![QBO](http://imageshack.com/a/img540/1031/Q9nT8E.gif)
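A toy calculation shows why a *product* of two oscillations is genuinely different from an interpolated average: by the identity $\sin a \sin b = [\cos(a-b) - \cos(a+b)]/2$, the product acquires a constant offset and a double-frequency term that no linear combination has. The sinusoids below are made up, not real QBO data.

```python
import numpy as np

# Two made-up QBO-like series at "30 hPa" and "50 hPa": same period, phase-shifted.
t = np.linspace(0.0, 10.0, 500)
q30 = np.sin(2 * np.pi * t / 2.3)
q50 = np.sin(2 * np.pi * t / 2.3 + 0.8)

average = 0.5 * (q30 + q50)  # what interpolating to 40 hPa would give
product = q30 * q50          # what the algorithm reportedly "discovered"

# The average is still a roughly zero-mean oscillation at the original frequency,
# while the product sits around the constant offset cos(0.8)/2 ~ 0.35.
print(abs(float(average.mean())) < 0.15)  # True
print(float(product.mean()) > 0.2)        # True
```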