
Tensor product of modules

edited June 2016 in General

I am reading this introduction to tensor products, which is clearly written:

Whereas in the world of vector spaces, tensors have clearly visualizable representations, things become more subtle when we generalize to modules over a ring.

He writes:

There isn’t a simple picture of a tensor (even an elementary tensor) analogous to how a vector is an arrow. Some physical manifestations of tensors are in the previous answer, but they won’t help you understand tensor products of modules. Nobody is comfortable with tensor products at first. Two quotes by Cathy O’Neil and Johan de Jong nicely capture the phenomenon of learning about them:

O’Neil: After a few months, though, I realized something. I hadn’t gotten any better at understanding tensor products, but I was getting used to not understanding them. It was pretty amazing. I no longer felt anguished when tensor products came up; I was instead almost amused by their cunning ways.

de Jong: It is the things you can prove that tell you how to think about tensor products. In other words, you let elementary lemmas and examples shape your intuition of the mathematical object in question. There’s nothing else, no magical intuition will magically appear to help you “understand” it.

This is discouraging. Can we do better than this?

There is the construction of the tensor product as the quotient of an enormous (free) module by an enormous submodule, but it doesn't register with my intuition very well.

Regarding this, Conrad says:

From now on forget the explicit construction of M ⊗R N as the quotient of an enormous free module FR(M × N). It will confuse you more than it’s worth to try to think about M ⊗R N in terms of its construction.

He says instead to use the universal mapping property to understand the tensor product. But I don't like the idea of abandoning the definition of something in order to understand it.

Is this a case where it only makes sense to understand things through their morphisms? I hope not, because I like objects as well as arrows :)

Comments

  • 1.

    David, I am grateful for this topic. I have tried to understand tensors for several months now and I still don't understand the vector space version. I realized, though, that I must understand them well if I want to understand general relativity as well as Lie groups and algebras. I am still not able to make simple calculations. So I appreciate your teaching me what you know about the vector space version.

    However, I feel that I have gained a bit of intuition. I think the most important thing is that a tensor is characterized by its type (p, q), where the total dimension n = p+q is to be thought of as broken down into a "bottom-up" component of p dimensions building up space (adding planes) and a "top-down" component of q dimensions which is dismantling space (removing hyperplanes). I find this picture helpful: https://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors#/media/File:Vector_1-form.svg because it makes clear that there are two different ways of looking at space involved. The vector is made from basis elements that build up the space, but those basis elements don't have to be orthogonal. They are contravariant, which means their components transform oppositely to the basis, to compensate. However, in the picture there are also covectors, which are defined by the normals to those bases, and in that sense they are orthogonal, I suppose. The covectors are covariant. The key point that characterizes a tensor is how many dimensions of each kind it is using. But the "bottom-up" and "top-down" views are dual ways of looking at the whole space.

    The covectors are defined as linear functionals which means they map into a field, say, the real numbers R. They are duals to the vectors. But the duals of the covectors would be linear functionals that, when working in finite dimensions, would match the vectors. In the infinite dimensional case it doesn't necessarily work out. But this all suggests to me that for the sake of elegance it is the linear functionals which are actually more natural and should be more fundamental. So the whole view of "vectors" is, I think, perhaps very unhelpful. Also, the vectors and the covectors are distinguished by whether we are "eating" vectors or "spitting them out", and whether we write them as column vectors or as row vectors.

    Another bit of intuition I have is that tensors are what is "maximally trivial". That is, mathematicians like to say certain things are "trivial" and so they don't have to explain further. Well, if we think about that concept, then there is a sense that what is "trivial" is linearity. Linearity is the plain vanilla of math. And then linearity gets pushed to different directions such as taking derivatives. Anyways, I think that tensors are the most robust form of "triviality", that is, of linearity. And they are the default way of making sense of a multi-dimensional space.

    I think I know just enough to realize that most people don't really understand what tensors are. I know that I don't understand. So I appreciate your question, your reference and your knowledge. Thank you.

  • 2.
    edited June 2016

    David wrote:

    He says instead to use the universal mapping property to understand the tensor product. But I don't like the idea of abandoning the definition of something in order to understand it.

    That's not "abandoning the definition". To this mathematician, at least, the universal mapping property is the definition of the tensor product. It says essentially this: suppose you want to think of bilinear maps out of $M \times N$ as linear maps out of some module. Then the module you want is $M \otimes N$.

    If this is too fancy, don't worry: there are lots of different ways to understand tensor products, from geometrical to algebraic, from explicit nuts-and-bolts constructions to nice conceptual characterizations. They're all equivalent, and there's got to be one that's right for you! Everyone has their own favorites.

    Is this a case where it only makes sense to understand things through their morphisms? I hope not, because I like objects as well as arrows.

    We all like objects, but the best way to understand them is through their morphisms. To understand what something is, nothing beats knowing what it does, and what you can do with it.

    However, if you don't like this philosophy I have no intention of forcing it on you.

    Whereas in the world of vector spaces, tensors have clearly visualizable representations, things become more subtle when we generalize to modules over a ring.

    I wouldn't say that. I visualize them almost the same way. If you're working over a ring with some particular properties, you may want to adapt your visualizations a bit. For example, if your ring has 1 + 1 = 0, now you're in a world where all your vectors "wrap around", obeying $v + v = 0$. But if you're working in general, for an arbitrary ring, you might as well visualize things the way you're used to. You just need to not take all the features of your visualization too seriously.

    You seem to be reading books that feature highly discouraging quotes about the comprehensibility of tensors. That's too bad! Maybe someone who doesn't understand something shouldn't be writing about it.

  • 3.

    Andrius wrote:

    Well, if we think about that concept, then there is a sense that what is "trivial" is linearity. Linearity is the plain vanilla of math.

    I agree with that. And tensors are a way to subsume multilinearity (https://en.wikipedia.org/wiki/Multilinear_map) under linearity, thus making multilinear maps just as vanilla-flavored as linear ones.

  • 4.

    John wrote:

    I visualize them almost the same way.

    I have no problem with picturing vectors that wrap around, but it's the tensors that appear to lose the palpable interpretations that are possible in the special case of modules that are vector spaces.

    Here, let me do a bit of linear algebra 101 thinking out loud, just for the sake of stating the case clearly. That will then help to explain why things look less clear in the more general case of modules.

    Basic questions:

    • What is a tensor?

    • What is the specific tensor that results from taking the tensor product of two vectors/covectors?

    From linear algebra 101:

    In a vector space $V$, over the ground field $F$, we can give the following answers:

    • A tensor is a multi-linear mapping, where the domain is a product of copies of $V$ and its dual $V^*$, and the range is the ground field $F$.

    This is "meaty" and works for physics. Once we choose a basis for $V$, then a tensor becomes visualizable as a multi-dimensional array of coefficients (which transform in a certain way, when the basis changes).

    Further interpretations are available here. Consider a tensor of rank 2, represented by a matrix. By matrix multiplication with a vector, it gives us a homomorphism from $V$ to $V$.

    So these are the pictures available for tensors in vector spaces: multi-linear mappings into the ground field, arrays of coefficients with a basis transformation law, and homomorphisms with domains involving $V$ and $V^*$.

    For the second question, how can we picture the tensor product of two vectors/covectors?

    The product of a covector (dual vector) and a covector is just the bilinear machine that results from multiplying the outputs of each of the covectors.

    The product of a covector $w$ and a vector $v$ maps a vector $x$ to $w(x) * v$ -- this is a linear transformation.

    In terms of matrices, we can picture the tensor product of a covector $w$ and a vector $v$ as the outer product -- obtained by matrix multiplication -- of the row vector $w$ and the column vector $v$.

    Such products will only lead to a certain type of matrix, in which the $i,j$th entry is the product of $w_i$ and $v_j$. These are the simple tensors. It is easy to see that the simple tensors span the entire space of tensors (matrices).
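    Here is a quick numerical sketch of those last two points (my own toy example, in Python with numpy):

    ```python
    import numpy as np

    w = np.array([1.0, 2.0])         # a covector, thought of as a row
    v = np.array([3.0, 4.0, 5.0])    # a vector, thought of as a column

    T = np.outer(w, v)               # the simple tensor: T[i, j] = w[i] * v[j]
    assert np.linalg.matrix_rank(T) == 1   # simple tensors are the rank-1 matrices

    # sums of simple tensors reach every matrix, e.g. a 2x3 "identity" pattern:
    E = np.outer([1, 0], [1, 0, 0]) + np.outer([0, 1], [0, 1, 0])
    assert np.array_equal(E, np.eye(2, 3))
    ```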

  • 5.

    These are the pictures that are dear to me, for tensor products in the world of vector spaces.

    But now let's see what happens to these pictures when we generalize to the case of $R$-modules -- i.e. when we work with a ring of scalars $R$, rather than a field $F$.

  • 6.

    Let's see what happens when the ground ring consists of the integers $Z$.

    Review point: an $R$-module is a commutative group ("abelian group") $G$, along with a ring homomorphism from $R$ into the ring of endomorphisms of $G$. $G$ comprises the "vectors," along with their commutative addition, and the homomorphism tells us how to scale a "vector" in $G$ by a scalar in $R$. It is assumed that $1$ in $R$ maps to the identity mapping on $G$ -- so that scaling a vector by 1 always leaves it unchanged.

    So a $Z$-module is just a commutative group, where the scalars are integers. The full structure of the scaling operation is already determined by the structure of $G$. For example, $3 * v = (1 + 1 + 1) * v = 1 * v + 1 * v + 1 * v = v + v + v$. So a $Z$-module is just a commutative group with the added perspective of being able to scale an element by an integer.
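    A tiny sketch of that point (a toy example of my own, with $n = 5$): the scaling operation can be computed purely from the group's addition.

    ```python
    n = 5

    def add(u, v):
        # the group operation of Z_n
        return (u + v) % n

    def scale(k, v):
        # k * v by repeated addition (taking k >= 0 for simplicity)
        result = 0
        for _ in range(k):
            result = add(result, v)
        return result

    assert scale(3, 2) == add(add(2, 2), 2) == 1   # 3 * 2 = 2 + 2 + 2 = 6 = 1 in Z_5
    ```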

  • 7.

    It's an easy abstraction from a vector space, but the loss of the assumption that we can divide within the ground ring has big repercussions, which lead to qualities that are inconceivable in vector spaces.

  • 8.
    edited June 2016

    Let's take a simple example of a $Z$-module: $Z_n = \{0, 1, \ldots, n-1\}$, the cyclic group of order $n$.

    This is generated by the number $1$, since by taking all linear combinations of $1$ -- i.e. all multiples of $1$ -- we get everything in $Z_n$. $\{1\}$ spans the whole module. But $\{1\}$ is not linearly independent, because $n * 1 = 0$.

    Put differently, it is not the case that every member of $Z_n$ is a unique linear combination of the "vectors" in $\{1\}$ -- the mapping which sends $k$ in $Z$ to $k * 1$ in $Z_n$ is not one-to-one.

    So $Z_n$ is a module that has no basis -- it is not "free." By contrast, a basic theorem tells us that every vector space has a basis, i.e., every vector space is free.
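    Both claims are easy to check by brute force for a small case (my own toy example, with $n = 6$):

    ```python
    n = 6

    # {1} spans Z_n: every element is an integer multiple of 1
    assert {(k * 1) % n for k in range(n)} == set(range(n))

    # ...but {1} is not linearly independent: n * 1 = 0 is a nontrivial relation,
    # so k |-> k * 1 from Z to Z_n is not one-to-one
    assert (n * 1) % n == (0 * 1) % n == 0
    ```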

  • 9.

    The dual space is a very different animal in the general world of $R$-modules.

    Review point: for an $R$-module $M$, the dual space $M^*$ is the space of linear mappings from $M$ into $R$.

    For finite-dimensional vector spaces, the dual space $V^*$ is isomorphic to $V$. Once we choose a basis, we can immediately go back and forth, in a one-to-one manner, between vectors $v$ and their duals.

  • 10.

    But what is the dual space of $Z_n$?

    That would consist of all linear mappings from $Z_n$ into $Z$.

    But there is only one such mapping: the function that sends everything in $Z_n$ to $0$.

    Suppose that we had a linear $f$ such that $f(1) = k$.

    Then $f(n * 1) = f(0) = 0$, and also $f(n * 1) = n * f(1) = n * k$.

    So, in $Z$, we have $n * k = 0$, which means that $k = 0$ (given that $n \ge 1$).

    That implies that $f$ is the zero homomorphism.
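    For a concrete sanity check (a sketch of my own, searching a finite window of candidate values of $k$):

    ```python
    n = 6

    # a linear f: Z_n -> Z is determined by k = f(1), which must satisfy n * k = 0 in Z
    candidates = [k for k in range(-50, 51) if n * k == 0]
    assert candidates == [0]   # only the zero homomorphism survives
    ```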

  • 11.

    Synopsis: the dual space of $Z_n$ is not isomorphic to $Z_n$; in fact, it is just the trivial group containing one element.

  • 12.

    Now let's talk about tensor products of $Z$-modules.

    These can be fully and explicitly defined, by a formal construction involving a quotient of one set of symbolic terms modulo another, which I won't dig into here.

    But to what extent can we still invoke our familiar pictures of multi-linear machines, arrays of coefficients, and hom sets?

  • 13.
    edited June 2016

    Let's work with a specific example.

    What is the tensor product $Z_a \otimes Z_b$?

    It can be shown that this product is also a cyclic group:

    $Z_a \otimes Z_b = Z_c$, where $c = gcd(a,b)$.

    For $x \in Z_a$ and $y \in Z_b$, the tensor product also gives us a specific tensor:

    $x \otimes y \in Z_a \otimes Z_b = Z_{gcd(a,b)}$.
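    Here is a brute-force sanity check of that formula (my own sketch, with $a = 4$, $b = 6$, so $gcd(a,b) = 2$): the map sending $(x, y)$ to $x y$ mod $g$ is a well-defined bilinear map from $Z_a \times Z_b$ onto $Z_g$.

    ```python
    from math import gcd
    from itertools import product

    a, b = 4, 6
    g = gcd(a, b)    # so Z_4 tensor Z_6 should be Z_2

    def phi(x, y):
        # the candidate universal bilinear map into Z_g
        return (x * y) % g

    # exhaustive check of additivity in each slot
    for x1, x2, y in product(range(a), range(a), range(b)):
        assert phi((x1 + x2) % a, y) == (phi(x1, y) + phi(x2, y)) % g
    for x, y1, y2 in product(range(a), range(b), range(b)):
        assert phi(x, (y1 + y2) % b) == (phi(x, y1) + phi(x, y2)) % g

    # every element of Z_g is hit by some simple tensor x (x) y
    assert {phi(x, y) for x in range(a) for y in range(b)} == set(range(g))
    ```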

  • 14.
    edited June 2016

    Now, can we picture these tensors themselves as bilinear machines from $Z_a \times Z_b$ into the ground ring $Z$?

    I don't think so.

    We already saw that the only linear function from $Z_n$ into $Z$ is the zero function. Hence the only bilinear function from $Z_a \times Z_b$ into $Z$ is the zero function: fixing either argument of a bilinear map gives a linear map, which must then be zero.

    But there are $gcd(a,b)$ tensors in $Z_a \otimes Z_b$. So the tensors cannot be identified with the bilinear functions.

    This is a basic picture of the tensor, as an object, which falls apart when we generalize from vector spaces to modules.

    Or have I made a mistake, and can the picture actually be retained?

  • 15.

    On the other hand, it appears that the size of $Hom(Z_a,Z_b)$ is the same as that of $Z_a \otimes Z_b$, so a homset-based understanding of the tensor product of modules may still remain.

    Of course, the universal mapping property is always there for us -- but that is not what the question is about.

  • 16.
    edited June 2016

    For $x \in Z_a$, $y \in Z_b$, what is $x \otimes y \in Z_a \otimes Z_b$?

    It can't be identified with a bilinear machine into $Z$, but it can be identified with a homomorphism from $Z_a$ into $Z_b$: the function which sends $t \in Z_a$ to $t * x * y * (b/g)$ mod $b$, where $g = gcd(a,b)$. (The factor $b/g$ is needed for the map to be well-defined on $Z_a$: we need $a * x * y * (b/g) \equiv 0$ mod $b$, which holds because $a * (b/g) = (a/g) * b$.)
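    A brute-force check of this identification (my own sketch, again with $a = 4$, $b = 6$):

    ```python
    from math import gcd

    a, b = 4, 6
    g = gcd(a, b)

    # a homomorphism f: Z_a -> Z_b is determined by m = f(1), and m is a
    # legal image exactly when a * m = 0 in Z_b
    homs = {m for m in range(b) if (a * m) % b == 0}
    assert len(homs) == g    # |Hom(Z_a, Z_b)| = gcd(a, b)

    def image_of_one(x, y):
        # where the homomorphism attached to the simple tensor x (x) y sends 1
        return (x * y * (b // g)) % b

    # every simple tensor yields a legal homomorphism, and every one arises this way
    assert {image_of_one(x, y) for x in range(a) for y in range(b)} == homs
    ```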

  • 17.

    David, I find this interesting and helpful. I will be thinking about this, both the vector space case and the R-module case.

  • 18.

    Cool, thanks.

    Continuing, a bit. I see stuff about $Hom$ and $\otimes$ being related through adjoint functors -- so is the Hom interpretation of tensors more durable than the idea of "generalized dual vectors" i.e. multilinear mappings into the ground ring?

    So I believe there is a (natural) isomorphism between $L \otimes M$ and $Hom(L,M)$ for $R$-modules? More specifically, for each $L$, that would be a natural isomorphism between the functors $L \otimes \_$ and $Hom(L, \_)$ (or is it reversed, $Hom(\_, L)$?), which would assign a specific isomorphism between $L \otimes M$ and $Hom(L,M)$ to each $M$.

    Can we give a specific construction for such isomorphisms? In message 16, I gave what I believe are the defining parameters for this isomorphism between $Z_a \otimes Z_b$ and $Hom(Z_a,Z_b)$.

    Can we give a construction for this isomorphism, for the case of general $Z$-modules, or finite $Z$-modules?

    For the latter, we could use the nice theorem that any finite (or even finitely generated) $Z$-module is a direct sum of cyclic groups.

    In the world of vector spaces, this isomorphism is a no-brainer: for vectors $v \in L$, $w \in M$, we could associate the following member of $Hom(L,M)$ with $v \otimes w$: $v^T(\_) * w$, where $v^T$ is the dual to $v$, relative to some basis for $L$.

    That last example indicates that there will be many such isomorphisms, as it depends on the choice of a basis.

    So in the above text, I am not asking for the isomorphism, but just some isomorphism.

  • 19.
    edited June 2016

    The whole root of the "concept breakdown" in the setting of general modules is the loss of the assumption that a basis will always exist.

    Here is a striking example. A classical definition of a tensor is a basis-dependent system of coefficients that transform in a specific way when the basis is changed.

    Well, with modules, we still have tensors, which can be defined using a more abstract formal construction -- but bases are not guaranteed, so they can't be used to define tensors!

  • 20.

    David, I'm curious, how would you intuitively distinguish modules and vector spaces? That might inspire more clues.

    In studying the field with one element, I realized that when we count the $k$-dimensional subspaces of an $n$-dimensional vector space over a finite field $F_q$ with $q$ elements, we may typically depend on a given basis: $e_1, e_2, e_3, \ldots, e_n$. If we want to count the number of independent choices for constructing a vector space, then the first space would be generated by $e_1$ (multiplied by any scalar). The next dimension would be generated by $e_2 + f_{21} e_1$, where $f_{21}$ can be any scalar, including zero. And then $e_3 + f_{32} e_2 + f_{31} e_1$, and so on. So the analogous counts (weights) are growing: $1 + q + q^2 + \cdots$. The weights can be thought of as labeling the spaces with natural numbers. So intrinsic to a vector space is a notion that its basis is a totally ordered set. When $q = 1$ that order vanishes. And the reason is that when $q = 1$ we have only one scalar, and so there is no real "choice of scalars" made by which to naturally distinguish the order of the basis elements.

    I have found a similar combinatorial interpretation of the Gaussian binomial coefficients: they count the $k$-simplexes in an $n$-simplex, where the vertices of the $k$-simplex are given weights $1, q, q^2, \ldots, q^{k-1}$ and the edges all have weight $1/q$. The simplexes are then total orders, which is to say, ordered sets. So we are counting the ordered subsets of an ordered set. When $q = 1$ this becomes the uniquely orderable subsets of a uniquely orderable set, which is to say, the unordered subsets of an unordered set.

    All of this is to say that it can be argued that fields are defined in such a way to give us "choices" which imply intrinsically that vector spaces are constructed in terms of ordered bases. This is an argument based on the "implicit math", not what gets written on the paper, but what reflects our mental activity.

    Whereas modules make me think of expansions of amounts and units. When I tutored students, I taught that "every answer consists of an amount and a unit", "combine like units to simplify calculation", and "list different units to make the answer easier to understand".

    When we figure things out in mathematics, there seem to be times when, in our minds, we make use of a list, as with the basis of a vector space, for example, when we construct a flag. And at other times we just use a set, as when we equate two expansions and thereby establish equations for each term.

    Could this distinction between lists and sets have some bearing here?

    John, thank you for the link to multilinearity. I will have to study how multilinearity isn't quite linearity but is related. I will have to think about that.

  • 21.

    For a commercial break, here is a little story.

    In school I had a colleague who was a computer scientist, deep into type theory, and to some extent its formulation in category theory. I would show him things about groups, rings, etc., and pose questions relating to the category theory of it. To a certain extent he would get engaged in these discussions. One day he abruptly stopped, and complained that mathematicians come up with all of these "weird structures."

  • 22.

    Andrius wrote:

    how would you intuitively distinguish modules and vector spaces?

    Well, to a type-theorist, they have a lot in common, as weird structures :)

  • 23.
    edited June 2016

    Okay, I'll try to be serious now.

    To go from vector spaces to modules, "all" that we did was to discard an axiom about how the scalars behave -- we abandoned the divisibility assumption, so that we are left with a ring rather than a field of scalars. But that leads to the unraveling of a lot of higher level structures, including, I believe, the "physicist's intuition" for what a tensor is, as multilinear machines with array-based coordinate representations.

  • 24.
    edited June 2016

    Here is an illustration of the ripple effects of discarding the division of scalars.

    With vector spaces, we have the following:

    • Every maximal set of linearly independent vectors is a basis
    • Every minimal spanning set of vectors is a basis

    But not so for general modules.

    Take the ring of integers $\mathbb{Z}$, which itself is a one-dimensional $\mathbb{Z}$-module. There are two bases for $\mathbb{Z}$: $\{1\}$ and $\{-1\}$.

    But:

    • The set $\{2\}$ is a maximal independent set, yet it does not span the whole space
    • The set $\{2,3\}$ is a minimal spanning set, yet it is linearly dependent
  • 25.
    edited June 2016

    What's going on here?

    The crux of the matter is that without division of scalars, the "spanning power" of a vector is greatly curtailed.

    For example, $\mathrm{span}\{2\} = 2\mathbb{Z}$ = the even integers.

    Because the scalars do not form a field, the span of $\{2\}$ is not able to reach the odd numbers, and hence, although $\{2\}$ is a maximal independent set, it is not able to span the whole space.

    To get the full span, we need to add something else, say $\{3\}$. But then our spanning set $\{2,3\}$ contains more elements than the dimension of the space, and is linearly dependent.
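    These claims are quick to verify numerically (a sketch of my own; the Bezout coefficients are one possible choice):

    ```python
    # span{2} inside the Z-module Z is 2Z, the even integers; 3 is out of reach
    assert all(x % 2 == 0 for x in (2 * k for k in range(-10, 11)))
    assert 3 not in {2 * k for k in range(-10, 11)}

    # {2, 3} spans Z: Bezout gives 1 = (-1)*2 + 1*3, hence n = (-n)*2 + n*3 for all n
    assert (-1) * 2 + 1 * 3 == 1

    # ...but {2, 3} is linearly dependent over Z: 3*2 + (-2)*3 = 0
    assert 3 * 2 + (-2) * 3 == 0
    ```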

    Synopsis: the spanning power of the vectors is curtailed, so a maximal independent set may not be spanning. To span the whole space, we may need the combined effects of extra vectors -- but this may cause the set to be linearly dependent.

    In contrast, if we take the rationals $\mathbb{Q}$ as a $\mathbb{Q}$-module over itself, we have a field of scalars and hence a vector space, and $\{2\}$ is indeed a spanning set for $\mathbb{Q}$. And $\{2,3\}$ is not a minimal spanning set.

  • 26.
    edited June 2016

    Here is another perspective on the matter.

    Observe that $2 \mathbb{Z}$ is a proper one-dimensional submodule of the one-dimensional module $\mathbb{Z}$.

    But with vector spaces, you can never have a $k$-dimensional subspace that is properly contained in another $k$-dimensional subspace.

    So with modules, the lattice of submodules can have some interesting and rich structures, which cannot be present in the lattice of subspaces of a vector space.

    Indeed, the structure of the lattice of submodules of $\mathbb{Z}$ -- all of which are one-dimensional, and where containment indicates divisibility -- contains a great deal of information about the theory of numbers.
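    The containment-is-divisibility point is easy to illustrate (my own sketch; containment is tested on a finite window of multiples):

    ```python
    def contains(a, b, window=100):
        # does the submodule a*Z contain b*Z? (equivalently: does a divide b?)
        return all((b * k) % a == 0 for k in range(-window, window + 1))

    assert contains(2, 6) and 6 % 2 == 0      # 2Z contains 6Z because 2 | 6
    assert not contains(4, 6) and 6 % 4 != 0  # 4Z does not contain 6Z
    assert contains(1, 5)                     # 1Z = Z contains every submodule
    ```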

  • 27.
    edited June 2016

    David wrote:

    So I believe there is a (natural) isomorphism between L⊗M and Hom(L,M) for R-modules?

    No. For example, the $R$-modules $M \otimes R$ and $\mathrm{Hom}(M,R)$ are different in general: the first is isomorphic to $M$, while the second is by definition $M^*$, which you showed above can be trivial even when $M$ is not.

    The hom-tensor duality for $R$-modules says, among other things, that for $R$-modules $M,N,P$ we have an isomorphism

    $\mathrm{Hom}(M \otimes N, P) \cong \mathrm{Hom}(M,\mathrm{Hom}(N,P))$

    This is easy to see. An $R$-module homomorphism from $M \otimes N$ to $P$ is the same as a bilinear map from $M \times N$ to $P$. But we can take such a bilinear map and think of it as something that eats an element of $M$ and spits out, in a linear way, a linear map from $N$ to $P$. In other words, an element of $\mathrm{Hom}(M,\mathrm{Hom}(N,P))$.

    I would take your arguments that tensors over a general ring work differently than tensors over a field and put a different spin on them. What's mainly true is that modules over a field are all free, while modules over a general ring aren't.

    This has lots of ramifications. First, working with vector spaces makes one instantly want to grab a basis whenever there's a calculation to be done, but when working with modules you have to suppress this habit. Second, you can't identify a module with its dual. So, you shouldn't think of $M \otimes N$ as consisting of bilinear maps from $M \times N$ to $R$. Instead, it's $(M \otimes N)^*$ that consists of bilinear maps from $M\times N$ to $R$.
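    Spelling that out (a one-line derivation of my own, setting $P = R$ in the isomorphism displayed above):

    $(M \otimes N)^* = \mathrm{Hom}(M \otimes N, R) \cong \mathrm{Hom}(M, \mathrm{Hom}(N, R)) = \mathrm{Hom}(M, N^*)$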

    Once you develop the new habits, vector spaces over fields seem like a pathetically dull (and wonderfully simple) example of modules over a ring, where all the interesting (and difficult) questions become trivial.

  • 28.
    edited June 2016

    David wrote:

    Observe that $\mathbb{Z}_2$ is a proper one-dimensional subspace of the one-dimensional space $\mathbb{Z}$.

    It's not a subspace, it's a quotient space. That is, there's no 1-1 homomorphism of modules $\mathbb{Z}_2 \to \mathbb{Z}$. Instead, there's an onto homomorphism of modules $\mathbb{Z} \to \mathbb{Z}_2$, sending each integer to that integer mod 2.

    By the way, people call these things "modules", not "spaces", which connotes vector space. The word "dimension" is also not one we use for modules over a ring. So, nobody would say "the one-dimensional space $\mathbb{Z}$". They'd say "the free $\mathbb{Z}$-module of rank one, $\mathbb{Z}$".

  • 29.
    edited June 2016

    Oops, I meant to say $2 \mathbb{Z}$, rather than $\mathbb{Z}_2$.

    I just applied this fix to messages 25 and 26, and also changed the terms "subspaces" to "submodules."

    So the sentence you referred to now reads:

    Observe that $2 \mathbb{Z}$ is a proper one-dimensional submodule of the one-dimensional module $\mathbb{Z}$.

    For free modules, which are spanned by $n$ linearly independent generators, it seems like we could still retain the terminology of "dimension" to describe the number of linearly independent generators comprising a basis.

    In any case, that is what I meant by calling the ideals $k \mathbb{Z}$ one-dimensional submodules -- being that they are generated by the linearly independent set $\{k\}$.

  • 30.
    edited July 2016

    Oops, I meant to say $2\mathbb{Z}$, rather than $\mathbb{Z}_2$.

    Oh, I see. Maybe I should have guessed.

    For free modules, which are spanned by $n$ linearly independent generators, it seems like we could still retain the terminology of "dimension" to describe the number of linearly independent generators comprising a basis.

    We could. But we don't: we say "rank". Part of succeeding in life is just knowing how to talk like other people do, regardless of whether it makes sense. Sometimes it pays to fight, but one should choose one's battles wisely.

  • 31.
    edited July 2016

    Great. I'm all in favor of using the standard terminology -- I just didn't know this one until reading it above. Thanks.

    Now using these standard terms, I will restate the point that I was making above.

    With vector spaces, you can never have a subspace of dimension $k$ that is properly contained in another subspace of dimension $k$ (proper inclusions always correspond to an increase of dimension).

    But the $\mathbb{Z}$-module $\mathbb{Z}$ has an entire lattice of submodules, all of rank 1.

    This is a consequence of not being able to perform division on the scalars.

    So with modules, the lattice of submodules can have some interesting and rich structures, which cannot be present in the lattice of subspaces of a vector space.

  • 32.
    edited July 2016

    Yes! This is one of many reasons why people studying modules over rings consider fields boring. Of course, in mathematics, "boring" is the flip side of "convenient". When you walk down the sidewalk, you don't want the process to be exciting, full of challenges and pitfalls. So everything "boring" about vector spaces compared to modules over general rings is also something that makes linear algebra over fields a convenient tool.

  • 33.
    edited October 2016

    David, I have found this series of video lectures helpful: Frederic Schuller's course on the Geometrical Anatomy of Theoretical Physics (https://www.youtube.com/playlist?list=PLPH7f_7ZlzxTi6kS4vCmv4ZKm9u8g5yic). Lecture 11 is "Tensor Space Theory II: over a ring". You may already know everything he says, and I don't think he provides the intuitions you are looking for. Also, I'm intrigued by this book by Daniel A. Fleisch, "A Student's Guide to Vectors and Tensors" (http://www.cambridge.org/gb/academic/subjects/physics/mathematical-methods/students-guide-vectors-and-tensors?format=PB&isbn=9780521171908), but I don't think it covers rings or modules.

  • 34.

    I'm listening to this video lecture by ML Baker on "Tensors/tensor products over-mystified" and it's the most helpful discussion I've ever found (https://www.youtube.com/watch?v=qHuUazkUcnU). He keeps relating it to the isomorphism between a finite-dimensional vector space and its double dual. David, perhaps you know all of this already, but I just want to note it. I very much appreciate his video lectures on category theory and also on elliptic functions and modular forms. I'm amazed at how much math he has already learned. He only has a master's degree.

  • 35.

    Whenever understanding tensors and differential geometry comes up, I always point people to chapters 12-15 of Penrose's Road to Reality, particularly chapter 14. I find the clarity of the explanation mind-blowing. I still can't believe how much I banged my head against that wall before I read it.

  • 36.

    Daniel, thank you for the reference! I will have to go back to Penrose's wonderful book. It's huge and so I read and reread parts that I find relevant. He's put it online for free! http://chaosbook.org/library/Penr04.pdf
