Notes on "Networks: An Introduction", by M.E.J. Newman

This is a thread for any notes from this textbook.

I would like to create a Wiki page with reading notes, but I'm not sure how to name it. Calling it "Networks: An Introduction" would make the page sound like an introduction itself. One way to go would be to add a prefix, as in "Notes - Networks: An Introduction," the way we use Blog as a prefix. Any suggestions are welcome here.

Comments

  • 1.

    p. 121

    Let G be a directed graph, and let A be its adjacency matrix. Then the following are equivalent:

    • G is acyclic

    • The vertices of G can be ordered so that A is strictly upper triangular

    • A is nilpotent

    • All eigenvalues of A are zero

  • 2.

    I'd suggest 'Reading notes...' instead of just 'Notes...', and a new category 'reading notes'. It is similar to blogs, and also to experiments.

  • 3.

    That result on p. 121 is a very nice chunk of math, David Tanzer. There's a famous theorem that a linear transformation of a finite-dimensional vector space is nilpotent iff all its eigenvalues are zero iff there's some basis in which it's strictly upper triangular. This can most easily be seen as a spinoff of the [Jordan canonical form](http://en.wikipedia.org/wiki/Jordan_canonical_form) of a matrix.

    But in fact the result you're talking about is easier, since we don't need to think about vector spaces and bases and linear transformations; we can just think about the adjacency matrix $A$, and to bring it into upper triangular form we just need to order the vertices correctly.

    To do this first we list all the vertices $v_1, \dots, v_n$ that have no edges coming out of them, in any order. (Such vertices must exist since $G$ is acyclic.) Then we list the vertices $v_{n+1},\dots, v_m$ whose only outgoing edges go to the vertices $v_1, \dots, v_n$. Then we must list the vertices whose only outgoing edges go to the vertices $v_1, \dots, v_m$. And so on. (Prove that this eventually exhausts all the vertices, again using the fact that $G$ is acyclic.)

    When we list the vertices this way, our matrix $A$ is strictly upper triangular. (Or maybe strictly lower triangular, but that just means I listed them backwards.)
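
    If it helps to make the layering concrete, here is a short Python sketch of the procedure (my own illustration; the function name and example graph are made up, not from the book or this thread): collect, layer by layer, the unlisted vertices whose outgoing edges all land in the part of the list built so far, then reverse the list so the reordered adjacency matrix comes out strictly upper triangular.

    ```python
    import numpy as np

    def layered_order(adj):
        """Order the vertices of a DAG as described above: vertices with no
        outgoing edges first, then vertices pointing only at those, etc."""
        n = len(adj)
        placed, order = set(), []
        while len(order) < n:
            # next layer: unplaced vertices whose outgoing edges all land in `placed`
            layer = [v for v in range(n)
                     if v not in placed
                     and all(w in placed for w in range(n) if adj[v][w])]
            assert layer, "no progress is possible, so the graph has a cycle"
            order.extend(layer)
            placed.update(layer)
        return order

    # Example DAG with edges 1 -> 0, 2 -> 0, 2 -> 1; adj[i][j] = 1 for an edge i -> j
    A = np.array([[0, 0, 0],
                  [1, 0, 0],
                  [1, 1, 0]])
    sigma = layered_order(A)[::-1]   # reversed, so sinks come last
    print(A[np.ix_(sigma, sigma)])   # [[0 1 1] [0 0 1] [0 0 0]]: strictly upper triangular
    ```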

    I got a free textbook on network theory from Oxford Press, but it was Ernesto Estrada's, not Newman's. I'd like to compare them someday. Estrada's book is full of typos and omissions that make the math harder to follow than necessary.

  • 4.
    edited June 2013

    That shows that G is acyclic iff its vertices can be ordered so that A is strictly upper triangular. And such an ordering implies nilpotence, because the successive powers of A are strictly upper triangular, with more and more diagonal bands of zeros.

    Next, the fact that A is strictly upper triangular implies that all eigenvalues are zero, since the eigenvalues of a triangular matrix are the values on the diagonal.

    But to complete the proof of these equivalences, how do you propose to do it without invoking the general linear-algebraic theorem that you mentioned?

    I.e., is there an easier way to prove that, for an adjacency matrix, if all eigenvalues are zero, then the graph is acyclic? To put it more tangibly, we want to show that if the graph is cyclic, then it must have an eigenvector with non-zero eigenvalue. We could get the job done by showing that it is guaranteed to have a fixed point. That's clearly the case if the graph contains any isolated cycle, because the characteristic function of the set of nodes in the cycle is a fixed point. Or, if it contains any isolated clique of size k, then the characteristic function of the clique is an eigenvector, with eigenvalue k. But what about the general case of any old cyclic graph?

    I was imagining starting with the entire set of nodes, i.e., the vector (1,1,...,1), and iterating to see if it converges to a fixpoint. But why would it, since we are operating over the ground ring R, not the ground rig {0,1}? But does it converge to something special, analogous to the "cyclic kernel" of a directed graph, obtained by successively removing nodes that have zero indegree? Perhaps it eventually reaches some eigenvector, not necessarily a fixpoint. You see, I am grasping at hypotheses here :)

    The Newman book does give a graph-theoretic proof of these equivalences, which doesn't cite a theorem based on the Jordan canonical form. It's based on another spectral theorem about graphs. I don't have it at hand, but I can post it later. For now I'll leave some space here for this discussion.

  • 5.
    edited June 2013

    In addition to the main point that I just indicated which still needs proof if we aren't going to rely on the general linear algebra theorem, there is one other missing piece in the proofs I just gave: showing that if A is nilpotent, then G is acyclic. But this is easy; let's show the contrapositive.

    Suppose G is cyclic. Then $A^n$ is the matrix whose (i,j)th entry counts the number of paths in G of length n from node i to node j. In a cyclic graph, for any n, there exist paths of length n. Hence $A^n$ cannot be zero. So A is not nilpotent.

  • 6.

    Based on the general theorem in linear algebra (that T is nilpotent iff all its eigenvalues are zero) it follows that a cyclic graph will have an eigenvector with non-zero eigenvalue, but I can't at all see how to give a straight-ahead graph-theoretic proof of this.

    Taking a random cyclic graph, the eigenvectors of its adjacency matrix will generally have complex components. Again, I am at a loss for how to give a graph theoretic interpretation of such an eigenvector. For me, this is a stumbling block in the study of spectral graph theory.

  • 7.

    Hi David! We have copies of that book here in the office. It's a fairly decent one. I think we have to separate the idea of complex networks from that of networks. We all know what a network is; what people typically mean by a complex network is one with an emergent feature that is present globally. Like seeing the forest through the trees. Of course, this is based on my own current understanding, and clearly the term is not so well defined. Hopefully this can change...

  • 8.

    Hi Jacob, that's an interesting topic, complex networks.

    Just to be clear, though, I was referring to the complex numbers that arise when you take the eigenvectors and eigenvalues of even simple networks. For instance, the cyclic graph {(1,2), (2,1), (2,3)} has an adjacency matrix with complex eigenvectors and eigenvalues. Intuitively it seems that these eigenvectors must have a graph theoretic interpretation (they are after all a function of the graph), but it is eluding me.

  • 9.
    edited June 2013

    Here I will paraphrase how Newman completes the proof.

    Theorem. Let G be a directed graph, with adjacency matrix A. Let $L(r)$ be the number of cycles of length r in G.
    Then $L(r) = \sum_{i=1}^{n} k_i^r$, where $k_1, \dots, k_n$ are the eigenvalues of A.

    Corollary. If G is cyclic, then it must have a non-zero eigenvalue. Indeed, if G is cyclic, then it has a cycle of length r, so L(r) > 0, and hence some $k_i$ must be non-zero.

    Note: cycles are defined as closed paths, and paths are sequences of edges. Hence each loop that passes through k nodes will be treated as k distinct cycles, one for each starting point in the loop.

  • 10.
    edited June 2013

    Proof of the theorem. (p. 136)

    $(A^r)_{i j}$ = the number of paths in G of length r that go from node i to node j.

    Hence $(A^r)_{i i}$ = the number of cycles in G of length r that start at node i.

    Hence $\sum_{i=1}^{n} (A^r)_{i i}$ = L(r) = the number of r-cycles in G.

    But this is just the trace of $A^r$, which equals the sum of the eigenvalues of $A^r$. This in turn equals the sum of the rth powers of the eigenvalues of A.

    Beautiful stuff.
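
    Since the statement is so concrete, it is easy to test numerically. Here is a quick sketch (my own check, not from the book) comparing $\mathrm{tr}(A^r)$ with $\sum_i k_i^r$ on a small cyclic graph:

    ```python
    import numpy as np

    # Cyclic graph with edges 0 -> 1, 1 -> 0, 1 -> 2, 2 -> 0; A[i, j] = 1 for i -> j
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [1, 0, 0]], dtype=float)

    k = np.linalg.eigvals(A)  # the eigenvalues, in general complex
    for r in range(1, 7):
        L_r = np.trace(np.linalg.matrix_power(A, r))  # counts the r-cycles
        print(r, L_r, np.sum(k ** r).real)            # the last two columns agree
    ```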

  • 11.

    For instance, the cyclic graph {(1,2), (2,1), (2,3)} has an adjacency matrix with complex eigenvectors and eigenvalues. Intuitively it seems that these eigenvectors must have a graph theoretic interpretation (they are after all a function of the graph), but it is eluding me.

    I guess you meant {(1,2), (2,3), (3,1)}. If $\omega \neq 1$ is a cube root of 1, the eigenvalues are $1, \omega, \omega^2$. The eigenvectors can be seen as weighted sums of the vertices, and the weights are complex. I don't know how to make it more graph-theoretic than that.

  • 12.

    Hi David! I was just making a general remark that I figured out only recently about complex networks vs networks. Thanks for this post. I learned something and have a copy of the book on my desk.

  • 13.

    Jacob, sure thing. I'm also interested to hear what you find out about emergent properties of complex networks!

  • 14.
    edited June 2013

    Graham, you're right, my example was erroneous: all the eigenvalues and eigenvectors are real.

    The eigenvectors can be seen as weighted sums of the vertices, and the weights are complex. I don’t know how to make it more graph-theoretic than that.

    It is true that the eigenvectors are weighted sums of the vertices, but they are not just any old weighted sum. They have a very specific character which I believe must be related to the structural properties of the graph.

    One aspect of this question is the following. We know from the algebra that any cyclic graph must have a non-null eigenvector. Can we give a constructive procedure for finding the eigenvector, based on the connectivity structure of the graph? Of course we can just algebraically compute them, but that is using linear algebra rather than graph theory.

    Here are a few cases where this can be done. Suppose the graph contains an isolated cycle that goes through the nodes $v_1,...,v_k$. Then let w be the weighted combination of the nodes which assigns 1 to each of the nodes in the cycle, and 0 to all the other nodes (i.e. the characteristic function of the nodes in the cycle). Then w is an eigenvector with eigenvalue 1. This generalizes to disjoint unions of isolated cycles, and also to any set of nodes which equals its predecessor-set in the graph. This is a construction of the eigenvectors in direct, graph-theoretic terms. For other examples, the characteristic function of an isolated clique of size k is an eigenvector with eigenvalue k, and the characteristic function of a set of nodes that have zero outdegree is an eigenvector with eigenvalue zero.
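
    The isolated-cycle case is easy to verify numerically. Here is a sketch (my own illustration) with a 3-cycle plus a disjoint edge:

    ```python
    import numpy as np

    # Nodes 0, 1, 2 form an isolated 3-cycle; 3 -> 4 is a disjoint edge.
    # Convention: A[i, j] = 1 for an edge i -> j.
    A = np.array([[0, 1, 0, 0, 0],
                  [0, 0, 1, 0, 0],
                  [1, 0, 0, 0, 0],
                  [0, 0, 0, 0, 1],
                  [0, 0, 0, 0, 0]])

    w = np.array([1, 1, 1, 0, 0])  # characteristic function of the cycle's nodes
    print(A @ w)                   # [1 1 1 0 0] = w: an eigenvector with eigenvalue 1
    ```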

    So, can we generalize and unify these constructive procedures, to handle the case of a general cyclic graph?

  • 15.
    edited June 2013

    Since the adjacency matrix is non-negative, from one form of the Perron-Frobenius theorem, we know that there will always be a real eigenvalue, with an associated real and non-negative eigenvector -- a Perron eigenvector. Its eigenvalue will have maximal absolute value, but, because the adjacency matrix is not strictly positive, there may be multiple Perron eigenvectors for a given graph.

    This puts a sharper point on the spectrum of a cyclic graph. We know that the graph must have a non-zero eigenvalue, and also that it has a Perron eigenvector v. So the eigenvalue of v must be positive, and its components real and non-negative.

    So the constructive procedure could limit itself to the case of non-negative eigenvectors and eigenvalues.
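
    Here is a numerical illustration (my own sketch): on a small cyclic graph, the eigenvalue of maximal modulus is real and positive, and the corresponding eigenvector can be scaled to be non-negative.

    ```python
    import numpy as np

    # Cyclic graph with edges 0 -> 1, 1 -> 0, 1 -> 2, 2 -> 0; A[i, j] = 1 for i -> j
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [1, 0, 0]])

    vals, vecs = np.linalg.eig(A)
    i = np.argmax(np.abs(vals))  # index of the eigenvalue of maximal modulus
    v = vecs[:, i].real
    v = v / v.sum()              # fix the overall sign and scale
    print(vals[i].real)          # ~1.3247: real and positive
    print(v)                     # all components non-negative: a Perron eigenvector
    ```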

  • 16.

    Graham wrote:

    I guess you meant {(1,2), (2,3), (3,1)}. If $\omega \neq 1$ is a cube root of 1, the eigenvalues are $1, \omega, \omega^2$

    That's an interesting one to look at.

    The Perron vector is the function which assigns 1 to all three of the nodes, which is the obvious fixpoint of the adjacency matrix.

    The other eigenvalues are interesting. Here, the group $\{1, \omega, \omega^2\}$ is isomorphic to the symmetry group of the graph. Is this just a special coincidence, or is it a sign of something deeper?

    But note that there is no obvious generalization of this fact, because the spectrum does not generally form a group. It may be a coincidence, stemming from the fact that in this case the adjacency matrix is a permutation, which is a symmetry of the graph.

  • 17.
    edited June 2013

    A while ago David wrote:

    I.e., is there an easier way to prove that, for an adjacency matrix, if all eigenvalues are zero, then the graph is acyclic?

    Sorry to punk out on you---my wife Lisa went to Singapore on June 5, and I gave the final for my class June 12, so I've been sort of distracted.

    It looks like you found a solution. But I think the Jordan canonical form may be hiding inside your solution. At some point in your argument you said

    But this is just the trace of $A^r$, which equals the sum of the eigenvalues of $A^r$.

    I'm wondering how you're concluding this. The trace of a matrix isn't always the sum of its eigenvalues.

    It's true when the eigenvectors of the matrix form a basis, since the trace of a linear transformation is the sum of its diagonal entries when written as a matrix in any basis, and writing the transformation in the basis of its eigenvectors, the diagonal entries are its eigenvalues.

    But the eigenvectors of a matrix don't always form a basis. For example, take

    $$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

    This matrix doesn't have a basis of eigenvectors: its only eigenvectors are multiples of

    $$\begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

    So, it just has one eigenvalue, namely 1, not two. But the trace is 2.

    There's a way around this problem. For one thing, this matrix can't be the adjacency matrix of a loop-free graph. (I think it's silly to exclude graphs with loops, but lots of people do.) However, right now the only way I see to always avoid this problem involves Jordan canonical form.

    (I don't personally think it's worthwhile avoiding the use of Jordan canonical form, since this is such a fundamental theorem in linear algebra: as the name says, it gives a canonical form for any linear transformation, making them all easy to understand. So, I don't think much about how to avoid it. So, there could be an easy way to avoid it here, which I'm not seeing!)

  • 18.

    I agree it looks like the Jordan form is really needed.

    The proof I gave, by the way, is from the Newman textbook.

    I stated the trace theorem in a hasty way; more accurately put, the trace is the sum of the eigenvalues, viewed as a multi-set. And this follows from the Jordan form theorem, since every matrix is similar to its Jordan canonical form, which is triangular with the eigenvalues on the diagonal, and the multi-set of eigenvalues is shared by all of the matrices in a similarity class.

  • 19.
    edited June 2013

    I just want to note that while there's a standard notion of a multi-set of eigenvalues, this isn't quite the same as the multi-set of numbers that show up on the diagonal of the Jordan form. We say an eigenvalue $\lambda$ of a matrix $T$ has 'multiplicity $n$' if there are $n$ linearly independent vectors $v_i$ such that

    $$T v_i = \lambda v_i$$

    Counting eigenvalues with their multiplicity this way, we get a multi-set of eigenvalues. So, for example, this matrix has eigenvalue 3 with multiplicity 2:

    $$\begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}$$

    so it gives the multiset $\{3, 3\}$.

    But this matrix has eigenvalue 3 with multiplicity 1:

    $$\begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix}$$

    since it doesn't have two linearly independent eigenvectors. So it gives the multiset $\{3\}$, using the procedure I'm describing.

    I'm not trying to be a nuisance - honest! Of course you can make a multiset of eigenvalues in a different way where you just look at the list of numbers in the Jordan form... this way gives the trace when we add them up. I'm just trying to point out something that's a bit subtle. It took me too long to understand all this stuff, myself, because I learned a lot of linear algebra in physics classes, which focused on self-adjoint operators, whose Jordan form is diagonal. Then these subtleties don't arise.

  • 20.

    p. 118 sec 6.4.2 Acyclic directed networks

    Goes on to define cycles in directed networks and acyclic directed networks.

    • (Def.) A cycle in a directed network is a closed loop of edges with the arrows of each of the edges pointing the same way around the loop.

    Directed networks without cycles are called acyclic. Here all the arrows can be drawn pointing, for example, down the page, i.e., in the same overall direction. Examples that are easy to think about would be citation networks (since later papers typically cite earlier ones).

    • (Thm.) Suppose we have an acyclic directed network of $n$ vertices. Then $\exists$ some node with indegree $> 0$ and outdegree $0$.

    • (Proof.) Consider $n$ walks on the network, one starting from each node. If any of these walks either starts at any node $q$ or passes it along the walk and is able to traverse back to $q$ by steps along directed edges, the network is not acyclic. The walk can visit at most $n$ nodes before terminating at a node with outdegree $0$.

    If anyone who does not like the way this proof is written wants to reword it in a better way, that'd be nice!

  • 21.

    I don't like the proof, and the theorem is not quite true, because the network might have no edges.

    New proof. Use induction on n, starting with n=2. Suppose first that all nodes have outdegree $\gt$0. Start at any node, choose any edge pointing to another node, and repeat until we visit some node already visited. This makes a cycle, which is a contradiction. So $\exists$ a node $q$ with outdegree $0$. If it has indegree $\gt$0, we're done, otherwise use induction on the network obtained by removing $q$.

  • 22.

    Cool! We have to get the statements solid since I assume that David Tanzer will post this onto the Azimuth wiki at one point. So how's this for a new version of the Theorem?

    • (Thm.) Suppose we have an acyclic network of $n$ nodes each of outdegree $+$ indegree > 0. Then $\exists$ at least one network node with outdegree $0$.

    That would force each node to have either an outgoing edge or an incoming one. What do you think?

  • 23.

    ...for completeness of the record, one more point regarding the algebraic multiset of eigenvalues: in addition to being displayed as the collection of diagonal entries on the Jordan canonical form of a matrix, they can also be obtained directly from the given matrix, since they are the multiset of roots of the characteristic polynomial.

    John, thanks for bringing the Jordan form into the discussion. I'd seen it before, but this time around I learned about it, because in the Azimuth context it has a living application.

  • 24.

    Just a simple and quick idea that we might want to consider to connect these results to stochastic mechanics in the network theory series. The following I think is really obvious but still, I want to be careful about how we show it.

    In this thread, Graham Jones gave a proof of the theorem from the Newman book.

    • (Theorem 1). Suppose we have an acyclic network of $n$ nodes each of outdegree $+$ indegree > 0. Then $\exists$ at least one network node with outdegree $0$.

    Let's now consider a continuous time stochastic process on networks of this type with adjacency matrix $A$. (Note that in doing so, we're outside of the class of networks for which the Perron-Frobenius theorem gives a stationary solution, since these networks are NOT strongly connected---see the network theory series [part 20](http://math.ucr.edu/home/baez/networks/networks_20.html).)

    The adjacency matrix of such a network can be elevated to a valid stochastic generator. See for example [the network theory series](http://math.ucr.edu/home/baez/networks/) --- in particular parts 12 and 16.

    Consider the projection $D$ which sums columns of a matrix and returns a matrix with the sum along the diagonal. Then

    $H = A - D(A)$

    is a valid stochastic generator. This just means that we can exponentiate this matrix times time and get a valid stochastic evolution operator.

    Now I'd like to think of a simple way to show something that should come as no surprise.

    When we think about processes on graphs, the standard language uses the term "walkers". We can make a walker basis by thinking of a walker as being on only a single node at a time. If there are $N$ nodes in the network, then we have a complete orthogonal walker basis formed in the evident way. These would just be given by the vectors $(1,0,0,\dots), (0,1,0,\dots), \dots, (0,0,\dots,0,1,0), (0,0,\dots,0,1)$, where the single non-zero 1 marks the location of a walker on some node.

    • (Theorem 2). If $A$ has $k$ nodes with outdegree $0$, then there exist $k$ stationary states, given by the basis elements corresponding to walkers present on those nodes.

    Like I said, this theorem is no surprise but maybe it can teach us something else. I guess what we're really saying (let me think about this more) is that

    • (Remark 1). $\dim \mathrm{Ker}(A - D(A)) = k$, and in particular, there is a $k$-element spanning set of $\mathrm{Ker}(A - D(A))$ given by $k$ walkers at the nodes with outdegree $0$.

    I need to formalize this a lot better, think about how to go about proving it and think if we can do anything else with it. This is just a random idea for now, but hopefully more later. What do you think?
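
    Here is a minimal numerical sketch of Theorem 2 (my own construction, using scipy; I follow the series convention that $A_{i j}$ is the rate from node $j$ to node $i$, so the column sums of $A$ are the outdegrees):

    ```python
    import numpy as np
    from scipy.linalg import expm

    # DAG with edges 0 -> 1, 0 -> 2, 1 -> 2; node 2 has outdegree 0.
    # Convention: A[i, j] = 1 for an edge j -> i, so column sums are outdegrees.
    A = np.array([[0, 0, 0],
                  [1, 0, 0],
                  [1, 1, 0]], dtype=float)

    H = A - np.diag(A.sum(axis=0))  # H = A - D(A): each column sums to zero

    walker = np.array([0.0, 0.0, 1.0])             # walker sitting on node 2
    print(expm(5.0 * H) @ walker)                  # stays [0 0 1]: a stationary state
    print(expm(5.0 * H) @ np.array([1., 0., 0.]))  # mass drains from node 0 toward node 2
    ```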

  • 25.

    Hi Jacob, I'll give this one a try.

    Suppose that the ith node has outdegree 0. Then in the adjacency matrix A, the ith column will consist of all zeros. Hence the ith column of D(A) will consist of all zeros. Hence the ith column of A - D(A) will consist of all zeros. Hence the vector (0,0,...,1,0,...,0) which has the one in the ith position will be a null vector of A - D(A). This is the pure state corresponding to the walker being at the node with outdegree 0. Since it is a null vector of the stochastic generator, it is a stationary state.

    Note that the result doesn't depend on acyclicity.

  • 26.
    edited June 2013

    It sounds like you are searching for stochastic implications of acyclicity.

    Here I'll fish around a bit.

    If the graph is acyclic, then, with a suitable ordering of the nodes, A will be a strictly lower triangular matrix. And it will be nilpotent.

    The stochastic generator G = A - D(A) will be a lower triangular matrix, with diagonal entries equal to minus the outdegrees of each of the nodes.

    Hence all successive powers of G will be lower triangular, though G will not be nilpotent. The ith diagonal entry of $G^k$ will equal $(-o)^k$, where o is the outdegree of the ith node.

    Are there any special consequences of the fact that a stochastic generator has a triangular matrix?

    Here is a clear consequence of acyclicity: if the graph has N nodes, then after N steps, all walkers must be at an outdegree 0 node. Each step of the walk corresponds to another successive power of the adjacency matrix, which, being nilpotent, marches along towards the zero matrix, one diagonal band at a time.

    All walkers are inevitably swept towards the outdegree zero nodes. Can this be said with a theorem about the stochastic generator?
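
    For the discrete-time half of that remark, here is a tiny check (my own) that $A^N = 0$ for an $N$-node acyclic graph, so every walk of length $N$ has already terminated:

    ```python
    import numpy as np

    # 3-node DAG with edges 0 -> 1, 0 -> 2, 1 -> 2; here A[i, j] = 1 for an edge i -> j
    A = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [0, 0, 0]])

    for k in range(1, 4):
        print(np.linalg.matrix_power(A, k))  # one more zero band each time; A^3 = 0
    ```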

  • 27.

    Just polishing the theorem statement, and noting that if you reverse the arrows, you can find a node with indegree $0$:

    • (Thm.) Let $G$ be an acyclic network with a finite number of nodes, and suppose that no node is isolated (that is, each node has outdegree $+$ indegree > 0). Then there is at least one node with outdegree $0$ and at least one node with indegree $0$.

    Puzzle. Does it remain true if there are infinitely many nodes?

  • 28.
    edited June 2013

    Nice ideas! I like your theorem, Graham. I'm still parsing what you said, David, but I got excited about the chance to prove something not about the end nodes, but about the nodes which always have a non-decreasing escape velocity!

    So we know that the nodes with outdegree 0 give rise to stationary points of a stochastic process defined on the graph. I want to think about how to prove that a walker starting at a node with indegree 0 has an always non-decreasing chance of leaving!

    Let's start with some definitions ($A$ is the same as before).

    • (Definition). The probability function of a walker starting at node $n$ and being found there later is given as $ P_t(n) := \langle n, e^{-t H} n \rangle $ where $H := A - D(A)$ is the stochastic Hamiltonian.

    The basic property of $P_t$ which we will use is the following: $0 \leq P_t(n) \leq 1$ for all $t \in \mathbb{R}_+$.

    • (Lemma). The result of a silly calculation used to be right here. I don't think we ended up needing this, but what I think we really get is: $\frac{d}{d t}P_t(n) = -\mathrm{outdegree}(n)\, P_t(n)$

    • (Lemma). Let $n$ be a node with indegree 0. Then $\langle n, H^k n \rangle = (-\mathrm{outdegree}(n))^k$.

    The proof of this follows from the fact that if $n$ has indegree 0, then the $n$-th row of $A$ is all zeros. $D(A)$ simply subtracts the outdegree of $n$, placed in the $(n,n)$ position. Successive powers are as described.

    Hopefully this is all correct and I think it's all we need to show $P_t$ is never increasing. More soon.

  • 29.
    edited June 2013

    For the theorem about finite graphs, we don't need to add an extra condition that there are no isolated nodes -- because the conclusion is true by definition when there are isolated nodes.

    • (Thm.) Let $G$ be an acyclic network with a positive number of nodes. Then there is at least one node with outdegree $0$ and at least one node with indegree $0$.

    Graham's original proof is good.

    Here is an alternative proof, that uses the notion of the height of a node in the graph to characterize the root and leaf nodes.

    Proof. Define the height of a node n in the graph, in the standard way, as the length of a longest path starting at n. The height can only be infinite if either (1) the graph is infinite, or (2) it is finite and has a cycle. Since our graph is assumed to be finite and acyclic, it follows that every node has a finite height.

    Claim 1. If n is a node with minimal height, then n has outdegree 0. For a contradiction, suppose that n has a successor node n'. Let P' be a maximal path in the graph that starts at n'. So height(n') = length(P'). Then consider the path P = (n,P'). So length(P) = 1 + length(P') = 1 + height(n'). Hence height(n) > height(n'). But that contradicts the assumption that n has minimal height.

    Claim 2. If n is a node with maximal height, then n has indegree 0. This can be proven by reversing the edges in the graph, and applying Claim 1. Or we can spell out the details, as follows. For a contradiction, suppose that n has a predecessor node n'. Let P be a maximal path in the graph that starts at n. So height(n) = length(P). Then consider the path P' = (n',P). So length(P') = 1 + length(P) = 1 + height(n). Hence height(n') > height(n). But that contradicts the assumption that n has maximal height.

    Since the graph contains a positive number of nodes, there are nodes with minimal and maximal heights, and hence nodes with indegree 0 and outdegree 0.

    If the graph is infinite, then the conclusion does not hold. For instance, the successor relation among the integers is acyclic, but has no roots or leaves.
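
    Here is the height function as a short sketch (my own; the adjacency dict and helper are made up), with the two claims visible in the output:

    ```python
    from functools import lru_cache

    # Small DAG as an adjacency dict: edges 0 -> 1, 0 -> 2, 1 -> 2
    edges = {0: [1, 2], 1: [2], 2: []}

    @lru_cache(maxsize=None)
    def height(v):
        """Length of a longest path starting at v; finite because the
        graph is finite and acyclic."""
        return max((1 + height(w) for w in edges[v]), default=0)

    h = {v: height(v) for v in edges}
    print(h)                                          # {0: 2, 1: 1, 2: 0}
    print([v for v in h if h[v] == min(h.values())])  # minimal height: outdegree 0
    print([v for v in h if h[v] == max(h.values())])  # maximal height: indegree 0
    ```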

  • 30.
    edited June 2013

    Jacob wrote:

    • (Lemma). $\frac{d}{dt} P_t(n) = 1 - P_t(n)$

    The proof of this follows from the Taylor expansion of $e^{-t H}$ and the fact that $d/d t e^{-t H} = - H e^{-t H}$.

    Jacob, is this the statement you intended to make?

    This lemma, along with the fact that $0 \leq P_t(n) \leq 1$, would imply that $P_t(n)$ has a non-negative derivative. So there's no way for it to decrease. In fact, since $P_0(n) = 1$, it would remain at 1 for all time.

  • 31.
    edited June 2013

    No, it's not! I was about to fix it, glad you also noticed! I'll leave it there for now and fix it in a few days, for future readers.

    What I wanted to say was that

    $\frac{d}{d t} \langle n, e^{-t H} n \rangle = - \langle n, \sum_{k=0}^{\infty} \frac{(-1)^k t^k}{k!} H^{k+1} n \rangle$

    and now I'm messing around to try to get the condition that $P_t$ is always decreasing. I think I have it.

  • 32.
    edited June 2013

    Arg! I think part of what is messing me up is that to make things look more like quantum physics we've let time run backwards in the network theory series. Let's try this

    $P_t(n) = \langle n, e^{t H} n \rangle = \langle n, \sum_{k=0}^{\infty} \frac{t^k}{k!} H^{k} n \rangle = \sum_{k=0}^{\infty} \frac{t^k}{k!} \langle n, H^{k} n \rangle$

    Now if I can make the argument that $ \langle n, H^{k} n \rangle = \langle n, H n \rangle^k$

    We would recover

    $P_t(n) = e^{t \langle n, H n \rangle}$, and then if $\langle n, H^k n \rangle = (-\mathrm{outdegree}(n))^k$

    We would get

    $P_t(n) = e^{-\text{outdegree}(n)t}$

    which is always decreasing (provided n has an outgoing edge) and reaches zero in the long time limit.

    I have to think more to be certain this is right. It's all simple, so I hope it's OK. Either way it's fun! And I'm learning something. Note to self: work things out more before posting here --- but hey, this started to get exciting!
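
    If the derivation is right, it should be easy to confirm numerically. Here is a check (my own sketch, with the same $H = A - D(A)$ convention as above) for an indegree-0 node with outdegree 2:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Edges 0 -> 1, 0 -> 2, 1 -> 2; node 0 has indegree 0 and outdegree 2.
    # Convention: A[i, j] = 1 for an edge j -> i.
    A = np.array([[0, 0, 0],
                  [1, 0, 0],
                  [1, 1, 0]], dtype=float)
    H = A - np.diag(A.sum(axis=0))

    for t in [0.0, 0.5, 1.0, 2.0]:
        P_t = expm(t * H)[0, 0]          # <n, e^{tH} n> with n = node 0
        print(t, P_t, np.exp(-2.0 * t))  # matches exp(-outdegree(n) * t)
    ```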

  • 33.
    edited June 2013

    Gents, I have to head to a dinner tonight so I won't get back to this until tomorrow (unfortunately). I'm thinking about this stuff though, and actually just making sure I recall all the definitions when it comes to the stochastic mechanics listed [here](http://math.ucr.edu/home/baez/networks/networks_20.html). One thing's for sure: this is getting more interesting with time, so the fun derivative is increasing!

    I think the story here, as it relates to stochastic mechanics so far, is that we have 2 things:

    • end points: stationary states
    • starting points: nodes where if you start there, you're going to leave!
  • 34.
    edited June 2013

    Jacob, great.

    Also, can we generalize these results along the following lines?

    Let S be a set of nodes. Say that S is predecessor-closed if for every node n in S and every predecessor n' of n, n' is also in S. Define similarly the notion of a successor-closed set of nodes.

    Any set of roots is predecessor-closed, and any set of leaves is successor-closed.

    Statement 1. Now let S be a predecessor-closed set of nodes. Suppose that at the start of the experiment, the walker is known to be somewhere within S. Then, as time progresses, the probability that the walker remains within S must be a non-increasing function. In fact, only if S is also successor-closed will the probability remain constant over time.

    Here is a variant on that statement:

    Statement 2a. Let S be a predecessor-closed set of nodes. For any given starting condition of the experiment, as time progresses, the probability that the walker is found within S must be a non-increasing function. In fact, only if S is also successor-closed will the probability remain constant over time.

    Statement 2b. Dually, let S be a successor-closed set of nodes. For any given starting condition of the experiment, as time progresses, the probability that the walker is found within S must be a non-decreasing function. In fact, only if S is also predecessor-closed will the probability remain constant over time.

    These statements, if proven, would "put fibre" into the intuitive notion that the walkers are swept from roots towards leaves in a directed acyclic graph.
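
    Here is a minimal numerical sketch of Statement 1, under the same assumed conventions as before (unit rates, $H = A^T - \mathrm{diag}(\mathrm{outdegree})$, and a toy graph of my own choosing): start the walker inside a predecessor-closed set $S$ and watch the probability mass in $S$.

    ~~~~
    # Sketch: the probability mass in a predecessor-closed set S never increases.
    import numpy as np
    from scipy.linalg import expm

    # Toy DAG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
    A = np.zeros((4, 4))
    for i, j in [(0, 1), (0, 2), (1, 3), (2, 3)]:
        A[i, j] = 1.0
    H = A.T - np.diag(A.sum(axis=1))

    S = [0, 1]             # predecessor-closed: no edges point into {0, 1}
    psi0 = np.zeros(4)
    psi0[0] = 1.0          # walker starts inside S

    prev = 1.0
    for t in np.linspace(0.0, 5.0, 51):
        mass = expm(t * H).dot(psi0)[S].sum()
        assert mass <= prev + 1e-9   # non-increasing, up to roundoff
        prev = mass
    ~~~~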

  • 35.

    David wrote

    • (Thm.) Let $G$ be an acyclic network with a positive number of nodes. Then there is at least one node with outdegree $0$ and at least one node with indegree $0$.

    Yes, that's better! But 'positive' should be 'finite and positive'.

    If the graph is infinite, then the conclusion does not hold. For instance, the successor relation among the integers is acyclic, but has no roots or leaves.

    That was the example I had in mind.

  • 36.
    edited June 2013

    Jacob,

    We can order the nodes so that all the starting-point nodes (let's call them roots) come first, and the others are ordered so as to respect the edge directions. Then $H$ has block form like


    ~~~~
    D 0
    A B
    ~~~~

    where $D$ and $B$ are square, $B$ is lower triangular, and $D$ is diagonal. Then $H^k$ looks like


    ~~~~
    D^k 0
    A'  B'
    ~~~~

    and $\langle n, H^k n \rangle = (-\text{outdegree}(n))^k$ where $n$ is any root.
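
    A quick check of that block claim, on a made-up example (again assuming unit rates and $H = A^T - \mathrm{diag}(\mathrm{outdegree})$; only a sketch):

    ~~~~
    # Sketch: with the roots listed first, H is block lower triangular, and
    # <n, H^k n> = (-outdegree(n))^k for any root n.
    import numpy as np

    # Roots 0 and 1 first; the rest ordered to respect edge directions.
    A = np.zeros((5, 5))
    for i, j in [(0, 2), (1, 2), (2, 3), (2, 4)]:
        A[i, j] = 1.0

    outdeg = A.sum(axis=1)
    H = A.T - np.diag(outdeg)

    for k in range(5):
        Hk = np.linalg.matrix_power(H, k)
        for n in [0, 1]:  # the roots
            assert np.isclose(Hk[n, n], (-outdeg[n]) ** k)
    ~~~~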

  • 37.

    In my work in phylogenetic analysis, acyclic networks are used for modelling hybridization. They are quite restricted. There are 4 types of node:

    1. a single root node, with indegree 0 and outdegree 2.

    2. some leaf nodes with indegree 1 and outdegree 0.

    3. 'speciation' nodes with indegree 1 and outdegree 2.

    4. 'hybridization' nodes with indegree 2 and outdegree 1.

    One technique for these is multi-labeled trees. (Search for 'phylogenetic networks multi-labeled trees'.) The basic idea is to split any node with indegree greater than 1 into multiple nodes, one for each incoming edge. This has to be done recursively, and you have to keep track of the labels. This turns the network into a set of trees, one for each root. Trees are a lot easier to work with than networks, but they might be very big.
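
    As a sketch of that splitting idea (a toy implementation of my own, not any standard phylogenetics code): recursively copying the subgraph below each node automatically duplicates everything reachable through a node of indegree greater than 1, giving one multi-labeled tree per root.

    ~~~~
    # Sketch: unfold a DAG into one multi-labeled tree per root by
    # recursively copying children; shared subgraphs get duplicated.
    def unfold(node, children):
        """Return (label, subtrees): the tree hanging below `node`."""
        return (node, [unfold(c, children) for c in children.get(node, [])])

    # Toy hybridization network: 'h' has indegree 2, so 'h' and its
    # descendants appear twice in the unfolded tree.
    children = {'root': ['a', 'b'], 'a': ['h'], 'b': ['h'], 'h': ['leaf']}
    print(unfold('root', children))
    # ('root', [('a', [('h', [('leaf', [])])]),
    #           ('b', [('h', [('leaf', [])])])])
    ~~~~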

  • 38.

    None of the following has been carefully checked!

    Suppose we have a single walk: nodes $1, 2, \dots, m$ and edges $(1,2), (2,3), \dots, (m-1,m)$, and a walker starts at the root $r$. Then

    $$ e^{tH} r = e^{-t}\left(1, \quad t, \quad t^2/2!, \quad t^3/3!, \quad \dots, \quad t^{m-2}/(m-2)!, \quad \sum_{i=m-1}^\infty t^i/i!\right)^T $$

    In words, it is a 'curtailed' Poisson with mean $t$. By curtailed I mean that everything in the Poisson from $m-1$ on is bundled together at the leaf. This can be proved by exponentiating $H$. Or you can imagine sitting just after the $i$th node and waiting for a walker to arrive. You will wait a time that is a sum of $i$ independent exponentials, and so the distribution of the waiting time is a gamma with shape parameter $i$. And since the rate at which walkers pass you is proportional to the number at $i$, the distribution at $i$ is the same.

    Now I think it should be possible to write an acyclic network as a set of walks, and find an expression for $P_t(n)$ for any node $n$. You need all the walks from a root to a leaf with the right weights... Might work well for sparse networks, but for dense ones it might be quicker to find $ e^{tH} $ numerically.
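
    Here is a sketch that checks the curtailed-Poisson claim against direct exponentiation, with the same assumed conventions as in the earlier snippets (unit rates, $H = A^T - \mathrm{diag}(\mathrm{outdegree})$):

    ~~~~
    # Sketch: for the path 1 -> 2 -> ... -> m, e^{tH} applied to the root
    # should equal the Poisson(t) distribution curtailed at the leaf.
    import numpy as np
    from scipy.linalg import expm
    from scipy.stats import poisson

    m, t = 5, 1.7
    A = np.zeros((m, m))
    for i in range(m - 1):
        A[i, i + 1] = 1.0
    H = A.T - np.diag(A.sum(axis=1))

    r = np.zeros(m)
    r[0] = 1.0                       # walker starts at the root
    direct = expm(t * H).dot(r)

    curtailed = poisson.pmf(np.arange(m), t)
    curtailed[m - 1] = 1.0 - poisson.cdf(m - 2, t)  # bundle the tail at the leaf
    assert np.allclose(direct, curtailed)
    ~~~~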
