
# Time reversibility in Markov chains

Jacob Biamonte mentioned this topic, and pointed to time reversal with respect to Markov chains.

What I know about this topic comes from the use of continuous time Markov chains as models for mutation of molecular sequences in evolutionary biology. It is usually assumed that the mutations at different positions in the molecular sequence (called sites) behave independently, so we can focus on one site. The state space $S$ is most often the set of 4 nucleotides $\{A,C,G,T\}$. Sometimes $S$ is 20 amino acids, sometimes 61 codons, but I'll stick with $|S|=4$.

A very common model is the 'general time-reversible' or GTR model for DNA substitutions. It can be described by a 4x4 rate matrix $A$ in which the off-diagonal entries are nonnegative and the columns sum to zero. If the process is time-reversible, $A$ can be written as the product $DB$ of a diagonal matrix $D$ and a symmetric matrix $B$. The entries of $D$ are the equilibrium frequencies (proportions) of the states. Let $C = D^{1/2}$. Then $CBC$ is symmetric. Let $U$ be the matrix whose columns are the eigenvectors of $CBC$, and let $\Lambda$ be the diagonal matrix of its eigenvalues. Then

$$A = (CU) \Lambda (CU)^{-1}.$$ (This comes from p206 of Felsenstein's book Inferring phylogenies.)
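This decomposition is easy to sanity-check numerically. Here is a sketch in Python with numpy; the equilibrium frequencies and the entries of $B$ are made-up numbers, not biological estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-state (A, C, G, T) example: build a reversible rate matrix
# A = D B with D diagonal (equilibrium frequencies) and B symmetric.
pi = np.array([0.1, 0.2, 0.3, 0.4])   # made-up equilibrium frequencies
D = np.diag(pi)

B = rng.random((4, 4))
B = (B + B.T) / 2                     # symmetric, nonnegative off-diagonals
np.fill_diagonal(B, 0.0)
# Choose B's diagonal so the columns of A = D B sum to zero.
np.fill_diagonal(B, -(D @ B).sum(axis=0) / pi)

A = D @ B

# Spectral form: C = D^{1/2}, C B C is symmetric, so it has an orthonormal
# eigenbasis U with eigenvalues Lambda, and A = (C U) Lambda (C U)^{-1}.
C = np.diag(np.sqrt(pi))
evals, U = np.linalg.eigh(C @ B @ C)
Lam = np.diag(evals)
CU = C @ U

assert np.allclose(A.sum(axis=0), 0.0)                # columns sum to zero
assert np.allclose(A, CU @ Lam @ np.linalg.inv(CU))   # Felsenstein's identity
```

The same construction works for the 20 amino acids or 61 codons simply by changing the dimension.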

1.

Brilliant. Thanks for sharing.

2.
edited January 2014

Graham wrote:

> It can be described by a [...] matrix $A$ in which the off-diagonal entries are nonnegative and the columns sum to zero.

By the way, that's exactly what I've been calling an **infinitesimal stochastic operator** in the network theory series. It's a somewhat ugly name, but the point is that then $\exp(t A)$ is what people call a **stochastic operator** for any $t \ge 0$, and given a family of operators $\exp(t A)$, physicists call $A$ the **infinitesimal generator**, since when $t$ is tiny we have

$$\exp(t A) \approx I + t A$$ I'm only mentioning this in case you (or someone) has been scratching their heads at my jargon.
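To illustrate these two definitions concretely (a small made-up 3-state example, not from the post):

```python
import numpy as np
from scipy.linalg import expm

# An infinitesimal stochastic operator: nonnegative off-diagonal entries,
# columns summing to zero (made-up 3-state rates).
A = np.array([[-1.0,  2.0,  0.5],
              [ 0.7, -3.0,  0.5],
              [ 0.3,  1.0, -1.0]])
assert np.allclose(A.sum(axis=0), 0.0)

# exp(tA) is then a stochastic operator for every t >= 0:
for t in [0.0, 0.1, 1.0, 10.0]:
    P = expm(t * A)
    assert np.all(P >= -1e-12)               # entries nonnegative
    assert np.allclose(P.sum(axis=0), 1.0)   # columns sum to one

# And for tiny t, exp(tA) is approximately I + tA.
t = 1e-6
assert np.allclose(expm(t * A), np.eye(3) + t * A, atol=1e-9)
```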

I think the _time-reversible_ case you mention is the same as a reversible continuous-time Markov chain. I've been writing about these lately, starting [here](http://johncarlosbaez.wordpress.com/2014/01/07/lyapunov-functions-for-complex-balanced-systems/#comment-35159). I link to a number of criteria for a continuous-time Markov chain to be reversible; none of them are exactly the one you hint at — $A = D B$ with $D$ diagonal and $B$ symmetric — but I bet they're equivalent.

(I say "hint at" because you didn't state that this was a necessary and sufficient condition for time-reversibility... but I bet it is.)

I would really like to take my weird viewpoints on Markov processes and apply them to the case of genetics, but I don't know what I can do that's worth doing and hasn't been done.

I am preparing a paper with Nina Otter on the phylogenetic operad, which is a way of thinking about the space of phylogenetic trees studied by Susan Holmes, linking it more tightly to Markov processes. But this is general abstract nonsense of a sort that's unlikely to appeal to biologists.

3.
edited January 2014

I think the condition is that $D$ is diagonal and real, and $B$ symmetric. [Edit: forget that. It would be $\Lambda$ that the realness would apply to.] I am not certain though: when I wrote the earlier post, I wanted to ask a question about whether it was necessary and sufficient, but as I wrote, I realised I didn't know exactly what the question should be.

I saw you talking on the blog. A related issue is what happens when the process is not time reversible. At equilibrium, what is happening - what is cycling around and how? I've thought about that before, without achieving much. In the 4x4 case, you start thinking about 2 cycles and 3 cycles and figures of 8, just like Manoj discussed.

4.

Here's part of a [wiki page](https://en.wikipedia.org/wiki/Kolmogorov%27s_criterion#Continuous-time_Markov_chains) John mentioned in the comment he linked to above.

**Continuous-time Markov chains**

The theorem states that a continuous-time Markov chain with transition rate matrix $Q$ is reversible if and only if its transition probabilities satisfy

$$q_{j_1 j_2} q_{j_2 j_3} \cdots q_{j_{n-1} j_n} q_{j_n j_1} = q_{j_1 j_n} q_{j_n j_{n-1}} \cdots q_{j_3 j_2} q_{j_2 j_1}$$ for all finite sequences of states $j_1, j_2, \ldots, j_n \in S$.

References

• Kelly, Frank P. (1979). Reversibility and Stochastic Networks. Wiley, Chichester. pp. 21–25.

By the way, this book is online [here](http://www.statslab.cam.ac.uk/~frank/BOOKS/book/whole.pdf)
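For a small state space the cycle condition can be checked by brute force. A sketch in Python (the function name and the example matrices are my own; enumerating simple cycles suffices for a finite chain, since 2-cycles satisfy the condition trivially):

```python
import numpy as np
from itertools import permutations

def kolmogorov_ok(Q, tol=1e-10):
    """Check q_{j1 j2} ... q_{jn j1} = (reverse product) over all
    simple cycles of length >= 3 of a small rate matrix Q."""
    n = Q.shape[0]
    for k in range(3, n + 1):
        for cyc in permutations(range(n), k):
            fwd = np.prod([Q[cyc[i], cyc[(i + 1) % k]] for i in range(k)])
            bwd = np.prod([Q[cyc[(i + 1) % k], cyc[i]] for i in range(k)])
            if abs(fwd - bwd) > tol:
                return False
    return True

# A symmetric rate structure satisfies the criterion trivially...
Q_rev = np.array([[0, 1, 2], [1, 0, 3], [2, 3, 0]], float)
assert kolmogorov_ok(Q_rev)

# ...while a biased 3-cycle (one direction faster than the other) fails it.
Q_cyc = np.array([[0, 1, 5], [5, 0, 1], [1, 5, 0]], float)
assert not kolmogorov_ok(Q_cyc)
```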

5.
edited January 2014

I want to figure out what these ideas mean using a “quantum style” approach.

Here's what I think is going on. I warn you, it might not be correct at all.

**Definition 1** (reversible stochastic process). Given a graph Laplacian $L$ and propagator

$$U = \exp(t L)$$ and its time-reversed counterpart $$\widetilde{U} = \exp(t \widetilde{L}),$$ the following quantity must vanish identically for all $t$ and all $| i \rangle, | j\rangle$: $$\langle j | \exp(t L) | i \rangle - \langle j | \exp(t \widetilde{L}) | i \rangle = 0$$ for the process to be called reversible. Here $| i \rangle$ and $| j\rangle$ are nodes in the site basis.

This, I think, is a definition of reversibility. Starting at node $i$ and transitioning to $j$ should have the same probability as starting at $j$ and transitioning to $i$, for the reversed process. So the following must hold

$$\exp(t L) - \exp(t \widetilde{L}) = 0$$ for all times $t$. Equivalently, all of the time derivatives of this quantity must vanish separately. (Is that right?) Starting with the first of these derivatives

$$\partial_t(\exp(t L) - \exp(t \widetilde{L})) = 0$$ and then setting $t=0$ gives

$$L - \widetilde{L} = 0$$ We know how to construct the graph Laplacian from an adjacency matrix $A$.

$$L = A - D$$ If we reverse all of the arrows in the graph, we can then construct $\widetilde{L}$ as

$$\widetilde{L} = A^\top - \overline{D}$$ and hence we arrive at

$$A - D - A^\top + \overline{D} = 0$$ in other words,

$$A - A^\top = D - \overline{D}$$ We know that $D$ and respectively $\overline{D}$ have only diagonal entries and that $A$ and respectively $A^\top$ must have only off diagonal ones, and so for both sides to vanish, they must each vanish separately.

If $D - \overline{D} = 0$ then $A = A^\top$ and the corresponding process is reversible. We know that $D$ and $\overline{D}$ are determined uniquely from $A$ and $A^\top$. We further know that the only (possibly) non-zero entries of $D$, $\overline{D}$ are

$$D_{ii} = -\sum_j A_{ij}$$ $$\overline{D}_{ii} = -\sum_j A_{ji}$$ and consequently that

$$D_{ii} = L_{ii}$$ $$\overline{D}_{ii} = \widetilde{L}_{ii}$$ Stated another way, if you know the diagonal entries of $L$ and $\widetilde{L}$, you can determine if the process is reversible.
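A quick numerical illustration of the construction above (Python/numpy; the weights are made up, and I use the standard sign convention $D_{ii} = \sum_j A_{ij}$, so that $L = A - D$ annihilates the all-ones vector):

```python
import numpy as np

# Hypothetical weighted digraph; A[i, j] is the rate of the edge i -> j.
A = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.0, 1.0, 0.0]])

D    = np.diag(A.sum(axis=1))      # out-degrees, from A
Dbar = np.diag(A.sum(axis=0))      # in-degrees, i.e. out-degrees of A^T

L    = A - D                       # graph Laplacian
Ltil = A.T - Dbar                  # Laplacian of the arrow-reversed graph

# Both annihilate the all-ones vector...
one = np.ones(3)
assert np.allclose(L @ one, 0) and np.allclose(Ltil @ one, 0)

# ...and for this example the diagonals of L and Ltil disagree,
# matching the fact that A is not symmetric.
assert not np.allclose(np.diag(L), np.diag(Ltil))
assert not np.allclose(A, A.T)
```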

[Kolmogorov's reversibility criterion](https://en.wikipedia.org/wiki/Kolmogorov%27s_criterion#Continuous-time_Markov_chains) implies that the product of transition probabilities around any closed loop must equal the product around the reversed loop. So in other words, you need only consider transitions from a node back to itself. If you know all of these transition probabilities, as well as the probabilities for the time-reversed version, this is enough to determine if a process is reversible. In this case, however, what they consider is discrete applications of the generator of the Markov chain. On the wiki article John found, you can see what's going on. See [this example](https://en.wikipedia.org/wiki/Kolmogorov%27s_criterion#Example).

I'm not 100 percent sure what's going on here in this case, as I mentioned. But I think that the diagonal entries of the propagator compared with the diagonal entries of its time reversed version might lead to a sort of 'quantum version' of the Kolmogorov criterion.

However, it's early, I did this in a hurry and I waved my hands at the analysis a bit. Basically, I did the calculations and got excited and wrote this down. It should at least give us something to talk about. The version of time-reversibility that I know about is defined in (what seems at least now to be) a different way in quantum physics. I want to understand the stochastic definitions better, in hopes that we could maybe compare them.

6.

I haven't closely followed your discussion and only briefly glanced into the book, but p. 23, Theorem 1.8 says that if a Markov process/chain is STATIONARY then the above holds. Every reversible Markov process seems to be stationary (Lemma 1.1), but the converse is not true: a stationary process is reversible iff it is detailed balanced (Theorem 1.2). That is, if it is reversible the transition probabilities satisfy the criterion; but if the criterion holds, then for the theorem to apply it seems the Markov process has to be stationary in order to yield reversibility. For the detailed balance condition to be satisfied the "summation has to be finite" (see proof), which seems to be guaranteed only if one assumes that the chain is stationary (I don't see this at the moment, but I don't have infinite time to delve into this). So does the Wikipedia entry need to be corrected?

7.

> So does the Wikipedia entry need to be corrected?

I don't think so. I think there are two definitions of reversibility going on here. The first is the one we've already talked about.

I think the first is just a fancy way of saying that going from $i$ to $j$ occurs with the same probability as going from $j$ to $i$, in the reversed process.

Regarding the second, I think it's talking about the long-time stationary behaviour. (I haven't seen the book, but I've seen a discrete version of this discussed elsewhere.)

In other words, when does

$$|\pi \rangle = \lim_{t\to \infty} \exp(t L) | \hat 1 \rangle = \lim_{t \to \infty} \exp(t \widetilde L) | \hat 1 \rangle$$ hold? That is, when does the reversed process have the same steady state as the forward process? In other words, when does changing the direction of the arrows leave the stationary state unchanged?
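As a concrete instance of this limit (a made-up 2-state generator, column convention, checked in Python):

```python
import numpy as np
from scipy.linalg import expm

# G[i, j] is the rate j -> i; columns sum to zero (infinitesimal stochastic).
G = np.array([[-1.0,  2.0],
              [ 1.0, -2.0]])

p0 = np.array([0.5, 0.5])          # the normalised "all ones" start
pi_inf = expm(60.0 * G) @ p0       # long-time limit (t = 60 is effectively infinite here)

# Detailed balance pi_1 * 1 = pi_2 * 2 gives pi = (2/3, 1/3), and G pi = 0.
assert np.allclose(pi_inf, [2/3, 1/3])
assert np.allclose(G @ pi_inf, 0.0)
```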

8.

The Wikipedia entry explicitly refers to the book, and the book gives a definition of reversibility which does not seem to be a long-time-behaviour reversibility. I don't know what you have already talked about and what you are referring to; I just referred to your citation of the Wikipedia entry, which is based on the book.

9.

I think you're right: it looks like the wiki article quotes Theorem 1.7 of the book, omitting the word stationary. I'm still a bit confused though...

10.

Let's talk about the long time limit. I mean let's talk about when

$$|\pi \rangle = \lim_{t\to \infty} \exp(t G) | \hat 1 \rangle = \lim_{t \to \infty} \exp(t \widetilde G) | \hat 1 \rangle$$ We have already stated that

$$L = A - D$$ $$\widetilde{L} = A^\top - \overline{D}$$ and when $G = LQ^{-1}$ and $\widetilde{G}=\widetilde{L}\, \overline{Q}^{-1}$ for non-negative diagonal $Q$, $\overline{Q}$, then $G$, $\widetilde G$ generate valid stochastic processes. Special cases include the [random walk normalised Laplacian](http://en.wikipedia.org/wiki/Laplacian_matrix#Random_walk_normalized_Laplacian), meaning $Q$ would be the degree matrix of the graph.

Moreover, we know that $| \hat 1 \rangle$ is a valid eigenstate of both $L$ and $\widetilde L$ with eigenvalue $0$. It is a general property of a graph Laplacian that the all-ones vector $| \hat 1 \rangle$ is an eigenstate of highest (zero) eigenvalue. If the graph is strongly connected, the all-ones vector is the unique vector with this highest eigenvalue. If the graph is not strongly connected, then we need to know about the [algebraic connectivity](http://en.wikipedia.org/wiki/Algebraic_connectivity).

Note also that if there are k disjoint connected components in the graph, then this vector of all ones can be split into the sum of k independent $\lambda = 0$ eigenvectors of ones and zeros, where each connected component corresponds to an eigenvector with ones at the elements in the connected component and zeros elsewhere.

For now, I want to assume that the graph is strongly connected, and worry about other cases later.

We want to consider the case where both $G$ and $\widetilde{G}$ share the same stationary state $| \pi \rangle$ - in other words when

$$LQ^{-1} | \pi \rangle = \widetilde{L} \overline{Q}^{-1} | \pi \rangle$$ Taking $| \pi \rangle = Q | \hat 1 \rangle$, we know that $LQ^{-1} | \pi \rangle = L | \hat 1 \rangle = 0$, and so

$$(A^\top - \overline{D}) \overline{Q}^{-1} Q | \hat 1 \rangle = 0$$ However, we know that $(A^\top - \overline{D}) | \hat 1 \rangle = 0$ and that

$$(A^\top - \overline{D}) \leq 0$$ So when $\overline{Q}^{-1}Q | \hat 1 \rangle = | \hat 1 \rangle$ we have a valid eigenstate of the Laplacian $(A^\top - \overline{D})$. We further know that $Q$, $\overline{Q}$ are diagonal, and so the relationship gives

$$\overline{Q}_{ij}^{-1} Q_{jk} = \delta_{ik}$$ and so we determine from the uniqueness of inverses that $\overline{Q} = Q$.

But this is not interesting, since every Laplacian has $L |\hat 1 \rangle = 0$, and so there is nothing special about the fact that $LQ^{-1}$ has $|\pi \rangle$ as its steady state, for multiple $L$. In other words, we need to dig a bit deeper than this to find something that relates to the structure of the graph/system.
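The step $G Q |\hat 1\rangle = L |\hat 1\rangle = 0$ can be checked directly (made-up weights, Python/numpy):

```python
import numpy as np

# Hypothetical weighted digraph: A[i, j] is the weight of the edge i -> j.
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
D = np.diag(A.sum(axis=1))         # out-degree matrix
L = A - D                          # graph Laplacian: L @ ones == 0
one = np.ones(3)
assert np.allclose(L @ one, 0.0)

Q = D.copy()                       # random-walk normalisation: Q = degree matrix
G = L @ np.linalg.inv(Q)           # the normalised generator L Q^{-1}

# |pi> = Q|1> is then a zero mode of G, since G Q |1> = L |1> = 0.
pi = Q @ one
assert np.allclose(G @ pi, 0.0)
```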

11.

Let's again consider a graph with adjacency matrix $A$.

Let us consider functions $o$ and $i$ that return the out-degree and in-degree matrix of an adjacency matrix, respectively.

Let $L = A - o(A) := A - D$ and then consider the process $G = LQ^{-1}$ as well as its time reversed process $\widetilde{G}$. This time we will define the time reversed process as

$$\widetilde{G} = L^\top Q^{-1}$$.

We will then show that if

$$o(A) = i(A)$$ then the processes $G$, $\widetilde{G}$ have the same steady state, despite having rather different dynamics. By construction, we already know that $LQ^{-1} |\pi \rangle = 0$, and so we consider

$$L^\top Q^{-1} |\pi \rangle = (A^\top - D)| \hat 1 \rangle = 0$$ Here we note that

$$A^\top| \hat 1 \rangle = o(A^\top)| \hat 1 \rangle = i(A)| \hat 1 \rangle = D| \hat 1 \rangle$$ We need graphs where changing the direction of the arrows doesn't change the degree matrix used in the Laplacian. An example is the following weighted graph pair.

![balanced graph](https://dl.dropboxusercontent.com/u/78485948/forum-graphics/balanced-graph.JPG)

These two _balanced graphs_ are examples of graphs with different dynamics, but which share the same stationary states.
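Here is a numerical check with a balanced 3-cycle (made-up rates; I use the column convention, so for this balanced example both rows and columns of $G$ sum to zero):

```python
import numpy as np
from scipy.linalg import expm

# A biased 3-cycle whose in- and out-rates balance at each node:
# G[i, j] is the rate j -> i; rows and columns both sum to zero.
G = np.array([[-3.0,  1.0,  2.0],
              [ 2.0, -3.0,  1.0],
              [ 1.0,  2.0, -3.0]])
Grev = G.T                              # the arrow-reversed process

p0 = np.array([1.0, 0.0, 0.0])          # start in state 1
pi_fwd = expm(50.0 * G)    @ p0
pi_rev = expm(50.0 * Grev) @ p0

assert np.allclose(pi_fwd, pi_rev)      # same stationary state (uniform)...
assert np.allclose(pi_fwd, 1/3)
assert not np.allclose(G, Grev)         # ...although the dynamics differ
```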

12.

> I think you're right: it looks like the wiki article quotes Theorem 1.7 of the book, omitting the word stationary. I'm still a bit confused though...

Yes, it looks as if the wiki article erroneously quotes Theorem 1.7, which is identical to Theorem 1.8 apart from referring to **continuous-time** Markov chains (here called Markov processes), which seemed to be of interest in this discussion. But as I don't understand the proof at first glance, I can't exclude the possibility that the Wikipedia author(s) made this omission on purpose, e.g. because they think the extra condition ("summation has to be finite") is unnecessary. Of course it is also possible that this is a typo, or that the theorem was once cited correctly and some non-mathematician erased the word "stationary" because they thought it sounds ugly, in bold disregard of the fact that mathematical language is less redundant in theorem statements and that this erasure kills the whole mathematical meaning.

Anyway, as I said, I was more or less accidentally drawn into this discussion; I noticed this omission in the Wikipedia article and mentioned it in order to prevent more harm in your chain of reasoning. If you think the Wikipedia article is wrong you should maybe try to straighten it out.

13.

It's a nice catch, and I'll examine this in detail, comparing it with other references before changing Wikipedia.

14.
edited January 2014

I wrote:

> If you think the Wikipedia article is wrong you should maybe try to straighten it out.

I guess you know this, but just in case: it is not clear whether editing Wikipedia is worth the effort, as your editing work may unfortunately very easily be overwritten, especially if there is someone who is convinced that "stationary" sounds ugly and who has more energy for guarding this Wikipedia entry than you.

15.

I'll be careful!

16.

Jacob wrote:

> • Kelly, Frank P. (1979). *Reversibility and Stochastic Networks*. Wiley, Chichester. pp. 21–25. By the way, this book is online [here](http://www.statslab.cam.ac.uk/~frank/BOOKS/book/whole.pdf)

That's funny: Frank Kelly was teaching me probability theory in 1979. Not from this book, though; we were learning more basic stuff.

Exercise 4, page 10 is

> Show that a stationary Markov chain is reversible if and only if the matrix of transition probabilities can be written as the product of a symmetric and a diagonal matrix.

The matrix of transition probabilities is not the rate matrix (=infinitesimal stochastic operator), but they're closely related.

Exercise 6 shows the eigenvalues can be real without reversibility.
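Exercise 4's factorisation is easy to verify on a toy chain (made-up 2-state numbers, Python/numpy):

```python
import numpy as np

# A detailed-balanced 2-state chain: P is row-stochastic, pi its stationary
# distribution, and pi_i P_{ij} = pi_j P_{ji}.
pi = np.array([2/3, 1/3])
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
assert np.allclose(P.sum(axis=1), 1.0)
assert np.allclose(pi @ P, pi)
assert np.isclose(pi[0] * P[0, 1], pi[1] * P[1, 0])   # detailed balance

# Exercise 4's factorisation: with D = diag(pi), S = P D^{-1} is symmetric,
# so P = S D, a symmetric matrix times a diagonal one.
D = np.diag(pi)
S = P @ np.linalg.inv(D)
assert np.allclose(S, S.T)
```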

17.

> The matrix of transition probabilities is not the rate matrix (= infinitesimal stochastic operator), but they're closely related.

Stone's theorem applies to an entire semigroup; multiple semigroups could intersect at a single point.

I think this is the relation? View the $n$ actions of a propagator in light of a Trotter decomposition together with Stone's theorem?
