
# Stochastic probability vs. quantum amplitude current

## The Stochastic Probability Current

Let $L$ be a valid stochastic generator and define

$U(t) := e^{t L}$

for all non-negative times $t$.

Definition. (Stochastic Probability Current)

$$\hat{s}_{nm}(t) = U_{nm}(t) - U_{mn}(t)$$ Roughly stated, one can think of this as the difference between the probability of starting at node $n$ and ending at node $m$ and the probability of the reverse transition.

The following are all equivalent.

• $\hat s = 0$ for all $t$

• $L = L^\top$

• $U(t)$ is doubly stochastic

• $L$ is a Dirichlet operator (i.e. $iL$ generates a one-parameter unitary group $e^{itL}$)
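
To make the equivalence concrete, here is a minimal numpy sketch. The two generators are illustrative examples (not from the text above), and the matrix exponential is hand-rolled as a Taylor series so the snippet is self-contained; in practice one would use `scipy.linalg.expm`.

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via its Taylor series (adequate for small t*L)."""
    out = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Two valid stochastic generators (off-diagonal entries >= 0, columns
# summing to zero); the first is symmetric, the second is not.
L_sym = np.array([[-1.0,  1.0],
                  [ 1.0, -1.0]])
L_asym = np.array([[-2.0,  1.0],
                   [ 2.0, -1.0]])

def s_hat(L, t):
    """Stochastic probability current: s_nm(t) = U_nm(t) - U_mn(t)."""
    U = expm(t * L)
    return U - U.T

print(s_hat(L_sym, 0.7))    # vanishes: L = L^T
print(s_hat(L_asym, 0.7))   # nonzero off-diagonal entries
```

For the symmetric generator one can also check directly that $U(t)$ is doubly stochastic (both rows and columns sum to one).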

## The Quantum Amplitude Current

Let $H = H^\dagger$ be a quantum generator and define $U(t) := e^{-i t H}$ for all real times $t$.

Definition. (Quantum Amplitude Current)

$$\hat q_{nm}(t) = U_{nm}(t) - U_{mn}(t)$$

Then the following are all equivalent.

• $\hat q = 0$ for all $t$

• $H = H^\top$

• $[H, K] = 0$, where the antiunitary operator $K$ is complex conjugation in the same basis that defines $H$
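
The same check in the quantum case: a small numpy sketch comparing a real symmetric Hamiltonian ($H = H^\top$) with one that is Hermitian but not complex symmetric. The matrices are illustrative, and the matrix exponential is again a hand-rolled Taylor series.

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via its Taylor series (adequate for small arguments)."""
    out = np.eye(len(A), dtype=complex)
    term = np.eye(len(A), dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# H_sym is real symmetric (so H = H^T = H^dagger and [H, K] = 0);
# H_herm is Hermitian but not complex symmetric (H != H^T).
H_sym = np.array([[0.0, 1.0],
                  [1.0, 0.5]], dtype=complex)
H_herm = np.array([[0.0,   1.0j],
                   [-1.0j, 0.5]], dtype=complex)

def q_hat(H, t):
    """Quantum amplitude current: q_nm(t) = U_nm(t) - U_mn(t), U(t) = e^{-itH}."""
    U = expm(-1j * t * H)
    return U - U.T          # plain transpose, not the conjugate transpose

print(q_hat(H_sym, 0.9))    # vanishes: H = H^T
print(q_hat(H_herm, 0.9))   # nonzero
```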

## Discussion

Note that there exists a linear map $\omega$, multiplication by $-i$, acting on the space of valid stochastic generators and making the case of vanishing stochastic probability current a subclass of the case of vanishing quantum amplitude current. The elementary map acts as

$$\omega : L \mapsto -i L$$ where we abuse notation and overload $\hat s$, $\hat q$

$$\hat s(\omega(L)) = \hat q(-i L)$$ and so $\hat s = 0$ $\Rightarrow$ $\hat q = 0$. The stochastic case thus implies the quantum one: it is a strict subclass. Note that while $\omega$ sends any $L = L^\top$ into the space of valid quantum generators, $\omega^2$ reverses the direction of time in the stochastic evolution, $\omega^3(L)$ yields the time reverse (the $\dagger$) of the quantum evolution generated from $\omega(L)$, and $\omega^4(L) = L$.
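
Two of the claimed properties of $\omega$ are easy to verify numerically; a sketch (with the same hand-rolled Taylor-series exponential, and an illustrative symmetric generator):

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via its Taylor series (adequate for small arguments)."""
    out = np.eye(len(A), dtype=complex)
    term = np.eye(len(A), dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

omega = lambda M: -1j * M          # the map omega : L -> -iL

L = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])       # a symmetric stochastic generator
t = 0.5

# omega^2(L) = -L reverses the direction of time in the stochastic evolution:
U_fwd = expm(t * L)
U_bwd = expm(t * omega(omega(L)))
print(np.allclose(U_bwd @ U_fwd, np.eye(2)))           # True

# omega^4 brings the generator back to itself:
print(np.allclose(omega(omega(omega(omega(L)))), L))   # True
```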

1.
edited April 2014

I'm trying to understand how the Stochastic Probability Current relates to the notion of a reversible Markov chain. Let's look at discrete time and space first. Let $(X_n)$, $n\geq 0$ be a sequence of random variables on a discrete state space. This sequence is a Markov chain with initial distribution $r$ and transition kernel $P$ if $X_0$ has law $r$ and $P$ describes the conditional distribution of $X_{n+1}$ given $X_n$. We can also allow the sequence to be finite so it makes sense to say $X_0, ... , X_N$ is a MC with initial distribution $r$ and kernel $P$.

The right notion of time reversal is provided by Bayes' rule. The transition kernel $P$ describes the conditional distribution of $X_{n+1}$ given $X_n$. To reverse time, think of the joint distribution of the chain over all times. Then we can just use Bayes' rule to compute the distribution of $X_n$ given $X_{n+1}$. This defines a new time-reversed transition kernel $P^\dagger_n$ (which might depend on time). It is not the transpose or renormalized transpose of $P$, but computed using Bayes' rule. Write $p_{i,n}$ for the probability of being in state $i$ at time $n$: $$P^\dagger_{j(n+1) \rightarrow i(n)} = \frac{P_{i(n) \rightarrow j(n+1)}\, p_{i,n}}{p_{j,n+1}}$$ To remove the dependence on time we can consider the Markov chain in steady state.
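
The Bayes'-rule reversal takes only a couple of lines of numpy; the kernel `P` and the marginal `p_n` below are made-up illustrative data (rows of `P` index the "from" state and sum to 1):

```python
import numpy as np

# Forward kernel P (rows index the "from" state; each row sums to 1) and a
# marginal distribution at time n -- hypothetical numbers for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])
p_n  = np.array([0.2, 0.5, 0.3])   # distribution of X_n
p_n1 = p_n @ P                     # distribution of X_{n+1}

# Bayes' rule: P^dagger_{j -> i} = P_{i -> j} * p_{i,n} / p_{j,n+1}
P_dag = (P * p_n[:, None]).T / p_n1[:, None]

print(P_dag.sum(axis=1))           # each row sums to 1: a valid kernel
```

The reversed kernel is again row-stochastic, and the joint law is preserved: $p_{i,n} P_{i \to j} = p_{j,n+1} P^\dagger_{j \to i}$.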

A Markov chain with initial distribution $r$ and transition kernel $P$ is reversible if for all $N \geq 1$, $(X_{N-n})$ is also a Markov chain with the same initial distribution and kernel. A sufficient condition, close to what is discussed above, is that the kernel is irreducible and symmetric (and so doubly stochastic) and the initial distribution is uniform. But this is not necessary. What is necessary is that $P = P^\dagger_n = P^\dagger$: reversing time while in the stationary distribution gives back the same transition kernel. Then a movie of the chain played backward will be indistinguishable from one played forward. This is one way to get the detailed balance equations $\pi_i P_{i\rightarrow j} = \pi_j P_{j \rightarrow i}$, and another definition of reversible.

That is, a Markov chain is reversible if there exists a (steady-state) distribution $\pi$ such that $P=P^\dagger$, $$P_{j \rightarrow i} = P^\dagger_{j \rightarrow i} = \frac{P_{i \rightarrow j} \pi_i}{\pi_j}$$ This is the detailed balance condition and as we've set things up, it comes from Bayes' rule.

An example is a random walk on a graph. Consider the graph with four vertices $1,2,3,4$ and five edges $12,14,23,24,34$. States are vertices, and at each step a particle moves to an adjacent vertex with equal probability. The transition matrix is neither symmetric (e.g. $P_{1 \rightarrow 2} = 1/2$ while $P_{2 \rightarrow 1} = 1/3$) nor doubly stochastic. But at the stationary distribution, where the probability of a vertex equals its degree divided by the total degree, $\pi = (.2, .3, .2, .3)$, it satisfies the detailed balance conditions and $P = P^\dagger$.
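
This example can be checked directly; a short numpy sketch building the walk on the graph above and verifying stationarity and detailed balance:

```python
import numpy as np

# Adjacency matrix of the graph with vertices 1,2,3,4 and edges 12,14,23,24,34.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
deg = A.sum(axis=1)                # degrees (2, 3, 2, 3)
P = A / deg[:, None]               # random-walk kernel: P_{i->j} = A_ij / deg(i)

pi = deg / deg.sum()               # stationary distribution (.2, .3, .2, .3)
print(np.allclose(pi @ P, pi))     # True: pi is stationary

# Detailed balance: pi_i P_{i->j} = pi_j P_{j->i}, i.e. the flow matrix is symmetric.
F = pi[:, None] * P
print(np.allclose(F, F.T))         # True
```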

All this generalizes to the continuous case in a straightforward way. The issue now is the different notions of time reversal in classical probability and quantum unitary evolution. In the case of pure unitary evolution it seems that the classical degeneration is the easy, symmetric and doubly stochastic case. Perhaps for open systems though we need to think about the more general sort of reversibility.

2.

Thank you Jason! I'm reading this in detail and will reply later on. Cheers from Torino with Jacob Turner.

3.

Just an FYI that we have another related thread here:

* [When is the reverse of a continuous time stochastic process also a valid continuous time stochastic process?](http://forum.azimuthproject.org/discussion/1299/the-reverse-of-a-continuous-time-markov-process/)