


These days in Torino, Italy, we've been talking a lot about network theory, quantum network theory, complex networks, anything dealing with networks, even the network of networks. We like this stuff.

A big part of this conversation is of course trying to find ways to relate different networks. One of the most interesting topics in this regard is the relation between quantum and stochastic networks. For example, one can consider particles moving on graphs (walks) and let the particles move either by the rules of stochastic mechanics or by the rules of quantum mechanics. This was done in the network theory series, particularly in part 16, where John related stochastic walks, quantum walks and electrical networks made of resistors.

I wanted to start a conversation comparing the quantum and stochastic versions of Noether's theorem further. This has of course already been done in posts in the network theory series (see parts 11 and 13!), but that doesn't mean nothing more can be said. We're not sure what else can be said, so perhaps this thread can serve as a place for ideas, thoughts, questions and developments in this regard.

Just to recall the basic idea, Noether's theorem relates symmetries of generators to conserved quantities.

We call $O$ a symmetry of quantum generator $H$ iff $[O,H]=0$. This is equivalent to the conserved quantity $\langle O\rangle_Q$ not changing in time:

$\frac{d}{dt} \langle O\rangle_Q = 0$

where $\langle O\rangle_Q$ is the expected value of $O$ at time $t$ in some state evolving under $H$. The subscript $Q$ means quantum.
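Just to fix conventions, here is a quick numerical sanity check of the quantum statement (a made-up two-level example of mine, not from the posts): build $H$ and $O$ diagonal in the same basis so that they commute, and verify that $\frac{d}{dt} \langle O\rangle_Q = \langle\psi| i[H,O] |\psi\rangle$ vanishes in every state.

```python
import numpy as np

np.random.seed(0)

# A hypothetical two-level system: H and O are diagonal in the same
# (random) unitary basis, so [O, H] = 0 by construction.
U = np.linalg.qr(np.random.randn(2, 2) + 1j * np.random.randn(2, 2))[0]
H = U @ np.diag([1.0, 3.0]) @ U.conj().T   # Hermitian generator
O = U @ np.diag([5.0, -2.0]) @ U.conj().T  # commuting observable

assert np.allclose(O @ H - H @ O, 0)       # the symmetry [O, H] = 0

# Schrodinger evolution d|psi>/dt = -iH|psi> gives
# d<O>/dt = <psi| i[H, O] |psi>, which vanishes in every state.
psi = np.random.randn(2) + 1j * np.random.randn(2)
psi /= np.linalg.norm(psi)
dOdt = psi.conj() @ (1j * (H @ O - O @ H)) @ psi
assert abs(dOdt) < 1e-12
```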

In quantum theory, operators commuting with $H$ and conserved quantities that don't change in time are equivalent concepts. So for every "fish" in quantum theory, we get an exact mirror-image reflection.

What John and Brendan showed is that this perfect symmetry is not the case in stochastic mechanics! Every symmetry in stochastic mechanics given by $[O,H]=0$ of course still leads to a conserved quantity

$\frac{d}{dt} \langle O\rangle_S = 0$

where the subscript $S$ means that the expected value is defined in terms of stochastic mechanics...

but... not every conserved quantity leads to a symmetry! In other words, for every "fish" in stochastic mechanics, you can get something different

In particular what John and Brendan showed is that you need more than just

$\frac{d}{dt} \langle O\rangle_S = 0$

to give rise to a symmetry such that $[O,H]=0$. In fact, the above equation together with

$\frac{d}{dt} \langle O^2\rangle_S = 0$

is enough to ensure that $[O,H]=0$. So it's a bit subtle, but symmetries and conserved quantities are not always related in such a perfect way as they are in quantum theory.
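To make the asymmetry concrete, here is a small numerical sketch (a hypothetical three-state chain of my own, not an example from the posts): an infinitesimal stochastic $H$ and an observable $O$ for which $\frac{d}{dt} \langle O\rangle_S = 0$ in every state, yet $[O,H] \neq 0$ and $\frac{d}{dt} \langle O^2\rangle_S \neq 0$.

```python
import numpy as np

# Infinitesimal stochastic generator: nonnegative off-diagonals,
# columns summing to zero. State 2 decays to states 1 and 3 with
# equal rates (a hypothetical 3-state chain).
H = np.array([[0.0,  0.5, 0.0],
              [0.0, -1.0, 0.0],
              [0.0,  0.5, 0.0]])
O = np.diag([0.0, 1.0, 2.0])          # observable, value O_i on state i
O_hat = np.diag(O)                    # vector of O's diagonal entries

assert np.allclose(H.sum(axis=0), 0)  # columns sum to zero

# d<O>/dt = O_hat^T H psi vanishes for EVERY probability vector psi...
assert np.allclose(O_hat @ H, 0)

# ...yet O does not commute with H: no symmetry.
assert not np.allclose(O @ H - H @ O, 0)

# And the second moment is NOT conserved, which is exactly the extra
# condition John and Brendan's theorem demands for a genuine symmetry.
assert not np.allclose((O_hat**2) @ H, 0)
```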

What else can we say about this? What John and Brendan depicted (in my point of view) exactly shows an interesting difference between quantum and stochastic mechanics, but perhaps there are some specific examples that further illuminate these points.

If anyone can think of anything, or if you have ideas/points you want to discuss, we're very interested here in Torino.

## Comments

I think John and Brendan's proof is very interesting and valuable.

I also find it a bit depressing, for the following reason:

In quantum mechanics, if we want to look for conserved quantities we know a good way to go about it is to think about observable operators that commute with the Hamiltonian. Even if this is not easy, it is at least clear how we need to go about it.

But John and Brendan showed that if you look for observable operators that commute with the Hamiltonian of a stochastic process, then you'll only find the subset of conserved quantities whose higher moments (variance etc.) are also conserved. In fact this subset is so restrictive that the observable must take the same value on any pair of states connected by the Hamiltonian (i.e. between which a transition is allowed). This means that by looking for observable operators that commute with the Hamiltonian we'll only find observables that are constant on each connected component of the system, and these observables are trivially conserved because the probability of being in a connected component doesn't change in time.

So it basically says: if you want to find the most interesting conserved quantities, don't start by looking for commutation. That's the value of the result in my opinion: to show us where we shouldn't look, and therefore to guide us to where we should look.

Where is this?...I don't know.

There are a few interesting results from non-equilibrium stochastic mechanics that I feel have some relation to conserved quantities. My favourite one is Jarzynski's equality, which was proved in

* C. Jarzynski, [Equilibrium free-energy differences from nonequilibrium measurements: A master-equation approach](http://pre.aps.org/abstract/PRE/v56/i5/p5018_1), Phys. Rev. E **56**, 5018 (1997).

Jarzynski's result is the following:

$$ \langle \exp(-W/T) \rangle_0 = \exp(-\Delta F /T) $$

which basically says that if I start in a thermal state at temperature $T$ and change some parameter of a system in time, then the expected value of the exponential of the work $W$ done on the system during that change equals the exponential of the difference $\Delta F$ between the equilibrium free energies corresponding to the initial and final values of the parameter. For a reasonably general class of evolutions, there is a non-equilibrium quantity whose expected value is entirely determined by equilibrium properties. It doesn't matter how I change the parameter! Slow or fast is ok.
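For the special case of an instantaneous quench, where the parameter jumps at $t = 0$ and so the work on a trajectory starting in state $i$ is just $W_i = E_i' - E_i$, the equality reduces to a one-line identity we can check directly. This is my own toy check under that assumption, not Jarzynski's general master-equation setup (with units where $k_B = 1$):

```python
import numpy as np

np.random.seed(1)
T = 0.7                                    # temperature (k_B = 1)
E0 = np.random.rand(5)                     # energies before the quench
E1 = np.random.rand(5)                     # energies after the quench

p = np.exp(-E0 / T)
Z0 = p.sum()
p /= Z0                                    # initial thermal (Gibbs) state
Z1 = np.exp(-E1 / T).sum()

# Work on a trajectory starting in state i is W_i = E1[i] - E0[i],
# so <exp(-W/T)>_0 = sum_i p_i exp(-(E1_i - E0_i)/T) = Z1/Z0,
# and exp(-dF/T) = Z1/Z0 as well, since F = -T log Z.
lhs = (p * np.exp(-(E1 - E0) / T)).sum()   # <exp(-W/T)>_0
rhs = Z1 / Z0                              # exp(-dF/T)
assert np.isclose(lhs, rhs)
```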

This caused a huge buzz in physics. Experimentalists use it to calculate the free energies of systems, by measuring the work done in many realizations as a parameter is changed. They used to have to do the change in the parameter very slowly because they knew only that $W = \Delta F$ in the infinitely slow limit. But since Jarzynski they can do it quickly and use the relation above.

It strikes me that there might be some way of interpreting this as a conservation of a quantity, and I hope at some point in the future to try and rephrase it in this way. It might then give us some clues about how to generalise the approach to other conserved quantities.

If people are interested in this stuff perhaps I could give a few more details on Jarzynski's equality...


There's a nice short Azimuth post on Jarzynski's equality by [Eric Downes](http://johncarlosbaez.wordpress.com/2011/04/30/crooks-fluctuation-theorem/), and a longer post on somewhat related issues by [Matteo Smerlak](https://johncarlosbaez.wordpress.com/2012/10/08/the-mathematical-origin-of-irreversibility/).

If you want to look for conserved quantities that don't commute with the Hamiltonian, you could start by looking at the simple example that Brendan and I describe in [our paper](http://math.ucr.edu/home/baez/noether.pdf). Other examples will work in a similar way: each state $i$ where the observable $O$ takes value $O_i$ has probabilities $p_{j i}$ of evolving to other states $j$, with the property that

$$ O_i = \sum_j O_j p_{j i} $$

Here I'm describing the discrete-time case, where time evolution is given by a stochastic operator. Our paper discusses a continuous-time example, where time evolution is given by an infinitesimal stochastic Hamiltonian. They're similar but different.

I guess the fun part would be to find concrete, natural examples where this happens.
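A minimal concrete instance of the discrete-time condition above (my own toy stochastic matrix, not the example from the paper): state 2 jumps to states 1 or 3 with probability 1/2 each, and the observable value 1 on state 2 is exactly the average of the values 0 and 2 it can jump to.

```python
import numpy as np

# Column-stochastic operator U, with U[j, i] = p_{ji}: the probability
# that state i evolves to state j (a hypothetical 3-state example).
U = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.5, 1.0]])
O_hat = np.array([0.0, 1.0, 2.0])     # observable values O_i

assert np.allclose(U.sum(axis=0), 1)  # columns sum to one

# The averaging condition O_i = sum_j O_j p_{ji} holds, so <O> is
# conserved by one step of the evolution...
assert np.allclose(O_hat @ U, O_hat)

# ...but O = diag(O_hat) does not commute with U: no symmetry.
O = np.diag(O_hat)
assert not np.allclose(O @ U - U @ O, 0)
```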


Recently Jacob Biamonte, Mauro Faccin and I have been working a bit more on this stuff. We found that in the special case where the infinitesimal generator of the Markov process happens to be a Dirichlet operator, we regain the usual quantum version of Noether's theorem. The proof goes something like this:

Assume $H$ is an infinitesimal stochastic generator, i.e. a real-valued square matrix with nonnegative off-diagonal entries and columns summing to zero, and $|\psi(t)\rangle$ is a normalized probability vector obeying the master equation

$$ \partial_t |\psi(t)\rangle = H |\psi(t)\rangle. $$

$H$ generates a continuous-time Markov process $e^{Ht}$ which maps the space of normalized probability vectors to itself: $|\psi(t)\rangle = e^{Ht} |\psi(0)\rangle$. Furthermore, let $O$ be a real diagonal matrix representing an arbitrary observable. The expected value of this observable is

$$ \langle O\rangle_{\psi} = \sum_{i \in X} \langle i | O |\psi\rangle = \langle \hat{O} | \psi \rangle, $$

where $|\hat{O}\rangle$ is the vector of diagonal entries of $O$.
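As a quick check of this setup (an illustrative random generator of my own, using `scipy.linalg.expm`), one can verify numerically that $e^{Ht}$ really does map probability vectors to probability vectors:

```python
import numpy as np
from scipy.linalg import expm

np.random.seed(0)
n = 4

# Random infinitesimal stochastic generator: nonnegative off-diagonal
# rates, diagonal chosen so that each column sums to zero.
H = np.random.rand(n, n)
np.fill_diagonal(H, 0)
np.fill_diagonal(H, -H.sum(axis=0))
assert np.allclose(H.sum(axis=0), 0)

# e^{Ht} is a stochastic operator: it preserves normalization and
# positivity of probability vectors.
psi0 = np.random.rand(n)
psi0 /= psi0.sum()
psi_t = expm(H * 0.7) @ psi0
assert np.all(psi_t >= 0)
assert np.isclose(psi_t.sum(), 1.0)
```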

## Noether's theorem for symmetric Markov processes

If $H$ is an infinitesimal stochastic generator with $H = H^T$, and $O$ is an observable, we have

$$ [O, H] = 0 \quad \iff \quad \partial_t \langle O \rangle_{\psi(t)} = 0 \quad \forall \: |\psi(0) \rangle, \forall t \ge 0. $$

Proof.

Since $H$ is symmetric we may use a spectral decomposition $H = \sum_k E_k |\upsilon_k \rangle \langle \upsilon_k |$.

As in John's proof, the $\implies$ direction is easy to show. In the other direction, we have

$$ \partial_t \langle O \rangle_{\psi(t)} = \langle \hat{O} | H e^{Ht} |\psi(0) \rangle = 0 \quad \forall \: |\psi(0) \rangle, \forall t \ge 0 $$

$$ \iff \quad \langle \hat{O}| H e^{Ht} = 0 \quad \forall t \ge 0 $$

$$ \iff \quad \sum_k \langle \hat{O} | \upsilon_k \rangle E_k e^{t E_k} \langle \upsilon_k| = 0 \quad \forall t \ge 0 $$

$$ \iff \quad \langle \hat{O} | \upsilon_k \rangle E_k e^{t E_k} = 0 \quad \forall k, \forall t \ge 0 $$

$$ \iff \quad |\hat{O} \rangle \in \mathrm{span}\{|\upsilon_k \rangle \mid E_k = 0\} = \ker H, $$

where the third equivalence holds because $\{ |\upsilon_k \rangle \}_k$ is a linearly independent set of vectors.

For any infinitesimal stochastic generator $H$, if the corresponding transition graph consists of $m$ connected components, we may reorder (permute) the states of the system so that $H$ becomes block-diagonal with $m$ blocks. Now it is easy to see that the kernel of $H$ is spanned by $m$ eigenvectors, one for each block. Since $H$ is also symmetric, the elements of each such vector can be chosen to be ones within the block and zeros outside it.

Consequently $|\hat{O} \rangle \in \ker H$ implies that we can choose the eigenbasis of $O$ to be $\{|\upsilon_k \rangle\}_k$, which implies $[O, H] = 0$.

Alternatively, $|\hat{O} \rangle \in \ker H$ implies $$ |\hat{O^2} \rangle \in \ker H \quad \iff \quad \ldots \quad \iff \quad \partial_t \langle O^2 \rangle_{\psi(t)} = 0 \quad \forall \: |\psi(0)\rangle, \forall t \ge 0, $$ where we have used the above sequence of equivalences backwards. Now, using John's original proof, we obtain $[O, H] = 0$.

As an additional observation we find that the only observables $O$ that remain invariant under a symmetric $H$ are of the very limited type described above, where the observable has to have the same value in every state of a connected component, as Tomi mentioned in post #2.
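The symmetric case can be illustrated numerically (a minimal two-component example of my own): with $H = H^T$ block-diagonal over two connected components, an observable constant on each component has $|\hat{O}\rangle \in \ker H$ and commutes with $H$, while a non-constant one has a non-conserved expectation.

```python
import numpy as np

# Symmetric infinitesimal stochastic generator with two connected
# components (block-diagonal; a minimal 4-state illustration).
block = np.array([[-1.0,  1.0],
                  [ 1.0, -1.0]])
H = np.block([[block, np.zeros((2, 2))],
              [np.zeros((2, 2)), block]])
assert np.allclose(H, H.T) and np.allclose(H.sum(axis=0), 0)

# Observable constant on each component: O_hat lies in ker H, and the
# diagonal matrix O commutes with H.
O_hat = np.array([5.0, 5.0, -2.0, -2.0])
assert np.allclose(H @ O_hat, 0)
O = np.diag(O_hat)
assert np.allclose(O @ H - H @ O, 0)

# A non-constant observable: its expected value is NOT conserved.
P_hat = np.array([0.0, 1.0, 0.0, 1.0])
assert not np.allclose(P_hat @ H, 0)
```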


Nice! As Jacob suggested to me recently, this could make a fun short blog post.


The blog article in progress thread stemming from this thread is here:

* [Blog - Noether's theorem for doubly stochastic Markov processes](http://forum.azimuthproject.org/discussion/1321/blog-noethers-theorem-for-doubly-stochastic-markov-processes/)