Comments

  • 1.
    edited July 2013

    Hi! I think I could point you to a proof of this if you want. I think a result like this holds not only in the acyclic case but more generally.

    How did you come up with this, and why do you care about it? (Assuming you do...)

    By the way, I think this is more about continuous-time Markov chains than stochastic Petri nets, unless I'm misreading it. A Markov chain describes a dot walking around randomly on a graph, or many dots walking around on a graph but not interacting with each other at all. A stochastic Petri net describes processes where things interact and turn into other things.

    There is a way to see a continuous-time Markov chain as a special case of a stochastic Petri net where each transition has one thing as input and one thing as output. I think that you're saying 'stochastic Petri net' when you mean this very special kind.

    There is also a way to see a stochastic Petri net as a special case of a continuous-time Markov chain where the nodes of the graph are all possible finite collections of things.

    However, the two concepts are not equivalent. (I've just described an 'adjunction' between the two concepts, but it's not an equivalence, because if you take a round trip you don't get back where you started!)

  • 2.

    Yes, a proof would be nice! On the slender evidence of a directed graph with two nodes and two edges, I think a more general result holds too.

I was thinking in terms of sums of random variables, each representing the time to traverse one edge in a walk; then the sums become convolutions.

    Probably, both 'Petri' and 'acyclic' should be removed from the title... As far as I can reconstruct my thoughts, I started thinking about biological processes which grow directed acyclic graphs by adding new nodes, and where it can make sense to regard each new node as having a new type, and then somehow thinking a walker on such a graph would have a new type at each node, although the walker's behaviour ignores node-types.

  • 3.

> There is also a way to see a stochastic Petri net as a special case of a continuous-time Markov chain where the nodes of the graph are all possible finite collections of things.

Can't any set of multiple non-deterministic interactions be represented as a chain of dummy single interactions? Is this a way to represent a PN as a Markov chain?

> However, the two concepts are not equivalent. (I've just described an 'adjunction' between the two concepts, but it's not an equivalence, because if you take a round trip you don't get back where you started!)

    Can't you get all the permuted possible interactions back from any arbitrary list? What am I missing?

  • 4.
    edited July 2013

    Graham wrote:

> Yes, a proof would be nice! On the slender evidence of a directed graph with two nodes and two edges, I think a more general result holds too.

    So far it's been annoyingly difficult to find a clear general statement and proof of the result... especially annoying because it should be common knowledge, and I once saw a paper that did it. It's also annoying that in looking through my email for correspondence about this, I saw the slides of a talk someone gave on Markov processes, which 'borrowed' a lot of cute examples from my network theory posts, apparently without citing them. Amoebas, the Desargues graph...

Anyway, the general result is like this. Suppose you have a directed (multi)graph, meaning a finite set of edges $E$, a finite set of vertices $V$, and maps $s,t: E \to V$ sending each edge to its starting point (source) and ending point (target). Suppose you have a rate constant for each edge, $r: E \to (0,\infty)$. Then you get a [continuous-time Markov chain](https://en.wikipedia.org/wiki/Continuous-time_Markov_chain), like this:

<img width = "300" src = "https://upload.wikimedia.org/wikipedia/commons/thumb/a/a6/Financial_Markov_process.svg/500px-Financial_Markov_process.svg.png" alt = ""/>

    And the result is this. Suppose you want to know the probability that if you start at the vertex $i \in V$ at time zero then you will be at the vertex $j \in V$ at time $t$. Then this probability can be expressed as a sum over all directed paths in the graph that start at $i$ and end at $j$:

    $$ i = i_0 \stackrel{e_1}{\to} i_1 \stackrel{e_2}{\to} i_2 \stackrel{e_3}{\to} \cdots \stackrel{e_n}{\to} i_{n} = j$$ Here $e_k$ is an edge with source $i_{k-1}$ and target $i_k$. For each directed path we get a term in our sum that's proportional to an integral over the set

$$ \{ 0 = t_0 \le t_1 \le t_2 \le \cdots \le t_n = t \} $$ of a product of exponentials, one for each edge:

$$ \exp(-r_{e_k} (t_k - t_{k-1})) $$ We imagine a particle hopping along the edges of the graph, hopping along the edge $e_k$ at time $t_k$.
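For concreteness, here's a minimal numerical sketch of this setup (assuming NumPy and SciPy; the little multigraph and its rate constants are invented for illustration). It builds the infinitesimal stochastic matrix of the Markov chain from the edge list and computes the hopping probabilities by exponentiating it:

```python
# A sketch of the setup above; the graph and rates are made-up examples.
import numpy as np
from scipy.linalg import expm

V = 4                                  # vertices 0..3
edges = [(0, 1, 2.0),                  # (source, target, rate constant)
         (0, 2, 1.0),
         (1, 3, 1.5),
         (2, 3, 0.5)]

# Infinitesimal stochastic matrix: entry (j, i) is the rate of hopping
# from i to j, and every column sums to zero.
H = np.zeros((V, V))
for i, j, r in edges:
    H[j, i] += r
    H[i, i] -= r

t, i, j = 1.0, 0, 3
P = expm(t * H)                        # P[j, i] = probability of being at
print(P[j, i], P[:, i].sum())          # j at time t, starting at i; the
                                       # column sum should print 1.0
```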

Physicists call this sort of sum over paths a path integral, and it's a very common technique... but for them, more in quantum theory than probability theory. Tomi Johnson should be familiar with this. **Tomi: do you know a nice discussion of this in the case of stochastic (rather than quantum) random walks?**

If you ask questions I can clarify... I'm getting tired of setting up all the notation without any feedback on whether my intended audience (you) understands it!

  • 5.
    edited July 2013

    Jim Stuttard wrote:

> Can't any set of multiple non-deterministic interactions be represented as a chain of dummy single interactions? Is this a way to represent a PN as a Markov chain?

    If I understand you correctly, the answer is yes and yes. Given a stochastic Petri net, we create a Markov chain. The states in this Markov chain are the 'complexes' in the stochastic Petri net. ('Complexes' is chemical reaction network jargon, which I have been using lately. I guess Petri net people call them 'labellings'.)

> Can't you get all the permuted possible interactions back from any arbitrary list? What am I missing?

Try reinterpreting a little stochastic Petri net like this:

<img width = "100" src = "http://math.ucr.edu/home/baez/networks/amoebae1.png" alt = ""/>

as a Markov chain. You get a Markov chain with infinitely many states, one for each number of amoebae. Then reinterpret that Markov chain as a stochastic Petri net; now there are infinitely many species, and we can have any number of items of each species. So we can have, say, 5 "amoeba triples" and 2 "amoeba quartets", and this is different from having $5 \times 3 + 2 \times 4$ "amoeba singletons". So, we are not back where we started; we've gotten something much bigger... as is typical for a monad formed by going back and forth along an adjunction.
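Here is a sketch of that first direction, assuming the pictured net is the usual amoeba-fission example (one transition turning one amoeba into two, with some rate constant $\alpha$; both $\alpha$ and the truncation point below are invented). The resulting Markov chain's states are amoeba counts, truncated so the matrix is finite:

```python
# Petri net -> Markov chain, assuming the net is pure fission at rate alpha.
import numpy as np
from scipy.linalg import expm

alpha, n_max = 0.3, 60                 # invented rate constant and cut-off
H = np.zeros((n_max + 1, n_max + 1))
for n in range(n_max):
    rate = alpha * n                   # mass action: each of n amoebae can split
    H[n + 1, n] += rate                # n amoebae -> n + 1 amoebae
    H[n, n] -= rate                    # (n_max is left absorbing by the cut-off)

# Distribution over amoeba counts at time t = 2, starting from one amoeba:
P = expm(2.0 * H)[:, 1]
print(P[:6])                           # P[n] = probability of exactly n amoebae
```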

  • 6.
    edited July 2013

    I didn't know it was necessary to distinguish singletons and complexes until I just read Arjun's post and now I understand your proof. I can now try and formalise growing a monad along an adjunction. Really enjoyable, thanks.

  • 7.

    Great!

  • 8.

    I understood the general result you gave in post 5. It seems to be missing something at the vertices $i$ and $j$. Suppose there are three vertices $i, j, k$ and just one edge $i \to j$. Then your formula works.

    If we add an edge $j \to k$, then obviously the probability of being at $j$ must change, but your formula doesn't.

    Likewise if we add an edge $i \to k$, the probability of being at $j$ must change. It's not just that it gets smaller because the particle might go to $k$ instead of $j$, but also, conditioned on getting to $j$, it will get there quicker.
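Here is a quick numerical check of this point (a sketch assuming NumPy and SciPy, with arbitrary unit rates): the exact probability of being at $j$, computed by exponentiating the generator, really does change when the edge $j \to k$ is added:

```python
# Exact probabilities for the two graphs; unit rates chosen arbitrarily.
import numpy as np
from scipy.linalg import expm

def generator(V, edges):
    H = np.zeros((V, V))               # H[j, i] = rate of hopping i -> j
    for i, j, r in edges:
        H[j, i] += r
        H[i, i] -= r
    return H

t, i, j, k = 1.0, 0, 1, 2
H1 = generator(3, [(i, j, 1.0)])                  # just the edge i -> j
H2 = generator(3, [(i, j, 1.0), (j, k, 1.0)])     # add the edge j -> k
print(expm(t * H1)[j, i], expm(t * H2)[j, i])     # 0.632... vs 0.367...
```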

  • 9.
    edited July 2013

> Tomi: do you know a nice discussion of this in the case of stochastic (rather than quantum) random walks?

    Hi there. Nice result. I do know a couple of things about path integrals in stochastic mechanics, and I can see how they could help you prove your theorem.

I mostly came across such path integrals when I was looking at how to simulate continuous-time finite-state-space Markovian non-equilibrium stochastic processes. The method of choice is [dynamical monte carlo](http://en.wikipedia.org/wiki/Dynamic_Monte_Carlo_method) (closely related to [kinetic monte carlo](http://en.wikipedia.org/wiki/Kinetic_Monte_Carlo)). I learnt a lot of what I know about it from these notes:

* A. P. J. Jansen, [An Introduction To Monte Carlo Simulations Of Surface Reactions](http://arxiv.org/abs/cond-mat/0303028), arXiv:cond-mat/0303028. (Chapter 4 is the bit I'm going to talk about.)

    Let me summarise the key points.

A continuous-time finite-state-space Markovian stochastic process is the following. A system has some finite number $n$ of configurations. If the system is in configuration $i$ then there is some Poisson process with rate $W_{i j}$ of going to each other configuration $j$ (the rates can be time-dependent but I won't bother with that here). A Poisson process just means that in an infinitesimal time $d t$ the probability of this hop from $i$ to $j$ occurring is $W_{i j} d t$. This is called Markovian because the rates don't depend on the history of the system, just what configuration it is in now (it's memoryless).

John looked at these systems a lot in the [network series](http://math.ucr.edu/home/baez/networks/). I think those posts mostly focussed on the evolution of the probability $P_i (t)$ of being in configuration $i$ at time $t$. From the above you can show that these probabilities evolve according to the master equation

$$ \frac{d}{d t} P_i (t) = \sum_{j \neq i} \left[ W_{j i} P_j (t) - W_{i j} P_i (t) \right] $$ The first term on the right is the total rate of hopping into configuration $i$ and the second term is the rate of hopping out.
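As a worked example (a sketch assuming SciPy; the two-state chain and its rates are invented), the master equation for hopping $0 \to 1$ at rate $a$ and $1 \to 0$ at rate $b$ can be integrated directly and checked against its well-known closed-form solution:

```python
# Two-state master equation, integrated numerically and checked exactly.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 2.0, 1.0                        # invented rates: 0 -> 1 at a, 1 -> 0 at b

def master(t, P):                      # dP0/dt = b P1 - a P0, and vice versa
    return [b * P[1] - a * P[0],
            a * P[0] - b * P[1]]

sol = solve_ivp(master, (0.0, 3.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
P0_numeric = sol.y[0, -1]
P0_exact = b / (a + b) + (1 - b / (a + b)) * np.exp(-(a + b) * 3.0)
print(P0_numeric, P0_exact)            # should agree to solver tolerance
```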

The path integral interpretation of the stochastic process relates to a particular way of expressing the solutions to this equation. To arrive at this expression I think it's easier to write things in terms of vectors and matrices. Let $P(t)$ be a size $n$ column vector with elements $P_i (t)$. Additionally, let $H$ be an $n \times n$ matrix with off-diagonal elements $H_{i j} = W_{j i}$ for $i \neq j$ (the rate of hopping from $j$ to $i$) and diagonal elements $H_{i i} = - \sum_{j \neq i} W_{i j}$. Then the master equation has the form

$$ \frac{d}{d t} P (t) = H P (t) $$ It's helpful to divide up $H = D + W$ into its diagonal part $D$, which as we will see describes the decay of the probability of being in a configuration due to hopping to another, and its off-diagonal part $W$ describing the hopping between configurations. We can arrive at an implicit solution to the master equation

    $$ \frac{d}{d t} P (t) = (D + W) P (t) $$ by a slight rearrangement and multiplication by $e^{-D t}$ to get

    $$ \displaystyle{ \frac{d}{d t} \left[ e^{-D t} P (t) \right] = e^{-D t} \frac{d}{d t} P (t) - e^{-D t} D P (t) = e^{-D t} W P (t) } $$ Integrating this and slightly rearranging we get

$$ \displaystyle{ P (t) = e^{D t} P (0) + \int_0^t e^{D (t-t')} W P (t') d t' } $$ which is only an infinite number of steps from the explicit solution we are after! In particular, we insert the above implicit expression into itself, then again into the resulting expression, and so on ad infinitum. What we get is this:

$$ \displaystyle{ P (t) = \left[ e^{D t} + \int_0^t e^{D (t - t_1 )} W e^{D t_1} d t_1 + \int_0^t \int_0^{t_2} e^{D (t-t_2)} W e^{D (t_2-t_1)} W e^{D t_1} d t_1 d t_2 + \dots \right] P (0)} $$ Each element of this vector equation gives the path integral expression for the probability $P_i (t)$. Let's look into it in more detail. We can write the expression as

$$ \displaystyle{ P (t) = \left[ \sum_{N=0}^\infty \int_0^t \int_0^{t_N} \cdots \int_0^{t_2} e^{D (t-t_{N})} W e^{D (t_{N}-t_{N-1})} W \cdots W e^{D t_1} d t_1 \cdots d t_{N} \right] P (0)} $$ where each term of the sum describes contributions to the probability arising from the paths in which the system makes $N$ hops along configuration space.
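Here is a numerical sketch of that iteration (NumPy and SciPy; the $3 \times 3$ rate matrix is arbitrary): starting from the zeroth iterate $e^{D t} P(0)$ and repeatedly inserting the solution into itself on a time grid converges to the exact $e^{(D+W)t} P(0)$:

```python
# Iterating the implicit solution on a grid; the rate matrix is invented.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

W = np.array([[0.0, 1.0, 0.0],         # W[j, i]: rate of hopping i -> j
              [2.0, 0.0, 1.0],
              [0.0, 0.5, 0.0]])
D = np.diag(-W.sum(axis=0))            # decay: minus the total exit rates
d = np.diag(D)

ts = np.linspace(0.0, 2.0, 401)
P0 = np.array([1.0, 0.0, 0.0])

P = np.exp(np.outer(d, ts)) * P0[:, None]        # zeroth iterate e^{Dt} P(0)
for _ in range(30):                              # insert the solution into itself
    WP = W @ P                                   # W P(t') on the grid
    new = np.empty_like(P)
    for k, t in enumerate(ts):
        kern = np.exp(np.outer(d, t - ts[:k + 1]))       # e^{D (t - t')}
        new[:, k] = trapezoid(kern * WP[:, :k + 1], ts[:k + 1], axis=1)
    P = np.exp(np.outer(d, ts)) * P0[:, None] + new

print(P[:, -1])                        # converged iterate at t = 2
print(expm(2.0 * (D + W)) @ P0)        # exact answer; should agree closely
```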

To be more explicit, let's write the initial configuration as $i_0$, which then transitions to $i_1$ at $t_1$, then $i_2$ at $t_2$, and so on until $i_N$ at $t_N$. A function $p$ taking the value $p(t) = i_0$ for times $t$ less than $t_1$, then $p(t) = i_1$ for $t_1 \le t \lt t_2$, etc., is called the path. $N$ is called the path length. Let's write a (functional) integral over all paths as

$$ \displaystyle{ \int [Dp] = \sum_{N=0}^\infty \int_0^t d t_N \int_0^{t_N} d t_{N-1} \cdots \int_0^{t_2} d t_1 \sum_{i_N} \cdots \sum_{i_0} } $$ and a probability density functional $\mathcal{P}[p]$ over the paths as

    $$ \displaystyle{ \mathcal{P}[p] = e^{D_{i_N i_N} (t-t_{N})} W_{i_{N-1} i_{N}} e^{D_{i_{N-1} i_{N-1}} (t_{N} - t_{N-1})} W_{i_{N-2} i_{N-1}} \cdots W_{i_{0} i_{1}} e^{D_{i_0 i_0} t_1} P_{i_0} (0) } $$ Then (you can show this by writing out all of the matrix/vector multiplication) the path integral expression for $P (t) $ written above in vector form can also be written as

$$ \displaystyle{ P_{i} (t) = \int_{i_N = i} [Dp] \mathcal{P}[p] } $$ So it's just a (functional) integral over all paths. We look at all of the ways that the stochastic process could evolve and integrate over the ways we want to know the probability for. In this case we are only interested in paths ending in state $i$ at time $t$.

Some of the expressions above look quite tricky, e.g. that for $\mathcal{P}[p]$. But it's actually reasonably intuitive. Looking at that expression for $\mathcal{P}[p]$ you see it is just a combination of two things. First, whenever the system stays in a configuration $i$ for a time $t$, the probability of this occurring decreases by a factor $e^{D_{i i} t}$, where as defined above $D_{i i} = - \sum_{j \neq i} W_{i j}$. This is just exponential decay at a rate given by the total rate of leaving the configuration. Second, when a configuration change occurs, the probability of the resulting configuration being $j$ is proportional to $W_{i j}$.

This physical intuition, backed up by the mathematics, is what allows us to sample paths according to the distribution $\mathcal{P}[p]$ without actually having to evaluate the distribution. This is why Monte Carlo works for such dynamics.
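Here is a minimal sketch of such a sampler (NumPy; the rate matrix is the same arbitrary one as in the iteration sketch above, stored so that entry $(j, i)$ is the rate of hopping from $i$ to $j$). Waiting times come from the exponential decay, and the next configuration is drawn proportionally to the $W$ entries:

```python
# Dynamical-monte-carlo-style path sampler; the rate matrix is invented.
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[0.0, 1.0, 0.0],         # W[j, i]: rate of hopping i -> j
              [2.0, 0.0, 1.0],
              [0.0, 0.5, 0.0]])

def sample_path(i0, t_max):
    t, i, path = 0.0, i0, [(0.0, i0)]
    while True:
        exit_rate = W[:, i].sum()      # -D_ii, the total rate of leaving i
        if exit_rate == 0.0:
            return path                # absorbing configuration
        t += rng.exponential(1.0 / exit_rate)    # e^{D_ii tau} waiting time
        if t >= t_max:
            return path
        i = rng.choice(len(W), p=W[:, i] / exit_rate)   # hop, weighted by W
        path.append((t, i))

# Estimate P_i(t) by averaging; compare with the expm result above.
N, t_max, hits = 20000, 2.0, np.zeros(3)
for _ in range(N):
    hits[sample_path(0, t_max)[-1][1]] += 1
print(hits / N)
```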

How does this fit with what Graham found? I think he has written out the path integral expression explicitly for the case of integer rates $W_{i j} \in \{ 0 , 1 \}$. So what I've written above should provide the proof needed. Hope that makes sense/helps. Pretty impressive that you guessed the path integral expression without knowing about it!

  • 10.
    edited July 2013

    Thanks Tomi!

I'll look at this more closely later. I was thinking that [renewal theory](http://en.wikipedia.org/wiki/Renewal_theory) should have something to do with this. And I think that once we have this, $$ \displaystyle{ P (t) = e^{D t} P (0) + \int_0^t e^{D (t-t')} W P (t') d t' } $$ it may be possible to apply the theory (not as stated on Wikipedia, but as stated on p. 145 of the book _Branching Processes_, by Athreya and Ney).

  • 11.
    edited July 2013

From _Branching Processes_. Let $G$ be a distribution on $[0,\infty)$. Let $G^{\star m}$ be its $m$-fold convolution and $G^{\star 0}=1$ for $t \geq 0$. Let $\gamma \in (0,\infty)$ be a constant. Let $\xi(t)$ be a given bounded measurable function on $[0,\infty)$. Suppose $H(t)$ is an unknown function satisfying

    $$ H (t) = \xi(t) + \gamma \int_0^t H(t-y) dG(y) \quad \quad (1)$$ Then $H(t)$ is the convolution of $\xi(t)$ with

$$ U_{\gamma}(t) = \sum_{m=0}^\infty \gamma^m G^{\star m}(t).$$ Now let's start with Tomi's

    $$ \displaystyle{ P (t) = e^{D t} P (0) + \int_0^t e^{D (t-t')} W P (t') d t' } $$ (My real motivation for starting here is that I don't understand functional integrals!)

    Let $y=t-t'$, and get

    $$ P (t) = e^{D t} P (0) + \int_0^t e^{D y} W P (t-y) d y $$ Suppose $W$ can be diagonalized as $W=S^{-1} \Lambda S$ and get

    $$ S P (t) = S e^{D t} P (0) + \int_0^t S e^{D y} S^{-1} \Lambda S P (t-y) d y $$ Put $J(t) = S P (t) $ and get

    $$ J(t) = S e^{D t} P (0) + \int_0^t S e^{D y} S^{-1} \Lambda J (t-y) d y $$ If $S$ and $D$ commute we just have $n$ one-dimensional equations:

$$ J_i(t) = [ S e^{D t} P (0) ]_i + \lambda_i \int_0^t J_i(t-y) [ e^{D_{ii} y} ] d y $$ each of which looks like (1), except that $\xi$ and $\gamma$ are now in general complex...
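As a sanity check on equation (1), here is a sketch with NumPy/SciPy of an invented test case: $G$ exponential with rate $\mu$, $\xi \equiv 1$, $\gamma = 1/2$, chosen because (1) then has the closed-form solution $H(t) = (1 - \gamma e^{-\mu(1-\gamma)t})/(1-\gamma)$. Fixed-point iteration of (1) on a grid reproduces that closed form:

```python
# Renewal equation (1) solved by fixed-point iteration; test case invented.
import numpy as np
from scipy.integrate import trapezoid

mu, gamma = 1.0, 0.5
ts = np.linspace(0.0, 5.0, 1001)
xi = np.ones_like(ts)                  # xi(t) = 1
g = mu * np.exp(-mu * ts)              # density of G, so dG(y) = g(y) dy

H = xi.copy()                          # start the iteration from xi itself
for _ in range(60):
    new = np.empty_like(H)
    for k in range(len(ts)):
        integrand = H[k::-1] * g[:k + 1]         # H(t - y) g(y) on the grid
        new[k] = xi[k] + gamma * trapezoid(integrand, ts[:k + 1])
    H = new

exact = (1.0 - gamma * np.exp(-mu * (1.0 - gamma) * ts)) / (1.0 - gamma)
print(np.max(np.abs(H - exact)))       # small: discretization error only
```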
