
# Proving (1) the law of mass action, and (2) the Poisson character of the reactions

Hello, I'm working away on my second blog article. It's still too unformed for anyone else to read, but some questions about the content have come up.

At first I was simply going to give the definition of stochastic Petri nets, give the formulas for mass action kinetics, and then present the simulator. But the good simulator algorithm uses the exponential distribution for the inter-event intervals, and its correctness is predicated on the Poisson character of the reaction events. This is a lot to just throw at a reader who may be coming from just a software background. I want this to be an effective "praxis article," which means that we have to understand what the heck we are talking about, all the way down the line, starting from the theory that supports the model, right down to the programming language technology that will implement the simulator. In the first blog article, I dug into the programming technology; here, the links that need more attention are the theoretical supports.
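For concreteness, here is a minimal sketch (in Python; the function name, rate value, and counts are made up for illustration) of the kind of simulator I have in mind, for a single transition X: A + B --> C, drawing exponential inter-event intervals:

```python
import random

def simulate_abc(a, b, c, rate, t_max, seed=0):
    """Stochastic simulation of the single transition X: A + B --> C.
    The firing propensity is rate * a * b (mass action), and the
    waiting time to the next firing is drawn from an exponential
    distribution with that rate."""
    rng = random.Random(seed)
    t, trace = 0.0, []
    while a > 0 and b > 0:
        propensity = rate * a * b
        t += rng.expovariate(propensity)   # exponential inter-event interval
        if t > t_max:
            break
        a, b, c = a - 1, b - 1, c + 1      # one firing consumes an A and a B
        trace.append((t, a, b, c))
    return trace

trace = simulate_abc(a=100, b=100, c=0, rate=0.01, t_max=10.0)
```

Each event in the trace records the firing time and the updated counts; since each firing turns one A and one B into a C, the sums a + c and b + c stay constant.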

In my notes I have written an informal yet semi-rigorous explanation of what a random process is, and what a Poisson random process is. It's written in a "popular mechanics" tone, and it won't be hard to write it up nicely for the reader.

To focus the discussion, suppose there are species A, B and C, and there is a transition X: A + B --> C. Suppose this takes place in a closed container, which is a homogeneous "soup" of the entities, and consider them to be point particles. Assume they are bouncing around randomly, colliding with each other and the walls.

Let A(t), B(t), C(t) be the number of entities of each type in the container at time t.

Now, as we know, the mass action kinetics says that the firing rate of X is proportional to A(t) * B(t), with a coefficient that is a function of Temperature, among other things. Now that I am acquainted with this material, this relationship is intuitively clear, and I can give an intuitive explanation.

Let G be a small region of the container, and T be a small time interval. Let A(G,T) be the expected number of A particles found in (G,T), and B(G,T) be the expected number of B particles found in (G,T). (By "found" I mean particles which are in G at some time point in T.) Let Z be the probability of the transition firing between a single A particle and a single B particle, given that both are found in (G,T). Then the expected number of firings in (G,T) is Z * A(G,T) * B(G,T): each combination of one of the A particles and one of the B particles found in (G,T) contributes Z to the expected number of transitions that fire there.
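To sanity-check this counting argument numerically: if each of the A-B pairs found in (G,T) fires independently with probability Z, the expected number of firings is Z times the number of pairs. A quick Monte Carlo sketch (in Python, with made-up values for Z and the particle counts):

```python
import random

rng = random.Random(42)
Z, n_a, n_b = 0.001, 50, 40   # made-up firing probability and particle counts

# Simulate many (G,T) windows; in each, every one of the n_a * n_b
# A-B pairs fires independently with probability Z.
trials = 2_000
total = 0
for _ in range(trials):
    total += sum(rng.random() < Z for _ in range(n_a * n_b))

mean_firings = total / trials
expected = Z * n_a * n_b   # approximately 2.0, matching Z * A(G,T) * B(G,T)
```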

But I would like to go further, and prove (using informal language) that the X-firings constitute a Poisson process, under the assumption that the movements of the A and the B particles are a Poisson process. And to derive the formula for the rate constant of the X-firings, which will involve the factors A(t) * B(t).

I have worked out a proof, which is more complex and nuanced than I had hoped for. I will summarize it now. But can anyone point me to a proof in the literature (online is best), summarize the idea of the standard proof, or suggest ways to simplify the following argument?

Let A-Count(G,T) be the random variable that counts the number of A particles found in (G,T) (i.e., whose world-lines intersect G x T). Under the assumption that the particles are freely colliding, it is not hard to show that A-Count is a Poisson process. Similarly for B-Count.

Now, in order to show that the X-firings constitute a Poisson process, and to identify its rate parameter, we need to further analyze the formula I gave above for the expected number of firings, Z * A(G,T) * B(G,T).

Let T = (t, t + deltaT).

Since A-Count is Poisson, we have:

A(G,T) =~ Prob(A-Count(G,T) > 0) =~ Prob(A-Count(G,T) = 1) =~ Lambda(A,G,t) * deltaT, where Lambda(A,G,t) is the rate parameter for the A Poisson process.
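As a quick numerical check on this chain of approximations (plain Python; the rate value is arbitrary):

```python
import math

lam, dT = 2.0, 1e-4   # arbitrary rate parameter and a small interval

p_positive = 1 - math.exp(-lam * dT)     # Prob(count > 0) for a Poisson count
p_one = lam * dT * math.exp(-lam * dT)   # Prob(count = 1)
linear = lam * dT                        # the first-order approximation

# All three agree to first order in dT; the errors are O((lam * dT)^2).
assert abs(p_positive - linear) < 2 * (lam * dT) ** 2
assert abs(p_one - linear) < 2 * (lam * dT) ** 2
```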

Now it is evident that Lambda(A,G,t) will be proportional to A(t), the number of A's in the system at time t, and also to a constant that is an increasing function of the mean velocity of the particles. So let's write:

Lambda(A,G,t) = A(t) * Sigma(A,t), where Sigma(A,t) includes the temperature dependency.

Putting this all together, we have the following formula for the expected number of X-firings in (G,T):

X-Firings(G,T) = Z * A(G,T) * B(G,T)

= Z * (Lambda(A,G,t) * deltaT) * (Lambda(B,G,t) * deltaT)

= Z * (A(t) * Sigma(A,t) * deltaT) * (B(t) * Sigma(B,t) * deltaT)

= Z * A(t) * B(t) * Sigma(A,t) * Sigma(B,t) * deltaT^2.

This looks promising, except we have the apparent paradox that the X-firings appear to have a quadratic dependency on deltaT -- so it doesn't look like a Poisson process at all!

The key to this riddle is the fact that the probability Z is itself a function of G and T.

Z(G,T) = probability that X fires given that there is an A in (G,T) and a B in (G,T) = E(G,T) * F(G),

where:

E(G,T) is defined to be the conditional probability that, given an A in (G,T) and a B in (G,T), this A and this B are in G at exactly the same time (for some time t in T), and

F(G) is defined to be the probability that, given an A and a B in G at exactly the same time, the transition will fire.

The point here is that for a long interval of time T, the fact that A and B are both in (G,T) leads to a low probability E(G,T) that they are actually there at the same time, and hence have a chance to react.

Now let us further assume that G is small in relation to the mean velocity of the particles, so that if a particle is present in (G,T), it will quickly zip in and out, and the actual time interval (t1,t2) for which it is present in G is a small sub-interval of T.

Under this assumption, we can show that E(G,T) is inversely proportional to the length of T. If we halve the length of T, then we will double the probability that the actual time interval for an A that is found in (G,T) will intersect with the actual time interval for a B that is found in (G,T). We've squeezed these two intervals into a T that is half the size, and so they are twice as likely to intersect. (Approximately speaking.)
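This inverse scaling is easy to check with a small Monte Carlo experiment: drop two short residence intervals uniformly at random into a window of length T and estimate the probability that they overlap; doubling T roughly halves it. (A sketch; the dwell time and trial counts are my own choices.)

```python
import random

def overlap_prob(T, dwell=0.01, trials=200_000, seed=1):
    """Estimate the probability that two intervals of length `dwell`,
    placed uniformly at random in (0, T), intersect in time."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s1 = rng.uniform(0, T - dwell)
        s2 = rng.uniform(0, T - dwell)
        if s1 < s2 + dwell and s2 < s1 + dwell:  # the intervals overlap
            hits += 1
    return hits / trials

p1, p2 = overlap_prob(T=1.0), overlap_prob(T=2.0)
```

With these parameters p1 comes out near 0.02 and p2 near 0.01, consistent with E(G,T) being proportional to 1/deltaT.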

Using this fact, we can write:

E(G,T) = (1/deltaT) * E'(G,t), for some function E' that depends only on G and t, but not the interval deltaT.

Putting these together we get:

Z(G,T) = E(G,T) * F(G) = (1/deltaT) * E'(G,t) * F(G).

Putting this into our rate formula, we have:

X-Firings(G,T) = Z(G,T) * A(G,T) * B(G,T)

= Z(G,T) * A(t) * B(t) * Sigma(A,t) * Sigma(B,t) * deltaT^2

= (1/deltaT) * E'(G,t) * F(G) * A(t) * B(t) * Sigma(A,t) * Sigma(B,t) * deltaT^2

= K(G,t) * Sigma(A,t) * Sigma(B,t) * A(t) * B(t) * deltaT

where K(G,t) = E'(G,t) * F(G).

This gives us the rate parameter for the Poisson process:

K(G,t) * Sigma(A,t) * Sigma(B,t) * A(t) * B(t)

1.

This result looks right to me.

But there is one point in the argument that I am uncomfortable about. I made the assumption that G was small in comparison to the speeds of the particles. That was how I was able to factor the (1/deltaT) out of E. But to really show that this is the rate parameter for a Poisson process, deltaT would have to pass to 0 in the limit. And the factoring of (1/deltaT) out of E breaks down as deltaT gets very small.

So what this argument shows is that, for a wide range of deltaT's that aren't "too small", the formula for X-firings above, with the given rate parameter, is correct. If you want to go down to a smaller level of deltaT's, you'd have to repeat the argument with a smaller G.

Also, it would be nice to factor out vol(G) from the term K(G,t), so that the only dependence on G is through vol(G).

Any advice or pointers? Can this be cleaned up and simplified, while still maintaining the general level of rigor that I have set out here?

Regardless of the extent to which I will be including this argument into the blog article, I'd like to get the proof clear in my mind, so that I won't feel like I'm fuzzing over the issue when I do write the blog article.

Thanks!

2.

David wrote:

> I made the assumption that G was small in comparison to the speeds of the particles.

That sounds reasonable; people often say the law of mass action holds when the chemicals are 'well mixed', and this seems somehow related. If the molecules aren't zipping around enough, so each mainly interacts with its 'neighbors', the law of mass action won't hold.

I'm afraid I'm too distracted to tackle the issue you're actually concerned with right now...

3.

I want to talk more about this stuff! I have to travel tomorrow, but return on Saturday. I plan to look at this over the weekend! Somehow, I'm very busy right now as everything seems to be due this November :(

4.
edited October 2012

By the way, I hope you do the obvious thing and read what turns up when you Google [derivation law of mass action](https://www.google.com/search?q=derivation+law+of+mass+action). I don't instantly see a 'straightforward' derivation of the sort you're attempting - I see more people trying to derive it from laws of thermodynamics, which seems interesting but peculiar. I see a book:

* Andrei B. Koudriavtsev, Reginald F. Jameson, Wolfgang Linert, [The Law of Mass Action](http://www.amazon.com/The-Law-Mass-Action-ebook/dp/B000QXD8IM),

available on Kindle for a whopping sum, which seems to contain at least one derivation and would definitely be worth looking at. I also saw (but don't see now) a derivation using quantum mechanics.

What you're trying to do is so nice and simple that I feel it _must_ have been done, maybe even by Boltzmann, but I don't see it yet!

5.
edited November 2012

Hi, I have a revised proof, which is simpler, and avoids the need to assume that the region G is small.

Again, assume species A, B, and C, and a transition X: A + B --> C.

Let G be a region of the container, and T = (t, t + deltaT) be an interval of time.

Let's view it from the perspective of S = G x T as a region of space-time.

Let a and b be individual particles whose world-lines intersect S. Let Ta be the sub-interval of T, consisting of the times that a is inside of S, and Tb be the same thing for particle b.

Assumption: The probability of the reaction taking place in S between a and b is equal to the length of the intersection of Ta and Tb, times a constant factor K(G) that depends, among other things, on G.

Let Amount(A,S) be the sum of length(Ta) over all particles a. Consider this to be in units of A-particle-seconds.

Amount(A,S) is clearly proportional to the number A(t) of A particles in the container during the (small) interval T.

I now claim that the reaction rate is proportional to Amount(A,S) * Amount(B,S).

Now partition T into many sub-intervals of length dt. Accordingly, partition all of the world-lines of the particles into small segments (fragments), each of which has a duration of dt. Let Fa denote one of these segments of an a-world line, and the same for Fb. Any reaction between specific a and b particles will take place between an Fa and an Fb that overlap in time, and are close in space (a near-crossing).

Let P(deltaT) be the probability that a reaction will take place between a randomly chosen Fa and Fb in S.

Then the expected number of reactions in S will equal:

P(deltaT) * Amount(A,S) * Amount(B,S) = P(deltaT) * A(t) * B(t) * K,

Now it is not hard to show that P(deltaT) is proportional to deltaT = length(T). Once we have shown that, it follows that the reactions are a Poisson process, with rate parameter that is proportional to A(t) * B(t) -- i.e., we have proven the law of mass action.

So let's divide T in 2 (while leaving the much smaller dt unchanged). Let T1 be the first half of T, and T2 be the second half of T.

Let Fa, Fb be randomly chosen fragments in S = G x T = G x T1 union G x T2.

If Fa is in T1 and Fb is in T2, then they have no time overlap, and there is zero chance that they will react.

So the probability that they will react in T is equal to the probability that they will react in T1, plus the probability that they will react in T2.

I.e., P(deltaT) = 2 * P(deltaT / 2).

That proves that P is proportional to deltaT, and so we are done.

Now, it would be good to find out where this kind of direct proof has already been done. I did try web searches. John, I will try to get a hold of that book you cited.

Aside from that, can John, Jacob, or anyone else comment on the validity of the proof that I just gave?

I'd like to keep the blog articles rolling along, so I don't want to dwell on this point for too long. If nobody here spots any problems with the argument, I could just make the statement: here is a relatively simple way to understand why the law of mass action is true.

Thanks

6.
edited November 2012

I wrote:

> Then the expected number of reactions in S will equal:
>
> P(deltaT) * Amount(A,S) * Amount(B,S) = P(deltaT) * A(t) * B(t) * K,
>
> Now it is not hard to show that P(deltaT) is proportional to deltaT = length(T). Once we have shown that, it follows that the reactions are a Poisson process, with rate parameter that is proportional to A(t) * B(t) -- i.e., we have proven the law of mass action.

I got this backwards, but it is not hard to fix.

We want to show that the rate, i.e., the product P(deltaT) * Amount(A,S) * Amount(B,S), is proportional to deltaT = length(T). Earlier we showed that Amount(A,S) and Amount(B,S) are both proportional to deltaT.

Therefore we need to show that P(deltaT) is _inversely_ proportional to deltaT.

Here goes:

Again, divide T in 2 (while leaving the much smaller dt unchanged). Let T1 be the first half of T, and T2 be the second half of T.

Let Fa, Fb be randomly chosen fragments in S = G x T = G x T1 union G x T2.

If Fa is in T1 and Fb is in T2, then they have no time overlap, and there is zero chance that they will react.

There is a 50% chance that either (Fa is in T1 and Fb is in T2), or (Fa is in T2 and Fb is in T1), in which case they have zero probability of reacting.

The other 50% of the time, they will either both be in T1, or both be in T2. In this case, the probability of them reacting is P(deltaT / 2).

Therefore the overall probability of them reacting is 50% * P(deltaT / 2), i.e.,

P(deltaT) = 0.5 * P(deltaT / 2), i.e.,

P(deltaT / 2) = 2 * P(deltaT),

which says that P is inversely proportional to deltaT, and we are done.
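As a tiny check, any function of the form P(deltaT) = c / deltaT does satisfy this functional equation (c is an arbitrary constant here):

```python
def P(deltaT, c=3.0):
    """A candidate solution P(deltaT) = c / deltaT of the functional
    equation P(deltaT) = 0.5 * P(deltaT / 2)."""
    return c / deltaT

# The halving relation holds at every scale, as the argument requires.
for dT in (0.1, 1.0, 8.0):
    assert abs(P(dT) - 0.5 * P(dT / 2)) < 1e-12
```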

7.
edited November 2012

I have a third proof, which is better than the ones I've given here. I'm putting it into the blog article, and I will post a message when it is reviewable. The plot got thicker than I had imagined it would -- so it's been slower going than I had hoped.

8.

Hi David, I'm sorry I have not yet had a chance to review your derivation as we discussed on Tuesday. David said:

> I have a third proof which is better than the ones I've given here. I'm putting it into the blog article, for which I will post a message when it is reviewable.

Given that, I'll plan to go through the details once you've posted them on the wiki. Maybe by then you'll be so confident in your proof that you won't care anymore if I review it, but I'll plan to read it anyway because I think it's interesting.

9.

Thanks!

10.
edited November 2012

I came across the first five chapters of a book on [physical chemistry](http://job-stiftung.de/pdf/buch/physical_chemistry_five_chapters.pdf), by Georg Job and Regina Ruffler. It starts with a creative approach to explaining entropy, describing it in "phenomenological" terms as a kind of substance (like the old view of heat as a substance), along with empirical ways to measure it. This is not fully satisfying, because it leaves me with the question of what this substance called entropy _is_, but it is creative, and the writing has a literary quality. It also contains nice descriptions of things like how a refrigerator works.

It also has chapters on the "chemical potential" (which I didn't get on the first skim of it), and on the law of mass action.

I'm looking for good references to introductory texts and online materials on physical chemistry, so if anyone has any, I would be appreciative. Thanks.

11.
edited November 2012

You might get quite a lot from my friend Mark Leach's [metasynthesis site](http://www.metasynthesis.co.uk). It carries an endorsement from Hoffmann, who won a Nobel prize for Frontier Molecular Orbital Theory (FMO). hth

12.

Bingo:

* Daniel T. Gillespie, [A rigorous derivation of the chemical master equation](http://citeseerx.ist.psu.edu/viewdoc/summary/?doi=10.1.1.159.5220), Physica A 188 (1992), 404-425.

Just found this today. So for one of my upcoming blog articles I ended up recreating some of the key points from this work from 1992. But it was a good exercise, and I'm happy with the form of the argument that I will give.

Also I will be talking about the limitations of the law of mass action, which is an approximation that loses validity as concentrations increase. I saw this stated in a paper, though I haven't yet found an explanation for it in the literature. In my assessment it is due to the diameter of the molecules. First there is the obvious issue that the finite size of the molecules puts an absolute limit on the concentrations. But let's look at what happens when concentrations are high, but the container is not fully packed. Suppose the reaction is between species A and B. Now, doubling the concentration of A does in fact double the number of expected crossings between A and B particles. Specifically, I look at "epsilon-crossings," meaning near-crossings where the two particles come within a distance of epsilon from each other. Here, the epsilon of interest will be the radius of A plus the radius of B.

But the A molecules are "competing" to react with the B molecules, and so the presence of more A molecules reduces the conditional probability that any given epsilon crossing will actually react. Hence the dependence of the reaction rate on each of the species concentrations will be sub-linear.

As we know, there is also the breakdown in the law, at low concentrations, for reactions that take multiple inputs from the same species. There, the correction is to use the falling powers, rather than the regular powers, for the concentrations that appear more than once in the input. Stochastic behavior at low concentrations is also of practical interest. One paper I read pointed this out for biochemical reaction networks. There, there can be relatively few molecules -- but large ones -- participating in the communication pathways, over long durations. One figure quoted was just a handful of codons being transcribed per second.
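To illustrate the falling-power correction concretely, take a reaction with two inputs from the same species, say X: A + A --> C (a sketch; the helper name is mine):

```python
def falling_power(n, k):
    """n * (n-1) * ... * (n-k+1): the number of ordered k-tuples
    of distinct molecules drawn from n."""
    result = 1
    for i in range(k):
        result *= n - i
    return result

# At high counts the falling power is close to the plain power...
assert abs(falling_power(10_000, 2) / 10_000**2 - 1) < 1e-3
# ...but at low counts they diverge sharply: with 2 molecules there is
# exactly 1 unordered reactant pair, while the plain power suggests 2.
assert falling_power(2, 2) == 2   # ordered pairs: (a1,a2), (a2,a1)
assert 2**2 == 4
```

At large counts the two agree to within a fraction of a percent, which is why the distinction only shows up at low concentrations.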

In a wiki page, I'm going to start an annotated bibliography on Petri nets. This deserves its own page, but we can link to it from Recommended Reading. If such a page already exists, please let me know. If I haven't heard in a few days, I'll create it.

13.

I think the annotated bibliography on Petri nets deserves to be near the bottom of Petri net, where we already have a bibliography! If it gets enormous we can break it off as a page on its own.

I'm really glad you want to add more references, and annotation is crucial since a huge undigested pile of references is not very useful.

So, how about that blog article?

14.
edited December 2012

Ok, this week I will get it into a reviewable state -- I will let you know. I have been working on it, but my attention is diverted by... Life. We just had our house wired by an AV company (for sound and LAN), but they didn't configure the AV wireless network properly. So I've spent the last couple of days doing network troubleshooting... it is interesting to learn about, though the benchmarking gets to be a grind. Thanks for reminding me about applications that sit above the transport layer :)

The Wilkinson book that you mentioned on the blog, "Stochastic Modelling for Systems Biology," just arrived. It's a great read.

I will start adding bibliographic notes to the Petri net page, as you suggest.

15.

Dave wrote:

> As we know, there is also the breakdown in the law, at low concentrations, for reactions that take multiple inputs from the same species. There, the correction is to use the falling powers, rather than the regular power, for the concentrations that appear more than once in the input. Stochastic behavior at low concentrations is also of practical interest. One paper I read pointed this out, for biochemical reaction networks. There, there can be relatively few molecules – but large ones – that are participating in the communication pathways, over long durations. There was a quote of just a handful of codons being transcribed per second.

Note that for looking at low concentration behaviour you might be interested in

[A symbolic computational approach to a problem involving multivariate Poisson distributions](http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/mvp.pdf)
