Home › Azimuth Project › Azimuth Blog


This article is still at a rough stage. So far, what is there is just the introduction:

In it I will give the motivation and derivation for the rate equation, and then proceed to work out some examples. For nets involving two species we can plot the field of state-derivative vectors right on the page. I'll show some graphs of the vector streamlines, where we should be able to see the equilibrium points. This will motivate a subsequent analysis and classification of the types of equilibrium points.
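To make this concrete ahead of time, here is a minimal Python sketch -- the rate constants, initial condition, and step size are all made up for illustration, not taken from the article. It forward-Euler-integrates the rate equation for the dimerization net M + M --> D, D --> M + M, and the trajectory settles onto an equilibrium point where the two firing rates balance:

```python
def deriv(m, d, k1=1.0, k2=1.0):
    # Mass-action rate equation for M + M -> D (rate k1), D -> M + M (rate k2):
    # each forward firing consumes 2 M and produces 1 D, and vice versa.
    forward = k1 * m * m
    backward = k2 * d
    return (-2.0 * forward + 2.0 * backward, forward - backward)

def simulate(m, d, dt=1e-3, steps=200_000):
    """Crude forward-Euler integration -- just enough to watch a
    trajectory follow the vector field into an equilibrium point."""
    for _ in range(steps):
        dm, dd = deriv(m, d)
        m, d = m + dt * dm, d + dt * dd
    return m, d

m, d = simulate(2.0, 0.0)
# At equilibrium the two firing rates balance: k1 * m^2 = k2 * d,
# and the quantity m + 2d is conserved along the whole trajectory.
print(m, d)
```

Plotting many such trajectories from different initial conditions is exactly what the streamline pictures will show.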

After the conclusion, there will be a section of questions/problems/challenges. That way all readers who follow the basic thread of the ideas will have the satisfaction of reaching the conclusions, and then going into the more advanced and/or participatory area will be optional.

## Comments

I'm still looking for a good description for the last section. It's a hybrid between a "problem set" and an "activity section." But I don't want it to sound too dry.

Here are some that I have thought of so far:

By the end of the blog, we will have seen that the rate equation is governed by a function mapping the state of the net to a derivative of the state. This function always takes the special form of a vector of polynomials, where there is one variable for each place in the net. A little more concretely, it can be put as a mapping from each of the variables to a polynomial over the variables. In this context, this is what we mean by a "polynomial vector."

Problem: Give a purely algebraic characterization of the class C of polynomial vectors that are induced by the entire class of Petri nets.

Problem: For each polynomial vector V in C, give a construction for a Petri net that gives rise to V.

Activity: Choose a chemical reaction of interest. Describe its Petri net. Produce the rate equation for it. Find some or all of the equilibrium points.
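For concreteness, here is one possible way to set up the "produce the rate equation" step in code -- a hypothetical Python sketch, where the transition encoding and the mass-action convention (rate constant times the product of input concentrations, with multiplicities as exponents) are my own choices, not something fixed by the article:

```python
from collections import Counter

def rate_equation(transitions):
    """Return the function mapping a state (dict: place -> concentration)
    to the state-derivative vector, under mass-action kinetics.

    Each transition is a triple (rate_constant, inputs, outputs), with
    inputs and outputs given as Counters (multisets of places). Assumes
    every place appears as a key in the state dict."""
    def derivative(state):
        ddt = {place: 0.0 for place in state}
        for k, inputs, outputs in transitions:
            # firing rate: k times the product of input concentrations,
            # with each place's multiplicity as its exponent
            rate = k
            for place, mult in inputs.items():
                rate *= state[place] ** mult
            # each firing consumes its inputs and produces its outputs
            # (a Counter returns 0 for places it doesn't contain)
            for place in set(inputs) | set(outputs):
                ddt[place] += rate * (outputs[place] - inputs[place])
        return ddt
    return derivative

# Dimerization: M + M -> D at rate 1, D -> M + M at rate 2
net = [(1.0, Counter({'M': 2}), Counter({'D': 1})),
       (2.0, Counter({'D': 1}), Counter({'M': 2}))]
print(rate_equation(net)({'M': 1.0, 'D': 2.0}))  # {'M': 6.0, 'D': -3.0}
```

The returned function is the "polynomial vector" in executable form: evaluating it at a state gives the state derivative.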

Now that you know the general topic of this blog, I encourage anyone here to think of some good challenges or activities. If your problem fits in with the theme of the article series, I'll include it and attribute it to you.

`I'm still looking for a good description for the last section. It's a hybrid between a "problem set" and an "activity section." But I don't want it to sound too dry. Here are some that I have thought of so far: * By the end of the blog, we will have seen that the rate equation is governed by a function mapping the state of the net to a derivative of the state. This function always takes the special form of a vector of polynomials, where there is one variable for each place in the net. A little more concretely, it can be put as a mapping from each of the variables to a polynomial over the variables. In this context, this is what we mean by a "polynomial vector." * Problem: Give a purely algebraic characterization of the class C of polynomial vectors that are induced by the entire class of Petri nets. * Problem: For each polynomial vector V in C, give a construction for a Petri net that gives rise to V. * Activity: Choose a chemical reaction of interest. Describe its Petri net. Produce the rate equation for it. Find some or all of the equilibrium points. Now that you know the general topic of this blog, I encourage anyone here to think of some good challenges or activities. If your problem fits in with the theme of the article series, I'll include it and attribute it to you.`

[Mikko Kiviranta] came up with an [LTSpice implementation](http://forum.azimuthproject.org/discussion/1100/hello-from-mikko-kiviranta/) (item #10) of a [Chua circuit](http://en.wikipedia.org/wiki/Chua%27s_circuit) which transitions from bistable to chaotic states.

Unfortunately this post hasn't made it to a wiki library page yet.

[Cubic autocatalytic reactions](http://www.chemsoft.ch/kin/cubcat.htm) show similarly interesting non-linear behaviours.

[Marchant T. R., Cubic autocatalytic reaction-diffusion equations: semi-analytic method (2001)](http://www.jstor.org/discover/10.2307/3067396?uid=3738032&uid=2129&uid=2&uid=70&uid=4&sid=21102108228841)

Unfortunately [this paper](http://www1.chem.leeds.ac.uk//People/SKS/sks_research/own_pdf_files/1998/razvan_pre.pdf) is paywalled, as is:

- Christian Kaas-Petersen, John K. McGarry and Stephen K. Scott, A two-step model for cubic autocatalysis in an open system, J. Chem. Soc., Faraday Trans. 2, 1989, 85, 1831–1835.

I think it would be great to be able to compare Petri net representations in these electrical and chemical domains, and they might provide partly reusable examples for those of us who haven't mastered the master equations yet.

Hth


Hi Jim, this all sounds intriguing. Thanks.

In terms of the present article, I'm looking for some "bite sized" extension problems, for a reader who has just absorbed the rate equation. They have a picture of the deterministic evolution of the state, and some geometric intuition into the streamlines and their convergence to (attractor) equilibrium points.

Here are the concepts that you are introducing here:

- LTSpice -- electrical circuit representation of Petri nets
- Bistable behavior
- Chaotic behavior
- The Chua circuit
- Comparison of Petri net representations in electrical and chemical domains

Can you take a crack at introducing any of these concepts, in, say, a paragraph, and then ending it with a notable observation, question, or study problem? In particular, can you define bistability, or chaotic behavior, in a succinct way, for the aforementioned reader?

Alternatively, each of these topics seems worthy of its own little blog article -- or you could write one overall blog that ties them together. If you were interested in such a project, and it would help, you could post a topic outline of any such blogs, and I would be glad to give suggestions about possible writing strategies for that content.

Cheers


Mikko Kiviranta knows about these things and came up with the LTSpice code within a day. I don't know how you might contact him.

I couldn't write anything better than what it says in Wikipedia, and I will be busy on my FSim Fay toy simulation library for the next few weeks.

Eventually when I've ingested the papers I will try and produce some literate code to demonstrate cubic autocatalysis if it hasn't already been done.

Thanks for this series of excellent blog posts. I'm enjoying them very much. Sorry not to be of much help.


No problem, I only meant it as something that might interest you for yourself -- but you've got your own thing going on with the FSim Fay toy simulation library. What is this?

Thanks for the good word -- Cheers.


These models certainly do interest me but I put Chua and Cubic-autocatalysis as challenges.

A couple of simple starter Petri net representations in electronics might be a D flip-flop, which I hope most programmers understand, and/or an RS latch.

I've lifted some RSLatch code in Haskell as a test for FSim.

I've started an unannounced Experiments page for Fay and FSim [here](http://www.azimuthproject.org/azimuth/show/Experiments+in+the+Fay+EDSL+for+Javascript).

The Fay compiler automagically generates a javascript implementation with html files and other js libs from a proper subset of Haskell. I think it's an excellent solution to "the javascript problem". The code to generate both server and client from a single language is being fettled as we speak.

I'm also supposed to add stuff about Petri net software to the wiki Petri net page. So far I've failed to get most of them working but I'll need help from a few people to try out and review the packages we've found so far.

I've said I'll try and hack out the SimpleHCPN code from the Haskell Coloured Petri Net package described [here](http://www.cs.kent.ac.uk/people/staff/cr3/publications/HCPN.html). It's been moribund for years, but the source code is downloadable from Claus Reinke's [home page](http://www.cs.kent.ac.uk/people/staff/cr3).

You might find the paper useful as a design document.


David Tanzer wrote:

> I'm still looking for a good description for the last section. It's a hybrid between a "problem set" and an "activity section." But I don't want it to sound too dry.

I always use the term *Puzzle* for these things. I usually try to say if I know the answer or not.

> Problem: Give a purely algebraic characterization of the class C of polynomial vectors that are induced by the entire class of Petri nets.

Did you know we have the answer to this on the Azimuth Wiki? The answer is pretty nice:

- [Existence and uniqueness of chemical reaction networks with given rate equations](http://www.azimuthproject.org/azimuth/show/Chemical+reaction+network#Existence)

It's in the page on chemical reaction networks, but there's a simple way to translate these from or to stochastic Petri nets, so it actually solves the problem you raised!

And thanks for bringing this up, because I meant to include this information in that book I'm writing, and I haven't done so yet.

I'm sorry to have been so quiet here on the Forum - it's because I've been insanely busy elsewhere.


John, that is a good answer. Thanks for pointing me to the wiki page.

I don't have access to the reference cited there for the proof:

- Hárs, V.; Tóth, J., On the inverse problem of reaction kinetics, in: Qualitative Theory of Differential Equations, North Holland, 1981, pp. 363–379.

Here is the statement from the Hangos paper (taken from the wiki page):

> In the most general case the dynamics of a reaction kinetic system in the concentration (state) space can be described with the following set of ordinary differential equations (ODEs) that originate from the dynamic conservation balance equations constructed for component masses:
>
> $$ \frac{d x_i}{d t} = F_i(x) = -x_i f_i(x) + g_i(x) \qquad i = 1, \dots, n $$
>
> where $f_i$ and $g_i$ are non-negative polynomials, i.e., polynomials with positive coefficients (...) It can be shown that a model (of this type) can always be realized with a reaction kinetic system obeying the mass action law (17).

Here I will attempt a proof.

Let $M$ be a multi-set of the variables $u_1$, ..., $u_n$. $M$ corresponds to a monomial. Let $k_i$ be the number of occurrences of $u_i$ in $M$. Then let $Z(M)$ be the following "no-op" transition: $Z(M)$ takes $k_i$ inputs from $u_i$, and sends $k_i$ outputs to $u_i$. So each firing of $Z(M)$ leaves the state of the net unchanged. But even though it leaves the state unchanged, it fires at a rate equal to the monomial for M.

Assume that the rate constant for $Z(M)$ is 1.

Now let $Z(M,i)$ be the result of augmenting $Z(M)$ with one additional output to $u_i$. Then the rate equation for $Z(M,i)$ is:

$$\frac{d u_i}{d t} = M$$

Then by taking positive "linear combinations" of these no-op transitions -- which means using the rate coefficients as the scalars, and "adding" Petri nets by superposing them -- we can build any net whose rate equation is of the form:

$$\frac{d u_i}{d t} = g_i$$

where $g_i$ is a polynomial with positive coefficients. We can do this independently for each of the variables, and superpose all the Petri nets, to get a general rate equation involving polynomials with positive coefficients for all of the variables.

But it is claimed that the range of polynomials with negative coefficients is more restricted.

Let $W(M,i)$ be the result of removing one of the outputs from $Z(M)$ to $u_i$. This is only defined if $u_i$ is in $M$. Then the rate equation for $W(M,i)$ is:

$$\frac{d u_i}{d t} = -M = -u_i \cdot M_2$$

where $M_2$ is the result of removing $u_i$ from $M$.

Taking all linear combinations of the $W(M,i)$ then yields all the negative polynomials included in their claim. Finally by superposing various positive-polynomial nets with such negative polynomial nets, we get a construction for each of the polynomials in the claimed class.

The remaining piece is to explain why these are the only polynomials that are realizable by Petri nets (with mass action kinetics).

Suppose that in the rate equations for a Petri net, we have an equation

$$\frac{d u_i}{d t} = N + \cdots$$

where $N$ is the product of a monomial with a negative scalar. Assume that the rate equations are kept in their "canonical form" that reflects the structure of the Petri net. Then $N$ must have arisen from some transition $T$, and since the rate constants are positive, it must be the case that $u_i$ is an input to $T$ -- that is the only way that $T$ can have the effect of pulling $u_i$ downwards. Hence $u_i$ must be a factor in any negative monomial in the polynomial for $\frac{d u_i}{d t}$.
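The sign condition in this last step can also be checked mechanically. Below is a hypothetical Python sketch -- the polynomial encoding (a dict from exponent tuples to coefficients) is my own choice. It builds the mass-action rate polynomials directly from the net structure and asserts that every negative monomial in $\frac{d u_i}{d t}$ has $u_i$ as a factor:

```python
from collections import defaultdict

def rate_polynomials(transitions, species):
    """Build the rate-equation polynomial for each species as a dict
    mapping a monomial (tuple of exponents, one per species) to its
    coefficient, then check the sign condition argued above: every
    negative monomial in du_i/dt must contain u_i as a factor."""
    polys = {s: defaultdict(float) for s in species}
    for k, inputs, outputs in transitions:
        # mass action: the transition fires at k times the monomial
        # whose exponents are the input multiplicities
        monomial = tuple(inputs.get(s, 0) for s in species)
        for s in species:
            change = outputs.get(s, 0) - inputs.get(s, 0)
            if change:
                polys[s][monomial] += k * change
    for i, s in enumerate(species):
        for monomial, coeff in polys[s].items():
            if coeff < 0:
                assert monomial[i] >= 1, "negative term lacking " + s
    return {s: dict(p) for s, p in polys.items()}

# Dimerization again: du_M/dt = -2 k1 M^2 + 2 k2 D, du_D/dt = k1 M^2 - k2 D
print(rate_polynomials(
    [(1.0, {'M': 2}, {'D': 1}), (2.0, {'D': 1}, {'M': 2})], ('M', 'D')))
```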


Your proof looks correct to me! It's a nice result and I need to remember to include it in the book. If you want to blog about it at some point, that would be great. For example: you could feature it as a puzzle and present the solution yourself only if nobody else can do it.


Cool, thanks. I will include it as one of the puzzles/problems at the end of the current blog article, on the rate equations.

Here are some types of problems that can appear in such a section:

Exercise. Answer is known, and the solution can be found in a straight-ahead manner. But it is good to do!

Fundamental exercise. An exercise which teaches a fundamental principle.

Problem. Answer is known, but solution requires more creative thought. Problems can be (subjectively) classified by how advanced they are.

Fundamental problem. Teaches a fundamental principle.

Question. Something that we don't know the answer to, but the answer may be known to science. (Taking "science" in the broad sense which includes math.)

Open question. One that has not yet been answered by science.

Major open question. Something that is really needed by science. E.g. a Millennium Prize problem.


Fundamental problem. Level = basic.

Consider a Petri net involving monomers M and dimers D, along with the reaction M + M --> D and its opposite D --> M + M. (Sorry it is too late at night for me to format this nicely -- and I still haven't made lunch for the kids.) This reaction net has a conserved quantity: given any initial condition, then no matter how the reactions take place from then on, the value M + 2D remains invariant.

For the (unrealistic) synthesis and dissociation of water, 2H + O --> H2O, and H2O --> 2H + O, there are two conserved quantities, one of which expresses the conservation of hydrogen atoms, and one which expresses the conservation of oxygen atoms.

Now suppose that you didn't know the internal structure of H2O, and it was just represented by the symbol W.

Then the reaction pair would be written 2H + O --> W, and W --> 2H + O.

The two conserved quantities are then written as follows:

- H + 2W -- conservation of hydrogen atoms
- O + W -- conservation of oxygen atoms

What would be your method for finding these conserved quantities?

By way of contrast, here is a reaction that does not have any conserved quantities: A --> 2A. (Other than the trivial ones, like the constant 0.)

General statement of the problem. Suppose we are given a set of transitions that comprise a Petri net. Find a systematic method for determining the set of all conserved quantities.

The answer is no rocket science, but is a good application of linear algebra, and it teaches a fundamental principle.
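Here is one way that systematic method could look -- a hedged Python sketch of the linear-algebra answer hinted at above, under my own convention that a conserved quantity is a coefficient vector orthogonal to every transition's net change vector; the null space is computed by plain Gauss-Jordan elimination over the rationals:

```python
from fractions import Fraction

def conserved_quantities(transitions, species):
    """Basis for the linear conserved quantities of a Petri net.

    transitions: list of (inputs, outputs) dicts mapping species -> count.
    A coefficient vector c is conserved exactly when c . delta = 0 for
    every transition's net change vector delta, so we compute the null
    space of the matrix whose rows are those change vectors."""
    n = len(species)
    mat = [[Fraction(outs.get(s, 0) - ins.get(s, 0)) for s in species]
           for ins, outs in transitions]
    # Gauss-Jordan elimination over the rationals (exact arithmetic)
    pivots, r = [], 0
    for c in range(n):
        row = next((i for i in range(r, len(mat)) if mat[i][c] != 0), None)
        if row is None:
            continue
        mat[r], mat[row] = mat[row], mat[r]
        mat[r] = [x / mat[r][c] for x in mat[r]]
        for i in range(len(mat)):
            if i != r and mat[i][c] != 0:
                f = mat[i][c]
                mat[i] = [a - f * b for a, b in zip(mat[i], mat[r])]
        pivots.append(c)
        r += 1
    # one basis vector per non-pivot ("free") species column
    basis = []
    for c in range(n):
        if c not in pivots:
            v = [Fraction(0)] * n
            v[c] = Fraction(1)
            for i, p in enumerate(pivots):
                v[p] = -mat[i][c]
            basis.append(v)
    return basis

# M + M -> D and its reverse: one conserved quantity, M/2 + D
# (any scalar multiple, e.g. M + 2D, is of course also conserved)
print(conserved_quantities(
    [({'M': 2}, {'D': 1}), ({'D': 1}, {'M': 2})], ['M', 'D']))
```

For the unrealistic-water net the same routine returns a two-dimensional basis spanning the hydrogen and oxygen conservation laws, and for A --> 2A it returns the empty basis, matching the examples above.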


Let me know when your next article is ready for a final check before publication, David!

I love conserved quantities, so I like your exercise. This exercise was especially poignant back before anyone was able to isolate the element fluorine (because it's so reactive). Even before it was isolated, they knew it existed, because of the conservation law it created!

The example of a diatomic gas happens to be the one studied in section 4 of my new paper with Brendan:

- [Quantum techniques for studying equilibrium in chemical reaction networks](http://math.ucr.edu/home/baez/ACK.pdf)

We use the conservation law to get new equilibrium solutions of the master equation from old ones.


Thanks for the reference, that's a nice paper you guys wrote.

I'm a few weeks away from having a readable draft of the next article -- slowed down but not forgotten!


Okay, great! Has it been a few weeks yet?


Thanks for the reminder, it has been a few weeks. But it's been an atheoretical few weeks for me. Though now I'm starting to wake up again, so it's a new few weeks away :)

Cheers


I just posted a rough draft of the introduction to this blog article on the rate equation:

- [[Blog - Petri net programming (part 4)]]

In writing about the continuous deterministic model for Petri nets -- i.e. the setting for the rate equation -- I found it necessary to give some intuitive justification for this model, since in the previous article I talked about the *stochastic* model for Petri nets.

I will be doing another editing pass for all the wording issues, so it is mainly ready for a review of the content, not the form.

John, I am really glad that you just published [Quantum techniques for reaction networks](http://math.ucr.edu/home/baez/reaction.pdf), because this gives a rigorous and quantitative explanation for this passage from the stochastic to the deterministic models. It will be a good reference to include in the blog, for further study by the readers. I am studying it now. Thanks.


Your introduction looks nice! Since I'm a visual person, I find myself wanting a picture of the apparatus you describe. And since I'm a mathematical person, I find myself wanting to see equations. "What's really going on here? How much of this verbal description really shows up in the model?" But I imagine the equations will be what comes next... so, it's good that you're making me want to see them.

If you find [Quantum techniques for reaction networks](http://math.ucr.edu/home/baez/reaction.pdf) hard to understand, please let me know. It might be hard because the target audience is people familiar with Fock space, annihilation and creation operators, and maybe some other math. But none of that knowledge is really *necessary, in principle* to understand this paper. So I'll be interested to see if it makes sense to you, and I'll be glad to explain stuff that doesn't make sense.

Thanks for the good word. I agree a picture would be a nice enhancement, though I'm pretty much at a loss when it comes to drawing. At least to me, it sounds like a tricky contraption to draw. Any artists out there?

You're right the next step in the article will contain the equations -- for a straight-ahead presentation of the rate equation, first for a single transition, and then for multiple transitions. I'm not prepared to derive anything about large-number limits in this programmer-oriented introduction to the rate equation. The reason why I introduced this apparatus was to make it plausible that the continuous deterministic model, which is much simpler than the stochastic model, really could be a limiting form of the stochastic model. In a nutshell, this intuition can be captured by the idea that the movement of sand through a pipe, which is a stochastic process, can be pictured as an approximation to a fluid flow.
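That sand-through-a-pipe picture can be made concrete with a simulation. Here is a hedged sketch -- Gillespie's direct method for the dimerization net, with the token-count firing rates chosen by me as the usual falling-factorial convention, not something from the article. Individual runs jitter stochastically around the smooth trajectory of the rate equation, while the total monomer count M + 2D is conserved exactly along every run:

```python
import random

def gillespie(m, d, k1, k2, t_max, rng):
    """One run of the stochastic model for M + M -> D, D -> M + M,
    via Gillespie's direct method. m, d are token counts; a transition
    fires at its rate constant times the falling factorial counting
    the ways to choose its input tokens."""
    t = 0.0
    while True:
        r1 = k1 * m * (m - 1)        # ordered ways to grab two M tokens
        r2 = k2 * d
        total = r1 + r2
        if total == 0:
            return m, d              # dead state: nothing can fire
        t += rng.expovariate(total)  # exponential waiting time
        if t > t_max:
            return m, d
        if rng.random() * total < r1:
            m, d = m - 2, d + 1      # dimerization fires
        else:
            m, d = m + 2, d - 1      # dissociation fires

rng = random.Random(0)
runs = [gillespie(100, 0, 0.01, 1.0, 10.0, rng) for _ in range(200)]
# Every run conserves the total monomer count M + 2D exactly.
assert all(m + 2 * d == 100 for m, d in runs)
avg_m = sum(m for m, _ in runs) / len(runs)
print(avg_m)
```

Averaging many runs and overlaying the deterministic trajectory is one way to make the "fluid limit" intuition visible to the reader.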

To my regret, my Azimuth time has been reduced due to some family problems and a back injury that I am working through. In any case my back is getting better due to the miracle of cortisone and physical therapy. I'm going to try to wake up the blogging by adding it to my physical therapy regime :)


Hi David,

I have an interesting ebook about injuries which talks a lot about the spine. Some of the stuff in there I had no idea about, such as the distance between vertebrae being important for spine health, and being controlled in a non-trivial way by hydration levels (it argues that most people are probably not even close to being properly hydrated, and that it takes months to get there!). In addition, it lists foods that reduce or promote inflammation of the connective tissues of joints.

I went through this book because on a trip to Japan I got a funky knee injury from doing martial arts. I had to keep going, so I wanted to change my diet and see if that helped speed things along. It seemed to.

Come to think of it, it was years ago and the knee is totally recovered now. I did many exercises in the book for some time to make the knee stronger. It seemed helpful. If you or anyone else would like the ebook, drop me an email and I'll get it over to you! Jacob dot biamonte at qubit dot org.


David wrote:

> I'm not prepared to derive anything about large-number limits in this programmer-oriented introduction to the rate equation.

That makes sense. It's actually surprisingly hard to do it in a nice way. I'm writing a series of blog posts on the large-number limit, [starting here](https://johncarlosbaez.wordpress.com/2013/07/01/poisson-brackets-for-reaction-networks-2/), since I had what seems like a promising idea about them. But getting the idea to pan out has proved harder than I expected.

> To my regret, my Azimuth time has been reduced due to some family problems and a back injury that I am working through.

Ouch! Good luck with all these problems.

> I'm going to try to wake up the blogging by adding it to my physical therapy regime :)

Yes, blogging works wonders for your health. (Not.)

Hi Dave,

Have a regretful welcome to the Azimuth Back Club, where you're allowed to moan at anybody else in it in the sure knowledge that they know what it feels like and their mirror neurons trigger.

Have you had any ideas about a front end for your JavaScript Petri net simulator?

I don't know whether I reported that I'd discussed with [[John Baez]], [[Dave Tweed]] and [[Glyn Adgie]] the feasibility of hacking the SimpleHCPN.hs out of Claus Reinke's [HCPN coloured Petri net](http://community.haskell.org/~claus/publications/HCPN.pdf) implementation (as I can't work in any other language these days) and sticking an HTML5/CSS3 interface on top of it.

So far I couldn't get the original to work, as I hardly ever got wxWidgets to work on my systems and just gave up chasing undocumented version-dependency hell. I'm having some fun mangling the code.

Do you think it would be a good idea to cooperate on a spec for an editor?

Cheers and commiserations

Hi Guys, Long time no chat. I hope that you're well. Thanks for the Back Sympathy. I have recovered really well with tons of physical therapy and now am stronger than before I got injured, so knock on wood will have better muscle protection for the damaged discs. I was lucky to be at a center that combines doctors, physical therapy, and a trainer you can graduate to, and they all communicate with each other. (Continuum Center for Health and Healing, New York City.) I was very happy with the doctor there, Gotlin, who spared me from an unnecessary surgery, which another doctor had scared me into thinking was imminently necessary. He said, let's try some more things, involving several low-dose injections of cortisone combined with physical therapy...it worked. It was part of an osteopathic approach.

Moving along. Jim, thanks for your interest in my Petri net software. Actually I have had no thoughts about a front end for it, because it's basically just pedagogical code written in Python. Maybe if I started to use it for experiments I would be driven to make a system out of it, though I'd probably run it as an engine with scripts. I am interested in collaborating on software development work, but probably not on front-end stuff. I'm going to take a look at the package that Jacob mentioned.

Concerning the blog, there's not much new there to read; I have a bunch of stuff in manuscript form. But I'm a bit stumped on something. I really want to give an informal justification for why the continuous Petri net model is a limiting case of the stochastic one. I could keep repeating this as a mantra, but it's too much hand-waving. In the manuscript it looks fairly straightforward to explain the idea of the expected time-evolution of a stochastic system. You have a probability distribution over possible outcomes. Each outcome is a vector-valued time series, where the domain is continuous time, and the range consists of vectors of natural numbers, indicating the token counts. These time series are jump functions. The expected outcome is simply the "weighted sum" of these time series, according to the probability distribution -- meaning, the indicated integral. Crudely speaking, this function will have infinitesimal jumps everywhere (since it is an integral over all possible jump functions), so it is intuitively clear that it will be continuous and smooth. (But on what mathematical assumptions does this smoothness result really rest?)


The next question I pose is: to what extent, and under what conditions, is the expected evolution of the system a "good approximation" to the actual outcomes? It's a leading question, pointing towards the idea of a large-number limit.

To develop some intuition, I start with a simpler setup. The experiment consists of dropping some number of little beads into a pipe that goes downwards. The beads eventually fall into a bucket at the bottom. Each bead follows a stochastic course. The outcome of the experiment consists of the non-decreasing count of beads in the bucket. For a single bead it is a step function that jumps from 0 to 1 at a randomly determined time. The expected behavior is obtained by convolving a unit step function with the probability distribution function for the jump times. E.g. if the times are uniformly distributed from MinT to MaxT, the expected bead count will linearly increase from 0 at MinT to 1 at MaxT.
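This uniform-jump-time example is easy to check numerically. Here is a minimal Python sketch; the function names and the interval endpoints are just my illustrative choices:

```python
import random

def expected_count(t, min_t, max_t):
    """Expected bead count at time t: the unit step convolved with a
    uniform jump-time density, i.e. the CDF of the jump time."""
    if t <= min_t:
        return 0.0
    if t >= max_t:
        return 1.0
    return (t - min_t) / (max_t - min_t)

def simulate_mean_count(t, min_t, max_t, trials=100_000, seed=0):
    """Monte Carlo estimate: drop one bead per trial, with a jump time
    drawn uniformly from [min_t, max_t]; average the step functions."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.uniform(min_t, max_t) <= t)
    return hits / trials

# At the midpoint, the expected count should be exactly 0.5,
# rising linearly from 0 at MinT to 1 at MaxT.
t, min_t, max_t = 1.5, 1.0, 2.0
print(expected_count(t, min_t, max_t))       # 0.5
print(simulate_mean_count(t, min_t, max_t))  # close to 0.5
```

The simulated average tracks the linear ramp, which is exactly the convolution of the unit step with the uniform density.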

Now, if the beads are small, and we drop N of them into the pipe, we can make the approximation that their paths are independent of each other. Then the resulting output is the sum of N independently chosen step functions from the distribution. Let's focus on a specific time t, and look at the distribution of bead counts in the bucket at time t. This will be the sum of N independent, identically distributed variables -- each of which arises from a single bead, and takes values in {0,1}. Let V be the variance of any of these variables. Then, because they are independent, their sum will have variance N * V. So the standard deviation will increase only as the square root of N. But the mean increases linearly with N. Hence the ratio of the standard deviation to the mean will go to zero, which is to say, in the limit, the values will cluster ever more tightly around the mean. The expected behavior is therefore an asymptotically exact predictor for the actual behavior. (The central limit theorem tells us more, that the distributions become asymptotically normal, but we don't need that to show the clustering result.)
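This clustering can be made concrete: at a fixed time t, each bead has landed with some probability p, so the bucket count is binomial, with mean N * p and standard deviation sqrt(N * p * (1 - p)). A small sketch, with illustrative values of N and p:

```python
import math

def relative_sd(n, p):
    """Std-dev / mean of the bucket count at a fixed time t, where each
    of n independent beads has landed with probability p = P(jump <= t)."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    return sd / mean

# The ratio shrinks like 1/sqrt(n): multiplying n by 100 divides it by 10.
for n in (100, 10_000, 1_000_000):
    print(n, relative_sd(n, 0.5))
```

With p = 0.5 the ratio is 0.1 at n = 100, 0.01 at n = 10,000, and so on, which is the 1/sqrt(N) clustering argued above.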

For any finite size of bead, the assumption of independence breaks down more and more as N increases -- when a lot of the cross-section of the pipe is filled with beads, there is clogging, and any other beads will have their paths impeded. But in the limit as the bead size goes to zero, we can still maintain the independence assumption as N increases. We can get closer and closer to a fluid model, involving an infinite number of infinitesimal particles. Think of pouring sand into the pipe. It is intuitive that, here, the continuous deterministic model, which is governed by the expected behavior, will give the exact behavior of the limiting case of fluid flow.


But where do things stand with stochastic Petri nets, in terms of the asymptotic correctness of the expected behavior? Things are not so simple here. We have nothing like a sum of independently distributed variables. Even the idea of "increasing the number of tokens" has many dimensions, since there are multiple token containers in the net. And the tokens certainly aren't independent of each other.

We can make the following observations, at a general level. (1) As token counts increase, the proportional effect of any transition firing is smaller, so in relative terms, the sizes of the jumps in the histories get smaller. (2) As token counts increase, firing rates increase, so these smaller jumps happen more densely in time. The outcomes therefore look "more continuous" as the token counts increase.

But this says nothing about the asymptotic correctness of the expected behavior.

Intuitively, it looks almost axiomatic that, say for a macroscopic chemical reaction, the continuous deterministic model gives an asymptotically exact description of the motion of the system. But how to prove this? And can it be demonstrated using rough arguments and non-advanced mathematics, like the arguments I gave above for the beads in the pipe?


John I am reading with great interest your blog on the large-number-limit behavior of stochastic Petri nets. After that I will approach the paper you wrote. At the moment, my understanding of it is, um, very incomplete. But I am chipping away at it.

I saw that you wrote that you show that in a large-number limit, the master equation reduces to the rate equation. Does this imply that the rate equation is "asymptotically correct," in the following kind of sense: the ratio of the deviation of the observed outcomes from the expected outcome goes to zero as the number of tokens goes to infinity? This is the form of statement I am looking for, but I have left some wiggle room in the definitions of these terms.


> Does this imply that the rate equation is "asymptotically correct," in the following kind of sense: the ratio of the deviation of the observed outcomes from the expected outcome goes to zero as the number of tokens goes to infinity?

We were talking about something very similar in [this thread](http://forum.azimuthproject.org/discussion/1121/a-question-about-petri-nets/#Item_11).

Sounds related but different. There John said:

> Let me just state your idea (Graham's) a bit more precisely, in case someone wants to try it. Starting from the master equation we can derive a differential equation, the rate equation, saying how the mean values of the number of things of each species [evolve], under the assumption that the mean number of things is large and the standard deviation and some higher moments are small in comparison.

I am hoping we can show that what is stated as an assumption in this sentence can be shown to be a *result* of some plausible assumptions. The motivating example is a chemical reaction: at a macroscopic level we observe only infinitesimal deviations from the expected behavior stated by the rate equation. Why is that?

Well, the answer for a chemical reaction looks to be very simple. Think of chemicals reacting in a large vat. To a good approximation the vat can be broken down into small cells, each of which we think of as containing an instance of the reaction. At a given time, the activities in a cell are largely independent of what is going on in the other cells. Here, we are scaling up the Petri net in a spatial way, by "cloning" instances of a "cell" Petri net. This involves scaling up the number of transitions as well as the token counts. For a given window of time between t and t + dt, and a specific transition type, the number of transitions that fire within this window is a sum of independent identically distributed random variables, so the central limit theorem kicks in.

Can we prove a "clustering" result for more interesting cases of scaling up of Petri nets, other than parallel cloning?


Actually, the "clustering" result I was hoping to show appears not to be true in general.

Consider the system we discussed in that other thread, with these two transitions:

A+B -> 2A at rate u

A+B -> 2B at rate u

As we said, if we start A and B with equal token counts, then the system behaves like a random walk, which proceeds fastest when token counts are equal, and gets stuck when one of the counts goes to zero.

Yet the expected token counts remain constant.

No matter how much we scale up the token counts, the system will eventually get stuck at one of the extremes. Even apart from getting stuck, it will tend away from the center point, which is the expected state. Hence the variance of the observed token counts will not go to zero.

So if there is a clustering theorem to be found, it will have to be more nuanced.
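Assuming the standard mass-action semantics, both transitions fire at rate u * a * b, so each firing is a fair coin flip between them and we can simulate the embedded jump chain directly, ignoring the firing times. A quick sketch (the function name `run_to_stuck` is just my illustrative choice) shows the non-clustering: every run eventually sticks at a = 0 or a = 2 * a0, so the relative standard deviation of the final counts stays near 1 however large the initial token counts are:

```python
import random
import statistics

def run_to_stuck(a, b, rng):
    """Embedded jump chain of the two transitions A+B -> 2A and
    A+B -> 2B, both at rate u*a*b: each firing is a fair coin flip.
    The walk halts when either species hits zero, since then
    neither transition can fire."""
    while a > 0 and b > 0:
        if rng.random() < 0.5:
            a, b = a + 1, b - 1
        else:
            a, b = a - 1, b + 1
    return a

rng = random.Random(2)
for a0 in (10, 20, 40):
    finals = [run_to_stuck(a0, a0, rng) for _ in range(300)]
    mean = statistics.mean(finals)
    sd = statistics.pstdev(finals)
    # By symmetry, finals split roughly 50/50 between 0 and 2*a0,
    # so sd/mean stays near 1 at every scale: no clustering.
    print(a0, round(mean, 1), round(sd / mean, 2))
```

Contrast this with the beads-in-a-pipe example, where the same ratio shrank like 1/sqrt(N).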


Re: my explanation above for why there is negligible variance in the dynamics of population counts in a macroscopic chemical reaction

To summarize, this explanation breaks the "reaction vat" into a grid of relatively independent cells. This is both correct and banal.

Yet it does not cover the full story about when and why scaling up the population counts will lead to dynamics with lower relative standard deviations. Recall the model of scaling up that I described there, which corresponds to the independent-cells analysis of the reaction vat: replicating the transitions of a Petri net, with parallel cloning. This is an artificial construct from the standpoint of Petri nets. When we think of scaling up the population counts, we don't picture a situation where we are also replicating the transitions in the net. No, for a chemical reaction, we continue to think of a *single* transition for each of the reactions that take place.

Since this is a question about the dynamics of stochastic Petri nets, whose probabilistic law of motion is captured by the master equation, it should be answerable through analysis of the master equation.

Here is one way to formulate the statement of decreasing relative standard deviations. Suppose we start a stochastic Petri net in a definite state s. By "running s through the master equation," i.e., looking at the trajectory from s, under the exponentiated Hamiltonian for the network structure, we will see that s evolves into states with wider bands of uncertainty.

Sidenote: if it eventually heads towards an attractor equilibrium point, then the bands of uncertainty can eventually decrease, as it heads towards the equilibrium state s'.

Now, for the probabilistic trajectory starting from s, we can define the standard deviation at each point in time. This definition is based upon the fact that, at each point in time, we have a vector-valued random variable -- the observed state of the Petri net, on a sample run of the stochastic Petri net started in state s.

Let N be a Petri net, and s be a definite state of N.

Let D(N, s, t) be the standard deviation for the probabilistic s-trajectory, measured at time t.

Then a law of "decreasing relative standard deviation" could be stated as follows:

For an integer parameter k >= 1,

D(N, k * s, t) increases sub-linearly with k.

It would be nice to find theorems which state classes of nets N for which the above statement is true for all states s and times t. Does anyone have some thoughts, conjectures or proofs along these lines?

One could also test it out empirically, by using a master-equation simulator, and trying some specific nets N, states s, and times t. Right now I don't have any such tools at hand. Eventually, through the Petri net programming series, I may end up building some components of my own -- or not, I may just end up writing small pedagogical programs. But in any case, that's quite a ways off. Does anyone have working master-equation simulator software?

A good first test would be to take a Petri net N that represents a real-world chemical reaction, and then an arbitrary state s of N. Then choose a series of sample points in time, and a series of values for k. For each point in time t, plot D(N, k * s, t) as a function of k, and look at its general shape. Is it sub-linear, as I would expect it to be?
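For a toy version of this test, here is a sketch estimating D(N, k * s, t) by Monte Carlo over Gillespie runs of the one-transition net A -> B (a decay process). The net, the rate constant, and the function names are my own illustrative choices, not a real-world chemical reaction:

```python
import random
import statistics

def gillespie_decay(a, r, t_max, rng):
    """One Gillespie run of the single-transition net A -> B with
    rate constant r: in state a, the transition fires at rate r*a."""
    t = 0.0
    while a > 0:
        t += rng.expovariate(r * a)  # waiting time to the next firing
        if t > t_max:
            break
        a -= 1
    return a

def estimate_D(a0, r, t_max, runs, rng):
    """Monte Carlo estimate of D(N, s, t): the standard deviation of
    the token count at time t_max, over independent runs from a0."""
    return statistics.pstdev(
        [gillespie_decay(a0, r, t_max, rng) for _ in range(runs)])

rng = random.Random(3)
base = 100
for k in (1, 4, 16):
    d = estimate_D(k * base, 0.5, 1.0, 500, rng)
    print(k, round(d, 1), round(d / k, 2))  # d grows, but d/k shrinks
```

For this net each token survives independently with probability e^(-r*t), so the count at time t is binomial and D scales like sqrt(k): sub-linear, as conjectured. The interesting question is how far this extends beyond such trivially decomposable nets.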


I'm still interested in the answers to the questions I just posed, but I'm not fully satisfied with how I formulated the model of scaling up the population counts. Multiplying the state s by an integer k is about the most straightforward and basic way to scale up the state, but it still looks artificial from the standpoint of the Petri net, because the states s and k * s may be in completely different qualitative regimes of the dynamics of the network.

For instance, if s is an equilibrium point (of the rate equation), then under the conditions on N mentioned in other Azimuth articles -- complex-balanced, if I recall; I need to study this -- s will be an equilibrium for the master equation too. So the s-trajectory will just be the constant state s, over all time, and the standard deviations will be zero at all times. But for k > 1, most states k * s will not be equilibrium states, and the standard deviations of the (k * s)-trajectory will be non-zero. So increasing k from 1 to another number causes a huge (infinite) relative increase in the standard deviation.

But it may be that there is no perfect way to formulate the scaling up of population counts, and taking k * s may be the most elementary and "best" way to define it.

I still conjecture that in some probabilistic sense, the functions D(N, k * s, t) will be sub-linear in k. I'm envisioning functions that are "basically sub-linear," with the exception of some set of points of measure zero, which would include the equilibrium points. Or we might need a more nuanced notion of sub-linear behavior, which says that over some partition of the natural numbers into intervals, the restriction of this function to each of the intervals is a sub-linear function. This caveat would take into account the phenomenon that could occur where D(N, k * s, t) increases sub-linearly in k over, say, the range 1 to j, but then, say at j + 1, the state (j + 1) * s takes the dynamics into an entirely different qualitative regime, in which the standard deviation may jump up to a much higher (or down to a much lower) value.

Also we'd have to make sure in this definition of a basically sub-linear function not to allow "cheating" by partitioning the natural numbers, say, into the trivial union of singleton intervals. I conjecture that there will generally be just a finite number of jump points, on the intuition that, as one heads out to infinity along the ray of multiples of s, there won't be "that many" qualitatively different dynamic regimes. (I haven't defined a dynamic regime, but this idea would include the separation of the state space into different basins of attraction around attractor points.) So it might suffice to define a basically sub-linear function as one that is sub-linear over each of the intervals in a finite partition of the natural numbers. Or if that doesn't catch some of the more bizarre cases (e.g. systems which could have an infinite number of basins of attraction), then one could extend the definition to allow partitions of the natural numbers into intervals, subject to the condition that within any such partition, the sizes of the intervals are unbounded.

Any thoughts or help would be appreciated! Thanks.


David - over the summer, I had a student Arjun Jain work on proving a theorem that clarifies the sense in which the master equation has the rate equation as its large-number limit. As you note, it's a bit subtle. We made a lot of progress, and got it down to the point where the only unjustified step is proving that the derivative of the limit of something equals the limit of the derivative. For a physics paper I guess that means we're done.

You can look at what Arjun wrote - here's a [PDF file](http://math.ucr.edu/home/baez/large.pdf) and [TeX file](http://math.ucr.edu/home/baez/large.tex). It's a bit terse since I haven't gotten to it yet - I need to add the banter and chat that transform a calculation into a blog article that people might enjoy reading. In its notation, it relies on my paper [Quantum techniques for reaction networks](http://arxiv.org/abs/1306.3451), which is a summary of other stuff on the blog.

Since you're curious about the large-number limit, I'd be very happy if you helped us turn this into a blog article. But it's probably a lot more mathematical than [Petri net programming (part 4)](http://www.azimuthproject.org/azimuth/show/Blog%20-%20Petri%20net%20programming%20(part%204)) is intended to be.


I'll say a few more things about what Arjun and I did.

First, instead of focusing on standard deviations as you did, it turns out to be essential to go further and think about not just standard deviations but all higher [moments](http://arxiv.org/abs/1306.3451) as well. We need a bunch of these to be small compared to the mean number of things, to make any _one_ of them remain small compared to the mean number of things as time passes.

Second, instead of rescaling in a particular way, e.g. by multiplying the number of each thing by some integer, it seems easier to consider a general family of stochastic states, depending on a parameter, where the mean number of things of each type is proportional to that parameter, and which obeys whatever other conditions we need.

Putting these ideas together we get a concept which we call a "semiclassical family of states". The word "semiclassical" is stolen from quantum mechanics, where similar issues show up: we want, for example, the quantum description of light to reduce to the classical one in a suitable limit of large numbers of photons.

In the quantum situation, we often take the large-number limit in conjunction with letting Planck's constant $\hbar$ — which we imagine as adjustable — go to zero. This means we're taking a limit where we have more and more photons of smaller and smaller energy.

So, instead of using a parameter where the mean number of things is proportional to that parameter, as I hinted before, we actually use the reciprocal of that parameter and call it $\hbar$. The idea is that we're counting things in bunches of size $1/\hbar$.

I explained this 'bunches' idea in "The large-number limit for reaction networks" ([Part 1](http://johncarlosbaez.wordpress.com/2013/07/01/poisson-brackets-for-reaction-networks-2/) and [Part 2](http://johncarlosbaez.wordpress.com/2013/07/06/the-large-number-limit-for-reaction-networks-part-2/)). But, it took some good ideas from Arjun to actually get an argument showing that for semiclassical families of states, the master equation reduces to the rate equation as $\hbar \to 0$.

As usual, we're not actually doing quantum mechanics; we're just stealing ideas and mathematics and terminology from quantum mechanics.
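To make the 'bunches' picture concrete, here is a toy numerical illustration (my own, not from Arjun's notes): the stochastic-mechanics analogue of a coherent state is a Poisson distribution, and a family of Poisson states with mean $c/\hbar$ at fixed concentration $c$ behaves semiclassically, since every cumulant of Poisson($\lambda$) equals $\lambda$, so the relative fluctuation std/mean $= \sqrt{\hbar/c}$ vanishes as $\hbar \to 0$.

```python
import math

def relative_fluctuation(c, hbar):
    """std/mean for a Poisson state with mean c/hbar.

    Every cumulant of Poisson(lam) equals lam, so
    std/mean = sqrt(lam)/lam = sqrt(hbar/c)."""
    lam = c / hbar          # mean particle count: c bunches of size 1/hbar
    return math.sqrt(lam) / lam

# Relative fluctuations vanish as hbar -> 0 at fixed concentration c:
for hbar in (1.0, 0.1, 0.01):
    print(hbar, relative_fluctuation(5.0, hbar))
```

This only illustrates the statics, of course; the real content of the theorem is that suitable bounds of this kind on all the moments can be propagated in time.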


Is this "semiclassical family of states" a necessary as well as sufficient condition?

In the example

~~~~
A+B -> 2A at rate u
A+B -> 2B at rate u
~~~~

there seem to be two problems: the exact symmetry, and the fact that there are states which can be reached but not escaped from. Is there any way of relating the semiclassical idea to properties like this?

Does it help to think of things like superconductivity where (as I understand it) things don't become classical as the number of particles becomes large?
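For what it's worth, this example is easy to probe numerically. Since both reactions A+B -> 2A and A+B -> 2B fire at the same propensity u·A·B, the count of A performs an unbiased random walk (with A+B conserved), and a quick Monte Carlo sketch (my own construction, with arbitrary rate and initial counts) suggests that the relative standard deviation of A at a fixed time does not shrink as the initial state is scaled up by k, in contrast to the semiclassical 1/sqrt(k) behavior, even though the mean of A stays constant exactly as the rate equation predicts.

```python
import random

def simulate_A(A0, B0, u, t_end, rng):
    """One run of the symmetric net {A+B -> 2A, A+B -> 2B}, both at rate u.
    A+B is conserved; A does an unbiased random walk with state-dependent
    speed, and A = 0 and B = 0 are absorbing."""
    A, N = A0, A0 + B0
    t = 0.0
    while 0 < A < N:
        total_rate = 2 * u * A * (N - A)   # both reactions have propensity u*A*B
        t += rng.expovariate(total_rate)
        if t > t_end:
            break
        A += rng.choice((-1, 1))           # the two reactions are equally likely
    return A

def rel_std(k, u=0.01, t_end=5.0, runs=300, seed=1):
    """Relative standard deviation of A at time t_end, starting from k*(20, 20)."""
    rng = random.Random(seed)
    xs = [simulate_A(20 * k, 20 * k, u, t_end, rng) for _ in range(runs)]
    mean = sum(xs) / runs
    var = sum((x - mean) ** 2 for x in xs) / runs
    return var ** 0.5 / mean

for k in (1, 4):
    print(k, rel_std(k))
```

Here the rate u is held fixed as the state is scaled, so the walk speeds up like k² and the absolute spread grows like k: linearly, not sub-linearly.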


Graham wrote:

> Is this "semiclassical family of states" a necessary as well as sufficient condition?

I really doubt it. It seems absurdly hard to find _necessary_ conditions for a sequence of solutions of the master equation to have mean values that approach a solution of the rescaled rate equation over some time interval. There could be all sorts of 'accidental' reasons why it works.

And I think your example here shows that we can have solutions of the master equation that are far from semiclassical (that is, have big standard deviations or higher moments) whose means still satisfy the rescaled rate equation exactly!

> ... there seem to be two problems: the exact symmetry, and the fact that there are states which can be reached but not escaped from. Is there any way of relating the semiclassical idea to properties like this?

I don't know anything very interesting to say, just this. In the semiclassical case the standard deviations are small compared to the means, so the chance that you would 'hit the end and get stuck' in your example — that is, reach a situation where there are all A's or all B's — becomes negligible.

> Does it help to think of things like superconductivity where (as I understand it) things don't become classical as the number of particles becomes large?

Hmm, that's a thought. I guess the first step is to take some standard closed-form description of the quantum state of a superconductor (or superfluid) and translate it from quantum mechanics to stochastic mechanics, using our analogy. The second step is to stare at what you get, and think about it.
