
I thought I'd try live-blogging this BIRS workshop, and then polish up what I write and post it on the Azimuth Blog.

We've got people from different backgrounds:

- computational complexity theory
- wetlab / experimental science
- pure and applied mathematics
- software verification

The first talk will be

- David Soloveichik, UC San Francisco, The computational power of chemical reaction networks.

According to this webpage from the UC San Francisco center for systems and synthetic biology:

David did his graduate work with Erik Winfree at Caltech, focusing on algorithmic self-assembly and on synthetic networks of nucleic-acid interactions based on strand displacement cascades. He is interested in "molecular programming": the systematic design of complex molecular systems based on the principles of computer science and distributed computing. More generally, he is trying to create a theoretical foundation of chemical computation applicable to both synthetic and natural systems.

According to his homepage, his research interests are:

**Wet-lab:** The rational design of molecular interactions for synthetic biology, nanotechnology, and bioengineering. The goal is to engineer autonomous molecular systems that can sense, compute, and perform various actions. Using nucleic-acid "strand displacement cascades" as the molecular primitive, we are able to attain freedom of design that hasn't been previously possible.

**Theory:** The theoretical foundation of chemical computation. Once we have a way to program molecular interactions, what programming language shall we use? How molecules can process information and carry out computation is still not well understood; however, a formal connection to models of concurrent computation may allow systematic and scalable design, rigorous analysis and verification. Further, computational principles may elucidate the design of biological regulatory networks.

## Comments

Here are some of David Soloveichik's recent talks:

- An introduction to strand displacement cascades for the Foresight Institute Conference (Palo Alto, Jan 2013): <a href="http://solo.ucsf.edu/talks/Foresight_Institute_Talk.pdf">An Artificial "Biochemistry" with DNA</a>
- Paper presented at DNA Computing and Molecular Programming 18 (Aarhus, Denmark, Aug 2012): <a href="http://solo.ucsf.edu/talks/DNA18-Deterministic Computation (Aug 2012).pdf">Deterministic Function Computation with Chemical Reaction Networks</a>
- My tutorial talk for DNA Computing and Molecular Programming 17 (Pasadena, Aug 2011): <a href="http://solo.ucsf.edu/talks/Soloveichik_DNA_17_Tutorial_(2011-09-20).pdf">The Programming Language of Chemical Kinetics, and How To Discipline Your DNA Molecules using Strand Displacement Cascades</a>
- High-level introduction to algorithmic self-assembly and stochastic chemical reaction networks as computer-theoretic models: <a href="http://solo.ucsf.edu/talks/UW_CS_theory_talk.pdf">Computer-Theoretic Abstractions for Molecular Programming</a>
- On algorithmic behavior in chemical reaction networks and implementing arbitrary chemical reaction networks with DNA: <a href="http://solo.ucsf.edu/talks/pres_programming_crns.pdf">Programming Well-Mixed Chemical Kinetics</a>
- On provable asymptotic bounds on the number of leaps for tau-leaping as a function of maximum molecular count: <a href="http://solo.ucsf.edu/talks/pres_robust.pdf">Bounded Tau-leaping and Robust Chemical Reaction Networks</a>

Soloveichik's talk:

CRNs (chemical reaction networks) show up in:

- chemistry
- population biology
- sensor networks
- math:
  - vector addition systems
  - Petri nets
  - commutative semigroups
  - bounded context-free languages
  - uniform recurrence equations

Why use them for computation? People want to go beyond the von Neumann architecture for computation. People want to understand how cells process information. But the computational perspective in this talk has _not yet proved relevant in biology_. With perhaps some exceptions, biology "computes" in some other ways!

Outline of the talk:

- the model
- using deterministic stabilizing computations to compute semilinear predicates
- Turing-universal computation
- computation speed
- open questions

**The model.** The model will be the master equation for a chemical reaction network... since this has been explained in detail in the [Network Theory notes](http://math.ucr.edu/home/baez/networks/networks_4.html), I won't review it!

Does every CRN have a molecular realization, even those without conservation laws?

Yes, using _strand displacement cascades_: see Soloveichik, Seelig and Winfree, [DNA as a universal substrate for chemical kinetics](http://www.pnas.org/content/107/12/5393.full), and subsequent papers, e.g. "Programmable chemical controllers made from DNA". Abstract of the first paper:

> **Abstract:** Molecular programming aims to systematically engineer molecular and chemical systems of autonomous function and ever-increasing complexity. A key goal is to develop embedded control circuitry within a chemical system to direct molecular events. Here we show that systems of DNA molecules can be constructed that closely approximate the dynamic behavior of arbitrary systems of coupled chemical reactions. By using strand displacement reactions as a primitive, we construct reaction cascades with effectively unimolecular and bimolecular kinetics. Our construction allows individual reactions to be coupled in arbitrary ways such that reactants can participate in multiple reactions simultaneously, reproducing the desired dynamical properties. Thus arbitrary systems of chemical equations can be compiled into real chemical systems. We illustrate our method on the Lotka–Volterra oscillator, a limit-cycle oscillator, a chaotic system, and systems implementing feedback digital logic and algorithmic behavior.
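As an aside, stochastic trajectories of the master equation can be sampled with Gillespie's direct method. Here is a minimal sketch (my own code, not from the talk), assuming mass-action propensities:

```python
import random

def gillespie(state, reactions, t_max, seed=0):
    """Sample one trajectory of a CRN's master equation (Gillespie's
    direct method). state: dict species -> count. reactions: list of
    (rate_constant, inputs, outputs), inputs/outputs being dicts from
    species to stoichiometric coefficients."""
    rng = random.Random(seed)
    t = 0.0
    state = dict(state)
    while t < t_max:
        # mass-action propensity of each reaction: k * falling factorial
        props = []
        for k, ins, outs in reactions:
            a = k
            for s, n in ins.items():
                for i in range(n):
                    a *= max(state.get(s, 0) - i, 0)
            props.append(a)
        total = sum(props)
        if total == 0:
            break  # nothing can fire: the system has halted
        t += rng.expovariate(total)
        r = rng.uniform(0, total)
        for (k, ins, outs), a in zip(reactions, props):
            r -= a
            if r <= 0:
                for s, n in ins.items():
                    state[s] -= n
                for s, n in outs.items():
                    state[s] = state.get(s, 0) + n
                break
    return state

# Example: irreversible dimerization 2 A -> B, run to exhaustion
final = gillespie({"A": 100, "B": 0},
                  [(1.0, {"A": 2}, {"B": 1})], t_max=1e9)
# 100 A's pair off completely: A = 0, B = 50
```

The hypothetical dimerization example is just to exercise the sampler; any list of reactions in the same format works.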

Dichotomies of computation with CRNs:

- **discrete vs continuous**: master equation or rate equation?
- **uniform vs non-uniform**: is a single CRN supposed to handle all inputs, or do we allow adding extra reactions for larger inputs? It's a bit like Turing machines vs Boolean circuits.
- **deterministic vs probabilistic**: is the correct output guaranteed or merely likely?
- **halting vs stabilizing**: does the CRN "know" when it has finished? In the first case the CRN irreversibly produces some molecules that signal that the computation is done. In the second, it eventually stabilizes to the right answer, but we may not know how long.

These distinctions dramatically affect the computational power. In the case of uniform computation:

- deterministic and halting: finite computational power
- deterministic and stabilizing: semilinear
- probabilistic and halting: Turing-universal
- probabilistic and stabilizing: $\Delta_2^0$

If we use Turing machines but don't require them to signal when they've halted, they can "compute" uncomputable stuff.

**Using deterministic stabilizing computations to compute semilinear predicates.** A set $A \subseteq \mathbb{N}^d$ is **linear** if it consists of points of the form $u_0 + n_1 u_1 + \cdots + n_p u_p$ for fixed $u_i \in \mathbb{N}^d$ and all possible $n_i \in \mathbb{N}$. It's **semilinear** if it's a finite union of linear sets.

The predicates on $\mathbb{N}^d$ that can be computed by deterministic stabilizing CRNs are precisely the semilinear ones. The simplest are $=$ and $\ge$.

Start with one $Y$ and some amounts of $X_1, X_2$. Say you want to know if $X_2 \ge X_1$. Use the reactions

$$X_2 + N \to Y$$
$$X_1 + Y \to N$$

You will be left with just a $Y$ or $N$, which answers your question.
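Since the stable output doesn't depend on rates or firing order, the comparison protocol can be checked with a toy scheduler that fires applicable reactions in random order (my own sketch, not code from the talk):

```python
import random

def run_to_stable(state, reactions, seed=0):
    """Fire applicable reactions in random order until none can fire.
    state: dict species -> count; reactions: (inputs, outputs) pairs of
    dicts from species to stoichiometric coefficients."""
    rng = random.Random(seed)
    state = dict(state)
    while True:
        applicable = [(ins, outs) for ins, outs in reactions
                      if all(state.get(s, 0) >= n for s, n in ins.items())]
        if not applicable:
            return state
        ins, outs = rng.choice(applicable)
        for s, n in ins.items():
            state[s] -= n
        for s, n in outs.items():
            state[s] = state.get(s, 0) + n

# The comparison protocol: start with one Y, ask whether X2 >= X1.
cmp_rxns = [({"X2": 1, "N": 1}, {"Y": 1}),   # X2 + N -> Y
            ({"X1": 1, "Y": 1}, {"N": 1})]   # X1 + Y -> N

yes = run_to_stable({"X1": 3, "X2": 5, "Y": 1, "N": 0}, cmp_rxns)
no = run_to_stable({"X1": 5, "X2": 3, "Y": 1, "N": 0}, cmp_rxns)
# yes ends with a Y (5 >= 3); no ends with an N (3 < 5)
```

Note that $Y + N = 1$ is an invariant of these two reactions, so at any moment at most one reaction is applicable and the final answer is forced.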

Testing for equality. Say you want to ask if $X_1 = X_2$. Use these reactions:

$$X_1 + X_2 \to Y$$
$$Y + N \to Y$$
$$X_1 + Y \to X_1 + N$$
$$X_2 + Y \to X_2 + N$$

The first reaction lets $X_1$ and $X_2$ cancel in pairs, producing $Y$'s. If you only run this reaction, you'll eventually be left with either $X_1$'s or $X_2$'s or nothing but $Y$'s.

The other reactions deal with the cases where there are $X_1$'s or $X_2$'s left over. But the key thing to notice is that no matter what order we run the reactions in, we'll eventually be left with the right answer. Try some examples!

The **output** of a state:

- contains $Y$ but not $N$: the output is YES
- contains $N$ but not $Y$: the output is NO
- otherwise the output is undefined.

**Output-stable states** are states with YES or NO output such that all states reachable from them have the same output.

**Deterministic stabilizing computation**: for any input, a correct output-stable state is reachable from any reachable state.

This sort of system can loop forever. However, we can compute any semilinear predicate using a CRN that _cannot_ loop forever.
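For small molecular counts this definition can be checked exhaustively. Here is a sketch (my own verification code, not from the talk) that checks the equality-testing CRN above really is deterministic stabilizing on a couple of inputs:

```python
from collections import deque

# The equality-testing CRN from the post, species order (X1, X2, Y, N).
SPECIES = ("X1", "X2", "Y", "N")
RXNS = [({"X1": 1, "X2": 1}, {"Y": 1}),
        ({"Y": 1, "N": 1}, {"Y": 1}),
        ({"X1": 1, "Y": 1}, {"X1": 1, "N": 1}),
        ({"X2": 1, "Y": 1}, {"X2": 1, "N": 1})]

def fire(state, ins, outs):
    """Apply one reaction to a state tuple, or return None if it can't fire."""
    d = dict(zip(SPECIES, state))
    if any(d[s] < n for s, n in ins.items()):
        return None
    for s, n in ins.items():
        d[s] -= n
    for s, n in outs.items():
        d[s] += n
    return tuple(d[s] for s in SPECIES)

def reachable(start):
    """All states reachable from `start` by firing reactions in any order."""
    seen, todo = {start}, deque([start])
    while todo:
        s = todo.popleft()
        for ins, outs in RXNS:
            t = fire(s, ins, outs)
            if t is not None and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def output(state):
    y, n = state[2], state[3]
    return "YES" if y and not n else ("NO" if n and not y else None)

def stabilizes_to(start, want):
    """The deterministic stabilizing condition: from every reachable state,
    some output-stable state with the correct output is reachable."""
    for s in reachable(start):
        ok = any(output(t) == want
                 and all(output(u) == want for u in reachable(t))
                 for t in reachable(s))
        if not ok:
            return False
    return True
```

For example, `stabilizes_to((2, 2, 0, 0), "YES")` and `stabilizes_to((3, 1, 0, 0), "NO")` both hold. This brute-force check is exponential, so it's only a sanity test for tiny inputs.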

A proof that it's impossible to compute the predicate $X_2 \ge X_1^2$ in this way appears in Angluin, Aspnes and Eisenstat's paper [Stably computable predicates are semilinear](http://www.cs.yale.edu/homes/aspnes/papers/podc2006-proceedings.pdf). I won't go through the proof, but it uses a cute fact I never knew:

**Dickson's Lemma.** Any subset of $\mathbb{N}^d$ has a finite set of minimal elements, where we define $x \le y$ if $x_i \le y_i$ for all $i$.
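For a finite set of points the minimal elements are easy to compute; a quick illustration (my own code) of the partial order in Dickson's Lemma:

```python
def minimal_elements(points):
    """Minimal elements of a finite subset of N^d under the product order
    (x <= y iff x_i <= y_i for all i). Dickson's lemma says even an
    infinite subset of N^d has only finitely many minimal elements."""
    pts = set(map(tuple, points))
    def dominates(x, y):
        # y <= x componentwise, with y != x
        return x != y and all(a >= b for a, b in zip(x, y))
    # p is minimal iff no other point of the set lies below it
    return {p for p in pts if not any(dominates(p, q) for q in pts)}

mins = minimal_elements([(2, 3), (1, 5), (3, 1), (2, 2), (4, 4)])
# (2, 3) is dominated by (2, 2), and (4, 4) by (2, 3);
# the minimal elements are (1, 5), (3, 1) and (2, 2)
```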

The second talk is

- Erik Winfree, Caltech, Beyond chemical reaction networks.

Winfree likes Marvin Minsky's book _Computation: Finite and Infinite Machines_. Turing machines have an unbounded tape... so they permit unbounded input and unbounded memory, as opposed to Boolean circuits (finite input and finite memory) and finite state machines (unbounded input but finite memory). In chemistry we can talk about finiteness, or bounds, for time, space, energy and program size (e.g. the number of species). So, there are many choices of topics to study.

Cells aren't just well-mixed bags of chemicals, so the chemical reaction network formalism is not really sufficient to model them.

Looking at how a ribosome "reads" RNA, maybe something more like a Turing machine model is appropriate!

A somewhat better model is a stack machine. These are well-studied, as universal as Turing machines, and faster.

You can simulate stack machines with "polymer CRNs".

"Tile self-assembly" is another model, inspired by crystal growth.

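To give a rough feel for how a polymer species can act as a stack, here is a toy encoding of my own devising (not the construction from the talk): the polymer's identity records the stack contents, and push/pop reactions exchange monomers between the polymer and the free solution.

```python
# Toy "polymer CRN" stack: one polymer species whose identity encodes the
# stack contents, plus a pool of free monomers. A push reaction consumes a
# free monomer and extends the polymer; a pop reaction shrinks the polymer
# and releases the monomer. Hypothetical illustration only.

def push(polymer, monomers, x):
    """Reaction: polymer P + free monomer x -> polymer P.x"""
    assert monomers[x] > 0, "no free monomer to push"
    monomers[x] -= 1
    return polymer + (x,)

def pop(polymer, monomers):
    """Reaction: polymer P.x -> polymer P + free monomer x"""
    assert polymer, "empty stack"
    x = polymer[-1]
    monomers[x] += 1
    return polymer[:-1], x

free = {"X0": 5, "X1": 5}
p = ()
p = push(p, free, "X1")
p = push(p, free, "X0")
p, top = pop(p, free)
# top == "X0" (last in, first out), and the monomer count is restored
```

Two such stacks suffice to simulate a Turing machine tape, which is one way to see why stack machines are as universal as Turing machines.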

The third talk was:

- Lulu Qian, Caltech, Implementing complex CRNs with modular DNA components.

I was too burned out to take notes, but she actually _builds_ these things.

We're going to try to work on problems. Here's a list:

1. Decidability of output stability (see above).
2. How to design CRNs in the face of unintended side reactions: robustness.
3. Leader election.
4. Polymer CRNs.
5. Incorporating biological constraints: catalysis, saturation, everything gets degraded.
6. CRNs with tile assembly.
7. Complexity of stochastic distributions generated by CRNs.
8. Computational and construction power of programmable hairpin systems.
9. CRN complexity vs Boolean circuit complexity.

Great stuff for my "next" box. Postscript is a push-down stack machine so maybe you can persuade Andrew Stacey to have a go ;) Fraid the links in post #2 aren't working.


He used relative links! Thanks, I fixed them.

The first talk on Tuesday:

• David F. Anderson, University of Wisconsin, Chemical reaction network theory for both deterministic and stochastic models.

Classical chemical reaction network theory due to Horn, Jackson and Feinberg for ODE models in the early 1970s:

• Deficiency

• Complex balanced equilibria

• Deficiency zero theorem

Continuous time Markov chain model (master equation):

• Connection with ODE models via law of large numbers

• Complex balance and deficiency zero theorem

Unfortunately I've explained all this stuff in the [Network Theory](http://math.ucr.edu/home/baez/networks/) series, so I have no motivation to take notes, even though his explanations are simpler than mine. Luckily his notes should appear online at some point.

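The deficiency mentioned above is easy to compute from the network alone: $\delta = n - \ell - s$, where $n$ is the number of complexes, $\ell$ the number of linkage classes, and $s$ the rank of the stoichiometric subspace. A sketch of my own (the definition is standard; the code and examples are not from the talk):

```python
from fractions import Fraction

def rank(rows):
    """Rank of an integer matrix, via Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def deficiency(species, reactions):
    """delta = n - l - s. reactions: list of (inputs, outputs) complexes,
    each a dict species -> coefficient."""
    def vec(cplx):
        return tuple(cplx.get(s, 0) for s in species)
    complexes = {vec(c) for ins, outs in reactions for c in (ins, outs)}
    n = len(complexes)
    # linkage classes: connected components of the reaction graph,
    # found with union-find over the complexes
    parent = {c: c for c in complexes}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    for ins, outs in reactions:
        parent[find(vec(ins))] = find(vec(outs))
    l = len({find(c) for c in complexes})
    # rank of the span of the reaction vectors (outputs - inputs)
    s = rank([[o - i for o, i in zip(vec(outs), vec(ins))]
              for ins, outs in reactions])
    return n - l - s

# A <-> 2B: 2 complexes, 1 linkage class, rank 1, so deficiency 0
d0 = deficiency(["A", "B"], [({"A": 1}, {"B": 2}), ({"B": 2}, {"A": 1})])
# 0 -> A, 2A -> 0: 3 complexes, 1 linkage class, rank 1, so deficiency 1
d1 = deficiency(["A"], [({}, {"A": 1}), ({"A": 2}, {})])
```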

Some stuff I didn't know:

**Theorem (Horn and Jackson).** If a reaction network has a complex balanced equilibrium, then:

1. It has no equilibria that are not complex balanced.
2. The reaction network must be weakly reversible.
3. Every stoichiometric compatibility class contains precisely one complex balanced equilibrium.
4. Each complex balanced equilibrium is locally stable.

In fact, for every weakly reversible reaction network there exists a choice of rate constants for which there's a complex balanced equilibrium. But, except in the deficiency zero case, such choices form a set of measure zero.

A cool paper:

• Guy Shinar and Martin Feinberg, Structural sources of robustness in biochemical reaction networks, _[Science](http://www.sciencemag.org/content/327/5971/1389.full)_ (2010).

**Theorem.** Suppose there is a chemical reaction network such that:

1. its deficiency equals one;
2. it has a positive steady state;
3. it has two "non-terminal complexes" that differ only in the species $S$. ("Non-terminal" is a concept that's easier to explain with a picture of a reaction network.)

Then the species $S$ is **absolutely robust**: for any initial conditions, the rate equation will approach an equilibrium where the concentration of $S$ takes a specific fixed value $c$, independent of the initial conditions!

However, things work very differently if we treat the system stochastically, using the master equation:

• David F. Anderson, German A. Enciso and Matthew D. Johnston, [Stochastic analysis of biochemical reaction networks with absolute concentration robustness](http://arxiv.org/abs/1310.3761).

Another cool paper: David Doty has written about the difficulties of using a chemical reaction network to build a "timer":

• David Doty, [Timing in chemical reaction networks](http://www.dna.caltech.edu/~ddoty/papers/tcrn.pdf), SODA 2014: Proceedings of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (Portland, Oregon, USA, January 5-7, 2014), pp. 772-784.

> We show that if a CRN respects finite density (at most $O(n)$ additional molecules can be produced from $n$ initial molecules), then starting from any dense initial configuration (all molecular species initially present have initial count $\Omega(n)$, where $n$ is the initial molecular count and volume), every producible species is produced in constant time with high probability. This implies that no CRN obeying the stated constraints can function as a _timer_, able to produce a molecule, but doing so only after a time that is an unbounded function of the input size. This has consequences regarding an open question of Angluin, Aspnes, and Eisenstat concerning the ability of population protocols to perform fast, reliable leader election and to simulate arbitrary algorithms from a uniform initial state.

Next talk:

• Luca Cardelli, Morphisms of chemical reaction networks.

A **population protocol** is a chemical reaction network with only 2-in, 2-out reactions. Here's one for "approximate majority detection":

• Dana Angluin, James Aspnes and David Eisenstat, [A simple population protocol for fast robust approximate majority](http://www.cs.yale.edu/homes/aspnes/papers/approximate-majority-abstract.html), Distributed Computing 21(2):87–102, July 2008 (DISC 2007 special issue).

We start with some $x$'s and some $y$'s and want to see which one is in the majority, so we run these chemical reactions:

$$x + y \to x + b$$
$$x + y \to y + b$$
$$x + b \to 2x$$
$$y + b \to 2y$$

In $O(n \log n)$ time these reactions are very likely to convert all $x$'s and $y$'s to whichever is in the _majority_ initially. The chemical $b$ acts as an "undecided voter" who becomes either an $x$ or a $y$, depending on who it meets.

Another version is:

$$x + y \to 2b$$
$$x + b \to 2x$$
$$y + b \to 2y$$

The only known proof that the approximate majority protocol actually works in the discrete-time formalism is very hard; see the above paper. The proof in the continuous-time formalism is easier:

• Perron, Vasudevan and Vojnovic, [Using three states for binary consensus on complete graphs](http://research.microsoft.com/pubs/78881/tr-2008-114.pdf), September 2008.
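The first protocol above is easy to simulate in the population-protocol style: repeatedly pick two random molecules and apply whichever reaction matches. A toy sketch of my own (equal rates for the two $x + y$ reactions assumed):

```python
import random

def approximate_majority(x, y, seed=1):
    """Simulate the 4-reaction approximate majority protocol by picking
    random pairs of molecules until the population reaches consensus.
    Returns the surviving species, 'x' or 'y'."""
    rng = random.Random(seed)
    pop = ["x"] * x + ["y"] * y
    while len(set(pop)) > 1 or pop[0] == "b":
        i, j = rng.sample(range(len(pop)), 2)
        a, c = pop[i], pop[j]
        if {a, c} == {"x", "y"}:
            # x + y -> x + b  or  x + y -> y + b, with equal rates
            loser = i if rng.random() < 0.5 else j
            pop[loser] = "b"
        elif "b" in (a, c) and a != c:
            # x + b -> 2x  or  y + b -> 2y: the undecided voter converts
            winner = a if c == "b" else c
            pop[i] = pop[j] = winner
    return pop[0]
```

Starting from, say, 60 $x$'s and 40 $y$'s, the protocol terminates and very likely (though not certainly) returns the initial majority `"x"`; with no $y$'s at all, the answer is of course `"x"`.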

This approximate majority algorithm is seen quite clearly in a certain biological system: an "epigenetic switch". However, it is usually "obfuscated" or "encrypted": hidden in a bigger, more complicated chemical reaction network. For example, see:

• Luca Cardelli, [The cell cycle switch is approximate majority obfuscated](http://www.nature.com/srep/2012/120913/srep00656/full/srep00656.html).

This calls for a theory of morphisms of reaction networks. When do two CRNs give the same rate equation? Or: when do trajectories of one CRN converge to trajectories of another? This is much simpler than working with the master equation. The answer is purely syntactic, in terms of the reaction network, independent of the rate constants.

An "influence network" consists of gates where one process activates or deactivates another. We model each gate as a chemical reaction network of a certain form, with 3 states (complexes), standing for "off", "intermediate" and "on", and 4 reactions going from one to the other: the reactions

off → intermediate → on

being catalyzed by one chemical, and the reverse reactions catalyzed by another.
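Concretely, one gate's rate equation can be integrated directly. A minimal sketch, assuming unit rate constants and simple Euler integration (my choices, not from the talk):

```python
def gate_ode(off, mid, on, a, i, k=1.0, dt=0.001, steps=20000):
    """Euler integration of the mass-action rate equation for one
    influence-network gate: off -> mid -> on catalyzed by activator a,
    on -> mid -> off catalyzed by inhibitor i."""
    for _ in range(steps):
        d_off_mid = k * a * off   # off + A -> mid + A
        d_mid_on = k * a * mid    # mid + A -> on  + A
        d_on_mid = k * i * on     # on  + I -> mid + I
        d_mid_off = k * i * mid   # mid + I -> off + I
        off += dt * (d_mid_off - d_off_mid)
        mid += dt * (d_off_mid + d_on_mid - d_mid_on - d_mid_off)
        on += dt * (d_mid_on - d_on_mid)
    return off, mid, on

# with pure activation (no inhibitor) the gate is driven fully "on",
# and the total off + mid + on is conserved
off, mid, on = gate_ode(1.0, 0.0, 0.0, a=2.0, i=0.0)
```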

He drew examples of influence networks:

- Mutual inhibition and self activation.
- Mutual inhibition and mutual anti-activation.
- Cell signal switching: a very common structure in nature, from yeast to us.
- Better switching: the "new" cell cycle switch, a beautifully symmetric influence network.

We say one network **emulates** another if for any rate constants of the second, we can find rate constants for the first that make its rate equation have solutions exactly mimicking those of the second, but where several species in the first correspond to one in the second. So, there's a map sending:

- species to species
- reactions to reactions

In a **homomorphism** the map on reactions is determined by the map on species in the obvious way, i.e. if $A$ is sent to $f(A)$ and $B$ is sent to $f(B)$, then the reaction

$$2A + B \to 3B$$

is sent to the reaction

$$2f(A) + f(B) \to 3 f(B)$$

To make the first network emulate the second, we need to set equal the initial concentrations of all species in the inverse image of a given species.
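The action of a homomorphism on reactions is mechanical enough to sketch in code (my own illustration, representing complexes as dicts from species to coefficients):

```python
from collections import Counter

def map_reaction(f, inputs, outputs):
    """Image of a reaction under a species map f, as in a CRN homomorphism:
    coefficients of species with the same image add up."""
    def image(side):
        out = Counter()
        for species, n in side.items():
            out[f[species]] += n
        return dict(out)
    return image(inputs), image(outputs)

# collapsing A and B to a single species Z sends 2A + B -> 3B to 3Z -> 3Z
ins, outs = map_reaction({"A": "Z", "B": "Z"},
                         {"A": 2, "B": 1}, {"B": 3})
```

Note how a nontrivial reaction can collapse to a trivial one ($3Z \to 3Z$) when species are merged, which is part of why emulation needs the extra conditions on rate constants and initial concentrations described above.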

Here's a draft of a paper on this stuff:

• Luca Cardelli, [Morphisms of reaction networks that couple structure to function](http://lucacardelli.name/Papers/CRN%20Morphisms%20%28draft%29.pdf).

And here are a bunch of networks, including the approximate majority network (labelled **AM**), many of which show up in biology, with morphisms between them:

<img src="http://math.ucr.edu/home/baez/networks/cardelli_network_morphisms.jpg" alt=""/>

A **CRN homomorphism** is the kind of homomorphism I already described.

A **reactant homomorphism** from one CRN to another is more general: it sends species to species, and for any reaction in the first CRN with input $A + B + C + \cdots$ there's a reaction in the second with input $f(A) + f(B) + f(C) + \cdots$. ("Reactant" is another name for "input".)

A **stoichiomorphism** is a kind of morphism that takes rate constants into account; it's a bit harder to understand, but important.

The main theorem: given a stoichiomorphism that's also a reactant homomorphism, any solution of the rate equation for the second CRN gives an identical-looking solution of the rate equation for the first. We can say the first **emulates** the second.

Given a stoichiomorphism, any way of changing the rate constants of the second CRN gives a corresponding way to change rate constants in the first.

Note to self: the way in which one CRN emulates another in Cardelli's formalism looks awfully similar to the way one dynamical system emulates another in Lerman's formalism:

• Eugene Lerman, [Networks of dynamical systems](http://johncarlosbaez.wordpress.com/2014/03/18/networks-of-dynamical-systems/).

<img src = "http://math.ucr.edu/home/baez/networks/cardelli_network_morphisms_2.jpg" alt = ""/>

There's a kind of 'covering' of reaction networks going on here, which reminds me a lot of what's happening in Lerman's work.

Next talk, on verifying "implementations" of chemical reaction networks, where we use one CRN to simulate another:

• Seung Woo Shin, U.C. Berkeley, Verifying CRN implementations.

Interestingly, Seung Woo Shin was [[Brendan Fong]]'s roommate when Brendan was visiting the Centre for Quantum Technologies in Singapore a few years ago! They've both done work on chemical reaction networks now... pure coincidence.

For example, we could simulate

$$ A \to Z $$ $$ B \to Z $$ $$ C \to Z $$

by

$$ A \to i $$ $$ B \to i $$ $$ C \to i $$ $$ i \to Z $$

We call the first CRN the **formal** CRN, and the second the **implementation** CRN.

So, we need a notion of "map between CRNs", but different than that considered by Luca Cardelli because we will not be considering reaction rates or rate equations - just the "logical" question of whether some collection of species can react and turn into another.


However, in their work, Seung and David Winfree do not consider the concept of "map between CRNs": they talk about an equivalence relation between CRNs. This is a tactical mistake, in my opinion: in math you should always first develop a concept of map, _then_ use this to develop a concept of equivalence. (This is one of the basic insights of category theory.) We spent a day discussing this and I tried to persuade them to study maps between CRNs; I have some ideas on what these should be, and I should blog about them!

They ignore "gate species", which are assumed to always be present. I.e., if $Z$ is always present they simplify

$$ A + Z \to B $$ $$ B \to C $$

to

$$ A \to B $$ $$ B \to C $$

We have to _specify by hand_ which are the "gate species". I should think about CRNs where a certain set of species are designated as gate species.
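This simplification is easy to mechanize. Here's a small sketch (my own illustration): given a set of gate species, specified by hand, we just delete them from both sides of every reaction:

```python
from collections import Counter

def drop_gates(reactions, gates):
    """Remove designated gate species (assumed always present) from each reaction."""
    simplified = []
    for reactants, products in reactions:
        r = Counter({s: n for s, n in reactants.items() if s not in gates})
        p = Counter({s: n for s, n in products.items() if s not in gates})
        simplified.append((r, p))
    return simplified

# The example from the text, with Z designated as a gate species:
crn = [(Counter({'A': 1, 'Z': 1}), Counter({'B': 1})),   # A + Z -> B
       (Counter({'B': 1}), Counter({'C': 1}))]           # B -> C
print(drop_gates(crn, {'Z'}))   # A -> B, B -> C
```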

Suppose we want to implement

$$ A + B \to C $$

We could do it via a CRN with an extra "intermediate" species $i$:

$$ A \to i $$ $$ i + B \to C $$

But they consider this CRN "untidy", because one of the species $A$ can turn into an $i$ and "get stuck" in that form if there are no $B$'s. They consider this other implementation "tidy":

$$ A \leftrightarrow i $$ $$ i + B \to C $$

They focus on tidy ones.
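On small examples, tidiness can be checked by brute force. A minimal sketch (my own illustration, not the authors' algorithm): search the reachable states for one with no intermediates left:

```python
from collections import Counter

def can_clear_intermediates(reactions, state, intermediates, limit=1000):
    """Search over states: can we reach a state with no intermediate species?"""
    seen = set()
    frontier = [state]
    while frontier and len(seen) < limit:
        s = frontier.pop()
        key = tuple(sorted(s.items()))
        if key in seen:
            continue
        seen.add(key)
        if not any(s[x] > 0 for x in intermediates):
            return True
        for reactants, products in reactions:
            if all(s[sp] >= n for sp, n in reactants.items()):
                t = Counter(s)
                t.subtract(reactants)
                t.update(products)
                frontier.append(+t)   # unary + drops zero counts
    return False

R = lambda ins, outs: (Counter(ins), Counter(outs))
untidy = [R('A', 'i'), R('iB', 'C')]
tidy   = [R('A', 'i'), R('i', 'A'), R('iB', 'C')]

# One A has already turned into an i, and there are no B's:
stuck = Counter('i')
print(can_clear_intermediates(untidy, stuck, {'i'}))  # False: i is stuck
print(can_clear_intermediates(tidy, stuck, {'i'}))    # True: i can revert to A
```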

Suppose we have CRN 1 as our formal CRN:

$$ A \to B $$ $$ B + C \to D $$

and CRN 2 as our implementation CRN:

$$ A \to i $$ $$ i \to B $$ $$ B \leftrightarrow j $$ $$ j + C \to D $$

The species written with capital letters, those in CRN 1, they call **formal species**. Every species in CRN 1 is mapped to one in CRN 2, and they use the same letter for its image in CRN 2. The lowercase letters, species which are not in the image of this map, they call **intermediates**.

A **state** is a complex, i.e. a multiset of species. A **formal state** is a state of CRN 2 that contains only formal species. These can also be thought of as states of CRN 1.

Suppose a formal state $Y$ is reachable from a formal state $X$ in CRN 2 if and only if it is reachable in CRN 1. Then we say CRN 2 is **reachability equivalent** to CRN 1.

In our example above, CRN 2 is reachability equivalent to CRN 1.
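For small examples like this one, reachability equivalence can be checked by exhaustive search over states. A sketch (my own illustration, not the authors' algorithm) confirming the claim for the initial state $A + C$:

```python
from collections import Counter

def reachable_states(reactions, start):
    """All states reachable from `start` (finite here: these reactions never grow the state)."""
    seen = set()
    frontier = [Counter(start)]
    while frontier:
        s = frontier.pop()
        key = tuple(sorted(s.items()))
        if key in seen:
            continue
        seen.add(key)
        for reactants, products in reactions:
            if all(s[sp] >= n for sp, n in reactants.items()):
                t = Counter(s)
                t.subtract(reactants)
                t.update(products)
                frontier.append(+t)
    return seen

def reachable_formal_states(reactions, start, formal):
    """Keep only the reachable states containing formal species alone."""
    return {s for s in reachable_states(reactions, start)
            if all(sp in formal for sp, _ in s)}

R = lambda ins, outs: (Counter(ins), Counter(outs))
crn1 = [R('A', 'B'), R('BC', 'D')]                                     # formal CRN
crn2 = [R('A', 'i'), R('i', 'B'), R('B', 'j'), R('j', 'B'), R('jC', 'D')]  # implementation

start, formal = 'AC', {'A', 'B', 'C', 'D'}
print(reachable_formal_states(crn1, start, formal) ==
      reachable_formal_states(crn2, start, formal))   # True
```

Both CRNs reach exactly the formal states $A + C$, $B + C$, and $D$ from this start.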

Note that this CRN:

$$ A \to B $$ $$ B \to C $$ $$ C \to A $$

is reachability equivalent to this:

$$ B \to A $$ $$ C \to B $$ $$ A \to A $$

But in some other sense these _aren't_ equivalent...

In his 2012 thesis, Dong introduced "bisimulation equivalence" for CRNs. It was inspired by the standard notion of "weak bisimulation" from the state-transition system literature... but different.

Two CRNs are **bisimulation equivalent** if we can find a good map from species in the implementation CRN to complexes of the formal CRN. We call this map an **interpretation**. But what does "good" mean here?

1. **Atomic:** For every species $X$ in the formal CRN there is at least one species in the implementation CRN that gets interpreted as $X$.

2. **Delimiting:** Under the interpretation, the sets of reactions in the two CRNs are just the same, up to trivial reactions.

3. **Permissive:** For any state $S'$ of the implementation CRN, if $S$ is its interpretation and a formal reaction $R$ can occur in state $S$, then there exists at least one reaction $R'$ in the implementation which can occur in $S'$ (up to trivial reactions) and whose interpretation is $R$.

A **trivial** reaction is a reaction in the implementation CRN that gets interpreted as a reaction in the formal CRN whose source and target are equal.

(All this will become much nicer when we use more category theory.)
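The delimiting condition is easy to check mechanically on the running example. A sketch (my own illustration; the trivial-reaction handling follows the definition above, and the interpretation `m` sends each implementation species to a formal complex, written here as a string):

```python
from collections import Counter

def interpret_complex(m, complex_):
    """Interpret a multiset of implementation species as a formal complex."""
    out = Counter()
    for sp, n in complex_.items():
        for _ in range(n):
            out.update(m[sp])
    return out

def is_delimiting(impl, formal_crn, m):
    """Interpreted implementation reactions = formal reactions, up to trivial ones."""
    interpreted = []
    for r, p in impl:
        ir, ip = interpret_complex(m, r), interpret_complex(m, p)
        if ir != ip:                      # drop trivial reactions
            interpreted.append((ir, ip))
    canon = lambda crn: sorted((tuple(sorted(r.items())), tuple(sorted(p.items())))
                               for r, p in crn)
    return canon(interpreted) == canon(formal_crn)

R = lambda ins, outs: (Counter(ins), Counter(outs))
crn1 = [R('A', 'B'), R('BC', 'D')]                                     # formal CRN
crn2 = [R('A', 'i'), R('i', 'B'), R('B', 'j'), R('j', 'B'), R('jC', 'D')]  # implementation
m = {'A': 'A', 'B': 'B', 'C': 'C', 'D': 'D', 'i': 'A', 'j': 'B'}       # interpretation
print(is_delimiting(crn2, crn1, m))   # True
```

Here $A \to i$ interprets as the trivial reaction $A \to A$, while $i \to B$ and $j + C \to D$ interpret as the two formal reactions.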

Another notion, "pathway decomposition equivalence", appears in Shin's 2012 thesis and also a paper by Shin, Thachuk and Winfree that will appear in 2014:

* Seung Woo Shin, [Compiling and verifying DNA-based chemical reaction network implementations](http://thesis.library.caltech.edu/6676/), PhD thesis, Caltech, 2012.

> **Abstract:** One goal of molecular programming and synthetic biology is to build chemical circuits that can control chemical processes at the molecular level. Remarkably, it has been shown that synthesized DNA molecules can be used to construct complex chemical circuits that operate without any enzyme or cellular component. However, designing DNA molecules at the individual nucleotide base level is often difficult and laborious, and thus chemical reaction networks (CRNs) have been proposed as a higher-level programming language. So far, several general-purpose schemes have been described for designing synthetic DNA molecules that simulate the behavior of arbitrary CRNs, and many more are being actively investigated.

> Here, we solve two problems related to this topic. First, we present a general-purpose CRN-to-DNA compiler that can apply user-defined compilation schemes for translating formal CRNs to domain-level specifications for DNA molecules. In doing so, we develop a language in which such schemes can be concisely and precisely described. This compiler can greatly reduce the amount of tedious manual labor faced by researchers working in the field. Second, we present a general method for the formal verification of the correctness of such compilation. We first show that this problem reduces to testing a notion of behavioral equivalence between two CRNs, and then we construct a mathematical formalism in which that notion can be precisely defined. Finally, we provide algorithms for testing that notion. This verification process can be thought of as an equivalent of model checking in molecular computation, and we hope that the generality of our verification techniques will eventually allow us to apply them not only to DNA-based CRN implementations but to a wider class of molecular programs.

This live blogging is real fun, even just to skim; it's like an online, real-time, state-of-the-art textbook.

John wrote:

> in my opinion: in math you should always first develop a concept of map, then use this to develop a concept of equivalence.

I love heuristics as an inevitable start of dealing with complexity and ignorance.

I've only read the posts, but the rigorous glossary is the best bit so far. I guess it will be very useful, i.e. make things clearer and plausibly simpler to use, if I can get to a critical reading of the cited papers. So, as usual, many thanks for that.

You might enjoy some of the stuff my friend +Marcus Lynch's former colleague, [Larry Bull](http://www.cems.uwe.ac.uk/~lbull/), an AI prof, has done. He's into unusual computing (zeno-phyto computation or some such) and I think he made an arithmetic adder out of pond lily leaves.

Jim wrote:

> This live blogging is real fun, even just to skim; it's like an online, real-time, state-of-the-art textbook.

Thanks! I would love it if you - and others - asked questions on the first blog article based on these notes:

* [The computational power of chemical reaction networks](http://johncarlosbaez.wordpress.com/2014/06/10/the-computational-power-of-chemical-reaction-networks/).

I'd especially appreciate questions like "Huh? What the hell does $\Delta_2^0$ mean?" There probably aren't many people who understand all the jargon in this blog post.

> I think he made an arithmetic adder out of pond lily leaves.

Zounds!

Did you mean "zeno-" or "xeno-"?

I only got as far as seeing the $\Delta_2^0$ and baulked: no time to complain. Did you do that on purpose? El Niño is the priority; then, for me, Crucifix's Van der Pol Milankovitch paper and equations 3.6-3.9 in A simple climate model.

Sure, the El Niño stuff should be more important to most of you folks. It's just that if anyone is interested in chemical reaction networks (= stochastic Petri nets) and bothers to read the blog article but gets bogged down by the jargon, they can ask (on the blog) what it means and I'll be happy to explain.


Nobody knows how to set up a chemical reaction that starts with a large number of molecules of various kinds and with high probability quickly "elects a leader" - creates a single molecule of some type $L$. This is the **fast leader election problem**.

However, David Doty knows a chemical reaction that with high probability elects a leader quickly if we are allowed to start with a large number of molecules of one type and a much smaller number of molecules of some other kind. Here it is:

Start with a state where the number of $A$'s is $\sqrt{n}/\log n$ and the number of $B$'s is $n$. Use a chemical reaction network with the following reactions:

$$ A + A \to L $$

This gives birth to the leader.

$$ L + B \to L + F $$

When a $B$ meets the leader, it becomes a follower.

$$ A + F \to 2 F $$ $$ B + F \to 2 F $$

The followers infect the $A$'s and $B$'s, turning them into followers.

$$ L + L \to L $$

If two leaders meet, one gets killed. This is slow, since there won't be many $L$'s, so it takes a long time for them to bump into each other. But it's unlikely that this process is invoked, since the "infection" spreads exponentially and with high probability all $A$'s are converted to $F$'s before a second leader is born.
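A quick Gillespie-style stochastic simulation (my own sketch, with all rate constants set to 1) shows the network ending with a single leader:

```python
import math
import random

def gillespie_leader_election(n, seed=0):
    """Stochastic simulation of Doty's leader-election CRN (unit rate constants)."""
    rng = random.Random(seed)
    A = max(2, int(math.sqrt(n) / math.log(n)))   # ~ sqrt(n)/log n A's
    B, L, F = n, 0, 0
    while A + B > 0 or L > 1:
        # Propensities for: A+A->L, L+B->L+F, A+F->2F, B+F->2F, L+L->L
        props = [A * (A - 1) / 2, L * B, A * F, B * F, L * (L - 1) / 2]
        total = sum(props)
        if total == 0:
            break
        pick = rng.uniform(0, total)
        for i, w in enumerate(props):
            pick -= w
            if pick < 0:
                break
        if i == 0:   A, L = A - 2, L + 1   # two A's give birth to a leader
        elif i == 1: B, F = B - 1, F + 1   # leader turns a B into a follower
        elif i == 2: A, F = A - 1, F + 1   # infection: an A becomes a follower
        elif i == 3: B, F = B - 1, F + 1   # infection: a B becomes a follower
        else:        L -= 1                # two leaders meet; one gets killed
    return L

print(gillespie_leader_election(10000))   # 1
```

(This simulates counts, not times, so it shows correctness of the outcome rather than the speed claim.)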

> **Wet-lab** The rational design of molecular interactions for synthetic biology, nanotechnology, and bioengineering. The goal is to engineer autonomous molecular systems that can sense, compute, and perform various actions.

Sorry, I don't have the time to read all of this carefully, so I may have missed some relevant information.

I was asking myself how well the CRN and DNA "model languages" work in describing real chemical processes. Was this discussed in the workshop? Is there somewhere an overview which says that, using such-and-such a "model language", the corresponding real chemical system performs such-and-such well (well in the sense of doing what the model describes)?

In particular, I could imagine that there are no real chemical systems (at least not without tweaking) for certain chemical reaction networks, but this is just a wild guess.

Yes, there are many difficulties with translating theoretical work on chemical reaction networks into systems that actually work in the lab. As everywhere, there are theorists who disregard this and study the beautiful math, theorists who get interested in the difficulties, and experimentalists who spend their days confronting them.

There are chemical reaction networks like

$$ A \to A + B $$

that violate conservation of mass, and chemical reaction networks like

$$ A \to B $$ $$ B \to C $$ $$ C \to A $$

that violate the laws of thermodynamics (if the reverse reactions aren't included). Interestingly, some of these "impossible" reaction networks can still be _simulated_ using DNA strand displacement cascades. People at Caltech are trying to make DNA strand displacement cascades into a general technique for implementing chemical reaction networks:

* David Soloveichik, Georg Seelig and Erik Winfree, [DNA as a universal substrate for chemical kinetics](http://www.pnas.org/content/early/2010/02/25/0909380107.full.pdf+html).

So far they have only implemented a couple of small networks.

Personally, I think that learning how organisms work and doing things like that may be more successful. However, I could be wrong: a plane doesn't work like a bird.

I wrote a second and final blog article on this workshop, which will appear on the Azimuth Blog at 00:30 on Saturday 28 June 2014. If you have questions, comments or suggestions, let me know!

* [[Blog - chemical reaction network talks]].

I decided to post this one now, since there hasn't been much discussion on the previous one, El Niño Project (Part 2), and I want to get going on further El Niño posts soon. So, it's here:

* [Chemical reaction network talks](http://johncarlosbaez.wordpress.com/2014/06/26/chemical-reaction-networks/), Azimuth.