
It's about time for this entry:

For now it's mainly just a way to accumulate references...

## Comments

How did we manage so long without such a page? I added some more references.

This seems a better place to reply to your comment about Petri nets and Bayesian nets in "Blog - Contribution to the MPE blog":

> Groovy! This is an important ‘inverse problem’ I haven’t started thinking about: not starting with a reaction network and deriving predictions, but starting with empirical data and trying to fit it to a reaction network.

> In some way that I haven’t begun to ponder, this should connect reaction networks to machine learning and Bayesian networks.

I don't get your intuition here. They are both examples of statistical inference, but so many things are examples of statistical inference (especially to me!) that this is not much of a connection.

> There’s something big going on here… but it’s not a simple case of “I see a network here and a network there, so they must be somehow the same”. In Bayesian networks the edges indicate causality, some random variable affecting another. That’s a seemingly different use of edges from reaction networks, where they mean something turning into something else! So we’ll need to invent a sufficiently rich framework to fit these two together.

A Bayesian network is acyclic. That seems a very restricted kind of reaction network.

A couple of quick points:

1. Bayesian networks don't necessarily encode causality: they encode conditional dependencies, but you can have things the other way.

2. It's possible to come up with networks which aren't acyclic: the graphs in, e.g., [turbo coding](http://en.wikipedia.org/wiki/Turbo_code). There's extensive theory on loopy [belief propagation](http://en.wikipedia.org/wiki/Belief_propagation), which attempts to formulate a good set of criteria for when belief propagation will (a) converge and (b) converge to the right solution in a graph with cycles.

3. It's probably already known, but the [Chow-Liu tree](http://en.wikipedia.org/wiki/Chow%E2%80%93Liu_tree) is essentially the best (to second-order error terms) tree approximation to a given probability distribution. It turns out that greedily adding the pair with the greatest mutual information to the tree is optimal for tree approximations. Since mutual information is related to entropy, this may be of interest in that connection.
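For concreteness, here is a sketch of that greedy Chow-Liu construction on synthetic binary data (the data, variable names, and helper functions are my own invention, not from the thread): estimate pairwise mutual information from samples, then add edges in decreasing MI order while avoiding cycles, Kruskal-style.

```python
import numpy as np
from itertools import combinations

# Three binary variables: x1 is a noisy copy of x0, x2 is independent.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 1000)
x1 = x0 ^ (rng.random(1000) < 0.1)   # flip x0 with probability 0.1
x2 = rng.integers(0, 2, 1000)
data = np.stack([x0, x1, x2], axis=1)

def mutual_info(a, b):
    """Plug-in estimate of I(a; b) in nats for binary arrays."""
    mi = 0.0
    for u in (0, 1):
        for v in (0, 1):
            p_uv = np.mean((a == u) & (b == v))
            p_u, p_v = np.mean(a == u), np.mean(b == v)
            if p_uv > 0:
                mi += p_uv * np.log(p_uv / (p_u * p_v))
    return mi

# Sort candidate edges by mutual information, add greedily, skip cycles
# (union-find keeps track of connected components).
parent = list(range(3))
def find(i):
    while parent[i] != i:
        i = parent[i]
    return i

edges = sorted(combinations(range(3), 2),
               key=lambda e: mutual_info(data[:, e[0]], data[:, e[1]]),
               reverse=True)
tree = []
for i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        tree.append((i, j))
print(tree)  # the (0, 1) edge, with the highest MI, enters first
```

The greedy maximum-weight spanning tree over mutual informations is exactly the Chow-Liu optimality result mentioned above.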

Sorry for the drive-by comment, but the remark

> In Bayesian networks the edges indicate causality, some random variable affecting another. That’s a seemingly different use of edges from reaction networks, where they mean something turning into something else! So we’ll need to invent a sufficiently rich framework to fit these two together.

reminded me of what I did while deriving the [discrete master equation](http://phorgyphynance.wordpress.com/2011/10/29/network-theory-and-discrete-calculus-the-discrete-master-equation/).

Graham wrote:

> I don’t get your intuition here. They are both examples of statistical inference, but so many things are examples of statistical inference (especially to me!) that this is not much of a connection.

Yeah, it's possibly a red herring. I know slick category-theoretic formulations of both reaction networks and Bayesian networks, with some similarities, so it would be fun to connect them somehow, perhaps using these formulations.

I added this to [[Network theory]]:

The term *adaptive network* is sometimes used to denote a network whose topology changes with time in a way that interacts with dynamics on the network. For an introduction see:

* Thilo Gross and Bernd Blasius, [Adaptive coevolutionary networks: a review](http://rsif.royalsocietypublishing.org/content/5/20/259.full.html), _Journal of the Royal Society Interface_ **5** (6 March 2008), 259–271.

A quote:

> A network consists of a number of network nodes connected by links (....) The specific pattern of connections defines the network's topology. For many applications it is not necessary to capture the topology of a given real-world network exactly in a model. Rather, in many cases the processes of interest depend only on certain topological properties (Costa et al. 2007). The majority of recent studies revolve around two key questions corresponding to two distinct lines of research: what are the values of important topological properties of a network that is evolving in time? And, how does the functioning of the network depend on these properties?

> The first line of research is concerned with the dynamics of networks. Here, the topology of the network itself is regarded as a dynamical system. It changes in time according to specific, often local, rules. Investigations in this area have revealed that certain evolution rules give rise to peculiar network topologies with special properties. Notable examples include the formation of small world (Watts & Strogatz 1998) and scale-free networks (Price 1965; Barabàsi & Albert 1999).

> The second major line of network research focuses on the dynamics on networks. Here, each node of the network represents a dynamical system. The individual systems are coupled according to the network topology. Thus, the topology of the network remains static while the states of the nodes change dynamically. Important processes that are studied within this framework include synchronization of the individual dynamical systems (Barahona & Pecora 2002) and contact processes such as opinion formation and epidemic spreading (Kuperman & Abramson 2001; Pastor-Satorras & Vespignani 2001; May & Lloyd 2001; Newman 2002; Boguñá et al. 2003). These studies have made it clear that certain topological properties have a strong impact on the dynamics. For instance, it was shown that vaccination of a fraction of the nodes cannot stop epidemics on a scale-free network (May & Lloyd 2001; Pastor-Satorras & Vespignani 2001).

> Until recently, the two lines of network research were pursued almost independently in the physical literature. While there was certainly a strong interaction and cross-fertilization, a given model would either describe the dynamics of a certain network or the dynamics on a certain network. Nevertheless, it is clear that in most real-world networks the evolution of the topology is invariably linked to the state of the network and vice versa. For instance, consider a road network. The topology of the network, that is the pattern of roads, influences the dynamic state, e.g. the flow and density of traffic. But if traffic congestions are common on a given road, it is probable that new roads will be built in order to decrease the load on the congested one. In this way a feedback loop between the state and topology of the network is formed. This feedback loop can give rise to a complicated mutual interaction between a time varying network topology and the nodes' dynamics. Networks which exhibit such a feedback loop are called coevolutionary or adaptive networks (figure 1).

> But if traffic congestions are common on a given road, it is probable that new roads will be built in order to decrease the load on the congested one.

On the other hand, one could call a nonexistent road a very congested road (that is, you have to push away all the earth and houses and mountains which are in your way...)

I got the Estrada book, and am going through it. It covers a lot of ideas.

I find this subject, which is mentioned there, to be intriguing:

Wikipedia: [Spectral Graph Theory](http://en.wikipedia.org/wiki/Spectral_graph_theory)

The general idea is to look at graphs in terms of the eigenvalues of their adjacency matrices.

What is the meaning of this approach? I didn't read much about it, but as an exercise here I'll try to think out the beginnings of it.

The adjacency matrix of a graph is just a representation for a binary relation A on a set S. Once we talk about eigenvalues, that means we're thinking of it as a linear transformation T from R^n to R^n, where n is the number of elements in S. The natural basis of R^n corresponds to the members of S. For s in S, T(s) is the set of elements that are connected to s through the relation.

So what are the eigenvectors of T? Here are some simple examples:

* If S0 is a subset of S which doesn't relate to anything through A, then A(S0) is the empty set, so the characteristic function of S0 is a null vector of T. I.e., a set of isolated nodes in the graph has eigenvalue 0.

* If S0 is a subset of S which gets permuted by A -- the set of nodes in an isolated cycle -- then the characteristic function of S0 is a fixed point of T, i.e. an eigenvector with eigenvalue 1. A union of such cycles will also have eigenvalue 1.

* The characteristic function of an isolated clique of N elements (each node related to the other N-1 nodes) will be an eigenvector with eigenvalue N-1. Same for a union of N-cliques.

Other examples? What could be the graph-theoretic interpretation of eigenvectors with non-integer eigenvalues?
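These cases are easy to check numerically. Here is a NumPy sketch using two toy graphs of my own (a directed 4-cycle and the complete graph K_4, neither taken from the thread):

```python
import numpy as np

# Directed 4-cycle: edge i -> (i+1) mod 4, so each node has exactly
# one predecessor. Row i of A lists the predecessors of node i.
n = 4
A = np.zeros((n, n), dtype=float)
for i in range(n):
    A[i, (i - 1) % n] = 1

w = np.ones(n)   # characteristic function of the whole cycle
print(A @ w)     # equals 1 * w: eigenvalue 1, as in the cycle case above

# Complete graph K_4, no self-loops: each node has 3 = n-1 predecessors.
K = np.ones((n, n)) - np.eye(n)
print(K @ w)     # equals (n-1) * w: eigenvalue n-1, the clique case
```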

Note that there are two different "graph matrices" that people consider spectra of:

1. The _adjacency matrix_, as you discuss above.

2. The _Laplacian matrix_, which is defined as $\sum_i d_i e_{i i} - \sum_{i,j\in E} e_{i j}$, i.e. putting the degree (number of incident edges) on the diagonal with the opposite signs to actual edges.

Firstly, note that both these matrices are hermitian, so all eigenvalues are real and all eigenvectors are orthogonal. One of the things about the Laplacian matrix is that it is "diagonally dominant", which amongst other things means the spectrum is well defined. Now if you scale the edges of the adjacency graph there's the straightforward connection that an eigenvector corresponds to the probability of where a particle will end up after a huge number of "uniformly random" steps if its initial distribution is uniform over all the nodes. This is of practical use, since if you think that the eigenvector of an enormous adjacency matrix is a solution, you can get an approximate eigenvector by summing the statistics of lots of random walks.

For the Laplacian matrix there's [spectral partitioning](http://en.wikipedia.org/wiki/Graph_partition#Spectral_partitioning_and_spectral_bisection), which shows that eigenvectors are related to partitions that minimise certain weight functions over cuts which divide the graph into various partitions.

One final interesting thing, which I'm interested in a lot, is the question: is there an analogue for "data on a graph" of the "multi-scale" decompositions, like wavelet and Fourier decompositions, that exist for "1-D data"? One popular way to do this is to say: in the 1-D case we have various "basis functions" (sines, cosines of various frequencies); these turn out to be eigenfunctions of certain differential equation kernels, and we know how to define similar kernels on graphs, numerically solve for their eigenfunctions, and say these define the multiscale decomposition basis functions. If you do this, it's related to graph spectra. I'm not too keen on this approach because it doesn't have that many nice properties, but I'm not aware of anything better.

There's plenty more stuff around, but these show graph spectra have lots of connections.
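To make the definition in point 2 concrete, here is a minimal sketch of building the Laplacian from an edge list (the 4-node toy graph is my own example):

```python
import numpy as np

# Undirected toy graph: a triangle 0-1-2 plus a pendant node 3 hung off node 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

D = np.diag(A.sum(axis=1))  # node degrees on the diagonal
L = D - A                   # the graph Laplacian described above

# L is symmetric, each row sums to zero, and its eigenvalues are
# real and non-negative, with the smallest always equal to 0.
vals = np.linalg.eigvalsh(L)
print(vals)
```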

David Tweed, I'm glad to hear your comments on this topic.

> Firstly note that both these matrices are hermitian, so all eigenvalues are real and all eigenvectors are orthogonal.

True, for the case of undirected graphs.

> One of the things about the Laplacian matrix is "diagonally dominant", which amongst other things means the spectrum is well defined.

Doesn't every matrix have a spectrum, which is just defined to be its set of eigenvalues?

Do you have any good introductory references on the subject? So far in the Estrada book, I've found lots of interesting statements about the spectrum of the adjacency matrix, but it doesn't pin down the answer to my initial question. Suppose my graph has two nodes, and the vector (8.8, 14.1) is an eigenvector of the adjacency matrix with eigenvalue 5.9. What is that telling me? You seem to be getting closer to my query when you said:

> Now if you scale the edges of the adjacency graph there's the straightforward connection that an eigenvector corresponds to the probability of where a particle will end up after a huge number of "uniformly random" steps if its initial distribution is uniform over all the nodes. This is of practical use, since if you think that the eigenvector of an enormous adjacency matrix is a solution, you can get an approximate eigenvector by summing the statistics of lots of random walks.

Can you spell this out a bit more? What do you mean by scaling the edges of the adjacency graph? What do you mean by taking uniformly random steps; what is the setup for these steps? It's new terrain for me, so I'm not able to read between the lines too much here.

Thanks!

OK, I think I see your point: if you scale the rows of the adjacency matrix so that it becomes a stochastic matrix, then the fixed points of that matrix can be interpreted as attractor points of the corresponding Markov chain. So this is an example where eigenvectors with eigenvalue 1 have a clear meaning. But it is for a different matrix than the adjacency matrix (which is not generally a scalar multiple of it), and it concerns the specific eigenvalue 1 -- so my question is still open about the meaning of general eigenvectors of the adjacency matrix.
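Here is a toy illustration of that point (the 3-node graph is my own example): row-normalizing the adjacency matrix gives a stochastic matrix, and the stationary distribution of the Markov chain is its eigenvalue-1 left eigenvector, found here by power iteration.

```python
import numpy as np

# Adjacency matrix of the triangle graph (strongly connected).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

# Row-normalize: each row now sums to 1, so P is a stochastic matrix.
P = A / A.sum(axis=1, keepdims=True)

# Power iteration on the left: pi P = pi at the fixed point.
pi = np.ones(3) / 3          # start from the uniform distribution
for _ in range(100):
    pi = pi @ P
print(pi)                    # [1/3, 1/3, 1/3] by the graph's symmetry
```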

Here's a [reference about how the Laplacian matrix relates to graph connectivity](http://www.math.pku.edu.cn/teachers/yaoy/Fall2011/lecture08.pdf). I'll try to write some more later, but one of the important things is that some characteristics of graphs are encoded at specific (eigenvector, eigenvalue) positions in the ordered list of eigenvalues. So the second-smallest eigenvalue is related to the connectivity of the graph.

The use of random walks is, as I understand it, more for computational purposes on graphs the size of billions or trillions of nodes that people like Google and Facebook work with.

I was in a bit of a rush, and you're right that matrix spectra are well-defined; what I was thinking was that diagonally dominant matrices are more well-behaved.

I'll try to write more later.
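A small numerical illustration of the second-smallest eigenvalue (a sketch; the "two triangles joined by a bridge" graph is my own example, not from the reference): the corresponding eigenvector's signs recover the natural two-way partition.

```python
import numpy as np

# Two triangles, nodes {0,1,2} and {3,4,5}, joined by the bridge edge (2,3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
fiedler = vecs[:, 1]             # eigenvector of the second-smallest eigenvalue
print(np.sign(fiedler))          # one sign per triangle: the two clusters
```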

I made some progress in answering my question.

Let A be the adjacency matrix for a directed graph G that contains n nodes. This is, of course, the general case, and whenever we speak of an undirected graph, that is taken as a shorthand for a directed graph with edges going in both directions.

Let T: R^n --> R^n be the linear transformation performed by A.

Let v in R^n be a vector whose components are natural numbers. So v represents a multiset of nodes in G. Then T(v) also has components that are natural numbers.

I.e., T maps N^n into N^n.

Now let's start by considering the interpretation of an eigenvector w in N^n. Since T(w) is in N^n, it must be that T(w) = q w, for some non-negative rational number q.

Let's take a further special case, where w1 is an eigenvector whose components are 0 or 1. Then T(w1) = k w1, for some non-negative integer k.

w1 is the characteristic function for a subgraph G1 of G.

Hence, for any node i in G, T(w1)(i) = the number of predecessors that i has in G1.

Since w1 is an eigenvector with eigenvalue k, we also know that for any node i in G, T(w1)(i) = k if i is in G1, else it equals 0.

Putting these two statements together, it follows that T(w1)(i) = the number of predecessors that i has in G1 = k if i is in G1, else 0.

Conclusion: the characteristic function of a subgraph G1 is an eigenvector of T, with eigenvalue k, if and only if every node in G1 has exactly k predecessors in G1, and every node outside G1 has no predecessors in G1 (this second condition is why the examples below must be "isolated" subgraphs).

Let G' be the restriction of G to G1. Cases:

* k = 0. Then for each node i in G1, all of the predecessors are outside of G1. That is to say, G' has no edges at all.

* k = 1. Every node in G' has a single predecessor. Then G' is a union of finite cycles and/or infinite linear chains.

* k = 2. Every node in G' has exactly two predecessors. Example: an infinite binary tree, oriented with the edges coming into the root. Example: an undirected graph that consists of linear chains and finite cycles.

* k = 3. Example: the cube, viewed as an undirected graph. Example: an infinite hexagonal lattice, viewed as an undirected graph.

In the world of undirected graphs, the characteristic function of any subgraph in one of the following forms is an eigenvector:

* isolated points
* cycles
* infinite chains
* regular polyhedron
* regular infinite lattice
* clique
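The k = 3 cube case is easy to verify numerically; here is a NumPy sketch (the bit-string construction of the cube is my own):

```python
import numpy as np
from itertools import product

# The 3-cube as an undirected graph: vertices are bit-strings of length 3,
# edges join strings differing in exactly one bit; every vertex has 3 neighbors.
verts = list(product([0, 1], repeat=3))
n = len(verts)
A = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        if sum(x != y for x, y in zip(verts[a], verts[b])) == 1:
            A[a, b] = 1

w = np.ones(n)   # characteristic function of the whole cube
print(A @ w)     # 3 * w: eigenvalue 3, matching the k = 3 case above
```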

The spectral approach to studying graphs is deeply connected to:

1) The study of electrical networks made of resistors, and random walks on graphs. For graph Laplacians, random walks, quantum random walks, networks made of resistors and a (crude) application to chemistry, see [Part 15](http://math.ucr.edu/home/baez/networks/networks_15.html) and [Part 16](http://math.ucr.edu/home/baez/networks/networks_16.html) of the network theory series.

2) The [spectral approach to geometry in general](http://en.wikipedia.org/wiki/Spectral_geometry). This is a huge and fascinating subject.

Fan Chung has written extensively on spectral graph theory. Her spectral graph theory papers are here:

[http://www.math.ucsd.edu/~fan/cgi-bin/papers.pl?kwd=eigen](http://www.math.ucsd.edu/~fan/cgi-bin/papers.pl?kwd=eigen)

She also has a couple of books on the subject. She advocates the use of the normalized Laplacian matrix, where both the rows and columns are scaled by the square root of the degree of the corresponding node, making the matrix symmetric, with ones on the diagonal.
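A sketch of constructing the normalized Laplacian just described, on a toy graph of my own (assuming every node has degree at least 1 so the scaling is defined):

```python
import numpy as np

# Triangle 0-1-2 with a pendant node 3 attached to node 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

d = A.sum(axis=1)                       # node degrees
Dinv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2}

# Normalized Laplacian: I - D^{-1/2} A D^{-1/2}.
# It is symmetric, has ones on the diagonal, and its eigenvalues lie in [0, 2].
L_norm = np.eye(n) - Dinv_sqrt @ A @ Dinv_sqrt
print(np.linalg.eigvalsh(L_norm))
```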

I just got an email from one V. Ramayya saying:

> I am a mechanical and software engineer interested in applying Category Theory to Systems Biology as applied to Ovarian Cancer.

> When googling through the internet, I came across your webpage on Diagrams <http://ncatlab.org/johnbaez/show/Diagrams>. Going through the material on this webpage, I did not see any reference to Professor Forrester's "System Dynamics".

> Since the life-cycle of Cancer is dynamic, may I know why "system dynamics" was not talked about on this page?

Of course the reason is that my abilities and knowledge are limited. But I took this opportunity to remove everything from that Diagrams page on the nLab and put it here:

* [[Network theory]] on the Azimuth Library.

I also added some material on Jay Forrester's work on system dynamics.

I also tried to organize this page a bit more.

I found out some interesting things about Systems Biology Graphical Notation (SBGN) from [Nicolas Le Novere](http://lenoverelab.org/perso/lenov/index.html), who is part of the group developing this.

I said:

> I'm working on the mathematical foundations of many different diagrammatic languages for networks. My ultimate goal is to study networks in biology and ecology, hence my interest in SBGN. So far I'm finding plenty of food for thought in simpler subjects, like the use of diagrammatic languages in engineering. These are simpler for me since the engineers already use these networks in a mathematical way; they just don't bring the full resources of modern mathematics to bear.

He said:

> It is worth to say that the three SBGN languages have equivalent modelling representations. SBGN Process Descriptions are representations of ... processes, as used in chemical kinetics (and in general in general systems theory). One can derive an ODE system from an SBGN PD (the opposite is harder). SBGN Entity Relationships correspond to rule-based models (e.g. models encoded in BioNetGen, Kappa etc.). SBGN Activity Flows correspond to logical models (the approaches used by Stuart Kauffman and Rene Thomas).

This was really helpful to me, since I didn't know this! I'll study them more, starting with the Process Description language, since this sounds similar to chemical reaction networks, only more powerful.

I added this info to the [[Network theory]] page of the [[Azimuth Library]].

Here's the revised section:

<hr/>

Diagrams are used extensively in a variety of ways in biology, and this project is an attempt to clarify and standardize their different uses:

* [Systems Biology Graphical Notation (SBGN)](http://www.sbgn.org/Main_Page) homepage.

SBGN is made up of three different languages, representing different visions of biological systems. Each language involves a comprehensive set of symbols with precise semantics, together with detailed syntactic rules for how maps are to be interpreted:

* The SBGN [Process Description (PD) language](http://precedings.nature.com/documents/3721/version/2) shows the temporal courses of biochemical interactions in a network. It can be used to show all the molecular interactions taking place in a network of biochemical entities, with the same entity appearing multiple times in the same diagram. Process Descriptions correspond to systems of ordinary differential equations.

  <img src = "http://math.ucr.edu/home/baez/SBGN_process_description.jpg" alt = "PD"/>

* The SBGN [Entity Relationship (ER) language](http://precedings.nature.com/documents/3719/version/2) allows you to see all the relationships in which a given entity participates, regardless of the temporal aspects. Relationships can be seen as rules describing the influences of entity nodes on other relationships. Entity Relationships correspond to rule-based models (e.g. models encoded in [[BioNetGen]], [[Kappa]], etc.).

  <img src = "http://math.ucr.edu/home/baez/SBGN_entity_relationship.jpg" alt = "ER"/>

* The SBGN [Activity Flow (AF) language](http://precedings.nature.com/documents/3724/version/1) depicts the flow of information between biochemical entities in a network. It omits information about the state transitions of entities and is particularly convenient for representing the effects of perturbations, whether genetic or environmental in nature. Activity Flows correspond to logical models of the sort used by Stuart Kauffman and Rene Thomas.

  <img src = "http://math.ucr.edu/home/baez/SBGN_activity_flow.jpg" alt = "AF"/>
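The "logical models" that Activity Flows correspond to are Boolean networks in the style of Stuart Kauffman: each node carries a truth value, updated synchronously by a Boolean function of the other nodes. A minimal sketch, where the three nodes and their update rules are invented for illustration:

```python
# A tiny Kauffman-style Boolean network: synchronous updates,
# iterated until the trajectory revisits a state (fixed point or cycle).
# The nodes x, y, z and their rules are hypothetical.

def step(state):
    """One synchronous update of all nodes."""
    return {
        "x": state["y"] and not state["z"],  # y activates x, z inhibits it
        "y": state["x"],                     # x activates y
        "z": state["x"] or state["y"],       # either x or y activates z
    }

state = {"x": True, "y": False, "z": False}
seen = []
while state not in seen:
    seen.append(state)
    state = step(state)

print("trajectory:", seen)
print("re-enters at:", state)
```

Since the state space is finite, every trajectory must eventually enter a fixed point or a cycle — the "attractors" that Kauffman identifies with cell types.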

More useful information from Nicolas Le Novere! He had written:

> SBGN Process Descriptions are representations of ... processes, as used in chemical kinetics (and in general in general systems theory). One can derive an ODE system from an SBGN PD (the opposite is harder). SBGN Entity relationships correspond to rule-based models (e.g. models encoded in BioNetGen, Kappa etc.). SBGN Activity Flows correspond to logical models (the approaches used by Stuart Kauffman and Rene Thomas).

I replied:

> Thanks, you don't know how helpful that is to me! I'm familiar with getting systems of ODE in chemistry from "chemical reaction networks", so I'm hoping "process descriptions" are similar but more general. The others are less familiar to me but I'll learn about them.

He replied:

> Absolutely. The Process Description language was the first SBGN language, and evolved from the "CellDesigner notation". CellDesigner is a software that was originally a graphical editor for a modelling language called SBML (the initial name of the software was SBedit). And SBML was essentially based on lists of reactions. (Nowadays, it is way more, but that was the initial aim.)
>
> [http://www.celldesigner.org](http://www.celldesigner.org)
>
> [http://www.nature.com/nbt/journal/v23/n8/abs/nbt1111.html](http://www.nature.com/nbt/journal/v23/n8/abs/nbt1111.html)
>
> [http://sbml.org](http://sbml.org)
>
> [http://bioinformatics.oxfordjournals.org/content/19/4/524.full](http://bioinformatics.oxfordjournals.org/content/19/4/524.full)
>
> As for the others, the Entity Relationship language evolved from the Molecular Interaction Maps of Kurt Kohn:
>
> [http://discover.nci.nih.gov/mim/](http://discover.nci.nih.gov/mim/)
>
> [http://www.molbiolcell.org/content/17/1/1.long](http://www.molbiolcell.org/content/17/1/1.long)
>
> The Activity Flows are just what molecular biologists use on their powerpoint presentations really.
>
> A fairly long presentation on SBGN can be found here:
>
> [http://lenoverelab.org/perso/lenov/LECTURES/PinkSeminar2009-clean.pdf](http://lenoverelab.org/perso/lenov/LECTURES/PinkSeminar2009-clean.pdf)

On Google+, [Ted Pavlic](https://sols.asu.edu/people/theodore-p-pavlic) wrote:

> A number of prominent control theorists (like [Richard Murray](http://www.cds.caltech.edu/~murray/wiki/Main_Page) and his most recent graduate students and postdocs) have turned to synthetic biology as a major application area. Consequently, they make heavy use of another diagram used in living systems -- the gene regulatory network. Gene regulatory networks are very similar to signal flow diagrams as they incorporate negative and positive feedback and amplification. You didn't mention gene regulatory networks, but I think you should consider them as you venture outside of control.
>
> At larger ecosystem scales, it is important to consider food webs and how to augment them with mass-balance constraints. To this end, researchers have developed "[ecological stoichiometry](http://en.wikipedia.org/wiki/Ecological_stoichiometry)" which turns food webs into chemical reaction networks. More recently, ES researchers in ecology have retuned it to consider the scale of the cell. This new version is being called "[biological stoichiometry](http://www.nature.com/scitable/knowledge/library/biological-stoichiometry-102248897)." Both cases seem to fit into your ultimate goal.
>
> I'm a control theorist working in a behavioral ecology lab -- splitting my collaborative time between engineers, physicists, mathematicians, and biologists (behavioral, neuroethological, physiological). Additionally, I periodically interact with biogeochemists interested in geological engineering that lives exactly at the technology--biosphere interface. So there are a lot of people working in this space, and it would probably be useful to survey how people are already shuttling ideas about abstraction across these disciplinary boundaries.

I'll add this comment to [[Network theory]].
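Pavlic's point about gene regulatory networks incorporating feedback, like signal flow diagrams, is easy to illustrate with a standard modeling idiom: ODEs whose production terms are Hill functions. Here is a minimal hypothetical sketch of a two-gene loop — gene 1 activates gene 2, gene 2 represses gene 1 — with all parameters invented for illustration:

```python
# A two-gene regulatory loop with negative feedback, modeled with
# Hill functions and integrated to steady state by forward Euler.
# Genes, parameters, and rates are hypothetical.

def hill_activation(x, K=1.0, n=2):
    """Production increases with activator level x."""
    return x**n / (K**n + x**n)

def hill_repression(x, K=1.0, n=2):
    """Production decreases with repressor level x."""
    return K**n / (K**n + x**n)

def derivatives(p1, p2, alpha=1.0, beta=1.0, gamma=0.5):
    dp1 = alpha * hill_repression(p2) - gamma * p1  # gene 1 repressed by p2
    dp2 = beta * hill_activation(p1) - gamma * p2   # gene 2 activated by p1
    return dp1, dp2

p1, p2, dt = 0.1, 0.1, 0.01
for _ in range(10000):
    d1, d2 = derivatives(p1, p2)
    p1 += dt * d1
    p2 += dt * d2

print(f"steady state: p1 = {p1:.3f}, p2 = {p2:.3f}")
```

With these particular parameters the loop spirals into a stable steady state; the same negative-feedback structure with added delay or cooperativity is what produces the oscillations familiar from signal flow analysis.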