
## Comments

Frederick - beautiful! I take it back: people should not take powers of the matrix

$$ \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right) $$ by hand, they should take powers of the matrix

$$ \left( \begin{array}{cc} f & g \\ h & 0 \end{array} \right) $$ by hand. The paths we're trying to count will pop out even more visibly. But one still needs to think, to understand exactly why it works! It's not magic, just logic.
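To see this concretely, here is a minimal Python sketch (mine, not from the thread) that takes powers of the numerical matrix and watches the Fibonacci numbers appear as entries:

```python
# Powers of [[1, 1], [1, 0]] have Fibonacci numbers as entries,
# which is the path-counting fact discussed above.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [1, 0]]
P = A
for n in range(1, 6):
    print(n, P)  # entry (0, 0) of A^n is the (n+1)st Fibonacci number
    P = matmul(P, A)
```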


Hey John,

I have a question about this, since you are an expert.

Suppose I have the sequence \[1,1,2,4,7,13,24,...\]

These are the [Tribonacci Numbers](http://mathworld.wolfram.com/TribonacciNumber.html), or [A000073](https://oeis.org/A000073) in the OEIS. This is given by the recurrence

\[ x_{n} = x_{n-1} + x_{n-2} + x_{n-3} \]

I know a matrix that I can use to compute the *n*th term of this in \(\mathcal{O}(\ln(n))\). How would you make a matrix to represent this recurrence?

(I'm asking as a novice turning to a master. I think my solution has some reduction applied that makes the graph connection less clear.)
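For what it's worth, here is a sketch of one way to get the \(\mathcal{O}(\ln(n))\) computation: repeated squaring of the \(3 \times 3\) companion matrix of the recurrence. (The code and function names are my own, and may differ from the reduction being asked about.)

```python
# Fast computation of the Tribonacci numbers 1, 1, 2, 4, 7, 13, 24, ...
# via repeated squaring of the companion matrix of
# x_n = x_{n-1} + x_{n-2} + x_{n-3}.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, e):
    """Raise a square matrix to a non-negative power by repeated squaring."""
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = matmul(R, A)
        A = matmul(A, A)
        e >>= 1
    return R

def tribonacci(n):
    """n-th term of 1, 1, 2, 4, 7, 13, 24, ... (0-indexed)."""
    if n < 3:
        return [1, 1, 2][n]
    T = [[1, 1, 1], [1, 0, 0], [0, 1, 0]]
    P = matpow(T, n - 2)
    # Row 0 of P applied to the initial column (2, 1, 1) gives x_n.
    return P[0][0] * 2 + P[0][1] * 1 + P[0][2] * 1

print([tribonacci(n) for n in range(7)])  # → [1, 1, 2, 4, 7, 13, 24]
```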


The matrix is sort of like a de-categorized category.

It's still weird, though. What does a graph have to do with multidimensional vector spaces?


In order to do matrix multiplication, we need a multiplication and a sum operator. If we take composition \(\lbrace\circ\rbrace\) to be our multiplication, what is the sum? Just simple old set union \(\lbrace\cup\rbrace\)?


Composition and union are right.

\[ A^1 = \left( \begin{array}{cc} \lbrace f \rbrace & \lbrace g \rbrace \\ \lbrace h \rbrace & \emptyset \end{array} \right) \]

\[ A^2 = \left( \begin{array}{cc} \lbrace f;f, \; g;h \rbrace & \lbrace f;g \rbrace \\ \lbrace h;f \rbrace & \lbrace h;g \rbrace \end{array} \right) \]


\[ A^3 = \left( \begin{array}{cc} \lbrace f;f;f, \; g;h;f, \; f;g;h \rbrace & \lbrace f;f;g, \; g;h;g \rbrace \\ \lbrace h;f;f, \; h;g;h \rbrace & \lbrace h;f;g \rbrace \end{array} \right) \]
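These powers can be checked mechanically. Here is a sketch in Python (representation and names mine), using sets of path labels with union as addition and pairwise \(;\)-composition as multiplication:

```python
# 2x2 matrices whose entries are sets of path labels, with set union
# as "+" and pairwise ";"-composition as "·".

def compose(X, Y):
    """Product of two set-valued entries: all pairwise compositions x;y."""
    return {x + ";" + y for x in X for y in Y}

def matmul(A, B):
    n = len(A)
    return [[set().union(*(compose(A[i][k], B[k][j]) for k in range(n)))
             for j in range(n)] for i in range(n)]

A = [[{"f"}, {"g"}],
     [{"h"}, set()]]  # the empty set plays the role of 0

A2 = matmul(A, A)
print(A2)  # entry (0, 0) is {'f;f', 'g;h'}, matching the A^2 above
```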


Keith wrote:

> In order to do matrix multiplication, we need a multiplication and a sum operator. If we take composition \(\lbrace\circ\rbrace\) to be our multiplication, what is the sum? Just simple old set union \(\lbrace\cup\rbrace\)?

I intended people to just multiply and add without worrying about what it means! Physicists do this all the time - it's very useful. They call it "formal manipulations". Mathematicians call it "abstract algebra" and can't resist making the rules precise.

In other words: make up something called \(\cdot\) for multiplication, something called \(+\) for addition - and don't ask what they mean, just have them obey the usual rules of a [rig](https://en.wikipedia.org/wiki/Semiring):

- addition and multiplication are associative,
- addition is also commutative,
- multiplication distributes over addition,
- anything plus 0 is itself,
- anything times 0 is zero,
- anything times 1 is itself.

This will let you start multiplying the matrix

$$ \left( \begin{array}{cc} f & g \\ h & 0 \end{array} \right) $$ by itself, over and over... and you'll see all the paths in this graph start to emerge:

<center><img src="http://math.ucr.edu/home/baez/mathematical/7_sketches/graph_f.png"></center>

You'll also see why I didn't want multiplication to be commutative.

Of course you should then wonder what's going on. You might decide that \(\cdot\) means the same thing as composition \(\circ\) in our category. You might reinvent a whole bunch of math. It's more fun than learning it from a book!

Frederick was doing something like this in the comment right before this one, but he was working less abstractly: he seems to be taking \(\cdot\) to be composition and \(+\) to be union of [multisets](https://en.wikipedia.org/wiki/Multiset), approximately speaking. But for this you need to decide what it means to compose two finite multisets of morphisms: that's where the distributive law comes in!

There's also the usual problem of whether you use \(f\circ g\) to mean "first \(g\) then \(f\)" or something like \(f ;g\) to mean "first \(f\) then \(g\)". As usual, it doesn't matter much as long as you're consistent. The second convention is a bit less annoying.
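The point that only the rig rules matter can be made concrete: one matrix-multiplication routine, parametrized by the rig operations, works both for counting paths over \(\mathbb{N}\) and for listing them over sets. A sketch (all names mine):

```python
# Matrix multiplication over an arbitrary rig (add, mul, zero).
# Plugging in ordinary arithmetic counts paths; plugging in union and
# ";"-composition lists them.

def matmul(A, B, add, mul, zero):
    n = len(A)
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            acc = zero
            for k in range(n):
                acc = add(acc, mul(A[i][k], B[k][j]))
            row.append(acc)
        out.append(row)
    return out

# The rig (N, +, ×): counts paths.
N = [[1, 1], [1, 0]]
print(matmul(N, N, lambda a, b: a + b, lambda a, b: a * b, 0))  # → [[2, 1], [1, 1]]

# The "path rig": union as addition, pairwise composition as multiplication.
S = [[{"f"}, {"g"}], [{"h"}, set()]]
comp = lambda X, Y: {x + ";" + y for x in X for y in Y}
print(matmul(S, S, lambda a, b: a | b, comp, set()))
```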


Keith wrote:

> What does a graph have to do with multidimensional vector spaces?

I think the question boils down to this: "what do composition of paths and unions of multisets of paths have to do with multiplication and addition"? And the answer is, they both obey the same rules: the rules of a rig, which I listed above.


\[ A^4 = \left( \begin{array}{c | cc} s \rightarrow t & x & y \\ \hline x & \lbrace f;f;f;f, \; g;h;f;f, \; f;g;h;f, \; f;f;g;h, \; g;h;g;h \rbrace & \lbrace f;f;f;g, \; g;h;f;g, \; f;g;h;g \rbrace \\ y & \lbrace h;f;f;f, \; h;f;g;h, \; h;g;h;f \rbrace & \lbrace h;f;f;g, \; h;g;h;g \rbrace \end{array} \right) \]

$$ A^1 = \left( \begin{array}{c | cc} s \rightarrow t & x & y \\ \hline x & \lbrace f \rbrace & \lbrace g \rbrace \\ y & \lbrace h \rbrace & \emptyset \end{array} \right) $$ Oh, I see! ^:)^

This actually does not require a pointed graph. The matrix gives all the paths between every pair of nodes.


Matthew wrote:

> \[ x_{n} = x_{n-1} + x_{n-2} + x_{n-3} \]

Cool, the Tribonacci numbers! I discussed them in Week 1 of this course, and gave some homework about them - you can see solutions:

- [Quantization and Categorification](http://math.ucr.edu/home/baez/qg-winter2004/), Winter 2004.

This could lead you down an even deeper rabbit-hole than the one you're exploring now!

> How would you make a matrix to represent this recurrence?

This is going to take some work, so I will only give you the answer because you flattered me so much. (They say "flattery gets you nowhere," but they're lying.)

There's a famous way to convert higher-order linear ordinary differential equations into systems of first-order ones. For example if we take

$$ \frac{d^2}{d t^2} x(t) = 3 \frac{d}{dt} x(t) + 7 x(t) $$ and write \(\dot{x}\) for \(\frac{d}{dt} x(t)\) it becomes

$$ \frac{d}{dt} \left(\begin{array}{c} \dot{x} \\ x \end{array}\right) = \left(\begin{array}{cc} 3 & 7 \\ 1 & 0 \end{array} \right) \left(\begin{array}{c} \dot{x} \\ x \end{array}\right) .$$ We can then solve the equation using matrix tricks. More to the point, a similar trick works for linear recurrences. So we can take

$$ x_{n+2} = 3 x_{n+1} + 7 x_n $$ and write it as

$$ \left(\begin{array}{c} x_{n+2} \\ x_{n+1}\end{array}\right) = \left(\begin{array}{cc} 3 & 7 \\ 1 & 0 \end{array} \right) \left(\begin{array}{c} x_{n+1} \\ x_n \end{array}\right) .$$ We can make it even more slick using an "evolution operator" \(U\) that increments the "time" \(n\) by one step. By definition

$$ U \left(\begin{array}{c} x_{n+1} \\ x_{n}\end{array}\right) =\left(\begin{array}{c} x_{n+2} \\ x_{n+1}\end{array}\right) .$$ Then our earlier equation becomes

$$ U \left(\begin{array}{c} x_{n+1} \\ x_{n}\end{array}\right) = \left(\begin{array}{cc} 3 & 7 \\ 1 & 0 \end{array} \right) \left(\begin{array}{c} x_{n+1} \\ x_n \end{array}\right) , $$ but this just means

$$ U = \left(\begin{array}{cc} 3 & 7 \\ 1 & 0 \end{array} \right) $$ so we know stuff like

$$ \left(\begin{array}{c} x_{n+23} \\ x_{n+22}\end{array}\right) = U^{23} \left(\begin{array}{c} x_{n+1} \\ x_n \end{array}\right) $$ I think the answer to your question is lurking in what I said. Note that I considered a second-order differential equation and a second-order recurrence, but the exact same idea works for the third-order one you are interested in. You just need a \(3 \times 3\) matrix instead of a \(2 \times 2\).
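A quick numerical sketch of the evolution-operator trick above for \(x_{n+2} = 3 x_{n+1} + 7 x_n\) (the initial values \(x_0 = 0, x_1 = 1\) are my arbitrary choice):

```python
# The evolution operator U increments "time" n by one step:
# U (x_{n+1}, x_n) = (x_{n+2}, x_{n+1}).

U = [[3, 7],
     [1, 0]]

def step(U, v):
    """Apply U to a column vector (x_{n+1}, x_n)."""
    return [U[0][0] * v[0] + U[0][1] * v[1],
            U[1][0] * v[0] + U[1][1] * v[1]]

v = [1, 0]   # (x_1, x_0), an arbitrary choice of initial conditions
seq = [0, 1]
for _ in range(5):
    v = step(U, v)
    seq.append(v[0])
print(seq)  # → [0, 1, 3, 16, 69, 319, 1440]; each term is 3·prev + 7·prev2
```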


Let \(\mathbf{Matrix}\) be a category whose objects are matrices and morphisms are matrix homomorphisms.

Do there exist \(F,U\in \mathrm{Mor}(\mathbf{Matrix})\) such that \[ \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right) \overset{F}{\rightarrow} \left( \begin{array}{cc} f & g \\ h & 0 \end{array} \right) \] and \[ \left( \begin{array}{cc} f & g \\ h & 0 \end{array} \right) \overset{U}{\rightarrow} \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right) ? \]


I've never heard of a "matrix homomorphism", so you'd have to define that concept before I could answer. I know what a homomorphism of matrix algebras is, but that doesn't go between matrices: it goes between *sets* of matrices.

Oh okay. How would we formalize the idea behind what I was trying to get across? Can we?


You'd probably want to consider the category of modules, since modules are the appropriate generalizations of vector spaces to arbitrary rings. (Though, John only requires rigs, which are strictly more general.) I suspect that endofunctors on this category would let you swap out the underlying ring in the manner you're suggesting.

(But take this with a grain of salt -- I'm throwing around high-powered rules without quite knowing what they mean!)


I believe we are thinking of the same approach.

In another thread I wrote a few recurrence puzzles. I threw a 9th order one in there. I asked folks to compute \(10^7\) terms in these recurrences.

The only way I know how to compute so many terms uses your trick...


Hey John,

I think I know what Keith is getting at.

Take the free semigroup \(\mathfrak{S} = \langle S, ; \rangle\) over an infinite alphabet (using Frederick Eisel's notation).

As you noted, you can make a rig using multisets over \(\mathfrak{S}\) with \(\cup\) and \(\otimes\), with \(\otimes\) defined as:

$$ X \otimes Y := ⦃ x ; y \; : \; x \in X \text{ and } y \in Y ⦄ $$

Moreover, while multisets form a rig, we also have that *finite* multisets form a rig. For any rig, the \(n \times n\) square matrices over that rig form another rig.

There's a "rig-homomorphism" that takes \(n \times n\) matrices over finite multisets to \(n \times n\) matrices over \(\mathbb{N}\). In particular, it maps corresponding matrix-elements \(a_{ij} \mapsto \lVert a_{ij} \rVert \).

I don't think there's an adjunction there, though.
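A small sketch of the multiset rig and the cardinality map, using Python's `Counter` as a stand-in for finite multisets (the example multisets are my own):

```python
# The product X ⊗ Y from above, and the cardinality map a ↦ ||a|| into N,
# which turns pairwise composition into ordinary multiplication.

from collections import Counter

def otimes(X, Y):
    """Multiset of all compositions x;y, with multiplicity."""
    out = Counter()
    for x, cx in X.items():
        for y, cy in Y.items():
            out[x + ";" + y] += cx * cy
    return out

X = Counter({"f": 2, "g": 1})
Y = Counter({"h": 3})
Z = otimes(X, Y)
print(Z)
# Cardinality behaves homomorphically: ||X ⊗ Y|| = ||X|| · ||Y||
print(sum(Z.values()), sum(X.values()) * sum(Y.values()))
```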


Tobias Fritz [posted a puzzle](https://forum.azimuthproject.org/discussion/comment/18654/#Comment_18654) a while ago and it seems relevant for the current discussion on the number of paths in a graph:

> **Puzzle TF4:** We know that we can keep track of *whether it's possible* to get from one vertex to another using a \(\mathbf{Bool}\)-enriched category; we can also keep track of *how many steps it takes* using a \(\mathbf{Cost}\)-enriched category. Can we also keep track of *how many paths there are* using an enriched category? In which monoidal poset would we have to enrich? And can we do this in such a way that we count paths of length \(n\) separately for each \(n\)?

I *think* the answer to the first part of the puzzle is the monoidal poset \(\mathcal{N} := \left(\mathbb{N} \cup \{\infty\}, \le, 1, \cdot\right)\). The \(\mathcal{N}\)-enriched category \(\mathcal{X}\) has objects the nodes of the graph, and the hom-object maps two nodes to the total number of paths between them. The two properties of the \(\mathcal{N}\)-enriched category are:

- \(1 \le \mathcal{X}(x, x)\), which says there is at least one path between a node and itself (the zero-length path).

- \(\mathcal{X}(x, y) \cdot \mathcal{X}(y, z) \le \mathcal{X}(x, z)\), which sets a lower bound on the number of paths between two nodes based on intermediate paths.

Does anyone know how to solve the second part of the puzzle, that is, how to change the monoidal poset to count paths of length \(n\) separately for each \(n\)?


Dan Oneata wrote:

> Does anyone know how to solve the second part of the puzzle, that is, how to change the monoidal poset to count paths of length \(n\) separately for each \(n\)?

Perhaps the right thing would be the monoidal poset with object set \(\mathbb{N}^\mathbb{N}\), infinite sequences of natural numbers, with the product partial order, and with monoidal operation given by convolution?

Edit: Actually, I think there's a problem with this if the graph contains cycles. Then if \(p: x\to y\), \(q: y\to y\), and \(r: y\to z\) are paths, we'd be double-counting the path \(p;q;r : x\to z\) both as the concatenation of \(p;q\) with \(r\) and as the concatenation of \(p\) with \(q; r\). This problem doesn't arise in the "all paths" version Dan describes, since the concatenation function \(\mathrm{Path}(x,y)\times\mathrm{Path}(y,z) \to \mathrm{Path}(x,z)\) is either injective or both sides are infinite.


I would like to flesh out the connection between finite pointed graphs and recursion relations a little more.

Suppose we have a recursion relation of the form $$a_{n+m}=\sum_{j=1}^m\beta_ja_{n+m-j}\qquad\text{for }\beta_j\in\mathbb{N},\,m>0,\,\beta_m\neq0.$$ I think there is a nice graph with the fewest possible nodes that yields this recurrence relation for its sequence that counts the number of paths of a given length.

This graph can be built up as follows. Start with a loop of length \(m\) with a distinguished point \(x_1\), name the other nodes \(x_2,...,x_{m}\). Add \(\beta_m-1\) arrows from \(x_m\) to \(x_1\), then add \(\beta_{m-1}\) arrows from \(x_{m-1}\) to \(x_1\), then add \(\beta_{m-2}\) arrows from \(x_{m-2}\) to \(x_1\), and so on until you add \(\beta_1\) arrows from \(x_1\) to itself. This process gives us a pointed graph unique up to arrow labelling.

No graph with fewer than \(m\) nodes can have a basic loop (in the sense of a loop not decomposable into loops of shorter length) of length \(m\), so if this graph satisfies the recurrence relation, it is a graph with the fewest possible nodes that does so. Following Dr. Baez's idea in [comment #4](https://forum.azimuthproject.org/discussion/comment/18795/#Comment_18795), a loop of length \(n+m\) comes from a loop of length \(n+m-1\) composed with one of the \(\beta_1\) loops of length 1, or from a loop of length \(n+m-2\) composed with one of the \(\beta_2\) basic loops of length 2, and so on, up to a loop of length \(n\) composed with one of the \(\beta_m\) loops of length \(m\). So, this graph produces the desired recursion relation.

Note that the opposite graph, the graph with the same nodes but with the arrows reversed, yields the same recurrence relation.

If \(\gcd(\{\beta_j\}_{1\leq j\leq m})=1\), then I think this graph (and its opposite) has the fewest possible arrows for a graph with \(m\) nodes that gives the above recurrence relation.
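The construction above can be checked numerically. A sketch (the encoding of the construction and all names are mine) that builds the adjacency matrix of this graph and verifies that the loop counts based at \(x_1\) satisfy the recurrence, here for \(\beta = (1,1,1)\):

```python
# Build the adjacency matrix of the graph described above: a length-m loop
# through x_1, ..., x_m, plus beta_j arrows from x_j back to x_1 (the one
# loop arrow x_m -> x_1 absorbs one of the beta_m arrows).

def adjacency(betas):
    m = len(betas)
    A = [[0] * m for _ in range(m)]
    for j in range(m):
        A[j][0] = betas[j]       # arrows from x_{j+1} into x_1
        if j + 1 < m:
            A[j][j + 1] = 1      # the long loop x_1 -> x_2 -> ... -> x_m
    return A

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

betas = [1, 1, 1]                # a Tribonacci-style recurrence
A = adjacency(betas)
m = len(betas)
P = [[int(i == j) for j in range(m)] for i in range(m)]  # identity
counts = [1]                     # one loop of length 0 based at x_1
for _ in range(8):
    P = matmul(P, A)
    counts.append(P[0][0])       # loops of length n based at x_1
print(counts)  # → [1, 1, 2, 4, 7, 13, 24, 44, 81], the Tribonacci numbers
```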


John Baez mused that my formula might be right after all! But it turns out that it can hardly be correct. As Christopher Upshaw noticed, the whole Ansatz is pretty weird. Maybe even weird enough to warrant further inspection for its own sake... Nevertheless,

I gave this formula:

\[ n = \sum_i^k a_i \cdot b_i \]

This superficially looks like it could be a special case of another formula, the one for matrix multiplication (matrix powers, so relevant):

\[ c_{ij} = \sum_k^n a_{ik}\cdot b_{kj} \]

And while I mentioned upthread how my formula undercounts, there's another problem, a weakness John had spotted right away:

> 1. how you are reducing the case of a general graph to the simpler sort of graph you're discussing

I had proposed to 'simply' look at all possible loops in the graph. But that is an infinite set! Put like this, it is actually the set of all paths of the kind we're looking for!

That's not what I had in mind; I thought that there'd be a finite number of loops until the whole graph would be covered. But, as the realization I mentioned suggests, finding all paths *is the problem*! Jeez #-D :_S (:-) Cheers!


This time, I want to approach the connection between graphs and recursion relations from the point of view of matrices. Given a recurrence relation with non-negative coefficients $$a_{n+m}=\sum_{j=1}^m\beta_ja_{n+m-j},$$ we can associate with it an \(m \times m\) matrix with non-negative entries: $$\left(\begin{array}{c}a_{n+m}\\a_{n+m-1}\\a_{n+m-2}\\\vdots\\a_{n+1}\end{array}\right)=\left(\begin{array}{cccc}\beta_1 & \beta_2&\cdots&\beta_m\\1&0&\cdots&0\\0&1&\cdots&0\\\vdots&\ddots&&\vdots\\0&\cdots&1&0\end{array}\right)\left(\begin{array}{c}a_{n+m-1}\\a_{n+m-2}\\a_{n+m-3}\\\vdots\\a_n\end{array}\right)$$ If we interpret this matrix as an adjacency matrix, its corresponding graph is the one I described in [comment #69](https://forum.azimuthproject.org/discussion/comment/18969/#Comment_18969).


David Lambert,

Great job David!

I believe this is exactly what John was driving at in his comment [#60](https://forum.azimuthproject.org/discussion/comment/18946/#Comment_18946).

I am not sure, but I suspect if we want to have different "initial conditions" rather than just the vector \(\langle 1,0,0,0,\cdots \rangle\) we need to have a two pointed graph. This is reflected in our answers to

**Puzzle 104**.

By the way, John, what are you using to create your beautiful graphs?


@OwenBiesel: Hmm, if we change it to be a three-argument function, i.e. take Hom(y,y) as well, we can actually get the right number. We just have to "divide" by Hom(y,y), and if we take the sequence \(1,0,0,\ldots\) (the unit for convolution) as the zero-length path, then reflexivity keeps us from having to divide by zero.

Now when I say divide, I really mean "deconvolve". Let \(P\), \(X\) be sequences such that \(X_0 = 1\,\text{and}\,P = \hat{P} \ast X\). Then, \[P_n= \sum_{k=0}^n \hat{P}_{n-k}*X_k\]

\[P_n= \hat{P}_n*{X_0} + \sum_{k=1}^{n} \hat{P}_{n-k}*{X_k} \]

\[\hat{P}_n = P_n - \sum_{k=1}^{n} \hat{P}_{n-k}*X_k\]

\[\hat{P}_n = P_n - \sum_{k=0}^{n-1} \hat{P}_{n-1-k}*X_{k+1}\]

\[\hat{P}_n = P_n - ((\hat{P}\circ(n \mapsto n-1)) \ast (X \circ (k \mapsto k+1)))_n\]

Call this mapping \((P,X)\mapsto \hat{P}\), \("\div"\).

Let's name this not-quite-an-enriched-category \(\mathscr{P}\), for "Paths", and use that as well for the function from pairs of objects to sequences. Suppressing which graph it is made from for now: \[ \text{at } B,\quad \mathscr{P}(A,B) \otimes \mathscr{P}(B,C) = (\mathscr{P}(A,B)\div\mathscr{P}(B,B))\ast\mathscr{P}(B,C) \] And then \[ \mathscr{P}(A,B) \otimes \mathscr{P}(B,C) \le \mathscr{P}(A,C) \]
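The recursion for \(\hat{P}\) can be sketched directly in a few lines of Python (my own illustration; `convolve` and `deconvolve` are invented names), assuming sequences are finite lists with \(X_0 = 1\):

```python
# Sketch of convolution and its inverse ("deconvolution") in the first
# argument, following P_hat[n] = P[n] - sum_{k=1}^{n} P_hat[n-k] * X[k].

def convolve(a, b):
    """Discrete convolution of two sequences, truncated to len(a) terms."""
    n = len(a)
    return [sum(a[j] * b[k - j] for j in range(k + 1) if k - j < len(b))
            for k in range(n)]

def deconvolve(P, X):
    """Recover P_hat from P = P_hat * X; requires X[0] == 1."""
    assert X[0] == 1
    P_hat = []
    for n in range(len(P)):
        s = sum(P_hat[n - k] * X[k] for k in range(1, n + 1) if k < len(X))
        P_hat.append(P[n] - s)
    return P_hat
```

Because \(X_0 = 1\), each \(\hat{P}_n\) is determined by earlier terms alone, which is exactly why the "division" never actually divides.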


I..I give up on the formatting.


Christopher, try these. The trick is to escape every asterisk and underscore.

\[P_n= \sum_{k=0}^n \hat{P}_{n-k}*X_k\]

\[P_n= \hat{P}_n*{X_0} + \sum_{k=1}^{n} \hat{P}_{n-k}*{X_k} \]

\[\hat{P}_n = P_n - \sum_{k=1}^{n} \hat{P}_{n-k}*X_k\]

\[\hat{P}_n = P_n - \sum_{k=0}^{n-1} \hat{P}_{n-1-k}*X_{k+1}\]

\[\hat{P}_n = P_n - ((\hat{P}\circ(n \mapsto n-1)) \ast (X \circ (k \mapsto k+1)))_n\]


Thank you!


Christopher Upshaw wrote:

> Let's name this not-quite-an-enriched-category \(\mathscr{P}\), for "Paths", and use that as well for the function from pairs of objects to sequences.

Nice! I see what you mean by it being not-quite an enriched category, since there's a different tensor product operation for composition at each object.

But what if we did something slightly different: what if the enrichment sends a pair of objects \(A,B\) not to the sequence of path lengths \(\mathscr{P}(A,B)\) but to the "reduced" sequence \(\mathscr{R}(A,B) = \mathscr{P}(A,B) \div \mathscr{P}(B,B)\)? I think this sequence counts something like "Paths from \(A\) to \(B\) of length \(n\) that don't meet \(B\) before the end of the path."

Regardless, then we'd have (assuming that if \(N\leq M\) then \(N\div P \leq M\div P\))

\[\mathscr{R}(A,B) \ast \mathscr{R}(B,C) = \mathscr{P}(A,B) \div \mathscr{P}(B,B) \ast \mathscr{P}(B,C) \div \mathscr{P}(C,C)\] \[ = (\mathscr{P}(A,B) \ast \mathscr{P}(B,C) \div \mathscr{P}(B,B)) \div \mathscr{P}(C,C) \] \[ \leq \mathscr{P}(A,C) \div \mathscr{P}(C,C)= \mathscr{R}(A,C).\]

Something tells me this can't work, though. If \(\mathscr{R}(A,B)\) has the interpretation I mentioned, then if there exist paths \(A\to B\) and \(B\to A\) we must have \(\mathscr{R}(A,B)\ast\mathscr{R}(B,A) \leq \mathscr{R}(A,A) = (1,0,0,\dots)\), which doesn't make sense.


In [comment #61](https://forum.azimuthproject.org/discussion/comment/18948/#Comment_18948) there are some ideas. I would like to know the names of these ideas.

We have a monoidal skeleton-category labeled \( \mathbf{Matrix}_{skel} \) and two categories \( \mathbf{Matrix}_{\mathbb{N}} \) and \( \mathbf{Matrix}_{multiset}\) .

Here are example objects of those categories.

$$ \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right) $$ and

$$ \left( \begin{array}{cc} \lbrace f \rbrace & \lbrace g \rbrace \\ \lbrace h \rbrace & \emptyset \end{array} \right) $$ The two categories differ in the type of their cells and the definitions of their monoidal operators but they are both similar to their skeleton in the same way.

What do we call these similarities?

@KeithEPeterson wanted to call them "matrix homomorphisms" but that is apparently wrong.

Edit: Thanks [Matthew Doty #81](https://forum.azimuthproject.org/discussion/comment/19020/#Comment_19020). I corrected the LaTeX and am studying what you posted. That looks like what I was seeking.


I think, though, if we just declare that \(1,0,\ldots\) is larger than anything else, it works. This is kind of a cheat, but...


Frederick Eisele wrote in [#79](https://forum.azimuthproject.org/discussion/comment/19011/#Comment_19011)

**TL;DR**: I would say you are looking at a *\(\mathbf{Rig}\)-homomorphism* \(\lVert \cdot \rVert : \mathbf{Multiset}_{\mathfrak{M}} \to \mathbb{N}\). You are also looking at a \(\mathbf{Rig}\)-endofunctor called \(\mathbf{Matrix}^{N \times N}\). The map \(U\) Keith shows us in [#61](https://forum.azimuthproject.org/discussion/comment/18948/#Comment_18948) is \(\mathbf{Matrix}^{N \times N}_{ \lVert \cdot \rVert }\). (I fixed up your LaTeX and noted that one of the sets of matrices was just over \(\mathbb{N}\), rather than \(\mathbb{R}\).)

\(\mathbf{Rig}\) is what John Baez calls the category of [*semirings*](https://en.wikipedia.org/wiki/Semiring). He has mentioned them elsewhere. One familiar rig is \(\langle \mathbb{N}, 0, +, 1, \cdot \rangle\).

If you have a monoid \(\mathfrak{M} = \langle S, I, ; \rangle\), we can construct a rig \(\mathbf{Multiset}_{\mathfrak{M}}\) of *finite multisets* of \(\mathfrak{M}\). I am following the ideas you gave, Frederick, in [#55](https://forum.azimuthproject.org/discussion/comment/18938/#Comment_18938), [#56](https://forum.azimuthproject.org/discussion/comment/18942/#Comment_18942) and [#59](https://forum.azimuthproject.org/discussion/comment/18945/#Comment_18945). The rig \(\mathbf{Multiset}_{\mathfrak{M}}\) is conceptually like \(\langle \mathbb{N}, 0, +, 1, \cdot \rangle\). It has the following differences:

- \(0\) is replaced with \(\emptyset\)
- \(+\) is replaced with \(\cup\)
- \(1\) is replaced with \(\{I\}\)
- \(\cdot\) is replaced with \(X \otimes Y := \lbrace x ; y \; : \; x \in X \text{ and } y \in Y \rbrace\)

Let \(\lVert \cdot \rVert : \mathbf{Multiset}_{\mathfrak{M}} \to \mathbb{N}\) measure the cardinality of a multiset. This is a rig-homomorphism between \(\mathbf{Multiset}_{\mathfrak{M}}\) and \(\mathbb{N}\).

If \(\mathfrak{R}\) is a rig, we can make a new rig \(\mathbf{Matrix}^{N \times N}_{\mathfrak{R}}\) of finite \(N \times N\) square matrices. Addition is defined to be element-wise addition, just like in linear algebra. Matrix multiplication is defined to be sums of products.

\(\mathbf{Matrix}^{N \times N}\) is also a functor. If there is a rig-homomorphism \(\phi : \mathfrak{R} \to \mathfrak{S}\), then \(\mathbf{Matrix}^{N \times N}_{\phi} : \mathbf{Matrix}^{N \times N}_{\mathfrak{R}} \to \mathbf{Matrix}^{N \times N}_{\mathfrak{S}}\) just acts element-wise, mapping \(a_{ij} \mapsto \phi(a_{ij})\).

**Example**: In the special case of \[ A^3 = \left( \begin{array}{cc} \lbrace f;f;f, \; g;h;f, \; f;g;h \rbrace & \lbrace f;f;g, \; g;h;g \rbrace \\ \lbrace h;f;f, \; h;g;h \rbrace & \lbrace h;f;g \rbrace \end{array} \right) \]

We can see what \(\mathbf{Matrix}^{2 \times 2}_{ \lVert \cdot \rVert }\) does, and how it is a rig-homomorphism:

\[ \begin{align} \mathbf{Matrix}^{2 \times 2}_{ \lVert \cdot \rVert }(A^3) & = \left( \begin{array}{cc} 3 & 2 \\ 2 & 1 \end{array} \right) \\ & = \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right)^3 \\ & = \left( \begin{array}{cc} \lVert\lbrace f \rbrace\rVert & \lVert\lbrace g \rbrace\rVert \\ \lVert\lbrace h \rbrace\rVert & \lVert \emptyset \rVert \end{array} \right)^3 \\ & = \left( \mathbf{Matrix}^{2 \times 2}_{ \lVert \cdot \rVert }(A) \right)^3 \\ \end{align} \]
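This computation can be checked mechanically. The following is my own sketch (the names `mmul_multiset`, `mmul_nat`, and `card` are invented): multisets are modeled as Python lists of path words, so the rig's \(+\) is list concatenation and \(\cdot\) is pairwise word concatenation, and `len` plays the role of \(\lVert \cdot \rVert\).

```python
# Sketch: the cardinality map commutes with matrix multiplication,
# i.e. card is applied entrywise and card(A^3) == card(A)^3.

def mmul_multiset(A, B):
    """Matrix product over the multiset rig (lists of path words)."""
    n = len(A)
    return [[[x + y for k in range(n) for x in A[i][k] for y in B[k][j]]
             for j in range(n)] for i in range(n)]

def mmul_nat(A, B):
    """Ordinary matrix product over the rig of natural numbers."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def card(A):
    """Apply the cardinality homomorphism ||.|| entrywise."""
    return [[len(cell) for cell in row] for row in A]

# The multiset matrix from the thread: edges f, g at node 0 and h at node 1.
A = [[['f'], ['g']],
     [['h'], []]]

A3 = mmul_multiset(mmul_multiset(A, A), A)   # lists the length-3 paths
U = card(A)                                  # [[1, 1], [1, 0]]
U3 = mmul_nat(mmul_nat(U, U), U)             # counts the length-3 paths
```

Running this, `card(A3)` and `U3` agree, entry by entry, with the \(\left(\begin{smallmatrix}3 & 2\\ 2 & 1\end{smallmatrix}\right)\) above.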


Thanks @MatthewDoty all is now clear.


Dan Oneata asked a question about counting paths of a given length in comment #67. As Matthew pointed out, this is the question I answered in [comment #60](https://forum.azimuthproject.org/discussion/comment/18946/#Comment_18946). But I answered it indirectly, in a way that would force people to think a lot and learn a lot. Let me be more direct!

Suppose we have a graph and we want to count the number of paths of length \(n\) from any node \(i\) to any node \(j\).

To do this, let \(U\) be the **incidence matrix**, where for any two nodes \(i\) and \(j\), the matrix entry \(U_{i j}\) is the number of edges from \(i\) to \(j\). Then the number of paths of length \(n\) from any node \(i\) to any node \(j\) is \( (U^n)_{i j} \).

For further explorations of this idea see Section 2.5.3 of the book, called "Matrix multiplication in a quantale".
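As a concrete check of this recipe (my own sketch, not from the book; `mat_mul` is an invented helper), here is the matrix from earlier in the thread, raised to the fifth power:

```python
# Sketch: U[i][j] = number of edges from node i to node j;
# (U^n)[i][j] then counts the paths of length n from i to j.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Node 0 has a loop and an edge to node 1; node 1 has an edge back to node 0.
U = [[1, 1],
     [1, 0]]

P = U
for _ in range(4):        # after the loop, P = U^5
    P = mat_mul(P, U)
```

The entries of \(U^n\) are Fibonacci numbers, so `P` comes out as `[[8, 5], [5, 3]]`: eight length-5 paths from node 0 back to itself, and so on.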


@John Thank you very much for the hints in [comment #83](https://forum.azimuthproject.org/discussion/comment/19032/#Comment_19032)! Unfortunately, I cannot see what choice of quantale \( \mathcal{V} = (V, \le, I, \otimes) \) allows us to compute the number of paths of length \( n \). The formula to multiply two \( \mathcal{V} \)-matrices \( M \) and \( N \) (equation 2.97 in the book) is: \[ (M * N)(x, z) := \bigvee_{y \in Y} M(x, y) \otimes N(y, z). \] It seems to me that in order to compute the number of paths of length \( n \) we need a quantale whose multiplication corresponds to matrix multiplication: \[ (M * N)(x, z) := \sum_{y \in Y} M(x, y) * N(y, z). \]


Dan - no choice of quantale lets us do that. A quantale is a very nice monoidal *preorder*, but we need a very nice monoidal *category*: the category \(\mathbf{Set}\).

\(\mathbf{Set}\) is not a quantale, but Equation 2.97 generalizes, and gives this:

\[ (M * N)(x, z) := \bigsqcup_{y \in Y} M(x, y) \times N(y, z). \]

where \(\bigsqcup\) means disjoint union of sets, and \(\times\) means Cartesian product of sets. \(\bigsqcup\), also known as 'coproduct', is a generalization of 'join'. \(\times\) is a special case of a monoidal structure \(\otimes\) in a monoidal category.

Remember, Fong and Spivak do the funny thing of first discussing categories enriched over preorders (for example quantales), then categories enriched over \(\mathbf{Set}\) (which we're discussing now - they're plain old categories), and finally categories enriched over arbitrary monoidal categories (which include both the previous examples).

Only at the final step will we be in the position to write down a matrix multiplication formula that specializes to handle both quantale-enriched categories and plain old categories!

And you will need to remind me to do this, if you want to see it, because Fong and Spivak say very little if anything about this.

However, you can already clearly see the analogy between the quantale-enriched formula

\[ (M * N)(x, z) := \bigvee_{y \in Y} M(x, y) \otimes N(y, z) \]

and the \(\mathbf{Set}\)-enriched formula

\[ (M * N)(x, z) := \bigsqcup_{y \in Y} M(x, y) \times N(y, z). \]
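To make the analogy tangible, here is a small sketch of my own (the name `gmm` is invented): one generic matrix-product loop parameterized by the pair (join, \(\otimes\)), specialized to both displayed formulas, with Python lists standing in for disjoint unions of hom-sets.

```python
# Sketch: a generic "matrix product" over any choice of (join, otimes).

def gmm(M, N, join, otimes):
    idx = range(len(M))
    return [[join([otimes(M[x][y], N[y][z]) for y in idx])
             for z in idx] for x in idx]

# Bool quantale: join = any (logical or), otimes = logical and.
G = [[False, True],
     [True, False]]
G2 = gmm(G, G, any, lambda a, b: a and b)   # two-step reachability

# Set-enriched: hom-"sets" modeled as lists of edge words;
# join = disjoint union (list concatenation), otimes = Cartesian product
# (here realized as pairwise concatenation of paths).
S = [[['f'], ['g']],
     [['h'], []]]
S2 = gmm(S, S, lambda cells: [p for c in cells for p in c],
         lambda a, b: [x + y for x in a for y in b])
```

The same `gmm` with `join = sum` and `otimes = operator.mul` would recover ordinary path-counting matrix multiplication, which is the point of the analogy.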


Thank you for the answer, John! It does make a lot of sense! I will try to remind you to discuss matrix multiplication for categories enriched over monoidal categories. Are there any available online resources on this topic?

I wonder, however, what is the precise connection between quantale matrix multiplication and quantale-enriched categories? Is it true that given a quantale \(\mathcal{V}\) and two \(\mathcal{V}\)-enriched categories \(\mathcal{X}\) and \(\mathcal{Y}\) then their matrix multiplication \(\mathcal{X}*\mathcal{Y}\) is also a \(\mathcal{V}\)-enriched category? (I see that in section 2.5.3 of the book the repeated matrix multiplication is used to obtain the desired quantale-enriched categories.)


Dan wrote:

> Are there any available online resources on this topic?

The relevant buzzword is [Day convolution](https://ncatlab.org/nlab/show/Day+convolution) but I'm afraid it will be a rather demanding project to extract the information you want from the much more general work that people have done on this.

> Is it true that given a quantale \(\mathcal{V}\) and two \(\mathcal{V}\)-enriched categories \(\mathcal{X}\) and \(\mathcal{Y}\) then their matrix multiplication \(\mathcal{X}\ast\mathcal{Y}\) is also a \(\mathcal{V}\)-enriched category?

That seems unlikely, but I could be mixed up. Take \(\mathcal{X}\) and \(\mathcal{Y}\) to be two \(\mathbf{Bool}\)-categories for example. These are preorders. How would you make \(\mathcal{X} \ast \mathcal{Y}\) into a \(\mathbf{Bool}\)-category again? Questions like this are usually not very subtle. There's usually either an obvious way to proceed, which usually works - or no obvious way, and no way at all. (Of course, "obvious" means "obvious after you have a clear mental picture of what's going on".)

> I see that in section 2.5.3 of the book the repeated matrix multiplication is used to obtain the desired quantale-enriched categories.

Yes, *repeated* matrix multiplication is key here. If \(\mathcal{X}\) is a \(\mathcal{V}\)-weighted graph then we build up a \(\mathcal{V}\)-category by iterated matrix multiplication.

The easiest way to understand this is to take \(\mathcal{V} = \mathbf{Bool}\). Then a \(\mathcal{V}\)-weighted graph \(\mathcal{X}\) is just a graph: for each pair of nodes you have an edge ("true") or not ("false"). Then we want to build up a preorder where \(x \le y\) iff there's a path from \(x\) to \(y\) in our graph.

\(\mathcal{X}\) has the paths of length 1, \(\mathcal{X} \ast \mathcal{X}\) has the paths of length 2, and so on. We need to go on forever, in a certain clever way, to get all paths!

And we need all paths to ensure that our preorder obeys the transitive law. For example, if there's a path of length 2 from \(x\) to \(y\) and a path of length 3 from \(y\) to \(z\), we know there's a path of length 5 from \(x\) to \(z\). But if we quit at \(\mathcal{X} \ast \mathcal{X} \ast \mathcal{X} \ast \mathcal{X}\) we wouldn't get that path.

This makes me feel it's unlikely that \(\mathcal{X} \ast \mathcal{Y}\) would be a preorder even if \(\mathcal{X}\) and \(\mathcal{Y}\) are already preorders. How are you going to get transitivity?

The book also discusses the case \(\mathcal{V} = \mathbf{Cost}\). That's much more fun! The idea is someone tells you the prices of all direct flights between cities, and you have to work out the cost of the cheapest route between cities, by looking through one-step flights, two-step flights, etc. Again we need to go on forever.
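The \(\mathbf{Cost}\) case can be sketched in a few lines (my own illustration, not from the book; `cost_mul` and `cheapest_routes` are invented names): join is \(\min\), \(\otimes\) is \(+\), `inf` means "no direct flight", and we iterate the matrix product until nothing changes, which captures routes of every length at once.

```python
# Sketch of Cost-enriched matrix multiplication, iterated to a fixed point.
INF = float('inf')

def cost_mul(M, N):
    """Min-plus matrix product: cheapest two-leg combination."""
    idx = range(len(M))
    return [[min(M[x][y] + N[y][z] for y in idx) for z in idx] for x in idx]

def cheapest_routes(M):
    """Cheapest route between every pair of cities, any number of legs."""
    n = len(M)
    # Cost 0 on the diagonal is the length-0 path from a city to itself.
    C = [[0 if i == j else M[i][j] for j in range(n)] for i in range(n)]
    while True:
        C_next = cost_mul(C, C)   # with 0 on the diagonal, C_next <= C
        if C_next == C:           # fixed point: all path lengths accounted for
            return C
        C = C_next

# Direct-flight prices between three cities (INF = no direct flight).
flights = [[INF, 3, INF],
           [INF, INF, 4],
           [INF, INF, INF]]
```

Putting \(0\) on the diagonal means each squaring doubles the maximum number of legs considered, so the loop stabilizes after logarithmically many rounds rather than literally "going on forever".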


John – thank you for your patience and for the very clear explanation! For some reason, I was missing the idea of a \(\mathcal{V}\)-weighted graph and the fact that we start the iteration from a \(\mathcal{V}\)-weighted graph and not from a \(\mathcal{V}\)-category. Now it all looks obvious in hindsight.


Dan - no problem! I discussed the project of building \(\mathcal{V}\)-categories from \(\mathcal{V}\)-weighted graphs in [Lecture 33](https://forum.azimuthproject.org/discussion/2192/lecture-33-chapter-2-tying-up-loose-ends/p1), but mostly in the special case \(\mathcal{V} = \mathbf{Cost}\).


@OwenBiesel I wonder if these two approaches to delooping extend to other things.
