
Lecture 68 - Chapter 4: Feedback in Collaborative Design

Last time we began studying feedback in co-design diagrams. This led us into a fascinating topic which we'll explore more deeply today: cups and caps. Ultimately it leads to the subject of 'compact closed' categories, which Fong and Spivak introduce in Section 4.5.1. Covering this material adequately will take longer than the three weeks I'd intended to spend on each chapter, but I think it's worth it.

Last time we saw that for each preorder \(X\) there's a feasibility relation called the cup

$$ \cup_X \colon X^{\text{op}} \times X \nrightarrow \textbf{1} $$ which we draw as follows:

[Image: the cup \(\cup_X\)]

To define the cup, we remembered that feasibility relations \(X^{\text{op}} \times X \nrightarrow \textbf{1}\) are monotone functions \( (X^{\text{op}} \times X)^\text{op} \times \textbf{1} \to \mathbf{Bool} \), and we defined \(\cup_X\) to be the composite

$$ (X^{\text{op}} \times X)^\text{op} \times \textbf{1} \stackrel{\sim}{\to} (X^{\text{op}} \times X)^\text{op} \stackrel{\sim}{\to} (X^{\text{op}})^\text{op} \times X^{\text{op}} \stackrel{\sim}{\to} X \times X^{\text{op}} \stackrel{\sim}{\to} X^{\text{op}} \times X \stackrel{\text{hom}}{\to} \textbf{Bool} $$ where all the arrows with little squiggles over them are isomorphisms - most of them discussed in Puzzles 213-215. In short, the cup is the hom-functor \(\text{hom} \colon X^{\text{op}} \times X \to \mathbf{Bool}\) in disguise!

The cup's partner is called the cap

$$ \cap_X \colon \textbf{1} \nrightarrow X \times X^{\text{op}} $$ and we draw it like this:

[Image: the cap \(\cap_X\)]

The cap is also the hom-functor in disguise! To define it, remember that feasibility relations \(\textbf{1} \nrightarrow X \times X^{\text{op}} \) are monotone functions \(\textbf{1}^{\text{op}} \times (X \times X^{\text{op}}) \to \mathbf{Bool}\). But \(\textbf{1}^{\text{op}} = \textbf{1}\), so we define the cap to be the composite

$$ \textbf{1}^{\text{op}} \times (X \times X^{\text{op}}) = \textbf{1}\times (X \times X^{\text{op}}) \stackrel{\sim}{\to} X \times X^{\text{op}} \stackrel{\sim}{\to} X^{\text{op}} \times X \stackrel{\text{hom}}{\to} \textbf{Bool} . $$ One great thing about the cup and cap is that they let us treat the edges in our co-design diagrams as flexible wires. In particular, they obey the snake equations, also known as the zig-zag identities. These say that we can pull taut a zig-zag of wire.
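(A quick aside before the snake equations: since everything here is valued in \(\mathbf{Bool}\), the cup and cap are easy to play with on a computer. Here is a minimal sketch in Python, using a toy preorder of my own choosing; unwinding the composites above gives \(\cup_X((z',z''),\bullet) = (z'' \le z')\) and \(\cap_X(\bullet,(z,z')) = (z' \le z)\).)

```python
# Toy preorder: X = {0, 1, 2} with the usual order, so hom_X(a, b) = (a <= b).
X = range(3)

def hom(a, b):
    return a <= b                 # hom_X : X^op x X -> Bool

def cup(z1, z2):
    """The cup u_X : X^op x X -|-> 1, evaluated at ((z1, z2), *)."""
    return hom(z2, z1)            # the hom-functor in disguise: z2 <= z1

def cap(z, z1):
    """The cap n_X : 1 -|-> X x X^op, evaluated at (*, (z, z1))."""
    return hom(z1, z)             # likewise: z1 <= z
```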

The first snake equation says

[Image: the first snake equation]

In other words,

$$ (1_X \times \cup_X) (\cap_X \times 1_X) = 1_X .$$ Please study the diagram and the corresponding equation very carefully to make sure you see how each part of one corresponds to a part of the other! And please ask questions if there's anything puzzling. It takes a while to get used to these things.

The second snake equation says

[Image: the second snake equation]

In other words,

$$ (\cup_X \times 1_{X^{\text{op}}}) (1_{X^{\text{op}}} \times \cap_X) = 1_{X^{\text{op}}} .$$ A great exercise, to make sure you understand what's going on, is to prove the snake equations. You just need to remember all the definitions, use them to compute the left-hand side of each identity, and show it equals the much simpler right-hand side.

Puzzle 217. Prove the snake equations.

In fact some of you have already started doing this!
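If you'd like to sanity-check your proof on a finite example, here is a small Python sketch continuing the toy encoding above (illustrative only, not part of the puzzle's intended solution). It evaluates the left-hand side of the first snake equation by profunctor composition, joining over the middle triple \((z, z', z'')\) with `any`, and compares the result with \(1_X = \text{hom}_X\):

```python
from itertools import product

X = range(3)                              # toy preorder 0 <= 1 <= 2
hom = lambda a, b: a <= b                 # the identity profunctor 1_X
cup = lambda z1, z2: hom(z2, z1)          # u_X((z1, z2), *)
cap = lambda z, z1: hom(z1, z)            # n_X(*, (z, z1))

# (1_X x u_X)(n_X x 1_X) evaluated at (x, y), joining over (z, z', z''):
def snake1(x, y):
    return any(cap(z, z1) and hom(x, z2)       # (n_X x 1_X)((*, x), (z, z', z''))
               and hom(z, y) and cup(z1, z2)   # (1_X x u_X)((z, z', z''), (y, *))
               for z, z1, z2 in product(X, repeat=3))

assert all(snake1(x, y) == hom(x, y) for x, y in product(X, repeat=2))
# The second snake equation can be checked the same way with all orders reversed.
```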

To read other lectures go here.

Comments

  • 1.

    (I think there's a typo in the first snake equation, John – surely it should be cup of \(X^\text{op}\) not cup of \(X\))

    (Also in the second one, shouldn't the arrows in the left hand diagram go the other way? otherwise it's just the first one, but "op".)

  • 2.

    (Also in the second one, shouldn't the arrows in the left hand diagram go the other way? otherwise it's just the first one, but "op".)

    I think that's the point.

  • 3.

    shouldn't the diagram on the right of the second snake equation be flipped around to \(X^{op}\)?

  • 4.

    hmm, I'd have thought the second snake equation should be an independent fact in its own right rather than merely the result of applying op to the first.

    I think it should look like this (apologies for the hand drawn picture!):

[Image: second-snake (https://i.imgur.com/sydXrFJ.jpg)]

  • 5.

    Michael wrote:

    shouldn't the diagram on the right of the second snake equation be flipped around to \(X^{\text{op}}\)?

    Yes, that was a cut-and-paste error... I was getting sleepy!

  • 6.
    edited July 31

    There are even worse mistakes in the pictures; I'll fix them pretty soon. There are certain arbitrary conventions in this game, but one should handle them consistently and I didn't. It's easy to get the conventions right when I'm writing things on paper and can look at all the diagrams at once, but it's harder when I'm drawing diagrams in LaTeX, using Irfanview to create png files from them, and writing a lecture in LaTeX on the forum, which I can only look at when it's done!

    Now you folks are exploring the various subtleties of this subject and seeing how tricky it is. I'll let you know when I fix this lecture; then everything should make sense!

  • 7.
    edited July 31

    Okay! I think I've got this lecture fixed, with consistent conventions.

    • The cup:

    $$ \cup_X \colon X^{\text{op}} \times X \nrightarrow \textbf{1} $$

[Image: the cup (http://math.ucr.edu/home/baez/mathematical/7_sketches/cup.png)]
    • The cap:

    $$ \cap_X \colon \textbf{1} \nrightarrow X \times X^{\text{op}} $$

[Image: the cap (http://math.ucr.edu/home/baez/mathematical/7_sketches/cap.png)]

These conventions are different from the book's, because in the book's discussion of compact closed categories they use a 'unit' which corresponds to a cup \( \cup_X \colon X \times X^{\text{op}} \nrightarrow \textbf{1} \) and a 'counit' which corresponds to a cap \( \cap_X \colon \textbf{1} \nrightarrow X^{\text{op}} \times X \). These are equally consistent conventions, but much less commonly used in the category theory literature, for good reasons I won't explain now, so I think it's better for us to use the more standard ones.

    Either way, the snake equations are then the two equations one can write down using \(\cap_X\) and \(\cup_X\). For me they are:

    • The first snake equation:

    $$ (1_X \times \cup_X) (\cap_X \times 1_X) = 1_X .$$

[Image: the first snake equation (http://math.ucr.edu/home/baez/mathematical/7_sketches/zigzag_1.png)]
    • The second snake equation:

    $$ (\cup_X \times 1_{X^{\text{op}}}) (1_{X^{\text{op}}} \times \cap_X) = 1_{X^{\text{op}}} .$$

[Image: the second snake equation (http://math.ucr.edu/home/baez/mathematical/7_sketches/zigzag_2.png)]
  • 8.
    edited July 31

    Don't we want to show something like,

    \[ (\hom^{op}(x,x) \times 1_X)(1_X \times \hom(x,x))\\ \cong (1^{op}_X \times 1_X)\hom(x,x)\\ \cong 1_X \]

    Or in pretty pictures,

[Image: https://i.imgur.com/2FlgfFQ.png]
  • 9.
    edited July 31

    Keith - the middle diagram in your trio of diagrams can't equal the other two, because its top and bottom look different from the other two. It shows a feasibility relation going from 'nothing' (namely \(\textbf{1}\)) to \( X \times X^{\text{op}}\) (or maybe \( X^{\text{op}} \times X\), depending on your conventions) while the other two show feasibility relations going from \(X\) to \(X\).

    Your equations don't make sense to me. Take \( \text{hom}^{\text{op}}(x,x) \times 1_X\), for example. \(1_X\) could be the name for a feasibility relation from \(X\) to \(X\), or it could be the name for a monotone function from \(X\) to \(X\). I don't know which since you didn't say which category these equations are describing morphisms in.

    I don't know what \(\text{hom}^{\text{op}}(x,x)\) means, or what particular value of \(x\) you're talking about. No matter what guess I make, I'm having trouble seeing what \(\text{hom}^{\text{op}}(x,x) \times 1_X\) is supposed to mean.

Hmm, maybe you are using \(\text{hom}^{\text{op}}(x,x)\) as a name for the hom-functor for \( X^{\text{op}}\). If that's what you mean, call it \(\text{hom}\), or better \( \text{hom}_{X^{\text{op}}}\) to indicate which preorder it's the hom-functor of.

    There's a hom-functor

    $$ \text{hom}_X : X^{\text{op}} \times X \to \mathbf{Bool}, $$ and similarly a hom-functor

    $$ \text{hom}_{X^{\text{op}}} : (X^{\text{op}})^{\text{op}} \times X^{\text{op}} \to \mathbf{Bool}, $$ or

    $$ \text{hom}_{X^{\text{op}}} : X \times X^{\text{op}} \to \mathbf{Bool} $$ for short.

  • 10.
    edited August 2

    First snake equation.

We can show that the diagrams for the snake equation are equal in terms of feasibility relations by expanding the left-hand side (LHS) and showing it equals the identity on \(X\).

$$ \left(1_X \times \cup_X \right)\left(\cap_X \times 1_X\right) = 1_X $$ Each of these defines a function from the product of its inputs and outputs to \(\mathbf{Bool}\).

    The first part is cap and the identity: \((\cap_X \times 1_X ) : 1 \times X \nrightarrow X \times X^\mathrm{op} \times X\) $$(\cap_X \times 1_X )((\bullet,x),(x',x^\ast,x'')) = (x^\ast \le x') \ \mathrm{and} \ (x \le x'') $$ where \(\bullet \in 1\).

Dually, we get \((1_X \times \cup_X ) : X \times X^\mathrm{op} \times X \nrightarrow X \times 1\) $$(1_X \times \cup_X )((x',x^\ast,x''),(x,\bullet)) = (x'' \le x^\ast) \ \mathrm{and} \ (x' \le x) $$ Composing, we get $$ (1_X \times \cup_X ) (\cap_X \times 1_X )((\bullet,x),(x,\bullet)) = x \le x'' \le x^\ast \le x' \le x $$ where the composite is true precisely when we can find some \(x'',x^\ast,x'\) in \(X\) making this chain hold, and this is always possible since we can set \(x = x' = x'' = x^\ast\).

    Therefore \((1_X \times \cup_X ) (\cap_X \times 1_X ) \simeq 1_X\). Equality holds if \(X \otimes 1 = X = 1 \otimes X\) because \((1_X \times \cup_X ) (\cap_X \times 1_X ) : 1\otimes X \nrightarrow X \otimes 1 \).

    Second snake equation

    Same as above, but with opposite preorders (i.e., turn the arrows around).

    $$ (1_{X^\mathrm{op}} \times \cap_{X^\mathrm{op}})((x,\bullet),(x',x^\ast,x'')) = x'' \le x^\ast \ \mathrm{and} \ x' \le x $$ and

    $$ (\cup_{X^\mathrm{op}} \times 1_{X^\mathrm{op}})((x',x^\ast,x''),(\bullet,x)) = x^\ast \le x' \ \mathrm{and} \ x \le x'' $$ Composing gives $$ (\cup_{X^\mathrm{op}} \times 1_{X^\mathrm{op}})(1_{X^\mathrm{op}} \times \cap_{X^\mathrm{op}})((x,\bullet),(\bullet,x)) = x \le x'' \le x^\ast \le x' \le x $$ for some choice of \(x'',x^\ast,x'\) in \(X\), but again this is always possible since we can choose \(x = x' = x'' = x^\ast\).

    This composite behaves the same as the identity on \(X^\mathrm{op}\), the dual object of \(X\).

    Edits: Changed the name of the element in 1 from 1 to \(\bullet\). Also see Yoav's comment since I only prove that the two profunctors are equal along their diagonal.

  • 11.

Scott, your snake equation proof looks good, except that you only show that the two profunctors are equal along the diagonal. That is, you only show that for all \(x\), $$ (1_X \times \cup_X ) (\cap_X \times 1_X )((1,x),(x,1)) = 1_X(x,x) $$ You need to show that for all \(x,\tilde{x}\), $$ (1_X \times \cup_X ) (\cap_X \times 1_X )((1,x),(\tilde{x},1)) = 1_X(x,\tilde{x}). $$ Replacing \(x\) by \(\tilde{x}\) in the appropriate places in your proof, you get that \((1_X \times \cup_X ) (\cap_X \times 1_X )((1,x),(\tilde{x},1))\) if and only if there exist \(x'', x^*, x'\) such that \(x\le x'' \le x^* \le x' \le \tilde{x}\), and of course this holds if and only if \(x\le\tilde{x}\), or in other words \(1_X(x,\tilde{x})\).

    On a stylistic note, I think 0 is better than 1 as the name for the only element in \(\mathbf{1}\), as John used in the last lecture. I've seen others here use a bullet or star symbol, which I think are also better than 1.

  • 12.
    edited August 2

    Thanks Yoav, I think I forgot the definition of the identity profunctor and was thinking it imposed equality between inputs and outputs, but of course that violates the definition of a profunctor.

    On a stylistic note, I think 0 is better than 1 as the name for the only element in 1, as John used in the last lecture. I've seen others here use a bullet or star symbol, which I think are also better than 1.

I couldn't remember and got lazy, so I didn't look up what we had used before. I'll change that.

  • 13.

    Had a couple of thoughts on how these string diagrams of profunctors actually work.

    My initial instinct was to think of a profunctor \(\Phi : X \nrightarrow Y\) as a "box", with a "wire" coming in on the left and another "wire" going out on the right:

[Image: https://i.imgur.com/9r8wboJ.png]

    These boxes can be connected together end-to-end, which corresponds to composing profunctors:

[Image: https://i.imgur.com/FZKUsVv.png]

They can also be stacked vertically, which corresponds to tensoring profunctors:

[Image: https://i.imgur.com/zUgpafv.png]

    (Of course we need to prove that "connecting" and "stacking" commute, i.e.

    $$(\Psi\circ\Phi)\otimes(\Psi'\circ\Phi') = (\Psi\otimes\Psi')\circ(\Phi\otimes\Phi'), $$ but that's pretty straightforward once you notice that \(\otimes\) distributes over arbitrary joins in a quantale.)
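For anyone who wants to see that interchange law concretely, here is an illustrative sketch of mine in Python for \(\mathcal{V} = \mathbf{Bool}\) on discrete preorders, where monotonicity is vacuous and a profunctor is just a dictionary of booleans. Composition is boolean matrix multiplication with \(\vee\) as sum and \(\wedge\) as product, and tensoring is a Kronecker-style product:

```python
import random
from itertools import product

def rand_prof(X, Y):
    """A random Bool-profunctor Phi : X -|-> Y, stored as a dict Phi[x, y]."""
    return {(x, y): random.random() < 0.5 for x in X for y in Y}

def compose(Phi, Psi, X, Y, Z):
    """Psi . Phi : X -|-> Z, joining (any) over the middle object Y."""
    return {(x, z): any(Phi[x, y] and Psi[y, z] for y in Y)
            for x in X for z in Z}

def tensor(Phi, Psi):
    """(Phi (x) Psi)[(x1, x2), (y1, y2)] = Phi[x1, y1] and Psi[x2, y2]."""
    return {((x1, x2), (y1, y2)): Phi[x1, y1] and Psi[x2, y2]
            for (x1, y1) in Phi for (x2, y2) in Psi}

A = B = C = range(2)           # discrete preorders
Phi, Psi, Phi2, Psi2 = (rand_prof(A, B), rand_prof(B, C),
                        rand_prof(A, B), rand_prof(B, C))
A2, B2, C2 = (list(product(S, S)) for S in (A, B, C))

lhs = tensor(compose(Phi, Psi, A, B, C), compose(Phi2, Psi2, A, B, C))
rhs = compose(tensor(Phi, Phi2), tensor(Psi, Psi2), A2, B2, C2)
assert lhs == rhs              # the interchange law, spot-checked
```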

    However, I'm now thinking this "box" intuition is slightly misleading. We've seen how a profunctor \(\Phi : X \nrightarrow Y\) can equally well be written as a profunctor \(Y^\text{op} \times X \nrightarrow \textbf{1}\), or \(\textbf{1} \nrightarrow Y \times X^\text{op}\), or \(Y^\text{op} \nrightarrow X^\text{op}\). Which suggests these boxes are all equally good representations of \(\Phi\):

[Image: https://i.imgur.com/yh6BAqG.png]

    So really it doesn't matter which "side" the wires go into or out of the box. All that matters is the direction, in or out.

    This suggests we ought to think of profunctors not as boxes, but rather as blobs with inputs and outputs.

[Image: https://i.imgur.com/RcWbzjT.png]

    Composition means connecting the output of one blob to the input of another. Tensoring means gathering together a bunch of blobs into one:

[Image: https://i.imgur.com/AYGEndl.png]

    But the "horizontal v vertical" or "left to right" aspects of string diagrams – at least for profunctors – turn out to be irrelevant.

  • 14.

@Anindya Thank you for the "blob diagrams"! They make a lot of sense to me. And it seems in their case there is no need to introduce the special feedback feasibility relations – the cap and cup profunctors are the identity blob, with \(X\) as both input and output.

  • 15.

Yes – just to clarify: all of the above assumes that we've proved all the snake equations. These basically tell us that it doesn't matter which "side of box" wires enter or exit from, and that our wires are "floppy" in that we can introduce or remove snakey bends at will.

  • 16.

Does that reduce the string diagrams to directed graphs?

  • 17.

Thing is, directed graphs have loops in them, but I'm a bit unsure what it would mean to plug the output of a blob directly into one of its inputs. For instance suppose you had \(\Phi : X \nrightarrow Y\) and \(\Psi : Y \nrightarrow Z\) – you could tensor them to give \(\Phi \otimes \Psi : X \times Y \nrightarrow Y \times Z\) – and maybe if you "loop" by plugging the output \(Y\) into the input, you'd get plain old \(\Psi\circ\Phi : X \nrightarrow Z\)? But I'm just speculating here.

  • 18.
    edited August 2

I think that's right, looping an output into an input works like this: Let \(f : X \otimes Y \nrightarrow Z \otimes Y\). Tensoring and composing with cap, cup, and identity profunctors allows you to feed the \(Y\) output into the \(Y\) input, and get a new profunctor via the composition rule. So if \(f^\star : X \nrightarrow Z\) is that profunctor, then its definition is

$$f^\star(x,z) = \begin{cases} f((x,y),(z,y')) & \mathrm{if} \ y \le y' \ \text{for some} \ y,y' \in Y \\ \mathrm{false} & \mathrm{otherwise} \end{cases}$$ So basically, unless \(f\) requires less \(Y\) than it produces, you don't get a non-trivial profunctor \(f^\star:X \nrightarrow Z\). For any pair \((x,z)\), \(f^\star\) would return false. If \(f\) does require less \(Y\) than it produces, then it reduces to asking whether you can get \(x\) given \(z\), so you can simply forget about \(Y\) inputs and outputs.
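Here's a quick illustrative rendering of this recipe for the Bool case (my sketch; the function names, the toy machine, and the direction of the inequality are all just assumptions for the example):

```python
def trace(f, X, Y, Z):
    """Loop the Y output of a Bool-profunctor f : X x Y -|-> Z x Y back to its Y input.

    f is given as a function ((x, y), (z, y2)) -> bool; the result is
    f_star : X -|-> Z with f_star(x, z) = join over y <= y2 of f((x, y), (z, y2)).
    """
    return lambda x, z: any(f((x, y), (z, y2)) and y <= y2
                            for y in Y for y2 in Y)

Y = range(2)
f = lambda src, tgt: src[1] >= tgt[1]     # hypothetical machine: feasible iff it emits no more Y than it gets
f_star = trace(f, range(2), Y, range(2))
print(f_star(0, 0))                       # True: take y = y2
```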

  • 19.

One way to think about this \(f\) is to imagine it represents a machine or process that turns Z into X. So given \(z\) amount of Z, \(f\) answers whether or not we can make \(x\) amount of X. The machine requires electricity (Y), but it also happens to produce electricity.

    Thus, \(f\) depends on whether or not we have enough electricity. But if the machine makes less electricity than it needs (which must be true if the 2nd law of thermodynamics is true), then we cannot make X given Z for any amounts of X and Z. We need an external supply of energy.

Alternatively, if the machine makes more electricity than it needs, we don't need to specify how much electricity is supplied as an input or demanded as an output. The machine will run regardless, as long as there's enough Z to make X. Maybe Y is money instead of electricity if you want thermodynamics to hold, and the machine costs \(y'\) to run but tourists pay \(y\) to watch it run (a contrived example to be sure).

  • 20.
    edited August 2

I think the blob interpretation, where the bends can be removed and all the strings can be combed straight with the arrows pointing in the same direction, is incorrect.

    Consider what I'm going to call spider profunctors,

\[ S: X_1 \otimes X_2 \otimes X_3 \otimes \cdots \otimes X_n \nrightarrow Y_1 \otimes Y_2 \otimes Y_3 \otimes \cdots \otimes Y_m \]

    I conjecture that there exists an arrangement of spiders and loops that can't be undone and "combed straight" as it were.

  • 21.
    edited August 2

    Actually, thinking about it, I don't think I need loops to prove it.

    This spider can't be made into a blob where all the arrows point in the same direction,

    \[ X^{op} \otimes Y \nrightarrow Z^{op} \otimes W. \]

    Edit: Therefore the assumption that,

    "horizontal v vertical" or "left to right" aspects of string diagrams – at least for profunctors – turn out to be irrelevant.

    is incorrect. Directionality cannot be ignored and made irrelevant under topological moves.

  • 22.

However, if we add a swapping profunctor,

    \[ \sigma : A \otimes B \nrightarrow B \otimes A \]

then every profunctor is isomorphic to one whose string diagram has all the arrows pointing in the same direction.

  • 23.

    @Keith – surely we already have an isomorphism of \(\mathcal{V}\)-categories \(X \times Y \cong Y \times X\) given by \((x, y) \mapsto (y, x)\). We can check that this is in fact a \(\mathcal{V}\)-functor:

    \[ (Y \times X)((y, x), (y', x')) = Y(y, y') \otimes X(x, x') = X(x, x') \otimes Y(y, y') = (X \times Y)((x, y), (x', y')) \]

    NB we need \(\otimes\) to be symmetric in order to define the product \(\mathcal{V}\)-category \(X \times Y\) in the first place.

  • 24.

Thought I'd update/fix my previous attempt at proving the first snake equation given the conventions we've settled on for cap and cup. I've also restricted it to the case where \(\mathcal{V} = \textbf{Bool}\), although the proof works in the general case too.

The first snake equation concerns the following row of profunctors:

    $$ X \cong \textbf{1} \times X \nrightarrow (X \times X^\text{op}) \times X \cong X \times (X^\text{op} \times X) \nrightarrow X \times \textbf{1} \cong X $$ Given any objects \(x, y\) in \(X\) the value of the composite profunctor at \((x, y)\) is

    $$ \bigvee\big(\Phi((\bullet, x), (z, z', z'')) \otimes \Psi((z, z', z''), (y, \bullet))\big) $$ where we're joining over triples \((z, z', z'')\) in \(X \times X^\text{op} \times X\) and

    $$ \Phi((\bullet, x), (z, z', z'')) = \cap_X(\bullet, (z, z')) \wedge \text{1}_X(x, z'') = (z' \leq z) \wedge (x \leq z'') $$ $$ \Psi((z, z', z''), (y, \bullet)) = \text{1} _X(z, y) \wedge \cup_X((z', z''), \bullet) = (z \leq y) \wedge (z'' \leq z') $$ So our join is

    $$ \bigvee\big((z' \leq z) \wedge (x \leq z'') \wedge (z \leq y) \wedge (z'' \leq z')\big) $$ Reorder the terms to get

    $$ \bigvee\big((x \leq z'') \wedge (z'' \leq z') \wedge (z' \leq z) \wedge (z \leq y) \big) $$ Now if this join is true, then the "summand" must be true for some triple \((z, z', z'')\), and that implies \((x \leq y)\).

    Conversely, if \((x \leq y)\) then the "summand" is true for the triple \((x, x, x)\), so the join is true.

    Hence the join is true iff \((x \leq y)\), so the composite profunctor equals the identity profunctor. QED.

  • 25.
    edited August 3

@John - I was looking at your old Network Theory III notes (http://math.ucr.edu/home/baez/networks_oxford/networks_entropy.pdf) and I saw a familiar pattern:

Given finite sets \(X\) and \(Y\), a stochastic map \(f : X \rightsquigarrow Y\) assigns a real number \(f_{yx}\) to each pair \(x \in X\), \(y \in Y\) in such a way that for any \(x\), the numbers \(f_{yx}\) form a probability distribution on \(Y\). We call \(f_{yx}\) the probability of \(y\) given \(x\).

    So, we demand:

• \(f_{yx} \geq 0\) for all \(x \in X\), \(y \in Y\)
    • \(\displaystyle{\sum_{y\in Y} f_{yx} = 1} \) for all \(x \in X\)

    We can compose stochastic maps \(f : X \rightsquigarrow Y\) and \(g : Y \rightsquigarrow Z\) by matrix multiplication:

$$ (g \circ f )_{zx} = \sum_{y\in Y} g_{zy} f_{yx}$$ and get a stochastic map \(g \circ f : X \rightsquigarrow Z\).

    We let \(\mathtt{FinStoch}\) be the category with

    • finite sets as objects,
    • stochastic maps \(f : X \rightsquigarrow Y\) as morphisms.

    It is perhaps well beyond the scope of this course, but is \(\mathtt{FinStoch}\) a \(\mathcal{V}\)-enriched profunctor category? It doesn't seem like it is because \(\sum\) is not the same as \(\bigvee\). Is it some kind of related profunctor category?
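As an aside, the quoted composition rule is easy to play with numerically; here's a made-up two-state example (numbers invented purely for illustration) checking that the composite of stochastic maps is again stochastic:

```python
import numpy as np

# Columns index inputs x, rows index outputs y; each column sums to 1.
f = np.array([[0.9, 0.2],        # f : X ~> Y
              [0.1, 0.8]])
g = np.array([[0.5, 0.3],        # g : Y ~> Z
              [0.5, 0.7]])

gf = g @ f                       # (g o f)_{zx} = sum_y g_{zy} f_{yx}
assert np.allclose(gf.sum(axis=0), 1.0)   # still a stochastic map
```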

  • 26.
    edited August 3

    Anindya wrote:

    Composition means connecting the output of one blob to the input of another. Tensoring means gathering together a bunch of blobs into one:

[Image: https://i.imgur.com/AYGEndl.png]

RIGHT!!! [thumbs-up]

    This is an incredibly important point. I've been slowly leading up to it, but I probably should have just come out and said it.

    But the "horizontal vs vertical" or "left to right" aspects of string diagrams – at least for profunctors – turn out to be irrelevant.

    Sort of. We have a great deal of flexibility in manipulating our diagrams for profunctors: for example we can take both sides of an equation and 'turn them around' using caps and cups, turning inputs into outputs and vice versa, and get another true equation. And yet, we do need to be careful about whether we're talking about \(\Phi \colon X \nrightarrow Y\) or \(\Phi \colon Y \nrightarrow X\), because this affects which formulas involving \(\Phi\) will make sense, or be true. And ironically, it matters most of all when \(X = Y\)!

  • 27.
    edited August 3

    Matthew wrote:

    It is perhaps well beyond the scope of this course, but is \(\mathtt{FinStoch}\) a \(\mathcal{V}\)-enriched profunctor category? It doesn't seem like it is because \(\sum\) is not the same as \(\bigvee\). Is it some kind of related profunctor category?

    Very interesting question. I don't think it's an enriched profunctor category; I think both this category and enriched profunctor categories are special cases of a more general sort of category.

    \(\mathtt{FinStoch}\) is a subcategory of a simpler category called \(\mathtt{Mat}(\mathbb{R})\), where

    • objects are finite sets,

    • a morphism \(f \colon X \to Y\) is a 'matrix', namely a function \(f \colon Y \times X \to \mathbb{R}\),

    • composition is done via matrix multiplication: we compose morphisms \(f : X \to Y\) and \(g : Y \to Z\) to get \(g \circ f : X \to Z\) as follows:

$$ (g \circ f )_{zx} = \sum_{y\in Y} g_{zy} f_{yx} $$ This category is equivalent (but not isomorphic) to the category of finite-dimensional real vector spaces.

In fact we can define a category \(\mathtt{Mat}(R)\) in the exact same way whenever \(R\) is any ring, or even any rig! A great example is \( R = [0,\infty) \) with the usual notions of \(+\) and \(\times\). \(\mathtt{FinStoch}\) is actually a subcategory of \(\mathtt{Mat}([0,\infty))\).

    You might enjoy this exploration of \(\mathtt{FinStoch}\), \(\mathtt{Mat}([0,\infty))\) and related categories:

• John Baez, Relative entropy (part 1) (https://johncarlosbaez.wordpress.com/2013/06/20/relative-entropy-part-1/), Azimuth, 20 June 2013.

This is the beginning of a long story, continued in other blog articles in that series.

    But notice that any quantale gives a rig with \(+ = \vee\) and \(\times = \otimes\). So we can also create categories \(\mathtt{Mat}(\mathcal{V})\) when \(\mathcal{V}\) is a quantale! And in this case there's no need to limit ourselves to finite sets as objects! Infinite joins are well-defined, so we can say

$$ (g \circ f )_{zx} = \bigvee_{y\in Y} g_{zy} \otimes f_{yx}$$ even when \(Y\) is infinite!

    So we get a category where morphisms are possibly infinite-sized matrices with entries in a quantale \(\mathcal{V}\). But these are not the categories of \(\mathcal{V}\)-enriched profunctors - do you see why? See how they're related?

    So there must be some 'least common generalization' that includes everything I've been talking about, but I'm not sure what it is.
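One way to see the shared pattern in code (an illustrative sketch; the rig is passed in as a pair of operations, and all names here are mine): matrix composition parameterized by a rig's addition and multiplication, instantiated once for \((\mathbb{R}, +, \times)\) and once for the boolean quantale \((\vee, \wedge)\).

```python
from functools import reduce

def mat_compose(g, f, plus, times, zero):
    """Compose 'matrices' over a rig: (g o f)[z][x] = plus over y of times(g[z][y], f[y][x])."""
    return [[reduce(plus, (times(g_row[y], f[y][x]) for y in range(len(f))), zero)
             for x in range(len(f[0]))]
            for g_row in g]

# Mat(R): ordinary matrix multiplication over the reals.
print(mat_compose([[1, 2]], [[3], [4]],
                  lambda a, b: a + b, lambda a, b: a * b, 0))          # [[11]]

# Mat(Bool): profunctor-style composition over the boolean quantale.
print(mat_compose([[True, False]], [[False], [True]],
                  lambda a, b: a or b, lambda a, b: a and b, False))   # [[False]]
```

Roughly speaking, the enriched-profunctor case adds one ingredient this sketch lacks: the entries must also be compatible with the hom-matrices of the preorders at either end.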

  • 28.

    Very nice answer to Puzzle 217 in comment #10, Scott!

    Therefore \((1_X \times \cup_X ) (\cap_X \times 1_X ) \simeq 1_X\). Equality holds if \(X \otimes 1 = X = 1 \otimes X\) because \((1_X \times \cup_X ) (\cap_X \times 1_X ) : 1\otimes X \nrightarrow X \otimes 1 \).

    Hmm, this is an interesting nuance.

    You're right, the claimed equation

$$ (1_X \times \cup_X ) (\cap_X \times 1_X ) = 1_X $$ is not really right, because the right-hand side is a profunctor from \(X\) to \(X\), while the left-hand side is really a profunctor from \(\mathbf{1} \times X\) to \(X \times \mathbf{1}\).

    Luckily, there are god-given isomorphisms called the left unitor

    $$ \lambda_X \colon \mathbf{1} \times X \nrightarrow X $$ and right unitor

$$ \rho_X \colon X \times \mathbf{1} \nrightarrow X .$$ (These actually begin life as monotone functions with monotone inverses, but we can convert them into enriched profunctors by taking companions.)

    So, a more precise way to state the claimed equation is

$$ \rho_X (1_X \times \cup_X ) (\cap_X \times 1_X ) \lambda_X^{-1} = 1_X .$$ More precise, but not completely precise: there's an even subtler subtlety! The morphism \(\cap_X \times 1_X\) has target \( (X \times X^{\text{op}}) \times X\), while the morphism \(1_X \times \cup_X \) has source \( X \times (X^{\text{op}} \times X)\). So, you can only compose them if you stick the obvious isomorphism from \( (X \times X^{\text{op}}) \times X\) to \( X \times (X^{\text{op}} \times X)\) between them! This is an example of an associator

    $$ \alpha_{X,Y,Z} \colon (X \times Y) \times Z \nrightarrow X \times (Y \times Z) .$$ All this may be seen as overly fussy by some people. But anyway, the left unitor, right unitor and associator are three main features of a monoidal category, and any category of \(\mathcal{V}\)-enriched profunctors is a monoidal category!

    Soon I will talk about these... and you can read about them in Section 4.4.3 of the book, though you won't see the precise definition.

    I hadn't remembered that we already need to think a bit about the unitors and associators to be completely precise about the snake equations. Many less careful people are willing to pretend these are identity morphisms and thus ignorable. This is actually justified by 'Mac Lane's coherence theorem'.
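Assembled in full (just transcribing the pieces above, nothing beyond them), the completely precise form of the first snake equation would then read

$$ \rho_X \, (1_X \times \cup_X) \, \alpha_{X,\, X^{\text{op}},\, X} \, (\cap_X \times 1_X) \, \lambda_X^{-1} = 1_X . $$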

  • 29.

    @John,

The subtlety tripped me up a little at first since you'd written equality in the original post, but the type signatures don't work out to be the same, only isomorphic! Of course, sometimes we have equality and sometimes we only have isomorphism.

Of course, in modeling the real world this distinction sometimes doesn't matter very much. Case in point: our feasibility relations.

  • 30.

    Hi Matthew,

Your question above (https://forum.azimuthproject.org/discussion/comment/20463/#Comment_20463) is quite intriguing; I don't have an answer, but I wanted to share what I've found up to now.

Let's fix a field \(\mathbb{K}\) for the rest. One can speak of the category FdVectWB of finite dimensional vector spaces over \(\mathbb{K}\) with an ordered choice of basis for each. It's a monoidal category with the classic tensor product of vector spaces. It is equivalent to the monoidal category with natural numbers as objects, \(\mathbb{K}\)-valued matrices composed by multiplication, and the Kronecker matrix product as monoidal product, as in, say, http://researchers.ms.unimelb.edu.au/~iain/tohoku/HigherCategoriesStringsCubes.pdf. We use that equivalence to do actual linear algebra calculations. Without the monoidal structure, we can only factorize a linear map as a sequence of matrices we multiply. With tensoring we can do a finer anatomy of morphisms by finding an exploded string diagram. This was made explicit by Penrose (http://homepages.math.uic.edu/~kauffman/Penrose.pdf) and put in our categorical context in http://arxiv.org/abs/1308.3586v1. In this view, boxes in the diagram are linear maps from the tensor product of the input vector spaces to the tensor product of the output vector spaces.

To calculate, one views boxes with \(i\) inputs and \(o\) outputs as rank-\(i+o\) tensors with \(i\) lower indices and \(o\) upper indices, what in software we'd call \(i+o\)-dimensional matrices, and wires between boxes express tensor contraction for a common index, lower in one and upper in the other.

In differential geometry and physics the upper and lower character of tensor indices expresses coordinate change phenomena. In those contexts one thinks of an (n-covariant, m-contravariant)-tensor as a multilinear map \(T:V_1 \times V_2 \times \cdots \times V_n \times W_1^* \times W_2^* \times \cdots \times W_m^* \to \mathbb{K}\).

For vectors (rank-1 tensors), a transformation from one basis to another provokes an inverse transformation of the components of the vector (contravariance). For covectors in the dual space, the components just undergo the same transformation (as in https://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors). For higher-rank tensors, it is the indices that carry the variance (co/contra), and the tensor components are transformed so the whole multilinear form remains invariant.

In this mindset an arrow \(f:V \to W\) in FdVectWB can be thought of as a 1-covariant, 1-contravariant tensor with components \(F_v^w\), which takes a vector of \(V\) and a covector of \(W^*\) and produces a scalar, so it is of type

    \[F:V \times W^* \to \mathbb{K}\]

This has a striking formal similarity with the definition of profunctor. I think that what is happening here is that profunctors are categorified (!) analogues of linear maps under a suitable notion of categorified linear algebra. The explanations I'm struggling to follow are from nCafé host Urs Schreiber. In typically terse nLab style the idea is here: https://ncatlab.org/nlab/show/2-vector+space#AbstractApproach. There profunctors appear as morphisms of a bicategory of "2-vector spaces with basis". This comment in the nCafé (https://golem.ph.utexas.edu/category/2008/10/what_is_categorification.html#c020600) hints at how ordinary vector spaces would "sit" in that bicategory. Then this comment in the nForum (https://nforum.ncatlab.org/discussion/1122/geometry/?Focus=9284#Comment_9284) expands it a bit. Despite that, I'm not really getting how vector spaces sit in Urs's 2-vector spaces with basis. Hope this makes sense.

  • 31.

    Yes, profunctors are categorified linear algebra! I've said a few times that composing profunctors is just like matrix multiplication. But let's state it more boldly:

    Profunctor theory is to category theory as linear algebra is to set theory!

    and this is why profunctors are so important. Also, the Yoneda embedding is like the embedding of a set into the vector space having that set as its basis!

    It's not necessary to think about 2-vector spaces to understand these ideas.
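    To see the slogan in action, here is a minimal sketch in Haskell (my own illustration, not from the thread; the names `Prof`, `compose` and `leq` are mine) of composing \(\mathbf{Bool}\)-enriched profunctors between finite preorders as matrix multiplication over the Boolean semiring, with "or" as addition and "and" as multiplication:

        -- A Bool-valued profunctor between preorders with element types x and y.
        type Prof x y = x -> y -> Bool

        -- Composition: Boolean matrix multiplication, "summing" (or-ing)
        -- over an enumeration of the middle preorder's elements.
        compose :: [y] -> Prof x y -> Prof y z -> Prof x z
        compose ys phi psi x z = or [phi x y && psi y z | y <- ys]

        -- The identity profunctor on a preorder is its hom; here, <= on Int.
        leq :: Prof Int Int
        leq = (<=)

    For instance, `compose [0..5] leq leq 1 4` is `True`, since the enumeration contains some middle element \(y\) with \(1 \le y\) and \(y \le 4\).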

  • 32.
    edited August 21

    So is linear algebra something like the theory of profunctors for numbers?

    Edit: Actually, can you make this relation,

    Profunctor theory is to category theory as linear algebra is to set theory!

    more precise?

  • 33.

    I think a \(\mathcal{V}\)-enriched profunctor between discrete preorders is a matrix with entries in \(\mathcal{V}\). I think.
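    Spelling that out (my gloss, assuming \(\mathcal{V}\) is a commutative quantale): between discrete preorders the monotonicity conditions are vacuous, so a \(\mathcal{V}\)-profunctor \(\Phi \colon X \nrightarrow Y\) is just a function \(X \times Y \to \mathcal{V}\), an \(X\)-by-\(Y\) matrix, and composing with \(\Psi \colon Y \nrightarrow Z\) is matrix multiplication in \(\mathcal{V}\), with join as sum and \(\otimes\) as product:

    $$ (\Phi\Psi)(x,z) = \bigvee_{y \in Y} \Phi(x,y) \otimes \Psi(y,z) . $$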

  • 34.
    edited August 26

    Part 1/2

    Sorry for prolonging the excursus, but I wanted to take another stab at the profunctors vs. linear algebra analogy, trying to tie together what John has said. It's fun and profitable to pursue this; for me it's still work in progress. The gist of the idea is to compare the vector space monad coming from the free–forgetful adjunction relating sets and vector spaces with something monad-like ensuing from the process of \(\mathcal{V}\)-enriched free cocompletion, relating \(\mathcal{V}\)-enriched categories and \(\mathcal{V}\)-enrichedly cocomplete categories.

    The setup of the free–forgetful adjunction for sets and vector spaces is in Example 2.4 [here](http://math.uchicago.edu/~may/REU2012/REUPapers/Sankar.pdf). Assuming a field \(\mathbb{K}\), the free vector space functor \(F\) takes a set \(S\) and produces the vector space with basis \(S\), made of formal linear combinations, or equivalently, finitely supported functions in \(\mathbb{K}^S\). The forgetful functor \(U\) keeps the underlying set of formal linear combinations but forgets the vector space structure.

    The adjunction produces a monad on \(\mathbf{Set}\), \(T := U \circ F\). E. Riehl describes the unit and multiplication in Example 5.1.4(iii) of her book (ISBN 048680903X). Exercise 4.5.10 [here](https://math.feld.cvut.cz/ftp/velebil/downloads/cats-tacl-2017-notes.pdf) describes the Kleisli category as having sets as objects and \(\mathbb{K}\)-valued matrices as arrows.

    In this monad there is a bit of "degeneracy", in that the Kleisli and EM-categories are equivalent, which needn't be the case in general.

    From Riehl's book, Section 5.2:

    The upshot of Lemma 5.2.13 is that the Kleisli category for a monad embeds as the full subcategory of free T-algebras and all maps between such. Lemma 5.2.13 also tells us precisely when the Kleisli and Eilenberg–Moore categories are equivalent: this is the case when all algebras are free. For instance, all vector spaces are free modules over any chosen basis, so the Kleisli and Eilenberg–Moore categories for the free vector space monad are equivalent.

    Paul Taylor says:

    \(\mathbf{Vsp}\) (i.e. \(\mathbf{Vect}\)) is the Eilenberg-Moore category for the monad on Set induced by the adjunction in Example [7.1.4(b)](http://www.cs.man.ac.uk/~pt/Practical-Foundations/html/s71.html#r7.1.4) (our case). The Kleisli category consists of those vector spaces that have bases (which is all of them, given the axiom of choice).

    If we drop Choice, for instance for constructivist reasons, that destroys the equivalence: the EM-category assumes the role of the general category of vector spaces, and the Kleisli category that of the spaces with a basis.

    In particular, since everything has a basis, a map in the Kleisli category just says where the basis vectors of the source space are sent in the target. Since linear maps preserve linear combinations, that determines the whole map between spaces. That's a case of the [extension operator](https://en.wikipedia.org/wiki/Kleisli_category#Extension_operators_and_Kleisli_triples) in general monad theory.
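    Here is a minimal Haskell sketch of this monad (my own, with like terms left uncollected for brevity; `Vect`, `unit` and `bind` are my names): a vector is a formal linear combination stored as an association list, `unit` is the insertion of generators, and a Kleisli arrow `a -> Vect k b` assigns a list of coefficients to each basis element of the source, i.e. it is exactly a matrix:

        -- Formal k-linear combinations of basis elements of type a.
        newtype Vect k a = Vect [(a, k)] deriving Show

        -- Unit of the monad (insertion of generators): x |-> 1*x.
        unit :: Num k => a -> Vect k a
        unit x = Vect [(x, 1)]

        -- Kleisli extension: a map defined on basis vectors extends
        -- linearly to the whole space, distributing coefficients.
        bind :: Num k => Vect k a -> (a -> Vect k b) -> Vect k b
        bind (Vect v) f =
          Vect [ (y, c * d) | (x, c) <- v, let Vect w = f x, (y, d) <- w ]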

  • 35.
    edited August 28

    Part 2/2

    Now, on the profunctor side of things, John starts with a [teaser](https://forum.azimuthproject.org/discussion/comment/20164/#Comment_20164):

    Sums, existential quantifiers, colimits and coends are all special cases of 'weighted colimits'. Basically they're all ways to think about addition.

    And [here](https://forum.azimuthproject.org/discussion/comment/20274/#Comment_20274) drops the bomb:

    Colimits are analogous to linear combinations, so colimit-preserving functors are like linear maps.

    I think he means weighted colimits here, but wants to help us join the dots with Yoneda embedding theory.

    We can recall how the rationals sit inside the real numbers, with the reals defined as limits of sequences of rationals. The reals are the (Cauchy) completion of the rationals, and constant sequences of rationals represent how they embed. In categories we have limits along diagrams, and dually colimits. Speaking of colimits, not all categories have all of them. The process of extending a category to a bigger one having all of them is called cocompletion. The [free cocompletion](https://ncatlab.org/nlab/show/free+cocompletion) is a universal construction giving a best cocompletion, in the sense of having a concrete universal property. The Yoneda embedding explains that the original category embeds fully and faithfully in the free cocompletion (though the latter is bigger). The free cocompletion of a category \(C\) is its presheaf category, the functor category \([C^{\text{op}}, \mathbf{Set}] = \mathbf{Set}^{C^{\text{op}}}\). The Yoneda embedding sends an object of \(C\) to a representable functor. As with the rationals in the reals, a presheaf is a colimit of a diagram of representable presheaves, which are the images of objects of \(C\) under the Yoneda embedding. This is standard category theory. A paragraph in the nLab entry says that this generalizes well to enriched categories, replacing \(\mathbf{Set}\) by \(\mathcal{V}\).
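    In formulas (the standard "density" or co-Yoneda statement, added here for reference): writing \(y \colon C \to \mathbf{Set}^{C^{\text{op}}}\) for the Yoneda embedding, every presheaf \(P\) is canonically a colimit of representables,

    $$ P \cong \int^{c \in C} P(c) \cdot y(c) , $$

    where \(P(c) \cdot y(c)\) is a coproduct of \(P(c)\)-many copies of \(y(c)\) and the integral sign denotes a coend.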

    One can view cocompleteness as extra structure a category can have, with cocontinuous functors between them preserving that structure and forming a category \(\mathbf{Cocomp}\).

    There are size issues here. We cannot build functor categories whose exponent is a large category, so we are speaking about cocompleting only small categories.

    Mac Lane and Moerdijk say:

    The process of sending a category to its presheaf category is a functor from \(\mathbf{Cat}\), the (large) category of all small categories, to \(\mathbf{Cocomp}\), the ("superlarge") category of "all" cocomplete categories, with morphisms all colimit-preserving functors. This corollary states in effect that the Yoneda embedding provides universal arrows and so, like universal arrows generally, constitutes the units of an adjunction (an adjunction in which forming the presheaf category is left adjoint to the forgetful functor \(\mathbf{Cocomp} \to \mathbf{Cat}\) that forgets cocompleteness). This suggestive formulation stumbles on the fact that cocomplete categories are hardly ever small [CWM, p. 110], so do not become small by forgetting the colimits!

    So cocompletion is a functor; that it should have an adjoint is suggestive, but we have size issues.

    In the quoted nLab entry John analyzes [this](https://ncatlab.org/nlab/show/free+cocompletion#free_cocompletion_as_a_pseudomonad), and concludes we're stuck in recovering an adjunction, but cites a discarded draft from Fiore et al. that confronted the issue (!). I'll continue ignoring these size issues as if an actual adjunction existed, hoping the problems are manageable somehow.

    In the enriched case, the colimits considered are weighted. [Here](https://golem.ph.utexas.edu/category/2007/02/day_on_rcfts.html#c007688) in the nCafé John explains how ordinary categorical colimits are special weighted colimits, and even reduce to linear combinations in a specific case (the measure example).

    The core of the analogy would be that the functor sending a small category \(C\) to the \(\mathcal{V}\)-enriched presheaf category \(\mathcal{V}^{C^{op}}\) (that puts a hat on \(C\)), making it enrichedly cocomplete (having all weighted colimits), is analogous to the free vector space functor, which takes a set and builds its linear span (the set of formal linear combinations).

    For vector spaces, when I send the basis vectors of a space wherever I like in another, I determine a linear map. Analogously, for a small \(\mathcal{V}\)-enriched category \(C\), if I have an enriched functor \(F:C \to \mathcal{V}^{D^{op}}\) (an enrichedly cocomplete category), the Yoneda extension lifts it to a weighted-colimit-preserving \(\mathcal{V}\)-enriched functor \(\hat{F}:\mathcal{V}^{C^{op}} \to \mathcal{V}^{D^{op}}\).
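    Concretely (my addition, but standard enriched category theory), \(\hat{F}\) is the left Kan extension of \(F\) along the Yoneda embedding, computed by a coend:

    $$ \hat{F}(P) = \int^{c \in C} P(c) \otimes F(c) , $$

    where \(P(c) \otimes F(c)\) means tensoring the presheaf \(F(c)\) pointwise by the object \(P(c)\) of \(\mathcal{V}\).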

    I said:

    When

    $$ \Phi : X^{\text{op}} \times Y \to \mathbf{Bool} $$ we write \( \Phi : X\nrightarrow Y \), but the nLab, Wikipedia and Borceux write \( \Phi : Y\nrightarrow X \) (from the covariant to the contravariant argument).

    And John [replied](https://forum.azimuthproject.org/discussion/comment/20596/#Comment_20596):

    I actually prefer the nLab, Wikipedia and Borceux convention, because a \(\mathcal{V}\)-enriched functor from \(\mathcal{X} \times \mathcal{Y}^{\text{op}}\) to \(\mathcal{V}\) can be reinterpreted as a functor from \(\mathcal{X}\) to the so-called presheaf category \(\mathcal{V}^{\mathcal{Y}^{\text{op}}}\), and that's a good thing.

    whose Yoneda extension, as before, leads to:

    Profunctors \(\Phi: \mathcal{C} \nrightarrow \mathcal{D}\) between categories are secretly the same as colimit-preserving functors \(F : \mathbf{Set}^{\mathcal{C}^{\text{op}}} \to \mathbf{Set}^{\mathcal{D}^{\text{op}}} \)

    So profunctors emerge by themselves as the suitable notion of map for cocomplete categories and cocompleteness-preserving maps (in the enriched context). [EDIT: this could need a rethink]

    Now, from [this](https://forum.azimuthproject.org/discussion/comment/20274/#Comment_20274):

    The embedding of a set in the vector space whose basis is that set is analogous to the Yoneda embedding.

    The embedding of the basis in the space is the "insertion of generators", which is the unit of the free–forgetful adjunction between sets and vector spaces (CWM, p. 87), and so is the unit of the vector space monad (as in [here](https://ncatlab.org/nlab/show/unit+of+an+adjunction)). Analogously, one wants the would-be enriched cocompletion monad (the "hat monad") to have the Yoneda embedding as its unit.

    As for the "hat monad" product (natural transformation), I haven't worked any details and I'm running out of weekend, but it must boil down to the coend formula that Keith was referring to, while John tells us that we could track coends to weighted colimits.

    So "the category of sets is to the category of vector spaces as the category of small categories is to the category of profunctors".

    As a final note, the Kleisli category of that informal hat monad is then the category of Urs's that I spoke of [before](https://forum.azimuthproject.org/discussion/comment/20662/#Comment_20662), with categories as placeholders for enriched presheaf categories, and profunctors as morphisms. True, 2-morphisms don't enter here. Anyhow, I found it fortunate that they came up, in that it could serve as honest motivation to approach the doorstep of the 2-categorical and higher stuff that can be difficult for non-pros to reach.

  • 36.

    Interesting: the extension operator is just a flipped "bind", the default way to define a monad (on the category of types and computations) in Haskell.
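    Concretely, a two-line sketch (the name `extend` is mine; the Prelude already provides this function as `(=<<)`):

        -- Bind with its arguments flipped: the extension operator.
        extend :: Monad m => (a -> m b) -> (m a -> m b)
        extend f ma = ma >>= f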

  • 37.
    edited August 28

    Hey, there's a very recent (July!) [paper](https://link.springer.com/article/10.1007/s00029-017-0361-3) from Fiore, Gambino, Hyland and Winskel addressing precisely the size issues in the profunctor part of the analogy I wrote about earlier. How good!

  • 38.

    Christopher Upshaw wrote,

    Interesting: the extension operator is just a flipped "bind", the default way to define a monad (on the category of types and computations) in Haskell.

    Could you elaborate on that a bit?

  • 39.
    edited August 29

    Hey there, Jesus! You seem to be doing quite well figuring out what I was hinting at. I was too lazy to explain all this stuff, but you found a page on the nLab where I was explaining some of it to Mike Stay once upon a time, so it's okay that I'm a bit less energetic now. If you have any questions, just ask!

    And yes, the size issues are annoying here: it's really tempting to think of 'free cocompletion' as a monad on \(\mathrm{Cat}\), but in fact it maps small categories to large ones, and I guess large ones to extra-large ones (though I haven't even thought about that), and so on. Gambino, Hyland and Power were writing a very nice paper that stumbled into this tar pit, and instead of just saying 'pretend size issues aren't really a problem' (as I'd be tempted to do), they dove in and decided to generalize the whole theory of monads to deal with this problem. I'm glad to hear they finally did it!

  • 40.

    It seems to me you could get around the size issue by using an infinite hierarchy instead of the set/class distinction.

  • 41.

    What a motivational booster! One question: assuming that monad language is legit, would \(\mathbf{Cocomp}\) be equivalent to the Eilenberg–Moore category of that presheaf "monad"?

  • 42.

    Okay, so in Haskell and other compile-time typed languages each expression has a type. This lets the compiler catch a lot of problems before the program is run. It turns out that just as you can describe a set theory as a category, you can also do so with a type theory. If you do this to Haskell you get a* category called Hask with types as objects, and functions as morphisms.

    * There are actually a lot of very similar but not identical categories that are all called Hask, basically depending on how fine-grained you are being, and if you aren't very careful you will get something that isn't quite always associative. This usually doesn't matter. (It's a very engineering, not mathematics, thing. Like how physicists will often ignore subtleties of analysis.)

    Using the slightly oddly named 'data' keyword, you can define new types by specifying how to construct the type and how to construct its members. For example:

        data Maybe a where
          Just    :: a -> Maybe a
          Nothing :: Maybe a

    is one way to define the Maybe type constructor. (Here \(a :: b\) means 'a has type b', or 'has kind b', for the people who know Haskell.)

    This means that given a type, say Integer, then Maybe Integer is also a type, and you make its members in two ways: by applying Just to an Integer, or by just using Nothing by itself.* (A tiny example follows the footnote.)

    * I am using a very verbose syntax that isn't in Haskell2010 but is handled by the de facto Haskell compiler, GHC, and wouldn't normally be used for something this simple.
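    For instance (my toy example):

        -- Two ways to build a value of type Maybe Integer.
        x, y :: Maybe Integer
        x = Just 3
        y = Nothing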

    Now, for operations/functions which use the same notation but mean different things at different types, like say "+", Haskell has type classes, which are neither types nor classes, but rather a way to go from types to values. They are defined on type constructors: while the typeclass that defines "+" (Num) has instances on plain types, things like Ratio, Integer, or Maybe Integer, the ones I am going to talk about have instances on type constructors like Maybe, those that take one type and produce another.

    Now, two of these typeclasses have names that might be familiar: "Functor" and "Monad". These don't describe a generic functor or monad, but rather elaborate a function from types to types into a functor from Hask to Hask. In Haskell the instances for Maybe look like:

        instance Functor Maybe where
          fmap f (Just x) = Just (f x)
          fmap _ Nothing  = Nothing

    "fmap" gives the mapping on morphisms, that is expresions of type A -> B, and produces an expresion of type "Maybe A -> Maybe B" and indeed the Functor instance equips the function on types "Maybe" into an endofunctor. (You can check the functor laws.) (f is the function)

        instance Monad Maybe where
          return = Just
          (Just x) >>= fm = fm x
          Nothing  >>= _  = Nothing

    Here (>>=) has type "Maybe a -> (a -> Maybe b) -> Maybe b". Natural transformations between endofunctors of Hask end up being parametric functions (and vice versa). So "return" is the natural transformation from id to M (M = Maybe here), and ">>=" (aka bind) gives the semantics of the Kleisli category. In the paper they use "extend", which is just a bind that takes the function first (i.e. has type "(a -> M b) -> (M a -> M b)"), which is easier to describe in category-theoretic terms (and which avoids the not-even-locally-small categories you run into when defining the monad via the multiplication, "M (M a) -> M a" in Haskell types).

    My second comment is that in a type theory the size issues are kind of trivial. Either, like Hask, you just define Type :: Type, which sure lets in a paradox, but actually isn't quite as bad as it would be normally; Hask still works as a programming-language type theory just fine, even though it's inconsistent (in at least two other ways as well).

    Or you do like actual theorem provers and have an infinite hierarchy, where \(Type_i :: Type_{i+1}\); then Cat can be defined in a level-agnostic way, the presheaf monad is defined to take types in \(Type_n\) and produce things in \(Type_{n+1}\), and you can define the multiplication as going down levels, or as going to the lifting of "M a" into the higher level. And while technically these definitions live in \(Type_{\omega}\), and say the category of level-agnostic types would have to live in \(Type_{\omega+1}\), that Cat is fine too.
