
I'm having some trouble understanding the conceptual differences between string diagrams and the "usual" categorical diagrams with morphisms as arrows, especially when it comes to drawing monoidal products.

In other words, how would I draw the following diagram:

[string diagram: a box \(f\) with two input wires \(A\), \(B\) and one output wire \(C\)]

using the notation where morphisms are arrows? Sure, I can always do this:

[arrow \(f : A \times B \to C\)]

where I just say the function accepts a product as the input, but I feel this is just raising another question: how did I end up with \( A \times B \) ? A possible answer could be that we can just specify the product using the universal property and we somehow just "have" it.

But I feel this doesn't get to the gist of the answer. To translate a monoidal product into the usual notation, we'd need an arrow that accepts two things as input. Arrows are inherently one-dimensional objects, and their inputs are zero-dimensional objects: points. I suspect that using two-dimensional shapes instead of one-dimensional arrows could help alleviate the problem, which is exactly what string diagrams are, in the end!

Is this sort of reasoning valid? Where can I read more about this? Are there higher-dimensional generalizations of string diagrams?

This seems like an important thing to know, but I haven't been able to find good resources. Category theory is usually introduced as points and arrows between them, but does this mean there's an inherent limitation to arrow notation? I spent quite a while trying to draw products using arrows before realizing it might not be possible.

## Comments

First, we generally write \(A \otimes B\) for a general monoidal product, as in \(f : A \otimes B \rightarrow C\), although the Cartesian product \(\times\) is one possible monoidal product in \(\mathbf{Set}\). The monoidal product isn't the same as the Cartesian product, at least not in general, so it's important to make the distinction.

Note that morphisms can be more general than functions! While many specific examples of morphisms are functions or function-like (e.g., functions for sets, continuous functions for topological spaces, group homomorphisms, ring homomorphisms, etc.), we also have a category \(\mathbf{Rel}\) where the objects are sets and the morphisms are relations between sets. Here's an example:

$$ R: X \to Y \qquad R = \{(x,y): x^2 + y^2 = 1 \ \text{for} \ x \in X, \ y \in Y\}$$ We can compose it with another relation \(S\) $$S : Y \to Z \qquad S = \{(y,z): 2y + 3z = 0 \ \text{for} \ y \in Y, \ z \in Z\}$$ by finding pairs of \(x\) and \(z\) which have a shared \(y\) value: $$SR : X \to Z \qquad SR = \{(x,z) : x^2 + y^2 = 1, \ 2y + 3z = 0 \ \text{for some} \ y \in Y\}$$
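This composition can be checked mechanically. Here is a minimal Python sketch, assuming we restrict \(X, Y, Z\) to a small finite range of integers so that the circle and line relations have computable solution sets (the name `compose` is just illustrative):

```python
# Composition of relations as in the example above, over small finite
# carriers so everything is enumerable.

def compose(S, R):
    """Relational composition SR: pairs (x, z) sharing some middle y."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

X = Y = Z = range(-2, 3)

# R : X -> Y, the relation x^2 + y^2 = 1
R = {(x, y) for x in X for y in Y if x**2 + y**2 == 1}
# S : Y -> Z, the relation 2y + 3z = 0
S = {(y, z) for y in Y for z in Z if 2*y + 3*z == 0}

SR = compose(S, R)
print(sorted(R))   # the integer points on the unit circle
print(sorted(SR))  # pairs (x, z) linked through a shared y
```

With this small range, \(R\) contains only the four integer points on the unit circle, and composing through the shared \(y\) value picks out the matching \((x, z)\) pairs.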

Morphisms take only one input and one output. The morphism \(f : A \otimes B \to C \) has one input, called \(A \otimes B\). Of course, \(A \otimes B\) is an object that is a combination of two other objects \(A\) and \(B\). In some sense, the string diagram hides the fact that we combine \(A\) and \(B\) before applying \(f\) (if we're thinking of \(f\) as a function). If you write \(f : X \to C\) and then let \(X = A \otimes B\), it becomes a bit clearer in the notation that \(A \otimes B\) is just one object.

Well, if \(A \times B\) is the Cartesian product (or direct product, etc.), then yes: if the category has finite products, there's an object which satisfies the universal property of being a product of \(A\) and \(B\). We happen to write it as \(A \times B\) to remind ourselves that this object is the product of two other objects, but we don't have to.

For any monoidal category, the monoidal product is a functor \(\otimes : \mathcal{C} \times \mathcal{C} \to \mathcal{C}\) that sends pairs of objects in \(\mathcal{C}\) to other objects, and pairs of morphisms to morphisms. By definition, it always gives something for any pair of objects. You can define the monoidal product using limits (like products) and colimits (like coproducts) by simply saying that the monoidal product sends pairs of objects to whatever you get when you compute the limit/colimit (if it exists).
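As a concrete illustration of \(\otimes\) acting on both pairs of objects and pairs of morphisms, here is a small Python sketch of the Cartesian-product monoidal structure on finite sets. The helper names `tensor_objects` and `tensor_morphisms` are made up for illustration:

```python
from itertools import product

# A minimal sketch of the monoidal product on Set with (x) = Cartesian
# product: it acts on pairs of objects (sets) and pairs of morphisms
# (functions).

def tensor_objects(A, B):
    """A (x) B on objects: the Cartesian product of the two sets."""
    return set(product(A, B))

def tensor_morphisms(f, g):
    """f (x) g on morphisms: act componentwise on pairs."""
    return lambda pair: (f(pair[0]), g(pair[1]))

A, B = {1, 2}, {"a"}
f = lambda n: n + 10        # a morphism out of A
g = lambda s: s.upper()     # a morphism out of B

AB = tensor_objects(A, B)
fg = tensor_morphisms(f, g)
print(sorted(AB))
print(fg((2, "a")))         # (12, 'A')
```

The point is that \(\otimes\) is defined on everything at once: every pair of objects gets an object, and every pair of morphisms gets a morphism, compatibly with composition.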

I'd watch [these TheCatsters videos](https://www.youtube.com/watch?v=USYRDDZ9yEc&list=PL50ABC4792BD0A086) on string diagrams if you haven't already. In these examples, natural transformations become points (2-morphisms), functors become lines (morphisms), and categories become 2D regions (objects). Arrow notation goes the other way: objects are points, morphisms are lines, and 2-morphisms are regions (although still usually drawn like arrows). So yes, there are higher-dimensional generalizations of string diagrams.


Thanks Scott for the extensive answer! I do realize I was very loose with the terminology, changing between monoidal and cartesian product and referring to morphisms as functions. I hope I did manage to get the point across, as these errors seem to just be local and not really crucial to the question.

This seems to be something along the lines of what I'm talking about! [Bartosz Milewski also talks](https://www.youtube.com/watch?v=eOdBTqY3-Og) about how we can "encode more stuff" in a 2D plane! He also refers to [Poincaré duality](https://en.wikipedia.org/wiki/Poincar%C3%A9_duality), but at the moment I'm finding it very difficult to understand.

The diagrams TheCatsters are talking about seem to be one level up (functors between categories), while the string diagrams *Seven Sketches* talks about involve morphisms between objects. The notation seems isomorphic, though! I'm not sure how to express this better, but in the Catsters' string diagrams, if you replace the regions (categories) with wires and the wires (functors) with boxes, you get basically the same thing, except there are no 2-morphisms.

I think I understand what you're saying but it doesn't seem to answer my question. My question is, reformulated, what property is it exactly of string diagrams (and not of usual diagrams) that makes them suitable for depicting monoidal products?

I agree, but one can also argue that the usual diagrams hide the process of combining \(A\) and \(B\) to get \(A \times B\)! We always invoke the universal construction of the product and say "there is a product satisfying special conditions", but we never actually have a morphism which constructs it from two arguments. I'm trying to understand whether the reason is more fundamental than just our limited notation; the answer from the Catsters videos seems to be that it isn't.


Yes, I had the same thought. In video 4 or 5, the cap and cup make an appearance and the connection becomes clearer. The string diagrams in *Seven Sketches* are designed more for monoidal/hypergraph categories, I think. So it's not like there's exactly one way to draw a string diagram. Perhaps the biggest difference is that the string diagrams are vertically/horizontally flipped between the two.

In string diagrams, the monoidal product is denoted by the juxtaposition of two strings. So a single "string" can be composed of two strings, whereas in the usual commutative-diagram notation, the product of two objects is still an object, so just a dot/point. The strings just make it easier to see certain identities hidden in the notation. Consider an example: the real numbers with the order relation \(\le\) as morphisms and addition \(+\) as the monoidal product.

Given the morphisms, $$x \le a + b $$ $$a + y \le c + z $$ $$ b + c \le w $$ it is not clear that the composition of these three morphisms into a single morphism is $$ x + y \le w + z$$ but drawing the string diagram makes this clear.

To do the composition, we make use of the identity morphism and the monoidal product. Tensoring \(y \le y\) with \(x \le a + b\) gives \(x + y \le a + b + y\); tensoring \(b \le b\) with \(a + y \le c + z\) gives \(b + a + y \le b + c + z\); tensoring \(b + c \le w \) with \(z \le z\) gives \(b + c + z \le w + z\). With these three tensored morphisms, we can compose them with morphism composition because the inputs and outputs now match. Where they don't match exactly, we can tensor or compose with some coherence maps (braid, associator, unitors, cap, cup, etc.) to get the inputs to match, then we compose.
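Since a morphism \(p \le q\) exists in this poset exactly when the inequality holds, the whole derivation can be sanity-checked at concrete values. A minimal Python check, with the numbers chosen arbitrarily so that all three given morphisms exist:

```python
# Verify the composite morphism x + y <= w + z at concrete values.
# In the poset (R, <=) with + as monoidal product, "tensoring" two
# morphisms p <= q and r <= s yields p + r <= q + s, and composition
# chains <= through a shared middle value.

x, y, z, w = 1.0, 2.0, 0.5, 9.0
a, b, c = 1.5, 0.5, 3.0

# the three given morphisms
assert x <= a + b
assert a + y <= c + z
assert b + c <= w

# the tensored versions from the text
assert x + y <= a + b + y        # tensored with y <= y
assert b + a + y <= b + c + z    # tensored with b <= b
assert b + c + z <= w + z        # tensored with z <= z

# chaining these (a + b + y = b + a + y by the braiding) gives
# the claimed composite morphism:
assert x + y <= w + z
print("x + y <= w + z holds:", x + y, "<=", w + z)
```

Of course a single numeric instance isn't a proof; the string diagram is what shows the composite exists for *all* values satisfying the three hypotheses.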

Moving wires around in a string diagram amounts to applying various morphisms/natural transformations (e.g., the coherence maps such as the associator, the braiding, and the unitors) together with the coherence identities (the pentagon identity, the triangle identity, etc.). The string diagram just makes their application straightforward. I'm not sure I've really explained myself well, though.

Precisely, there isn't a morphism that sends \(A\) and \(B\) to the monoidal product of \(A\) and \(B\). Technically, morphisms only have one object as a domain, and one object as codomain. In monoidal categories, we can "multiply" together objects to get other objects (and this multiplication plays nicely with morphisms and composition).

But the monoidal product does not necessarily have the universal property of the product. A simple counterexample is the category of sets with the disjoint union as the monoidal product (and the empty set as the identity). We can tensor objects and morphisms this way (relying on a colimit to compute them, rather than a limit).
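A quick sketch of that counterexample in Python: tagging elements by which side they came from implements the disjoint union, and functions tensor by acting on each summand separately (the helper names are made up for illustration):

```python
# Disjoint union as a monoidal product on Set: tag each element with
# the side it came from, so overlapping sets don't collapse.

def disjoint_union(A, B):
    return {(0, a) for a in A} | {(1, b) for b in B}

def tensor(f, g):
    """f + g acts on the left summand via f and the right via g."""
    return lambda t: (t[0], f(t[1]) if t[0] == 0 else g(t[1]))

A = {1, 2}
print(sorted(disjoint_union(A, A)))  # four tagged elements, not two
```

Note that \(A \sqcup A\) has twice the elements of \(A\), so it clearly lacks the universal property of the product (there are no sensible projections back onto \(A\)).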

Or consider a category with three objects \(A, B, C\) and only the required identity morphisms. This category has no products (or coproducts). But we can make it into a monoidal category by defining a functor \(\otimes\) like this: $$A \otimes A = B $$ $$ B \otimes B = A $$ $$ A \otimes B = C = B \otimes A$$ where \(C\) is the monoidal identity object. In this case, the monoidal product is strictly commutative and associative (rather than being those things only up to isomorphism).
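Since the category is small, the claim that this \(\otimes\) is strictly commutative and associative can be verified by brute force. A short Python check, encoding \(\otimes\) as a lookup table and filling in the unit laws for \(C\):

```python
from itertools import product

# The three-object monoidal category from the text, with (x) given by
# a lookup table. C is the monoidal unit, so C (x) X = X (x) C = X.
tensor = {
    ("A", "A"): "B", ("B", "B"): "A",
    ("A", "B"): "C", ("B", "A"): "C",
    ("C", "A"): "A", ("A", "C"): "A",
    ("C", "B"): "B", ("B", "C"): "B",
    ("C", "C"): "C",
}

objs = ["A", "B", "C"]

# strict commutativity: x (x) y == y (x) x
assert all(tensor[x, y] == tensor[y, x]
           for x, y in product(objs, objs))

# strict associativity: (x (x) y) (x) z == x (x) (y (x) z)
assert all(tensor[tensor[x, y], z] == tensor[x, tensor[y, z]]
           for x, y, z in product(objs, objs, objs))

print("strictly commutative and associative")
```

This also makes the earlier point concrete: \(\otimes\) is just a function on pairs of objects (plus identities on the only morphisms around); no universal property is involved.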

You don't need to use limits/colimits to define the monoidal product, but there are many interesting examples where we can simply define the monoidal product by computing a limit or colimit.


Thanks Scott. I definitely have to think more about this before I can have a good reply.


following..interesting..
