
Last time I explained monoidal categories, which are a framework for studying processes that we can compose and tensor. We can do a lot with monoidal categories! For example, if we have a monoidal category with morphisms

$$ \Phi \colon a \to c \otimes d $$
$$ \Psi \colon d \otimes b \to e \otimes f $$
$$ \Theta \colon c \otimes e \to g $$

then by a combination of composing and tensoring we can cook up a morphism like this:

which goes from \(a \otimes b\) to \(g \otimes f\). This sort of picture is called a **string diagram**, and we've seen plenty of them already.

We don't *need* to use string diagrams to work with monoidal categories:

**Puzzle 281.** Describe the morphism in the above string diagram using a more traditional formula involving composition \(\circ\), tensoring \(\otimes\), the associator \(\alpha\), and the left and right unitors \(\lambda\) and \(\rho\).

However, they make it a lot easier and more intuitive!

An interesting feature of string diagrams is that they hide the associator and the left and right unitors. You can't easily see them in these diagrams! However, when you turn a string diagram into a more traditional formula as in Puzzle 281, you'll see that you need to include associators and unitors to get a formula that makes sense.

This may seem strange: if we need the associators and unitors in our formulas, why don't we need them in our diagrams?

The ultimate answer is 'Mac Lane's strictification theorem'. This says that every monoidal category is equivalent to one where the associator and unitors are *identity* morphisms. So, we can take any monoidal category and replace it by an equivalent one where the tensor product is 'strictly' associative, not just up to isomorphism:

$$ (x \otimes y) \otimes z = x \otimes (y \otimes z) $$

and similarly, the left and right unit laws hold strictly:

$$ I \otimes x = x = x \otimes I $$

This lets us stop worrying about associators and unitors. String diagrams are secretly doing this for us!
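To see concretely what strictness buys us, here is a minimal Python sketch (the modeling is my own illustration, not part of the lecture): objects are nested pairs, so the associativity and unit laws fail on the nose, but they hold strictly once we flatten everything to lists.

```python
# Model objects of a monoidal category as nested pairs, with unit object I.
# On the nose, associativity and the unit laws fail -- they hold only up to
# isomorphism.  (This is an illustrative toy, not Mac Lane's actual proof.)
I = ()  # the unit object

def tensor(x, y):
    """Tensor product of objects, modeled as pairing."""
    return (x, y)

x, y, z = "x", "y", "z"

# Associativity fails on the nose for nested pairs:
assert tensor(tensor(x, y), z) != tensor(x, tensor(y, z))

def flatten(obj):
    """'Strictify' an object: forget parenthesization and units."""
    if obj == I:
        return []
    if isinstance(obj, tuple):
        return flatten(obj[0]) + flatten(obj[1])
    return [obj]

# After flattening, the laws hold strictly:
assert flatten(tensor(tensor(x, y), z)) == flatten(tensor(x, tensor(y, z)))
assert flatten(tensor(I, x)) == flatten(x) == flatten(tensor(x, I))
```

Flattening is only a cartoon: it collapses \((x \otimes y) \otimes z\) and \(x \otimes (y \otimes z)\) into one object, whereas the actual strictification theorem builds a bigger equivalent category, as discussed in the comments. But the toy shows why strictly associative tensor products are easy to come by.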

Often people use Mac Lane's strictification theorem in a loose way, simply using it as an excuse to act like monoidal categories are all strict. That's actually not so bad, if you're not too obsessed with precision.

To state Mac Lane's strictification theorem precisely, we first need to say exactly what it means for two monoidal categories to be 'equivalent'. For this we need to define a 'monoidal equivalence' between monoidal categories. Then, we define a **strict** monoidal category to be one where the associator and unitors are identity morphisms. Mac Lane's theorem then says that every monoidal category is monoidally equivalent to a strict one.

If you're curious about the details, try my notes:

All the necessary terms are defined, leading up to a precise statement of Mac Lane's strictification theorem at the very end. But this theorem takes quite a lot of work to prove, and I don't do that! You can see a sketch of the proof here:

- John Armstrong, The "strictification" theorem.

But there's more! If all we have is a monoidal category, the strings in our diagrams aren't allowed to cross. But last time I mentioned symmetric monoidal categories, where we have a natural isomorphism called the **symmetry**

$$ \sigma_{x,y} \colon x \otimes y \to y \otimes x $$

that allows us to switch objects, obeying various rules. This lets us make sense of string diagrams where wires cross, like this:

**Puzzle 282.** Describe the morphism in the above string diagram with a formula involving composition \(\circ\), tensoring \(\otimes\), the associator \(\alpha\), the left and right unitors \(\lambda,\rho\), and the symmetry \(\sigma\).

There is a version of Mac Lane's strictification theorem for symmetric monoidal categories, too! You can find it stated in my notes. This lets us replace any symmetric monoidal category by a **strict** one, where the associator and unitors *but not the symmetry* are identity morphisms.

We really need the symmetry: it cannot in general be swept under the rug. That should be sort of obvious: for example, switching two numbers in an ordered pair really *does* something; we can't just say it's the identity.

Again, please ask questions! I'm sketching some ideas that would take considerably longer to explain in full detail.

## Comments

I have a question: how is the "strictification" related to "coherence" (in the sense of the "coherence theorem" in Mac Lane's CWM for instance) – is it a stronger version of coherence? or a modern reworking of coherence? or are the two concepts quite separate? both seem to be about justifying that you can "forget" about associators etc in practice.


281: \( (\Theta \otimes 1) \circ \alpha \circ (1 \otimes \Psi) \circ \alpha \circ (\Phi \otimes 1) \)

Two questions here: 1. Am I missing unitors somewhere? 2. The associator only seems to go *from* \( (X \otimes Y) \otimes Z \) *to* \( X \otimes (Y \otimes Z) \), but I use it in both directions: how do I fix this?

re **Puzzle 281**

Going from left to right we have three main blocks:

\(\qquad\Phi \otimes 1_b : a \otimes b \to (c \otimes d) \otimes b\)

\(\qquad1_c \otimes \Psi : c \otimes (d \otimes b) \to c \otimes (e \otimes f)\)

\(\qquad\Theta \otimes 1_f : (c \otimes e) \otimes f \to g \otimes f\)

These don't quite match up, so we need associators to plug the gaps:

\(\qquad\alpha_{c, d, b} : (c \otimes d) \otimes b \to c \otimes (d \otimes b)\)

\(\qquad\alpha^{-1}_{c, e, f} : c \otimes (e \otimes f) \to (c \otimes e) \otimes f\)

So the composite morphism is:

\(\qquad(\Theta \otimes 1_f) \circ \alpha^{-1}_{c, e, f} \circ (1_c \otimes \Psi) \circ \alpha _{c, d, b} \circ (\Phi \otimes 1_b) : a \otimes b \to g \otimes f\)
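A composite like this can be type-checked mechanically. Here is a small Python sketch (the representation of morphisms as `(name, dom, cod)` triples is my own, purely illustrative): it builds the composite above and confirms that it goes from \(a \otimes b\) to \(g \otimes f\).

```python
# Morphisms are (name, dom, cod) triples; objects are nested tuples.
def T(x, y):
    """Tensor product of objects."""
    return (x, y)

def tensor(f, g):
    """Tensor product of morphisms: f⊗g."""
    return ("(" + f[0] + "⊗" + g[0] + ")", T(f[1], g[1]), T(f[2], g[2]))

def compose(g, f):
    """Composite g∘f, checking that the types match."""
    assert f[2] == g[1], "cannot compose: cod(f) != dom(g)"
    return ("(" + g[0] + "∘" + f[0] + ")", f[1], g[2])

def ident(x):
    """Identity morphism 1_x."""
    return ("1_" + str(x), x, x)

def assoc(x, y, z):
    """Associator component α_{x,y,z} : (x⊗y)⊗z → x⊗(y⊗z)."""
    return ("α", T(T(x, y), z), T(x, T(y, z)))

def assoc_inv(x, y, z):
    """Inverse associator : x⊗(y⊗z) → (x⊗y)⊗z."""
    return ("α⁻¹", T(x, T(y, z)), T(T(x, y), z))

a, b, c, d, e, f, g = "abcdefg"
Phi   = ("Φ", a, T(c, d))          # Φ : a → c⊗d
Psi   = ("Ψ", T(d, b), T(e, f))    # Ψ : d⊗b → e⊗f
Theta = ("Θ", T(c, e), g)          # Θ : c⊗e → g

m = compose(tensor(Theta, ident(f)),
        compose(assoc_inv(c, e, f),
            compose(tensor(ident(c), Psi),
                compose(assoc(c, d, b),
                        tensor(Phi, ident(b))))))

assert (m[1], m[2]) == (T(a, b), T(g, f))  # m : a⊗b → g⊗f
```

Every `compose` call checks that the codomain and domain line up, so dropping either associator makes the script fail with an assertion error.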


I'm wondering if we can derive these formulas in a graphical way. For example, \(\Phi\), \(\Psi\) and \(\Theta\) are intertwined with \(1_b, 1_c, 1_f\), as two inverted parabolas, and in the formulas, they are alternated. Also, we can draw two straight lines, crossing the wires b-d-c and c-e-f, respectively. Looking at c, the positive-shaped straight line has c-d-b and can be connected to \(\alpha_{c,d,b}\); the negative-shaped straight line can be connected to \(\alpha^{-1}_{c,e,f}\). Maybe this could be helpful with more complex diagrams. Is this procedure correct?


Ah, okay, so this answers my question. Explicitly: since \( \alpha \) is a natural *isomorphism*, we're not making any extra assumptions by talking about its inverse \( \alpha^{-1} \). I omitted subscripts for \( \alpha \) and \( 1 \): I assume this is fine because they can never be ambiguous. Am I right in this assumption?


Here is an answer to **Puzzle 282**:

\( a \otimes b \rightarrow (c \otimes d) \otimes b \rightarrow c \otimes (d \otimes b) \rightarrow c \otimes (b \otimes d) \rightarrow c \otimes (e \otimes f) \rightarrow (c \otimes e) \otimes f \rightarrow g \otimes f \)

gives a sequence of operations:

\( (\Theta \otimes 1_f) \circ \alpha_{c,e,f}^{-1} \circ (1_c \otimes \Psi) \circ (1_c \otimes \sigma_{d,b}) \circ \alpha_{c,d,b} \circ (\Phi \otimes 1_b) \)


We write subscripts for \(\alpha\) since \(\alpha\) is really a natural transformation which transforms one functor (doing the monoidal product twice on the left) into another functor (doing the monoidal product twice on the right). The subscripts pick out a component of the associator (a morphism that reassociates the monoidal product of a particular set of three objects).

Technically, you need the subscripts to get a morphism from the associator and then compose that morphism with other morphisms. There's a similar story for the identity morphism (although we also use 1 for the identity functor and identity natural transformation (which is just the identity morphism on each object)). It's not that it's never ambiguous, but usually context makes it clear. And that's kinda the whole point of string diagrams. In the string diagrams, these subtle technical distinctions don't end up mattering.


You can construct formulae where it is ambiguous, but that requires an ambiguous refactoring of a product. Which ends up being really obviously ambiguous and weird looking anyway!


I had an interesting thought.

Earlier we established that a recipe can be given as a string diagram. And here, we're establishing that string diagrams can be given a corresponding equation (up to isomorphism).

So does that mean that every recipe can be written as an equation? For some reason, I find humor in that.


If you strictify a monoidal category, is it somehow related to a skeletal category? It seems like there should be a skeletal category lurking in there since we are collapsing all isomorphic objects into one object.


@Michael – from what I can make out from the proof of the strictification theorem it doesn't actually collapse all isomorphic objects into one object – it just identifies, e.g., \(A\otimes (B\otimes C)\) and \((A\otimes B)\otimes C\) for all objects \(A, B, C\). So the strict monoidal category isn't necessarily skeletal.


Michael wrote:

> If you strictify a monoidal category, is it somehow related to a skeletal category?

No. I'll start by referring you to [comment #12 on Lecture 72](https://forum.azimuthproject.org/discussion/comment/20709/#Comment_20709). Then I'll say some more...

> Ken wrote:
>
> > Theorem 1, MacLane's theorem - is this similar to how we got a "skeletal" poset out of a preorder by collapsing all the isomorphisms?
>
> **NO!!!**
>
> That's a mistake every newcomer makes. It's a natural guess, but in fact Mac Lane's theorem is proved by making the monoidal category very 'fat', the opposite of skeletal.
>
> Unlike a preorder, a skeletal category still has lots of isomorphisms: it's just that they go from an object to itself. So, to make the associator and unitor isomorphisms into identity morphisms, it doesn't help to make the category skeletal. In fact it makes it harder!
>
> It would take quite a while to explain this, and I don't have the energy, but it's fairly rare for a monoidal category to be both strict and skeletal. To get a vague sense of how tricky things are: the category of finite sets with \(\times\) as its monoidal structure is monoidally equivalent to one that's both strict and skeletal, but not the category of all sets.

Michael wrote:

> It seems like there should be a skeletal category lurking in there since we are collapsing all isomorphic objects into one object.

No, we're not. The actual proof of Mac Lane's strictification theorem proceeds by creating tons *more* isomorphic objects.

You start with a monoidal category \(\mathcal{C}\). Then you create a new monoidal category \(\mathrm{str}(\mathcal{C})\) whose objects are *lists* of objects in \(\mathcal{C}\). The tensor product of two lists

$$ (c_1, \dots, c_m) $$

and

$$ (d_1, \dots, d_n) $$

is the list

$$ (c_1, \dots, c_m, d_1, \dots, d_n ) $$

It's easy to see that this tensor product is strictly associative. A morphism in \(\mathrm{str}(\mathcal{C})\) from

$$ (c_1, \dots, c_m) $$

to

$$ (d_1, \dots, d_n) $$

is defined to be a morphism

$$ f \colon c_1 \otimes (c_2 \otimes (c_3 \otimes \cdots )) \to d_1 \otimes (d_2 \otimes (d_3 \otimes \cdots )) $$

in \(\mathcal{C}\).

It takes some work to make \(\mathrm{str}(\mathcal{C})\) into a monoidal category and show that it is monoidally equivalent to \(\mathcal{C}\).

But my point here is that \(\mathrm{str}(\mathcal{C})\) is not skeletal! In this category

$$ (c_1, \dots, c_m) $$

and

$$ (d_1, \dots, d_n) $$

are isomorphic iff

$$ c_1 \otimes (c_2 \otimes (c_3 \otimes \cdots )) $$

and

$$ d_1 \otimes (d_2 \otimes (d_3 \otimes \cdots )) $$

are isomorphic in \(\mathcal{C}\). That happens a whole lot! For example, the two-element list

$$ (c_1,c_2) $$

is isomorphic to the one-element list

$$ (c_1 \otimes c_2) . $$
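The object-level part of this construction is easy to render in code. Here is a small Python sketch (the details of the encoding are assumed by me): objects of \(\mathrm{str}(\mathcal{C})\) are lists, the tensor product is list concatenation, and the unit is the empty list, so strict associativity and unitality are automatic.

```python
# Objects of str(C): lists of objects of C.  Tensor = concatenation.
def tensor(xs, ys):
    return xs + ys

unit = []  # the empty list is the unit object

x, y, z = ["x"], ["y"], ["z"]

# Concatenation is strictly associative and strictly unital:
assert tensor(tensor(x, y), z) == tensor(x, tensor(y, z)) == ["x", "y", "z"]
assert tensor(unit, x) == x == tensor(x, unit)

# But str(C) is not skeletal: the two-element list ["c1", "c2"] and the
# one-element list ["c1⊗c2"] are *distinct* objects, even though they are
# isomorphic (morphisms compare the tensor products of the entries in C).
assert ["c1", "c2"] != ["c1⊗c2"]
```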


Anindya wrote:

> from what I can make out from the proof of the strictification theorem it doesn't actually collapse all isomorphic objects into one object – it just identifies, e.g., \(A\otimes (B\otimes C)\) and \((A\otimes B)\otimes C\) for all objects \(A, B, C\). So the strict monoidal category isn't necessarily skeletal.

The strict monoidal category is indeed not necessarily skeletal, as you can see from my last comment.

But your first sentence is hovering on the brink between truth and falsehood, as you can also see from my last comment. If in the monoidal category \( \mathcal{C} \) we have two unequal objects

$$ X = A\otimes (B\otimes C) $$

and

$$ Y = (A\otimes B)\otimes C , $$

there will also be two unequal objects \( (X) \) and \( (Y) \) in the strictification \(\mathrm{str}(\mathcal{C})\). These are one-element lists.

In short, we are not identifying any objects that were previously unequal! Instead, we are throwing in a lot *more* objects, and creating a *new* tensor product that is strictly associative.

Fascinating.


@John, thanks for that. It strikes me that this looks very much like the comparison functor between monoids (on a set) and T-algebras, in that we start with a set + associative binary operation + unit and end up with a set + an n-ary operation for each n + a whole bunch of generalised associative laws. Would it be right to say strictification is a "categorification" of this operation on plain monoids?


Anindya, John

Thanks for the clarification. I was at first intimidated by the proof so didn't dive in, but after reading your comments and sketches I went through it and can see what is going on. Basically, the theorem proves that \(\mathcal{C}\) and \(\mathrm{str}(\mathcal{C})\) are equivalent by showing there are two functors that are inverses between them. Hence, we can go back and forth between \(\mathcal{C}\) and \(\mathrm{str}(\mathcal{C})\) to move parentheses around. Indeed, all of the original isomorphisms are still there, and the strict monoidal category also has isomorphisms within it, so it cannot be skeletal.


Michael wrote:

> Basically, the theorem proved that \(\mathcal{C}\) and \(\mathrm{str}(\mathcal{C})\) are equivalent by showing there are two functors that are inverses between them.

Close but not quite. These two functors are just 'weak' inverses. This is a great excuse to talk about an important issue.

Suppose in general we have two functors \( F \colon \mathcal{A} \to \mathcal{B} \) and \( G \colon \mathcal{B} \to \mathcal{A} \) with

$$ G F = 1_{\mathcal{A}} , \qquad F G = 1_{\mathcal{B}} . $$

Then we say \(F\) and \(G\) are **inverses**, and the categories \(\mathcal{A}\) and \(\mathcal{B}\) are **isomorphic**. In this case \(\mathcal{A}\) and \(\mathcal{B}\) look exactly alike except for the names of objects and morphisms: we've got a one-to-one correspondence between their objects, and a one-to-one correspondence between their morphisms which sends identities to identities and preserves composition.

But now suppose we have two functors \( F \colon \mathcal{A} \to \mathcal{B} \) and \( G \colon \mathcal{B} \to \mathcal{A} \) with natural isomorphisms

$$ \alpha \colon G F \Rightarrow 1_{\mathcal{A}} , \qquad \beta \colon F G \Rightarrow 1_{\mathcal{B}} . $$

Then we say \(F\) and \(G\) are **weak inverses**, and the categories \(\mathcal{A}\) and \(\mathcal{B}\) are **equivalent**. In this case there's no need for \(\mathcal{A}\) and \(\mathcal{B}\) to have the same number of objects, or morphisms. And yet, it turns out that equivalent categories are "the same for all practical purposes" - practical according to category theory, that is!

For example, \(\mathcal{A}\) could have infinitely many objects, with exactly one morphism from any object to any other object, while \(\mathcal{B}\) could be \(\mathbf{1}\), with exactly one object and one morphism. These categories are not isomorphic, but they are equivalent. In fact \(\mathcal{B}\) is skeletal, and every category is equivalent to a skeletal category.

Mac Lane's strictification theorem is also a way of finding a 'nicer' category (namely a strict monoidal one) that's equivalent (in fact monoidally equivalent) to the one you started with (namely any monoidal one). But Mac Lane's theorem does the opposite of making your category skeletal! It takes your category and throws in *more* objects... all isomorphic to ones you had already.
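The equivalent-but-not-isomorphic example can be sketched in a few lines of Python (the concrete objects and functors below are my own illustrative choices):

```python
# A: a category with several objects and exactly one morphism between any
# two objects (so every morphism is invertible).  B: the category 1.
A_objects = [0, 1, 2, 3, 4]   # a finite stand-in for "infinitely many"
B_objects = ["*"]

def F(a):
    """F : A → B sends every object of A to the unique object of B."""
    return "*"

def G(b):
    """G : B → A sends the object of B to a chosen object of A."""
    return 0

# GF is not the identity on objects, so F and G are not inverses...
assert any(G(F(a)) != a for a in A_objects)

# ...but each GF(a) = 0 is isomorphic to a, since A has a (unique,
# invertible) morphism between any two objects.  That gives the natural
# isomorphism α : GF ⇒ 1_A; and FG = 1_B on the nose.  So F and G are
# weak inverses, and A and B are equivalent but not isomorphic.
assert all(F(G(b)) == b for b in B_objects)
```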

Could you say that str(C) is the "free strict monoidal category" generated from a monoidal category, and that this functor MonCat -> StrMonCat has a right adjoint that keeps the strings and forgets the strictness?

EDIT: I guess an equivalence must be stronger than an adjunction, but it looks like a stock free/forgetful adjunction as well


> But now suppose we have two functors \( F \colon \mathcal{A} \to \mathcal{B} \) and \( G \colon \mathcal{B} \to \mathcal{A} \) with natural isomorphisms
>
> $$ \alpha \colon G F \Rightarrow 1_{\mathcal{A}} , \qquad \beta \colon F G \Rightarrow 1_{\mathcal{B}} $$
>
> Then we say \(F\) and \(G\) are **weak inverses**, and the categories \(\mathcal{A}\) and \(\mathcal{B}\) are **equivalent**.

Setting it up like this gives a whole spectrum of associations versus the black/white version when you use an equal sign. Totally makes sense now why we use weak inverses or adjoints instead of inverses. You are ignoring a lot of information if you start with an equal sign!

The string diagrams for these illustrate the difference very well imo.

![weak equivalence](http://aether.co.kr/images/weak_equivalence.svg)

Michael - yes! That's one of the main lessons of category theory! These days we use ['categorification'](https://en.wikipedia.org/wiki/Categorification) to mean the process of taking math done with equations and boosting it up to math done with isomorphisms, equivalences, or other such things.

A lot of old math, done with equations, really deserves to be improved in this way!


John wrote:

> These days we use 'categorification' to mean the process of taking math done with equations and boosting it up to math done with isomorphisms, equivalences, or other such things.

It would be interesting to learn more about the reverse process of 'decategorification': you lift to some higher abstraction level, find structures which suit your purposes, and then need to find some concrete "implementations" and "substrates" which have this structure. Maybe there is some nice (introductory) read with examples?


John wrote [here](https://forum.azimuthproject.org/discussion/comment/20708/#Comment_20708):

> **Definition 12\({}^\prime\).** If \(C\) and \(D\) are monoidal categories, a monoidal functor \(F \colon C \to D\) is a **monoidal equivalence** if there is a monoidal functor \(G \colon D \to C\) such that there exist monoidal natural isomorphisms \(\alpha \colon 1_C \Rightarrow FG \), \(\beta \colon GF \Rightarrow 1_D\).
>
> 3) However, there's a wonderful theorem that if we have \(\alpha\) and \(\beta\) as above, we can always *improve* them in a systematic way to get new ones that *do* satisfy the snake equations! Then we say we have an **adjoint equivalence**, because then \(F\) and \(G\) are also adjoint functors.

I drew some diagrams for weak inverses and adjoints using the definitions above:

Weak inverses

![weak_inverse_cap_cup](http://aether.co.kr/images/weak_inverse_cap_cup.svg)

Adjoints

![adjoint_cap_cup](http://aether.co.kr/images/adjoint_cap_cup.svg)

For the adjoint equivalence, once we impose the snake equations \(FGF=F\) and \(GFG=G\), we get the diagram on the right.

I wonder what the sort of complex manipulation of sums and products seen in, say, real analysis looks like as string diagrams? Probably you would want some sort of monad to let you skip down a sequence...
