I have an innate need to learn by coding small tools, and I also want to apply what I learn to practical software problems. But I am overwhelmed by the number of different threads discussing topics too advanced for me.

Where should I actually start?

---

**given some requirements, what resources will let us fulfill these requirements?**

and

**given some resources, what requirements will these resources let us fulfill?**

You'll notice that these questions are two sides of the same coin! There will be an enriched profunctor going from resources to requirements they fulfill, and we can 'flip' it to get an enriched profunctor from requirements to resources needed to fulfill them.

As the name suggests, an 'enriched profunctor' is a bit like a functor between enriched categories... for pros. That is, for professionals.

Indeed, most category theorists consider enriched profunctors rather sophisticated. But Fong and Spivak bring them down to earth by their clever trick of focusing on *preorders* rather than more general categories.

Remember that in Chapter 1 they introduced preorders. A **preorder** is a set \(X\) equipped with a relation \(\le\) obeying

[ x \le x]

and

[ x \le y \text{ and } y \le z \; \implies \; x \le z .]

Later we saw a preorder is secretly a category with at most one morphism from any object \(x\) to any object \(y\): if one exists we write \(x \le y\). But because there's at most *one*, we never have to worry about equations between morphisms. Everything simplifies enormously! This is the key to Fong and Spivak's expository strategy.
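For concreteness, both axioms can be verified by brute force on any finite example. Here is a small Python sketch of mine (the divisibility example is just an illustration, not from the book):

```python
from itertools import product

def is_preorder(elements, leq):
    """Check reflexivity and transitivity of a relation leq on a finite set."""
    reflexive = all(leq(x, x) for x in elements)
    transitive = all(
        leq(x, z)
        for x, y, z in product(elements, repeat=3)
        if leq(x, y) and leq(y, z)
    )
    return reflexive and transitive

# Example: divisibility on {1, 2, 3, 4, 6, 12} is a preorder (in fact a poset).
def divides(x, y):
    return y % x == 0

print(is_preorder([1, 2, 3, 4, 6, 12], divides))  # True
```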

In Chapter 2 they introduced monoidal preorders. These are a special case of 'monoidal categories', which we haven't discussed yet - but they're much simpler! A **monoidal preorder** is a preorder \( (X,\le) \) with an operation \(\otimes : X \times X \to X\) and element \(I \in X\) obeying

$$ (x \otimes y) \otimes z = x \otimes (y \otimes z) $$

$$ I \otimes x = x = x \otimes I $$

and

$$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$

We used preorders to study *resources*: we said \( x \le y \) if \(x \) is cheaper than \(y\), or you can get \(x\) if you have \(y\). Then we used \(\otimes\) to *combine* resources, and used \(I\) for a 'nothing' resource: \(x\) combined with nothing is just \(x\).

(Actually Fong and Spivak use the opposite convention, writing \(x \le y\) to mean you can get \(y\) if you have \(x\). This seems weird if you think of resources as being like money, but natural if you think of your preorder as a category, and remember \(x \le y\) means there's a morphism \(f : x \to y\). I should probably use this convention.)
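A tiny concrete example: the preorder \(\mathbf{Bool} = \{\text{false} \le \text{true}\}\) becomes a monoidal preorder with 'and' as \(\otimes\) and 'true' as \(I\). All three axioms can be checked by brute force (a sketch of mine, not code from the book):

```python
from itertools import product

# Bool as a monoidal preorder: False <= True, x ⊗ y = (x and y), I = True.
X = [False, True]
leq = lambda x, y: (not x) or y       # the order: False <= True
tensor = lambda x, y: x and y
I = True

# Associativity and the unit laws:
assert all(tensor(tensor(x, y), z) == tensor(x, tensor(y, z))
           for x, y, z in product(X, repeat=3))
assert all(tensor(I, x) == x == tensor(x, I) for x in X)

# Monotonicity: x <= x' and y <= y' imply x ⊗ y <= x' ⊗ y'.
assert all(leq(tensor(x, y), tensor(x2, y2))
           for x, x2, y, y2 in product(X, repeat=4)
           if leq(x, x2) and leq(y, y2))
print("Bool is a monoidal preorder")
```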

Later in Chapter 2 they generalized preorders a bit, and introduced categories enriched in a monoidal preorder. Remember the idea: first we choose a monoidal preorder to enrich in, and call it \(\mathcal{V}\). Then a **\(\mathcal{V}\)-enriched category**, say \(\mathcal{X}\), consists of

- a set of **objects** \(\text{Ob}(\mathcal{X})\), and
- for every two objects \(x,y\), an element \(\mathcal{X}(x,y)\) of \(\mathcal{V}\),

such that

a) \( I\leq\mathcal{X}(x,x) \) for every object \(x\in\text{Ob}(\mathcal{X})\), and

b) \( \mathcal{X}(x,y)\otimes\mathcal{X}(y,z)\leq\mathcal{X}(x,z) \) for all objects \(x,y,z\in\mathrm{Ob}(\mathcal{X})\).

We saw that if \(\mathcal{V} = \mathbf{Bool}\), a \(\mathcal{V}\)-enriched category is just a preorder: the truth value \(\mathcal{X}(x,y)\) tells you if you *can* get from \(x\) to \(y\). But if \(\mathcal{V} \) is something fancier, like \(\mathbf{Cost}\), \(\mathcal{X}(x,y)\) tells you more, like *how much it costs* to get from \(x\) to \(y\).
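Concretely, a \(\mathbf{Cost}\)-enriched category is a matrix of 'distances': since the order on \(\mathbf{Cost}\) is reversed, axiom a) says \(\mathcal{X}(x,x) = 0\), and axiom b) becomes the triangle inequality \(\mathcal{X}(x,y) + \mathcal{X}(y,z) \ge \mathcal{X}(x,z)\). A brute-force check on a made-up example of mine:

```python
import math
from itertools import product

# A Cost-enriched category on objects {0, 1, 2}: d[x][y] is the cost of
# getting from x to y, with math.inf meaning "no way at all".
inf = math.inf
d = [[0, 3, 5],
     [inf, 0, 2],
     [inf, inf, 0]]

objs = range(3)
# (a) I <= X(x,x) in Cost means d[x][x] == 0.
assert all(d[x][x] == 0 for x in objs)
# (b) X(x,y) ⊗ X(y,z) <= X(x,z) in Cost is the triangle inequality:
assert all(d[x][y] + d[y][z] >= d[x][z]
           for x, y, z in product(objs, repeat=3))
print("d is a Cost-enriched category (a generalized metric space)")
```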

Where do enriched profunctors fit into this game?

Here: given \(\mathcal{V}\)-enriched categories \(\mathcal{X}\) and \(\mathcal{Y}\), a \(\mathcal{V}\)-enriched profunctor is a clever kind of thing going from \(\mathcal{X}\) to \(\mathcal{Y}\).

If we take objects of \(\mathcal{X}\) to be *requirements* and objects of \(\mathcal{Y}\) to be *resources*, we can use a \(\mathcal{V}\)-enriched profunctor from \(\mathcal{X}\) to \(\mathcal{Y}\) to describe, for each choice of requirements, which resources will fulfill it... or how much it will cost to make them fulfill it... or various other things like that, depending on \(\mathcal{V}\).

On the other hand, we can use a \(\mathcal{V}\)-enriched profunctor going back from \(\mathcal{Y}\) to \(\mathcal{X}\) to describe, for each choice of resources, which requirements they will fulfill... or how much it will cost to make it fulfill them... or various other things like that, depending on \(\mathcal{V}\).

It's all very beautiful and fun. Dive in! Read Section 4.1 and 4.2.1, and maybe 4.2.2 if you're feeling energetic.

Happy Fourth of July! You don't need fireworks for excitement if you've got profunctors.

---

We took the function \(f : \mathbb{N} \to \mathbb{N}\) that doubles any natural number

[ f(a) = 2a . ]

This function has no inverse, since you can't divide an odd number by 2 and get a natural number! But if you did the puzzles, you saw that \(f\) has a "right adjoint" \(g : \mathbb{N} \to \mathbb{N}\). This is defined by the property

[ f(a) \le b \textrm{ if and only if } a \le g(b) . ]

or in other words,

[ 2a \le b \textrm{ if and only if } a \le g(b) . ]

Using our knowledge of fractions, we have

[ 2a \le b \textrm{ if and only if } a \le b/2 ]

but since \(a\) is a natural number, this implies

[ 2a \le b \textrm{ if and only if } a \le \lfloor b/2 \rfloor ]

where we are using the floor function to pick out the largest integer \(\le b/2\). So,

[ g(b) = \lfloor b/2 \rfloor. ]

Moral: the right adjoint \(g \) is the "best approximation from below" to the nonexistent inverse of \(f\).
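If you want to see this concretely, a quick brute-force check in Python (my own sketch) confirms the adjunction property on a finite range:

```python
# Checking that g(b) = b // 2 (floor division) is right adjoint to
# f(a) = 2a on the naturals: f(a) <= b if and only if a <= g(b).
f = lambda a: 2 * a
g = lambda b: b // 2

N = 50  # brute-force check over 0..49
assert all((f(a) <= b) == (a <= g(b))
           for a in range(N) for b in range(N))
print("f(a) <= b  iff  a <= g(b) holds on 0..49")
```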

If you did the puzzles, you also saw that \(f\) has a left adjoint! This is the "best approximation from above" to the nonexistent inverse of \(f\): it gives you the smallest integer that's \(\ge b/2\).

So, while \(f\) has no inverse, it has two "approximate inverses". The left adjoint comes as close as possible to the (perhaps nonexistent) correct answer while making sure to never choose a number that's *too small*. The right adjoint comes as close as possible while making sure to never choose a number that's *too big*.

The two adjoints represent two opposing philosophies of life: *make sure you never ask for too little* and *make sure you never ask for too much*. This is why they're philosophically profound. But the great thing is that they are defined in a completely precise, systematic way that applies to a huge number of situations!

If you need a mnemonic to remember which is which, remember left adjoints are "left-wing" or "liberal" or "generous", while right adjoints are "right-wing" or "conservative" or "cautious".

Let's think a bit more about how we can compute them in general, starting from the basic definition.

Here's the definition again. Suppose we have two preorders \((A,\le_A)\) and \((B,\le_B)\) and a monotone function \(f : A \to B\). Then we say a monotone function \(g: B \to A\) is a **right adjoint of \(f\)** if

[ f(a) \le_B b \textrm{ if and only if } a \le_A g(b) ]

for all \(a \in A\) and \(b \in B\). In this situation we also say that \(f\) is a **left adjoint of \(g\)**.

The names should be easy to remember, since \(f\) shows up on the *left* of the inequality \( f(a) \le_B b \), while \(g\) shows up on the *right* of the inequality \( a \le_A g(b) \). But let's see how they actually work!

Suppose you know \(f : A \to B\) and you're trying to figure out its right adjoint \(g: B \to A\). Say you're trying to figure out \(g(b)\). You don't know what it is, but you know

[ f(a) \le_B b \textrm{ if and only if } a \le_A g(b) ]

So, you go around looking at choices of \(a \in A\). For each one you compute \(f(a)\). If \(f(a) \le_B b\), then you know \(a \le_A g(b)\). So, you need to choose \(g(b)\) to be greater than or equal to every element of this set:

[ \{a \in A : \; f(a) \le_B b \} ]

In other words, \(g(b)\) must be an **upper bound** of this set. But you shouldn't choose \(g(b)\) to be any bigger than it needs to be! After all, you know \(a \le_A g(b)\) *only if* \(f(a) \le_B b\). So,
\(g(b)\) must be a **least upper bound** of the above set.

Note that I'm carefully speaking about *a* least upper bound. Our set could have two different least upper bounds, say \(a\) and \(a'\). Since they're both the least, we must have \(a \le a'\) and \(a' \le a\). This doesn't imply \(a = a'\), in general! But it does if our preorder \(A\) is a "poset". A **poset** is a preorder \((A, \le_A)\) obeying this extra axiom:

[ \textrm{ if } a \le a' \textrm{ and } a' \le a \textrm{ then } a = a' ]

for all \(a,a' \in A\).

In a poset, our desired least upper bound may still not *exist*. But if it does, it's *unique*, and Fong and Spivak write it this way:

[ \bigvee \{a \in A : \; f(a) \le_B b \} ]

The \(\bigvee\) symbol stands for "least upper bound", also known as **supremum** or **join**.

So, here's what we've shown:

If \(f : A \to B\) has a right adjoint \(g : B \to A\) and \(A\) is a poset, this right adjoint is unique and we have a formula for it:

[ g(b) = \bigvee \{a \in A : \; f(a) \le_B b \} . ]

And we can copy our whole line of reasoning and show this:

If \(g : B \to A\) has a left adjoint \(f : A \to B\) and \(B\) is a poset, this left adjoint is unique and we have a formula for it:

[ f(a) = \bigwedge \{b \in B : \; a \le_A g(b) \} . ]

Here the \(\bigwedge\) symbol stands for "greatest lower bound", also known as the **infimum** or **meet**.

We're making progress: we can now actually compute left and right adjoints! Next we'll start looking at more examples.
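On a finite total order the join is just a maximum, so the formula can be computed directly. A sketch of mine with a made-up monotone \(f\):

```python
# Computing a right adjoint from the join formula, on the finite total
# order A = B = {0,...,19}. On a total order, the join is just max.
A = B = range(20)
f = lambda a: min(3 * a, 19)   # a made-up monotone function f : A -> B

def right_adjoint(f, A, B):
    # g(b) = join { a in A : f(a) <= b }
    return {b: max(a for a in A if f(a) <= b) for b in B}

g = right_adjoint(f, A, B)
# Verify the adjunction property f(a) <= b iff a <= g(b):
assert all((f(a) <= b) == (a <= g[b]) for a in A for b in B)
print(g[10])  # 3: the largest a with 3a <= 10
```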

---

Is there perhaps a subtle difference between these two? *Seven Sketches* doesn't seem to mention quotient categories at any point.

---

Their book is free here:

- Brendan Fong and David Spivak,
*Seven Sketches in Compositionality: An Invitation to Applied Category Theory*.

If you're in Boston you can actually go to the course. It's at MIT January 14 - Feb 1, Monday-Friday, 14:00-15:00 in room 4-237.

They taught it last year too, and last year's YouTube videos are on the same YouTube channel.

---

We are writing to let you know about a fantastic opportunity to learn about the emerging interdisciplinary field of applied category theory from some of its leading researchers at the ACT2019 School. It will begin February 18, 2019 and culminate in a meeting in Oxford, July 22–26. Applications are due January 30th! For more details, go here:

---

Hoping to discuss once I've read and digested it some!

---

**Definition.** A **category** \(\mathcal{C}\) consists of:

- a collection of **objects**, and
- a set of **morphisms** \(f : x \to y\) from any object \(x\) to any object \(y\),

such that:

a) each pair of morphisms \(f : x \to y\) and \(g: y \to z\) has a **composite** \(g \circ f : x \to z \) and

b) each object \(x\) has a morphism \(1_x : x \to x\) called its **identity**,

for which

i) the **associative law** holds: \(h \circ (g \circ f) = (h \circ g) \circ f\), and

ii) the **left and right unit laws** hold: \(1_y \circ f = f = f \circ 1_x \) for any morphism \(f: x \to y\).

A category looks like this:

**Definition.** Given categories \(\mathcal{C}\) and \(\mathcal{D}\), a **functor** \(F: \mathcal{C} \to \mathcal{D} \) maps

each object \(x\) of \(\mathcal{C}\) to an object \(F(x)\) of \(\mathcal{D}\),

each morphism \(f: x \to y\) in \(\mathcal{C}\) to a morphism \(F(f) : F(x) \to F(y) \) in \(\mathcal{D}\) ,

in such a way that:

a) it preserves composition: \(F(g \circ f) = F(g) \circ F(f) \), and

b) it preserves identities: \(F(1_x) = 1_{F(x)}\).

A functor looks sort of like this, leaving out some detail:

**Definition.** Given categories \(\mathcal{C},\mathcal{D}\) and functors \(F, G: \mathcal{C} \to \mathcal{D}\), a **natural transformation** \(\alpha : F \to G\) is a choice of morphism

[ \alpha_x : F(x) \to G(x) ]

for each object \(x \in \mathcal{C}\), such that for each morphism \(f : x \to y\) in \(\mathcal{C}\) we have

[ G(f) \alpha_x = \alpha_y F(f) ,]

or in other words, this **naturality square** commutes:

A natural transformation looks sort of like this:

You should also review the free category on a graph if you don't remember that.

Okay, now for a bunch of puzzles! If you're good at this stuff, please let beginners do the easy ones.

**Puzzle 129.** Let \(\mathbf{1}\) be the free category on the graph with one node and no edges:

Let \(\mathbf{2}\) be the free category on the graph with two nodes and one edge from the first node to the second:

How many functors are there from \(\mathbf{1}\) to \(\mathbf{2}\), and how many natural transformations are there between all these functors? It may help to draw a graph with functors \(F : \mathbf{1} \to \mathbf{2} \) as nodes and natural transformations between these as edges.

**Puzzle 130.** Let \(\mathbf{3}\) be the free category on this graph:

How many functors are there from \(\mathbf{1}\) to \(\mathbf{3}\), and how many natural transformations are there between all these functors? Again, it may help to draw a graph showing all these functors and natural transformations.

**Puzzle 131.** How many functors are there from \(\mathbf{2}\) to \(\mathbf{3}\), and how many natural transformations are there between all these functors? Again, it may help to draw a graph.

**Puzzle 132.** For any category \(\mathcal{C}\), what's another name for a functor \(F: \mathbf{1} \to \mathcal{C}\)? There's a simple answer using concepts you've already learned in this course.

**Puzzle 133.** For any category \(\mathcal{C}\), what's another name for a functor \(F: \mathbf{2} \to \mathcal{C}\)? Again, there's a simple answer using concepts you've already learned here.

**Puzzle 134.** For any category \(\mathcal{C}\), what's another name for a natural transformation \(\alpha : F \Rightarrow G\) between functors \(F,G: \mathbf{1} \to \mathcal{C}\)? Yet again there's a simple answer using concepts you've learned here.

**Puzzle 135.** For any category \(\mathcal{C}\), classify all functors \(F : \mathcal{C} \to \mathbf{1} \).

**Puzzle 136.** For any natural number \(n\), we can define a category \(\mathbf{n}\) generalizing the categories \(\mathbf{1},\mathbf{2}\) and \(\mathbf{3}\) above: it's the free category on a graph with nodes \(v_1, \dots, v_n\) and edges \(f_i : v_i \to v_{i+1}\) where \(1 \le i < n\). How many functors are there from \(\mathbf{m}\) to \(\mathbf{n}\)?

**Puzzle 137.** How many natural transformations are there between all the functors from \(\mathbf{m}\) to \(\mathbf{n}\)?

I think Puzzle 137 is the hardest; here are two easy ones to help you recover:

**Puzzle 138.** For any category \(\mathcal{C}\), classify all functors \(F : \mathbf{0} \to \mathcal{C}\).

**Puzzle 139.** For any category \(\mathcal{C}\), classify all functors \(F : \mathcal{C} \to \mathbf{0} \).

Feasibility relations work between preorders, but for simplicity suppose we have two posets \(X\) and \(Y\). We can draw them using Hasse diagrams:

Here an arrow means that one element is less than or equal to another: for example, the arrow \(S \to W\) means that \(S \le W\). But we don't bother to draw all possible inequalities as arrows, just the bare minimum. For example, obviously \(S \le S\) by reflexivity, but we don't bother to draw arrows from each element to itself. Also \(S \le N\) follows from \(S \le E\) and \(E \le N\) by transitivity, but we don't bother to draw arrows that follow from others using transitivity. This reduces clutter.

(Usually in a Hasse diagram we draw bigger elements near the top, but notice that \(e \in Y\) is not bigger than the other elements of \(Y\). In fact it's neither \(\ge\) nor \(\le\) any other elements of \(Y\) - it's just floating in space all by itself. That's perfectly allowed in a poset.)

Now, we saw that a **feasibility relation** from \(X\) to \(Y\) is a special sort of relation from \(X\) to \(Y\). We can think of a relation from \(X\) to \(Y\) as a function \(\Phi\) for which \(\Phi(x,y)\) is either \(\text{true}\) or \(\text{false}\) for each pair of elements \( x \in X, y \in Y\). Then a **feasibility relation** is a relation such that:

1. If \(\Phi(x,y) = \text{true}\) and \(x' \le x\) then \(\Phi(x',y) = \text{true}\).

2. If \(\Phi(x,y) = \text{true}\) and \(y \le y'\) then \(\Phi(x,y') = \text{true}\).

Fong and Spivak have a cute trick for drawing feasibility relations: when they draw a blue dashed arrow from \(x \in X\) to \(y \in Y\) it means \(\Phi(x,y) = \text{true}\). But again, they leave out blue dashed arrows that would follow from rules 1 and 2, to reduce clutter!

Let's do an example:

So, we see \(\Phi(E,b) = \text{true}\). But we can use the two rules to draw further conclusions from this:

Since \(\Phi(E,b) = \text{true}\) and \(S \le E\) then \(\Phi(S,b) = \text{true}\), by rule 1.

Since \(\Phi(S,b) = \text{true}\) and \(b \le d\) then \(\Phi(S,d) = \text{true}\), by rule 2.

and so on.

**Puzzle 171.** Is \(\Phi(E,c) = \text{true}\) ?

**Puzzle 172.** Is \(\Phi(E,e) = \text{true}\)?

I hope you get the idea! We can think of the arrows in our Hasse diagrams as *one-way streets* going between cities in two countries, \(X\) and \(Y\). And we can think of the blue dashed arrows as *one-way plane flights* from cities in \(X\) to cities in \(Y\). Then \(\Phi(x,y) = \text{true}\) if we can get from \(x \in X\) to \(y \in Y\) *using any combination of streets and plane flights!*
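This picture can be computed literally: \(\Phi(x,y)\) is true exactly when there's a path from \(x\) to \(y\) in the combined graph of streets and flights. A small Python sketch of mine (the edges below are made up, not the ones in the figures above):

```python
# Feasibility as reachability: 'streets' are Hasse-diagram arrows inside each
# poset, 'flights' are the blue dashed arrows from X to Y.  Phi(x, y) holds
# iff y is reachable from x.  (A made-up example, not the figure in the text.)
streets = {('S', 'E'), ('E', 'N'), ('S', 'W'), ('W', 'N'),   # inside X
           ('a', 'b'), ('b', 'd'), ('a', 'c'), ('c', 'd')}   # inside Y
flights = {('E', 'b')}                                       # from X to Y
edges = streets | flights

def Phi(x, y):
    """True iff y is reachable from x along streets and flights."""
    seen, stack = {x}, [x]
    while stack:
        node = stack.pop()
        if node == y:
            return True
        for u, v in edges:
            if u == node and v not in seen:
                seen.add(v)
                stack.append(v)
    return False

print(Phi('S', 'd'))   # True: drive S -> E, fly E -> b, drive b -> d
print(Phi('N', 'a'))   # False: no flight leaves N
```

Rules 1 and 2 hold automatically for this \(\Phi\), since prepending a street in \(X\) or appending a street in \(Y\) just lengthens a path.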

That's one reason \(\Phi\) is called a feasibility relation.

What's cool is that rules 1 and 2 can also be expressed by saying

[ \Phi : X^{\text{op}} \times Y \to \mathbf{Bool} ]

is a monotone function. And it's especially cool that we need the '\(\text{op}\)' over the \(X\). Make sure you understand that: the \(\text{op}\) over the \(X\) but not the \(Y\) is why we can drive *to* an airport in \(X\), then take a plane, then drive *from* an airport in \(Y\).

Here are some ways to get lots of feasibility relations. Suppose \(X\) and \(Y\) are preorders.

**Puzzle 173.** Suppose \(f : X \to Y \) is a monotone function from \(X\) to \(Y\). Prove that there is a feasibility relation \(\Phi\) from \(X\) to \(Y\) given by

[ \Phi(x,y) \text{ if and only if } f(x) \le y .]

**Puzzle 174.** Suppose \(g: Y \to X \) is a monotone function from \(Y\) to \(X\). Prove that there is a feasibility relation \(\Psi\) from \(X\) to \(Y\) given by

[ \Psi(x,y) \text{ if and only if } x \le g(y) .]

**Puzzle 175.** Suppose \(f : X \to Y\) and \(g : Y \to X\) are monotone functions, and use them to build feasibility relations \(\Phi\) and \(\Psi\) as in the previous two puzzles. When is

[ \Phi = \Psi ? ]

---

If you are unaware of the game, here's a brief summary. Factorio is an open-ended game where you build factories to harvest raw resources and convert them into manufactured goods. You do this by building structures which convert some resources to other resources (at a certain rate). For example, a stone furnace smelts iron ore into iron plates (and requires coal for power):

[ \mathrm{StoneFurnace}: \ \mathrm{IronOre} + \mathrm{Coal} \rightarrow \mathrm{IronPlate} ]

The wiki page has explanations for all game mechanics. You can probably already see the similarities to resource theories (Chapter 2) and codesign diagrams (Chapter 4).

Like real factories, we want to design efficient/effective factories that produce an end product at a given rate (or produce a given amount, etc.). A simple version of this question is this: how many factories do we need to make iron plates from iron ore and coal if we want to produce iron plates at a given rate \(r\)? As you can find here, the furnace makes 0.28 plates/sec so we would need \(n = \lceil \frac{r}{0.28} \rceil\) furnaces, and supply them with iron ore at a rate \(r\) and coal at a rate \(0.0225 n\) at least. This question is easy to answer with a single building, but gets harder as the factory gets more complicated. So the goal is to come up with a way to describe a factory using category theory, where
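The single-building calculation is easy to script. A sketch using the figures quoted above (which may vary with game version):

```python
import math

# How many stone furnaces do we need to make iron plates at rate r
# (plates/sec)?  Figures from the text: one furnace makes 0.28 plates/sec
# and consumes coal at 0.0225/sec.
def furnaces_needed(r, rate_per_furnace=0.28, coal_per_furnace=0.0225):
    n = math.ceil(r / rate_per_furnace)
    # Returns: furnace count, required ore supply rate, required coal rate.
    return n, r, n * coal_per_furnace

n, ore_rate, coal_rate = furnaces_needed(1.0)
print(n)  # 4 furnaces for one plate per second
```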

- objects are resources/reagents/ingredients: iron ore, plates, etc.
- morphisms are processes which convert resources/combinations of resources to other resources.

But with that category, we also want to be able to ultimately answer questions such as "How many furnaces do I need?", "At what rate do I need to produce a certain raw material to produce an end resource?"

I think there are several routes to an answer (as is often the case in science and math). Initially, I thought about this problem using feasibility relations (Chapter 4), but found it tricky to keep track of the resources. After reading Tai-Danae Bradley's booklet, I realized that the theme of functorial semantics is useful here. Instead of creating one category to capture both how recipes can be combined and the corresponding rates, we can split the problem into two slightly separate problems:

- Construct a category, call it \(\mathbf{Fact}\), which describes how recipes can be combined to produce bigger recipes.
- Construct a second category, call it \(\mathbf{Rate}\), which describes production, consumption, supply and demand rates for the resources and processes. And along with it, a functor \(F\) which maps recipe diagrams in \(\mathbf{Fact}\) to \(\mathbf{Rate}\) which tells us what the rates of production/consumption are.

Hopefully, this is enough info to get started. While I have some results for both problems, I'd rather not spoil the fun of discovering them for anyone else (at least until others have a chance to try it).

---

[ ax^5 + bx^4 + cx^3 + dx^2 + ex + f = 0. ]

He used a trick for converting one view of a problem into another, and then converting the other view back into the original one. By now, we've extracted the essence of this trick and dubbed it a "Galois connection". It's far more general than Galois dreamed.

Remember, a **preorder** is a set \(A\) with a relation \(\le_A\) that's reflexive and transitive. When we're in the mood for being careful, we write a preorder as a pair \( (A,\le_A)\). When we're feeling lazy we'll just call it something like \(A\), and just write \(\le\) for the relation.

**Definition.** Given preorders \((A,\le_A)\) and \((B,\le_B)\), a **monotone function** from \(A\) to \(B\) is a function \(f : A \to B\) such that

[ x \le_A y \textrm{ implies } f(x) \le_B f(y) ]

for all elements \(x,y \in A\),

**Puzzle 10.** There are many examples of monotone maps between preorders. List a few interesting ones!

**Definition.** Given preorders \((A,\le_A)\) and \((B,\le_B)\), a **Galois connection** is a monotone function \(f : A \to B\) together with a monotone function \(g: B \to A\) such that

[ f(a) \le_B b \textrm{ if and only if } a \le_A g(b) ]

for all \(a \in A, b \in B\). In this situation we call \(f\) the **left adjoint** and \(g\) the **right adjoint**.

So, the right adjoint of \(f\) is a way of going back from \(B\) to \(A\) that's related to \(f\) in some way.

**Puzzle 11.** Show that if the monotone function \(f: A \to B\) has an inverse \(g : B \to A \) that is also a monotone function, then \(g\) is *both a right adjoint and a left adjoint* of \(f\).

So, adjoints are some sort of generalization of inverses. But as you'll eventually see, they're much more exciting!

I will spend quite a few lectures describing really interesting examples, and you'll start seeing what Galois connections are good for. It shouldn't be obvious yet, unless you already happen to know or you're some sort of superhuman genius. I just want to get the definition on the table right away.

Here's one easy example to get you started. Let \(\mathbb{N}\) be the set of natural numbers with its usual notion of \(\le\). There's a function \(f : \mathbb{N} \to \mathbb{N}\) with \(f(x) = 2x \). This function doesn't have an inverse. But:

**Puzzle 12.** Find a right adjoint for \(f\): that is, a function \(g : \mathbb{N} \to \mathbb{N}\) with

[ f(m) \le n \textrm{ if and only if } m \le g(n) ]

for all \(m,n \in \mathbb{N}\). How many right adjoints can you find?

**Puzzle 13.** Find a left adjoint for \(f\): that is, a function \(g : \mathbb{N} \to \mathbb{N}\) with

[ g(m) \le n \textrm{ if and only if } m \le f(n) ]

for all \(m,n \in \mathbb{N}\). How many left adjoints can you find?

---

In other words, how would I draw the following diagram:

using the notation where morphisms are arrows? Sure, I can always do this:

where I just say the function accepts a product as the input, but I feel this is just raising another question: how did I end up with \( A \times B \) ? A possible answer could be that we can just specify the product using the universal property and we somehow just "have" it.

But I feel this doesn't get to the gist of the answer. To translate a monoidal product to the usual notation, we'd need an arrow to accept two things as input. Arrows are inherently one-dimensional objects whose inputs are zero-dimensional objects: points. I suspect that using two-dimensional shapes as arrows instead of one-dimensional ones could help alleviate the problem. Which is exactly what string diagrams are, in the end!

Is this sort of reasoning valid? Where can I read more about this? Are there higher-dimensional generalizations of string diagrams?

This seems like an important thing to know but I haven't been able to find good resources. CT is usually introduced as points and arrows between them, but does this mean there's an inherent limitation to arrow notation? It took me quite a while trying to draw products using arrows before I realized this might not be possible.

---

- Tai-Danae Bradley,
*What Is Applied Category Theory?*

**Abstract.** This is a collection of introductory, expository notes on applied category theory, inspired by the 2018 Applied Category Theory Workshop. In these notes we take a leisurely stroll through two themes (functorial semantics and compositionality), two constructions (monoidal categories and decorated cospans) and two examples (chemical reaction networks and natural language processing) within the field.

Check it out!

---

**Adjoint functors.** We've focused a lot on the simplest of categories: preorders. Pairs of adjoint functors between these are also called Galois connections, and we first met them in Lecture 4. In Lecture 6 we saw that a *left* adjoint is a 'best approximation from above' to the possibly nonexistent inverse of a monotone function between preorders, while a *right* adjoint is a 'best approximation from below'. Much later, starting in Lecture 47, we looked at adjoint functors between categories in general. We saw that the pattern persists: left adjoints are 'liberal' while right adjoints are 'conservative'.

**Compact closed categories.** In Lecture 67, in our study of feasibility relations, we began looking at caps and cups. We saw these allow us to describe feedback, or, more generally, the process of 'bending back' an input to some process and turning it into an output - or vice versa. In Lecture 71 we saw that caps and cups exist - and obey the all-important snake equations - in any category of enriched profunctors. And in Lecture 74, we saw this works in any 'compact closed' category. Morphisms in a compact closed category can be drawn as string diagrams, which we can manipulate just like boxes with wires coming in and out! In particular, we can 'bend back' the wires.

These are both great ideas... but amazingly, they are *two aspects of the same idea!*

To see this, start with a pair of adjoint functors:

[ F \colon \mathcal{A} \to \mathcal{B}, \quad G \colon \mathcal{B} \to \mathcal{A} ]

By definition, there's a bijection between these sets:

[ \alpha_{a,b} \colon \mathcal{B}(F(a),b) \to \mathcal{A}(a,G(b)) ]

for any objects \(a\) in \(\mathcal{A}\) and \(b\) in \(\mathcal{B}\). Moreover this is a natural isomorphism.

What can we do with this? Not much until we know some elements of these sets! So let's take \(b = F(a)\):

[ \alpha_{a,F(a)} : \mathcal{B}(F(a),F(a)) \to \mathcal{A}(a,G(F(a))) ]

There's an obvious element of \(\mathcal{B}(F(a),F(a))\), namely the identity \(1_{F(a)}\). Our bijection maps this to some morphism from \(a\) to \(G(F(a))\), which we write as

[ \eta_a \colon a \to G(F(a)) .]

You get such a morphism for any \(a\). And using the fact that \(\alpha\) is *natural*, you can prove these morphisms define a natural transformation

[ \eta \colon 1_{\mathcal{A}} \to G F ]

This is called the **unit**. (I'm sorry; that word is used for too many different things in mathematics.)

Amazingly, *the unit is a lot like a cap.* Why? Remember that when we have an object \(x\) in a compact closed category, the cap is a morphism

[ \cap_x \colon I \to x \otimes x^\ast.]

This resembles the unit, with \(x\) playing the role of \(G\), and \(x^\ast\) playing the role of \(F\). The surprise is that this resemblance is significant, not just superficial!

What about the cup? Well, we can take our bijection

[ \alpha_{a,b} : \mathcal{B}(F(a),b) \to \mathcal{A}(a,G(b)) ]

and let \(a = G(b)\), getting

[ \alpha_{G(b),b} : \mathcal{B}(F(G(b)),b) \to \mathcal{A}(G(b),G(b)) .]

There's an obvious element of \( \mathcal{A}(G(b),G(b))\), namely the identity \(1_{G(b)}\). It must come from some morphism from \(F(G(b))\) to \(b\), which we write as

[ \epsilon_b \colon F(G(b)) \to b, ]

and you can prove such morphisms define a natural transformation

[ \epsilon \colon F G \Rightarrow 1_{\mathcal{B}} ]

called the **counit**. This should remind you of how any object \(x\) in a compact closed category has a cup:

[ \cup_x \colon x^\ast \otimes x \to I .]

So far our evidence for an analogy between the unit and counit and the cap and cup is pretty thin. The real test is the snake equation. If we can prove the unit and counit obey that, something real must be going on!

We can do it. Of course, first we need to *state* the snake equation for the unit and counit. I don't have room to do this here, so watch these short videos by my friends Eugenia Cheng and Simon Willerton:

- The Catsters, Adjunctions 1, Adjunctions 2, Adjunctions 3, Adjunctions 4.

where they call the snake equations the 'triangle equations' - you'll see why. They *start* by defining an 'adjunction' to be a pair of functors \( F \colon \mathcal{A} \to \mathcal{B}\), \( G \colon \mathcal{B} \to \mathcal{A} \) equipped with a unit and counit \(\eta \colon 1_{\mathcal{A}} \to GF \), \( \epsilon \colon FG \to 1_{\mathcal{B}}\) obeying the triangle equations. Then they show this definition is equivalent to the definition of adjoint functors we've been using!
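In the special case of a Galois connection between preorders, the unit and counit shrink to the inequalities \(a \le g(f(a))\) and \(f(g(b)) \le b\), and since there's at most one morphism between any two objects, the triangle equations come for free. For our favorite doubling example this can all be checked directly (my own sketch, not code from the lectures):

```python
# For a Galois connection f ⊣ g, the unit says a <= g(f(a)) and the
# counit says f(g(b)) <= b.  With f(a) = 2a and g(b) = b // 2 on the
# naturals, both hold, and the triangle ('snake') equations become
# honest equations between monotone functions.
f = lambda a: 2 * a
g = lambda b: b // 2

N = range(100)
assert all(a <= g(f(a)) for a in N)       # unit:   a <= (2a) // 2
assert all(f(g(b)) <= b for b in N)       # counit: 2 * (b // 2) <= b
# Triangle equations: composing unit and counit gives back the identity
# on f and on g.
assert all(f(g(f(a))) == f(a) for a in N)
assert all(g(f(g(b))) == g(b) for b in N)
print("unit, counit and triangle equations all check out")
```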

The success of this analogy suggests that maybe we could use string diagrams to work with categories, functors and natural transformations. It's true! To learn how, watch these:

- The Catsters, String diagrams 1, String diagrams 2, String diagrams 3, String diagrams 4.

After setting up string diagrams for category theory, Simon describes adjunctions using string diagrams in part 3. You'll see exactly why the unit is like a cap and the counit like a cup - and you'll see the snake equations pop out at the end! In parts 4 and 5 he uses string diagrams to get *monads* from adjunctions. Monads are very popular in programming languages like Haskell, but this will give a completely different outlook on them.

I should warn you: all this is a *different* idea than using string diagrams to study enriched categories and enriched profunctors, as we'd been doing in Chapter 4. So don't get them mixed up. But everything fits together in the end - as you've probably seen, category theory keeps generalizing everything in order to unify it and eventually simplify it.

There's much more to say; you can see my own take on it by reading this:

- John Baez, *The Tale of n-Categories*.

You'll see how adjunctions and monads and compact closed categories all fit nicely into the framework of *2-categories*. Just as you need categories to work efficiently with set-based mathematics, you need 2-categories to work efficiently with category-based mathematics. These days my students and I have been using 2-categories (and related gadgets like double categories) to study Markov processes, Petri nets and other kinds of networks.

I'm tempted to go on, but this course was meant to give you just a tiny taste of the grand meal of category theory and its many applications, so I will restrain myself and stop here. I've been getting very abstract, but next time I'll give you some suggestions to read more about applications.

]]>I'm trying to finish a bunch of papers. I usually get started writing around noon or 1 pm, and once I get into it it's hard for me to switch gears and write a lecture, especially since I've been trying to go to the gym almost every day at 6.

It's getting harder to write the lectures as the book proceeds and the 'sketches' get more sketchy, leaving me to fill in more details.

I have the feeling that many students have fallen behind the rather quick pace of the lectures, leaving only a small band of energetic followers.

My energy is slowly running out.

As for 2, I don't know if I *should* be filling in so many details. Maybe people would be happier if I gave more of an overview. This will be even more of an issue soon. Fong and Spivak give just a rough definition of 'monoidal category' in Section 4.4.3. The definition is a bit complicated, but it's a fundamental concept in category theory. Should I spend time to fill in the details or not? This is just one example of the decisions we face.

As for 3, it would be great to hear from people who *aren't* in the small band of energetic students who are leaving lots of comments on the lectures and solving lots of puzzles.

As for 4, that's mainly my problem, but I should warn you: I'm considering finishing at the end of Chapter 4, after a good explanation of monoidal categories, compact closed categories, and PERT charts as another application of \(\mathcal{V}\)-enriched profunctors. I've put a lot of energy into this course and hate the idea of quitting before it's done, but it's also tough to wake up each morning and know I need to spend an hour or two writing lectures on top of my papers. This will get a bit tougher on Wednesday when I go to Singapore.

]]>But it looks like I'm doing it in a strange way. Later tonight I'll post Lecture 77, titled "The End". But then *after that* I'll post Lecture 76, part two of "The Grand Synthesis".

Huh?

It just worked out that way. I started writing "The Grand Synthesis" but got distracted by listing various references for further study, and realized they should go in a separate post, which I'm almost done writing. Since it's been way too long since I've posted *anything*, I'll post that first, and then go back to "The Grand Synthesis".

We started by returning to a major theme of Chapter 2: enriched categories. We saw that enriched functors between these were just a special case of something more flexible: enriched profunctors. We saw some concrete applications of these, but also their important theoretical role.

Simply put: moving from functors to profunctors is completely analogous to moving from functions to *matrices!* Thus, introducing profunctors gives category theory some of the advantages of *linear algebra*.

Recall: a function between sets

[ f \colon X \to Y ]

can be seen as a special kind of \(X \times Y\)-shaped matrix

[ \phi \colon X \times Y \to \mathbb{R} ]

namely one where the matrix entry \(\phi(x,y) \) is \(1\) if \(y = f(x)\), and \(0\) otherwise. In short:

[ \phi(x,y) = \delta_{f(x), y} ]

where \(\delta\) is the Kronecker delta. Composing functions then turns out to be a special case of multiplying matrices. Here I'm using \(\mathbb{R}\) because most of you have seen matrices of real numbers, but we could equally well use \(\mathbf{Bool} = \lbrace \texttt{true}, \texttt{false} \rbrace \), and get matrices of truth values, which are just *relations*. Matrix multiplication has the usual composition of relations as a special case!
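To make this concrete, here is a minimal Python sketch (the functions and sets are my own illustrative choices, not from the book) that turns a function into its \(\mathbf{Bool}\)-valued matrix and checks that matrix multiplication over \(\mathbf{Bool}\), with "or" as sum and "and" as product, recovers composition of functions:

```python
# A function f: X -> Y seen as a Bool-valued X-by-Y matrix, and
# Bool matrix multiplication recovering composition of relations.

def function_to_matrix(f, X, Y):
    """phi(x, y) = (f(x) == y): the Bool-valued Kronecker delta."""
    return {(x, y): f(x) == y for x in X for y in Y}

def compose_relations(phi, psi, X, Y, Z):
    """(psi . phi)(x, z) = OR over y of phi(x, y) AND psi(y, z)."""
    return {(x, z): any(phi[(x, y)] and psi[(y, z)] for y in Y)
            for x in X for z in Z}

X = Y = Z = [0, 1, 2]
f = lambda x: min(x + 1, 2)   # a function X -> Y
g = lambda y: 0               # a function Y -> Z

phi = function_to_matrix(f, X, Y)
psi = function_to_matrix(g, Y, Z)
composite = compose_relations(phi, psi, X, Y, Z)

# Multiplying the matrices matches composing the functions:
assert composite == function_to_matrix(lambda x: g(f(x)), X, Z)
```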

Similarly, a \(\mathcal{V}\)-enriched functor

[ F \colon \mathcal{X} \to \mathcal{Y} ]

can be seen as a special kind of \(\mathcal{V}\)-enriched profunctor

[ \Phi \colon \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} ]

namely the 'companion' of \(F\), given by

[ \Phi(x,y) = \mathcal{Y}(F(x), y) .]

This is a fancier relative of the Kronecker delta! For matrices of booleans \( \delta_{f(x), y} = \texttt{true}\) iff \(f(x) = y\), but \( \mathcal{Y}(F(x), y) = \texttt{true}\) iff \(F(x) \le y \).

The analogy is completed by this fact: the formula for composing enriched profunctors is really just matrix multiplication written with less familiar symbols:

[ (\Psi\Phi)(x,z) = \bigvee_{y \in \mathrm{Ob}(\mathcal{Y})} \Phi(x,y) \otimes \Psi(y,z). ]

Here \(\bigvee\) plays the role of a sum and \(\otimes\) plays the role of multiplication.
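As an illustration, take \(\mathcal{V}\) to be the quantale of costs \([0,\infty]\), where \(\bigvee\) becomes \(\min\) and \(\otimes\) becomes \(+\): the composition formula is then min-plus matrix multiplication. A small Python sketch, with made-up cost data:

```python
# Composing Cost-enriched profunctors: the general formula
# (Psi Phi)(x, z) = JOIN over y of Phi(x, y) TENSOR Psi(y, z)
# becomes min-plus matrix multiplication in the quantale of costs.

def compose_cost_profunctors(Phi, Psi, X, Y, Z):
    """(Psi Phi)(x, z) = min over y of Phi(x, y) + Psi(y, z)."""
    return {(x, z): min(Phi[(x, y)] + Psi[(y, z)] for y in Y)
            for x in X for z in Z}

# Illustrative data: costs of getting from one place to another.
X, Y, Z = ['a', 'b'], ['u', 'v'], ['p']
Phi = {('a', 'u'): 1, ('a', 'v'): 4, ('b', 'u'): 2, ('b', 'v'): 0}
Psi = {('u', 'p'): 3, ('v', 'p'): 1}

C = compose_cost_profunctors(Phi, Psi, X, Y, Z)
assert C[('a', 'p')] == 4   # min(1 + 3, 4 + 1)
assert C[('b', 'p')] == 1   # min(2 + 3, 0 + 1)
```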

To clarify this analogy, we studied the category \(\mathbf{Prof}_\mathcal{V}\) with

- \(\mathcal{V}\)-enriched categories as objects

and

- \(\mathcal{V}\)-enriched profunctors as morphisms.

We saw that it was a compact closed category. This means that you can work with morphisms in this category using string diagrams, and you can bend the strings around using caps and cups. In short, \(\mathcal{V}\)-enriched profunctors are like circuits made of components connected by flexible pieces of wire, which we can stick together to form larger circuits.

And while you may not have learned it in your linear algebra class, this 'flexibility' is exactly one of the advantages of linear algebra! For any field \(k\) (for example the real numbers \(\mathbb{R}\)) there is a category \(\mathbf{FinVect}_k\) with

- finite-dimensional vector spaces over \(k\) as objects

and

- linear maps as morphisms.

This category is actually equivalent to the category with finite sets as objects and \(k\)-valued matrices as morphisms, where we compose matrices by matrix multiplication. And like \(\mathbf{Prof}_\mathcal{V}\), the category \(\mathbf{FinVect}_k\) is compact closed, as I mentioned last time. So, while a *function between sets* has a rigidly defined 'input' and 'output' (i.e. domain and codomain), a *linear map between finite-dimensional vector spaces* can be 'bent' or 'turned around' in various ways - as you may have first seen when you learned about the transpose of a matrix.
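In code, this equivalence is just familiar matrix arithmetic: composition of linear maps is matrix multiplication, and the transpose 'turns a map around'. A small NumPy sketch (the matrices here are arbitrary examples of mine):

```python
import numpy as np

# Linear maps as matrices: composition is matrix multiplication,
# and the transpose reverses a map's direction (and the order of
# composition).

A = np.array([[1., 2.], [3., 4.], [5., 6.]])  # a map R^2 -> R^3
B = np.array([[1., 0., 1.]])                  # a map R^3 -> R^1

# The composite B . A is the matrix product:
BA = B @ A

# Transposing 'turns the maps around', reversing the composition order:
assert np.array_equal(BA.T, A.T @ B.T)
```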

There's one other piece of this story whose full significance I haven't quite explained yet.

We've seen pairs of adjoint functors, and we've seen 'duals' in compact closed categories. In fact they're closely related! There is a general concept of adjunction that has both of these as special cases! And adjunctions give something all you functional programmers have probably been wishing I'd talk about all along: monads. So I'll try to explain this next time.

]]>I'll start without feedback. I seem to like examples from business and economics for these purposes:

This describes someone who buys bread and then sells it, perhaps at a higher price. This is described by the composite of two feasibility relations:

[ \mathrm{Purchase} \colon \mathbb{N} \nrightarrow \mathbb{N} ]

and

[ \mathrm{Sell} \colon \mathbb{N} \nrightarrow \mathbb{N} ]

where \(\mathbb{N}\) is the set of natural numbers given its usual ordering \(\le\).

Be careful about which way these feasibility relations go:

\( \mathrm{Purchase}(j,k) = \texttt{true}\) if you can purchase \(j\) loaves of bread for \(k\) dollars.

\( \mathrm{Sell}(i,j) = \texttt{true} \) if you can make \(i\) dollars selling \(j\) loaves of bread.

The variable at right is the 'resource', while the variable at left describes what you can obtain using this resource. For example, in purchasing bread, \( \mathrm{Purchase}(j,k) = \texttt{true}\) if starting with \(k\) dollars as your 'resource' you can buy \(j\) loaves of bread. This is an arbitrary convention, but it's the one in the book!

When we compose these we get a feasibility relation

[ \mathrm{Purchase} \mathrm{Sell} \colon \mathbb{N} \nrightarrow \mathbb{N} ]

(and again, there's an annoying arbitrary choice of convention in the order here). We have

- \( (\mathrm{Purchase}\mathrm{Sell})(i,k) = \texttt{true} \) if, starting with \(k\) dollars, you can purchase bread, sell it, and end up with \(i\) dollars.

I haven't said what the feasibility relations \( \mathrm{Purchase}\) and \( \mathrm{Sell}\) actually *are*: they could be all sorts of things. But let's pick something specific, so you can do some computations with them. Let's keep it very simple: let's say you can buy a loaf of bread for \( \$ 2\) and sell it for \( \$ 3\).

**Puzzle 218.** Write down a formula for the feasibility relation \(\mathrm{Purchase}.\)

**Puzzle 219.** Write down a formula for the feasibility relation \(\mathrm{Sell}.\)

**Puzzle 220.** Compute the composite feasibility relation \( \mathrm{Purchase} \mathrm{Sell}\). (Hint: we discussed composing feasibility relations in Lecture 58.)

That was just a warmup. Now let's introduce feedback!

Now you can reinvest some of the money you make to buy more loaves of bread! That creates a 'feedback loop'. Obviously this changes things dramatically: now you can start with a little money and keep making more. But how does the mathematics work now?

First, you'll notice this feedback loop has a cap at left and a cup at right. I defined these last time.

But this feedback loop also involves two feasibility relations called \(\hat{\textstyle{\sum}}\) and \(\check{\textstyle{\sum}}\). We use the one at left,

[ \hat{\textstyle{\sum}} \colon \mathbb{N} \times \mathbb{N} \nrightarrow \mathbb{N} ,]

to say that the money we reinvest (which loops back), plus the money we take as profit (which comes out of the diagram at left), equals the money we make by selling bread.

We use the one at right,

[ \check{\textstyle{\sum}} \colon \mathbb{N} \nrightarrow \mathbb{N} \times \mathbb{N} ,]

to say that the money we have reinvested (which has looped around), plus the new money we put in (which comes into the diagram at right), equals the money we use to purchase bread.

These two feasibility relations are both built from the monotone function

[ \textstyle{\sum} \colon \mathbb{N} \times \mathbb{N} \to \mathbb{N} ]

defined in the obvious way:

[ \textstyle{\sum}(m,n) = m + n .]

Remember, we saw in Lecture 65 that any monotone function \(F \colon \mathcal{X} \to \mathcal{Y} \) gives two feasibility relations, its 'companion' \(\hat{F} \colon \mathcal{X} \nrightarrow \mathcal{Y}\) and its 'conjoint' \(\check{F} \colon \mathcal{Y} \nrightarrow \mathcal{X}\).
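Here is a small Python sketch of these two constructions for ordinary preorders (\(\mathcal{V} = \mathbf{Bool}\)): the companion \(\hat{F}\) has \(\hat{F}(x,y) = \texttt{true}\) iff \(F(x) \le y\), and the conjoint \(\check{F}\) has \(\check{F}(y,x) = \texttt{true}\) iff \(y \le F(x)\). I'm using a monotone function of my own choosing rather than \(\textstyle{\sum}\), so as not to give away the puzzles below:

```python
# Companions and conjoints of a monotone function between preorders,
# as Bool-valued feasibility relations.

def companion(F, leq):
    """The companion of a monotone function F: F_hat(x, y) iff F(x) <= y."""
    return lambda x, y: leq(F(x), y)

def conjoint(F, leq):
    """The conjoint of a monotone function F: F_check(y, x) iff y <= F(x)."""
    return lambda y, x: leq(y, F(x))

leq = lambda a, b: a <= b   # the usual order on natural numbers
F = lambda n: 2 * n         # a monotone function N -> N (illustrative)

F_hat = companion(F, leq)   # F_hat : N -|-> N
F_check = conjoint(F, leq)  # F_check : N -|-> N

assert F_hat(3, 7) == True      # 2*3 <= 7
assert F_check(5, 3) == True    # 5 <= 2*3
```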

**Puzzle 221.** Give a formula for the feasibility relation \( \hat{\textstyle{\sum}} \colon \mathbb{N} \times \mathbb{N} \nrightarrow \mathbb{N} \). In other words, say when \(\hat{\textstyle{\sum}}(a,b,c) = \texttt{true}\).

**Puzzle 222.** Give a formula for the feasibility relation \( \check{\textstyle{\sum}} \colon \mathbb{N} \nrightarrow \mathbb{N} \times \mathbb{N} \).

And now finally for the big puzzle that all the others were leading up to:

**Puzzle 223.** Give a formula for the feasibility relation described by this co-design diagram:

You can guess the answer, and then you can work it systematically by composing and tensoring the feasibility relations defined by the boxes, the cap and the cup! This is a good way to make sure you understand everything I've been talking about lately.

]]>We've already seen caps and cups for feasibility relations in Lecture 68. We can just generalize what we did.

As usual, let's assume \(\mathcal{V}\) is a commutative quantale, so we get a category \(\mathbf{Prof}_\mathcal{V}\) where:

- objects are \(\mathcal{V}\)-enriched categories

and

- morphisms are \(\mathcal{V}\)-enriched profunctors.

To keep my hands from getting tired, from now on in this lecture I'll simply write 'enriched' when I mean '\(\mathcal{V}\)-enriched'.

Let \(\mathcal{X}\) be an enriched category. Then there's an enriched profunctor called the **cup**

[ \cup_{\mathcal{X}} \colon \mathcal{X}^{\text{op}} \otimes \mathcal{X} \nrightarrow \textbf{1} ]

drawn as follows:

To define it, remember that enriched profunctors \(\mathcal{X}^{\text{op}} \otimes \mathcal{X} \nrightarrow \textbf{1}\) are really just enriched functors \( (\mathcal{X}^{\text{op}} \otimes \mathcal{X})^\text{op} \otimes \textbf{1} \to \mathcal{V} \). Also, remember that \(\mathcal{X}\) comes with a **hom-functor**, which is the enriched functor

[ \mathrm{hom} \colon \mathcal{X}^{\text{op}} \otimes \mathcal{X} \to \mathcal{V} ]

sending any object \( (x,x') \) to \( \mathcal{X}(x,x')\). So, we define \(\cup_\mathcal{X}\) to be the composite

[ (\mathcal{X}^{\text{op}} \otimes \mathcal{X})^\text{op} \otimes \textbf{1} \stackrel{\sim}{\to} (\mathcal{X}^{\text{op}} \otimes \mathcal{X})^\text{op} \stackrel{\sim}{\to} (\mathcal{X}^{\text{op}})^\text{op} \otimes \mathcal{X}^{\text{op}} \stackrel{\sim}{\to} \mathcal{X} \otimes \mathcal{X}^{\text{op}} \stackrel{\sim}{\to} \mathcal{X}^{\text{op}} \otimes \mathcal{X} \stackrel{\text{hom}}{\to} \mathcal{V} ]

where the arrows with squiggles over them are isomorphisms, most of which I explained last time.

There's also an enriched profunctor called the **cap**

[ \cap_\mathcal{X} \colon \textbf{1} \nrightarrow \mathcal{X} \otimes \mathcal{X}^{\text{op}} ]

drawn like this:

To define this, remember that enriched profunctors \(\textbf{1} \nrightarrow \mathcal{X} \otimes \mathcal{X}^{\text{op}} \) are enriched functors \(\textbf{1}^{\text{op}} \otimes (\mathcal{X} \otimes \mathcal{X}^{\text{op}}) \to \mathcal{V} \). But \(\textbf{1}^{\text{op}} = \textbf{1}\), so we define the cap to be the composite

[ \textbf{1}^{\text{op}} \otimes (\mathcal{X} \otimes \mathcal{X}^{\text{op}}) = \textbf{1}\otimes (\mathcal{X} \otimes \mathcal{X}^{\text{op}}) \stackrel{\sim}{\to} \mathcal{X} \otimes \mathcal{X}^{\text{op}} \stackrel{\sim}{\to} \mathcal{X}^{\text{op}} \otimes \mathcal{X} \stackrel{\text{hom}}{\to} \mathcal{V} . ]

As we've already seen for feasibility relations, the cap and cup obey the **snake equations**, also known as **zig-zag equations** or **yanking equations**. (Everyone likes making up their own poetic names for these equations.) The first snake equation says

In other words, the composite

is the identity, where the arrows with squiggles over them are obvious isomorphisms that I described last time. The second snake equation says

In other words, the composite

is the identity.

Last time I sketched how \(\mathbf{Prof}_{\mathcal{V}}\) is a monoidal category, meaning one with a tensor product obeying certain rules. It's also symmetric monoidal, meaning it has isomorphisms

[ \sigma_{\mathcal{X}, \mathcal{Y}} \colon \mathcal{X} \otimes \mathcal{Y} \nrightarrow \mathcal{Y} \otimes \mathcal{X} ]

obeying certain rules. These let us switch the order of objects in a tensor product: in terms of diagrams, it means wires can cross each other! And finally, when every object in a symmetric monoidal category has a cap and cup obeying the snake equations, we say that category is compact closed. I will define all these concepts more carefully soon. For now I just want you to know that

and also that \(\mathbf{Prof}_{\mathcal{V}}\) is an example of a compact closed category. If you're impatient to learn more, try Section 4.4 of the book.

**Puzzle 227.** Prove the snake equations in \(\mathbf{Prof}_{\mathcal{V}}\).

For this, I should state the snake equations more precisely! The first one says this composite:

is the identity, where \(\alpha\) is the associator and \(\lambda, \rho\) are the left and right unitors, defined last time. The second snake equation says this composite:

is the identity.

]]>[ f : X \to \mathbb{R} ]

assigning each resource its price. Often we have

[ f(x) + f(x') = f(x + x') ]

and this makes pricing very simple: to compute the price of a bunch of resources you just add up their prices.

On the other hand, there are sometimes sales where you can buy the first few items of some kind at a discount, but to keep you from buying too many, the price per item goes up when you buy more. In this case we have

[ f(x) + f(x') \le f(x + x') .]

The whole costs more than the sum of its parts!

More common are discounts for buying goods in bulk, to encourage you to buy more. For example, "the second gallon of milk is half-price". In this case we have

[ f(x) + f(x') \ge f(x + x') .]

Now the whole costs *less* than the sum of its parts! For more, see:

- Wikipedia, Economies of scale.

Now, last time I showered you with free goods you hadn't asked for: various flavors of "monoidal monotones". These are maps between monoidal preorders that behave in various ways. This was not abstraction for its own sake. Among other things, they can be used to model the three situations I just described. The first situation, where \( f(x) + f(x') = f(x + x')\), happens when \(f\) is "strict monoidal". The second, where \(f(x) + f(x') \le f(x + x')\), happens when \(f\) is "lax monoidal". And the third, where \( f(x) + f(x') \ge f(x + x')\), happens when \(f\) is "oplax monoidal".

But these different flavors also tend to show up when we have Galois connections - that is, monotone functions with adjoints. Last time we looked at the function

[ i : \mathbb{Z} \to \mathbb{R} ]

which sends each integer to itself, *regarded as a real number*. This is a strict monoidal monotone with respect to addition. In particular,

[ i(x) + i(x') = i(x + x') .]

This function \(i\) has a right adjoint

[ \lfloor \cdot \rfloor : \mathbb{R} \to \mathbb{Z} , ]

the **floor function**, which is lax monoidal monotone. In particular,

[ \lfloor x \rfloor + \lfloor x' \rfloor \le \lfloor x + x' \rfloor .]

For example, \( \lfloor 0.7 \rfloor + \lfloor 0.4 \rfloor = 0 \) is less than \( \lfloor 0.7 + 0.4 \rfloor = 1 \). On the other hand, \(i\) has a left adjoint

[ \lceil \cdot \rceil : \mathbb{R} \to \mathbb{Z} , ]

the **ceiling function**, which is oplax monoidal monotone. In particular,

[ \lceil x \rceil + \lceil x' \rceil \ge \lceil x + x' \rceil .]

For example, \( \lceil 0.7 \rceil + \lceil 0.2 \rceil = 2 \) is greater than \( \lceil 0.7 + 0.2 \rceil= 1 \).
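These inequalities are easy to spot-check numerically. A quick Python sanity check on random pairs of reals (a sanity check, of course, not a proof):

```python
import math
import random

# Spot-checking the lax inequality for floor and the oplax inequality
# for ceiling on random pairs of reals.

random.seed(0)
for _ in range(1000):
    x, xp = random.uniform(-10, 10), random.uniform(-10, 10)
    # floor is lax monoidal monotone:
    assert math.floor(x) + math.floor(xp) <= math.floor(x + xp)
    # ceiling is oplax monoidal monotone:
    assert math.ceil(x) + math.ceil(xp) >= math.ceil(x + xp)
```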

So, your prices at the grocery store would be lax monoidal if the clerk rounded your bill down to the nearest dollar... but oplax monoidal if the clerk rounded it up!

Is this a coincidence, this relation between right/left adjoints and lax/oplax monoidal monotones? No! In fact there's a very general, beautiful pattern at work here.

In everything that follows, \(X\) and \(Y\) will be monoidal preorders. I'll use the same symbols for \(\le\), \(\otimes\) and \(I\) in both \(X\) and \(Y\). This reduces clutter, and the context makes everything unambiguous. Remember that a monotone map \(f : X \to Y\) is a **strict** monoidal monotone if

[ f(x) \otimes f(x') = f(x \otimes x') \textrm{ and } I = f(I). ]

It's a **lax** monoidal monotone if

[ f(x) \otimes f(x') \le f(x \otimes x') \textrm{ and } I \le f(I), ]

and it's an **oplax** monoidal monotone if

[ f(x) \otimes f(x') \ge f(x \otimes x') \textrm{ and } I \ge f(I). ]

**Theorem.** Suppose \(f : X \to Y\) is a strict monoidal monotone and \(g: Y \to X\) is a right adjoint of \(f\). Then \(g\) is a lax monoidal monotone.

**Proof.** Since \(g\) is a right adjoint of \(f\) it is, by definition, a monotone function. Thus, to show \(g\) is a lax monoidal monotone, we only need to prove a couple of inequalities. The first of these is

[ g(y) \otimes g(y') \le g(y \otimes y') ]

for all \(y,y' \in Y\). Since \(g\) is a right adjoint of \(f\), this is equivalent to

[ f(g(y) \otimes g(y')) \le y \otimes y'. ]

So, let's show this!

Since \(f\) is strict monoidal we have

[ f(g(y) \otimes g(y')) = f(g(y)) \, \otimes \, f(g(y')) .]

Since \(g\) is a right adjoint of \(f\) we have

[ f(g(y)) \le y \textrm{ if and only if } g(y) \le g(y) ]

and the latter is true so indeed we have \(f(g(y)) \le y\), and by the same logic \(f(g(y')) \le y'\). By the monoidal preorder law this implies

[ f(g(y)) \, \otimes \, f(g(y')) \le y \otimes y'. ]

Putting all the pieces together we get

[ f(g(y) \otimes g(y')) \; = \; f(g(y)) \, \otimes\, f(g(y')) \; \le \; y \otimes y' ]

which is what we needed to show.

The second inequality we need to prove is

[ I \le g(I) .]

Since \(g\) is a right adjoint of \(f\) this is equivalent to \(f(I) \le I\), and since \(f\) is strict monoidal we actually have \(f(I) = I\). So, we're done! \( \qquad \blacksquare \)

This theorem has a partner, whose proof is very similar: just turn around a lot of inequalities!

**Theorem.** Suppose \(g: Y \to X\) is a strict monoidal monotone and \(f: X \to Y\) is a left adjoint of \(g\). Then \(f\) is an oplax monoidal monotone.

But even better, both these theorems are special cases of a beautifully symmetrical super-theorem, which is proved in basically the same way!

**Theorem.** Suppose the monotone function \(f: X \to Y\) is a left adjoint to the monotone function \(g: Y \to X\). Then \(f\) is oplax monoidal if and only if \(g\) is lax monoidal.

You can see that this implies the other two results, because a strict monoidal monotone is both lax and oplax. (In fact, a bit more generally, any monotone function that's both lax and oplax monoidal is called a **strong** monoidal monotone. So, in our first two theorems today, we could have replaced the word "strict" by "strong", and they'd still be true.)

**Puzzle 83.** Prove the super-theorem, preferably without "cheating" and looking at the proof of the earlier one. The trick, as so often in math, is simply to write down the facts you're given, and also the facts you want to prove, and play around with the former until you get the latter.

A **compact closed category** is a symmetric monoidal category \(\mathcal{C}\) where every object \(x\) has a **dual** \(x^\ast\) equipped with two morphisms called the **cap** or **unit**

[ \cap_x \colon I \to x \otimes x^\ast ]

and the **cup** or **counit**

[ \cup_x \colon x^\ast \otimes x \to I ]

obeying two equations called the **snake equations**.

You've seen these equations a couple times before! In Lecture 68 I was telling you about feedback in co-design diagrams: caps and cups let you describe feedback. I was secretly telling you that the category of feasibility relations was a compact closed category. In Lecture 71 I came back to this theme at a higher level of generality. Feasibility relations are just \(\mathcal{V}\)-enriched profunctors for \(\mathcal{V} = \mathbf{Bool}\), and in Lecture 71 I was secretly telling you that the category of \(\mathcal{V}\)-profunctors is always a compact closed category! But now I'm finally telling you what a compact closed category is in general.

The snake equations are easiest to remember using string diagrams. In a compact closed category we draw arrows on the strings in these diagrams as well as labeling them by objects. For any object \(x\), a left-pointing wire labelled \(x\) means the same as a right-pointing wire labelled \(x^\ast\). Thus, we draw the cap as

This picture has no wires coming in at left, which says that the cap is a morphism from \(I\), the unit object of our symmetric monoidal category. It has two wires going out at right: the top wire, with a right-pointing arrow, stands for \(x\), while the bottom wire, with a left-pointing arrow, stands for \(x^\ast\). Together these tell us that the cap is a morphism to \(x \otimes x^\ast\).

Similarly, we draw the cup as

and this diagram, to the trained eye, says that the cup is a morphism from \(x^\ast \otimes x \) to \( I \).

In this language, the snake equations simply say that we can straighten out a 'zig-zag':

or a 'zag-zig':

If we don't use string diagrams, these equations look more complicated. The first says that this composite morphism is the identity:

where the unnamed isomorphisms are the inverse of the left unitor, the associator and the right unitor. The second says that this composite is the identity:

where the unnamed isomorphisms are the inverse of the right unitor, the inverse of the associator, and the left unitor. These are a lot less intuitive, I think! One advantage of string diagrams is that they hide associators and unitors, yet let us recover them if we really need them.

If you've faithfully done all the puzzles so far, you've proved the following grand result, which summarizes a lot of this chapter:

**Theorem.** Suppose \(\mathcal{V}\) is a commutative quantale. Then the category \(\mathbf{Prof}_{\mathcal{V}}\) with

- \(\mathcal{V}\)-enriched categories as objects

and

- \(\mathcal{V}\)-enriched profunctors as morphisms

is compact closed, where tensor product, associator and unitors are defined as in Lecture 70, and the dual, caps and cups are defined as in Lecture 71.

But the most famous example of a compact closed category comes from linear algebra! It's the category \(\mathbf{FinVect}_k\), with

- finite-dimensional vector spaces over the field \(k\) as objects

and

- linear maps as morphisms.

If you don't know about fields, you may still know about real vector spaces: that's the case \(k = \mathbb{R}\). There's a tensor product \(V \otimes W\) of vector spaces \(V\) and \(W\), which has dimension equal to the dimension of \(V\) times the dimension of \(W\). And there's a dual \(V^\ast\) of a vector space \(V\), which is just the space of all linear maps from \(V\) to \(k\).

Tensor products and dual vector spaces are very important in linear algebra. My main point here is that profunctors work a lot like linear maps: both are morphisms in some compact closed category! Indeed, the introduction of profunctors into category theory was very much like the introduction of linear algebra into ordinary set-based mathematics. I've tried to hint at this several times: the ultimate reason is that composing profunctors is a lot like multiplying matrices! This is easiest to see for the \(\mathcal{V}\)-enriched profunctors we've been dealing with. Composing these:

[ (\Psi\Phi)(x,z) = \bigvee_{y \in \mathrm{Ob}(\mathcal{Y})} \Phi(x,y) \otimes \Psi(y,z)]

looks just like matrix multiplication, with \(\bigvee\) replacing addition in the field \(k\) and \(\otimes\) replacing multiplication. So it's not surprising that this analogy extends, with the opposite of a \(\mathcal{V}\)-enriched category acting like a dual vector space.

If you're comfortable with tensor products and duals of vector spaces, you may want to solidify your understanding of compact closed categories by doing this puzzle:

**Puzzle 283.** Guess what the cap and cup

$$ \cap_V \colon k \to V \otimes V^\ast, \qquad \cup_V \colon V^\ast \otimes V \to k $$

are for a finite-dimensional vector space \(V\), and check your guess by proving the snake equations.

Here are some good things to know about compact closed categories:

**Puzzle 284.** Using the cap and cup, any morphism \(f \colon x \to y \) in a compact closed category gives rise to a morphism from \(y^\ast\) to \(x^\ast\). This amounts to 'turning \(f\) around' in a certain sense, and we call this morphism \(f^\ast \colon y^\ast \to x^\ast \). Write down a formula for \(f^\ast\) and also draw it as a string diagram.

**Puzzle 285.** Show that \( (fg)^\ast = g^\ast f^\ast \) for any composable morphisms \(f\) and \(g\), and show that \( (1_x)^\ast = 1_x \) for any object \(x\).

**Puzzle 286.** What is a slick way to state the result in Puzzle 285?

**Puzzle 287.** Show that if \(x\) is an object in a compact closed category, \( (x^\ast)^\ast\) is isomorphic to \(x\).

If we have morphisms

[ \Phi \colon a \to c \otimes d ]

[ \Psi \colon d \otimes b \to e \otimes f ]

[ \Theta \colon c \otimes e \to g ]

then by a combination of composing and tensoring we can cook up a morphism like this:

which goes from \(a \otimes b\) to \(g \otimes f\). This sort of picture is called a **string diagram**, and we've seen plenty of them already.

We don't *need* to use string diagrams to work with monoidal categories:

**Puzzle 281.** Describe the morphism in the above string diagram using a more traditional formula involving composition \(\circ\), tensoring \(\otimes\), the associator \(\alpha\), and the left and right unitors \(\lambda\) and \(\rho\).

However, they make it a lot easier and more intuitive!

An interesting feature of string diagrams is that they hide the associator and the left and right unitors. You can't easily see them in these diagrams! However, when you turn a string diagram into a more traditional formula as in Puzzle 281, you'll see that you need to include associators and unitors to get a formula that makes sense.

This may seem strange: if we need the associators and unitors in our formulas, why don't we need them in our diagrams?

The ultimate answer is 'Mac Lane's strictification theorem'. This says that every monoidal category is equivalent to one where the associator and unitors are *identity* morphisms. So, we can take any monoidal category and replace it by an equivalent one where the tensor product is 'strictly' associative, not just up to isomorphism:

[ (x \otimes y) \otimes z = x \otimes (y \otimes z) ]

and similarly, the left and right unit laws hold strictly:

[ I \otimes x = x = x \otimes I ]

This lets us stop worrying about associators and unitors. String diagrams are secretly doing this for us!

Often people use Mac Lane's strictification theorem in a loose way, simply using it as an excuse to act like monoidal categories are all strict. That's actually not so bad, if you're not too obsessed with precision.

To state Mac Lane's strictification theorem precisely, we first need to say exactly what it means for two monoidal categories to be 'equivalent'. For this we need to define a 'monoidal equivalence' between monoidal categories. Then, we define a **strict** monoidal category to be one where the associator and unitors are identity morphisms. Mac Lane's theorem then says that every monoidal category is monoidally equivalent to a strict one.

If you're curious about the details, try my notes:

All the necessary terms are defined, leading up to a precise statement of Mac Lane's strictification theorem at the very end. But this theorem takes quite a lot of work to prove, and I don't do that! You can see a sketch of the proof here:

- John Armstrong, The "strictification" theorem.

But there's more! If all we have is a monoidal category, the strings in our diagrams aren't allowed to cross. But last time I mentioned symmetric monoidal categories, where we have a natural isomorphism called the **symmetry**

[ \sigma_{x,y} \colon x \otimes y \to y \otimes x ]

that allows us to switch objects, obeying various rules. This lets us make sense of string diagrams where wires cross, like this:

**Puzzle 282.** Describe the morphism in the above string diagram with a formula involving composition \(\circ\), tensoring \(\otimes\), the associator \(\alpha\), the left and right unitors \(\lambda,\rho\), and the symmetry \(\sigma\).

There is a version of Mac Lane's strictification theorem for symmetric monoidal categories, too! You can find it stated in my notes. This lets us replace any symmetric monoidal category by a **strict** one, where the associator and unitors *but not the symmetry* are identity morphisms.

We really need the symmetry: it cannot in general be swept under the rug. That should be sort of obvious: for example, switching two numbers in an ordered pair really *does* something, we can't just say it's the identity.

Again, please ask questions! I'm sketching some ideas that would take considerably longer to explain in full detail.

]]>I was an art student and, like all art students, I was encouraged to believe that there were a few great figures like Picasso and Kandinsky, Rembrandt and Giotto and so on who sort-of appeared out of nowhere and produced artistic revolution.

As I looked at art more and more, I discovered that that wasn’t really a true picture.

What really happened was that there were sometimes very fertile scenes involving lots and lots of people – some of them artists, some of them collectors, some of them curators, thinkers, theorists, people who were fashionable and knew what the hip things were – all sorts of people who created a kind of ecology of talent. And out of that ecology arose some wonderful work.

The period that I was particularly interested in, ’round about the Russian revolution, shows this extremely well. So I thought that originally those few individuals who’d survived in history – in the sort-of “Great Man” theory of history – they were called “geniuses”. But what I thought was interesting was the fact that they all came out of a scene that was very fertile and very intelligent.

So I came up with this word “scenius” – and scenius is the intelligence of a whole... operation or group of people. And I think that’s a more useful way to think about culture, actually. I think that – let’s forget the idea of “genius” for a little while, let’s think about the whole ecology of ideas that give rise to good new thoughts and good new work.

Maybe we can also speak of a categorical "scene".

]]>I'm writing this partially to remind myself to make sure to tell all of you about this book when it comes out!

]]>Last time we saw that for each preorder \(X\) there's a feasibility relation called the **cup**

[ \cup_X \colon X^{\text{op}} \times X \nrightarrow \textbf{1} ]

which we draw as follows:

To define the cup, we remembered that feasibility relations \(X^{\text{op}} \times X \nrightarrow \textbf{1}\) are monotone functions \( (X^{\text{op}} \times X)^\text{op} \times \textbf{1} \to \mathbf{Bool} \), and we defined \(\cup_X\) to be the composite

[ (X^{\text{op}} \times X)^\text{op} \times \textbf{1} \stackrel{\sim}{\to} (X^{\text{op}} \times X)^\text{op} \stackrel{\sim}{\to} (X^{\text{op}})^\text{op} \times X^{\text{op}} \stackrel{\sim}{\to} X \times X^{\text{op}} \stackrel{\sim}{\to} X^{\text{op}} \times X \stackrel{\text{hom}}{\to} \textbf{Bool} ]

where all the arrows with little squiggles over them are isomorphisms - most of them discussed in Puzzles 213-215. In short, the cup is the hom-functor \(\text{hom} \colon X^{\text{op}} \times X \to \mathbf{Bool}\) in disguise!

The cup's partner is called the **cap**

[ \cap_X \colon \textbf{1} \nrightarrow X \times X^{\text{op}} ]

and we draw it like this:

The cap is also the hom-functor in disguise! To define it, remember that feasibility relations \(\textbf{1} \nrightarrow X \times X^{\text{op}} \) are monotone functions \(\textbf{1}^{\text{op}} \times (X \times X^{\text{op}}) \to \mathbf{Bool}\). But \(\textbf{1}^{\text{op}} = \textbf{1}\), so we define the cap to be the composite

[ \textbf{1}^{\text{op}} \times (X \times X^{\text{op}}) = \textbf{1}\times (X \times X^{\text{op}}) \stackrel{\sim}{\to} X \times X^{\text{op}} \stackrel{\sim}{\to} X^{\text{op}} \times X \stackrel{\text{hom}}{\to} \textbf{Bool} . ]

One great thing about the cup and cap is that they let us treat the edges in our co-design diagrams as flexible wires. In particular, they obey the **snake equations**, also known as the **zig-zag identities**. These say that we can pull taut a zig-zag of wire.

The first snake equation says

In other words,

[ (1_X \times \cup_X) (\cap_X \times 1_X) = 1_X .]

Please study the diagram and the corresponding equation very carefully to make sure you see how each part of one corresponds to a part of the other! And please ask questions if there's anything puzzling. It takes a while to get used to these things.

The second snake equation says

In other words,

[ (\cup_X \times 1_{X^{\text{op}}}) (1_{X^{\text{op}}} \times \cap_X) = 1_{X^{\text{op}}} .]

A great exercise, to make sure you understand what's going on, is to prove the snake equations. You just need to remember all the definitions, use them to compute the left-hand side of each identity, and show it equals the much simpler right-hand side.

**Puzzle 217.** Prove the snake equations.

In fact some of you have already started doing this!
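Before (or after) proving them, it can be reassuring to check the first snake equation concretely on a small example. Here's a Python sketch of my own (not part of the lecture): it verifies the equation on the total order \(\{0,1,2\}\), with the directions of the cup and cap read off from the swap-then-hom composites above (that direction is a convention choice; with the opposite one the equation fails):

```python
from itertools import product

# A small preorder: the total order X = {0, 1, 2} with the usual <=.
X = [0, 1, 2]

def hom(a, b):                 # hom : X^op x X -> Bool
    return a <= b

def identity(a, b):            # the identity profunctor 1_X is hom in disguise
    return hom(a, b)

# Conventions read off from the defining composites (a swap followed by hom):
def cap(p, q):                 # cap_X : 1 -|-> X x X^op
    return hom(q, p)

def cup(q, r):                 # cup_X : X^op x X -|-> 1
    return hom(r, q)

# Feasibility relations compose by "there exists a middle object".  The
# left-hand side of the first snake equation relates a to b when some
# (p, q, r) satisfies (cap x 1)(a, ((p,q),r)) and (1 x cup)((p,(q,r)), b):
def snake_lhs(a, b):
    return any(cap(p, q) and identity(a, r) and identity(p, b) and cup(q, r)
               for p, q, r in product(X, repeat=3))

# First snake equation: (1_X x cup_X)(cap_X x 1_X) = 1_X.
assert all(snake_lhs(a, b) == identity(a, b) for a, b in product(X, repeat=2))
```

Of course a finite check is no substitute for the proof Puzzle 217 asks for, but it forces you to unfold all the definitions, which is most of the work.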

]]>We've seen that these parts can be stuck together in series, by 'composition':

and in parallel, using 'tensoring':

One reason I wanted to show you this is for you to practice reasoning with diagrams in situations where you can both compose and tensor morphisms. Examples include:

functions between sets

linear maps between vector spaces

electrical circuits

PERT charts

the example we spent a lot of time on: feasibility relations

or more generally, \(\mathcal{V}\)-enriched profunctors.

The kind of structure where you can compose and tensor morphisms is called a 'monoidal category'. This is a category \(\mathcal{C}\) together with:

- a functor \(\otimes \colon \mathcal{C} \times \mathcal{C} \to \mathcal{C} \) called **tensoring**,

- an object \(I \in \mathcal{C}\) called the **unit** for tensoring,

- a natural isomorphism called the **associator**

[ \alpha_{X,Y,Z} \colon (X \otimes Y) \otimes Z \stackrel{\sim}{\longrightarrow} X \otimes (Y \otimes Z) ]

- a natural isomorphism called the **left unitor**

[ \lambda_X \colon I \otimes X \stackrel{\sim}{\longrightarrow} X ]

- and a natural isomorphism called the **right unitor**

[ \rho_X \colon X \otimes I \stackrel{\sim}{\longrightarrow} X ]

- such that the associator and unitors obey enough equations so that all diagrams built using tensoring and these isomorphisms commute.

We need the associator and unitors because in examples it's usually *not* true that \( (X \otimes Y) \otimes Z\) is *equal* to \(X \otimes (Y \otimes Z)\), etc. They're just isomorphic! But we want the associator and unitors to obey equations because they're just doing boring stuff like moving parentheses around, and if we use them in two different ways to go from, say,

[ ((W \otimes X) \otimes Y) \otimes Z ]

to

[ W \otimes (X \otimes (Y \otimes Z)) ]

we want those two ways to agree! Otherwise life would be too confusing.

If you want to see exactly what equations the associator and unitors should obey, read this:

- John Baez, Some definitions everyone should know.

But beware: these equations, discovered by Mac Lane in 1963, are a bit scary at first! They say that certain diagrams built using tensoring, the associator and unitors commute, and the point is that Mac Lane proved a theorem saying these are enough to imply that *all* diagrams of this sort commute.

This result is called 'Mac Lane's coherence theorem'. It's rather subtle; if you're curious about the details try this:

- Peter Hines, Reconsidering MacLane: the foundations of categorical coherence, October 2013.

Note: a monoidal category need not have a natural isomorphism

[ \beta_{X,Y} \colon X \otimes Y \stackrel{\sim}{\longrightarrow} Y \otimes X .]

When we have that, obeying some more equations, we have a 'braided monoidal category'. You can see the details in my notes. And when our braided monoidal category has the feature that braiding twice:

[ X\otimes Y \stackrel{\beta_{X,Y}}{\longrightarrow } Y \otimes X \stackrel{\beta_{Y,X}}{\longrightarrow } X \otimes Y ]

is the identity, we have a 'symmetric monoidal category'. In this case we call the braiding a **symmetry** and often write it as

[ \sigma_{X,Y} \colon X \otimes Y \stackrel{\sim}{\longrightarrow} Y \otimes X ]

since the letter \(\sigma\) should make you think 'symmetry'.

All the examples of monoidal categories I listed are actually symmetric monoidal - *unless* you think of circuit diagrams as having wires in 3d space that can actually get tangled up with each other, in which case they are morphisms in a braided monoidal category.

**Puzzle 278.** Use the definition of monoidal category to prove the **interchange law**

[ (f \otimes g) (f' \otimes g') = ff' \otimes gg' ]

whenever \(f,g,f',g'\) are morphisms making either side of the equation well-defined. (Hint: you only need the part of the definition I explained in my lecture, not the scary diagrams I didn't show you.)
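To get a feel for what the interchange law says, here's a hedged Python sketch (my own illustration, not a solution to the puzzle) checking it in \((\mathbf{Set}, \times)\), where tensoring acts componentwise on pairs; the sample morphisms `f`, `fp`, `g`, `gp` are arbitrary choices of mine:

```python
def tensor(f, g):
    """f ⊗ g in (Set, x): act componentwise on pairs."""
    return lambda p: (f(p[0]), g(p[1]))

def compose(f, g):
    """f ∘ g: apply g first, then f."""
    return lambda x: f(g(x))

f  = lambda n: n + 1           # arbitrary sample morphisms
fp = lambda n: 2 * n
g  = lambda s: s + "!"
gp = lambda s: s.upper()

lhs = compose(tensor(f, g), tensor(fp, gp))    # (f ⊗ g)(f' ⊗ g')
rhs = tensor(compose(f, fp), compose(g, gp))   # ff' ⊗ gg'

assert lhs((3, "hi")) == rhs((3, "hi")) == (7, "HI!")
```

Checking one input is of course not a proof; the point of Puzzle 278 is that the law follows purely from \(\otimes\) being a *functor*.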

**Puzzle 279.** Draw a picture illustrating this equation.

**Puzzle 280.** Suppose \(f : I \to I\) and \(g : I \to I\) are morphisms in a monoidal category going from the unit object to itself. Show that

[ fg = gf .]

]]>It's free to read, free to publish in, and it's about building big things from smaller parts. Here's the top of the journal's home page right now:

Here's the official announcement:

]]>We are pleased to announce the launch of Compositionality, a new diamond open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline. Topics may concern foundational structures, an organizing principle, or a powerful tool. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition. To learn more about the scope and editorial policies of the journal, please visit our website at www.compositionality-journal.org.

Compositionality is the culmination of a long-running discussion by many members of the extended category theory community, and the editorial policies, look, and mission of the journal have yet to be finalized. We would love to get your feedback about our ideas on the forum we have established for this purpose: http://reddit.com/r/compositionality

Lastly, the journal is currently receiving applications to serve on the editorial board; submissions are due May 31 and will be evaluated by the members of our steering board: John Baez, Bob Coecke, Kathryn Hess, Steve Lack, and Valeria de Paiva.

https://tinyurl.com/call-for-editors

We will announce a call for submissions in mid-June.

We're looking forward to your ideas and submissions!

Best regards,

Brendan Fong, Nina Otter, and Joshua Tan