
We humans often ignore things we're not interested in. This is often essential. But sometimes it causes trouble. For example, global warming is caused by a civilization that has conveniently ignored a colorless, odorless gas called carbon dioxide that's a byproduct of burning carbon.

To take a humbler example, consider this wiring diagram we studied in Lecture 25:

In fact this diagram ignores some of the waste products we created while making this pie. A more complete picture might look like this:

Now we've added some wires in red! We're admitting that cracking eggs creates egg shells, that peeling lemons creates lemon peels, and that using sticks of butter leaves behind paper.

What's going on here, mathematically? We have *two* stories about a process: a less complete one and a more complete one. And these are just two of many possible stories. However, all these stories are related. Category theory can help us make this more precise.

Since the above example is already rather big, let's take a simpler one. Suppose you have the ability to crack an egg into a bowl and then separate out the yolk, the white, and the egg shell. We have a set \(S\) of five resources:

$$ S = \{ \textrm{bowl}, \textrm{egg}, \textrm{yolk}, \textrm{white}, \textrm{egg shell} \} $$ This gives a commutative monoid consisting of combinations of these resources:

$$ a [\textrm{bowl}] + b [\textrm{egg}] + c[\textrm{yolk}]+ d[\textrm{white}] + e [\textrm{egg shell}] $$
where \(a,b,c,d,e \in \mathbb{N}\) and the square brackets are just there because they look nice. In Lecture 25 we saw that this kind of monoid is called \(\mathbb{N}[S]\). But already in Lecture 22 we saw how to make it into a commutative monoidal *preorder* by choosing a reaction network and using it to define a concept of \(\le\), where \(x \le y\) means "we can make \(x\) from \(y\)".

Let's choose a reaction network that describes what you can do:

$$ [\textrm{bowl}] + [\textrm{egg}] \to [\textrm{bowl}] + [\textrm{yolk}] + [\textrm{white}] + [\textrm{egg shell}] . $$ This gives an inequality

$$ [\textrm{bowl}] + [\textrm{yolk}] + [\textrm{white}] + [\textrm{egg shell}] \le [\textrm{bowl}] + [\textrm{egg}] $$ and we can derive a bunch more using the laws of a commutative monoidal preorder. Taking all these inequalities as true, we make \(\mathbb{N}[S]\) into a commutative monoidal preorder.
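For concreteness, here is one way to explore this preorder in code. This is a minimal Python sketch of my own (the names `REACTANTS`, `PRODUCTS`, and `can_make` are not from the lectures): it decides \(x \le y\), "we can make \(x\) from \(y\)", by searching the complexes reachable from \(y\) via the reaction.

```python
from collections import Counter

# A complex in N[S] is a finite multiset of resources; a Counter encodes it.
# The one generating reaction: bowl + egg -> bowl + yolk + white + egg shell.
REACTANTS = Counter({'bowl': 1, 'egg': 1})
PRODUCTS = Counter({'bowl': 1, 'yolk': 1, 'white': 1, 'egg shell': 1})

def can_make(x, y):
    """Decide x <= y, i.e. "we can make x from y", by searching the states
    reachable from y using the reaction.  Terminates because each
    application of the reaction consumes an egg."""
    target = +Counter(x)                  # unary + drops zero entries
    frontier, seen = [+Counter(y)], set()
    while frontier:
        state = frontier.pop()
        key = frozenset(state.items())
        if key in seen:
            continue
        seen.add(key)
        if state == target:
            return True
        # apply the reaction if all its reactants are present
        if all(state[r] >= n for r, n in REACTANTS.items()):
            frontier.append(state - REACTANTS + PRODUCTS)
    return False

# The generating inequality holds:
assert can_make({'bowl': 1, 'yolk': 1, 'white': 1, 'egg shell': 1},
                {'bowl': 1, 'egg': 1})
# But without a bowl, the egg can't be cracked:
assert not can_make({'yolk': 1, 'white': 1, 'egg shell': 1}, {'egg': 1})
```

Applying the reaction inside a larger state corresponds to the monoidal rule \(x \le y \Rightarrow x + z \le y + z\), so reachability captures exactly the inequalities derivable from the single generator.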

But now let's ignore egg shells. And while we're at it, let's ignore the bowl! It's a "renewable resource", or "catalyst", so you might decide to take it for granted if you're in a kitchen that *has* a bowl.

We now get a simpler set of resources:

$$ T = \{ \textrm{egg}, \textrm{yolk}, \textrm{white} \} $$ We also get a simpler commutative monoid \(\mathbb{N}[T]\), with elements of this form:

$$ b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] $$ where \(b,c,d \in \mathbb{N}\). We can make this into a commutative monoidal preorder using the simpler reaction network

$$ [\textrm{egg}] \to [\textrm{yolk}] + [\textrm{white}] $$ which gives the inequality

$$ [\textrm{yolk}] + [\textrm{white}] \le [\textrm{egg}] $$ and of course many others that we can derive from this using the laws of a commutative monoidal preorder.

Now for the cool part: we'll formalize the process of ignoring the waste product \( [\textrm{egg shell}]\) and the renewable resource \([\textrm{bowl}]\)!

I hope you can guess how it goes: I spent the last two lectures explaining monoidal monotones and their adjoints, and now we'll use those ideas.

There's a function

$$ f : \mathbb{N}[S] \to \mathbb{N}[T] $$ that forgets bowls and egg shells:

$$ f( a [\textrm{bowl}] + b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] + e [\textrm{egg shell}]) = b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}]. $$
This is the nicest possible kind of map between monoidal preorders: it's a *strict monoidal monotone!* Remember, this means that:

1. It's monotone: if \(x \le x'\) then \(f(x) \le f(x')\) for all \(x,x' \in \mathbb{N}[S]\).

2. It obeys \(f(x+x') = f(x) + f(x')\) for all \(x,x' \in \mathbb{N}[S]\).

3. It obeys \(f(0) = 0\).

You can check 2 and 3 using the formula for \(f\). For 1, the key thing to check is this. We know that in \(\mathbb{N}[S]\) we have

$$ [\textrm{bowl}] + [\textrm{yolk}] + [\textrm{white}] + [\textrm{egg shell}] \le [\textrm{bowl}] + [\textrm{egg}] $$ so we need to check that

$$ f( [\textrm{bowl}] + [\textrm{yolk}] + [\textrm{white}] + [\textrm{egg shell}]) \le f( [\textrm{bowl}] + [\textrm{egg}] ) $$ in \(\mathbb{N}[T]\). Using the formula for \(f\), this says

$$ [\textrm{yolk}] + [\textrm{white}] \le [\textrm{egg}] . $$ But this is true: we set things up to make it true!

We actually need to check that *all* the true inequalities in \(\mathbb{N}[S]\) get sent by \(f\) to true inequalities in \(\mathbb{N}[T]\), but since all the inequalities in \(\mathbb{N}[S]\) are derived from the one we wrote down, this is sort of obvious.

("Sort of obvious?" With a bit more category theory we could make this hand-waving argument rigorous without a boring proof by induction. But anyone who studies math or computer science needs to do a lot of boring proofs by induction, and that approach is fine too.)
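Conditions 2 and 3, and the image of the generating inequality, can also be spot-checked mechanically. Here is a minimal sketch; the `Counter` encoding is my own assumption, not notation from the lectures:

```python
from collections import Counter

T = {'egg', 'yolk', 'white'}

def f(x):
    """Forget bowls and egg shells: keep only the resources in T."""
    return +Counter({r: n for r, n in x.items() if r in T})

# Condition 3: f(0) = 0.
assert f(Counter()) == Counter()

# Condition 2: f(x + x') = f(x) + f(x'), spot-checked on one pair.
x1 = Counter({'bowl': 1, 'egg': 2, 'egg shell': 1})
x2 = Counter({'bowl': 3, 'yolk': 1})
assert f(x1 + x2) == f(x1) + f(x2)

# Condition 1 on the generator: f sends
#   bowl + yolk + white + egg shell <= bowl + egg
# to  yolk + white <= egg, which is the generator of the order on N[T].
lhs = Counter({'bowl': 1, 'yolk': 1, 'white': 1, 'egg shell': 1})
rhs = Counter({'bowl': 1, 'egg': 1})
assert f(lhs) == Counter({'yolk': 1, 'white': 1})
assert f(rhs) == Counter({'egg': 1})
```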

Now for two very interesting puzzles. We've seen that there's a systematic way to "ignore waste products and renewable resources". But is there a systematic way to *un-ignore* them?

That is: can we guess what a more complete description of a process is, given a less complete description? This seems like black magic. But that's what adjoints are all about: finding approximate inverses to maps that don't really have inverses! So:

**Puzzle 84.** Does the monoidal monotone \(f : \mathbb{N}[S] \to \mathbb{N}[T] \) have a right adjoint? If so, what's the formula for it? Thanks to Lecture 27 we know that this right adjoint, if it exists, is lax monoidal monotone. Is it strict monoidal monotone?

**Puzzle 85.** Does the monoidal monotone \(f : \mathbb{N}[S] \to \mathbb{N}[T] \) have a left adjoint? If so, what's the formula for it? Thanks to Lecture 27 we know that this left adjoint, if it exists, is oplax monoidal monotone. Is it strict monoidal monotone?

I'm fascinated by these puzzles, but I *haven't let myself solve them* because I'll enjoy it more if you find the answers. Then we can philosophize about what it all means. We'll have learned something about what it means to "un-ignore" some side-effects that we'd been ignoring!

## Comments

A random hypothesis I came up with while reading this, that I also haven't put much thought into yet: Is every strict monoidal monotone between resource theories the type that merely forgets things, up to a relabeling of the underlying set? If so, does this hold for general monoidal preorders? If not, what other sorts of strict monoidal monotone are there for resource theories? (We can call this **Puzzle JMC7**, keeping in mind I haven't solved it yet, and I don't remember the last number I used.)

I bet the answer to your first question is "no". Someone want to come up with a counterexample?


In case anyone finds Puzzles 84 and 85 intimidating, I should add that solving them does not require strokes of brilliant insight: it's a matter of computation.

We know that *if* the desired right or left adjoint exists, it's given by the formula in [Lecture 6](https://forum.azimuthproject.org/discussion/1901/lecture-6-chapter-1-computing-adjoints/p1). So, you can use this formula and see what it gives. If it gives an undefined answer, the adjoint does not exist. If it's defined, we still aren't sure it's the adjoint. But the [definition of adjoint](https://forum.azimuthproject.org/discussion/1828/lecture-4-chapter-1-galois-connections/p1) gives a condition one can check using a calculation!

So, it's all a matter of computation... unless one has a clever idea to speed things up.


If this puzzle is still intimidating, think a bit about what the right adjoint \(g : \mathbb{N}[T] \to \mathbb{N}[S] \) of \(f : \mathbb{N}[S] \to \mathbb{N}[T] \) would be like. Roughly speaking,

$$ g( b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] ) $$ would be the biggest combination of bowls, eggs, yolks, whites and egg shells, say \(x \in \mathbb{N}[S] \), such that

$$ f(x) \le b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] . $$ Here "biggest" means with respect to the preorder on \(\mathbb{N}[S]\).


First, let's establish that the general form of a complex is a function \(x : S \to \mathbb{N}\) giving quantities for each resource, where \(x\) has finite support (only finitely many values are nonzero). When \(S\) is finite, this is just a tuple. In this setting, the monoidal monotone \(f\) that forgets some elements of \(S\) sends \(x\) to its restriction on \(T\).

**Puzzle 84:** Let's consider what a right adjoint \(f \dashv g\) should look like. For complexes \(x \in \mathbb{N}[S]\) and \(y \in \mathbb{N}[T]\), we must have the property \(f(x) \le y \Leftrightarrow x \le g(y)\). For any particular \(T\)-complex \(y\), the set of \(T\)-complexes less than it are all those with no more \(T\)-resources than \(y\). The set of \(S\)-complexes that map into this set have the same restriction on \(T\)-resources, but *no* restriction on \((S \setminus T)\)-resources. Requiring \(g(y)\) to be greater than or equal to every one of these \(S\)-complexes is impossible, because there is no greatest natural number. Thus, \(f\) has no right adjoint.

**Puzzle 85:** Here, we need the property \(g(x) \le y \Leftrightarrow x \le f(y)\), where \(x \in \mathbb{N}[T]\) and \(y \in \mathbb{N}[S]\). For any particular \(T\)-complex \(x\), the set of \(T\)-complexes greater than it are all those with no fewer \(T\)-resources than \(x\). The set of \(S\)-complexes that map into this set have the same restriction on \(T\)-resources, but *no* restriction on \((S \setminus T)\)-resources. Thus, in order to be less than or equal to every such \(S\)-complex, \(g(x)\) must assign \(0\) to every \((S \setminus T)\)-resource:

\[g(x)(r) = \begin{cases} x(r) & \text{when } r \in T \\ 0 & \text{otherwise} \end{cases}\]

Finally, we know that \(g\) is strict monoidal monotone. To put it simply, \(0 + 0\) is still \(0\), and the \(T\)-resource values are unchanged by \(g\).

As an addendum, if \(S \setminus T\) is infinite, then there is technically a restriction on \(S\)-complexes -- namely that it must have finite support over \(S \setminus T\) resources. But the proof doesn't change if we explicitly discard those -- there's always a finitely supported complex (or a set of them) that we'd have to consider anyway which does its job.
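For what it's worth, this candidate \(g\) and the adjunction condition can be spot-checked in code, but only under the *pointwise* (resource-counting) order that this argument implicitly uses, which is an assumption rather than the reaction-generated order from the lecture. A minimal sketch, with my own names:

```python
from itertools import product

S = ['bowl', 'egg', 'yolk', 'white', 'egg shell']
T = ['egg', 'yolk', 'white']

def f(y):
    """Restriction N[S] -> N[T]: forget bowls and egg shells."""
    return {t: y[t] for t in T}

def g(x):
    """Candidate left adjoint N[T] -> N[S]: pad forgotten resources with 0."""
    return {s: x.get(s, 0) for s in S}

def leq(a, b):
    """POINTWISE comparison -- an assumption, not the reaction-generated
    order defined in the lecture."""
    return all(a[k] <= b[k] for k in a)

# Exhaustively check the adjunction g(x) <= y  <=>  x <= f(y)
# for small complexes (counts 0 or 1 for each resource).
for xs in product(range(2), repeat=len(T)):
    x = dict(zip(T, xs))
    for ys in product(range(2), repeat=len(S)):
        y = dict(zip(S, ys))
        assert leq(g(x), y) == leq(x, f(y))
```

Whether this survives with the actual reaction-generated order is exactly what the rest of the thread debates.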


I think the above result comes more or less immediately when you consider that a finitely supported function \(S \to \mathbb{N}\) is no more than \(\mathbb{N}^S\), the \(S\)-indexed product on \(\mathbb{N}\). Restriction discards some of these copies of \(\mathbb{N}\); to recover them, we need either the minimal or maximal element of \(\mathbb{N}\), depending on if we want a left adjoint or a right adjoint. \(\mathbb{N}\) has one, but not the other.

(I think I just invoked the fact that \(Set\) is Cartesian closed. Is that the case?)


Hi Jonathan, I have a question about how you are thinking about the relationships in \(\mathbb N[S]\) and \(\mathbb N[T]\). You say:

> For any particular \(T\)-complex \(y\), the set of \(T\)-complexes less than it are all those with no more \(T\)-resources than \(y\).

Does this mean that you are defining \(\leq\) according to the amount of resources in each complex? I thought that we were using \(\leq\) to represent possible reactions. Remember, \(y \leq y'\) means that we can get \(y\) from \(y'\), not that there are fewer resources in \(y\) than in \(y'\). As an example:

\[ [\textrm{yolk}] + [\textrm{white}] \leq [\textrm{egg}] \] because you can get a yolk and a white from a whole egg. However \[ [\textrm{egg}] \nleq 2[\textrm{egg}] \] because you can't get exactly one egg from two eggs.

Maybe I'm misunderstanding your explanation or the definition of \(\le\)?


Heh. I think you're right. I'll sleep on it...


To be fair, I think all his arguments work, but on the opposite monoidal preorder.

$$ (2[\textrm{egg}] \leq [\textrm{egg}]) \Leftrightarrow ([\textrm{egg}] \leq 2[\textrm{egg}])^\textrm{op} $$


I'm a little confused about how the bowl renewable resource works in \(\mathbb N [S]\). I claim that if \(x, x' \in \mathbb N [S]\) and \(x \leq x'\) then the number of bowls in \(x\) and in \(x'\) are the same. This is because the relation \(\leq\) is built from applying the monoidal preorder rule ( \(x \leq x', y \leq y' \implies x \otimes y \leq x' \otimes y'\) ) to some set of "generating" relations. These generating relations are the reflexive relations along with:

$$ [\textrm{bowl}] + [\textrm{yolk}] + [\textrm{white}] + [\textrm{egg shell}] \le [\textrm{bowl}] + [\textrm{egg}]$$ Since all of the generating relations have the same number of bowls on both sides, so will any relation coming from applying the monoidal preorder rule.

This also matches up with my intuition, because every reaction in the kitchen starts and ends with the same number of bowls.

I bring this up because it leads me to a problem when trying to come up with a right adjoint. John suggests looking for:

> the biggest combination of bowls, eggs, yolks, whites and egg shells, say \(x \in \mathbb{N}[S] \), such that
> \[ f(x) \le b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] . \]

Let \[ r= [\textrm{bowl}] + b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] + 0 [\textrm{egg shell}]\] and \[ r'= 2[\textrm{bowl}] + b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] + 0 [\textrm{egg shell}].\] Now \(f(r) = f(r') = b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] \). So by the reflexive rule we have \[f(r) \le b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] , \quad f(r') \le b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] .\] This means that \(g(b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}])\) must be bigger than both \(r\) and \(r'\). But if my claim above holds, then there is no such element of \(\mathbb N[S]\).

I think this shows that there can be no right adjoint \(g\), but that doesn't feel quite right. Where have I gone wrong?


@Keith, \(2[\textrm{egg}] \leq [\textrm{egg}]\) would mean that we can get 2 eggs from exactly 1 egg, which also isn't a valid reaction in the baking monoidal preorder described in the lecture. So I don't think Jonathan's proof works even for the opposite preorder.


Nice work in comment #5, Jonathan! I'm glad to see someone tackle these puzzles.

Sophie's point is a good one, though:

> Does this mean that you are defining \(\leq\) according to the amount of resources in each complex? I thought that we were using \(\leq\) to represent possible reactions. Remember, \(y \leq y'\) means that we can get \(y\) from \(y'\) not that there are fewer resources in \(y\) than in \(y'\).

Nothing in [comment #5](https://forum.azimuthproject.org/discussion/comment/18341/#Comment_18341) mentions how the ordering is defined in terms of the reactions we have available, so the argument must be incomplete, at least... though I think it's on the right track, and maybe Sophie fixed it in [comment #10](https://forum.azimuthproject.org/discussion/comment/18346/#Comment_18346). I'll think about that tomorrow. For now, something simpler:

I think your argument would be exactly right if we were dealing with a collection of reactions that can destroy one item of each type. Given a set \(S = \{s_1, \dots, s_n\} \) and reactions

$$ s_i \to 0 $$ the preorder we get on \(\mathbb{N}[S]\) is the one where

$$ a_1 s_1 + \cdots + a_n s_n \le b_1 s_1 + \cdots + b_n s_n \; \textrm{ iff } \; a_i \le b_i \textrm{ for all } 1 \le i \le n .$$ In other words, we can get from one complex to another only by destroying things.

Let's consider this case. Suppose we have any function \(\phi : S \to T \). We can use it to define a homomorphism

$$ f: \mathbb{N}[S] \to \mathbb{N}[T] $$ by

$$ f( a_1 s_1 + \cdots + a_n s_n) = a_1 s_{\phi(1)} + \cdots + a_n s_{\phi(n)} $$ The \(f\) in my puzzles is an example of this idea. But let's not solve those puzzles now: instead, give \(\mathbb{N}[S] \) and \(\mathbb{N}[T] \) the preorders described above. Then \(f\) is a strict monoidal monotone. I believe Jonathan's argument shows \(f\) has no right adjoint unless \(\phi\) is onto.
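This "only destroy things" preorder and the map along \(\phi\) can be sketched in code. The encoding and names below are my own; as the follow-up comments observe, this map merges resources with the same image rather than forgetting them:

```python
from collections import Counter

def pushforward(phi, x):
    """The monoid homomorphism N[S] -> N[T] induced by phi: S -> T:
    each resource s contributes its count to phi(s), so resources with
    the same image become interchangeable rather than forgotten."""
    out = Counter()
    for s, n in x.items():
        out[phi[s]] += n
    return +out  # drop zero entries

def leq_pointwise(a, b):
    """The 'only destroy things' preorder: a <= b iff a has no more of
    each resource than b."""
    return all(n <= b[r] for r, n in a.items())

# phi collapsing s1 and s2 onto the same resource t1:
phi = {'s1': 't1', 's2': 't1'}
assert pushforward(phi, Counter({'s1': 2, 's2': 3})) == Counter({'t1': 5})
```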


By the way, if we want to prove a monotone function between *posets* has no left (resp. right) adjoint, one option is to use the [Adjoint Functor Theorem for posets](https://forum.azimuthproject.org/discussion/2031/lecture-16-chapter-1-the-adjoint-functor-theorem-for-posets/p1) and show that it doesn't preserve all joins (resp. meets).

Conversely, if our monotone map between posets *does* preserve all joins (resp. meets) then it *does* have a left (resp. right) adjoint. But in this case it's often just as easy to guess the desired adjoint and prove it obeys the definition of adjoint.

John, my understanding is that an empty reaction network is one in which only the trivial relation \(x \le x\) holds for all \(x\). Under that definition, my argument comes out even worse: I was working with a completely unrelated relation to begin with. We could imagine that one to be an empty reaction network except that reagents may (discretely) *evaporate*.

Thinking through my argument, it's only *accidentally* correct for empty reaction networks. It's still true that we need minimal/maximal elements of \(\mathbb{N}\) to cover the forgotten reagents, since an infinity of \(S\)-complexes collapse onto the same \(T\)-complex. But the only element whose inverse image we need to worry about for computing \(g(x)\) is \(x\) itself.

(**EDIT:** Wait, I'm even doubting this now -- because no two of the collapsing infinity of \(S\)-complexes are related, we don't have a unique join.)

Also, I'm a little confused by this homomorphism:

$$ f( a_1 s_1 + \cdots + a_n s_n) = a_1 s_{\phi(1)} + \cdots + a_n s_{\phi(n)} $$

If \(\phi = \{1 \mapsto 1, 2 \mapsto 1\}\), then \(f(a_1 s_1 + a_2 s_2) = a_1 s_1 + a_2 s_1 = (a_1 + a_2) s_1\), meaning this function doesn't *forget* reagents, it just treats them as *interchangeable*. (But this does nicely disprove my hypothesis from [comment #1](https://forum.azimuthproject.org/discussion/comment/18335/#Comment_18335)!)

Jonathan - while you were writing your comment, I was correcting mine: I realized I need some ways to make reagents "evaporate". Check it out. I think we're coming closer to agreement. But:

> Also, I'm a little confused by this homomorphism...

Yeah, the homomorphism I wrote down is not relevant to forgetting reagents. So, I'm actually handling a very different situation than the one in the puzzle.

A function between finite sets \(A\) and \(B\) gives rise to two monoid homomorphisms, the "pullback" going from \( \mathbb{N}[B] \) to \( \mathbb{N}[A]\) and the "pushforward" from \( \mathbb{N}[A]\) to \( \mathbb{N}[B] \). In [comment #12](https://forum.azimuthproject.org/discussion/comment/18349/#Comment_18349) I described the "pushforward", which is the wrong one for our puzzles. :-O


Wait a minute, are you secretly teaching us monoidal preorders [(co)-change of base](https://ncatlab.org/nlab/show/base+change)?

Edit: Actually no, I'm thinking of something else.

`Wait a minute, are you secretly teaching us monoidal preorders [(co)-change of base](https://ncatlab.org/nlab/show/base+change)? Edit: Actually no, I'm thinking of something else.`

Keith, I can't make heads or tails of that page. (That's part of why I'm participating in this class!) Would you be willing to give a summary, even a brief one?

Edit: Well then nevermind! :)


@Jonathan #1, Puzzle **JMC7**: Great question! For general resource theories, I haven't seen a precise definition of when a monoidal monotone "merely forgets things". But in general, there are three main types of monoidal monotones:

* Those which add capabilities: those are the monoidal monotones that are given by the identity map \(x\mapsto x\) between monoidal preorders with the same underlying set and monoid structure, \((X,\leq,+) \to (X,\leq',+)\). In other words, we only add further comparison relations while not doing anything else. For example, the discovery of a new technology is often like this: the set of resources and how they combine does not change, but suddenly you can do stuff that you could not do before!

* Those which only change the underlying set: those are the monoidal monotones of the form \(f:(X,\leq,+)\to (Y,\leq,+)\), where \(x\leq x'\) holds if and only if \(f(x)\leq f(x')\), and for every \(y\in Y\) there is \(x\in X\) with \(y\leq f(x)\leq y\). For example, introducing a new currency may be an example of this: when you include a new currency in your resource theory, then you've expanded the theory, so that the original one is included in the new one. But it's still pretty much the same theory, since the new currency is perfectly interconvertible with your original one. (I'm ignoring conversion fees for simplicity.) So monoidal monotones of this type don't really do anything.

* Those which discover new kinds of resources: these are inclusion maps of some symmetric monoidal preordered *subset* of some other symmetric monoidal preorder \(X\), where both the monoid structure and the order are simply given by taking the given structures on \(X\) and restricting them. For example, when your company hires a new employee, then the time of that employee becomes a new type of resource. But the things that you can do without using the time of that employee are still exactly the same.

The neat thing is that you can write every monoidal monotone as a composite of these three: first, add capabilities; then, change the underlying set; then, discover new resources!

I'll leave it open to find more examples of monoidal monotones that do not "merely forget things", but this comment may have given some hints.
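The first type above can be checked mechanically on finite examples. Here is a toy sketch (names and encoding are mine, not from the comment): a preorder is a set of related pairs, and the identity map is monotone from \((X,\leq)\) to \((X,\leq')\) exactly when \(\leq \subseteq \leq'\), i.e. when we only added capabilities.

```python
# Hypothetical finite check: a map f is monotone when it preserves every
# related pair; the "add capabilities" case is the identity map between
# two orders on the same set.

def is_monotone(f, leq_X, leq_Y):
    return all((f[x], f[y]) in leq_Y for (x, y) in leq_X)

X = ['cash', 'goods']
refl = {(x, x) for x in X}
leq = refl                               # before: no conversions at all
leq_new = refl | {('goods', 'cash')}     # new technology: cash buys goods
ident = {x: x for x in X}

print(is_monotone(ident, leq, leq_new))  # True: we only added a relation
print(is_monotone(ident, leq_new, leq))  # False: going back loses one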

John, I have a question about your hint in [#13](https://forum.azimuthproject.org/discussion/comment/18346/#Comment_18346). You said that

> Conversely, if our monotone map between posets _does_ preserve all joins (resp. meets) then it _does_ have a left (resp. right) adjoint. But in this case it's often just as easy to guess the desired adjoint and prove it obeys the definition of adjoint.

Looking back, it seems like that direction of the adjoint functor theorem is conditioned on \(\mathbb N [S]\) having all joins (resp. meets). But I think \(\mathbb N [S]\) has neither since \(x \leq x'\) means that \(x\) and \(x'\) must have the same number of bowls. I gave some reasons why in [#10](https://forum.azimuthproject.org/discussion/comment/18346/#Comment_18346). So this direction of the adjoint functor theorem won't apply to our specific puzzle. Is this correct?

In [#10](https://forum.azimuthproject.org/discussion/comment/18346/#Comment_18346) I suggested a reason why \(f\) does not have a right adjoint but I'm still wondering if I made an error. I think I'll try tackling this from the direction John suggested:

> By the way, if we want to prove a monotone function between _posets_ has no left (resp. right) adjoint, one option is to use the [Adjoint Functor Theorem for posets](https://forum.azimuthproject.org/discussion/2031/lecture-16-chapter-1-the-adjoint-functor-theorem-for-posets/p1) and show that it doesn't preserve all joins (resp. meets).

Sophie wrote:

> Looking back, it seems like that direction of the adjoint functor theorem is conditioned on \(\mathbb N [S]\) having all joins (resp. meets). But I think \(\mathbb N [S]\) has neither since \(x \leq x'\) means that \(x\) and \(x'\) must have the same number of bowls. I gave some reasons why in [#10](https://forum.azimuthproject.org/discussion/comment/18346/#Comment_18346). So this direction of the adjoint functor theorem won't apply to our specific puzzle. Is this correct?

Good point! But somewhere or other I mentioned that in this direction of the adjoint functor theorem, where we are trying to prove some monotone function \(f : A \to B\) has a right adjoint, we don't really need \(A\) to have _all_ joins, only those that are used in the formula for the would-be right adjoint

$$ g(b) = \bigvee \{a \in A : \; f(a) \le_B b \} . $$

Similarly, we don't need \(f\) to preserve _all_ joins, just those that exist in \(A\). You can see this by carefully reading the proof of the adjoint functor theorem for posets. Oh yeah - I said this in [Lecture 17](https://forum.azimuthproject.org/discussion/2037/lecture-17-chapter-1-the-grand-synthesis/p1).

(Somehow in our conversations we always wind up talking about the precise necessary and sufficient conditions for things to be true! That's probably good: it means we're getting into the details.)

So, the adjoint functor theorem can even be used in situations where not all meets (or joins) exist. However, if you're trying to prove an adjoint exists, it's almost always easier to just guess the desired adjoint and then show it works. Guessing isn't hard, since we know a formula for the adjoint if it exists!
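The formula \(g(b) = \bigvee \{a : f(a) \le b\}\) can be tried out on a toy example. The example below is mine, not from the lectures: on \((\mathbb{N}, \le)\), the monotone map \(f(x) = 2x\) has a right adjoint, and the formula recovers it as floor division by 2.

```python
# Hedged toy sketch: compute the would-be right adjoint from the join
# formula g(b) = join of { a : f(a) <= b }, on the total order (N, <=),
# where the join of a finite nonempty set is just its max.

def would_be_right_adjoint(f, b, candidates):
    sat = [a for a in candidates if f(a) <= b]
    return max(sat)

f = lambda x: 2 * x
for b in range(10):
    g_b = would_be_right_adjoint(f, b, range(b + 1))
    assert g_b == b // 2                                   # matches floor division
    assert f(g_b) <= b and (g_b == b or f(g_b + 1) > b)    # the Galois property
print("g agrees with b // 2 on 0..9")
```

The second assertion checks the defining property of a right adjoint in a total order: \(g(b)\) is the *largest* element whose image stays below \(b\).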

Sophie wrote:

> In [#10](https://forum.azimuthproject.org/discussion/comment/18346/#Comment_18346) I suggested a reason why \(f\) does not have a right adjoint but I'm still wondering if I made an error. I think I'll try tackling this from the direction John suggested:

I think you're correct about \(f\) not having a right adjoint. If \(f\) has a right adjoint \(g\) then it's given by the formula in my last comment. And this formula can be translated into the following more intuitive description:

$$ g( b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] ) = x $$

iff \(x \in \mathbb{N}[S] \) is the biggest combination of bowls, eggs, yolks, whites and egg shells such that

$$ f(x) \le b [\textrm{egg}] + c[\textrm{yolk}] + d[\textrm{white}] . $$

Here "biggest" means with respect to the preorder on \(\mathbb{N}[S]\).

However, there _is no_ biggest combination with this property! Because \(f\) ignores all bowls and egg shells, if \(x\) obeys the above inequality so does \(x + [\textrm{bowl}] \).

Does this argument sound valid? Don't trust me: I often get things backwards.

However, this makes me more optimistic that \(f\) has a _left_ adjoint.
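The key step of this argument — that adding a bowl is invisible to \(f\) — is easy to verify concretely. Sketch below; the `Counter` encoding of \(\mathbb{N}[S]\) is mine, not from the thread.

```python
from collections import Counter

# Sketch of the argument above: f : N[S] -> N[T] forgets bowls and shells,
# so adding a bowl never changes the image -- hence the set
# { x : f(x) <= b } can have no biggest element.

def f(x):
    return Counter({k: v for k, v in x.items()
                    if k not in ('bowl', 'shell') and v})

x = Counter({'egg': 1})
assert f(x) == f(x + Counter({'bowl': 1}))    # images coincide exactly...
assert f(x) == f(x + Counter({'bowl': 37}))   # ...no matter how many bowls
print("adding bowls is invisible to f")
```

So every candidate \(x\) for "biggest" is undercut by \(x + [\textrm{bowl}]\), which satisfies the same inequality but is not below \(x\).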

Tobias wrote:

> The neat thing is that you can write every monoidal monotone as a composite of these three: first, add capabilities; then, change the underlying set; then, discover new resources!

This sounds a lot like how every functor can be factored as one that only fails to be essentially surjective (but is full and faithful), one that only fails to be full (but is essentially surjective and faithful) and one that only fails to be faithful (but is essentially surjective and full). (Please don't ask me to remember in which order these 3 factors should be arranged; I can do it but it's tiring.)

Regarding #22: A monotone map is a functor between particularly special kinds of categories, right? (Those where every diagram commutes.) Is it correct that the fact that a monoidal monotone also preserves the monoidal structure doesn't actually gain us anything, as far as the factorization goes that Tobias mentioned?


@Jonathan Castello

I suspect that might actually come up in a future lecture.


@Keith: Oh, I would be intensely surprised if it didn't, given that this is the whole angle of Seven Sketches. I'm more curious about how that informs the factorization John and Tobias are alluding to, and how the monoidal structure does or does not constrain that further.


Jonathan wrote:

> A monotone map is a functor between particularly special kinds of categories, right?

Right! A preorder is precisely a category in which every diagram commutes, and a monotone function is precisely a functor between such categories. A monoidal preorder is a monoidal category in which every diagram commutes, and a monoidal monotone is precisely a monoidal functor between such categories.

So, in this course so far, we are doing category theory in the degenerate case where we never need to check that any diagram commutes!

> Is it correct that the fact that a monoidal monotone also preserves the monoidal structure doesn't actually gain us anything, as far as the factorization goes that Tobias mentioned?

I'm not 100% sure what you mean, but I can guess. All this stuff works even in the non-monoidal situation.

Suppose \(f : A \to D\) is a monotone function between preorders. Borrowing words from Tobias, \(f\) can be factored as:

* A monotone \(f_1: A \to B\) that only adds capabilities: that is, one that acts as the identity map \(x\mapsto x\) between preorders with the same underlying set. In other words, we only add further inequalities while not doing anything else. For example, the discovery of a new technology is often like this: the set of resources and how they combine does not change, but suddenly you can do stuff that you could not do before!

* A monotone \(f_2 : B \to C\) that only changes the underlying set: that is, one where \(b\leq b'\) holds if and only if \(f(b)\leq f(b')\), and for every \(c\in C\) there is \(b\in B\) with \(c\leq f(b)\leq c\). For example, introducing a new currency may be an example of this: when you include a new currency in your resource theory, then you've expanded the theory, so that the original one is included in the new one. But it's still pretty much the same theory, since the new currency is perfectly interconvertible with your original one.

* A monotone \(f_3: C \to D\) that only discovers new kinds of resources: this is the inclusion map of \(C\) as a subset of the preorder \(D\), where the preorder on \(C\) is simply given by taking the given preorder on \(D\) and restricting it. For example, when your company hires a new employee, then the time of that employee becomes a new type of resource. But the things that you can do without using the time of that employee are still exactly the same.

This is a special case of a fact about categories. Suppose \(f : A \to D\) is a functor between categories. Then \(f\) can be factored as:

* a functor \(f_1 : A \to B\) that is essentially surjective and full (but not necessarily faithful),

* a functor \(f_2 : B \to C\) that is essentially surjective and faithful (but not necessarily full),

* a functor \(f_3: C \to D\) that is faithful and full (but not necessarily essentially surjective).

This in turn is a generalization of the usual fact that any function can be factored as an onto function followed by a 1-1 function. All this stuff is studied in vastly more detail in my [Lectures on \(n\)-categories and cohomology](http://math.ucr.edu/home/baez/cohomology.pdf), in the part about "Postnikov towers".
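The set-level fact at the bottom of this tower — any function factors as an onto map followed by a 1-1 map, through its image — can be sketched directly. Toy names below are mine.

```python
# Epi-mono factorization of a function f : A -> D through its image:
# f = (inclusion of the image into D) . (corestriction of f onto its image).

def factor(f, A):
    image = sorted({f(a) for a in A})
    onto = lambda a: f(a)   # A ->> image: surjective by construction
    into = lambda b: b      # image >-> D: the inclusion, injective
    return onto, into, image

f = lambda n: n % 3
onto, into, image = factor(f, range(10))
assert image == [0, 1, 2]
assert all(into(onto(a)) == f(a) for a in range(10))   # composite equals f
print("f = inclusion . surjection")
```

The categorical factorization above refines this: instead of one intermediate object (the image) there are two, splitting "not injective" into the full/faithful distinctions.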

These three types of functors correspond to [stuff, structure, and properties](https://ncatlab.org/nlab/show/stuff,+structure,+property)!! So we have a correspondence between:

- only adding capabilities and forgetting purely properties,
- only changing the underlying set and forgetting purely structure,
- only discovering a new kind of resource and forgetting purely stuff.

Hmm... now I have to think about why these correspondences make sense! This is very fun!

EDIT - [reported as suggestion to the book]


Sophie! Yes, this "stuff, structure and properties" trinity is exactly what's going on here. We can take any functor \(f: A \to D\) and write it as

* a functor \(f_1 : A \to B\) that **forgets only stuff**: it's essentially surjective and full (but not necessarily faithful),

followed by

* a functor \(f_2 : B \to C\) that **forgets only structure**: it's essentially surjective and faithful (but not necessarily full),

followed by

* a functor \(f_3: C \to D\) that **forgets only properties**: it's faithful and full (but not necessarily essentially surjective).

In the case when \(A\) and \(D\) are preorders, so are \(B\) and \(C\), and we are in the situation discussed by Tobias.

(Tobias made everything monoidal, just to spice it up, but it works fine in the non-monoidal situation too.)

If you ever want to dig _really_ deep into this stuff, structure and properties business, read my [Lectures on \(n\)-categories and cohomology](http://math.ucr.edu/home/baez/cohomology.pdf). It may be rather stressful, but it explains how this trinity continues when we go to \(n\)-categories, and how it's connected to homotopy theory.

Has anyone figured out an answer for **Puzzle 85**? Here's what I got so far. The would-be left adjoint should look like:

$$ h(x) = \bigwedge \left\{ y \in \mathbb{N}[S] \; : \; f(y) \to x \right\} . $$

I've tried instantiating the formula for a resource in \( \mathbb{N}[T] \) – I picked \([\textrm{egg}]\) and I've used the fact that \( f \) forgets bowls and shells:

$$ h([\textrm{egg}]) = \bigwedge \left\{ [\textrm{egg}], \; [\textrm{egg}] + [\textrm{bowl}], \; [\textrm{egg}] + [\textrm{shells}], \; [\textrm{egg}] + [\textrm{bowl}] + [\textrm{shells}], \; \cdots \right\} . $$

But I cannot find an element in \( \mathbb{N}[S] \) that is less than (that is, can be produced from) both \( [\textrm{egg}] \) and \( [\textrm{egg}] + [\textrm{bowl}] \). I'm tempted to conclude there is no left adjoint, but I doubt my reasoning as it goes against [John's optimism](https://forum.azimuthproject.org/discussion/comment/18382/#Comment_18382).

This comment is irrelevant, but I will say this kind of example is a very good one for people like me who are not very familiar with the language of category theory--it uses a concrete simple example and then introduces the terms (like left adjoint) the way they would be described in category theory terms (most people can cook an egg without knowing any math). I hope these lectures will be archived or stay online.


**Puzzle 85.** Suppose \(y\in\mathbb{N}(T)\), hence \(y=a_{E}[E]+a_{Y}[Y]+a_{W}[W]\) where \(a_E, a_Y, a_W\in \mathbb{N}\). I am not sure if I am correct, but my observation is that in \(\mathbb{N}(T)\), the reflexive property implies that

$$a_{E}[E]\le a_{E}[E],$$

but we do not have

$$a_{E}[E]\le b_{E}[E]$$

even if \(a_{E}\le b_{E}\) in \(\mathbb{N}\), because we do not have the condition that \(0\le [E]\). \(0\) just serves as the identity in \(\mathbb{N}(T)\); it does not mean it is less than the "non-zero" elements. This imposes strong restrictions on the coefficients of any two "comparable" elements in \(\mathbb{N}(T)\).

Now consider all the \(x\in\mathbb{N}(S)\) of the form \(x=b_{B}[B]+b_{E}[E]+b_{Y}[Y]+b_{W}[W]+b_{S}[S]\) such that \(y\le f(x)\); then we have

$$a_{E}[E]+a_{Y}[Y]+a_{W}[W]\le b_{E}[E]+b_{Y}[Y]+b_{W}[W].$$

In the following I am considering the case \(a_{Y}\le a_{W}\); the argument for \(a_{Y} > a_{W}\) is similar. By the restriction mentioned above and the relation \([Y]+[W]\le [E]\) from the reaction network, we have

- \(b_{E}-a_{E}=a_{Y}-b_{Y}=a_{W}-b_{W};\)
- \(0\le b_{Y}\le a_{Y};\)
- \(a_{W}-a_{Y}\le b_{W}\le a_{W}.\)

So one would hope for \(g(y)\in \mathbb{N}(S)\) to have the form \(g(y)=b_{B}[B]+a_{E}[E]+a_{Y}[Y]+a_{W}[W]+b_{S}[S]\), because it is supposed to be the "least element" of the set \(\left\{x\in \mathbb{N}(S) \mid y\le f(x)\right\}\). So the candidates for \(g(y)\) are

- \(a_{E}[E]+a_{Y}[Y]+a_{W}[W],\)
- \([B]+a_{E}[E]+a_{Y}[Y]+a_{W}[W],\)
- \(a_{E}[E]+a_{Y}[Y]+a_{W}[W]+[S],\)
- \([B]+a_{E}[E]+a_{Y}[Y]+a_{W}[W]+[S],\)
- etc.

However, there is no common element in \(\mathbb{N}(S)\) that is simultaneously smaller than all the elements above, so I propose that \(f\) does not have a left adjoint.
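The observation that \(a_E[E] \le b_E[E]\) fails even when \(a_E \le b_E\) can be checked by brute force. The encoding below is mine, not from the comment: states are \((e, y, w)\) counts, the only move is the crack reaction \([E] \to [Y] + [W]\), and \(y \le x\) is read as "\(x\) rewrites to \(y\)".

```python
from collections import deque

# Brute-force reachability in the free monoidal preorder on {E, Y, W}
# generated by the single reaction [E] -> [Y] + [W] (encoding is mine).

def reachable(src, dst):
    seen, queue = {src}, deque([src])
    while queue:
        state = queue.popleft()
        if state == dst:
            return True
        e, y, w = state
        if e > 0:                                # crack one egg
            nxt = (e - 1, y + 1, w + 1)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

assert reachable((1, 0, 0), (0, 1, 1))      # [Y] + [W] <= [E] holds
assert not reachable((2, 0, 0), (1, 0, 0))  # but [E] <= 2[E] fails:
print("no process throws away eggs")        # eggs are never discarded
```

This matches the point above: without a process like \(0 \le [E]\), having *more* eggs does not let you reach a state with *fewer* eggs.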

Cheuk Man Hwang - thanks for your comment! I'll need to think about it carefully, because I thought that \(f\) had a left adjoint. I'll get back to you on this!


Cheuk wrote:

> I am not sure if I am correct but my observation is that in \(\mathbb{N}(T)\), the reflexive property implies that
> $$a_{E}[E]\le a_{E}[E],$$
> but we do not have
> $$a_{E}[E]\le b_{E}[E],$$
> even if \(a_{E}\le b_{E}\) in \(\mathbb{N}\) because we do not have the condition that \(0\le [E]\). \(0\) just serves as the identity in \(\mathbb{N}(T)\) but it does not mean it is less than the "non-zero" elements.

This is right, because we aren't including any process that lets you throw away eggs!