
I've already said that left and right adjoints give *the best approximate ways to solve a problem that has no solution*, namely finding the inverse of a monotone function that has no inverse. I've defined them and given you some puzzles about them. But now let's review these puzzles and extract some valuable lessons!

We took the function \(f : \mathbb{N} \to \mathbb{N}\) that doubles any natural number

$$ f(a) = 2a . $$ This function has no inverse, since you can't divide an odd number by 2 and get a natural number! But if you did the puzzles, you saw that \(f\) has a "right adjoint" \(g : \mathbb{N} \to \mathbb{N}\). This is defined by the property

$$ f(a) \le b \textrm{ if and only if } a \le g(b) . $$ or in other words,

$$ 2a \le b \textrm{ if and only if } a \le g(b) . $$ Using our knowledge of fractions, we have

$$ 2a \le b \textrm{ if and only if } a \le b/2 $$ but since \(a\) is a natural number, this implies

$$ 2a \le b \textrm{ if and only if } a \le \lfloor b/2 \rfloor $$ where we are using the floor function to pick out the largest integer \(\le b/2\). So,

$$ g(b) = \lfloor b/2 \rfloor. $$ Moral: the right adjoint \(g \) is the "best approximation from below" to the nonexistent inverse of \(f\).

If you did the puzzles, you also saw that \(f\) has a left adjoint! This is the "best approximation from above" to the nonexistent inverse of \(f\): it gives you the smallest integer that's \(\ge n/2\).

So, while \(f\) has no inverse, it has two "approximate inverses". The left adjoint comes as close as possible to the (perhaps nonexistent) correct answer while making sure to never choose a number that's *too small*. The right adjoint comes as close as possible while making sure to never choose a number that's *too big*.
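These two approximate inverses are easy to check by brute force. Here's a quick sketch (my own illustration in Python, not part of the lecture) verifying both adjunction conditions for the doubling function on a range of naturals:

```python
import math

def f(a):
    """The doubling function f(a) = 2a."""
    return 2 * a

def g(b):
    """Candidate right adjoint: the floor of b/2."""
    return b // 2

def h(b):
    """Candidate left adjoint: the ceiling of b/2."""
    return math.ceil(b / 2)

# Right adjoint condition: f(a) <= b  iff  a <= g(b).
assert all((f(a) <= b) == (a <= g(b))
           for a in range(100) for b in range(100))

# Left adjoint condition: h(b) <= a  iff  b <= f(a).
assert all((h(b) <= a) == (b <= f(a))
           for a in range(100) for b in range(100))
```

Note how the right adjoint never overshoots (\(g(b) \le b/2\)) while the left adjoint never undershoots (\(h(b) \ge b/2\)).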

The two adjoints represent two opposing philosophies of life: *make sure you never ask for too little* and *make sure you never ask for too much*. This is why they're philosophically profound. But the great thing is that they are defined in a completely precise, systematic way that applies to a huge number of situations!

If you need a mnemonic to remember which is which, remember left adjoints are "left-wing" or "liberal" or "generous", while right adjoints are "right-wing" or "conservative" or "cautious".

Let's think a bit more about how we can compute them in general, starting from the basic definition.

Here's the definition again. Suppose we have two preorders \((A,\le_A)\) and \((B,\le_B)\) and a monotone function \(f : A \to B\). Then we say a monotone function \(g: B \to A\) is a **right adjoint of \(f\)** if

$$ f(a) \le_B b \textrm{ if and only if } a \le_A g(b) $$
for all \(a \in A\) and \(b \in B\). In this situation we also say that \(f\) is a **left adjoint of \(g\)**.

The names should be easy to remember, since \(f\) shows up on the *left* of the inequality \( f(a) \le_B b \), while \(g\) shows up on the *right* of the inequality \( a \le_A g(b) \). But let's see how they actually work!

Suppose you know \(f : A \to B\) and you're trying to figure out its right adjoint \(g: B \to A\). Say you're trying to figure out \(g(b)\). You don't know what it is, but you know

$$ f(a) \le_B b \textrm{ if and only if } a \le_A g(b) $$ So, you go around looking at choices of \(a \in A\). For each one you compute \(f(a)\). If \(f(a) \le_B b\), then you know \(a \le_A g(b)\). So, you need to choose \(g(b)\) to be greater than or equal to every element of this set:

$$ \{a \in A : \; f(a) \le_B b \} $$
In other words, \(g(b)\) must be an **upper bound** of this set. But you shouldn't choose \(g(b)\) to be any bigger than it needs to be! After all, you know \(a \le_A g(b)\) *only if* \(f(a) \le_B b\). So,
\(g(b)\) must be a **least upper bound** of the above set.

Note that I'm carefully speaking about *a* least upper bound. Our set could have two different least upper bounds, say \(a\) and \(a'\). Since they're both the least, we must have \(a \le a'\) and \(a' \le a\). This doesn't imply \(a = a'\), in general! But it does if our preorder \(A\) is a "poset". A **poset** is a preorder \((A, \le_A)\) obeying this extra axiom:

$$ \textrm{ if } a \le a' \textrm{ and } a' \le a \textrm{ then } a = a' $$ for all \(a,a' \in A\).

In a poset, our desired least upper bound may still not *exist*. But if it does, it's *unique*, and Fong and Spivak write it this way:

$$ \bigvee \{a \in A : \; f(a) \le_B b \} $$
The \(\bigvee\) symbol stands for "least upper bound", also known as **supremum** or **join**.

So, here's what we've shown:

If \(f : A \to B\) has a right adjoint \(g : B \to A\) and \(A\) is a poset, this right adjoint is unique and we have a formula for it:

$$ g(b) = \bigvee \{a \in A : \; f(a) \le_B b \} . $$ And we can copy our whole line of reasoning and show this:

If \(g : B \to A\) has a left adjoint \(f : A \to B\) and \(B\) is a poset, this left adjoint is unique and we have a formula for it:

$$ f(a) = \bigwedge \{b \in B : \; a \le_A g(b) \} . $$
Here the \(\bigwedge\) symbol stands for "greatest lower bound", also known as the **infimum** or **meet**.

We're making progress: we can now actually compute left and right adjoints! Next we'll start looking at more examples.
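On a finite chain every nonempty subset has a join, namely its maximum, so the formula for the right adjoint can be evaluated directly. A minimal brute-force sketch (my own, not from the lecture) that recovers \(g(b) = \lfloor b/2 \rfloor\) for the doubling map:

```python
# Compute the right adjoint via  g(b) = join of { a in A : f(a) <= b },
# on the finite chain A = {0, 1, ..., 10}, where the join of a
# nonempty set is simply its maximum.
A = range(11)

def f(a):
    """The doubling function."""
    return 2 * a

def right_adjoint(b):
    candidates = [a for a in A if f(a) <= b]
    return max(candidates)  # nonempty: a = 0 always qualifies

# The formula recovers the floor function from the lecture.
assert all(right_adjoint(b) == b // 2 for b in range(11))
```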

## Comments

For any set \(X\) let \(PX\) be the **power set** of \(X\), namely the set of all subsets of \(X\). It's easy to see that the subset relation \(\subseteq\) makes \(PX\) into a poset. Suppose we have any function between sets

$$ f : X \to Y . $$
This gives a function

$$ f_{!} : PX \to PY $$
sending each subset \(S \subseteq X\) to its **image** under \(f\):

$$ f_!(S) = \{ y \in Y: \; y = f(x) \textrm{ for some } x \in S \} . $$

**Puzzle 17.** Show that \( f_{!} : PX \to PY \) is a monotone function.

**Puzzle 18.** Does \( f_{!} \) always have a left adjoint? If so, describe it. If not, give an example where it doesn't, and some conditions under which it *does* have a left adjoint.

**Puzzle 19.** Does \(f_{!}\) always have a right adjoint? If so, describe it. If not, give an example where it doesn't, and some conditions under which it *does* have a right adjoint.

It would be nice if experts hold back on answering these puzzles until others have had a crack at them. I asked them in a slightly odd way which, I hope, will make them more enlightening to solve.


Thanks, it was enlightening to see the material from a slightly different angle!

[Removed the identification of a couple of typos that John has since fixed.]


**Puzzle 17.** We must show that if \(S \subseteq T\) then \(f_\ast(S) \subseteq f_\ast(T)\), which is equivalent to saying that \(s \in f_\ast(S) \rightarrow s \in f_\ast(T)\). But if \(s \in f_\ast(S)\) then \(s = f(x)\) for some \(x \in S\), which means \(x \in T\), so \(s \in f_\ast(T)\), as was to be shown.

Thanks for the corrections, Dan! I fixed my lecture. Thanks also for answering Puzzle 17, which is the warmup for the more thrilling puzzles that come next.


Here's a start on the thrilling

**Puzzle 18.** \(f_\ast\) does **not** always have a left adjoint, and here's an example. Let \(X = \{1,2,3\}\) and \(Y=\{A,B\}\), and define \(f\) by \(f(1) = A, f(2) = A, f(3) = B\). Then \(f_\ast(\{1\}) = \{A\}\), so if the left adjoint \(g_\ast\) does exist, then by definition of adjoint \(g_\ast(\{A\}) \subseteq \{1\}\). Similarly \(g_\ast(\{A\}) \subseteq \{2\}\). So \(g_\ast(\{A\}) = \emptyset\). Thus \(g_\ast(\{A\}) \subseteq \{3\}\), so by definition of adjoint \(\{A\} \subseteq f_\ast(\{3\}) = \{B\}\), which is a contradiction.

If \(f\) is injective then this sort of argument can't work, and in fact it seems "intuitively obvious" that \(g_\ast = (f^{-1})_\ast\) will be a successful left adjoint.


Following Dan Schmidt's post 5, it seems that if \(f_{\ast}\) has a left adjoint \(g_{\ast}\), then \(f\) must be injective. Otherwise, if \(f\) is not injective, there are elements \(a_1, a_2 \in A, a_1 \neq a_2\) such that \(f(a_1) = b = f(a_2)\). Then, \(f_{\ast}\) sends both \(\{a_1\}\) and \(\{a_2\}\) to \(\{b\}\). The left adjoint \(g_{\ast}: PY \to PX\) must then send \(\{b\}\) to something at most \(\{a_1\} \wedge \{a_2\} = \emptyset\), which must be \(\emptyset\) itself. But then we would have \(\emptyset = g_{\ast}(\{b\}) \leq \emptyset\) while \(\{b\} \nleq f_{\ast}(\emptyset) = \emptyset\), contradicting our hypothesis.


There's a good function going the other way, \(f^{\ast}: PY \rightarrow PX\), the **preimage** function, defined for \(S \in PY\) by

$$ f^{\ast}(S) = \{x \in X: f(x) \in S\} . $$
Claim: this is right adjoint to the image function \(f_{\ast}: PX \rightarrow PY\).

Proof: \(f_{\ast}(S) \subseteq T\) means that \(S\) maps into \(T\), which means that \(S\) is included in the preimage of \(T\), i.e., \(S \subseteq f^{\ast}(T)\).


Yes, David just answered Puzzle 19. The **preimage** or **inverse image** as David just defined it:

$$ f^{\ast}(S) = \{x \in X: f(x) \in S\} $$
gives a monotone function \(f^{\ast}: PY \rightarrow PX\) that is the right adjoint of \(f_{\ast} : PX \to PY \).

**Puzzle 20.** Does \(f^{\ast}: PY \rightarrow PX \) have a right adjoint of its own?
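This adjunction can also be checked mechanically. Here's a brute-force sketch (my own, reusing Dan Schmidt's example function from comment 5) verifying that \(f_{\ast}(S) \subseteq T\) iff \(S \subseteq f^{\ast}(T)\) for all subsets:

```python
from itertools import combinations

X, Y = {1, 2, 3}, {'A', 'B'}
f = {1: 'A', 2: 'A', 3: 'B'}  # Dan Schmidt's example function

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def image(S):
    """f_* : PX -> PY, the image map."""
    return frozenset(f[x] for x in S)

def preimage(T):
    """f^* : PY -> PX, the preimage map."""
    return frozenset(x for x in X if f[x] in T)

# Adjunction: image(S) subset of T  iff  S subset of preimage(T).
assert all((image(S) <= T) == (S <= preimage(T))
           for S in powerset(X) for T in powerset(Y))
```

(Here `<=` on frozensets is Python's subset relation.)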

> In other words, \(g(b)\) must be an **upper bound** of this set. But you shouldn't choose \(g(b)\) to be any bigger than it needs to be! After all, you know \(a \le_A g(b)\) *only if* \(f(a) \le_B b\). So, \(g(b)\) must be a **least upper bound** of the above set.

**Puzzle TR1.** Why *precisely* must \(g(b)\) be a least upper bound of the set? (Maybe this is obvious, but it took me quite a while.)


Thomas Read: yes, that's a good puzzle. I glossed over the reasoning there because I wanted to keep moving. Can someone fill in the details in a rigorous yet terse way?


I could contribute how I understand **Puzzle TR1**, though I'm not sure how terse it is.

We know that \(g(b)\) is an upper bound for the set \(A_b = \{a \in A : f(a) \leq_B b\}\). We want to show that \(g(b)\) is a *least* upper bound for \(A_b\).

It is enough to show that \(g(b) \in A_b\), from which it follows that any upper bound for \(A_b\) must be at least \(g(b)\).

But \(g(b) \in A_b\) iff \(f(g(b)) \leq_B b\) [by the definition of \(A_b\)], iff \(g(b) \leq_A g(b)\) [by the definition of adjoints], which is true.


Alex - that's great! It's clear, and it's as terse as it gets without leaving out steps. I think the most important step, not made clear in my explanation, is that we're using the definition of adjoints to show \(f(g(b)) \leq_B b\).


While writing my post I was trying to think of sayings that express these philosophies of life:

> The two adjoints represent two opposing philosophies of life: *make sure you never ask for too little* and *make sure you never ask for too much*.

For the second philosophy, the left adjoint philosophy, I can think of

*Err on the side of caution.*

or

*Better safe than sorry.*

For the right adjoint philosophy all I can think of is

*It is easier to ask forgiveness than to get permission.*

That's not perfect. I'm not looking for a saying that advocates going hog-wild and massively violating the rules; I want one that says if you're going to make a little mistake, it's better to get slightly more than you should, rather than slightly less.

A left adjoint person would fill a cup as far as they can without it overflowing, while a right adjoint person would stop as soon as it overflows a bit.


I think for **Puzzle 18** we also need \(f\) surjective? Since if we take \(y \in Y \setminus f_{\ast}(X)\), then for a left adjoint \(g_{\ast}\):

$$ g_{\ast}(\{y\}) \subseteq X \iff \{y\} \subseteq f_{\ast}(X) $$
where the left hand side is clearly always true, but the right hand side is false by definition.


> A left adjoint person would fill a cup as far as they can without it overflowing, while a right adjoint person would stop as soon as it overflows a bit.

The pitcher goes often to the well, but is broken at last.


I'm afraid I'm still confused about the answers to **puzzles 12 and 13** that are re-quoted here in the lecture material - I get a right adjoint \(g(n) = \lfloor n/2 \rfloor \), not \( \lceil n/2 \rceil\) as is given in the lecture. Or (much less likely!) there's an error in the lecture, but I figure someone would have caught it by now. Here's my reasoning:

Simply copying John's definitions above: for \(f : \mathbb{N} \to \mathbb{N}, f(n) = 2n\), the right adjoint \(g : \mathbb{N} \to \mathbb{N} \) must fulfill the condition

$$ f(m) \le n \textrm{ if and only if } m \le g(n) . $$
If we identify

$$ g(n) = \lceil n/2 \rceil , $$
then with simple substitution and rewriting (still following the lecture above), the condition for the right adjoint is simply

$$ \lceil n/2 \rceil \ge m \textrm{ if and only if } n/2 \ge m . $$
But this is trivially false. For example, setting \(n=5, m=3\) yields:

$$ \lceil 5/2 \rceil \ge 3 \textrm{ if and only if } 5/2 \ge 3 , $$
$$ 3 \ge 3 \textrm{ if and only if } 5/2 \ge 3 . $$
What's wrong here? I feel like I'm making an elementary arithmetic error or I'm confused about how you're using the ceiling and floor functions. If you'd prefer I move this to the discussion of puzzles 12-16, that's fine too.


If we're copying John's definitions above, with \(f : \mathbb{N} \to \mathbb{N}, f(n) = 2n\), the right adjoint \(g : \mathbb{N} \to \mathbb{N} \) and

$$ f(m) \le n \textrm{ if and only if } m \le g(n) , $$
with

$$ g(n) = \lceil n/2 \rceil , $$
we should get

$$ 2m \le n \textrm{ if and only if } m \le \lceil n/2 \rceil $$
by direct substitution, since \(f(n) = 2n\).


Am I right in thinking the proof that \(f\dashv g \implies g(b) = \sup A_b\) does not at any point use the monotonicity of \(f\) or \(g\)?


> A left adjoint person would fill a cup as far as they can without it overflowing, while a right adjoint person would stop as soon as it overflows a bit.

"Not too much, not too little, just right" doesn't exactly capture this cup situation but it is close. You want an adjoint Goldilocks principle.

"Just too much, and just too little" ??


> "Not too much, not too little, just right" doesn't exactly capture this cup situation but it is close. You want an adjoint Goldilocks principle.

In terms of the Goldilocks story, it would be something like: Goldilocks walks into the house of bears that did not have any child (cub?) in this version and is forced to choose between food that is too hot or too cold; beds that are too stiff or too soft. There is no ideal, so Goldilocks must make do with whichever choices are the closest approximation to her ideal (maybe the stiff bed isn't as stiff as the soft bed is soft).


It seems like the existence of a Galois connection between \( (A,\leq_A) \) and \( (B, \leq_B ) \) requires least upper bounds and greatest lower bounds to be defined. Does a Galois connection between \( (A,\leq_A) \) and \( (B, \leq_B ) \) imply that \( (A,\leq_A) \) and \( (B, \leq_B ) \) are complete lattices?

For complete lattices, LUB and GLB need to exist for arbitrary subsets, but for a Galois connection, the subsets are not arbitrary. Yet, I haven't been able to formulate an example of a Galois connection between posets that are not complete lattices.


A typo, this bit has one quote mark too many:

> Since they're both the least, we must have \(a \le a'\) and \(a' \le a'\). This doesn't imply \(a = a'\), in general!


I think I might have answered my own question. I will try to post a picture...

<img src="http://folk.uio.no/danielsf/img/example_galois.jpg"/>

\( f : P \rightarrow Q \) and \(g : P \leftarrow Q \)

\( g(q_1) = \bigvee \{ p_1 \} = p_1 \)

\( g(q_{21}) = \bigvee \{ p_1, p_{21}, p_{22}, p_{32}\} = p_{32} \)

\( g(q_{22}) = \bigvee \{ p_1, p_{21}, p_{22}, p_{31} \} = p_{31} \)


Ah, whoops! I hadn't seen this when I posted in lecture 4: "Boolean algebras are an important kind of poset. The power set functor defines a (contravariant) equivalence \(P: Set^{op} \to Bool\): any function \(f: X \to Y\) corresponds to the preimage map \(f^{\ast}: PY \to PX\), which is a monotone map, and also a boolean algebra homomorphism, meaning it preserves meets and joins (the very important fact that preimage preserves set operations)."

We determined that the direct image \(f_!\) is the left adjoint to the preimage \(f^{\ast}\). But there is yet another twist to this **puzzle**: the preimage also has a *right* adjoint - what is it? It is certainly less obvious; I'm curious to see the reasoning of someone figuring it out on their own.

This triple adjunction has deep connections to topos theory and logic, which I hope to learn more about.


Daniel Fava wrote:

> It seems like the existence of a Galois connection between \( (A,\leq_A) \) and \( (B, \leq_B ) \) requires least upper bounds and greatest lower bounds to be defined. Does a Galois connection between \( (A,\leq_A) \) and \( (B, \leq_B ) \) imply that \( (A,\leq_A) \) and \( (B, \leq_B ) \) are complete lattices?

I guess you answered that question in the negative in comment #23, since neither poset in your picture has all least upper bounds - nor does either have all greatest lower bounds.

But you answered your question so un-verbosely that I can't resist stating the answer in words. If \(f : A \to B\) has a right adjoint \(g : B \to A\) and \(A\) is a poset, we have

$$ g(b) = \bigvee \{a \in A : \; f(a) \le_B b \} $$
so the least upper bound *in this formula* must exist in \(A\) for any \(b\). Similarly, if \(g : B \to A\) has a left adjoint \(f : A \to B\) and \(B\) is a poset, we have

$$ f(a) = \bigwedge \{b \in B : \; a \le_A g(b) \} $$
so the greatest lower bound *in this formula* must exist in \(B\) for any \(a\). But no further greatest lower bounds or least upper bounds need exist!

Here's an extreme example: for any poset \(A\) whatsoever, the identity function \(1_A : A \to A\) has a left and right adjoint, namely itself. This is easy to check straight from the definition:

$$ a \le b \textrm{ if and only if } a \le b . $$
If you compute these adjoints using the formulas above, you see that only sets of the form

$$ \{ a \in A : \; a \le b \} $$
need have least upper bounds - and they do, namely the element \(b\). Similarly, only sets of the form

$$ \{a \in A: \; a \ge b \} $$
need have greatest lower bounds - and they do, namely the element \(b\).

(Trivial examples can be very enlightening.)


Daniel Fava had written:

> (Couldn't figure out \{ so going with [ instead)

For some weird reason, on this particular installation of MathJax we need to type

`\\{`

to get a {, and similarly for }. Two backslashes before { and }. I don't know why. I fixed the problem for you.


Keith Peterson - are you agreeing with Cole that I made a mistake? You didn't quite come out and say it. I easily get these things backwards when I'm not careful. I have [trouble telling left from right](http://theconversation.com/why-some-people-have-trouble-telling-left-from-right-and-why-its-so-important-38325) in certain circumstances: perhaps it's connected to the fact that I'm left-handed. This is dangerous for a category theorist! It's possible my whole article needs to be fixed, including my claim that

> The right adjoint comes as close as possible to the (perhaps nonexistent) correct answer while making sure to never choose a number that's *too small*. The left adjoint comes as close as possible while making sure to never choose a number that's *too big*.

It could be the other way around! I'll sit down when I have 15 minutes of peace and quiet and straighten it out.


John Baez, I refuse to say something is incorrect when I myself don't fully understand it.

> perhaps it's connected to the fact that I'm left-handed. This is dangerous for a category theorist!

It's probably a cultural bias: Westerners try really hard to see the number line as moving left to right, i.e. from their less dominant side to their more dominant side, probably precisely because it makes the \(\leq\) operation easier for right-handed people to visualize.


Keith - that would be a nice excuse, but I have a more general form of left-right confusion. If my wife asks me which way she needs to turn next while driving down the road, I may instantly know the correct answer. If I gesture with my hands it's fine (except she can't see me because she's looking at the road). But if I blurt out the answer, I often _say_ the exact opposite of the correct answer! I then quickly realize that I said the wrong thing. I have a lot of trouble convincing her that I'm not confused about the direction: I'm just using the wrong _word_ for the direction. Needless to say, she doesn't find this distinction very helpful.

Interestingly, if I don't blurt out the answer too quickly, I'm likely to say the correct answer even without conscious self-correction. It's as if some part of my brain is using the opposite definitions of these words, but the part with the correct definitions gets the upper hand, given a second or two. I think this is connected to my being left-handed. It's pretty irritating.


John, I'm also left-handed and suffer from this same driving condition. It's a long-running joke in my family, though I notice it's not as much of an issue these days with Waze always on (which is pretty much a requirement for driving around LA). I did think I had some mild form of brain damage for a while (I still might), but I'm somewhat reassured that maybe it's not, as others such as yourself experience it also.


Okay, I've finally had a moment to relax and think. I _did_ get it backwards. My answer annoyed me at the time, because it went against my mnemonic for right and left adjoints. I'm much happier now!

[Cole D. Pruitt](https://forum.azimuthproject.org/discussion/comment/16584/#Comment_16584) wrote:

> I'm afraid I'm still confused about the answers to **Puzzles 12 and 13** that are re-quoted here in the lecture material - I get a right adjoint \(g(n) = \lfloor n/2 \rfloor \), not \( \lceil n/2 \rceil\) as is given in the lecture. Or (much less likely!) there's an error in the lecture, but I figure someone would have caught it by now.

You're the one who caught the error! I've fixed it. Here's the correct story:

But if you did the puzzles, you saw that \(f\) has a "right adjoint" \(g : \mathbb{N} \to \mathbb{N}\). This is defined by the property

$$ f(a) \le b \textrm{ if and only if } a \le g(b) , $$

or in other words,

$$ 2a \le b \textrm{ if and only if } a \le g(b) . $$

Using our knowledge of fractions, we have

$$ 2a \le b \textrm{ if and only if } a \le b/2 $$

but since \(a\) is a natural number, this implies

$$ 2a \le b \textrm{ if and only if } a \le \lfloor b/2 \rfloor $$

where we are using the [floor function](https://en.wikipedia.org/wiki/Floor_and_ceiling_functions) to pick out the largest integer \(\le b/2\). So,

$$ g(b) = \lfloor b/2 \rfloor. $$

Moral: the right adjoint \(g\) is the "best approximation from below" to the nonexistent inverse of \(f\).

If you did the puzzles, you also saw that \(f\) has a left adjoint! This is the "best approximation from above" to the nonexistent inverse of \(f\): it gives you the smallest integer that's \(\ge n/2\).

So, while \(f\) has no inverse, it has two "approximate inverses". The left adjoint comes as close as possible to the (perhaps nonexistent) correct answer while making sure to never choose a number that's _too small_. The right adjoint comes as close as possible while making sure to never choose a number that's _too big_.

The two adjoints represent two opposing philosophies of life: _make sure you never ask for too little_ and _make sure you never ask for too much_. This is why they're philosophically profound. But the great thing is that they are defined in a completely precise, systematic way that applies to a huge number of situations!

If you need a mnemonic to remember which is which, remember left adjoints are "left-wing" or "liberal" or "generous", while right adjoints are "right-wing" or "conservative" or "cautious".

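The floor/ceiling story is easy to machine-check over a range of small numbers. A minimal sketch (the function names are ours; `-(-b // 2)` is just integer ceiling division):

```python
# f(a) = 2a, with right adjoint floor(b/2) and left adjoint ceil(b/2).
# Our own illustration of the adjunction properties stated above.

def f(a):
    return 2 * a

def right_adj(b):
    return b // 2            # floor(b/2): best approximation from below

def left_adj(b):
    return -(-b // 2)        # ceil(b/2): best approximation from above

N = range(50)
# right adjoint: f(a) <= b  iff  a <= right_adj(b)
assert all((f(a) <= b) == (a <= right_adj(b)) for a in N for b in N)
# left adjoint:  left_adj(b) <= a  iff  b <= f(a)
assert all((left_adj(b) <= a) == (b <= f(a)) for a in N for b in N)
```

Note the two defining properties point in opposite directions, matching the "never too small" versus "never too big" philosophies.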

I drew out what I thought were adjoints following the definitions using \(f(x) = 2x\). Is this correct?

![Adjoints](http://aether.co.kr/wp-content/uploads/adjoints.jpg)


Here is one using a surjection from **Example 1.86**:

![Surjection](http://aether.co.kr/wp-content/uploads/adjoint-surjective.jpg)

So generally for posets:

![General](http://aether.co.kr/wp-content/uploads/adjoint-general.jpg)


Yes, Michael, this is right and very nice:

<center><img width = "600" src = "http://aether.co.kr/wp-content/uploads/adjoints.jpg"></center>

This is the picture I have in my mind when I think "Left adjoints are generous: they're the best approximation from above. Right adjoints are cautious: they're the best approximations from below."

And later you distilled this concept to its true essence:

<center><img width = "600" src = "http://aether.co.kr/wp-content/uploads/adjoint-general.jpg"></center>

This chart should be on the wall of every student trying to learn left and right adjoints! Thanks for drawing it!


Cool, glad I got my head around that. Since those are correct, I will make the last picture more complete by adding in bijections.

![General](http://aether.co.kr/wp-content/uploads/adjoint-general2.jpg)


I really like Michael's diagrams. Especially the use of blue on the left and red on the right. What is the copyright on them? How are they created?


> Suppose we have two preorders \((A,\le_A)\) and \((B,\le_)B\)

The B escaped the parentheses in the source.


Grant - thanks! I caught it and put it back in.


@Frederick Sorry, I missed this comment... I draw them in Illustrator.


A vague notion for sure, but is there a sense for linear transformations in which left adjoints give spanning vectors, right adjoints give independent vectors?

(the intuition is that those two can be "squeezed" into an invertible subspace)

Or if that's nonsense, here's a question:

**Puzzle AB1.** Prove that a linear transform \(T \in \mathcal{L}(V,W)\) is a monotone map from linear subspaces in \((V, \subseteq_V)\) to linear subspaces in \((W, \subseteq_W)\).

**Puzzle AB2.** Given a linear transform \(T \in \mathcal{L}(V,W)\), are there right and left adjoints of \(T\) for linear subspaces in \((V, \subseteq_V)\) and \((W, \subseteq_W)\)?

**Puzzle AB3.** How is this related to the pseudoinverse of a matrix?

Hey Michael Hong, your diagrams are what made me finally grok adjoints. Thank you!


**Puzzle JMC4:** Find the right adjoint of \(f : \mathbb{N} \to \mathbb{N}\) defined by \(f(x) = \lfloor \frac{x}{2} \rfloor\).

**Puzzle JMC5:** Call the above right adjoint \(g\). Why does \(g\) _not_ have a right adjoint? (What would it be if it did, and how can we formalize this?)

These adjoints all make a nice pattern. It seems like the pattern would "never end" if we worked in \(\mathbb{Z} \to \mathbb{Z}\), i.e. the chain of adjoints (in both directions) containing \(x \mapsto 2x\) never terminates.

Also, it seems like even in \(\mathbb{N} \to \mathbb{N}\), the sequence of left adjoints starting from \(x \mapsto 2x\) never ends. However, I've computed a few of these out, and it seems to approach the identity function the further out you go.

(EDIT: JMC4 and JMC5 are just Exercise 1.80 in the book, hah. Oops.)

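For what it's worth, Puzzle JMC4 can be explored numerically. The candidate \(g(b) = 2b + 1\) below is our own guess, verified against the adjunction property on a small range; the last loop illustrates the obstruction in JMC5:

```python
# JMC4 sketch: f(x) = floor(x/2), candidate right adjoint g(b) = 2b + 1.
# These names and the candidate formula are our own illustration.

def f(x):
    return x // 2

def g(b):
    return 2 * b + 1

N = range(200)
# g is a right adjoint of f:  f(x) <= b  iff  x <= g(b)
assert all((f(x) <= b) == (x <= g(b)) for x in N for b in N)

# JMC5: g has no right adjoint h. We'd need g(a) <= b iff a <= h(b);
# at b = 0 the left side 2a + 1 <= 0 is never true, while a <= h(0)
# always holds at a = 0, whatever natural number h(0) is.
for h0 in range(10):        # no candidate value for h(0) works
    assert not all((g(a) <= 0) == (a <= h0) for a in range(10))
```

This matches the observation that the chain of adjoints terminates on \(\mathbb{N}\) but not on \(\mathbb{Z}\), where there is no least element to get stuck on.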

@ThomasRead

Yes, thanks for pointing this out. Continuing with **Puzzle 18**: if \(f_{\ast}\) has a left adjoint \(g_{\ast}\), then \(f\) must be surjective. From before, we know that if \(f_{\ast}\) has a left adjoint \(g_{\ast}\), then \(f\) must be injective. Together, we have that \(f\) must be bijective.

Conversely, if \(f\) is bijective, so is \(f_{\ast}\), and thus has its inverse as a left adjoint.

Therefore, the functions \(f\) for which \(f_{\ast}\) has a left adjoint are exactly those that are bijective.

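The claim that \(f_{\ast}\) has a left adjoint exactly when \(f\) is bijective can be sanity-checked by brute force on tiny sets. A sketch (all names are ours): for each \(R \subseteq Y\) we build the only possible value of the left adjoint, namely the intersection of all \(S\) with \(R \subseteq f_{\ast}(S)\), and test it against the defining property.

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def image(f, S):                       # f_* : P(X) -> P(Y), f given as a dict
    return frozenset(f[x] for x in S)

def has_left_adjoint(f, X, Y):
    PX, PY = powerset(X), powerset(Y)
    for R in PY:
        Ss = [S for S in PX if R <= image(f, S)]
        if not Ss:
            return False               # no S at all works for this R
        cand = frozenset.intersection(*Ss)   # the only possible L(R)
        # check the adjunction: cand <= S  iff  R <= f_*(S), for all S
        if not all((cand <= S) == (R <= image(f, S)) for S in PX):
            return False
    return True

# bijection: left adjoint exists
assert has_left_adjoint({0: 'a', 1: 'b'}, [0, 1], ['a', 'b'])
# not injective: no left adjoint
assert not has_left_adjoint({0: 'a', 1: 'a'}, [0, 1], ['a'])
# not surjective: no left adjoint
assert not has_left_adjoint({0: 'a'}, [0], ['a', 'b'])
```

Of course a few small examples are no proof, but they agree with the bijectivity argument above.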

On

> **Puzzle 17.** Show that \( f_{\ast} : PX \to PY \) is a monotone function.

Here is my try:

First, observe that $$ f_{\ast} : PX \to PY, $$ sending each subset \(S \subseteq X\) to its **[image](https://en.wikipedia.org/wiki/Image_(mathematics)#Image_of_a_subset)** under \(f\), produces subsets of \(Y\) of cardinality equal to or less than that of the subset of \(X\) being mapped by \(f_{\ast}\). This follows from the definition of the image linked above. Specifically, it is important to note that it is possible to get images of smaller cardinality (because the mapping need not be injective), but not of higher cardinality, because then the mapping would not be a function.

I will use this observation below.

Now let's prove that \(f_{\ast}\) is monotone, by contradiction.

Assume that for some subsets \(px_1 \subseteq px_2\) in \(PX\) we have \(f_{\ast}(px_2) \subseteq f_{\ast}(px_1)\) in \(PY\).

In other words, when we apply \(f_{\ast}\) to \(px_1\), it produces a superset (not a subset) of \(f_{\ast}(px_2)\). A proper superset means that the cardinality (number of elements) of \(f_{\ast}(px_1)\) is greater than the cardinality of \(f_{\ast}(px_2)\).

But this also means that \(f_{\ast}\) is producing images of higher cardinality than the subset in \(PX\). However, that would contradict the definition of the image of a subset.

Therefore, our assumption was wrong.

(I did not read any other comments so far, so I hope I am not too far off :-) )


On

> **Puzzle 18.** Does \( f_{\ast} \) always have a left adjoint? If so, describe it. If not, give an example where it doesn't, and some conditions under which it *does* have a left adjoint.

> **Puzzle 19.** Does \(f_{\ast}\) always have a right adjoint? If so, describe it. If not, give an example where it doesn't, and some conditions under which it *does* have a right adjoint.

Here is my try:

My current guess is that the answers depend on whether the original function $$ f : X \to Y $$ was injective, surjective or bijective.

Let's consider 3 cases.

(Note 1: \(PX, PY\) are posets, not mere preorders; therefore, if there is a left or right adjoint for \(f_{\ast}\), it is unique.)

(Note 2: \(f_{\ast}\) is monotone.)

1) If \(f\) was bijective, then \(f_{\ast}\) would be bijective too.

\(f_{\ast}\) would be a bijection, in this case, because:

a) *PY* would have the same number of subsets as *PX*.

b) the image of a subset of \(PX\) under \(f\) would be a bijection itself. Therefore, there would be an inverse of that image function, and hence there will be an inverse of \(f_{\ast}\).

Since there is an inverse, left and right adjoints will be just equal to that inverse of \(f_{\ast}\)

Example:

Let's say \(X=\{1,2\}\) and \(f(x) = x+a\), where \(a\) is just some constant natural number. The inverse of \(f\) is \(f^{-1}(x) = x - a\).

\(PX= \{ \{1\},\{2\},\{1,2\}, \emptyset \} \)

\(PY= \{ \{1+a\},\{2+a\},\{1+a,2+a\}, \emptyset \} \)

\(f_{\ast}\) would be a monotone function that maps \( \{1,2\} \to \{1+a,2+a\} \) and so on.

Its inverse is a function that maps back from \( \{1+a,2+a\} \) to \( \{1,2\} \), and so on.

Since the inverse is exact, there are no separate left/right adjoints -- they are all equal to this inverse.

2) If \( f : X \to Y \) was injective [...deleted as I am rethinking my reasoning ..]

3) If \( f : X \to Y \) was surjective [...deleted as I am rethinking my reasoning ... ]

Let's say \(X=\{1,2,3\}\) and \(f(x) = x \bmod 2\). Then \(X\) has cardinality 3 and \(Y\) has cardinality 2, because \(Y\) has just 2 members: \(\{0,1\}\).

An exact inverse of \(f\) does not exist.

\( PX= \{ \{1\}, \{2\}, \{3\}, \{1,2\},\{1,3\},\{2,3\}, \{1,2,3\}, \emptyset \} \)

\(PY= \{ \{0\},\{1\},\{0,1\}, \emptyset \} \)

\(f_{\ast}\) in this case is also surjective (maps elements of PX onto PY).

Exact inverse of \(f_{\ast}\) does not exist, either.

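The mod-2 example above can be tabulated by brute force. A small sketch (names are ours), confirming that \(f_{\ast}\) is surjective onto \(PY\) but not injective, so it has no exact inverse:

```python
from itertools import combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

f = lambda x: x % 2
X = [1, 2, 3]

# f_* : PX -> PY, tabulated on every subset of X
images = {S: frozenset(f(x) for x in S) for S in powerset(X)}

# f_* is surjective: every subset of {0, 1} is hit ...
assert set(images.values()) == set(powerset([0, 1]))
# ... but not injective: {1} and {3} have the same image,
# so f_* has no exact inverse.
assert images[frozenset({1})] == images[frozenset({3})] == frozenset({1})
```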

Hello! I'm still unsure about a few things, so I would really appreciate some feedback on my line of reasoning:

We can use the formula to compute the right (or left) adjoint _only if_ we know that the right (or left) adjoint exists. If the posets in question have all meets and joins, that doesn't say anything about the existence of the right (or left) adjoint; this was the case in [puzzle 18](https://forum.azimuthproject.org/discussion/comment/16490/#Comment_16490): even though the powerset ordered by inclusion forms a complete lattice, [\(f_!\) doesn't always have a left adjoint](https://forum.azimuthproject.org/discussion/comment/16501/#Comment_16501). It seems to me that in general proving the existence of the adjoint is rather difficult, so probably the formulas cannot be readily applied.

My take is that the formula tells us what values the (right) adjoint must take _if_ it exists. That means that oftentimes we can construct a perfectly good _function_ by using the formula. We can then take this very concrete function and check it against the defining property of adjoints (\(f(x) \le y \Leftrightarrow x \le g(y)\)). If it fails, we know that the function taking inputs to the result of that formula isn't a right adjoint, and hence a right adjoint can't exist. The function we made is totally valid, it's just not a right adjoint!

This was the case for [Exercise 1.80](https://forum.azimuthproject.org/discussion/1959), where we find that \(\lfloor \cdot / 3 \rfloor\) has a right adjoint, but its right adjoint doesn't have a right adjoint of its own. The formula told us that since \(\{x \mid f(x) \le 0\} = \emptyset\), we should have \(g(0) = \bigvee \emptyset = 0\). But this failed the property of adjoints for \(x = 0, y = 0\).

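This recipe (compute the candidate from the formula, then test it against the adjunction property) can be sketched in code for \(f(x) = \lfloor x/3 \rfloor\). The helper names are our own; we work on a truncated range of \(\mathbb{N}\) and use \(0\) for the join of the empty set, as in the comment above.

```python
# Build a candidate right adjoint from the join formula, then test it.

def f(x):
    return x // 3

def candidate_right_adjoint(f, top=100):
    def g(b):
        sols = [x for x in range(top) if f(x) <= b]
        return max(sols) if sols else 0   # join of the empty set: use 0
    return g

g = candidate_right_adjoint(f)
# g really is a right adjoint of f, at least on a truncated range:
assert all((f(x) <= b) == (x <= g(b)) for x in range(90) for b in range(29))

# but the candidate right adjoint of g fails the adjunction check:
h = candidate_right_adjoint(g)
assert h(0) == 0                          # the formula forces h(0) = 0
assert not ((g(0) <= 0) == (0 <= h(0)))   # fails at x = 0, y = 0
```

So the formula always produces *some* function here; the adjunction check is what separates genuine adjoints from mere candidates.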

Thank you very much for the answer Jonathan Castello, it was very helpful! If I understood correctly, when looking for the adjoint of a function we can start from the formula and, if it's well defined, we still have to verify the defining property of adjoints. Otherwise, if the formula is not well defined or we are able to come up with a counterexample to the definition, then there is no adjoint. I'm still not very confident about how to come up with counterexamples, but I guess this is a matter of experience.

I would have loved to see puzzles 18 and 19 solved in a more mechanical fashion, that is, starting from the formulas and applying the corresponding definitions. In particular, for puzzle 19, I would like to see how we arrive at the answer posted by David Tanzer.


Just including my answers here for my own sake; I tried to skip other people's answers until writing this.

> **Puzzle 18.** Does \(f_!\) always have a left adjoint? If so, describe it. If not, give an example where it doesn't, and some conditions under which it does have a left adjoint.

> **Puzzle 19.** Does \(f_!\) always have a right adjoint? If so, describe it. If not, give an example where it doesn't, and some conditions under which it does have a right adjoint.

Answer 1: Hand-waving argument is that the meet and the join are the intersection and the union respectively with respect to sets. Since the intersection of a bunch of sets exists and is unique (at least in those situations that I can think of, but sets are weird, so I will assume that we are dealing with such sets and power sets), and the same holds for unions, the left and right adjoint exist and they are equal to ... (see next more detailed answer).

Answer 2: This is the worked out version I wrote to make sure answer 1 was right. For the left adjoint we need:

\(g_!(R) \subseteq S \iff R \subseteq f_!(S)\), which means we want \(g_!(R) = \cap \{S: R \subseteq f_!(S) \}\), which should exist and should be unique (at least it does in the situations with sets I have dealt with). The same reasoning leads to the right adjoint being (with abuse of notation, since I denote it the same) \(g_!(R) = \cup \{S: R \subseteq f_!(S) \}\).

**Edit May 4th**: Sigh, that was wrong, reading the above thread. I don't have the time to figure out what is wrong with it, but my sense is that I only used one direction of the *iff*, namely that if \(R \subseteq f_!(S)\), then we have \(g_!(R) \subseteq S\), so \(g_!(R) = \cap \{S: R \subseteq f_!(S)\}\) if this satisfies the other direction too -- which I haven't checked. Will do later.

**Edit on May 15th**: Finally getting back to this. I tried to figure out where I went wrong, so let me try again. We have a left adjoint \(g_!\) if \(g_!(R) \subseteq S \iff R \subseteq f_!(S)\). Fix some \(R\). The right hand side holds exactly for the sets \(S \in \mathcal{S}\), where \(\mathcal{S} = \{S: R \subseteq f_!(S)\}\). This means that we must have \(g_!(R) \subseteq S\) for every \(S \in \mathcal{S}\) **and** that it holds for no other \(S\). This implies that \(g_!(R) = \cap \mathcal{S}\) (**Note:** for me \(\cap \mathcal{S}\) means taking the intersection over the elements of \(\mathcal{S}\), which may be poor or wrong notation), but that is not enough yet. Why? Let \(S(R) = \cap \mathcal{S}\). Then it could be that \(S(R)\) is not in \(\mathcal{S}\), in which case we have \(g_!(R) \subseteq S(R)\) (by definition), yet we do not have \(R \subseteq f_!(S(R))\). So I should have proved that \(S(R) \in \mathcal{S}\), which I would have realized doesn't work.

For instance, suppose that \(\mathcal{S} = \{ \{0\}, \{1\}\}\). Then their intersection is the empty set, so that \(S(R) = \emptyset\) and hence \(f_!(S(R)) = \emptyset\). Unless \(R = \emptyset\), this is a counterexample, assuming we can come up with a function \(f\) and a non-empty \(R\) such that we get such an \(\mathcal{S}\). This is easy, especially after reading parts of this thread: \(f(0) = 0\), \(f(1) = 0\) with \(R = \{0\}\). And that is where I went wrong.

This still doesn't show when the left adjoint exists, other than that it exists if (and only if in this case, I believe) \(S(R) = \cap \mathcal{S} \in \mathcal{S}\). But at this point I wanted primarily to figure out where my thinking was wrong, not to figure it all out.

Just including my answers here for my own sake; I tried to skip other people's answers until writing this.

> **Puzzle 18.** Does \(f_!\) always have a left adjoint? If so, describe it. If not, give an example where it doesn't, and some conditions under which it does have a left adjoint.

> **Puzzle 19.** Does \(f_!\) always have a right adjoint? If so, describe it. If not, give an example where it doesn't, and some conditions under which it does have a right adjoint.

**Answer 1:** The hand-waving argument is that the meet and the join are the intersection and the union, respectively, when working with sets. Since the intersection of a bunch of sets exists and is unique (at least in the situations I can think of, but sets are weird, so I will assume we are dealing with such sets and power sets), and the same holds for unions, the left and right adjoints exist and are equal to... (see the next, more detailed answer).

**Answer 2:** This is the worked-out version I wrote to make sure Answer 1 was right. For the left adjoint we need

$$ g_!(R) \subseteq S \iff R \subseteq f_!(S), $$

which means we want \(g_!(R) = \bigcap \{S : R \subseteq f_!(S)\}\), which should exist and should be unique (at least it is in the situations I have dealt with). The same reasoning leads to the right adjoint being (with abuse of notation, since I denote it the same) \(g_!(R) = \bigcup \{S : R \subseteq f_!(S)\}\).

**Edit, May 4th:** Sigh, that was wrong, reading the above thread. I don't have the time to figure out what is wrong with it, but my sense is that I only used one direction of the *iff*: namely, that if \(R \subseteq f_!(S)\), then \(g_!(R) \subseteq S\). So \(g_!(R) = \bigcap \{S : R \subseteq f_!(S)\}\) only if it satisfies the other direction too, which I haven't checked. Will do later.

**Edit, May 15th:** Finally getting back to this. I tried to figure out where I went wrong, so let me try again. We have a left adjoint \(g_!\) if

$$ g_!(R) \subseteq S \iff R \subseteq f_!(S). $$

Fix some \(R\). The right-hand side holds exactly for the sets \(S \in \mathcal{S}\), where \(\mathcal{S} = \{S : R \subseteq f_!(S)\}\). This means we must have \(g_!(R) \subseteq S\) for every \(S \in \mathcal{S}\) **and** for no other \(S\). This implies \(g_!(R) = \bigcap \mathcal{S}\) (**Note:** by \(\bigcap \mathcal{S}\) I mean the intersection over the elements of \(\mathcal{S}\), which may be poor or wrong notation), but that is not enough yet. Why? Let \(S(R) = \bigcap \mathcal{S}\). It could be that \(S(R)\) is not in \(\mathcal{S}\), in which case we have \(g_!(R) \subseteq S(R)\) (by definition), yet we do not have \(R \subseteq f_!(S(R))\). So I should have proved that \(S(R) \in \mathcal{S}\), at which point I would have realized it doesn't work.

For instance, suppose \(\mathcal{S} = \{\{0\}, \{1\}\}\). Then the intersection is the empty set, so \(S(R) = \emptyset\) and hence \(f_!(S(R)) = \emptyset\). Unless \(R = \emptyset\), this is a counterexample, assuming we can come up with a function \(f\) and a non-empty \(R\) that give such an \(\mathcal{S}\). This is easy, especially after reading parts of this thread: \(f(0) = 0\), \(f(1) = 0\) with \(R = \{0\}\). And that is where I went wrong.

This still doesn't show when the left adjoint exists, other than that it exists if (and only if in this case, I believe) \(S(R) = \bigcap \mathcal{S} \in \mathcal{S}\). But at this point I primarily wanted to figure out where my thinking was wrong, not figure it all out.
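The counterexample at the end of that comment can be checked mechanically. Here is a minimal Python sketch, assuming the function \(f(0) = 0, f(1) = 0\) and the set \(R = \{0\}\) from the comment; the helper names are my own:

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

# The function from the counterexample: f(0) = 0, f(1) = 0.
f = {0: 0, 1: 0}
A = {0, 1}

def f_image(S):
    """The direct image f_!(S) = { f(s) : s in S }."""
    return frozenset(f[s] for s in S)

def g_candidate(R):
    """The candidate left adjoint: intersection of { S : R <= f_!(S) }."""
    family = [S for S in powerset(A) if R <= f_image(S)]
    result = frozenset(A)
    for S in family:
        result &= S
    return result

R = frozenset({0})
gR = g_candidate(R)      # family is {0}, {1}, {0,1}; intersection is empty
# The adjunction would force R <= f_!(g_!(R)) (take S = g_!(R)), but:
print(gR == frozenset())           # True
print(R <= f_image(gR))            # False -- the candidate is not an adjoint
```

This confirms the May 15th analysis: \(S(R) = \bigcap \mathcal{S}\) falls outside \(\mathcal{S}\), so the intersection formula does not define a left adjoint for this \(f\).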

I wrote:

> So, here's what we've shown:
>
> If \(f : A \to B\) has a right adjoint \(g : B \to A\) and \(A\) is a poset, this right adjoint is unique and we have a formula for it:
>
> $$ g(b) = \bigvee \{a \in A : \; f(a) \le_B b \} . $$
>
> And we can copy our whole line of reasoning and show this:
>
> If \(g : B \to A\) has a left adjoint \(f : A \to B\) and \(B\) is a poset, this left adjoint is unique and we have a formula for it:
>
> $$ f(a) = \bigwedge \{b \in B : \; a \le_A g(b) \} . $$

Jonathan wrote:

> My take is that the formula tells us what values the [...] adjoint must take if it exists.

That's true, and that's what I said in my lecture: I derived each of these formulas assuming the adjoint in question exists.

But I think the situation is much better than this. I believe that if the formulas give well-defined functions, these are necessarily the desired adjoints! It would be fun to try to show this, straight from the definitions. My hunch is that it's easy.
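The join formula can be tried out concretely on the doubling function from the lecture. A small Python sketch, restricting the naturals to a finite range so the join (here just a maximum) is computable; the bound `N` is an artifact of the sketch, not part of the mathematics:

```python
def f(a):
    """The doubling function from the lecture: f(a) = 2a."""
    return 2 * a

def g(b, N=100):
    """Right adjoint via the join formula g(b) = max{ a : f(a) <= b },
    with the naturals truncated at N for computability."""
    return max(a for a in range(N + 1) if f(a) <= b)

# The formula recovers floor division by 2:
assert all(g(b) == b // 2 for b in range(50))

# Adjunction check: f(a) <= b  if and only if  a <= g(b).
assert all((f(a) <= b) == (a <= g(b))
           for a in range(20) for b in range(40))
```

The second assertion is exactly the defining property \(f(a) \le b \iff a \le g(b)\), verified over a finite grid; it illustrates the claim that when the formula yields a well-defined function, that function satisfies the adjunction.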
