
Lecture 16 - Chapter 1: The Adjoint Functor Theorem for Posets

Today I'll prove the first really profound result in this course: the Adjoint Functor Theorem for Posets. It establishes a deep link between left adjoints and joins, and between right adjoints and meets. This is the climax of Chapter 1: if you survive this lecture, the final one will be a downhill slide!

Last time I showed that left adjoints preserve joins and right adjoints preserve meets - but I only considered "binary" meets and joins: that is, the meet and join of a pair of elements. We can do much better.

Remember, given any subset \( S \) of a poset \( A \), we say the join of \( S \) is the least upper bound of \( S \), if it exists. We denote this join by \( \bigvee S \), but don't be fooled by the notation into thinking it always exists. Similarly, the meet or greatest lower bound of \( S \subseteq A \) is denoted by \( \bigwedge S \) - if it exists.
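
For example, in the power set poset \(P(X)\), ordered by inclusion, every collection of subsets \(\mathcal{S} \subseteq P(X)\) has a join and a meet:

$$ \bigvee \mathcal{S} = \bigcup_{S \in \mathcal{S}} S , \qquad \bigwedge \mathcal{S} = \bigcap_{S \in \mathcal{S}} S $$ with the convention that the meet of the empty collection is all of \(X\): every element of \(P(X)\) is vacuously a lower bound of \(\emptyset\), so the greatest lower bound of \(\emptyset\) is the top element \(X\). Dually, \(\bigvee \emptyset\) is the bottom element \(\emptyset\). On the other hand, in \(\mathbb{R}\) with its usual order the subset \(\mathbb{Z}\) has no join, since it has no upper bound at all.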

Now for two theorems. I suggest that you read the statements of both these theorems and ponder them a bit before reading the proofs. The statements are ultimately more important than the proofs, so don't be demoralized if you find the proofs tricky!

Theorem. If a monotone function \( f: A \to B \) is a left adjoint, it preserves joins whenever they exist. That is, whenever a set \( S \subseteq A \) has a join we have

$$ f (\bigvee S ) = \bigvee \{ f(a) : \; a \in S\} . $$ Similarly, if a monotone function between posets \(g : B \to A \) is a right adjoint, it preserves meets whenever they exist. That is, whenever a set \( S \subseteq B \) has a meet we have

$$ g(\bigwedge S) = \bigwedge \{ g(b) : \; b \in S \}. $$ Proof. We'll just prove the first half, since the second works the same way. We'll assume \( f: A \to B \) is a left adjoint, meaning that it has a right adjoint \( g: B \to A \), and we'll show that \(f\) preserves joins whenever they exist. This is very similar to the proof in Lecture 15, but I'll run through the argument again because it's so important. I'll go a bit faster this time!

Suppose \(S \subseteq A\) has a join \(j = \bigvee S\). This implies that \(a \le j\) for all \( a \in S \), so \(f(a) \le f(j) \), so \( f(j) \) is an upper bound of \( \{ f(a) : \; a \in S\} \). We just need to show it's the least upper bound of this set. So, suppose \( b \in B \) is any other upper bound of this set. This means that

$$ f(a) \le b $$ for all \(a \in S \), but thanks to the magic of adjoints this gives

$$ a \le g(b) $$ for all \(a \in S \) so \( g(b) \) is an upper bound of \( S\). Since \( j \) is the least upper bound we conclude

$$ j \le g(b) , $$ but thanks to the magic of adjoints this gives

$$ f(j) \le b $$ so \( f(j) \) is indeed the least upper bound of \( \{ f(a) : \; a \in S\} \). \( \qquad \blacksquare \)
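
For a concrete instance of this theorem: any function \(h : X \to Y\) gives a monotone map \(h_{!} : P(X) \to P(Y)\) sending each subset of \(X\) to its image. This map is a left adjoint: its right adjoint is the preimage map \(h^{-1} : P(Y) \to P(X)\), since \(h_{!}(S) \subseteq T\) if and only if \(S \subseteq h^{-1}(T)\). And sure enough, \(h_{!}\) preserves arbitrary unions, which are exactly the joins in these power set posets, while \(h^{-1}\), being a right adjoint, preserves arbitrary intersections, the meets.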

Okay, that was fun. But now comes the really exciting part: a kind of converse is true too! It's easiest to state when we have lots of joins or meets available. We say a poset has all joins if every subset has a join, and similarly for meets:

Adjoint Functor Theorem for Posets. Suppose \(A\) is a poset that has all joins and \( B \) is any poset. Then a monotone map \(f : A \to B \) is a left adjoint if and only if it preserves all joins.

Similarly, suppose \(B\) is a poset that has all meets and \( A \) is any poset. Then a monotone map \(g : B \to A \) is a right adjoint if and only if it preserves all meets.

Proof. Again we'll prove only the first half. So, we'll assume \(A\) is a poset that has all joins, \( B \) is any poset, and \(f : A \to B \) is a monotone map.

The previous theorem assures us that if \( f \) is a left adjoint it preserves all joins, so we only need to prove the converse.

Suppose that \(f\) preserves all joins. To show it's a left adjoint, we construct its right adjoint \( g : B \to A \) using the formula given in Lecture 6:

$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Since \(A\) has all joins, \(g(b)\) is well-defined. To see that \(g\) is monotone, note that if \( b \le b' \) then

$$ \{a \in A : \; f(a) \le b \} \subseteq \{a \in A : \; f(a) \le b' \} $$ so

$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} \le \bigvee \{a \in A : \; f(a) \le b' \} = g(b') . $$ See why?

Next we show that \(f\) is the left adjoint of \(g\):

$$ f(a_0) \le b_0 \textrm{ if and only if } a_0 \le g(b_0) $$ for all \( a_0 \in A, b_0 \in B \).

To show this, first suppose \( f(a_0) \le b_0 \). Then \( a_0 \) is an element of \( \{ a \in A : f(a) \le b_0 \} \), so

$$ a_0 \le \bigvee \{ a \in A : f(a) \le b_0 \} = g(b_0) $$ by the definition of \( g \). So, we have \( a_0 \le g(b_0) \) as desired.

Conversely, suppose \( a_0 \le g(b_0) \). Then \(f(a_0) \le f(g(b_0)) \), so if we can show \(f(g(b_0)) \le b_0 \) then we'll have \( f(a_0) \le b_0 \). For this, note:

$$ f(g(b_0)) = f( \bigvee \{a \in A : \; f(a) \le b_0 \}) = \bigvee \{f(a) \in B : \; f(a) \le b_0 \} \le b_0 $$ where in the middle step we finally use the fact that \(f\) preserves joins. So, we have \( f(a_0) \le b_0 \) as desired - and we're done! \( \qquad \blacksquare \)
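
Here is a minimal computational sketch of this construction, assuming finite posets encoded as explicit element lists and comparison functions (the example poset, the map, and all names below are just illustrative choices, not anything fixed by the lecture). It builds \(g\) from \(f\) by the formula above and then checks the adjunction by brute force:

    from itertools import chain, combinations

    def subsets(X):
        """All subsets of X, as frozensets."""
        xs = sorted(X)
        return [frozenset(c) for c in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

    def join(elements, leq, family):
        """Least upper bound of `family` in the poset (elements, leq), or None if it doesn't exist."""
        uppers = [u for u in elements if all(leq(a, u) for a in family)]
        least = [u for u in uppers if all(leq(u, v) for v in uppers)]
        return least[0] if least else None

    # Example: A = P({1,2,3}) and B = P({1,2}), both ordered by inclusion,
    # with f the image map along h : {1,2,3} -> {1,2}.  Image maps preserve unions, i.e. all joins.
    h = {1: 1, 2: 1, 3: 2}
    A, B = subsets({1, 2, 3}), subsets({1, 2})
    leq = lambda s, t: s <= t                  # subset inclusion, in both posets
    f = lambda s: frozenset(h[x] for x in s)

    # The formula from the proof: g(b) is the join of everything that f sends below b.
    g = lambda b: join(A, leq, [a for a in A if leq(f(a), b)])

    # Check the adjunction: f(a) <= b if and only if a <= g(b).
    assert all((f(a) <= b) == (a <= g(b)) for a in A for b in B)

    # Here g turns out to be the preimage map along h.
    assert all(g(b) == frozenset(x for x in h if h[x] in b) for b in B)

The last check is no accident: the image map along any function is left adjoint to the preimage map, so the formula in the proof has to reproduce the preimage.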

So the connection between joins and left adjoints, or meets and right adjoints, is very strong. Next time, in my last lecture on Chapter 1, I'll explain this a bit more.

To read other lectures go here.

Comments

  • 1.
    edited April 17

    By the way, we can write the first theorem here more tersely if we use the notion of "image" from Lecture 9. If \(f : A \to B \) is any function, for any subset \(S \subseteq A \), its image under \(f\) is

    $$f_{!}(S) = \{b \in B: \; b = f(a) \textrm{ for some } a \in S\} .$$ Another way to write this is

    $$f_{!}(S) = \{f(a) : \; a \in S\} .$$ Thus, we can take our first theorem and rewrite it this way:

    Theorem. If a monotone function \( f: A \to B \) between posets is a left adjoint and the join of \( S \subseteq A \) exists, then

    $$ f (\bigvee S ) = \bigvee f_{!} (S). $$ Similarly, if a monotone function \(g : B \to A \) between posets is a right adjoint and the meet of \( S \subseteq B \) exists, then

    $$ g(\bigwedge S) = \bigwedge g_{!}(S). $$ I didn't write it this way because I thought it would confuse some people, but it's very cute.

  • 2.

    How does one capture the TeX code for your lectures? Pandoc doesn't do a good job.

  • 3.
    edited April 17

    The source code of all my lectures is here:

    http://math.ucr.edu/home/baez/mathematical/7_sketches/

    If you redistribute them in any form, please do so in a way that acknowledges that I'm the author.

    I keep catching more mistakes and fixing them, so these files keep changing.

  • 4.

    Walter Tautz: As a heads-up, the \(\TeX\) content in John's posts is slightly different from standard, due to the changes needed for this forum (to keep mathjax and markdown compatible).

    If you go to the gitbook repository (github.com/rabuf/applied-category-theory) that we've got going, you'll find most of the changes already done for you (mostly getting rid of extra backslashes). The only other change you might want is to replace every instance of \( and \) in the markdown files in the repository with $$ or $.

  • 5.

    Good users of LaTeX use something like \( \) and \[ \] rather than $ $ and $$ $$. For global search and replace, it's very useful to be able to easily tell what's a left parenthesis and what's a right parenthesis!

  • 6.

    Ah, good to know. I was just trying to make things work with the (slightly) different versions of markdown and mathjax. So in that case, the gitbook version is mostly correct except you'll want to replace the few instances of $$ $$ with \[ \]. I've already stripped out the other extra backslashes that this forum required. I'll convert the gitbook markdown files to use that convention instead of $$ tonight.

  • 7.
    edited April 17

    I forgot that I could use \[ \] here. There are actually about 30 or 50 appearances of it in the lectures. Let's see if I can fix them.

  • 8.

    I think I fixed them all.

  • 9.

    Requiring that a poset has all joins is a pretty strong condition. I'm wondering if there are circumstances where we could get away with something weaker, e.g. all finite joins is OK provided that the partial order on \(A\) satisfies such-and-such a condition, or that the function \(f\) is "nice" in some to-be-specified way.

  • 10.
    edited April 18

    Anindya - good point! Next time I'll give weaker conditions that are sufficient to get an adjoint. These are in fact necessary and sufficient. You can guess them yourself by looking at the proof of this theorem.

  • 11.

    Way back in Lecture 6 we saw how a right adjoint for \(f\), if it exists, has to send \(b\) to the sup of \(G(b) = \{a \in A : \; f(a) \le b \}\).

    So it is necessary and sufficient for \(A\) to have all sups of sets of form \(G(b)\).

    It strikes me that \(G(b)\) is a "down-set": \(a'' \le a' \in G(b) \implies f(a'') \le f(a') \le b \implies a'' \in G(b)\)

    The down-sets of \(A\) form a topology on \(A\) – if that topology is compact then \(G(b)\) is the union of a finite number of basic downsets of form \(\{ a \in A : a \le a_i \}\), and then the sup of \(G(b)\) would be the join of the \(a_i\). Not sure how useful this is tho.

  • 12.
    edited April 18

    Anindya - yes, for \(f : A \to B\) to have a right adjoint, and thus be a left adjoint, it's enough for all the sets

    $$ \{a \in A : \; f(a) \le b \} $$ to have joins. (The "join" of any subset of \(A\) is its sup, or least upper bound.) I mention this in Lecture 17.

    There must be some really nice relationships between this fact and the topology you mention. Someone must understand them, but I don't. Does anyone here?

  • 13.

    The down set (or downward closed set) and its dual, the up set (upper set, as the book calls it), are the poset analog of over- and undercategories.

    In this sense, a downset is like the covering space over a topology, but I'm just parroting the nLab.

    nLab: over category (https://ncatlab.org/nlab/show/over+category)

    Puzzle 1 KEP: Prove that the downset of the top element in a poset \( P \) is equivalent to the poset \( P \) itself.

    Puzzle 2 KEP: Prove that the upset of the bottom element in a poset \( P \) is equivalent to the poset \( P \) itself.

  • 14.
    edited April 19

    KEP1: Recall that the principal downset for an element \(z\) is defined as \(\operatorname{\downarrow}(z) = \{y \in P : y \le z\}\). Assume that \(P\) has a maximum, \(\top\), and let \(x \in P\). Then \(x \le \top\) by definition, so \(x \in \operatorname{\downarrow}(\top)\). Therefore \(P \subseteq \operatorname{\downarrow}(\top)\). Since we also have \(\operatorname{\downarrow}(\top) \subseteq P\) by definition, we know \(P = \operatorname{\downarrow}(\top)\).

    KEP2: By duality (upsets in \(P\) are downsets in \(P^{op}\)).

  • 15.
    edited April 19

    For those who like pictures, here's an upset in the power set \(P\{1,2,3,4\}\), as drawn by Pgdx:

    [Image: an upset in the power set \(P\{1,2,3,4\}\), from http://math.ucr.edu/home/baez/mathematical/7_sketches/upset.png]

    You can see that if anything is marked in green, so is everything above it: that's what makes it an "upset". It's a "principal upset", because it consists of all \(S \subseteq \{1,2,3,4\} \) with \(\{1\} \subseteq S \).

    Puzzle. If \(X\) is some set, can there be upsets of \(P(X)\) that aren't principal upsets?

  • 16.
    edited April 19

    Sure -- take the union of \(\operatorname{\uparrow}(\{1\})\) and \(\operatorname{\uparrow}(\{3\})\). This is an upset because anything above an element in the union is above an element in one of the individual upsets, hence is itself in one of the upsets, and therefore is in the union. If this were a principal upset, \(\{1\} \wedge \{3\} = \emptyset\) would need to be in the upset (since principality implies a single minimal element). But it isn't, so it's not.

    Puzzle JMC1: Show that a principal upset must have a unique minimal element.

    Puzzle JMC2: Show that if \(x, y \in \operatorname{\uparrow}(z)\) and \(x \wedge y\) exists, then \(x \wedge y \in \operatorname{\uparrow}(z)\).

  • 17.
    edited April 22

    In #14 Jonathan mentioned principal upsets for a given element of the poset. This can be easily generalized for sets instead of elements: for a poset \((P, \leq)\) and \(S \subseteq P\), we can define \(\operatorname{\uparrow}(S) = \{y \in P : s \in S \Longrightarrow y \ge s\}\). I'm venturing:

    Puzzle JL1: Can you give an example of an upper set that doesn't arise this way?

  • 18.
    edited April 21

    JL1: Let \(U\) be an upset, and let \(x \in \operatorname{\uparrow}(U)\). Then there is some \(y \in U\) such that \(y \le x\). Since \(U\) is an upset, this means that \(x \in U\). Therefore, \(U = \operatorname{\uparrow}(U)\). So every upset \(U\) is of the form \(\operatorname{\uparrow}(S)\) for some \(S\); at the very least, take \(S\) to be \(U\) itself, forgetting its order structure.

    I think there’s an adjoint relationship happening here too, between \(\operatorname{\uparrow}\) and the forgetful function from posets to sets. In fact, \(\operatorname{\uparrow}\) also seems to be a closure operator, which as someone else noted is what monads are for posets.

  • 19.

    Hi Jonathan, you have shown nicely that my puzzle was a bit silly, I should have pondered it more. But I'm trying to retort with a better worded one (not particularly hard):

    Puzzle JL2: Can you give an example of an upper set which has no meet?

  • 20.

    JL2: Yes; it's based on the "ironic" poset \(P\) below.

      *
     / \
    *   *
    

    The upper set \(P\) has no meet.

  • 21.

    Hi Jonathan, right, that does the trick. I've tried to build a question whose answer was something along the lines of the reals strictly above zero, thus being nontrivially bottomless, but it seems I can't figure out how to properly puzzletize it without you ruining the attempt! :)

  • 22.

    I'm trying to think of how posets simplify the conditions of the general https://ncatlab.org/nlab/show/adjoint+functor+theorem. First, it is clear that limits and colimits in posets are only meets and joins, because diagram commutation is nothing more than the existence of "morphisms", i.e. relation elements. And of course, for this same reason posets are locally small. At first glance, I'm not sure how one might verify the "solution set condition" here; and I'm not familiar enough with "cototal" to judge. The other criterion is the codomain being "well-powered", i.e. every object has a small poset of subobjects, which is trivially true; but lastly, I am not sure about "cogenerators", because the classic cogenerator is the subobject classifier, but Poset does not have one: https://math.stackexchange.com/questions/1650277/subobject-classifier-for-partial-orders/1650434. So, I've brought up a bunch of fancy stuff without a conclusion! But if anyone has any insight on this, it would be appreciated. (Ah, I think the "solution set" and "small cogenerating set" are trivially true if the posets are small... are we assuming that?)

  • 23.
    edited April 24

    [Image: warning sign]

    Christian wrote:

    (Ah, I think the "solution set" and "small cogenerating set" are trivially true if the posets are small... are we assuming that?)

    Yes. We're assuming that our preorders and posets are sets: that's what "small" means. So, we haven't been talking about "partially ordered proper classes".

    A good example of a partially ordered proper class is the class of all sets, ordered by inclusion. A good example of a totally ordered proper class is the class of ordinals; another nice one is the class of cardinals.

    You can get nasty stuff to happen with these. For starters, note that every subset of these posets has a join, but not every subclass. For example, the union of any set of sets is a set, but the union of a class of sets may not be a set. Or: every set of ordinals has a least upper bound, but not the class of all ordinals, since the least upper bound of all ordinals would be an ordinal \(\Omega\) that's greater than or equal to all others, but we must have \(\Omega < \Omega + 1\). Similarly, we can't have a largest cardinal, since any cardinal \(\alpha\) must have \(\alpha < 2^\alpha\).

    I believe one can parlay these problems into an example of a monotone map between partially ordered classes that preserves all small joins but does not have a right adjoint. But I'm not succeeding in inventing one! So I'll record this:

    Puzzle. Can we find an example of a monotone map between partially ordered classes that preserves all small joins but does not have a right adjoint?

    The nLab article you cite very nicely points out the special way in which preorders simplify the conditions on the adjoint functor theorem. The key result, a real shocker, is this:

    Theorem (Freyd). If a small category has all small limits, or all small colimits, it must be a preorder.

    The nLab page complete small category gives the strikingly simple proof.

    The impact of this shocker is that while this theorem is true:

    Theorem. If \(C\) and \(D\) are small categories, \(C\) has all small limits, and \(F : C \to D\) preserves all small limits, \(F\) has a left adjoint.

    it's of limited usefulness, because these conditions imply that \(C\) is a preorder! So we need a subtler theorem with weaker conditions if we want to handle the case when \(C\) is a full-fledged category, not a mere preorder.

  • 24.

    Jesus wrote:

    I've tried to build a question whose answer was something along the lines of the reals strictly above zero, thus being nontrivially bottomless, but it seems I can't figure out how to properly puzzletize it, without you ruining the attempt.

    Having given away the answer it's not a good puzzle anymore, but one can ask:

    Puzzle. Find a totally ordered set \(S\) with an upper set that's not principal, i.e. not of the form \( \{s \in S: \, s \ge x\} \).

  • 25.

    Apologies for digging into pre-history; I have a question about a theorem from Chapter 1. This came up in a discussion with Grant Roy at the Caltech Study Group.

    Theorem (Adjoint functor theorem for preorders).

    Suppose Q is a preorder that has all meets and let P be any preorder. A monotone map g : Q → P preserves meets if and only if it is a right adjoint. Similarly, if P has all joins and Q is any preorder, a monotone map f : P → Q preserves joins if and only if it is a left adjoint.

    The question deals with a case where \( P \) does not have all joins, and we have a right adjoint which is not a surjection; specifically, when \( P \) has two elements which are symmetric (in a sense which should be clearer in the example below), and the right adjoint maps to one, but not the other.

    Based on the theorem part for right adjoints, let's take the "bowtie" poset

    [Image: the "bowtie" poset, https://i.imgur.com/Km2PosD.png]

    as our \(P\), and \( 1 \to 2 \to 3 \) as our \(Q\).

    Choose \( g(1) = g(2) = g(3) = a \) for \( g : Q \to P \) .

    To the best of my understanding, \( Q \) has all meets and \( g \) is a monotone map which preserves them, therefore \(g \) is a right adjoint according to the above stated theorem.

    Constructing its left adjoint, based on the candidate in the proof: \[ f(p) = \bigwedge \{q \in Q : \; p \le_P g(q) \} \] we find \( f(a) = f(c) = f(d) = 1 \) and \( f(b) = \bigwedge \emptyset = ? \);

    Applying brute force search for an assignment for \(f(b)\), I could not find one satisfying a Galois connection in this case; however, this case seems to meet the conditions of the theorem, doesn't it?

    Could anyone help me identify my mistake?

  • 26.

    I actually think you're free to map \(b\) anywhere.

    The function \(f\) simply has that freedom, while \(g\) doesn't and forgets everything.

  • 27.
    edited June 29

    Thanks Keith. That was my line of thinking as well, yet, there must be something wrong with my arguments as the theorem provides a construction which guarantees a left adjoint.

    My question is: what's wrong?

    It is either that the example above does not satisfy the settings of the theorem, or that I have a mistake in the calculations (or both... ;-)). But I can't seem to find either.

  • 28.
    edited June 30

    @Eldad Afik, that's a neat question! I wish this special case had occurred to me before, but even after trying to carefully read all the material I missed this one.

    The formula for constructing \(f\) predicts that \(f(b)=\bigwedge_Q \emptyset=\top_Q=3\), but as you said that choice (and all other choices) fails to satisfy the Galois condition.

    After walking through the proof much more carefully than I did before, I think I found the source of the problem. Although it appears that \(g\) "preserves all meets", this actually isn't the case. I believe the term "all meets" should be inspected in terms of the power set of \(Q\). For most subsets, i.e.

    $$ \{1\}, \{2\}, \{3\}, \{1,2\}, \{1,3\}, \{2,3\}, \{1,2,3\}, $$ your map \(g\) sends each subset to \(\{a\}\), and the meet is preserved.

    However, writing out the power set elements explicitly reveals that we left one out--the empty subset! "Preserve meets" means that we can transport the subset along \(g\) before or after taking the meet and get the same answer, \(g(\bigwedge_Q S)=\bigwedge_P g_!(S)\).

    If \(g\) does "preserve meets" in the case of the empty set, we get

    $$ g({\bigwedge}_Q \emptyset) = {\bigwedge}_P g_!(\emptyset) = {\bigwedge}_P \emptyset $$ But the meet of \(\emptyset\) does not exist in \(P\)! (\(P\) has no top element.) So, if I'm looking at this in the correct way, the term "preserves all meets" doesn't just impose a condition on the map, but also a little universal condition on the codomain set: the codomain of the map must have a meet for the empty set. That was not at all obvious to me when the theorem just says that \(P\) is "any poset".

    In your example, the codomain \(P\) has no top element, so no meet of \(\emptyset\), so it's impossible for any map into that poset to "preserve all meets".

  • 29.
    edited July 6

    @Pete Morcos

    Many thanks for your detailed reply!

    I have a question about: \(f(b)=\bigwedge_Q \emptyset=\top_Q=3\)

    How do we compare the elements of the empty set to any other elements?

    Is the above statement true because all \( q \in Q \) satisfy \( q \le q' \) for any \( q' \in \emptyset \)?

    Using this logic I could also say that all \( q \in Q \) satisfy \( q' \le q \) for any \( q' \in \emptyset \), which means every \( q \in Q \) has \( q \equiv q' \) for any \( q' \in \emptyset \); so, using transitivity of equivalence, we would have that all \( q \in Q \) are equivalent, wouldn't we?

    Another question arises when we take another preorder, thanks to Grant Roy, taking

    [Image: https://i.imgur.com/ByTrB3Q.png]

    as our \( P \); \( Q \) is still \( 1 \to 2 \to 3 \) as above, and \( g : Q \to P \):

    \[ g(3) = d \] \[ g(2) = b \] \[ g(1) = a \]

    \( g\) is monotone, meet preserving, and even the empty set should find a meet in our new \( P\), which is \(e\).

    The construction for a left adjoint would assign \( f(e) = \bigwedge_Q \emptyset = 1 \)

    and the Galois connection is not satisfied again?!

    I'd be happy for your insights.

  • 30.
    edited July 7

    Hi Eldad, John has replied to a few queries in other lectures about how to properly deal with the empty set as a special case (I don't have links to his comments at the moment). I would like a formal symbolic way of expressing the issue, but I haven't had the chance to give that a try.

    The quantifiers \(\forall\) and \(\exists\) seem to take two opposite (dual?) approaches. I suspect that John's discussion in Chapter 1 relating the quantifiers to adjunctions is relevant, but I don't have specifics for you. When dealing with the empty set, \(\forall\) seems to be "vacuously" true, as they say, and \(\exists\) is always false.

    I kind of think that what's going on is that there is an implicit set union happening inside the definitions of many of the operations we care about, and the empty set is the identity element for the union operator. The \(\forall\) quantifier seems to involve the logical operator "and", so any empty list of logical tests becomes true, the identity for "and". Similarly, the \(\exists\) quantifier is somehow related to the logical operator "or", and so applying an empty list of tests in that context yields false, the identity for "or".

    I had hoped to try to formalize these guesses and ask John about it back when he was answering people's questions about the empty set, but the class moves too quickly for me to ask my questions while they're still relevant.

    Anyway, even if my guesses aren't correct, I believe that it's correct that \(\forall\) applied to zero logical tests simply returns a value of true, and \(\exists\) applied to zero logical tests returns false.

    The definition of a meet or join involves \(\forall\), so when applied to the empty set, the test just drops out. "The largest element \(q\) in \(Q\) such that \(\forall q' \in S, q \leq q'\)." The second clause is vacuously true for \(S = \emptyset\), so it becomes \(\forall(\textrm{nothing to test}) = \textrm{true}\). Thus the meet of the empty set reduces to just "The largest element \(q\) in \(Q\)".
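
    As a quick sanity check of this convention (a tiny Python illustration, just to make the "identity element for and/or" point concrete): a universal statement over an empty collection comes out true, while an existential one comes out false.

        >>> all([])   # "for every x in the empty set, P(x)" is vacuously true
        True
        >>> any([])   # "there exists x in the empty set with P(x)" is false
        False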

    I believe your proposed example fails to be a problem because to demonstrate transitivity, there is an implied \(\exists\). To apply the transitivity rule, you must produce an actual intermediate element \(q'\) and show that \(q \cong q'\) and \(q' \cong q''\). But in your example, \(q'\) is just a formal symbol used to sweep through all elements of the empty set; it never takes on any actual values, so there is no transitivity argument to be made.


    Now finally getting to your poset example, I'm going to assume that I had things correct in my previous post, that the meet of the empty set is the top element. That means \(\bigwedge_Q \emptyset = 3\) and \(\bigwedge_P \emptyset = e\). The proposed function \(g\) doesn't send 3 to e, so it's not meet preserving, and can't be a right adjoint.

    However, your \(g\) does seem to preserve joins (and \(Q\) has all joins), so I think it can be a left adjoint. For the empty set, the joins are \(1\) and \(a\), which checks out. The right adjoint would send \((a,c \mapsto 1), (b \mapsto 2), (d,e \mapsto 3)\), if I calculated it correctly.

  • 31.
    edited July 7

    Hi Pete, I agree with your view on quantifiers and that one can view \(\forall x \in X: \phi(x)\) as \(\bigwedge_{x \in X} \phi(x)\) (similarly for "exists"). In a posetal category meets are products and the empty product (a limit of an empty diagram) is the terminal object. Specifically in the boolean poset \(false \leq true\), \(false\) is the initial object (reflecting the fact ex falso quodlibet, from falsehood it all follows) and \(true\) is the terminal one. But we'd need more apparatus to turn analogy into theorem.

    Added later: in classical FOL you can say \(\forall x \in \emptyset: \phi(x) \overset{(1)}{\iff} \neg \neg \forall x \in \emptyset: \phi(x) \overset{(2)}\iff \neg \exists x \in \emptyset: \neg \phi(x)\), but \(\exists x \in \emptyset: \neg \phi(x)\) is false (no candidate \(x\)), so \(\forall x \in \emptyset: \phi(x)\) holds.

    (1) by double negation, (2) by quantificational De Morgan.

  • 32.
    edited July 8

    In reply to #30

    That means \(\bigwedge_Q \emptyset = 3\) and \(\bigwedge_P \emptyset = e\). The proposed function \(g\) doesn't send 3 to e, so it's not meet preserving, and can't be a right adjoint.

    Good catch, guess I still have to get used to checking the \( \emptyset \).

    Thank you so much for taking the time to participate in this discussion and reply in detail.

    Same goes for the first part of your reply, where you explain the application of the quantifiers. I greatly appreciate it, thank you Pete!
