
Lecture 6 - Chapter 1: Computing Adjoints


Comments

  • 51.

    John Baez wrote:

    I believe that if the formulas give well-defined functions, these are necessarily the desired adjoints!

    In puzzle 18, we have seen that \(f_!\) doesn't have a left adjoint if \(f\) is not injective (comments 5 and 6). Does this mean that the function given by the formula is not well-defined? Marc Kaufmann indicated the function given by the formula in comment 49:

    $$g_!(R) = \bigcap \left\{S \in PX: R \subseteq f_!(S) \right\}.$$

    Could someone help me see why the function above is not well-defined when \(f\) is not injective? (I'm sorry to come back to this puzzle, but it seems I'm missing something very elementary.)

  • 52.
    edited May 2018

    Dan - okay, you've shot down my "belief" above, which was a very silly belief to begin with. Let me abstract your argument a bit, to make it easier for me to understand.

    Assume \(A\) and \(B\) are posets with all joins and \(f : A \to B\) is a monotone function.

    Then we can define a function \(g: B \to A\) by

    $$ g(b) = \bigvee \{a \in A : \; f(a) \le_B b \} , $$ which is well-defined since \(A\) has all joins. My "belief" was that \(g\) is a right adjoint to \(f\). On the other hand, in Lecture 16 we proved that if \(f\) has a right adjoint, \(f\) must preserve all joins that exist in \(A\). Since we're assuming \(A\) has all joins, \(f\) must preserve all joins.

    So, in our situation, my "belief" says that \(f\) has a right adjoint.... but if this is true, \(f\) must preserve all joins.

    If this were true, we could draw a remarkable conclusion: whenever \(f : A \to B\) is a monotone function between posets with all joins, \(f\) preserves all joins.

    But this is false! Your counterexample shows this is false, but there are even simpler counterexamples.

    So, my "belief" was a load of hooey! :P

  • 53.

    I get a 404 Not Found for the charts in Comments 32 through 35. Is this just me? If not, can the charts be restored -- they seem to be very important. Thank you. - Charlie

  • 54.

    Michael Hong - those are your charts Charlie is talking about!

  • 55.

    Hey Charlie

    I had to upgrade my server environment and am right now in the process of restoring my database. The pictures will be back soon. Sorry for the inconvenience...

  • 56.

    Thanks Michael - No problem. There's plenty to keep me busy before I get to them!

  • 57.

    Don't know where to ask this, so I'll try here. Not a technical question, but anyway: how do you pronounce "poset"?

  • 58.

    I say "poe-set", as in Edgar Allan Poe, rhyming with "show".

  • 59.

    Thanks Jonathan. I like it. The only time I heard it pronounced was in a video lecture and the lecturer's (hesitant) pronunciation was as in "deposit", which doesn't sound "right" to me.

  • 60.

    Hey Michael (#55) - I see the diagrams are back! Thank you! I find them very helpful! - Charlie

  • 61.
    edited May 2018

    John, in Comment 25, there is something that confuses me:

    Here's an extreme example: for any poset \(A\) whatsoever, the identity function \(1_A : A \to A\) has a left and right adjoint, namely itself. This is easy to check straight from the definition:

    $$ a \le a \textrm{ if and only if } a \le a . $$

    Shouldn't this be \(a \le b \textrm{ if and only if } a \le b \), no? Or why should this only hold for \(a\) on both sides of the comparison?

  • 62.

    "Imaginary" meets and joins?

    Dealing with meets and joins seems to require peppering the discussion with annoying qualifiers, "if the meet exists." I was wondering if there was a way to do things more loosely.

    In the early days of real algebra, polynomial roots sometimes existed, and sometimes didn't, such as for \(x^2+1=0\). Then they tried just asserting that a solution exists, and got the imaginary numbers.

    At first, they thought of the imaginaries as only "scaffolding", which were used on the way to a solution, but couldn't be part of the final answer. Have folks found it useful to do something similar with meets and joins? Just assert that they all exist, by augmenting the preorder with new "imaginary" elements \(\boxed\top\) and \(\boxed\bot\) that are greater/less than the entire preorder.

    A formula such as

    $$(a \wedge b) \vee a = a$$ seems like a "reasonable" identity, because we don't care what the meet is, just the fact that it must be less than \(a\). But it fails to evaluate when the meet doesn't exist, even though we're not even going to look at it. The identity feels like it should be "locally" true no matter what the "global" structure of the preorder is. By permitting it to formally resolve to \(\boxed\bot \vee a = a\) whenever necessary, the identity evaluates to true for all preorders.

    There may be terrible pitfalls with this approach, so I'm curious whether it's something that has been explored and found useful?
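
    Here is a small sketch of that idea in Python (a throwaway encoding of a finite preorder, purely for illustration): adjoin formal elements \(\boxed\bot\) and \(\boxed\top\), let meets and joins formally resolve to them whenever a genuine one is missing, and then \((p \wedge q) \vee p\) evaluates to \(p\) even on a two-element antichain:

    ```python
    # A two-element antichain {p, q}: p and q have no meet in the original preorder.
    elements = {"p", "q"}
    relations = set()                     # no order relations beyond reflexivity

    BOT, TOP = "_bot_", "_top_"           # the "imaginary" bottom and top elements

    def leq(x, y):
        return x == y or x == BOT or y == TOP or (x, y) in relations

    def meet(x, y):
        # the genuine greatest lower bound if there is one; otherwise resolve to BOT
        lower = [z for z in elements | {BOT, TOP} if leq(z, x) and leq(z, y)]
        best = [z for z in lower if all(leq(w, z) for w in lower)]
        return best[0] if best else BOT

    def join(x, y):
        # the genuine least upper bound if there is one; otherwise resolve to TOP
        upper = [z for z in elements | {BOT, TOP} if leq(x, z) and leq(y, z)]
        best = [z for z in upper if all(leq(z, w) for w in upper)]
        return best[0] if best else TOP

    print(meet("p", "q"))                 # prints _bot_ : the "imaginary" meet
    print(join(meet("p", "q"), "p"))      # prints p     : the identity evaluates
    ```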

    Comment Source:**"Imaginary" meets and joins?** Dealing with meets and joins seems to require peppering the discussion with annoying qualifiers, "if the meet exists." I was wondering if there was a way to do things more loosely. In the early days of real algebra, polynomial roots sometimes existed, and sometimes didn't, such as for \\(x^2+1=0\\). Then they tried just asserting that a solution exists, and got the imaginary numbers. At first, they thought of the imaginaries as only "scaffolding", which were used on the way to a solution, but couldn't be part of the final answer. Have folks found it useful to do something similar with meets and joins? Just assert that they all exist, by augmenting the preorder with new "imaginary" elements \\(\boxed\top\\) and \\(\boxed\bot\\) that are greater/less than the entire preorder. A formula such as $$(a \wedge b) \vee a = a$$ seems like a "reasonable" identity, because we don't care what the meet is, just the fact that it must be less than \\(a\\). But it fails to evaluate when the meet doesn't exist, even though we're not even going to look at it. The identity feels like it should be "locally" true no matter what the "global" structure of the preorder is. By permitting it to formally resolve to \\(\boxed\bot\\ \vee a = a\\) whenever necessary, the identity evaluates to true for all preorders. There may be terrible pitfalls with this approach, so I'm curious whether it's something that has been explored and found useful?
  • 63.
    edited May 2018

    What a fantastic question, Pete! I don't know what the answer is, but there may be some leads to follow if we dig at it some more.

    There are a couple of ways that a set can fail to have a join. As you noted, one is that there is simply no common upper bound on the set, and this can be rectified by including an explicit top element \(\top\). But another is that there can be too many least upper bounds. Category theorists may be okay with this, as working "up to isomorphism" seems to be standard practice (see the principle of equivalence), but I do rather appreciate the assurance that I'm working with a uniquely determined object. (I could justify this as a conservative way to avoid breaking the principle of equivalence, since identical objects are necessarily equivalent and cannot be distinguished.)

    So let's consider how to introduce unique "imaginary" joins. From real analysis, we know that one way to construct the reals is by using Cauchy sequences of rationals. This essentially occurs by identifying all sequences of a certain kind (i.e. Cauchy sequences) that do not converge to a rational value, and adding enough points so that they do converge (to one of these new values). We can try something similar.

    Consider a subset \(X\) of our preorder. We want to define the join \(\bigvee X\) of our subset somehow, even if there doesn't exist an actual element that is the unique join. Well, the set of all upper bounds of \(X\) is a thing (even if it's just the empty set \(\emptyset\)). This set is an upper set; we might consider this to be the "certain kind of" set analogous to Cauchy sequences that we want to use to complete our preorder. We want every such set to have an infimum -- a limit point of the set -- giving the least upper bound of \(X\). As with the real numbers, we might investigate taking the collection of all upper sets to be our completed preorder.

    In fact, this condition is precisely what characterizes directed complete partial orders, or "dcpo"s. The quote from Wikipedia below shows some more of the connections:

    Requiring the existence of directed suprema can be motivated by viewing directed sets as generalized approximation sequences and suprema as limits of the respective (approximative) computations. This intuition, in the context of denotational semantics, was the motivation behind the development of domain theory.

    It would be very interesting to see if there are any results giving a means of producing a dcpo from a preorder! My hunch is that it would only really make sense for posets, and even then there might need to be some extra conditions before such a procedure gives meaningful results.

    (EDIT: It would probably make more sense to consider the lower set \(\operatorname{\downarrow}X\), rather than the upper set of upper bounds of \(X\), too. Seems like an open set / closed set kind of distinction, but it might matter here.)

  • 64.
    edited May 2018

    It would be very interesting to see if there are any results giving a means of producing a dcpo from a preorder! My hunch is that it would only really make sense for posets, and even then there might need to be some extra conditions before such a procedure gives meaningful results.

    I know of one result.

    A construction related to the Cauchy sequence construction of the reals from the rationals is the method of Dedekind cuts. In lattice theory, this is generalized to the Dedekind-MacNeille completion of a preorder. The Dedekind-MacNeille completion of a preorder \(\preceq\) is the smallest complete lattice that embeds \(\preceq\). Your intuition is very close to the construction. The members of the lattice are the lower sets of the upper sets of \(\preceq\). See Davey and Priestley (2002), §7.38 The Dedekind–MacNeille completion for more info.
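
    For concreteness, here is a brute-force sketch of that construction in Python on the smallest interesting example, the two-element antichain (my own encoding; a real implementation wouldn't iterate over all subsets):

    ```python
    from itertools import chain, combinations

    # Example poset: the two-element antichain {x, y} (only reflexive relations).
    elements = ["x", "y"]
    def leq(a, b):
        return a == b

    def upper_bounds(S):
        return frozenset(u for u in elements if all(leq(s, u) for s in S))

    def lower_bounds(S):
        return frozenset(l for l in elements if all(leq(l, s) for s in S))

    def subsets(xs):
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    # Dedekind-MacNeille completion: the distinct "cuts" lower_bounds(upper_bounds(S)),
    # ordered by inclusion.  It is always a complete lattice containing the original poset.
    cuts = {lower_bounds(upper_bounds(S)) for S in subsets(elements)}
    print(sorted(map(sorted, cuts), key=lambda c: (len(c), c)))
    # [[], ['x'], ['y'], ['x', 'y']] -- the antichain plus a new bottom and a new top
    ```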

    It would probably make more sense to consider the lower set \(\operatorname{\downarrow}X\), rather than the upper set of upper bounds of \(X\), too. Seems like an open set / closed set kind of distinction, but it might matter here.

    Your intuition is right on the money once again! Every preorder \(\preceq\) gives rise to an Alexandrov topology. The open sets of that topology are upper sets and the closed sets are lower sets. Every Alexandrov topology in turn gives rise to a preorder via its specialization order. The specialization order of the Alexandrov topology for a preorder \(\preceq\) is isomorphic to \(\preceq\) itself. In fact there is a full blown categorical duality where \(\textsf{PreOrder} \cong \textsf{AlexandrovTop}^{op}\).

  • 65.
    edited May 2018

    @Matthew: oh man, that's awesome! I was thinking of mentioning Dedekind cuts, but I wasn't sure which one would make the most sense in context, so I stuck to Cauchy sequences. I'll definitely have to read a bit more into the Dedekind-MacNeille completion.

    (Incidentally, the Wikipedia page suggests it's only defined for posets rather than all preorders, which lines up with my other hunch. Is that accurate?)

    EDIT: Also, digging a little further, it looks like the concept we wanted after all was just that of a complete lattice. Lattices assign all sets suprema, but DCPOs only assign directed sets suprema. DCPOs are neat, and quite useful for denotational semantics of programming languages, but aren't quite what we wanted here.

  • 66.

    Thanks guys! I started late and didn't think anyone was still visiting the Chapter 1 threads.

    That really opened my eyes. The use of all upper bounds as a way to get as tight a limit as possible is great, and so much better than my crude suggestion.

    I'd never heard of completions before, so that's a great concept to add to my brain. The Dedekind-MacNeille completion is nifty. So from a quick skim of the Wikipedia page, it seems that the intuition is to take the identity \(\bigwedge \uparrow a = a\) and "widen" it to talk about sets instead of elements, which forces new elements to appear as needed in the completed lattice.

    I wonder if there's a way to use the completed lattice to back-transfer useful formulas from the completed lattice into the original poset, like my identity above. It would be cool if one could identify which formulas were "safe" to use in the poset -- technically incorrect but still formally usable. I guess the answer is to just use the completed lattice instead.

    I appreciate the links and references.

  • 67.
    edited May 2018

    John Baez #26

    Daniel Fava had written:

    (Couldn't figure out { so going with [ instead)

    For some weird reason, on this particular installation of MathJax we need to type

    \\{

    to get a {, and similarly for }. Two backslashes before { and }. I don't know why. I fixed the problem for you.

    To make this less mysterious to anyone curious, it's caused by multiple layers of text processing. The wiki software gives special meaning to some characters like *, so if you want the actual character instead of italics, you have to "escape" it by typing \*, which means "treat the character after the \ as raw text".

    I'm guessing that MathJax uses the same escape character to identify math commands. That means you have to escape the escape as \\, which the wiki reads as an escaped backslash character, emitting a single backslash. So you type \\( and MathJax actually sees the processed text \(.

    This meta-escaping grows exponentially with the number of processing layers! I've had to use quadruple backslashes to represent a single backslash in some programming situations. (I chose to write this comment without using yellow literal text, so there's a sea of backslashes in the source text.)
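
    A tiny generic illustration of that in Python (not tied to this forum's actual software stack): if every processing layer treats the backslash as its escape character, the number of backslashes you have to type doubles with each layer.

    ```python
    def escape_backslashes(text):
        # one processing layer: each backslash must itself be escaped
        return text.replace("\\", "\\\\")

    s = "\\("            # the two characters \( that MathJax should ultimately see
    for layer in range(3):
        print(layer, s)
        s = escape_backslashes(s)   # what you'd have to type one layer further out
    # 0 \(
    # 1 \\(
    # 2 \\\\(   -- doubling at every layer, i.e. exponential growth
    ```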

  • 68.
    edited May 2018

    Let me try to spell out my understanding of the left/right directions (please correct me if I'm mistaken):

    "Left" and "right" only make sense with respect to something, and in Galois connections they are (meant to be) with respect to the given function (i.e. the function whose inverse/adjoint we are supposed to find), such that when we draw out the construction as Michael Hong does, an instance (arrow) of left adjoint is adjoined to the left of an instance of the given function, and an instance of right adjoint is adjoined to the right of an instance of the given function, both in the sense that the adjoint arrow's end point is connected to the given function arrow's start point.

    A tricky issue here is that there is still another potential reference point in the construction for us to interpret the "left/right" direction, namely the (perhaps nonexistent) correct answer to the inverse-finding problem. A left adjoint arrow, involving the ceiling function, is to the right of the "correct answer", while a right adjoint arrow, involving the floor function, is to the left of the "correct answer".

    In sum, if we change the reference point, the "left/right" relations also switch. So it's important to bear in mind that the "left/right" direction of an adjoint is with respect to the given function it's adjoined to.

    It took me a while to figure this out, and perhaps it's only me, but in case anyone else is having the same difficulty understanding the definition of Galois connections, I hope these notes will be of some help. :-)

  • 69.

    Julio Song wrote:

    "Left" and "right" only make sense with respect to something, and in Galois connections they are (meant to be) with respect to the given function (i.e. the function whose inverse/adjoint we are supposed to find)

    Julio, I'd say you're right on. The one thing I'd add is that people often say things like "This function \(f\) is a left adjoint" without a reference to any other function, and what they mean is "There exists a function \(g\) such that \(f\) is the left adjoint of \(g\)." This is okay, because \(f\) can only be the left adjoint of at most one function, namely its right adjoint if it has one, so "being a left [or right] adjoint" is equivalent to the property of "having a right [or left] adjoint". So, totally free of any other context, we can say things like

    "The ceiling function from R to Z is a left adjoint." (It has a right adjoint, namely the inclusion Z to R.)

    or

    "The floor function from R to Z is a right adjoint." (The inclusion Z to R is its left adjoint.)

    or

    "The inclusion Z to R is both a left and right adjoint." (Its right adjoint is ceiling and its left adjoint is floor.)

  • 70.
    edited May 2018

    I had a similar confusion.

    Consider three functions \( L, C, R \), where \( C : A \rightarrow B \) and \( L, R : B \rightarrow A \), and where \( L \) and \( R \) are left and right adjoint to \( C \) respectively.

    This implies that \( C \) is left adjoint to \( R \) and right adjoint to \( L \).

    \( R \) is the approximate inverse of \( C \) from above and \( L \) is the approximate inverse of \( C \) from below. I am tempted to call them limits.

    The error is in treating \( L \) and \( R \) as if they were left and right adjoints of each other; they are not.

    What is the relationship between \( L \) and \( R \)? It appears that they act as bounds for the true value in \( C \).

  • 71.

    Fredrick said

    The error is in treating \( L \) and \( R \) as if they were left and right adjoints of each other; they are not.

    I feel this is spot-on but see two similar formulations in Fong & Spivak's book:

    1. "We say that \(f\) is the left adjoint and \(g\) is the right adjoint of the Galois connection." (Definition 1.74)

    2. "We say that \(L\) is left adjoint to \(R\) (and that \(R\) is right adjoint to \(L\)) ..." (Definition 3.56)

    Are these merely loose talk, or rather special cases where L and R are indeed left/right adjoints of each other?

    I think Owen's comment also addresses this issue, though with a different view (if I have understood correctly)

    [...] \(f\) can only be the left adjoint of at most one function, namely its right adjoint if it has one, so "being a left [or right] adjoint" is equivalent to the property of "having a right [or left] adjoint".

    Anyway, now it looks like there are at least three potential reference points for the "left/right" direction of adjunction: (i) the given function, (ii) the (perhaps nonexistent) correct answer, (iii) the other adjoint in the same pair. :-?

  • 72.

    I think there's a confusion between two very separate and distinct scenarios, where similar names are used for different things.

    Consider three functions \( L, C, R \) where \( C : A \rightarrow B \) and \( L, R : B \rightarrow A \). Where \( L \) and \( R \) are left and right adjoint to \( C \) respectively.

    The core issue here might be the fact that \(C : A \to B\) is both a left adjoint and a right adjoint. We have two adjunctions here: \(L \dashv C\) and \(C \dashv R\).

    However, for an adjunction in general, we might name the two halves of the adjunction \(L \dashv R\). In this scenario, there is no \(C\).

  • 73.

    By the way, I found the following remark on Milewski's blog:

    [T]he functor L is called the left adjoint to the functor R, while the functor R is the right adjoint to L. (Of course, left and right make sense only if you draw your diagrams one particular way.)

    which seems to confirm that in this scenario the "left/right" direction has a totally different connotation from that in the finding-inverse-for-a-given-function problem, exactly as Jonathan said above:

    I think there's a confusion between two very separate and distinct scenarios, where similar names are used for different things.

    No idea if these two distinct scenarios for "left/right" have any deeper inner connection.

  • 74.
    edited June 2018

    The inner connection is primarily that the "best approximation" problem can be solved using adjoint functors. Single adjunctions \(L \dashv R\) are the fundamental thing; it just happens that sometimes you can build a chain of adjunctions \(A \dashv B \dashv C\). In such a chain, \(B\) is both a left adjoint (to \(C\)) and a right adjoint (to \(A\)), and it's these adjunctions that formalize how \(B\) is a best under-approximation to \(C\) and a best over-approximation to \(A\). For the same reason, \(A\) is a best under-approximation to \(B\), and \(C\) is a best over-approximation to \(B\).

    As an aside, here's a crazy mnemonic tying the left/right idea to the lesser/greater idea. In Japanese, there's a kanji 下, which can be read "shita" (shhtah; isn't linguistics fun?). It means "below", or "under", and it looks an awful lot like a rotated \(\dashv\). So the thing on the left side of the adjunction is "below" the thing on the right.

  • 75.

    Thanks for the additional remark, @Jonathan! And great mnemonic haha (actually 丅 itself is also a CJK character, and so is 丄 of course :P).

  • 76.
    edited June 2018

    I think there is some discrepancy between what I think "approaching from above (or below)" means and what @JohnBaez does.

    Here is my picture of the situation for the right adjoint of the function \(2\times -\):

    [Image "adjoint2x": https://sites.google.com/site/testjustformevigg/_/rsrc/1528642702021/7sketches/20180609_094818.jpg?height=291&width=320]

    The arrows for the floor function approach their target from above...

    The same is true for the mini surjective function example by @Michael Hong:

    [Image "miniSurjective": https://sites.google.com/site/testjustformevigg/7sketches/20180609mini.jpg?attredirects=0]

    Here, if \(f\dashv r\) (i.e. \(r\) is the right adjoint of \(f\)), then \(f(1)= * \leq * \), so \(1\leq r( * )\), and hence \(r( * )=1\), which I can only describe as "approaching from above".

    In my Hasse diagrams, the order increases from top to bottom, instead of from left to right. I am too used to doing it that way. Sorry for the hand-drawn pics; I don't have the skills or the patience to make them with the computer.

  • 77.

    About the question on right and left posed by Julio Song: I always draw the left adjoint going from left to right and the right adjoint going from right to left. So, if \(f\) is a left adjoint to some function \(g\) and I want to find out if \(f\) has a left adjoint, I draw a new diagram with \(f\) going from right to left.

  • 78.
    edited June 2018

    @Allison+Burgers @Allison+Burgers @Allison Burgers @Allison%20Burgers @"Allison Burgers" @Allison\ Burgers @Allison\%20Burgers

    (maybe the spaces were a bad idea)

    ANYway, coming in late to this discussion, I don't see any other answers to your puzzles. My linear algebra is very rusty, but here's my go:

    AB1:

    A linear subspace X in V can be identified by a basis B, a subset of V. T(X) can be defined as the linear subspace of W identified by T(B), that is, by the set of T(b) for each b in B.

    If X ≤ Y, then X is a subspace of Y in V, and we can construct a basis C of Y such that C is a superset of B. Then T(B) is a subset of T(C), and therefore T(X) is a subspace of T(Y); that is, T(X) ≤ T(Y) under the order operator in W. Thus T is monotone.

    AB2:

    Let K be the kernel of T, i.e. the "largest" subspace of V such that T(K) = 0, and let P be the subspace of W that is orthogonal to the image of T(V). The right adjoint should basically map P to 0, apply a linear inverse of T from T(V) to (a subspace of) V, and always "add" K.

    AFAICT, a right adjoint should always exist, although for a highly degenerate transform it will be very simple. If T maps V (and thus any subspace) to zero, the right adjoint will map any subspace of W, including W itself and zero, to the whole space V.

    A left adjoint should apply a linear inverse of T from T(V) to (a subspace of) V, but it should not "add" K. However, only if P is the zero subspace can we construct a left adjoint; otherwise we run into trouble.

    AB3:

    This is where my linear algebra knowledge runs out!

    [Image "vector space linear transform graph": https://getupstreettheater.files.wordpress.com/2018/06/vector_space_linear_transform_adjoints2.png]

    On the left is an example in which T is surjective, so we get a left adjoint in addition to the right adjoint. On the right is an example in which the complement of Image(T) in W is nonzero, so the right adjoint is richer, in a sense, but the left adjoint is not possible.

  • 79.
    edited June 2018

    I find it helpful to break down the Galois connection requirement into the "if" and the "only if". Eg for \(f(a) = 2a\), I label the left and right adjoints \(L\) and \(R\), respectively, and look at the various conditions \(\{C_{implication}\}\):

    (least) $$C_{f \leftarrow R}: f(a) \leq b \Leftarrow a \leq R(b) \textrm{ means } R(b) \leq_{\Bbb{R}} \Bigl\lfloor \frac{b}{2} \Bigr\rfloor$$

    (upper bound) $$C_{f \to R}: f(a) \leq b \Rightarrow a \leq R(b) \textrm{ means } R(b) \geq_{\Bbb{R}} \Bigl\lfloor \frac{b}{2} \Bigr\rfloor$$

    Similarly,

    (greatest) $$C_{L \to f}: L(b) \leq a \Rightarrow b \leq f(a) \textrm{ means } L(b) \geq_{\Bbb{R}} \Bigl\lceil \frac{b}{2} \Bigr\rceil$$

    (lower bound) $$C_{L \leftarrow f}: L(b) \leq a \Leftarrow b \leq f(a) \textrm{ means } L(b) \leq_{\Bbb{R}} \Bigl\lceil \frac{b}{2} \Bigr\rceil$$

    In the vector subspace linear transform example,

    $$C_{T \leftarrow R}: T(v) \subseteq w \Leftarrow v \subseteq R(w)$$ means \(R(w) = R(w \cap Image(T)) \; \forall w \subseteq W\), i.e. \(Image(T)^{\perp}\) is in a very loose sense in the "kernel" of \(R\) (\(R(Image(T)^{\perp}) = R(\Bbb{0})\)), and \(R\) "at most" gives you the linear inverse on \(Image(T)\),

    $$C_{T \to R}: T(v) \subseteq w \Rightarrow v \subseteq R(w)$$ means \(Ker(T) \subseteq R(w) \; \forall w \subseteq W\) and \(R\) "at least" gives you the linear inverse on \(Image(T)\),

    $$C_{L \to T}: L(w) \subseteq v \Rightarrow w \subseteq T(v)$$ means \(w \subseteq Image(T) \; \forall w \subseteq W\), i.e. \(Image(T)^{\perp} = \Bbb{0}\) (otherwise \(L\) is not defined), and \(L\) is "at least" the linear inverse on \(Image(T)\), and

    $$C_{L \leftarrow T}: L(w) \subseteq v \Leftarrow w \subseteq T(v)$$ means \(L(w) \cap Ker(T) = \Bbb{0} \; \forall w \subseteq W\) and \(L\) is "at most" the linear inverse on \(Image(T)\).

    \(C_{T \leftarrow R}\) gives you "least", \(C_{T \to R}\) gives you "upper bound"; together, \(R\) is the "least upper bound"; \(C_{L \to T}\) and \(C_{L \leftarrow T}\) are "greatest", "lower bound", respectively.
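
    Here is a quick numerical sanity check of these conditions in Python (sampling integers for \(a\) and a grid of rationals standing in for the reals \(b\), reading \(f : \mathbb{Z} \to \mathbb{R}\), \(f(a) = 2a\), \(R(b) = \lfloor b/2 \rfloor\), \(L(b) = \lceil b/2 \rceil\)):

    ```python
    import math
    from fractions import Fraction

    f = lambda a: 2 * a                      # f : Z -> R, f(a) = 2a
    R = lambda b: math.floor(b / 2)          # candidate right adjoint
    L = lambda b: math.ceil(b / 2)           # candidate left adjoint

    ints  = range(-10, 11)
    reals = [Fraction(n, 4) for n in range(-40, 41)]   # sample grid of "reals"

    # C_{f -> R} and C_{f <- R} together:  f(a) <= b  iff  a <= R(b)
    assert all((f(a) <= b) == (a <= R(b)) for a in ints for b in reals)

    # C_{L -> f} and C_{L <- f} together:  L(b) <= a  iff  b <= f(a)
    assert all((L(b) <= a) == (b <= f(a)) for a in ints for b in reals)

    print("both Galois conditions hold on the sample grid")
    ```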

  • 80.

    It's also interesting to note that for \(f(a) = 2a\), \(R_f(b) \leq L_f(b) \; \forall b\) but for the vector space example, when a left adjoint \(L_T\) is defined, \(L_T(w) \subseteq R_T(w) \; \forall w \subseteq W\).

    This is why I find that the "cautious versus generous" phrasing has not helped me develop my intuition here.

    Note that by taking the product of the two partial orders, we can construct a partial order for which the left and right adjoints do not have either of the two relations \(L \leq R\) or \(R \leq L\) consistently across their whole range.

  • 81.

    Fong and Spivak (p.32, Theorem 1.115) give an "adjoint functor theorem for preorders", which I guess is just the adjoint-computing method in this lecture. In addition, Fong and Spivak explicitly say that this method is only applicable when the two preorders have all meets/joins (respectively), which John only mentioned in passing:

    In a poset, our desired least upper bound may still not exist. But if it does, it's unique,...

    I think this "applicability condition" is worth emphasizing, because a) it is important for beginners to realize and bear in mind that we cannot readily use the formulae (albeit handy) for all preorders, and b) the condition is probably generalizable after the notions of limit/colimit are introduced, and it is good for students to realize the conceptual interrelation.

  • 82.

    I think the formula still applies, technically: if the particular meet(s) needed exist(s), then so does the adjoint; otherwise neither does.
