
Lecture 8 - Chapter 1: The Logic of Subsets

I'd like to tell you about two kinds of logic. In both, we start with a set \( X \) of "states of the world" and build a set of statements about the world, also known as "propositions". In the first, propositions correspond to subsets of \( X \). In the second, propositions correspond to partitions of \( X \). In both approaches we get a poset of propositions where the partial order is "implication", written \( \implies \).

The first kind of logic is very familiar. We could call it "subset logic", but it's part of what people usually call "classical logic". This is the sort of logic we learn in school, assuming we learn any at all. The second kind of logic is less well known: it's called "partition logic". Interestingly, Fong and Spivak spend more time on the second kind.

I'll start by talking about the first kind.

Most of us learn the relation between propositions and subsets, at least implicitly, when we meet Venn diagrams:

[Image: a Venn diagram whose regions are sets of Latin, Greek and Cyrillic letters.]

This is a picture of some set \( X \) of "states of the world". But the world here is very tiny: it's just a letter. It can be any letter in the Latin, Greek or Cyrillic alphabet. Each region in the Venn diagram is a subset of \( X \): for example, the upper left circle contains all the letters in the Greek alphabet. But each region can also be seen as a proposition: a statement about the world. For example, the upper left circle corresponds to the proposition "The letter belongs to the Greek alphabet".

As a result, everything you can do with subsets of \( X \) turns into something you can do with propositions. Suppose \( P, Q, \) and \( R \) are subsets of \( X \). We can also think of these as propositions, and:

  • If \( P \subseteq Q \) we say the proposition \( P \) implies the proposition \( Q \), and we write \( P \implies Q \).

  • If \( P \cap Q = R \) we say the proposition \( P \textbf{ and } Q = R \).

  • If \( P \cup Q = R \) we say the proposition \( P \textbf{ or } Q = R \).

All the rules obeyed by "subset", "and" and "or" become rules obeyed by "implies", "and" and "or".
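If you like to experiment, here's a minimal sketch of this dictionary in Haskell, using Data.Set; the particular letters chosen for the two propositions are made up for illustration:

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- A proposition about a tiny world is the subset of states where it holds.
type Prop = Set Char

-- Two illustrative propositions (the particular letters are made up):
greek, vowel :: Prop
greek = Set.fromList "αβγδε"   -- "the letter is Greek"
vowel = Set.fromList "aeiouαε" -- "the letter is a vowel"

implies :: Prop -> Prop -> Bool
implies = Set.isSubsetOf       -- P implies Q  means  P ⊆ Q

pAnd, pOr :: Prop -> Prop -> Prop
pAnd = Set.intersection        -- "and" is intersection
pOr  = Set.union               -- "or" is union

-- (greek `pAnd` vowel) `implies` greek  evaluates to True:  P and Q implies P.
```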

I hope you know this already, but if you don't, you're in luck: this is the most important thing you've heard all year! Please think about it and ask questions until it revolutionizes the way you think about logic.

But really, all this stuff is about one particular way of getting a poset from the set \( X \).

For any set \( X \) the power set of \( X \) is the collection of all subsets of \( X \). We call it \( P(X) \). It's a poset, where the partial ordering is \( \subseteq \).
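If you want to play with power sets concretely, here's a one-line sketch in Haskell, with lists standing in for finite sets:

```haskell
import Data.List (subsequences)

-- All subsets of a finite set, with lists standing in for sets.
powerSet :: [a] -> [[a]]
powerSet = subsequences

-- powerSet "xyz" = ["","x","y","xy","z","xz","yz","xyz"]: 2^3 = 8 subsets.
```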

For example, here is a picture of the poset \( P(X) \) when \( X = \{x,y,z\} \):

[Image: the poset \( P(X) \) for \( X = \{x,y,z\} \), drawn as a 3-dimensional cube.]

As you can see, it looks like a 3-dimensional cube. Here's a picture of \( P(X) \) when \( X \) has 4 elements:

[Image: the poset \( P(X) \) for a 4-element set \( X \), drawn as a 4-dimensional cube with subsets labeled by strings of 0s and 1s.]

In this picture we say whether each element is in or out of the subset by writing a 1 or 0. This time we get a 4-dimensional cube.
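The 0/1 labels suggest a handy encoding: a subset of an \(n\)-element set is exactly a string of \(n\) bits, one corner of an \(n\)-dimensional cube. A quick sketch:

```haskell
-- Each subset of an n-element set is a length-n string of 0s and 1s,
-- recording for each element whether it's out (0) or in (1).
bitStrings :: Int -> [String]
bitStrings 0 = [""]
bitStrings n = [ b : rest | b <- "01", rest <- bitStrings (n - 1) ]

-- length (bitStrings 4) == 16: the corners of the 4-dimensional cube.
```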

What's the union of two subsets \( S, T \subseteq X \)? It's the smallest subset of \( X \) that contains both \( S \) and \( T \) as subsets. This is an example of a concept we can define in any poset:

Definition. Given a poset \( (A, \le) \), the join of \( a, b \in A \), if it exists, is the least element \( c \in A \) such that \( a \le c \) and \( b \le c \). We denote the join of \( a \) and \( b \) as \( a \vee b \).
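For a finite poset you can find joins by brute-force search straight from this definition. Here's a sketch; representing a poset as an element list together with a \(\le\) test is just a convenient choice for illustration:

```haskell
-- Brute-force join in a finite poset, given its elements and its order.
-- Returns Nothing when a and b have no least upper bound.
join :: [a] -> (a -> a -> Bool) -> a -> a -> Maybe a
join els leq a b =
  let uppers = [ c | c <- els, a `leq` c, b `leq` c ]  -- all upper bounds
  in case [ c | c <- uppers, all (c `leq`) uppers ] of -- those below all others
       (c:_) -> Just c
       []    -> Nothing
```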

Quite generally we can try to think of any poset as a poset of propositions. Then \( \vee \) means "or". In the logic we're studying today, this poset is \( P(X) \) and \( \vee \) is just "union", \( \cup \).

Similarly, what's the intersection of two subsets \( S, T \subseteq X \)? It's the largest subset of \( X \) that is contained in both \( S \) and \( T \). Again this is an example of a concept we can define in any poset:

Definition. Given a poset \( (A, \le) \), the meet of \( a, b \in A \), if it exists, is the greatest element \( c \in A \) such that \( c \le a \) and \( c \le b \). We denote the meet of \( a \) and \( b \) as \( a \wedge b \).
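Dually, a meet is just a join in the opposite order, so the brute-force sketch above can be reused by flipping \(\le\):

```haskell
-- The meet is the join computed in the opposite order.
meet :: [a] -> (a -> a -> Bool) -> a -> a -> Maybe a
meet els leq = join els (flip leq)

-- For subsets ordered by inclusion, join is union and meet is intersection.
```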

When we think of a poset as a poset of propositions, \( \wedge \) means "and". When our poset is \( P(X) \), \( \wedge \) is just "intersection", \( \cap \).

We could go on with this, and if this were a course on classical logic I would. But this is a course on applied category theory! So, we shouldn't just stick with a fixed set \( X \). We should see what happens when we let it vary! We get a poset of propositions for each set \( X \), but all these posets are related to each other.

I'll talk about this more next time, but let me give you a teaser now. Say we have two sets \( X \) and \( Y \) and a function \( f : X \to Y \). Then we get a monotone map from the poset \( P(Y) \) to the poset \( P(X) \), called

$$ f^* : P(Y) \to P(X) $$ For any \( S \in P(Y) \), the set \( f^*(S) \in P(X) \) is defined like this:

$$ f^*(S) = \{ x \in X : \; f(x) \in S \} $$ Next time, I'll show you this monotone map has both a left and a right adjoint! And these turn out to be connected to the logical concepts of "there exists" and "for all". I believe this was first discovered by the great category theorist Bill Lawvere.
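Here's a small sketch of \( f^* \) for finite sets, again using Data.Set; since a Haskell function doesn't carry its domain around, we pass the set \( X \) explicitly:

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- f* : P(Y) -> P(X) sends S to { x in X : f(x) in S }.
preimage :: (Ord x, Ord y) => Set x -> (x -> y) -> Set y -> Set x
preimage xs f s = Set.filter (\x -> f x `Set.member` s) xs

-- Monotone: if S ⊆ T then preimage xs f S ⊆ preimage xs f T,
-- since the membership test only gets easier to pass.
```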

So you see, I haven't given up talking about left and right adjoints. I'm really just getting ready to explain how they show up in logic: first in good old classical "subset logic", and then in the weird new "partition logic".

To read other lectures go here.

Comments

  • 1.
    edited April 4

    I'm really just getting ready to explain how they show up in logic: first in good old classical "subset logic", and then in the weird new "partition logic".

    I am pretty excited to read what's next!

    I wanted to share a few puzzles I ran into a while ago related to these topics.

    First, some definitions...

    Definition. A poset \((A,\leq,\wedge,\vee)\) with a join \(\vee\) and a meet \(\wedge\) is called a lattice. (Note: Lattices must obey the anti-symmetry law!)

    Definition. The product poset of two posets \((A,\leq_A)\) and \((B,\leq_B)\) is \((A \times B, \leq_{A\times B})\) where

    $$ (a_1,b_1) \leq_{A\times B} (a_2,b_2) \Longleftrightarrow a_1 \leq_A a_2 \text{ and } b_1 \leq_B b_2 $$ Definition. Let \((A,\leq)\) be a poset. The diagonal function \(\Delta : A \to A\times A\) is defined:

    $$ \Delta(a) := (a,a) $$
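    Here's a quick Haskell transcription of the last two definitions, in case anyone wants to experiment; encoding an order as a \(\le\) test is my own choice here:

    ```haskell
    -- The product order on pairs, built from orders on the components.
    leqProd :: (a -> a -> Bool) -> (b -> b -> Bool)
            -> (a, b) -> (a, b) -> Bool
    leqProd leqA leqB (a1, b1) (a2, b2) = leqA a1 a2 && leqB b1 b2

    -- The diagonal function.
    diag :: a -> (a, a)
    diag a = (a, a)
    ```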


    Let \(A\) be a lattice.

    MD Puzzle 1: Show that \(\Delta\) is monotonically increasing on \(\leq_{A\times A}\)

    MD Puzzle 2: Find the right adjoint \(r : A\times A \to A\) to \(\Delta\) such that:

    $$ \Delta(x) \leq_{A\times A} (y,z) \Longleftrightarrow x \leq_{A} r(y,z) $$ MD Puzzle 3: Find the left adjoint \(l : A\times A \to A\) to \(\Delta\) such that:

    $$ l(x,y) \leq_{A} z \Longleftrightarrow (x,y) \leq_{A\times A} \Delta(z) $$ MD Puzzle 4: Consider \(\mathbb{N}\) under the partial ordering \(\cdot\ |\ \cdot\), where

    $$ a\ |\ b \Longleftrightarrow a \text{ divides } b $$ What are the adjoints \(l\) and \(r\) in this case?

  • 2.

    These are very good puzzles, Matthew! More magic tricks with adjoints! I won't give away the answers. I'll just reassure everyone that "right Galois adjoint" means the same thing as what I'm calling "right adjoint", and "left Galois adjoint" means the same thing as "left adjoint".

    Since I explained how to compute adjoints in Lecture 6, all of you can work out the answers to MD Puzzles 2 and 3 by simply computing the adjoints.

  • 3.
    edited April 4

    Hey! I just changed my question to match your nomenclature.

    More magic tricks with adjoints!

    Category theory in general all feels like magic to me...!

  • 4.

    Attempted answers:

    MD 1. \(\Delta\) is monotonic, because \(a \leq b \to (a,a) \leq (b,b)\), by definition of \(\leq_{A\times A}\)

    MD 2. By the method in Lecture 6, r(y,z) = least upper bound of X = \(\{x : \Delta (x) \leq (y,z) \}\). Since X is the set of elements of A less than or equal to min(y,z), r(y,z) is min(y,z).

    MD 3. By duality, l(y,z) = max(y,z)

    MD 4.

    r: least upper bound of X = \(\{x : \Delta (x) \leq (y,z) \}\): least common multiple

    l: greatest lower bound of X = \(\{x : \Delta (x) \geq (y,z) \}\) : greatest common divisor

    I think I've been sloppy and got some of this flipped - to be fixed later.

  • 5.

    I'm basically just going to copy Matthew Doty's puzzles but with lexicographical order:

    Definition. The lexicographical order of two posets \((A,\leq_A)\) and \((B,\leq_B)\) is \((A \times B, \leq^{lex})\) where

    $$ (a_1,b_1) \leq^{lex} (a_2,b_2) \Longleftrightarrow a_1 \lt_A a_2 \text{ or } (a_1 =_A a_2 \text{ and } b_1 \leq_B b_2) $$
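    A Haskell transcription of this order, for experimenting; I read \(a_1 \lt_A a_2\) as \(a_1 \leq_A a_2\) and \(a_1 \neq a_2\), as discussed further down the thread:

    ```haskell
    -- Lexicographic order on pairs, where x <_A y means x <=_A y and x /= y.
    leqLex :: Eq a => (a -> a -> Bool) -> (b -> b -> Bool)
           -> (a, b) -> (a, b) -> Bool
    leqLex leqA leqB (a1, b1) (a2, b2) =
      (leqA a1 a2 && a1 /= a2) || (a1 == a2 && leqB b1 b2)
    ```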


    Let \(A\) be a lattice.

    AV Puzzle 1: Show that \(\Delta\) is monotonically increasing on \(\leq^{lex}\)

    AV Puzzle 2: Find the right adjoint \(r : A\times A \to A\) to \(\Delta\) such that:

    $$ \Delta(x) \leq^{lex} (y,z) \Longleftrightarrow x \leq_{A} r(y,z) $$ AV Puzzle 3: Find the left adjoint \(l : A\times A \to A\) to \(\Delta\) such that:

    $$ l(x,y) \leq_{A} z \Longleftrightarrow (x,y) \leq^{lex} \Delta(z) $$ I think there are solutions but I could be wrong.

  • 6.

    Reuben wrote:

    MD 2. By the method in Lecture 6, r(y,z) = least upper bound of X = {x:Δ(x)≤(y,z)}

    • Since X is the set of elements of A less than min(y,z), r(y,z) is min(y,z).

    This looks very good - except for one thing. Your calculation was right, but you jumped to a conclusion at the end.

    In a totally ordered set either \(y \le z\) or \(z \le y\), or both (in which case \(y = z\)), so the minimum min(y,z) exists: it's the smaller of \(y\) and \(z\) (or if they're equal, it's both).

    But Matthew Doty's puzzle is extremely interesting, perhaps even more interesting, when our poset is not totally ordered. In this case \(\textrm{min}(y,z)\) is no longer the best answer to the puzzle, because the minimum may not exist, but the answer may still exist.

    (For example, consider the poset \( P(X) \) of all subsets of \(X\). This is not totally ordered, so it's easy to have two subsets \(S , T \subseteq X \), neither of which is smaller than the other.)

    Similarly for this:

    MD 3. By duality, l(y,z) = max(y,z).

  • 7.
    edited April 5

    Alex - that's an interesting puzzle that I'd never thought about. I will restrain myself from trying to solve it now, because I need to write Lecture 9! I hope someone solves it. If not, I'll have to.

  • 8.

    After posting my puzzles, I realized that I was also assuming that A and B were totally ordered in my own solution. I haven't yet thought about the existence of a solution in the more general case of posets.

    I think Reuben's solutions can be generalized to posets by replacing min and max with meet and join respectively, using his same reasoning.

  • 9.
    edited April 5

    Here's my shot at Alex Varga's fascinating puzzles, for partially ordered sets. (I'm assuming a partial order so that \(x<y\) means the same as "\(x\leq y\) and \(x\neq y\)"; I'm not sure what \(x<y\) should mean for preorders.)

    AV1: Let's suppose \(x\leq y\). We want to show that \((x,x)\leq (y,y)\) in the lexicographic order, i.e. \(x<y\) or (\(x=y\) and \(x\leq y\)). The assumption \(x\leq y\) gives us two possibilities: \(x<y\) and \(x=y\). If \(x<y\) we have \((x,x)\leq (y,y)\) from its first criterion, and if \(x=y\) we have it from the second.

    AV2: We wish to find some function \(r(y,z)\) such that \(x\leq r(y,z)\iff (x,x)\leq (y,z)\). Expanding out the latter relation we have "\(x < y\) or (\(x=y\) and \(x\leq z\))". There are two cases: either \(y\leq z\) or \(y\not\leq z\). If \(y\leq z\), then "\(x < y\) or (\(x=y\) and \(x\leq z\))" is equivalent to "\(x\leq y\)", so \(r(y,z) = y\). If \(y\not\leq z\), then "\(x=y\) and \(x\leq z\)" is false for every \(x\), so "\(x < y\) or (\(x=y\) and \(x\leq z\))" is equivalent to "\(x<y\)". This is not a condition on \(x\) equivalent to one of the form \(x\leq r(y,z)\) unless \(y\) has some "predecessor" \(y'\), i.e. an element such that \(x<y\) if and only if \(x\leq y'\). If such a \(y'\) exists, then \[r(y,z) = \begin{cases}y\text{ if }y\leq z\\y'\text{ otherwise.}\end{cases}\] Otherwise, no such adjoint \(r\) exists.
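    One way to build confidence in this formula is a brute-force check on a total order with predecessors, say the integers, where \(y' = y - 1\); this little sketch is only an illustration, not part of the proof:

    ```haskell
    -- Owen's candidate right adjoint on the integers, where pred y = y - 1.
    rLex :: Integer -> Integer -> Integer
    rLex y z = if y <= z then y else y - 1

    -- Check the adjunction law: (x,x) <=lex (y,z) iff x <= rLex y z.
    checkRLex :: Bool
    checkRLex = and
      [ (x < y || (x == y && x <= z)) == (x <= rLex y z)
      | x <- [-5..5], y <- [-5..5], z <- [-5..5] ]  -- evaluates to True
    ```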

    I imagine AV3 will be similar but I haven't worked it out.

  • 10.

    Alex wrote:

    After posting my puzzles, I realized that I was also assuming that A and B were totally ordered in my own solution.

    Okay, good!

    I haven't yet thought about the existence of a solution in the more general case of posets.

    When you do, you'll see that it's easy and closely connected to some of the main concepts of this course.

  • 11.

    First off, kudos to Alex and Owen! These are some great problems, and the solutions are super insightful.

    @John Baez

    Following Owen's response here, it's not necessary to demand a total order on \(\leq\).

    Please correct me if I am mistaken, but it suffices to demand that \(\leq\) have immediate predecessors (for the right adjoint).

    Dually, to solve AV3, we should demand that \(\leq\) have immediate successors. That is, there's a successor operation \((\cdot)^+ : A \to A\) such that for all \(x\), \(x > y\) if and only if \(x \geq y^+\).

    The left adjoint is given by:

    $$ l(y,z) = \begin{cases}y & \text{if }y\geq z\\y^+& \text{otherwise}\end{cases} $$ (it's just the same as the right adjoint with successor swapped for predecessor and the order flipped)

    Proof.

    This proof closely follows Owen's original proof here.

    We require that \(l\) satisfy the following law:

    $$ \begin{eqnarray} l(y,z) \leq_A x & \Longleftrightarrow & (y,z) \leq^{lex} (x,x) \\ & \Longleftrightarrow & (y,z) <^{lex} (x,x) \text { or } (y,z) = (x,x) \\ & \Longleftrightarrow & y <_A x \text{ or } (y = x \text{ and } z <_A x) \text { or } (y,z) = (x,x) \end{eqnarray} $$ Consider the case where \(y = z\). Then \(l(y,z) = l(y,y)\), and

    $$ \begin{eqnarray} l(y,y) \leq_A x & \Longleftrightarrow & y <_A x \text{ or } (y = y \text{ and } y <_A x) \text { or } (y,y) = (x,x) \\ & \Longleftrightarrow & y \leq_A x \end{eqnarray} $$ Which is satisfied since \(l(y,y) = y\) by definition.

    Next consider when \(y > z\). Then \(y = x \text{ and } z <_A x\) is equivalent to \(y = x\) and \((y,z) \neq (x,x)\), so:

    $$ \begin{eqnarray} l(y,z) \leq_A x & \Longleftrightarrow & y <_A x \text{ or } (y = x \text{ and } z <_A x) \text { or } (y,z) = (x,x) \\ & \Longleftrightarrow & y <_A x \text{ or } y = x \\ & \Longleftrightarrow & y \leq_A x \end{eqnarray} $$ Which again is satisfied since \(l(y,z) = y\) when \(y > z\).

    Finally assume \(y \not\geq z\), then \(y = x \text{ and } z <_A x\) and \((y,z) = (x,x)\) are always false, so \( l(y,z) \leq_A x \Longleftrightarrow y <_A x\). But we know that \(y^+ \leq_A x \Longleftrightarrow y <_A x\) for all \(x\), so \(l(y,z) = y^+\) is the right answer here.

    \(\Box\)

  • 12.
    edited April 5

    MD Puzzle 2: Find the right adjoint \(r : A\times A \to A\) to \(\Delta\) such that:

    $$ \Delta(x) \leq_{A\times A} (y,z) \Longleftrightarrow x \leq_{A} r(y,z) $$ Directly taking John's tutorial and dropping in the functions in the appropriate places we get,

    If \(\Delta: A \to A\times A\) has a right adjoint \(r : A\times A \to A\) and \(A\) is a poset, this right adjoint is unique and we have a formula for it:

    $$ r(x,y) = \bigvee \{a \in A : \; \Delta(a) \leq_{A\times A} (x,y) \} . $$ MD Puzzle 3: Find the left adjoint \(l : A\times A \to A\) to \(\Delta\) such that:

    $$ l(x,y) \leq_{A} z \Longleftrightarrow (x,y) \leq_{A\times A} \Delta(z) $$

    If \(\Delta: A \to A\times A\) has a left adjoint \(l : A\times A \to A\) and \(A\) is a poset, this left adjoint is unique and we have a formula for it:

    $$ l(x,y) = \bigwedge \{a \in A : \; (x,y) \leq_{A\times A} \Delta(a) \} .$$

  • 13.

    Hey Keith,

    Actually, you seem to be assuming a complete lattice.

    Can you see what the adjoints are in an ordinary lattice?

  • 14.

    Why is the above derivation in #12 assuming a complete lattice? What went wrong?

  • 15.
    edited April 6

    In these cases:

    $$ r(x,y) = \bigvee \{a \in A : \; \Delta(a) \leq_{A\times A} (x,y) \} $$ $$ l(x,y) = \bigwedge \{a \in A : \; (x,y) \leq_{A\times A} \Delta(a) \} $$ You're assuming that you can just take infima \(\bigwedge\) and suprema \(\bigvee\). In a simple lattice \((L, \wedge, \vee)\) you don't have those operations available.

  • 16.
    edited April 6

    So then, why is John using infs and sups when defining the unique formulas in lecture 6?

  • 17.

    So then, why is John using infs and sups when defining the unique formulas in lecture 6?

    For complete lattices such as power set algebras and \(\mathbb{R}\), those characterize adjoints.

    But as I try to show in Puzzle MD 4 (where I consider the natural numbers ordered by the evenly divides relation), you can have left and right adjoints even when you can't take infima and suprema.

    However, you can cheat out infima and suprema even if they don't exist by using Dedekind–MacNeille completions. I did this over in the Categories for the Working Hacker discussion. I can write a formal proof regarding them and Galois connections if you like.

  • 18.

    If not for me, for everyone else.

  • 19.
    edited April 6

    I've attempted Matthew Doty's puzzle and I've made the same mistake as Keith E. Peterson – I've plugged in John's formula from Lecture 6. However, if we assume a complete lattice, is the following reasoning correct?

    $$ \begin{eqnarray} r(x,y) &=& \bigvee \{a \in A : \; \Delta(a) \leq_{A\times A} (x,y) \} \\ &=& \bigvee \{a \in A : \; (a,a) \leq_{A\times A} (x,y) \} \\ &=& \bigvee \{a \in A : \; a \leq_A x, a \leq_A y \} \\ &=& x \vee y \end{eqnarray} $$ Edit: I think the last step is wrong: initially I thought that \(x\) and \(y\) are in the set \(R = \{a \in A : \; a \leq_A x, a \leq_A y \} \), but that's not true. The set \(R\) might contain one of them if there is a relation between \(x\) and \(y\) (either \(x \le y\) or \(y \le x\)), but generally there isn't (the set is not totally ordered).

  • 20.
    edited April 6

    I'll start by writing the answers to my questions. The answers to MD 2 and MD 3 are given succinctly by:

    $$ \vee \dashv \Delta \dashv \wedge $$ In MD 4 I ask about the special case of \(\mathbb{N}\) ordered by the evenly divides relation \(\cdot\ |\ \cdot\), and the answer is

    $$ lcm \dashv \Delta \dashv gcd $$ As I was saying, the poset \((\mathbb{N}, \cdot\ |\ \cdot)\) does not have all infima and suprema, so you can't use them directly to figure all of this out.
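    (Here's a quick brute-force check of both adjunction laws on small positive naturals, just as a sanity test:)

    ```haskell
    divides :: Integer -> Integer -> Bool
    divides a b = b `mod` a == 0

    -- lcm ⊣ Δ:  lcm x y | z  iff  x | z and y | z
    -- Δ ⊣ gcd:  x | gcd y z  iff  x | y and x | z
    checkAdjoints :: Bool
    checkAdjoints = and
      [ (lcm x y `divides` z) == (x `divides` z && y `divides` z)
        && (x `divides` gcd y z) == (x `divides` y && x `divides` z)
      | x <- [1..15], y <- [1..15], z <- [1..15] ]  -- evaluates to True
    ```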

    It's often nice to operate as if we have infima and suprema for a preorder \((P,\leq_P)\) even if it doesn't have them. Also, it would be nice if it were a poset!

    We can have all of this by constructing the smallest poset that has them and embeds \(P\). It is called the Dedekind–MacNeille completion of \(P\). It is related to the Dedekind cut construction of the real numbers.

    Dedekind–MacNeille gives rise to a monad \(\mathbf{DM}\) on the category of preorders with monotone maps as morphisms.

    Definition. For a given preorder \((P,\leq_P)\), let

    • \(A^u := \{p \in P\ :\ \forall a \in A. a \leq p\}\) and
    • \(A^d := \{p \in P\ :\ \forall a \in A. p \leq a\}\)

    Define \(\mathbf{DM}(P) := \{A \subseteq P\ :\ A = (A^u)^d\}\).

    The structure \((\mathbf{DM}(P), \subseteq, \bigcup, \bigcap)\) is the Dedekind–MacNeille completion of \(P\).

    The principal ideal function \((\cdot \downarrow) : P \to \mathbf{DM}(P)\) takes every element to its completion \(x \downarrow\;:= \{x\}^d\).

    Finally, we can lift every function \(f: A \to B\) between two posets \(A\) and \(B\) into a function between their completions \(f^{\mathbf{DM}} : \mathbf{DM}(A) \to \mathbf{DM}(B)\) using: $$ f^{\mathbf{DM}}(X) := ((f_!(X))^u)^d $$
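    For tiny finite examples you can compute this completion by brute force, straight from the definitions above; a sketch (it ranges over all subsets, so it's exponential and only for experiments):

    ```haskell
    import Data.List (nub, sort, subsequences)

    -- Upper and lower bounds of a subset, per the A^u and A^d definitions.
    upper, lower :: [a] -> (a -> a -> Bool) -> [a] -> [a]
    upper els leq as = [ p | p <- els, all (\a -> a `leq` p) as ]
    lower els leq as = [ p | p <- els, all (\a -> p `leq` a) as ]

    -- DM(P): the subsets fixed by A |-> (A^u)^d.  Since that map is a
    -- closure operator, its fixed points are exactly its image.
    dm :: Ord a => [a] -> (a -> a -> Bool) -> [[a]]
    dm els leq =
      nub [ sort (lower els leq (upper els leq a)) | a <- subsequences els ]
    ```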

    By convention it's nice to distinguish objects in the completed structures with the Fraktur font \(\mathfrak{a}, \mathfrak{b}, \ldots\)

    Lemma. Let \(f: A \to B\) and \(g: B \to A\) be maps on the preorders \(A\) and \(B\). Then:

    $$ \begin{eqnarray} f^{\mathbf{DM}} \dashv g^{\mathbf{DM}} & \Longleftrightarrow & \forall \mathfrak{b}. g^{\mathbf{DM}}(\mathfrak{b}) = \bigcup\{ \mathfrak{a} \in \mathbf{DM}(A)\ :\ f^{\mathbf{DM}}(\mathfrak{a}) \subseteq \mathfrak{b} \} \\ & \Longleftrightarrow & \forall \mathfrak{a}. f^{\mathbf{DM}}(\mathfrak{a}) = \bigcap\{ \mathfrak{b} \in \mathbf{DM}(B)\ :\ \mathfrak{b} \subseteq g^{\mathbf{DM}}(\mathfrak{a}) \} \end{eqnarray} $$ and $$ f^{\mathbf{DM}} \dashv g^{\mathbf{DM}} \Longrightarrow f \dashv g $$
    Proof. \(f^{\mathbf{DM}} \dashv g^{\mathbf{DM}} \Longrightarrow f \dashv g\) follows by naturality of the principal ideal operation \((\cdot\downarrow)\). See Davey and Priestley (2002), §7.38 The Dedekind–MacNeille completion. \(\Box\)

    So certainly proving a Galois adjunction in the Dedekind–MacNeille completion suffices to show a Galois connection.

    I think the converse of this Lemma is true too but I can't find a reference:

    Conjecture. \(f \dashv g \Longrightarrow f^{\mathbf{DM}} \dashv g^{\mathbf{DM}} \)

    This would give a full-on transfer theorem.

    If I find the time I will tackle this, but I also wanted to do some Haskell in another thread today, so I might not get around to it until the weekend.

  • 21.
    edited April 6

    Keith asked:

    So then, why is John using infs and sups when defining the unique formulas in lecture 6?

    Matthew replied:

    For complete lattices such as power set algebras and \(\mathbb{R}\), those characterize adjoints.

    No, that wasn't my reasoning. Right or wrong, my position in Lecture 6 was this: I wasn't assuming the posets in question have all infs and sups, I was claiming that they must have the infs and sups in question, given my assumptions.

    In more detail, suppose \(A\) and \(B\) are arbitrary preorders. If \(f : A \to B\) has a right adjoint \(g : B \to A\) and \(A\) is a poset, this right adjoint is unique and we have a formula for it:

    $$ g(b) = \bigvee \{a \in A : \; f(a) \le_B b \} . $$ Here's the proof, as fleshed out by Alex Chen.

    1) Since

    $$ f(a) \le_B b \textrm{ if and only if } a \le_A g(b) $$ we know \(g(b)\) is an upper bound of the set \( \{a \in A : \; f(a) \le_B b \} \). So, we just need to show it's the least upper bound.

    2) However, \(g(b)\) is in the set \( \{a \in A : \; f(a) \le_B b \} \), i.e. \(f(g(b)) \le_B b\). Why? Because

    $$ f(g(b)) \le_B b \textrm{ if and only if } g(b) \le_A g(b) . $$ So, any upper bound of this set must be \(\ge g(b)\). Thus, \(g(b)\) is a least upper bound.

    3) So far we haven't used the assumption that \(A\) is a poset. We need this only to conclude that \(g(b)\) is the unique least upper bound. In a poset, if a set has two least upper bounds \(x\) and \(x'\), we must have \(x \le x'\) and \(x' \le x\), so \(x = x'\). So, in a poset, least upper bounds are unique.

    Similarly, if \(g : B \to A\) has a left adjoint \(f : A \to B\) and \(B\) is a poset, this left adjoint is unique and we have a formula for it:

    $$ f(a) = \bigwedge \{b \in B : \; a \le_A g(b) \} .$$ To repeat: I'm not assuming or claiming the existence of any sups or infs other than those I'm actually using here. I'm saying that these particular sups and infs must exist given the assumptions.

  • 22.
    edited April 6

    To continue, let me give a silly trivial example that illustrates the point I just made. I gave this example in an answer to Daniel Fava in the Lecture 6 thread:

    For any poset \(A\) whatsoever, the identity function \(1_A : A \to A\) has a left and right adjoint, namely itself. This is easy to check straight from the definition:

    $$ a \le a \textrm{ if and only if } a \le a . $$ If you compute these adjoints using the formulas above, you see that it only requires sets of the form

    $$ \{ a \in A : \; a \le b \} $$ to have least upper bounds - and such a set indeed does, namely the element \(b\). Similarly, only sets of the form

    $$ \{b \in A: \; a \le b \} $$ need have greatest lower bounds - and such a set indeed does, namely the element \(a\).

    So, I'm claiming

    $$ \bigvee \{ a \in A : \; a \le b \} = b $$ and

    $$ \bigwedge \{b \in A: \; a \le b \} = a $$ whenever \(A\) is any poset.

    That said, if someone gave me a puzzle whose answer was \(a\), and I said the answer was \( \bigwedge \{b \in A: \; a \le b \} \), we'd have to say my answer wasn't the best available, because I failed to simplify it as much as possible.

  • 23.

    No, that wasn't my reasoning. Right or wrong, my position in Lecture 6 was this: I wasn't assuming the posets in question have all infs and sup, I was claiming that they must have the infs and sups in question, given my assumptions. ... That said, if someone gave me a puzzle whose answer was \(a\), and I said the answer was \( \bigvee \{b \in A: \; a \ge b \} \), we'd have to say my answer wasn't the best available, because I failed to simplify it as much as possible.

    Okay.

    I was thinking like this: the most general way to think about Galois connections is on preorders. But this is annoying because they don't obey the anti-symmetry rule, and they don't have infima and suprema, which would be natural.

    However, I'm arguing there's a place we can go: The Dedekind-MacNeille Completion Functor. If we embed our preorder up there, now we've got a real partial order like we've always wanted. We've even got sets, which is nice. And we've got suprema and infima. And, when I can get around to it, I think I can prove a transfer theorem for adjunctions and fixed points.

    (Transfer is my idea, but I got the idea of using it to transform preorders from Erné (1991).)

    Here's a parallel: the textbook way to think about derivatives in calculus is with the \(\delta\)-\(\epsilon\) formulation on a real closed Archimedean field. But this is annoying because there are a lot of quantifiers, and those are hard. Also, we don't have infinitesimals or their reciprocals, which are natural (for Euler and Leibniz, anyway). Even Archimedes found it natural to use infinitesimals and break the rules that are his namesake in his lost palimpsest. And we can have it all with Robinson's ultraproduct construction, and we have the transfer theorem for first order propositions.

    Now, I can see why maybe it's annoying. Nobody really uses nonstandard analysis for much because it's hard to motivate and ultraproducts are clumsy. But for some, it validates their intuition. And I say Dedekind-MacNeille completions do the same for preorders. But that's just my opinion.

  • 24.
    edited April 6

    Now, back to Keith E. Peterson's answers to Matthew Doty's puzzles. I'll only talk about this one:

    MD Puzzle 2: Find the right adjoint \(r : A\times A \to A\) to the monotone function \(\Delta : A \to A \times A \) given by

    $$ \Delta(x) = (x,x) .$$ Here is Keith's answer:


    Directly taking John's tutorial and dropping in the functions in the appropriate places we get:

    If \(\Delta: A \to A\times A\) has a right adjoint \(r : A\times A \to A\) and \(A\) is a poset, this right adjoint is unique and we have a formula for it:

    $$ r(x,y) = \bigvee \{a \in A : \; \Delta(a) \leq_{A\times A} (x,y) \} . $$


    I think this is correct. As I've emphasized, this formula does not require that all subsets of \(A\) have least upper bounds: if we assume \(\Delta\) has a right adjoint we know that the set in question has a least upper bound.

    But while Keith's answer is correct, we can get a simpler answer... which is the answer Matthew undoubtedly wanted. Namely, I claim:

    Theorem. If \(A\) is any poset and \(\Delta: A \to A\times A\) has a right adjoint \(r : A\times A \to A\) , this right adjoint is unique and

    $$ r(x,y) = x \wedge y .$$ In other words, \(r(x,y)\) is the greatest lower bound of the set \( \{x,y\} \).

    To ease our burden, let's prove this assuming that this greatest lower bound \(x \wedge y\) exists. (We can worry about why that assumption is true later.)

    For this, let's use the definition of right adjoint:

    $$ a \le_A r(x,y) \textrm{ if and only if } \Delta(a) \le_{A \times A} (x,y) $$ or in other words

    $$ a \le_A r(x,y) \textrm{ if and only if } a \le_A x \textrm{ and } a \le_A y. $$ To prove that \(r(x,y) = x \wedge y\) it's therefore enough to show

    $$ a \le_A x \wedge y \textrm{ if and only if } a \le_A x \textrm{ and } a \le_A y. $$ MD Puzzle 2'. Can someone show this?

  • 25.
    edited April 6

    MD Puzzle 2': Since $$ (a,b) \leq_{A\times A} (x,y) \\ \Longleftrightarrow \\ a \leq_A x \text{ and } b \leq_A y, $$ it then follows that $$ r(x,y) = \bigvee \{a \in A : \; \Delta(a) \leq_{A\times A} (x,y) \} \\ \Longleftrightarrow \\ r(x,y) = \bigvee \{a \in A : \; a \leq_A x \text{ and } a \leq_A y \},$$ which is indeed a long-form way to write $$ r(x,y) = x \wedge y .$$

  • 26.
    edited April 8

    To prove that \(r(x,y) = x \wedge y\) it's therefore enough to show

    $$ a \le_A x \wedge y \textrm{ if and only if } a \le_A x \textrm{ and } a \le_A y. $$
    MD Puzzle 2'. Can someone show this?

    Let's look directly at the definition of \(\wedge\) from Fong and Spivak, pg. 17:

    Definition 1.60. Let \((P, \leq)\) be a preorder, and let \(A \subseteq P\) be a subset. We say that an element \(p \in P\) is the meet of \(A\) if

    1. for all \(a \in A\), we have \(p \leq a\), and
    2. for all \(q\) such that \(q \leq a\) for all \(a \in A\), we have that \(q \leq p\).

    We write \(p = \bigwedge A\), or \(p = \bigwedge_{a \in A} a\). If \(A\) just consists of two elements, say \(A = \{a, b\}\), we can denote \(\bigwedge A\) simply by \(a \wedge b\).

    So let's assume \( a \le_A x\) and \(a \le_A y\). We want to show \(a \le_A x \wedge y \). By assumption we have \(\forall z \in \{x,y\}. a \leq z\). Then by (2) in Definition 1.60 we have \(a \leq \bigwedge \{x,y\}\), which can be rewritten as \(a \leq x \wedge y\) according to Fong and Spivak's shorthand.

    Next let's assume \(a \le_A x \wedge y \). We want to show \( a \le_A x\) and \(a \le_A y\). Our assumption \(a \le_A x \wedge y \) is shorthand for \(a \leq \bigwedge \{x,y\}\). By (1) we have \(\forall z \in \{x,y\}. a \leq z\). But that's just the same as \( a \le_A x\) and \(a \le_A y\) as desired.

  • 27.
    edited April 8

    I hope you know this already, but if you don't, you're in luck: this is the most important thing you've heard all year! Please think about it and ask questions until it revolutionizes the way you think about logic.

    For me it revolutionized how I think about logic: it's a completely new perspective, and it actually simplifies things a lot. Thank you @John! Moving on to Lecture 9 now, I wonder what is going to happen next.

  • 28.

    Igor - great! I hoped this would have that effect for some students. There is a lot more one can say about this. For example, in Matthew Doty's puzzles MD1 - MD3 we learn that the logical operations "and" and "or" can be described as right and left adjoints. This is just the beginning of a long and wonderful story. But in Lecture 9 I moved straight on to considering how functions between sets fit into this story.

  • 29.

    in Matthew Doty's puzzles MD1 - MD3 we learn that the logical operations "and" and "or" can be described as right and left adjoints.

    Wow!

  • 30.

    Yes, David - wow! Adjoints rule the world. "Or" is generous and liberal, while "and" is cautious and conservative.
