
## Comments

By the way, that makes a lot of sense despite perhaps being unintuitive at first. Every time you add an element to a set, the meet of that set gets “pushed farther down the poset”. So if the set is empty we need to start as far up as we can.

Similarly, AND of the empty set is True (there are no false elements!) and OR of the empty set is False (there are no true elements!).
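This convention is baked into programming languages too. A quick Python sanity check (just an illustration, not from the book):

```python
# all() of an empty iterable is vacuously True; any() is False.
print(all([]))  # True: there are no false elements
print(any([]))  # False: there are no true elements

# min/max, by contrast, have no sensible default and raise on an empty
# iterable unless you supply the "top"/"bottom" element yourself:
print(min([], default=float('inf')))   # inf plays the role of "top"
print(max([], default=float('-inf')))  # -inf plays the role of "bottom"
```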


Ah, I see where my "counterexamples" went wrong. Thanks a lot for your help, John and Dan!


Dan, your answer in #150 seems to suggest that \(\emptyset\) is a subset of some poset \( A\). So then if I were to assume that \( A =\emptyset\), then does the meet of \(\emptyset\) not exist?

I also want to understand what you meant by "start as far up as we can" when it comes to the empty set in #151. So going back to Alex's second example in #147, it is true that \(g_1\) is not a right adjoint, but it also does not preserve meets, since:

$$g_1(\land\emptyset)=g_1(\land\{0\})=g_1(0)=0$$ but $$\land g_1(\emptyset)=\land\emptyset=1$$ where for the first equation, \(\emptyset\subseteq\{0\}\) and for the second equation \(\emptyset\subseteq\{0,1\}\). So this is indeed consistent with Proposition 1.88.


Definition 1.55 includes this sentence: "A monotone function \(f: P \to Q\) is called an *isomorphism* if there exists a monotone function \(g: Q \to P\) such that \(f.g = id_P\) and \(g.f = id_Q\)." (Here \(P\) and \(Q\) are preorders, each with its own ordering.)

Question: Is the "and…" part redundant? It seems that \(g.f = id_Q\) is implied by what precedes it.


In Figure 1.2 (the first Hasse diagram), there are undirected edges. Are these supposed to all be arrow-ed upwards? If not, can someone explain?


Dan Schmidt wrote:

> Following Definition 1.60 in the book, \(p\) is the meet of \(\emptyset\) if
>
> 1. for all \(a \in \emptyset\), \(p \leq a\), which imposes no restrictions at all, since there is no such \(a\);
> 2. for all \(q\) such that \(q \leq a\) for all \(a \in \emptyset\), it is true that \(q \leq p\). Well, the antecedent is true of every \(q\), since no restriction is placed on it. So \(q \leq p\) for all \(q\) in the entire poset, and \(p\) must satisfy this if possible.
>
> For example, if the poset is a power set \(PX\) with the inclusion relation, \(p\) is the entire set \(X\).

Good! So, for any poset \(A\), the meet of \(\emptyset \subseteq A\), if it exists, is the **top element** of \(A\): the element \(p \in A\), necessarily unique, such that \(q \le p\) for all \(q \in A\).

The top element is usually denoted \(\top\). We draw it at the top of a Hasse diagram, like this Hasse diagram of the poset of 4-bit strings:

<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/9b/Hypercubeorder_binary.svg/500px-Hypercubeorder_binary.svg.png">

Derrick: In Figure 1.2 of the book all the edges *are* directed upwards! At least I see upwards-pointing arrows on these edges in Figure 1.2 in my copy of *[Seven Sketches](https://arxiv.org/abs/1803.05316)*. Don't you?

John: Hmm.. I only see upwards-pointing arrows for the middle, vertically aligned arrows. The ones at 45 degree angles are undirected. I wonder why that is?

I just checked -- in my browser, they do all have arrows! I guess something's wrong with my pdf reader. [I'm using Document Viewer, which is probably Evince, in Gnome on Fedora 27] It all makes sense now.


Aqilah: I don't think I understand your first question; can you rephrase? The empty set is a subset of every set. If our entire poset is the empty set, then there is no meet of the empty set in that poset, since there are no elements to be the meet.

By "start as far up as we can", I just mean that \(\bigwedge (A \cup \{a\}) \le \bigwedge A\). So \(\bigwedge \emptyset\) had better be very "big", so as not to put any restrictions on \(\bigwedge \{a\}\).
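For the power-set case this is easy to check concretely: the meet is intersection, and starting the fold at the top element \(X\) is exactly "starting as far up as we can". A small Python sketch (my own toy example):

```python
from functools import reduce

X = {1, 2, 3, 4}

def meet(family):
    """Meet in the power set of X under inclusion: intersection,
    with the empty meet defined as the top element X itself."""
    return reduce(set.intersection, family, X)

A = [{1, 2, 3}, {2, 3, 4}]
a = {2, 4}

assert meet(A + [a]) <= meet(A)  # adding an element pushes the meet down
assert meet([]) == X             # the empty meet is the top element
```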


Hi guys, so I just finished reading this thread, moving on to the other ones and getting started on participating in this amazing project. BTW: I am quite happy with this plan of trying to finish the course by the time school starts in the northern hemisphere. After reading through the thread I feel coming up with examples is a good form of introduction so here I go (heh):

I liked the chess ruminations, so I'll give an example using Go: the set of all possible kifu — that is, possible ways to play a game of Go from the start to any legal position (because the game can end in any position, due to one player resigning) — is a preorder. While there is a rule to prevent the board from going back to the state it was in on the previous move, there is no such rule to prevent cycles taking more than one move, so I think the kifus are not partially ordered (but they could be if we count the number of moves recorded on the kifu?).

If you don't include the history, then we definitely have a preorder, since for any positions \(x, y, z\) such that \(x \le y\) and \(y \le z\) there is a kifu in the kifu set that describes a path from \(x\) to \(z\), but we have lost the information about the order of moves (did I just make a functor? ;)

The reason \(x \le x\) holds is that passing is a legal move. (However, three passes end the game, so we'll either need to include the number of passes in our definition of the game state in the second example, or assume that no passing has happened when presented with a board state. Actually, that rule means that \(x \le x\) isn't true! Or is it? The morphism needs a player to make a move, so an identity morphism would be an empty move — but passing isn't one, because it adds information to the game state.)

I wanted to make a couple more general examples, but I'm afraid my ability is not up to the task of stating / defining the scenarios. Still, I'll make an attempt, since I've heard the best way to get good explanations on the internet is to be slightly wrong about something while claiming its truth vehemently.

Given a 'game' that has rules (1) for going from one game state to another and (2) for possible initial states, we define a 'story' as what has happened in a game up to a point. Stories form a preorder. Actually this is just putting fun labels on automata / state machines.

Then finally I wanted to say that 'explanations' in a given 'language' form a partial order with respect to Kolmogorov complexity in that language. Now, this may actually be false in certain languages, since different explanations could have the same level of complexity (i.e. a preorder, not a poset), but I'd like to know if you have any way of figuring out which attributes of a language would determine that.


Dan: Yes, the empty set is a subset of any set. But am I mistaken in thinking that in order for the meet of the empty set to make sense, we need to specify the preordered set of which it is a subset? For example, in the power set of some set \(X\) with inclusion as the preorder, the meet of the empty set is \(X\), but in \(P(X)\cup\{Y\}\) where \(X\subset Y\) and \(X\not=Y\), the meet of the empty set is then \(Y\)?

Basically, I'm wondering if it is necessary to specify the preorder set we're looking at when it comes to finding the meet of the empty set.


It's totally necessary to specify the preorder set in which you're taking the meet, and not just when you're taking the meet of the empty set! The definitions of meet and join (Definition 1.60 in the book) use the preorder set in the definitions. For example, the meet of \(\emptyset\) when our preorder set is \(\{1,2\}\) with the usual numerical meaning of \(\leq\) is \(2\), while the meet of \(\emptyset\) when our preorder set is \(\{1,2,3\}\) is \(3\).
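Here's a brute-force way to compute meets in a finite poset, which makes the dependence on the ambient poset explicit (my own sketch, not from the book):

```python
def meet(subset, poset, leq):
    """Meet of `subset` inside the finite `poset` with order `leq`:
    the greatest lower bound, or None if it doesn't exist."""
    lower_bounds = [p for p in poset if all(leq(p, a) for a in subset)]
    for p in lower_bounds:
        if all(leq(q, p) for q in lower_bounds):
            return p
    return None

leq = lambda a, b: a <= b  # the usual numerical order

assert meet(set(), {1, 2}, leq) == 2      # empty meet = top element of {1, 2}
assert meet(set(), {1, 2, 3}, leq) == 3   # empty meet = top element of {1, 2, 3}
assert meet({2, 3}, {1, 2, 3}, leq) == 2  # ordinary meets work too
```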


Let P be a poset.

Then the least upper bound of {} is, naturally, the least of the upper bounds of {}. Every member of P is an upper bound for {}. So the least upper bound for {} is the least member of P, i.e., the minimum element of P -- if such an element exists.

By the same logic, the greatest lower bound of {} is the maximum element of P, if it exists. This is the same conclusion Dan reached.

Notes:

- Since we are in a poset, antisymmetry implies there can be at most one maximum and one minimum element. For if there were two, \(a\) and \(b\), then we'd have both \(a \le b\) and \(b \le a\), which would imply they are the same.

- Minimal and maximal elements are a different, and weaker, notion than the concepts of minimum and maximum, which are global. To be minimal means there's nothing less than it, and to be maximal means there's nothing greater than it. In the poset where the only relation is identity, every element is both minimal and maximal — but there is no maximum or minimum for the whole set.


Jerry #154 - I think the "and..." part is necessary. For example:

Let \(P=\{a\}\), \(Q=\{1,2\}\). Let \(f(a)=1\) and \(g(1)=g(2)=a\). Then \(f\), \(g\) are monotone, \(f.g=id_P\) but \(g.f\neq id_Q\).
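The counterexample is small enough to verify mechanically. A quick Python check (dictionaries standing in for the functions, with \(a\) written as `'a'`):

```python
# P = {'a'} (trivial order), Q = {1, 2} with 1 <= 2.
f = {'a': 1}
g = {1: 'a', 2: 'a'}

# f.g is diagrammatic order: first f, then g (a map P -> P).
assert all(g[f[p]] == p for p in f)      # f.g = id_P holds

# g.f is first g, then f (a map Q -> Q); it fails at q = 2.
assert not all(f[g[q]] == q for q in g)  # f(g(2)) = 1, not 2
```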


Another example of a preorder:

Poses your body can assume, ordered by how far your center of gravity is from the ground.


Thomas #164 - thank you!


I'm not sure of the etiquette here and whether this should be in a separate topic, or if anything Chapter 1-related goes here. I have a question on the order of partitions.

Eq. (1.2) on page 5 shows an order by coarseness of the partition. In other words, we say that \(A ≤ B\) if, whenever \(x\) is connected to \(y\) in \(A\), then \(x\) is connected to \(y\) in \(B\).

Then on the bottom of page 12, having introduced the notion that partitions on \(A\) can be thought of as surjective functions out of \(A\), we can come up with a slightly more formal definition of that same notion of order. We say that \(f: A ↠ P\) is finer than \(g: A ↠ Q\) if there is a function \(h: P → Q\) such that \(f.h = g\).

I'm a bit puzzled by this. I can see from the basic left-totality requirement of a function that you cannot map a partition of lower cardinality to a partition of higher cardinality. That will allow the arrows in Eq. (1.2) and disallow the reverse of those arrows. My problem is that it would seem to me that you can easily map partitions of the same cardinality to each other. So I can define a function that maps the partitions in the middle row to each other. By the above definition, that means that those partitions are all mutually finer than each other. But we have already learnt on page 11 that the partitions in the middle row are non-comparable. So does the definition on the bottom of page 12 introduce unwanted additional comparisons over the earlier definition?


Recognizing that a partition can be represented as a surjection, all partitions of the same cardinality have such a surjective function to the same codomain. There is an obvious order over those codomains. The monotone map from the partitions to the preorder of discrete sets induces an order on the set of partitions. I will make a picture.


Fredrick #168 - thanks for that. In case my question wasn't clearly phrased, I was asking if these definitions of the notion of order are actually incompatible. With the first definition, the middle row in Eq. (1.2) are non-comparable. With the second one, it would seem that the partitions in the middle row are mutually finer than each other. But it seemed as though the second definition was being presented as an alternative formulation of the same notion of order.


Don #167, #169 - the two definitions are equivalent. If the two partitions have the same cardinality then you can find an \(h: P \rightarrow Q\), but not necessarily satisfying \(f.h=g\).

For example, we can represent the partitions from the left and middle of the middle row by:

$$ f(\bullet)=f(\circ)=1, f(\ast)=2 $$ and $$ g(\bullet)=g(\ast)=1, g(\circ)=2 $$ where \(P=Q=\{1, 2\}\).

Then there is no function \(h: \{1,2\} \rightarrow \{1,2\}\) such that \(f.h=g\) (since \(f(\bullet)=f(\circ)\), hence \(h(f(\bullet))=h(f(\circ))\), but \(g(\bullet) \neq g(\circ)\)).
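Since everything here is finite, one can brute-force the claim that no such \(h\) exists. A Python sketch (my own, using `'b'`, `'c'`, `'s'` for \(\bullet, \circ, \ast\)):

```python
from itertools import product

# The two middle-row partitions, as surjections onto {1, 2}:
f = {'b': 1, 'c': 1, 's': 2}
g = {'b': 1, 's': 1, 'c': 2}

# Enumerate all four functions h: {1, 2} -> {1, 2} and test f.h = g,
# i.e. h(f(x)) = g(x) for every x.
candidates = [dict(zip((1, 2), images)) for images in product((1, 2), repeat=2)]
assert not any(all(h[f[x]] == g[x] for x in f) for h in candidates)
```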


Thomas #170 - You've supplied precisely the bit that I was missing. Super helpful, thanks.


Proposition 1.81 (the basic theory of Galois connections) starts by supposing that \(f: P → Q\) and \(g: Q → P\) are functions, and shows that two things are equivalent: (a) \(f\) and \(g\) form a Galois connection, and (b) Eq. (1.6). Going from (a) to (b) is straightforward. In the proof of the reverse, from (b) to (a), the first step goes: "then since \(g\) is monotonic…". Is this a valid assumption? This is the first step on this side of the proof, and the basic setup only invites us to suppose that \(f: P → Q\) and \(g: Q → P\) are functions.


There's another strange assumption (to me at least) in the proof of Proposition 1.88 on page 24. In the process of proving the adjointness condition (in the second-last paragraph), they use Eq. (1.6) from Proposition 1.81. But the whole point of Proposition 1.81 is that Eq. (1.6) is just equivalent to the existence of a Galois connection. Is it not a bit circular to assume the existence of a Galois connection when attempting to prove the adjointness condition?


172: I suspect \(f\) and \(g\) should be assumed to be monotonic from the start.

173: I think you're right. Can this direction be proved in a similar way to the reverse direction (next paragraph)?


For exercise 1.83,

> Show that if \( f:P\rightarrow Q \) has a right adjoint \(g\), then it is unique up to isomorphism. That means, for any other right adjoint \(g'\), we have \(g(q)\cong g'(q)\) for all \(q \in Q\).

I'm having a little trouble getting started. I wanted to show that both \(g(q) \geq g'(q)\) and \(g'(q) \geq g(q)\), but I'm having trouble going from the definition of a right adjoint to this. Does anyone have any advice on how to get started?

Thank you!


Hi Daniel,

Your approach is a good one! You know that \(p\leq g(q)\) and \(p\leq g'(q)\) are both equivalent to \(f(p)\leq q\), and so they must be equivalent to each other. What relations do you get when you let \(p=g(q)\)? When you let \(p=g'(q)\)?


173: I agree that the reference to Proposition 1.81 in Proposition 1.88 seems circular, and I think I've found a way to fix it:

By the logic of the proof up to the point in question, we have \(g(f(p_0)) \leq g(q_0)\). From the definition of \(f\), we have \(g(f(p_0)) = g(\bigwedge \{ q \in Q: \; p_0 \leq g(q) \})\). But \(g\) preserves meets, so \(g(f(p_0)) = g(\bigwedge \{ q \in Q: \; p_0 \leq g(q) \}) = \bigwedge g(\{ q \in Q: \; p_0 \leq g(q) \})\). Then \(p_0\) is obviously a lower bound for \(g(\{ q \in Q: \; p_0 \leq g(q) \})\), so we get the desired inequality \(p_0 \leq g(f(p_0)) \leq g(q_0)\).


John Nolan and Don Ryan - I haven't looked at it yet, but if you think there's a fixable flaw in the proof of Proposition 1.88, it would be good to report it here:

- [Seven Sketches book: Typos, comments, questions, and suggestions.](https://docs.google.com/document/d/160G9OFcP5DWT8Stn7TxdVx83DJnnf7d5GML0_FOD5Wg/edit)

If there's a problem, Brendan and David will fix it.

By the way, it's also good to download the [latest copy of the book](http://math.mit.edu/~dspivak/teaching/sp18/7Sketches.pdf) (as of 7 April 2018), which has posets restored to their rightful place, and various other mistakes caught by the students in this course fixed.

If anyone here read these lectures but not my recent corrections, they may be worth another look:

The right adjoint to the function \(f : \mathbb{N} \to \mathbb{N}\) that doubles any natural number is \(g(n) = \lfloor n/2\rfloor \), not \(g(n) = \lceil n/2\rceil \). With this fixed it becomes an example of one of my favorite principles:

I changed my notation to match that in the book, and more importantly fixed a serious mistake. With all this fixed, we see that given any function \(f : X \to Y\) between sets, the left adjoint of the

inverse image\(f^{*} : PY \to PX\) is \( f_{!} : PX \to PY \), given by$$f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} $$ for any \(S \subseteq X\), while the right adjoint is \( f_{\ast} : PX \to PY \), given by

$$f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} .$$ This lets us see yet another example of those principles. Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading. Then:

**Puzzle 24.** What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"?

**Puzzle 25.** What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
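For finite sets, the two adjoints above can be computed by brute force, which makes the puzzles easy to experiment with. Here is a minimal Python sketch; the parity function and the sets are made-up examples, not from the lectures:

```python
def f_shriek(f, S, Y):
    """Left adjoint f_!: keep y if SOME x in S maps to y."""
    return {y for y in Y if any(f(x) == y for x in S)}

def f_star(f, S, X, Y):
    """Right adjoint f_*: keep y only if ALL x mapping to y lie in S."""
    return {y for y in Y if all(x in S for x in X if f(x) == y)}

# Example: f sends each number to its parity.
X, Y = {0, 1, 2, 3}, {0, 1}
parity = lambda x: x % 2
S = {0, 1, 2}

print(f_shriek(parity, S, Y))   # {0, 1}: generous, one preimage in S suffices
print(f_star(parity, S, X, Y))  # {0}: cautious, every preimage must lie in S
```

The output illustrates the two philosophies: \(1\) gets into the left adjoint's answer because one of its preimages (namely \(1\)) is in \(S\), but is excluded from the right adjoint's answer because its other preimage \(3\) is not.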

Puzzle 4. Just to give an "applied" example:

In rational continuum mechanics, Truesdell starts by defining a poset, and then a Boolean algebra, to construct a "universe of bodies" in which to define mechanics (see Clifford Truesdell, *A First Course in Rational Continuum Mechanics*).


The inverse image should be \(f^* : PY \to PX \)? And thanks for the one-page LaTeX tutorial. I think it's all I need. :)


Jim - yes, given \(f : X \to Y\) the inverse image is \(f^{*} : PY \to PX\). I fixed one place where I made a typo about this, namely comment #179.


One observation that may save the readers some time; at least I had to think about it for a while, and came close to declaring that I had found an error. In the proof of Proposition 1.81 we have $$ f(p) = \bigwedge\{q \in Q \,|\, p \leq g(q)\} $$ and it is not immediately clear what happens when the set on the right-hand side is empty. From Definition 1.60 one can deduce that \(\bigwedge \emptyset\), if it exists, must be the maximal element. And indeed, if we assume that "all meets exist", we must also assume the existence of a maximal element. Or am I making some mistake here?


Artur - you're not making a mistake; this is an important observation.

The meet of the empty set, if it exists, must be the biggest thing that's smaller than everything in the empty set. But "smaller than everything in the empty set" is a vacuous condition, so the meet of the empty set, if it exists, must simply be the biggest thing. That is, it must be an element that's greater than or equal to all others. In a poset, this element is unique (if it exists), and we call it the **top** element or \(\top\). In the logic of subsets this element is also called **true**.

All this has an upside-down version for joins: the join of the empty set, if it exists, must be an element that's less than or equal to all others; in a poset it is unique (if it exists) and is called the **bottom** element or \(\bot\) or **false**.
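This can be checked mechanically on a small example. Here is a rough Python sketch using the divisors of 12 ordered by divisibility (so the meet is the gcd and the top element is 12); the function names are just for illustration:

```python
# Divisors of 12 ordered by divisibility: a <= b iff a divides b.
P = {1, 2, 3, 4, 6, 12}
leq = lambda a, b: b % a == 0

def meet(S):
    """Greatest element of P below everything in S (None if it doesn't exist)."""
    lower = [p for p in P if all(leq(p, s) for s in S)]
    greatest = [p for p in lower if all(leq(q, p) for q in lower)]
    return greatest[0] if greatest else None

print(meet({4, 6}))   # 2, i.e. gcd(4, 6)
print(meet(set()))    # 12: every element is vacuously a lower bound of the
                      # empty set, so its meet is the top element
```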

For puzzle 4-

NOTE: leq denotes less than or equal to.

1) The set of all finite words (using the English alphabet) with dictionary ordering: x leq y if the word x occurs before y, or at the same position as y, in a dictionary (here a finite word is just a finite sequence of letters). This is a preorder which is also a (skeletal) partial order.

2) Consider a non-empty finite set where every element is related to every other element, including itself. If the set has more than one element then this is a preorder which is not a (skeletal) partial order. One could think of the finite set as a family of people, with leq thought of as "related to". Everyone in a family is related to each other (we're assuming a person is related to himself or herself), so person x is related to person y and person y is related to person x, but this doesn't mean that person x is the same as person y.

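The second example can be verified directly: reflexivity and transitivity hold, but antisymmetry fails, so it's a preorder and not a partial order. A quick Python check (the three-person "family" is an invented instance):

```python
from itertools import product

family = {"x", "y", "z"}
related = lambda a, b: True   # everyone is related to everyone, themselves included

reflexive = all(related(a, a) for a in family)
transitive = all(related(a, c) or not (related(a, b) and related(b, c))
                 for a, b, c in product(family, repeat=3))
antisymmetric = all(a == b or not (related(a, b) and related(b, a))
                    for a, b in product(family, repeat=2))

print(reflexive, transitive, antisymmetric)  # True True False
```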

Nice examples, Nachiket! By the way, you can get a nice less than or equal to sign by typing

`\\( \le \\)`

The funny parentheses tell the system that you're doing math.


@Alex Kreitzberg (43) What is an 'event' in your definition? I see that an event A is somehow \(0 \leq A \leq 1\) but I cannot see exactly how nor how \(P(A)\) is defined.


\(\forall x\in X, x \leq x\) looks like a standard poset to me, not just a preorder

\(\forall x,y \in X, x \leq y \) and \( y \leq x \) implies \(x \leq x \)

Incidentally, have we diverged from Fong & Spivak in these lectures, i.e. do we use poset and preorder with their standard meanings?


@JohnBeattie: The original version on arXiv does indeed use a different meaning for poset (basically their poset is a preorder). However the latest version [here](http://math.mit.edu/~dspivak/teaching/sp18/7Sketches.pdf) uses the standard meanings for poset and preorder.


Please nab the latest version, John! Right now they are updating the book about once a week, though this will probably grind to a halt for the next two weeks, during [ACT2018](https://johncarlosbaez.wordpress.com/2017/09/12/act-2018/).


Ah-ha. Thank you. Relief all round :)


Wasn't sure where to ask this but I had a question about joins and meets with empty sets.

For example:

Let's say we have a poset \(\{A, \leq\}\), and \(a \in A\). What is \(a \vee \varnothing\)? Do you approach it like the OR operator, i.e. 0 or 1 = 1? I assume it would be the opposite for \(a \wedge \varnothing\)?


Michael: An arbitrary poset \(\langle A, \le \rangle\) need not contain an element like \(\varnothing\); consider \(\{\mathrm{false}, \mathrm{true}\}\) with the order given in the book. Are you concerned specifically with the powerset order \(\langle \mathcal{P}(A), \subseteq \rangle\)? In that case, \(a \vee \varnothing = a\), since \(\varnothing \subseteq a\) for all \(a\), and hence \(a\) is already the least upper bound of \(\{a, \varnothing\}\). Similarly, \(a \wedge \varnothing = \varnothing\), since \(\varnothing\) is already the greatest lower bound of the two.

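Jonathan's answer can be confirmed by brute-force search for least upper and greatest lower bounds in a small powerset. A Python sketch (the universe \(\{1, 2, 3\}\) is an arbitrary choice):

```python
from itertools import combinations

U = [1, 2, 3]
subsets = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

def join(a, b):
    """Least upper bound of a and b under inclusion, found by brute force."""
    uppers = [s for s in subsets if a <= s and b <= s]
    return min(uppers, key=len)   # the smallest upper bound is a | b

def meet(a, b):
    """Greatest lower bound of a and b under inclusion."""
    lowers = [s for s in subsets if s <= a and s <= b]
    return max(lowers, key=len)   # the largest lower bound is a & b

a, empty = frozenset({1, 2}), frozenset()
print(join(a, empty) == a)       # True: a v 0 = a, since 0 is a subset of a
print(meet(a, empty) == empty)   # True: a ^ 0 = 0
```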

@Jonathan Castello

Thanks for clearing this up and also expanding my horizon LOL. For some reason I keep thinking the empty set is always part of a set, but now I see it definitely doesn't have to be.


I hope you don't mind me bringing up old threads, but it's just too cool.

The [Lambek–Moser theorem](https://en.wikipedia.org/wiki/Lambek–Moser_theorem) is very much a theorem about left and right adjoints applied to the natural numbers \(\mathbb{N}\), though in the literature, it seems no one has noticed this.

What's neat is that the adjoints partition the natural numbers.


This is amazing Keith!

Thank you for pointing this out.

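For anyone who wants to see the partition in action, it can be checked numerically. A rough Python sketch with \(f(n) = n^2\) (any non-decreasing unbounded function would do; the variable names are mine):

```python
def f(n):
    return n * n            # a non-decreasing unbounded function on positive integers

def f_conj(n):
    """The 'conjugate' of f: how many m >= 1 satisfy f(m) < n."""
    return sum(1 for m in range(1, n + 1) if f(m) < n)

N = 60
F = {f(n) + n for n in range(1, N + 1)}       # 2, 6, 12, 20, 30, ...
G = {f_conj(n) + n for n in range(1, N + 1)}  # 1, 3, 4, 5, 7, 8, ...

print(F.isdisjoint(G))               # True: the two sequences never collide
print(set(range(1, 50)) <= (F | G))  # True: together they cover 1..49
```

Here \(F\) lists the numbers of the form \(n^2 + n\) and \(G\) lists everything else, exactly the partition the theorem promises.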

In the process of writing this comment, I realized that I don't like my answer as an answer to Puzzle 5. It is a separation into three subsets, but one of these subsets does not feel that enlightening to me. The answer by @JaredSummers feels much nicer: "Because for elements x and y there are exactly three possibilities: x<y or y<x or x=y".

However, I decided to post this comment as an observation:

> "Puzzle 5. Why is this property called "trichotomy"?"

Because it partitions any set S with a preorder into three disjoint sets X, Y and Z, where members of X only appear on the left-hand side of the binary relation and members of Z only appear on the right-hand side (and the set Y holds everything else). Intuitively, X can be thought of as the set of sources, Z as the set of sinks, and Y as the set of in-betweens and other stuff. Formally:

$$\forall x \in X . (\exists y \in (S \setminus X) . x \leq y) \land (\exists! y \in (S \setminus X) . y \leq x)$$ $$\forall z \in Z . (\exists y \in (S \setminus Z) . y \leq z) \land (\exists! y \in (S \setminus Z) . z \leq y)$$ $$Y = S \setminus (X \cup Z)$$ $$S = X \cup Y \cup Z$$ $$X \cap Y \cap Z = \emptyset$$ This feels like extracting the sets of lower and upper bounds out of a preorder. The trichotomy would then be: "Any member of the preorder is either a lower bound (appearing exclusively on the left-hand side of the order relation), an upper bound, or neither."


Unless Firefox's pdf search is failing me, a quick glance suggests v2 on arXiv (and the current version on the book's site) have reverted to the standard definition for poset.
