
# Lecture 9 - Chapter 1: Adjoints and the Logic of Subsets

51.
edited April 2018

Tnx Jesus - that's the only way I've been progressing but I haven't grokked \text or \textrm or how to line up my $$f$$'s.

52.
edited April 2018

Jim: braces like `{` and `}` are used by LaTeX to group symbols, so

`f^{-1}`

gives

$$f^{-1}$$

while

`f^(-1)`

gives crap: LaTeX thinks you only want the `(` to be a superscript: $$f^(-1)$$

I will soon tire of explaining LaTeX, and we should probably have a separate discussion for that sort of thing - but this is my good deed for the day.

53.
edited April 2018

Jim - damn, I deleted your comment and it was the cheat sheet I'd polished up, but apparently with some typos added. I don't know any way to recover deleted comments.

Let me rewrite it below.

54.
edited April 2018

**Preimages, images and direct images**

Function:

$$f : X \to Y$$

Inverse function (if it exists):

$$f^{-1} : Y \to X$$
$$f^{-1}(y) = x \textrm{ if and only if } f(x) = y$$

Preimage or "inverse image":

$$f^* : PY \to PX$$
$$f^{\ast}(S) = \{x \in X: \; f(x) \in S\} .$$

Image or "proper direct image":

$$f_{!}: PX \rightarrow PY$$
$$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} .$$

Direct image:

$$f_{\ast}: PX \rightarrow PY$$
$$f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} .$$
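Since these maps are easy to confuse, here is a small computational sketch of the three powerset maps for finite sets (my own illustration, in Python; the function names and the example function are mine, not from the book):

```python
def preimage(f, X, S):
    """f^*: PY -> PX -- the x in X that f sends into S."""
    return {x for x in X if f(x) in S}

def image(f, S):
    """f_!: PX -> PY -- the values f takes on S."""
    return {f(x) for x in S}

def direct_image(f, X, Y, S):
    """f_*: PX -> PY -- the y in Y whose whole preimage lies in S."""
    return {y for y in Y if all(x in S for x in X if f(x) == y)}

# Example: f sends 1 and 2 to 'a', and 3 to 'b'; 'c' is never hit.
f = {1: 'a', 2: 'a', 3: 'b'}.get
X, Y = {1, 2, 3}, {'a', 'b', 'c'}
```

Note that `direct_image(f, X, Y, {1, 2})` contains `'c'`: since `'c'` has empty preimage, the "for all" condition is vacuously true.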

55.

If you want to practice LaTeX, try out [this site](https://www.codecogs.com/latex/eqneditor.php).
56.
edited April 2018

John Baez 45 wrote:

> I'd disagree slightly. As you know - but others may not, so I have to talk slowly - there's a specially nice kind of poset called a Boolean algebra; this is the kind that shows up as a poset of propositions in classical logic. If we take my sketched definition of "not" and apply it to a poset that's a Boolean algebra, we get the usual concept of "not", which obeys the law of excluded middle.

> But yes, if we take the same definition and apply it to a more general class of posets, called Heyting algebras, we get intuitionistic logic.

> So, we can think of this definition of "not" as suitable for both classical and intuitionistic logic, depending on what sort of poset we apply it to.

John Baez 47 wrote:

> Category theorists tend to like intuitionistic logic, which includes classical logic as a special case: the internal logic of a topos is intuitionistic, but when the topos is "Boolean" the principle of excluded middle will hold, and the logic is classical. We may get to a bit of this later in the book.

More specifically, a Boolean algebra is a special case that is both a Heyting algebra and a [co-Heyting algebra](https://ncatlab.org/nlab/show/co-Heyting+algebra). In fact, that nLab page directly answers my puzzle, so don't peek at it if you still want to figure my puzzle out yourself.
57.

I suppose that the definition of state as "a way that a system can be" isn't very clear to me. According to the Wikipedia article you linked, it seems like a state space is defined to be the image of some function. But as no function or even codomain is ever mentioned, the article effectively defines a state space to be a set. Since a set can contain propositions, why can a state not be a proposition? Maybe I'm being confused by the real-world examples, but why is "has more water than can fit in my mug" not a way the ocean can be?

58.
edited April 2018

Daniel: when I say "a way that a system can be" - a description I've deleted from my lecture here, since it seems strikingly ineffective in communicating the idea to you - I mean a *complete specification of everything there is to know about a system's state*.

Saying "the ocean has more water than can fit in my mug" does not completely specify everything there is to know about the ocean's state. There are lots of different ways the ocean could be, that would be consistent with that proposition being true.

In our "classical" understanding of reality, a fundamental idea is that when we completely specify everything there is to know about a system's state, we are choosing a point in some set $$X$$. This is called the system's state space. Propositions about the system are elements of the power set $$P X$$. A proposition $$S \subseteq P X$$ is defined to be true in the state $$x$$ if $$x \in S$$.

A state can be identified with an atomic proposition, meaning a singleton $$\{ x \} \subseteq X$$. This is a proposition that is true in exactly one state.

"The ocean has more water than can fit in my mug" is not true in exactly one state.

It comes as a big surprise that you don't find this stuff utterly familiar. I had thought everyone who did science knew this stuff. I didn't think I had to actually explain it. Now I'm realizing I was naive. Perhaps people only learn this stuff if they read about the foundations of classical and quantum mechanics, and how classical logic differs from quantum logic?

In the classical mechanics course you took, I explained how the position and momentum of a classical system are described by a point in a cotangent bundle $$X = T^* Q$$. I may have called this set $$X$$ a **phase space**: that's what they call it in physics. But it's also called a **state space**.

The terms "state space" and "phase space" are roughly synonyms, but people use the term "state space" in contexts that go way beyond classical mechanics. If we're playing a game of tic-tac-toe, there's a set $$X$$ of possible ways to have X's and O's placed on the board. A point $$X$$ is called a state of the game. Propositions, like "there are 3 X's in a row", correspond in a 1-1 way with elements of $$P X$$.

This is how reality works... until you hit quantum mechanics. In quantum mechanics the concepts of state and observable change; that's why it's revolutionary.

59.
edited April 2018

@JohnBaez: here's a less specific but (to me) interesting question - could you please answer if you have time? When you hear or encounter the term "adjoints" in this context, what are the first/second/third mental pictures you get?

As an example - when one hears the word "graph", usually people visualize some small graph, maybe some process associated with it - addition of nodes etc. Or "classical mechanics" may evoke the pictures of points moving along the trajectories and some vectors (either static or dynamic), "gradient" or "local minimum" evoke imagery of hills, maybe some objects moving downward. "Balancing" may produce some feeling rather than image, and so on.

This might be something that allows you to use them efficiently as well.

60.

@JohnBaez : you define the direct image as

$$f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} .$$

This seems to include all elements $$y_0 \in Y - f_!(X)$$, i.e., those that are not the image of any element of $$X$$ (since the condition $$x\in S \textrm{ for all } x \textrm{ such that } y_0 = f(x)$$ is vacuously satisfied: there is no such $$x$$ to worry about!). Is that correct?

Or should the definition be

$$f_{\ast}(S) = \{y \in f_!(S): x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\}$$ or, equivalently,

$$f_{\ast}(S) = \{y \in f_!(S): f^{\ast}(\{y\}) \subseteq S\}$$

61.
edited April 2018

@Valter #60 – the definition is correct, and yes, $$f_* (S)$$ always contains everything not in the image, i.e. $$Y- f_! (X)$$.

62.

@AnindyaBhattacharyya #61: thanks! I think I get it now: for example, if we did not include elements like my $$y_0$$, then $$\emptyset = f^{\ast}(\{y_0\}) \subseteq T$$ for any subset $$T$$ of $$X$$, but we would not have $$\{y_0\} \subseteq f_{\ast}(T)$$.
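This point can be checked by brute force in a few lines (a Python sketch of mine, with a two-point codomain; the names are my own):

```python
X, Y = {1}, {'a', 'b'}
def f(x): return 'a'              # so y0 = 'b' is not in the image

def preimage(T): return {x for x in X if f(x) in T}                        # f^*
def direct_image(S): return {y for y in Y if all(x in S for x in X if f(x) == y)}  # f_*

# y0 has empty preimage, so the "for all" condition holds vacuously,
# and y0 lands in f_*(S) for every S:
assert all('b' in direct_image(S) for S in [set(), {1}])

# The adjunction forces this: preimage({'b'}) is empty, hence contained
# in every S, so {'b'} must be contained in every f_*(S).
assert preimage({'b'}) == set()
```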

63.
edited April 2018

@Matthew, I don't know if I have understood exactly what you meant with _|_, but my answer is this:

{-# LANGUAGE RankNTypes #-}  -- needed for the polymorphic type synonym below

type Bottom = forall t. t

hilbertExplosion :: Bottom -> a
hilbertExplosion bottom = bottom


Why is it called hilbertExplosion?

64.
edited April 2018

**EDIT**: These pictures are incorrect. I had some major issues understanding the logic behind the definitions of $$f_!$$ and $$f_*$$ which led to this confusion. Sincerely sorry.

I have corrected these diagrams in [Exercise 90 Discussion](https://forum.azimuthproject.org/discussion/1966/exercise-90-chapter-1#latest).

More math porn(confusion)...

The simplest example of $$f^{\ast}$$ , $$f_!$$ , and $$f_{\ast}$$ where you can see the difference between adjoints of pullback vs adjoints of a total order would be $$f:1 \rightarrow 3$$.

Edit: So in this example, the function $$f$$ sends the singleton to the top most object of the 3 element set. This is crucial to understanding these pictures. Just realized the arrow can be taken in a more general way. The example below is also the more specific example of 2 element set being sent to the bottom 2 objects of the 3 element set.

The adjoints of a pullback are interesting because there are multiple adjoints (not to mention the higher dimension it lies in). The left adjoint is degenerate in this example. In order to see a better view of a left adjoint we move up one dimension to $$f:2 \rightarrow 3$$.

This time the left adjoint is not degenerate because it started in the second layer of the power set. I hope this is correct LOL.

(I have left out arrows into empty set to keep it clean of unnecessary clutter. In the second example $$f:2 \rightarrow 3$$, I also left out some left adjoint arrows because there is a lot of them and the point was to show a better left adjoint. I guess a good exercise is to find those other arrows?)

Edit : I realized I screwed up and switched the left and right adjoints. I have corrected the mistakes. Sorry about that.

Some other things to note are :

1. In this example, it seems $$f^{\ast}$$ is the left adjoint to a lattice injection.

2. Both of these examples are just one sample from the set of $$f^{\ast}$$, $$f_!$$, and $$f_{\ast}$$ that are created by $$f$$. There are 3 samples of $$f^{\ast}$$, $$f_!$$, and $$f_{\ast}$$ each for $$f:1 \rightarrow 3$$ and 3 for $$f:2 \rightarrow 3$$. I guess another good exercise is finding all of them based on these pictures?

65.
edited April 2018

[Valter Sorana asked](https://forum.azimuthproject.org/discussion/comment/16856/#Comment_16856) if the definition should be

$$f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\}$$

or instead

$$f_{\ast}(S) = \{y \in f_!(S): x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\}$$

I have a hunch that it's the first one because it's logically simpler, and it's obtained from the definition of

$$f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\}$$

simply by changing "for some" (i.e., $$\exists$$) to "for all" (i.e., $$\forall$$). Both these suggest to me that the first definition is "following the tao of category theory". Category theory tends to avoid complications. It tends to be extremely simple and systematic.

But my hunch could be wrong!

Luckily, you don't need to trust me. You can simply solve this puzzle:

**Puzzle 23.** Show that $$f_{\ast}: PX \rightarrow PY$$ is the right adjoint of $$f^{\ast}: PY \rightarrow PX$$.

Since the right adjoint is unique when it exists, it can't be true that both definitions make $$f_{\ast}$$ into the right adjoint of $$f^{\ast}$$. Only one of the definitions will work - and that's the right definition!
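This can also be sanity-checked by brute force on a small example. Here is a Python sketch (my own; the names are mine) verifying that the adjunction condition $$f^{\ast}(T) \subseteq S \textrm{ iff } T \subseteq f_{\ast}(S)$$ holds for the first definition and fails for the second:

```python
from itertools import combinations

def powerset(A):
    A = list(A)
    return [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]

X, Y = {1, 2}, {'a', 'b', 'c'}
def f(x): return 'a'                      # 'b' and 'c' are not in the image

def pre(T):   return {x for x in X if f(x) in T}                            # f^*
def im(S):    return {f(x) for x in S}                                      # f_!
def star1(S): return {y for y in Y if all(x in S for x in X if f(x) == y)}  # 1st defn
def star2(S): return {y for y in im(S) if all(x in S for x in X if f(x) == y)}  # 2nd defn

# f^*(T) <= S  iff  T <= f_*(S): holds for the first definition only.
pairs = [(S, T) for S in powerset(X) for T in powerset(Y)]
assert all((pre(T) <= S) == (T <= star1(S)) for S, T in pairs)
assert not all((pre(T) <= S) == (T <= star2(S)) for S, T in pairs)
```

The second definition fails exactly on points outside the image: with $$S = \emptyset$$ and $$T = \{b\}$$ we get $$f^{\ast}(T) = \emptyset \subseteq S$$ but $$T \not\subseteq f_{\ast}(S) = \emptyset$$.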

66.

@Michael: this looks incredibly helpful (like the other diagrams you did!) but can you say a little bit more about exactly what posets are involved here, and what f is - I'm being a bit slow, but having trouble understanding what f:1→3 means and why we get the pullback we do.

67.
edited April 2018

[Igor wrote](https://forum.azimuthproject.org/discussion/comment/16855/#Comment_16855):

> Here's maybe a less specific, but interesting (to me) question, could you please answer if you have time. When you hear/encounter the term "adjoints" in this context, what the first/second/third mental picture do you get?

It depends a bit on what you mean by "this context". Right now we're doing adjoints of monotone functions between posets, and I have some specific mental images for those. But these are a special case of adjoints of functors between categories, and I have other images for those.

Let me focus on the first!

My main mental image of a poset $$A$$ is a simple-minded row of dots, with "bigger" ones to the right.

My main mental image of a monotone map $$f : A \to B$$ is two such rows of dots, one above the other, with arrows pointing from the dots on the top to the dots on bottom, no arrows crossing each other.

My main mental image of a right adjoint is another monotone map $$g: B \to A$$ going back up from the bottom row to the top row, with the property that $$g$$ is the "best approximation from below" to the nonexistent inverse $$f^{-1}$$. Michael Hong drew a nice example:

<center><img width = "500" src ="http://aether.co.kr/wp-content/uploads/adjoints.jpg"></center>

and also some nice very small examples:

<center><img width = "400" src = "http://aether.co.kr/wp-content/uploads/adjoint-general.jpg"></center>

where each row of dots has at most 3 dots.

There is much more to say about this, but that's a good place to start. I think you need to have these images in your collection.
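The "best approximation from below" picture can be checked numerically in one line. A small sketch (my example, not John's): doubling on the integers is monotone, and floor division by 2 is its right adjoint:

```python
def f(x): return 2 * x        # a monotone map from Z to Z
def g(y): return y // 2       # candidate right adjoint: floor(y / 2)

# Galois connection: f(x) <= y  iff  x <= g(y), checked on a finite window.
assert all((f(x) <= y) == (x <= g(y))
           for x in range(-10, 11) for y in range(-10, 11))
```

Here $$g$$ approximates the nonexistent inverse "divide by 2" from below: $$g(7) = 3$$, the biggest integer whose double is still $$\le 7$$.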

68.

@Michael #64 – the figures do indeed seem very helpful, but I also have some trouble understanding them. More precisely, in the first figure why does $$f_!$$ map an element from the power set of $$1$$ to four elements of the power set of $$3$$?

69.
edited April 2018

Hi Dan, I think I've only just got a bit of a grip thanks to Frederick's [diagram](https://forum.azimuthproject.org/discussion/1910/exercise-80-chapter-1).

<s>I think there can be many left adjoint $$f_{!}$$s in this example:

$$f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\}$$

one for each partition?</s>

This is wrong, as John has pointed out in #77.

70.
edited April 2018

In the light of the comments of Valter and Anindya (61, 62, 63), my proof (in comment 38) of Puzzle 23 may be in need of adjustment, perhaps small. That proof was: given $$S \subseteq X$$, for all $$x \in X$$, observe $$x \in S$$ iff $$f(x) \in f_{\ast}(S)$$.

Puzzle 23. Show that $$f_{\ast}: PX \rightarrow PY$$ is the right adjoint of $$f^{\ast}: PY \rightarrow PX$$.

71.
edited April 2018

If we consider the actions of people, and the consequences of those actions, as being described by a function, then the pullback describes the subset of people responsible for a specific subset of consequences.

72.
edited April 2018

EDIT: These pictures are incorrect. I had some major issues understanding the logic behind the definitions of $$f_!$$ and $$f_*$$ which led to this confusion. Sincerely sorry.

I have corrected these diagrams in [Exercise 90 Discussion](https://forum.azimuthproject.org/discussion/1966/exercise-90-chapter-1#latest).

@Reuben So the function $$f : 1 \rightarrow 3$$ is the function from the singleton set to the 3-element set. There are 3 of these, of which I have chosen only one as the example, where the singleton is sent to the topmost object of the 3-element set. Now the pullback requires additional structure on these, and in this example we are going to add partitions in the form of the power set and then pull that structure back onto the singleton set. So we get $$f^{\ast} : P3 \rightarrow P1$$.

I didn't realize that the first picture of $$f$$ can be misleading, in that it can be read in a general way, representing all 3 versions of the function $$f$$ instead of the single one that I chose at random. So with this in mind, if you follow the definitions of $$f^{\ast}$$, $$f_!$$, $$f_{\ast}$$ it should become clearer.

73.
edited April 2018

re: Puzzle 23

Notice that

$$f^{\ast}(T) \subseteq S \textrm{ iff } T \subseteq f_{\ast}(S)$$

holds iff

$$f^{\ast}(T\cap R) \subseteq S \cap D \textrm{ iff } T \cap R \subseteq f_{\ast}(S \cap D)$$

holds, where $$D$$ is the domain of $$f$$ and $$R$$ is the range of $$f$$. So the proof of the puzzle is completed by observing that, given $$S \subseteq X \cap D$$, for all $$x \in X\cap D$$ we have

$$x \in S \textrm{ iff } f(x) \in f_{\ast}(S).$$

Edit: After posting this, I realized there is no need to intersect with the domain of $$f$$, since $$f$$ is a function, so $$X = D$$, etc.

74.

Keith wrote:

If we consider the actions of people and the consequences of those actions as being described by a function, then the pullback describes the subset of people responsible for a specific subset of consequences.

That sounds right, though I'd say the subset of people who could be responsible.

75.

@michael ok, making a little more sense now, but in the cases where more than one red arrow comes from the same object, are you representing different possible functions? (sorry, still being slow here :) )

76.
edited April 2018

EDIT: These pictures are incorrect. I had some major issues understanding the logic behind the definitions of $$f_!$$ and $$f_*$$ which led to this confusion. Sincerely sorry.

I have corrected these diagrams in Exercise 90 Discussion.

@Dan As Jim Stuttard pointed out and Frederick drew out in the exercise, there can be many adjoints in this example because there are many possible scenarios to choose from going from $$PX$$ to $$PY$$, all of which are correct. The reason is that $$PY$$ is not a linear order but forks off into multiple paths, so there can be more than one answer depending on your choice.

But now, thinking about why $$f_!$$ points to 4 partitions, I think I might have switched $$f_!$$ and $$f^{\ast}$$. The right adjoint should point to the partitions higher in the order for a surjective function, and the left adjoint should point downhill. I will make the changes in my previous comment.

To explain: the right adjoint requires all of $$PX$$ to be contained in $$PY$$. For my example with the 1-object partition, the right adjoint must therefore point to objects in $$PY$$ that have a partition containing the top dot of the 3-element set. There are 4 of these in $$PY$$, and that is why there are four choices of adjoints for the first example.

The same can be done for the left adjoint: it only requires some of $$PX$$ to be contained in $$PY$$. To see this, look at the second example for a nondegenerate left adjoint. Of the 2-object partition, only one part can be represented in $$PY$$, and the left adjoint generously acknowledges its existence as an approximation.

77.

Michael wrote:

As Jim Stuttard pointed out and Frederick drew out in the exercise, there can be many adjoints in this example because there are many possible scenarios to choose from going from $$PX$$ to $$PY$$, all of which are correct. The reason is that $$PY$$ is not a linear order but forks off into multiple paths, so there can be more than one answer depending on your choice.

I'm having some trouble understanding this conversation, but "many adjoints" doesn't sound right to me. Any monotone function between posets has at most one left adjoint and at most one right adjoint. We saw that already in Lecture 6 where we derived formulas for these adjoints if they exist. It's also easy to see directly: assume a monotone map has two left adjoints and prove they must be equal.

None of this requires that the posets be linearly ordered.

78.
edited April 2018

@John I think I have used the word adjoint too loosely... Maybe "multiple ways that the adjoint can approximate the function" is a better, more precise wording? Is this possible?

79.
edited April 2018

Q. Re #68 and #77: I didn't see an order on a bag of balls, so I took it as an unordered set. If it's unordered, it wouldn't obey the laws of monotonicity. I didn't notice in the book how adjoints between multisets might be defined (perhaps for good reason)?

80.

Michael wrote:

Maybe "multiple ways that the adjoint can approximate the function" is a better, more precise wording? Is this possible?

It's better because it sounds like it might be true that there exist such multiple ways. But I don't know what it means - perhaps because I didn't carefully read everything you wrote. What is a "way that an adjoint can approximate a function", and which function are you talking about the adjoint approximating?

I sometimes talk about the right adjoint of $$f$$ being the "best approximation from below to the nonexistent inverse of $$f$$". But of course this is a bit of a joke, because how can you approximate a nonexistent function? The beauty of adjoints is that they make this vague intuition precise.

However, that doesn't help me understand what you're saying.

81.
edited April 2018

@John I'm sorry, I should have made myself clearer and more precise. What I meant by "multiple adjoints" was really "there are multiple ways that the adjoint can approximate the function." Here I was referring to the example I was working out above in #64, which was inspired by Exercise 1.90 and the definitions you used in this lecture. I tried to explore all the possibilities in a more concrete way, instead of just a few selections, to get better intuition and to check that I had my head on straight.

The function I chose was $$f : 1 \rightarrow 3$$ and its inverse image (given by your definition) $$f^{\ast} : P3 \rightarrow P1$$. Then when you take the right adjoint $$f_{\ast}$$, from what I see the adjoint has some choices in how it approximates and sends elements from $$P1$$ to $$P3$$ while still satisfying the definition $$f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\}$$. I am not sure if this is correct, but if the codomain's poset has forks, it seems there can be multiple ways the adjoint function can approximate the inverse?

82.
edited April 2018

Michael: as I've said, we know for general reasons that there's only one choice of right adjoint. So you don't get any choice of what $$f_*(S)$$ is. Indeed, the formula

$$f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\}$$ says what $$f_*(S)$$ must be, as long as you know what $$f$$ is.
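To make this concrete for Michael's example $$f : 1 \rightarrow 3$$: once $$f$$ is fixed, the formulas determine $$f_!$$ and $$f_{\ast}$$ completely, with no choices. A small Python sketch (the element names `'t'`, `'m'`, `'b'` and the code are my own illustration):

```python
# f : 1 -> 3 sends the single element '*' to the top element 't' of {t, m, b}.
# The lecture's formulas then pin down both adjoints uniquely.
X, Y = {'*'}, {'t', 'm', 'b'}
f = {'*': 't'}

def f_shriek(S):
    # left adjoint: the image of S
    return frozenset(f[x] for x in S)

def f_lower_star(S):
    # right adjoint: {y : every x with f(x) = y lies in S}
    return frozenset(y for y in Y if all(x in S for x in X if f[x] == y))

print(sorted(f_shriek({'*'})))      # ['t']
print(sorted(f_lower_star({'*'})))  # ['b', 'm', 't'] -- 'm' and 'b' have
                                    # empty preimages, so they enter vacuously
print(sorted(f_lower_star(set())))  # ['b', 'm'] -- only 't' is excluded
```

The "forks" in $$P3$$ don't create choices: the vacuous-preimage clause decides every fork automatically.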

83.

John Baez, thanks for the answer; both it and especially @MichaelHong's excellent drawings made the concept click for me quickly.

84.
edited April 2018

A while ago Reuben wrote:

Example 1.89 notes that for a function $$f$$, the left and right adjoints of the pullback of $$f$$, $$f_{!}$$ and $$f_{\ast}$$ respectively, have very natural interpretations. It would be helpful for me, at least, to show that these, as defined in 1.89, are indeed left and right adjoints.

Further questions: does the procedure for generating $$f_{!}$$ and $$f_{\ast}$$ generalize to pullbacks outside of the category of sets?

85.
edited April 2018

Hi, Reuben! Sorry to take so long to reply, but I hope that Lecture 9 and the ensuing discussion here help a bit. In the lecture I proved this:

Theorem. $$f_{!}: PX \rightarrow PY$$ is the left adjoint of $$f^{\ast}: PY \rightarrow PX$$.

I also proposed this puzzle:

Puzzle 23. Show that $$f_{\ast}: PX \rightarrow PY$$ is the right adjoint of $$f^{\ast}: PY \rightarrow PX$$.

Further questions: does the procedure for generating $$f_{!}$$ and $$f_{\ast}$$ generalize to pullbacks outside of the category of sets?

Yes, these work in other categories. A good reference is Volume 1 of Peter Johnstone's Sketches of an Elephant. In Lemma 1.3.1 he shows that in any category with pullbacks and images, the map $$f^*$$ has a left adjoint $$f_{!}$$. A large and useful class of categories with these properties are the regular categories. For example: the category of vector spaces, or the category of groups, or any topos.

The right adjoint $$f_{\ast}$$ is harder to obtain: in Lemma 1.4.10 he shows that it exists in any Boolean coherent category. For example, the category of sets, or the category of sets equipped with the action of some particular group, or any other Boolean topos.

86.
edited April 2018

Oh, nice. Then graphs have both right and left adjoints of pullbacks since the category of Graphs is a topos.

87.
edited April 2018

I said $$f^{\ast}$$ has a right adjoint in a Boolean topos. The category of Graphs is a topos, but it's not a Boolean topos, so you probably won't get that right adjoint, just the left adjoint. It'd be a good exercise to take a little example and show the right adjoint doesn't exist.

88.

Oh wait, I didn't read over your last paragraph. If a person really wanted to, could they restrict Graph (or any topos) to only the portion that satisfies the constraints of a Boolean category, so as to follow along with the lectures but in the topos of their choice?

Anyways, I'm going to try that puzzle.

89.
edited April 2018

It's not easy to find an interesting "portion" of a topos that's Boolean. The topos $$\text{Graph}$$ has a subcategory consisting of graphs with no edges, and this is a Boolean topos, but it's equivalent to $$\text{Set}$$. If you look at the various equivalent characterizations of Boolean topoi you'll see how tough it is for topoi to be Boolean.

I think it'll be enlightening to tackle that puzzle.

90.

I'm still not caught up, so I don't have useful additions to the discussion; I just had a fun idea when I read #87.

You had trouble finding philosophical right adjoints of the same kind as "better safe than sorry", which, for example, was a left adjoint.

I was thinking this when you said it, but it might be that all the people who used right-adjoint philosophies took unnecessary risks and died. (I'm aware that a bunch of bias sensors just went off for some of you ^.^)

But isn't human life some sort of Boolean topos (you are either alive or you are dead), so the right adjoint doesn't exist! :)

Now I'm inspired to try your exercise tomorrow, when I wake up.

91.
edited April 2018

John Baez wrote:

It's not easy to find an interesting "portion" of a topos that's Boolean.

I mentioned in this thread in comment #12 that you can use Glivenko's theorem from intuitionistic logic to construct a boolean subcategory for any Cartesian closed category.

I have no idea what this looks like for the topos Graph. Is this just equivalent to Set?

I carried this out in Haskell over in the Categories for Working Hackers #51 thread. You are right, this was not easy for me.

In Haskell, $$(X \to \bot) \to \bot$$ forms a rather strange monad. The type $$x \to \bot$$ is kind of like Proxy :: x -> * (using Data.Proxy).

92.
edited April 2018

You can take any topos and create a Boolean topos by taking "sheaves in the double negation topology". I'm not good at this stuff, but the nLab says:

In case of $$\mathrm{Set}^{\rightrightarrows}$$, the presheaf topos of directed graphs, the action of $$\neg\neg$$ as a closure operator on subgraphs $$X \subseteq Y$$ amounts to adding to the edge set of $$X$$ all the edges in $$Y$$ between vertices that are in $$X$$. The patient reader will find further details on the $$\neg\neg$$-topology for this elementary example spelled out at length at Quiv.

I'm not smart enough to see what the resulting Boolean topos looks like! Is it just $$\mathrm{Set}$$, or something a bit more fun, like $$\mathrm{Set}^2$$?

93.

Ilmu Khan wrote:

I was thinking this when you said it, but it might be that all the people who used right-adjoint philosophies took unnecessary risks and died.

Possibly. I might tweak that idea slightly, by saying: people have more reasons to invent proverbs advocating caution than proverbs advocating lack of caution! It takes some experience to see the merit of caution; if you die gaining this experience you may never learn caution... but humans have the marvelous ability to transfer experience by talking, so we try to teach children caution with proverbs and fairy tales.

But isn't human life some sort of boolean topos (you are either alive or you are dead) so the right adjoint doesn't exist!

Well, that's a fun joke, but I mentioned that a boolean topos is the kind where the right adjoint to the preimage map does exist!

94.

Typo in the Theorem: $$f^*$$ should go in the opposite direction.

95.

Thanks, Bartosz! I seem to have fixed that more than once. Unfortunately I didn't fix it an odd number of times.

96.
edited May 2018

John, Thank you! This is all very helpful and meaningful. I'm curious, how might this extend to the arithmetical hierarchy? Here's a chapter by Kevin T. Kelly.

In recursion theory, I find it very interesting how sets are ranked in terms of their complexity, and in particular, that the oracle for recursion is ranked at the "triple jump", which is to say, three jumps higher than the recursive functions themselves. In general, that's the Yates Index Theorem, but I need to study it and understand it.

97.

Catching up here.

Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't.

Dan Schmidt answered the first two parts, but I don't see an answer for the last part. Schmidt defined "not" in the language of posets, that is, in terms of meets and joins. If meet and join are not defined for all pairs of elements (in a poset they might not be), then the poset doesn't have a "not" operation when "not" is defined this way. In particular, a poset has to have a least element and a greatest element to have a "not" operation. (In other words, it must at least be a bounded lattice.) So, for example:

• The set of factors of 60 ordered by "divides" has a "not" operation because it has a least element (1) and a greatest element (60).

• Any powerset ordered by $$\subseteq$$ has a "not" operation because it has a least element (the empty set) and a greatest element (the original set).

• A tent-shaped poset $$\{A,B,C\}$$ with $$A \le B, C \le B$$ doesn't have a "not" operation because it has no least element.

• A V-shaped poset $$\{A,B,C\}$$ with $$B \le A, B \le C$$ doesn't have a "not" operation because it has no greatest element.

• The set of all positive integers ordered by "divides" doesn't have a "not" operation because it has no greatest element.

Beyond "not": "and" and "or" need not be defined for posets either. It seems to me that "and" of a pair is defined iff that pair has a greatest lower bound, and "or" iff it has a least upper bound. Is that right?
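The divisors-of-60 example above can be computed directly: meet is gcd, join is lcm, and "not a" is the join of everything whose meet with a is the bottom element 1. A minimal Python sketch (my own illustration, not code from the course); note that double negation fails, so this "not" is a Heyting negation rather than a Boolean one:

```python
# Pseudo-complement ("not") in the divisor lattice of 60:
# meet = gcd, join = lcm, bottom = 1, top = 60.
from math import gcd, lcm          # math.lcm needs Python 3.9+
from functools import reduce

divisors = [d for d in range(1, 61) if 60 % d == 0]

def pseudo_not(a):
    # "not a" = the join of all b whose meet with a is bottom,
    # i.e. the largest divisor of 60 coprime to a
    return reduce(lcm, (b for b in divisors if gcd(a, b) == 1), 1)

print(pseudo_not(4))                # 15  (1, 3, 5, 15 are coprime to 4)
print(pseudo_not(1))                # 60  ("not false" is "true")
print(pseudo_not(pseudo_not(6)))    # 12, not 6 -- double negation fails,
                                    # so this lattice is not Boolean
```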

Comment Source:Catching up here. Puzzle 22. What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that do have a "not" operation and some that don't. Dan Schmidt answered the first two parts, but I don't see an answer for the last part. Schmidt defined "not" in the language of posets, that is, terms of meets and joins. If meet and join are not defined for all pairs of elements (in a poset they might not be), then it doesn't have a "not" operation if "not" is defined in this way. This means a poset has to have a greatest lower bound and a least upper bound to have a "not" operation. (In other words, the poset must be a "complete lattice.") So, for example: * The set of factors of 60 ordered by "divides" has a "not" operation because it has a greatest lower bound (1) and a least upper bound (60). * Any powerset ordered by \$$\subseteq\$$ each has a "not" operation because it has a greatest lower bound (the empty set) and least upper bound (the original set) * A tent-shaped poset \$$\{A,B,C\}\$$ with \$$A \subseteq B,C \subseteq B\$$ doesn't have a "not" operation because it doesn't have a greatest lower bound. * A V-shaped poset \$$\{A,B,C\}\$$ with \$$B \subseteq A,B\subseteq C\$$ doesn't have a "not" operation because it doesn't have a least upper bound. * The set of all positive integers ordered by "divides" doesn't have a "not" operation because it doesn't have a least upper bound. Beyond "not": "and" and "or" must not necessarily be defined for posets either. It seems to me that "and" is defined iff there is a greatest lower bound, and "or" is defined iff there is a least upper bound. Is that right?