
We've seen that classical logic is closely connected to the logic of subsets. For any set \( X \) we get a poset \( P(X) \), the **power set** of \(X\), whose elements are *subsets* of \(X\), with the partial order being \( \subseteq \). If \( X \) is a set of "states" of the world, elements of \( P(X) \) are "propositions" about the world. Less grandiosely, if \( X \) is the set of states of any system, elements of \( P(X) \) are propositions about that system.

This trick turns logical operations on propositions - like "and" and "or" - into operations on subsets, like intersection \(\cap\) and union \(\cup\). And these operations are then special cases of things we can do in *other* posets, too, like join \(\vee\) and meet \(\wedge\).

We could march much further in this direction. I won't, but try it yourself!

**Puzzle 22.** What operation on subsets corresponds to the logical operation "not"? Describe this operation in the language of posets, so it has a chance of generalizing to other posets. Based on your description, find some posets that *do* have a "not" operation and some that don't.

I want to march in another direction. Suppose we have a function \(f : X \to Y\) between sets. This could describe an *observation*, or *measurement*. For example, \( X \) could be the set of states of your room, and \( Y \) could be the set of states of a thermometer in your room: that is, thermometer readings. Then for any state \( x \) of your room there will be a thermometer reading, the temperature of your room, which we can call \( f(x) \).

This should yield some function between \( P(X) \), the set of propositions about your room, and \( P(Y) \), the set of propositions about your thermometer. It does. But in fact there are *three* such functions! And they're related in a beautiful way!

The most fundamental is this:

**Definition.** Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq Y \) define its **inverse image** under \(f\) to be

$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} . $$ The inverse image is a subset of \( X \).

The inverse image is also called the **preimage**, and it's often written as \(f^{-1}(S)\). That's okay, but I won't do that: I don't want to fool you into thinking \(f\) needs to have an inverse \( f^{-1} \) - it doesn't. Also, I want to match the notation in Example 1.89 of *Seven Sketches*.
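In code, the inverse image is just a filter over the domain. Here is a minimal Python sketch; the integer "room states" and the floor-to-ten "thermometer" are invented purely for illustration:

```python
def preimage(f, X, S):
    """Inverse image f*(S): the x in X that f sends into S."""
    return {x for x in X if f(x) in S}

# Toy example: "room states" are the integers 10..39, and the "thermometer"
# reads the temperature rounded down to the nearest ten.
X = set(range(10, 40))
f = lambda x: x // 10 * 10

reading_is_20 = preimage(f, X, {20})
print(sorted(reading_is_20))   # the states 20 through 29
```

Note that `preimage` never calls an inverse of `f`; it only evaluates `f` forwards, which is why no invertibility is needed.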

The inverse image gives a monotone function

$$ f^{\ast}: P(Y) \to P(X), $$ since if \(S,T \in P(Y)\) and \(S \subseteq T \) then

$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\}

\subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) . $$
Why is this so fundamental? Simple: in our example, propositions about the state of your thermometer give propositions about the state of your room! If the thermometer says it's 35°, then your room is 35°, at least near your thermometer. Propositions about the measuring apparatus are useful because they give propositions about the system it's measuring - that's what measurement is all about! This explains the "backwards" nature of the function \(f^{\ast}: P(Y) \to P(X)\), going back from \(P(Y)\) to \(P(X)\).

Propositions about the system being measured also give propositions about the measurement apparatus, but this is more tricky. What does "there's a living cat in my room" tell us about the temperature I read on my thermometer? This is a bit confusing... but there is an answer because a function \(f\) really does also give a "forwards" function from \(P(X) \) to \(P(Y)\). Here it is:

**Definition.** Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define its **image** under \(f\) to be

$$ f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S\} . $$ The image is a subset of \( Y \).

The image is often written as \(f(S)\), but I'm using the notation of *Seven Sketches*, which comes from category theory. People pronounce \(f_{!}\) as "\(f\) lower shriek".
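The image goes the other way: apply \(f\) to everything in \(S\). A tiny Python sketch, again with an invented floor-to-ten "thermometer" as the function:

```python
def image(f, S):
    """Direct image f_!(S): all y of the form f(x) for some x in S."""
    return {f(x) for x in S}

# Toy "thermometer" that rounds temperatures down to the nearest ten.
f = lambda x: x // 10 * 10

print(image(f, {12, 15, 27}))   # {10, 20}: the readings these states can produce
```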

The image gives a monotone function

$$ f_{!}: P(X) \to P(Y) $$ since if \(S,T \in P(X)\) and \(S \subseteq T \) then

$$
f_{!}(S) = \{y \in Y: \; y = f(x) \textrm{ for some } x \in S \}

\subseteq \{y \in Y: \; y = f(x) \textrm{ for some } x \in T \} = f_{!}(T) . $$
But here's the cool part:

**Theorem.** \( f_{!}: P(X) \to P(Y) \) is the left adjoint of \( f^{\ast}: P(Y) \to P(X) \).

**Proof.** We need to show that for any \(S \subseteq X\) and \(T \subseteq Y\) we have

$$ f_{!}(S) \subseteq T \textrm{ if and only if } S \subseteq f^{\ast}(T) . $$ David Tanzer gave a quick proof in Puzzle 19. It goes like this: \(f_{!}(S) \subseteq T\) is true if and only if \(f\) maps elements of \(S\) to elements of \(T\), which is true if and only if \( S \subseteq \{x \in X: \; f(x) \in T\} = f^{\ast}(T) \). \(\quad \blacksquare\)

This is great! But there's also *another* way to go forwards from \(P(X)\) to \(P(Y)\), which is a *right* adjoint of \( f^{\ast}: P(Y) \to P(X) \). This is less widely known, and I don't even know a simple name for it. Apparently it's less useful.

**Definition.** Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define

$$ f_{\ast}(S) = \{y \in Y: x \in S \textrm{ for all } x \textrm{ such that } y = f(x)\} . $$ This is a subset of \(Y \).

**Puzzle 23.** Show that \( f_{\ast}: P(X) \to P(Y) \) is the right adjoint of \( f^{\ast}: P(Y) \to P(X) \).

What's amazing is this. Here's another way of describing our friend \(f_{!}\). For any \(S \subseteq X \) we have

$$ f_{!}(S) = \{y \in Y: x \in S \textrm{ for some } x \textrm{ such that } y = f(x)\} . $$
This looks almost exactly like \(f_{\ast}\). The only difference is that while the left adjoint \(f_{!}\) is defined using "for some", the right adjoint \(f_{\ast}\) is defined using "for all". In logic "for some \(x\)" is called the **existential quantifier** \(\exists x\), and "for all \(x\)" is called the **universal quantifier** \(\forall x\). So we are seeing that *existential and universal quantifiers arise as left and right adjoints!*
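The "for some" / "for all" contrast translates directly into code, and the right-adjunction property of Puzzle 23 can be checked numerically on a finite toy example (the sets and function below are invented, and this is only a finite sanity check, not a proof):

```python
from itertools import combinations

def subsets(A):
    A = list(A)
    return [frozenset(c) for r in range(len(A) + 1) for c in combinations(A, r)]

X, Y = frozenset({0, 1, 2, 3}), frozenset({'a', 'b', 'c'})
f = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}.get

def f_lower_shriek(S):   # "for some": y = f(x) for some x in S
    return frozenset(f(x) for x in S)

def f_lower_star(S):     # "for all": every x with f(x) = y lies in S
    return frozenset(y for y in Y if all(x in S for x in X if f(x) == y))

def f_upper_star(T):     # inverse image
    return frozenset(x for x in X if f(x) in T)

# Right adjunction: f*(T) ⊆ S  ⟺  T ⊆ f_*(S), for all subsets.
assert all((f_upper_star(T) <= S) == (T <= f_lower_star(S))
           for S in subsets(X) for T in subsets(Y))
print("right adjunction holds on all subsets")
```

Notice that the "for all" condition is vacuously true for any \(y\) not hit by \(f\) at all, so such \(y\) always land in \(f_{\ast}(S)\); the code reproduces that.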

This was discovered by Bill Lawvere in this revolutionary paper:

- F. William Lawvere, Adjointness in foundations, *Dialectica* **23** (1969). Reprinted with author commentary in *Theory and Applications of Categories* **16** (2006), 1-16.

By now this observation is part of a big story that "explains" logic using category theory.

Two more puzzles! Let \( X \) be the set of states of your room, and \( Y \) the set of states of a thermometer in your room: that is, thermometer readings. Let \(f : X \to Y \) map any state of your room to the thermometer reading.

**Puzzle 24.** What is \(f_{!}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "liberal" or "generous" nature of left adjoints, meaning that they're a "best approximation from above"?

**Puzzle 25.** What is \(f_{\ast}(\{\text{there is a living cat in your room}\})\)? How is this an example of the "conservative" or "cautious" nature of right adjoints, meaning that they're a "best approximation from below"?
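If you want to experiment with these puzzles concretely, here is a deliberately silly finite model of the room. Every state and temperature in it is invented, so treat it as a sandbox rather than the intended answer:

```python
from itertools import product

temps = [15, 20, 25, 40]                  # possible thermometer readings
cats  = [True, False]                     # is there a living cat?

# A room "state" is a (temperature, cat) pair -- except that, in this toy
# model, we declare no living cat is possible at 40 degrees.
X = [(t, c) for t, c in product(temps, cats) if not (t == 40 and c)]
Y = temps
f = lambda state: state[0]                # the measurement: read the temperature

cat_states = {s for s in X if s[1]}       # "there is a living cat in your room"

f_shriek = {f(s) for s in cat_states}     # readings compatible with SOME cat state
f_star   = {y for y in Y
            if all(s in cat_states for s in X if f(s) == y)}   # readings that FORCE a cat

print(sorted(f_shriek))   # [15, 20, 25]
print(sorted(f_star))     # [] -- no reading guarantees a cat in this model
```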

## Comments

**Puzzle 22.** I think that \(X\) is "not \(Y\)" if \(X \wedge Y = \bigvee \emptyset\) (they have no elements in common) and \(X \vee Y = \bigwedge \emptyset\) (their union is the entire set). Instead of referring explicitly to other elements of the poset like "the empty subset" or "the full subset" I have used the properties we derived earlier that the join of nothing is the bottom element of the poset and the meet of nothing is the top element of the poset.


> **Definition.** Suppose \(f : X \to Y \) is a function between sets. For any \( S \subseteq X \) define $$f_{!}(S) = \{ y \in Y: y = f(x) \textrm{ for all } x \in S \} .$$ This is a subset of \(Y \). People pronounce \(f_{!}\) as "\(f\) lower shriek".

I think there's a mistake in your definition of "\(f\) lower shriek" - shouldn't it be "the \(y\)s such that every \(x\) sent to \(y\) is in \(S\)"?


$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in S\} = f^{\ast}(T) $$ Should this be

$$ f^{\ast}(S) = \{x \in X: \; f(x) \in S\} \subseteq \{x \in X:\; f(x) \in T\} = f^{\ast}(T) $$


I agree with Anindya—the definition of \(f_!:PX\to PY\) that makes it the right adjoint to \(f^\ast: PY\to PX\) should be \[f_!(S) = \{y\in Y : \text{ for all }x\in X, \text{ if }f(x)=y\text{ then }x\in S\}.\]

One way to see how the dual quantifiers "for all" and "there exists" arise is to derive \(f_!\) from \(f_\ast\) using complements: first note that \(f^\ast\) commutes with complements in the sense that \(f^\ast(Y\setminus T) = X\setminus f^\ast(T)\). Then we find that \[f^\ast(T)\subseteq S \iff X\setminus S \subseteq f^\ast(Y\setminus T)\] (by taking the complement of both sides, which reverses the inclusion) \[\iff f_\ast(X\setminus S)\subseteq Y\setminus T\] (using the adjunction between \(f_\ast\) and \(f^\ast\)) \[\iff T\subseteq Y\setminus f_\ast(X\setminus S)\] (by taking the complement of both sides again). Therefore the operation \(f_!(S) := Y \setminus f_\ast(X\setminus S)\) is a right adjoint for \(f^\ast\).

This double-complement expression tells you that if \(f_\ast\) is described by a quantifier (in our case \(\exists\)) then \(f_!\) should be described by the dual quantifier (in our case, \(\neg\exists\neg = \forall\)). Indeed, we can check carefully that one interpretation of \(f_!(S)\) is the set of all \(y\) for which it's *not* true that some \(x\) *not* in \(S\) maps to \(y\).
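(Note the notation clash: this comment writes \(f_!\) for the right adjoint, opposite to the convention the lecture settled on.) The double-complement identity is easy to verify by brute force on a small example; the code below uses neutral function names to sidestep the clash, and its sets are arbitrary toy choices:

```python
from itertools import combinations

def subsets(A):
    A = list(A)
    return [frozenset(c) for r in range(len(A) + 1) for c in combinations(A, r)]

X, Y = frozenset({0, 1, 2, 3}), frozenset({'a', 'b', 'c'})
f = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}.get

def exists_image(S):   # left adjoint: y is hit by SOME x in S
    return frozenset(f(x) for x in S)

def forall_image(S):   # right adjoint: ALL x hitting y lie in S
    return frozenset(y for y in Y if all(x in S for x in X if f(x) == y))

# right adjoint = complement, then left adjoint, then complement
assert all(forall_image(S) == Y - exists_image(X - S) for S in subsets(X))
print("double-complement identity verified")
```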

One way of picturing this is in terms of the natural \(Y\)-indexed partition \(f\) induces on \(X\). Colour each partition *white* if none of its members is in \(S\), *black* if all of its members are in \(S\), and *grey* if it's "on the borderline". Then \(f_* (S)\) is the \(y\) indices corresponding to grey or black partitions, and \(f_!(S)\) is the \(y\) indices of just the black ones. The double complements thing that Owen talks about is basically a matter of swapping black and white.

Anindya Bhattacharyya wrote:

That's a really nice mental picture! To expand on the "swapping black and white" idea, there are two steps to go from \(f_\ast\) to \(f_!\): first, taking the complement in \(X\) swaps the roles of black and white, so you'd go from "grey or black" to "grey or white". Then taking the complement in \(Y\) means you take all the partitions you *didn't* before, so instead of "grey or white" you get "neither grey nor white"—i.e. "black".

Jared - thanks for catching that typo. I fixed it.

Anindya and Owen - thanks for catching that much-worse-than-a-mere-typo. I was rushing in writing this post (as I so often am) and I screwed up. I usually use the double complement way of thinking about \(f_!\) but I didn't want to talk about that because I'm avoiding negation so far. I knew the double complement trick would turn a "there exists" into a "for all" since

$$ \forall = \neg \exists \neg $$

But I just randomly grabbed a way to cross out a "there exists" and write a "for all".

I'll fix my post later today - but so people know what the mistake was after it's gone, I've added a clearer description of it to Anindya's original comment.


My stab at Puzzle 22, starting from the definition that \( \lnot X = X \to \bot \) in a **[Heyting algebra](https://en.wikipedia.org/wiki/Heyting_algebra)**. This requires that we have a bounded lattice with a smallest and greatest element. This definition \( \lnot X = X \to \bot \) means that \( X \land \lnot X = \bot \). You then need De Morgan's laws or the Law of the Excluded Middle to get \( X \lor \lnot X \to \top \). In a general Heyting algebra these laws aren't true, so I'm not sure if this is allowed to be part of the definition. Turning this into the notation above: $$ X \wedge \lnot X = 0 , $$ where \( 0 \) is the smallest element of the bounded lattice.

An example where there is a not operation is on the poset created by the powerset of a finite set, where \( \lnot X \) is the complement of X.

An example where there isn't a not operation would be the subset \(\mathbb{Z}\) in \(\mathbb{R}\) with the usual ordering, as there is no smallest element.


I think you want to say $$\top\to X\vee\neg X,$$ the converse being "for free".


In the poset \( P(X) \), taking the complement relative to the top element \( X \) defines negation, $$ \neg : P(X) \rightarrow P(X), \quad x \mapsto X \setminus x. $$ Proof.

Let \( x=X \), then \(X \setminus X = \varnothing \). Let \( x=\varnothing \), then \(X \setminus \varnothing = X\). Finally, let \( x=y,\) for some \(y, \, \varnothing \subsetneq y \subsetneq X \), then there will exist some subset \(z, \varnothing \subsetneq z \subsetneq X \) where \(z \neq y \) and \( z = X \setminus y \).


**Puzzle KP:** This complement operation is left adjoint to what already-defined operation?

@ Joel - I don’t think your answer is correct. It is close. In classical logic we want \( \neg \neg x = x \). I can think of Heyting algebras where this is violated. The most we can guarantee is \(x \leq (x \to \bot) \to \bot \).

I'm with Keith E. Peterson on this one.

However, I see a fix for your answer.

**Puzzle MD (Glivenko's Theorem).** Let \((A, \leq, \wedge, \vee, \bot, \top, \to)\) be a Heyting algebra. Define \(\overline{\overline{A}} := \{(a \to \bot) \to \bot \ :\ a \in A\}\). Prove in the Heyting subalgebra \((\overline{\overline{A}}, \leq, \wedge, \vee, \bot, \top, \to)\) that \(\sim x := x \to \bot\) is the left adjoint that Keith is talking about in his puzzle.

Consider the operation \(C_f \) defined to be the pullback followed by the pushout.

$$ C_f := f_* \circ f^*. $$

**Puzzle KP2.** Extending [this puzzle](https://forum.azimuthproject.org/discussion/comment/16320/#Comment_16320) in lecture 4, treat \(C_f\) as a monad, and find its 'unit' and 'multiplication' natural transformations.

I've just realised the white/grey/black picture I mentioned above only works if \(f\) is *surjective*. If that isn't the case, then there are a bunch of \(y\)s that aren't hit by anything in \(X\), and they correspond to empty partitions of \(X\). It's not clear what "colour" these empty partitions should be. In a sense they are also "borderline cases" but in the opposite way to the grey cells.

I suppose you could colour them "antigrey" and say \(f_*\) is grey and black, while \(f_!\) is antigrey and black. Then the double complement still works by flipping black and white, then complementing – which takes us from grey + black to grey + white to antigrey + black.

Note in particular that \(f_!(S)\subseteq f_*(S)\) only holds in the surjective case.
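(As above, this comment's \(f_!\) is the universal, right-adjoint image.) A minimal non-surjective example, with made-up sets, shows the inclusion failing:

```python
X, Y = {0, 1}, {'a', 'b'}          # 'b' is never hit, so f is not surjective
f = {0: 'a', 1: 'a'}.get

S = {0}                            # a "grey" cell: 0 is in S but 1 is not

left  = {f(x) for x in S}                                      # existential image
right = {y for y in Y if all(x in S for x in X if f(x) == y)}  # universal image

print(left)    # {'a'}
print(right)   # {'b'}: the unhit reading sneaks in vacuously
assert not (right <= left)         # the inclusion fails without surjectivity
```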

///

One more warning – the notation in Seven Sketches is the other way round (p25) – they use \(f_!\) for the direct image and \(f_*\) for the right-adjoint-to-preimage-that-doesn't-have-a-name.


Anindya - thanks for the extra warning. They are probably using the notation in the "correct" way: the way first established by Grothendieck and others in their work on [the six operations](https://en.wikipedia.org/wiki/Six_operations) in homological algebra.


I had no idea this stuff was first established by Grothendieck! It's kinda surprising that operations that seem so "intuitive" and "obvious" as inverse image etc are actually pretty recent innovations in mathematics.


Anindya and Owen - okay, I've tried to fix my explanation of the right adjoint to \(f^{\ast}\), while simultaneously switching my usage of \(f_!\) and \(f_{\ast}\) so that now they match Example 1.89 in *Seven Sketches* and also the nLab, which assures us that \(f_{\ast}\) is the usual name for the *right* adjoint to \(f^{\ast}\). This is the sort of juggling that I'm likely to screw up, especially since it's 4:02 am here and I'm doing this in a bout of insomnia. So, please take a look and see if it's all consistent now!


@John #17 – that looks basically right to me, just three small typos:

1. "The inverse image is also called the preimage, and it's often written as \(f^{-1}(X)\)" – I think this should be \(f^{-1}(S)\)
2. "The image is often written as \(f(X)\)" – I think this should be \(f(S)\)
3. "\(\{y \in Y:\textrm{ for some } x \in T \}\)" – there's a missing \(y = f(x)\)

So \(f_!(\text{there's a living cat in my room})\) is the set of all temperatures that are possible if there is a living cat in the room.


Consider the property of the room "everyone in the room thinks it's too hot"; call it \(H\). Then \(f_\ast(H) = \{y \in Y : \textrm{for all states } x \textrm{ of the room, if } f(x)=y \textrm{ then } x \in H\}\). That's a nice set: it's the set of temperatures that guarantee that everyone will think it's too hot.

But the property \(C\), "there's a cat in the room", would have \(f_\ast(C)=\emptyset\) because there's no temperature that guarantees there'll be a cat in the room.


A couple of typos: you write \(f^{\ast} : P(X) \to P(Y)\) in the statement of the theorem and in the line below its proof. I believe it should be \(f^{\ast}: P(Y) \to P(X)\).


I'm having difficulty with the idea of regarding a set \(X\) as a set of states for some system and \(P(X)\) as propositions about that system. Perhaps I'm pushing the analogy too far, but here goes.

Let \(X\) be the states of the Pacific ocean. How do we deal with states that are identically true or false? For instance, an element of \(X\) (i.e. a state of the ocean) is \(C :=\) "the ocean contains more water than can fit in my coffee mug". Passing to \(P(X)\), we get a singleton \(\{C\}\). But because the state \(C\) is always occupied, we have true implies \(\{C\}\). In this model, that means \(\{C\}\) contains \(P(X)\), which seems absurd. What am I missing here?


Daniel Cicala wrote:

> A couple of typos...

Fixed! Thanks! By the way, you can do TeX here: the main tricky part is that you write inline equations surrounded by `\\(` and `\\)` rather than the usual dollar signs. You can also click on the little gear above any comment and see how people have written them... and you can edit your own comments. As a lover of beauty, I'll beautify your comment with some TeX.

David Spivak wrote:

> So \(f_!\{\text{there's a living cat in my room}\}\) is the set of all temperatures that are possible if there is a living cat in the room.

Yes, I was going to pose that as a puzzle... and I'll put it in my lecture above now, so you've just answered that puzzle! But someone else can answer this one:

**Puzzle 25.** In the same situation, what is $$ f_\ast\{\text{there's a living cat in my room}\} \; ?$$


The set \( f_* \{\text{there's a living cat in my room} \} \) should be the range of temperatures all the cats living in my room can tolerate.


Anindya wrote:

> that looks basically right to me, just three small typos:

Fixed! Thanks very much for your Herculean labors in helping me clean up the [Augean stable](http://www.perseus.tufts.edu/Herakles/stables.html) of this post.


Daniel wrote:

> Let \(X\) be the states of the Pacific ocean. How do we deal with states that are identically true or false?

That doesn't make sense: states don't have truth values! It's propositions that have truth values.

States are *ways a system can be*. For example, in the classical mechanics of a particle on a line, the set of states is taken to be \(\mathbb{R}^2\). A point in here is a position-momentum pair. The idea is that the particle can have any real number as its position and any real number as its momentum, and the position and momentum tell us everything there is to know about the particle. We say they tell us its "state".

The set of states of some system is usually called its **state space**. The concept of state space is important not only in physics but also computer science, game theory... and pretty much everything where you have a "system" that has a set of "ways it can be":

* Wikipedia, [State space](https://en.wikipedia.org/wiki/State_space).

Now, about this:

> "the ocean contains more water than can fit in my coffee mug."

That's not a state. That's a proposition: something that can be true or false! In many situations, a proposition can be seen as a subset \(S\) of the state space \(X\): in this case, all the states of the Pacific Ocean in which that ocean contains more water than can fit in my coffee mug. If the proposition is identically true, then \(S = X\). If it's identically false, then \(S = \emptyset\).

Note that every state \(x \in X\) gives a proposition, namely the singleton \(\{x\}\). You can think of this as the proposition "the system is in state \(x\)". But there are lots of *other* propositions.

Warning: when we get to quantum mechanics we should generalize these ideas. One way is to replace the category of sets by some other category. Then the "state space" becomes an object \(X\) in that category, and propositions become elements of the poset of subobjects of \(X\). There is much more to say, but this is a start.

`Daniel wrote: > Let \\(X\\) be the states of the Pacific ocean. How do we deal with states that are identically true or false? That doesn't make sense: states don't have truth values! It's propositions that have truth values. States are _ways a system can be_. For example in the classical mechanics of a particle on the line, the set of states is taken to be \\(\mathbb{R}^2\\). A point in here is a position-momentum pair. The idea is that the particle can have any real number as its position and any number as its momentum, and the position and momentum tell us everything there is to know about the particle. We say they tell us its "state". The set of states of some system is usually called its **state space**. The concept of state space is important not only in physics but also computer science, game theory... and pretty much everything where you have a "system" that has a set of "ways it can be": * Wikipedia, [State space](https://en.wikipedia.org/wiki/State_space). Now, about this: > "the ocean contains more water than can fit in my coffee mug." That's not a state. That's a proposition: something that can be true or false! In many situations, a proposition can be seen a subset \\(S\\) of the state space \\(X\\): in this case, all the states of the Pacific Ocean in which that ocean contains more water than can fit in my coffee mug. If the proposition is identically true, then \\(S = X\\). If it's identically false, then \\(S = \emptyset\\). Note that every state \\(x \in X\\) gives a proposition, namely the singleton \\(\\{x\\}\\). You can think of this as the proposition "the system is in state \\(x\\)". But there are lots of _other_ propositions. Warning: when we get to quantum mechanics we should generalize these ideas. One way is to replace the category of sets by some other category. Then the "state space" becomes an object \\(X\\) in that category, and propositions become elements of the poset of subobjects of \\(X\\). 
There is much more to say, but this is a start.`

Keith Peterson wrote:

> The function \( f_*\{\text{there's a living cat in my room}\} \) should be the range of temperatures all the cats living in my room can tolerate.

I don't think so. You're putting a universal quantifier on cats - "all cats" - but I think the universal quantifier should go somewhere else.

What do the rest of you folks think about this?

**Puzzle 25.** The range is the set of all readings, each of a state which at least one cat can tolerate.

I think it's {70 °F}. That's what I have my thermostat set to. I know that when my cat is in the room then, ceteris paribus, that's the temperature.

On the other hand, \(f_\ast(\{\text{my liquid helium container has been breached}\})\) = {−452.2 °F}.

But I argue:

$$ f_\ast(\{\text{there's a living cat in my room, my liquid helium container has been breached}\}) = \varnothing $$

This is because these are distinct states. My cat has a lot more sense than his owner and knows better than to stick around after he has knocked over the liquid helium. So there is no *one* temperature that maps to all of these states.

I think other people will have different answers. Some people have their thermostats set to 72 °F, for instance, while others keep their homes and liquid helium under more pressure.

Dennis Loumas answered Puzzle 25:

> the set of all readings, each of a state which at least one cat can tolerate

Right! What about Puzzle 26? I ask because this one is causing some more trouble.

For \(f_\ast(\{\text{there is a living cat in your room}\})\), I get the set of all thermometer readings which are *only* possible if there is a live cat in the room. (This is interpreting the state set {there is a living cat in your room} not as a singleton but as the set of all room states in which the room contains a living cat.) I imagine for most interpretations of the function \(f\) this set is empty.

If we think of \(f_!\) and \(f_\ast\) as best approximations to an imaginary "inverse" of \(f^\ast\), then we're trying to find the best approximation from above or below to some imaginary set of temperature readings that correspond exactly to the room states containing a live cat. The "liberal" left adjoint \(f_!\) generously throws in all the temperatures for which *some* corresponding room state contains a live cat, while the "conservative" right adjoint \(f_\ast\) cautiously only allows the temperatures for which *all* corresponding room states contain a live cat.
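The "liberal"/"conservative" picture above is exactly a pair of adjunctions: \(f_!(S) \subseteq T\) iff \(S \subseteq f^*(T)\), and \(f^*(T) \subseteq S\) iff \(T \subseteq f_*(S)\). Here is a brute-force check on a tiny invented example; the sets and the function are arbitrary, chosen only so that every subset pair can be tested.

```python
# Brute-force check, on a tiny made-up example, of the two adjunctions behind
# the "liberal" and "conservative" images:
#   f_!(S) <= T  iff  S <= f*(T)     (f_! is LEFT adjoint to preimage)
#   f*(T) <= S   iff  T <= f_*(S)    (f_* is RIGHT adjoint to preimage)
from itertools import chain, combinations

X = {0, 1, 2, 3}
Y = {'a', 'b'}
f = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}     # an arbitrary invented observation

def subsets(s):
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

preimage = lambda T: {x for x in X if f[x] in T}               # f*(T)
liberal  = lambda S: {f[x] for x in S}                         # f_!(S)
cautious = lambda S: {y for y in Y if preimage({y}) <= S}      # f_*(S)

checked = 0
for S in subsets(X):
    for T in subsets(Y):
        assert (liberal(S) <= T) == (S <= preimage(T))
        assert (preimage(T) <= S) == (T <= cautious(S))
        checked += 1
print("both adjunctions hold on all", checked, "pairs (S, T)")
```

Since \(P(X)\) has 16 elements and \(P(Y)\) has 4, this checks all 64 pairs.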

Owen answered Puzzle 26 as follows:

> For \(f_\ast(\{\text{there is a living cat in your room}\})\), I get the set of all thermometer readings which are *only* possible if there is a live cat in the room.

Good - yes, that's the answer I was fishing for! It's probably the empty set, unless for example we're only considering a restricted set of states where you own a cat who always comes inside when the temperature drops below freezing... not all physically possible states of the matter in your room.

> If we think of \(f_!\) and \(f_\ast\) as best approximations to an imaginary "inverse" of \(f^\ast\), then we're trying to find the best approximation from above or below to some imaginary set of temperature readings that correspond exactly to the room states containing a live cat. The "liberal" left adjoint \(f_!\) generously throws in all the temperatures for which *some* corresponding room state contains a live cat, while the "conservative" right adjoint \(f_\ast\) cautiously only allows the temperatures for which *all* corresponding room states contain a live cat.

Exactly!!!

We're trying to figure out what the presence of a living cat in the room says about the temperature. The left and right adjoints give two ways of doing it: the generous way ("these are the temperatures where there _could_ be a live cat in the room") and the cautious way ("these are the temperatures where there _must_ be one").

Keith Peterson wrote:

> This complement operation is left adjoint to what already defined operation?

I'll sketch a strategy to define it in terms of a _right_ adjoint.

We can think of "complement" in logical terms as "negation". To define "negation" in a fairly general poset, I might try to use a few facts:

* The [material conditional](https://en.wikipedia.org/wiki/Material_conditional) \(a \to b\) - that is, the proposition _in our poset_ that means "\(a\) implies \(b\)" - can be defined in terms of a right adjoint, since

$$ x \wedge a \implies b \textrm{ if and only if } x \implies (a \to b) $$

* Negation can be defined in terms of the material conditional and the proposition "false", which is often written \(\bot\):

$$ \neg a = (a \to \bot) $$

* The proposition "false", or \(\bot\), is the least upper bound of the empty set in our poset:

$$ \bot = \bigvee \emptyset $$
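As a concrete sanity check of this recipe, here is a small Python sketch inside the power set \(P(U)\) of an invented three-element set: \(a \to b\) is computed as the largest \(x\) with \(x \wedge a \le b\), and "not" is \(\neg a = (a \to \bot)\) with \(\bot = \emptyset\).

```python
# Sketch of the "negation via right adjoint" recipe inside P(U), U invented.
# a -> b is the JOIN (union) of all x with x /\ a <= b; then ¬a = (a -> ⊥), ⊥ = ∅.
from itertools import chain, combinations

U = {1, 2, 3}

def subsets(s):
    items = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

def implies(a, b):
    """Heyting implication: the union of all x with x ∩ a ⊆ b."""
    return frozenset().union(*[x for x in subsets(U) if x & a <= b])

def neg(a):
    return implies(a, frozenset())       # ¬a = a -> ⊥

a = frozenset({1, 2})
print(neg(a))                             # in P(U), "not" comes out as the complement

# the defining adjunction:  x /\ a <= b  iff  x <= (a -> b)
b = frozenset({1})
assert all((x & a <= b) == (x <= implies(a, b)) for x in subsets(U))
```

In a power set this recovers the ordinary complement, which is the point of Puzzle 22; in more general posets the same two lines of code would give an intuitionistic "not".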

Re **Puzzle 25**: I tried to answer what to me was a slightly more plausible formulation, which reversed a cat heating the room to "is the bed hot enough for the cat?" (maybe it's looking for it). This isn't very good, as it suggests the answer is T|F. But I don't think this necessarily excludes a map, so I got \(f_\ast\) as the range of such temperatures. I need a better formulation.

And thanks to all for the unbeatable, brain-mangling fun! :)

Duh. #19 says that's \(f_!\)!


From #20: Perhaps my version is \(f_*\), as it only asks for a guarantee that the bed is hot enough for cats, not that that's where the cat is; if that makes a difference?


**Puzzle 23.** Given \(S \subseteq X\), for all \(x \in X\), observe

$$x \in S \textrm{ iff } f(x) \in f_{\ast}(S).$$

I used this comment to test LaTeX symbols, but now I can't delete it fully, only edit it. Sorry!


I've got very confused with notation, as nearly always, so I've had to start a cheat sheet to help correct my notes. I haven't yet worked up to a sig for \(f_!\) so I just guessed one. Are the preceding ones correct? Are the names I've lifted from Wikipedia consistent with the current book? Please can folks suggest the best synonyms to add to these defs. I'll add the adjointness descriptions when I've a bit of time (and maybe defs for tensor and hom if and when I've found out their relevance).

<img src="https://docs.google.com/drawings/d/e/2PACX-1vRmwcOUxmhkpmZsYpNzjBrqOAgoi8D9wtnkdR11ygQv-fwzA4-kwy_VKnJTiG0JpL4-j9pnSjj8n4UV/pub?w=1864&h=1557">

Dennis - great answer to Puzzle 23! Yes, it's that quick if one does it right!

I edited your comment a bit, but without changing the substance. It sounds like you're running into a few peculiarities of how LaTeX works here:

1) You often need to hit "refresh" on your browser to see the LaTeX symbols turn into math symbols. This only affects what _you_ see; you don't need to do this to get others to see it.

2) You can only delete a comment of your own after a little while. (See if you can delete it now.)

and from the [welcome page](https://forum.azimuthproject.org/discussion/1717/welcome-to-the-applied-category-theory-course):

3) To create "displayed" equations, centered on the page, use double dollar signs:

`$$E = \sqrt{m^2 + p^2}$$`

produces this: $$E = \sqrt{m^2 + p^2}$$

4) To create "inline" equations, mixed in with your text, use this other method:

`\\(E = \sqrt{m^2 + p^2}\\)`

produces this: \(E = \sqrt{m^2 + p^2}\).

PS: Google Drawings seems quicker than modifying the Hasse and symmetry cube code in the Haskell diagrams package at the moment. So many thanks to [Fredrick Eisele](https://forum.azimuthproject.org/profile/2031/Fredrick%20Eisele) for the lead and the great examples.

[Jim wrote:](https://forum.azimuthproject.org/discussion/comment/16770/#Comment_16770)

> Are the preceding ones correct? Are the names I've lifted from Wikipedia consistent with the current book?

Your chart is fine, though it breaks my heart that you didn't write it in LaTeX. <img src="http://math.ucr.edu/home/baez/emoticons/cry.gif">

There's a bit of variability in how people use names, which I've tried to explain in my posts. A lot of this arises because there are a few terms that everyone learns in math courses which get revamped when people study specialized branches of math connected to category theory - like algebraic geometry - where more intricate things become important:

1) As you note, "preimage" and "inverse image" are synonyms. Both are standard terms that all mathematicians know. Usually they use the symbol \(f^{-1}(S)\), which I've deprecated for reasons explained in this post: like the term "inverse image", it can fool beginners into thinking the function \(f\) has an inverse \(f^{-1}\) when it doesn't. \(f^*(S)\) is used by category theorists and algebraic geometers.

2) The "proper direct image" \(f_!(S)\) is used by category theorists and algebraic geometers; ordinary mathematicians call this thing merely the "image" and denote it as \(f(S)\). The notation \(f(S)\) is sloppy because strictly speaking we should only apply \(f : X \to Y\) to an element of \(X\), not a subset \(S \subseteq X\).

3) The concept of "direct image" \(f_*(S)\) is not widely known outside algebraic geometry and topos theory, so there's less variability of terminology here.

4) The "proper inverse image" \(f^!(S)\) is even less widely known outside algebraic geometry and topos theory, and you've just pointed out why: in the category of sets it's the exact same thing as the "preimage". It only becomes a distinct concept in fancier contexts that we haven't met yet, and in fact may never meet in this course.

Hey John,

Thanks for clearing up my confusion regarding \(f_\ast\)!

In [#34](https://forum.azimuthproject.org/discussion/1931/lecture-9-adjoints-and-the-logic-of-subsets#latest) you wrote:

> I'll sketch a strategy to define it in terms of a _right_ adjoint.
>
> We can think of "complement" in logical terms as "negation". To define "negation" in a fairly general poset, I might try to use a few facts:
>
> * The [material conditional](https://en.wikipedia.org/wiki/Material_conditional) \(a \to b\) - that is, the proposition _in our poset_ that means "\(a\) implies \(b\)" - can be defined in terms of a right adjoint, since
>
> $$ x \wedge a \implies b \textrm{ if and only if } x \implies (a \to b) $$

This is what Joel Jennings was saying in [#8](https://forum.azimuthproject.org/discussion/comment/16680/#Comment_16680).

This isn't *classical logic*, though. It's a sublogic called *intuitionistic logic*. It was pioneered by L. E. J. Brouwer in his paper [*The Unreliability of the Logical Principles* (1908)](https://arxiv.org/pdf/1511.01113.pdf). In this paper he rejects the [*law of the excluded middle*](https://en.wikipedia.org/wiki/Law_of_excluded_middle):

$$ a \vee \neg a $$

If we define \(\neg a := a \to \bot\), we end up with posets where this isn't true.

**Maybe we don't want it to be true!** You can use it to make rather disappointing proofs, as Brouwer argues. Also, I can't write a program in Haskell with type signature `p :: Either (a -> _|_) a`.

I argued that if you have a lattice \((P, \leq, \wedge, \vee, \to, \bot)\) we can always construct a sublattice with domain \(\bar{\bar{P}} \subseteq P\) where the excluded middle works again. This is done by closing every element with the closure operator \(\neg (\neg\ \cdot)\). This is *Glivenko's Theorem*.

Since \(a \leq \neg (\neg a)\), the operator \(\neg (\neg\ \cdot)\) is a *monad*, as you hinted in your puzzle in [lecture 4](https://forum.azimuthproject.org/discussion/comment/16320/#Comment_16320).
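Both claims are easy to check by computer on the smallest Heyting algebra where excluded middle fails: the three-element chain \(0 \le \tfrac12 \le 1\). The numeric labels below are arbitrary; only the ordering matters.

```python
# Tiny check of the two claims above on the 3-element chain 0 <= 1/2 <= 1,
# a Heyting algebra where the excluded middle fails. Labels are arbitrary.
chain3 = [0.0, 0.5, 1.0]

def implies(a, b):
    """Heyting implication on a chain: a -> b = 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def neg(a):
    return implies(a, 0.0)            # ¬a = a -> ⊥, with ⊥ = 0

half = 0.5
print(max(half, neg(half)))   # 0.5, not 1: a ∨ ¬a fails for the middle element
print(neg(neg(half)))         # 1.0: ¬¬(1/2) = 1 >= 1/2, as the closure requires

# the ¬¬-closed elements {0, 1} form a Boolean algebra where a ∨ ¬a = 1 again
closed = {neg(neg(a)) for a in chain3}
print(sorted(closed))          # [0.0, 1.0]
assert all(max(a, neg(a)) == 1.0 for a in closed)
```

On the chain, join is just `max`, so the failure of \(a \vee \neg a = \top\) at \(\tfrac12\) and its repair on the \(\neg\neg\)-closed elements are both one-line checks.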

Matthew Doty wrote:

> This isn't _classical logic_, though. It's a sublogic called _intuitionistic logic_.

I'd disagree slightly. As you know - but others may not, so I have to talk slowly - there's a specially nice kind of poset called a [Boolean algebra](https://ncatlab.org/nlab/show/Boolean+algebra#general); this is the kind that shows up as a poset of propositions in classical logic. If we take my sketched definition of "not" and apply it to a poset that's a Boolean algebra, we get the usual concept of "not", which obeys the law of excluded middle.

But yes, if we take the same definition and apply it to a more general class of posets, called [Heyting algebras](https://ncatlab.org/nlab/show/Heyting+algebra), we get intuitionistic logic.

So, we can think of this definition of "not" as suitable for both classical and intuitionistic logic, depending on what sort of poset we apply it to.

> Matthew Doty wrote:
>
> > This isn't *classical logic*, though. It's a sublogic called *intuitionistic logic*.
>
> I'd disagree slightly.

I'm not trying to say one is right and the other is wrong! These are just the names I learned for them in school.

Intuitionistic logic is great for programming. Haskell is intimately connected to intuitionistic logic.

**Puzzle**: How do you write a type for \(\bot\) in Haskell, and then write a program `hilbertExplosion :: _|_ -> a`? (Hint: you might want `{-# LANGUAGE Rank2Types #-}`.)

Here's an amazing example of applying intuitionistic logic to Haskell: two months ago I saw a new compiler plugin released, [*JustDoIt*](https://github.com/nomeata/ghc-justdoit):

<pre>
{-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}
module Test where

import GHC.JustDoIt

foo :: ((a -> r) -> r) -> (a -> ((b -> r) -> r)) -> ((b -> r) -> r)
foo = (…)
</pre>

It takes the type signature of the program you want, and treats it like a proposition. It then uses an old theorem prover Roy Dyckhoff wrote in 1991, based on Gentzen's LJ sequent calculus. It then turns the proof into a program and writes it for you. Still feels like witchcraft...

But... I do a lot of computer proofs. Those are hard. I use the excluded middle *a lot*. You can call me a *tertium non datur* junkie. Or just intellectually lazy.

Matthew wrote:

> I'm not trying to say one is right and the other is wrong! These are just the names I learned for them in school.

Me too. My point was a bit different: I was saying that the poset-based definition of "not" that I sketched in [#34](https://forum.azimuthproject.org/discussion/comment/16758/#Comment_16758) applies to both classical and intuitionistic logic.

Category theorists tend to like intuitionistic logic, which includes classical logic as a special case: the internal logic of a topos is intuitionistic, but when the topos is "Boolean" the principle of excluded middle will hold, and the logic is classical. We may get to a bit of this later in the book.

Jim: I've polished up your LaTeX and taken the liberty of adding a bit of information. I've also arranged the three important kinds of image in the order you should learn them, added a bit of information about them, and put my preferred names for them first.

I've added your cheat sheet here:

* [Applied Category Theory - Course Materials](http://www.azimuthproject.org/azimuth/show/Applied+Category+Theory#Materials).

Hi Jim, you can push the little wheel at the upper right of other comments and click "view source" to see how people enter math symbols, and you can also use [Detexify](http://detexify.kirelabs.org/classify.html) to enter freehand-drawn symbols and get the translation.

Cheers John. I've never needed LaTeX before. Please delete my latest post then, as you're too fast for me :)!

Update: I didn't realise it was a link to here. I can't get any spaces in text, and \(f^{-1}\) is a mystery.