Lecture 17 - Chapter 1: The Grand Synthesis

In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly:

  • Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints.

  • Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints.

Today we'll conclude our discussion of Chapter 1 with two more bombshells:

  • Joins are left adjoints, and meets are right adjoints.

  • Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down.

This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world!

Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders.

In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins:

$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets:

$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets.
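To make this concrete, here is a minimal Haskell sketch (my own example, not one from the book) that computes a right adjoint pointwise from the join formula above. It works in a finite fragment of the poset \( (\mathbb{Z}, \le) \), where the join of a finite nonempty set is its maximum; the doubling map is an arbitrary choice of left adjoint for illustration.

```haskell
-- Computing  g(b) = ⋁ {a ∈ A : f(a) ≤ b}  on a finite fragment of
-- (Z, ≤), where the join of a finite nonempty set is its maximum.

domain :: [Integer]
domain = [-10 .. 10]

-- A monotone map that happens to be a left adjoint: doubling.
f :: Integer -> Integer
f a = 2 * a

-- Its right adjoint, computed pointwise from the join formula.
g :: Integer -> Integer
g b = maximum [a | a <- domain, f a <= b]

main :: IO ()
main = do
  print (g 7)  -- 3, i.e. integer division of 7 by 2
  -- spot-check the adjunction  f(a) ≤ b  iff  a ≤ g(b),
  -- staying away from the boundary of the finite fragment
  print (and [ (f a <= b) == (a <= g b) | a <- domain, b <- [-9 .. 9] ])
```

Here \(g\) comes out as floor division by two, the usual right adjoint to doubling.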

Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist:

$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist:

$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes.

Suppose \(A\) is a poset with all binary joins. Then we get a function

$$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows:

$$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that

$$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal

$$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\).

Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact:

$$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression on the left here, and applying \( \Delta \) to \( b \) in the expression on the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \).
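If you like to experiment, here is a small numerical spot-check of this adjunction (my own example, and of course not a proof, so Puzzle 45 below still stands): take \(A = \mathbb{Z}\) with its usual order, so that binary join is max.

```haskell
-- Spot-checking  ∨ ⊣ Δ  in (Z, ≤): binary join is max, and
-- A × A carries the componentwise order.

join2 :: (Integer, Integer) -> Integer
join2 (a, a') = max a a'

delta :: Integer -> (Integer, Integer)
delta b = (b, b)

-- (a,a') ≤ (b,b') in the product order
leq2 :: (Integer, Integer) -> (Integer, Integer) -> Bool
leq2 (a, a') (b, b') = a <= b && a' <= b'

-- the adjunction:  join2 (a,a') ≤ b  iff  (a,a') ≤ delta b
main :: IO ()
main = print (and [ (join2 (a, a') <= b) == leq2 (a, a') (delta b)
                  | a <- r, a' <- r, b <- r ])  -- True
  where r = [-5 .. 5] :: [Integer]
```

The meet case below is dual: replace max by min and turn the inequalities around.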

Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \). Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \).

A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function

$$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying

$$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check.

Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number.

All this is very beautiful, but you'll notice that all these facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on.

Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by

$$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short.

I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason to bring it in.
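For the programmers, here is a tiny Haskell rendering of the opposite order (my own sketch; the standard library's Data.Ord.Down plays the same role). It also shows, on one example, the flavor of Puzzle 49: a join (here, a maximum) in \(A\) is a meet in \(A^{\textrm{op}}\).

```haskell
-- A newtype that reverses an existing order: the opposite poset.
newtype Op a = Op a deriving (Eq, Show)

instance Ord a => Ord (Op a) where
  compare (Op x) (Op y) = compare y x  -- flip the comparison

main :: IO ()
main = do
  print (maximum [3, 1, 2 :: Int])          -- 3: the join in A
  print (maximum (map Op [3, 1, 2 :: Int])) -- Op 1: the "join" in A^op
                                            -- is the meet in A
```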

Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset.

Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again.

Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \).

Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \).

Puzzle 51. Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\).

So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs.

Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers.

But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality!

This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.

To read other lectures go here.

Comments

  • 1.

    Here's a nice example of how Galois connections can arise in practice, using the notion of opposite poset that you introduced here: Say we have a collection of things \(A\) and a collection of properties \(B\) that the things might have; for example, \(A\) could be a set of animals \[A = \{\text{dog, cat, octopus, snake, elephant}\}\] and \(B\) could be a set of animal properties \[B = \{\text{is a mammal, has fur, has a tail, has legs}\}\]

    Then we get a function \(f:P(A)\to P(B)\) sending a subset of the animals to the set of properties they all share, e.g. \[f(\{\text{dog, cat, elephant}\})=\{\text{is a mammal, has a tail, has legs}\}\] \[f(\{\text{dog, cat, octopus, elephant}\})=\{\text{has legs}\}\] and a function \(g:P(B)\to P(A)\) sending a subset of the properties to the set of animals that have them all, e.g. \[g(\{\text{is a mammal, has a tail}\}) = \{\text{dog, cat, elephant}\}\] \[g(\{\text{is a mammal, has fur, has a tail}\}) = \{\text{dog, cat}\}\]

    Usually we make power sets like \(P(A)\) and \(P(B)\) into posets using containment \(\subseteq\), but then these functions aren't monotone: the larger the set of animals, the fewer properties they have in common, and vice versa. But we can make both functions monotone by using reverse containment, \(\supseteq\), for one of the posets (say, \(P(B)\)). This means that we regard \(f\) as a monotone function \(P(A)\to P(B)^\text{op}\) and \(g\) as a monotone function \(P(B)^\text{op}\to P(A)\). And these monotone functions are always a Galois connection!

    Spelled out, this says that for two subsets \(S\subseteq A\) and \(T\subseteq B\), we always have \[f(S) \supseteq T \iff S\subseteq g(T),\] and this equivalence holds because both containments just mean that every animal in the set \(S\) has every property in the set \(T\).
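    Here is a runnable Haskell encoding of this example (my own; I use lists as sets, and the animal/property table is the one spelled out above):

    ```haskell
    -- A runnable encoding of the animals/properties Galois connection,
    -- using lists as sets.

    animals, properties :: [String]
    animals    = ["dog", "cat", "octopus", "snake", "elephant"]
    properties = ["is a mammal", "has fur", "has a tail", "has legs"]

    -- the "context": which animal has which property
    has :: String -> String -> Bool
    has a p = p `elem` propsOf a
      where
        propsOf "dog"      = ["is a mammal", "has fur", "has a tail", "has legs"]
        propsOf "cat"      = ["is a mammal", "has fur", "has a tail", "has legs"]
        propsOf "elephant" = ["is a mammal", "has a tail", "has legs"]
        propsOf "snake"    = ["has a tail"]
        propsOf "octopus"  = ["has legs"]
        propsOf _          = []

    -- f: the properties shared by every animal in s
    f :: [String] -> [String]
    f s = [p | p <- properties, all (`has` p) s]

    -- g: the animals having every property in t
    g :: [String] -> [String]
    g t = [a | a <- animals, all (a `has`) t]

    main :: IO ()
    main = do
      print (f ["dog", "cat", "elephant"])     -- mammal, tail, legs
      print (g ["is a mammal", "has a tail"])  -- dog, cat, elephant
      print (g (f ["dog", "elephant"]))        -- the closure: dog, cat, elephant
    ```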

    You can even go further and think to yourself "Any meaningful set of animals that includes dogs and elephants should include cats too, because cats have every property that dogs and elephants both have (namely, tails, legs, and being mammals)." This amounts to composing the functions \(g\) and \(f\), and the laws of adjoints tell us that \(S\subseteq g\circ f (S)\). However, if we do the same process again, we don't get any more the second time: \(g\circ f(g\circ f(S)) = g\circ f(S)\). You could say that the set {dog, cat, elephant} is already closed under the operation of adding in all the other animals that have all the properties these animals have: in particular, these are all mammals, and there aren't any more mammals in our set of animals. In other words, this set of animals is exactly the set \(g(\{\text{is a mammal}\})\). In fact, \(g\) always outputs a closed set of animals!

    We can do the same with sets of properties, and describe a set of animal properties as closed if there are no extra properties all animals with those properties also share. For example, a closed set of properties including "has fur" must also include "is a mammal", because the only animals with fur we're considering are also mammals; similarly, for our list of animals, the only animals that are mammals also have legs and tails. Here are the closed sets of properties: \[\varnothing, \{\text{has a tail}\}, \{\text{has legs}\}, \{\text{is a mammal, has a tail, has legs}\}, \{\text{is a mammal, has fur, has a tail, has legs}\}\] And here are the closed sets of animals: \[\{\text{dog, cat, octopus, snake, elephant}\}, \{\text{dog, cat, snake, elephant}\},\] \[\{\text{dog, cat, octopus, elephant}\}, \{\text{dog, cat, elephant}\}, \{\text{dog, cat}\}\] It's not a coincidence that there are five of each: the way I've ordered the closed sets, they correspond to each other via \(f\) and \(g\): for example, \[f(\{\text{has a tail}\}) = \{\text{dog, cat, snake, elephant}\}\] \[g(\{\text{dog, cat, snake, elephant}\}) = \{\text{has a tail}\}\]

    We end up with an order-reversing one-to-one correspondence between the closed sets of animals and properties! You can think of these closed sets as telling us some information about how we should taxonomize our animals, with each closed set of properties corresponding hierarchically to some meaningful collection of animals:

    ![animal taxonomy](https://docs.google.com/drawings/d/e/2PACX-1vRVvs7JTqINXpN7S-LgyZFm32STzcs5U3asHzhJt2ZgpgbiWwoxrYsqIjMrt7S8__--SRTO5_BaKSC8/pub?w=509&h=408)

    But a lot of this works just as well for any Galois connection—I'll post what I know as a series of puzzles.

  • 2.
    edited April 2018

    Here are my puzzles about closed elements: Let \(f: A\to B\) and \(g: B\to A\) be the left and right adjoints in a Galois connection between posets \(A\) and \(B\), so that \(f(a)\leq b \iff a \leq g(b)\). Given \(a\in A\), we'll write \(g(f(a))\) as \([a]\) for short, and \([b]=f(g(b))\) for each \(b\in B\).

    Puzzle OB1 (Kenobi): Show that for all \(a\in A\) and \(b\in B\), we have \(a\leq [a]\) and \(b\geq [b]\).

    We'll say that an element \(x\) (of \(A\) or \(B\)) is closed if \(x = [x]\).

    Puzzle OB2: Show that for all \(a\in A\) and \(b\in B\), the elements \(f(a)\in B\) and \(g(b)\in A\) are both closed. (Hint: You can deduce both \(f(a)\leq f(g(f(a)))\) and \(f(g(f(a)))\leq f(a)\) from OB1.) Deduce that \([a]\) and \([b]\) are themselves closed.

    Puzzle OB3: Show that if \(a\leq a'\) in \(A\) then \([a]\leq [a']\). Deduce that if \(a\leq a'\) and \(a'\) is closed, then \([a]\leq a'\). We say that \([a]\) is the smallest closed element that is at least as big as \(a\). Show that for any \(b\in B\), \([b]\) is the biggest closed element that is at least as small as \(b\).

    Puzzle OB4: Let's write \(A_\text{closed}\) and \(B_\text{closed}\) for the sets of closed elements in \(A\) and \(B\). Show that \(f\) and \(g\) restrict to functions \(A_\text{closed}\leftrightarrow B_\text{closed}\), and that on these subsets they are inverses of each other! Deduce that \(A_\text{closed}\) and \(B_\text{closed}\) are isomorphic posets.
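    For concreteness, here is a tiny numerical companion to these puzzles (my own example, not part of them): the Galois connection \(f(a) = 2a \dashv g(b) = \lfloor b/2 \rfloor\) on \( (\mathbb{Z}, \le) \), where every element of \(A\) is closed and the closed elements of \(B\) are exactly the even numbers.

    ```haskell
    -- OB1 and the closed elements, for  f(a) = 2a ⊣ g(b) = b `div` 2  on (Z, ≤).

    f :: Integer -> Integer
    f a = 2 * a

    g :: Integer -> Integer
    g b = b `div` 2

    closureA, closureB :: Integer -> Integer
    closureA = g . f   -- [a] = g(f(a)) = a: every a is closed
    closureB = f . g   -- [b] = f(g(b)) rounds b down to an even number

    main :: IO ()
    main = do
      print (and [ a <= closureA a | a <- [-20 .. 20] ])  -- OB1 for A: True
      print (and [ b >= closureB b | b <- [-20 .. 20] ])  -- OB1 for B: True
      print [ b | b <- [-6 .. 6], b == closureB b ]       -- closed: the evens
    ```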

    (John, this all ended up being longer than I meant it to be. I hope you don't mind me plopping it all in here!)

  • 3.

    Owen - this is great! I want people to jump in like this. In particular, one of my big regrets was not talking about the material on "closure operators" in Chapter 1. These are a special case of an item much-beloved by functional programmers: monads. A monad on a poset is a closure operator. You've introduced a lot of the key ideas in Puzzles OB1 - OB4.

  • 4.

    Hey Owen,

    This reminds me a lot of [formal concept analysis](https://en.wikipedia.org/wiki/Formal_concept_analysis#Formal_contexts_and_concepts). Did you have this in mind when you presented your Galois connections above?

  • 5.
    edited April 2018

    Whenever I glanced at it, formal concept analysis seemed like an unimpressively obvious application of classical logic and a bit of set theory. Quite possibly I've missed the interesting parts. But throwing a Galois connection into the stew would certainly liven it up!

    ![warning sign](http://math.ucr.edu/home/baez/mathematical/warning_sign.jpg)

    One of my favorite Galois connections is the one due to Galois, and this again has an "op" in it. He was classifying subfields \(K\) of a given field \(F\), by associating to each subfield \(K \subseteq F\) the subgroup \(G\) of the group \(\mathrm{Aut}(F)\) of symmetries of \(F\) that act as the identity on every element of \(K\). The bigger the subfield \(K\) is, the smaller the subgroup \(G\) is. So, we get a monotone function from the poset of subfields of \(F\) to the opposite of the poset of subgroups of \(\mathrm{Aut}(F)\). We also get a map going back, and these form a Galois connection, and that's how Galois theory works.

  • 6.
    edited April 2018

    Matthew Doty wrote:

    Hey Owen, This reminds me a lot of formal concept analysis. Did you have this in mind when you presented your Galois connections above?

    No, I didn't know this kind of thing had a name! Although I imagine whoever I first heard talking about examples like these probably did know—thanks so much for providing the reference!

    [Imagine the squiggly "challenges ahead" road sign here]

    I think the reason this kind of example stuck in my head was that I learned it at about the same time I was learning about the Zariski topology on \(\mathbb{C}^n\). As the collection of objects, take the set of points of \(\mathbb{C}^n\), and have the collection of "attributes" be the ring \(\mathbb{C}[x_1,\dots,x_n]\) of polynomials in \(n\) variables, where a point \((c_1,\dots,c_n)\) has attribute \(f\) if and only if \(f(c_1,\dots,c_n) = 0\). Then the "closure" operation on subsets of \(\mathbb{C}^n\) is the literal closure in the Zariski topology, and one way of thinking about Hilbert's Nullstellensatz is that the corresponding closure operator on subsets of \(\mathbb{C}[x_1,\dots,x_n]\) is the radical of the ideal generated by the subset.

  • 7.

    So you can reframe a closure operator as a partition into subsets containing maximums. And this viewpoint should categorify into a viewpoint on monads. But this is giving me a headache, especially because all the programming endo-monads are 1-1 (for objects, maps, and even values, using the unit), though join is often a quite interesting onto* partition.

    Also, does the concept of partitions just lift to a functor which is a partition on objects, and whose action on morphisms is a partition for each hom-set?


    I'm going to try to see if you can make a natural deduction version of partition logic, and then get a programming language from that, the same way you can from, say, a substructural logic or a modal logic.

  • 8.

    Christopher Upshaw wrote:

    I'm going to try to see if you can make a natural deduction version of partition logic, and then get a programming language from that, the same way you can from, say, a substructural logic or a modal logic.

    This is the kind of statement that fascinates me to no end, yet I have no idea how to start understanding where these kinds of constructions come from. Can you recommend any resources for acquiring the necessary background?

  • 9.
    edited April 2018

    Christopher Upshaw wrote:

    I'm going to try to see if you can make a natural deduction version of partition logic, and then get a programming language from that, the same way you can from, say, a substructural logic or a modal logic.

    I understand that partition logic has the following adjunctions:

    $$ \, \setminus \, \dashv \, \vee \, \dashv \Delta \dashv \wedge $$ So if partition logic had exponentiation it would just be Boolean logic. I don't know how you would embed a lambda calculus into it without that...

    EDIT: Keith E. Peterson mentions below that partitions have implication after all. I am rather curious now how the logic differs from classical logic.

  • 10.

    Partition implication is defined by David P. Ellerman as follows,

    $$ \mathrm{dit}(\pi_1 \implies \pi_2) := \mathrm{int}(\mathrm{dit}(\pi_1)^c\cup \mathrm{dit}(\pi_2)) $$
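    Here is a sketch of this definition in Haskell (my own encoding, assuming I've read Ellerman's operations correctly): dit sets are sets of ordered pairs, and the interior of a set of pairs is the complement of the reflexive-symmetric-transitive closure of its complement.

    ```haskell
    -- Ellerman's dit-set construction on a finite set, sketched with lists.
    import Data.List (nub)

    type U = Int
    type Rel = [(U, U)]

    univ :: [U]
    univ = [1 .. 4]

    allPairs :: Rel
    allPairs = [(x, y) | x <- univ, y <- univ]

    complementR :: Rel -> Rel
    complementR r = [p | p <- allPairs, p `notElem` r]

    -- dits of a partition (given as a list of blocks): pairs in different blocks
    dit :: [[U]] -> Rel
    dit blocks = [(x, y) | (x, y) <- allPairs, blockOf x /= blockOf y]
      where blockOf z = head [b | b <- blocks, z `elem` b]

    -- reflexive-symmetric-transitive closure of a relation
    eqClosure :: Rel -> Rel
    eqClosure r = go (nub (r ++ [(x, x) | x <- univ] ++ [(y, x) | (x, y) <- r]))
      where
        go s =
          let s' = nub (s ++ [(x, z) | (x, y) <- s, (y', z) <- s, y == y'])
          in if length s' == length s then s else go s'

    interior :: Rel -> Rel
    interior s = complementR (eqClosure (complementR s))

    -- dit(p1 => p2) = int( dit(p1)^c ∪ dit(p2) )
    ditImplies :: [[U]] -> [[U]] -> Rel
    ditImplies p1 p2 = interior (nub (complementR (dit p1) ++ dit p2))

    main :: IO ()
    main =
      -- with p1 = {{1,2},{3,4}} and p2 = {{1,2},{3},{4}}, the result is the
      -- dit set of the discrete partition: all 12 off-diagonal pairs.
      print (length (ditImplies [[1,2],[3,4]] [[1,2],[3],[4]]))  -- 12
    ```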

  • 11.
    edited April 2018

    Jonathan Castello wrote:

    This is the kind of statement that fascinates me to no end, yet I have no idea how to start understanding where these kinds of constructions come from. Can you recommend any resources for acquiring the necessary background?

    There's a very rich set of connections between category theory, logic, and computation. Since I don't know what you know, it's a bit tricky to suggest what to read next. For category theory and computation try this:

      • Bartosz Milewski, [Category Theory for Programmers](https://bartoszmilewski.com/2014/10/28/category-theory-for-programmers-the-preface/).

    This might be helpful as well:

      • Samson Abramsky and Nikos Tzevelekos, [Introduction to Categories and Categorical Logic](https://arxiv.org/abs/1102.1313).

    Both discuss the Curry-Howard correspondence, which is one of the main links worth understanding.

    I also suggest that you ask hundreds of small, specific questions on this forum.

  • 12.

    Matthew wrote:

    So if partition logic had exponentiation it would just be Boolean logic.

    I don't see why you say that. Are you saying that any poset with this chain of adjunctions must be a Boolean algebra? That's imaginable, but I don't know such a theorem.

    Anyway, the poset of partitions is very far from a Boolean algebra: it's a lattice, but it's not even a distributive lattice! Distributivity of \(\vee\) over \(\wedge\) fails already here:

    ![Hasse diagram of the partition lattice](http://math.ucr.edu/home/baez/mathematical/7_sketches/partition_hasse_diagram.png)
  • 13.
    edited April 2018

    Keith Peterson wrote:

    Partition implication is defined by David P. Ellerman as follows...

    When possible, a very nice way to define implication as an operation

    $$ \to : A \times A \to A $$ in a poset \(A\) is by demanding that it give a right adjoint to \( \wedge\) in the following sense:

    $$ p \wedge q \le r \textrm{ if and only if } p \le (q \to r) $$ for all \(p,q,r \in A\). Or in words, "\(p \wedge q\) implies \(r\)" is true if and only if "\(p\) implies \(q \to r \)".

    Here we have to carefully distinguish implication as a binary relation on \(A\), which is \(\le\), from implication as a binary operation on \(A\), which I'm calling \(\to\) but is also called \( \supset \) or other things.

    However, a poset with all meets and an operation \(\to\) that's right adjoint to \( \wedge \) in the above sense is "cartesian closed", and in this situation binary meets distribute over joins, which fails in the lattice of partitions. So, in fact, the lattice of partitions does not admit an operation \(\to\) as above.
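    As a sanity check (my own, smallest possible example): in the two-element Boolean algebra, ordered \( \textrm{false} \le \textrm{true} \), the operation \(q \to r\) is the classical "not \(q\) or \(r\)", and the adjunction can be verified by brute force:

    ```haskell
    -- Checking  p ∧ q ≤ r  iff  p ≤ (q → r)  in the two-element
    -- Boolean algebra, ordered False ≤ True.

    imp :: Bool -> Bool -> Bool
    imp q r = not q || r

    leq :: Bool -> Bool -> Bool
    leq x y = not x || y   -- False ≤ True

    adjunction :: Bool
    adjunction =
      and [ leq (p && q) r == leq p (imp q r)
          | p <- bs, q <- bs, r <- bs ]
      where bs = [False, True]

    main :: IO ()
    main = print adjunction  -- True
    ```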

  • 14.

    John Baez wrote:

    I also suggest that you ask hundreds of small, specific questions on this forum.

    Well, since partition logic seems to be the topic du jour, here's a thought that's been bouncing around my head. There's a paper I read last year that explains Stålmarck's method, an algorithm for determining if a given propositional formula is a tautology:

    [Sheeran M., Stålmarck G. (1998) A Tutorial on Stålmarck’s Proof Procedure for Propositional Logic. In: Gopalakrishnan G., Windley P. (eds) Formal Methods in Computer-Aided Design. FMCAD 1998. Lecture Notes in Computer Science, vol 1522. Springer, Berlin, Heidelberg](https://doi.org/10.1007/3-540-49519-3_7)

    The proof systems this paper describes look like they're taking a discrete partition on subformulae and inferring additional relationships. When a subformula is found to be in the same equivalence class as \(\top\), we know it's a tautology. So my... it's not exactly a small, specific question, but my thought process is that I'd like to understand the correspondence (if any) between Stålmarck's method and partition logic. Hopefully I'll have some actual questions somewhere down the line.

  • 15.
    edited April 2018

    I don't see why you say that. Are you saying that any poset with this chain of adjunctions must be a Boolean algebra? That's imaginable, but I don't know such a theorem.

    I think so.

    Here's my sketch of the proof: in a Heyting algebra with the left adjoint \(\setminus\) to \(\vee\), first show \(\top \leq \top \setminus\ (\top \setminus\ a) \to a\) from the adjunction laws. After that, along with \(\top \leq a \to b \to a\) and \(\top \leq (a \to b \to c) \to (a \to b) \to a \to c\) and modus ponens \(\top \leq a \to b \Longrightarrow \top \leq a \Longrightarrow \top \leq b\) you have enough to show all of classical propositional logic. Then, using \(\top \leq a \to b \Longleftrightarrow a \leq b\) you can use classical propositional logic to recover the traditional axioms for a Boolean algebra.

  • 16.
    edited April 2018

    ![warning sign](http://math.ucr.edu/home/baez/mathematical/warning_sign.jpg)

    Matthew Doty - That's really interesting! With a little reading I now see this: if we have a poset \(A\) with the whole chain of adjunctions

    $$ \, \setminus \, \dashv \, \vee \, \dashv \Delta \dashv \, \wedge \dashv\, \to $$ with \((A,\wedge,\top)\) giving a cartesian category and \((A,\vee,\bot)\) giving a cocartesian category, then \(A\) becomes both a Heyting algebra and a co-Heyting algebra. Together these give a bi-Heyting algebra. The nLab writes:

    A co-Heyting algebra is a bounded distributive lattice \(L\) equipped with a binary subtraction operation \( \backslash : L\times L\to L \) such that \(x\backslash y\leq z\) iff \(x\leq y\vee z\).

    (Existence of \(\backslash\) amounts to an adjunction \( \backslash y\,\dashv\, y\vee \) and the existence of a left adjoint implies that \(y\vee \) preserves limits \(\wedge\) hence the assumption of distributivity in the definition is redundant and has been put in for emphasis only.)

    A bi-Heyting algebra is a bounded distributive lattice \(L\) that carries a Heyting algebra structure with implication \(\to\) and a co-Heyting algebra structure with subtraction \(\backslash\).

    However, it seems there are bi-Heyting algebras that aren't Boolean algebras! I don't know what rule they lack, but the nLab writes:

    Co-Heyting algebras were initially called Brouwerian algebras. Bi-Heyting algebras were introduced and studied in a series of papers by Cecylia Rauszer in the 1970s who called them semi-Boolean algebras which suggests to view them as a generalization of Boolean algebras.

    and it says a Boolean algebra is a "degenerate example" of a bi-Heyting algebra. So: how do bi-Heyting algebras differ? Time to read the works of Cecylia Rauszer! They seem like a natural concept.
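    As a quick illustration of the co-Heyting half quoted above (my own, degenerate example): in the two-element Boolean algebra the subtraction is \(x \backslash y = x \wedge \neg y\), and the adjunction \(x\backslash y\leq z \iff x\leq y\vee z\) checks out by brute force:

    ```haskell
    -- Checking the co-Heyting adjunction  x \ y ≤ z  iff  x ≤ y ∨ z
    -- in the two-element Boolean algebra, ordered False ≤ True.

    minus :: Bool -> Bool -> Bool
    minus x y = x && not y

    leq :: Bool -> Bool -> Bool
    leq a b = not a || b

    coAdjunction :: Bool
    coAdjunction =
      and [ leq (minus x y) z == leq x (y || z)
          | x <- bs, y <- bs, z <- bs ]
      where bs = [False, True]

    main :: IO ()
    main = print coAdjunction  -- True
    ```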

  • 17.
    edited April 2018

    It says a Boolean algebra is a "degenerate example" of a bi-Heyting algebra. So: how do bi-Heyting algebras differ? Time to read the works of Cecylia Rauszer!

    This made my day.

    I understand now where the attempted proof I sketched out falls over: if we let \( ⌐ a := \top \setminus a\), then we have \( \vdash ⌐ \,⌐ a \to a\), but sadly \( \nvdash a \to ⌐ \, ⌐ a \).

    Cecylia's logic is fascinating. Here's her [1971 paper](http://matwbn.icm.edu.pl/ksiazki/fm/fm83/fm83120.pdf).

    She defines \(\neg a := a \to \bot \), and observes that

    $$ \vdash \neg\, ⌐ (a \to b) \to \neg\, ⌐ a \to \neg \, ⌐ b .$$ She gives a Hilbert calculus with the rule:

    $$ \frac{a}{\neg \, ⌐ a} $$ So \(\neg\, ⌐\) is a [modality](https://plato.stanford.edu/entries/logic-modal/)! This is pretty wild.

    Bi-Heyting algebras seem like a natural concept.

    I would say so. Thank you so much John for teaching this course!

  • 18.
    edited April 2018

    This was written above, but I'm not sure I understand it: $$\wedge \dashv \to.$$ Don't both of these have type \(A \times A \to A\)? How can they be adjoint if they're going in the same direction?

  • 19.

    It's cute shorthand for the two functors \(a \wedge \cdot\) and \(a \to \cdot\).

    I wrote a little puzzle a while ago to implement this particular adjunction in Haskell: https://forum.azimuthproject.org/discussion/comment/16072/#Comment_16072

    The monad associated with this closure operator is used all over the place in parsers and interpreters.

  • 20.
    edited April 2018

    I used this course (and previous years' versions) for the logic side of things: https://www.cs.cmu.edu/~fp/courses/15816-f16/schedule.html . He doesn't even mention categories, iirc. But the logic-to-type-theory elaboration is taking a preorder\({}^{\ast}\) to a category\(^{\ast\ast}\).

    \({}^{\ast}\) Very much preorders and not posets: isomorphic objects are all over the place. In fact I think linear logic might be an equivalence relation. But subsingleton logic can't be so... hmm, need to reread.

    \(^{\ast\ast}\) Somewhat strange categories. They are for sure not "Set-like" in the usual sense; that's the whole point, since the internal language of the "Set-like" ones is a normal lambda calculus. For example, in linear logic there are two pairs of meet/join, and each meet distributes over the join that's not its dual.

  • 21.
    edited April 2018

    Jonathan: it's good you asked about this! It's incredibly important to understand the sense in which \(\to\) is adjoint to \(\wedge\), since it shows that logic is built from adjoints!

    As Matthew tersely indicated, if a poset \(A\) has binary meets, for every element \(b \in A\) there's a monotone function

    $$ \cdot \wedge b : A \to A $$ sending each element \(a \in A\) to \(a \wedge b\). Sometimes all these monotone functions have right adjoints, which we denote by

    $$ b \to \cdot : A \to A $$ These send each \(c \in A\) to \( b \to c \). As I explained in comment #13, this occurs iff

    $$ a \wedge b \le c \textrm{ if and only if } a \le (b \to c) $$ for all \(a,b,c \in A\). Or in words, "\(a \wedge b\) implies \(c\)" if and only if "\(a\) implies \(b \to c \)".

    This is the sense in which the logical connective \(\to\) is the right adjoint to "and".

    Here we have to carefully distinguish implication as a binary relation on \(A\), which is \(\le\), from the [material conditional](https://en.wikipedia.org/wiki/Material_conditional), a binary operation on \(A\), which I'm calling \(\to\) but is also called \( \supset \) or other things.

    \(a \le b\) is a statement "external" to the poset, while \(a \to b\) is "internal" to the poset: it's an element of the poset. A lot of category theory and logic involves switching back and forth between external and internal viewpoints.

  • 22.
    edited April 2018

    As noted by John in #15 above, the implication for partition logic is not defined by the usual adjunction, but it has a cool definition equivalent to the one mentioned by Keith Peterson in #10. The ditset of a partition is just the complement of the partition as a binary equivalence relation, and the interior operation on any subset of \(X^2\) is the complement of the reflexive-symmetric-transitive closure of the complement--which is not a topological interior operation (otherwise partition logic would just be intuitionistic logic). In other words, the ditset of a partition is just the set of ordered pairs from the underlying set that are in different blocks of the partition. Such an ordered pair is called a "distinction" or "dit" of the partition, and the "Yoga" of the logic of partitions relative to the logic of subsets is that a dit of a partition is the correlate of an element of a subset.

    The cooler but equivalent definition of the implication \(Q\Rightarrow P\) is that it is a partition like \(P\), except that whenever a part or block \(B\) of \(P\) is contained in a block of \(Q\), that block of \(P\) is replaced by its discretized version, i.e., all the singletons of the elements of \(B\).

    When the lattice of partitions is presented with the refinement partial order, the top is the discrete partition of all singletons. The relation of the implication operation to the partial order is the same as in a Boolean algebra of subsets: if the implication is equal to the top, then the partial order holds between those two partitions (or subsets, in the BA case). The full definition of all the binary 'Boolean' operations on partitions and the correctness and completeness theorems for a system of semantic tableaus for partition logic are all given in the paper on the logic of partitions cited in my Introduction.
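    Here is a small Haskell sketch of the "cooler" definition (my own encoding of the description above); on the example below it yields the discrete partition, agreeing with the dit-set formula from comment #10:

    ```haskell
    -- Q ⇒ P on partitions of a finite set: each block of P contained in
    -- some block of Q is replaced by singletons; other blocks are kept.
    import Data.List (sort)

    type Partition = [[Int]]

    subsetOf :: [Int] -> [Int] -> Bool
    xs `subsetOf` ys = all (`elem` ys) xs

    implies :: Partition -> Partition -> Partition
    implies q p =
      concat [ if any (b `subsetOf`) q then map (: []) b else [b]
             | b <- p ]

    -- Example on {1,2,3,4}: q = {{1,2},{3,4}}, p = {{1,2},{3},{4}}.
    -- Every block of p sits inside a block of q, so q ⇒ p is discrete.
    main :: IO ()
    main = print (sort (map sort (implies [[1,2],[3,4]] [[1,2],[3],[4]])))
           -- [[1],[2],[3],[4]]
    ```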

  • 23.
    edited April 2018

    Matthew: here's an interesting paper on bi-Heyting algebras:

      • Francesco Ranzato, [A new characterization of complete Heyting and co-Heyting algebras](https://arxiv.org/abs/1504.03919).

    I think it goes like this, though I'm confused about a few points:

    Take any lattice that is complete (any subset has a meet), cocomplete (any subset has a join), and completely distributive (roughly: any meet of joins is a join of meets; see the paper for details). Then it's both a Heyting and co-Heyting algebra.

    Ranzato continues:

    It turns out that the class of completely distributive complete lattices is strictly contained in the class of complete Heyting and co-Heyting algebras. Clearly, any completely distributive lattice is a complete Heyting and co-Heyting algebra. On the other hand, this containment turns out to be strict, as shown by the following counterexample.

    Part of what's confusing is that nobody should ever say "complete Heyting and co-Heyting algebra" without making it clear, at least the first time, whether "complete" modifies both "Heyting" and "co-Heyting", or just one.

    Another part of what's confusing is that the counterexample is a Boolean algebra that's not a complete Boolean algebra!

  • 24.
    edited April 2018

    To clarify #21:

    As Matthew tersely indicated, if a poset \(A\) has binary meets, for each element \(a \in A\) there's a monotone function $$ a \wedge \cdot : A \to A $$ sending each element \( b \in A \) to \(a \wedge b\). Sometimes all these monotone functions have right adjoints, which we denote by $$ a \to \cdot : A \to A $$ which send each \(c \in A\) to \( a \to c \). As I explained in comment #13, this occurs iff $$ a \wedge b \le c \textrm{ if and only if } a \le (b \to c) $$ for all \(a,b,c \in A\). Or in words, "\(a \wedge b\) implies \(c\)" if and only if "\(a\) implies \(b \to c \)".

    The latter condition actually shows that \((\wedge b)\) and \((b \to)\) are adjoint, correct? (Of course, \(\wedge\) is commutative.) We didn't actually use the \((a \wedge)\) and \((a \to)\) introduced prior.

    It seems like the chain \(\Delta \dashv \, \wedge \dashv\, \to\) is a little bit fictitious. By which I mean, \(\wedge\) is simultaneously denoting two different species: a binary operator and a family of unary operators, the former of which might not have a right adjoint, and the latter of which might not have left adjoints.

  • 25.
    edited April 2018

    Jonathan wrote approximately:

    The latter condition actually shows that \( \cdot \wedge b \) and \(b \to \cdot\) are adjoint, correct?

    Right.

    (Of course, \(\wedge\) is commutative.)

    Right, so \(\cdot \wedge b = b \wedge \cdot \). (Mathematicians use \(\cdot\) for a slot that's left open to put something in when describing functions.)

    We didn't actually use the \((a \wedge)\) and \((a \to)\) introduced prior.

    I introduced \(a\) in a phrase containing "for each element \(a \in A\)", so it applies even if that element happens to be called \(b\). So, everything is fine: I merely failed to use the variable names in a way that would make the explanation easy to follow. I'll go back and smooth it down.

  • 26.
    edited April 2018

    Right -- I just wanted to clarify that if I were to break that condition down into the two adjoints, those would be relative to \(b\), rather than \(a\).

  • 27.
    edited April 2018

    Right. I've fixed my comment to make that clearer.

    I found this absolutely mind-blowing when I first learned it: in rough terms,

    "implies" is adjoint to "and"!!!

    And when combined with all the other adjoints, including those involving \(\exists\) and \(\forall\), we get the feeling that logic is all about adjunctions!


    It seems like the chain \(\Delta \dashv \, \wedge \dashv\, \to\) is a little bit fictitious. By which I mean, \(\wedge\) is simultaneously denoting two different species: a binary operator and a family of unary operators, the former of which might not have a right adjoint, and the latter of which might not have left adjoints.

    True. That chain is a sloppy way of talking about something... but that thing is real and amazing.
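To see the \(\exists\)/\(\forall\) version of this in miniature, here's a minimal Python sketch (the sets and the function \(f\) below are made up for illustration): for any \(f : X \to Y\), the preimage map \(f^* : PY \to PX\) has the direct image \(\exists_f\) as left adjoint and \(\forall_f\) as right adjoint.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

X, Y = {1, 2, 3}, {'a', 'b'}
f = {1: 'a', 2: 'a', 3: 'b'}        # an arbitrary function f : X -> Y

def preimage(T):                    # f* : PY -> PX
    return frozenset(x for x in X if f[x] in T)

def exists_f(S):                    # left adjoint of f*: the direct image
    return frozenset(f[x] for x in S)

def forall_f(S):                    # right adjoint of f*: {y : f^{-1}(y) ⊆ S}
    return frozenset(y for y in Y if preimage(frozenset([y])) <= S)

for S in powerset(X):
    for T in powerset(Y):
        # ∃_f ⊣ f*:  ∃_f(S) ⊆ T  iff  S ⊆ f*(T)
        assert (exists_f(S) <= T) == (S <= preimage(T))
        # f* ⊣ ∀_f:  f*(T) ⊆ S  iff  T ⊆ ∀_f(S)
        assert (preimage(T) <= S) == (T <= forall_f(S))

print("∃_f ⊣ f* ⊣ ∀_f: checked on this example")
```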

  • 28.

    @John

    From Lecture 17:

    A similar argument shows that joins are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function

$$ \vee : A \to A \times A $$ that's the *right* adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check.

Is there something mixed up here? From the flow of everything it seems like joins should be meets. And also, if \( \wedge \) is going to be the right adjoint of \(\Delta\), shouldn't it go in the opposite direction of \( \Delta \), i.e. \( \wedge : A \times A \to A \)? Or am I missing something here?

  • 29.

    You're right, Michael - good catch! There were 3 nasty mistakes in the passage you quoted. I think I've fixed them all. Please check.

  • 30.
    edited April 2018

    Hi Owen,

It's amazing that you came up with your insights in [#1](https://forum.azimuthproject.org/discussion/comment/17342/#Comment_17342) by yourself. That's definitely Formal Concept Analysis at work.

Regarding the Galois connection in [#5](https://forum.azimuthproject.org/discussion/comment/17352/#Comment_17352) that John noted, a small but formal exposition with proof, example and diagrams can be found in Amanda Bower, [Category theory and Galois theory](http://math.ucr.edu/home/baez/qg-fall2015/amanda_bower_category_theory_and_galois_theory.pdf), [Rose-Hulman Undergraduate Mathematics Journal](https://www.rose-hulman.edu/mathjournal/v14n1.php) 14 (2013), 134–142.

  • 31.

    Re "joins are left adjoints, meets are right adjoints"...

    There's another aspect to this that hasn't been mentioned yet, but is related to Puzzle 46.

    Suppose \(A\) is a preorder. The subsets of \(A\) form a poset \(PA\).

    There is a natural monotone function \(A\rightarrow PA\) sending \(a\in A\) to \(\operatorname{\downarrow}a = \{x\in A : x\le a\}\).

    Now given any \(S\in PA\), \(S\subseteq{\operatorname{\downarrow}a}\) iff \(a\) is an upper bound of \(S\).

And if the join of \(S\) exists, \(a\) is an upper bound of \(S\) iff \(\bigvee S \le a\).

So if \(A\) has all joins, \(\bigvee S \le a\) iff \(S\subseteq{\operatorname{\downarrow}a}\),

i.e. \(\bigvee\) is a left adjoint to \(\downarrow\).
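Here's a quick finite check of that adjunction in Python, using the divisors of 12 ordered by divisibility as a stand-in poset (an invented example; there the join of a subset is the lcm of its elements, and `math.lcm` needs Python 3.9+):

```python
from math import lcm
from itertools import combinations

# Divisors of 12 under divisibility: x ≤ a means x divides a.
A = [1, 2, 3, 4, 6, 12]
le = lambda x, a: a % x == 0

def down(a):                        # ↓a = {x ∈ A : x ≤ a}
    return {x for x in A if le(x, a)}

def join(S):                        # ⋁S = lcm of S; ⋁∅ = 1, the bottom
    return lcm(*S) if S else 1

subsets = [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]

# The adjunction: ⋁S ≤ a  iff  S ⊆ ↓a.
for S in subsets:
    for a in A:
        assert le(join(S), a) == (S <= down(a))

print("⋁S ≤ a iff S ⊆ ↓a: checked for all S and a")
```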

  • 32.

Addendum: The situation for meets is a bit more complicated. \(\downarrow\) preserves meets but not joins, so it doesn't have a right adjoint. Instead we have to reverse the ordering on \(PA\); then we get \(\operatorname{\uparrow}a \supseteq S\) iff \(a \le \bigwedge S\), and so \(\bigwedge\) is right adjoint to \(\uparrow\).
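And the meet half, with the ordering on \(PA\) reversed as Anindya says, checks out the same way (same invented divisibility example as above; `math.gcd` with several arguments also needs Python 3.9+):

```python
from math import gcd
from itertools import combinations

A = [1, 2, 3, 4, 6, 12]             # divisors of 12 under divisibility
le = lambda x, a: a % x == 0

def up(a):                          # ↑a = {x ∈ A : a ≤ x}
    return {x for x in A if le(a, x)}

def meet(S):                        # ⋀S = gcd of S; ⋀∅ = 12, the top
    return gcd(*S) if S else 12

subsets = [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]

# With PA ordered by ⊇:  ↑a ⊇ S  iff  a ≤ ⋀S,
# i.e. ⋀ is right adjoint to ↑.
for S in subsets:
    for a in A:
        assert (up(a) >= S) == le(a, meet(S))

print("↑a ⊇ S iff a ≤ ⋀S: checked for all S and a")
```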

  • 33.
    edited April 2018

    Anindya - you gave a very nice answer to Puzzle 46 in #31 and #32. Congratulations!

There's another sort of answer where we look at a poset \(A\) in which every \(S\)-tuple of elements has a join (for some fixed set \(S\)). The join then defines a function

    $$ \bigvee: A^S \to A $$ This is a monotone function that's the left adjoint of the "generalized diagonal"

    $$ \Delta : A \to A^S $$
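Here's a brute-force check of that statement for \(|S| = 3\), again on the divisors of 12 (an invented example): with the componentwise order on \(A^S\), we have \(\bigvee t \le a\) iff \(t \le \Delta(a)\).

```python
from math import lcm
from itertools import product

A = [1, 2, 3, 4, 6, 12]             # divisors of 12 under divisibility
le = lambda x, a: a % x == 0

# ⋁ : A^S -> A (here |S| = 3, with lcm as the join) is left adjoint
# to the generalized diagonal Δ : A -> A^S, Δ(a) = (a, a, a):
#   ⋁t ≤ a  iff  t ≤ Δ(a) componentwise.
for t in product(A, repeat=3):
    for a in A:
        assert le(lcm(*t), a) == all(le(x, a) for x in t)

print("⋁t ≤ a iff t ≤ Δ(a): checked for all triples t and all a")
```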

  • 34.

I've been doing some bookworm work, and have found references detailing the rendering of the Galois Theory and Algebraic Geometry themes of [#5](https://forum.azimuthproject.org/discussion/comment/17352/#Comment_17352) and [#6](https://forum.azimuthproject.org/discussion/comment/17368/#Comment_17368) in Formal Concept Analysis terms in technical detail.

For [#5](https://forum.azimuthproject.org/discussion/comment/17352/#Comment_17352) (Galois Theory): at this [conference](http://cla2011.loria.fr/), under [get all presentations](http://cla2011.loria.fr/presentations/all-presentations-v2.zip), there are the slides of a talk by Marcel Erné covering it. It's nice to compare the diagram there with the example in the paper I cited in [#30](https://forum.azimuthproject.org/discussion/comment/17525/#Comment_17525).

For [#6](https://forum.azimuthproject.org/discussion/comment/17368/#Comment_17368), there is: Becker, T. (2005), Features of Interaction Between Formal Concept Analysis and Algebraic Geometry, in: Ganter B., Stumme G., Wille R. (eds), *Formal Concept Analysis*, Lecture Notes in Computer Science, vol. 3626, Springer, Berlin, Heidelberg. [DOI: 10.1007/11528784_3](https://doi.org/10.1007/11528784_3).

  • 35.
    edited May 2018

![Define](http://aether.co.kr/images/grand_synthesis_define.svg)

![Grand Synthesis](http://aether.co.kr/images/grand_synthesis.svg)

Not sure if this is what Prof. Baez is saying when he says:

    Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints.

    Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints.

    Joins are left adjoints, and meets are right adjoints.

    Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down.

But I tried to connect the dots... It almost seems too good to be true.

  • 36.

Just to add to the comments on Formal Concept Analysis: there's further discussion after my post on it at the [n-Category Cafe](https://golem.ph.utexas.edu/category/2013/09/formal_concept_analysis.html). My student Jonathan Elliott wrote [his thesis](http://etheses.whiterose.ac.uk/18342/) on doing this with 'fuzzy truth values', which in this categorical context means dealing not with posets, which are categories enriched over true and false, but with fuzzy posets, which are categories enriched over some 'fuzzy truth values' smeared between true and false. Hopefully there will be a paper soon and I can explain more then...

  • 37.
    edited May 2018

That discussion is very interesting and helped me to enhance my understanding of "concept" lattices. It led me to think of the [center](https://ncatlab.org/nlab/show/fixed+point+of+an+adjunction) of an adjunction as the right generalization of the concept lattice when moving from Galois connections to adjunctions in general. I'm interested in this story.

  • 38.
    edited May 2018

**Images for understanding Chapter 1**

I started the course late, and have just finished chapter 1. I've put in a bunch of work creating mnemonic images to help me understand the material, which I've just put online at my [personal page](http://www.azimuthproject.org/azimuth/show/Pete+Morcos) here at the wiki.

    I hope some of these images may be useful to folks still working through chapter 1, or needing a review. A few of the results go beyond the book & lectures, so there may be errors!

Note to John: I tried to follow the wiki instructions, but the "lab elves" may want to take a look at what I've done. In addition to my own wiki page, I created discussion threads for each topic in the [Applied Category Theory Discussion Groups](https://forum.azimuthproject.org/categories/discussion-groups) forum category. My apologies if that's too many threads for one person's project. I didn't see a way I could sequester them into a custom category, since I'm not an admin.

    EDIT: On further thought, I think a separate forum category is a bad idea. Threads will drop out of sight eventually, but a new category will remain visible in the category list long after I stop posting. I'll just continue segregating the threads by using a prefix on the thread titles.

    For convenience, here's a copy of the table of contents from my personal page.


    I wanted more formatting control than available here in the forums, so as an experiment each post is a large image. I'm unsure how this experiment will work out.

    These do not form a comprehensive tutorial. I only picked topics where I felt an image would help me understand, so not everything important is covered.

- [General feedback thread](https://forum.azimuthproject.org/discussion/2183/petepics-feedback)
1. **[Map diagrams](https://forum.azimuthproject.org/discussion/2181/petepics-chapter-1-map-diagrams)** - Visual refresher on the terms *injective, surjective, single-valued, total, function, relation,* and *graph*.
2. **[Confusing order terminology](https://forum.azimuthproject.org/discussion/2182/petepics-chapter-1-order-terminology-confusion)** - The terms and symbols used to describe orders and adjunctions were hard for me to absorb. The reason seems to be that they use incompatible mnemonics. Once I figured out the exact inconsistencies, I found it much easier to keep everything straight.
3. **[The two forward images](https://forum.azimuthproject.org/discussion/2184/petepics-chapter-1-the-two-forward-images)** - Pictures and mnemonics for the adjoints to the preimage \(h^*\), which are \(h_!\) and \(h_*\).
4. **[Monotone maps](https://forum.azimuthproject.org/discussion/2185/petepics-chapter-1-monotone-maps)** - The definition is a bottom-up one, looking at individual elements. I found it useful to draw pictures of a top-down view, looking at entire order relations.
5. **Iterating the Galois connection maps** - Following the maps more than once yields helpful little formulas and some insight. Much of this is in the book but a few things are different. Lots of pictures.
   • **[The 1-hop constraint](https://forum.azimuthproject.org/discussion/2186/petepics-chapter-1-iterated-galois-maps-the-1-hop-constraint)** - The basic definition of an adjunction. I then go on a long detour, viewing the definition in terms of the action on entire order relations at once.
   • **[The 2-hop inequality](https://forum.azimuthproject.org/discussion/2187/petepics-chapter-1-iterated-galois-maps-the-2-hop-inequality)** - I find this one very helpful when looking at maps in diagram form.
   • **[The 3-hop equivalence](https://forum.azimuthproject.org/discussion/2188/petepics-chapter-1-iterated-galois-maps-the-3-hop-equivalence)** - I don't think this is in the book, and it helped me get a better intuition of Galois connections.
   • **[The 4-hop fixed point](https://forum.azimuthproject.org/discussion/2189/petepics-chapter-1-iterated-galois-maps-the-4-hop-fixed-point)** - Apparently every Galois connection has a bijection embedded inside it. This wasn't clear to me from the book, so working through the pictures was quite helpful.
6. **[Non-bijectiveness](https://forum.azimuthproject.org/discussion/2190/petepics-chapter-1-galois-connection-non-bijectiveness)** - Galois connections are interesting because they're not-quite bijections. I worked out a few small results (with pictures) to explore how exactly that works.
7. **[A big mnemonic image for Galois connections](https://forum.azimuthproject.org/discussion/2191/petepics-chapter-1-a-big-mnemonic-image-for-galois-connections)** - All the math and pictures above packed into one master image. This has become my starting point whenever thinking about Galois connections. The topic was quite confusing to me at first, but this image has helped me tremendously. However, the image relies on terms and visual conventions defined in the earlier posts, so you'll need to read them first.

    EDIT: it seems that some of what I drew forms solutions to Owen's puzzles in #2 above, except that I invented a lot of terms so the pictures won't match the official terminology.

  • 39.
    edited May 2018

    Thanks, Pete! :D

    Edit: Maybe we can get someone to reorganize your picture threads in their own subforum like they deserve.

  • 40.

[Pete](https://forum.azimuthproject.org/discussion/comment/18610/#Comment_18610), I am finding your diagrams to be very useful. And yes, one more vote for a subforum!

  • 41.

Owen, your example in #1 above is proving to be extremely helpful for me. I am using it to better understand and remember many of the concepts developed in Chapter 1. Outstanding! Thank you!!
