
Okay: I've told you what a Galois connection is. But now it's time to explain why they matter. This will take much longer - and be much more fun.

Galois connections do something really cool: they tell you *the best possible way to recover data that can't be recovered*.

More precisely, they tell you *the best approximation to reversing a computation that can't be reversed.*

Someone hands you the output of some computation, and asks you what the input was. Sometimes there's a unique right answer. But sometimes there's more than one answer, or none! That's when your job gets hard. In fact, impossible! But don't let that stop you.

Suppose we have a function between sets, \(f : A \to B\) . We say a function \(g: B \to A \) is the **inverse** of \(f\) if

$$ g(f(a)) = a \textrm{ for all } a \in A \quad \textrm{ and } \quad f(g(b)) = b \textrm{ for all } b \in B $$ Another equivalent way to say this is that

$$ f(a) = b \textrm{ if and only if } a = g(b) $$ for all \(a \in A\) and \(b \in B\).

So, the idea is that \(g\) undoes \(f\). For example, if \(A = B = \mathbb{R}\) is the set of real numbers, and \(f\) doubles every number, then \(f\) has an inverse \(g\), which halves every number.

But what if \(A = B = \mathbb{N}\) is the set of *natural* numbers, and \(f\) doubles every natural number? This function has no inverse!

So, if I say "\(2a = 4\); tell me \(a\)" you can say \(a = 2\). But if I say "\(2a = 3\); tell me \(a\)" you're stuck.

But you can still try to give me a "best approximation" to the nonexistent natural number \(a\) with \(2 a = 3\).

"Best" in what sense? We could look for the number \(a\) that makes \(2a\) as close as possible to 3. There are two equally good options: \(a = 1\) and \(a = 2\). Here we are using the usual distance function, or metric, on \(\mathbb{N}\), which says that the distance between \(x\) and \(y\) is \(|x-y|\).

But we're not talking about distance functions in this class now! We're talking about *preorders*. Can we define a "best approximation" using just the relation \(\le\) on \(\mathbb{N}\)?

Yes! But we can do it in two ways!

**Best approximation from below.** Find the largest possible \(a \in \mathbb{N}\) such that \(2a \le 3\). Answer: \(a = 1\).

**Best approximation from above.** Find the smallest possible \(a \in \mathbb{N}\) such that \(3 \le 2a\). Answer: \(a = 2\).
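These two searches are easy to mechanize. Here is a minimal Python sketch (the helper names `best_below` and `best_above` are mine, not from the lecture) that finds both approximations by brute force over a finite stand-in for \(\mathbb{N}\):

```python
def best_below(b, f, candidates):
    """Largest a among candidates with f(a) <= b, if any."""
    good = [a for a in candidates if f(a) <= b]
    return max(good) if good else None

def best_above(b, f, candidates):
    """Smallest a among candidates with b <= f(a), if any."""
    good = [a for a in candidates if b <= f(a)]
    return min(good) if good else None

double = lambda a: 2 * a
naturals = range(100)  # finite stand-in for the natural numbers

print(best_below(3, double, naturals))  # prints 1
print(best_above(3, double, naturals))  # prints 2
```

For a monotone \(f\) these brute-force searches compute exactly the right and left adjoint values, when they exist.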

Okay, now work this out more generally:

**Puzzle 14.** Find the function \(g : \mathbb{N} \to \mathbb{N}\) such that \(g(b) \) is the largest possible natural number \(a\) with \(2a \le b\).

**Puzzle 15.** Find the function \(g : \mathbb{N} \to \mathbb{N}\) such that \(g(b)\) is the smallest possible natural number \(a\) with \(b \le 2a\).

Now think about Lecture 4 and the puzzles there! I'll copy them here with notation that better matches what I'm using now:

**Puzzle 12.** Find a right adjoint for the function \(f: \mathbb{N} \to \mathbb{N}\) that doubles natural numbers: that is, a function \(g : \mathbb{N} \to \mathbb{N}\) with

$$ f(a) \le b \textrm{ if and only if } a \le g(b) $$ for all \(a,b \in \mathbb{N}\).

**Puzzle 13.** Find a left adjoint for the same function \(f\): that is, a function \(g : \mathbb{N} \to \mathbb{N}\) with

$$ g(b) \le a \textrm{ if and only if } b \le f(a) $$ for all \(a,b \in \mathbb{N}\). Next:

**Puzzle 16.** What's going on here? What's the pattern you see, and why is it working this way?

## Comments

**Puzzle 14.** Checking some concrete values: \(2(1) \leq 3\), \(2(2) \not\leq 3\), \(2(2) \leq 5\), \(2(3) \not\leq 5\). These suggest the function \(g(b) = \lfloor b/2 \rfloor\) is our maximum. More formally, we want \(g(b) = \max\{ a \in \mathbb{N} : 2a \leq b \}\). We need to show it's in our set, and that any other element of our set is smaller. First, \(2\lfloor b/2 \rfloor \leq b\), so \(g(b) \in \{ a : 2a \leq b \}\). Second, division by 2 and flooring are both monotonic, so if \(a\) is in our set we have $$ 2a \leq b \Rightarrow a \leq b/2 \Rightarrow \lfloor a \rfloor \leq \lfloor b/2 \rfloor \Rightarrow a \leq \lfloor b/2 \rfloor. $$ Thus \(\lfloor b/2 \rfloor\) is the required maximum.

**Puzzle 15.** This argument is analogous, except with \(\lceil b/2 \rceil\). I would type it out, but I don't have time currently (famous last words).

**Puzzle 16.** I'm going to give an observation, but my understanding of this isn't complete. Given the definitions for adjunctions introduced in this lecture, it's clear they are unique (**Edit:** this is true for the given example, but isn't true for every preorder; I shouldn't have said this was clear. And because the Galois connection definition is well defined for any preorder, my suggestion won't generalize to a characterization of preorders by way of uniqueness!). This means the definition in Puzzle 12 is equivalent to the max definition, so we can prove properties from one version using the other. I'll give the direction I've currently figured out. Suppose \(g\) is defined as in Puzzle 14. Because all our functions are monotonic we have $$f(a) \leq b \Rightarrow g(f(a)) \leq g(b) \Rightarrow a \leq g(b)$$ and $$a \leq g(b) \Rightarrow f(a) \leq f(g(b)) \Rightarrow f(a) \leq b,$$ because \(f(g(b)) \leq b\) by definition of \(g\) (it's the largest element \(x\) such that \(f(x) \leq b\)).

It should be possible to show these definitions are equivalent to maximizing in the sense defined in Puzzle 14.
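As a quick sanity check on this closed form (a sketch of mine, not part of the comment above), one can compare \(\lfloor b/2 \rfloor\) against a brute-force maximum:

```python
def g_floor(b):
    # the claimed answer to Puzzle 14: floor(b/2)
    return b // 2

for b in range(50):
    # brute-force the largest a with 2a <= b (searching a <= b suffices)
    brute = max(a for a in range(b + 1) if 2 * a <= b)
    assert brute == g_floor(b)
print("floor(b/2) is the largest a with 2a <= b, for all b < 50")
```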


> **Puzzle 16.** What's going on here? What's the pattern you see, and why is it working this way?

I'm not sure if this is the answer you want, John...

I want to expand on Alex Kreitzberg's observation. He is touching on an _alternate definition_ of a Galois pair \(f \dashv g\):

$$ f \text{ and } g \text{ are monotone functions and } f(g(b)) \leq b \text{ and } a \leq g(f(a)) $$

This is equivalent to the definition Fong, Spivak, and you yourself use.

Moreover, if a monotone function has a left (or right) Galois adjoint, it is unique.
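For the doubling/halving example on \(\mathbb{N}\), both inequalities in this alternate definition are easy to verify numerically (my sketch, checking a finite sample):

```python
f = lambda a: 2 * a   # doubling
g = lambda b: b // 2  # floor(b/2), its right adjoint

# f(g(b)) <= b: doubling the rounded-down half never overshoots
assert all(f(g(b)) <= b for b in range(1000))
# a <= g(f(a)): here g(f(a)) = a exactly, so the inequality holds
assert all(a <= g(f(a)) for a in range(1000))
print("both inequalities hold on the sample")
```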


Here's my go at Puzzle 16. Let's say we have two monotone functions \(f : A\to B\) and \(g:B\to A\) between preorders, and we're wondering whether the following three conditions on \(f\) and \(g\) are equivalent: $$\text{For all }a\text{ and }b,\ f(a)\leq b \iff a \leq g(b).$$ $$\text{For all }a,\ f(a)\text{ is the smallest $b$ with }a\leq g(b).$$ $$\text{For all }b,\ g(b)\text{ is the largest $a$ with }f(a)\leq b.$$ We'll show that the first and second are equivalent. First, note that since \(g\) is monotone, for any choice of \(a\) the set of all \(b\) such that \(a\leq g(b)\) is an upper set of \(B\). Therefore saying that \(f(a)\) is the smallest \(b\) with \(a\leq g(b)\) is saying that this upper set *is* the set of all elements of \(B\) at least as large as \(f(a)\). In other words, \(a\leq g(b)\) if and only if \(b\) is in this upper set, if and only if \(b \geq f(a)\).

The equivalence between the first and third conditions is similar. I was surprised that you didn't need both the second and third to get something equivalent to the first! In fact, the second and third are already equivalent to each other.


Some excellent responses! Just one small issue, coming from some mistakes in _Seven Sketches_. Everything Matthew and Owen just said is true for posets, but not for preorders.

Remember that a **preorder** is a set with a binary relation \(\le\) that's reflexive and transitive. A **poset** is a preorder where \(x \le y\) and \(y \le x\) imply \(x = y\).

The left or right adjoint of a monotone function between posets is unique if it exists. This need not be true for preorders.

The issue can be seen clearly in the phrase "the smallest \(b\) with \(a \le g(b)\)". In a poset, such a \(b\) is unique if it exists. In a preorder, that's not true, since we could have \(b \le b'\) and \(b' \le b\) yet still \(b \ne b'\).

Adding to the confusion, _Seven Sketches_ uses "poset" to mean "preorder", and "skeletal poset" to mean "poset". So, when the authors say the left or right adjoint of a monotone function between posets is unique if it exists, that's true with the _usual_ definition of poset, but not for _their_ definition.

Luckily, I have convinced the authors to straighten this out. Here's what I wrote in an email to Brendan Fong. He just replied saying that he and David are fixing the mistakes I describe, and switching to the standard definition of "poset".

——————————

Someone in the course pointed out something that's more than a typo. If you're going to use "poset" to mean "preorder" (bad, bad, bad) then you can't talk about "the" meet or join of two elements in a poset, because even when it exists it's not unique.

Of course you can use "the" in the sophisticated way, meaning "unique up to canonical isomorphism"... but that seems a bit fancy for your intended audience, and it at least would need to be explained.

You guys just say things like:

> Let P be a poset, and let A be a subset. We say that an element is the meet of A if ...

You could fix this by changing "the" to "a", but every equation you write down involving meets and joins is wrong unless you restrict to the "skeletal poset" case. For example, Example 1.62:

> In any poset P, we have \(p \vee p = p \wedge p = p\).

More importantly, Prop. 1.84: right adjoints preserve meets. The equations here are really just isomorphisms!

This then makes your statement of the adjoint functor theorem for posets incorrect.

I think this is the best solution:

1. Call preorders "preorders" and call posets "posets". Do not breed a crew of students who use these words in nonstandard ways! You won't breed enough of them to take over the world, so all you will accomplish is making them less able to communicate with other people. And for what: just because you don't like the sound of the word "preorder"?

2. Define meets and joins for preorders, but point out that they're unique for posets, and say this makes things a bit less messy.

3. State the adjoint functor theorem for posets... actual posets!


[John Baez #4](https://forum.azimuthproject.org/discussion/comment/16344/#Comment_16344):

> Some excellent responses! Just one small issue, coming from some mistakes in Seven Sketches. Everything Matthew and Owen just said is true for posets, but not for preorders.

> Remember that a **preorder** is a set with a binary relation \(\le\) that's reflexive and transitive. A **poset** is a preorder where \(x \leq y\) and \(y \leq x\) imply \(x = y\).

Okay... but I don't see how my alternative definition uses antisymmetry (i.e. the rule that \(x \leq y\) and \(y \leq x\) imply \(x = y\)).

Here's my attempted proof:

**Lemma.** Assume that \(f\) and \(g\) are monotone and that for all \(a\) and \(b\) we have \(f(g(b))\leq b\) and \(a \leq g(f(a))\). We want to show \(f \dashv g\), which is to say that for all \(a\) and \(b\):

$$ f(a)\leq b\text{ if and only if } a \leq g(b). $$

**Proof.** I hope it's okay if I only show \(f(a)\leq b \Longrightarrow a \leq g(b)\), since the other direction is quite similar. Assume \(f(a)\leq b\). Then by monotonicity of \(g\) we have \(g(f(a)) \leq g(b)\). Since \(a \leq g(f(a))\) by assumption, we have \(a \leq g(b)\) by transitivity. \(\Box\)

Since antisymmetry wasn't used, I don't see why this proof doesn't apply to preorders...? I greatly appreciate you taking the time to help me out.


Matthew: I was being pretty vague when I wrote

> Everything Matthew and Owen just said is true for posets, but not for preorders.

I didn't mean _nothing_ you said was true for preorders. For example, I think the alternative characterization of Galois connections works fine for preorders. Looking over what you said, this is the only thing that I'm sure is false for preorders:

> Moreover, if a monotone function has a left (or right) Galois adjoint it is unique.

I tried to hint at the reason why:

> The left or right adjoint of a monotone function between posets is unique if it exists. This need not be true for preorders.

Do you see how to cook up a monotone function between preorders that has more than one left adjoint?


Yeah, I think I can see one - consider \(\mathbb{Z} ∐ \mathbb{Z}\). Let \(u : \mathbb{Z} ∐ \mathbb{Z} \to \mathbb{Z} \) be the forgetful functor that takes \(x_l \mapsto x\) and \(x_r \mapsto x\). Define the preorder on \(\mathbb{Z} ∐ \mathbb{Z}\) to be \(a \leq b\) if and only if \(u(a) \leq_{\mathbb{Z}} u(b)\).

Now consider the endomorphism \(f : \mathbb{Z} ∐ \mathbb{Z} \to \mathbb{Z} ∐ \mathbb{Z}\) where:

$$ x_l \mapsto (x+1)_l \\ x_r \mapsto (x+1)_r $$ I can see two left/right adjoints for this.

First, this function is invertible, so one left/right adjoint is \(f^{-1}\). Explicitly, this maps:

$$ x_l \mapsto (x-1)_l \\ x_r \mapsto (x-1)_r $$ There is also another left/right adjoint \(s\) that switches the sides of the coproduct:

$$ x_l \mapsto (x-1)_r \\ x_r \mapsto (x-1)_l $$ There are in fact an infinite number of left/right adjoints to \(f\). Consider any partition \(P\) on \(\mathbb{Z} ∐ \mathbb{Z}\). For each \(p \in P\), we can map the elements using either \(f^{-1}\) or \(s\). The resulting map is another left/right adjoint.

——————————

I am sure there is a simpler example.

Thank you again for taking the time to help me get clear on the difference between adjoints for preorders and adjoints for posets!
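This example can be checked mechanically on a finite window of \(\mathbb{Z} ∐ \mathbb{Z}\) (a sketch of mine; I encode elements as pairs `(side, x)` and test the defining iff directly):

```python
# Elements of Z ⊔ Z encoded as (side, x); the preorder ignores the side.
def leq(a, b):
    return a[1] <= b[1]

f     = lambda a: (a[0], a[1] + 1)      # shift up, keep the side
f_inv = lambda a: (a[0], a[1] - 1)      # shift down, keep the side
s     = lambda a: (1 - a[0], a[1] - 1)  # shift down, swap the side

window = [(side, x) for side in (0, 1) for x in range(-10, 10)]

def is_right_adjoint(g):
    # f(a) <= b  iff  a <= g(b), for all a, b in the window
    return all(leq(f(a), b) == leq(a, g(b)) for a in window for b in window)

print(is_right_adjoint(f_inv))  # prints True
print(is_right_adjoint(s))      # prints True: a genuinely different right adjoint
```

Since `leq` never looks at the side, any side-swapping variant of \(f^{-1}\) passes the same check, which is exactly the non-uniqueness being described.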


Great! Here's a fun example.

Let \(A\) be any set, and make it into a preorder by defining every element to be less than or equal to every other element. Do the same for some set \(B\). Then any function \(f : A \to B\) is monotone, because we have \(f(a) \le f(a')\) no matter what \(a,a' \in A\) are. Similarly any function \(g : B \to A\) is monotone. And no matter what \(f\) and \(g\) are, \(g\) will be a right adjoint to \(f\), since

$$ f(a) \le b \textrm{ if and only if } a \le g(b) $$ (both are always true). Similarly, \(g\) will always be a left adjoint to \(f\).

This shows that when we make our preorders as far from posets as possible, right and left adjoints become ridiculously non-unique.
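Here is a tiny sketch of this degenerate case (the sets and functions are my own arbitrary choices): with the codiscrete order, the adjunction condition holds no matter what \(f\) and \(g\) do, because both sides of the iff are always true.

```python
A = ["a1", "a2", "a3"]
B = ["b1", "b2"]

def leq(x, y):
    return True  # codiscrete preorder: every element <= every other element

# completely arbitrary functions, written as dicts
f = {"a1": "b2", "a2": "b1", "a3": "b2"}
g = {"b1": "a3", "b2": "a1"}

# f(a) <= b iff a <= g(b): trivially true on both sides
print(all(leq(f[a], b) == leq(a, g[b]) for a in A for b in B))  # prints True
```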


Inverse functions as a special case of adjoints: if \(A\) and \(B\) be preorders, where the ordering is the identity relation, then \(f: A \rightarrow B\) and \(g: B \rightarrow A\) are adjoint iff they are inverse functions.


[John Baez #6](https://forum.azimuthproject.org/discussion/comment/16420/#Comment_16420) wrote:

> For example, I think the alternative characterization of Galois connections works fine for preorders.

I actually see 4 equivalent definitions of a Galois connection \(f \dashv g\) for two preorders \(\langle A, \sqsubseteq\rangle\) and \(\langle B, \preceq\rangle\):

(1) \(f(a) \preceq b\) if and only if \(a \sqsubseteq g(b)\)

(2) \(f\) and \(g\) are monotone and \(f(g(b)) \preceq b\) and \(a \sqsubseteq g(f(a))\)

(3) \(f\) is monotone and \(f(g(b)) \preceq b\) and \(f(a) \preceq b \Longrightarrow a \sqsubseteq g(b)\)

(4) \(g\) is monotone and \(a \sqsubseteq g(f(a))\) and \(a \sqsubseteq g(b) \Longrightarrow f(a) \preceq b\)

(3) and (4) are based on Owen Biesel's observation.
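None of (1)-(4) is hard to test on a finite example. Here is a sketch of mine checking all four for the doubling/halving adjunction on an initial segment of \(\mathbb{N}\) (the segment boundary is harmless, since the comparisons still make sense as integers):

```python
N = range(100)
f = lambda a: 2 * a   # candidate left adjoint
g = lambda b: b // 2  # candidate right adjoint

mono = lambda h: all(h(x) <= h(y) for x in N for y in N if x <= y)

d1 = all((f(a) <= b) == (a <= g(b)) for a in N for b in N)
d2 = mono(f) and mono(g) and all(f(g(b)) <= b for b in N) \
     and all(a <= g(f(a)) for a in N)
d3 = mono(f) and all(f(g(b)) <= b for b in N) \
     and all(a <= g(b) for a in N for b in N if f(a) <= b)
d4 = mono(g) and all(a <= g(f(a)) for a in N) \
     and all(f(a) <= b for a in N for b in N if a <= g(b))
print(d1, d2, d3, d4)  # prints True True True True
```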

It looks like these definitions are pretty general - I think you can use them to give alternate ways of programming adjunctions in Haskell. Let me double check; if this is the case, we can maybe make a change to the Haskell `adjunctions` library.

Matthew - that would be cool!


Made the PR this morning :D


Minor typo (I think) in your lecture, John:

> But you can still try to give me a "best approximation" to the nonexistent natural number \(a\) with \(2a = 4\).

Think that should be \(2a = \) **3**? If not, it's a "thinko" on my part (kudos to [Patrick O'Neill](https://forum.azimuthproject.org/discussion/comment/16151/#Comment_16151) for the "thinko" concept!).


Thanks, Scott! It was definitely a typo on my part, not a thinko on yours. You'll be relieved to hear that there is indeed a natural number with \(2a = 4\). Even in the "new math".


[Matthew Doty #10](https://forum.azimuthproject.org/discussion/comment/16506/#Comment_16506) Thanks, very enlightening, especially using different notation for the two orders instead of e.g. \(\sqsubseteq_A, \sqsubseteq_B\). Typo: the orders need to be swapped in definitions 1 - 4.

> I actually see 4 equivalent definitions of a Galois connection \(f \dashv g\) for two preorders \(\langle A, \sqsubseteq\rangle\) and \(\langle B, \preceq\rangle\)

So:

(1) \(f(a) \preceq b\) if and only if \(a \sqsubseteq g(b)\)

(2) \(f\) and \(g\) are mono and \(f(g(b)) \preceq b\) and \(a \sqsubseteq g(f(a))\)

(3) \(f\) is mono and \(f(g(b)) \preceq b\) and \(f(a) \preceq b \Longrightarrow a \sqsubseteq g(b)\)

(4) \(g\) is mono and \(a \sqsubseteq g(f(a))\) and \(a \sqsubseteq g(b) \Longrightarrow f(a) \preceq b\)


Thanks John!

Great to have you on the forums.


This may not be very useful, but since I had this thought while reading, I might as well post it.

I was wondering what you meant by *best* approximation, and I can see how this is a natural way of defining it, given that all we have is the partial (or pre)order. I was wondering, though, whether another type of *best* approximation might be about limiting the domain, rather than limiting the value the function takes. For instance, the domain on which we have an inverse for the function \(f: \mathbb{N} \to \mathbb{N}\) with \(f(n) = 2n\) is \(2\mathbb{N}\) (by which I mean the set of all even numbers). In that case I would get an approximation that is limited in its domain, but accurate, whereas the right and left adjoints are defined on the full domain, but wrong in places.

After I thought a bit about it, I felt that this is worse than the right and left adjoints, because the right and left adjoints together contain more information. I think (without having proved it) that the domain I was thinking of is the domain where the right and left adjoints agree in value -- so it has less information than the adjoints.
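For the doubling example this guess checks out numerically (my sketch below; recall \(\lceil b/2 \rceil\) is the left adjoint and \(\lfloor b/2 \rfloor\) the right adjoint):

```python
right = lambda b: b // 2      # floor(b/2): right adjoint of doubling
left  = lambda b: -(-b // 2)  # ceil(b/2): left adjoint of doubling

agree = [b for b in range(100) if left(b) == right(b)]
evens = [b for b in range(100) if b % 2 == 0]
print(agree == evens)  # prints True: the adjoints agree exactly on the even
                       # numbers, i.e. on the image of f, where f is invertible
```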


I just discovered an application of Galois connections to economics (or, more precisely, mechanism design) in a newly revised paper by Georg Noldeke and Larry Samuelson, ["The Implementation Duality"](https://cowles.yale.edu/sites/default/files/files/pub/d20/d2091.pdf). They use the "antitone" definition of Galois connection, though (i.e., \( f(p) \leq q \Leftrightarrow p \geq g(q) \)).

Here is a quote from p. 8 of the paper (a "profile" \(u\) gives utility \(u(x)\) to an agent of type \(x\); \((\Phi v)(x)\) is the highest utility that an agent of type \(x\) can get when trading with/being matched to a counterpart; similarly for \(v(y)\) and \(\Psi u\)):

> Suppose we have a pair of profiles u and v such that each buyer x ∈ X is content to obtain u(x) rather than matching with any seller y ∈ Y and providing that seller with utility v(y), that is, the inequality u ≥ Φv holds. It is then intuitive that every seller y ∈ Y would similarly weakly prefer to obtain utility v(y) to matching with any buyer x ∈ X who insists on receiving utility u(x), that is, the inequality v ≥ Ψu holds. Reversing the roles of buyers and sellers in this explanation motivates the other direction of the equivalence.

Trying to summarize what I get here as a pattern: if \(f\dashv g\), then

the right adjoint \(g\) approximates the inverse of \(f\) from above: \(p\leq g(f(p))\),

and

the left adjoint \(f\) approximates the inverse of \(g\) from below: \(f(g(q))\leq q\).

[Edited: I switched "right" and "left" in my first attempt, as Valter points out below.]
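This pattern is easy to check numerically for the doubling example from the lecture; a minimal sketch (my own code, using \(f(n)=2n\) with its right adjoint \(g(b)=\lfloor b/2\rfloor\)):

```python
# Adjunction f ⊣ g on the naturals: f(n) = 2n, g(b) = floor(b/2).
def f(n):
    return 2 * n

def g(b):
    return b // 2

for p in range(100):
    assert p <= g(f(p))    # g approximates the inverse of f from above
for q in range(100):
    assert f(g(q)) <= q    # f approximates the inverse of g from below
print("pattern holds on 0..99")
```

Here \(g(f(p)) = p\) exactly, since doubling is injective; the other inequality \(f(g(q)) \leq q\) is strict precisely on the odd numbers, where no inverse exists.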

@IgnacioViglizzo: isn't it the other way round?

The right adjoint \(g\) of \(f\) approximates the inverse of \(f\) from above: \(p\leq g(f(p))\), whereas a true inverse (if it existed) would bring \(f^{-1}(f(p))\) down to \(p\);

and

the left adjoint \(f\) of \(g\) approximates the inverse of \(g\) from below: \(f(g(q))\leq q\), whereas a true inverse (if it existed) would bring \(g^{-1}(g(q))\) up to \(q\).

But this seems to go against John's characterization of right adjoints being conservative and left ones being "generous", so I may have made a mistake somewhere.

@ValterSorana: you are completely right! It is so easy to get this mixed up!


A couple of mnemonics I find helpful: when we write \(f\dashv g\), the left adjoint \(f\) is on the left, and the right adjoint \(g\) is on the right.

Also, in the important relationships defining adjoints, the left adjoint appears on the left side of \(\leq\), and the right adjoint appears on the right side. For example: $$f(a)\leq b \iff a \leq g(b).$$ The \(f\) appears on the left-hand side of the first inequality, and the \(g\) appears on the right side of the second. And they're still on their correct sides if we write the two inequalities in the other order, as in $$a \leq g(b) \iff f(a)\leq b.$$

The rule of thumb for the other important inequalities, like \(a \leq g(f(a))\) and \(f(g(b)) \leq b\), is to look at which function is on the outside of the composite: the right adjoint \(g\) is on the right side of \(\leq\) when it's on the outside, and when the left adjoint is on the outside, it's on the left side. They need to be there in order for the defining relationship to translate these two inequalities into the always-true statements \(f(a)\leq f(a)\) and \(g(b)\leq g(b)\).
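The defining equivalence in the mnemonic can also be confirmed by brute force; a short sketch, again using the doubling adjunction as a stand-in:

```python
# Exhaustive check of f(a) <= b  iff  a <= g(b) on a finite grid,
# for the left adjoint f(a) = 2a and right adjoint g(b) = floor(b/2).
def f(a):
    return 2 * a

def g(b):
    return b // 2

ok = all((f(a) <= b) == (a <= g(b)) for a in range(50) for b in range(50))
print(ok)
```

This prints `True`: with the left adjoint on the left of \(\leq\) and the right adjoint on the right, the two inequalities agree at every grid point.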