Lecture 12 - Chapter 1: Generative Effects

We now reach a topic highlighted by Fong and Spivak: "generative effects". These are, roughly, situations where the whole is more than the sum of its parts. A nice example shows up in the logic of partitions.

Remember that any set \(X\) has a poset of partitions. This poset is called \(\mathcal{E}(X)\), and its elements are partitions of \(X\). Each partition \(P\) corresponds to an equivalence relation \(\sim_P\), where \(x \sim_P y\) if and only if \(x\) and \(y\) are in the same part of \(P\). This makes it easy to describe the partial order on \(\mathcal{E}(X)\): we say a partition \(P\) is finer than a partition \(Q\), or \(Q\) is coarser than \(P\), or simply \(P \le Q\), when

$$ x \sim_P y \textrm{ implies } x \sim_Q y $$ for all \(x,y \in X\).
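If you like to compute, this refinement condition is easy to test mechanically. Here is a small Python sketch (the helper names `part_of` and `is_finer` are mine, not from the lecture), representing a partition as a list of sets:

```python
# A minimal sketch: P <= Q iff every part of P lies inside some part of Q,
# which for partitions is the same as "x ~_P y implies x ~_Q y".

def part_of(partition, x):
    """Return the part (block) of the partition containing x."""
    for part in partition:
        if x in part:
            return part
    raise ValueError(f"{x!r} is in no part of the partition")

def is_finer(P, Q):
    """True iff P <= Q in the poset of partitions."""
    return all(part_of(Q, next(iter(p))) >= p for p in P)

P = [{1, 2}, {3}, {4}]   # finer
Q = [{1, 2, 3}, {4}]     # coarser
print(is_finer(P, Q))  # True
print(is_finer(Q, P))  # False
```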

Now that we have this poset, how can we do logic with it?

In the logic of subsets, we've seen that "and" and "or" are the operations of "meet" and "join" in a certain poset. So, the logic of partitions should use "meet" and "join" in the poset \(\mathcal{E}(X)\). Let's see what they're like!

First, we can ask whether two partitions \(P\) and \(Q\) have a meet. Remember: the meet \(P \wedge Q\), if it exists, is the coarsest partition that is finer than \(P\) and \(Q\).

Puzzle 34. Can you draw the coarsest partition that's finer than these two partitions?

image

In fact the meet of two partitions always exists, and it's easy to describe:

Puzzle 35. Suppose \(P\) and \(Q\) are two partitions of a set \(X\). Show that there's an equivalence relation \( \approx \) defined by

$$ x \approx y \textrm{ if and only if } x \sim_P y \textrm{ and } x \sim_Q y . $$

Puzzle 36. Every equivalence relation gives a partition as in Puzzle 29. Show that \(\approx\) gives the partition \(P \wedge Q\).
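If you want to experiment (warning: this gives away part of the answer), note that the equivalence classes of \( \approx \) are the nonempty intersections of a part of \(P\) with a part of \(Q\). A small Python sketch (the `meet` helper is my own, not from the book) makes this computable:

```python
# A sketch of the meet of two partitions, each given as a list of sets:
# the parts of P ∧ Q are the nonempty intersections p ∩ q, which are
# exactly the equivalence classes of  x ≈ y  iff  x ~_P y and x ~_Q y.

def meet(P, Q):
    """Coarsest partition finer than both P and Q."""
    return [p & q for p in P for q in Q if p & q]

P = [{1, 2, 3}, {4, 5}]
Q = [{1, 2}, {3, 4}, {5}]
print(meet(P, Q))  # [{1, 2}, {3}, {4}, {5}]
```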

We can also ask whether \(P\) and \(Q\) have a join! The join \(P \vee Q\), if it exists, is the finest partition that is coarser than \(P\) and \(Q\).

Can you draw the finest partition that's coarser than these two partitions?

image

To check your answer, see our discussion of Exercise 2.

The join of two partitions always exists. Since "and" and "or" are meet and join in the logic of subsets, you might think that to describe the join of partitions, we can just copy Puzzle 35 and replace "and" with "or". But no!

Puzzle 37. Suppose \(P\) and \(Q\) are two partitions. Show that the relation \( \frown\) defined by

$$ x \frown y \textrm{ if and only if } x \sim_P y \textrm{ or } x \sim_Q y $$ is not always an equivalence relation.
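A concrete counterexample is tiny; here is one checked in Python (the example partitions are mine, and it's a spoiler for Puzzle 37, so look away if you want to solve it first):

```python
# Transitivity can fail for the relation  x ⌢ y  iff  x ~_P y or x ~_Q y:
# below, 1 ⌢ 2 holds via P and 2 ⌢ 3 holds via Q, but 1 ⌢ 3 fails.

def same_part(partition, x, y):
    """True iff x and y lie in the same part of the partition."""
    return any(x in part and y in part for part in partition)

P = [{1, 2}, {3}]
Q = [{1}, {2, 3}]
frown = lambda x, y: same_part(P, x, y) or same_part(Q, x, y)

print(frown(1, 2), frown(2, 3), frown(1, 3))  # True True False
```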

To get the equivalence relation corresponding to \(P \vee Q\), we have to work harder. Say we have any partitions \(P\) and \(Q\) of some set \(X\). If you give me two elements \(x, y \in X\) and ask if they're in the same part of \(P \vee Q\), it's not enough to check whether

$$ x \textrm{ and } y \textrm{ are in the same part of } P \textrm{ or the same part of } Q . $$ (That's what \(x \frown y\) means.) Instead, you have to check whether there's a list of elements \(z_1, \dots, z_n\) such that

$$ x \textrm{ and } z_1 \textrm{ are in the same part of } P \textrm{ or the same part of } Q $$ and

$$ z_1 \textrm{ and } z_2 \textrm{ are in the same part of } P \textrm{ or the same part of } Q $$ and so on, and finally

$$ z_n \textrm{ and } y \textrm{ are in the same part of } P \textrm{ or the same part of } Q . $$ This picture by Michael Hong shows how it works:

image

So, the join is a lot more complicated than the meet!

Puzzle 38. If \(P\) and \(Q\) are two partitions on a set \(X\), let the relation \( \simeq \) be the transitive closure of the relation \( \frown \) defined in Puzzle 37. This means that \( x \simeq y\) if and only if

$$ x \frown z_1 \textrm{ and } z_1 \frown z_2 \textrm{ and } \dots \textrm{ and } z_{n-1} \frown z_n \textrm{ and } z_n \frown y $$ for some \(z_1, \dots, z_n \in X\). Show that \( \simeq \) is an equivalence relation.

Puzzle 39. Show that the equivalence relation \( \simeq \) on \(X\) gives the partition \(P \vee Q \).
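For the computationally minded, the "hopping" recipe above can be implemented by taking the union of the two relations and closing it under transitivity; a union-find structure does the hopping for us. This is my own sketch of the construction, not code from the book:

```python
# A sketch of the join P ∨ Q: the transitive closure of  ~_P ∪ ~_Q ,
# computed with a union-find (disjoint-set) structure.

def join(P, Q):
    """Finest partition coarser than both P and Q."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    # Merging the elements within each part of P and each part of Q
    # yields exactly the transitive closure of the union relation.
    for part in list(P) + list(Q):
        it = iter(part)
        first = next(it)
        find(first)          # register singleton parts too
        for x in it:
            union(first, x)

    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

P = [{1, 2}, {3, 4}]
Q = [{2, 3}, {1}, {4}]
print(join(P, Q))  # [{1, 2, 3, 4}]: 1 ~_P 2, 2 ~_Q 3, 3 ~_P 4
```

Note how the example output already shows the generative effect: 1 and 4 end up in the same part of \(P \vee Q\) even though they are in different parts of both \(P\) and \(Q\).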

This is the first hint of what Fong and Spivak call a "generative effect". To decide if two elements \(x , x' \in X\) are in the same part of the meet \(P \wedge Q\), it's enough to know if they're in the same part of \(P\) and in the same part of \(Q\), since

$$ x \sim_{P \wedge Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ and } x \sim_Q x'. $$ But this does not work for the join!

$$ \textbf{THIS IS FALSE: } \; x \sim_{P \vee Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ or } x \sim_Q x' . $$ To decide if \(x \sim_{P \vee Q} y\) you need to look at other elements of \(X\), too. It's not a "local" calculation - it's a "global" one!

But to make this really precise and clear, we need to think about "pulling back" partitions. We'll do that next time.

To read other lectures go here.

Comments

  • 1.

    There is a typo in the definition of join: you wrote the meet again!

  • 2.

    Thanks, Valter. Fixed!

  • 3.
    edited April 11

    I also fixed the statement of Puzzle 36, and another place where I'd written \(\wedge\) instead of \(\vee\).

  • 4.

    Now Puzzle 36 says that \( \approx \) is the join; shouldn’t it be the meet?

  • 5.

    @Valter: Seconded. I believe

    Puzzle 36. Every equivalence relation gives a partition as in Puzzle 29. Show that \(\approx\) gives the partition \(P \vee Q\).

    should say

    Puzzle 36. Every equivalence relation gives a partition as in Puzzle 29. Show that \(\approx\) gives the partition \(P \wedge Q\).

  • 6.

    34: there is no pair shared by both P and Q, so the meet is discrete.

    35: conjunction of equivalence relations preserves reflexivity, symmetry, and transitivity (x~y~z P and x~y~z Q implies x~z P and x~z Q, etc.)

    36: clearly this conjunction is finer than each. suppose there were another such relation R that was coarser; but then there exists x,y such that x~y P and x~y Q, but x!~y R, so it is not finer, contradiction. hence the conjunction is the coarsest relation which is finer than P and Q, thus the "meet" in the poset.

    37: the problem is with transitivity - if x~y P and y~z Q, that does not imply x~z in either. so the naive disjunction of relation "breaks" along these points, and is not transitive.

    38: so, if we add transitive closure, then this ignores the ~ pairs which span across the break points. we then have an equivalence relation, because reflexivity and symmetry were trivially satisfied in the definition.

39: clearly the disjunction is coarser than each, and the transitive closure only eliminates some of the new "joined" ~s which failed to be transitive. suppose there were another such relation R that was finer; then either transitivity is lost, or there exists x~y R but x!~y P and x!~y Q, so it is not coarser, contradiction. hence the transitive closure of the disjunction is the finest relation which is coarser than P and Q, thus the "join" in the poset.

    someone please feel free to make these nicer! (or find mistakes)

  • 7.
    edited April 15

    Puzzle CW1: how can we use meets and joins to distinguish partitions in the powerset-powerset poset P(P(X))?

  • 8.

    Christian - we have a system for numbering everyone's puzzles, so yours is CW1.

    I like it! I like it so much I'm gonna try it. A partition of \(X\) is a subset \(P \subseteq P(X)\), or if you want to show off, an element \(P \in P(P(X)) \). But the first description is better for characterizing partitions. Any subset \(S \subseteq P(X) \) has a meet \(\bigvee S\) which is the union of all the sets in \(S\), and for a partition this union must be all of \(X\), so we need

    $$ \bigvee P = X $$ I don't see an equally terse way of saying that the sets in \(P\) are pairwise disjoint and nonempty. Clearly this says that the 1-fold intersections of sets \(P\) are nonempty while the 2-fold and higher intersections are empty.

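As a quick sanity check on this characterization, here is a Python sketch (the `is_partition` helper is mine): a family of subsets of \(X\) is a partition exactly when its members are nonempty, pairwise disjoint, and cover \(X\).

```python
# A sketch testing the three conditions: cover X, nonempty parts,
# pairwise disjoint parts.
from itertools import combinations

def is_partition(P, X):
    """True iff the family of sets P is a partition of the set X."""
    P = list(P)
    covers = set().union(*P) == set(X)
    nonempty = all(p for p in P)
    disjoint = all(not (p & q) for p, q in combinations(P, 2))
    return covers and nonempty and disjoint

X = {1, 2, 3, 4}
print(is_partition([{1, 2}, {3, 4}], X))     # True
print(is_partition([{1, 2}, {2, 3, 4}], X))  # False: parts overlap at 2
print(is_partition([{1, 2}, {3}], X))        # False: 4 is not covered
```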
  • 9.
    edited April 12

    I've tried to clarify the idea of "generative effects" a bit by adding a summary at the end:


    This is the first hint of what Fong and Spivak call a "generative effect". To tell whether \(x\) and \(y\) are in the same part of \(P \wedge Q\), it's enough to know if they're in the same part of \(P\) and the same part of \(Q\):

$$ x \sim_{P \wedge Q} y \textrm{ if and only if } x \sim_P y \textrm{ and } x \sim_Q y. $$ But to tell whether \(x\) and \(y\) are in the same part of \(P \vee Q\), it's not enough to know if they're in the same part of \(P\) or the same part of \(Q\):

    $$ \textbf{THIS IS FALSE: } x \sim_{P \vee Q} y \textrm{ if and only if } x \sim_P y \textrm{ or } x \sim_Q y $$ To decide whether \(x \sim_{P \vee Q} y\) you need to look at other elements of \(X\), too. It's not a "local" calculation - it's a "global" one!

  • 10.
    edited April 12

    Valter Sorana and Matthew Doty were right. Puzzle 36 should, and now does, say:

    Puzzle 36. Every equivalence relation gives a partition as in Puzzle 29. Show that \(\approx\) gives the partition \(P \wedge Q\).

Whenever I say "meet" there's a 20% chance that I should have said "join", and vice versa. I know what I mean, I just find these words incredibly unevocative. If we called them "greatest lower bound" or "inf", and "least upper bound" or "sup", I would do much better.

  • 11.

I've been staring at this for a day, and it all just hit me like a brick and seems obvious now!

    Thanks John & commenters!

    The construction of P \/ Q reminds me very much of how free categories, monoids, monads etc are constructed (ie induction), might it be regarded as some kind of free coproduct or something?

    Also, it reminds me of the asymmetry between the straightforward product of monoids vs the complicated coproduct of monoids, is this related somehow to partition joins?

  • 12.

    @Ken – I think they are related: given two monoids, a red one and a blue one, the smallest monoid containing both doesn't simply consist of red elements or blue ones, but also includes alternating sequences, eg red-blue-red, blue-red-blue-red.

    In the same way, the join of red \(\sim_P\) and blue \(\sim_Q\) doesn't just consist of red links or blue links, but also includes alternating paths of links, eg red-blue-red, blue-red-blue-red.

    I'm guessing there's some deep categorical reason for this, ie at some level the two constructions are basically the same.

  • 13.

    Ken wrote:

    I've been staring at this for a day, and it all just hit me like brick and seems obvious now!

    Great! It would have been clearer with more pictures. I wish some god of pictures like Michael Hong would draw a set equipped with two partitions \(P\) and \(Q\), say red and blue, and show how to get from one point \(x\) to another point \(y\) by hopping like this:

    \( x\) and \(z_1\) are in the same part of \(P\)

    and

    \(z_1\) and \(z_2\) are in the same part of \(Q\)

    and so on, maybe not too long, ending with something like

    \(z_2\) and \(y\) are in the same part of \(P\).

    For extra fun, then draw the partition \(P \vee Q\).

  • 14.
    edited April 13

![warning sign](http://math.ucr.edu/home/baez/mathematical/warning_sign.jpg)

    (I use this symbol to let beginners know that this is more difficult material, and that they shouldn't feel bad if it makes no sense to them!)

    Ken wrote:

    The construction of \(P \vee Q\) reminds me very much of how free categories, monoids, monads etc are constructed (i.e. induction), might it be regarded as some kind of free coproduct or something?

    Anindya wrote:

    I'm guessing there's some deep categorical reason for this, i.e. at some level the two constructions are basically the same.

    That's a great point! A partition is the same as an equivalence relation, and an equivalence relation on a set \(X\) is the same as a groupoid that's also a preorder with \(X\) as its set of objects. So, I believe when we're forming \(P \vee Q\) we're forming the coproduct in the category of "groupoids that are also preorders with \(X\) as set of objects". These things are pretty different than monoids, but there's a similarity: both are categories with a fixed set of objects. (A monoid is a category with one object.) I believe this makes them similar enough that the coproduct (aka \(\vee\)) of partitions is defined using the same sort of "alternating, inductive" construction as coproduct of monoids!

    It's nice to look at coproducts in the category \(\mathrm{Cat}/X\) where objects are "categories with \(X\) as their set of objects" and morphisms are "functors that are the identity on objects". This gives that alternating inductive construction in a very pure form. Given categories \(C\) and \(D\) in \(\mathrm{Cat}/X\), their coproduct is a category whose morphisms are alternating strings of morphisms in \(C\) and \(D\).

  • 15.

@John – here's a hand-drawn pic of your \(P \vee Q\) if that helps...

![P v Q illustrated with pink and blue links](https://i.imgur.com/7ZZVc3T.png)
  • 16.
    edited April 13

    Anindya - very nice! Maybe someone else can make a slick electronic version suitable for a coffee table book. But your picture shows how to walk from 11 to 13 by an alternating sequence of \(\sim_P\) and \(\sim_Q\) steps, proving that

    $$ 11 \sim_{P \vee Q} 13 $$ even though

    $$ 11 \nsim_P 13 $$ and

    $$ 11 \nsim_Q 13 $$

  • 17.

I've been playing with using [Graphviz](https://www.graphviz.org/) for some of my diagrams. There's a site where you can [run this online](http://www.webgraphviz.com/). I'm using the [dot language](https://graphviz.gitlab.io/_pages/pdf/dotguide.pdf). I'll paste my code for it here with some explanation:

```
digraph Anindya {
  rankdir=TB;
  size="8,5"
  node [shape = circle]; 11; 12; 13; 21; 22; 23
  subgraph {rank=source; 11; 12; 13}
  subgraph {rank=sink; 21; 22; 23}
  edge [arrowhead=none]
  11 -> 12 [label="~_P" color=red]
  12 -> 22 [label="~_Q" color=blue]
  22 -> 23 [label="~_P" color=red]
  23 -> 13 [label="~_Q" color=blue]
  21 -> 22 [style=invis]
}
```

    The dot format can be thought of as ordering the nodes in the graph topologically. rankdir says the first element should be at the top, and move down. You can also do LR, RL, BT.

    I've specified each of the nodes (though this isn't strictly needed) on line 4. Nodes can be used in edges first if you want.

    I've gathered the top row (11, 12, 13) and bottom row (21, 22, 23) into subgraphs in order to force the order.

    I have a connection between 21 and 22 which is not actually in the graph, I've made its style invis. This was because the renderer kept putting 21 on the right in its row, contrary to the image we actually want. Adding invisible connections (and nodes!) can force certain layouts to happen.

    There are ways to include styled text so we can actually get the subscript. I don't know how to do that right now.

  • 18.
    edited April 14

    Of course, you can use graphviz's subgraphs! Thanks, Jared Summers!

    (For some reason my android vkeyboard's space bar becomes tiny here. I'm sorry, it's causing me to post my comments prematurely...)

  • 19.
    edited April 16

@John Does this work?

![Join Partition](http://aether.co.kr/images/Join-partition.svg)
  • 20.

@Ken, in #11 I would also say that when we get to it, we'll be able to say that meets and joins are just products and coproducts, once we view the lattice of partitions (which is a poset) as a category.

  • 21.

    There is a cool little theorem that might be mentioned: if the union of two equivalence relations on \(X\) is the whole set \(X^2\), then one of the equivalence relations is already the whole set \(X^2\).

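This can also be checked by brute force on a small set. Here is a Python sketch (my own code; the `partitions` and `relation` helpers are hypothetical names) that verifies the claim over every pair of equivalence relations on a 4-element set:

```python
# Brute-force check: whenever the union of two equivalence relations on
# X = {1,2,3,4} is all of X², one of them already is all of X².

def partitions(xs):
    """Yield every partition of the list xs as a list of lists."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for smaller in partitions(rest):
        for i, part in enumerate(smaller):
            yield smaller[:i] + [part + [first]] + smaller[i + 1:]
        yield [[first]] + smaller

def relation(P):
    """The equivalence relation of a partition, as a set of ordered pairs."""
    return {(x, y) for part in P for x in part for y in part}

X = [1, 2, 3, 4]
total = {(x, y) for x in X for y in X}
count = 0
for P in partitions(X):
    for Q in partitions(X):
        count += 1
        if relation(P) | relation(Q) == total:
            # the theorem: one of the two must already be total
            assert relation(P) == total or relation(Q) == total
print(f"theorem holds for all {count} pairs")
```

There are 15 partitions of a 4-element set (the Bell number \(B_4\)), so this checks 225 pairs.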
  • 22.
    edited April 14

    Michael Hong - brilliant! I'll add it to Lecture 12, and credit you. This explains the idea better than mere words and symbols.

  • 23.
    edited April 15

    David - yes, that's cool. I should point out to everyone that the union of equivalence relations, the relation \(\sim_P \cup \sim_Q\), is the operation I'd been calling \( \frown \) in the lecture. The LaTeX name for this symbol is \frown, and I used that name because it's an unsatisfactory first attempt at defining \(\sim_{P \vee Q}\).

  • 24.

    Quick proof-by-contradiction of @David's "cool little theorem" above (#21):

    Suppose we have a set \(X\) and two equivalence relations on it, red and blue, whose union is the total relation on \(X\). In other words, any two nodes in \(X\) are either red-linked, or blue-linked, or both.

Suppose further that neither red nor blue is total. Then we can find nodes \(A\) and \(B\) that are not red-linked (so must be blue-linked), and nodes \(C\) and \(D\) that are not blue-linked (so must be red-linked).

    Now consider \(AC\) and \(BD\). If both are red, we have \(A \text{ red } C \text{ red } D \text{ red } B\) – contradiction. If both are blue, we have \(C \text{ blue } A \text{ blue } B \text{ blue } D\) – contradiction.

    If \(AC\) is blue and \(BD\) is red, consider \(AD\). If \(AD\) is red, we have \(A \text{ red } D \text{ red } B\) – contradiction. But if \(AD\) is blue, we have \(C \text{ blue } A \text{ blue } D\) – contradiction.

    If \(AC\) is red and \(BD\) is blue, consider \(BC\) – by a similar argument, this can't be red and can't be blue.

    So any which way leads to a contradiction, hence either the red or the blue relation was already total.

  • 25.

    @John: I was thinking about your \(\mathrm{Cat}/X\) example in #14 – I can see how it works, and how it generalises both equivalence relations on \(X\) (ie groupoid preorders with object set \(X\)) and monoids (ie categories with object set \(\mathbf{1}\)).

    However, it struck me that \(\mathrm{Cat}/X\) is a kind of awkward category whose definition necessarily involves talking about identity of objects (ie a necessary evil, if you like!). It looks a bit like the comma category \((\mathrm{Obj}\downarrow X)\), where \(\mathrm{Obj}\) and \(X\) are functors from \(\mathrm{Cat}\) to \(\mathrm{Set}\), but that resemblance is misleading - the coproduct in that category is straightforward - we just put the two \(X\)-labelled categories side-by-side.

    This makes me think that the "alternating, inductive" construction arises from merging objects. The crucial point is that we have a red arrow, and a blue arrow, then we merge the red target with the blue source. This creates a new composable pair, red-then-blue.

    We see this in isolated form in the standard counterexample of a functor whose "image" is not a category. Take \(C\) with 4 objects and arrows \(C_0 \rightarrow C_1\) and \(C_2 \rightarrow C_3\), and \(D\) with 3 objects and arrows \(D_0 \rightarrow D_1 \rightarrow D_2\). We have a functor \(F : C \rightarrow D\) such that \(F(C_0) = D_0\), \(F(C_1) = F(C_2) = D_1\), \(F(C_3) = D_2\). Then \(F(C_0 \rightarrow C_1)\) and \(F(C_2 \rightarrow C_3)\) are composable in \(D\), but \(D_0 \rightarrow D_2\) is not in the image of \(F\). So the full image of \(F\) is strictly larger than the image.

  • 26.
    edited April 16

    image

    Anindya: yes, my category \(\text{Cat} / X \) is equivalent to the comma category you describe.

    \(\text{Cat} / X \) is an example of a slice category, also known as an over category. Given a category \(C\) and an object \(c \in C\), the objects of the slice category \(C/c\) are objects of \(C\) over \(c\), meaning things like this:

$$ f: c' \to c $$

The morphisms in \(C/c\) are commutative triangles of the obvious sort (see the link).
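For readers who skip the link, the "obvious sort" of triangle is easy to unpack (a routine spelling-out, not part of the original comment): a morphism in \(C/c\) from \(f : c' \to c\) to \(g : c'' \to c\) is a morphism \(h : c' \to c''\) in \(C\) making the triangle commute, i.e. satisfying

$$ g \circ h = f . $$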

    Of course, to define \(\mathrm{Cat}/X\) as a slice category, we need to think of the set \(X\) as a category! To do this, we take the codiscrete category on \(X\), meaning we put in exactly one morphism between each pair of objects. We can call this \(\textrm{codisc}(X)\). So, the slice category I was calling \(\text{Cat} / X \) should more precisely be called \(\text{Cat} / \text{codisc}(X). \)
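The codiscrete category is simple enough to build directly. A minimal sketch (the representation is my own): objects are the elements of \(X\), a morphism \(x \to y\) is just the ordered pair \((x, y)\), and composition is forced because there is exactly one morphism between any two objects.

```python
def codisc(X):
    """Codiscrete category on the set X: exactly one morphism x -> y
    for every ordered pair (x, y) of objects."""
    objects = set(X)
    morphisms = {(x, y) for x in objects for y in objects}

    def compose(g, f):
        # g after f: (y, z) composed with (x, y) gives (x, z)
        assert f[1] == g[0], "morphisms not composable"
        return (f[0], g[1])

    return objects, morphisms, compose

objs, mors, comp = codisc({"a", "b"})
print(len(mors))                      # 4: one morphism per ordered pair
print(comp(("b", "a"), ("a", "b")))   # ('a', 'a'), the identity on 'a'
```

Note that every morphism here is invertible, so \(\textrm{codisc}(X)\) is in fact a groupoid.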

    But a slice category is a special case of a comma category, so we can also think of \(\text{Cat} / X \) as a comma category!

    Following this through makes \(\text{Cat} / X \) into a comma category in a slightly different way than you're doing. However, it's equivalent to your way, since the functor \(\mathrm{codisc} : \text{Set} \to \text{Cat}\) sending any set to its codiscrete category is right adjoint to the functor \( \text{Ob} : \text{Cat} \to \text{Set}\) sending any category to its set of objects. We can always rework the description of a comma category using adjoint functors in this manner, and get an equivalent category.
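Concretely, this adjunction is the natural bijection (a standard fact, stated here for reference)

$$ \mathrm{Cat}(C, \mathrm{codisc}(X)) \;\cong\; \mathrm{Set}(\mathrm{Ob}(C), X), $$

natural in \(C\) and \(X\): a functor into \(\mathrm{codisc}(X)\) is completely determined by what it does on objects, since there is exactly one possible choice on each morphism.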

    All these constructs are "somewhat awkward" or "evil" in that they (at least implicitly) involve equations between objects. One reason is that we're treating \(\text{Cat}\) as a mere category when it's really a 2-category, i.e., we're neglecting natural isomorphisms.

    However, there's a strong argument that the collection of categories with a fixed set of objects should be treated as a mere category, not a 2-category - because that's the approach we already take with monoids, which are categories with just one object. I'm not saying we should always do this, but there's a good argument that one should allow oneself to do it.

    But by now I'm too tired of writing to present this argument.
