
I am a retired mathematician and mathematical logician (BS, MIT '65, and PhD, Boston U. '71), now a Visiting Scholar at the University of California at Riverside (John Baez's home base too). My mentor in mathematics was Gian-Carlo Rota of MIT, who died prematurely in 1999 (age 66). I then focused on one of his unfinished projects: developing a logic of equivalence relations = partitions = quotient sets, which would be a kind of dual to the usual Boolean logic of subsets (usually mis-specified as the special case of propositional logic), since quotient sets are categorically dual to subsets. Dedekind had defined the join and meet of partitions in the 19th century, but one needs at least an implication operation on partitions to warrant calling it a "logic of partitions." After figuring out how to define the implication, and then seeing the algorithm to translate all the Boolean operations on subsets over to partitions, I published the long paper developing partition logic (along with correctness and completeness theorems for a system of semantic tableaux for partitions) in the Review of Symbolic Logic. I gave the definition of implication for partitions in a comment on Lesson 11. A shorter and better introduction to partition logic was later published in the Logic Journal of the IGPL.

The quantitative version of Boole's logic of subsets is just finite logical probability theory as developed by Boole in his *Laws of Thought*. Rota always held that probability (i.e., the normalized size of a subset) is to subsets as information is to partitions.

$$\frac{\text{Probability}}{\text{Subsets}}=\frac{\text{Information}}{\text{Partitions}}$$
The key to working that out was to see the analogy between an "element of a subset" and a "distinction of a partition," where a *distinction* or *dit* of a partition is an ordered pair of elements of the underlying set \(X\) that are in different blocks or parts of the partition. The *indistinctions* or *indits* of a partition are the ordered pairs of elements in the same block or part of the partition. Hence the set of all indits of a partition is just the binary equivalence relation associated with the partition, and the set of all dits of a partition, the *ditset* \(dit(P)\), is the complementary binary relation called an *apartness relation* or just a *partition relation*. The *logical entropy* \(h(P)\) of a partition is just the normalized size of the ditset:

$$h(P)=\frac{|dit(P)|}{|X^2|}.$$

Logical entropy is related in a definite way to Shannon entropy. All the Shannon definitions of simple, joint, conditional, and mutual entropy can be obtained by a uniform requantifying transformation from the corresponding definitions in logical information theory. Moreover, logical entropy (unlike Shannon entropy) is a measure in the sense of measure theory, so the logical definitions of simple, joint, conditional, and mutual logical entropy of partitions are just the measure applied respectively to ditsets, unions of ditsets, differences of ditsets, and intersections of ditsets. Indeed, logical entropy is a probability measure, with the interpretation that \(h(P)\) is the probability of getting a distinction in two independent draws from the set \(X\). The basic paper developing logical information theory (and spelling out the comparisons with the closely related Shannon theory) was also published in the Logic Journal of the IGPL.

I went into a little detail about logical information theory as the quantitative version of partition logic because it bears on the question of which of the two opposite partial orderings on partitions, the coarseness ordering or the refinement ordering, should be used. Since refinement increases the number of distinctions, it increases logical (and Shannon) entropy, so the refinement ordering seems best if one wants the ordering of partitions to track their information content as measured by logical or Shannon entropy.
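As a small illustration of these definitions (my own sketch, not from the papers), here is the ditset and logical entropy of a partition computed directly by counting ordered pairs, together with a check that refining a partition preserves its dits and so can only increase \(h(P)\):

```python
from itertools import product

def ditset(partition):
    """All ordered pairs (x, y) whose elements lie in different blocks."""
    X = [x for block in partition for x in block]
    block_of = {x: i for i, block in enumerate(partition) for x in block}
    return {(x, y) for x, y in product(X, X) if block_of[x] != block_of[y]}

def logical_entropy(partition):
    """h(P) = |dit(P)| / |X|^2: the probability that two independent
    draws from X yield a distinction of P."""
    n = sum(len(block) for block in partition)
    return len(ditset(partition)) / n ** 2

coarse = [{1, 2, 3}, {4, 5, 6}]
fine = [{1, 2}, {3}, {4, 5}, {6}]      # a refinement of `coarse`

print(logical_entropy(coarse))          # 18 dits out of 36 pairs -> 0.5
assert ditset(coarse) <= ditset(fine)   # refinement never loses distinctions
assert logical_entropy(coarse) < logical_entropy(fine)
```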

Another topic of my research has been "heteromorphisms," which are morphisms between the objects of different categories, e.g., the canonical injection of a set of generators into the free group on that set. Homomorphisms may be treated formally by hom-functors, and heteromorphisms are treated formally by the "het-functors" or *profunctors* that appear later in the course. For one reason or another, Mac Lane and Eilenberg did not include heteromorphisms in their basic "ontology" for category theory, even though they are just as much a tool for the proverbial "working mathematician" (referenced in the title of Mac Lane's text) as are homomorphisms. The French school of category theory, e.g., the Grothendieck school, was much less prudish and let hets 'out of the closet' as just "morphisms" routinely. The differing attitude about hets comes out in the treatment of adjoint functors. The simpler and, I think, more natural treatment of adjoints was developed by Bodo Pareigis in the late 1960s and published in his text translated into English as: Pareigis, Bodo. 1970. *Categories and Functors*. New York: Academic Press. It also comes out in what one takes as more important or more basic: adjoint functors or representable functors. Grothendieck gives representable functors pride of place, whereas the American school puts the emphasis on adjoint functors. Hets, representable functors, and adjoints are all tied together in the Pareigis treatment of adjoints. Take the het-functor whose value \(\text{het}(X,G)\) is the set of set-to-group maps from a set \(X\) to a group \(G\). The underlying set functor is just the right-representation of those hets by homs in the category of sets, and the free-group functor is just the left-representation of those *same hets* by homs in the category of groups. Thus one has the natural isomorphisms:

$$ \text{hom}(F(X),G)\cong \text{het}(X,G)\cong \text{hom}(X,U(G)).$$

The heterophobic American school always leaves out the middle term of hets and just defines an adjunction using the natural isomorphism between the hom-sets. All left- and right-representations of a het-functor define a pair of adjoint functors, and given a pair of adjoint functors, there is a het-functor such that the given adjoints are isomorphic to the left- and right-representations of that het-functor. All this holds as well for adjunctions on preorders, or Galois connections. An introductory paper is here.
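The representation pattern can be made concrete in a few lines of code. The sketch below swaps the free *group* for the free *monoid* to keep things short (this is my simplification, not the post's construction): a het from a set \(X\) to a monoid \(M\) is just a set map on the generators, and its left-representation is the unique monoid homomorphism out of the free monoid \(F(X)\) (words over \(X\)) that extends it:

```python
# A tiny target monoid M: the integers mod 3 under addition.
M_op = lambda a, b: (a + b) % 3
M_unit = 0

X = ['a', 'b']           # the generating set
het = {'a': 1, 'b': 2}   # a het X -> M, i.e. a set map into U(M)

def extension(word):
    """The unique monoid hom F(X) -> M extending `het`,
    where F(X) is the free monoid of words over X."""
    result = M_unit
    for letter in word:
        result = M_op(result, het[letter])
    return result

# The hom agrees with the het on generators ...
assert all(extension(x) == het[x] for x in X)
# ... and respects the structure: concatenation goes to M's operation.
assert extension('ab' + 'ba') == M_op(extension('ab'), extension('ba'))

print(extension('abba'))   # 1 + 2 + 2 + 1 = 6, and 6 mod 3 = 0
```

The bijection \(\text{hom}(F(X),M)\cong\text{het}(X,M)\) is visible here: `extension` is determined entirely by the dictionary `het`, and restricting any monoid hom out of \(F(X)\) to one-letter words recovers such a dictionary.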

## Comments

It's an honor to have you here, David!


Hi, David! It's an interesting coincidence that my former student Brendan Fong started using partitions - under the guise of corelations - in our work on network theory. That's why I'm talking a bit about the logic of partitions here.


Nice to know that you're here, David! I remember reading your paper on heteromorphisms, which is how I first learned about profunctors. I'm a big fan of this perspective, as well as of closely related structures like [proarrow equipments](https://ncatlab.org/nlab/show/2-category+equipped+with+proarrows).


Hello David, thanks a lot, very interesting post!


Tobias, I first learned about profunctors when I wrote the paper on heteromorphisms, and John pointed out that I had reinvented profunctors so I added those references...


In this thread, I think I just learned that heteromorphisms are profunctors and corelations are partitions. There's frankly no way I'd ever have figured that out on my own unless I were working on a project that centered around those ideas to begin with.

This sort of thing is precisely why I signed up for this course, but is there something out there like a list of equivalent or closely-related terms from different branches of the literature?


Robert - I find that Wikipedia and the nLab are pretty good about pointing out the links between concepts. So that's where I often start. But talking to people is even better, which is why I "waste my time" blogging so much.

By the way: a corelation is not exactly the same as a partition: a corelation \(f : X \to Y\) is the same as a partition of \(X + Y\) (that is, the disjoint union of \(X\) and \(Y\)). That's different enough to deserve a different name.

On the other hand, partitions of a set \(X\) are in one-to-one correspondence with equivalence relations on \(X\)... so "partition" and "equivalence relation" are really just two different ways of thinking about the same thing. It's probably a useful difference, but it may fool some people into not noticing the unity of the concepts.

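The one-to-one correspondence between partitions and equivalence relations is easy to exhibit concretely; here is a quick sketch (illustrative code, not from the thread) that converts a partition to its equivalence relation as a set of ordered pairs and then recovers the blocks as equivalence classes:

```python
def partition_to_relation(partition):
    """The equivalence relation of a partition: all ordered pairs
    of elements lying in the same block."""
    return {(x, y) for block in partition for x in block for y in block}

def relation_to_partition(relation):
    """Recover the blocks as the equivalence classes of the relation."""
    classes = {}
    for x, y in relation:
        classes.setdefault(x, set()).add(y)
    # each element's class is its block; deduplicate via frozensets
    return {frozenset(block) for block in classes.values()}

P = {frozenset({1, 2}), frozenset({3, 4, 5})}
R = partition_to_relation(P)
assert relation_to_partition(R) == P   # the round trip is the identity
```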

Thanks, John! I've found that Wikipedia is sometimes a little sparse, and the nLab is not exactly written with the novice in mind. Certainly talking to people is the way to go for a lot of these things, but I don't want to waste anyone's time! On that note, and related to your clarifying remark about corelations, do you happen to know of any terms that I should take note of when considering two partitions on the same set (rather than what I'm understanding as one partition on two sets)?


> On that note, and related to your clarifying remark about corelations, do you happen to know of any terms that I should take note of when considering two partitions on the same set (rather than what I'm understanding as one partition on two sets)?

In game theory, overlapping partitions are sometimes referred to as *information partitions*. Each partition corresponds to a particular player. This leads to a notion of common knowledge that Robert Aumann introduces in his 1976 *Agreeing to Disagree*. The [Stanford Encyclopedia of Philosophy](https://plato.stanford.edu/entries/common-knowledge/#2.3) has a nice article about this.
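In Aumann's setup, the common-knowledge partition is what he calls the "meet" of the players' information partitions: the finest partition that every player's partition refines. A small sketch (the helper name `common_coarsening` is mine) computes it by merging any blocks that overlap, using union-find:

```python
def common_coarsening(*partitions):
    """Finest common coarsening of several partitions of the same set,
    built by union-find: overlapping blocks get merged."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for partition in partitions:
        for block in partition:
            block = list(block)
            for x in block:
                parent.setdefault(x, x)
            for x in block[1:]:
                parent[find(block[0])] = find(x)  # merge within a block

    classes = {}
    for x in parent:
        classes.setdefault(find(x), set()).add(x)
    return {frozenset(b) for b in classes.values()}

alice = [{1, 2}, {3}, {4}]
bob = [{1}, {2, 3}, {4}]
# States 1, 2, 3 get merged via the chain {1,2} and {2,3}; 4 stays alone.
print(common_coarsening(alice, bob))
```

Intuitively: Alice cannot distinguish 1 from 2, Bob cannot distinguish 2 from 3, so nothing finer than the block {1, 2, 3} can be common knowledge between them.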

"hets 'out of the closet'" - I love it!


I only barely understand this stuff (possibly I understand a bit at an intuitive level, but the mathematical details leave me in the dust - I know basics but can't do the proofs), but one of the most interesting papers I ever read was by G.-C. Rota (co-author) in PNAS 1993, 'Stochastic interpretation of the Riemann zeta function'. There are stochastic interpretations of Ricci flow too, among others. 'Randomness vs determinism'.


By the way, I see the following from [Ellerman (2016) "Brain Functors: A mathematical model of intentional perception and action"](http://www.edusoft.ro/brain/index.php/brain/article/view/567):

> When a morphism is between objects of the same category, it is called a homomorphism or hom, and when between objects of different categories it is a heteromorphism or het.

which seems to suggest that the term "homomorphism" is due to the fact that the source and target objects of the relevant morphism are *in the same category*. While that could be the case (considering the prefix *homo-* means "same"), I also vaguely remember that the term "homomorphism" had already existed before category theory was invented, and that in e.g. "group homomorphism" the prefix *homo-* merely refers to the fact that the source and target objects in question *have the same (or rather ["similar"](https://en.wikipedia.org/wiki/Homomorphism)) structure*. But apparently the term "heteromorphism" only naturally pairs up and contrasts with "homomorphism" in its first sense (i.e. "the same category"), not in its second sense (i.e. "structure-preserving")... :-?

Yes, Ellerman is cleverly imputing a new meaning to the term "homomorphism".
