
Preorder of mathematical structures

I've been learning about all these crazy structures, and I'm trying to understand not just each one, but how each ties and connects to all the others I know. Inspired by the level shifting at the end of chapter one, I wondered whether it's possible to define a preorder whose objects are arbitrary algebraic structures and where \( \leq \) is inclusion. So if a structure B is structure A plus some extra properties, then \( A \leq B \) ("A is less defined than B").
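
As a toy illustration of the idea, here is a minimal sketch (my own, not anything from the book: the structure names, axiom labels, and the `leq` helper are illustrative assumptions). It identifies each structure with the set of axioms it satisfies, so that \( A \leq B \) exactly when B's axiom set contains A's:

```python
# A naive model: identify each structure with the set of axioms it satisfies.
# Then "A is less defined than B" is just axiom-set containment.
STRUCTURES = {
    "set":                set(),
    "magma":              {"binary op"},
    "semigroup":          {"binary op", "associativity"},
    "monoid":             {"binary op", "associativity", "unit"},
    "commutative monoid": {"binary op", "associativity", "unit", "commutativity"},
    "group":              {"binary op", "associativity", "unit", "inverses"},
}

def leq(a: str, b: str) -> bool:
    """A <= B iff every axiom of A also holds in B."""
    return STRUCTURES[a] <= STRUCTURES[b]

print(leq("monoid", "group"))              # True: a group is a monoid with inverses
print(leq("commutative monoid", "group"))  # False: groups need not be commutative
```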

I drew a part of the Hasse diagram of such a preorder:

It mainly consists of things discussed in the first two and a half chapters (the things I know so far). First of all, does this look correct? I'm asking because it closely corresponds to my internal view of the topic: as I learn what these abstract structures are, I'm internally arranging them in a preorder! Only after many years did I realize that this thing I've been arranging in my head corresponds to a preorder. I guess this is one of the wonders of CT: it allows me to communicate more clearly what I have in my head.

Now, some of this structure is captured linguistically, by adding various prefixes to a word. But I feel that language doesn't properly capture all the intricacies, as in the example of the unital commutative quantale. Just knowing the word "quantale" doesn't tell me how it's connected to a preorder.

But seeing it as part of a larger diagram immediately tells me many things! At first glance, I can tell that it's a symmetric monoidal closed preorder with some extra structure. In some cases I can even fill in the blanks with minimal effort, just by noticing that part of a square is missing. I think some famous mathematician (Tao, perhaps?) said that most of his papers were just completing the hypercube in this sort of way, although I can't find the reference.


So my question is: is this sort of diagrammatic reasoning useful? Does it scale as I add more objects? I assume it would go the way Fredrick commented on my question: perhaps it'd become unmanageable, with each structure factorized by all the axioms that apply to it. Googling indeed yields many results that are very cluttered!

But then again, all these pictures present a sort of flat, unnested hierarchy. Could smart nesting somehow alleviate the problem? What if we tried to define our structures in such a way that the resulting diagram becomes nested and self-referential? And in trying to answer that question, aren't we doing exactly what category theory does? As far as I understand, with all the self-referential objects in CT (the category of categories, the preorder of preorder structures...), one of the things we're trying to do is find the 'best' way to define things, so that there are no leaky abstractions and theorems follow naturally.

To illustrate what I mean: I realized I had some redundancy in my previous graph, and this is the improved version:

This notation closely follows Seven Sketches: the insides of the boxes represent objects of certain categories, and dotted arrows represent functors, or in this case, monoidal maps. But those categories themselves can be arranged in a preorder! So this shows that we have several preorders (one of which contains the object preorder itself) between which there is a preorder structure. We can say that \( \mathcal{F} \leq \mathcal{M} \) (the entire \( \mathcal{F} \) is less defined than the entire \( \mathcal{M} \)). Now, I'm aware that this diagram can be further improved, but I think I've managed to get the point across.

I'd love to be able to see more of this diagram; to have a 'world map' of sorts where I can locate myself and observe both the nearby landscape and the distant mountain ranges.

Is this sort of approach feasible?

Comments

  • 1.

    Bruno: You may be interested in this blog post which also mentions "hypercube completion": http://www.inference.vc/my-thoughts-on-alchemy/

  • 2.

I found, amongst many others, an interesting idea in Eugenia Cheng's [video](https://www.youtube.com/watch?v=ho7oagHeqNc) on Categories in Life. You'll have to watch it to get the idea, but the best I can say is that she describes gender and racial prejudice as a cube diagram, then pulls down one face of the diagram to represent relative magnitudes of prejudice, which adds another dimension to the model.

  • 3.

In terms of [(n,r)-categories](https://ncatlab.org/nlab/show/%28n%2Cr%29-category), a preorder can be viewed as a [(0,1)-category](https://ncatlab.org/nlab/show/%280%2C1%29-category), while a category can be viewed as a (1,1)-category.

  • 4.
    edited June 23

    As another example:

    ![lattice-of-lattices](http://i68.tinypic.com/ngtz13.jpg)

    This is a lattice of lattices from section 6.3 of the book on Formal Concept Analysis by Wille and Ganter (ISBN 978-3-642-59830-2).

    That is an example of a concept lattice. Another example is [this classification](https://en.wikipedia.org/wiki/Magma_(algebra)#Classification_by_properties) on Wikipedia. From the "Group-like structures" table there's a general procedure that produces the Hasse diagram of the corresponding lattice, giving the same representation of inclusions as you depict. The relation there is the predication of a fundamental property of a class of structures (e.g. "groups require invertibility"). It's nice how the diagram emerges as if by magic from the table and, as you nicely say, reflects what happens in our heads when we organize that flood of information.
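
    To make that "general procedure" concrete, here is a minimal Python sketch of one way it could go (the abridged table and its property labels are my own illustrative choices, in the spirit of the Wikipedia article, not copied from it):

    ```python
    # Abridged property table: structure -> fundamental properties it requires.
    TABLE = {
        "magma":      set(),
        "semigroup":  {"associative"},
        "quasigroup": {"invertible"},
        "monoid":     {"associative", "identity"},
        "loop":       {"invertible", "identity"},
        "group":      {"associative", "identity", "invertible"},
    }

    # Inclusion order: A <= B iff B requires everything A requires.
    # The Hasse diagram keeps only the covering edges (transitive reduction).
    hasse = [(a, b) for a in TABLE for b in TABLE
             if TABLE[a] < TABLE[b]
             and not any(TABLE[a] < TABLE[c] < TABLE[b] for c in TABLE)]

    for a, b in sorted(hasse):
        print(f"{a} -> {b}")   # e.g. "monoid -> group", but not "semigroup -> group"
    ```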

    These ideas cry out for a categorical treatment. A formal context determines a Galois connection, hence an adjunction, and Prof. Willerton has written in the [nCafe](https://golem.ph.utexas.edu/category/2013/09/formal_concept_analysis.html#more):

    > Several of you will have realised ... [that] the concept lattice is the centre of the adjunction
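
    For a feel of what that Galois connection does, here is a minimal sketch of the two derivation operators of a formal context, reusing the abridged table above as the object-attribute incidence (again, the names `intent` and `extent` and the data are illustrative assumptions):

    ```python
    # A tiny formal context: objects (structures), attributes (properties),
    # and the incidence "structure o has property a".
    CONTEXT = {
        "magma":     set(),
        "semigroup": {"associative"},
        "monoid":    {"associative", "identity"},
        "group":     {"associative", "identity", "invertible"},
    }
    ALL_ATTRS = {"associative", "identity", "invertible"}

    def intent(objs):
        """Attributes shared by every object in objs (one half of the Galois connection)."""
        return set.intersection(*(CONTEXT[o] for o in objs)) if objs else set(ALL_ATTRS)

    def extent(attrs):
        """Objects possessing every attribute in attrs (the other half)."""
        return {o for o, props in CONTEXT.items() if attrs <= props}

    # A formal concept is a fixed point of the round trip: a pair (extent, intent).
    e = extent({"identity"})
    print(e, intent(e))  # {'monoid', 'group'} {'associative', 'identity'}
    ```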

  • 5.
    edited June 24

    @Valter, that's exactly where I saw the hypercube completion quote! What he drew in that post is basically a 3D commutative diagram; it goes a long way toward showing that people use the categorical way of thinking without realizing it. Maybe better said: people use whatever is intuitive, and category theorists just give a super special name to those things that are consistent.

    @Jim, I've seen the video as well. That's an interesting explanation!

    @Keith, thanks for the link. That seems like it might be an interesting direction to explore.

    @Jesus, the group classification on wikipedia is a thing I have bookmarked and often revisit. Here's [one more](https://en.wikipedia.org/wiki/Lattice_(order)) I just found while writing this comment (you have to click 'Show' next to 'Binary relations'). And that nCafe link is fascinating! It's a kind of generalized "putting buckets into balls" Example 1.103 from Seven Sketches. The concept of a concept seems to be related to what I'm talking about. I realized my preorder from above is also a lattice: a preorder that has all meets and joins. In other words, there's a most abstract, least defined concept, and there's a most concrete, specific one.
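
    In the naive axiom-set model sketched earlier, this works out very concretely: meets and joins are just intersection and union of axiom sets. A minimal sketch, under the assumption that every set of axioms names some structure (the labels are mine):

    ```python
    # In the axiom-set model, A <= B iff axioms(A) is a subset of axioms(B), so
    # meet = shared axioms (greatest structure below both) and
    # join = combined axioms (least structure above both).
    monoid            = {"binary op", "associativity", "unit"}
    commutative_magma = {"binary op", "commutativity"}

    print(monoid & commutative_magma)  # meet: {'binary op'}, i.e. a magma
    print(monoid | commutative_magma)  # join: all four axioms, i.e. a commutative monoid

    bottom = set()  # the least defined concept (a bare set), below everything
    # top = the union of all axioms: the most concrete, specific concept
    ```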


    > that reflects what happens in our heads when we organize that flood of information.

    This is what I'm most curious about! There's a [recent paper](https://arxiv.org/abs/1806.01261) from an AI research lab that has this quote:

    ![](https://image.ibb.co/k4r0fT/Screenshot_20180624_001014.png)

    What especially strikes me is this sentence:

    > ...the fact that it is a working physical model which works in the same way as the process it parallels...

    __This is exactly what seems to be happening.__ There is some structure to these mathematical concepts and, in this narrow domain, my physical model of them (my brain) seems to "work in the same way" as they do: learning what they are seems to mirror them.

    Now, it might be kind of 'obvious' that the most natural way to understand something is to fully mirror the process you're trying to understand (see also the [good regulator theorem](https://en.wikipedia.org/wiki/Good_regulator)). What is _not_ obvious is how the process of _becoming_ a mirror of the process, the process of learning, arises! There's a lot of fuss about this these days, about how to automate this process with machines, but much of it seems to be hacks thrown together. I find categorical thinking extremely potent for this sort of question. I'd even go out on a limb and state that the learning process itself might be describable with CT.

    P.S. It's amazing how that AI paper talks so much about compositionality without ever mentioning CT. I feel like CT should be called compositionality theory. At least the acronym would stay the same :)

  • 6.
    edited July 7

    Hi Bruno, sorry for the delay. That table of binary relations exemplifies exactly what I had in mind. I didn't know your cybernetic theorem, but I shared your inklings about the role of control theory in this area. You mention several excellent motivational drivers for studying the cognition-action chain. But machine learning insights are also useful: they are an entrance to the formerly unattended lower-level processes of perception. For instance, in arXiv:1609.05518 there is a nice toy model doing a simplistic symbol grounding task with neural means. To me this points to a more realistic attitude than the surprisingly successful distributional experiments that serve as the starting point of one of the attacks on the semantics problem by categorical means (DisCoCat). The puzzle is huge.
