@Valter, that's exactly where I saw the hypercube completion quote! What he drew in that post is basically a 3D commutative diagram - it goes a long way toward showing that people use a categorical way of thinking without realizing it. Maybe better said: people use whatever is intuitive, and category theorists just give a super special name to those things that turn out to be consistent.
@Jim, I've seen the video as well. That's an interesting explanation!
@Keith, thanks for the link. That seems like it might be an interesting direction to explore.
@Jesus, the group classification on wikipedia is a thing I have bookmarked and often revisit. Here's [one more](https://en.wikipedia.org/wiki/Lattice_(order)) I just found while writing this comment (you have to click on 'Show' near 'Binary relations').
And that nCafé link is fascinating! It's kind of a generalized version of the "putting buckets into balls" Example 1.103 from Seven Sketches.
The concept of a concept seems to be related to what I'm talking about. I realized my preorder from above is also a lattice - a preorder in which every pair of elements has a meet and a join. In particular, there's a most abstract, least-defined concept and a most concrete, most specific one.
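A minimal sketch of that idea, assuming we model concepts as sets of features (the feature names and example concepts below are my own hypothetical illustration, not from the discussion above): ordering feature sets by inclusion gives a lattice where meet is intersection (the features two concepts share - their common generalization) and join is union (their combined specification), with the empty set as the most abstract concept and the full feature set as the most concrete one.

```python
# Hypothetical illustration: concepts as frozensets of features, ordered by
# inclusion. More features = more specific concept. This order is a lattice:
# every pair has a meet and a join, and it is bounded below by frozenset()
# (the most abstract, least-defined concept) and above by the full feature
# set (the most concrete, most specific one).

def meet(a, b):
    """Greatest common generalization: the features both concepts share."""
    return a & b

def join(a, b):
    """Least common specialization: all features of either concept."""
    return a | b

# Example concepts (assumed for illustration):
monoid = frozenset({"associative", "has_identity"})
group = frozenset({"associative", "has_identity", "has_inverses"})
inverses_only = frozenset({"has_inverses"})

print(meet(group, inverses_only))          # frozenset({'has_inverses'})
print(join(monoid, inverses_only) == group)  # True
print(meet(monoid, group) == monoid)       # True: monoid is more abstract
```

Note that `meet(monoid, group) == monoid` is just the lattice way of saying that a monoid sits below a group in the order - exactly the "more abstract vs. more concrete" reading above.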
> that reflects what happens in our heads when we organize that flood of information.
This is what I'm most curious about! There's a [recent paper](https://arxiv.org/abs/1806.01261) from an AI research lab that touches on this. What especially strikes me is this sentence from it:
> ...the fact that it is a working physical model which works in the same way as the process it parallels...
__This is exactly what seems to be happening.__ There is some structure to these mathematical concepts and, in this narrow domain, my physical model of them (my brain) seems to "work in the same way" as they do - learning what they are seems to mirror them.
Now, it might be kind of 'obvious' that the most natural way to understand something is to fully mirror the process you're trying to understand (see also the [good regulator theorem](https://en.wikipedia.org/wiki/Good_regulator)). What is _not_ obvious is how the _process of becoming_ a mirror of that process - the process of learning - arises! There seems to be a lot of fuss about this these days: about how to automate it with machines, but much of that seems to be hacks thrown together. I find categorical thinking extremely potent for this sort of question.
I'd even go out on a limb and state that the learning process itself might be describable with CT.
P.S. It's amazing how much that AI paper talks about compositionality without ever mentioning CT. I feel like CT should be called compositionality theory. At least the acronym would stay the same :)