Lecture 75 - Chapter 4: The Grand Synthesis

Let's review our progress, and try to put all the pieces of this course together in a neat package.

We started by returning to a major theme of Chapter 2: enriched categories. We saw that enriched functors between these were just a special case of something more flexible: enriched profunctors. We saw some concrete applications of these, but also their important theoretical role.

Simply put: moving from functors to profunctors is completely analogous to moving from functions to matrices! Thus, introducing profunctors gives category theory some of the advantages of linear algebra.

Recall: a function between sets

$$ f \colon X \to Y $$ can be seen as a special kind of \(X \times Y\)-shaped matrix

$$ \phi \colon X \times Y \to \mathbb{R} $$ namely one where the matrix entry \(\phi(x,y) \) is \(1\) if \(y = f(x)\), and \(0\) otherwise. In short:

$$ \phi(x,y) = \delta_{f(x), y} $$ where \(\delta\) is the Kronecker delta. Composing functions then turns out to be a special case of multiplying matrices. Here I'm using \(\mathbb{R}\) because most of you have seen matrices of real numbers, but we could equally well use \(\mathbf{Bool} = \lbrace \texttt{true}, \texttt{false} \rbrace \), and get matrices of truth values, which are just relations. Matrix multiplication has the usual composition of relations as a special case!
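The function-to-matrix embedding above can be sketched in a few lines of code. This is a minimal illustration (all names are my own, not from the lecture): a function \(f \colon X \to Y\) becomes the 0/1 matrix \(\delta_{f(x),y}\), and composing functions becomes matrix multiplication.

```python
def companion_matrix(f, X, Y):
    """The matrix of Kronecker deltas: entry (x, y) is 1 if y == f(x), else 0."""
    return {(x, y): 1 if f(x) == y else 0 for x in X for y in Y}

def matmul(phi, psi, X, Y, Z):
    """Matrix multiplication: (psi phi)(x, z) = sum over y of phi(x, y) * psi(y, z)."""
    return {(x, z): sum(phi[(x, y)] * psi[(y, z)] for y in Y)
            for x in X for z in Z}

X, Y, Z = [0, 1], ['a', 'b'], [10, 20]
f = lambda x: 'a' if x == 0 else 'b'
g = lambda y: 10 if y == 'a' else 20

phi = companion_matrix(f, X, Y)
psi = companion_matrix(g, Y, Z)

# Multiplying the two delta matrices gives exactly the delta matrix of g . f:
assert matmul(phi, psi, X, Y, Z) == companion_matrix(lambda x: g(f(x)), X, Z)
```

Replacing the numbers 0 and 1 with `False` and `True`, `*` with `and`, and `sum` with `any` turns this into composition of relations, as described above.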

Similarly, a \(\mathcal{V}\)-enriched functor

$$ F \colon \mathcal{X} \to \mathcal{Y} $$ can be seen as a special kind of \(\mathcal{V}\)-enriched profunctor

$$ \Phi \colon \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} $$ namely the 'companion' of \(F\), given by

$$ \Phi(x,y) = \mathcal{Y}(F(x), y) .$$ This is a fancier relative of the Kronecker delta! For matrices of booleans \( \delta_{f(x), y} = \texttt{true}\) iff \(f(x) = y\), but in the \(\mathbf{Bool}\)-enriched case \( \mathcal{Y}(F(x), y) = \texttt{true}\) iff \(F(x) \le y \).

The analogy is completed by this fact: the formula for composing enriched profunctors is really just matrix multiplication written with less familiar symbols:

$$ (\Psi\Phi)(x,z) = \bigvee_{y \in \mathrm{Ob}(\mathcal{Y})} \Phi(x,y) \otimes \Psi(y,z). $$ Here \(\bigvee\) plays the role of a sum and \(\otimes\) plays the role of multiplication.
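In the \(\mathbf{Bool}\)-enriched case this composition formula is especially concrete: \(\bigvee\) becomes "or over all \(y\)" and \(\otimes\) becomes "and". Here is a small sketch (names and the toy preorders are my own choices, just for illustration):

```python
def compose(Phi, Psi, obY):
    """Bool-enriched profunctor composition:
    (Psi Phi)(x, z) = OR over y of (Phi(x, y) AND Psi(y, z))."""
    return lambda x, z: any(Phi(x, y) and Psi(y, z) for y in obY)

# Toy example: preorders of integers, with profunctors given by <=.
Phi = lambda x, y: x <= y   # a profunctor from X to Y
Psi = lambda y, z: y <= z   # a profunctor from Y to Z
obY = range(10)             # the objects of Y

PsiPhi = compose(Phi, Psi, obY)
assert PsiPhi(2, 5)         # some y satisfies 2 <= y and y <= 5
assert not PsiPhi(5, 2)     # no y satisfies 5 <= y and y <= 2
```

Swapping `any` for `sum` and `and` for `*` recovers ordinary matrix multiplication, which is the whole point of the analogy.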

To clarify this analogy, we studied the category \(\mathbf{Prof}_\mathcal{V}\) with

  • \(\mathcal{V}\)-enriched categories as objects

and

  • \(\mathcal{V}\)-enriched profunctors as morphisms.

We saw that it was a compact closed category. This means that you can work with morphisms in this category using string diagrams, and you can bend the strings around using caps and cups. In short, \(\mathcal{V}\)-enriched profunctors are like circuits made of components connected by flexible pieces of wire, which we can stick together to form larger circuits.

And while you may not have learned it in your linear algebra class, this 'flexibility' is exactly one of the advantages of linear algebra! For any field \(k\) (for example the real numbers \(\mathbb{R}\)) there is a category \(\mathbf{FinVect}_k\) with

  • finite-dimensional vector spaces over \(k\) as objects

and

  • linear maps as morphisms.

This category is actually equivalent to the category with finite sets as objects and \(k\)-valued matrices as morphisms, where we compose matrices by matrix multiplication. And like \(\mathbf{Prof}_\mathcal{V}\), the category \(\mathbf{FinVect}_k\) is compact closed, as I mentioned last time. So, while a function between sets has a rigidly defined 'input' and 'output' (i.e. domain and codomain), a linear map between finite-dimensional vector spaces can be 'bent' or 'turned around' in various ways - as you may have first seen when you learned about the transpose of a matrix.

There's one other piece of this story whose full significance I haven't quite explained yet.

We've seen pairs of adjoint functors, and we've seen 'duals' in compact closed categories. In fact they're closely related! There is a general concept of adjunction that has both of these as special cases! And adjunctions give something all you functional programmers have probably been wishing I'd talk about all along: monads. So I'll try to explain this next time.

To read other lectures go here.

Comments

  • 1.

    Very interesting..thanks

  • 2.
    edited September 14

    John wrote:

This category is actually equivalent to the category with finite sets as objects and \(k\)-valued matrices as morphisms

    How do k-valued matrices act on finite sets?

  • 3.

    Coincidentally I came across some very useful notes from John on "groupoidification" the other day which spells out this profunctors = matrices thing in more detail. Unfortunately they are sitting on the math.ucr.edu server, which is still kaput, but the URL is here should it ever come back to life: http://www.math.ucr.edu/home/baez/groupoidification/

  • 4.

    Unfortunately they are sitting on the math.ucr.edu server, which is still kaput, but the URL is here should it ever come back to life: http://www.math.ucr.edu/home/baez/groupoidification/

    One can use the Wayback Machine to access an archive of the URL: https://web.archive.org/web/20180519002806/http://www.math.ucr.edu/home/baez/groupoidification/

  • 5.

    We've seen pairs of adjoint functors, and we've seen 'duals' in compact closed categories. In fact they're closely related! There is a general concept of adjunction that has both of these as special cases!

    Interest stirred up!

  • 6.

Okay, one thing I keep noticing is that in these "richer" "spaces" (especially in CS applications) it's often much more natural to define division than subtraction. So I guess I'm asking: is there a theory of "division rigs", or "positive fields"?

  • 7.

    "(Exercise: check that the cardinality of the groupoid of finite sets is e = 2.718281828... If you get stuck, read "week147".)"

    Ah! That's why the species of bags (aka finite multisets) is the Taylor series of \(e^x\).

  • 8.
    edited September 23

    Igor wrote:

    How do \(k\)-valued matrices act on finite sets?

    They don't, nor need they. In general, morphisms don't act on objects. You just need to know what a morphism from one object to another is, and how to compose them.

    To a modern mathematician, a matrix \(T\) is a "rectangle of numbers" where the numbers are chosen from some field \(k\). The rows of this matrix can be indexed by any finite set \(X\), and the columns can be indexed by any finite set \(Y\). (That's the modern part.) So, for any \(x \in X\) and any \(y \in Y\) we get an element of \(k\). In short, a matrix is a function

    $$ T \colon X \times Y \to k $$ You can write the matrix entries as \(T(x,y)\) if you like, but let's write them as \(T_{xy}\), since people love to use subscripts when talking about matrices.

    In the category I mentioned, we think of \(T\) as a morphism from \(X\) to \(Y\). If we have another morphism \(S\) from \(Y\) to \(Z\), that is a function

    $$ S \colon Y \times Z \to k $$ we compose them as follows:

    $$ (ST)_{xz} = \sum_{y \in Y} T_{xy} S_{yz} $$ Note that this is exactly like the formula for composing profunctors in my lecture!

    There are lots of arbitrary conventions running around here, like which index labels the rows and which labels the columns, and whether we think of \(T \colon X \times Y \to k\) as a morphism from \(X\) to \(Y\) or a morphism from \(Y\) to \(X\). I think the conventions I've been using for profunctors don't match the most common conventions for thinking of matrices as linear maps. But don't worry! It doesn't really matter which conventions you use.
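The "matrices indexed by arbitrary finite sets" picture described above translates directly into code. This is a rough sketch of my own (not from the comment), storing a matrix \(T \colon X \times Y \to k\) as a dictionary:

```python
def compose(T, S, X, Y, Z):
    """(ST)_{xz} = sum over y in Y of T_{xy} * S_{yz},
    matching the profunctor composition formula."""
    return {(x, z): sum(T[(x, y)] * S[(y, z)] for y in Y)
            for x in X for z in Z}

# The index sets need not be {0, ..., n-1}; any finite sets work:
X, Y, Z = {'u', 'v'}, {'p', 'q'}, {'m'}
T = {('u', 'p'): 1, ('u', 'q'): 2, ('v', 'p'): 3, ('v', 'q'): 4}
S = {('p', 'm'): 5, ('q', 'm'): 6}

ST = compose(T, S, X, Y, Z)
assert ST[('u', 'm')] == 1 * 5 + 2 * 6   # = 17
assert ST[('v', 'm')] == 3 * 5 + 4 * 6   # = 39
```

Note that no vector spaces appear anywhere: the morphisms compose perfectly well as bare rectangles of numbers, which is the point of the answer above.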

  • 9.

    Anindya: math.ucr.edu has been revived!

  • 10.

    Yay!

  • 11.
    edited September 25

    Christopher wrote:

Okay, one thing I keep noticing is that in these "richer" "spaces" (especially in CS applications) it's often much more natural to define division than subtraction.

    Yes, this is why people invented fractions long before they invented negative numbers. Negative numbers are much more abstract. (Venetian bankers wrote numbers in red to keep track of debts. Only gradually did people start thinking of these as "negative numbers", and even Descartes believed that negative numbers were just as "imaginary" as the square root of -1.)

    So I guess I am asking, is there a theory of "division rigs", or "positive fields"?

    Yes, they're called semifields - but beware, this term is also used to mean something completely different. And even sticking to the meaning we're interested in, Wikipedia seems to vacillate on whether a semifield must have an additive identity \(0\).

    (They should, according to my taste. And they do, if you read the definition in Wikipedia, and go back to the definition of semiring. But Wikipedia claims the positive real numbers form a semiring with the usual + and \(\times\). I think only the nonnegative reals should form a semifield.)

    I don't know much about semifields. Grothendieck generalized algebraic geometry from fields to commutative rings, but more recently people noticed that there's a lot of algebraic geometry that can be done perfectly well with commutative rigs! It should be interesting to see what advantages semifields have.

    For example: every vector space over a field has a basis, and any two bases have the same cardinality, called the dimension of that vector space. Is this true for semifields?

  • 12.

    That's a fascinating question.
