
Lecture 52 - The Hom-Functor

Now let's take a deeper look at adjoint functors - in our first look at these in Lecture 47, I deliberately downplayed some technicalities. But we need to understand these to truly understand adjoints, and they're actually beautiful in their own right. One of these is the so-called 'hom-functor'

$$ \mathrm{hom}: \mathcal{C}^{\mathrm{op}} \times \mathcal{C} \to \mathbf{Set} $$ that every category \(\mathcal{C}\) comes born with. Some of you have already figured out how this works. But let me explain it anyway.

Here are two important principles of category theory:

Every collection of things is eager to become the objects of a category.

and

Every map sending things of one kind to things of another kind is eager to become a functor.

But a category is more than a mere collection of objects, and a functor is more than a mere function. A category, after all, needs morphisms! And a functor must know what to do to morphisms!

True, there's a cheap way to make any set into a category: just throw in the morphisms that are absolutely required, namely the identity morphisms! This is called a 'discrete' category. There's also a cheap way to make any function into a functor: a function \(F\) between sets becomes a functor between the corresponding discrete categories, sending each identity \(1_x\) to \(1_{F(x)}\). But usually we want to do something more interesting.

Now, whenever you've got a category \(\mathcal{C}\), there's a set of morphisms from any object \(x\) to any object \(y\). This is called a hom-set, and people often write it as \(\mathrm{hom}(x,y)\). Fong and Spivak call it \(\mathcal{C}(x,y)\), and that's good because it reminds you which category \(x\) and \(y\) are objects of.

Anyway, the hom-functor is what we get when we think about hom-sets using the principles I stated above.

A category \(\mathcal{C}\) has a collection of objects \(\mathrm{Ob}(\mathcal{C})\), and given any two of these we get a set of morphisms from the first to the second. So, there's a function

$$ \mathrm{hom} : \mathrm{Ob}(\mathcal{C}) \times \mathrm{Ob}(\mathcal{C}) \to \mathrm{Ob}(\mathbf{Set}) $$ sending any pair of objects \( (c,c') \in \mathrm{Ob}(\mathcal{C}) \times \mathrm{Ob}(\mathcal{C}) \) to the set \( \mathcal{C}(c,c') \) of morphisms from \(c\) to \(c'\). Note that a guy in \(\mathrm{Ob}(\mathbf{Set})\) is just a set!

But this function looks like it wants to become a functor

$$ \mathrm{hom} : \mathcal{C} \times \mathcal{C} \to \mathbf{Set}. $$ That doesn't quite work - but if we try, we'll see what goes wrong, and how to fix it.

First of all, what's \(\mathcal{C} \times \mathcal{C}\)? That actually works fine:

Theorem. For any categories \(\mathcal{X}\) and \(\mathcal{Y}\), there is a category \(\mathcal{X} \times \mathcal{Y}\) for which:

  • An object is a pair \( (x,y) \in \mathrm{Ob}(\mathcal{X}) \times \mathrm{Ob}(\mathcal{Y}) \).

  • A morphism from \( (x,y) \) to \( (x',y') \) is a pair of morphisms \( f: x \to x'\) and \(g: y \to y'\). We write this as \( (f,g) : (x,y) \to (x',y') \).

  • We compose morphisms as follows:

$$ (f',g') \circ (f,g) = (f' \circ f, g' \circ g) .$$

  • Identity morphisms are defined as follows:

$$ 1_{(x,y)} = (1_x, 1_y) .$$ Proof. Just check associativity and the right/left unit laws. \( \qquad \blacksquare \)
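
For example, associativity in \(\mathcal{X} \times \mathcal{Y}\) reduces to associativity in each factor. Given composable morphisms \((f,g), (f',g'), (f'',g'')\),

$$ (f'',g'') \circ \big( (f',g') \circ (f,g) \big) = \big( f'' \circ (f' \circ f), \, g'' \circ (g' \circ g) \big) = \big( (f'' \circ f') \circ f, \, (g'' \circ g') \circ g \big) = \big( (f'',g'') \circ (f',g') \big) \circ (f,g) , $$ and the unit laws work the same way, one component at a time.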

The problem is that we can't define our would-be functor

$$ \mathrm{hom} : \mathcal{C} \times \mathcal{C} \to \mathbf{Set} $$ on morphisms. We know what we want it to do to objects of \(\mathcal{C} \times \mathcal{C}\):

$$ \mathrm{hom}(c,c') = \mathcal{C}(c,c') .$$ But suppose we have a morphism in \(\mathcal{C} \times \mathcal{C}\), say

$$ (f,g) : (c,c') \to (d,d') $$ This should get sent to a morphism in \(\mathbf{Set}\), that is a function, called

$$ \mathrm{hom}(f,g) : \mathrm{hom}(c,c') \to \mathrm{hom}(d,d') $$ This function should take any morphism \(h \in \mathrm{hom}(c,c')\) and give a morphism in \(\mathrm{hom}(d,d')\). Can we accomplish this with what we have? Draw a diagram of everything:

$$ \begin{matrix} & & h & & \\ & c & \rightarrow & c' &\\ f & \downarrow & & \downarrow & g\\ & d & \rightarrow & d' &\\ & & ? & & \\ \end{matrix} $$ Can we get a morphism from \(d\) to \(d'\) from this?

No!

We can compose \(h\) with \(g\) just fine. But we can't compose it with \(f\), because \(f\) is pointing the wrong way!

So, we need to turn around an arrow, and for that we need the concept of an 'opposite category'.

Theorem. For any category \(\mathcal{C}\) there is a category \(\mathcal{C}^{\text{op}}\), called the opposite of \(\mathcal{C}\), for which:

  • The objects of \(\mathcal{C}^{\text{op}}\) are the objects of \(\mathcal{C}\).

  • A morphism \(f : c \to c'\) in \(\mathcal{C}^{\text{op}}\) is a morphism \(f : c' \to c\) in \(\mathcal{C}\).

  • The composite \(g \circ f \) of morphisms \(f : c \to c'\), \(g: c' \to c''\) in \(\mathcal{C}^{\text{op}}\) is the composite \(f \circ g\) of the corresponding morphisms \(g : c'' \to c' \), \(f: c' \to c\) in \(\mathcal{C}\).

  • The identity morphism of an object \(c\) of \(\mathcal{C}^{\text{op}}\) is the same as its identity morphism in \(\mathcal{C}\).

Proof. Again, just check associativity and the left/right unit laws. These are facts that already hold in \(\mathcal{C}\); we're just turning them around backwards! \( \qquad \blacksquare\)
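
For example, if \(f : c \to c'\), \(g : c' \to c''\) and \(h : c'' \to c'''\) are morphisms in \(\mathcal{C}^{\text{op}}\), and we write \(\circ_{\text{op}}\) for composition in \(\mathcal{C}^{\text{op}}\), then associativity in \(\mathcal{C}^{\text{op}}\) unwinds to associativity in \(\mathcal{C}\):

$$ h \circ_{\text{op}} (g \circ_{\text{op}} f) = (f \circ g) \circ h = f \circ (g \circ h) = (h \circ_{\text{op}} g) \circ_{\text{op}} f . $$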

Now we can succeed in getting our hom-functor! It's really a functor

$$ \mathrm{hom} : \mathcal{C}^{\mathrm{op}} \times \mathcal{C} \to \mathbf{Set} . $$ We already know what we want it to do to objects:

$$ \mathrm{hom}(c,c') = \mathcal{C}(c,c') .$$ Now suppose we have a morphism in \(\mathcal{C}^{\text{op}} \times \mathcal{C}\), say

$$ (f,g) : (c,c') \to (d,d') . $$ Thanks to the fiendishly clever 'op', this is the same as a morphism

$$ (f,g) : (d,c') \to (c,d') $$ in \(\mathcal{C}\times \mathcal{C}\). Our hom-functor should send this to a morphism in \(\mathbf{Set}\), namely a function

$$ \mathrm{hom}(f,g) : \mathrm{hom}(c,c') \to \mathrm{hom}(d,d') $$ This function should take any morphism \(h \in \mathrm{hom}(c,c')\) and give a morphism in \(\mathrm{hom}(d,d')\). Can we get this to work now? Again, draw everything we've got:

$$ \begin{matrix} & & h & & \\ & c & \rightarrow & c' &\\ f & \uparrow & & \downarrow & g\\ & d & \rightarrow & d' &\\ & & ? & & \\ \end{matrix} $$ Can we get a morphism from \(d\) to \(d'\) from this?

Yes!

Just compose all the arrows and get

$$ g \circ h \circ f : d \to d'. $$ Now we're on the road to success. Of course we have to check that our would-be hom-functor really is a functor! But I'll let you do that:

Puzzle 161. Prove that for any category \(\mathcal{C}\) there is a functor, the hom-functor

$$ \mathrm{hom} : \mathcal{C}^{\mathrm{op}} \times \mathcal{C} \to \mathbf{Set} $$ that sends any object \( (c,c') \) of \(\mathcal{C}^{\mathrm{op}} \times \mathcal{C}\) to the set \(\mathcal{C}(c,c')\), and sends any morphism

$$ (f,g) : (c,c') \to (d,d') $$ in \(\mathcal{C}^{\text{op}} \times \mathcal{C}\) to the function

$$ \mathrm{hom}(f,g) : \mathrm{hom}(c,c') \to \mathrm{hom}(d,d') $$ that maps any \(h \in \mathrm{hom}(c,c') \) to \( g \circ h \circ f \in \mathrm{hom}(d,d')\).

You have to prove it preserves composition and identities!
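
If you'd like to experiment before writing the proof, here is a minimal sketch in Haskell (the names homMap, checkId and checkComp are made up for illustration) of the same recipe in the category whose objects are Haskell types and whose morphisms are functions, where \(\mathrm{hom}(c,c')\) is just the function type c -> c'. It only spot-checks the two laws at sample inputs; it is no substitute for the proof!

```haskell
-- A sketch of the hom-functor's action on morphisms, specialized to the
-- category of Haskell types and functions: hom(f,g) sends h to g . h . f.
homMap :: (d -> c) -> (c' -> d') -> (c -> c') -> (d -> d')
homMap f g h = g . h . f

-- Identity law, checked at one input: hom(id, id) h = h.
checkId :: Bool
checkId = homMap id id h 4 == h 4
  where h = show :: Int -> String

-- Composition law, checked at one input. In C^op x C the first
-- components compose "backwards", which is why it's f1 . f2 below.
checkComp :: Bool
checkComp =
  homMap (f1 . f2) (g2 . g1) h 4 == (homMap f2 g2 . homMap f1 g1) h 4
  where
    f1 = (+ 1)  :: Int -> Int
    f2 = (* 2)  :: Int -> Int
    h  = show   :: Int -> String
    g1 = length :: String -> Int
    g2 = negate :: Int -> Int

main :: IO ()
main = print (checkId && checkComp)  -- prints True
```

Readers who know the profunctors library may recognize homMap as dimap for plain functions.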

To read other lectures go here.

Comments

  • 1.
    edited June 28

    Puzzle 161:

    Preservation of identities follows easily from,

    \[ \mathrm{hom}(id_c , id_{c'})(f) = id_{c'} \circ f \circ id_c \],

    and the preservation of the composite \(j\circ k \circ l\) is given by, \[\begin{align} \mathrm{hom}(f,g)(j\circ k \circ l) \\ =\mathrm{hom}(f,g)\circ \mathrm{hom}(l,j)(k) \\ = g\circ j\circ k \circ l \circ f. \end{align} \]

  • 2.
    edited June 28

    Looks good, Keith... but I'm an old fuddy-duddy: I like to see everything spelled out in detail. Preservation of identities says that

    $$ \mathrm{hom}(1_{c,c'}) = 1_{\mathrm{hom}(c,c')} $$ so I'd want to see an argument leading up to this conclusion, and preservation of composition says that

    $$ \mathrm{hom}((f,g) \circ (l,j)) = \mathrm{hom}(f,g) \circ \mathrm{hom}(l,j) $$ so I'd want to see an argument leading up to this. You've got a lot of the building-blocks there!

  • 3.
    edited June 28

    Another layman question from me: in this diagram

    $$ \begin{matrix} & & h & & \\ & c & \rightarrow & c' &\\ f & \downarrow & & \downarrow & g\\ & d & \rightarrow & d' &\\ & & ? & & \\ \end{matrix} $$ why must the question mark function be equivalent to some composition of f, h, and g? Can't there simply exist an independent function in \(\mathcal{C}\) from d to d'? I don't understand how the non-composability of f, h, g necessarily blocks this possibility, or say, why the successful mapping from hom(c,c') to hom(d,d') hinges on the commutativity d \(\rightarrow\) d' = \(g \circ h \circ f\) at all. Maybe I'm misunderstanding something fundamental... :-?

  • 4.
    edited June 28

    Oh, I think I see what I did wrong.

    I made a functor from \(\mathcal{C} \to \mathbf{Set}\) instead of \(\mathcal{C}^{op} \times \mathcal{C} \to \mathbf{Set}\).

    On a side note, since \(\mathcal{C}^{op} \times \mathcal{C}\) is a category, we must also have a functor, \[ (\mathcal{C}^{op} \times \mathcal{C})^{op} \times \mathcal{C}^{op} \times \mathcal{C} \to \mathbf{Set}\\ =\mathcal{C} \times \mathcal{C}^{op} \times \mathcal{C}^{op} \times \mathcal{C} \to \mathbf{Set}. \]

    We can keep doing this construction ad infinitum.

    Also, on another side note, since \(\mathrm{hom} : \mathcal{C}^{op} \times \mathcal{C} \to \mathbf{Set}\) is a functor to \(\mathbf{Set}\), \(\mathrm{hom}\) counts as a database instance; however, it is one that comes automatically with every category.

  • 5.

    note to @John – the second diagram in the lecture is wrong, it should look like this:

    $$ \begin{matrix} & & h & & \\ & c & \rightarrow & c' &\\ f & \uparrow & & \downarrow & g\\ & d & \rightarrow & d' &\\ & & ? & & \\ \end{matrix} $$

  • 6.
    edited June 28

    Julio Song wrote:

    Another layman question from me: in this diagram

    $$ \begin{matrix} & & h & & \\ & c & \rightarrow & c' &\\ f & \downarrow & & \downarrow & g\\ & d & \rightarrow & d' &\\ & & ? & & \\ \end{matrix} $$ why must the question mark function be equivalent to some composition of f, h, and g?

    As it stands, that diagram cannot make \(? = g\circ h\circ f\), since \(f\) is pointing the wrong way.

    \[ d \overset{f}\leftarrow c \overset{h}\rightarrow c' \overset{g}\rightarrow d'\\ \not= \\ d \overset{?}\rightarrow d'. \]

    However, if we use Anindya's diagram,

    $$ \begin{matrix} & & h & & \\ & c & \rightarrow & c' &\\ f & \uparrow & & \downarrow & g\\ & d & \rightarrow & d' &\\ & & ? & & \\ \end{matrix} $$

    then it's easy to see that \(? = g\circ h\circ f\) is satisfied, which is the same as saying, \[ d \overset{f}\rightarrow c \overset{h}\rightarrow c' \overset{g}\rightarrow d'\\ = \\ d \overset{?}\rightarrow d'. \]

    which from Anindya's diagram is very easy to verify.

  • 7.

    Thing about that argument is that it shows that \(h \mapsto g\circ h\circ f\) is a possible definition for the hom functor, but it doesn't explain why it's necessary. If I understand @Julio correctly that's the nub of his question. It's all very well noting that this definition happens to work neatly, but why this definition and not some other one? I must confess I don't have a simple answer to this, and I suspect the best answer might be something like "this definition is the one that makes the Yoneda Lemma work".

    Incidentally I have a similar sense of slight puzzlement over the definition of a natural transformation. I can see how this is a neat way of defining "morphisms between functors", but is it the only way? Is there any way of deriving the definition rather than pulling it out of thin air and checking it works? This might seem kinda pedantic and trivial but I suspect that if we were to try generalising these constructions to higher dimensions, picking the "obvious" answer and checking it might not work.

  • 8.

    @Anindya Yes, that's exactly the nub of my question!

  • 9.
    edited June 28

    I'll answer your second paragraph since it helps to answer the first,

    Anindya Bhattacharyya wrote:

    Incidentally I have a similar sense of slight puzzlement over the definition of a natural transformation. I can see how this is a neat way of defining "morphisms between functors", but is it the only way? Is there any way of deriving the definition rather than pulling it out of thin air and checking it works? This might seem kinda pedantic and trivial but I suspect that if we were to try generalising these constructions to higher dimensions, picking the "obvious" answer and checking it might not work.

    Since natural transformations are maps between functors, they must preserve functorial structure.

    For every functor \(F : \mathcal{C} \to \mathcal{D}\), we have the following law,

    \[ F(f \circ g) = F(f) \circ F(g) \]

    and more specifically, we get a special case,

    \[ F(id_x \circ f \circ id_y) = F(id_x) \circ F(f) \circ F(id_y). \]

    Since this functorial structure must be preserved, what counts as a possible map \(\alpha : F \to G\) is in some sense forced on us: it takes every morphism \(f\) being mapped by \(F\) to some corresponding morphism \(f'\) being mapped by \(G\).

    Or another way to look at it: there is a category \(\mathbf{Cat}\) that has categories as objects and functors as morphisms. Now if we ask what the maps between functors should be in this situation, we get the notion of a natural transformation.

    Thing about that argument is that it shows that \(h \mapsto g\circ h\circ f\) is a possible definition for the hom functor, but it doesn't explain why it's necessary. If I understand @Julio correctly that's the nub of his question. It's all very well noting that this definition happens to work neatly, but why this definition and not some other one? I must confess I don't have a simple answer to this, and I suspect the best answer might be something like "this definition is the one that makes the Yoneda Lemma work".

    When I thought about the puzzle, my first try was to compose horizontally, but the functorial laws won't work unless we have \(g\) as an identity,

    $$ \begin{matrix} & & h & & \\ & c & \rightarrow & c' &\\ f & \uparrow & & \downarrow & g\\ & d & \rightarrow & d' &\\ & & ? & & \\ \end{matrix} , \begin{matrix} & & j & & \\ & c' & \rightarrow & c'' &\\ g^{op} & \uparrow & & \downarrow & k\\ & d' & \rightarrow & d'' &\\ & & ?' & & \\ \end{matrix} $$ however, composing vertically by substituting \(h=?'\) works perfectly fine,

    \[ \begin{matrix} & & k & & \\ & b & \rightarrow & b' &\\ l & \uparrow & h = ?' & \downarrow & j\\ & c & \rightarrow & c' &\\ f & \uparrow & & \downarrow & g\\ & d & \rightarrow & d' &\\ & & ? & & \\ \end{matrix} \]

    since \(\mathrm{hom}\) is a functor on a pair where the first entry is flipped, functoriality of composition therefore is,

    \[ \mathrm{hom}(l\circ f , g \circ j)(k) \\ = \mathrm{hom}(f,g)\circ\mathrm{hom}(l,j)(k) \]

    and functorial identity is,

    \[ \mathrm{hom}(id_x\circ f \circ id_y , id_{y'} \circ g \circ id_{x'})(h) \\ = \mathrm{hom}(id_{x'},id_y) \circ \mathrm{hom}(f,g)\circ\mathrm{hom}(id_x,id_{y'})(h) \]

  • 10.

    My CS intuition would say: Anything else that wasn't trivial would require information we don't have. There aren't really other choices if we want the functor to use the info we have and no more.

  • 11.
    edited June 28

    This functor is a bit weird since the following holds where we can 'roll' everything to one side,

    \[ \begin{align} \mathrm{hom}(l\circ f , g \circ j)(k) \\ = \mathrm{hom}(l\circ f,g)\circ\mathrm{hom}(k,j)(id_{b'}) \\ = \mathrm{hom}(l\circ f,id_{d'})\circ\mathrm{hom}(k,g)(j) \\ = \mathrm{hom}(l\circ f,id_{d'})\circ\mathrm{hom}(j\circ k,id_{d'})(g) \\ = \mathrm{hom}(k \circ l\circ f,id_{d'})\circ\mathrm{hom}(g\circ j,id_{d'})(id_{d'}) \\ = \mathrm{hom}(g\circ j\circ k \circ l\circ f ,id_{d'})(id_{d'}), \end{align} \]

    and likewise to 'roll' everything in the other direction.

  • 12.

    That is an interesting property. I wonder if other functors on \(\mathcal{C}^{op} \times \mathcal{C}\) have that property.

  • 13.
    edited June 29

    Puzzle 161

    ![homfunctor preservation rules](http://aether.co.kr/images/homfunctor_preservation_example.svg)

    I apologize for taking the liberty of renaming objects and morphisms as shown in the diagram above; it was easier for me to work with. I have also left out all diagonal morphisms, and all identity morphisms except the one shown, for simplicity in proving the preservation rules.

    Unit Preservation:

    So we start with identities \(1_a:a \rightarrow a\) and \(1_b:b \rightarrow b\) and hope that when we take the hom-functor \(C(1_a , 1_b)\), it is the identity on the hom-set, \(1_{C(a,b)}\). First take the product \((1_a, 1_b) = (a \rightarrow a, b \rightarrow b)\); then, applying the hom-functor, we get the morphism \(C(1_a , 1_b) : C(a,b) \rightarrow C(a,b)=1_{C(a,b)}\), which is so trivial there is little detail to show.

    Composition Preservation:

    We need to show \(C(f,i) \circ C(g,h)= C(g \circ f, i \circ h)\). The left hand side is the composition shown in the diagram on the right which takes the object \(C(a,b) \rightarrow C(a',b) \rightarrow C(a'',b'')\). On the right side, we get \(C(g \circ f, i \circ h) = C(a'' \rightarrow a' \rightarrow a, b \rightarrow b' \rightarrow b'') = C(a'' \rightarrow a, b \rightarrow b'')\) which is just the morphism \(C(a,b) \rightarrow C(a'',b'')\) as you can see by the composition \(i \circ h \circ C(a,b) \circ g \circ f:a'' \rightarrow b''\).

    For newbies like me: while doing this puzzle I found the following helpful when translating from diagrams to equations.

    ![homfunctor equation](http://aether.co.kr/images/homfunctor_equation.svg)

  • 14.
    edited June 29

    Elaborating a bit more on Keith's answer.

    Puzzle 161

    1) Preservation of composition:

    Suppose \(h\in\mathcal{C}(c, c')\) and \((f,g)\) is a morphism from \((c, c')\) to \((d, d')\) and \((l, j)\) is a morphism from \((d, d')\) to \((e, e')\). Then we have

    $$ \begin{array}{ccc} \mathrm{hom}\big((l, j)\circ(f, g)\big) h &=&\mathrm{hom}\big((l\circ_{op} f, j\circ g)\big)h\\ &:=&(j\circ g)\circ h \circ (l\circ_{op} f)\\ &=&(j\circ g)\circ h \circ (f\circ l)\\ &=&j\circ (g\circ h \circ f)\circ l\\ &:=&\mathrm{hom}\big((l, j)\big) (g\circ h \circ f)\\ &:=&\mathrm{hom}\big((l, j)\big) \circ \mathrm{hom}\big((f, g)\big) h\\ \end{array} $$ This shows that \(\mathrm{hom}\big((l, j)\circ(f, g)\big)=\mathrm{hom}\big((l, j)\big) \circ \mathrm{hom}\big((f, g)\big)\).

    2) Preservation of identities:

    Suppose \(h\in\mathcal{C}(c, c')\) and \(1_{c, c'}=(\mathrm{id}_c, \mathrm{id}_{c'})\), then $$ \begin{array}{ccc} \mathrm{hom}(1_{c, c'}) h &:=&\mathrm{id}_{c'}\circ h\circ\mathrm{id}_{c}\\ &=&h \end{array} $$ Hence \(\mathrm{hom}(1_{c, c'})\) is the identity map on the set \(\mathcal{C}(c, c')\), i.e. \(\mathrm{hom}(1_{c, c'})=1_{\mathcal{C}(c, c')}\).

  • 15.
    edited June 29

    Julio wrote:

    $$ \begin{matrix} & & h & & \\ & c & \rightarrow & c' &\\ f & \downarrow & & \downarrow & g\\ & d & \rightarrow & d' &\\ & & ? & & \\ \end{matrix} $$ why must the question mark function be equivalent to some composition of \(f, h,\) and \(g\)? Can't there simply exist an independent morphism in \(\mathcal{C}\) from \(d\) to \(d'\)?

    Good question. But consider the example where the only objects and morphisms in \(\mathcal{C}\) are those shown in the picture - and composites of what's shown, and identity morphisms. There's no reason there should be anything else! Then you're stuck.

    This is a great example of a general principle in category theory: you can't make an omelette if you don't have eggs. You can only cook with the ingredients you have.

    You can try to wriggle out of this in various ways, and it would be educational to try.

    Stephen explained it another way:

    My CS intuition would say: Anything else that wasn't trivial would require information we don't have. There aren't really other choices if we want the functor to use the info we have and no more.

    To fully get this intuition, I think one has to fight against it for a while and see all the bad things that happen.

  • 16.
    edited June 29

    Anindya wrote:

    Thing about that argument is that it shows that \(h \mapsto g\circ h\circ f\) is a possible definition for the hom functor, but it doesn't explain why it's necessary.

    I know you know this, but I'm using your nicely phrased question as a way to tell Julio:

    Once you've chosen your definitions, you can prove theorems: the theorems say that certain consequences follow necessarily from the definitions. But the definitions are freely chosen.

    There's no such thing as a 'necessary' definition. There are only better and worse definitions, and what counts as better is a matter of experience - and even taste to some extent. The main way to see if a definition is good, is to try to use it to prove theorems.

    If I understand @Julio correctly that's the nub of his question. It's all very well noting that this definition happens to work neatly, but why this definition and not some other one?

    In this situation the usual response is to ask the questioner to suggest another definition. Often they can't find an alternative, or the only alternatives are unsatisfactory in some way. Then it becomes obvious why the usual definition was chosen. Sometimes there are good alternatives, and then things get really interesting.

  • 17.

    Cheuk Man Hwang wrote:

    1) Preservation of composition:

    Suppose \(h\in\mathcal{C}(c, c')\) and \((f,g)\) is a morphism from \((c, c')\) to \((d, d')\) and \((l, j)\) is a morphism from \((d, d')\) to \((e, e')\). Then we have

    $$ \begin{array}{ccc} \mathrm{hom}\big((l, j)\circ(f, g)\big) h &=&\mathrm{hom}\big((l\circ_{op} f, j\circ g)\big)h\\ &:=&(j\circ g)\circ h \circ (l\circ_{op} f)\\ &=&(j\circ g)\circ h \circ (f\circ l)\\ &=&j\circ (g\circ h \circ f)\circ l\\ &:=&\mathrm{hom}\big((l, j)\big) (g\circ h \circ f)\\ &:=&\mathrm{hom}\big((l, j)\big) \circ \mathrm{hom}\big((f, g)\big) h\\ \end{array} $$ This shows that \(\mathrm{hom}\big((l, j)\circ(f, g)\big)=\mathrm{hom}\big((l, j)\big) \circ \mathrm{hom}\big((f, g)\big)\).

    Great! There's something nice about not skipping any steps and seeing how all the rules get used.

  • 18.

    Thinking about this string diagrammatically, if I understand correctly, \(\mathrm{hom}\) (if you'll pardon me using a drawing) looks something like:

    ![string diagram of the hom-functor](https://imgur.com/RXtfbSP.png)
  • 19.

    Now that I think about it, the \(\mathrm{hom}\) functor reminds me a lot of a double-ended queue.

  • 20.
    edited June 29

    Keith's string diagram #18 is epiphanic! And huge thanks to @John (#15 #16) and @Chritopher (#10) for the methodological clarifications! :-bd Now that I (think I) have a better understanding, I'll write down some tips in case other beginners might find them useful.

    First, \(\mathrm{hom}(c,c')\) and \(\mathrm{hom}(d,d')\) are objects in \(\mathbf{Set}\) which should be diagrammatically dots, so the squares in this lecture are not diagrams in \(\mathbf{Set}\). They are not diagrams in \(\mathcal{C} \times \mathcal{C}\) or \(\mathcal{C}^{op} \times \mathcal{C}\) either, for objects in those categories should be pairs. In fact, those squares we have been using are more likely still residing in \(\mathcal{C}\) (realizing this swept away a lot of my puzzles!). That is, we are talking about \(\mathcal{C}^{op} \times \mathcal{C} \to \mathbf{Set}\) directly via diagrams in \(\mathcal{C}\) rather than by illustrating new diagrams in \(\mathcal{C}^{op} \times \mathcal{C}\) or \(\mathbf{Set}\). As such, I find it easier to conceive the hom-functor dynamically as a particular perspective (i.e. the \(\mathbf{Set}\)-perspective) to \(\mathcal{C}\) which helps us establish a certain configuration (like the one in Keith's string diagram).

    Second, since now we are in \(\mathcal{C}\) (which is any category), if we forget about the hom-functor temporarily and only think about \(\mathcal{C}\), there may well exist various independently defined arrows \(d \to d'\), for \(d\) and \(d'\) are merely two random objects after all. But once we put on the hom-functor spectacles, we are taken into a different (and more restricted) scenery, where the possibly independently existing \(d \to d'\) arrows are no longer important (or even visible), because the hom-functor – which must map/preserve morphisms – needs to establish a 100% secure input-output relation in the \(\mathbf{Set}\)-perspective between \(\mathrm{hom}(c,c')\) and \(\mathrm{hom}(d,d')\), hence @John's words in the lecture:

    This function should take any morphism \(h \in \mathrm{hom}(c,c')\) and give a morphism in \(\mathrm{hom}(d,d')\).

    Thus, the question is not whether there might be \(d \to d'\) arrows in \(\mathcal{C}\) or not (which is a valid question for its own sake but simply uninteresting in our hom-functor discourse), but more restrictively given any \(c \to c'\) arrow as input (together with the relevant morphisms \(f, g\)), whether or not we can confidently guarantee at least one such arrow as output. If we can have such a guarantee, then it means our hom-functor at hand successfully preserves morphisms and qualifies as a true functor. The obvious way to achieve this is (like everyone above has pointed out) via the composition \(d \to c\to c' \to d'\) (i.e. \(g∘h∘f\)), which in turn requires the additional \(op\)-trick on the first component of the \(\mathcal{C}\)-morphism pair \(\langle f, g\rangle\) (I find the angle bracket notation easier as otherwise I might mistake \(f, g\) for weirdly named \(\mathcal{C}\)-objects).

    What Keith's string diagram helped me realize (by completely omitting the bottom-side of the square) is precisely the point that we do not care whether or not there exist independent \(d \to d'\) arrows but merely want to determine a dependent one via manipulating \(f\), \(h\), and \(g\).

  • 21.

    Julio Song wrote:

    What Keith's string diagram helped me realize (by completely omitting the bottom-side of the square) is precisely the point that we do not care whether or not there exist independent \(d \to d'\) arrows but merely want to determine a dependent one via manipulating \(f\), \(h\), and \(g\).

    That is exactly what the \(\mathrm{hom}\) functor is doing. Also, my diagram reminds me of a stalagmite.

    In fact, you gave me an idea as to how to give a possible formal definition of \(\mathrm{hom}\),

    \[ \mathrm{hom}(f,g)(h)=\begin{cases} u := g\circ h \circ f & \text{ if } target(f)=source(h) \\ & \text{ and } target(h)=source(g)\\ & \\ \varnothing & \text{ otherwise.} \end{cases} \]
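
    In code, that case analysis might look like the following minimal sketch (hypothetical names; Maybe plays the role of \(\varnothing\)), with morphisms of a small category represented as labelled arrows carrying explicit source and target objects:

    ```haskell
    -- Morphisms of a small category as labelled arrows with explicit endpoints.
    data Arrow = Arrow { source :: String, target :: String, name :: String }
      deriving (Eq, Show)

    -- Compose g after k only when the endpoints match; Nothing is the "empty" case.
    comp :: Arrow -> Arrow -> Maybe Arrow
    comp g k
      | source g == target k = Just (Arrow (source k) (target g) (name g ++ "." ++ name k))
      | otherwise            = Nothing

    -- hom(f,g)(h) = g . h . f, defined only when the arrows line up.
    homAction :: Arrow -> Arrow -> Arrow -> Maybe Arrow
    homAction f g h = comp h f >>= comp g
    ```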

  • 22.
    edited June 29

    In comment #19 I remarked how \(\mathrm{hom}\) reminded me of a double-ended queue. Thinking about it some more, however, gave me the realization that there exist these six natural transformations,

    \[ \mathrm{AddFront}(e) := \\ \mathrm{hom}(j,k)\cdots\mathrm{hom}(f,g)(h) \mapsto \mathrm{hom}(e,id_{target(k)})\mathrm{hom}(j,k)\cdots\mathrm{hom}(f,g)(h) \]

    \[ \mathrm{AddBack}(e) := \\ \mathrm{hom}(j,k)\cdots\mathrm{hom}(f,g)(h) \mapsto \mathrm{hom}(id_{source(j)},e)\mathrm{hom}(j,k)\cdots\mathrm{hom}(f,g)(h) \]

    \[ \mathrm{DeleteFront} := \\ \mathrm{hom}(j,k)\cdots\mathrm{hom}(f,g)(h) \mapsto \mathrm{hom}(id_{target(j)},k)\cdots\mathrm{hom}(f,g)(h) \]

    \[ \mathrm{DeleteBack} := \\ \mathrm{hom}(j,k)\cdots\mathrm{hom}(f,g)(h) \mapsto \mathrm{hom}(j,id_{source(k)})\cdots\mathrm{hom}(f,g)(h) \]

    \[ \mathrm{PeekFront} := \\ \mathrm{hom}(j,k)\cdots\mathrm{hom}(f,g)(h) \mapsto j \]

    \[ \mathrm{PeekBack} := \\ \mathrm{hom}(j,k)\cdots\mathrm{hom}(f,g)(h) \mapsto k \]

    Edit: Note that \(\mathrm{hom}(j,k)\cdots\mathrm{hom}(f,g)(h)\) is short for \(j\circ \cdots \circ f \circ h \circ g \circ \cdots \circ k\).

  • 23.

    Woah. Very cool.

  • 24.
    edited June 29

    If we set the first variable of \(\mathrm{hom}\) to \(id_{source(h)}\), then I think our double-ended queue gets turned into something like a linked list,

    \[ \mathrm{hom}(id_{source(h)}, k)\cdots\mathrm{hom}(id_{source(h)}, g)(h) \\ = k \circ \cdots \circ g \circ h \circ id_{source(h)} \circ \cdots \circ id_{source(h)} \]

    where \(id_{source(h)}\) acts like the list's \(\texttt{null}\) element.

  • 25.
    edited June 29

    Julio wrote:

    Second, since now we are in \(\mathcal{C}\) (which is any category), if we forget about the hom-functor temporarily and only think about \(\mathcal{C}\), there may well exist various independently defined arrows \(d \to d'\), for \(d\) and \(d'\) are merely two random objects after all. But once we put on the hom-functor spectacles, we are taken into a different (and more restricted) scenery, where the possibly independently existing \(d \to d'\) arrows are no longer important (or even visible), because the hom-functor – which must map/preserve morphisms – needs to establish a 100% secure input-output relation in the \(\mathbf{Set}\)-perspective between \(\mathrm{hom}(c,c')\) and \(\mathrm{hom}(d,d')\), hence @John's words in the lecture:

    This function should take any morphism \(h \in \mathrm{hom}(c,c')\) and give a morphism in \(\mathrm{hom}(d,d')\).

    Thus, the question is not whether there might be \(d \to d'\) arrows in \(\mathcal{C}\) or not (which is a valid question for its own sake but simply uninteresting in our hom-functor discourse), but more restrictively given any \(c \to c'\) arrow as input (together with the relevant morphisms \(f, g\)), whether or not we can confidently guarantee at least one such arrow as output.

    Right! Well put. You've got it now.

    We are looking for a systematic recipe to build a function that takes morphisms \(h \in \mathrm{hom}(c,c')\) and gives morphisms in \(\mathrm{hom}(d,d')\). We easily get such a recipe if we know morphisms \(f: d \to c\) and \(g: c' \to d'\), and that recipe is what the hom-functor exploits. We don't get such a recipe if we only know morphisms \(f : c \to d\) and \(g: c' \to d'\). So, we need the "op" in

    $$ \text{hom} : \mathcal{C}^\text{op} \times \mathcal{C} \to \mathbf{Set} . $$ ("Systematic recipe" is vague talk for "functor"; proving that the hom-functor is really a functor, as some students have done above, imposes some constraints that are well-nigh impossible to meet if one isn't systematic.)

  • 26.

    Keith wrote:

    \[ \mathrm{hom}(f,g)(h)=\begin{cases} u := g\circ h \circ f & \text{ if } target(f)=source(h) \\ & \text{ and } target(h)=source(g)\\ & \\ \varnothing & \text{ otherwise.} \end{cases} \]

    Thanks for this. Gave me a better perspective on how the hom functor works.

    Below is a diagram showing preservation of composition highlighting your hom gadget.

    ![homfunctor preservation of composition](http://aether.co.kr/images/homfunctor_composition.svg)

  • 27.

    An additional note on the op-trick for other beginners:

    The situation in my previous comment #20 requires that the arrow direction in the first component of the product hom-functor be eventually reversed. There are two ways to do this:

    (i) via a non-reversed arrow in the domain category (\(\mathcal{C}\)) plus a contravariant hom-functor;

    (ii) via an already-reversed arrow in the domain category plus a covariant hom-functor, where the arrow reversing is done by the op-trick which changes the domain category from \(\mathcal{C}\) to \(\mathcal{C}^{op}\).

    As such, in the notation \(\mathcal{C}^{op} \times \mathcal{C} \to \mathbf{Set}\), both components of the product hom-functor are covariant. What we mean by saying "the first component is contravariant" is that its effect needs to be contravariant, not the actual functor, at least in the op-trick notation, because if we simultaneously apply the op-trick and the contravariant functor we effectively reverse the normal \(\mathcal{C}\)-arrow direction twice (which amounts to not reversing it at all)!

    In Categories for the Working Mathematician (p.34) Mac Lane describes this op-trick as "[t]he contravariant hom-functor [...] written covariantly". On that note, I find Mac Lane's book surprisingly lucid on various points that have confused me.
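
    For readers who like to see this in code, here is a minimal Haskell sketch of the contravariant-functor idea in option (i), assuming the Contravariant class from the contravariant package (it also lives in recent versions of base): the functor \(\mathrm{hom}(-,d)\) looks covariant as a type constructor, but its action on morphisms precomposes and so reverses arrows. Option (ii) instead packages the same precomposition as an ordinary functor out of \(\mathcal{C}^{op}\).

    ```haskell
    import Data.Functor.Contravariant (Contravariant (..))

    -- The "hom into d" functor: on objects it sends a type c to c -> d.
    newtype HomInto d c = HomInto { runHomInto :: c -> d }

    -- On morphisms it precomposes, reversing the arrow's direction:
    -- contramap :: (c' -> c) -> HomInto d c -> HomInto d c'
    instance Contravariant (HomInto d) where
      contramap f (HomInto h) = HomInto (h . f)
    ```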

  • 28.

    Julio: it's very good that you brought this up! If you read old books like Mac Lane's you will meet contravariant functors, so you need to know how to deal with them.

    But a lot of modern category theorists, and certainly Fong and Spivak's book and me in this course, never use contravariant functors \(F : \mathcal{X} \to \mathcal{Y} \). As a substitute, we always use ordinary functors \(F : \mathcal{X}^{\text{op}} \to \mathcal{Y}\) or \(F : \mathcal{X} \to \mathcal{Y}^{\text{op}} \).

    All three of these are just different ways of talking about the same idea. The practical advantage with the modern approach is that if you see someone write \(F : \mathcal{X} \to \mathcal{Y}\), you never need to ask "is that functor covariant or contravariant?", and perhaps riffle back 10 pages to try to find where this was stated. You know right away that it's covariant - that is, an ordinary functor.

    The conceptual advantage is that all our arrows between categories are morphisms in \(\mathrm{Cat}\) - that is, ordinary functors. It's bad, in category theory, to be unclear which category your arrows are morphisms in. When category theorists study categories, they like to work in \(\mathrm{Cat}\).

    Further, we never need to engage in mental backflips like this:

    if we simultaneously apply the op-trick and the contravariant functor we effectively reverse the normal \(\mathcal{C}\)-arrow direction twice (which amounts to not reversing it at all)!

    We can't, because we don't talk about contravariant functors.

    Of course it's fun to get confused and then deconfused - that's what math is all about. But this particular kind of mental acrobatics can be avoided, saving our energy for more exciting games.

  • 29.

    @John Yes, now that I have understood this, I totally see why modern theorists want to define only one type of functor. I just got confused by the mentions of "contravariant" here and there (especially when we were trying to understand the op-trick under Lecture 47) and the lack of a really explicit formulation (e.g. what exactly is the contravariant thing? what does "contravariant" mean when contravariant functors are already obsolete?), precisely because of what you nicely explained in #28, i.e. when we use the op-trick we should not really still keep talking about "contravariant"!

    For someone (aka me) who does not (yet) know enough category theory to properly reason about such things as notational/methodological conventions, these apparently pedantic details can easily become unnecessary hurdles, but once they are clarified, the picture suddenly becomes a lot clearer. :)

  • 30.

    Julio wrote:

    what does "contravariant" mean when contravariant functors are already obsolete?), precisely because of what you nicely explained in #28, i.e. when we use the op-trick we should not really still keep talking about "contravariant"!

    Old-fashioned terminology like contravariant still shows up in computer science.

    Here is an example from the [Scala documentation](https://docs.scala-lang.org/tour/variances.html). Scala supports subtyping, which is where the terminology comes up.

    Let's take a video game for example. In Mario Brothers, we can pretend the programmers created a class \(\mathtt{MarioEnemy}\) for modeling enemies. That class has a subtype \(\mathtt{Goomba}\), and another subtype \(\mathtt{KoopaTroopa}\), with further subtypes \(\mathtt{GreenKoopaTroopa}\) and \(\mathtt{RedKoopaTroopa}\). The subtypes form a little tree like this:

    $$ \begin{matrix} \mathtt{MarioEnemy} & \geq & \mathtt{Goomba} & & \\ & \geq & \mathtt{KoopaTroopa} & \geq & \mathtt{GreenKoopaTroopa} \\ & & & \geq & \mathtt{RedKoopaTroopa} \\ \end{matrix} $$ So as you can see, subtyping in Scala forms a preorder.

    A parameterized class like `Array[A]` or `Tuple[A,B]` is a type-level constructor that takes a type as a parameter. These are like functors, but they don't have to be functors, and often they are not.

    A covariant class \(\mathtt{F}\) in Scala means that if \(\mathtt{A} \leq \mathtt{B}\), then \(\mathtt{F}[\mathtt{A}] \leq \mathtt{F}[\mathtt{B}]\).

    For instance, a list of \(\mathtt{RedKoopaTroopa}\) is a list of \(\mathtt{KoopaTroopa}\) because \(\mathtt{List}\) is covariant. In symbols: $$\mathtt{RedKoopaTroopa} \leq \mathtt{KoopaTroopa} \Longrightarrow \mathtt{List}[\mathtt{RedKoopaTroopa}] \leq \mathtt{List}[\mathtt{KoopaTroopa}]$$

    A contravariant class \(\mathtt{G}\) is one where \(\mathtt{A} \leq \mathtt{B}\) implies \(\mathtt{G}[\mathtt{B}] \leq \mathtt{G}[\mathtt{A}]\). These are more exotic, but a popular example is what Bartosz Milewski calls \(\mathtt{Op}\) (see [his 2015 blog post](https://bartoszmilewski.com/2015/02/03/functoriality/)). It's defined like this:

    $$ \mathtt{Op}[+\mathtt{X}][-\mathtt{Y}] := \mathtt{Function}[\mathtt{Y},\mathtt{X}]$$ The class \(\mathtt{Op}[\mathtt{X}]\) is an example of a contravariant class: if \(\mathtt{A}\leq \mathtt{B}\) then \(\mathtt{Op}[\mathtt{X}][\mathtt{B}] \leq \mathtt{Op}[\mathtt{X}][\mathtt{A}]\).

    To bring this back to category theory, old-fashioned textbooks call \(\mathtt{Op}[\mathtt{X}]\) the [contravariant Yoneda functor](https://en.wikipedia.org/wiki/Yoneda_lemma#Naming_conventions).

    I don't know if this helps...
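
    In case a compilable example helps, here is a minimal Scala sketch of the same variance story (the classes `Box` and `Stomp` and the uncurried `Op` alias are my own illustrations, not anything from the Scala library or the game):

    ```scala
    // Hypothetical enemy hierarchy from the example above.
    class MarioEnemy
    class Goomba extends MarioEnemy
    class KoopaTroopa extends MarioEnemy
    class RedKoopaTroopa extends KoopaTroopa

    // Covariant in A: if A <: B then Box[A] <: Box[B].
    class Box[+A](val contents: A)

    // Contravariant in A: if A <: B then Stomp[B] <: Stomp[A].
    trait Stomp[-A] { def stomp(enemy: A): Unit }

    object VarianceDemo {
      // Covariance: a box of RedKoopaTroopas counts as a box of KoopaTroopas.
      val koopaBox: Box[KoopaTroopa] = new Box(new RedKoopaTroopa)

      // Contravariance: something that can stomp any MarioEnemy can,
      // in particular, stomp any KoopaTroopa.
      val stompAnything: Stomp[MarioEnemy] = new Stomp[MarioEnemy] {
        def stomp(enemy: MarioEnemy): Unit = println("stomped")
      }
      val stompKoopas: Stomp[KoopaTroopa] = stompAnything

      // Function1 is declared as Function1[-A, +B], so an uncurried version
      // of Op is just a function type read "backwards":
      type Op[X, Y] = Y => X
      val describeEnemy: Op[String, MarioEnemy] = _ => "an enemy" // MarioEnemy => String
      val describeKoopa: Op[String, KoopaTroopa] = describeEnemy  // ok, since Op is contravariant in Y
    }
    ```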

  • 31.

    The terminology "contravariant" is not obsolete in mathematics, it's used all over, so we all need to know it. We still say hom is contravariant in the first argument and covariant in the second, but what we mean is that it's a functor \(\text{hom} : \mathcal{C}^{\text{op}} \times \mathcal{C} \to \mathbf{Set} \), with an op in the first slot and not the second.

  • 32.

    Many thanks to @Matthew for the helpful CS info and to @John for the terminological clarification! I am no longer confused and fully happy now. :)

  • 33.

    In a nutshell, a contravariant functor is one that flips morphisms.
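
    In symbols (just restating the standard definition): a contravariant functor \(F\) sends \(f : x \to y\) to \(F(f) : F(y) \to F(x)\) and reverses the order of composition,

    $$ F(g \circ f) = F(f) \circ F(g), \qquad F(1_x) = 1_{F(x)} . $$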

  • 34.

    Yes.

  • 35.

    I should put a little warning symbol here like John does.

    <img src="http://math.ucr.edu/home/baez/mathematical/warning_sign.jpg" alt="warning" width="100px"/>

    John said

    But a lot of modern category theorists, and certainly Fong and Spivak's book and me in this course, never use contravariant functors \(F : \mathcal{X} \to \mathcal{Y} \). As a substitute, we always use ordinary functors \(F : \mathcal{X}^{\text{op}} \to \mathcal{Y}\) or \(F : \mathcal{X} \to \mathcal{Y}^{\text{op}} \).

    All three of these are just different ways of talking about the same idea.

    Whilst it is true that the collection of functors \(\mathcal{X}^{\text{op}} \to \mathcal{Y}\) is the same as the collection of functors \(\mathcal{X}\to \mathcal{Y}^{\text{op}} \), these give rise to two different categories. Remember that the functor category \(\operatorname{Fun}(\mathcal{A}, \mathcal{B})\) has functors from \(\mathcal{A}\) to \(\mathcal{B}\) as its objects and natural transformations between these functors as its morphisms. If you work it through, you will find that the morphisms in \(\operatorname{Fun}( \mathcal{X}^{\text{op}} , \mathcal{Y})\) go 'the opposite way' to those of \(\operatorname{Fun}( \mathcal{X}, \mathcal{Y}^{\text{op}} )\), so in fact

    $$ \operatorname{Fun}( \mathcal{X}^{\text{op}} , \mathcal{Y})^{\text{op}} = \operatorname{Fun}( \mathcal{X}, \mathcal{Y}^{\text{op}}). $$ So you should exercise a little care when talking of the category of contravariant functors.
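
    A quick sketch of why the directions flip (the details are a good exercise): a natural transformation \(\alpha : F \Rightarrow G\) in \(\operatorname{Fun}(\mathcal{X}^{\text{op}}, \mathcal{Y})\) has components \(\alpha_x : F(x) \to G(x)\) living in \(\mathcal{Y}\). Read in \(\mathcal{Y}^{\text{op}}\), those same morphisms point the other way:

    $$ \alpha_x : F(x) \to G(x) \text{ in } \mathcal{Y} \quad \longleftrightarrow \quad \alpha_x : G(x) \to F(x) \text{ in } \mathcal{Y}^{\text{op}} , $$

    so the family assembles into a natural transformation \(G \Rightarrow F\) in \(\operatorname{Fun}(\mathcal{X}, \mathcal{Y}^{\text{op}})\). That is why the op lands on the whole functor category.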

  • 36.

    Nice, Simon! I thought

    $$ \mathrm{Fun}^{\text{op}} = \mathrm{PainInTheButt}.$$ By the way, we've been writing \(\mathcal{B}^\mathcal{A}\) for the functor category you're calling \(\mathrm{Fun}(\mathcal{A}, \mathcal{B})\). In case anyone here doesn't remember, I discussed functor categories here:

    • [Lecture 41 - Chapter 3: Composing Natural Transformations](https://forum.azimuthproject.org/discussion/2249/lecture-45-chapter-3-composing-natural-transformations/p1)