
2 - Complex definitions / simple proofs


At least for the introductory category theory material that I have gone through, I have noticed that the hardest thing about proving something in category theory is understanding and unraveling the definitions, in order to figure out exactly what must be shown. Once you get to that point, the results may flow almost automatically, like a computation.

(But is it different when proving something more substantial than an exercise, like the Yoneda lemma? Or is that proof just a more complex, but still inevitable, computation?)

In any case, the general trend towards abstraction -- which is paramount in category theory -- definitely does involve concentrating complexity in the definitions. It looks like once the definitions are made at the most appropriate level of abstraction, results which would otherwise look hard to show follow almost automatically. If you can do the work of understanding what abstraction X means, and then showing that object Y is an X, then all the things that you know about X become automatic results that apply to Y.
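For example, here is a toy Haskell sketch of "showing that Y is an X" (the type `MaxInt` is made up purely for illustration): once we prove `MaxInt` is a `Monoid`, every function written against the `Monoid` abstraction, like `mconcat`, applies to it for free.

    -- Showing that object Y (MaxInt) is an X (Monoid):
    newtype MaxInt = MaxInt Int deriving Show

    instance Semigroup MaxInt where
      MaxInt a <> MaxInt b = MaxInt (max a b)

    instance Monoid MaxInt where
      mempty = MaxInt minBound  -- identity law holds: max minBound x == x

    -- mconcat is defined once for all Monoids, so it works here automatically:
    example :: MaxInt
    example = mconcat [MaxInt 3, MaxInt 7, MaxInt 5]  -- MaxInt 7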

Disclaimer: I haven't gotten very far with category theory -- that's why I'm here, working through it with the team -- so these general comments are more of a gut assessment than an experienced judgement.

Comments

  • 1.

    Michael Spivak wrote about this concentration of complexity in definitions in another context of mathematical abstraction: differential geometry / abstract calculus. Here is a quotation on the topic, from the preface to _Calculus on Manifolds_:

    > The reader probably suspects that the modern Stokes' Theorem is at least as difficult as the classical theorems derived from it [the classical Stokes' Theorem, Green's theorem, the Divergence Theorem]. On the contrary, it is a very simple consequence of yet another version of Stokes' Theorem; this very abstract version is the final and main result of Chapter 4. It is entirely reasonable to suppose that the difficulties so far avoided must be hidden here. Yet the proof of this theorem is, in the mathematician's sense, an utter triviality -- a straightforward computation. On the other hand, even the statement of this triviality cannot be understood without a horde of difficult definitions from Chapter 4. There are good reasons why the theorems should all be easy and the definitions hard. As the evolution of Stokes' theorem revealed, a single simple principle can masquerade as several difficult results; the proofs of many theorems involve stripping away the disguise. The definitions, on the other hand, serve a twofold purpose: they are rigorous replacements for vague notions, and machinery for elegant proofs.
    >
    > Concentrating the depth of a subject in the definitions is undeniably economical, but it is bound to produce some difficulties for the student.

    To what extent can these observations be carried over to category theory?

  • 2.

    There is a parallel with software systems, which also involve complex systems of definitions and abstractions.

    Software is interesting in that it can be given a mathematical interpretation, and an operational / physical one as well.

    But it's especially clear with functional programming that software can consist of a system of mathematical definitions.
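    For instance, a minimal sketch: this list-length function is nothing but a pair of defining equations, and reasoning about it is equational reasoning.

        -- A program that is literally a recursive mathematical definition:
        len :: [a] -> Int
        len []       = 0
        len (_ : xs) = 1 + len xs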

    (Other math, however, can get even harder, as the definitions can get more abstract, and they can refer to non-computable entities, or even non-constructible entities.)

  • 3.

    I find something like this 'Michael Spivak effect' when doing software development.

    The hardest part about getting a good system working is identifying the appropriate abstractions and interfaces -- the definitions. Once that's worked out, the actual coding of implementations can be a rather straightforward matter, which follows manifestly from the abstract requirements. And in the best of cases, such as with pure functional programming, the implementation consists of nothing more than writing down the definitions.
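    A small sketch of that last point (a generic map over lists, chosen just for illustration): once the interface -- here just a polymorphic type signature -- is fixed, the implementation essentially writes itself.

        -- The abstract requirement: apply a function to every element of a list.
        mapList :: (a -> b) -> [a] -> [b]
        -- Given that polymorphic type, this is essentially the only sensible
        -- implementation; the code follows from the definition.
        mapList _ []       = []
        mapList f (x : xs) = f x : mapList f xs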

  • 4.

    The complexity of the concrete gets mastered by analyzing it into abstractions, which are crystallized as systems of definitions.

    This applies to math in general, and to programming, which is a form of applied mathematics.


  • 5.

    > The hardest part about getting a good system working is identifying the appropriate abstractions and interfaces -- the definitions. Once that's worked out, the actual coding of implementations can be a rather straightforward matter, which follows manifestly from the abstract requirements. And in the best of cases, such as with pure functional programming, the implementation consists of nothing more than writing down the definitions.

    So... I really don't think it's this easy.

    In textbook category theory, definitions are given in terms of sets. However, in Haskell you do not have sets. The central category everyone works with in Haskell is `(->)`. In addition, discussion centers around _endofunctors_ in this category. Moving between textbook category theory and Haskell takes some getting used to.
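    For instance, here is what one of those endofunctors looks like (a sketch using a renamed copy of `Maybe`, so it doesn't clash with the Prelude):

        -- Maybe' is an endofunctor on the category of Haskell types: it sends
        -- each type x to the type Maybe' x, and fmap sends each function
        -- g :: x -> y to a function Maybe' x -> Maybe' y.
        data Maybe' a = Nothing' | Just' a deriving Show

        instance Functor Maybe' where
          fmap _ Nothing'  = Nothing'
          fmap g (Just' x) = Just' (g x)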

    Here's an example in regular category theory. Consider the _covariant_ hom-functor \(Hom(A, -) : \mathcal{C} \to \mathbf{Set}\). Let's call this functor \(h^A\) for short (following Wikipedia's convention). The functor \(h^A\) is defined by its action on objects and morphisms in \(\mathcal{C}\):

    • An object \(X\) is mapped to the set of morphisms \(Hom(A, X)\)
    • A morphism \(f : X \to Y\) is mapped to \(h^A(f) : Hom(A,X) \to Hom(A,Y)\) where \(h^A(f)(g) = f \circ g\)

    The Yoneda lemma states that \(Nat(h^A, F) \cong F A\) for any functor \(F : \mathcal{C} \to \mathbf{Set}\). That is, the set of natural transformations from \(h^A\) to \(F\) is isomorphic to \(F A\), the set that \(F\) assigns to the object \(A\).

    We can translate the Yoneda lemma into Haskell. But we have to drop all of this discussion of sets, and rethink a bit what natural transformations are. I can post a bit later on this if you are interested.


  • 6.

    Thanks for this informative comment! By all means, it would be great to hear more about it.

    Here you're making the argument that things are not so simple with Haskell.

    I still wonder, though, to what extent it is true within pure category theory. At least for the basic material I've been going through, almost all the weight of the matter has been concentrated in the definitions rather than the proofs. Does this trend continue into the more advanced realms of category theory?

  • 7.

    > Here you're making the argument that things are not so simple with Haskell.

    Well, just different I guess. I'll go into this below.

    > I still wonder, though, to what extent it is true within pure category theory. At least for the basic material I've been going through, almost all the weight of the matter has been concentrated in the definitions rather than the proofs. Does this trend continue into the more advanced realms of category theory?

    There are complicated proofs in CT. I am not sure how deep your course is going to dive.


    In Haskell, the functor \(h^A\) corresponds to `(->) a`. We call `(->) a` the _reader monad_.
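    Spelled out, its action on morphisms is exactly the post-composition \(h^A(f)(g) = f \circ g\) from above. (This instance already ships with GHC's base library; it's reproduced here only for reference.)

        -- The covariant hom-functor in Haskell: (->) a sends a type x to
        -- a -> x, and fmap is post-composition, matching h^A(f)(g) = f . g.
        instance Functor ((->) a) where
          fmap = (.)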

    Since Haskell doesn't have sets, we can't really talk about \(Nat(F,G)\). Instead, we use a type for natural transformations:

        {-# LANGUAGE RankNTypes, TypeOperators #-}

        -- A natural transformation from f to g gives one component
        -- f x -> g x for every type x, uniformly (hence the forall).
        infixr 0 ~>
        type f ~> g = forall x. f x -> g x

    Note: In conventional category theory we demand that a natural transformation \(\eta\) obey a law, appropriately called _naturality_. It asserts that for any natural transformation \(\eta : F \to G\) and any morphism \(f : X \to Y\),

    \[ \eta_Y \circ F f = G f \circ \eta_X \]

    Haskell can't enforce this, so we just assume it. Languages with more powerful type systems like Coq's and Idris' can enforce this rule.
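    As a concrete example (a standard one, not needed for anything below): `safeHead` is a natural transformation from the list functor to `Maybe`, and its naturality law can be checked by hand.

        -- safeHead :: [] ~> Maybe unfolds to forall x. [x] -> Maybe x
        safeHead :: [] ~> Maybe
        safeHead []      = Nothing
        safeHead (x : _) = Just x

        -- Naturality, checked by hand rather than by the compiler:
        --   fmap g . safeHead == safeHead . fmap g    for every g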

    You _can_ write \(\eta_X\) in Haskell as `eta @X` by turning on a language extension. Just put `{-# LANGUAGE TypeApplications #-}` at the top of your file.


    Anyhow, the Yoneda lemma \(Nat(h^A, F) \cong F A\) in Haskell would be written as

    `((->) a ~> f) ≅ f a`

    Or in other words

    `(forall b. (a -> b) -> f b) ≅ f a`

    The proof of the Yoneda lemma corresponds to writing two functions:

        liftYoneda :: Functor f => f a -> ((->) a ~> f)
        liftYoneda = ...

        lowerYoneda :: ((->) a ~> f) -> f a
        lowerYoneda = ...
    

    Then show that `lowerYoneda . liftYoneda = id` and `liftYoneda . lowerYoneda = id`.

    Feel free to give it a try, or if you prefer I can show you the answers here.
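    For reference, here is one standard implementation (spoilers, for anyone who would rather work it out first):

        -- liftYoneda pushes the supplied function under f with fmap;
        -- lowerYoneda instantiates the natural transformation at a with id.
        liftYoneda :: Functor f => f a -> ((->) a ~> f)
        liftYoneda fa = \g -> fmap g fa

        lowerYoneda :: ((->) a ~> f) -> f a
        lowerYoneda t = t id

    One round trip, `lowerYoneda . liftYoneda = id`, falls out of the functor law `fmap id = id`; the other direction needs the naturality assumption above.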

    But I maintain the chief takeaway is this: **In Haskell, types take the place of sets.** It makes things more confusing, I'm afraid :(
