Chapter 3

Comments

  • 1.

    About Definition 3.41: when the authors say "A diagram in \(\mathcal{C}\) is a functor from any category \( D\)", do they actually mean "A diagram in \(\mathcal{C}\) is a functor \( D\) from any category" (i.e. with D referring to "functor" instead of "any category")?

  • 2.

I don't think so. The category \(D\) determines what kind of diagram (and hence what limit/colimit) the functor is.

  • 3.
    edited May 30

    Let me take a shot.

With reference to comment 1: yes, \(D\) is the functor, and the indexing category is \(\mathcal{J}\).

    Definition 3.41

A diagram in \(C\) is a functor from any category \( D : \mathcal{J} \rightarrow C \). We say that the diagram \(D\) commutes if \( D f = D f' \) holds for every parallel pair of morphisms \(f, f' : a \rightarrow b\) in \(\mathcal{J}\). We call \(\mathcal{J}\) the indexing category for the diagram.

    Paraphrased Definition 3.41

A diagram in \(C\) is a functor, \(D\), from an indexing category, \(\mathcal{J}\): $$ D : \mathcal{J} \rightarrow C. $$ The diagram commutes when \(D\) sends every parallel pair of morphisms \(f, f' : a \rightarrow b\) in \(\mathcal{J}\) to the same morphism in \(C\). The subsequent Example 3.42 can be confusing.

    In Definition 3.41 \( D \) is a functor, a morphism between categories \(\mathcal{J}\) and \( C \).

    In Example 3.42 \(\mathcal{D}\) is a category.

    Here is a mapping between the names:

$$ \begin{array}{c c | l} \text{Definition 3.41} & \text{Example 3.42} & \text{role} \\ \mathcal{J} & \mathcal{C} & \text{indexing category} \\ C & \mathcal{D} & \text{category the diagram lives in} \\ D & F, G & \text{diagram} \end{array} $$ The diagram is a formal version of the informal idea of a template.
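
To make this concrete, here is a small worked example (my own, not from the text). Let \(\mathcal{J}\) be the indexing category with two objects and a parallel pair of morphisms. A diagram \(D : \mathcal{J} \rightarrow \mathbf{Set}\) then picks out two sets and two functions between them, and it commutes exactly when \(Df = Df'\):

$$ \left( a \underset{f'}{\overset{f}{\rightrightarrows}} b \right) \;\overset{D}{\longmapsto}\; \left( Da \underset{Df'}{\overset{Df}{\rightrightarrows}} Db \right). $$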

  • 4.
    edited May 30

    This is awesome, @Fredrick! Thanks so much for the clarification (and also thank you @Christopher for the comment)!

  • 5.
    edited June 3

In Example 3.51, I think the objects \(u\) and \(z\) and the morphisms \(a, b, h, k\) could mislead someone. One could delete them and be left with just an elaborated naturality square (3.49). To elaborate it further, one could also paint a complex commutative diagram in \(\mathcal{C}\); it would have two separate same-shape "shadows" in \(\mathcal{D}\), one blue and one red, with a green \(\mathcal{D}\)-morphism between the blue and red shadows of each object of the diagram in \(\mathcal{C}\).

\(a, b, h, k\) were happy denizens of \(\mathcal{D}\) that didn't know they had to relate to the new blue and red images of the morphisms of \(\mathcal{C}\).
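
For reference, here is the naturality square the comment refers to (Eq. (3.49) in the book, written in standard notation). For a natural transformation \(\alpha : F \Rightarrow G\) and any morphism \(f : a \rightarrow b\) in \(\mathcal{C}\), the following square in \(\mathcal{D}\) must commute:

$$ \begin{array}{ccc} F a & \xrightarrow{F f} & F b \\ {\scriptstyle \alpha_a} \big\downarrow & & \big\downarrow {\scriptstyle \alpha_b} \\ G a & \xrightarrow{G f} & G b \end{array} \qquad \alpha_b \circ F f = G f \circ \alpha_a. $$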

  • 6.

Regarding the introduction: one real-life case that happens in industry is that software projects are evolving animals, with incremental versions, updates, upgrades, new features, fixes... and the transition from waterfall to agile requirements management demands that data schemas be adaptable. One must provide a mechanism that allows a version deployed at a client to be upgraded to a newer one without the client losing its data. So, even within the same project, an update frequently implies a sort of data "migration".

  • 7.
    edited June 5

    Julio - I guess you're happy now, but a "diagram of shape \(\mathcal{A}\) in the category \(\mathcal{B}\)" is just a functor \(F : \mathcal{A} \to \mathcal{B}\). (I'm deliberately using completely different letters, because you should never get attached to particular letters.)

    So, for example, there's a category \(\mathcal{A}\) called the square:

[Image: http://math.ucr.edu/home/baez/mathematical/7_sketches/graph_square.png]

    and we can look at a diagram of this shape in any category \(\mathcal{B}\). Very roughly speaking, it's like a picture of shape \(\mathcal{A}\) drawn on the canvas \(\mathcal{B}\). This is a very important and beautiful idea.
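
In case the image doesn't display, here is a plausible sketch of the square category in LaTeX (the object and arrow labels are my own choice, not necessarily those in the picture); it is the free category on a graph with four vertices and four edges:

$$ \begin{array}{ccc} a & \xrightarrow{f} & b \\ {\scriptstyle g} \big\downarrow & & \big\downarrow {\scriptstyle h} \\ c & \xrightarrow{k} & d \end{array} $$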

  • 8.

$$+ \dashv \Delta \dashv \times$$ I've been so intrigued with this idea. It is mind blowing for me for some reason, like finding out Vader is Luke's father. But in reading Chapter 3 of the Spivak and Fong text, they also use a similar notation, \(\Sigma_{F} \dashv \Delta_{F} \dashv \Pi_{F}\), and then use this to explain left and right adjunctions. The way they explain it sounds like they are saying that left adjoints are "sum-like", somehow acting like colimits in that they pick out interconnected parts, and right adjoints are "product-like", somehow acting like limits in that they find tuples with common traits.

Is this a safe intuition to have for adjunctions in general, or only true for the examples they use in the book? It seems to go well with the left = liberal, right = conservative analogy John uses as well, but being a newbie I'm not sure how far the analogy can be taken.
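
In case it helps to see the chain unpacked, here is what the two adjunctions say in \(\mathbf{Set}\) (standard hom-set bijections, with \(\Delta : \mathbf{Set} \to \mathbf{Set} \times \mathbf{Set}\) the diagonal functor, \(\Delta Z = (Z, Z)\)):

$$ \mathbf{Set}(X + Y, Z) \cong \mathbf{Set}(X, Z) \times \mathbf{Set}(Y, Z), \qquad \mathbf{Set}(Z, X \times Y) \cong \mathbf{Set}(Z, X) \times \mathbf{Set}(Z, Y). $$

The first bijection says \(+ \dashv \Delta\); the second says \(\Delta \dashv \times\).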

  • 9.
    edited July 14

> It is mind blowing for me for some reason, like finding out Vader is Luke's father.

    Yeah! Even more so: category theory revealed that our basic concepts in arithmetic and logic are related in ways that nobody had suspected before! It's astounding!

> The way they explain it sounds like they are saying that left adjoints are "sum-like", somehow acting like colimits in that they pick out interconnected parts, and right adjoints are "product-like", somehow acting like limits in that they find tuples with common traits. Is this a safe intuition to have for adjunctions in general, or only true for the examples they use in the book?

It's pretty safe for adjunctions between categories that are fairly similar to \(\mathbf{Set}\). You'll notice that in our study of databases we're focusing on categories of the form \(\mathbf{Set}^{\mathcal{C}}\). These are fairly similar to \(\mathbf{Set}\) in many ways. (Technically we say they are "toposes".)

    If we were working with categories like \(\mathbf{Set}^{\text{op}}\), everything would be turned around and your intuitions would be destroyed. A left adjoint functor from \(\mathcal{C}\) to \(\mathcal{D}\) gives a right adjoint functor from \(\mathcal{C}^{\text{op}} \) to \(\mathcal{D}^\text{op}\), and vice versa!
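
Spelling that last claim out with hom-sets (standard reasoning, added for clarity): if \(L : \mathcal{C} \to \mathcal{D}\) is left adjoint to \(R : \mathcal{D} \to \mathcal{C}\), then

$$ \mathcal{D}(L X, Y) \cong \mathcal{C}(X, R Y) \quad\text{becomes}\quad \mathcal{C}^{\text{op}}(R Y, X) \cong \mathcal{D}^{\text{op}}(Y, L X), $$

which says exactly that \(L^{\text{op}} : \mathcal{C}^{\text{op}} \to \mathcal{D}^{\text{op}}\) is now a right adjoint, with \(R^{\text{op}}\) as its left adjoint.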

    Then there are categories like \(\mathbf{FinVect}\), the category of finite-dimensional vector spaces and linear maps, that are equivalent to their own opposite. These are neither like \(\mathbf{Set}\) nor like \(\mathbf{Set}^{\text{op}}\), but somehow poised right in between.

    It takes some time to develop intuitions refined enough to handle all these different flavors of category. However, you are on the right track... thinking about important stuff!

  • 10.
    edited July 14

> Then there are categories like \(\mathbf{FinVect}\), the category of finite-dimensional vector spaces and linear maps, that are equivalent to their own opposite. These are neither like \(\mathbf{Set}\) nor like \(\mathbf{Set}^{\text{op}}\), but somehow poised right in between.

    WOW. I knew vector spaces are beautiful mathematical objects but this is yet again mind blowing. Sounds like they are kind of like a hologram of adjunctions.

  • 11.
    edited July 14

Whenever you take the transpose of a matrix, you are exploiting the fact that \(\mathbf{FinVect}\) is equivalent to its own opposite! You're turning a linear map \(T : \mathbb{R}^m \to \mathbb{R}^n \) into a linear map \(T^\top : \mathbb{R}^n \to \mathbb{R}^m \), which is somehow the same map seen backwards.

    This is part of why linear algebra is so powerful. Among other things, quantum mechanics explains our world using linear algebra. Every process in quantum mechanics has a time-reversed process!
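
In symbols (standard linear algebra, included here for reference): the duality functor implements this equivalence, and with respect to standard bases and their duals it acts on matrices as transposition:

$$ (-)^\ast : \mathbf{FinVect}^{\text{op}} \to \mathbf{FinVect}, \qquad (T : V \to W) \;\mapsto\; (T^\ast : W^\ast \to V^\ast), \qquad [T^\ast] = [T]^\top . $$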

  • 12.

    Here's a Puzzle using Chapter 3's concepts:

Consider your favorite database schema \(\mathcal{C}\). Suppose you have two instances of the database that you wish to merge. Let \(I:\mathcal{C}\to\mathbf{Set}\) and \(J:\mathcal{C}\to\mathbf{Set}\) be the two instances. For example, company A (database \(I\)) and company B (database \(J\)) are merging their employee records, and luckily for you, both databases have the same structure \(\mathcal{C}\).

Merging these databases is equivalent to a universal construction in a certain category. What is the construction, and in what category? Here's a hint: the merged database is another instance on \(\mathcal{C}\), so call it \(K:\mathcal{C} \to \mathbf{Set}\).
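
To make the setup concrete without giving the answer away (hypothetical data, purely for illustration): if \(\mathcal{C}\) were the free category on a single object \(\texttt{Employee}\), the two instances might be

$$ I(\texttt{Employee}) = \{\text{Alice}, \text{Bob}\}, \qquad J(\texttt{Employee}) = \{\text{Carol}\}, $$

and the merged instance \(K\) should end up containing all three records.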
