
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the *way* in which we match up these two objects, to see that they look the same.

For example, any two of these squares look the same after you rotate and/or reflect them:

An isomorphism between two of these squares is a *process of rotating and/or reflecting* the first so it looks just like the second.

As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:

**Definition.** Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an **inverse** of \(f\) is a morphism \(g: y \to x\) such that

- \(g\) is a **left inverse** of \(f\): \(g \circ f = 1_x \)

and

- \(g\) is a **right inverse** of \(f\): \(f \circ g = 1_y \).

I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).

**Puzzle 140.** Prove that any morphism has at most one inverse.

**Puzzle 141.** Give an example of a morphism in some category that has more than one left inverse.

**Puzzle 142.** Give an example of a morphism in some category that has more than one right inverse.

Now we're ready for isomorphisms!

**Definition.** A morphism \(f : x \to y\) is an **isomorphism** if it has an inverse.

**Definition.** Two objects \(x,y\) in a category \(\mathcal{C}\) are **isomorphic** if there exists an isomorphism \(f : x \to y\).

Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like!

What's an isomorphism in the category \(\mathbf{3}\)? Remember, this is a free category on a graph:

The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2:

$$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1:

$$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms:

$$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism!

In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.
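This can be verified exhaustively in code. Here is a small sketch (my own encoding, not part of the lecture): since \(\mathbf{3}\) has at most one path between any two nodes, a morphism is faithfully represented by its source and target alone.

```python
# Brute-force check that the only isomorphisms in the free category 3
# are the identities. A morphism is encoded as a pair (source, target)
# with source <= target, standing for the unique path between them.

objects = [1, 2, 3]
morphisms = [(a, b) for a in objects for b in objects if a <= b]

def compose(g, f):
    """Composite g . f, defined only when f's target equals g's source."""
    if f[1] != g[0]:
        return None
    return (f[0], g[1])

identity = {x: (x, x) for x in objects}

# A morphism f is an isomorphism iff some g is both a left and right inverse.
isos = [f for f in morphisms
        if any(compose(g, f) == identity[f[0]] and
               compose(f, g) == identity[f[1]]
               for g in morphisms)]
print(isos)  # only the identity morphisms survive
```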

We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a **preorder** is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A **poset** is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\).

**Puzzle 143.** Show that if a category \(\mathcal{C}\) is a preorder and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic.

**Puzzle 144.** Show that if a category \(\mathcal{C}\) is a poset and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\).

Puzzle 144 says that in a poset, the only isomorphisms are identities.

Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.

**Puzzle 145.** Show that every isomorphism in \(\mathbf{Set}\) is a **bijection**, that is, a function that is one-to-one and onto.

**Puzzle 146.** Show that every bijection is an isomorphism in \(\mathbf{Set}\).

So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them.
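Puzzles 145 and 146 together say that invertibility in \(\mathbf{Set}\) is exactly bijectivity. A tiny sketch (my own finite example) illustrating the 'bijection gives an isomorphism' direction:

```python
# A bijection between two finite sets, encoded as a dict, and its
# two-sided inverse obtained by swapping keys and values.

f = {'a': 1, 'b': 2, 'c': 3}          # a bijection {a,b,c} -> {1,2,3}
f_inv = {v: k for k, v in f.items()}  # its inverse {1,2,3} -> {a,b,c}

# f_inv . f is the identity on the source...
assert all(f_inv[f[x]] == x for x in f)
# ...and f . f_inv is the identity on the target.
assert all(f[f_inv[y]] == y for y in f_inv)
```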

One more example:

**Definition.** If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a **natural isomorphism.**

This name makes sense! The objects in the so-called 'functor category'
\(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the *isomorphisms* deserve to be called 'natural isomorphisms'.

But what are they like?

Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism

$$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes:

Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism

$$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that

$$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means

$$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\).

In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\).

But the converse is true, too! It takes a *little* more work to prove, but not much. So, I'll leave it as a puzzle.

**Puzzle 147.** Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism.

Doing this will help you understand natural isomorphisms. But you also need examples!

**Puzzle 148.** Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily *equal!*

We should talk about this.

## Comments

In **puzzle 143** it sounds that \(g\) should go backwards: \(g: y \to x\).

Jesus - yes, I'll fix that!

By the way, I'm sorry I didn't give any lectures on Thursday and Friday. I was a bit burnt out, and really busy trying to fix up my paper with Brendan, A compositional approach to passive linear networks. (The referee demanded lots of changes.)


The reservoir of energy you are displaying in putting these lectures together still astonishes me; it's sincerely appreciated.

**Puzzle 141.** Consider the category of vector spaces. The clockwise rotation by a certain angle \(a\) is a bijection. It has an infinite number of left inverses. For example, the counterclockwise rotation by the same angle and the clockwise rotation by \(360 - a\) degrees are both left inverses.

Another for **Puzzle 141**: in **Set**, for a surjection defining a labeled partition, the function in the converse direction sending partition labels to any of the elements in its block is a left inverse.

Hi, I think one can argue that rotation by 350º and by -10º are the same morphism in **Vect**. They are equal element-wise ("extensionally"), so that would leave us only with one inverse. Another option without this problem is the complex exponential. The [complex logarithm](https://en.wikipedia.org/wiki/Complex_logarithm#Problems_with_inverting_the_complex_exponential_function) needs to be multi-valued, so multiple choices give distinct left inverses.

**Puzzle 141.** Define \(f : \mathbb{Z} \to \mathbb{Z}\) by \(f(x) = 2x\). Then both \(g(x) = \lfloor \frac{x}{2} \rfloor\) and \(h(x) = \lfloor \frac{x + 1}{2}\rfloor\) are left inverses of \(f\).

**Puzzle 142.** Define \(f' : \mathbb{Z} \to \mathbb{Z}\) by \(f'(x) = \lfloor \frac{x}{2} \rfloor\). Then both \(g'(x) = 2x\) and \(h'(x) = 2x + 1\) are right inverses of \(f'\).

Notice that \(f\) is injective but not surjective, and \(f'\) is surjective but not injective. In \(\textbf{Set}\), functions with inverses are the same as bijections, and we want to avoid those here.

There also seems to be a relationship suggested here with Galois connections, since several of these functions showed up back when we were discussing those.
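These examples can be checked mechanically. A quick sketch (over a finite window of \(\mathbb{Z}\) only, so an illustration rather than a proof), using the fact that Python's `//` is floor division:

```python
# Puzzle 141: f(x) = 2x has the two distinct left inverses g and h.
f = lambda x: 2 * x
g = lambda x: x // 2          # floor(x/2)
h = lambda x: (x + 1) // 2    # floor((x+1)/2)

window = range(-10, 11)
assert all(g(f(x)) == x for x in window)   # g . f = id, so g is a left inverse
assert all(h(f(x)) == x for x in window)   # h . f = id, so is h
assert any(g(x) != h(x) for x in window)   # yet g != h: they differ on odds

# Puzzle 142: f'(x) = floor(x/2) has the two distinct right inverses g', h'.
f_ = lambda x: x // 2
g_ = lambda x: 2 * x
h_ = lambda x: 2 * x + 1
assert all(f_(g_(x)) == x for x in window)  # f' . g' = id
assert all(f_(h_(x)) == x for x in window)  # f' . h' = id
```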

A function with a left inverse is a powerful way to think about a parser/serializer pair.


> **Puzzle 141.** Give an example of a morphism in some category that has more than one left inverse.
>
> **Puzzle 142.** Give an example of a morphism in some category that has more than one right inverse.

Here's an interesting example I found on the arxiv yesterday while reading up on sequences; it answers both questions.

Let our category be sequences of natural numbers, with the natural number sequence \(a(n) = n\) as the identity.

> We can start with standard definitions for left and right inverses. Namely, given a sequence \(a(n)\) we say that a sequence \(b(n)\) is a left inverse of \(a\) if the sequence \(b(a(n))\) is the natural number sequence. I denote a left inverse sequence as \(\mathrm{leftInv}(n)\). Correspondingly, a right inverse sequence is denoted as \(\mathrm{rightInv}(n)\), and it satisfies the property that the composition sequence \(a(\mathrm{rightInv}(n))\) is the natural number sequence. It goes without saying that the sequences \(\mathrm{leftInv}(n)\) and \(\mathrm{rightInv}(n)\) depend on the sequence \(a\). I will sometimes use the notation \(\mathrm{leftInv}(a)(n)\) and \(\mathrm{rightInv}(a)(n)\) in cases where I need this dependency to be explicit.
>
> **Left inverse.** First we assume that \(a(n)\) is positive. Next, if \(a(n)\) takes the same value for two different indices \(n\), then the left inverse sequence cannot be defined. If \(a(n)\) doesn't reach a number \(K\) for any index \(n\), then \(\mathrm{leftInv}(K)\) could be any number. That is, in this case the left inverse isn't defined uniquely. From here, we see that we can define the left inverse uniquely only if \(a(n)\) is a permutation of the natural numbers, and in this case the left inverse sequence is the reverse permutation. Many of the interesting sequences are increasing. To be able to define a left inverse for an increasing sequence, we need this sequence not to take the same value for different indices. This requirement translates into a simple condition: our increasing sequence has to be strictly increasing. Suppose \(a(n)\) is a strictly increasing sequence. In this case the left inverse sequence can be defined. It still might not be unique; more precisely, it is guaranteed not to be unique unless \(a\) is the sequence of natural numbers. Each time the left inverse is not unique we have infinitely many left inverses. To enjoy some order in this chaos of left inverses I would like to restrict candidates for the left inverse to non-decreasing sequences. In this case we can define two special left inverse sequences: \(\mathrm{minimalLeftInv}\) and \(\mathrm{maximalLeftInv}\), called the minimal left inverse and the maximal left inverse correspondingly. We define them so that for any non-decreasing sequence \(b(n)\) that is a left inverse of \(a(n)\), the following inequalities are true: \(\mathrm{minimalLeftInv}(n) \leq b(n) \leq \mathrm{maximalLeftInv}(n)\). It is easy to see that \(\mathrm{minimalLeftInv}(a)(n)\) is the number of elements in \(a(n)\) that are less than or equal to \(n\). Also, \(\mathrm{maximalLeftInv}(a)(n)\) is the number of elements in \(a(n)\) that are less than \(n\), plus \(1\). In particular, \(\mathrm{maximalLeftInv}(a)(n) - \mathrm{minimalLeftInv}(a)(n)\) equals \(0\) if \(n\) belongs to \(a(n)\) and \(1\) otherwise. From here we trivially get the following equations:
> \[ \mathrm{minimalLeftInv}(a)(n) + 1 = \mathrm{maximalLeftInv}(a)(n) + \text{characteristic function of } a(n) = \mathrm{maximalLeftInv}(a)(n + 1). \]
>
> \(\cdots\)
>
> **Right inverse.** Again we assume that \(a(n)\) is positive. It is easy to see that if \(a(n)\) doesn't reach a number \(K\) for any index \(n\), then the right inverse can't be defined. If \(a(n)\) takes the same value for two or more different indices \(n\), then the right inverse sequence can reach only one of those index values (and we can choose which one). From here, we see that we can define the right inverse uniquely only if \(a(n)\) is a permutation of the natural numbers, and in this case the right inverse sequence is the reverse permutation. Suppose \(a(n)\) is a sequence that reaches every natural number value. Then the right inverse sequence can be defined. The right inverse sequence might not be unique, but we can try to define two special right inverse sequences: \(\mathrm{minimalRightInv}\) and \(\mathrm{maximalRightInv}\), called the minimal and the maximal right inverse correspondingly. We define them so that for any sequence \(b(n)\) that is a right inverse of \(a(n)\), the following inequalities are true: \(\mathrm{minimalRightInv}(n) \leq b(n) \leq \mathrm{maximalRightInv}(n)\). It is easy to see that \(\mathrm{minimalRightInv}(n)\) is the smallest index \(k\) such that \(a(k) = n\). Also, \(\mathrm{maximalRightInv}(n)\) is the largest index \(k\) such that \(a(k) = n\). It is easy to see that the minimal right inverse is always defined. At the same time, for the maximal right inverse to be defined, it is necessary and sufficient that \(a(n)\) reaches every value a finite number of times. Suppose that \(a(n)\) is a non-decreasing sequence that reaches every natural number value a finite number of times. Then the maximal right inverse is defined and
> \[ \mathrm{maximalRightInv}(n) = \mathrm{minimalRightInv}(n + 1) - 1. \]
>
> Suppose \(a(n)\) is a strictly increasing sequence. Then both the minimal and maximal left inverses are defined. Moreover, both of them are non-decreasing sequences that reach every value a finite number of times. This means that we can define the minimal and maximal right inverses of the sequences \(\mathrm{minimalLeftInv}(a)(n)\) and \(\mathrm{maximalLeftInv}(a)(n)\). The following properties are true:
> \[ \mathrm{minimalRightInv}(\mathrm{minimalLeftInv}(a))(n) = a(n) \]
> \[ \mathrm{maximalRightInv}(\mathrm{minimalLeftInv}(a))(n) + 1 = a(n + 1) \]
> \[ \mathrm{minimalRightInv}(\mathrm{maximalLeftInv}(a))(n + 1) = a(n) + 1 \]
> \[ \mathrm{maximalRightInv}(\mathrm{maximalLeftInv}(a))(n) = a(n). \]

- 'How to Create a New Integer Sequence,' Tanya Khovanova, https://arxiv.org/pdf/0712.2244.pdf

Jesus wrote:

> Hi, I think one can argue that rotation by 350º and by -10º are the same morphism in \(\mathbf{Vect}\).

Yes. Morphisms in \(\mathbf{Vect}\) are linear transformations, and these rotations are the same linear transformation of the plane.

Also note that these rotations are both left *and right* inverses of rotation by 10º. In short, they are inverses of rotation by 10º. (Remember, 'inverse' means 'left and right inverse'.) In Puzzle 140 we saw that any morphism has at most one inverse. So, rotation by 350º and by -10º can't possibly be different.

Jesus wrote:

> Another for **Puzzle 141**: in **Set**, for a surjection defining a labeled partition, the function in the converse direction sending partition labels to any of the elements in its block is a left inverse.

I may be getting confused, since I tend to mix up left and right, but the function in the converse direction seems to be a *right* inverse! If so, you've solved Puzzle 142, not Puzzle 141. But it's nice either way.

To pick a very specific example, let's use this surjection:

$$ f: \{a,b\} \to \{c\} . $$

Any function

$$ g: \{c\} \to \{a,b\} $$

is a right inverse of \(f\), meaning that

$$ f \circ g = 1_{\{c\}} . $$

To see this, note that

$$ (f \circ g)(c) = c $$

no matter what \(g\) is.

Jonathan wrote:

> **Puzzle 141.** Define \(f : \mathbb{Z} \to \mathbb{Z}\) by \(f(x) = 2x\). Then both \(g(x) = \lfloor \frac{x}{2} \rfloor\) and \(h(x) = \lfloor \frac{x + 1}{2}\rfloor\) are left inverses of \(f\).

Yes, that's nice! These functions \(g\) and \(h\) take different values on odd integers, but they agree on even integers - they just halve any even integer - so they both provide a left inverse to the function \(f\), which doubles integers.

You could in fact have chosen \(g\) to map the odd integers to whatever you wanted!

Jonathan wrote:

> **Puzzle 142.** Define \(f' : \mathbb{Z} \to \mathbb{Z}\) by \(f'(x) = \lfloor \frac{x}{2} \rfloor\). Then both \(g'(x) = 2x\) and \(h'(x) = 2x + 1\) are right inverses of \(f'\).

Nice again! The function \(g'\) doubles any integer; \(h'\) doubles it and adds one. But \(f'\) takes each even integer and the next odd integer to the same value, so \(f' \circ g' = f' \circ h'\), and in fact \(f' \circ g' = f' \circ h' = 1_{\mathbb{Z}}\).

> Notice that \(f\) is injective but not surjective, and \(f'\) is surjective but not injective. In \(\textbf{Set}\), functions with inverses are the same as bijections, and we want to avoid those here.

Right. Everyone who hasn't pondered this already should tackle these:

**Puzzle.** Exactly which functions have left inverses?

**Puzzle.** Exactly which functions have right inverses?

The answer in each case is a famous class of functions.

**Puzzle 148.** Let \(G\) be the graph with one node (labeled Person) and one edge (labeled BestFriend) from Person to itself, and let \(\mathcal C\) be the free category on \(G\).

Let \(F\) be the database where \(F(\textrm{Person}) = \{\textrm{Alice},\textrm{Bob}\}\) and \(F(\textrm{BestFriend})\) maps Alice to Bob and Bob to Alice.

Now we want to build another database \(G\) that is naturally isomorphic to \(F\). We know if \(\alpha: F \Rightarrow G\) is a natural isomorphism then \(\alpha_{\textrm{Person}}\) must be a bijection. This tells us that \(G(\textrm{Person})\) must be a two element set, say \(G(\textrm{Person}) = \{\textrm{Carmen}, \textrm{David}\}\) and we can define

\[\alpha_{\textrm{Person}}(\textrm{Alice}) = \textrm{Carmen} \quad \alpha_{\textrm{Person}}(\textrm{Bob}) = \textrm{David}\]

We've defined \(G\) on objects, so to make \(G\) a functor from \(\mathcal{C}\) to \(\textbf{Set}\) all that's left to do is define \(G\) on morphisms. However, to make the naturality squares commute (so that \(\alpha\) is indeed a natural transformation) we must have \(G(\textrm{BestFriend})\) map Carmen to David and David to Carmen.

![](http://slibkind.github.io/act/puzzle148.jpg)

We are done! \(G\) is a database and \(\alpha: F \Rightarrow G\) is a natural isomorphism.

So in database world, it seems to me that natural isomorphisms preserve the relationships between the things but not the things themselves. In this example, our natural isomorphism renamed our people but kept the relationship that there were two people who were each the other's best friend.
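This example is small enough to check by machine. A sketch in my own encoding, verifying that the component \(\alpha_{\textrm{Person}}\) is a bijection and that the naturality square commutes:

```python
# The two databases: each is a set of people plus a best-friend function.
F_person = {'Alice', 'Bob'}
F_best   = {'Alice': 'Bob', 'Bob': 'Alice'}

G_person = {'Carmen', 'David'}
G_best   = {'Carmen': 'David', 'David': 'Carmen'}

# The component of the natural transformation at the object Person.
alpha = {'Alice': 'Carmen', 'Bob': 'David'}

# alpha is a bijection, hence invertible...
assert set(alpha.values()) == G_person and len(alpha) == len(F_person)

# ...and the naturality square commutes:
#   alpha . F(BestFriend) == G(BestFriend) . alpha
assert all(alpha[F_best[p]] == G_best[alpha[p]] for p in F_person)
```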


Ouch, yep the surjection is on the left, and it can be precomposed by any assignment of labels to representatives. Thanks!


Great, Sophie!

The most important puzzle in this lecture, toward our goal of understanding databases, is Puzzle 148. In comment #14, Sophie gave an example of two naturally isomorphic databases built using the same schema:

So, everyone should ponder her answer and ask questions if there's anything unclear about it.

The rough idea is that when we have two naturally isomorphic databases, the

namesof things may be different, but therelationshipsbetween things are the same! We've got a category \(\mathcal{C}\) with one object \(\textrm{Person}\) and morphisms given by$$ 1_{\textrm{Person}} : \textrm{Person} \to \textrm{Person} $$ $$ \textrm{BestFriend} : \textrm{Person} \to \textrm{Person} $$ $$ \textrm{BestFriend} \circ \textrm{BestFriend} : \textrm{Person} \to \textrm{Person} $$ $$ \textrm{BestFriend} \circ \textrm{BestFriend} \circ \textrm{BestFriend}: \textrm{Person} \to \textrm{Person} $$ and so on, forever. This

abstractlydescribes the idea of people and their best friends. It's adatabase schema.Then Sophie considers two databases built using this scheman. In other words, two functors

$$ F, G : \mathcal{C} \to \mathbf{Set} .$$ Each of these assigns a specific

setto the object \(\textrm{Person}\) and a specific function to the morphism \(\textrm{BestFriend}\). The first has$$ F(\textrm{Person}) = \lbrace \textrm{Alice}, \textrm{Bob} \rbrace $$ and

$$ F(\textrm{BestFriend}) (\textrm{Alice}) = \textrm{Bob} , \qquad F(\textrm{BestFriend}) (\textrm{Bob}) = \textrm{Alice} .$$ So, it describes a

concretesituation involving people and their best friends, where Alice is Bob's best friend and vice versa.The second has

$$ G(\textrm{Person}) = \lbrace \textrm{Carmen}, \textrm{David} \rbrace $$ and

$$ G(\textrm{BestFriend}) (\textrm{Carmen}) = \textrm{David} , \qquad G(\textrm{BestFriend}) (\textrm{David}) = \textrm{Carmen} .$$ This is different than \(F\), but it's 'the same in a way': it again describes a concrete situation where two people are each other's best friend!

We can make this 'the same in a way' idea precise using a natural isomorphism \(\alpha : F \Rightarrow G \). This gives a bijection

$$ \alpha_{\textrm{Person}} : F(\textrm{Person}) \to G(\textrm{Person}) $$ obeying the naturality condition

$$ \alpha_{\textrm{Person}} \circ F(\textrm{BestFriend}) = G(\textrm{BestFriend}) \circ \alpha_{\textrm{Person}} .$$ This naturality condition is the crucial thing to understand! It says

For example:

Get it?

Here are some followup puzzles... for people other than Sophie:

Great, Sophie! The most important puzzle in this lecture, toward our goal of understanding databases, is Puzzle 148. In [comment #14](https://forum.azimuthproject.org/discussion/comment/19536/#Comment_19536), Sophie gave an example of two naturally isomorphic databases built using the same schema:

<center><img src = "http://slibkind.github.io/act/puzzle148.jpg"></center>

So, everyone should ponder her answer and ask questions if there's anything unclear about it. The rough idea is that when we have two naturally isomorphic databases, the _names_ of things may be different, but the _relationships_ between things are the same!

We've got a category \(\mathcal{C}\) with one object \(\textrm{Person}\) and morphisms given by

\[ 1_{\textrm{Person}} : \textrm{Person} \to \textrm{Person} \]

\[ \textrm{BestFriend} : \textrm{Person} \to \textrm{Person} \]

\[ \textrm{BestFriend} \circ \textrm{BestFriend} : \textrm{Person} \to \textrm{Person} \]

\[ \textrm{BestFriend} \circ \textrm{BestFriend} \circ \textrm{BestFriend} : \textrm{Person} \to \textrm{Person} \]

and so on, forever. This _abstractly_ describes the idea of people and their best friends. It's a **database schema**.

Then Sophie considers two databases built using this schema. In other words, two functors

\[ F, G : \mathcal{C} \to \mathbf{Set} . \]

Each of these assigns a specific _set_ to the object \(\textrm{Person}\) and a specific function to the morphism \(\textrm{BestFriend}\).

The first has

\[ F(\textrm{Person}) = \lbrace \textrm{Alice}, \textrm{Bob} \rbrace \]

and

\[ F(\textrm{BestFriend}) (\textrm{Alice}) = \textrm{Bob} , \qquad F(\textrm{BestFriend}) (\textrm{Bob}) = \textrm{Alice} . \]

So, it describes a _concrete_ situation involving people and their best friends, where Alice is Bob's best friend and vice versa.

The second has

\[ G(\textrm{Person}) = \lbrace \textrm{Carmen}, \textrm{David} \rbrace \]

and

\[ G(\textrm{BestFriend}) (\textrm{Carmen}) = \textrm{David} , \qquad G(\textrm{BestFriend}) (\textrm{David}) = \textrm{Carmen} . \]

This is different from \(F\), but it's 'the same in a way': it again describes a concrete situation where two people are each other's best friend! We can make this 'the same in a way' idea precise using a natural isomorphism \(\alpha : F \Rightarrow G\). This gives a bijection

\[ \alpha_{\textrm{Person}} : F(\textrm{Person}) \to G(\textrm{Person}) \]

obeying the naturality condition

\[ \alpha_{\textrm{Person}} \circ F(\textrm{BestFriend}) = G(\textrm{BestFriend}) \circ \alpha_{\textrm{Person}} . \]

This naturality condition is the crucial thing to understand! It says

> If \(x\) is \(y\)'s best friend in the first database, and \(\alpha_{\textrm{Person}} x = x'\) and \(\alpha_{\textrm{Person}} y = y'\), then \(x'\) is \(y'\)'s best friend in the second database.

For example:

> If Alice is Bob's best friend in the first database, and Alice is mapped to Carmen and Bob is mapped to David, then Carmen must be David's best friend in the second database.

Get it? Here are some followup puzzles... for people other than Sophie:

**Puzzle.** With the same choice of \(F\) and \(G\), find another natural isomorphism \(\alpha : F \Rightarrow G\).

**Puzzle.** With the same choice of \(F\) and \(G\), find a natural transformation \(\alpha : F \Rightarrow G\) that is not a natural isomorphism.

**Puzzle.** With the same choice of \(F\) and \(G\), find a transformation \(\alpha : F \Rightarrow G\) that is not natural.
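To make the naturality condition concrete, here is a small Python sketch (my own illustration, not part of the lecture): each database is a dict encoding \(\textrm{BestFriend}\), a transformation \(\alpha\) is a dict, and naturality is the equation above checked pointwise.

```python
# Sophie's two databases, encoded as dicts: person -> best friend.
F_best = {"Alice": "Bob", "Bob": "Alice"}      # F(BestFriend)
G_best = {"Carmen": "David", "David": "Carmen"}  # G(BestFriend)

def is_natural(alpha, F, G):
    """Check alpha ∘ F(BestFriend) == G(BestFriend) ∘ alpha at every person."""
    return all(alpha[F[x]] == G[alpha[x]] for x in F)

# One component of a natural isomorphism: Alice -> Carmen, Bob -> David.
alpha = {"Alice": "Carmen", "Bob": "David"}
print(is_natural(alpha, F_best, G_best))  # True: the naturality square commutes
```

Swapping Carmen and David in \(\alpha\) gives the other natural isomorphism, while any non-bijective \(\alpha\) fails the check.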

Thanks John! I really liked this puzzle because it brought to life one of my favorite tenets for thinking about math and science:

*Things are exactly their relationships to other things.*

Yes, I like that too! That principle eventually becomes the Yoneda Lemma in category theory. I really like Tai-Danae Bradley's highly readable blog posts explaining this:

- [The Yoneda perspective](http://www.math3ma.com/mathema/2017/8/30/the-yoneda-perspective).
- [The Yoneda embedding](http://www.math3ma.com/mathema/2017/9/6/the-yoneda-embedding).
- [The Yoneda lemma](http://www.math3ma.com/mathema/2017/9/14/the-yoneda-lemma).

The "Yoneda perspective", in her words, is that

> An object is completely determined by its relationships to other objects.

Hey [John](https://forum.azimuthproject.org/profile/17/John%20Baez),

> **Puzzle (A).** Exactly which functions have left inverses?
>
> **Puzzle (B).** Exactly which functions have right inverses?

We've chatted about this before but it's always good to review.

**Lemma (A).** In ZF, a function is *injective* if and only if it has a *left* inverse.

**Lemma (B).** In ZFC, a function is *surjective* if and only if it has a *right* inverse.

In fact, Lemma (B) is equivalent to the [axiom of choice](https://en.wikipedia.org/wiki/Axiom_of_choice).

I have to run, but I will try to swing by later and give proofs of **Lemmas (A)** and **(B)** if someone doesn't beat me to it.

Here's a proof (with a slight modification of **Lemma (A)**):

If \(s : X \rightarrow A\) has a left inverse \(r\), then \(s(x) = s(y) \implies rs(x) = rs(y) \implies x = y\), so \(s\) is injective.

If \(r : A \rightarrow X\) has a right inverse \(s\), then for any \(x \in X\) we have \(r(s(x)) = rs(x) = x\), so \(r\) is surjective.

The converses are slightly tricksier.

If \(s : X \rightarrow A\) is injective, and \(a \in A\) is in the image of \(s\), we can define \(r(a)\) to be the unique \(X\)-element sent to \(a\). If \(a\) is _not_ in the image of \(s\), we can define it to be whatever we like – but we must define it to be something, so \(X\) cannot be empty. If \(X\) is empty then we need \(A\) to be empty too in order to get a left inverse.

In summary: an injection \(s : X \rightarrow A\) always has a left inverse _unless_ \(X\) is empty and \(A\) is non-empty.

If \(r : A \rightarrow X\) is surjective, then for each \(x \in X\) we can associate a non-empty set \(A_x\) of \(A\)-elements sent to \(x\). If we can pick one element from each \(A_x\), we can build a right inverse – but this requires (and for that matter implies) the Axiom of Choice.

In summary: a surjection \(r : A \rightarrow X\) always has a right inverse _providing_ we accept the Axiom of Choice.
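For finite sets, both constructions in this proof can be carried out explicitly. Here is a Python sketch (my own illustration; the sets and the injection \(s\) are arbitrary choices):

```python
# s : X -> A is injective, so build a left inverse r by inverting s on its
# image and sending everything else to an arbitrary element of X.
X = [0, 1, 2]
A = ["a", "b", "c", "d"]
s = {0: "a", 1: "c", 2: "d"}  # an injective function X -> A

default = X[0]  # X must be non-empty for this step, as the proof notes
r = {a: next((x for x in X if s[x] == a), default) for a in A}
assert all(r[s[x]] == x for x in X)  # r ∘ s = 1_X

# r : A -> X is surjective, so build a right inverse by *choosing* one
# preimage for each x -- harmless here, the Axiom of Choice in general.
choice = {x: next(a for a in A if r[a] == x) for x in X}
assert all(r[choice[x]] == x for x in X)  # r ∘ choice = 1_X
```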

Matthew wrote:

> We've chatted about this before...

Whoops! True! Thanks for going over it again.

Let \(r: y \to x\) be a right inverse, and \(s: x \to y\) its left inverse, ie

\[ (r \circ s)(x) = x \]

then composition in the opposite direction produces an idempotent function,

\[ (s\circ r)\circ (s\circ r)(y) = (s\circ r)(y) \]

**Proof:**

\[ \begin{align} (s\circ r)\circ (s\circ r)(y) \\ = (s\circ (r\circ s)\circ r)(y) \\ = (s\circ (1_x)\circ r)(y) \\ = (s\circ r)(y). \\ \end{align} \]


Keith - yes, that's a nice observation! You can prove this in any category, not just the category of sets. If \(r: y \to x\) and \(s: x \to y\) are morphisms in any category, and \(r \circ s = 1_x\), then \(s \circ r\) is idempotent:

\[ \begin{array}{cl} (s\circ r)\circ (s\circ r) &= s\circ (r\circ s) \circ r\\ &= s\circ 1_x\circ r \\ &= s\circ r. \\ \end{array} \]

It's fun to think about what this means in various categories.

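Here is a tiny finite-set check of this idempotence fact (my own example; \(s\) and \(r\) are arbitrary choices satisfying \(r \circ s = 1_x\)):

```python
# s : x -> y and r : y -> x with r ∘ s = 1_x, where x = {0, 1}, y = {0, 1, 2, 3}.
y = [0, 1, 2, 3]
s = {0: 0, 1: 2}              # a section picking representatives in y
r = {0: 0, 1: 0, 2: 1, 3: 1}  # a retraction collapsing y onto x

assert all(r[s[i]] == i for i in [0, 1])  # r ∘ s = 1_x

e = {j: s[r[j]] for j in y}               # e = s ∘ r : y -> y
assert all(e[e[j]] == e[j] for j in y)    # e is idempotent...
assert e != {j: j for j in y}             # ...but e is not the identity on y
```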

Re. Puzzles (16) by John Baez: I'm having trouble seeing how there can be a natural transformation that's not an isomorphism for Sophie's example. Since \(C\) has just 1 object, a transformation \(F \rightarrow G\) consists of just one map \(F(\textrm{Person}) \rightarrow G(\textrm{Person}) \). There are only \(2^2 = 4\) possibilities for this map; 2 of which are bijections giving rise to natural isomorphisms, and the other 2 cannot satisfy the naturality condition. Am I missing something?


There is an overlap of puzzle numbers with the previous lecture.


Ignacio - ugh, that's very hard to fix now. Thanks for pointing it out, though!


Nasos - you're right! I was mistaken. The only natural transformations \(\alpha : F \Rightarrow G\) are natural isomorphisms, and there are two of them. There are also two non-natural transformations.

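This count is easy to confirm by brute force. A short Python sketch (my own illustration; the names are from Sophie's example above) enumerates all four maps \(\alpha : F(\textrm{Person}) \to G(\textrm{Person})\) and checks naturality:

```python
from itertools import product

# The two databases: person -> best friend.
F = {"Alice": "Bob", "Bob": "Alice"}
G = {"Carmen": "David", "David": "Carmen"}

# Try all 2^2 = 4 maps {Alice, Bob} -> {Carmen, David}.
natural = []
for ca, cb in product(["Carmen", "David"], repeat=2):
    alpha = {"Alice": ca, "Bob": cb}
    if all(alpha[F[x]] == G[alpha[x]] for x in F):  # naturality condition
        natural.append(alpha)

print(len(natural))                                     # 2
print(all(len(set(a.values())) == 2 for a in natural))  # True: both are bijections
```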

It was quite encouraging to learn in the comments to the previous lecture that "elementary" proofs are hard for everybody, so let me try.

**Puzzle 140**: Prove that any morphism has at most one inverse.

Let \(g: y \to x\) be an inverse of \(f: x \to y\), and \(h: y \to x\) be another inverse of \(f\). In this case the right inverse condition holds for both:

\[ f \circ g = 1_y \]

\[ f \circ h = 1_y \]

This implies that

\[ f \circ g = f \circ h \]

Composing both sides with \(g\) gives

\[ g \circ f \circ g = g \circ f \circ h \]

Using the left inverse condition, \(g \circ f = 1_x\), we get

\[ 1_x \circ g = 1_x \circ h \]

\[ g = h \]

So all inverses of a given function are equal (isomorphic?) to each other, so we can say that such inverse is unique (up to isomorphism?).

It also seems that having left and right inverses together form a universal property (I'm still not quite sure what are these exactly), which uniquely identifies the inverse of a given function.

@Igor – it strikes me that we don't really need the bit about \(g\) being a right inverse.

It's enough for \(g\) to be a left inverse and \(h\) to be a right inverse.

Then we have \(g = g\circ(f\circ h) = (g\circ f)\circ h = h\).

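Anindya's three-step calculation can even be checked mechanically. Here is a sketch in Lean 4 (my own formalization, stated for functions; the same associativity argument works verbatim in any category):

```lean
-- If g is a left inverse of f and h is a right inverse of f, then g = h.
theorem inverse_unique {α β : Type} (f : α → β) (g h : β → α)
    (hg : g ∘ f = id) (hh : f ∘ h = id) : g = h := by
  calc g = g ∘ id       := rfl          -- compose with the identity
    _    = g ∘ (f ∘ h)  := by rw [hh]   -- h is a right inverse of f
    _    = (g ∘ f) ∘ h  := rfl          -- associativity of composition
    _    = id ∘ h       := by rw [hg]   -- g is a left inverse of f
    _    = h            := rfl
```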

@Anindya - thank you, indeed, a nice observation. So restating it - if a function \(f\) has a left inverse \(g\) and a right inverse \(h\), then these inverses are equal (isomorphic), and \(f\), in turn, has the unique inverse.

This translates to the following:

- \(f\) having a left inverse is another way to state that it is injective
- \(f\) having a right inverse is another way to state that it is surjective
- having both thus translates to \(f\) being bijective, and thus having the unique inverse \(f^{-1}\).

**EDIT:** But actually these two properties are a much more general and abstract way to think about morphisms, not just as functions between sets, and this innocuous-looking lecture brought a lot of surprising things to our attention.

I use the following mnemonic: having a left inverse means that there is no loss of information when morphing an object \(x\) to an object \(y\), so we can somehow reverse this process and get back to \(x\).

And having a right inverse allows us to reconstruct \(y\), previously morphed into \(x\) by \(g: y \to x\).

It is interesting that using these we can relax the requirement \(g \circ f = 1_x\) by defining some equivalence classes on \(x\) and allowing the identity morphism to hold up to isomorphism - i.e. we can partially restore \(x\) and \(y\) both ways. Actually these are the cases which I'm interested in the most, looking forward to what comes next in the course.

@Igor – I think that's a good rule-of-thumb but there are some gotchas to bear in mind.

• talking about \(f\) being "injective" or "surjective" only makes sense in categories where the arrows are functions of some sort

• any function with a left inverse is injective, but not vice versa – consider the map \(\varnothing \rightarrow \textrm{1}\)

• any function with a right inverse is surjective, but the converse only holds if we have the Axiom of Choice


Igor wrote:

> **Puzzle 140**: Prove that any morphism has at most one inverse.
>
> Let \(g: y \to x\) be an inverse of \(f: x \to y\), and \(h: y \to x\) be another inverse of \(f\). In this case the right inverse condition holds for both:
>
> \[ f \circ g = 1_y \]
>
> \[ f \circ h = 1_y \]
>
> This implies that
>
> \[ f \circ g = f \circ h \]
>
> Composing both sides with \(g\) gives
>
> \[ g \circ f \circ g = g \circ f \circ h \]
>
> Using the left inverse condition, \(g \circ f = 1_x\), we get
>
> \[ 1_x \circ g = 1_x \circ h \]
>
> \[ g = h \]

Great! You have proven that any two inverses of a given morphism are equal.

> So all inverses of a given function are equal (isomorphic?) to each other, so we can say that such inverse is unique (up to isomorphism?).

You're not being bold enough here: you're not extracting all the wisdom available from your actual argument. You proved any two inverses of a given morphism _in any category_ are _equal_ to each other. So you don't need to say "function" here, and you don't need to say "isomorphic", or "up to isomorphism".

In fact, we haven't even discussed what it _means_ for two functions to be isomorphic, or for morphisms in a category to be isomorphic, so it's probably wisest at this point to avoid saying such things.

> It also seems that having left and right inverses together form a universal property (I'm still not quite sure what are these exactly), which uniquely identifies the inverse of a given function.

"Being the inverse of \(f\)" is a property that uniquely identifies a function if such a function exists - that's what you showed.

A universal property is fancier. It's a property of an object in a category that identifies it _uniquely up to isomorphism_. But we haven't talked about those yet.

Igor wrote:

> So restating it - if a function \(f\) has a left inverse \(g\) and a right inverse \(h\), then these inverses are equal (isomorphic), and \(f\), in turn, has the unique inverse.

Please don't put "isomorphic" in parentheses like that when you really mean "equal" - it turned a perfectly correct sentence into a very confusing one. It's as if someone said

> 2 plus 2 equals 4 (approximately).

It makes me want to say

_"you were doing so well... why did you hedge your bets at the last minute?"_

Equality is a wonderful thing. When two things are equal, we should proudly announce it.