
# Categories for the Working Hacker - A Discussion Group

51.
edited April 2018

Hey Juan!

Actually, I got sidetracked thinking about something you asked in another thread, Lecture 9 - Chapter 1: Adjoints and the Logic of Subsets. I wanted to share because I'm sort of proud of this wackiness, but I'll get back to working through Conal's paper soon!

> @Matthew, I don't know if I have understood exactly what you meant with _|_, but my answer is this:
>
>     type Bottom = forall t. t
>
>     hilbertExplosion :: Bottom -> a
>     hilbertExplosion bottom = id bottom


Oh, _|_ is just me trying to be cute and represent $$\bot$$. The symbol $$\bot$$ denotes $$\bigvee\{\}$$ or "the minimal element in a preorder".

> Why is it called hilbertExplosion?

David Hilbert derived this rule in his essay The Foundations of Mathematics (1927). It is the first consequence he gives after stating his 12 axioms.

Philosophers also call this rule *ex falso (sequitur) quodlibet* in Latin and the Principle of Pseudo-Scotus.

Now that we have Bottom (I prefer Falsum so I will use that), we can plug it into John's definition of negation, and combine it with Glivenko's Theorem.

This gives us a monad in Haskell which re-purposes Haskell's type checker as a simple proof checker for classical logic!

I will be following Hilbert's formalism for logic.

Sadly, Haskell lacks impredicative polymorphism. This means that if we want to do anything fancy with the type system, we have to box stuff all over the place.

{-# LANGUAGE Rank2Types #-}

module Classical where

newtype Falsum = Falsum { efq :: forall a . a } -- efq stands for *ex falso quodlibet*

-- John Baez's definition of negation
newtype Not a = Not { unNot :: a -> Falsum }


The first thing to note is that John's negation gives rise to a contravariant functor, which is a functor equipped with contramap :: (a -> b) -> f b -> f a. This map has an old name in logic: contraposition.

$$\vdash (\phi \to \psi) \to (\neg \psi \to \neg \phi)$$

Here's how we construct it:

contraposition :: (a -> b) -> Not b -> Not a
contraposition = (flip . curry . (Not .) . uncurry . ((.) .)) unNot


I also like to use the unboxed version of contraposition:

contraposition' :: (a -> b) -> (b -> Falsum) -> a -> Falsum
contraposition' = (flip . curry . uncurry) (.)
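Unfolding the combinators, contraposition' f g is just g . f. Since Falsum plays no computational role in that definition, the same shape can be checked at a general result type (contrapGeneral is an illustrative name, not from the post):

```haskell
-- contraposition' at a general result type: refuting b and knowing
-- a -> b lets us refute a, by precomposition.
contrapGeneral :: (a -> b) -> (b -> r) -> (a -> r)
contrapGeneral f g = g . f
```

For example, contrapGeneral (+1) show turns a refutation-style continuation on the codomain into one on the domain.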


We have enough to start with the Classical Logic Theorem Monad:

newtype Classical a = Thm { getProof :: Not (Not a) }


The first thing to show is that this is a functor. Here is the instance:

instance Functor Classical where
  fmap = ( flip
         . curry
         . (Thm .)
         . uncurry
         . (flip (contraposition . contraposition) .)
         ) getProof


Next, we want to show it is an Applicative. The pure operation reflects that intuitionistic logic (captured by the Identity functor in Haskell, as per the celebrated Curry-Howard correspondence) is a sublogic of classical logic.

The <*> operation for this monad reflects the rule philosophers traditionally call modus ponens:

$$\vdash \phi \to \psi \Longrightarrow\; \vdash \phi \Longrightarrow \; \vdash \psi$$

My instance uses the unboxed definitions for Classical, which are given by:

toDoubleNeg :: Classical a -> (a -> Falsum) -> Falsum
toDoubleNeg = flip (flip (unNot . getProof) . Not)

fromDoubleNeg :: ((a -> Falsum) -> Falsum) -> Classical a
fromDoubleNeg = Thm . Not . contraposition' unNot


Here's the applicative instance:

instance Applicative Classical where
  pure  = Thm . Not . flip unNot
  (<*>) = ( curry
          . (fromDoubleNeg .)
          . uncurry
          . (. toDoubleNeg)
          . flip
          . (. toDoubleNeg)
          . (contraposition' .)
          . flip
          . (contraposition' .)
          . flip
          ) contraposition'
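To see what these point-free instances are doing: Classical is (up to newtype wrapping) double negation, i.e. the continuation monad with answer type Falsum. Generalizing the answer type to an ordinary type variable gives observable, testable versions of the same definitions (the DN type and the *DN names below are hypothetical helpers for illustration, not part of the post's code):

```haskell
-- Double negation with a general answer type r;
-- Classical a corresponds to DN Falsum a.
newtype DN r a = DN { runDN :: (a -> r) -> r }

-- fmap: contrapose the continuation (twice, in the Not reading).
fmapDN :: (a -> b) -> DN r a -> DN r b
fmapDN f t = DN (\k -> runDN t (k . f))

-- pure: feed the value straight to the continuation.
pureDN :: a -> DN r a
pureDN a = DN (\k -> k a)

-- (<*>): run the proof of the implication, then the proof of the
-- antecedent, and apply one to the other (modus ponens).
apDN :: DN r (a -> b) -> DN r a -> DN r b
apDN tf ta = DN (\k -> runDN tf (\f -> runDN ta (k . f)))
```

At r = Falsum these are exactly the unboxed readings of fmap, pure and (<*>) above.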


Along with modus ponens, classical logic has several axioms. The axiomatization varies amongst logicians, but one system commonly used is:

$$\begin{eqnarray} & \vdash \phi \to \psi \to \phi & \\ & \vdash (\phi \to \psi \to \chi) \to (\phi \to \psi) \to \phi \to \chi & \\ & \vdash \neg \neg \phi \to \phi \end{eqnarray}$$

And we can indeed construct and check proofs for all three axioms.

The first two correspond to the K and S combinators, which are pure and <*> for the (->) e functor. See Conor McBride and Ross Paterson's Applicative programming with effects (2008) for a discussion of this.

ax1 :: Classical (a -> b -> a)
ax1 = pure const  -- the K combinator

ax2 :: Classical ((a -> b -> c) -> (a -> b) -> a -> c)
ax2 = pure (<*>)  -- the S combinator: \f g x -> f x (g x)
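Since pure and <*> for the reader applicative (->) e really are K and S, the first two axioms can be sanity-checked at the value level (a small illustration; kComb and sComb are made-up names):

```haskell
-- K and S as pure and (<*>) of the reader applicative ((->) e).
kComb :: a -> b -> a
kComb = pure            -- for ((->) e), pure = const

sComb :: (a -> b -> c) -> (a -> b) -> a -> c
sComb = (<*>)           -- for ((->) e), f <*> g = \x -> f x (g x)
```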


Finally, what's left is to show $$\vdash \neg\neg \phi \to \phi$$. I like to follow Dirk van Dalen's presentation on page 159 of Logic and Structure (2013).

First, we can write unApply, which is the inverse of (<*>):

unApply :: (Classical a -> Classical b) -> Classical (a -> b)
unApply = ( fromDoubleNeg
          . ((<*>) (<*>) ((<*>) pure))
          . ( curry
            . flip
            ) ( ( flip
                . uncurry
                . flip
                . (flip ($) .) ) (. (efq .)) . (. pure) )
          ) . (toDoubleNeg .) . (. fromDoubleNeg)


We can then use it and the intuitionistic identity $$\neg \neg \neg \phi \to \neg \phi$$ to prove the final axiom:

ax3 :: Classical ((Not (Not a)) -> a)
ax3 = (. (toDoubleNeg . Thm)) <$> unApply
        ( fromDoubleNeg
        . flip ((flip ($)) . toDoubleNeg . pure)
        . toDoubleNeg )


All that's left to show is that this is a Monad. This isn't super hard, but I'll leave it as an exercise for anyone reading who wants to try...
52.
edited April 2018

Matthew, after sleeping over my problem I think I've arrived at the solution of creating an instance of Interpret for the function (->) category, which gets its Language instance from the general category one.

So, getting both run and interpret for the (->) instance of Tagless Puzzle 2, the code for (->) is simply:

{-# LANGUAGE Rank2Types #-}

module LanguageFun2 where

import CompilingToCategories
import Language
import LanguageCat

instance Interpreted (->) where
  run f h = f h

interpret :: Term a -> a
interpret t = (run :: (h -> a) -> h -> a) t ()


I've tested over the same examples that worked over the direct implementation of Language (->) and these work:

interpret (apply (lambda here) (int 6))
interpret (apply (lambda here) (bool True))
interpret (ifte (bool True) (int 3) (int 4))


But I have problems with:

ex = lambda (lambda (add (before here) here))


I've needed to add FlexibleContexts because it complained with this:

• Non type-variable argument in the constraint: ConstCat k Bool
(Use FlexibleContexts to permit this)
• When checking the inferred type
ex :: forall h (k :: * -> * -> *).
(BoolCat k, ConstCat k Bool, ConstCat k Int, OrdCat k Int,
NumCat k Int, Closed k) =>
k h (Int -> Int -> Int)


and after adding it, the type I get is

ex :: (BoolCat k, ConstCat k Bool, ConstCat k Int, OrdCat k Int,
NumCat k Int, Closed k) => k h (Int -> Int -> Int)


But now, when I try

interpret (apply (apply ex (int 3)) (int 2))


I get

• Could not deduce (BoolCat k) arising from a use of ‘ex’
from the context: Language k
bound by a type expected by the context:
Language k => k h Int
at <playground>:5:1-44
Possible fix:
add (BoolCat k) to the context of
a type expected by the context:
Language k => k h Int
• In the first argument of ‘apply’, namely ‘ex’
In the first argument of ‘apply’, namely ‘(apply ex (int 3))’
In the first argument of ‘interpret’, namely
‘(apply (apply ex (int 3)) (int 2))’


And this is all I have.

PS: The Monad Classical Puzzle will have to wait for now :-)

53.
edited April 2018

> Matthew, after sleeping over my problem I think I've arrived at the solution of creating an instance of Interpret for the function (->) category which gets its Language instance from the general category one.
>
> ...
>
> But I have problems with:
>
>     ex = lambda (lambda (add (before here) here))

I am not 100% sure, but I think the issue is that you've got some functional dependencies in your classes.

First, I would generalize () for Terminal like this:

class Category k => Terminal k u | k -> u where
  it :: a `k` u


The reason I think we can do this is that terminal objects for a category are all isomorphic, so we can treat whatever instance we make as uniquely determined.

Also here:

class Terminal k => ConstCat k b where
  unitArrow :: b -> (() `k` b)


I don't think you need a b parameter for ConstCat... instead, I would do:

class Terminal k u => ConstCat k u | k -> u where
  unitArrow :: b -> (u `k` b)


In fact, we can generalize ConstCat k u and get rid of the u parameter.

Observe:

constArrow :: ConstCat k u => b -> a `k` b
constArrow b = (unitArrow b) . it


So we can just do:

class ConstCat k where
  constArrow :: b -> (a `k` b)
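As a sanity check on this derivation, here is the specialization to the plain function category, where the terminal object is () (the primed names are hypothetical illustrations, not part of the classes above):

```haskell
-- Specializing to (->): `it` discards its argument, `unitArrow` is
-- const, and the derived constArrow is const again.
it' :: a -> ()
it' _ = ()

unitArrow' :: b -> (() -> b)
unitArrow' = const

constArrow' :: b -> (a -> b)
constArrow' b = unitArrow' b . it'
```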


Sorry if this doesn't compile... :-(

If you want, we could make a repo on GitHub so I can pull your code down. That might make it easier for us, and for others who want to join in, to share code as things get more complicated.

54.
edited April 2018

Hi Matthew,

The git repository is created at categories-working-hacker.

The last commit includes both your suggestions and the modified interpreter.

I can enter the expression ex = lambda (lambda (add (before here) here)) in ghci only if I enable :seti -XFlexibleContexts.

But when I try to evaluate interpret (apply (apply ex (int 3)) (int 2)) I get the error:

<interactive>:4:25: error:
• Could not deduce (BoolCat k) arising from a use of ‘ex’
from the context: Language k
bound by a type expected by the context:
Term Int
at <interactive>:4:1-44
Possible fix:
add (BoolCat k) to the context of
a type expected by the context:
Term Int
• In the first argument of ‘apply’, namely ‘ex’
In the first argument of ‘apply’, namely ‘(apply ex (int 3))’
In the first argument of ‘interpret’, namely
‘(apply (apply ex (int 3)) (int 2))’


I hope the project (using stack) is well configured because I use haskellformac and my stack-fu is a little bit rusty.

55.
edited April 2018

@Juan,

Okay, I made a test suite, and as long as everything is type-annotated it looks like this is compiling fine.

I also turned on a bunch of linting and warnings. If you would prefer I can turn them back off.

I see two ways forward from here:

1. Pick a compile target outside of Haskell. For instance, LLVM or WASM are cool. We'll have to modify the category classes to suit doing this, I think. I suspect there are a few other classes left to be made. In particular, I think we need something like CategoryFix with fixC :: (y -> y) `k` y.
2. Move on to the automatic integration paper

Finally, there's a few more adjunction identities I can think of if you are still curious about that!

56.
edited April 2018

@Matthew, what a pull request!!! Most of the code was in need of a cleanup (mainly because sometimes I was 'playing by ear', since my knowledge of Haskell is medium-plus at most).

I think that you're referring to "The simple essence of automatic differentiation" paper, aren't you?

After having read over it, the paper mentions "monoidal categories". As the second chapter of the "Seven Sketches" book is about "monoidal preorders", maybe exploring both things in parallel can make sense, don't you think? (Although, I haven't finished the "Compiling to Categories" paper yet)

I'm completely open to new adjunction identities. Adjunctions, at least for me, were a hard wall to smash my head into two years ago. Completely impenetrable. And now, I think, I'm beginning to make a little dent in them. So anything about adjunctions is more than welcome.

57.

> what a pull request !!! Most of the code was in need of a cleanup (mainly due to the fact that sometimes I was 'playing by ear', because my knowledge of Haskell is medium-plus at most).

Yeah mine is pretty mediocre too. I'm trying to practice every day.

> After having read over it, the paper mentions "monoidal categories". As the second chapter of the "Seven Sketches" book is about "monoidal preorders", maybe exploring both things in parallel can make sense, don't you think? (Although, I haven't finished the "Compiling to Categories" paper yet)

Sounds good.

> **I'm completely open to new adjunction identities.**

Okay, so suppose $$g \dashv f$$. Then we have the following little theorem I lifted from this mathematics stack exchange discussion:

> **Theorem**: Left adjoints preserve all joins; right adjoints preserve all meets.

In other words $$g( a \vee b ) = g(a) \vee g(b)$$ and $$f(a \wedge b) = f(a) \wedge f(b)$$.

Puzzle: In Haskell, we can think of these two equations as four functions. What are they? How relaxed can we make the type class constraints?

58.

Juan wrote:

> After having read over it, the paper mentions "monoidal categories". As the second chapter of the Seven Sketches book is about "monoidal preorders", maybe exploring both things in parallel can make sense, don't you think?

Monoidal categories - categories where you can multiply objects and morphisms - are pretty fundamental to mathematics. Just as preorders are a special case of categories, monoidal preorders are a special case of monoidal categories. Seven Sketches is highly unorthodox in that it discusses preorders and monoidal preorders before turning to more general categories and monoidal categories. But with luck, by the end, you'll be experts on all these things!

59.
edited April 2018

Hi @Matthew, here's my take on the puzzle:

I've created two classes, one for each preservation property

class PreservesCoproduct f where
  extractCoproduct   :: f (Either a b) -> Either (f a) (f b)
  introduceCoproduct :: Either (f a) (f b) -> f (Either a b)

class PreservesProduct f where
  extractProduct   :: f (a, b) -> (f a, f b)
  introduceProduct :: (f a, f b) -> f (a, b)


And then two implementations (I've needed to add the extensions FlexibleInstances and UndecidableInstances):

instance Adjunction f g => PreservesCoproduct f where
  extractCoproduct   = rightAdjunct $ fmap Left . unit ||| fmap Right . unit
  introduceCoproduct = fmap Left ||| fmap Right

instance Adjunction f g => PreservesProduct g where
  extractProduct   = fmap fst &&& fmap snd
  introduceProduct = leftAdjunct $ counit . fmap fst &&& counit . fmap snd


Both introduceCoproduct and extractProduct only need f (or g) to be a Functor, so they could live in a separate instance with a weaker constraint.

Function introduceProduct is the uncurried version of f a -> f b -> f (a, b) which can be found in the monoidal presentation for applicative functors.

The last one, extractCoproduct, I cannot relate with anything I know (besides being the dual of the "monoidal" one).

And that's all (for now ;-D)

60.

Hmm... let me explain my thinking. Taking what you just wrote, we actually have:

introduceCoproduct :: Functor f => Either (f a) (f b) -> f (Either a b)
introduceCoproduct = fmap Left ||| fmap Right

extractProduct :: Functor f => f (a, b) -> (f a, f b)
extractProduct = fmap fst &&& fmap snd


These reflect that, for every monotone $$f$$:

$$\begin{eqnarray} f(a) \vee f(b) & \leq & f(a \vee b) \\ f(a \wedge b) & \leq & f(a) \wedge f(b) \\ \end{eqnarray}$$
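These two always-available directions can be checked on a concrete functor such as Maybe (a usage sketch; introduceMaybe and extractMaybe are illustrative names, and (|||) and (&&&) come from Control.Arrow):

```haskell
import Control.Arrow ((&&&), (|||))

-- The direction available for any Functor, specialized to Maybe.
introduceMaybe :: Either (Maybe a) (Maybe b) -> Maybe (Either a b)
introduceMaybe = fmap Left ||| fmap Right

-- Likewise for products: project each component under the functor.
extractMaybe :: Maybe (a, b) -> (Maybe a, Maybe b)
extractMaybe = fmap fst &&& fmap snd
```

For instance, introduceMaybe (Left (Just 1)) injects the value and tags it, while extractMaybe splits a wrapped pair into a pair of wrapped values.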

The Galois connections in Fong and Spivak are defined to be between two preorders $$\leq_A$$ and $$\leq_B$$.

On the other hand, Ed Kmett's adjunctions are always between Category (->) and Category (->).

This restricts us from looking at one of the cutest adjunction constructions I know. If we have a Monad m, we compute two adjunctions! It's a classic construction. I don't think it's in Fong and Spivak :( It is at the heart of the >=> operator in Haskell if you've ever used it.

However, I think I know why Kmett came up with his adjunctions library the way he did. He has another adjunction construction he came up with 10 years ago. But his is all in Category (->).

I can talk about these if you like.

• Options
61.

Actually, could someone make an implementation that takes a preorder and gives a visualization of it? A visualizer would be useful for authors and teachers (plus anyone else that likes visual proofs).

In fact, why not make a whole library that implements preorders and their various operations?
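In case it helps to make the request concrete, here is a minimal sketch of what such a tool might start from — emitting Graphviz DOT text for a finite preorder, given its elements and its ≤ relation. All the names here are mine and this is only a toy starting point, not a proposed design:

```haskell
import Data.List (intercalate)

-- Emit a DOT digraph with an edge x -> y whenever x <= y (and x /= y).
-- A real library would also collapse equivalent elements and drop
-- edges implied by transitivity to get a proper Hasse diagram.
toDot :: (Show a, Eq a) => [a] -> (a -> a -> Bool) -> String
toDot els leq =
  "digraph preorder {\n"
  ++ intercalate "\n"
       [ "  " ++ show (show x) ++ " -> " ++ show (show y)
       | x <- els, y <- els, x /= y, leq x y ]
  ++ "\n}"

main :: IO ()
main = putStrLn (toDot [1, 2, 3 :: Int] (<=))
```

Feeding the output to dot -Tpng would render the picture.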

62.

This restricts us from looking at one of the cutest adjunction constructions I know. If we have a Monad m, we compute two adjunctions! It's a classic construction. I don't think it's in Fong and Spivak. It is at the heart of the >=> operator in Haskell if you've ever used it.

I understand >=> to be composition of morphisms in the Kleisli category of the relevant monad. But how does >=> relate to adjunctions?

63.

Matthew wrote:

If we have a Monad m, we compute two adjunctions!

I guess you're talking about the adjunctions involving the Kleisli and Eilenberg-Moore categories. Reuben mentioned one of these.

I find these names incredibly intimidating given how simple the ideas are. In particular, the "Eilenberg-Moore category", with the longer and more intimidating name, is arguably the simpler idea. A bunch of us like to call the Eilenberg-Moore category of a monad the category of algebras of the monad, and the Kleisli category the category of free algebras of the monad.

64.

This restricts us from looking at one of the cutest adjunction constructions I know. If we have a Monad m, we compute two adjunctions! It's a classic construction. I don't think it's in Fong and Spivak. It is at the heart of the >=> operator in Haskell if you've ever used it. However, I think I know why Kmett came up with his adjunctions library the way he did. He has another adjunction construction he came up with 10 years ago. But his is all in Category (->). I can talk about these if you like.

You've piqued my curiosity.

65.

Hi @Matthew, in #57 you translate "right adjoints preserve meets" as if it said "preserve the meet operation", but Proposition 1.84 goes further: right adjoints preserve the meet of an arbitrary set of elements (not just two). That code is starting to take shape; I wish I knew some Haskell.
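Since you mention wanting to see this in Haskell: here is a tiny standalone sketch (the Galois connection is my own toy example, not from the book) checking that a right adjoint preserves the meet — here, the minimum — of an arbitrary finite set in the preorder (Int, <=):

```haskell
-- In (Int, <=), f x = x + 3 is left adjoint to g y = y - 3,
-- since f x <= y iff x <= g y. Right adjoints preserve meets,
-- so g should commute with minimum over any finite set.
g :: Int -> Int
g y = y - 3

main :: IO ()
main = do
  let s = [10, 4, 7]
  print (g (minimum s) == minimum (map g s)) -- prints True
```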

66.

I find these names incredibly intimidating given how simple the ideas are.

I'd love a reference to materials explaining the "Eilenberg-Moore category", as explanations I've found are incredibly difficult (especially compared to the Kleisli category, which from a programming perspective is very straightforward to understand). Unless that's going to be covered in this course, in which case I can wait!

67.

Okay guys, I'll try this.

The following was first established by Kleisli in Every standard construction is induced by a pair of adjoint functors (1965).

I am going to leave a bunch of puzzles so others can finish the construction.

{-# LANGUAGE ConstraintKinds        #-}
{-# LANGUAGE FlexibleContexts       #-}
{-# LANGUAGE FlexibleInstances      #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE MonoLocalBinds         #-}
{-# LANGUAGE MultiParamTypeClasses  #-}
{-# LANGUAGE TypeOperators          #-}
{-# LANGUAGE UndecidableInstances   #-}
{-# OPTIONS_GHC -fno-warn-orphans #-}

import           Control.Category
import           Prelude          hiding ((.))


The Category typeclass is defined in Control.Category in base. It introduces id and (.), which correspond to identity morphisms and morphism composition from category theory.

We diverge from the Prelude below. In traditional category theory, a functor maps morphisms in one Category into another. The following typeclass captures this.

class (Category x, Category y) => CatFunctor x f y where
    catMap :: a `x` b -> f a `y` f b


This leads to the first puzzle:

Puzzle 1. What are the laws for CatFunctor? [Hint: They should be familiar...]

An endofunctor is a functor from a category to itself.

type EndoFunctor k f = CatFunctor k f k


Haskell's Functor is a particular kind of endofunctor. It's on the category (->), which is something called Hask.

type HaskFunctor f = EndoFunctor (->) f


This is operationally equivalent to the Functor in base. Here are the two instances:

instance Functor f => CatFunctor (->) f (->) where
    catMap = fmap

-- Needs UndecidableInstances and -fno-warn-orphans
instance EndoFunctor (->) f => Functor f where
    fmap = catMap


So now we are ready for talking about adjunctions. In category theory, we say two functors $$F: \mathcal{X} \to \mathcal{Y}$$ and $$G: \mathcal{Y} \to \mathcal{X}$$ are adjoint whenever there is a family of bijections such that:

$$\mathbf{hom}_{\mathcal{Y}}(FA,B) \cong \mathbf{hom}_{\mathcal{X}}(A,GB)$$

This is captured with this typeclass:

class ( CatFunctor x f y
      , CatFunctor y g x
      ) => CatAdjunction x y f g where
    catLeftAdjunct  :: (f a `y` b) -> (a `x` g b)
    catRightAdjunct :: (a `x` g b) -> (f a `y` b)


The usual Haskell adjunctions are just with endofunctors on (->).
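Before moving on, it may help to see the most familiar concrete case of this bijection: currying, i.e. the adjunction (,) e ⊣ (->) e in Hask. A standalone sketch (the function names are mine):

```haskell
-- leftAdjunct and rightAdjunct witness the natural bijection
--   hom((e, a), b)  ~=  hom(a, e -> b)
-- which says (,) e is left adjoint to (->) e.
leftAdjunct :: ((e, a) -> b) -> (a -> (e -> b))
leftAdjunct f = \a e -> f (e, a)

rightAdjunct :: (a -> (e -> b)) -> ((e, a) -> b)
rightAdjunct g = \(e, a) -> g a e

main :: IO ()
main = do
  let f (e, a) = e + a :: Int
  -- the two maps are mutually inverse; round-tripping gives f back
  print (rightAdjunct (leftAdjunct f) (2, 3)) -- prints 5
```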

So now to the actual adjoint constructions. From Control.Arrow, there is the following category defined:

newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }


Puzzle 2. Fill out the category instance for Kleisli m below. You can always cheat if you want, but maybe don't share?

instance Monad m => Category (Kleisli m) where
    id  = undefined
    (.) = undefined -- This should be familiar...


Finally, we can construct the adjunction. The left adjoint is free and doesn't depend on anything:

newtype LeftK a = LeftK { unLeftK :: a }

instance Applicative m => CatFunctor (->) LeftK (Kleisli m) where
    catMap = Kleisli . (pure .) . haskCatMap
      where
        haskCatMap f (LeftK x) = LeftK (f x)


Puzzle 3. (the hard one) The right adjoint is not free.

1. Write the newtype declaration for it.
2. Write the instance CatFunctor (Kleisli m) ??? (->) for it
3. Finally, write the instance for Monad m => CatAdjunction (->) (Kleisli m) LeftK ???

I am happy to give hints for Puzzle 3. It might help after we finish it to see what this means for posets...

68.

Reuben wrote:

I'd love a reference to materials explaining the "Eilenberg-Moore category", as explanations I've found are incredibly difficult (especially compared to the Kleisli category, which from a programming perspective is very straightforward to understand).

I don't know what's best, but the Eilenberg-Moore category of a monad is the category of "algebras" of a monad, so you should probably start by getting a good understanding of the algebras of a monad. To mathematicians, the most important thing about monads is their algebras: we describe mathematical structures as algebras of monads.
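To make "algebras of a monad" concrete in Haskell terms (a sketch of my own, not part of the course): an algebra of a monad m is a type a together with a structure map m a -> a satisfying unit and associativity laws, and for the list monad such algebras are exactly monoids:

```haskell
import Data.Monoid (Sum(..))

-- An algebra of the list monad on a is a map [a] -> a compatible
-- with return and concat; mconcat is the canonical example.
listAlgebra :: Monoid a => [a] -> a
listAlgebra = mconcat

main :: IO ()
main = do
  print (getSum (listAlgebra (map Sum [1, 2, 3 :: Int]))) -- prints 6
  print (listAlgebra [Sum (5 :: Int)] == Sum 5)           -- unit law: prints True
```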

This might help:

* The Catsters, [Monads 3](https://www.youtube.com/watch?v=eBQnysX7oLI).

This is part of a series of videos by some friends of mine; it might be best to start at the beginning:

* The Catsters, [Monads 1](https://www.youtube.com/watch?v=9fohXBj2UEI&list=PL0E91279846EC843E).

69.

@Matthew, the boilerplate code in the Kleisli puzzle finishes abruptly with import Control.Monad ((, so I think there are some lines missing.

I've also needed to change Applicative m to Monad m, because it is needed for Kleisli m to be a Category.

Would you mind revising it?

70.

@Matthew, this is my take on the puzzles so far:

Puzzle 1 solution: the functor laws, expressed in terms of the Category class:

catMap id = id
catMap (f . g) = catMap f . catMap g

Puzzle 2 solution: lifting return and >=> from Monad to the Kleisli newtype:

instance Monad m => Category (Kleisli m) where
    id = Kleisli return
    (Kleisli f) . (Kleisli g) = Kleisli $ f <=< g

Puzzle 3 solution: parts 1 and 2 are related, because I need the RightK values to be in the same monad context as the Kleisli arrows:

newtype RightK m b = RightK { unRightK :: m b }

instance Monad m => CatFunctor (Kleisli m) (RightK m) (->) where
    catMap (Kleisli f) (RightK a) = RightK $ a >>= f


What I'm stuck with is the last part of the puzzle. I think I have the correct code, but I cannot make GHC accept it. The code I have is:

instance Monad m => CatAdjunction (->) (Kleisli m) LeftK (RightK m) where
    catLeftAdjunct  f = RightK . f . LeftK
    catRightAdjunct f = unRightK . f . unLeftK


And the error I get has to do with functional dependencies:

• Illegal instance declaration for
‘CatAdjunction (->) (Kleisli m) LeftK (RightK m)’
The liberal coverage condition fails in class ‘CatAdjunction’
for functional dependency: ‘f -> g’
Reason: lhs type ‘LeftK’ does not determine rhs type ‘RightK m’
Un-determined variable: m
• In the instance declaration for
‘CatAdjunction (->) (Kleisli m) LeftK (RightK m)’


Any suggestions?

Of course the problem can be that all my code is completely wrong :-D

71.

First guesses maybe:

Puzzle 1

catMap x f y = catMap x f catMap y?

Puzzle 2

id = return ()? (.) = something to do with (=<<)?

I didn't understand what not sharing meant?

ACT has already given me much of the stuff I've wanted for years. Especially many thanks MD and JGM for these incredible lessons :).

(Gone for breakfast!)

72.

the boilerplate code in the Kleisli puzzle finishes abruptly with import Control.Monad ((, so I think there are some lines missing.

I've also needed to change Applicative m to Monad m, because it is needed for Kleisli m to be a Category.

Would you mind revising it?

Okay, I fixed it. It should read:

import           Control.Category
import           Control.Monad    ((<=<)) -- Hint!
import           Prelude          hiding ((.))


You figured it out without the hint of course... next time I promise to make a PR.

I've also needed to change Applicative m to Monad m, because it is needed for Kleisli m to be a Category.

For the CatFunctor instance for LeftK, all it needs is that m be an Applicative, since it just calls pure and doesn't use bind in any way. But I see now that's confusing.

The code I have is:

instance Monad m => CatAdjunction (->) (Kleisli m) LeftK (RightK m) where
catLeftAdjunct f = RightK . f . LeftK
catRightAdjunct f = unRightK . f . unLeftK


And the error I get has to do with functional dependencies:

• Illegal instance declaration for
‘CatAdjunction (->) (Kleisli m) LeftK (RightK m)’
The liberal coverage condition fails in class ‘CatAdjunction’
for functional dependency: ‘f -> g’
Reason: lhs type ‘LeftK’ does not determine rhs type ‘RightK m’
Un-determined variable: m
• In the instance declaration for
‘CatAdjunction (->) (Kleisli m) LeftK (RightK m)’


Okay, it's my bad again. There shouldn't be any functional dependencies in CatAdjunction (we are going to have to give types for everything like it's Java sadly).

class ( CatFunctor x f y
      , CatFunctor y g x
      ) => CatAdjunction x y f g where
    catLeftAdjunct  :: (f a `y` b) -> (a `x` g b)
    catRightAdjunct :: (a `x` g b) -> (f a `y` b)


You are very close. You just need to fix the definition you gave for catRightAdjunct.

73.

@Jim,

to exclude the . from the Prelude import, you need two pairs of parentheses: import Prelude hiding ((.))

74.

@Jim -

For puzzle 1, the laws for CatFunctor should be the same as the laws for Functor. It needs to preserve id and (.). In traditional Haskell, the laws are:

fmap id = id
fmap (f . g) = (fmap f) . (fmap g)


For puzzle 2, we always bump up against Haskell's lack of impredicative types when we want to do fancy stuff like this.

Morphisms in the Kleisli category for a monad m look like:

 a -> m b

So if \ a -> a :: a -> a is the identity for (->), what's the identity for Kleisli m?

Also, with regard to category composition (.), I have a hint. They call (<=<) in Control.Monad the "right-to-left Kleisli composition", named after the paper I cited.
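For anyone who hasn't used it, here is a small standalone demonstration of (<=<) on the Maybe monad; the two helper functions are made up for illustration:

```haskell
import Control.Monad ((<=<))

-- Two Kleisli arrows, i.e. functions of shape a -> Maybe b:
safePred :: Double -> Maybe Double
safePred x = if x > 0 then Just (x - 1) else Nothing

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

main :: IO ()
main = do
  print ((safeRecip <=< safePred) 3) -- Just 0.5
  print ((safeRecip <=< safePred) 1) -- Nothing (pred gives 0, recip fails)
```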

75.

@Matthew, after taking away the functional dependencies and changing the definitions of both adjuncts (catLeftAdjunct was missing a runKleisli), I have finally got:

instance Monad m => CatAdjunction (->) (Kleisli m) LeftK (RightK m) where
    catLeftAdjunct  f = RightK . runKleisli f . LeftK
    catRightAdjunct f = Kleisli $ unRightK . f . unLeftK

By the way, as LeftK is just the Identity functor and RightK m is just the functor m, I think one can simplify puzzle 3 to:

newtype Identity a = Identity { runIdentity :: a }
-- I do not use Data.Functor.Identity because its functor instance
-- overlaps with the EndoFunctor's one.

instance Monad m => CatFunctor (->) Identity (Kleisli m) where
    catMap = Kleisli . (pure .) . fmap

instance Monad m => CatFunctor (Kleisli m) m (->) where
    catMap (Kleisli f) ma = ma >>= f

instance Monad m => CatAdjunction (->) (Kleisli m) Identity m where
    catLeftAdjunct  f = runKleisli f . Identity
    catRightAdjunct f = Kleisli $ f . runIdentity


And that's all (I think).

76.

Great job Juan! I saw that LeftK was just Identity but I didn't think to pull it out of mtl and see the symmetry you did!

Here's what I have been thinking about where to go next:

• So, we have enough now to do the dual adjunction to (,) e $$\dashv$$ (->) e. Keith Peterson talked about it in another thread. We have ??? $$\dashv$$ (->) e, but the adjunction is not in Hask. It's in a new category based on Classical.

• Other than that there's still Ed Kmett's construction, which I think is what inspired him to make the adjunctions library in the first place. This is all purely in Hask.

• Finally, I would love to resume digging through Conal's paper, since I am excited to try to apply his machine learning stuff!

77.

@Matthew, thanks!! I'll push all the code to the repository. If I have time today, I'll try to do some reading, because I'm getting behind with the papers.

78.

Hi everyone, I'm very excited to see so many interesting things in this thread! I'd like to add one thing to the mix, which is my attempt at implementation of @Conal's "Simple essence of automatic differentiation" in Haskell, since I see you're talking about it here!

Although most of the code is already in the paper, my implementation is still a work in progress. I'm new to category theory so I'm mostly reading about the concepts before writing the code. I welcome any feedback!

Couple of notes:

• The paper defines several new datatypes to be instances of classes such as Category, Monoidal, Cartesian and Cocartesian. For all of those except Control.Category, I've defined my own instances. I do see that Haskell already has those other typeclasses defined, but I haven't figured out how to define all the typeclasses it requires (the definitions get more abstract as I follow the dependency chain).
• In the paper, the Cocartesian typeclass requires only the Category typeclass, but shouldn't it require Monoidal as well, since we're using the monoidal product in the definition?
• I'm still confused about the newtype D. I was convinced it should define the derivative the opposite way: b -> a instead of a -> b, since that's how I internalized it in my reverse-mode autodiff machine learning view of the world. Chapter 12 seems to talk about a way to solve this, but I'm struggling to understand how it can be solved without inverting the (a -> b) function!
79.

We wanted to finish up Conal's Compiling to Categories paper before moving on to automatic differentiation. The reverse-mode autodiff stuff has some overlap with Fong and Spivak's Chapter 2 as they both tackle related concepts.

I can try and read the automatic differentiation paper now, if you like. But could you post the code in particular you are wondering about so I can know what to focus on when I read?

80.
edited April 2018

@Matthew, I'm reading Conal's [Compiling to Categories](http://conal.net/papers/compiling-to-categories/), and I've found some differences in how we represent functions in the language `k`.

For instance, we have:

neg :: h k Bool -> h k Bool
neg a = notC . a


But Conal, to represent functions, has (page 1:4 of the extended version):

constFun :: Closed k => (a `k` b) -> (z `k` (a -> b))
constFun f = curry (f . exr)


And the representation of neg is then:

neg :: h k (Bool -> Bool)
neg = constFun notC


(IMHO this seems more natural, because to represent functions we use exponential objects in the category.)

I don't know whether this difference will be significant for what lies ahead.
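To see the two styles concretely, here is a sketch of Conal's `constFun` specialized to the plain function category `k = (->)` — an assumption made purely for illustration; the paper works over any closed category:

```haskell
-- In k = (->), curry is the Prelude's curry and exr is snd.
constFun :: (a -> b) -> (z -> (a -> b))
constFun f = curry (f . snd)

notC :: Bool -> Bool
notC = not

-- neg as a morphism into the exponential object Bool -> Bool,
-- ignoring its environment z:
neg :: z -> (Bool -> Bool)
neg = constFun notC
-- e.g. neg () True == False
```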

• Options
81.

@Matthew, sure, that makes sense. Perhaps it's better to leave all autodiff discussion until after Compiling to Categories is implemented, so there is less clutter in the thread.

• Options
82.
edited April 2018

By the way: if you guys start working on a project, it makes plenty of sense to start a new discussion thread devoted to that project. It's easy to do.

• Options
83.

@Reuben: I wrote a blog post about Eilenberg-Moore categories and T-algebras, which might be more approachable: https://bartoszmilewski.com/2017/03/14/algebras-for-monads/

• Options
84.

Functional programming is great, and I'm a huge fan -- but is there any interesting work on representing imperative languages categorically? Everyone and their grandma apparently knows that the simply-typed lambda calculus is the internal language of cartesian closed categories, but what about something like a simple Forth? I haven't had much luck looking for material on the semantics and theory of Forth, so I assume I've been looking in the wrong places (or using the wrong keywords).

• Options
85.
edited April 2018

@Jonathan, maybe in [A Foundation for Typed Concatenative Languages](https://www2.ccs.neu.edu/racket/pubs/dissertation-kleffner.pdf) you'll find some references of interest.
• Options
86.
edited April 2018

Another reference is Kirby's [Theory of Concatenative Combinators](http://tunes.org/~iepos/joy.html).

Reading over this and Kleffner's thesis that Juan linked (it's only 36 pages long), I'm a bit puzzled how to tease a category out. In that thesis, the type signatures of concatenative combinators always seem to have some side condition.

I think it might be that you could model concatenative languages with a Kleisli category, but that feels like cheating.
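As a toy illustration of where a category might come from, one can model the *untyped* fragment of a concatenative language as functions on a single stack, where concatenation of programs is just function composition. This is a sketch under the assumption of a single-sorted stack of integers; the side conditions in the thesis arise precisely when you try to type the stack more finely:

```haskell
type Stack = [Int]

-- Each word of the language is an endofunction on the stack.
push :: Int -> Stack -> Stack
push = (:)

dup :: Stack -> Stack
dup (x:s) = x : x : s
dup s     = s          -- no-op on an underfull stack

add :: Stack -> Stack
add (x:y:s) = (x + y) : s
add s       = s        -- no-op on an underfull stack

-- The program "3 4 add dup", read left to right, composes right to left:
prog :: Stack -> Stack
prog = dup . add . push 4 . push 3
-- prog [] == [7,7]
```

In this one-object category (the object is `Stack`), composition of morphisms models concatenation directly, though the untyped stack is exactly what the typed treatments are trying to avoid.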

• Options
87.

@Bartosz Thanks! I've been reading your blog for a while - it's really excellent!

• Options
88.

Joseph Goguen's book [Algebraic Semantics of Imperative Programs](https://mitpress.mit.edu/books/algebraic-semantics-imperative-programs) is "a self-contained and novel 'executable' introduction to formal reasoning about imperative programs." The author [writes](https://www.mta.ca/~cat-dist/catlist/1999/book-goguen) that it "does not explicitly use category theory, but almost everything in it was inspired in one way or another by category theory."

Goguen's homepage has more about the [OBJ language family](http://cseweb.ucsd.edu/~goguen/sys/obj.html) (and a lot of other very interesting topics).
• Options
89.
edited May 2018

Jonathan wrote:

> Everyone and their grandma apparently knows that the simply-typed lambda calculus is the internal language of cartesian closed categories [...]

Yes, but what people often forget is that cartesian closed categories don't formalize what's really _interesting_ about the simply-typed lambda calculus, namely that it can be seen as a simple programming language.

The problem is that we get a morphism in a cartesian closed category from an _equivalence class_ of terms-in-context, where two terms related by beta or eta reduction are considered equal. Thus, a program that huffs and puffs and adds 2 and 3, and one that simply prints out 5, count as _the same morphism!_

So, even in this "well-understood" case, the relation between category theory and programming has not been worked out in detail.

I began trying to deal with this problem in [the winter of 2007](http://math.ucr.edu/home/baez/qg-winter2007/index.html#computation) when Mike Stay started working with me. Now he's helping run the company [Pyrofex](https://johncarlosbaez.wordpress.com/2018/02/04/pyrofex/), and I'm working with him and my new student Christian Williams to tackle this issue.

A good approach would also handle imperative languages. I will have to read Goguen's book.
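To make the point concrete in Haskell (an illustration only, reading closed terms as morphisms in the cartesian closed category of types and functions):

```haskell
-- A term with a beta-redex that "computes" 2 + 3 at each call...
slow :: Int -> Int
slow _ = (\x y -> x + y) 2 3

-- ...and its beta-normal form.
fast :: Int -> Int
fast _ = 5

-- As morphisms in the CCC, slow and fast are the *same* arrow:
-- beta reduction identifies them. As programs, one does work and
-- the other doesn't -- a distinction the categorical semantics erases.
```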

• Options
90.

Conal Elliott: Could you briefly explain the universal property in section 2 of Compiling to Categories? I don't understand the syntax of the formula given, notably the triangle (fork) operator: $$\forall h.\; h \equiv f \mathbin{\triangle} g \iff \mathrm{exl} \circ h \equiv f \;\wedge\; \mathrm{exr} \circ h \equiv g$$
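For readers following along in Haskell, the property can be sketched in the plain function category (an assumption for illustration; the paper states it for an arbitrary cartesian category):

```haskell
-- The fork of f and g (the triangle operator in the paper):
fork :: (a -> c) -> (a -> d) -> (a -> (c, d))
fork f g a = (f a, g a)

-- Universal property: for any h :: a -> (c, d),
--   h = fork f g   if and only if   fst . h = f  and  snd . h = g
-- (in the paper's notation, exl = fst and exr = snd).
```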

• Options
91.
edited May 2018

We've had lectures on manufacturing, and chemistry + scheduling, but how about manufacturing and scheduling?

Manufacturing has 3 layers. In terms of Chapter 2, they are Recipe, Schedule (Plan in the diagram below), and Reality (what happened) or Observations (what was reported about what happened). And all three layers are similar, in parallel, and interconnected top to bottom to top. The Schedule is generated from the Recipe, and Reality tries to follow the Schedule, but often fails.

How would all those relationships be represented in category theory? I'd be happy with hand-wavy pointers or full solutions, but don't expect full solutions.

![salsa layers](https://raw.githubusercontent.com/valueflows/valueflows/master/release-doc-in-process/process-layer.png)
• Options
92.
edited May 2018

Since your question is not mainly about programming, Bob, I might be able to help you with it, and I'm surprised to find it here rather than in our discussion of "resource theories".

It's a bit late now, so I'm not up to a detailed analysis right this minute, but the way it would work is that I'd grill you and force you to state your ideas very precisely and translate them into category theory.

If the Schedule is generated in a systematic way from the Plan we might have a functor or other systematic thing going on. It's harder for math to capture relationships where something "tries but often fails", but there are various ways to do it.

On a much simpler level, I've explained:

1. Petri nets and (equivalently) reaction networks for processes in which resources are combined to give other resources,

2. Linear programming for helping figure out how to maximize a linear function given linear constraints, and

3. PERT charts for scheduling.

But in fact all three are different fragments of a unified structure: a **timed Petri net**. I'll explain this in a week or so.

So, while I probably don't yet know how to do what you want, there's more to the story than I've managed to explain so far.

• Options
93.

Thanks, John. I didn't know where to put it; it seemed beyond where we are in Chapter 2. And I do code this stuff; see https://hillside.net/plop/plop97/Proceedings/haugen.pdf

I'm open to being grilled, but am very slow to read and write the mathematical notation so far (I don't have a good math background), so it might be better to wait for your timed Petri net explanation and see what it does for me.

• Options
94.

Okay, why don't you wait until we're done with Chapter 2 on Friday May 25th and then post questions/comments on some relevant-seeming lecture.

My questions will mainly be trying to precisely figure out what you mean; often such questions make people get precise about things they'd left vague before, or at least see where the regions of vagueness lie.

• Options
95.

Will do. I have running code being used by real people for everything I am thinking about, so it can all be as precise as the code, but the code is not yet categorized, so will not be precise in that sense.

• Options
96.

> I have running code being used by real people for everything I am thinking about, so it can all be as precise as the code, but the code is not yet categorized, so will not be precise in that sense.

What do you write in?

• Options
97.

> What do you write in?

Mostly Python.

• Options
98.

Great - if you've got code, I won't wind up saying "hmm, that could be a nice idea if you made it precise". At worst we'll wind up saying "hmm, it would be nice if either I understood your code better or you could talk to me using more category theory".

• Options
99.

When you wind up Chapter 2, I will devote some concentrated time to being able to talk to y'all using category theory + code, and we'll see where it goes. I figure it will take me a week after May 25, so I will commit to being ready by June 4, and will post something wherever you want it, which you will be welcome to destruct.

• Options
100.

Work in progress, produced by Python code and http://www.cytoscape.org/:

![eggs to compost](https://user-images.githubusercontent.com/117439/40447738-303ec846-5e99-11e8-953f-2be02a9a0272.png)

Eventually this could go in [Ignoring Externalities](https://forum.azimuthproject.org/discussion/2105/lecture-28-chapter-2-ignoring-externalities), but the discussion there was so focused I thought it would be disruptive right now. (And I am ignoring egg yolk and egg white for now, too.)