
Lecture 21 - Chapter 2: Monoidal Preorders


Comments

  • 51.
    edited May 2018

    I'm not sure if it's good to pose puzzles that use concepts beyond what's covered in the lectures. (John?) But for now, here it goes:

    Puzzle TF2: Suppose that we have a monoidal preorder \(X\) which satisfies \(x\otimes x \leq x\) and \(x \leq x\otimes x\) for all \(x\in X\). Does this imply that \(X\) is a semilattice?

    Background: in a resource theory, one may want to call a resource \(x\) quality-like if it satisfies \(x \leq x\otimes x\) and \(x\otimes x \leq x\). This means that no matter how many copies of \(x\) you have, you can turn them into any other number of copies, so quantity doesn't matter at all. For example, knowledge is often of this type: the cost of reproducing knowledge is often negligible. So the puzzle really asks: if all resources are quality-like, what does this imply about the mathematical structure of our resource theory?
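    A small worked step (a sketch, using only the monotonicity condition that is part of the definition of a monoidal preorder) makes "any number of copies" precise: writing \(x^{\otimes n}\) for the \(n\)-fold product, the two inequalities propagate to all numbers of copies.

```latex
% Claim: if x \le x \otimes x and x \otimes x \le x, then
% x^{\otimes m} \le x^{\otimes n} for all m, n \ge 1.
%
% Induction on n, using monotonicity of \otimes:
%   x^{\otimes (n+1)} = x \otimes x^{\otimes n}
%                     \le x \otimes x   % inductive hypothesis: x^{\otimes n} \le x
%                     \le x             % assumption x \otimes x \le x
% so x^{\otimes n} \le x for every n \ge 1; dually, x \le x^{\otimes n}.
% Chaining the two: x^{\otimes m} \le x \le x^{\otimes n}.
```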

  • 52.
    edited May 2018

    Tobias, should we also require that \(\otimes\) is symmetric? Otherwise, I can imagine having \(x \otimes y \not\le x \otimes y \otimes x\), which defeats the intuition of "number of copies doesn't matter".

  • 53.
    edited May 2018

    @Jonathan: yes, symmetric examples would definitely be the more interesting ones, for the reason that you mention. I didn't want to require this, though, since it's the subject of Lecture 22.

  • 54.
    edited May 2018

    Matthew Doty, thanks for the puzzle and the references on difference lists! I think the answer to your puzzle, MD1, is:

    instance Semigroup (DList a) where
      (DList f) <> (DList g) = DList $ f . g

    instance Monoid (DList a) where
      mempty = DList id
    
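    For readers who want to run this: the answer assumes a DList wrapper that isn't shown in the thread. A minimal self-contained sketch (illustrative names, not necessarily the exact API of the dlist package on Hackage):

```haskell
-- Minimal difference-list sketch; names are illustrative, not the exact
-- API of the dlist package on Hackage.
newtype DList a = DList ([a] -> [a])

fromList :: [a] -> DList a
fromList xs = DList (xs ++)

toList :: DList a -> [a]
toList (DList f) = f []

-- Appending is O(1): just function composition.
instance Semigroup (DList a) where
  DList f <> DList g = DList (f . g)

instance Monoid (DList a) where
  mempty = DList id
```

    With this, toList (fromList [1,2] <> fromList [3,4]) yields [1,2,3,4]; each element is consed exactly once, when toList forces the composed function.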

    However, I'm not really sure where the improvement in performance comes in the case of difference lists. You mentioned in comment #40:

    The reason for doing the unevaluated function indirection in difference lists is that (reduce append lol) takes \(\mathcal{O}(n^2)\) time.

    • If we implement list concatenation as a fold right, foldr (++) [], then I think the complexity is linear in the total number of elements of the list of lists, lol.
    • If we implement list concatenation as a fold left, foldl (++) [], then it seems that the complexity is quadratic; I see why this happens in a strict language, but I'm not sure whether that's true in a lazy language, such as Haskell.
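    One way to see the asymmetry without benchmarks is to count the cons cells that (++) allocates: each call to (++) copies its entire left argument. The operator (+/+) and the functions costFoldl, costFoldr below are made-up helpers, just for illustration:

```haskell
-- A cost-counting append: the pair carries the list built so far together
-- with how many cons cells (++) has allocated, i.e. the cumulative length
-- of its left arguments.
(+/+) :: ([a], Int) -> [a] -> ([a], Int)
(xs, c) +/+ ys = (xs ++ ys, c + length xs)

-- Left fold: each step re-copies the whole accumulated prefix.
costFoldl :: [[a]] -> Int
costFoldl = snd . foldl (+/+) ([], 0)

-- Right fold: each inner list is copied only once.
costFoldr :: [[a]] -> Int
costFoldr = snd . foldr (\xs (acc, c) -> (xs ++ acc, c + length xs)) ([], 0)
```

    For \(n\) singleton lists the left fold pays \(0 + 1 + \cdots + (n-1) = n(n-1)/2\) copies while the right fold pays only \(n\) (the total length). Laziness changes when that work happens, not the total once the whole result is demanded.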

    Edit: Meanwhile, I've found this blog post, which answers my question above.

  • 55.

    re Puzzle TF2 surely the answer is no: take any lattice \(L\), consider the monoid \(\langle L, \wedge, \top\rangle\), now replace the original partial order on \(L\) with the discrete partial order – so we have \(x \otimes x = x\) but \(L\) no longer has joins or meets.
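    Spelling out why the discrete order works (a sketch):

```latex
% Take any lattice L, e.g. L = \{\bot, \top\}, with x \otimes y := x \wedge y
% and unit \top, but re-order L discretely: x \le y \iff x = y.
% Idempotence of \wedge gives x \otimes x = x, hence both
%   x \otimes x \le x   and   x \le x \otimes x.
% Monotonicity holds trivially: x \le x' and y \le y' force x = x', y = y'.
% But in the discrete order, distinct elements have no common upper or lower
% bounds, so L has no binary meets or joins: it is not a semilattice.
```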

  • 56.

    @Matthew – I know a little Haskell (self-taught amateur) so I'll take a look at Data.Monoid – thanks for the pointer!

  • 57.

    @Anindya: Right, good job!

  • 58.
    edited May 2018

    I'm not really sure where the improvement in performance comes in the case of difference lists.

    I admittedly haven't done microbenchmarks to see how bad things are.

    Bryan O'Sullivan et al. certainly discuss this optimization in Chapter 13 of Real World Haskell. Bryan O'Sullivan is also the author of the DList package on Hackage.

    Laziness can mitigate the quadratic slowdown; I will try to write some microbenchmarks later today to see whether it is handled.

  • 59.
    edited May 2018

    If we implement list concatenation as a fold right, foldr (++) [], then I think the complexity is linear in the total number of elements of the list of lists, lol.

    I promise to get to the microbenchmarks, but I wanted to go over why this isn't always an easy solution.

    It's not always easy to use foldr (++) []

    Consider the following elementary backtrack search:

    module BackTrackSearch where
    
    import Control.Applicative (Alternative, (<|>), pure, empty)
    
    data BTree a = Branch (BTree a) (BTree a)
                 | Leaf a
    
    findInBTree :: Alternative m => (a -> Bool) -> BTree a -> m a
    findInBTree p (Branch left right) =     findInBTree p left 
                                        <|> findInBTree p right
    findInBTree p (Leaf a) = if p a 
                             then pure a 
                             else empty
    

    We can swap Alternative functors to get different semantics.

    If I just wanted to find one leaf, I could use findInBTree :: (a -> Bool) -> BTree a -> Maybe a.

    But if I wanted Prolog-style semantics, then I can pick between findInBTree :: (a -> Bool) -> BTree a -> [a] and findInBTree :: (a -> Bool) -> BTree a -> DList a, among other suitable functors. With the former we would get \(\mathcal{O}(n^2)\) searching in the worst case; with the latter, \(\mathcal{O}(n)\).

    I could use foldr too, of course, but it would require a rewrite of findInBTree.
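    To make the swap of Alternative functors concrete, here is a self-contained restatement with two concrete choices; the example tree and the names firstEven and allOdd are mine:

```haskell
import Control.Applicative (Alternative, (<|>), empty)

-- Restated from the comment above, so this compiles on its own.
data BTree a = Branch (BTree a) (BTree a)
             | Leaf a

findInBTree :: Alternative m => (a -> Bool) -> BTree a -> m a
findInBTree p (Branch left right) =     findInBTree p left
                                    <|> findInBTree p right
findInBTree p (Leaf a) = if p a then pure a else empty

-- A small tree with leaves 1, 2, 3 in left-to-right order.
tree :: BTree Int
tree = Branch (Branch (Leaf 1) (Leaf 2)) (Leaf 3)

-- Maybe: stop at the first match.
firstEven :: Maybe Int
firstEven = findInBTree even tree

-- Lists: collect every match, Prolog-style backtracking.
allOdd :: [Int]
allOdd = findInBTree odd tree
```

    The same search code yields Just 2 in the Maybe functor and [1,3] in the list functor, just by choosing the result type.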

  • 60.
    edited May 2018

    Tobias wrote:

    I'm not sure if it's good to pose puzzles that use concepts beyond what's covered in the lectures. (John?)

    One problem is that we have about 250 people registered for the course, and a much smaller number of very active participants. If we keep focusing attention on the most knowledgeable students, the others will become increasingly scared to join the conversation. I wish we had lots of comments asking basic questions about the lectures. Unfortunately we don't - and I don't think it's because everything I write is so clear that nobody could conceivably have a question.

    On the other hand, it's mainly my responsibility, not yours, to think about these things.

    To reduce the intimidation effect, I added a link to a definition of "semilattice" in your question. If someone carefully examines everything in that link, your question becomes much easier.

  • 61.

    Matthew, thanks a lot for the compelling example of backtrack search! Do not worry too much about the micro-benchmarks; after asking the question, I found some answers on why difference lists improve performance [1, 2].

    A few minor corrections to your previous comment:

    • The function findInBTree is missing BTree a from its type; its signature should be findInBTree :: (a -> Bool) -> BTree a -> m a
    • Leaft should be Leaf
  • 62.

    Tobias wrote:

    Background: in a resource theory, one may want to call a resource \(x\) quality-like if it satisfies \(x \leq x\otimes x\) and \(x\otimes x \leq x\). This means that no matter how many copies of \(x\) you have, you can turn them into any other number of copies, so quantity doesn't matter at all. For example, knowledge is often of this type: the cost of reproducing knowledge is often negligible. So the puzzle really asks: if all resources are quality-like, what does this imply about the mathematical structure of our resource theory?

    This is very nice. I've begun discussing this at the end of Lecture 23. However, I start from the other end: I assume we have a poset with finite meets, and pose as a puzzle to show that every resource (= element of the poset) is what you're calling "quality-like".

  • 63.

    Dan - Thanks for the catch! I went back and edited my post.

    I am very embarrassed about not running my code before posting.

  • 64.
    edited May 2018

    Trying to devise an example to settle the concept:

    Suppose that the human population's tendency (a quality-like resource?) toward superexploitation of a public resource \(S\) is an object (a structure/pattern), and that each individual of our population \(P\) is a morphism from this object to itself (preserving this structure/pattern). This resembles a monoidal category but not a monoidal preorder, OK? One could devise a monoidal preorder by endowing these individuals with some underlying "law" inducing an order, correct? Or perhaps by splitting this quality-like resource (superexploitation) according to some intrinsic ordering law? Thanks in advance.

  • 65.

    You can view any set as a discrete preorder, actually.

  • 66.

    Thanks @Keith

  • 67.
    edited May 2018

    John wrote:

    This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg and a slice of bread into a fried egg and a piece of toast!

    As it turns out, mathematicians, as well as programmers, try to define minimal but sufficient interfaces (properties) to state that some structure or object (behaves | is constructed | can be used) in a certain way :) I wonder why exactly this type of order was chosen for monoidal preorders (it may also be called a dictionary order, if I remember right). What would we miss if some substitute, like \(x \otimes y \rightarrow z \Rightarrow x \leq z, y \leq z\), were used?

    Edit: as it turns out, some answers are in Lecture 22. Still, it is interesting to see how the alternative "monotonicity condition" differs from the one given in the lectures.

  • 68.
    edited May 2018

    Igor - I wouldn't call this a "dictionary order", which is another name for "lexicographic order". In the lexicographic order, the word "at" comes before the word "ba" because \(a \le b\), even though \(t \le a\) is false: the first letter takes precedence.

    As for your main question, we're discussing it here in the comments to Lecture 22 now, so let's continue talking there. If you want to ask me about some specific alternative possibilities for the definition of "monoidal preorder", or the difference between the lexicographic order and the product order, that would be great!

  • 69.

    As for your main question, we're discussing it here in the comments to Lecture 22 now, so let's continue talking there

    John - sure, brewing up some questions, will post to Lecture 22.

    As for lexicographic order - thanks for pointing out the difference; indeed this is another type of order on a Cartesian product.

  • 70.
    edited May 2018

    Puzzle 62 An easy way to repair the failed monoidal preorder would be to throw away part of the real numbers. $$ \textbf{R}^+ := ( \mathbb{R}^+, \le, 1, \cdot ) $$ discarding the negatives and 0.

    Also, I think the difficulty pointed out by @JonathanCastello, where magnitudes are used to compare real numbers, could be augmented. If two values have the same magnitude but different signs, the negative value is smaller. In other words, in order to be equal they must have the same magnitude and the same sign, e.g. \( -1.0 \lt 1.0 \lt 2.1 \lt -3.3 \).

  • 71.
    edited May 2018

    Frederick - that's a nice solution to Puzzle 62. Note also that \(\mathbb{R}^+\) with \(\le\) as partial order, \(\cdot\) as monoidal operation and \(1\) as unit is isomorphic as a monoidal poset to \(\mathbb{R}\) with \(\le\) as partial order, \(+\) as monoidal operation and \(0\) as unit. An isomorphism is

    $$ \ln : \mathbb{R}^+ \to \mathbb{R} $$ with inverse

    $$ \exp: \mathbb{R} \to \mathbb{R}^+ . $$ Another cute thing: you didn't need to throw out zero! There's another nice monoidal poset: \( [0,\infty) \) with \(\le\) as partial order, \(\cdot\) as monoidal operation and \(1\) as unit. This is isomorphic as a monoidal poset to \( [-\infty, \infty) \) with \(\le\) as partial order, \(+\) as monoidal operation and \(0\) as unit.
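    A quick floating-point spot-check of this isomorphism (approximate equality, since these are Doubles; the helper names approx, homomorphic, monotone, unitOk are mine):

```haskell
-- Check numerically that log : (R^+, *, 1) -> (R, +, 0) preserves the
-- monoidal structure and the order, up to floating-point error.
approx :: Double -> Double -> Bool
approx a b = abs (a - b) < 1e-9

-- log turns multiplication into addition.
homomorphic :: Double -> Double -> Bool
homomorphic x y = approx (log (x * y)) (log x + log y)

-- log preserves and reflects the order on positive reals.
monotone :: Double -> Double -> Bool
monotone x y = (x <= y) == (log x <= log y)

-- log sends the multiplicative unit 1 to the additive unit 0.
unitOk :: Bool
unitOk = approx (log 1) 0
```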
