
Okay, it's time to start up the lectures again!

Here's a place for discussing Chapter 2 of *Seven Sketches*. This chapter is about "resource theories". Resource theories tackle questions like:

- Given what I have, is it possible to get what I want?
- Given what I have, what is the minimum cost to get what I want?
- Given what I have, what is the set of ways to get what I want?

The technical tools you'll learn in this chapter include string diagrams, monoidal preorders and enrichment. Dive in and start asking questions!
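As a toy illustration of the first question (my own sketch, not from the book: the resource names and conversion rules below are made up), one can model resources as multisets and processes as rewrite rules, so "can I get what I want from what I have?" becomes a reachability check:

```python
# Toy sketch: resources as multisets (Counter), processes as rewrite rules.
# "Given what I have, is it possible to get what I want?" = reachability.
from collections import Counter

RULES = [  # hypothetical conversions: consume lhs, produce rhs
    (Counter({"egg": 1}), Counter({"yolk": 1, "white": 1})),
    (Counter({"yolk": 1, "butter": 1}), Counter({"filling": 1})),
]

def applies(have, lhs):
    """A rule applies when we hold at least its inputs."""
    return all(have[k] >= v for k, v in lhs.items())

def reachable(have, want, depth=5):
    """Depth-bounded search: can `want` be produced from `have`?"""
    if all(have[k] >= v for k, v in want.items()):
        return True
    if depth == 0:
        return False
    for lhs, rhs in RULES:
        if applies(have, lhs):
            nxt = Counter(have)
            nxt.subtract(lhs)   # consume inputs
            nxt.update(rhs)     # produce outputs
            if reachable(nxt, want, depth - 1):
                return True
    return False

assert reachable(Counter({"egg": 1, "butter": 1}), Counter({"filling": 1}))
assert not reachable(Counter({"butter": 1}), Counter({"filling": 1}))
```

The order "what I have can be turned into what I want" is exactly the kind of preorder on resources that the chapter's monoidal preorders axiomatize.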

## Comments

I'm pretty excited to dig into this chapter! I'm really interested in resource theories from the perspective of modeling interaction between concurrent programs. Various [separation logics](http://cs.au.dk/~abizjak/documents/publications/box-modality-mfps.pdf) rely on some notion of resource, and I'd like to better understand how this works.

Can you say a word about what "separation logics" are? The paper you cited doesn't start by explaining that term. Partially ordered commutative monoids, I understand. But I don't have any idea of what sort of "separation" is involved in separation logics.


I'm no expert myself! To the best of my knowledge, the simplest kinds of separation logic allow you to reason about "heaps" -- maps from identifiers ("memory locations") to values -- which essentially model memory in an imperative computer program. A "separating conjunction" is used to combine statements about two disjoint heaps into a single statement about a single heap -- more specifically, it says that a heap can be factored into two parts, each satisfying one of the conjuncts.

You can layer some serious abstraction onto this basic idea to get various degrees of shared memory, allowing you to reason about the combined effects of two concurrent programs on shared memory. This _seems_ to boil down to a distinction between physical state and "ghost state", where a physical state is logically some combination of extant ghost states that have been separated out.

Also, my understanding is that the "partial" in "partial commutative monoid" refers to how \(x \cdot y\) may not be defined for all \(x, y\). I think this is meant to restrict the kinds of actions independent threads can take over the shared resource -- if one thread has \(x\) and another thread has \(y\), and the first thread updates its ghost state to \(x'\), then it may no longer be the case that \(x' \cdot y\) is well-defined, meaning that that update is formally disallowed.

Here's [a tutorial on separation logic](http://www0.cs.ucl.ac.uk/staff/p.ohearn/papers/iclp08proceedings.pdf) with some nice diagrams.
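To make the "partial commutative monoid of heaps" concrete, here is a minimal sketch of my own (not from the cited papers): heaps as dictionaries, with the separating combination defined only when the domains are disjoint.

```python
# Minimal sketch: heaps as a partial commutative monoid.
# The operation is union of maps with *disjoint* domains; it is
# partial, i.e. undefined (None here) when memory locations overlap.

def combine(h1, h2):
    """Separating combination h1 * h2, defined only for disjoint heaps."""
    if set(h1) & set(h2):   # overlapping memory locations
        return None          # x . y is undefined
    return {**h1, **h2}

a = {0x10: "x"}
b = {0x20: "y"}
assert combine(a, b) == {0x10: "x", 0x20: "y"}   # disjoint: defined
assert combine(a, {0x10: "z"}) is None            # overlap: undefined
assert combine(a, b) == combine(b, a)             # commutative where defined
```

The last assertion illustrates the point above: if one thread's heap changed so that it overlapped another's, their combination would simply stop being defined.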

Thanks! This stuff sounds hard to me: I don't ever think about memory in imperative computer programs, so I don't have the intuitions that this work is trying to formalize.


Because I'm tired of Hollywood making plothole-filled stories, I wouldn't mind formalizing the resource theory of stories.

There exist papers, such as ['Generative Story Worlds as Linear Logic Programs'](https://www.cs.cmu.edu/~cmartens/int7.pdf), which show that stories can be seen as a linear logic, but I wouldn't mind describing stories in the general case, plus giving stories string-diagrammatic semantics.

In fact, I might make a 'Categories for the Working Storyteller' thread in the *Applied Category Theory Discussion Groups* subforum if enough people find interest.

> This stuff sounds hard to me.

Me, too! ;)

Honestly, I think it's less about the details of computer memory and more about how to work with potentially out-of-date knowledge. If you share a bank account with someone, you want to make sure you never overdraft your account -- but without additional rules about what each person can do with the account, there's simply no way to guarantee that. You can only withdraw if you're certain the account will be able to cover the withdrawal, but at any time the other person could make a withdrawal of their own. The upshot is that you can never withdraw if you want a guarantee against overdrafts.

I've only just started reading this chapter, so perhaps I'll be able to come back to this in the coming weeks with a better angle on things.

This chapter is why I signed up for this course. I am motivated by two project areas where compositionality is a critical issue.

- Economic networks and supply chains, where some resources are composed of other resources, and some organizations are connected in a network when one org's component is an input to another org's manufacturing process. Likewise, recipes (e.g. the lemon meringue pie) could be connected when one process separates eggs, another makes pie crusts, and yet another assembles and bakes the pies. (Commercial bakeries often do actually buy separated eggs, yolks and whites in separate containers.)
- Software projects that are developing ecosystems of apps. In internal "enterprise" projects, these are often called "microservice meshes", and also often a management nightmare. In open source software, I am involved in two different projects that call themselves something like "open app ecosystem", and I know of several others. In this case, the question is how the flocks of software components can be composed -- connected to each other to do something larger and more useful.

In both cases, rules of compositionality would be useful. In the economic networks, they are somewhat known. In the software networks, not so much.

> In open source software, I am involved in two different projects that call themselves something like "open app ecosystem"

What are your software suites called? I am curious...

This is a discussion group between several people who are creating several suites:

https://www.loomio.org/g/exAKrBUp/open-app-ecosystem

Here are a couple of particular suites, all work-in-progress:

- https://www.loomio.org/d/KEcf2u84/experimentation-with-open-app-ecosystem
- https://www.opencoopecosystem.net/

I wrote:

> This stuff sounds hard to me.

Jonathan concurred:

> Me, too. ;)

The paper you pointed me to has some references on "bunched logic". I'm finding this helpful:

- Wikipedia, [Bunched logic](https://en.wikipedia.org/wiki/Bunched_logic).

The entailment symbol \(A \vdash B\) can be read as \(A \le B\), and then we can use our knowledge of posets to guess what's going on. They mention that in classical logic we have

$$ A \wedge B \vdash C \quad \mbox{iff} \quad A \vdash B \Rightarrow C $$

We can read \(A \wedge B\) as the meet of \(A\) and \(B\) in some poset. Then the above "iff" says that the monotone map from our poset to itself sending any \(A\) to \(A \wedge B\) has a right adjoint sending \(C\) to some element... which we call \(B \Rightarrow C\).

This is classical logic done with adjoints, or Galois connections. "Bunched" logic is a variant of this theme. Perhaps it's using a different monoidal operation \(\ast\) on our poset. If so, we're in luck, because we'll soon be studying monoidal posets!
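The adjunction above can be checked by brute force in the simplest case, the two-element Boolean poset, reading \(\vdash\) as \(\le\) (with False \(\le\) True) and \(B \Rightarrow C\) as material implication. A quick sketch:

```python
# Checking  (A AND B) |- C  iff  A |- (B => C)  in the Boolean poset
# {False, True}, with |- read as <= and => as material implication.
from itertools import product

def leq(x, y):
    """The poset order x <= y, i.e. False <= True."""
    return (not x) or y

def implies(b, c):
    """The right adjoint B => C (material implication)."""
    return (not b) or c

# The meet (- AND B) is left adjoint to implies(B, -):
for a, b, c in product([False, True], repeat=3):
    assert leq(a and b, c) == leq(a, implies(b, c))
```

All eight cases pass, which is exactly the "iff" in the displayed formula, specialized to truth values.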

Keith wrote:

> There exist papers, such as ['Generative Story Worlds as Linear Logic Programs'](https://www.cs.cmu.edu/~cmartens/int7.pdf) which show that stories can be seen as a linear logic [...]

Cool! This logic has predicates like "X feels affection toward Y", "X is married to Y" and "X is suicidal". I can definitely imagine cranking out abstract soap opera plots using this sort of logic. :-)

Bob Haugen wrote:

> Economic networks and supply chains where some resources are composed of other resources, and some organizations are connected in a network when one org's component is an input to another org's manufacturing process.

I'm definitely interested in this stuff! In my talk at ACT2018, I concluded by pointing out that in nature there is very little "waste": what one organism discards, another uses. I said we should try to emulate that. Afterwards, Helle Hvid told me to check out [circular manufacturing](https://www.manufacturingglobal.com/lean-manufacturing/birth-circular-manufacturing) and [industrial symbiosis](https://en.wikipedia.org/wiki/Industrial_symbiosis). I hadn't known these buzzwords, but they're just what I'm interested in, and I believe they _can_ be formalized, at least to some extent, using resource theories.

I'm ultimately motivated by ecological and sustainability issues, as you can see from my talk slides:

- [Props in network theory](http://math.ucr.edu/home/baez/ACT2018). (Unfortunately, without me talking, not all of this will make as much sense.)

The link to the slides seems broken.


[#13](https://forum.azimuthproject.org/discussion/comment/17774/#Comment_17774)

Any chance the video will be available?

Jesus - whoops! I fixed the link.

Artur - I just discovered that the Statebox team videotaped most of my talk, though the start is missing. It's [here](https://www.youtube.com/watch?v=FRKZkCU3PhE). You can see most of the ACT2018 talks [here](https://johncarlosbaez.wordpress.com/2018/04/30/applied-category-theory-2018-live-stream/).

> circular manufacturing and industrial symbiosis. I hadn't known these buzzwords, but they're just what I'm interested in, and I believe they can be formalized, at least to some extent, using resource theories.

That is my hope, too.

> I'm ultimately motivated by ecological and sustainability issues,

There's a project coming up this summer, supposedly, called Accounting for Planetary Survival, where I have already thrown this class into the "potential sources" collection. If it gets officially announced, I will announce it prominently here.

John Baez wrote:

> Cool! This logic has predicates like "X feels affection toward Y", "X is married to Y" and "X is suicidal". I can definitely imagine cranking out abstract soap opera plots using this sort of logic. :-)

Yes, you can.

However, I want to see exactly how far one can take narratives and stories. For instance, I'm sure everyone understands that stories can be composed, but also: if we're following multiple characters, stories can take place in parallel; if a character picks up a book or starts retelling a story, then we have nested stories; if the story within the story is the story itself, then we have a recursive story; and then there's time travel, multiple in-story universes, and branching timelines (comic book authors love these). Basically, storytelling can become a mess, and plot holes can easily crop up if the authors aren't using some sort of 'narrative type-checking'.

Plus, as an amateur mathematician and philosopher, it would be nice to know that the use of narratives and storytelling as "intuition pumps" to get ideas across is perfectly valid.

Keith E. Peterson wrote:

> if we're following multiple characters, stories can take place parallel to each other

I wonder if George R. R. Martin might finish ASoIaF sooner if he were to have a tool for managing his hundreds of parallel plotlines. (I'm only half joking.)

John Baez wrote:

> circular manufacturing and industrial symbiosis. I hadn't known these buzzwords, but they're just what I'm interested in, and I believe they can be formalized, at least to some extent, using resource theories

Keith Peterson wrote:

> There exist papers, such as 'Generative Story Worlds as Linear Logic Programs' which show that stories can be seen as a linear logic, but I wouldn't mind describing stories in the general case, plus giving stories string diagrammatic semantics.

Chapter 2 is already so very interesting!

Comment 11:

> The entailment symbol \(A \vdash B\) can be read as \(A \le B\), and then we can use our knowledge of posets to guess what's going on.

What is the exact meaning of this entailment symbol? It doesn't seem to be listed here:

https://en.wikipedia.org/wiki/Logical_connective

@nad: You mean \(\vdash\)? It isn't quite a logical connective -- those live "inside" the logic, so to speak, and make up the structure of any given logical proposition. The [turnstile](https://en.wikipedia.org/wiki/Turnstile_%28symbol%29) \(\vdash\) is part of the metalanguage: it's used to talk _about_ propositions. Mathematically, it is a relation between propositions -- which is why John suggests reading it as \(\le\). The judgment \(A \vdash B\) is pronounced "\(B\) is provable from \(A\)".

@Keith – you may be interested in a project by a friend of mine who's using a kind of modal logic ("eremic logic") to model multi-character social interactions: https://versu.com/ – technical details here: https://versublog.files.wordpress.com/2014/05/ptai_evans.pdf

@Jonathan Costello: Aha, this is a turnstile symbol - thanks for the hint. If I read through this: https://de.wikipedia.org/wiki/Begriffsschrift#Inhaltsstrich_und_Urteilsstrich it says that the symbol "-" meant that a judgement about the content can be made. The symbol "|" in front of "-" meant that the content is "true" (Frege in some sense called this an "assertion sign", saying that "der Inhalt werde mit „behauptender Kraft“ geäußert" - "the content is expressed with 'assertive force'"; "Behauptung" = assertion). That seems a bit different from the present-day meaning of "provable". That is, I understand "provable" as meaning that something can be proved to be true or false, i.e. more like what Wikipedia claims was originally meant by the symbol "-".

@Jonathan_Costello In addition to my previous comment (24), it should also be mentioned that [linguistic entailment](https://en.wikipedia.org/wiki/Entailment_(linguistics)) has yet another meaning.

@Anindya Bhattacharyya

Thanks! This is a really interesting read.


Nad - people use \(A \vdash B\) to mean lots of slightly different things, but basically it means "if you assume A, you can prove B". In other words, it stands for ["logical consequence", also known as "entailment"](https://en.wikipedia.org/wiki/Logical_consequence).

In a bit more detail: the idea is that we fix some system of logic with some deduction rules. A and B are statements, or sometimes lists of statements ("sequents"), in this system of logic. If starting from A we can deduce B, we write \(A \vdash B\).

As Jonathan explained, \(\vdash\) is not a logical connective. A logical connective lets you take some statements in your logical system and connect them to get another statement in that system. \(\vdash\) is an example of a [judgement](https://en.wikipedia.org/wiki/Judgment_(mathematical_logic)).

@John_Baez @Jonathan_Costello

John wrote in 27:

> Nad - people use \(A \vdash B\) to mean lots of slightly different things, but basically it means "if you assume A, you can prove B".

and

> As Jonathan explained, \(\vdash\) is not a logical connective.

Are you sure? That seems to be the interpretation in what is called [proof theory](https://en.wikipedia.org/wiki/Turnstile_(symbol)). Unfortunately I have only Wikipedia here as a comparison - but on the [logical consequence page](https://en.wikipedia.org/wiki/Logical_consequence) there seem to be a couple of interpretations within logic; among others, under "modal accounts" it currently says:

> Modal accounts of logical consequence are variations on the following basic idea: \(\Gamma \vdash A\) is true if and only if it is necessary that if all of the elements of \(\Gamma\) are true, then A is true.

This sounds much more like the original Frege intention (see comment 24) or [linguistic entailment](https://en.wikipedia.org/wiki/Entailment_(linguistics)) (see comment 25), which currently says:

> Linguistic entailments occur when one may draw necessary conclusions from a particular use of a word. Entailment phrases are relations between propositions, and are always worded as, "if A then B," meaning that if A is true, then B must also be true. Another way of phrasing this is, "if A is true, then B must necessarily be true."

Moreover, if you go to the German version of the logical consequence page you get "Implikation", which lists the material implication; i.e. the connective is seen as a possible version of "Implikation". But for that connective it holds that anything can "follow" from a false premise. In other words, given a false premise everything can be "deduced", which I wouldn't call "provable".

In addition, you currently find on the Wikipedia [Judgement](https://en.wikipedia.org/wiki/Judgment_(mathematical_logic)) page:

> In logic, logical assertion is a statement that asserts that a certain premise is true, ...

which is again more like the original Frege meaning.

In short, in general, entailment seems to say that from a true premise a true assertion should follow, but it doesn't seem to specify what to do with a false premise.

In the context of something like type theory (which has categorical roots), entailment \(A \vdash B\) is not a [truth function](https://en.wikipedia.org/wiki/Truth_function), whereas the [material conditional](https://en.wikipedia.org/wiki/Material_conditional) is. In other words, the material conditional furnishes a truth value for all operands, regardless of what those operands are. This is seen from the usual truth table for implication, which specifies the value of \(A \rightarrow B\) for all truth values \(A\) and \(B\).

Entailment is a [judgment](https://en.wikipedia.org/wiki/Judgment_(mathematical_logic)), which is distinct from a truth function. Mathematically, judgments are modeled by [relations](https://en.wikipedia.org/wiki/Finitary_relation); in logic and type theory we _generate_ these relations using proof systems, to the point that we say \(A \vdash B\) is a nonsensical utterance if it doesn't follow the rules of formation of the proof system. Again, this is quite in contrast to the case with \(A \rightarrow B\), which is a truth function, and hence makes sense (even if the specific sense is "false") over all operands.

(I'm intentionally being a little loose with metaphors here -- entailment is somewhat more than a truth function, but any functor assigning every proposition a truth value also carries entailment to a bona fide truth function, so it's not inaccurate, either.)
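The contrast can be made concrete in a few lines (my own toy sketch; the two-rule "proof system" below is invented purely for illustration): the material conditional is total over all inputs, while an entailment relation only contains the pairs its rules generate.

```python
# Sketch: material conditional as a *total* truth function, versus
# entailment modeled as a *relation* generated by rules -- pairs
# outside the relation aren't "false", they're simply not derivable.

def material(a, b):
    """Material conditional A -> B: defined for every pair of booleans."""
    return (not a) or b

# Total: the truth table covers all four input pairs.
table = {(a, b): material(a, b)
         for a in (False, True) for b in (False, True)}
assert table[(True, False)] is False      # the single falsifying row

# Entailment as a relation: a toy proof system whose only rules are
# reflexivity (A |- A) and "everything entails True".
props = (False, True)
entails = {(a, a) for a in props} | {(a, True) for a in props}
assert (False, True) in entails           # derivable
assert (True, False) not in entails       # not derivable at all
```

The dictionary is defined everywhere; the set is not, which is the sense in which \(\vdash\) is a relation rather than a connective.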

I'd say \(\vdash\) is a syntactic notion that relates formulas to formulas (or a set of formulas to one formula) and is defined by formal means using a deductive calculus, while \(\vDash\) is the semantic, model-theoretic notion. Most non-pathological logical calculi we use are [sound](https://en.wikipedia.org/wiki/Soundness), which means that if \(A_1, A_2, A_3, \ldots \vdash C\) then \(A_1, A_2, A_3, \ldots \vDash C\).

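The soundness claim can be spot-checked mechanically for a tiny fragment. The following is my own toy sketch, assuming the conjunction-only proof system above: syntactic entailment \(\vdash\) is generated by the axiom and &-elimination, semantic entailment \(\vDash\) is checked by enumerating all valuations, and we verify that everything derivable is also semantically valid.

```python
from itertools import product

# Syntax: atoms are strings; ('and', p, q) is a conjunction.
def atoms(p):
    if isinstance(p, str):
        return {p}
    return atoms(p[1]) | atoms(p[2])

def evaluate(p, v):
    if isinstance(p, str):
        return v[p]
    return evaluate(p[1], v) and evaluate(p[2], v)

# Syntactic entailment |-: generated by the axiom and &-elimination.
def proves(p, q):
    if p == q:
        return True
    if isinstance(p, tuple):
        return proves(p[1], q) or proves(p[2], q)
    return False

# Semantic entailment |=: q is true in every valuation that makes p true.
def models(p, q):
    names = sorted(atoms(p) | atoms(q))
    for bits in product([True, False], repeat=len(names)):
        v = dict(zip(names, bits))
        if evaluate(p, v) and not evaluate(q, v):
            return False
    return True

# Soundness on some test sequents: whatever |- relates, |= relates too.
p = ('and', 'A', ('and', 'B', 'C'))
for q in ['A', 'B', 'C', 'D', ('and', 'B', 'C')]:
    assert not proves(p, q) or models(p, q)
print("soundness holds on these test sequents")
```

Note the converse (completeness) is a separate question: here `models(('and', 'A', 'A'), 'A')` holds while the sequent is also derivable, but in general a sound calculus need not derive everything that is semantically valid.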

@John_Baez @Jonathan_Castello @Jesus_Lopez

Jonathan Castello wrote:

> In the context of something like type theory (which has categorical roots), entailment \( A \vdash B \) is not a truth function, whereas the material conditional is.

Yes, and I would say that the original meaning of entailment \( A \vdash B \) seems to have been a "partial truth function", i.e. a truth function asserting that if the premise is true, then the conclusion is also true. If the only other option is "false", this gives half of the truth table of the material implication. So the material implication could be seen as a special interpretation of [implication](https://de.wikipedia.org/wiki/Implikation) in which the other half is fixed in a very particular way (namely so that \( (A \rightarrow B ) \iff \neg ( A \land \neg B) \) holds). It should be said that, at first glance at least, again only "half of the equivalence" seems to be needed to make the usual [reductio ad absurdum](https://en.wikipedia.org/wiki/Reductio_ad_absurdum) work; that is, it seems the equivalence only needs to hold in the case where A is true.

Maybe the confusion starts already with the question of what is meant by "provable". Some people seem to mean by that not that one can show that something is true or false, but only that one can show that something is true. As said above, by "provable" I mean that a truth value exists, not necessarily that this truth value is "true". I understand the type-theoretic/proof-theoretic notion of entailment in the following way: let \( A \vdash B\), where for example \( B \Leftrightarrow A \land C \) with \(C\) a false statement. Then "A entails B" means that, given a truth value for A, the truth value of B is determined (in the example it is the result of \( A \land C \)), and the whole statement, "as evaluated on A", shall be true (?). This shows, though, that this interpretation could clash with an interpretation where the material implication is seen as a special case of implication: in this type/proof-theoretic example, "if A is true then B is false" would be interpreted as true, while it would be false under the material implication. So it is important to say what is meant.


I would like to add that I wrote that it seems that

> only "half of the equivalence" would be needed for making the usual Reductio ad absurdum work.

because it seems to be this possibly "unnecessary" other half of the material implication's truth table which leads to semantic paradoxes.


I see I'm a couple weeks back. Are folks still discussing chapter 2? I notice that most of the exercises haven't been done as of now. I've done a few of them, but I definitely have questions about some and would be happy to find co-discussants.


I wanted to say that the [tup](http://gittup.org/tup/) software build system is a real-life instance of the meringue pie example. The dependency DAG expresses how a given target depends on previous ones.

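The build-system analogy can be sketched in a few lines. Below is a toy Python model of the chapter's central question -- "given what I have, is it possible to get what I want?" -- as reachability over multiset-rewriting rules. The specific reactions are a simplified stand-in I made up for the book's lemon meringue pie recipe, and the search assumes the rules form a DAG (a cyclic rule set could make this naive recursion loop):

```python
from collections import Counter

# Reactions of a toy resource theory: each rule consumes a multiset of
# inputs and produces a multiset of outputs. These two rules are a
# simplified stand-in for the lemon meringue pie example, and they form
# a DAG, which the naive search below relies on for termination.
rules = [
    (Counter({'crust': 1, 'filling': 1}), Counter({'pie_base': 1})),
    (Counter({'pie_base': 1, 'meringue': 1}), Counter({'pie': 1})),
]

def reachable(have: Counter, want: Counter) -> bool:
    """'Given what I have, can I get what I want?' -- naive forward search."""
    if all(have[k] >= n for k, n in want.items()):
        return True
    for inputs, outputs in rules:
        if all(have[k] >= n for k, n in inputs.items()):
            # Apply the rule: consume the inputs, produce the outputs.
            if reachable(have - inputs + outputs, want):
                return True
    return False

pantry = Counter({'crust': 1, 'filling': 1, 'meringue': 1})
print(reachable(pantry, Counter({'pie': 1})))  # True
print(reachable(Counter({'crust': 1}), Counter({'pie': 1})))  # False
```

A build system like tup answers the same ordering question over its dependency DAG, though it tracks which targets are stale rather than searching over multisets.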

Rif A. Saurous wrote:

> I see I'm a couple weeks back. Are folks still discussing chapter 2?

I'm willing to go back and discuss chapter 1 if need be.
