
In the thread 'Chapter 2', Keith E. Peterson expressed interest in

> make a 'Categories for the Working Storyteller' thread

That's a splendid idea, Keith! For the last few months I've pondered how to transform a set of reaction networks into fairy tales. I'm quite excited about it, so here, let me go ahead and open that thread.

There are already some comments about this in the other thread (which I only just noticed), so let me copy this link to a paper Keith dug up:

'Generative Story Worlds as Linear Logic Programs' https://www.cs.cmu.edu/~cmartens/int7.pdf

The title certainly sounds interesting!

## Comments

Oh cool, thank you for making the thread for me. That saves me work.

What's nice about that paper (and others like it) is that linear logic can be given a nice string semantics via proof nets.


As an aside, the different ways a narrative's timeline can be structured (is there time travel? does it branch? can we affect the past? etc.) 'correspond' to various models of computational persistence, as Erik Demaine has described here:

1. [Persistent Data Structures](https://www.youtube.com/watch?v=T0yzrZL1py0)
2. [Retroactive Data Structures](https://www.youtube.com/watch?v=WqCWghETNDc)

The paper by Martens, Ferreira, Bosser, and Cavazza models a Shakespearean universe using monoids represented by maps between tuples. It is only 7 pages long, so go there and read that; I won't copy the examples here. This stuff could be used for automated story building, perhaps by randomly choosing production rules, or by letting the player of a computer game select from the available choices.

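The paper's story worlds boil down to multiset rewriting: the current world is a multiset of facts, and a production rule consumes some facts and produces others, in linear-logic style. Here is a minimal sketch of random story generation in that spirit (the rule and fact names are invented for illustration; they are not taken from the paper, which uses the Celf language rather than Python):

```python
import random
from collections import Counter

# World state is a multiset (Counter) of facts; each rule consumes
# some facts and produces others, mirroring linear-logic resource use.
RULES = {
    # name: (consumed facts, produced facts) -- invented examples
    "quarrel":   (Counter({"friends": 1}), Counter({"enemies": 1})),
    "reconcile": (Counter({"enemies": 1}), Counter({"friends": 1})),
    "duel":      (Counter({"enemies": 1, "sword": 1}), Counter({"corpse": 1})),
}

def applicable(state, consumed):
    """A rule fires only if the state contains every fact it consumes."""
    return all(state[f] >= n for f, n in consumed.items())

def generate(state, steps, rng):
    """Randomly choose applicable rules, producing a story trace."""
    story = []
    for _ in range(steps):
        choices = [r for r, (c, _) in RULES.items() if applicable(state, c)]
        if not choices:
            break  # no rule applies: the story is over
        rule = rng.choice(choices)
        consumed, produced = RULES[rule]
        state = state - consumed + produced
        story.append(rule)
    return story, state

story, final = generate(Counter({"friends": 1, "sword": 1}), 5, random.Random(0))
```

Letting a player pick from `choices` instead of calling `rng.choice` gives the game-like variant mentioned above.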

The lectures by Erik Demaine fill 1½-hour slots, and in the first he looks at implementations for partial, full, and confluent persistence. That is, recording past versions of a data set using a linear history, a branching history, or a branching history where we're also allowed to merge versions. The 'pointer machine' model of computation is used as background theory to work out the computational complexities of the best known algorithms for establishing the respective kinds of persistence. About one hour in he starts talking about details beyond these three types and the relation to functional programming (which you might also view as a kind of persistence).

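The flavors of persistence can be illustrated with the simplest possible trick: never mutate, always return a new version. This sketch is mine, not one of Demaine's pointer-machine constructions (which achieve far better time and space bounds); it just shows what "full persistence" means, namely that any old version can still be read and branched from:

```python
# Naive full persistence: every update returns a new version, and any
# old version can still be read or updated, giving a branching history.
class VersionedList:
    def __init__(self, items=(), parent=None):
        self.items = tuple(items)   # immutable snapshot of the list
        self.parent = parent        # link back into the version tree

    def push(self, x):
        return VersionedList(self.items + (x,), parent=self)

    def pop(self):
        return self.items[-1], VersionedList(self.items[:-1], parent=self)

v0 = VersionedList()
v1 = v0.push(1)
v2 = v1.push(2)        # linear history so far: v0 -> v1 -> v2
v3 = v1.push(99)       # branch off v1: full persistence allows this
x, v4 = v2.pop()       # old versions stay readable after the branch
```

Partial persistence would forbid the branch at `v3` (only the newest version may be updated); confluent persistence would additionally offer a merge of `v3` and `v4`.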

There is also the paper [Linear logic for non-linear storytelling](https://pdfs.semanticscholar.org/1e11/5083898433cbe4653e80d6ab3f077721c4d9.pdf).


What are some questions that the group feels should be asked about narratives?

For instance, one of the authors of both the above papers, Anne-Gwenn Bosser, reports on her homepage that, "[l]ately, I have been investigating the use of Linear Logic as a conceptual model for Computational Narratives, aka a Proof is a Story."

So I suppose the first question that could be asked is:

**Can every linear proof be given a corresponding story, and vice versa, can every story be given a linear proof?**

A 'retroactive data structure' would be capable of answering what-if questions. It means that changes to the past should propagate to the future, somehow.

**Puzzle rf2:** What does that even mean? What's the most general setting that makes sense? Can you give examples? Erik's lecture doesn't spend much time on this question, but let's have some thoughts anyway.

Let's assume a simple data structure, say a list of numbers \(d:D\), and an algorithm \(f:D→D\). A *history* would then be a sequence of states \(d_0, f(d_0), f(f(d_0)), \ldots\) Okay, there is not much we can do short of repeating the whole computation, which isn't very interesting complexity-wise.

So let's instead assume a data type with more structure, say a list of numbers \(l:L\), which comes with some operations \(push:N→L→L\) and \(pop:L→N×L\). We could now record what happened to our list in terms of these operations, change that record, and see what applying the remaining operations does.

This is what Erik's second lecture discusses, for a couple of slightly more structured data types; he calls this *partial retroactivity*.

**Puzzle rf3:** I didn't really get further than partial retroactivity. Is it reasonable to talk about full or confluent retroactive data structures, analogous to their definitions in Erik's lecture 1? Maybe not, as there isn't much known about retroactive data structures. I mean, if our type encodes a Turing-complete model of computation, we're asking for trouble.

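The record-and-replay idea above can be made concrete naively: keep the operation log, splice an edit into the past, and replay everything. This is a sketch of *partial* retroactivity for a stack (only the present is queried); the efficient structures in Demaine's lecture avoid the full replay, which here costs time linear in the length of the log:

```python
# Naive partially retroactive stack: store the timeline of operations,
# allow inserting/deleting an operation at any past time, and recompute
# the present state by replaying the whole log.
class RetroStack:
    def __init__(self):
        self.log = []  # list of ("push", x) or ("pop", None)

    def insert_op(self, t, op):
        self.log.insert(t, op)   # splice an operation into the past

    def delete_op(self, t):
        del self.log[t]          # retroactively undo an operation

    def present(self):
        stack = []
        for name, arg in self.log:
            if name == "push":
                stack.append(arg)
            elif stack:          # a pop on an empty stack is a no-op here
                stack.pop()
        return stack

s = RetroStack()
s.insert_op(0, ("push", "a"))
s.insert_op(1, ("push", "b"))
s.insert_op(2, ("pop", None))   # present is now ["a"]
s.insert_op(1, ("push", "x"))   # retroactively push "x" right after "a"
# The later pop now removes "b" instead, so the present becomes ["a", "x"]:
# the change to the past has propagated to the future.
```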

Thanks for all the nice links, Keith! I'm sorry, yes I'm really that slow. I intend to share sketches from my nucleosynthesis -> fairy tale project as soon as I make some.


I'm not worried. Take as long as needed to think over something.

I'm not sure I'm qualified to answer your puzzles.

That being said, I want to try to bring some of the ideas learned in the course and apply them to the story context.

For one thing, the literary theory of Russian Formalism in the early 20th century divided a narrative into two elements: the [fabula (фа́була) and the syuzhet (сюже́т)](https://en.wikipedia.org/wiki/Fabula_and_syuzhet).

The fabula is sort of like the base set of events the story is built on, which is usually a poset (though some postmodern works seem to throw that assumption out the window).

The syuzhet is how the events are ordered in the plot of the written narrative. Unless the narrative is a [gamebook (aka a "choose-your-own-adventure" style book)](https://en.wikipedia.org/wiki/Gamebook), the syuzhet is *always* a poset, since one part of a written narration always comes before another.

**Puzzle KEP1:** If the fabula is seen as purely a set (i.e., we ignore any chronology between events and treat all the events as discrete), then there is a monotone map of posets from the fabula to the syuzhet. Prove this.
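Puzzle KEP1 turns on the fact that a discrete order makes monotonicity vacuous: if \(x \le y\) in the fabula only when \(x = y\), then any map into the syuzhet preserves order. A toy check in Python (the event names are invented, and the map is just the identity on events):

```python
# Puzzle KEP1 checked on a toy example: with the fabula made discrete
# (x <= y only when x == y), every function to the syuzhet is monotone.
fabula = {"betrayal", "flashback_childhood", "duel"}   # invented events

# Syuzhet: the order events appear in the telling (a total order here).
syuzhet_position = {"flashback_childhood": 2, "betrayal": 1, "duel": 3}

def fabula_leq(x, y):
    return x == y  # the discrete order on the fabula

def syuzhet_leq(x, y):
    return syuzhet_position[x] <= syuzhet_position[y]

# Monotonicity: x <= y in the fabula must imply f(x) <= f(y) in the
# syuzhet. With the discrete order the premise only holds for x == y,
# so the condition is vacuously satisfied for any assignment.
monotone = all(
    syuzhet_leq(x, y)
    for x in fabula for y in fabula
    if fabula_leq(x, y)
)
```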

Whether it's relevant or not I don't know, but Bakhtin's *Rabelais and His World* is my all-time favourite book on culture; it traces the history of the elimination of the Dionysian and the grotesque folk culture of the carnival from the Cyprian satires onwards. Crowning a donkey as pope and reciting the Lord's Prayer backwards is a typical transgressive example from Spain.


In the Peruvian [Aymara language](http://ucsdnews.ucsd.edu/archive/newsrel/soc/backsfuture06.asp) time is reversed, so Aymarans don't wait for the bus; the bus arrives from the future.


If I recall, in Mandarin, time is ordered vertically. So the past is above and the future is below.

Can any Mandarin speakers confirm?


I know just a little bit of Mandarin, but I think you are right: "shang" (above) is used for the "time before" and "xia" (below) for the "time after". This is not necessarily about past or future (e.g., morning is "shang wu" (above noon) and afternoon is "xia wu" (below noon) regardless of whether you are talking about yesterday, today or tomorrow), but the idea is that time is ordered vertically. I guess this is related to the fact that Chinese used to be written vertically: the characters you have written in the past are above; those you will write in the future are below (until you have reached the end of the page and you have to restart from the top :-) ).


Maybe relevant: in this [short video](https://www.youtube.com/watch?v=oP3c1h8v2ZQ), Kurt Vonnegut demonstrates different shapes of stories. For me, shape is a trigger for trying to "find the category" (maybe because I keep conflating category theory and topology).

Anyway, how do you think shapes of stories (or other things) can be categorified?


In the sense of what Kurt was doing, yes, actually.

The very construction he gave looks quite mathematical. (In fact, it uses the math this course has gone through.)

The G-I (Good fortune-Ill fortune) axis can be modeled as a monotone function from our story to, say, the real number line,

$$ \textrm{G-I}: \mathcal{Story} \rightarrow \mathbb{R}, $$

and likewise for the B-E (Beginning-End) axis,

$$ \textrm{B-E}: \mathcal{Story} \rightarrow \mathbb{I}. $$

Here \( \mathbb{I} \) is the interval \( [0,1] \).

The shape is then the graph traced out by both axes together, i.e. the image of the pairing

$$ \langle \textrm{G-I}, \textrm{B-E} \rangle : \mathcal{Story} \rightarrow \mathbb{R} \times \mathbb{I}. $$

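The two axes can be tabulated for a toy story: B-E is forced to be monotone (it is just reading order normalized into \([0,1]\)), while G-I is free to rise and fall. A sketch with invented fortune values tracing a 'Man in Hole'-style dip and recovery:

```python
# A story as a finite chain of events; B-E sends each event into [0,1]
# monotonically, while G-I (fortune) is free to rise and fall.
events = ["setup", "trouble", "rock_bottom", "escape", "better_off"]

def b_e(i):
    """Beginning-End axis: reading order normalized into [0, 1]."""
    return i / (len(events) - 1)

# G-I axis: invented fortune values for a 'Man in Hole' shape.
fortune = [0.5, -0.3, -0.8, 0.1, 0.7]

# The 'shape' is the image of the pairing <B-E, G-I>: a curve in I x R.
shape = [(b_e(i), fortune[i]) for i in range(len(events))]

# B-E is strictly monotone along the story, as reading order demands.
assert all(shape[i][0] < shape[i + 1][0] for i in range(len(shape) - 1))
```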

When following the various subplots that characters undergo, which the papers 'Linear logic for non-linear storytelling' and 'Generative Story Worlds as Linear Logic Programs' try to formalize, you can also create a string diagram that gives the shape of the various plots. You just need to translate from the linear logic the authors are using to something like proof nets or string diagrammatics.


You may be interested to know that this topic is in the news today: http://www.bbc.com/culture/story/20180525-every-story-in-the-world-has-one-of-these-six-basic-plots


Maybe an author at the BBC was watching what this course is doing, and decided to look into something further.

That's the story my brain wants to believe.


After reading chapter 2, I suspected I'd come across the appropriate language for discussing a video game design device (used with adventure games and 'visual novel'-type games, and possibly others I haven't seen) called a *puzzle dependency chart*, which I became intrigued by last year when it was featured prominently in the game Thimbleweed Park. This discussion group seems like an appropriate place to talk about this (though if you disagree, I think starting a "Categories for the Working Game Designer" thread would also be fine), mostly because it seems like we're using similar (I think) notions to talk about something that is ultimately driven by a narrative and its causal relations.

**A little background:** Puzzle dependency charts were, as far as I know, developed by the folks at LucasArts in the mid-to-late 1980s. It's my understanding that The Secret of Monkey Island was the first game that LucasArts designed using them. Designer/programmer/etc. Ron Gilbert wrote a blog post describing them a few years back: https://grumpygamer.com/puzzle_dependency_charts . A week or so ago I asked David Fox, another former LucasArts designer/programmer, whether he was familiar with their background or what they arose from, but he didn't have anything to offer, other than mentioning that they used a piece of project management software called MicroPlanner X-Pert at the time to draw the charts.

I've been working on this off and on since about the time Prof. Baez left for that conference, with an evolving perspective on what the foundations of these puzzle dependency charts would be. Initially, I thought they might just be an adaptation of project management techniques, but as I worked on how to express the combination of "puzzle dependency chart + game" as a \(\mathscr{V}\)-category it started to seem, first, that there would be some kind of combinatorial basis; then I thought I was seeing shades of tensor characteristic (or what little I understand of it yet); then a week or so ago I thought something like, "It's like I'm doing regular expressions on a free group or something" after revising what the elements of the set underlying the symmetric monoidal preorder would need to be.

In case that last paragraph sounded hopelessly naive, let me mention that I'm technically still an undergrad, albeit about 20 years older than typical college age. (Long story short: I varied between full-time and part-time work/school for several years, and switched majors most every time I found myself going to school full-time again. But my original intention was to do math, and I'm back there now.) So I haven't actually taken modern algebra yet (though I've spent some time with a couple of modern algebra books, most recently Dummit & Foote), and until a few days ago I really hadn't read about formal languages at all, but I picked up a copy of Revesz's *Introduction to Formal Languages* (because it was cheap and highly rated on Amazon) and have read some of it over the past few days.

I had looked at this thread and seen mention of "linear logic," which I'm not acquainted with; while reading Revesz's book, I saw mention of "linear grammars" and couldn't help but wonder if there was some overlap, most notably because of an equivalence the book mentioned between linear grammars and regular languages. (I don't remember the specifics at the moment.) And this was shortly after I started thinking that regular expressions would play a role in this; in fact, my current impression is that the constraints reflected by the puzzle dependency chart define a regular expression, applied to the set of all histories that one can take through the set of game states (which make up the objects of the \(\mathscr{V}\)-category). (By the way, apologies if I'm misusing terminology; it seems pretty likely that I am at this point. Most of this stuff is new to me, and I'm pretty sloppy in how I absorb new things.)

I don't have much sense for how long these comments can be, but I think I'll make a new comment before I start writing up what my little case study entails, and what I've come across so far. [Edit: I probably won't get to this today.]

Also, apologies in advance for my bad habit of writing sentences that are too long and full of asides. (^^;


No, I think this is fine. In fact, depending on who you talk to, games (and I mean that in the widest sense possible) are non-linear narratives. Non-linear, in that the plot doesn't follow a strict linear progression, i.e., in a story an event \(X\) happens, and then an event \(Y\) happens, and then \(Z\) happens... Indeed, games allow choices between possible "future plots," if you want to look at it that way.

I think you should get acquainted with linear logic, since linear logic is, in some ways, the logic of games.

See for instance the paper 'Game semantics and linear logic' by Andreas Blass.


I should probably say "not acquainted with *yet*." I took a quick look at the papers mentioned above, mostly just to see if anything stood out to me as familiar (which... not really). I'm sure I'll give them a more careful reading before long.

Case study: Thimbleweed Park puzzle dependency chart (Pt. 1)[I wound up writing this a bit out of order, so apologies if there are huge inconsistencies that I've yet to spot.] [Also, I'm probably going to edit in more markup and hopefully some images, sooner or later.]

Where I started:A while back, I drew up a puzzle dependency chart (which I'll abbreviate PDC or sometimes refer to as 'the chart') of the game Thimbleweed Park, as they're described in the blog post that I linked to in an earlier comment. Or rather, I drew up a 'shortest path through the game' one, then started working on a version that includes every action that can be taken in the game. The image below is a scan of that latter version, which covers parts 1 through 4 of the game, although I'm not sure it's complete. (I reached a point where I had a strong sense of "that's good enough for now.) The pages didn't really line up, the graph is a huge mess and times and my handwriting is terrible, but if nothing else, this might give you a rough idea for what a PDC looks like:There are multiple edges coming out of nodes and multiple edges going into nodes. Aside form that, everything progresses left-to-right. (Or would, if it weren't such a mess.)

And I suppose the two important things that the PDC tells us are this:

From there, I suppose the definition recurses until you've reached the left-most node, which is ultimately the beginning of the game. (That's supposing you start with the right-most node, which is 'part 4 complete,' and start following edges that flow into that node.)

[To inject a more recent thought....I think the PDCs reflect a structure that the first of the Erik Demaine vidoes called a 'directed acyclic graph,' or DAG. I've yet to verify this.]

First stab:When I decided to try to express this stuff as a preorder, I considered the shortest possible path through part 2 of the game, which I boiled down to 10 steps (with linear segments of the narrative absorbed into adjacent steps, so that the 10 steps reflect only points where the player's path can branch, or where those branches recombine). I assigned each of the 10 steps a letter, then drew a Hasse diagram -- which, it so happens, is nearly identical to the PDC, except that PDCs are drawn right-to-left. (Also, each node just had a single letter in it.)[Note: I'll replace most of the above paragraph with an image, once I have one.]

I then commenced wondering what the symmetric monoidal structure would be. My first guess was that a <= b would mean "step 'a' can be performed no later than step 'b,'" and that the monoidal product would be the meet (join? the one that's higher on the page) of the nodes.

This fell apart once I considered the enriching process, as there would need to be some element in the preorder to get you from any one object to any other object. My first guess about the objects of the V-category was that they would be states of the game, where I'll define a game state as a function that maps the name of each (relevant) variable in the game engine to its value, for a given moment. (And I'll define 'moment' as 'breakpoint,' I suppose.) Shortly thereafter I revised the definition of the objects to equivalence classes of 'all states accessible at that point' (or something unclear like that; I won't try to state it any more precisely than that, because the definition I had at that time wasn't going to work anyhow).

Postmortem of the first stab:The most glaring problem was that a notion of equivalence class of game states would have to be dependent on which steps had already been performed (and, equivalently, which hadn't) to be well-defined. In other words, simply knowing which step was most recently performed doesn't sufficiently define which states are accessible, after that step.It also seemed like an issue that I'd need the inverses of steps to be able to get from 'later' states to 'earlier' ones. What wasn't at all clear to me at that point (and still isn't, to be honest ...though I've had some thoughts on the matter, which hopefully I'll mention when appropriate) is whether I could simply state some constraints before defining the set X that the preorder would be over; it seemed like this would require that I add a criteria to the enrichment process along the lines of, 'for objects x,y, a hom-object is defined only for objects such that the hom-object satisfies the constraints of the PDC.' The other option was that the constraints of the PDC would be applied after the V-category is defined, which i took to mean that the preorder would need to include hom-objects with inverses and repeated letters.

**First revision:** Eventually I concluded that getting from an arbitrary game state to another would be best expressed as compositions of steps. Since I'd assigned the 10 steps letters a through j, I was writing these compositions as things like a.b.c.d.e.f.g.h.i.j (usually without the periods, when writing them by hand), which I think (and hope) follows a notation used in the book. (That particular string of characters would mean 'do step a, then step b, etc., up through step j, in the left-to-right order given.')

Once I started writing down strings like this, I started wondering if I was perhaps writing down the elements of a free group on 10 letters, with composition as the operation, subject to the constraints of the PDC.

[I just realized something, while typing this up: I don't recall if the book actually defines a monoid anywhere... but as I recall, it's essentially 'a group, but without necessarily including inverse elements.' Suddenly that seems perfect; I don't particularly want inverse elements or repeated letters unless they're absolutely required for the definition of the V-category. Also, I could probably replace the phrase 'free group' with 'free monoid' and be closer to what I've actually been writing down. Maybe.]
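In Python terms, the free monoid on the 10 letters is just strings over 'abcdefghij' with concatenation as the operation and the empty string as the unit. A quick sanity check of the monoid laws (a sketch, not tied to any particular PDC):

```python
# Free monoid on a 10-letter alphabet: elements are strings over
# the alphabet, the operation is concatenation, the unit is "".
ALPHABET = set("abcdefghij")

def is_word(s):
    """Check that s is an element of the free monoid on ALPHABET."""
    return all(ch in ALPHABET for ch in s)

u, v, w = "ab", "cde", "fj"

# Closure, associativity, and unit laws hold for concatenation:
assert is_word(u + v)
assert (u + v) + w == u + (v + w)
assert "" + u == u + "" == u
# Unlike a free group, there are no inverses: no word x satisfies
# u + x == "" unless u is already the empty word.
```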

[To be continued]


**Case study: Thimbleweed Park puzzle dependency chart (Pt. 2)**

**Then I kept chugging along, unclear where I was heading:** I worked on this approach for longer than I probably should have before giving proper consideration to the proof that this stuff would actually satisfy the definition of a symmetric monoidal preorder (which I think I'll start abbreviating SMP). I was intrigued by the idea that the objects of X (the set that the SMP is over) would be histories of the path taken thus far, rather than just a single step, so I kept rolling with it. At this point, I supposed that a <= b would simply mean that 'a is a substring of b' and that the monoidal product would be string concatenation. (That didn't quite work either, but I didn't notice this for a while.)

At that point, it seemed that each row of the Hasse diagram would be a composition of steps, with a zero-length string at the bottom (which would work as the monoidal unit), then the next row consisting of a node for each of the 10 steps, then each n-th row from the bottom being a composition of n-many letters, chosen from the 10. I only planned to include compositions compatible with the PDC.

The top row, then, would coincide with 'all possible paths through the 10 steps that fit within the constraints of the PDC.' (Actually, I started with this step before thinking about the other elements in the preorder.) There are 24 such paths. I then started to write out the distinct substrings of length 9 and shorter, gambling on there being lots of redundancy (which there was, at times) so that the process would be manageable by hand. (For one of the lengths, I believe there were 90 distinct substrings; the rest had fewer.)

So, at that point, I figured I had the elements of X. (I considered trying to draw the Hasse diagram, but rows of as many as 90 elements made that seem like a mess waiting to happen.)

At this point, I still wasn't clear on the question of "constrain early, or constrain later," and I'm still not really clear on which to do. (I think I've felt clear about it at moments, but that clarity then faded and now I can't recall why I thought I had an answer. doh.)

**Skipping ahead to more recent things:** It's not lost on me that I started with an example that's more complicated than what I should have started with, if I'd had any sense of where the process would take me. (These are the lessons we learn from experience, I suppose.) So, the other day, I drew various simpler PDCs that I'll probably start considering next. (They coincide with a 'local' picture of what's happening around the individual nodes of the Thimbleweed Park PDC.) And at this point, it doesn't seem like any big surprise that those pieces have two general forms: series and parallel.
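The "all possible paths that fit the PDC" idea corresponds to enumerating the linear extensions of the underlying DAG (its topological orderings). Here's a brute-force sketch, run on a four-node diamond rather than the actual Thimbleweed Park chart (whose full edge set isn't spelled out here):

```python
from itertools import permutations

def linear_extensions(nodes, edges):
    """All orderings of nodes in which every edge (u, v) has u before v.
    Brute force over permutations; fine for small charts."""
    result = []
    for order in permutations(nodes):
        pos = {n: i for i, n in enumerate(order)}
        if all(pos[u] < pos[v] for u, v in edges):
            result.append("".join(order))
    return result

# A diamond-shaped chart: after 'a' you may do 'b' or 'c' in either
# order, and 'd' needs both. It has exactly two valid paths.
paths = linear_extensions("abcd", [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])
```

The same function, given the real 10-step chart's edges, would list the valid playthroughs (the "24 such paths" mentioned above) directly, though brute force over 10! permutations is near the limit of what's comfortable by this method.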


You'd have to get someone more competent with poset theory than I to help you on that one. All I know for sure is that \(a \leq b\) where \(a\) and \(b\) are key-events in a game means that \(a\) comes before the event \(b\).

Edit: Rather, I should say, you can get from one event to another since the event \(a\) can reach \(a\) itself trivially.


[Edited to reduce the cranky tone of the original. Apologies, I wasn't feeling well for the past few days.]

Well, I'm not here specifically to ask for help. I'm just documenting what I'm working on, and I'm only working on it because it's interesting to me.

I do a lot of 'thinking out loud,' as that's how I tend to refine my thoughts on something and identify questions that still need answering. (And, for me, typing up an explanation is a form of thinking out loud, I guess.)

But, of course, if anyone who reads it happens to have any gems to offer, things along the lines of "I recognize this behavior from [some other topic]," then I'm all ears. This little project has taken me to at least one previously-unfamiliar place (formal languages), and I get the feeling that there may be more... and that they may have more overlap than they let on.

But, please, don't feel obligated. I'll keep plugging away at this, even if no one else is interested. (Though I'd probably post fewer updates about my progress, and wait for an interesting conclusion, assuming one comes.)


Steve Brown, a directed graph is a DAG (directed acyclic graph) exactly when there are no cycles in it. I guess that the game disallows traversing any edge more than once, and that would imply it's still in some sense acyclic. But to represent that as a DAG you may need to duplicate some nodes into variants depending on the way the player has reached them.

I'd like to help you formalize your approach, so please continue sharing your thoughts here! Caveat is, I'm short on time and you may have to wait a bit until I manage to find the opportunity to respond.
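For what it's worth, verifying the "no cycles" condition is mechanical. Here's a sketch using Kahn's algorithm (repeatedly removing nodes with no incoming edges; the graph is acyclic exactly when everything gets removed):

```python
from collections import defaultdict

def is_dag(nodes, edges):
    """Kahn's algorithm: a directed graph is acyclic iff repeatedly
    deleting in-degree-zero nodes eventually deletes every node."""
    indeg = {n: 0 for n in nodes}
    out = defaultdict(list)
    for u, v in edges:
        out[u].append(v)
        indeg[v] += 1
    ready = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while ready:
        n = ready.pop()
        seen += 1
        for m in out[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    return seen == len(nodes)
```

So a hand-drawn PDC could be typed up as an edge list and checked automatically, rather than by eyeballing the drawing.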


Wait, have we defined a category for stories yet?

Let's give it the name \(\mathbf{Story}\) to keep things simple.

Would our objects for \(\mathbf{Story}\) be strictly the characters? (that seems too restrictive in my opinion). Or would the objects of our category be characters, objects in the stories, and the environment? Or, what I believe it should be, the objects are relations of characters, objects in the stories, and the environment?

Now that I think about it, there should also be a category of texts (let's call it \(\mathbf{Text}\)), which contains strings of symbols as objects. \(\mathbf{Text}\) is monoidal and concatenates strings of text into sentences. Appends sentences together to create paragraphs. Paragraph-appends paragraphs to create chapters. Chapter-appends chapters into books. Etc. The category \(\mathbf{Text}\) is interesting even if you don't write stories.

Also thinking about it further, there should also be a functor from \(\mathbf{Story}\) to \(\mathbf{Text}\) which, due to my lack of vocabulary and creativity, I'm going to call \(\mathrm{Author}: \mathbf{Story}\to \mathbf{Text}\).

Actually, I shouldn't forget, but there should also be a functor \(\mathbf{Story} \to \mathbf{Game}\) (I have no idea what to call this), which gives the story of the game. Interestingly, games by their nature also produce stories just from playing, so there is also a functor \(\mathrm{Play}:\mathbf{Game} \to \mathbf{Story}\). In fact, this means that gameplay is adjoint to game storytelling.


If the objects are literally going to represent the objects that might exist within a story world, then I think \(\mathbf{StoryWorld}\) might be more fitting as the category for that. To me, story (and thus \(\mathbf{Story}\)) is defined by the narrative, which corresponds to the sequence of events (which, of course, interact with the elements of the story world).

(If I recall correctly, that also fits with the terminology as you'd find it in various Writer's Digest books about fiction-writing. Several years back, I borrowed several of those books from my (published novelist) sister, when I got the itch to try my hand at telling a story. I never successfully transitioned from the 'outlining' phase to the 'manuscript-writing' phase, but I learned that a sufficiently-granular outline can include everything about the plot points, if you want to carry it that far.)

That said, I'm inclined to see the objects of \(\mathbf{Story}\) as plot points; the 'major' plot points of a typical linear narrative would be among them, but there would also exist objects for whatever degree of granularity might exist within a narrative. (From that perspective, I suppose I might describe \(\mathbf{Story}\) as "...the category of things that might happen, and their causal relationships.")

Morphisms between such objects then might coincide with "given objects x, y, a morphism from x to y exists if x, as a step of the narrative, is prerequisite to y."

Then, for a linear narrative, there would be a sequence of morphisms to connect first plot point to second, and so on. Branching narratives could then have non-linear graphs involving those objects.


That is kind of what I have in mind.

The idea is that a story with n many "plot-threads" should produce a string diagram with n many strings when given string semantics.

> Morphisms between such objects then might coincide with "given objects x, y, a morphism from x to y exists if x, as a step of the narrative, is prerequisite to y."

Not every step of a narrative need be prerequisite. These non-requisite morphisms could be called "catalyst functions," since they change the pace of the plot.


Ah true, I was thinking in terms of minimal paths through a game, I suppose, rather than 'all possible narratives.'


**Case study: Thimbleweed Park puzzle dependency chart (Pt. 3)**

**Formal languages and free monoids**

The other day, I picked up my (thumbed-through a bit but otherwise unread, yet) copy of Awodey's Category Theory, in hopes that he would discuss free monoids, and indeed he does. He defines them on p. 18, and in a language that mostly lines up with that of the book on formal languages that I picked up, Revesz's Introduction to Formal Languages.

We start with an alphabet A and define A* to be the 'Kleene closure' (which Revesz just refers to as the 'set of words on A'). We define the empty string to be the unit element and use string concatenation as the associative binary operation.

Between this point and the "Chomsky hierarchy of languages," there's some talk about 'generative grammars'... but, if I understand it correctly, they let you define a 'language,' which is always a subset of A*. I don't recall seeing whether these subsets are necessarily always themselves monoids, but I'd imagine that the proof would just mean choosing one of the 'types' of language (in the Chomsky hierarchy) and showing that, after any step permissible by its generative grammar, the monoid structure still holds.

With the type 0 languages, there are no restrictions on the generative grammar; it's not really clear to me yet whether this means that any type 0 language would be equivalent to the free monoid, or if it means that type 0 languages could have production rules that break the monoid structure.

...and I can't help but wonder if the computer science-types have already answered this question.

[Follow-up: I must have been using the wrong names for things when I first google'd about this last week or something; a little googling is showing that this certainly seems to have been answered already. The wikipedia page on free monoids, for example, gives a rough definition of regular languages in terms of free monoids: https://en.wikipedia.org/wiki/Free_monoid#Kleene_star ]
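The Kleene-star picture is easy to play with directly: generate A* up to some word length, then carve out a language as the subset matching a regular expression. A sketch (the particular regex `a*b*` is an arbitrary example, not one from either book):

```python
import re
from itertools import product

def kleene_star(alphabet, max_len):
    """All words over the alphabet up to max_len, including the empty word."""
    words = [""]
    for n in range(1, max_len + 1):
        words += ["".join(p) for p in product(alphabet, repeat=n)]
    return words

# A* over {a, b}, truncated at length 2: "", a, b, aa, ab, ba, bb.
A_star = kleene_star("ab", 2)

# A regular language is a subset of the free monoid A*; here, a*b*.
language = [w for w in A_star if re.fullmatch(r"a*b*", w)]
```

Note the truncation: the free monoid itself is infinite, so any computation can only inspect a finite slice of it.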

Anyhow, I've been meaning to re-read the first few chapters of Revesz, as I'm still new to this formal language stuff, so lining it up with Awodey's definitions isn't an easy task (for me) as of yet. But my earlier claim of, "Puzzle dependency charts are like a regular expression on a free monoid, which produces the symmetric monoidal preorder that we'll use to enrich the V-category" is still feeling promising to me.

And when I look at the structure of a PDC (or rather, the symmetric monoidal preorder produced by applying the constraints defined by the PDC to a free monoid, I think), what stands out to me is that it seems to be a way to define "causally-connected states, where the complete history of transitions between states is required in order to define the current state." I can't help but think that the applications for such a structure go beyond video games and other forms of fiction... but I might be getting ahead of myself.


Speaking of formal languages and free monoids, [I made a discussion group for the study of language from a categorical lens](https://forum.azimuthproject.org/discussion/2103/applied-category-theory-for-the-working-linguist).

I'm surprised free monoids haven't come up there now that you have brought them up.


**Case study: Thimbleweed Park puzzle dependency chart (Pt. 4)**

**Part 4: A New Hope**

Earlier today I came across something that led me to believe that the symmetric monoidal structure that I'd (at least partly) given up on a week or so ago may actually be quite close to what I need. I mentioned in an earlier comment that I'd decided to take a step back and start with the simplest cases... but while working on that this morning, I noticed a property that all free monoids have, called "equidivisibility," described a bit here: https://en.wikipedia.org/wiki/Free_monoid#Equidivisibility . What struck me was that the diagram of this property (in that wikipedia article) looks an awful lot like the diagrams I was drawing a week or so ago, while trying to find a monoidal structure that worked. At that point, I was still working with the free monoid on a 10-letter alphabet, which was then restricted by the constraints of the puzzle dependency chart (PDC), so it was not just finite but of a manageable size.

I'll probably keep working with the minimal PDCs (as they basically consist of "series" and "parallel" PDCs, and that seems like the right approach), but at least I don't have reason to believe that I should scrap what I'd done before.
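Equidivisibility is concrete enough to test directly on strings: whenever uv = xy, one of u, x is a prefix of the other, so a mediating word w exists with x = uw or u = xw. A quick check in Python:

```python
def equidivisible(u, v, x, y):
    """Equidivisibility in the free monoid of strings: if uv == xy,
    then u is a prefix of x or x is a prefix of u (so a mediating
    word w exists with x == u + w or u == x + w)."""
    if u + v != x + y:
        return True  # hypothesis not met; nothing to check
    return x.startswith(u) or u.startswith(x)

# Two different factorizations of the same word 'abcde':
assert equidivisible("ab", "cde", "abc", "de")
```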

**Stuff I skipped over in the first three parts of this write-up**

Let me get a bit more explicit about how I translated a section of the game Thimbleweed Park into a PDC, and then started trying to make a symmetric monoidal preorder out of it.

I took part 2 of that game (essentially, chapter 2 out of a total of 9 chapters, the first of which is almost entirely linear, so not very interesting) and identified 10 steps of the game that must be completed before the game proceeds to part 3, with the goal of doing so by the shortest possible path. (That choice was made mostly to minimize the number of letters I was working with.) A few of these steps consist of two separate actions in the game, but I considered them one step if they have only a single order in which they can be performed (meaning nothing else can be done in between them, and they must be done in a particular order).

I also assigned each step a letter, like so:

a := use film with camera; photograph dead body

b := get bottle

c := give bottle to cashier and receive nickel

d := get WC-67 tube

e := give WC-67 tube to plumbers

f := use nickel with photocopier

g := announce APB

h := get county map

i := use county map with photocopier

j := show photocopied map to sheriff

The puzzle dependency chart for these 10 steps looks like so:

![puzzle dependency chart for part 2](https://i.imgur.com/y8otGKY.jpg)

(I just drew this one up, so hopefully it's correct. It looks right to me, but it's pretty easy to make mistakes with this stuff, I find.)

The diagram is read left-to-right. When a path branches after a node, it means either of the following nodes can be performed next. (e.g. After 'a,' you can do either step 'b' or 'd' next, but no others.) When multiple paths come into a node, it means all steps on those paths must be performed before you can perform the step at that node. (e.g. You must have done both steps 'c' and 'e' before you can do 'f,' and you must have done both steps 'f' and 'i' before 'j.')

When it comes to making words out of these letters, I'm using the convention that I believe the book uses, which is that 'abc' means 'do a, then b, then c' (as opposed to a notation like c(b(a())) ).
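The "all incoming steps first" reading is precisely the rule for a valid topological order of a dependency DAG, which is easy to check mechanically. Here is a small sketch of mine, using a made-up three-dependency fragment rather than the full chart:

```python
# Each step maps to the set of steps that must already have been done.
# A tiny, hypothetical fragment in the spirit of the chart above:
deps = {'c': {'b'}, 'f': {'c', 'e'}}

def respects_deps(word):
    """True if every step's prerequisites appear earlier in the word."""
    seen = set()
    for step in word:
        if not deps.get(step, set()) <= seen:
            return False
        seen.add(step)
    return True

print(respects_deps('becf'))  # True: c after b, f after both c and e
print(respects_deps('bfce'))  # False: f before its prerequisites c and e
```

One caveat: the h/g constraint below forces certain steps to be *contiguous*, which goes beyond what a plain DAG of prerequisites can express, and that may be part of why the monoidal structure is proving tricky.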

We can take this diagram and use it to define the following constraints on words generated by the alphabet {a, .., j}:

- All letters must be used exactly once. (Again, this is a 'shortest path through part 2,' so all steps are necessary.)
- All words must be of the form a _ _ _ _ _ _ _ _ j. (I'm using underscore to mean 'any other step.')
- Regarding the letters h and g, they can only appear in the following substrings: ghi, ghfi, gfhi
- e comes before f
- d comes before e
- c comes before f, and c may not come after g
- b comes before f

Some of those might be redundant (ie. 'c' may not come after 'g' seems suspicious...maybe), but hopefully that's all of them. For the record, I wrote those out several weeks back, but I'm trying to check everything for correctness while I type this up.

[**Note:** It's exactly this list of constraints that I'm conjecturing defines a regular expression; in particular, they define a regular expression that, when applied to the free monoid on the alphabet {a,..,j}, yields the set that a symmetric monoidal preorder is defined over. I'll get to the structure of that SMP in pt. 5, but as of earlier today I'm starting to think that it'll be exactly the structure necessary for that 'equidivisibility' property of free monoids. And, at the risk of getting way ahead of myself: when this stuff first started to show up, I couldn't help but see shades of tensors and 'stuff I'd seen done that had to do with tensors,' such as in Tu's Introduction to Manifolds. Tensors are a topic that I know VERY little about, but a while ago my perspective on them sort of morphed into "their primary job is to keep factors separate, in an 'information-preserving' sense, perhaps," and when I saw the monoidal structure seeming to want to do exactly that (meaning, keep factors separate), it felt like tensors were looming every which way I looked.]

That list of constraints yields 24 possible paths through the PDC:

abcdefghij

abcdegfhij

abcdeghifj

abcdeghfij

abdcefghij

abdcegfhij

abdceghfij

abdceghifj

abdecfghij

abdecgfhij

abdecghfij

abdecghifj

adbcefghij

adbcegfhij

adbceghfij

adbceghifj

adbecfghij

adbecgfhij

adbecghfij

adbecghifj

adebcfghij

adebcgfhij

adebcghfij

adebcghifj
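As a sanity check (my addition, not the original poster's), the bulleted constraints can be encoded directly and run against every ordering of the ten letters. Doing so confirms that all 24 listed words pass, but it also shows the prose list is slightly leaky: as written it admits extra words such as 'acghibdefj', because some dependencies implied by the chart (e.g. you must get the bottle, b, before giving it to the cashier, c) aren't spelled out in the bullets.

```python
from itertools import permutations

# The 24 words listed above.
WORDS = """abcdefghij abcdegfhij abcdeghifj abcdeghfij abdcefghij abdcegfhij
abdceghfij abdceghifj abdecfghij abdecgfhij abdecghfij abdecghifj
adbcefghij adbcegfhij adbceghfij adbceghifj adbecfghij adbecgfhij
adbecghfij adbecghifj adebcfghij adebcgfhij adebcghfij adebcghifj""".split()

def satisfies(w):
    """The bulleted constraints, as written in the comment above."""
    pos = {c: i for i, c in enumerate(w)}
    return (sorted(w) == list('abcdefghij')                   # each letter once
            and w[0] == 'a' and w[-1] == 'j'                  # a _ ... _ j
            and any(s in w for s in ('ghi', 'ghfi', 'gfhi'))  # g/h placement
            and pos['d'] < pos['e'] < pos['f']                # d < e < f
            and pos['c'] < pos['f'] and pos['c'] < pos['g']   # c constraints
            and pos['b'] < pos['f'])                          # b before f

valid = ['a' + ''.join(m) + 'j' for m in permutations('bcdefghi')
         if satisfies('a' + ''.join(m) + 'j')]

print(all(w in valid for w in WORDS))  # True: every listed word passes
print(len(valid))                      # more than 24: the written list also
                                       # admits extras (e.g. 'acghibdefj')
```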

(To be continued; I suspect I'm out of room, or at least nearly so.)


> I'm surprised free monoids haven't come up there now that you have brought them up.

Same here! (Or perhaps it's more that I'm wishing they'd come up than that I'm surprised they haven't.)

I barely know anything about them yet, but I suppose I'm getting a little crash-course in them.


**Case study: Thimbleweed Park puzzle dependency chart (Pt. 5)**

**Another stab at the symmetric monoidal structure**

Once I had the 24 paths through part 2 in hand, there were two things I wanted to answer: (1) What did the rest of the set look like? and (2) What \( \le \) and monoidal product did I need?

For (1), the biggest clue seemed to come in the form of "how will these be used during the enrichment step?" Because we'd need a sequence of steps to get us from any game state to any 'later' game state, we need the option of words of any length from 0 up to all 10. In other words, the set \(X\) consists of the 24 words listed earlier, all substrings of length 1 through 9, and an empty word.

[**Note:** As far as I can tell, the enrichment process expects a hom-object from any object to any other object, not just the ones that take you from an 'earlier' point in the game to a 'later' one. (Also, I think I mentioned in an earlier comment that I was thinking of the objects of the category as equivalence classes of 'game states,' where a state can be defined by the current value of all relevant variables in the running program, and the equivalence is that of "all states that are accessible without performing another of the {a,...,j} actions.")]

[**Note, cont'd:** I'm still not sure how exactly I should address this....but if it's kosher for me to restrict the enrichment process to only defining hom-objects between objects in a way that reflects one state being no earlier than the other (or perhaps defining the empty word as the hom-object for changes of state that go from a later state to an earlier one, as a way to say 'this can't actually be done'), then I'd like to do exactly that. The alternative, I suppose, is that we need inverse elements, but then we make some caveat like "when the game is executing, these inverse elements aren't accessible to the player." (The presence of inverses is why I originally was thinking that I was working with a free group rather than a free monoid; I was assuming that inverse elements had to be in here somewhere, but that I'd just sort of 'constrain them out' at some point in the process.)]

I spent a little while writing out (non-redundant) substrings, all the while wondering if it would have been quicker to just write a program or something to do this. (Or if there already was one; I looked a bit, but didn't find anything suitable at first, so I just went back to writing things out by hand.)

I think I mentioned some of the counts in an earlier comment; one of the substring lengths had a total of 90 distinct substrings, and the other lengths were all fewer than that.
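Those counts are easy to automate. Here's a short sketch of mine, assuming "substrings" means contiguous substrings of the 24 valid words (which is how I read the comment); it groups the distinct substrings by length, so the per-length totals can be checked against the figure above:

```python
# The 24 valid words from the Pt. 4 comment above.
WORDS = """abcdefghij abcdegfhij abcdeghifj abcdeghfij abdcefghij abdcegfhij
abdceghfij abdceghifj abdecfghij abdecgfhij abdecghfij abdecghifj
adbcefghij adbcegfhij adbceghfij adbceghifj adbecfghij adbecgfhij
adbecghfij adbecghifj adebcfghij adebcgfhij adebcghfij adebcghifj""".split()

def substrings_by_length(words):
    """Distinct contiguous substrings of the given words, grouped by length."""
    by_len = {}
    for w in words:
        for i in range(len(w)):
            for j in range(i + 1, len(w) + 1):
                by_len.setdefault(j - i, set()).add(w[i:j])
    return by_len

counts = {n: len(s) for n, s in sorted(substrings_by_length(WORDS).items())}
print(counts)  # e.g. counts[1] == 10 (the alphabet), counts[10] == 24
```

Printing `max(counts.values())` would confirm or correct the "90 distinct substrings" figure mentioned above.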

[**Note:** While doing this, I kept looking to see if I could spot some kind of pattern in the transpositions of letters. I never really did....although I suspect that counting transpositions will eventually offer some kind of useful information, in one form or another. Not a high-priority question, though.]

For (2), my first thought was that x \( \le \) y means "x is a substring of y," and that the monoidal product would be concatenation of strings.

**...and then condition (i) of the monoidal product complicates things.**

Actually...before I say much along these lines, I feel like I should mention that I'm in a bit of a morass: I'm not confident that I'm checking the operations over the correct set (namely, the set of substrings described above vs. A*, i.e. "before or after we apply the constraints of the PDC"), and at the moment I'm not even sure that I should trust my verification of the two operations that I did a week ago, mostly because last week I concluded that they wouldn't work, but earlier today I worked through them again with a far simpler scenario and found that they did work.

What I did earlier today was a PDC with just a single node, so the free monoid was over the alphabet {a}, and the elements are the empty word and words of one or more repetitions of the letter 'a.' I then worked through the verification of condition (i), using "is x a substring of y" and "concatenation" as the symmetric monoidal structure, and it worked out. But, looking back, I only considered the simplest case involving the empty word and the word of length one, and then wondered if there'd be an inductive way to show that those two operations would also work for other lengths of strings.
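Condition (i) here is monotonicity: x1 \( \le \) y1 and x2 \( \le \) y2 should give x1x2 \( \le \) y1y2. This can be brute-force tested over short words (a sketch of mine, not from the thread). Over a one-letter alphabet it holds, matching the single-node experiment above, but over two letters a counterexample appears: 'a' is a substring of 'ab' and of 'ba', yet 'aa' is not a substring of 'abba'.

```python
from itertools import product

def words(alphabet, max_len):
    """All words over the alphabet up to the given length (incl. empty word)."""
    return [''.join(p) for n in range(max_len + 1)
            for p in product(alphabet, repeat=n)]

def monotone_counterexample(alphabet, max_len=2):
    """Search for x1 <= y1 and x2 <= y2 (substring order) whose
    concatenations violate x1+x2 <= y1+y2."""
    ws = words(alphabet, max_len)
    for x1, y1, x2, y2 in product(ws, repeat=4):
        if x1 in y1 and x2 in y2 and (x1 + x2) not in (y1 + y2):
            return (x1, y1, x2, y2)
    return None

print(monotone_counterexample('a'))   # None: holds over a one-letter alphabet
print(monotone_counterexample('ab'))  # not None: fails over two letters
```

Of course this tests all of A*, not the PDC-constrained set, which is exactly the ambiguity flagged above about which set the operations should be checked over.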

I will say this: I think I mentioned earlier that I had started to revise the underlying set so that, instead of elements simply being words of lengths 0 through 10, they were equivalence classes. Namely, an element such as abcdefghij would be replaced by { {-, abcdefghij}, {a, bcdefghij}, ... {abcdefghi, j}, {abcdefghij, -} }. ('-' represents the empty word, or empty substring in this case.) But now I'm not sure if I need to go to this trouble at all. (If I do wind up needing to do this, though, let me mention that this is what led me to thinking that the 'equidivisibility of free monoids' was going to be relevant.)
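Those classes of two-part factorizations are mechanical to generate; a word of length n has exactly n + 1 of them (a small sketch of mine, writing each class as a list of (prefix, suffix) pairs with '' for the empty word):

```python
def factorizations(word):
    """All ways to split a word into (prefix, suffix) with prefix+suffix == word."""
    return [(word[:i], word[i:]) for i in range(len(word) + 1)]

print(factorizations('abc'))
# [('', 'abc'), ('a', 'bc'), ('ab', 'c'), ('abc', '')]
```

For the 10-letter word abcdefghij this yields the 11 pairs written out above, which is the structure the equidivisibility diagram relates.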

I think I'll end this comment here and get back to it once I'm clearer on what I have and have not done correctly.
