
Lecture 19 - Chapter 2: Chemistry and Scheduling

Before we dive into the math of resource theories, let me say a bit more about what they're good for! I said they're good for answering questions like these:

  1. Given what I have, is it possible to get what I want?
  2. Given what I have, how much will it cost to get what I want?
  3. Given what I have, how long will it take to get what I want?
  4. Given what I have, what is the set of ways to get what I want?

and also others. But let's see examples!

Chemistry. Suppose we have a bunch of chemicals, and reactions involving these chemicals. Then we can ask which collections of molecules can turn into which other collections. We can also ask what's the set of ways in which this can happen.

Puzzle 55. For example, suppose you have these chemical reactions: $$ \text{C} + \text{O}_2 \longrightarrow \text{CO}_2 $$ $$ \text{CO}_2 + \text{NaOH} \longrightarrow \text{NaHCO}_3 $$ $$ \text{NaHCO}_3 + \text{HCl} \longrightarrow \text{H}_2 \text{O} + \text{NaCl} + \text{CO}_2 $$ Can you use these to turn $$ \text{C} + \text{O}_2 + \text{NaOH} + \text{HCl} $$ into $$ \text{CO}_2 + \text{H}_2\text{O} + \text{NaCl} \text{?} $$ If so, how many ways can you do it?

The "can you do it?" question here is called the reachability problem, and you can read about it here:

If you like hard problems, the reachability problem is for you! The puzzle above is not hard, but in general the reachability problem is very hard. It was proved in 1981 that there's an algorithm to solve it, no matter what chemicals and reactions you choose. In other words, it's "decidable". However, it's known that no algorithm can solve it in polynomial time. The only known upper bound on the runtime of an algorithm that solves the reachability problem is insanely bad... in fact, so bad that I'll make you read our book to learn how bad! So, there's a very interesting set of puzzles here for people skilled in computational complexity theory.
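If you want to see what "reachability" means computationally, here is a minimal Python sketch of the brute-force approach for small instances: a breadth-first search over all multisets of chemicals reachable by firing the reactions of Puzzle 55. This is just an illustration I'm adding for concreteness; for larger Petri nets the set of reachable markings explodes, which is part of why the general problem is so hard.

```python
from collections import Counter, deque

# The reactions from Puzzle 55, written as (inputs, outputs) multisets.
reactions = [
    (Counter({"C": 1, "O2": 1}),       Counter({"CO2": 1})),
    (Counter({"CO2": 1, "NaOH": 1}),   Counter({"NaHCO3": 1})),
    (Counter({"NaHCO3": 1, "HCl": 1}), Counter({"H2O": 1, "NaCl": 1, "CO2": 1})),
]

def fire(marking, inputs, outputs):
    """Fire one reaction on a marking (a Counter), or return None if it can't fire."""
    if any(marking[s] < n for s, n in inputs.items()):
        return None
    result = marking.copy()
    result.subtract(inputs)
    result.update(outputs)
    return +result  # "+" drops species whose count is zero

def reachable(start, target):
    """Breadth-first search over the reachable markings."""
    key = lambda m: tuple(sorted(m.items()))
    seen, queue = {key(start)}, deque([start])
    while queue:
        m = queue.popleft()
        if m == target:
            return True
        for inputs, outputs in reactions:
            m2 = fire(m, inputs, outputs)
            if m2 is not None and key(m2) not in seen:
                seen.add(key(m2))
                queue.append(m2)
    return False

start  = Counter({"C": 1, "O2": 1, "NaOH": 1, "HCl": 1})
target = Counter({"CO2": 1, "H2O": 1, "NaCl": 1})
print(reachable(start, target))  # prints True
```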

One way to draw a bunch of chemicals and reactions involving them is a "Petri net". Here's the Petri net for the reactions I just listed:

[image: Petri net for the reactions above]

My book with Jacob is really about Petri nets. They're actually used in computer science more than chemistry! That's because they're a good way to draw a bunch of processes that take certain inputs and produce a bunch of outputs. Whenever we have this sort of situation, the reachability problem becomes important.

Scheduling. Suppose you have a bunch of jobs to do that take various amounts of time. And suppose you can only start some jobs when others are done. How long will it take to do all these jobs?

This is a hugely important problem, for example in big companies. One approach to solving it uses "PERT charts", where "PERT" - businesspeople love acronyms but I hate them - stands for "program evaluation and review technique". Here's an example:

[image: an example PERT chart with states 10, 20, 30, 40, 50 and tasks A through F]

The circles are different states, while the edges are different tasks. Each state is labelled with an arbitrary name: 10, 20, 30, 40 and 50. The tasks also have names: A, B, C, D, E, and F. More importantly, each task is labelled by the amount of time that task requires!

Your goal is to start at state 10 and move all the way to state 50. Since you're bossing lots of people around, you can make them do tasks simultaneously. However, you can only reach a state after you have done all the tasks leading up to that state. For example, you can't reach state 50 unless you have already done all of tasks C, E, and F.

Puzzle 56. What is the minimum amount of time it takes to get from state 10 to state 50?

Puzzle 57. Which tasks could take longer, without changing the answer to Puzzle 56? How much longer could each task take, without changing the answer? This amount of time is called the slack for that task.
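If you want to see how puzzles like these get solved mechanically, here is a minimal Python sketch of the standard critical-path computation: find the earliest time each state can be reached (the maximum over all incoming tasks), then work backwards to get the latest allowed times and the slack of each task. The chart below is a made-up stand-in with hypothetical durations, not a transcription of the chart in the image above, so only the method is meant to carry over to Puzzles 56 and 57.

```python
# Each task: (name, from_state, to_state, duration in months).
# Hypothetical numbers for illustration -- NOT read off the chart in the image.
tasks = [
    ("A", 10, 20, 3), ("B", 10, 30, 2), ("C", 30, 50, 4),
    ("D", 20, 40, 1), ("E", 20, 50, 3), ("F", 40, 50, 3),
]
# Assumes the states are numbered in topological order (every task goes
# from a smaller-numbered state to a larger-numbered one), as in PERT charts.
states = sorted({s for _, a, b, _ in tasks for s in (a, b)})

# Earliest time each state can be reached: you may only enter a state
# once ALL tasks pointing into it are done.
earliest = {states[0]: 0}
for s in states[1:]:
    earliest[s] = max(earliest[a] + d for _, a, b, d in tasks if b == s)

# Latest time each state may be reached without delaying the final state.
latest = {states[-1]: earliest[states[-1]]}
for s in reversed(states[:-1]):
    out = [(b, d) for _, a, b, d in tasks if a == s]
    latest[s] = min(latest[b] - d for b, d in out) if out else earliest[s]

print("minimum total time:", earliest[states[-1]])
for name, a, b, d in tasks:
    print(f"slack of task {name}:", latest[b] - earliest[a] - d)
```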

For an introduction to PERT charts and their uses, see:

You can read about algorithms to solve puzzles like those above: companies use the algorithms to schedule tasks! And so do governments. According to the first article:

"PERT" was developed primarily to simplify the planning and scheduling of large and complex projects. It was developed for the U.S. Navy Special Projects Office in 1957 to support the U.S. Navy's Polaris nuclear submarine project. It found applications all over industry. An early example was it was used for the 1968 Winter Olympics in Grenoble which applied PERT from 1965 until the opening of the 1968 Games.

As a category theorist, I am immediately attracted to diagrams, so I loved Petri nets and PERT charts as soon as I saw them. The puzzle for me was then to figure out the category theory hiding behind these diagrams!

I think I understand some things now. For example, it's possible to reinterpret PERT charts as "timed" Petri nets: that is, Petri nets where each reaction takes a specific amount of time.

But I haven't figured out everything. It's all part of a big subject: resource theories!

To read other lectures go here.


Comments

  • 1.
    edited May 8

    I think there's a typo in the third reaction formula -- \(\text{NaHCO}_2\) should be \(\text{NaHCO}_3\), or else the Petri net doesn't correspond with the set of reactions.

  • 2.
    edited May 9

    Thanks! You just caught a typo in my book - unfortunately after it was published. It's amazing how a finite-sized book can have an infinite number of typos.

    Speaking of mistakes: in my lecture above I originally wrote:

    Your goal is to start at state 10 and move all the way to state 50. Since you're bossing lots of people around, you can make them do tasks simultaneously. However, you can only reach a state after you have reached all its predecessors! For example, you can't reach state 50 unless you have already reached states 20, 30 and 40.

    This was wrong; this is not how people usually use PERT charts. The usual rules say that you can only reach a state after you have accomplished all the tasks leading up to that state - i.e., traversed all the edges pointing into that state.

    Based on my erroneous statement of the rules, Sophie Libkind got a different answer to Puzzles 56 and 57 - different than the answer by Jared Summers based on the usual rules.

    This led to an interesting discussion which will only make sense if you know my original mistake! I'm going to correct my lecture now, but please compare what I had written:

    Your goal is to start at state 10 and move all the way to state 50. Since you're bossing lots of people around, you can make them do tasks simultaneously. However, you can only reach a state after you have reached all its predecessors! For example, you can't reach state 50 unless you have already reached states 20, 30 and 40.

    to what's there now:

    Your goal is to start at state 10 and move all the way to state 50. Since you're bossing lots of people around, you can make them do tasks simultaneously. However, you can only reach a state after you have done all the tasks leading up to that state. For example, you can't reach state 50 unless you have already done all of tasks C, E, and F.

    The change makes a big difference.

  • 3.

    Puzzle 55

    Yes, 1 way (which the Petri net illustrates).

    Puzzle 56

    7 months.

    Puzzle 57

    Only task E can take longer, up to 1 month.

    Side note: I've had to look at too many PERT charts and am presently in PERT hell thanks to an ill-timed remark in a meeting on my part. Never suggest you know more about management than management (even when it's true). They may try to recruit you.

  • 4.

    Is there a difference between reachability and satisfiability?

  • 5.

    Jared Summers #3

    Never suggest you know more about management than management (even when it's true). They may try to recruit you.

    I expect there's a story behind that comment. Not necessarily any of my business, just sayin'...

  • 6.
    edited May 9

    Keith E. Peterson asked:

    Is there a difference between reachability and satisfiability?

    Reachability is concerned with the general problem of the existence of a path between point A and point B. (Propositional) satisfiability is concerned with determining if an assignment of values to variables exists that makes a given formula true.

    You can probably connect the two by considering point A to be an empty assignment, and point B to be any member of the class of assignments that simplify the formula to "true". Then reachability would ask if we can augment our starting assignment to one in the class of satisfying assignments. But I'm not sure if this contributes much to a discussion on satisfiability.

  • 7.
    edited May 9

    Some remarks on the chemical Petri net and catalysis:

    The feedback loop to the \(\text{CO}_2\) makes this system interesting. It means that once we obtain 1 equivalent of \(\text{CO}_2\), we no longer need to supply any since it is always regenerated with every reaction cycle.

    $$ \text{CO}_2 + \text{NaOH} \longrightarrow \text{NaHCO}_3 $$ $$ \text{NaHCO}_3 + \text{HCl} \longrightarrow \text{H}_2 \text{O} + \text{NaCl} + \text{CO}_2 $$ So essentially, if one wanted to produce the products water \(\text{H}_2\text{O}\) and table salt \(\text{NaCl}\) from the reactants sodium hydroxide \(\text{NaOH}\) and hydrochloric acid \(\text{HCl}\) via this Petri net, one could do so with only a single molecule of \(\text{CO}_2\), limited only by the amount of the reactants one has on hand. The reaction stops once one of them runs out.

    One could thus simplify the above equations to:

    $$ \text{HCl} + \text{NaOH} \xrightarrow{\text{'cat.'} \text{CO}_2} \text{NaCl} + \text{H}_2\text{O} $$ Where 'cat.' stands for 'catalyzed by'. I put it in quotations because in order for something to actually be catalysis it would have to increase the reaction rate by lowering the activation energy (this can happen by a number of mechanisms). With the above reaction (acid/base, thermodynamically favored), I very much doubt that in reality having the \(\text{CO}_2\) around would speed things up, but according to our Petri net model, it's required for the reaction to proceed.

    The step we left out in the above simplified equation, i.e. the middle step in:

    $$ \text{CO}_2 + \text{NaOH} + \text{HCl}\longrightarrow \text{NaHCO}_3 \ + \text{HCl} \longrightarrow \text{CO}_2 \ + \text{H}_2 \text{O} + \text{NaCl} $$ would be referred to as the mechanism of catalysis by chemists. When one finds that adding a specific substance drastically increases the rate of a reaction, the search is on to find out how it does so. This often involves a lower energy intermediate that can only be formed in the presence of the catalyst.

    As the book points out, the oven in the lemon pie example can be thought of as a 'catalyst' as well. Just to give a non-chemical example (although baking is in some sense also chemistry).
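    A quick way to check the "one equivalent of \(\text{CO}_2\) is enough" claim is to push tokens around. Here is a minimal Python sketch of my own, with a greedy firing rule: start with a single \(\text{CO}_2\) and n units each of \(\text{NaOH}\) and \(\text{HCl}\), and fire the two reactions until a reactant runs out. The single \(\text{CO}_2\) is regenerated on every cycle, so all n units get converted.

    ```python
    from collections import Counter

    def run(n):
        """Start with 1 CO2 and n NaOH + n HCl; fire the two reactions greedily."""
        m = Counter({"CO2": 1, "NaOH": n, "HCl": n})
        while True:
            if m["CO2"] >= 1 and m["NaOH"] >= 1:        # CO2 + NaOH -> NaHCO3
                m.subtract({"CO2": 1, "NaOH": 1}); m.update({"NaHCO3": 1})
            elif m["NaHCO3"] >= 1 and m["HCl"] >= 1:    # NaHCO3 + HCl -> H2O + NaCl + CO2
                m.subtract({"NaHCO3": 1, "HCl": 1}); m.update({"H2O": 1, "NaCl": 1, "CO2": 1})
            else:
                return +m  # "+" drops species whose count has fallen to zero

    print(run(5))  # Counter({'H2O': 5, 'NaCl': 5, 'CO2': 1}): one CO2 token suffices
    ```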

  • 8.

    @Jonathan Castello

    I suspect a better way to view reachability and satisfiability as equivalent would be to view a proof (more specifically, modus ponens) as a path between propositions.

    I believe homotopy type theory makes this more explicit.

  • 9.
    edited May 9

    Provability and satisfiability are distinct, though. Satisfiability is concerned with a single formula, not a relationship between two formulae. For instance, I know that \(\bot \vdash A \land \neg A\), but \(A \land \neg A\) is not satisfiable.

    Worse, satisfiability is weaker than validity, so it wouldn't even be fair to characterize satisfiability by \(\top \vdash P\).

  • 10.

    I ask because John Baez notes that with reachability it is "known that no algorithm can solve it in polynomial time."

    So I wondered if reachability is equivalent to satisfiability since satisfiability is well known to be NP-complete.

  • 11.
    edited May 9

    Hm. I spent a little more time digging here. The specific reachability problem in question concerns Petri nets; this is good, because reachability is definitely solvable in polynomial time for finite directed graphs. The issue seems to be that Petri nets induce an infinite directed graph of a particular character, whose vertices are sets of resources (the set of tokens placed on the Petri net at a given time, I believe) and whose edges are given by the action of the Petri net itself. There is a considerable amount of regularity to this graph -- as one would expect of something with a finite description! -- but not enough to give anything resembling a reasonable algorithm. (Phrased this way, it's somewhat surprising it's decidable at all.)

    On the other hand, Petri net reachability is decidable. This isn't true of most proof systems of any strength. John's book explains why this is interesting:

    On the bright side, it means that Petri nets might be fairly powerful when viewed as computers themselves! After all, for a universal Turing machine, the analogue of the reachability problem is undecidable. So if the reachability problem for Petri nets were decidable, they couldn’t serve as universal computers. But if it were decidable but hard, Petri nets might be fairly powerful—though still not universal—computers.

    As for satisfiability, we're given a formula of finite size \(n\) and asked to determine if it has any satisfying assignment. There can only be at most \(2^n\) assignments, so we can brute-force all of them to give a singly-exponential algorithm. This is a significantly better bound than anything referenced in Section 25.1 of John's book, so I am inclined to believe that (a) Petri net reachability is significantly more powerful than propositional satisfiability, and that (b) if an equivalence were established, that would be worth a publication.
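    (For concreteness, that brute-force algorithm fits in a few lines. This is just a toy sketch with an invented clause encoding, where each literal is a signed integer.)

    ```python
    from itertools import product

    def satisfiable(clauses, n):
        """Brute-force SAT over n variables; a clause is a list of literals,
        where literal +i means variable i is true and -i means it is false."""
        for assignment in product([False, True], repeat=n):
            if all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in clauses):
                return True
        return False

    # (p or q) and (not p or q) and (not q or r)
    print(satisfiable([[1, 2], [-1, 2], [-2, 3]], n=3))  # True
    ```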

    (EDIT: Still mulling this one over. It isn't obvious whether Petri net reachability is in NP at all. I can imagine that there exist reachability problems where the shortest witnesses are exponential in the size of the Petri net. From problem 51 (Section 25.2) in John's book, this would probably involve giving a presentation of the desired arrow in terms of the Petri net morphisms and tensor product. Given such a presentation, we can surely check that it does indeed produce the arrow we desire; but if no polynomial-sized presentation is guaranteed to exist, we can't rightly say the problem is in NP.)

  • 12.

    I got a slightly different answer for Puzzle 57 and was wondering what you all think!

    Puzzle 57 My interpretation of the problem was that you don't need to do every task, you only need to visit each preceding state. By skirting around task E, you can do this in 7 months no matter how long task E takes. Thus, task E can take an arbitrary amount of time without changing the answer to Puzzle 56.

  • 13.
    edited May 9

    Nice comment, Marius Furter! The \(CO_2\) should be a catalyst in the way you describe, because without it, the reaction would stop altogether.

  • 14.

    No, Sophie Libkind, John asked to get all the way to state '50', and he also made clear that to get to a state you have to do all the tasks leading there. What you described is more like getting all the preparations done for the tasks leading to '50'. For that, however, the time should be 4 months.

    It's so nice to have a course where we can quickly eradicate misunderstandings like these!

  • 15.

    @Sophie_Libkind to comment 12:

    I also find it a bit unclear what John means by "you have reached all its predecessors". He probably doesn't mean "have been at all predecessor states" but rather "have performed all predecessor tasks", because (at least for a production process) one usually doesn't stop in the predecessor states; that is, you usually don't leave things in an unfinished state, i.e. you usually need to do all tasks - unless some steps are redundancy steps. In the former interpretation 6 months would be enough, since state 50 would have been reached through 10->30->50 and all predecessor states (which take at most 4 months) would have been reached as well, and there would be no slack time. But in the latter interpretation (the one I think is meant) one needs 7 months and E would have a slack time of 1 month. So it depends how much redundancy you want to / are allowed to have in your system.

  • 16.

    @Sophie Libkind and hi @nad. If you add 1 to A, then A = 4, D = 1, F = 3 gives 4 + 1 + 3 = 8, which busts 7; so E + 1 is the unique answer, since A = 3 and E = 3 give 3 + 3 + 1 = 7, as per the previous puzzle.

  • 17.
    edited May 9

    Keith wrote:

    So I wondered if reachability is equivalent to satisfiability since satisfiability is well known to be NP-complete.

    I will wildly guess that the "satisfiability" you're talking about is SAT, the Boolean satisfiability problem. This asks whether a Boolean expression like \( p \vee (q \wedge r) \vee (\neg p \wedge r)\) is true for some assignment of T and F to all the variables. Reachability, on the other hand, is a question involving Petri nets.

    So, they are quite different things. To prove the equivalence you're wondering about, you'd need to figure out some way to translate reachability questions about Petri nets into satisfiability questions about Boolean expressions.

    But I'm pretty sure this is impossible - at least, not in exponential time - because NP problems like SAT can be solved in exponential time, and nobody knows if reachability can be solved in exponential time. We have an exponential lower bound on the runtime of a certain algorithm for reachability, but the best known upper bound is astronomical.

    Reachability can, however, be proved equivalent to lots of other questions about Petri nets.

    You can see everything I know about this here:

    • John Baez and Jacob Biamonte, Quantum Techniques for Stochastic Mechanics (https://arxiv.org/abs/1209.3632), Section 25.1: The Reachability Problem.

    It's really fun stuff.

  • 18.
    edited May 9

    Marius - your remarks on catalysis are very interesting and important! One of the beauties of resource theory is that it lets us make the concept of "catalyst" very general and mathematical.

    In Lecture 20, I talk about some reactions in manufacturing, like these:

    $$ \textrm{[processing chip]} + \textrm{[memory chip]} + 4 \textrm{[minute]} \to \textrm{[laptop]} $$ $$ \textrm{[processing chip]} + 2 \textrm{[memory chip]} + 3 \textrm{[minute]} \to \textrm{[desktop]} $$ $$ \textrm{[laptop]} \to 750\textrm{[profit]} $$ $$ \textrm{[desktop]} \to 1000 \textrm{[profit]} $$ These are often studied using linear programming. Linear programming ignores catalysis because it doesn't distinguish between a reaction, say

    $$ X + Y \to Z, $$ and a similar reaction that involves a catalyst:

    $$ X + Y + C \to Z + C. $$ The manufacturing reactions I listed don't involve catalysis, but if they did, linear programming would be somewhat inadequate to capture all the details! (At least the simple sort of linear programming I know about. Maybe there's a fancier version that handles catalysis.)
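    (Just to make the linear-programming angle concrete, here is a minimal sketch using scipy.optimize.linprog. The per-unit coefficients come from the reactions above, while the available stocks of chips and minutes on the right-hand side are invented for this example.)

    ```python
    from scipy.optimize import linprog

    # x[0] = laptops built, x[1] = desktops built.
    c = [-750, -1000]        # linprog minimizes, so negate the profits
    A_ub = [[1, 1],          # processing chips used per unit
            [1, 2],          # memory chips used per unit
            [4, 3]]          # minutes used per unit
    b_ub = [100, 160, 360]   # hypothetical stocks: processing chips, memory chips, minutes

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # optimal (possibly fractional) production plan and total profit
    ```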

  • 19.
    edited May 9

    Sophie wrote:

    I got a slightly different answer for Puzzle 57 and was wondering what you all think!

    Puzzle 57. My interpretation of the problem was that you don't need to do every task, you only need to visit each preceding state. By skirting around task E, you can do this in 7 months no matter how long task E takes. Thus, task E can take an arbitrary amount of time without changing the answer to Puzzle 56.

    Very good point! This is a perfectly self-consistent viewpoint. But now that I think about it, I'm pretty sure that people who use PERT charts usually say you must accomplish every task (=arrow) pointing to a given state (=node) before you can go ahead and do further tasks pointing out of that state.

    So, my description of the problem was inaccurate, at least for PERT charts as ordinarily used. I've fixed my lecture, and explained the change in comment 2.

    One annoying thing about the Wikipedia article on PERT charts is that it doesn't start by clearly stating the rules of the game.

  • 20.
    edited May 9

    Possibly the travelling salesman problem (https://en.wikipedia.org/wiki/Travelling_salesman_problem) would be a classical "target" for resource theories? Am I correct?

  • 21.
    edited May 9

    Nad wrote:

    I find it also a bit unclear what John means with "you have reached all its predecessors" By that he probably doesn't mean "had been at all predecessors states" but more "performed all predecessors tasks"...

    That's not really what I meant, but it's what I should have meant. One reason is that Jared Summers, an official expert on PERT charts, answered my puzzles in the way that's consistent with this correction.

  • 22.

    Pierre Prado wrote:

    Possibly the travelling salesman problem (https://en.wikipedia.org/wiki/Travelling_salesman_problem) would be a classical "target" for resource theories? Am I correct?

    The traveling salesman problem is reducible to Boolean satisfiability if my memory is correct.

  • 23.
    edited May 9

    Jonathan Castello - nice comment! Everything you said sounds right to me.

    It isn't obvious whether Petri net reachability is in NP at all.

    I'm no good at computational complexity, so I could be missing something obvious, but I hope that if this problem were known to be in NP, I'd have bumped into this fact during my literature search.

    In 1976, Richard J. Lipton showed that any algorithm for solving the Petri net reachability problem requires at least an exponential amount of memory space:

    • Richard J. Lipton, The reachability problem requires exponential space (http://www.cs.yale.edu/publications/techreports/tr63.pdf), Technical Report 62, Yale University, 1976.

    and thus also at least an exponential amount of time.

  • 24.
    edited May 9

    Ew. I seem to be missing the point. I decide to believe that, if my interjections become too awful, someone will point that out to me loud and clear. So I won't stop just because I see room for improvement.

    That being said, man, what an interesting pack of discussions! Thanks everybody!

  • 25.
    edited May 9

    I wonder if we could get prof Erik Demaine in here. He has a pretty good handle on what it takes to reduce a problem to a known strongly NP-hard problem.

    For instance, he's shown that playing "offline" Tetris optimally is NP-complete: more precisely, given a known Tetris board configuration and some known future Tetris pieces, it is NP-complete to decide whether we can get to a board configuration that is not a game over. He showed this by a reduction from 3-PARTITION (which I believe is reducible to SAT, but don't quote me on that).

    In terms of Tetris, my question above is, if we view board configurations and pieces as resources, can we even produce any valid new resource (that being any specific board configuration) following the rules of the game without getting a game over?

  • 26.
    edited May 10

    John Baez wrote in #23:

    As far as I know, this doesn't rule out the possibility that this problem is in NP.

    We know that \(\mathsf{DTIME}(f(n)) \subsetneq \mathsf{DSPACE}(f(n))\). This is because, intuitively, in order to use up all of that space you needed to take the time to write it out. But the containment is strict because oftentimes you can just mutate your input. For example qsort runs in \(\mathsf{DTIME}(\mathcal{O}(n \, \mathsf{ln}(n)))\) but only takes \(\mathsf{DSPACE}(\mathcal{O}(n))\) (this is including its input string).

    We also know that \(\mathsf{PSPACE} \subsetneq \mathsf{EXPSPACE}\) from the space hierarchy separation theorem.

    From the above two observations, we have \(\mathsf{P} \subsetneq \mathsf{EXPSPACE}\).

    Hence, a polynomial time reduction of Petri net reachability to SAT would suffice to prove \(\mathsf{EXPSPACE} \subseteq \mathsf{NP}\) and thus \(\mathsf{P} \neq \mathsf{NP}\). By the simple argument above I feel I deserve at least half the Millennium prize money if someone in this thread comes up with such a polynomial time reduction ;-)

  • 27.

    Matthew, can you elaborate on why reducing Petri net reachability to SAT would imply \(\text{EXPSPACE} \subseteq \text{NP}\)? Is Petri net reachability known to be EXPSPACE-complete? I don't think you're necessarily wrong, but the critical step is eluding me.

    Approaching this similarly: We have an exponential lower bound on space for Petri net reachability. As you said, this necessarily imposes an exponential lower bound on time, since you can only write one cell per unit time (per tape). Suppose a reduction to SAT existed. If SAT had a subexponential algorithm, then we could defeat the exponential lower bound; so SAT, and by extension every NP-Complete problem, must not be solvable in subexponential time. Therefore, \(\text{P} \ne \text{NP}\).

  • 28.
    edited May 9

    Yeah, like Jonathan I don't see how proving one particular problem that's known to take at least exponential space is in \(\text{NP}\) would imply \(\textrm{EXPSPACE} \subseteq \textrm{NP}\).

    I've never heard anything about Petri net reachability being \(\textrm{EXPSPACE} \)-complete. If it's true I wanna know!

  • 29.
    edited May 9

    Robert wrote:

    No, Sophie Libkind, John asked to get all the way to state '50', and he also made clear that to get to a state you have to do all the tasks leading there.

    No, I didn't make that clear at all. I should have made it clear, but I said something very different:

    However, you can only reach a state after you have reached all its predecessors! For example, you can't reach state 50 unless you have already reached states 20, 30 and 40.

    Sophie correctly picked up on this issue: I was demanding only that all previous states be reached, not that all previous tasks be completed.

    I will now fix this mistake of mine.

  • 30.

    Let me make a weaker claim instead of a full equivalence.

    Is (Boolean) satisfiability a type of reachability?

    Or in constructivist language, can we build the various Boolean operations and model SAT problems internally in Petri nets or other equivalent models of resource theories?

    The answer seems "yes" because Petri nets are in a computationally stronger class.

  • 31.
    edited May 9

    Keith - you'll notice I've carefully sidestepped the question of whether the Boolean satisfiability problem can be reduced to the Petri net reachability problem. That's a really interesting question, but I don't know the answer.

    The answer seems "yes" because Petri nets are in a computationally stronger class.

    That's an interesting guess but not a proof.

    Above, I only discussed whether the Petri net reachability problem can be reduced to the Boolean satisfiability problem. I don't know the answer to that either, but at least I can say some mildly entertaining things about it!

  • 32.
    edited May 9

    Is there something on the reachability problem but with the assumption that at all steps you must have \( \leq d_i \) of object \( X_i \)? This is breaking the symmetric monoidal structure by taking a very small subset of the objects, but I wanted a finite dimensional linear operator on \( \otimes_i \mathbb{C}^{d_i+1} \) that sends each computational basis state to the result of applying any one of the reactions (including identity) in uniform superposition (provided they still satisfy all the constraints). Then apply this some large number of times and evaluate the matrix element between the starting and ending states to see if it is 0. I see some with \( d_i =1 \), but not anything higher.
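    Here is a rough numpy sketch of the bounded idea, purely my own illustration: it uses a plain 0/1 transition matrix rather than a uniform superposition, the reactions from the lecture, and bounds \( d_i = 1 \) for every species. It enumerates every marking with at most \( d_i \) tokens of species \( X_i \), builds the one-step operator including the identity (discarding any firing that would exceed a bound), and reads off reachability from a high power of that operator.

    ```python
    import itertools
    import numpy as np

    species = ["C", "O2", "CO2", "NaOH", "NaHCO3", "HCl", "H2O", "NaCl"]
    bounds  = {s: 1 for s in species}   # d_i = 1 for every species here; any bounds work

    # The lecture's reactions, as (consumed, produced) dictionaries.
    reactions = [
        ({"C": 1, "O2": 1}, {"CO2": 1}),
        ({"CO2": 1, "NaOH": 1}, {"NaHCO3": 1}),
        ({"NaHCO3": 1, "HCl": 1}, {"H2O": 1, "NaCl": 1, "CO2": 1}),
    ]

    # Enumerate every bounded marking and give each one an index.
    markings = list(itertools.product(*[range(bounds[s] + 1) for s in species]))
    index = {m: i for i, m in enumerate(markings)}

    def step(m, consumed, produced):
        """Fire one reaction if possible and if the result stays within the bounds."""
        d = dict(zip(species, m))
        if any(d[s] < n for s, n in consumed.items()):
            return None
        for s, n in consumed.items():
            d[s] -= n
        for s, n in produced.items():
            d[s] += n
        if any(d[s] > bounds[s] for s in species):
            return None
        return tuple(d[s] for s in species)

    # One-step transition matrix, including "do nothing" (the identity).
    T = np.eye(len(markings))
    for m in markings:
        for consumed, produced in reactions:
            m2 = step(m, consumed, produced)
            if m2 is not None:
                T[index[m2], index[m]] = 1.0

    start  = tuple(1 if s in {"C", "O2", "NaOH", "HCl"} else 0 for s in species)
    target = tuple(1 if s in {"CO2", "H2O", "NaCl"} else 0 for s in species)

    # Repeated squaring with clipping keeps the entries 0/1 and avoids overflow;
    # 8 squarings cover paths of length up to 2^8 = 256 = the number of markings here.
    R = (T > 0).astype(float)
    for _ in range(8):
        R = np.clip(R @ R, 0, 1)
    print(R[index[target], index[start]] > 0)   # True: the target marking is reachable
    ```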

  • 33.
    edited May 9

    Hi, Ammar! My first comment is: the bad news is that we're not on a Wordpress blog, so you need to use \\( \\) around your math instead of $latex $. But the good news is that you can edit your comments by clicking on the little gear at upper right.

    I'll say something more interesting later, I promise! But right now I need to attend the department tea.

  • 34.

    We've discussed catalysts a bit, but is there anything akin to an inhibitor in this setting? In other words, something where an arrow \(X \to Z\) exists, but an arrow \(X \otimes Y \to Z \otimes Y\) does not?

  • 35.
    edited May 10

    Jonathan - great questions!

    The framework we're discussing right now can't explain "inhibition". In biochemistry, "inhibitors" often work by binding to one of the reactants. But this only happens in a context where our reactions have "rates" associated to them. That is, we might have a reaction \(X \otimes Y \to A\) that occurs with such a high rate that \(X \otimes Y \to Z \otimes Y\) rarely gets a chance to happen, because the rate of \(X \to Z\) is lower. More precisely: if there are enough \(Y\)s around, most of the \(X\)s bind to the \(Y\)s and form \(A\)s before they get a chance to become \(Z\)s. In the framework discussed near the start of Chapter 2 - namely, symmetric monoidal posets - reactions don't have rates attached to them.

    For the same reason, this simple framework can't explain how catalysts in chemistry increase the rates of reactions. It can only explain situations where a reaction is impossible without a catalyst, but becomes possible with it.

    I've been thinking a lot about open reaction networks with rates (https://johncarlosbaez.wordpress.com/2017/07/30/a-compositional-framework-for-reaction-networks/), so you can follow that link to read more about those if you're curious. That's a framework that can handle inhibition!
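    Here's a tiny numerical illustration of that rate-based story, a sketch of my own with made-up rate constants: mass-action kinetics for the slow reaction \(X \to Z\) competing with the fast reaction \(X + Y \to A\), integrated by Euler steps. With \(Y\) present, most of the \(X\) is mopped up into \(A\) and very little \(Z\) ever forms; without \(Y\), essentially all of the \(X\) becomes \(Z\).

    ```python
    def simulate(y0, k_slow=0.1, k_fast=5.0, dt=0.001, steps=200_000):
        """Euler integration of mass-action kinetics for X -> Z and X + Y -> A."""
        x, y, z, a = 1.0, y0, 0.0, 0.0
        for _ in range(steps):
            r_slow = k_slow * x        # rate of X -> Z
            r_fast = k_fast * x * y    # rate of X + Y -> A
            x += dt * (-r_slow - r_fast)
            y += dt * (-r_fast)
            z += dt * r_slow
            a += dt * r_fast
        return round(z, 3), round(a, 3)

    print(simulate(y0=0.0))   # no Y around: essentially all of X ends up as Z
    print(simulate(y0=1.0))   # with Y present: most X is captured as A, little Z forms
    ```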

  • 36.

    John Baez wrote in #28:

    Yeah, like Jonathan I don't see how proving one particular problem that's known to take at least exponential space is in \(\textsf{NP}\) would imply \(\textsf{EXPSPACE} \subseteq \textsf{NP}\).

    I've never heard anything about Petri net reachability being \(\textsf{EXPSPACE} \)-complete. If it's true I wanna know!

    My pithy argument was too pithy.

    Fortunately, I have found some references.

    Esparza et al., Decidability Issues for Petri Nets (1994, https://pdfs.semanticscholar.org/08f6/b004cd6e8cbf63b7a5e89398e3d208db1a0a.pdf), remark that Petri net reachability is \(\textsf{EXPSPACE} \)-complete for symmetric Petri nets. They write "loosely speaking, a Petri net is symmetric if for every transition \(t\) there is a reverse transition \(t^\prime \) whose occurrence 'undoes' the effect of the occurrence of \(t\)". They quote the proof in Cardoza, Lipton and Meyer, Exponential Space Complete Problems for Petri Nets and Commutative Semigroups (1976, https://dl.acm.org/citation.cfm?id=803630). Cardoza et al. call symmetric Petri nets reversible.

    The reachability problem for arbitrary Petri nets must then be at least as hard, i.e. \(\textsf{EXPSPACE} \)-hard.

    Moreover, \(\textsf{NP} \subsetneq \textsf{EXPSPACE} \) - this is because it is a folk theorem that \(\textsf{NP} \subseteq \textsf{PSPACE}\) and \(\textsf{PSPACE} \subsetneq \textsf{EXPSPACE}\) by the hierarchy theorem.

  • 37.

    @John_Baez @Sophie_Libkind @Robert_Figura

John wrote in #19:

    "This is a perfectly self-consistent viewpoint."

    It seems you forgot something here.

That is, Puzzle 57 reads:

    Puzzle 57. Which tasks could take longer, without changing the answer to Puzzle 56?..

    and

    Puzzle 56. What is the minimum amount of time it takes to get from state 10 to state 50?

But seven months is not the minimum time it takes to get from state 10 to state 50 - please see comment 15 again.

    John wrote:

    That's not really what I meant, but it's what I should have meant. One reason is that Jared Summers, an official expert on PERT charts, answered my puzzles in the way that's consistent with this correction.

He gave the same answer as me for the second variant (see comment 15). This does seem to be the official reading of PERT charts - at least I understood it the same way. (I recently also had to look at those organizational tools for work, and I gather that he is not very happy with having to deal with PERT charts and the like either - too bad he couldn't avoid being recruited.)

But as I said in 15, there are also processes where your initial interpretation makes sense. An example can be found in Italian haute couture production. It's given as one example of how certain social conditions arise in [this book](https://en.wikipedia.org/wiki/Gomorrah_(book)). I bought the book as a present and only got to look at it briefly, but if I remember correctly it describes how the production of some haute couture items is put up for a kind of auction: those who sew fastest and cheapest (while keeping haute couture quality) get paid for the sewn product, the others not. It seems they are at least allowed to sell their late or partly unfinished products on some secondary market, but I don't know how this will go together in future with the new IP tendencies in fashion.

  • 38.

    I found this in my simplex code notes. I don't know what Lemke-Howson is so I doubt I read or understood it.

Y. Disser and M. Skutella (2013), In defense of the Simplex Algorithm's worst-case behavior, http://arxiv.org/abs/1311.5935

These papers show that the Lemke-Howson algorithm can actually solve PSPACE-complete problems (!) and that the Simplex algorithm can solve NP-hard problems, respectively (with the solution, in the case of the Simplex algorithm, encoded in the computation trace - obviously not in the LP solution, which can be found in polynomial time). These results highlight just how powerful these algorithms are, and they reinforce the importance of understanding what types of, or restrictions on, inputs matter.

  • 39.

    I chose to ignore interior-point and ellipsoid methods as per:

We prove that the classic policy-iteration method (Howard 1960), including the Simplex method (Dantzig 1947) with the most-negative-reduced-cost pivoting rule, is a strongly polynomial-time algorithm for solving the Markov decision problem (MDP) with a fixed discount rate. Furthermore, the computational complexity of the policy-iteration method (including the Simplex method) is superior to that of the only known strongly polynomial-time interior-point algorithm ([28] 2005) for solving this problem. The result is surprising since the Simplex method with the same pivoting rule was shown to be exponential for solving a general linear programming (LP) problem, the Simplex (or simple policy-iteration) method with the smallest-index pivoting rule was shown to be exponential for solving an MDP regardless of discount rates, and the policy-iteration method was recently shown to be exponential for solving an undiscounted MDP. We also extend the result to solving MDPs with sub-stochastic and transient state transition probability matrices.

Yinyu Ye, [The Simplex and Policy-Iteration Methods are Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate (2010)](http://web.stanford.edu/~yyye/SimplexMDP3.pdf)
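To make the quoted result concrete, here is a minimal sketch of Howard-style policy iteration for a tiny discounted MDP. This is not code from Ye's paper; the transition matrices, rewards, and discount factor are invented purely for illustration.

```python
import numpy as np

# P[a][s, s'] = transition probabilities, R[a][s] = expected rewards, g = discount rate.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.6, 0.4]])]   # action 1
R = [np.array([1.0, 0.0]),
     np.array([0.0, 2.0])]
g = 0.9
n_states, n_actions = 2, 2

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve (I - g * P_pi) v = r_pi exactly.
    P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
    r_pi = np.array([R[policy[s]][s] for s in range(n_states)])
    v = np.linalg.solve(np.eye(n_states) - g * P_pi, r_pi)

    # Policy improvement: act greedily with respect to v.
    q = np.array([[R[a][s] + g * P[a][s] @ v for a in range(n_actions)]
                  for s in range(n_states)])
    new_policy = q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("optimal policy:", policy, "optimal values:", v)
```

Each round is one exact policy evaluation (a linear solve) followed by a greedy improvement step; the quoted result says that for a fixed discount rate the number of rounds is polynomially bounded.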

  • 40.

To be clear, I don't dislike PERT charts, and the story isn't that interesting. My office is trying to become more Agile and less Waterfall, but management only got as far as adding daily standups and were confused as to why that wasn't enough. I ended up explaining the theory of constraints, lean, and agile to them and got drafted to help. There are a lot of teams and projects here (1k or so employees, almost all engineers or computer scientists, in our department alone). So the "hell" is being in management/planning more than doing (and with no authority to direct change - I'm more like an internal consultant).

PERT charts and Gantt charts (less useful in some ways, but a helpful visualization) are great from a scheduling and planning perspective. Interestingly, since this is largely a software shop, we ended up with an internal team that developed our planning tools over the years. They've never included PERT charts in their tools, which might be something I'll point out to them. They do have Gantt charts (which can be generated from the same data), but these often don't show the causal links between states/activities - the diagram gets noisy if you add all the connections - so the dependencies are left implicit: an activity sitting to the right of another in the chart may or may not depend on it. The PERT charts we do use are mostly handmade in another program and not in the main reporting tool that upper management sees, so projects have them but don't report them. Which is silly.
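Since the thread keeps coming back to what a PERT chart actually computes, here is a tiny sketch of the underlying calculation: the earliest possible project finish is the length of the longest path through the task dependency DAG (the "critical path"). The task names and durations below are made up, not taken from the lecture's chart.

```python
from functools import lru_cache

# tasks: name -> (duration, list of prerequisite tasks); invented example data
tasks = {
    "A": (2, []),
    "B": (3, []),
    "C": (4, ["A"]),
    "D": (1, ["A", "B"]),
    "E": (2, ["C", "D"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name: str) -> int:
    """Earliest time this task can be finished: its duration plus the latest prerequisite finish."""
    duration, prereqs = tasks[name]
    return duration + max((earliest_finish(p) for p in prereqs), default=0)

# Project length = latest finish over all tasks (here the chain A -> C -> E gives 8).
print(max(earliest_finish(t) for t in tasks))
```

Tasks off the longest path have slack, which is exactly the sense in which some tasks "could take longer" without changing the project's finish time.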

  • 41.

[Jonathan Castello #27 wrote](https://forum.azimuthproject.org/discussion/comment/17868/#Comment_17868):

    Matthew, can you elaborate on why reducing Petri net reachability to SAT would imply \(\text{EXPSPACE} \subseteq \text{NP}\)? Is Petri net reachability known to be EXPSPACE-complete? I don't think you're necessarily wrong, but the critical step is eluding me.

As I mentioned above, [Cardoza, Lipton and Meyer (1976)](https://dl.acm.org/citation.cfm?id=803630) establish that reachability for symmetric Petri nets is \(\textsf{EXPSPACE}\)-complete.

I didn't know this when I wrote my argument yesterday; I had to look it up.

If we let \(\textsf{PETRI-REACH}\) be the class of problems reducible to Petri net reachability, then \(\textsf{EXPSPACE} \subseteq \textsf{PETRI-REACH}\).

> Approaching this similarly: We have an exponential lower bound on space for Petri net reachability. As you said, this necessarily imposes an exponential lower bound on time, since you can only write one cell per unit time (per tape). Suppose a reduction to SAT existed. If SAT had a subexponential algorithm, then we could defeat the exponential lower bound; so SAT, and by extension every NP-Complete problem, must not be solvable in subexponential time. Therefore, \(\text{P} \ne \text{NP}\).

This is good, but I believe we can do better.

    Not only does \(\textsf{PETRI-REACH} \subseteq \textsf{NP} \implies \textsf{NP} \neq \textsf{P}\), but in fact we have the stronger result:

$$ \textsf{NP} \subsetneq \textsf{PETRI-REACH} $$

Proof.

It's well known that \(\textsf{NP} \subseteq \textsf{PSPACE}\) (see, for instance, [Arora and Barak (2007), §4.2, p. 78](http://theory.cs.princeton.edu/complexity/book.pdf)).

We also know that \(\textsf{PSPACE} \subsetneq \textsf{EXPSPACE}\) from the [space hierarchy theorem](https://en.wikipedia.org/wiki/Space_hierarchy_theorem).

Finally, we have \(\textsf{EXPSPACE} \subseteq \textsf{PETRI-REACH}\) by [Cardoza et al. (1976)](https://dl.acm.org/citation.cfm?id=803630).

    Hence \(\textsf{NP} \subsetneq \textsf{PETRI-REACH}\).

    \(\Box\)

  • 42.

@John Baez:

I had a thought. I know you can use the Lawvere fixed point theorem to prove the undecidability of the halting problem. Generalized to oracle machines, it establishes separation theorems for the [arithmetical hierarchy](https://en.wikipedia.org/wiki/Arithmetical_hierarchy).

    I am wondering: can the separation theorems for the time and space hierarchies in ordinary complexity theory also be seen as applications of the Lawvere fixed point theorem?
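For reference, the theorem being invoked says, roughly: in a cartesian closed category, if \(\phi \colon A \to Y^A\) is point-surjective, then every endomorphism \(f \colon Y \to Y\) has a fixed point, i.e. there is a point \(y \colon 1 \to Y\) with

$$ f \circ y = y. $$

The classic diagonal arguments run this in contrapositive form: exhibit an \(f\) with no fixed point (such as boolean negation), and conclude that no point-surjective \(\phi\) can exist.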

On a related note, it's well known that the fixed point theorem can be used to prove Gödel's first incompleteness theorem. However, I found a super cute proof that applies it to prove the second incompleteness theorem rather succinctly. Not sure if it's something to post here...

  • 43.

    Matthew wrote:

    I am wondering: can the separation theorems for the time and space hierarchies in ordinary complexity theory also be seen as applications of the Lawvere fixed point theorem?

    I don't have the combination of time and space (= brainpower) to answer this. For example, I have no clue as to how people usually prove those separation theorems. However, if someone were able to prove these theorems using the Lawvere fixed point theorem, that would be a nice (small) step toward applying category theory to computational complexity.

    There's a famous divide between the computer scientists who like category theory and those who like computational complexity. Any step toward bridging this divide would be great.

    I found a super cute proof that applies it to prove the second incompleteness theorem rather succinctly. Not sure if it's something to post here...

    Cool! Is this really new? If so, it might be better to post it on the \(n\)-Category Café and/or the Azimuth blog. That way, more people would see it. If you post it here, I can repost it there, as a "guest post".

  • 44.

    FYI..

    1. Rössler, O.E.: Adequate locomotion strategies for an abstract organism in an abstract environment: a relational approach to brain function. In: Physics and Mathematics of the Nervous System (M. Conrad, W. Guttinger and M. DalCin, eds.), Lecture Notes in Biomathematics, vol. 4, pp. 342–369. Springer, New York (1974)

    And

Abstract of the chapter "The Brain Equation":

    The brain equation is a solution to the “second survival problem.” The latter is called “positional adaptation.” It unlike Darwin’s first (“metabolic adaptation”) is history-independent. As such it is mathematically well posed. The equation applies to all life forms in the cosmos that live in a structured environment in which survival depends on position in space in a short-term fashion. An eusocial version does not exist. The equation solves, in conjunction with the necessarily attached VR machine, the famous NP-complete “decision-type travelling salesman problem” for finite times. The resulting autonomous optimizer with cognition is susceptible to a “function change” in the sense of Bob Rosen which so far is known empirically only from the human brain.

    Cite this chapter as: Rössler O.E. (2014) The Brain Equation. In: Sanayei A., Zelinka I., Rössler O. (eds) ISCS 2013: Interdisciplinary Symposium on Complex Systems. Emergence, Complexity and Computation, vol 8. Springer, Berlin, Heidelberg

  • 45.

John wrote:

> I don't have the combination of time and space (= brainpower) to answer this. For example, I have no clue as to how people usually prove those separation theorems. However, if someone were able to prove these theorems using the Lawvere fixed point theorem, that would be a nice (small) step toward applying category theory to computational complexity.
>
> There's a famous divide between the computer scientists who like category theory and those who like computational complexity. Any step toward bridging this divide would be great.

Separation for the time and space complexity hierarchies is proved by a kind of diagonalization (see [Arora and Barak (2007), §3, pp. 65-74](http://theory.cs.princeton.edu/complexity/book.pdf)).
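(For concreteness, the separation theorems in question are the hierarchy theorems. In their standard form, assuming \(g\) is space- resp. time-constructible:)

$$ f(n) = o(g(n)) \;\Rightarrow\; \textsf{SPACE}(f(n)) \subsetneq \textsf{SPACE}(g(n)), $$
$$ f(n)\,\log f(n) = o(g(n)) \;\Rightarrow\; \textsf{DTIME}(f(n)) \subsetneq \textsf{DTIME}(g(n)). $$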

Similar diagonalization arguments also crop up in descriptive set theory for establishing higher separation theorems. For a nice proof, I like [Jech (2003)](https://link.springer.com/book/10.1007%2F3-540-44761-X#about), where in Chapter 11 he shows \(\mathbf{\Sigma}^0_\alpha \neq \mathbf{\Pi}^0_\alpha\) (Corollary 11.3).

In the case of computational complexity theory, I am not sure if diagonalization is taking place in a CCC (cartesian closed category) with a suitable epimorphism. However, I am much more confident that descriptive set theory is following the usual recipe.

> Cool! Is this really new? If so, it might be better to post it on the n-Category Café and/or the Azimuth blog. That way, more people would see it. If you post it here, I can repost it there, as a "guest post".

Nah, it's in Boolos' The Logic of Provability (1995). I can't find the citation, but I think he got it from Martin Löb's writings in the 1950s. Unlike the first incompleteness theorem, the quick proof of the second incompleteness theorem applies the fixed point with some indirection à la [Curry's paradox](https://plato.stanford.edu/entries/curry-paradox/).

    I'll try to post it in a For Fun thread.

  • 46.

    I can't imagine that Boolos used Lawvere's fixed point theorem to prove Gödel's second incompleteness theorem! He may have used a fixed point argument that you can instantly recognize as a special case of Lawvere's fixed point theorem. But making that explicit would be a great \(n\)-Café post... as long as you admit that it's not utterly brand new. People there like seeing things done with categories!

> In the case of computational complexity theory, I am not sure if diagonalization is taking place in a CCC with a suitable epimorphism.

    It would be very nice to get CCC's for different complexity classes of functions.

  • 47.

    Speaking of applying category theory to computational complexity, I've been thinking of NP-complete problems as terminal objects in the category of problems in NP with arrows \(a \to b\) when \(a\) can be reduced to \(b\). I'm sure that's a trivial observation, but it was surprisingly handy when I was helping a friend understand how NP-completeness is special and why NP-hardness is something we'd ever think to consider.
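As a toy sketch of the bookkeeping here (my own illustration, not a construction from the course): take decision problems as objects, let an arrow \(a \to b\) be a many-one reduction, and compose arrows as ordinary functions.

```python
def compose(g, f):
    """If f reduces problem a to problem b and g reduces b to c, then g . f reduces a to c."""
    return lambda x: g(f(x))

identity = lambda x: x   # every problem reduces to itself, giving identity arrows

# Example arrow: INDEPENDENT-SET -> VERTEX-COVER.
# A graph (V, E) has an independent set of size >= k
# iff it has a vertex cover of size <= |V| - k.
def indset_to_vertexcover(instance):
    vertices, edges, k = instance
    return (vertices, edges, len(vertices) - k)
```

Strictly speaking, with many different reductions possible between two problems this is a category rather than a poset, and NP-complete problems are terminal only in the weak sense that every problem in NP has at least one arrow into them.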

  • 48.

I've long wanted to apply category theory to the idea of optimization: preserving semantics while changing performance.

  • 49.

Christopher, that's a 2-category. You've got the types, the programs, and the action of the optimizing compiler that moves between programs. If you restrict yourself to fixed input and output types so as to go down a categorical dimension, you'll lose power for not much gain in ease.

  • 50.

    @Jonathan Castello:

    It isn't obvious whether Petri net reachability is in NP at all.

Well, every decision problem in NP [has an exponential-time algorithm](https://cs.stackexchange.com/questions/41555/does-every-problem-in-np-have-an-exponential-time-algorithm). And as John's book explains on p. 251, the complexity of reachability is at least doubly exponential. Doesn't this mean that reachability cannot be in NP? [Edit: I misread the statement in the book, which is about Presburger arithmetic rather than Petri net reachability. So this doesn't really work.]

Now that we're talking about computational complexity, it may be fun to note that computational complexity itself is also a resource theory! Here the resources are all the possible decision problems, and we write \(x \leq y\) if the decision problem \(y\) can be reduced to \(x\) in, say, polynomial time.

    It may sound a bit funny that I'm calling the decision problems 'resources', and it's better to think of the resources themselves as being the oracles for those decision problems. But technically I find it harder to define what an oracle is, so that's why I've written down the poset of decision problems.

    For given decision problems \(x\) and \(y\), we can take \(x + y\) to mean the decision problem that asks you to solve either an instance of \(x\) or an instance of \(y\). So we really have a resource theory as well! It's one that behaves very differently than the one for lemon pie. I won't go into the details now, since I don't know what will be discussed in the upcoming lectures.
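To see that this really does fit the framework of the lectures, one just checks the symmetric monoidal preorder axioms (it becomes a poset after identifying mutually reducible problems), with the ordering convention above where \(x \leq y\) means \(y\) reduces to \(x\):

$$ x \leq x, \qquad (x \leq y \text{ and } y \leq z) \Rightarrow x \leq z, \qquad (x \leq x' \text{ and } y \leq y') \Rightarrow x + y \leq x' + y'. $$

The first holds because the identity map is a reduction, the second because polynomial-time reductions compose, and the third because an instance of \(x' + y'\) can be handled by applying whichever of the two given reductions matches the side it lives on.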
