
I created a tiny boring stub:

## Comments

And I added a link to what we have to deal with soon: [[Population growth]].

There is some push to use the Logistic equation solution to model oil depletion. It's one of those ideas that draws from questionable premises ("do oil molecules multiply? is there a carrying capacity for oil?", etc.). It would be interesting to either counter or support this claim by deconstructing the equation for oil. (I have my own theories that I can contribute.)

The other interesting feature of the Logistic solution is that it has the same structure as the Fermi-Dirac distribution function, although the derivation is completely different. This leads one to think that the Logistic can have other origins, not related to solutions of the Logistic equation. I have my own derivation that I can contribute, drawing from dispersion mechanisms.

> I have my own derivation that I can contribute, drawing from dispersion mechanisms.

Please do! You can just create a separate section on that page and give your own derivation.

I added to the boring stub a discussion of how it can be chaotic. So far I've developed the discrete case. The continuous case displays chaos if the logistic growth model is given delayed feedback.

As for how the logistic function describes oil depletion, see [The Derivation of "Logistic-shaped" Discovery](http://www.theoildrum.com/node/4171) in The Oil Drum. The point is that the logistic function describes discovery: the rate of discovery of new oil fields grows with the rate of search and decreases with the fraction of fields left to discover.
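For anyone who wants to see the discrete-case chaos without firing up Sage, here is a minimal Python sketch (my own illustration, not the wiki page's code) of the logistic map at the fully chaotic parameter r = 4:

```python
# Discrete logistic map x_{n+1} = r * x_n * (1 - x_n).
# At r = 4 the map is chaotic: two orbits started a billionth apart
# decorrelate completely within a few dozen iterations.

def logistic_map(x0, r, n):
    """Return the orbit [x0, x1, ..., xn] of the logistic map."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

a = logistic_map(0.2, 4.0, 50)
b = logistic_map(0.2 + 1e-9, 4.0, 50)  # perturbed initial condition
print(max(abs(x - y) for x, y in zip(a, b)))  # O(1): the orbits decorrelate
```

This sensitive dependence on initial conditions is exactly what the continuous logistic equation lacks unless, as noted above, it is given delayed feedback.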

I added the redirect from logistic map, in order to resolve the link to [[logistic map]] on [[Nonlinear science]]. Cool with collaboration :-)

I will make a simple "Experiments in chaotic logistic map" in Sage, similar to [[Experiments in predator-prey in Sage]].

The Corona Virus pandemic appears to be a relevant example of logistic growth. It grows exponentially at first but then tends to level out, [as in China](https://publichealth.gsu.edu/coronavirus/).

![](https://publichealth.gsu.edu/files/2020/02/mech_fcs_2_24.png)

As mentioned in comment #2 above, I have a novel mathematical derivation of this logistic sigmoid which has absolutely nothing to do with the logistic equation, but instead uses stochastic principles of the competing processes of a dispersive exponential growth and a range of limiting populations to draw from -- this is on [p. 85 of our book Mathematical Geoenergy](https://www.google.com/books/edition/Mathematical_Geoenergy/xb17DwAAQBAJ?hl=en&gbpv=1&bsq=%22LOGISTIC%E2%80%90SHAPED%22).

Just because a sigmoid-shaped curve follows a shape such as 1/(1+A exp(-t)) doesn't mean that it comes solely from the logistic equation. As noted in #2, the logistic sigmoid also maps to the [Fermi-Dirac distribution](https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics#Fermi%E2%80%93Dirac_distribution), so the heuristic logistic-equation derivation may be just a quirky coincidence.

As an exercise for the mathematicians: can anyone else derive the logistic sigmoid function without relying on the logistic equation?

**EDIT**: This YouTube video, recently posted, goes through the conventional derivation: https://youtu.be/Kas0tIxDvrg

John has a Twitter thread on the virus here: https://twitter.com/WHUT/status/1238148317739089920
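For reference, the conventional derivation (presumably the one the linked video walks through) just separates variables in the logistic equation:

```latex
\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right)
\quad\Longrightarrow\quad
\int \frac{dP}{P\,(1 - P/K)} = \int r\,dt .
```

Partial fractions turn the left side into $\ln\frac{P}{K-P}$, so

```latex
\ln\frac{P}{K - P} = rt + C
\quad\Longrightarrow\quad
P(t) = \frac{K}{1 + A\,e^{-rt}},
\qquad A = \frac{K - P_0}{P_0},
```

which is exactly the 1/(1 + A exp(-t)) shape quoted above.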

The reason the original logistic equation suffers in its predictive power is that it assumes the exponential growth coefficient includes an asymptotic limiting factor incorporating an effective population "carrying capacity" or "herd immunity", but that this must be known *a priori*, before the process initiates. In other words, how would the initial dynamics of an epidemic's growth know anything about the ultimate carrying capacity? It can't, and because of this conflation between growth and decline in the logistic equation's formulation, it makes no sense to apply it over the entire time interval. An alternative formulation is needed that separates the growth dynamics from the carrying capacity, and this is the context in which a more general dispersive growth model is derived.

For the virus contagion, the "flattening of the growth curve" is important: as one can see in the China situation, growth initially exploded but came nowhere near the potential offered by China's total population. More info is needed to understand the limiting factor in the contagion.

John wrote up the [SIR model](https://www.azimuthproject.org/azimuth/show/Blog+-+stochastic+epidemic-type-1D-models) in the Azimuth library and his blog on network theory. This has Recovered and Resistant in the Petri net, but it would be interesting to add the SEIR model to the page, which adds Exposure to the SIR model, as in a Covid-19 SEIR paper I read the other day which I'll post a link to when I can find it.
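For anyone who wants to experiment with the SIR dynamics numerically before tackling SEIR, here is a minimal forward-Euler sketch in Python (the rate constants beta and gamma are illustrative values I chose, not taken from John's write-up):

```python
# Minimal SIR compartment model integrated with forward Euler:
#   S' = -beta*S*I/N,   I' = beta*S*I/N - gamma*I,   R' = gamma*I
# beta (contact rate) and gamma (recovery rate) are illustrative values.

def sir(s0, i0, r0, beta, gamma, dt, steps):
    """Return the trajectory [(S, I, R), ...] of the SIR model."""
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i / n * dt   # S -> I flow this step
        new_rec = gamma * i * dt          # I -> R flow this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history

hist = sir(s0=9990, i0=10, r0=0, beta=0.3, gamma=0.1, dt=0.1, steps=2000)
print(max(i for _, i, _ in hist))  # peak prevalence: well below the population
```

With R0 = beta/gamma = 3 the infection peaks at roughly a third of the population and then burns out, leaving some susceptibles never infected -- the "herd immunity" leveling-off that the classic logistic formulation bakes in from the start.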

Jim, thanks. Good start; the links are very useful. This one has some charted stochastic dynamics: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.78.061132

The time derivative of the logistic sigmoid is the bell-shaped curve that Obama tweeted. It's all about flattening this curve, as he links to:

> If you're wondering whether it's an overreaction to cancel large gatherings and public events (and I love basketball), here's a useful primer as to why these measures can slow the spread of the virus and save lives. We have to look out for each other. -- Barack Obama (@BarackObama), [March 12, 2020](https://twitter.com/BarackObama/status/1238238576141352966?ref_src=twsrc%5Etfw)

![](https://cdn.vox-cdn.com/thumbor/DW7Ef4vGboMTtawP1iO4DxAeudk=/0x0:1497x841/920x0/filters:focal(0x0:1497x841):format(webp):no_upscale()/cdn.vox-cdn.com/uploads/chorus_asset/file/19780273/flattening_the_curve_final.jpg)

This is how the stochastic derivation of the logistic sigmoid function is simulated via Monte Carlo. The averaged ensemble of the samples is multiplied by 10x to stand out.

![](https://imagizer.imageshack.com/img924/6510/SwnRIJ.png)

From Chapter 7 of the book (this was a proofed version).
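The book's actual construction is behind the link above, but the flavor of such a Monte Carlo can be sketched in a few lines of Python. To be clear, the uniform rate and ceiling distributions below are my own illustrative choices, not the book's derivation:

```python
# Illustrative Monte Carlo of dispersive growth: each sample grows
# exponentially at its own random rate until it saturates at its own
# random ceiling; the ensemble average traces out a sigmoid-shaped
# curve.  The uniform distributions are illustrative choices only.
import random

random.seed(42)

def ensemble_average(n_samples=2000, n_steps=200, dt=0.1):
    curves = []
    for _ in range(n_samples):
        rate = random.uniform(0.5, 1.5)      # dispersed growth rate
        ceiling = random.uniform(5.0, 50.0)  # dispersed carrying limit
        x, path = 0.01, []
        for _ in range(n_steps):
            x = min(x * (1.0 + rate * dt), ceiling)
            path.append(x)
        curves.append(path)
    return [sum(c[t] for c in curves) / n_samples for t in range(n_steps)]

avg = ensemble_average()
print(avg[0], avg[-1])  # tiny at the start, near the mean ceiling at the end
```

The key point the chart makes survives even in this toy version: no individual sample obeys the logistic equation, yet the ensemble average is sigmoid-shaped.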

Someone on Twitter pointed out that the growth may also be parabolic, referencing this news item:

> ["There was a decision today [Wednesday], and air carriers have already started implementing it, this is actually about cancellations of flights to and from Italy so far. Later, we'll see about those countries where this virus will develop in a **parabolic way**, so we will also make relevant decisions, recommend that air carriers halt their flights. Forty-nine out of 219 border crossing points will operate starting tomorrow [Thursday, March 12]. The rest will be closed for citizens and vehicles," he said at a briefing on Wednesday following a meeting of the Cabinet of Ministers, according to an UNIAN correspondent.](https://www.unian.info/society/10912457-ukraine-mulls-register-of-citizens-foreigners-arriving-from-countries-with-high-epidemic-hazard.html)

The stochastic approach to producing a logistic function curve is flexible in that the growth and volume parameters are independently adjustable. Here are some variations:

![](http://imageshack.com/a/img921/1193/TlXC78.gif)

It's not difficult to add parabolic growth to the mix. But I'm not really sure what parabolic means in the context of this chart, with the caption *"China Cases Leveled Off - Rest of World Parabolic"*:

![](https://pbs.twimg.com/media/ETAZcWxWAAEqy8S.jpg)

So "parabolic" could be a euphemism for accelerating or concave up. China definitely has clamped down, reaching some type of herd-immunity decelerating limit, whereas the rest of the world is still accelerating.

What's troubling about the classical logistic equation formulation, with its herd-immunity inflection point and asymptotic leveling off, is that it requires a significant proportion of the population to kick in -- since the negative feedback is a (1 - *N*/population) factor, where *N* is the number infected. British PM Boris Johnson and his chief scientific adviser got into hot water for [suggesting herd immunity on the scale of 40 million people (60% of the population) infected as the leveling-off mechanism which will ultimately keep it under control](https://www.ft.com/content/38a81588-6508-11ea-b3f3-fe4680ea68b5) (inflected = infected as a mnemonic).

So is herd immunity an oxymoron in this case? It appears that **avoiding** a herd of people is what's keeping the pandemic under control at the moment in places like China and South Korea. The classic logistic contagion model dictates that the disease stops spreading after enough people are infected, but what happens if re-infections are possible? [Reports from China and now Japan indicate that re-infection is occurring](https://thehill.com/changing-america/well-being/prevention-cures/484942-japan-confirms-first-case-of-person-reinfected) (I've also found multiple opinions such as *"there is no herd immunity if re-infection is possible"*, à la possible mutations and potential vaccine ineffectiveness).

Take a look at comment #11 above and you can see how the stochastic dispersive model takes into account sub-populations that individually reach an asymptote, and that these can be superposed to create an inflection point at much less than 60% of the total population. Isolation of these sub-populations is the key factor to model the growth in China. The bigger question is whether this can continue until a vaccine is developed. Keep our fingers crossed.

---

From Twitter: https://twitter.com/AdamJKucharski/status/1238821515526897664

> "I am deeply uncomfortable with the message that UK is actively pursuing ‘herd immunity’ as the main COVID-19 strategy. Our group’s scenario modelling has focused on reducing two main things: peak healthcare demand and deaths..."

---

Consider dominos as a propagating contagion. A break in the dominos limits the propagation, but occasionally one can slip through. That's the importance of strict quarantining.

https://youtu.be/066TQoTqkxQ

---

Subvolume and social distancing simulations in the WaPo: https://www.washingtonpost.com/graphics/2020/world/corona-simulator/?itid=hp_hp-top-table-main_virus-simulator520pm%3Ahomepage%2Fstory-ans
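The 60% figure traces back to the standard epidemic-threshold arithmetic (my gloss, not from the FT article): spread stalls once the susceptible fraction falls below $1/R_0$, so the required immune fraction is

```latex
p_{\text{herd}} \;=\; 1 - \frac{1}{R_0},
\qquad R_0 \approx 2.5 \;\Longrightarrow\; p_{\text{herd}} \approx 0.6 ,
```

which is the same $(1 - N/\text{population})$ feedback factor reaching the point where the effective reproduction number drops to one.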

If someone wants to know how to predict the cumulative of a logistic curve when the data is still around the inflection point but before the asymptote is reached, there's a technique one can borrow from the fossil-fuel resource world called *Hubbert linearization*. This is a screen grab from the book that shows how the HL is graphically constructed. The value of dU/U is plotted against U (as in eq. 7.8), and the x-intercept gives the asymptotic limiting value. Wikipedia explanation: https://en.wikipedia.org/wiki/Hubbert_linearization

![](http://imageshack.com/a/img922/6936/hRV9r7.gif)

This only works for the case of the perfect logistic; any other (non-exponential) growth law won't linearize in the same way, as is seen with a power-law growth model.
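To make the graphical construction concrete, here is a small Python sketch on synthetic perfect-logistic data (my own illustration, not the book's figure). For $U(t) = K/(1 + A e^{-rt})$ the ratio $(dU/dt)/U = r(1 - U/K)$ is linear in $U$, so a fitted line's x-intercept recovers the asymptote $K$:

```python
# Hubbert linearization on synthetic perfect-logistic cumulative data.
# For U(t) = K / (1 + A exp(-r t)):  (dU/dt)/U = r (1 - U/K),
# so plotting (dU/dt)/U against U gives a line with x-intercept K.
import math

K, A, r, dt = 1000.0, 100.0, 0.5, 0.1    # illustrative parameters
ts = [dt * k for k in range(1, 200)]
U = [K / (1.0 + A * math.exp(-r * t)) for t in ts]

# finite-difference estimate of dU/U, paired with U
xs = U[1:]
ys = [(U[i] - U[i - 1]) / (dt * U[i]) for i in range(1, len(U))]

# least-squares line y = a + b*x; the x-intercept -a/b estimates K
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx
print(-a / b)  # close to the true asymptote K = 1000
```

On real, noisy data the early points scatter badly, which is exactly why the technique only becomes trustworthy once the series is well past the inflection point.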

ICYM these, @JacobBiamonte started a [draft blog article](http://bit.ly/3d4nVt1) on creation-annihilation operators in the SIR model, and @NathanUrban commented on [discrete event simulations](http://bit.ly/3aY0IGR), which links to [EpSimS](http://bit.ly/2QiULMU) epidemiology simulation software.

Thanks, Jim. These fall into the category known as compartment models ([wikipedia](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology)), which are essentially stochastic models of data flow along a directed graph. SIR models give a transient notion of patients that recover, which is also an important consideration.

> ![](https://upload.wikimedia.org/wikipedia/commons/thumb/3/32/Sirsys-p9.png/330px-Sirsys-p9.png)
> ![](http://imageshack.com/a/img923/5330/2oBS7s.png)
> Blue = Susceptible, Green = Infected, and Red = Recovered, for the directed graph shown below the chart

From our book, a short appendix on compartment models is available for free (another medical application is pharmacokinetics, which is how one models drug delivery): https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/9781119434351.app5

Compartment modeling relies extensively on the concept of convolution, which can be calculated easily via a scripted software algorithm, described in another appendix: https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/9781119434351.app2

There's unfortunately no convolution operator built into the Excel spreadsheet, but there's a nifty array-syntax trick that allows you to calculate a convolution between two ranges very compactly. If anyone is interested, I can describe exactly how to formulate an Excel convolution.

BTW, compartment modeling is the basis for our comprehensive Oil Shock Model, which I think will also be an important model in the near future, as it will help to understand the sharp disruption in global oil production that will eventually impact the world; see our blog https://peakOilBarrel.com for up-to-date projections. The Oil Shock Model is described in Chapter 5 of the book (which is behind a paywall): https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch5

Here's the Zang et al. paper: [Prediction of New Coronavirus Infection Based on a Modified SEIR Model (2020)](https://www.medrxiv.org/content/10.1101/2020.03.03.20030858v1.full.pdf)

I'd like to see the algorithm for the "nifty array-syntax trick that allows you to calculate a convolution between two ranges very compactly." Tnx for the offer. :)


Jim, are you aware of the Excel syntax for range-based calculations? The dot product in Excel is

`=SUM(A1:A10 * B1:B10)`

for two ranges 1..10, but you need to press Shift-Ctrl-Enter to invoke it (same as `=SUMPRODUCT(A1:A10, B1:B10)` but without the Shift-Ctrl-Enter). A convolution is a running dot product on two ranges, with the range endpoints shifting along the timeline and one of the ranges reversed in direction. The rest follows from this, as it will depend on how your data is arranged.

edit: As convolution has gained greater awareness via NN and machine-learning applications, you can find other algorithms; see https://towardsdatascience.com/convolution-a-journey-through-a-familiar-operators-deeper-roots-2e3311f23379
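In case it helps anyone porting the trick out of Excel, here is the same "running dot product with one range reversed" written out in plain Python (a reference implementation of discrete convolution, my own sketch rather than an export of the spreadsheet formula):

```python
# Discrete convolution as a running dot product with one sequence
# reversed -- the same computation the Excel array formula performs
# cell by cell as the range endpoints slide along the timeline.

def convolve(f, g):
    """Full discrete convolution: (f*g)[n] = sum_k f[k] * g[n-k]."""
    out = []
    for n in range(len(f) + len(g) - 1):
        s = 0.0
        for k in range(len(f)):
            j = n - k              # index into g, i.e. g slid in reverse under f
            if 0 <= j < len(g):
                s += f[k] * g[j]
        out.append(s)
    return out

# e.g. an input profile convolved with an exponential-ish response kernel;
# convolving a unit impulse with the kernel returns the kernel, zero-padded
print(convolve([1.0, 0.0, 0.0, 0.0], [0.5, 0.25, 0.125]))
```

For compartment-model work the first argument would be the input flow (e.g. discoveries) and the second the compartment's response function.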

This is an SEIR compartment model:

> I'm sharing a simulation tool I put together for studying COVID19 dynamics and generating visualizations without having to do a bunch of coding yourself. Hoping it can help with your coronavirus-related research and teaching. https://t.co/GV5ErDkfE9 (1/7) -- Alison Lynn Hill (@alison_l_hill), [March 15, 2020](https://twitter.com/alison_l_hill/status/1239072817678823425?ref_src=twsrc%5Etfw)

![](https://pbs.twimg.com/media/ETIKchvWkAMFQBY.jpg)

From what I am trying to understand, a feature of the early growth acceleration is that it is independent of the total population size of the country, indicating that it really depends on the initial hot-spot cells and the typical human interaction within a sub-population. In other words, these are not *per capita* numbers. So while Switzerland & Sweden have populations of ~10 million, the UK has a population of 66 million, and so *I think* the eventual cumulative for the UK will rise above that of Switzerland & Sweden. In other words, the individual country curves will eventually diverge depending on how effectively each country can sustain social distancing + quarantining. Yet, given the huge population of China, 1400 million, it is at least heartening that they could limit this so far to 180,000 cases, which on an exclusively **per capita basis** would put them at the level of Switzerland or Sweden as of today.

![](https://pbs.twimg.com/media/ETRDW6IXYAA298t.jpg)

Remember that the goal is to *bend* the cumulative curve so it reaches an asymptote sooner and to *flatten* the daily curve so the cumulative doesn't rise as fast, which some [media people](https://twitter.com/julesbell27/status/1239694655094009856?s=20) are mixing up.

From the initial comment I contributed @ [#2](https://forum.azimuthproject.org/discussion/comment/2160/#Comment_2160), one application of logistic-like formulations is to model oil depletion from a finite population of reserves. Now, with the awareness brought on by the coronavirus crisis, one can also see how the "flattening of the curve" will impact future oil production.

> Goldman Sachs: current consumption decrease 8 mmb/d. Brent will average $20 a barrel during the second quarter. Trafigura: daily demand loss 10 mmb/d. https://t.co/6brRUTayuQ -- Art Berman (@aeberman12), [March 17, 2020](https://twitter.com/aeberman12/status/1239901824590757888?ref_src=twsrc%5Etfw)

This is at least a 10% reduction in production, which will flatten the current consumption curve and prolong the duration to the asymptotic limit in cumulative production (the URR shown in #15). Of course, with this kind of non-stationarity in the flow, the logistic function by itself no longer works, so we need to apply a stochastic model that provides the possibility for perturbations. This is the Oil Shock Model, derived as a directed graph analogous to a compartmental flow, described here: https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch5

---

I'm predicting that at some point John will comment on the success (so far) of Singapore in almost completely flattening the curve:

> Life in Singapore (243 cases, 0 deaths) has pretty much returned to normal. People are walking around, mostly without masks. Shops & restaurants are open. Big events & school activities such as tournaments were canceled, but schools remained open unless there was a case. -- Melissa Chen (@MsMelChen), [March 16, 2020](https://twitter.com/MsMelChen/status/1239604019460558848?ref_src=twsrc%5Etfw)

John responded here: https://twitter.com/johncarlosbaez/status/1240023988878725120

@WebHubTel Thanks for keeping the torch burning! I gotta read up on this stuff...

A follow-on from comment #14: Laherrere calculated a Hubbert Linearization on COVID-19 numbers from several countries here:

https://aspofrance.files.wordpress.com/2020/03/hlcovid19-16mars.pdf

As with using the technique for oil, a set of numbers from the curve can linearize, but if another growth acceleration occurs after it settles down, it's clear that it's not predicting an ultimate herd-immunity level. It's just temporarily keeping it at bay.

And if the curve follows the "Abroad" trajectory as shown below, then the HL will flatline to an infinite x-intercept, since the inflection point has yet to be reached and there's no sign of bending in the cumulative (on a semi-log plot).

![](https://pbs.twimg.com/media/ETXjrBlXkAE2J7n.jpg)
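To make the Hubbert Linearization technique concrete, here is a minimal sketch with synthetic data and assumed parameters (K, r, t0 are hypothetical, not from Laherrere's analysis). For a logistic cumulative Q(t), the ratio of production to cumulative, P/Q = r(1 - Q/K), is linear in Q, and the x-intercept of the fitted line estimates the ultimate K (the URR for oil, or a herd-immunity ceiling in the epidemic reading):

```python
import numpy as np

# Hypothetical logistic parameters: ultimate K, growth rate r, midpoint t0
K, r, t0 = 1000.0, 0.1, 50.0
t = np.arange(0, 100)
Q = K / (1.0 + np.exp(-r * (t - t0)))  # cumulative (e.g. cumulative production)
P = np.gradient(Q, t)                  # per-period production, numerical dQ/dt

# Hubbert Linearization: P/Q vs Q is a straight line r - (r/K)*Q
ratio = P / Q
slope, intercept = np.polyfit(Q, ratio, 1)
urr_estimate = -intercept / slope      # x-intercept recovers K
print(urr_estimate)
```

The caveat from the comment shows up here too: the fit only extrapolates to a finite intercept once the data has bent past the inflection point; during pure exponential growth the line is nearly flat and the intercept runs off to infinity.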

Here's Neil Ferguson et al., [Impact of non-pharmaceutical interventions (NPIs) to reduce COVID19 mortality and healthcare demand (2020)](http://bit.ly/2IYkIgO). This has a description, and the parameters, of the Imperial College task-force model which has changed the UK government's policy.

Is Ferguson using a convolution approach with exponentials in the compartmental model, based on this statement?

> "Individual infectiousness is assumed to be variable, described by a gamma distribution with mean 1 and shape parameter ⍺=0.25."

A multiple convolution of damped exponentials results in a gamma distribution, with each additional exponential narrowing the gamma. https://en.wikipedia.org/wiki/Gamma_distribution

![](https://upload.wikimedia.org/wikipedia/commons/thumb/e/e6/Gamma_distribution_pdf.svg/325px-Gamma_distribution_pdf.svg.png)

Maybe not enough info to figure out the model? Probably more in these papers:

> Ferguson NM, Cummings DAT, Fraser C, Cajka JC, Cooley PC, Burke DS. Strategies for mitigating an influenza pandemic. Nature 2006;442(7101):448–52.

> Halloran ME, Ferguson NM, Eubank S, et al. Modeling targeted layered containment of an influenza pandemic in the United States. Proc Natl Acad Sci U S A 2008;105(12):4639–44.

---

Added 3/20: The lead author Neil Ferguson apparently was infected https://twitter.com/leahmcelrath/status/1240848615255560193
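The convolution-of-exponentials claim is easy to check by Monte Carlo: summing k independent exponential stages with rate lam is exactly a k-fold convolution, and the result is Gamma(shape=k, scale=1/lam), narrowing (in relative spread) as k grows. A small sketch with assumed values of lam and k:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, k, n = 1.0, 4, 200_000

# Each row is one realization of k exponential stages; summing rows
# performs the k-fold convolution by sampling.
stages = rng.exponential(scale=1.0 / lam, size=(n, k))
total = stages.sum(axis=1)

print(total.mean())   # theory: k/lam = 4.0
print(total.var())    # theory: k/lam**2 = 4.0
```

The mean and variance match the Gamma(k, 1/lam) values, and the coefficient of variation is 1/sqrt(k), which is the "each exponential narrows the gamma" statement in quantitative form.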

I find it reassuring that a sample of 2 sims: https://aspofrance.files.wordpress.com/2020/03/hlcovid19-16mars.pdf and Alison_L_Hill's https://t.co/GV5ErDkfE9 both show peaks not later than 100 days.

This is another paper coming out of Imperial College but not from Ferguson:

Li et al., "Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV2)"

https://science.sciencemag.org/content/early/2020/03/13/science.abb3221

They use a similar gamma for the compartmental model as Ferguson, with supplementary material here:

https://science.sciencemag.org/cgi/content/full/science.abb3221/DC1

The compartmental models of oil production & contagion growth show intuitive parallels:

Discovery of an oil reservoir is analogous to the start of infection -- the leading indicator.

Extraction from that reservoir to depletion is analogous to death -- the lagging indicator.

Keeping oil in the ground is the equivalent of recovering from the infection.

Links: Halloran ME et al., [Modeling targeted layered containment of an influenza pandemic in the United States](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2290797/) and Ferguson NM et al., [Strategies for mitigating an influenza pandemic (2006)](https://www.researchgate.net/publication/7139153_Strategies_for_mitigating_an_Influenza_pandemic)

https://www.gov.uk/government/groups/scientific-advisory-group-for-emergencies-sage-coronavirus-covid-19-response

From that site: https://www.medrxiv.org/content/10.1101/2020.01.31.20019901v2.full.pdf+html

Referencing "Capturing the time-varying drivers of an epidemic using stochastic dynamical systems." https://academic.oup.com/biostatistics/article/14/3/541/259859

Looking as if the dispersive approach in [comment #11](https://forum.azimuthproject.org/discussion/comment/21938/#Comment_21938) is describing the current situation.

![](https://imagizer.imageshack.com/img924/6510/SwnRIJ.png)

As these dispersive waves that hit a (perhaps temporary) herd-immunity/quarantine ceiling aggregate, they create an envelope that continues to grow. So each of the subvolume curves shown in the figure above may represent a country, with a large country such as the USA, with its initially slow growth, adding a lagged response. The Maximum Entropy dispersion formulation of the logistic sigmoid is simply a mechanism to provide variability to the mix, which thus emulates a global spread of growth.

Here is a stochastic analogy to epidemic growth -- the rate at which popcorn pops follows a similar logistic sigmoid as an ensemble of contagions. This is the familiar slow initial popping of a few kernels leading up to a maximum popping rate, followed by a decline in popping as the population of kernels saturates.

Click on the PDF on the link below and go to section C.4:

https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.app3

https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1002/9781119434351.app3

The rationale for including this example in our book is that it describes the Hubbert logistic curve in a setting not directly related to resource extraction, yet provides a real-world analogy that can easily be set up as a controlled lab experiment.

It also doesn't hold as a perfect analogy to contagion, as the accelerated growth is controlled as an Arrhenius rate activated by temperature instead of as a multiplicative contagion, i.e. an individual kernel does not pop because its neighbor pops, but because of the temperature of the medium. In the figure below, the fraction unpopped is simply the complement of the fraction popped, to convert to the familiar S-curve.

![](https://imagizer.imageshack.com/img923/7695/cO2M7C.png)

The usual observation in virus epidemic growth is that the contagiousness *decreases* with increased temperature, instead of the increase that might be expected if it were a thermally activated complex obeying the laws of statistical mechanics. This behavior apparently is not completely understood, but there is some thought it might be related to the increased intensity of UV light during the summer months killing any airborne virus (https://www.webmd.com/cold-and-flu/news/20180212/can-uv-light-be-used-to-kill-airborne-flu-virus-#1), or to buildings having more air circulation and people congregating less indoors during the summer. Considering that humans have a thermally stabilized environment controlled by their regulated body temperature, it would be hard to make sense of a thermally activated mechanism once the virus enters the body. This is a recent paper on COVID-19 based on available geographic/climate correlation: [High Temperature and High Humidity Reduce the Transmission of COVID-19](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3551767)

PS: The popcorn experiment is interesting in that the controlled conditions were set up so that a *single* isolated popcorn kernel was monitored as it was heated, not an aggregation of kernels. The statistical distribution resembling a logistic sigmoid was only found by compiling the results of thousands of individual kernel measurements. So this is essentially characterizing the stochastic uncertainty of a single kernel.
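The single-kernel point above can be sketched in a few lines: each kernel pops independently at its own rate (no contagion between neighbors), and the sigmoid only emerges from the spread in kernel characteristics across the ensemble. The lognormal spread and all rate values here are assumptions for illustration, not the distribution measured in the cited experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Per-kernel popping rates drawn from an assumed lognormal spread of
# kernel characteristics (moisture, hull strength, ...).
rates = rng.lognormal(mean=0.0, sigma=0.5, size=n)
# Each kernel's popping time is then exponentially distributed at its own rate.
pop_times = rng.exponential(scale=1.0 / rates)

t_grid = np.linspace(0, 10, 11)
frac_popped = [(pop_times <= t).mean() for t in t_grid]
print(frac_popped)   # rises from ~0 toward 1 as the kernel population saturates
```

A single mean rate would give a pure exponential saturation curve; it is the variability across kernels that broadens the aggregate into the skewed S-curve of Figure C.8, which is exactly why a mean-value fit fails.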

@WebHubTel wrote:

> Just because a sigmoid-shaped curve follows a shape such as 1/(1+A exp(-t)) doesn't mean that it comes solely from the logistic equation. As noted in #2, consider that just as the logistic sigmoid also maps to the [Fermi-Dirac distribution](https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics#Fermi%E2%80%93Dirac_distribution), the heuristic logistic equation derivation also appears to be just a quirky coincidence.

Not sure what you mean by the heuristic logistic equation derivation, and by it being a coincidence.

Also not sure how this relates to your point, but I see that in the Petri net analysis for the SI compartmental model, the logistic equation is a result, not a premise, of analyzing the rate equation for a highly simplified stochastic process model that is motivated by empirical considerations.

> "Note that in the Petri net model for SI, the logistic equation is a result of the analysis of the rate equation, not a premise."

The classic logistic equation is not strictly a stochastic derivation; at best it assumes a mean value for the measure of interest, with no uncertainty in the outcome. In any realistic situation there would be a spread in rates and constraints, and that's what my derivation calculates.

In the comment before yours at #32, I described an experiment and model for a process that is purely stochastic: the popping of a popcorn kernel. As an exercise, see if you can describe the behavior of the amount popped as a function of time just by assuming a mean value of one kernel popping. I originally tried to fit a mean-value model and it didn't come close to Figure C.8 above. That's because the variability in popcorn kernel characteristics was large enough to skew the expected temporal behavior away from that of a single assumed mean value.

I just looked up the stochastic logistic equation, and a recent paper on it is here: https://www.sciencedirect.com/science/article/pii/S0893965913000050 This uses an Ito calculus formulation, which is a noise perturbation on the mean-value approach.

I can elaborate more about the "quirky coincidence" and "heuristic" aspects with another example that I will place in another comment.
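The "noise perturbation on the mean-value approach" can be sketched with an Euler-Maruyama integration of the stochastic logistic SDE, dN = r N (1 - N/K) dt + sigma N dW. All parameter values here are assumed for illustration; setting sigma = 0 recovers the deterministic logistic sigmoid:

```python
import numpy as np

rng = np.random.default_rng(2)
r, K, sigma = 0.5, 100.0, 0.1          # assumed growth rate, capacity, noise level
dt, steps = 0.01, 2000                 # integrate out to t = 20

N = np.full(500, 1.0)                  # an ensemble of 500 sample paths
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=N.size)   # Brownian increments
    N += r * N * (1 - N / K) * dt + sigma * N * dW   # Euler-Maruyama step
    N = np.clip(N, 0.0, None)          # keep populations non-negative

print(N.mean())   # the ensemble fluctuates near the carrying capacity K
```

Each individual path wanders around the deterministic sigmoid rather than tracing it exactly, which is the distinction being drawn here between the mean-value logistic and its stochastic counterpart.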

There is a purely stochastic model - a stochastic Petri net - for the SI process, which leads to the logistic equation as a result. See for example page 24 from:

* John C. Baez and Jacob Biamonte, [Quantum Techniques for Stochastic Mechanics](https://arxiv.org/abs/1209.3632), arXiv:1209.3632 [quant-ph]. Text includes treatment from the ground up of Petri nets, both stochastic and deterministic. Clear introduction to SI, SIR and SIRS models.

There, it's as simple as it gets: infection is modeled as a stochastic process in which one susceptible plus one infected person are transformed into two infected people. In the large number limit, using the regular "mass action" kinetics, the rate equation for this process turns out to be the logistic equation.

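In the large-number limit described above, the SI rate equation is dI/dt = beta I (N - I)/N, whose solution is the logistic sigmoid. A short sketch (with assumed beta, N, and I0) integrates the rate equation and checks it against the closed form:

```python
import numpy as np

beta, Npop, I0 = 0.3, 1000.0, 1.0      # assumed infection rate, population, seed
dt, steps = 0.01, 5000                 # integrate out to t = 50

# Forward-Euler integration of the mass-action SI rate equation.
I = I0
trajectory = []
for _ in range(steps):
    I += beta * I * (Npop - I) / Npop * dt
    trajectory.append(I)

# Closed-form logistic solution: I(t) = N / (1 + A * exp(-beta * t))
t = dt * np.arange(1, steps + 1)
A = (Npop - I0) / I0
closed_form = Npop / (1.0 + A * np.exp(-beta * t))

max_err = np.max(np.abs(np.array(trajectory) - closed_form))
print(trajectory[-1], max_err)   # saturates at Npop; small discretization error
```

So the logistic equation really is a *result* here: nothing logistic was assumed, only the one-susceptible-plus-one-infected transition under mass-action kinetics.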

David, yes that's the correct way to derive the logistic sigmoid function from the logistic equation, but it's not really considered a stochastic model. It's generally classified as a mean-value model or as a [classic *deterministic* SI model](http://idmod.org/docs/general/model-si.html).

I gave a ref to a recent paper on the *stochastic logistic equation* in comment #34, and the equation for this model is available from the Wolfram math site: https://reference.wolfram.com/language/example/StochasticLogisticGrowthModel.html

![](https://reference.wolfram.com/language/example/Files/StochasticLogisticGrowthModel.en/I_1.png)

And the following paper provides an even better description of the distinction between the two:

> [Liu & Wang (2011), "Asymptotic properties and simulations of a stochastic logistic model under regime switching"](https://www.sciencedirect.com/science/article/pii/S0895717711002937)

> "On the other hand, in the real world, population system is inevitably affected by the environmental noise which is an important component in an ecosystem (see e.g. [7], [8], [9], [10]). The deterministic systems assume that parameters in the models are all deterministic irrespective of environmental fluctuations. Hence, they have some limitations in mathematical modeling of ecological systems, besides they are quite difficult to fitting data perfectly and to predict the future dynamics of the system accurately [11]. May [1] pointed out the fact that due to environmental noise, the birth rate, carrying capacity, competition coefficient and other parameters involved in the system exhibit random fluctuation to a greater or lesser extent."

So the logistic sigmoid function is a result of solving the classic logistic equation. OTOH, solving the stochastic logistic equation will give something that may look like the logistic sigmoid function but obviously can't match it exactly.

What I did with the dispersive approach described in comment #11 is to apply a spread in the growth parameters and self-limiting factors that does generate precisely a logistic sigmoid function. This can be tested by drawing from a population with a specific distribution via a Monte Carlo simulation, and verifying that the statistical aggregate approaches the logistic sigmoid function, as shown in comment #31. This may be a better representation of an actual evolving epidemic since it considers the variation implicit over a set of sub-populations. I'm not suggesting that it is in any way equivalent to the stochastic compartmental simulations that e.g. Ferguson et al. are doing to model the COVID-19 epidemic, but it's the approach I am using for my resource depletion models and so I thought I would introduce them into the discussion.

This is a good discussion because I think it illuminates the distinction between the simple models used for logistic growth and the more elaborate considerations that must be occurring in the compartmental models of Ferguson et al. There are plenty of references to stochastic simulations in their articles, and so this gives an idea of what they might mean by that. If you have a different interpretation, that would also be good to know.
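The shape of a Monte Carlo test of this kind can be sketched as follows. This is *not* the exact Maximum Entropy dispersion derivation from the comment; it only illustrates the mechanism: each sub-population follows its own sigmoid with a dispersed growth rate and a lagged onset (both distributions assumed here), and the statistical aggregate is their average:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sub = 10_000

# Dispersed growth rates (exponential, a maximum-entropy choice for a
# positive rate with known mean) and lagged onsets per sub-population.
rates = rng.exponential(scale=0.5, size=n_sub)
onsets = rng.uniform(0.0, 20.0, size=n_sub)

t = np.linspace(0, 60, 61)
envelope = np.zeros_like(t)
for k, t0 in zip(rates, onsets):
    # Each sub-population: a logistic sigmoid starting near zero at its onset.
    envelope += 1.0 / (1.0 + np.exp(-(k * (t - t0) - 5.0)))
envelope /= n_sub

print(envelope[0], envelope[-1])   # aggregate rises from ~0 toward saturation
```

The aggregate envelope is smooth and sigmoid-like even though the individual sub-curves are sharp and staggered, which is the qualitative point about countries with lagged responses contributing to a global growth curve.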

@WebHubTel wrote:

> Yes that's the correct way to derive the logistic sigmoid function from the logistic equation, but it's not really considered a stochastic model. It's generally classified as a mean-value model or as a [classic *deterministic* SI model](http://idmod.org/docs/general/model-si.html).

True, the page I referred you to talks about the deterministic interpretation of the Petri net SI model. Yet this occurs in the broader context of the stochastic interpretation of Petri nets - which is very much a discrete popcorn-like process - and that is what I _meant_ to be talking about. When simulated you will get variation, which will be especially pronounced when run on smaller populations like neighborhoods.

For reference, in a separate thread intended for the general Azimuth Forum community, I will summarize some of the key ideas of stochastic Petri nets.

The [Lotka-Volterra equation](https://forum.azimuthproject.org/discussion/967/lotka-volterra-equation) is closely related to the logistic equation via a growth term, with a feedback term set so that a predatory species can further accelerate the prey toward a limiting value. But since the disappearance of prey will also cause the disappearance of the predator, a cyclic pattern can develop based on this coupled feedback. It's possible that this is a real mechanism in actual ecological predator/prey relationships, but it's difficult to verify: any stochastic perturbation will likely knock the cycle off its current period.

One of the famous predator/prey behaviors in the Arctic latitudes is the Lemming/Arctic Fox cycle (or Snowy Owl). Over a long time interval this cycle has been estimated to have a period of 3.8 years. A wildlife ecologist working on the topic for **40+ years** finally seems to have pattern-matched to a plausible model -- published last year:

> Archibald, H. L. [Relating the 4-year lemming (Lemmus spp. and Dicrostonyx spp.) population cycle to a 3.8-year lunar cycle and ENSO](https://tspace.library.utoronto.ca/bitstream/1807/97104/1/cjz-2018-0266.pdf). Can. J. Zool. 97, 1054–1063 (2019).

What he noted is that the lemming cycle happens to match a spring-tide cycle, implying more of a climate-related mechanism controlling the population. After coming across this paper I noted that the cycle appeared suspiciously close to the tidal forcing that I am using in the [ENSO model](https://forum.azimuthproject.org/discussion/comment/21894/#Comment_21894). The vertical dotted lines indicate the alignment (the inset is the CC between ENSO and PDO).

![](https://imagizer.imageshack.com/img924/9279/yXZ6UM.gif)

Why it follows the more predictable tidal forcing rather than the more erratic ENSO response, I don't have an answer. But this does look more plausible than a predator-prey cycle. The environment is so harsh in the Arctic that climate factors likely control the health of the lemming population, and the predators then follow that cycle as well, since that is their food supply. This may be a classic common-mode mechanism instead of a mutual resonance set by the eigenvalue or chaotic attractor of a differential equation.
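The cyclic feedback described above can be sketched with the classic Lotka-Volterra system dx/dt = a x - b x y, dy/dt = d x y - g y (prey x, predator y; all coefficients here are assumed, not fitted to lemming data). Exact trajectories conserve H = d x - g ln x + b y - a ln y, so checking that H barely drifts confirms the orbit is a closed cycle rather than a spiral:

```python
import math

a, b, g, d = 1.0, 0.1, 1.5, 0.075   # prey growth, predation, predator death, conversion
x, y = 10.0, 5.0                    # assumed initial populations
dt = 0.00025

H0 = d * x - g * math.log(x) + b * y - a * math.log(y)
for _ in range(80_000):             # Euler integration out to t = 20
    dx = (a * x - b * x * y) * dt
    dy = (d * x * y - g * y) * dt
    x, y = x + dx, y + dy

H = d * x - g * math.log(x) + b * y - a * math.log(y)
print(H0, H)   # conserved quantity drifts only slightly under Euler
```

The stochastic-perturbation point follows directly: since every closed orbit (every value of H) is a valid cycle, a random kick moves the system to a neighboring orbit with a different amplitude and period, with nothing pulling it back. A common-mode external forcing like the tidal cycle has no such fragility.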

David said:

I generally agree with your argument as that is basically what is involved when developing a state diagram that represents probability flow -- for example when Markov modeling a system for stochastic reliability analysis, c.f. top citations https://scholar.google.com/scholar?q=Markov+modeling+for+reliability+analysis

The mean value flow in this case is describing the probability of a fault-tolerant system existing in a particular state, with the Petri net providing a more concise representation than the expanded state diagram, due to the extra logic in the bar symbols. See this paper for how to create rewrite rules for transforming between Petri net and pure Markov state diagram representations:

So the point is that there is a distinction between how one understands the system under study versus the preferred vocabulary used by the practitioners. I certainly wouldn't have a problem calling these stochastic models, but that isn't the standard practice in epidemiology where it defines a larger spread in the model's parameterization via noise or variability.

I am not sure where this started but since you mentioned the mass-action law of chemistry, consider that a typical reaction is so well mixed and uniform that the fluctuations are not considered that important compared to the mean-value of the the reagent constituents. Consider also that in solid-state electronics where the law of mass action for electrons and holes is \(n p = n_i ^2\). For modeling semiconductors, the stochastic variability is not typically required unless one is interested in modeling shot noise or other carrier fluctuations. Another situation where an extra level of stochastic variability would be applied, is in the analysis of amorphous materials where for example the photovoltaic characteristics show fat tails indicating that the material has a significant spread in electrical properties. I have written about this here: https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch18

David said:

> "True the page I referred you to talks about the deterministic interpretation of the Petri net SI model. Yet this occurs in the broader context of the stochastic interpretation of Petri nets - which is very much a discrete popcorn-like process - and that is what I meant to be talking about."

I generally agree with your argument, as that is basically what is involved when developing a state diagram that represents probability flow -- for example, when Markov modeling a system for stochastic reliability analysis; cf. the top citations at https://scholar.google.com/scholar?q=Markov+modeling+for+reliability+analysis

![](https://imagizer.imageshack.com/img921/7210/swnhu1.png)

The mean value flow in this case describes the probability of a fault-tolerant system existing in a particular state, with the Petri net providing a more concise representation than the expanded state diagram, due to the extra logic in the bar symbols. See this paper for how to create rewrite rules for transforming between Petri net and pure Markov state diagram representations:

> Pukite, P. (1995). Intelligent reliability analysis tool for fault-tolerant system design. In 10th Computing in Aerospace Conference
> https://www.researchgate.net/publication/269227210_Intelligent_reliability_analysis_tool_for_fault-tolerant_system_design/figures

So the point is that there is a distinction between how one understands the system under study versus the preferred vocabulary used by the practitioners. I certainly wouldn't have a problem calling these stochastic models, but that isn't the standard practice in epidemiology, where the term instead denotes a larger spread in the model's parameterization via noise or variability.

I am not sure where this started, but since you mentioned the mass-action law of chemistry, consider that a typical reaction is so well mixed and uniform that the fluctuations are not considered that important compared to the mean value of the reagent constituents. Consider also solid-state electronics, where the law of mass action for electrons and holes is \\(n p = n_i^2\\). For modeling semiconductors, the stochastic variability is not typically required unless one is interested in modeling shot noise or other carrier fluctuations. Another situation where an extra level of stochastic variability would be applied is in the analysis of amorphous materials, where for example the photovoltaic characteristics show fat tails indicating that the material has a significant spread in electrical properties. I have written about this here: https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch18

This discussion is in the weeds but important when placed in context. For a reliability analyst, the stochastic part is *everything*, but for a chemist or semiconductor designer, it's a secondary aspect of their models. For epidemiology, both the mean-value determinism and the stochastic fluctuations are apparently important.
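As a toy illustration of the mean-value probability flow described above, here is a hypothetical three-state Markov reliability model. The states and transition probabilities are invented for illustration and are not taken from the cited paper:

```python
import numpy as np

# Hypothetical 3-state reliability model: OK -> degraded -> failed,
# with a repair path degraded -> OK. Rows are "from" states; each row sums to 1.
P = np.array([
    [0.98, 0.02, 0.00],   # from OK
    [0.10, 0.85, 0.05],   # from degraded
    [0.00, 0.00, 1.00],   # failed is absorbing
])

state = np.array([1.0, 0.0, 0.0])  # start fully operational
for _ in range(100):
    state = state @ P              # deterministic mean-value probability flow

print(state)  # probability mass gradually accumulates in the failed state
```

This is exactly the "mean value flow" sense of the model: the vector evolves deterministically even though each underlying trajectory is a discrete popcorn-like stochastic process.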

Thanks for putting all this into context!


Here's a Medium.com primer on how to do a Hubbert Linearization of the Logistic as first noted in comment #14

https://medium.com/@puk_54065/how-to-linearize-the-logistic-d8143bfe33be


The power of "stochastic thinking"

Why was the wearing of face masks by doctors in the USA not encouraged? Every doctor interviewed by the media said they were not that effective. How can they not be effective? Even if they reduced transmission of droplets by 30%, that would be effective in reducing the overall R0, and therefore the growth in the [logistic function](https://forum.azimuthproject.org/discussion/377/logistic-equation#latest). So why were they not recommended? Could it be the doctors realized people would start hoarding surgical masks in an emergency, thus reducing the amount available at hospitals?

Why not tell people how to make their own mask? Even if it was only 20% effective instead of 30%, it would be good. Maybe this isn't done because Western people don't understand stochastic thinking, but perhaps the people of the Far East do?
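To make the 20-30% argument concrete, here is a minimal back-of-envelope sketch. The baseline R0 value and the linear scaling assumption are my own simplifications, not from any cited model:

```python
# Back-of-envelope: how a partially effective mask scales the
# reproduction number (linear scaling is a simplifying assumption).
def effective_R(R0, efficacy, coverage):
    """Reduce R0 by the fraction of transmissions blocked.

    efficacy: fraction of droplet transmission a mask blocks (e.g. 0.3)
    coverage: fraction of the population wearing masks
    """
    return R0 * (1 - efficacy * coverage)

# With an illustrative R0 = 2.5, even a 30%-effective mask worn by
# everyone cuts R substantially:
print(effective_R(2.5, 0.30, 1.0))  # 1.75
print(effective_R(2.5, 0.20, 1.0))  # 2.0 -- even 20% efficacy helps
```

Since the logistic growth rate scales with R, any reduction slows the early exponential acceleration, which is the point of the argument above.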

From 2009: https://gigazine.net/gsc_news/en/20090521_papermask/

## How to make an anti-infection mask easily from paper towel

Swine flu is spreading all over the world and the number of patients is growing every day. Maybe it's not familiar to those outside Japan, but in Japan, when you have a cold, it's very common to wear a mask covering the mouth and nose. The flu mainly passes on by droplet transmission caused by coughs and sneezes. So the mask may not protect you from the virus, but it will help prevent further infection from you to others.

Here are the step-by-step instructions for making a transmission-preventing mask from a paper towel, written by a doctor at Katakai Hospital in Niigata prefecture: (JP) http://comet.endless.ne.jp/users/katacli/mask/papermask.html

1. What you need is a sheet of paper towel, 2 rubber bands, & a stapler.
2. Fold a paper towel in two. You may want to pile up 2 or 3 towels in one.
3. Fold to the center.
4. Fold the shown part in half.
5. Flip it back and fold to center.
6. Fold at center and make an accordion-like thing.
7. Place rubber bands on both sides and fix them with the stapler.
8. Complete. Change the position of the rubber bands to adjust the size.

(The original article shows a photo for each step.)

We should warn you that it will not be a perfect defense against viruses. But if you need a mask in a hurry, this tip will do.

What does it imply if this Oxford-based study of the UK already indicates that 1/2 the population is already infected with COVID-19?

> https://amp.ft.com/content/5ff6469a-6dd8-11ea-89df-41bea055720b
> https://www.dropbox.com/s/oxmu2rwsnhi9j9c/Draft-COVID-19-Model%20%2813%29.pdf?dl=0

> The new coronavirus may already have infected far more people in the UK than scientists had previously estimated — perhaps as much as half the population — according to modelling by researchers at the University of Oxford.

> If the results are confirmed, they imply that fewer than one in a thousand of those infected with Covid-19 become ill enough to need hospital treatment, said Sunetra Gupta, professor of theoretical epidemiology, who led the study. The vast majority develop very mild symptoms or none at all.

If 1/2 are infected, then herd immunity is already reached, and they only have to wait for the progression of the illness to play itself out. The document in the Dropbox link shows concise details of the parameters of the stochastic compartmental model they are using:

![](https://imagizer.imageshack.com/img924/413/ZdzB9O.png)

---

The study mentioned above has issues. It may take a while to shake out, so look at the following Twitter thread and follow-ups to see what the eventual interpretation settles down to:

https://twitter.com/WHUT/status/1242517559762735104

In the UK, it's a battle between the Imperial College model and the Oxford University model. Essentially, the Imperial model says that we are at the initial acceleration of the logistic function with a high potential lethality, while the Oxford model says that we are in the last few doublings of the logistic function as it nears the 50% inflection point, and we are only seeing a critical-care increase because the overall lethality is very low. So criticality is \\( N \times p \\) -- Imperial says *N* is still low while *p* may be large, while Oxford says it is just the reverse.

Following up comment #2 above, even though the idea of compartmental modeling is well-known in epidemiology, here's evidence of how little it is applied to fossil fuel depletion.

Google Scholar citations for "compartmental models" & "oil depletion":

https://scholar.google.com/scholar?q=%22compartmental+model%22+%22oil+depletion%22

https://scholar.google.com/scholar?q=%22compartmental+model%22+%22peak+oil%22

https://scholar.google.com/scholar?q=%22compartmental+model%22+%22oil+reserves%22

The only hit shown refers to our work, and by expanding the keyword search, Google returns this:

> Herrero, C., García-Olivares, A. and Pelegrí, J.L., 2014. Impact of anthropogenic CO2 on the next glacial cycle. Climatic Change, 122(1-2), pp.283-298.
> https://www.researchgate.net/profile/Carmen_Herrero5/publication/259148288_Impact_of_anthropogenic_CO2_on_the_next_glacial_cycle/links/02e7e52a5d76fc196e000000.pdf

The way that Herrero et al. apply a compartmental model is to take the compartments to be oil, atmospheric CO2 from combustion of the oil, and sequestration of that CO2 in the ocean. We also model this compartmental flow in [Chap 9](https://agupubs.onlinelibrary.wiley.com/doi/10.1002/9781119434351.ch9).

So it's an interesting intersection of a model used both for Green Math (epidemiology) and for earth/climate sciences.
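As a hedged sketch of that oil → atmospheric CO2 → ocean chain (the rate constants are invented for illustration, not taken from Herrero et al. or from our book), the compartmental flow can be written as a pair of first-order transfers and integrated with a simple Euler step:

```python
# Toy 3-compartment flow: oil reserves -> atmospheric CO2 -> ocean sink.
# k_extract and k_ocean are made-up illustrative rate constants.
def simulate(oil=1.0, atm=0.0, ocean=0.0,
             k_extract=0.03, k_ocean=0.01, steps=500, dt=1.0):
    for _ in range(steps):
        d_oil = -k_extract * oil                 # extraction + combustion
        d_atm = k_extract * oil - k_ocean * atm  # emitted minus absorbed
        d_ocean = k_ocean * atm                  # sequestration
        oil += dt * d_oil
        atm += dt * d_atm
        ocean += dt * d_ocean
    return oil, atm, ocean

oil, atm, ocean = simulate()
# The flows only move mass between compartments, so the total is conserved:
print(round(oil + atm + ocean, 6))
```

The same ledger-style bookkeeping is what an SIR model does with susceptible/infected/recovered, which is the cross-disciplinary point.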

This is data from https://data.humdata.org/dataset/novel-coronavirus-2019-ncov-cases — scroll down the page to the data link and connect to or download the confirmed-cases data set.

![](https://imagizer.imageshack.com/img922/9254/SmY8VL.png)

This is a Hubbert Linearization for Italy, as described in [comment #42](https://forum.azimuthproject.org/discussion/comment/21972/#Comment_21972):

![](https://imagizer.imageshack.com/img923/9329/LJPXLH.png)

This is a purely mechanical fit: plotting n/N vs N and then selecting Add Trendline in Excel with a sufficient forecast interval to intercept the x-axis.
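The same mechanical fit can be reproduced outside of Excel. Here is a minimal sketch using synthetic logistic data rather than the actual Italy counts (the K and r values are made up), recovering the carrying capacity from the x-intercept of the n/N vs N trend line:

```python
import numpy as np

# Synthetic cumulative logistic series (K and r chosen for illustration).
K, r = 130_000.0, 0.25
t = np.arange(60)
N = K / (1 + np.exp(-r * (t - 30)))

# Hubbert linearization: for a logistic, (dN/dt)/N = r*(1 - N/K),
# a straight line in N whose x-intercept is the carrying capacity K.
dN = np.diff(N)          # daily increments n
x = N[1:]                # cumulative N
y = dN / x               # n/N
slope, intercept = np.polyfit(x, y, 1)
K_est = -intercept / slope   # x-axis intercept of the trend line

print(round(K_est))
```

With real, noisy data the early points scatter badly (small N in the denominator), so in practice the fit is weighted toward the later points, just as the Excel trendline effectively is.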

Example of stochastic thinking applied to testing to better estimate infection levels: [tests are pooled](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3500568/) in larger groups so that aggregate positives can be determined at a higher throughput. If a pooled test comes back positive, then the group members are tested individually to identify the positives. This divide & conquer strategy is only efficient for populations at low infection levels, though.

What's disturbing about the Italy example above is that if the number of confirmed cases limits to 130,000, then the status of the rest of Italy's population of 60 million is really unknown. So aggregate testing can more quickly estimate how many more people are infected without symptoms or have antibodies (via a different test).

This morning someone tweeted:

> "The natural & life sciences (1) may use the same words as humanities & social sciences (2) but they often *use them differently.* We need to get comfortable w the uncertainty of 21stC-life. We need to use language more precisely & show clearly where we use it imprecisely."

https://twitter.com/_ppmv/status/1243144909735055361

I responded that he should look into category theory. Computational structures can be categorized according to computational flow, and thus structures & algorithms used in different disciplines but with idiosyncratic names can be pattern-matched according to their structure and data flow and then reapplied elsewhere. This can potentially benefit from the work that went into the original model, so the new algorithms don't have to be reinvented.

So it's not simply about defining our terminology, as one commenter recommended, but about defining the model unambiguously.

I bring this up because I see this happening in this thread with the cross-disciplinary use of compartmental models in resource depletion and in contagion modeling, with ideas shared both ways.

Example 1: No one (except for moi) seems to mention compartmental models in resource depletion, but they are well-known in epidemiology.

Example 2: In resource depletion, the idea of linearizing the logistic function (via Hubbert linearization) is well known, but I have no idea whether it even exists in epidemiology.

The approach is to apply [category theory to describe the compartment model](https://forum.azimuthproject.org/discussion/2499/tutorial-on-stochastic-petri-nets-with-sir-disease-model-as-example#latest) in each case and then pattern match. The equivalence of the structure at the category theory level will root out the commonality, independent of the naming of the model.

This is the pattern recognition application of category theory that seems to be eluding everyone, IMO. But it is straightforward when we place it into this context, where *"roughly speaking, category theory is graph theory with additional structure to represent composition"*.

There is perhaps a way to make the Hubbert Linearization of the logistic more general. This is an excerpt from our Mathematical GeoEnergy book:

![](https://imagizer.imageshack.com/img922/3681/nuO0WV.png)

This formulation has at least some resemblance to the path integral transforms that many people on this forum are likely familiar with. So perhaps we can leverage some other ideas on this front.

The fact that the time-dependent aspect is missing from Hubbert Linearization is perhaps a result of the distinction between autonomous (which describes the logistic) and non-autonomous differential equations. I don't think this topic has been covered anywhere on this forum, so it may be worth a new category.
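For reference, the basic linearization follows in one line from the logistic equation itself (this is the standard manipulation restated, not a quote from the book): dividing the ODE through by \\(N\\) gives

\\[ \frac{dN}{dt} = r N \left(1 - \frac{N}{K}\right) \quad \Longrightarrow \quad \frac{1}{N}\frac{dN}{dt} = r - \frac{r}{K} N \\]

so a plot of \\(\dot{N}/N\\) against \\(N\\) is a straight line with intercept \\(r\\), slope \\(-r/K\\), and x-intercept \\(K\\). Note that \\(t\\) has dropped out entirely, which is exactly the autonomous-equation point above: the right-hand side depends only on \\(N\\), so the linearization would fail for a non-autonomous variant where \\(r\\) or \\(K\\) depends explicitly on time.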

Some are plotting the progression this way (again removing time, as with Hubbert Linearization):

![](https://i.imgur.com/XV0KIT6.png)

This is from https://aatishb.com/covidtrends/

https://youtu.be/ZWYY1LiuHUk

The intuition behind understanding this chart is that when the exponential rate of increase is strongest, the incremental increase (i.e. the daily to weekly count) may be of similar scale to the cumulative up to that point. That's why the curve linearizes in the chart, at least until the logistic inflection point is reached, as indicated by the divergence of China and South Korea.

In contrast to this projection, the Hubbert Linearization accounts for the logistic divergence and generates a linear fit over the entire range.
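The covidtrends-style transform is easy to verify on synthetic data (again a made-up logistic series, not the site's actual counts): during pure exponential growth, weekly new cases are a fixed multiple of the cumulative count, which is why the log-log trajectory is a straight line of slope 1 until the inflection:

```python
import numpy as np

# Synthetic cumulative logistic series (K and r chosen for illustration).
K, r = 100_000.0, 0.3
t = np.arange(80)
N = K / (1 + np.exp(-r * (t - 40)))

weekly_new = N[7:] - N[:-7]   # trailing 7-day increment
cumulative = N[7:]

# During exponential growth N ~ e^{rt}, so
# weekly_new / cumulative ~ 1 - e^{-7r}: a constant, hence a slope-1
# line on log-log axes. Past the inflection the ratio collapses.
ratio = weekly_new / cumulative
print(round(ratio[5], 3), round(ratio[-1], 3))  # flat early, collapsing late
```

The collapse of the ratio at the end of the series is the "falling off the line" seen for China and South Korea in the chart above.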