
Blog - The stochastic resonance program (part 1)

New blog article.

I'm still actively editing it to get the form right, but the intent and contents are clearly spelled out.

I'll let you know when it's cleaned up -- not too long from now.

Comments

  • 1.
    edited April 2014

    Hi Dave, please delete my name from the blog post; the stochastic model was written by Alan and Glyn. I just persuaded Glyn to learn JavaScript and extend Alan's code. I'm also incorrectly credited somewhere else on the blog, which I did ask John to remove.

    One of John's students also extended the code, adding some more sliders, but when I looked at it it didn't work and had no explanation attached.

    Also, I'm afraid I haven't had the mileage to re-examine the various models in the Azimuth googlecode repo, but I don't want that work to go to waste; at least the ideas, specs and UIs might be useful.

  • 2.
    edited April 2014

    Hi Jim, I agree that we shouldn't let that work go to waste, let's keep a note of that, thanks. One of these days I'll take a survey, and post a summary of what I find there.

  • 3.
    edited April 2014

    I'll be glad to remove Jim's name from anything he tells me to! I'm sorry, I must have missed the previous request.

    One of John’s students also extended the code, adding some more sliders, but when I looked at it it didn’t work and had no explanation attached.

    Huh? An ex-student of mine created a nice extension that works fine for me:

    * Michael Knap, [A stochastic energy balance model](http://math.ucr.edu/home/baez/coalbedo/stochastic/stochastic.html), 3 December 2012.

    It includes a link to an explanation of the stochastic differential equation being run. I believe the core of the code is just the same as for the simpler model by Alan and Glyn.

    By the way, all our online models should be reachable from here:

    * [Azimuth Code Project: online models](http://www.azimuthproject.org/azimuth/show/Azimuth+Code+Project+#Online_models).

    If there are some that aren't, fix this page!
  • 4.

    David - this promises to be a nice blog article! Getting into the technical nitty-gritty might be a good way to attract some coders. Make sure to describe some things that are not too hard to do, that would increase the functionality of models we have.

  • 5.

    Hi Jim, I got the attribution information from the code page:

    * [Azimuth Code Project: online models](http://www.azimuthproject.org/azimuth/show/Azimuth+Code+Project+#Online_models).

    Can you make whatever adjustments you think are appropriate to that page?
  • 6.
    edited April 2014

    * [[Blog - under the hood: how the stochastic resonance code works]]

    The article is much more fleshed out now. All of the intended content and message is now present here.

    I still have some more work to do, to get the form right, and fill in the blanks. We're going away for the long weekend, but I'm hoping to get it form-ready by the end of next week.

    But in the meantime, any content review comments would be very helpful -- especially on the parts where I discuss Milankovitch cycles and stochastic resonance.

    I put a bunch of things in boldface to indicate that these need to be filled in with links.

    Thanks guys.

    p.s. John, if you want to help out by editing some of the references, in particular the section where I refer to your dialog with Arrow on the Azimuth blog, I wouldn't object. But I'm happy to do this myself as well.
  • 7.

    Hi! I'll check this out and make comments and edits soon. I'm almost done with Steve Easterbrook's series, so it would be nice if this could appear soon.

  • 8.
    edited April 2014

    Thanks to the Baez library service I skimmed Didier's paper. I think it's a great account of the history. It mentioned that the orbital forcing was quasi-linear, but I didn't notice any formula or further details for this in that paper, so I had a look at the references.

    The nearest I could find was a paper by Imbrie which was discussed by Marcel Bokstedt and others on the Milankovitch cycle discussion page. This has a lot of useful parameter values for any model of orbital forcing.

    I made a few pages of [notes](https://www.dropbox.com/s/f4hea6cep969p8u/Paillard.md) on model specification from this paper and noted a few naive questions.

    Imbrie was using a deterministic model to see which parameters do not need a stochastic explanation. He described this as groundwork for a stochastic model. Isaac Held had some comments about this.

    I hope these notes (despite the wholly inaccurate title) might be of some possible help in the discussion.

    I'll be adding notes on Paillard later when I've re-read it.
  • 9.
    edited April 2014

    I am 99% done with this article. All that is left are wording improvements, to shorten it as much as possible and make it flow as smoothly as possible, and to finish working out the HTML formatting.

    I was writing a section at the end on some next steps for the Azimuth Code Project, but that is a fresh topic, so took that out of this article, and moved it into a draft of a followup article.

    Still thinking about the title. Originally I had "Under the hood: how the stochastic resonance code works." That's pretty good, but it's a bit gear-like, and doesn't convey any broader context about the Azimuth Code Project. Then I changed it to "At the Azimuth Code Project: The stochastic resonance model." I like the idea of this, but it sounds a little pompous.

    John, I'm going to keep chipping away at the surface editing -- and feel free to join in with this whenever you like.

    It would be nice to get some feedback from the authors of the program, Glyn and Alan. I suggest we give ourselves one week to (1) make the formatting and wording on this look nice, and (2) give them a chance to join in the discussion. Also I'll use this time to work on the Azimuth Code Project page, to get it ready for any new visitors.

    In the meantime, I'll be working on the followup article.

  • 10.
    edited April 2014

    A fly in the ointment: according to John, SR is not part of the state-of-the-art explanations for the glacial cycles. This undermines the motivational organization of the blog article, unless I can find the right way to put it.

  • 11.
    edited April 2014

    Sad to say I'm thinking it's best to ditch the section on Milankovitch cycles, if the SR Milankovitch theory is really losing steam.

    Instead I could mention the wide variety of applications of stochastic resonance, with a subordinate mention of the controversial SR Milankovitch hypothesis.

  • 12.
    edited April 2014

    I did it: I axed the section on Milankovitch cycles and absorbed its contents into another section called "Stochastic resonance in nature," which begins by saying that stochastic resonance was originally introduced in a hypothesis about the timing of the ice ages -- a hypothesis which has not been confirmed. But since then it has been found to be a widespread phenomenon, in particular in the mechanisms of sensory processing. On the bright side, that made the article shorter and less heavy.

    By the way, I found this reference to be clear and educational:

    * David Lyttle, [Stochastic resonance in neurobiology](http://math.arizona.edu/~flaschka/Topmatter/527files/termpapers/Lyttle.pdf), May 2008

    I added this to the [[Stochastic resonance]] wiki page.
  • 13.

    By the way I found this reference to be clear and educational:

    David Lyttle, Stochastic resonance in neurobiology, May 2008

    I only looked very briefly at that reference. I found it irritating that in the Leaky Integrate-and-Fire Model in equation (24) a square potential seemed to have been used. How should stochastic resonance work here, which seems to require a fourth-order potential, as I have so far understood? Does this work because of this threshold mechanism?
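    Regarding the question above: in a leaky integrate-and-fire model the nonlinearity is the firing threshold plus the reset, not a quartic double-well potential. A minimal sketch in Python may make this concrete (the parameter values here are illustrative ones I chose myself, not taken from Lyttle's paper):

```python
import math
import random

def lif_spikes(threshold=1.0, leak=0.1, drive=0.06, amplitude=0.03,
               freq=0.001, noise_sigma=0.05, dt=1.0, n=50000, seed=2):
    """Leaky integrate-and-fire: the voltage leaks toward input/leak,
    and the only nonlinearity is the threshold-and-reset rule."""
    rng = random.Random(seed)
    v = 0.0
    spikes = []
    for i in range(n):
        inp = drive + amplitude * math.sin(2 * math.pi * freq * i * dt)
        # linear (leaky) dynamics plus noise -- no double-well force anywhere
        v += dt * (-leak * v + inp) + noise_sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if v >= threshold:
            spikes.append(i * dt)
            v = 0.0  # reset: this, with the threshold, supplies the nonlinearity
    return spikes
```

    With these numbers the quasi-static voltage peaks at (drive + amplitude)/leak = 0.9, below the threshold 1.0, so without noise the neuron never fires; with noise it fires preferentially near the peaks of the periodic input. So yes, it appears to work because of the threshold mechanism, matching the threshold characterization quoted later in this thread.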

  • 14.

    The article states:

    More recently, in 1994-1995, a simpler, and more general characterization of stochastic resonance emerged, which did not require a bistable dynamical system [15, 14]. In this context, the only necessary components for stochastic resonance are some form of threshold, a sub threshold periodic signal, and a source of noise, either intrinsic to the system or added to the signal.

    Does that address your concern? I wrote a paraphrase of this in the draft blog article, so if anyone thinks that this or the article overall is missing the mark, it would be good to know that.
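    The threshold characterization quoted above is easy to demonstrate numerically. Here is a toy sketch (my own illustrative numbers, not from the paper): a sine wave that on its own never reaches the threshold does cross it once noise is added, and the crossings cluster near the signal's peaks.

```python
import math
import random

def threshold_crossings(threshold, amplitude, noise_sigma, n=20000, freq=0.01, seed=1):
    """Count samples where a subthreshold sine plus Gaussian noise exceeds the threshold."""
    rng = random.Random(seed)
    count = 0
    for i in range(n):
        signal = amplitude * math.sin(2 * math.pi * freq * i)
        if signal + rng.gauss(0.0, noise_sigma) > threshold:
            count += 1
    return count

# The sine alone (amplitude 0.5) never reaches the threshold (1.0),
# so with zero noise there are no crossings; moderate noise lets the
# signal through, mostly near its peaks.
no_noise = threshold_crossings(1.0, 0.5, 0.0)
with_noise = threshold_crossings(1.0, 0.5, 0.5)
```

    The stochastic-resonance effect proper is that the output's coherence with the signal peaks at an intermediate noise level: too little noise and nothing crosses, too much and the crossings no longer track the signal.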

  • 15.

    Does that address your concern? I wrote a paraphrase of this in the draft blog article, so if anyone thinks that this or the article overall is missing the mark, it would be good to know that.

    Yes, thanks. Unfortunately subsection 1.3 didn't make it through my attention threshold. That is, after the author started out in 1.2 with tons of terms that were undefined for me, for example "escape rate", I jumped to section 2.

    The author continues the subsection with:

    Part c of the figure demonstrated the power spectrum computed from the resulting train of spikes. This system was demonstrated to exhibit stochastic resonance in that the amplitude of the peak of the power spectrum goes through a maximum as a function of the noise intensity [15].

    I find it a bit irritating that the figure seems to show a power spectrum as a function of frequency.

  • 16.
    edited April 2014

    By the way, Nad, your frequent use of the word "irritating" makes it sound as if you're a somewhat angry person. It means "causing annoyance, impatience, or mild anger." I rarely use this word except when someone is making me angry. When I've met you, you don't seem so annoyed and impatient... but in writing, your use of this word makes me imagine a somewhat less pleasant person than you actually are!

  • 17.
    edited April 2014

    David - I've been busy and distracted, but if the article is ready for a few last-minute changes and then publication let me know!

    A couple of things:

    1) Blog titles should be short. Look at the blog - a title like "Under the hood: how the stochastic resonance code works" or "At the Azimuth Code Project: The stochastic resonance model" would not fit in one line, and it would also tend to gum up the "New Posts" and "Latest Comments" sections. Steve Easterbrook's titles like "What does the new IPCC report say about climate change (part 7)" were so long that one reader complained they couldn't read the part number in the "Latest Comments" section, so I trimmed it down to "New IPCC report (part 7)".

    The idea of your post is complex: it's about stochastic resonance and it's about your attempt to say what's going on at the Azimuth Project. It's probably not good to try to express all that in the headline - you can do it in the first sentence. But make up your mind what you're mainly doing: discussing stochastic resonance, or helping people get to know the Azimuth Code Project. If the former, a great title is "Stochastic Resonance". If the latter, something like "What's Up at Azimuth" or "Azimuth Code Project News".

    2) I'm glad you didn't completely ditch the discussion of how stochastic resonance is related to Milankovitch cycles, since this is why we bothered writing software about it in the first place.

    By the way, an equally exciting simple model, more fashionable in climate physics, would be the [delay differential equation for the El Niño](http://www.azimuthproject.org/azimuth/show/ENSO#DelayedActionOscillator). People think we're heading for a big El Niño soon, and some argue this will end the much-debated "global warming pause" and blast us into a hotter world. If so, that would be huge news.

  • 18.
    edited April 2014

    Thanks for the useful feedback. I'll check out El Niño stuff.

    I've divided this blog into two parts, and reworked it a bit.

    * [[Blog - The stochastic resonance program (part 1)]]. This contains the math and science background. This is ready to go.

    I've expanded the part on the Milankovitch cycles again. I just needed to make the right emphases and separations between the SR hypothesis and the Milankovitch hypothesis.

    John, can you give this a critical review, to make sure that I'm not making any statements that are off base? Also if there are any sentences that deserve to be tightened up.

    * [[Blog - The stochastic resonance program (part 2)]]. This explains the program. This is not yet ready to go, but quite close. I've started a separate discussion thread for this one.

    Thanks!
  • 19.
    edited April 2014

    Ok, here is an issue, sorry. In the text, when I talk about the SDE, I speak about the "stochastic derivative," but I gather this doesn't really exist.

    Truth be told, I have yet to work through all the technical definitions behind stochastic differential equations. Can anyone think of a way to fix up my heuristic discussion, which I think sounds pretty good?

    In the spirit of honesty, I was thinking of adding something along these lines:

    Caveat: I just stated the general idea of an SDE, but the technical definition of the derivative of a stochastic process is a rather nuanced matter, which I have yet to master. It turns out, however, that the approximation algorithm used by the program is quite simple, and is able to skirt around these subtle definitional issues. It works by simply adding a normally distributed random number to the deterministic derivative at each of the sample points.

    But this actually raises the further question of how this algorithm is justified.

    It sounds like this is the [Euler-Maruyama](http://en.wikipedia.org/wiki/Euler%E2%80%93Maruyama_method) method.

    And I see that there are convergence theorems for such algorithms, e.g.

    * D. Higham, X. Mao, A. Stuart, [Strong convergence of Euler-type methods for nonlinear stochastic differential equations](http://homepages.warwick.ac.uk/~masdr/JOURNALPUBS/stuart51.pdf)
    The second page of this reference talks about Lipschitz conditions that guarantee convergence of the approximation algorithm -- but I got lost there. Do these theorems cover the SDE which is used in the program, and quoted in my blog article? There I wrote:

    DerivDeterministic(t, x) = SineWave(t, amplitude, frequency) + Bistable(x),

    where Bistable(x) = x (1 - x²).

    Obviously I can't get into this stuff in this tiny blog article, but I'd like to feel that at least some of us here know that there's not a hole in the explanation of the algorithm.
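    For what it's worth, here is a minimal sketch of the Euler-Maruyama scheme applied to the SDE above (Python for illustration, though the actual program is in JavaScript; the amplitude, frequency, and noise values are ones I made up). The point relevant to the convergence question is the sqrt(dt) scaling of the Gaussian increment: "adding a normally distributed random number to the deterministic derivative" agrees with Euler-Maruyama only when the noise term is scaled this way.

```python
import math
import random

def bistable(x):
    return x * (1 - x**2)

def deriv_deterministic(t, x, amplitude=0.2, frequency=0.01):
    # the drift quoted in the blog article: sinusoidal forcing plus a bistable term
    return amplitude * math.sin(2 * math.pi * frequency * t) + bistable(x)

def euler_maruyama(x0, t0, t1, n_steps, noise_sigma, seed=0):
    """Approximate dX = DerivDeterministic(t, X) dt + sigma dW on [t0, t1]."""
    rng = random.Random(seed)
    dt = (t1 - t0) / n_steps
    t, x = t0, x0
    path = [x]
    for _ in range(n_steps):
        # drift times dt, plus a Gaussian increment scaled by sqrt(dt)
        x += deriv_deterministic(t, x) * dt \
             + noise_sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        path.append(x)
    return path

path = euler_maruyama(x0=1.0, t0=0.0, t1=200.0, n_steps=20000, noise_sigma=0.4)
```

    On the Lipschitz question: the drift x(1 - x²) is only locally Lipschitz (the cubic grows too fast globally), which is exactly the situation the Higham-Mao-Stuart paper is about; its results for one-sided Lipschitz drifts seem like the relevant ones, but someone more expert should confirm.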

    I wish Tim was around. Too bad he was on the way out as I was on the way in.

  • 20.
    edited April 2014

    David -

    I'm reading your first article now. It looks great! Here are the only problems I can find.

    1. "a noise signal". How about "some noise"? I think ordinary people would think of "noise" and "signal" as antonyms.

    2. "This concept was originally used in a hypothesis about the timing of ice-age cycles..." Is that really where the concept originated? I hadn't known that... or maybe I've forgotten that. (It's not very important where it first originated, but we might as well make sure we're saying true stuff.)

    3. In your light switch example - a good example - you use the phrases "internal state", "digital state" and then "concrete state". Are "internal" and "concrete" synonyms? If so, ditch one. I'd say ditch "concrete state" - I've never heard people say that.

    4. "In this relationship, which is catalyzed by the noise" - I'm not sure which relationship is "this" relationship, and I don't know what it means to catalyze a relationship. I know what it means for something to catalyze a process or reaction. I have a feeling this little passage could be deleted and everything would be clearer. After all, moments later you say "The noise has amplified the input signal", which is the real punchline.

    5. You write "$Bistable(x) = x (1 - x^2)$". Us math whizzes can instantly see this force is minus the derivative of a potential $ x^4/4 - x^2/2$ whose graph looks roughly like this:
    <img src="http://www.scielo.br/img/fbpe/bjp/v30n4/26fi02.jpg" alt=""/>

    so that you're talking about a ball in viscous honey rolling around on a surface of this shape, pushed around by a random (noisy) force and occasionally rolling from one pit to the other. Normal folks might merely scratch their heads when they see this formula and think "hmm, weird math stuff". Wouldn't it help to include a picture of the double well potential $ x^4/4 - x^2/2$ back when you talk about bistable stochastic resonance, and later say that it gives this force $ x(1 - x^2)$? Nothing fancy - I'm not suggesting you say "derivative of the potential" - just a nudge to help people see what this funny-looking function means.

    You could probably borrow a picture from Wikipedia or an earlier Azimuth article on this stuff.
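    To spell out point 5 for the record, the Bistable term is minus the derivative of the double-well potential, and the critical points of that potential confirm the two wells and the barrier between them:

```latex
V(x) = \frac{x^4}{4} - \frac{x^2}{2},
\qquad
-V'(x) = -(x^3 - x) = x\,(1 - x^2) = \mathrm{Bistable}(x),

V'(x) = 0 \;\Longrightarrow\; x \in \{-1, 0, 1\},
\qquad
V''(\pm 1) = 2 > 0 \ (\text{wells}),
\quad
V''(0) = -1 < 0 \ (\text{barrier}).
```

    So the ball-in-honey picture is literal: stable rest states at $x = \pm 1$, separated by the barrier at $x = 0$.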

  • 21.

    I edited the bit about “stochastic derivatives”. I added a graph but did not explain it. The R code for the graph is there temporarily.

  • 22.
    edited April 2014

    John, thanks for the really useful comments, and Graham thanks for the help.

    John wrote:

    "a noise signal". How about "some noise"? I think ordinary people would think of "noise" and "signal" as antonyms.

Sounds good.

    "This concept was originally used in a hypothesis about the timing of ice-age cycles..." Is that really where the concept originated? I hadn't known that... or maybe I've forgotten that. (It's not very important where it first originated, but we might as well make sure we're saying true stuff.)

    I recall reading this somewhere, but I don't recall where :) Anyway, it could be hearsay, and as you point out it doesn't matter what the "first" application was. I'm going to change it to say something like: A striking use of the concept was in a hypothesis about the timing of the ice-age cycles within the framework of bistable climate dynamics. Although this hypothesis has not been confirmed, it remains of interest because...

    In your light switch example - a good example - you use the phrases "internal state", "digital state" and then "concrete state". Are "internal" and "concrete" synonyms? If so, ditch one. I'd say ditch "concrete state" - I've never heard people say that.

    Yes, synonyms, and I agree I should ditch one of them. I was thinking in terms of different levels of states, with each level being an abstraction from the relatively more concrete ones "below" it. (So it looks like abstract/concrete are relative terms.) Anyhow, I'll work this out to make a consistent terminology, and if I end up introducing a new term I'll define it.

    "In this relationship, which is catalyzed by the noise" - I'm not sure which relationship is "this" relationship, and I don't know what it means to catalyze a relationship. I know what it means for something to catalyze a process or reaction. I have a feeling this little passage could be deleted and everything would be clearer. After all, moments later you say "The noise has amplified the input signal", which is the real punchline.

    I agree this sentence is somewhat murky. It doesn't express well what I meant to say. Here's what I wrote:

Then, a bit of random noise, occurring near the peak of an input cycle, may "tap" the system over to the other digital state. So, there will be a phase-dependent probability of transitions between digital states. In this relationship, which is catalyzed by the noise, the input frequency is being "stochastically transmitted" through to the output.

The "relationship" is the phase-dependent probability of transitions between digital states. This itself bears the mark of the input frequency, but in a complex, stochastic way. It's no simple amplification of the input signal; in fact, one would expect it to contain a lot of "stochastic dirt" (for lack of a better term).

For example, if a noise event is likely to tap the system over to the other state near a peak of the forcing signal, then if the noise contains a lot of high-frequency components, and the forcing signal has a low frequency, one might expect a high likelihood of another noise event pushing it back across the state boundary. We might then see a "buzzing" between the two states near the peak of the forcing signal, with near equal chance of the system ending up on either side of the boundary as the forcing signal retreats -- in fact, wouldn't it be biased towards ending up on the same side that it started on? On the other hand, if noise events are less frequent, then there may only be some probability, substantially less than 1, of a state change on each cycle. Then the probability of buzzing will also be less, and it is more likely that each tap will actually lead to a lasting state change.

So we're talking about a pretty funky kind of amplifier. I still wonder about the most appropriate ways to measure the effectiveness of this signal transfer. Sure, there's Fourier analysis. But maybe there are "stochastic measures" as well. For instance: what is the expected number of state transitions per peak/trough of the forcing signal? If it's one, then the state is closely resonating with the input. More broadly, one could look at the probability density of state transitions, as a function of the phase of the forcing signal. If this is "high" and independent of phase, that indicates that the output is just noise, i.e., is dominated by the noise source. If it is low and concentrated at the peaks and troughs, then there is "partial resonance" with the forcing signal, e.g., transitions occur on 50% of the peaks and troughs of the forcing signal.

    It is no wonder, then, that the correlation between input and output is very complex under stochastic amplification.

    So, you see, I was trying to summarize a lot of ideas into one sentence -- I should try again.
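The "stochastic measures" suggested above could be estimated directly from a simulated state sequence. Here is a minimal sketch in Python -- not the actual program; the toy model, names, and parameter values are all illustrative -- that records the forcing-signal phase at each digital state transition and the average number of transitions per cycle:

```python
import random

def transition_phases(xs, dt, period):
    """Forcing-signal phase (in [0, 1)) at each crossing of x = 0."""
    return [((i * dt) / period) % 1.0
            for i in range(1, len(xs))
            if xs[i - 1] * xs[i] < 0]

# Toy two-state system: a noise "tap" is far more likely to flip the
# state near the peaks (phase 0.25) and troughs (phase 0.75) of the
# forcing cycle -- a caricature of the phase-dependent transition
# probability discussed above.
random.seed(2)
dt, period, steps = 0.1, 10.0, 20000
x, xs = 1.0, []
for i in range(steps):
    phase = ((i * dt) / period) % 1.0
    near_extreme = min(abs(phase - 0.25), abs(phase - 0.75)) < 0.05
    if random.random() < (0.05 if near_extreme else 0.0005):
        x = -x
    xs.append(x)

phases = transition_phases(xs, dt, period)
per_cycle = len(phases) / (steps * dt / period)  # transitions per forcing cycle
```

A histogram of `phases` would then show whether transitions concentrate at the peaks and troughs (partial resonance) or spread uniformly across the cycle (noise-dominated output).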

By the way, I wonder if "stochastic amplification" is a better term. I see it used, e.g., in:

• D. Alonso, A. McKane, M. Pascual, [Stochastic amplification in epidemics](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2373404/)

    John wrote:

    You write "$Bistable(x) = x (1 - x^2)$". Us math whizzes can instantly see this force is minus the derivative of a potential $ x^4/4 - x^2/2$ whose graph looks roughly like this...

    Thanks for highlighting this point. It does bring out the meaning of the example polynomial. I will add another brief section that talks about the potentials, shows the graph, and introduces the bistable polynomial.

So now there are three progressive levels of specificity at which we can see that stochastic resonance/amplification takes place: general two-state systems, bistable systems, and bistable systems defined by a potential function.

    The latter gives a vivid and motivating physical model of stochastic resonance, but I don't want to give the reader the impression that a potential function is required for stochastic resonance. Three progressive sections, going from abstract to concrete, should address all of this.
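For concreteness, the planned section on potentials hinges on the one identity John alluded to: the bistable force is minus the derivative of the double-well potential,

```latex
V(x) = \frac{x^4}{4} - \frac{x^2}{2},
\qquad
\mathrm{Bistable}(x) = -V'(x) = -(x^3 - x) = x(1 - x^2),
```

with stable equilibria at $x = \pm 1$ (the two wells) and an unstable equilibrium at $x = 0$ (the hump between them).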

  • 23.
    edited May 2014

    I have a new approach, which should cut out the problems that I mentioned above with my handling of the SDE concept in the blog.

I will first introduce the potential function, and the force $x (1 - x^2)$ that it determines (minus its derivative), which gives a bistable deterministic dynamics.

    Then I'll introduce a discrete-time, randomized version of this model, still using the potential function. In this model, at each point in time, the derivative is discretely sampled, and a random quantity is added to this derivative, which then gets linearly extrapolated until the next time point. This is exactly what the program does! And it exhibits stochastic resonance!

This is fine for the purposes of the program, which is to give a toy model that illustrates stochastic resonance. Why not a discrete toy model?

    Then I'll add, in passing, that this model is patterned after a "stochastic differential equation," which loosely means such-and-such.
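As a sketch of what that discrete-time, randomized model amounts to -- in Python here rather than the program's Javascript, with function names and parameter values that are mine, purely illustrative:

```python
import math
import random

def bistable(x):
    # Force from the double-well potential V(x) = x^4/4 - x^2/2;
    # it pushes the state toward the stable points x = -1 and x = +1.
    return x * (1.0 - x * x)

def simulate(steps=10000, dt=0.01, amplitude=0.3, period=20.0,
             noise=6.0, x0=1.0, seed=1):
    """At each time point, discretely sample the deterministic
    derivative, add a random quantity to it, and linearly extrapolate
    the state until the next time point."""
    rng = random.Random(seed)
    x, xs = x0, []
    for i in range(steps):
        t = i * dt
        forcing = amplitude * math.sin(2.0 * math.pi * t / period)
        dxdt = bistable(x) + forcing + noise * rng.gauss(0.0, 1.0)
        x += dxdt * dt  # linear extrapolation to the next time point
        xs.append(x)
    return xs

xs = simulate()
# Sub-threshold forcing alone never crosses the barrier between the
# wells; with the noise term included, the state may hop between them.
crossings = sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)
```

Whether the hops lock onto the forcing cycle depends on the noise level -- which is exactly the knob the stochastic resonance program exposes.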

  • 24.
    edited May 2014

Stochastic resonance is generally accepted to have been introduced in [this ice age paper](http://dx.doi.org/10.1111%2Fj.2153-3490.1982.tb01787.x) by authors who invented the term "stochastic resonance", or independently in [this paper by different authors](http://dx.doi.org/10.1111/j.2153-3490.1981.tb01746.x), which introduced the concept of a potential function in climate models. The second paper mentions ice ages only in passing and is more general in its motivation, though follow-up work by the same authors was focused on ice ages.

There are a few more papers floating around by the same authors that develop the concept further, so those sometimes get credit as the "original paper(s)".

    The second paper cites some 1977 work with Prigogine, and other work on nonequilibrium thermodynamics, as motivation, but I don't think those are thought to have invented the concept.

    So, I'd say that stochastic resonance was invented in climate science, and "likely" in relation to ice ages, depending on how you partition attribution to the different groups.

  • 25.
    edited May 2014

    Nathan, thanks for the good information and references. I'm going through the first one now.

  • 26.
    edited May 2014

    Great stuff. I'll check back here this weekend to see if the article is done. It should be almost done, I think.... I wasn't calling for big changes.

    So we’re talking about a pretty funky kind of amplifier.

    That would be a nice sentence to include!

    I still wonder about the most appropriate ways to measure the effectiveness of this signal transfer. Sure, there’s Fourier analysis...

    ... but we're studying an inherently nonlinear system thanks to that Bistable function, so the usefulness of Fourier analysis is somewhat less than for linear (even if stochastic) systems. It's indeed an interesting puzzle, trying to quantify the amplification here. We could raise that in the comments on this article...
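Even so, a single-frequency Fourier projection is a cheap first diagnostic: project the output onto the forcing frequency and see how much survives. A small self-contained sketch (standard library only; all names are mine, not from the program):

```python
import cmath
import math
import random

def spectral_power(xs, dt, freq):
    """Magnitude of the discrete Fourier component of xs at frequency
    freq (cycles per unit time): how strongly that frequency shows up
    in the output."""
    n = len(xs)
    z = sum(x * cmath.exp(-2j * math.pi * freq * k * dt)
            for k, x in enumerate(xs)) / n
    return abs(z)

dt, period, n = 0.1, 10.0, 2000          # exactly 20 forcing cycles
f0 = 1.0 / period

# Output locked to the forcing: flips sign once per half-cycle.
locked = [1.0 if math.sin(2 * math.pi * f0 * k * dt) >= 0 else -1.0
          for k in range(n)]
# Output dominated by noise: independent random signs.
random.seed(3)
noisy = [random.choice([-1.0, 1.0]) for _ in range(n)]

p_locked = spectral_power(locked, dt, f0)   # near 2/pi for a square wave
p_noisy = spectral_power(noisy, dt, f0)     # on the order of 1/sqrt(n)
```

This only measures the linear part of the correlation, of course, which is exactly the limitation noted above; the phase-dependent transition statistics would complement it.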

  • 27.

    Ok, all set to go. Thanks!

    Graham, thanks for the graph.

  • 28.

    Edited the introduction, and moved the reference to Glyn and Tim's blog article to the section called "The concept of stochastic resonance."

  • 29.

    Reworked the introduction, which now talks about and links to Glyn and Tim's article. Removed the link to the article from the references section.

  • 30.

    Looks good. I tweaked the graph. Suggestion:

    So we will also be writing to explain the science, the math, and the programming behind these models.

    So we will also be explaining the science and math, as well as the programming behind these models.

  • 31.
    edited May 2014

    Thanks Graham! Graph looks good.

    I revised that sentence as follows:

    So we will be writing articles to explain both the programs themselves and the math and science behind them.

    This is ready to go, as far as I'm concerned.

John, thanks for this great opportunity to get our ideas published.

  • 32.

Great! I will post this one in, say, 2 or 3 days. I just posted Jacob Biamonte's announcement of [Quantum Frontiers in Network Science](http://johncarlosbaez.wordpress.com/2014/05/06/quantum-frontiers-in-network-science/). I like how we're getting an ecosystem with plenty of posts. After yours, either Jan Galkowski's or Marc Harper's, whichever gets polished up first. I also have a "just for fun" post on hyperbolic hexagonal honeycombs lined up.

  • 33.
    edited May 2014

    I copied this blog article to the Azimuth Blog, and I can now publish it with a flick of my finger. I would like to wait just a little, like tomorrow morning maybe.

    The article is very nice and also very nicely formatted. Just three tiny problems:

1. There was a link of the form `[[Stochastic differential equation]]`, which works on the Azimuth Wiki but not on the blog. We need HTML, not Markdown, and global instead of relative links.

2. There was LaTeX like `$GaussianSample$`, which comes out in roman font on the Wiki, since Jacques Distler tweaked how LaTeX works there, but comes out in ugly italics in normal LaTeX installations like the blog's. I fixed it: `$\mathrm{GaussianSample}$`.

    3. There was a figure 650 pixels wide, while the blog is 450 pixels wide.

    But clearly I'm scraping the bottom of the barrel looking for things to complain about here! I have rarely had an article written by someone else that took so little work to format for the blog. I will add point 3 to our directions for would-be bloggers:

    • [How to blog](http://www.azimuthproject.org/azimuth/show/How+to#blog)

      But I don't really expect anyone to master such subtleties!

    Great article!

  • 34.

    I think I made an image 480 wide.

  • 35.

    You will be shot at dawn.

  • 36.
    edited May 2014

Okay, I've posted David's blog article here:

    • David Tanzer, [The stochastic resonance program (part 1)](http://johncarlosbaez.wordpress.com/2014/05/10/the-stochastic-resonance-program-part-1/), Azimuth Blog.

    It's very clear. I hope David finishes the second part soon. I'll post a link to this on G+.

  • 37.
    edited May 2014

Thanks John, much appreciated. I'm looking at part 2 now.

  • 38.

Dave, that's excellent! :)
