Home › Azimuth Project › Questions

During my offline time I've still been thinking about modelling populations and similar effects. I've settled on a slightly different simulation approach from the one I used previously, and it has thrown up a question which might already have been studied. If anyone is aware of existing work in this area I'd be very interested.

Suppose we've got a stochastic dynamical system which depends essentially only on a finite "memory" of parameters, and is influenced by "absolute time" only in that certain system input variables might depend on it (eg, a catastrophic forest fire reducing food supplies happens in year 20 after the simulation starts). Then, for any set of data comprising a state (which may include historical values, generalised velocities, etc.), we can approximate the probability distribution over next states by running the simulation repeatedly. If we do this for lots of states that we think are "likely states", we build up a lot of piecemeal information about the system. But are there any existing algorithms for calculating long-term probabilities of die-off, natural periods, etc. from such information? For a discrete Markov chain you can look at things like powers of the transition matrix, but that assumes you've got a finite set of states and have computed transition probabilities for all of them. Is there a method better adapted to "sparsely sampled" transitions, since it would be computationally prohibitive to approximate every transition this way?
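To make the setup concrete, here's a minimal sketch in Python. Everything in it is made up for illustration: a toy birth-death chain stands in for the expensive simulator, transition distributions are estimated only for the states we choose to sample, and the probability of eventual die-off is then approximated by fixed-point iteration over just those sampled transitions.

```python
import random

random.seed(0)

# Toy stand-in for the expensive simulator: a birth-death chain on
# population sizes 0..10, where 0 (die-off) and 10 are absorbing.
# The dynamics are made up purely for illustration.
def simulate_step(state):
    if state in (0, 10):
        return state
    return state + (1 if random.random() < 0.55 else -1)

# Estimate a transition distribution only for the states we choose to
# sample, rather than for the whole state space.
def estimate_transitions(states, samples=2000):
    probs = {}
    for s in states:
        counts = {}
        for _ in range(samples):
            t = simulate_step(s)
            counts[t] = counts.get(t, 0) + 1
        probs[s] = {t: c / samples for t, c in counts.items()}
    return probs

# Approximate P(eventual die-off | start state) by fixed-point
# iteration using only the sampled transitions.
def die_off_probability(probs, dead=0, alive=10, sweeps=500):
    p = {s: 0.0 for s in probs}
    p[dead], p[alive] = 1.0, 0.0
    for _ in range(sweeps):
        for s in probs:
            if s not in (dead, alive):
                p[s] = sum(q * p.get(t, 0.0) for t, q in probs[s].items())
    return p

probs = estimate_transitions(range(11))
p = die_off_probability(probs)
```

For a chain this small the answer can be checked against the gambler's-ruin formula; the open question above is what to do when even enumerating the relevant states is infeasible.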

Many thanks for any thoughts,

## Comments

If you had Monte Carlo samples from the posterior distribution of possible transition matrices, then you could calculate powers of each sample matrix and get a distribution of long-term behaviors. There are algorithms to compute such posteriors; perhaps this could be made more efficient using a sequential Monte Carlo method to update the posterior after each measured state. But I imagine that still would be an extremely inefficient way to do this, not really exploiting the sparsity of the measurements.

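A rough illustration of that idea, with made-up transition counts for a 3-state chain and an independent symmetric Dirichlet(1) prior on each row (so each row's posterior is again Dirichlet, and sample matrices can be drawn directly):

```python
import random

random.seed(1)

# Made-up observed transition counts for a 3-state chain.
counts = [[8, 2, 0],
          [1, 5, 4],
          [0, 3, 7]]

def sample_dirichlet(alphas):
    # Standard trick: normalised independent Gamma variates are Dirichlet.
    g = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(g)
    return [x / total for x in g]

def sample_transition_matrix(counts, prior=1.0):
    # With a symmetric Dirichlet prior on each row, the posterior of a
    # row is Dirichlet(counts + prior), so rows can be sampled directly.
    return [sample_dirichlet([c + prior for c in row]) for row in counts]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power(A, k):
    R = A
    for _ in range(k - 1):
        R = mat_mul(R, A)
    return R

# Distribution of a long-run quantity (here: probability of being in
# state 2 after 100 steps, starting from state 0) over posterior samples.
long_run = []
for _ in range(200):
    P100 = mat_power(sample_transition_matrix(counts), 100)
    long_run.append(P100[0][2])
```

The spread of `long_run` then reflects posterior uncertainty about the long-term behavior, not just a point estimate.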
A [Dirichlet process](http://en.wikipedia.org/wiki/Dirichlet_process) might be useful. I don't understand them, but I think they are used in this kind of problem. I have seen them used in [modelling molecular evolution](http://www2.lirmm.fr/mab/IMG/pdf/phylobayes2.3.pdf), especially amino acid sequences.

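For reference, a single draw from a Dirichlet process can be sketched via the truncated stick-breaking construction. The base measure here is just a made-up uniform distribution on ten atoms; the whole thing is illustrative, not a model of anything above.

```python
import random

random.seed(2)

# Truncated stick-breaking construction of one draw G ~ DP(alpha, H).
# H is a made-up uniform base measure on the atoms {0, ..., 9}.
def stick_breaking(alpha, truncation=100):
    weights, atoms = [], []
    remaining = 1.0
    for _ in range(truncation):
        b = random.betavariate(1.0, alpha)   # break off a fraction b of the stick
        weights.append(remaining * b)
        atoms.append(random.randrange(10))   # atom drawn from H
        remaining *= 1.0 - b
    return weights, atoms

w, a = stick_breaking(alpha=2.0)
```

The weights sum to (nearly) one and decay quickly, which is what makes the construction usable as a prior over discrete distributions with an unbounded number of components.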