
Complexity science of discrete event simulation

As part of my work on climate impacts, I have recently been thinking about some of the simulation models used to study coupled social-physical dynamics. These are often discrete event simulations or agent-based models.

These models often have complex emergent behavior, and I am wondering what mathematical and computational tools exist to characterize their dynamics. This may intersect with the Azimuth Project's work on Petri nets. I'm toying with writing an internal grant proposal on "the complexity science of discrete event simulations" in the spring, if I can come up with some good ideas.

The example I'm focusing on right now is the SimX and SimCore frameworks developed by Los Alamos National Laboratory: "SimX is a library for developing parallel, distributed-memory simulations in Python. SimX is written primarily in C++ and provides the simulation modeler with the core functionality needed in a parallel simulation library such as event queueing, time advancement, domain partitioning, synchronization and message passing. SimX APIs are exposed to Python, enabling rapid development and prototyping of a parallel simulation entirely in Python. SimX is free software, available under the GNU LGPL license, and hosted on GitHub."

These frameworks have been used to implement transportation models (TRANSIMS and FastTrans), epidemiological models (EpiSimS), telecommunications models (MIITS), etc. There are also people here interested in applying this to dynamic economic models, as opposed to traditional computable general equilibrium models; I'm not sure how this would compare to the newer DSGE models. (And they probably won't get funded to do this anyway, given institutional politics ...)

Some questions that come to mind:

  • What is the appropriate mathematical structure to represent these models? Petri nets may be too specialized. Agent models can combine continuum state variables with discrete actors, or the models can be hybrid discrete-continuous. Many agent models have their dynamics determined by an implicit optimization step, in which each actor takes a utility-maximizing action (potentially with actor-specific utilities); this may be hard to describe using simple transition rules. Can anything useful be said about such complex dynamics? (Sketch 1 after this list illustrates this kind of dynamics.)

  • How can you characterize bifurcations, steady states, etc. in these models? In continuum dynamics there are approaches like center manifold reduction and numerical continuation. What can be done from a discrete event perspective? (Sketch 2 below shows a brute-force empirical version.)

  • What methods exist to construct reduced-order models from more complex discrete simulations? You could write an agent model with 300 million actors in it, but it could be beneficial to derive a simplified, more aggregate model that is faithful to the original dynamics. (This is different from building an aggregate model from the ground up, where premature averaging might lose some of the dynamics.) Reduced models can be more computationally efficient (permitting more uncertainty quantification and scenario exploration) and also easier to understand. There are a variety of methods for this in continuum dynamics, such as Krylov subspace projection, balanced truncation, and discrete empirical interpolation. Can they be generalized? Ideally they would preserve the stochastic behavior of the system (i.e., distributions), not just its mean behavior. (Sketch 3 below shows the snapshot-based starting point.)

  • How can uncertainty be quantified in such models? Stochastic simulations often have intractable likelihood functions: you don't know how to write them down in closed form and can only simulate from them. This calls for methods like approximate Bayesian computation (ABC; see sketch 4 below). Much of the computer model calibration literature is devoted to Gaussian process emulators, which build simple statistical approximations to the output of numerical simulations, but these may be too smooth. Do we need "Markov process emulators" or "Petri net emulators" or other ways of building statistical approximations to large-scale discrete simulations? And how do we develop large-scale MCMC for uncertainty quantification? Hamiltonian Monte Carlo is a method of choice for Bayesian computation in high-dimensional systems, but it requires continuous derivatives with respect to all the parameters, and also requires an available likelihood; can it work in discrete systems or in an ABC context?

  • More on uncertainty: how can we characterize "predictability" within these models, and tie it to information theory? Can you see a tipping point coming (collapse of the power grid, a global pandemic, ...)? What are the "signatures" of impending bifurcations? This has been studied in continuum dynamics, e.g., critical slowing down; are there discrete analogues? (Sketch 5 below computes a standard early-warning statistic.)

  • Still more on uncertainty: as mentioned earlier, agent models often build in individual utility-maximizing behavior. But this does not account for each agent's uncertainty about the system (its Bayesian beliefs). Can these models be adapted so that agents make decisions under uncertainty, maximizing expected utility? And can they learn as they go, updating their beliefs and consequent actions (endogenous learning)? This probably intersects with approximate dynamic programming and optimal learning, as well as sequential Monte Carlo and ensemble Kalman filtering. (Sketch 6 below shows a single such agent.)

  • How can we characterize the high-risk tail behavior of these stochastic simulations? You can do it by brute-force Monte Carlo, but can theory help (e.g., by suggesting asymptotic distributional forms to fit to model output)? What about rare event simulation? Again, can theory help? (Sketch 7 below contrasts brute force with importance sampling.)
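
Sketch 1: a minimal toy (entirely made up, not taken from SimX or any of the models above) of dynamics driven by an implicit per-agent optimization: each discrete actor picks the action maximizing its own utility, and a shared continuum variable responds.

```python
# Hypothetical toy model: discrete agents with actor-specific utilities
# choosing over a small action set, coupled to one continuum field.
import random

N_AGENTS = 100
ACTIONS = [-1.0, 0.0, 1.0]            # each agent's discrete action set

def utility(pref, action, field):
    """Actor-specific utility: stay near one's own preference,
    penalized for pushing in the same direction as the shared field."""
    return -(action - pref) ** 2 - 0.5 * action * field

def step(prefs, field):
    """One synchronous update: every agent solves a tiny argmax."""
    actions = [max(ACTIONS, key=lambda a: utility(p, a, field)) for p in prefs]
    # the continuum variable relaxes toward the mean action
    field += 0.5 * (sum(actions) / len(actions) - field)
    return actions, field

random.seed(0)
prefs = [random.uniform(-1, 1) for _ in range(N_AGENTS)]
field = 0.0
for t in range(50):
    actions, field = step(prefs, field)
print("long-run field value:", round(field, 3))
```

Even this tiny model has no simple transition-rule description: each update is the solution of 100 coupled argmax problems, which is what makes a Petri-net-style representation awkward.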
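
Sketch 2: absent a discrete-event continuation theory, the brute-force approach is an empirical bifurcation diagram: sweep a parameter, run the stochastic model toward quasi-stationarity, and record the long-run state. The toy below is a stochastic SIS epidemic (my stand-in, not one of the LANL models), whose deterministic limit has a transcritical bifurcation at beta = gamma.

```python
# Empirical parameter sweep for a stochastic SIS model with N individuals.
import random

N, GAMMA = 500, 0.1

def sis_run(beta, steps=20_000, i0=50, seed=None):
    rnd = random.Random(seed)
    i = i0
    for _ in range(steps):
        # at most one event per step: an infection or a recovery
        p_inf = beta * i * (N - i) / N**2
        p_rec = GAMMA * i / N
        u = rnd.random()
        if u < p_inf:
            i += 1
        elif u < p_inf + p_rec:
            i -= 1
    return i / N

# near beta = GAMMA convergence is slow and stochastic extinction blurs
# the deterministic picture -- itself a discrete-event phenomenon
for beta in [0.05, 0.08, 0.10, 0.12, 0.20, 0.40]:
    runs = [sis_run(beta, seed=k) for k in range(5)]
    print(f"beta={beta:.2f}  long-run infected fraction ~ {sum(runs)/len(runs):.3f}")
```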
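
Sketch 3: the snapshot-based starting point from continuum model reduction, i.e. proper orthogonal decomposition via a truncated SVD of simulation snapshots. The data here are synthetic stand-ins for per-agent trajectories; the open question is what replaces this for genuinely discrete, stochastic state spaces.

```python
# POD / principal-component reduction of (synthetic) agent trajectories.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_times, n_modes = 300, 200, 3

# synthetic "full model" snapshots: agents x time (two coherent modes + noise)
t = np.linspace(0, 10, n_times)
X = (np.outer(rng.normal(size=n_agents), np.sin(t))
     + np.outer(rng.normal(size=n_agents), np.cos(2 * t))
     + 0.05 * rng.normal(size=(n_agents, n_times)))

# POD: truncated SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Ur = U[:, :n_modes]                     # reduced basis over agents
Z = Ur.T @ X                            # trajectories of 3 reduced coordinates
X_hat = Ur @ Z                          # lift back to all 300 agents

err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error with {n_modes} modes: {err:.4f}")
```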
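
Sketch 4: a minimal ABC rejection sampler, assuming only the ability to simulate. The "simulator" here is a trivial Poisson count model standing in for an expensive discrete event simulation; the true rate, prior, summary statistic, and tolerance are all invented for illustration.

```python
# ABC rejection sampling: keep parameter draws whose simulated summary
# statistic lands within a tolerance of the observed one.
import numpy as np

rng = np.random.default_rng(1)

def simulate(rate, n=50):
    """Stand-in for an expensive stochastic discrete-event simulation."""
    return rng.poisson(rate, size=n)

observed = simulate(rate=3.0)                 # pretend these are field data
summary = lambda x: x.mean()                  # low-dimensional summary statistic
eps = 0.1                                     # tolerance on the summary

posterior = []
while len(posterior) < 500:
    theta = rng.uniform(0.0, 10.0)            # draw from a flat prior
    if abs(summary(simulate(theta)) - summary(observed)) < eps:
        posterior.append(theta)               # keep parameters that "match"

post = np.array(posterior)
print(f"ABC posterior for the rate: {post.mean():.2f} +/- {post.std():.2f}")
```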
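
Sketch 5: a standard early-warning statistic from the continuum literature, lag-1 autocorrelation in a sliding window, applied to a synthetic AR(1) series whose recovery rate slows over time. Whether a useful discrete-event analogue exists is exactly the open question.

```python
# Critical slowing down: rising lag-1 autocorrelation ahead of a tipping point.
import numpy as np

rng = np.random.default_rng(2)
T, window = 4000, 400

# synthetic series: AR(1) whose restoring force weakens as t -> T
x = np.zeros(T)
for t in range(1, T):
    phi = 0.6 + 0.4 * t / T               # recovery slows toward phi = 1
    x[t] = phi * x[t - 1] + 0.1 * rng.normal()

def lag1(w):
    """Lag-1 autocorrelation of a window."""
    return np.corrcoef(w[:-1], w[1:])[0, 1]

for start in range(0, T - window + 1, 900):
    w = x[start:start + window]
    print(f"t in [{start}, {start + window}): lag-1 autocorrelation = {lag1(w):.3f}")
```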
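
Sketch 6: a single agent deciding under uncertainty. It holds a conjugate Beta belief about an unknown success probability, acts to maximize expected utility under that belief, and updates the belief from what it observes (endogenous learning). The action set and payoffs are invented for the example; note the greedy policy can lock in without exploring, which is exactly where optimal learning and approximate dynamic programming come in.

```python
# One Bayesian expected-utility agent with a Beta-Bernoulli belief.
import random

class LearningAgent:
    def __init__(self):
        self.a, self.b = 1.0, 1.0          # Beta(1,1) prior on success prob p

    def act(self):
        """Risky action pays 1 with probability p; safe action pays 0.4."""
        expected_p = self.a / (self.a + self.b)   # posterior mean of p
        return "risky" if expected_p > 0.4 else "safe"

    def update(self, success):
        """Conjugate Bayesian update after observing the risky outcome."""
        self.a += success
        self.b += 1 - success

random.seed(4)
true_p = 0.55
agent, payoff = LearningAgent(), 0.0
for t in range(200):
    if agent.act() == "risky":
        outcome = 1 if random.random() < true_p else 0
        agent.update(outcome)
        payoff += outcome
    else:
        payoff += 0.4                      # safe action teaches nothing
print(f"belief about p: {agent.a/(agent.a+agent.b):.2f}, total payoff: {payoff:.1f}")
```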
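
Sketch 7: brute-force Monte Carlo versus importance sampling for a tail probability. The "model output" is a standard normal stand-in; for a real simulator one would instead tilt the driving noise (an exponential change of measure), but the variance-reduction point is the same.

```python
# Rare-event estimation: P(output > 4) for a standard normal surrogate.
from math import erfc, sqrt
import numpy as np

rng = np.random.default_rng(3)
threshold, n = 4.0, 100_000

# brute force: essentially no samples land in the tail
brute = (rng.normal(size=n) > threshold).mean()

# importance sampling: draw from N(threshold, 1) and reweight by the
# likelihood ratio phi(y) / phi(y - threshold) = exp(-threshold*y + threshold^2/2)
y = rng.normal(loc=threshold, size=n)
w = np.exp(-threshold * y + threshold**2 / 2)
isamp = (w * (y > threshold)).mean()

exact = 0.5 * erfc(threshold / sqrt(2))
print(f"brute force: {brute:.2e}   importance sampling: {isamp:.2e}   exact: {exact:.2e}")
```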

Comments

  • 1.
    edited October 2013

    This [survey](http://www.grids.ac.uk/Complex/ABMS/ABMS.html#SECTION00023000000000000000) has some links to swarm modelling on Petri nets. I need to go back and look at the current state of swarm modelling at the Santa Fe Institute, which was Java-based a decade ago. This is something I've wanted to get back to, but I'll have to read all these great links first :)

  • 2.

    That looks very complicated! The simplest system I know that might count as a "discrete event simulation" is the [logistic map](http://en.wikipedia.org/wiki/Logistic_map). Given a system of the kind you're interested in, I would want to work backwards from high $r$ to low $r$ in the Wikipedia notation.

    1. Are any variables divergent? If not, what stops them being so?

    2. Are any variables chaotic? It seems to me that large interesting systems will always have some chaotic behaviour somewhere, and this will leak into everywhere else, though perhaps mostly as low-level noise. So can the chaos be bounded, isolated, modelled as stochastic? (A quick numerical check is sketched at the end of this comment.)

    3. Long limit cycles seem very fragile, and probably don't survive the noise (but that's just my guess). Short cycles might be important; for all I know, "boom and bust" in economies is one.

    4. Oscillatory behaviour can emerge purely from discreteness. Does it matter?

    5. Whatever is left after dealing with the above may be similar enough to the continuous case to use tools from there.
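
    A quick numerical check of point 2, assuming the standard logistic map notation $x_{t+1} = r x_t (1 - x_t)$: estimate the Lyapunov exponent at a few values of $r$. Positive values indicate chaos; negative values indicate a stable fixed point or cycle.

    ```python
    # Lyapunov exponent of the logistic map: average log|f'(x)| along an orbit.
    import math

    def lyapunov(r, n=10_000, burn=1_000, x=0.3):
        for _ in range(burn):                # discard the transient
            x = r * x * (1 - x)
        total = 0.0
        for _ in range(n):
            x = r * x * (1 - x)
            # f'(x) = r (1 - 2x); small offset guards against log(0)
            total += math.log(abs(r * (1 - 2 * x)) + 1e-12)
        return total / n

    for r in [2.8, 3.2, 3.5, 3.83, 4.0]:
        lam = lyapunov(r)
        print(f"r = {r:.2f}: Lyapunov exponent ~ {lam:+.3f} "
              f"({'chaotic' if lam > 0 else 'regular'})")
    ```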
