
This post is a natural followup to

and represents a corresponding correction to the work started in

Beginning with

$$A = (d x)\, u = -(d\phi)\phi^{-1}$$

and turning the crank, a few simple lines of algebra result in the new

Discrete Burgers Equation

1. \(\phi(i-1,j-1) = \phi(i,j) \left[1 + k\, u(i,j)\right]\)
2. \(\phi(i+1,j-1) = \phi(i,j) \left[1 - k\, u(i,j)\right]\)

where \(x = i\Delta x\), \(t = j\Delta t\), and \(k = \frac{\Delta x}{\eta}.\)

Note that adding (1) and (2) above eliminates \(u(i,j)\), resulting in the update expression

$$\phi(i,j) = \frac{1}{2}\left[\phi(i-1,j-1)+\phi(i+1,j-1)\right]$$ while subtracting (2) from (1) results in

$$u(i,j) = -\eta\left[\frac{\phi(i+1,j-1)-\phi(i-1,j-1)}{2\Delta x}\right] \phi^{-1}(i,j)$$ which is the discrete Cole-Hopf transformation.

It will not take long to implement this in code, but it is significantly past my bedtime and I need to work in the morning. I did manage to code the exact solution for the n-wave, so I will use this as a test and also for the initial conditions for the simulation.
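The scheme above is straightforward to code. As a minimal sketch (not the Matlab implementation discussed below, and assuming periodic boundary conditions), the update expression and the discrete Cole-Hopf transformation might look like this:

```python
import numpy as np

def step_phi(phi):
    # One time step of the update expression:
    #   phi(i, j) = [phi(i-1, j-1) + phi(i+1, j-1)] / 2
    # np.roll implements the periodic boundary conditions.
    return 0.5 * (np.roll(phi, 1) + np.roll(phi, -1))

def u_from_phi(phi_prev, phi, dx, eta):
    # Discrete Cole-Hopf transformation:
    #   u(i,j) = -eta * [phi(i+1,j-1) - phi(i-1,j-1)] / (2 dx) * phi(i,j)^{-1}
    return -eta * (np.roll(phi_prev, -1) - np.roll(phi_prev, 1)) / (2 * dx) / phi
```

Each call to `step_phi` advances \(\phi\) by one time step; `u_from_phi` then recovers \(u\) at the new time from \(\phi\) at the previous one.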

## Comments

Update:

At this point, I've written a bunch of code and generated a bunch of numbers. I'm not quite satisfied enough to present the results, but one thing that is cool (and I'm 100% sure is well known) is that the discrete heat equation, i.e. the discrete Cole-Hopf transform of the discrete Burgers equation,

$$\phi(i,j) = \frac{1}{2}\left[\phi(i-1,j-1)+\phi(i+1,j-1)\right]$$

has a closed-form kernel: the [binomial probability density function](http://en.wikipedia.org/wiki/Binomial_distribution). This means we can solve the discrete Burgers equation at any time by simply convolving (via FFT) the kernel with the initial conditions.
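This claim can be checked directly. A sketch (with hypothetical helper names, using a direct weighted sum rather than an FFT): after \(j\) steps of the update rule, \(\phi(i,j)\) is exactly the initial data convolved with the binomial weights \(\binom{j}{m}/2^j\):

```python
import math
import numpy as np

def evolve(phi0, steps):
    # Repeatedly apply phi(i,j) = [phi(i-1,j-1) + phi(i+1,j-1)] / 2 (periodic).
    phi = np.asarray(phi0, dtype=float)
    for _ in range(steps):
        phi = 0.5 * (np.roll(phi, 1) + np.roll(phi, -1))
    return phi

def binomial_kernel(phi0, steps):
    # phi(i, j) = sum_m C(j, m)/2^j * phi(i - j + 2m, 0): a single
    # convolution with the binomial PMF reaches time step j directly.
    phi0 = np.asarray(phi0, dtype=float)
    out = np.zeros_like(phi0)
    for m in range(steps + 1):
        out += math.comb(steps, m) / 2.0**steps * np.roll(phi0, steps - 2 * m)
    return out
```

For any initial data, `evolve(phi0, j)` and `binomial_kernel(phi0, j)` agree to machine precision; the FFT route mentioned above just performs the same convolution faster.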

Ok. I've cleaned up my code a bit. I chickened out of doing it in Python, so I did it in Matlab.

To test the discrete Burgers equation, I started with the n-wave exact solution presented on the [[Burgers' equation|Azimuth wiki page]].

This is what the initial conditions look like:

<img src="http://www.azimuthproject.org/azimuth/files/numsol_nwave_initial.jpg" width="500" alt=""/>

Note that getting this correct is not completely trivial, because our initial conditions are given in terms of the _un_-transformed \(u\) while our update expressions are in terms of the transformed variable \(\phi\), so we need to convert initial conditions for \(u\) into initial conditions for \(\phi\). This figure demonstrates that the transformation of initial conditions worked.

Next, I simulate the n-wave for roughly 30 seconds, corresponding to 5127 time steps. Here are the results:

<img src="http://www.azimuthproject.org/azimuth/files/numsol_nwave_t30.jpg" width="500" alt=""/>

I assure you there are two curves there. If it weren't way past my bedtime already, I'd redo the plots so you could see both curves, e.g. by making one dashed.

One aspect of the discrete Burgers equation that boggles my mind is that errors do not accumulate as you evolve the system in time. Rather, errors disappear. This is mind-boggling, but true.

The reason is that the kernel of the discrete heat equation, i.e. the binomial PDF, actually converges to the continuum heat kernel as time evolves, so your solution gets more and more accurate with each time step.

This is magic. It demonstrates the value of discrete calculus for numerical methods.
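On converting initial conditions for \(u\) into initial conditions for \(\phi\): inverting the continuum Cole-Hopf transformation \(u = -\eta\,\phi_x/\phi\) gives \(\phi = \exp\!\left(-\frac{1}{\eta}\int u\, dx\right)\). A sketch of that conversion (not the Matlab code described above; trapezoidal integration on a uniform grid):

```python
import numpy as np

def phi_from_u(u, dx, eta):
    # Invert u = -eta * phi_x / phi:  phi = exp(-(1/eta) * integral of u dx).
    # Cumulative trapezoid rule; the overall scale of phi is irrelevant,
    # since phi enters the scheme only through ratios.
    u = np.asarray(u, dtype=float)
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * dx)))
    return np.exp(-integral / eta)
```

The constant of integration only rescales \(\phi\), which cancels in both the update expression and the discrete Cole-Hopf recovery of \(u\).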

We'd need to compare this to other numerical methods :-)

Is there a way to get an animated picture of the time evolution?


Has there been any new progress on this since May? I thought it was interesting. My own plans to try to dig into some project here once summer started got pretty seriously derailed, so I understand if you had to set it aside. I'd be curious to know more if you did make further progress, though.


While I can't speak for Eric, I got distracted by the task of understanding atmospheric radiation and other topics that are mentioned on [[Blog - the color of night]], mainly as TODOs.

I'm still convinced that programming a numerical approximation to the Burgers equation on $S^1$ would be the best start to understanding numerical fluid dynamics with spectral methods. So I still plan to get into that. I haven't solved some basic problems with the integration of C++ into Python, mainly due to the fact that I'm spoilt by Java build tools. So in fact I'm undecided whether I should proceed with C++ or switch to Java.

One aspect is that I still plan to write a simulation for [[stochastic resonance]] and let it run on the Google App Engine, which works for Java only.

Anyway, I think one should plan to be able to visualize any solutions one gets (very important), and to use [[spectral methods]], because that is what people in climate science do, and compare that to different approaches like the one Eric investigates here. I think Eric could succeed in constructing numerical methods that have important conservation laws built in, which spectral methods don't.

I got distracted by many other things this spring and summer, trying to wrap up my old life in higher gauge theory and quantum gravity. But I would love to help you guys on this project.

**I want to show people that Azimuth can do cool stuff.** The main ways I can help are:

* learning and doing math and physics
* explaining results on the blog
* helping write and publish papers

I need to get better at computer programming, but this is not my expertise.

So, let's talk about a small project we could do, which we could then publish or at least blog about. Blogging is probably a good first step; lately I've been blogging about almost all my work before publishing it.

John wrote:

> I need to get better at computer programming, but this is not my expertise.

The Burgers equation is all about numerical approximations, programming and data visualization, which is all complicated stuff where physics and pure math are not of much help.

I think it would be more worthwhile if you spent your time on the question of climate sensitivity: system theory (linear and nonlinear systems, deterministic and stochastic systems), nonlinear time series analysis, what the main drivers and feedbacks are, what the measurements say, whether they are statistically significant, what the sceptics say, etc., where physics and pure math are essential at every step of the story.

Actually I believe Eric's algorithm for solving the Burgers equation works well because it's a completely integrable system, tracing out geodesics in the diffeomorphism group... very similar to how the Euler equation traces out geodesics in the volume-preserving diffeomorphism group, but with a curious extra feature, namely that in the Burgers equation a lot of these geodesics end after a finite amount of time, due to the formation of shocks! It's rather interesting to see an infinite-dimensional Lie group with a left-invariant metric where geodesics end in a finite amount of time: this is impossible in a finite-dimensional Lie group. So, there's a huge amount of nice math here, which is quite easy for me, but perhaps not so easy for everyone.

However, perhaps you _don't_ want to take advantage of this nice math, or even think about it, because it doesn't generalize to the full-fledged Navier-Stokes equation. Is that your attitude?

Personally I'm a bit reluctant to study the beautiful math of the Burgers equation for a somewhat similar reason: it's too nice, too beautiful, too much like the math I'm already good at, and not sufficiently relevant to the messy real world that I'd like to understand (e.g., Navier-Stokes).

> I think it would be more worthwhile if you spent your time on the question of climate sensitivity: system theory (linear and nonlinear systems, deterministic and stochastic systems), nonlinear time series analysis, what the main drivers and feedbacks are, what the measurements say, whether they are statistically significant, what the sceptics say, etc., where physics and pure math are essential at every step of the story.

Okay, well, that certainly leaves me with lots to do! <img src="http://math.ucr.edu/home/baez/emoticons/tongue2.gif" alt=""/>

In the short run, I am planning to continue writing about glacial cycles, Milankovitch cycles, and the like. I want to write up the work Eric and others have done on the Zaliapin-Ghil paper, [[Another look at climate sensitivity]]. I want to write up the work you've done so far on [[stochastic resonance]] - we could do that together. And it would be quite fun to take the $\delta^{18}$O data from benthic forams, which gives this graph:

<img src="http://math.ucr.edu/home/baez/temperature/5Myr.png" alt=""/>

and do a wavelet transform on it! Someone already did (who was it, again? Crucifix and Rougier?), but as you noted, it would be nice to do it in a way that makes public exactly what algorithm is being used.

For more on that graph above, and the data that was plotted to make it, see:

* Lorraine E. Lisiecki and Maureen E. Raymo, [A Pliocene-Pleistocene stack of 57 globally distributed benthic $\delta^{18}$O records](http://www.naturals.ukpc.net/TimAndTim/Hansen/LisieckiRaymo_preprint.pdf), _[Paleoceanography](http://www.agu.org/pubs/crossref/2005/2004PA001071.shtml)_ **20** (2005), PA1003. Data available [on Lisiecki's website](http://lorraine-lisiecki.com/stack.html).

John wrote:

> However, perhaps you don't want to take advantage of this nice math, or even think about it, because it doesn't generalize to the full-fledged Navier-Stokes equation. Is that your attitude?

I'd be interested to learn more about that and intend to do so, but it would be a wholly _different_ training program than the one I had and have in mind, namely to use the Burgers equation as the simplest nontrivial example for practicing the programming and visualization of spectral methods, in preparation for the Navier-Stokes equations.

> ...and do a wavelet transform on it! Someone already did (who was it, again? Crucifix and Rougier?)

I think it was them - my original intent was to try different wavelet bases to see if it makes any difference, and to try to figure out which one is best - which is what Crucifix and Rougier, if I remember correctly, did not do.

A wavelet analysis is a nonparametric analysis; since we already have some parametric models at hand, it would be interesting to do some parametric estimation, too - for example for the bistable potential with multiple periodic forcings. I don't know if anyone has done that already. But it would seem that the topic of parametric estimation for stochastic differential equations is still in its infancy.

Right. On the blog, it would be sort of fun for me to explain the aspects I understand (the diffeomorphism group stuff), while you explain the stuff you're working on. However, I don't think the aspects I understand are 'useful' - not for understanding Navier-Stokes, or anything else related to climate change. So, I shouldn't spend much time on them.

How easy would it be for a complete novice like me to take a list of numbers and stick them into some existing software that does wavelet transforms? Easy, hard, not possible?

You're making me more interested in parameter estimation for stochastic differential equations...


> How easy would it be for a complete novice like me to take a list of numbers and stick them into some existing software that does wavelet transforms? Easy, hard, not possible?

You should not try it all on your own, but get someone to show you how this works with one of the standard software packages (Matlab, R, Mathematica, Sage).

> You're making me more interested in parameter estimation for stochastic differential equations...

Well, it seems to be a hard topic; for references see [[parametric estimation for stochastic differential equations]]. I don't really know how interesting this is for someone with your background, which, although impressive, still seems to be orthogonal to it to first order :-)

But I wonder: we find a concrete model in the literature and a lot of time series. To me, trying a parametric fit of the model to the time series is the very first thing that comes to mind. Is it mathematically unfeasible to do this?
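For concreteness: sticking a list of numbers into a wavelet transform really is just a few lines in any of those packages. As a toy sketch written from scratch (a single level of the Haar transform; the standard packages do this and much more):

```python
import numpy as np

def haar_dwt(x):
    # One level of the orthonormal Haar wavelet transform:
    # pairwise averages (smooth part) and differences (detail part),
    # scaled by 1/sqrt(2) so that total energy is preserved.
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail
```

Applying `haar_dwt` recursively to the `approx` part yields the full multi-level decomposition; other wavelet bases just replace the pairwise average/difference with longer filters.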

Crucifix and Rougier found parametric fitting to their stochastic model very difficult. But they were trying to estimate full probability distributions, not just best fits. On the other hand, one reason why their method had so much trouble is that in such problems there are frequently many local optima, so the fit you get may not be good, or the best. I seem to recall that was also a problem in [this attempt](http://www.springerlink.com/content/g4e8c68w7k7ama1j/) to estimate parameter distributions from a (deterministic) paleo model.

(As you may notice, I'm still here. Basically I'm about to have to move back into temporary accommodation with no internet access, but it hasn't happened yet.)

Can anyone recommend a good source of info (book, article, etc.) about group theory and symmetries as applied to processes/time series? Although I'd be interested in anything, I'm particularly interested in the estimation/inference of symmetries from the output data of the process (rather than from analysing its equations, etc.). This relates to the vague long-term plans I have for the simulation code I put up on the Azimuth wiki.

I'm still here and still reading all Azimuth comments. Just crazy busy right now.

I did not do much more with the discrete Burgers equation, for reasons similar to John's. As far as I am concerned the problem is solved. I wrote down and implemented in code an algorithm having the magical property that the accuracy gets BETTER the longer you simulate. You cannot ask for more than that.

I fully understand and support the desire to solve it also via spectral methods, as a warmup to learn the methods and techniques with Navier-Stokes in mind. But as far as the Burgers equation goes, I'm pretty sure there does not exist a solution better than mine on whatever subjective criteria we might choose.

I did spend some additional time on discrete Navier-Stokes and made some progress. Unlike the Burgers equation, which is something close to a noncommutative topological zero-curvature field theory (if that makes sense), Navier-Stokes is closer to a noncommutative version of Maxwell's equations, requiring a metric.

The zero-curvature part of the noncommutative Maxwell's equations corresponds to vorticity-free solutions to Navier-Stokes (if those exist).

Now back to the vortex...

PS: I wrote my previous comment in a rush on the train on my way to work and realize it didn't come out the way I intended. I think the numerical scheme I presented is in many ways optimal, but I have no claim to it. It is not mine. I'm sure it must be well known and standard. At most, I rediscovered it (which is always fun to do).


> I think the numerical scheme I presented is in many ways optimal

It looks first-order accurate in delta-whatever. That immediately suggests that there may be ways that it can be optimalerized.

One question that seems interesting is: what is the class of equations that can be solved by this sort of algorithm? Is the condition precisely, as John suggests, integrability? (I think there are multiple not-quite-identical notions of integrability; the one I usually hear about is something like "a system with an infinite number of conserved currents in terms of which the system can be solved," but I'm not sure if that's exactly what makes this algorithm work.)

Victor, it looks first order, but I'm not sure that is the way to think of it. A truly first-order algorithm would not get more accurate as the simulation evolves. The discrete kernel approaches the continuum kernel as time evolves. This is not the behavior of a first-order algorithm.

Matt, I would not be surprised if there were a link between this discrete approach and integrable systems. Several integrable nonlinear systems fall out naturally from noncommutative calculus. Search the arXiv for papers by Dimakis and Mueller-Hoissen.
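The kernel convergence is easy to check numerically: by the de Moivre-Laplace theorem, the binomial kernel \(\binom{n}{m}/2^n\) approaches a Gaussian with mean \(n/2\) and variance \(n/4\), and the pointwise gap shrinks as the number of steps \(n\) grows. A quick sketch:

```python
import math

def binom_pmf(n, m):
    # Binomial(n, 1/2) probability mass function: the discrete heat kernel.
    return math.comb(n, m) / 2.0**n

def gauss(n, m):
    # de Moivre-Laplace limit: Normal with mean n/2 and variance n/4.
    mu, var = n / 2.0, n / 4.0
    return math.exp(-(m - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def max_error(n):
    # Largest pointwise gap between the discrete and continuum kernels.
    return max(abs(binom_pmf(n, m) - gauss(n, m)) for m in range(n + 1))
```

`max_error(n)` decreases steadily with `n`, which is the numerical face of "the solution gets more accurate with each time step".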

David wrote:

> Can anyone recommend a good source of info (book, article, etc.) about group theory and symmetries as applied to processes/time series?

I can't. Quantum fields are actually quite similar to stochastic processes, and there's a huge amount known about applications of group theory and symmetry to quantum fields. I bet it's possible to take a lot of that work and transport it over to stochastic processes. But I don't know anything to read about it.

If you described a problem, I might have something to say...

John asked:

> If you described a problem, I might have something to say...

I suspect that a "fully-targeted" book on what I'm thinking about hasn't been written yet. What I'm thinking about is: suppose I have some output $D(t,v)$ from some process, in the form of a time series over time $t$ and some per-time other variables $v$ (which might be spatial location, type of thing, generally anything). Suppose I observe that some values are (say) the same, to criteria that account for floating-point/Monte Carlo results being unlikely to match "exactly", such as $D(t_a,v_b) \approx D(t_c,v_d)$, ..., $D(t_w,v_x) \approx D(t_y,v_z)$ (i.e., there are some sets of indexes where things seem to match). If I _hypothesise_ that these are evidence of a more complete "mathematical group symmetry" in the system, then it ought to be possible to use the technology of group theory to generate complete groups (at least from some class of groups) to hypothesis-test. (With this being numerical/probabilistic simulation, it's to be expected that things that ought to be equal are merely "very close", hence the need for hypothesis tests.) So in a way I'm interested in anything that talks about the various possible "completions" into a full group compatible with a small set of observed relations.

Looking in the typical mathematics section of a bookshop turns up lots of books for the typical undergraduate group theory course, or on the specific groups that turn up in a specific field, but not really anything about these kinds of issues. And I suspect that there probably isn't anything out there; I just thought I'd ask.

David wrote:

> If I hypothesise that these are evidence of a more complete "mathematical group symmetry" in the system, then it ought to be possible to use the technology of group theory to generate complete groups (at least from some class of groups) to hypothesis-test...

I've never seen any use of group theory in time series analysis, which is probably due to the fact that in most applications there is not enough data to even test the simplest relations, like recurrence or even stationarity.

> ...there are some sets of indexes where things seem to match...

You can assume that your time series is generated by a deterministic chaotic dynamical system that is ergodic, so that there is an optimal dimension $n$ such that points $X_k$ and $X_{k+n}$ tend to be close to each other. This is explained in the Kantz/Schreiber book that I mentioned in the references of [[time series analysis]]:

* Holger Kantz and Thomas Schreiber, _Nonlinear Time Series Analysis_, Cambridge University Press, 2nd edition, 2004.
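The recurrence idea above can be illustrated with a toy example (hypothetical helper names; the methods in Kantz/Schreiber are far more refined): scan candidate lags $n$ and pick the one for which $X_k$ and $X_{k+n}$ are closest on average.

```python
import math

def best_recurrence_lag(x, max_lag):
    # Mean |x[k] - x[k+n]| over the series, minimized over candidate lags n.
    def mismatch(n):
        return sum(abs(x[k] - x[k + n]) for k in range(len(x) - n)) / (len(x) - n)
    return min(range(1, max_lag + 1), key=mismatch)

# A noiseless periodic series: the best recurrence lag recovers the period.
x = [math.sin(2 * math.pi * k / 25) for k in range(500)]
```

With noisy or chaotic data the minimum is shallow and statistical, which is exactly where the hypothesis-testing machinery David asks about would have to enter.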