
Edit: See the comment:

>It turns out that this is not really the discrete Burgers equation because the corresponding continuum version discussed below is not the Burgers equation. I will fix this in a new thread and post the link here when it's ready.

Edit^2: Here is the correction

I hope to work out the discrete Burgers equation here and eventually add any material worth keeping to Burgers' equation.

The idea is to follow the plan suggested here.

To do this, we need to carry out the work in

* [Financial Modelling Using Discrete Stochastic Calculus](http://phorgyphynance.files.wordpress.com/2008/06/discretesc.pdf)

but up one more dimension. In the above paper, I noted that the discrete calculus applied to a binary tree results in the commutation relations:

$$[d x, x] = \frac{(\Delta x)^2}{\Delta t} d t$$

$$[d x, t] = [d t, x] = \Delta t d x$$

$$[d t, t] = \Delta t d t$$

I stopped at 1-forms because that's all that is needed for discrete stochastic calculus. However, a binary tree supports 2-forms as well, since the edges form the boundaries of diamond-shaped 2-cells.

The important thing to know about discrete calculus is that the product is associative and the exterior derivative \(d\) is nilpotent and satisfies the graded Leibniz rule:

- \(d^2 = 0\)
- \(d(a b) = (d a)b + (-1)^{|a|} a(d b)\)

However, the product is not commutative, i.e. \(a(d b)\ne (d b) a\). Therefore, we have

$$d[d x, x] = 0 \quad\implies\quad (d x)(d x) = 0$$

$$d[d t, t] = 0 \quad\implies\quad (d t)(d t) = 0$$

$$d[d x, t] = d[d t, x] = 0 \quad\implies\quad (d x)(d t) = -(d t)(d x)$$

Edit: Apologies to anyone unfortunate enough to have read this while I was fixing multiple typos. It should be ok now.
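To see how the first implication follows from the rules above (\(d x\) has degree 1 and \(x\) degree 0), expand the commutator with the graded Leibniz rule:

$$d[d x, x] = d\big((d x)x\big) - d\big(x(d x)\big) = \big[(d^2 x)x - (d x)(d x)\big] - \big[(d x)(d x) + x(d^2 x)\big] = -2(d x)(d x),$$

while applying \(d\) to the right-hand side gives \(d\left(\frac{(\Delta x)^2}{\Delta t} d t\right) = \frac{(\Delta x)^2}{\Delta t} d^2 t = 0\), so \((d x)(d x) = 0\). The other two implications follow the same way.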

## Comments

In Section 2 of

* [Soliton equations and the zero curvature condition in noncommutative geometry](http://arxiv.org/abs/hep-th/9608009)

a very beautiful derivation of the Burgers equation is given, beginning with a two-dimensional noncommutative calculus defined by the commutation relations:

* \([d x, x] = \eta d t\)
* \([d x, t] = [d t, x] = 0\)
* \([d t, t] = 0\)

which imply

* \((d x)(d x) = 0\)
* \((d t)(d t) = 0\)
* \((d x)(d t) = -(d t)(d x).\)

To quote the above paper:

>A \(G L(1, \mathbb{R})\)-connection 1-form can be written as
>$$A = w d t + u d x$$
>with functions \(u\), \(w\). Using the differential calculus introduced above, the curvature becomes
>$$F = (u_t +\frac{\eta}{2} u_{x x} + \eta u u_x - w_x) d t d x .$$
>For \(w = 0\) the zero curvature condition takes the form
>$$u_t +\frac{\eta}{2} u_{x x} + \frac{\eta}{2} (u^2)_x = 0$$
>which is the Burgers equation [5, 8].

If we reinterpret the above statement in the language of

* [Noncommutative Geometry and Stochastic Calculus](http://phorgyphynance.files.wordpress.com/2008/06/blackscholes.pdf)

we could say:

>A left martingale thought of as a flat connection gives rise directly to the Burgers equation.

Basically, we think of the connection \(A\) as a left martingale, i.e. a stochastic process with no drift when written in left component form. If the connection is flat, i.e. its curvature is zero, then \(u\) satisfies the Burgers equation.

Recall from the last post that the discrete calculus on a binary tree satisfies the following commutation relations:

* \([d x, x] = \frac{(\Delta x)^2}{\Delta t} d t\)
* \([d x, t] = [d t, x] = \Delta t d x\)
* \([d t, t] = \Delta t d t\)

which imply

* \((d x)(d x) = 0\)
* \((d t)(d t) = 0\)
* \((d x)(d t) = -(d t)(d x).\)

If we define

\((\Delta x)^2 = \eta \Delta t\)

then the commutation relations become

* \([d x, x] = \eta d t\)
* \([d x, t] = [d t, x] = \Delta t d x\)
* \([d t, t] = \Delta t d t\)

which in the continuum limit \(\Delta t \to 0\) approach those that give rise to the continuum Burgers equation.

Therefore, we will refer to a discrete left martingale with zero curvature as the _discrete Burgers equation_. Since the discrete calculus used to derive the discrete Burgers equation converges to the continuum calculus used to derive the continuum Burgers equation, any solution of the discrete Burgers equation is guaranteed to converge to a solution of the continuum Burgers equation by construction.

Now that we know what the discrete Burgers equation is, it is just a matter of turning the crank to determine the corresponding numerical algorithm.


This is funny. I must have made some mistake, because when I turn the crank, I get that the discrete Burgers equation is simply

$$u(i,j) = \frac{1}{2}\left[u(i+1,j+1)+u(i-1,j+1)\right],$$ where \(u(i,j)\) denotes the value of \(u\) at the \(i^{th}\) position in space and the \(j^{th}\) position in time, i.e. \(x = i\Delta x\) and \(t = j\Delta t\).

Edit: Found the problem. This is what you get if \(F = d A\), but in fact \(F = d A + A A\). I should have the new algorithm shortly.


I have described the discrete calculus as a meta-algorithm, i.e. an algorithm for generating algorithms. So far, I have only applied it to discrete stochastic calculus and derived the discrete Black-Scholes equation. In this comment, I turn the same crank and derive the discrete Burgers equation. The key insight was presented in Section 2 of

* [Soliton equations and the zero curvature condition in noncommutative geometry](http://arxiv.org/abs/hep-th/9608009)

as noted above. The insight is that the Burgers equation corresponds to a zero curvature \(F = 0\) solution obtained from a "driftless" connection \(A = u d x\).

To follow the derivation, you could get by knowing that the exterior derivative on a binary tree corresponds to computing a (graded) commutator, i.e.

$$d \alpha = [\mathbf{G},\alpha],$$

where \(\mathbf{G}\) is what we've called the **graph operator**. It is essentially the formal sum of all directed edges of the binary tree. To write down the sum, we sum over all edges emanating from a given node \(\mathbf{e}^{(i,j)}\), where \(x = i\Delta x\) and \(t = j\Delta t\):

$$\mathbf{G} = \sum_{i,j} \left[\mathbf{e}^{(i,j)(i+1,j+1)} + \mathbf{e}^{(i,j)(i-1,j+1)}\right].$$

You also need to know how to multiply directed edges. Multiplication is a lot like concatenation, i.e. you can only multiply two directed edges if the first edge ends at the same node where the second edge begins. However, instead of a path of length 2, the result of multiplying directed edges is a directed "face" or 2-cell. On a binary tree, after traversing one directed edge, the next directed edge is either in the same direction or represents a turn, i.e. there are four types of 2-cells on a binary tree:

1. \(\mathbf{e}^{(i,j)(i+1,j+1)(i+2,j+2)}\) (two steps up)
2. \(\mathbf{e}^{(i,j)(i-1,j+1)(i-2,j+2)}\) (two steps down)
3. \(\mathbf{e}^{(i,j)(i+1,j+1)(i,j+2)}\) (one step up followed by one step down)
4. \(\mathbf{e}^{(i,j)(i-1,j+1)(i,j+2)}\) (one step down followed by one step up)

If you take two steps either up or down, the "measure" of the 2-cell traced out in the binary tree is zero, so we have

$$\mathbf{e}^{(i,j)(i+1,j+1)(i+2,j+2)} = \mathbf{e}^{(i,j)(i-1,j+1)(i-2,j+2)} = 0.$$ Note: There are more fundamental reasons, but this is a decent heuristic explanation.

We also need to know that

$$\mathbf{e}^{(i,j)(i+1,j+1)(i,j+2)} = -\mathbf{e}^{(i,j)(i-1,j+1)(i,j+2)}.$$ There are fundamental reasons for this as well, but I will assume it is intuitive enough to be believable.

A simple calculation will also confirm that the graph operator is nilpotent, i.e.

$$\mathbf{G}^2 = 0.$$ This is related to the fact that \(d^2 = 0\).
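The multiplication rules above can be checked directly in code. The following is a minimal sketch (the finite grid patch, step sizes, and dictionary representation are my own assumptions, not from the original posts); it verifies \(\mathbf{G}^2 = 0\) away from the boundary of the patch, along with the relations \((d x)(d x) = 0\) and \((d x)(d t) = -(d t)(d x)\) stated earlier:

```python
# Minimal sketch of the discrete calculus on a finite patch of the binary tree.
# A 0-form is a dict {node: value}; a 1-form is {(i, j, s): coeff} where s = +1/-1
# labels the edge (i,j) -> (i+s, j+1); a 2-form is {node: coeff}, the coefficient
# of the canonical "up-then-down" 2-cell e^{(i,j)(i+1,j+1)(i,j+2)}.

DX, DT = 0.1, 0.01
NODES_I, NODES_J = range(-9, 10), range(0, 7)    # node patch (illustrative, finite)
EDGES_I, EDGES_J = range(-8, 9), range(0, 6)     # edges with both endpoints on the patch

def mul_11(a, b):
    """Multiply two 1-forms: straight 2-cells vanish; down-then-up = -(up-then-down)."""
    out = {}
    for (i, j, s), ca in a.items():
        for s2 in (+1, -1):
            cb = b.get((i + s, j + 1, s2))
            if cb is None or s2 == s:            # two steps in the same direction: 0
                continue
            sign = +1 if s == +1 else -1         # down-then-up picks up a minus sign
            out[(i, j)] = out.get((i, j), 0.0) + sign * ca * cb
    return out

def d0(f):
    """Exterior derivative of a 0-form: d f = [G, f] = (value at end) - (value at start)."""
    return {(i, j, s): f[(i + s, j + 1)] - f[(i, j)]
            for i in EDGES_I for j in EDGES_J for s in (+1, -1)}

def interior(node):
    return -7 <= node[0] <= 7                    # both turn-continuations exist here

# The graph operator G: formal sum of all directed edges.
G = {(i, j, s): 1.0 for i in EDGES_I for j in EDGES_J for s in (+1, -1)}
G2 = mul_11(G, G)                                # should vanish away from the boundary

# Coordinate 0-forms and their differentials dx = [G, x], dt = [G, t].
xf = {(i, j): i * DX for i in NODES_I for j in NODES_J}
tf = {(i, j): j * DT for i in NODES_I for j in NODES_J}
dxf, dtf = d0(xf), d0(tf)

dxdx = mul_11(dxf, dxf)                          # should vanish: (dx)(dx) = 0
dxdt, dtdx = mul_11(dxf, dtf), mul_11(dtf, dxf)  # should anticommute
anticomm = {k: dxdt.get(k, 0.0) + dtdx.get(k, 0.0) for k in set(dxdt) | set(dtdx)}
```

Note that the checks only hold at interior nodes, since 2-cells straddling the edge of the finite patch lose one of their two cancelling contributions.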

Next, given a connection \(A\), its holonomy is defined by

$$H = \mathbf{G} + A$$ and its curvature is given by

$$F = H^2 = \mathbf{G}A + A\mathbf{G} + A A = dA + A A.$$ Defining space and time coordinates,

$$x = \sum_{i,j} i\Delta x \mathbf{e}^{(i,j)}\quad\text{and}\quad t = \sum_{i,j} j\Delta t \mathbf{e}^{(i,j)}$$ it follows that

$$d x = \sum_{i,j} \Delta x \left[\mathbf{e}^{(i,j)(i+1,j+1)} - \mathbf{e}^{(i,j)(i-1,j+1)}\right]$$ and

$$d t = \sum_{i,j} \Delta t \left[\mathbf{e}^{(i,j)(i+1,j+1)} + \mathbf{e}^{(i,j)(i-1,j+1)}\right].$$

This is all we need to derive the discrete Burgers equation. The first step is to write down the connection

$$A = u d x = \sum_{i,j} \Delta x u(i,j) \left[\mathbf{e}^{(i,j)(i+1,j+1)} - \mathbf{e}^{(i,j)(i-1,j+1)}\right].$$ Next compute the three terms in the curvature

$$G A = \sum_{i,j} \Delta x \left[ -u(i+1,j+1) - u(i-1,j+1)\right] \mathbf{e}^{(i,j)(i+1,j+1)(i,j+2)},$$ $${}$$ $$A G = \sum_{i,j} 2\Delta x u(i,j) \mathbf{e}^{(i,j)(i+1,j+1)(i,j+2)},$$ and

$$A A = \sum_{i,j} (\Delta x)^2\left[ u(i,j) u(i+1,j+1) - u(i,j) u(i-1,j+1)\right] \mathbf{e}^{(i,j)(i+1,j+1)(i,j+2)}.$$ Putting it all together, we have

$$F = \sum_{i,j} \left[2\Delta x u(i,j) - \Delta x u(i+1,j+1) - \Delta x u(i-1,j+1) + (\Delta x)^2 u(i,j) u(i+1,j+1) - (\Delta x)^2 u(i,j) u(i-1,j+1)\right] \mathbf{e}^{(i,j)(i+1,j+1)(i,j+2)}.$$

This vanishes when

>**Discrete Burgers Equation**
>
>\(2 u(i,j) - u(i+1,j+1) - u(i-1,j+1) + \Delta x u(i,j) u(i+1,j+1) - \Delta x u(i,j) u(i-1,j+1) = 0\)
>
>for all \(i\) and \(j\).

Although it doesn't look much like the Burgers equation, I can assure you this will converge to the continuum Burgers equation. It simply must, because the mathematics used to derive it converges to the continuum mathematics used to derive the continuum Burgers equation.

Note an interesting fact about the discrete Burgers equation: both \(\Delta t\) and \(\eta\) have disappeared. When I gather enough energy, I will show how they are really still in there and rewrite the discrete Burgers equation in a different, more complicated form that illustrates the equivalence to the continuum version. For all practical purposes, though, the simple expression above is the appropriate form to use for numerical analysis. Hint: Recall that \((\Delta x)^2 = \eta \Delta t\).

This represents a robust (and, as far as I know, new) approach to numerical analysis.


In the previous comment, I derived the

>**Discrete Burgers Equation**
>
>\(2 u(i,j) - u(i+1,j+1) - u(i-1,j+1) + \Delta x u(i,j) u(i+1,j+1) - \Delta x u(i,j) u(i-1,j+1) = 0\)
>
>for all \(i\) and \(j\).

In this comment, I will show how this can be more closely compared to the continuum Burgers equation.

First, we can add and subtract a fictitious node \(u(i,j+1)\). I say fictitious because it is not really a node on our binary tree, but we add it (and subtract it) in order to compare to the continuum expression. We will also divide everything by \(-2(\Delta x)^2\). Doing so results in

$$\frac{\stackrel{fictitious}{u(i,j+1)} - u(i,j)}{(\Delta x)^2} + \frac{1}{2} \frac{u(i+1,j+1) - 2\stackrel{fictitious}{u(i,j+1)} + u(i-1,j+1)}{(\Delta x)^2} + u(i,j) \frac{u(i+1,j+1) - u(i-1,j+1)}{2\Delta x} = 0.$$

However, recalling that \((\Delta x)^2 = \eta \Delta t\), this can be rewritten as

$$\frac{\stackrel{fictitious}{u(i,j+1)} - u(i,j)}{\Delta t} + \frac{\eta}{2} \frac{u(i+1,j+1) - 2\stackrel{fictitious}{u(i,j+1)} + u(i-1,j+1)}{(\Delta x)^2} + \eta u(i,j) \frac{u(i+1,j+1) - u(i-1,j+1)}{2\Delta x} = 0.$$

This is already looking _much_ more like the continuum Burgers equation. Finally, as mere (but highly suggestive) notation, write

$$u_t(i,j) := \frac{\stackrel{fictitious}{u(i,j+1)} - u(i,j)}{\Delta t},$$ $${}$$ $$u_{x x}(i,j) := \frac{u(i+1,j+1) - 2\stackrel{fictitious}{u(i,j+1)} + u(i-1,j+1)}{(\Delta x)^2},$$

and

$$u_x(i,j) := \frac{u(i+1,j+1) - u(i-1,j+1)}{2\Delta x}.$$ Using this suggestive notation, the discrete Burgers equation may be written as

$$u_t(i,j) + \frac{\eta}{2} u_{x x}(i,j) +\eta u(i,j) u_x(i,j) = 0,$$

which is now in exactly the same form (at least notationally and conceptually) as the continuum Burgers equation. However, no one should ever be tempted to _use_ this heinous form. The appropriate form of the discrete Burgers equation to be used for numerical simulations is given simply by:

>**Discrete Burgers Equation**
>
>\(2 u(i,j) - u(i+1,j+1) - u(i-1,j+1) + \Delta x u(i,j) u(i+1,j+1) - \Delta x u(i,j) u(i-1,j+1) = 0\)
>
>for all \(i\) and \(j\).

By now, we've accomplished the goal we initially set out for. We derived the discrete Burgers equation using discrete calculus.

However, I still feel like there is unfinished business. For one, how do we solve the discrete Burgers equation? It is an implicit algorithm that is best solved backwards in time starting from some final conditions. If you want to solve it forward in time as an evolution starting with some initial conditions, it looks a little tricky.

A good trick, as noted on Burgers' equation, is to perform the Cole-Hopf transformation turning the Burgers equation into the heat equation. Solve the heat equation and transform the solution back.

Can we use the same trick here? Whatever we do, we want to do it right. Fortunately, Section 2 of

* [Soliton equations and the zero curvature condition in noncommutative geometry](http://arxiv.org/abs/hep-th/9608009)

provides the answer. Since the Burgers equation (both the continuum and discrete versions) corresponds to zero curvature solutions, the connection must be pure gauge, i.e. of the form

$$A = \phi^{-1} d\phi.$$ The Cole-Hopf transformation amounts to matching coefficients in the two distinct ways to write down the connection:

$$A = \phi^{-1} d\phi = u d x.$$ The continuum version follows from this via

$$A = \phi^{-1} \left[\left(\phi_t + \frac{\eta}{2} \phi_{x x} \right) d t + \left(\phi_x\right) d x\right] = u d x,$$ which implies

$$\phi_t + \frac{\eta}{2} \phi_{x x} = 0\quad\text{and}\quad \phi^{-1} \phi_x = u.$$ The rightmost expression above is the Cole-Hopf transformation and the leftmost expression above is the heat equation.
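The continuum statement can be checked symbolically. The sketch below is my own illustrative check (the two-exponential \(\phi\) is an assumed exact solution of the heat equation, not from the post); it verifies that \(u = \phi^{-1}\phi_x\) satisfies the Burgers equation whenever \(\phi_t + \frac{\eta}{2}\phi_{xx} = 0\):

```python
import sympy as sp

x, t, eta, k1, k2 = sp.symbols('x t eta k1 k2', positive=True)

# A two-parameter family of exact solutions of phi_t + (eta/2) phi_xx = 0
# (a sum of two exponential modes; an illustrative choice).
phi = sp.exp(k1 * x - eta * k1**2 * t / 2) + sp.exp(k2 * x - eta * k2**2 * t / 2)
heat_residual = sp.simplify(sp.diff(phi, t) + eta / 2 * sp.diff(phi, x, 2))

# Cole-Hopf: u = phi_x / phi should then satisfy the Burgers equation.
u = sp.diff(phi, x) / phi
burgers_residual = sp.simplify(
    sp.diff(u, t) + eta / 2 * sp.diff(u, x, 2) + eta * u * sp.diff(u, x)
)
```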

I admit it took a while to figure out the discrete version of the above, but I've got it now. The trick is that we need to know what "pure gauge" means in discrete calculus.

Under a gauge transformation, the holonomy \(H = \mathbf{G} + A\) transforms as

$$\mathbf{G} + A \to \phi^{-1} (\mathbf{G} + A) \phi = \mathbf{G} + A'.$$ The goal is to find \(A'\). With the aid of hindsight, we want to move the \(\phi\) to the left side of \(\mathbf{G}\) above. How do we do that? With the commutator!

$$\phi^{-1} (\mathbf{G} + A) \phi = \mathbf{G} + \phi^{-1} A \phi + \phi^{-1} [\mathbf{G},\phi].$$ Therefore, we have that

$$A' = \phi^{-1} A \phi + \phi^{-1} [\mathbf{G},\phi] = \phi^{-1} A \phi + \phi^{-1} d\phi.$$ In addition to demonstrating the correct form of a general gauge transformation, this also confirms the form of the discrete pure gauge

$$A_{pure} = \phi^{-1} [\mathbf{G},\phi] = \phi^{-1} \mathbf{G} \phi - \mathbf{G}.$$ If this is "done right", then computing the discrete curvature of this discrete connection should be zero. Let's check:

$$F = (\mathbf{G} + A_{pure})^2 = \phi^{-1} \mathbf{G}^2 \phi = 0,$$ where we've used the fact that \(\mathbf{G}^2 = 0.\)

So now, to write down the discrete Cole-Hopf transformation, we simply need to expand

$$A_{pure} = \phi^{-1} d \phi$$

into coordinates. I'm too tired to do that here now, but \(d\phi\) is just the discrete Ito formula and you can see it in all its glory in Equation 5.11 of

* [Financial Modelling Using Discrete Stochastic Calculus](http://phorgyphynance.files.wordpress.com/2008/06/discretesc.pdf)

By the time I'm done posting comments here, I hope to have a code written and some numerical results. I'll put anything interesting on the wiki.

For now, before I go to sleep, I want to ramble a bit more...

First, I think it is very cool that we see Burgers equation arising as a noncommutative gauge field theory with an abelian gauge group. What other important abelian gauge group is there? Maxwell's equations for electromagnetic theory!

In the discrete stochastic calculus paper above, I wrote down the commutation relations for the discrete calculus on a binary tree. Then I observed that there are two continuum limits one could take:

1. Exterior Calculus: \(\Delta x = c\Delta t\), \(\Delta t\to 0\)
2. Stochastic Calculus: \((\Delta x)^2 = \eta \Delta t\), \(\Delta t\to 0\)

For lack of better names (suggestions welcome), I will call these

1. the Exterior Limit
2. the Stochastic Limit

To obtain the Burgers equation, we are working in the stochastic limit. If instead we had worked in the exterior limit, we would have ended up with a discrete (perhaps noncommutative) version of Maxwell's equations (in 1+1 dimensions). Interesting, but time to sleep...


[deleted]


Great stuff, Eric!


In this comment, I will try to collect enough information for us to begin considering a numerical implementation.

To start, we have the:

>**Discrete Burgers Equation**
>
>\(2 u(i,j) - u(i+1,j+1) - u(i-1,j+1) + \Delta x u(i,j) u(i+1,j+1) - \Delta x u(i,j) u(i-1,j+1) = 0\)
>
>for all \(i\) and \(j\).

The discrete Burgers equation expresses the value of \(u(i,j)\) at one point in space and time in terms of the values of \(u\) at the subsequent point in time, i.e.

$$u(i,j) =\frac{u(i+1,j+1) + u(i-1,j+1)}{2+\Delta x u(i+1,j+1) - \Delta x u(i-1,j+1)}.$$

This makes it easy to evolve the discrete Burgers equation backward in time, but we are probably more interested in evolving the Burgers equation forward in time. This is a little tricky due to the nonlinearity.
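Turning that solved form into code is straightforward. Here is a minimal sketch of the backward-in-time evolution (the grid, step size, and final condition are illustrative assumptions; note that each backward step loses one grid point at each end of the stencil):

```python
import numpy as np

def burgers_backward(u_final, dx, n_steps):
    """Evolve the discrete Burgers equation backward in time.

    Each step computes u(i,j) from u(i+1,j+1) and u(i-1,j+1) via
    u(i,j) = [u(i+1,j+1) + u(i-1,j+1)] / [2 + dx*(u(i+1,j+1) - u(i-1,j+1))].
    Returns the list of time levels, index 0 = earliest time.
    """
    levels = [np.asarray(u_final, dtype=float)]
    for _ in range(n_steps):
        nxt = levels[-1]
        plus, minus = nxt[2:], nxt[:-2]          # u(i+1, j+1), u(i-1, j+1)
        levels.append((plus + minus) / (2.0 + dx * (plus - minus)))
    return levels[::-1]

dx = 0.05
xg = np.linspace(-1.0, 1.0, 81)
levels = burgers_backward(np.sin(np.pi * xg), dx, n_steps=10)   # assumed final condition

# Residual of the boxed discrete Burgers equation between the two earliest levels;
# it vanishes by construction (the update is an exact rearrangement).
u0, u1 = levels[0], levels[1]
residual = 2*u0 - u1[2:] - u1[:-2] + dx*u0*u1[2:] - dx*u0*u1[:-2]
```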

One way to deal with this is to note that, when viewed as a gauge field theory, the vanishing of the curvature \(F = d A + A A\) implies \(A\) is a pure gauge connection. Therefore, it must follow that

$$A_{pure} = \phi^{-1} d \phi = u d x$$ for some invertible discrete 0-form \(\phi\). For details on the calculation of \(d\phi\), see Section 5 of

* [Financial Modelling Using Discrete Stochastic Calculus](http://phorgyphynance.files.wordpress.com/2008/06/discretesc.pdf)

where \(d\phi\) is the discrete Ito formula. With the observation above, which comes from Section 2 of

* [Soliton equations and the zero curvature condition in noncommutative geometry](http://arxiv.org/abs/hep-th/9608009),

the transformed discrete Burgers equation may be written as

>**Discrete Burgers Equation (Transformed)**
>
>\(\phi(i+1,j+1)-2\phi(i,j)+\phi(i-1,j+1) = 0\)
>
>for all \(i\) and \(j\) with
>
>**Discrete Cole-Hopf Transformation**
>
>$$u(i,j) = \phi^{-1}(i,j)\left[\frac{\phi(i+1,j+1)-\phi(i-1,j+1)}{2\Delta x}\right].$$

This definitely looks like progress, but before we can implement a numerical algorithm, we need initial conditions. The initial conditions are given in terms of \(u\), so we need a way to express the initial conditions for \(\phi\) in terms of given initial conditions for \(u\).

Note that \(\eta\), \(\Delta x\), and \(\Delta t\) have all disappeared from the transformed discrete Burgers equation, but are reintroduced through the discrete Cole-Hopf transformation.
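For a concrete feel, here is a minimal sketch of the transformed algorithm (the grid, step size, and the positive final condition for \(\phi\) are my own illustrative assumptions; positivity keeps \(\phi\) invertible):

```python
import numpy as np

def phi_backward(phi_final, n_steps):
    """Evolve the transformed equation backward in time:
    phi(i,j) = [phi(i+1,j+1) + phi(i-1,j+1)] / 2 (a discrete heat step)."""
    levels = [np.asarray(phi_final, dtype=float)]
    for _ in range(n_steps):
        nxt = levels[-1]
        levels.append(0.5 * (nxt[2:] + nxt[:-2]))
    return levels[::-1]                           # index 0 = earliest time

def cole_hopf(phi_now, phi_next, dx):
    """Discrete Cole-Hopf: u(i,j) = [phi(i+1,j+1) - phi(i-1,j+1)] / (2 dx phi(i,j))."""
    return (phi_next[2:] - phi_next[:-2]) / (2.0 * dx * phi_now)

dx = 0.05
xg = np.linspace(-1.0, 1.0, 81)
phis = phi_backward(np.exp(np.sin(np.pi * xg)) + 2.0, n_steps=10)  # assumed positive profile
u0 = cole_hopf(phis[0], phis[1], dx)

# The transformed discrete Burgers equation holds by construction:
transformed_residual = phis[1][2:] - 2.0 * phis[0] + phis[1][:-2]
```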

`In this comment, I will try to collect enough information for us to begin considering a numerical implementation. To start, we have the: >**Discrete Burgers Equation** > > \\(2 u(i,j) - u(i+1,j+1) - u(i-1,j+1) + \Delta x u(i,j) u(i+1,j+1) - \Delta x u(i,j) u(i-1,j+1) = 0\\) > >for all \\(i\\) and \\(j\\). The discrete Burgers equation expresses the value of \\(u(i,j)\\) at one point in space and time in terms of values of \\(u\\) at the subsequent point in time, i.e. $$u(i,j) =\frac{u(i+1,j+1) + u(i-1,j+1)}{2+\Delta x u(i+1,j+1) - \Delta x u(i-1,j+1)}$$ This makes it easy to evolve the discrete Burgers equation backward in time, but we are probably more interested in evolving Burgers equations forward in time. This is a little tricky due to the nonlinearity. One way to deal with this is to note that, when viewed as a gauge field theory, the vanishing of the curvature \\(F = d A + A A\\) implies \\(A\\) is a pure gauge connection. Therefore, it must follow that $$A_{pure} = \phi^{-1} d \phi = u d x$$ for some invertible discrete 0-form \\(\phi\\). For details on the calculation of \\(d\phi\\), see Section 5 of * [Financial Modelling Using Discrete Stochastic Calculus](http://phorgyphynance.files.wordpress.com/2008/06/discretesc.pdf) where \\(d\phi\\) is the discrete Ito formula. With the observation above, which comes from Section 2 of * [Soliton equations and the zero curvature condition in noncommutative geometry](http://arxiv.org/abs/hep-th/9608009), the transformed discrete Burgers equation may be written as >**Discrete Burgers Equation (Transformed)** > >\\(\phi(i+1,j+1)-2\phi(i,j)+\phi(i-1,j+1) = 0\\) > >for all \\(i\\) and \\(j\\) with > >**Discrete Cole-Hopf Transformation** > >$$u(i,j) = \phi^{-1}(i,j)\left[\frac{\phi(i+1,j+1)-\phi(i-1,j+1)}{2\Delta x}\right].$$ This definitely looks like progress, but before we can implement a numerical algorithm, we need initial conditions. 
The initial conditions are given in terms of \\(u\\), so we need a way to express the initial conditions for \\(\phi\\) in terms of given initial conditions for \\(u\\). Note that \\(\eta\\), \\(\Delta x\\), and \\(\Delta t\\) have all disappeared from the transformed discrete Burgers equation, but are reintroduced through the discrete Cole-Hopf transformation.`
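The backward-in-time update is fully explicit, so it is easy to sketch in code. Here is a minimal sketch; the periodic boundary (via `np.roll`) and the flat-array grid layout are my assumptions for illustration, not something fixed by the comment above:

```python
import numpy as np

def step_backward(u_next, dx):
    """One backward time step of the discrete Burgers equation.

    u_next holds u(., j+1) on a spatial grid; returns u(., j) via
        u(i,j) = (u(i+1,j+1) + u(i-1,j+1))
                 / (2 + dx*u(i+1,j+1) - dx*u(i-1,j+1)).
    The periodic boundary is an assumption of this sketch.
    """
    up = np.roll(u_next, -1)  # u(i+1, j+1)
    um = np.roll(u_next, +1)  # u(i-1, j+1)
    return (up + um) / (2.0 + dx * (up - um))
```

A constant field is a fixed point of this update, which is a quick sanity check on the formula.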

On this glorious 5-day weekend, I'm in and out of family activities, but after posting that last comment, I noticed that everything can be expressed nicely and neatly in terms of yet another transformation

$$\alpha_\pm(i,j) = \phi^{-1}(i,j) \phi(i\pm 1,j+1).$$

Then I noticed that things are even nicer and neater if expressed in terms of the transformation

$$\alpha_\pm(i,j) = \phi^{-1}(i,j) \phi(i\pm 1,j+1) - 1.$$

This transformation should have been obvious because with any binary tree, there is a natural set of coordinates that line up with the branches of the tree. This transformation is simply the expression of the pure gauge connection in terms of these coordinates, i.e.

$$d\phi = \sum_{i,j} \left[\phi(i+1,j+1) - \phi(i,j)\right]\mathbf{e}^{(i,j)(i+1,j+1)} + \left[\phi(i-1,j+1) - \phi(i,j)\right]\mathbf{e}^{(i,j)(i-1,j+1)}$$

so that

$$A_{pure} = \phi^{-1} d\phi = \sum_{i,j} \alpha_+(i,j) \mathbf{e}^{(i,j)(i+1,j+1)} + \alpha_-(i,j) \mathbf{e}^{(i,j)(i-1,j+1)}.$$

With this transformation, the discrete Burgers equation and discrete Cole-Hopf transformation become simply:

> **Discrete Burgers Equation (Transform II)**
>
> \(\alpha_+(i,j) + \alpha_-(i,j) = 0\)
>
> for all \(i\) and \(j\)

with the

> **Discrete Cole-Hopf Transformation II**
>
> $$u(i,j) = \frac{1}{\Delta x} \alpha_+(i,j).$$

This is making the discrete Burgers equation look more and more like a tautology (in a good way), which I can only think represents progress.

I still need to work on the initial conditions for numerical implementation, but thought this was interesting enough.

PS: This all reminds me of a comment Urs Schreiber made in our paper

* [Discrete Differential Geometry on Causal Graphs](http://arxiv.org/abs/math-ph/0407005)

in Section 5.1 on Lattice Yang-Mills Theory:

> It is maybe remarkable that the non-commutativity of 0-forms and 1-forms on the lattice drastically simplifies the notion of gauge covariant derivative to a simple algebraic product of the gauge holonomy 1-form with itself. The discrete gauge theory is in this sense conceptually actually simpler than the continuum theory.
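Transform II is easy to check numerically. A tiny sketch, assuming \(\phi\) is stored in a dict keyed by tree nodes \((i,j)\) (a container I chose for illustration): for any \(\phi\) satisfying the transformed equation \(\phi(i+1,j+1) - 2\phi(i,j) + \phi(i-1,j+1) = 0\), the computed \(\alpha_\pm\) should sum to zero.

```python
def alphas(phi, i, j):
    """Transform II variables: alpha_pm(i,j) = phi(i,j)^-1 phi(i±1,j+1) - 1.

    phi is a dict keyed by tree nodes (i, j) -- an illustrative choice.
    If phi satisfies phi(i+1,j+1) - 2 phi(i,j) + phi(i-1,j+1) = 0, then
    alpha_+ + alpha_- = 0, and u(i,j) = alpha_+(i,j) / dx.
    """
    a_plus = phi[(i + 1, j + 1)] / phi[(i, j)] - 1.0
    a_minus = phi[(i - 1, j + 1)] / phi[(i, j)] - 1.0
    return a_plus, a_minus
```

For example, \(\phi(i,j) = 1 + 0.1 i\) satisfies the transformed equation, and indeed \(\alpha_+ = 0.1 = -\alpha_-\) at the node \((0,0)\).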

I believe I found a way to obtain the initial conditions for \(\phi\) in terms of given initial conditions for \(u\)...

Denote the given initial condition at \(t_0 = j_0 \Delta t\) by \(u(i,j_0).\)

Since \(\phi\) is an auxiliary gauge variable used to obtain \(u\), any scalar multiple of \(\phi\) would serve the same purpose. Therefore, to kick-start the simulation, we can pick any node \(\mathbf{e}^{(i_0,j_0)}\) and set \(\phi(i_0,j_0) = 1\).

Using the discrete Cole-Hopf transformation II above we have

$$u(i_0,j_0) = \pm\frac{1}{\Delta x}\left[\phi^{-1}(i_0,j_0) \phi(i_0\pm 1,j_0+1) - 1\right].$$ Setting \(\phi(i_0,j_0) = 1\), we obtain

$$\phi(i_0+1,j_0+1) = 1+ \Delta x u(i_0,j_0)$$ and

$$\phi(i_0-1,j_0+1) = 1 - \Delta x u(i_0,j_0).$$ We can use these values along with the initial values \(u(i_0+2,j_0)\) and \(u(i_0-2,j_0)\) to obtain \(\phi(i_0+2,j_0)\) and \(\phi(i_0-2,j_0)\) since

$$\phi(i_0+2,j_0) = \frac{\phi(i_0+1,j_0+1)}{1-\Delta x u(i_0+2,j_0)}$$ and

$$\phi(i_0-2,j_0) = \frac{\phi(i_0-1,j_0+1)}{1+\Delta x u(i_0-2,j_0)}.$$ Similarly, we can use these values along with the initial values \(u(i_0+2,j_0)\) and \(u(i_0-2,j_0)\) to obtain \(\phi(i_0+3,j_0+1)\) and \(\phi(i_0-3,j_0+1)\) since

$$\phi(i_0+3,j_0+1) = \phi(i_0+2,j_0) \left[1+\Delta x u(i_0+2,j_0)\right]$$ and

$$\phi(i_0-3,j_0+1) = \phi(i_0-2,j_0) \left[1-\Delta x u(i_0-2,j_0)\right].$$ We can continue this procedure until we've obtained the values of \(\phi\) at times \(j_0\Delta t\) and \((j_0+1)\Delta t\) for all \(i\). This will convert an initial condition given for \(u\) into an initial condition for \(\phi\).

Note that \(\phi\) must be invertible so \(\Delta x\) should be chosen such that \(\Delta x|u(i,j_0)| < 1\) for all \(i\).
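The zig-zag procedure above can be sketched in code. In this sketch the callable `u0`, the dict representation of \(\phi\), and the extent parameter `n` are illustrative assumptions; it marches outward from the seed node, alternating between the slices \(j_0\) and \(j_0+1\) exactly as in the formulas above.

```python
def phi_initial(u0, i0, dx, n, j0=0):
    """Zig-zag initialization of phi from initial data u(., j0).

    u0(i) returns the given initial value u(i, j0); i0 is the seed site.
    Marches n steps to each side, filling the two slices j0 and j0+1.
    Requires dx * |u| < 1 so that phi stays invertible.
    Returns a dict keyed by (i, j) -- an illustrative representation.
    """
    phi = {(i0, j0): 1.0}
    phi[(i0 + 1, j0 + 1)] = 1.0 + dx * u0(i0)
    phi[(i0 - 1, j0 + 1)] = 1.0 - dx * u0(i0)
    for k in range(2, 2 * n, 2):
        i = i0 + k                      # march right
        phi[(i, j0)] = phi[(i - 1, j0 + 1)] / (1.0 - dx * u0(i))
        phi[(i + 1, j0 + 1)] = phi[(i, j0)] * (1.0 + dx * u0(i))
        i = i0 - k                      # march left, mirror image
        phi[(i, j0)] = phi[(i + 1, j0 + 1)] / (1.0 + dx * u0(i))
        phi[(i - 1, j0 + 1)] = phi[(i, j0)] * (1.0 - dx * u0(i))
    return phi
```

By construction, applying the discrete Cole-Hopf transformation II to the resulting \(\phi\) recovers the given initial data \(u(i,j_0)\) at every filled site.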


Sorry for not following along, but I've got a naive question: Usually people say that there are different discretization schemes (like choosing a certain grid and taking the forward difference for a derivative), and every discretization scheme results in a discretization of a PDE. So there are a lot of different discrete Burgers' equations with different properties.

Are you looking for a discretization with a special property?


Hi Tim,

You make a great observation here:

> Usually people say that there are different discretization schemes (like choosing a certain grid and taking the forward difference for a derivative), and every discretization scheme results in a discretization of a PDE.

The arbitrariness you point out is one of the dissatisfying (to me at least) aspects of traditional numerical analysis and is one of the main drivers behind my dissertation research and subsequent paper with Urs. There are enough clues around that I convinced myself there should be a way for numerical analysis to be "done right". I keep putting "done right" in quotes in reference to the fact that real analysis is often referred to as "calculus done right". In a similar manner, discrete calculus or discrete differential geometry is "numerical analysis done right".

The idea is that you build up a robust mathematical framework for doing numerical analysis in a similar way that differential geometry provides a robust framework for continuum analysis on manifolds.

> So there are a lot of different discrete Burgers' equations with different properties.

There are a lot of different approximations to the Burgers equation. I would only call these discrete Burgers equations if they were based on a robust numerical framework without the arbitrariness you referred to above. The discrete Burgers equation should be an exact model in a discrete world with no arbitrary choices to be made. The numerical algorithm I developed above contains absolutely no choices in terms of forward, backward, or central differences. The framework tells you precisely what these should be with absolutely no ambiguity, and the resulting construction captures the physics exactly.

For example, for the Burgers equation, the physics is

$$A = \phi^{-1} d \phi = u\, d x.$$

In other words, we have a pure (abelian) gauge connection in a noncommutative space. This one simple statement contains the Burgers equation as well as the Cole-Hopf transformation in one fell swoop. The discrete Burgers equation captures this physics exactly, but in a discrete world. Any other discretization would not have zero curvature and hence would display spurious physics.

This marks a departure from traditional numerical analysis. In traditional numerical analysis, you assume the continuum model is correct and attempt to develop finitary models that approximate the continuum model as best you can. You are free to approximate derivatives with forward differences, backward differences, central differences, spectral methods, etc. All the while, you are acutely aware of the fact that you are simply approximating something, with no deeper meaning.

In discrete differential geometry, you assume the finitary model represents the truth, i.e. your universe is discrete. The finitary models are exact in this finitary world. Once you have this robust framework, you can show that it converges to the continuum theory, which may be nice to convince people that you are not spewing randomness, but as far as I'm concerned, at that point, who is to say the discrete framework is not the most appropriate for modeling the universe and the continuum is the approximation?

I would be VERY happy if you and/or anyone else here became interested enough in this to have a closer look. For what it is worth, this truly does represent the culmination of a large portion of my life and I think it does represent a significant contribution to computational methods. Unfortunately, I no longer have the luxury of doing any kind of focused research and have to get by with a comment here or there on these forums.

My first application of this new approach was toward discrete stochastic calculus. The fact that I was able to re-derive the Cox-Ross-Rubinstein model as the "discrete Black-Scholes" equation was a huge sanity check. The second sanity check was when Urs and I re-derived lattice Yang-Mills theory. The third sanity check of the robustness of the approach was demonstrated here with the Burgers equation.

I would be interested in applying this same approach to Navier-Stokes, though I suspect this is only truly useful for completely integrable models; that is just a hunch. We'll see...
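One concrete payoff of the zero-curvature formulation discussed above: the transformed equation \(\phi(i+1,j+1) - 2\phi(i,j) + \phi(i-1,j+1) = 0\) is linear, so a forward time step reduces to a one-pass recursion \(\phi(i+1,j+1) = 2\phi(i,j) - \phi(i-1,j+1)\) across the new slice, provided one seed value on that slice is supplied. How to choose that seed (e.g. from a boundary condition) is not settled in the thread, so this is only a sketch:

```python
def step_forward(phi_j, phi_left):
    """March phi(i+1,j+1) = 2*phi(i,j) - phi(i-1,j+1) across one slice.

    phi_j lists phi at consecutive sites of slice j; phi_left is the
    value at the leftmost site of slice j+1, which the recursion needs
    as a seed (supplying it externally is an assumption of this sketch).
    Returns the len(phi_j)+1 values of phi on slice j+1.
    """
    out = [phi_left]
    for p in phi_j:
        out.append(2.0 * p - out[-1])
    return out
```

For instance, a constant slice with a matching seed stays constant, and a linearly varying \(\phi\) is propagated exactly.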

Eric wrote:

> I would be interested in applying this same approach to Navier-Stokes, but I suspect this is only truly useful for completely integrable models, but that is just a hunch. We'll see...

It's worth noting that Dennis Sullivan, the expert on differential forms on simplicial sets, spent a lot of time thinking about the Navier-Stokes equation. I'm _sure_ he must have thought about discrete versions of Navier-Stokes, but I _don't_ know what his conclusions were. So before anyone spends a lot of time trying to figure this stuff out, it would be worth shooting him an email to find out what he thinks.

Do you remember the story about my [last experience](http://golem.ph.utexas.edu/string/archives/000298.html#c000624) having dinner with Dennis Sullivan? I felt horrible. I am not capable of speaking the same language as he does.

I had some vague recollection that you knew of Sullivan and his interest in discrete differential geometry. That sentence you quoted of him, the one that made you feel horrible, sounds very much like the kind of thing he'd say. He's very friendly but utterly frank when it comes to talking math.

I also had some vague recollection that he's worked on discretized Navier-Stokes. I see [here](cams.usc.edu/posters/20110307_Sullivan.pdf) that he's worked on Navier-Stokes.

Hey, this may be related too - it's on the nLab in a summary of talks at [Oberwolfach Workshop, June 2009 -- Strings, Fields, Topology](http://ncatlab.org/nlab/show/Oberwolfach+Workshop,+June+2009+--+Strings,+Fields,+Topology), a conference that Sullivan helped organize. This is the first time I've heard of the Navier-Stokes equation popping out of some highbrow abstract nonsense math:

> #### Scott Wilson, _Some algebra related to mapping spaces and applications_.
>
> Wilson defined a _partial algebra_ as a lax [[monoidal functor]] from the category of finite sets to [[chain complex|chain complexes]], such that the map $A(j \coprod k) \rightarrow A(j) \otimes A(k)$ is a quasi-isomorphism. He showed how these pop up in [[homotopy theory]] all the time.
>
> (Remark: this formula is the basis for [[Jacob Lurie]]'s discussion of [[commutative algebra in an (infinity,1)-category]] in general and of [[symmetric monoidal (infinity,1)-category]] in particular.)
>
> For instance, if $Y$ is any space and $A$ a partial algebra, the resulting total complex can be seen as a generalization of [[Hochschild cohomology]]. One really interesting example (for [[Bruce Bartlett|Bruce]]) was where $Y$ is the interval, and $A$ is the partial algebra of forms on a Riemannian manifold $M$. It turns out that a certain canonical equation which pops out is precisely the Navier-Stokes equation. Gulp!

BTW, you know the hypothesis that everybody knows everybody else through a series of at most 6 (or was it 7?) acquaintances? John knows Urs, who knows Ulrich Bunke, who was a professor in Göttingen when I got my Vordiplom in Math, with one oral exam with Prof. Bunke on his course in ODE. So there are two nodes between us.


Tim, do the acquaintances have to be visual? From Forum conversations I would have guessed you know John better than an average math professor.

I'm more interested in non-visual acquaintances because in that case I could get with two nodes up to Bob Dylan ;-)


> #### Scott Wilson, _Some algebra related to mapping spaces and applications_.

Interesting. In the [link above](http://golem.ph.utexas.edu/string/archives/000298.html#c000624) to the quote from my dinner with Sullivan, a few paragraphs down you see a mention of Scott Wilson. He was a student at the time. We exchanged a few comments, but he wasn't very interested in anything I had to say. A few comments down, I compared Scott's approach to Urs's and mine. This was back in 2004. I wonder if his talk in 2009 was closer to his original formulation or closer to ours?

For graded differential algebras on finitary spaces, you cannot have an associative graded commutative product. You have to give up either associativity or graded commutativity. In our approach, we maintain associativity but sacrifice graded commutativity. In Scott's (and Sullivan's) approach, they maintained graded commutativity but sacrificed associativity. Other than that, the two were very similar.

It will take me some time to parse the category theoretic language in Scott's talk, but if his 2009 product is associative, then his formulation should be equivalent to ours.

I would not be totally surprised if he leaned toward our approach. If so (which remains to be seen), that would mean that our approach has a nice abstract formulation in terms of category theory, and that the Navier-Stokes equation should pop out in a way very similar to the way the Burgers equation popped out.

Very interesting...


I think "acquaintance" means a meeting in person with a formal introduction; that's how I understood the hypothesis, which was formulated before the internet existed. John and I haven't met in person. On the other hand, John would recognize my name; Ulrich Bunke wouldn't.


Bingo

> [DIFFERENTIAL FORMS, FLUIDS, AND FINITE MODELS](http://qcpages.qc.cuny.edu/~swilson/formsfluidsfinal.pdf)
>
> SCOTT O. WILSON
>
> **Abstract**. By rewriting the Navier-Stokes equation in terms of differential forms we give a formulation which is abstracted and reproduced in a finite dimensional setting. We give two examples of these finite models and, in the latter case, prove some approximation results. Some useful properties of these finite models are derived.

also

> [Algebra, Topology and Algebraic Topology of 3D Ideal Fluids](http://arxiv.org/abs/1010.2721)
>
> Dennis Sullivan
>
> (Submitted on 13 Oct 2010)
>
> There is a remarkable and canonical problem in 3D geometry and topology: To understand existing models of 3D fluid motion or to create new ones that may be useful. We discuss from an algebraic viewpoint the PDE called Euler's equation for incompressible frictionless fluid motion. In part I we define a "finite dimensional 3D fluid algebra", write its Euler equation and derive properties related to energy, helicity, transport of vorticity and linking that characterize this equation. This is directly motivated by the infinite dimensional fluid algebra associated to a closed riemannian three manifold whose Euler equation as defined above is the Euler PDE of fluid motion. The classical infinite dimensional fluid algebra satisfies an additional identity related to the Jacobi identity for the lie bracket of vector fields. In part II we discuss informally how this Jacobi identity can be reestablished in finite dimensional approximations as a Lie infinity algebra. The main point of a developed version of this theory would be a coherence between various levels of approximation. It is hoped that a better understanding of the meaning of the Euler equation in terms of such infinity structures would yield algorithms of computation that work well for conceptual reasons.

Note: On page 5 of Sullivan's paper:

> In each case the multiplication obtained is appropriately commutative but NOT associative.

I'm still a little confused. Wilson keeps talking about "cup product", but cup product is associative. Whatever Sullivan and Wilson do, I'm sure I can do in an even more appropriately associative but NOT commutative way. Too bad Urs has lost interest in this topic...

Very nice references, Eric! I haven't read them, just the stuff you quoted. If these references give a mathematically elegant discretization of the Navier-Stokes equation (which I feel sure is what Sullivan was wanting), or even the Euler equation, I would be very interested.


@Eric: Ok, I'll read Sullivan's paper, but here's a preliminary question:

> It is hoped that a better understanding of the meaning of the Euler equation in terms of such infinity structures would yield algorithms of computation that work well for conceptual reasons.

This seems to be close to what you said earlier:

> I would only call these discrete Burgers equations if they were based on a robust numerical framework without the arbitrariness you referred to above. ... The discrete Burgers equation captures this physics exactly, but in a discrete world. Any other discretization would not have zero curvature and hence, would display spurious physics.

When we accept for the moment that climate models are based on a continuous model, and that discrete models are introduced as an approximation because computers can only handle discrete models, the question is: for the numerical algorithms you develop here, or those that Dennis Sullivan is after (what would be the reason for him to be interested in this topic?), is there a reason to expect that they do a better job than conventional approximations with regard to:

- simplicity of implementation,
- running speed,
- approximation errors (especially conservation laws)?

You know, when we implement any of these algorithms, they'll have to beat existing ones to be of any interest to people implementing fluid dynamics. So, does your observation that your discrete version "captures the physics exactly" score as a selling point in this sense?

Hey Tim, I had a long day at work and not enough energy to do your question justice, but I can say I am optimistic on all aspects. I might point out that the result of turning the crank may well give an already well-known algorithm. That is what happened with the discrete Black-Scholes equation and discrete Yang-Mills theory. A lot of super smart people have been working in numerical analysis for many years, and it would be difficult to come up with an algorithm that is better than what others have done before. That won't stop us from trying! What this approach offers is a robust framework for numerical analysis, and if we end up rediscovering something, I am happy with that. What we would have gained is confidence that the algorithm is not arbitrary and has a solid basis regardless of its origins.

An example from my former field, computational electromagnetics, involves conservation of charge. If all you want is an approximation to the continuum Maxwell equations, you might not care whether, or even expect that, your algorithm conserves charge on the lattice. However, a proper algorithm can and should conserve charge exactly, to within rounding errors. Algorithms that do not conserve charge tend to be inaccurate and unstable. There is more I could say, but like I said, I'm exhausted.

I stopped by before heading to bed to point out one other reference that looks interesting:

> [From Navier-Stokes To Einstein](http://arxiv.org/abs/1101.2451)
>
> Irene Bredberg, Cynthia Keeler, Vyacheslav Lysov, Andrew Strominger
>
> We show by explicit construction that for every solution of the incompressible Navier-Stokes equation in $p+1$ dimensions, there is a uniquely associated "dual" solution of the vacuum Einstein equations in $p+2$ dimensions. The dual geometry has an intrinsically flat timelike boundary segment $\Sigma_c$ whose extrinsic curvature is given by the stress tensor of the Navier-Stokes fluid. We consider a "near-horizon" limit in which $\Sigma_c$ becomes highly accelerated. The near-horizon expansion in gravity is shown to be mathematically equivalent to the hydrodynamic expansion in fluid dynamics, and the Einstein equation reduces to the incompressible Navier-Stokes equation. For $p=2$, we show that the full dual geometry is algebraically special Petrov type II. The construction is a mathematically precise realization of suggestions of a holographic duality relating fluids and horizons which began with the membrane paradigm in the 70's and resurfaced recently in studies of the AdS/CFT correspondence.

Gulp!

Tim wrote:

> The numerical algorithms you develop here, or those that Dennis Sullivan is after... is there a reason to expect that they do a better job than conventional approximations...?

That would certainly be the _dream_. I think one relevant buzzword is "[mimetic discretization](http://arxiv.org/abs/gr-qc/0404052)":

> A discretization of a continuum theory with constraints or conserved quantities is called **mimetic** if it mirrors the conserved laws or constraints of the continuum theory at the discrete level. Such discretizations have been found useful in continuum mechanics and in electromagnetism. We have recently introduced a new technique for discretizing constrained theories.

This dream is easiest to achieve for linear PDE, sometimes possible for completely integrable nonlinear PDE (especially in 1+1 dimensions), and somewhere between very hard and impossible for other nonlinear PDE, like Einstein's equations or the Navier-Stokes equations. But that doesn't stop people like Sullivan from trying!

> what would be the reason for him to be interested in this topic?

First of all, he's a genius who is interested in and knowledgeable about _all_ of mathematics to a terrifying degree. At least he'd be terrifying if he weren't so friendly. If I were a millionaire and didn't have to work for a living and didn't feel any urge to do anything except math, I would have gone to Manhattan and attended his weekly seminar for the last 20 years. I would know so much by now, and have had so much fun! He has a completely clear, direct way of thinking about things, and he asks all the speakers in his seminar to explain things until they make sense - it can go on for hours.

Second of all, his famous work from the 1960s on rational homotopy theory involved developing differential forms on simplicial sets - so, in some sense, a 'discrete version' of differential calculus. And he's been interested in this idea ever since - I believe that for many years he has been trying to use this idea to get a better understanding of the Navier-Stokes equations and other nonlinear PDE.

Third, he's done a lot of work on dynamical systems, so unlike many algebraic topologists he's not at all scared of analysis or ideas from physics.

More recently he's been working on string topology...

Eric mentioned

> From Navier-Stokes To Einstein

I added a reference to that paper on [[Navier-Stokes equations]] some time ago :-)

The interesting question is of course whether there is a deeper connection that makes physical sense of this correspondence.

John wrote:

> ...for many years he was trying to use this idea to get a better understanding of the Navier-Stokes equations and other nonlinear PDE.

Naively I thought that Dennis Sullivan would be interested in developing a discrete differential geometry in order to transfer the tools from smooth manifolds to some other setting and use them there, but not the other way around. He's certainly not interested in numerical approximations, right?

Well, when I first heard of the millennium problems I thought that a possible strategy to prove the existence of a smooth solution to a PDE would be to develop a discrete scheme and prove convergence to something smooth via a limit procedure that makes the discrete scheme converge to the continuous PDE.

> I added a reference to that paper on [[Navier-Stokes equations]] some time ago :-)

I didn't realize that page existed. I probably knew at one time, since I see every page created on the wiki, but only recently started thinking about it. Very nice page with very nice references :)

> Naively I thought that Dennis Sullivan would be interested in developing a discrete differential geometry to transfer the tools from smooth manifolds to some other setting and use it there...

That's what he did in his work on rational homotopy theory: develop a theory of differential forms that applies to arbitrary topological spaces.

> He's certainly not interested in numerical approximations, right?

He doesn't write computer programs, as far as I know, but I'd be very scared to say that Dennis Sullivan was not interested in some mathematical topic unless I'd checked with him. I can easily imagine him wanting to prove existence of solutions to Navier-Stokes by means of some clever discrete approximation. I suspect I only know about 1% of what he's interested in.

Terry Tao has a very nice article on [why proving existence of solutions of the Navier-Stokes equations is hard](http://terrytao.wordpress.com/2007/03/18/why-global-regularity-for-navier-stokes-is-hard/). It's basically a simple scaling argument: we (meaning Terry Tao) only know of two quantities that are guaranteed to remain bounded for solutions of this equation, and the closer we 'zoom in', looking at a solution at shorter and shorter distance scales, the less these quantities tell us about how big the velocity vector field or its derivative could get.

I'm not very good at this stuff, but I do know a tiny bit about proving existence of solutions of nonlinear PDE, and this sort of problem sounds like the kind that people usually get around by gaining new physical insight into the PDE at hand, together with playing around with inequalities that express this insight. I don't know of a case where a 'clever discretization' did the trick.

Sometimes one finds a bunch of unexpected conservation laws, but I would be shocked if Navier-Stokes had a bunch of those. When you see solitons that can pass through each other unharmed, as in the Korteweg-de Vries equation, you suspect that there must be a bunch of secret conserved quantities making that happen, and thus secret symmetries. But I've never heard of anything like that for Navier-Stokes. If I were going to work on that hard problem, I'd try to prove that global existence _can fail_.

Global existence of solutions for Navier-Stokes is one of those problems where physics gets in the way of mathematics for me: I find it really hard to care about. But maybe I'm missing something interesting.

Hydrodynamics is a derivative expansion; we _expect_ it to give wrong answers when gradients get large, when too much structure is developing at small scales, and so on, and we expect higher-derivative terms to patch that up. And, ultimately, real fluids at small enough scales should be modeled as collections of molecules, which in turn should be modeled with quantum mechanics; there's a whole hierarchy of effective theories, each of which we _expect_ to break down at some point. So I'm not sure why I care very much whether the N-S equations just start giving _inaccurate_, but well-defined, solutions, or whether they have _singular_ solutions. Either one is consistent with the effective theory breaking down and the need for something more microscopic. (The odd thing is, I'm perfectly willing to believe that all sorts of abstract mathematical questions are interesting even when they have nothing to say about the real world. But in this case, when it _does_ have something to say about the real world, that interferes with my ability to think it's interesting as an abstract problem, ignoring the real-world issues.)

Now, what would really blow my mind would be if it could be shown that problems develop in solutions of the Navier-Stokes equations even in the regime where I would believe they should be a good approximation to the right microscopic description of real fluids. If the solution blows up precociously, I would worry. But it seems just from dimensional analysis that we can rule out that possibility: if the solution becomes badly behaved after some amount of time, something is blowing up somewhere, which means derivatives are large and the effect of other terms we've dropped is important.

Anyway, I'm not totally sure what the point of this comment is, except to emphasize that the best we ever deal with in applications are effective theories that we _expect_ to fail eventually. Maybe someone can tell me something exciting about the theory of differential equations that would convince me that I really should care about whether Navier-Stokes has singular solutions or not, though. Or maybe I just have to apologize for being a physicist.

@Matt #30:

I'm with you on that. I'm finding the relation between Navier-Stokes and the Einstein equations pretty fascinating, and it completely supports your intuition about derivatives. One way to view it is that Navier-Stokes relates to a particular limit of general relativity, so moving back away from that limit brings some of those derivatives back in, but it also brings you closer to general relativity. But then your comments apply equally well to general relativity.

I also hope to better understand the relation between Navier-Stokes and stochastic calculus. This might provide a bridge to discrete stochastic calculus resulting in a discrete Navier-Stokes. I really have no doubt there is a discrete Navier-Stokes ("discrete" as defined above), but I need to find the key. The discrete Burgers equation should provide some important clues. For example, I suspect it will be related to curvature as well.


This is all X-treeemly fascinating stuff, in detail and as a grand story. Luckily I've been away on holidays and then hard at work, so it's been easier not to read up and check a lot of stuff here.

At least I've pulled out an old folder titled "The cosmic Galois group" containing Eric's 2002 paper - which, for fun, was paired with an article of [Atiyah](http://www.ias.ac.in/currsci/dec252005/2041.pdf). Actually I haven't really digested Eric's paper, for 2 aesthetic reasons. First, my coordinate allergy. Quoth Atiyah:

> Standard text-books make great play with the technical details, introducing coordinates, writing equations and then showing that the resulting physics is independent of the choice of coordinates. To a geometer this is perverse.
>
> The fundamental link is from physics to geometry, from force to curvature, and the algebraic machinery that encodes this is secondary. God created the universe without writing down equations!

_But it seems the coordinate play in Eric's paper makes a counter point._

Another issue I had with the paper is the linear algebra: What is linear? Right- or left-linear, or both? (Also, I would call $d$ a differential, not a derivation.) Still, I'd love to read more (but I got a small head and no time). What's the latest and best paper on this noncommutative calculus?

Another paper in that folder is titled "Trees, renormalization and differential equations". It's about the Hopf algebra of rooted trees appearing in numerics, QFT and NCG. It seems that phenomenon makes a strong case for Eric's dream of a unification of numerics and calculus. Actually it was also one of my dreams; now there are too many others...

I'll ask more questions later. Need to get back to work.

Stochastics and numerics: There's always a grain of stochastics involved when crunching numbers, the noise of the rounding errors. Why not tap this source of randomness (instead of throwing Monte Carlo dice at the computer)?

Is this stochastic calculus aspect of "noncommuting commutative differential calculus" just a coincidental phenomenon? Or is there a deeper can of worms to examine?


Now I just need to tell of my little connection with Sullivan's genius (having wondered why the kinematic viscosity constant $\nu$ is constant in Burgers' eq.). It's his paper Related aspects of positivity in Riemannian geometry, J. Diff. Geom. 25 (1987), 327–351.

Once I was the leading expert (just kidding) in potential theory of Schrödinger harmonic functions (BTW, every strictly positive smooth function is Schrödinger harmonic). I have extended Sullivan's Green's region of $\lambda$-harmonic potential theory to nonconstant $\lambda$, where a new phenomenon enters (you can always have recurrence with a suitable Schrödinger term). But I got no idea of Sullivan's applications to hyperbolic Möbius etc. stuff...


Part of the problem is that floating-point numbers have different "magnitudes" of error from rounding and truncation depending on where in the number space you are (e.g., that's behind the famous rule "don't subtract two large numbers of almost equal magnitude; reorder/manipulate the calculation to avoid this"), which probably doesn't match where the noise in the "real system" is.

(There have been proposals to build "chips which are faster than normal but consequently have a significant probability of error in individual operations" for simulation software. I'd be prepared to use such a chip _if_ the distribution of the noise had been fully profiled and explained, so that it was clear where it would occur and what its distribution would be.)
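The rule about not subtracting two nearly equal numbers is catastrophic cancellation, and it is easy to see in a few lines of Python. The quadratic-formula example below is a standard textbook illustration (not something from this thread): the "naive" root loses almost all its significant digits, while an algebraically equivalent rearrangement keeps them.

```python
# Demonstration of catastrophic cancellation: subtracting two nearly
# equal numbers discards most of the significant digits, so the
# relative error of the result can be enormous even though each input
# is accurate to machine precision.
import math

def naive_quadratic_root(a, b, c):
    """Smaller-magnitude root via the textbook formula (unstable when b*b >> 4*a*c)."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a)          # -b and d nearly cancel

def stable_quadratic_root(a, b, c):
    """Same root, rewritten so no subtraction of nearly equal numbers occurs."""
    d = math.sqrt(b * b - 4 * a * c)
    return (2 * c) / (-b - d)

# x^2 + 1e8*x + 1 = 0 has roots near -1e8 and -1e-8.
naive = naive_quadratic_root(1.0, 1e8, 1.0)
stable = stable_quadratic_root(1.0, 1e8, 1.0)
```

Both formulas are exact in real arithmetic; only the rounding behavior differs, which is precisely David's point about the error depending on where in the number space the operations happen.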

One last question before I probably go away for a week...

Is there any harmonic map involved in Burgers' eq.? (If yes, it calls for stochastics.)

@David, Yeah, one would need a new data structure, the "random floating point variable", adding a new parameter to mantissa and exponent.

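Martin's "random floating point variable" does not exist as a standard data structure; a hedged sketch of what it might look like is a value paired with a standard deviation for its accumulated rounding noise, propagated to first order. The class name and design below are purely illustrative assumptions.

```python
# Hypothetical "random floating point variable": a float carrying an
# estimate of the standard deviation of its accumulated rounding noise.
# This is an illustrative sketch, not an existing library type.
import math

class RandomFloat:
    def __init__(self, value, sigma=0.0):
        self.value = value   # mantissa and exponent live in the float itself
        self.sigma = sigma   # estimated std. dev. of the rounding noise

    def __add__(self, other):
        # Treat the input errors as independent, so they add in
        # quadrature; the fresh rounding error of the sum is roughly
        # half an ulp of the result.
        v = self.value + other.value
        return RandomFloat(v, math.hypot(self.sigma, other.sigma, math.ulp(v) / 2))

a = RandomFloat(1.0, 1e-16)
b = RandomFloat(2.0, 1e-16)
c = a + b   # value 3.0, with a slightly larger noise estimate
```

This is the extra "parameter" alongside mantissa and exponent: a per-value noise model that downstream code could sample from instead of rolling separate Monte Carlo dice.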

@Martin #32:

I'm with you on the distaste for coordinates. In fact...

> What's the latest and best paper on this noncommutative calculus?

I don't know if it is the latest or best, but a decent place to start is my 2004 paper

* [Financial Modelling Using Discrete Stochastic Calculus](http://phorgyphynance.files.wordpress.com/2008/06/discretesc.pdf)

If the finance is a turn-off, you can just read the intro sections. I think they give the intuition behind discrete (noncommutative) calculus. Plus, I am happy to address any questions here. One thing I point out related to coordinates is that the Ito formula is MUCH simpler without introducing coordinates. The same goes for the above derivation of the discrete Burgers equation: it is much simpler without coordinates.

@Martin #33:

> Is this stochastic calculus aspect of "noncommuting commutative differential calculus" just a coincidental phenomenon? Or is there a deeper can of worms to examine?

There are too many connections to other disciplines for this to be mere coincidence. I was actually studying discrete calculus for numerical analysis (particularly applied to electromagnetic theory) before learning stochastic calculus. The minute I picked up a book on stochastic calculus, it was obvious the two were related. Too bad Dimakis and Mueller-Hoissen beat me to it :)

Another great reference that helped me learn this was

> **Introduction to Noncommutative Geometry of Commutative Algebras and Applications in Physics** ([Postscript](http://wwwuser.gwdg.de/%7Efmuelle/download/mexico.ps))
>
> (Recent Developments in Gravitation and Mathematical Physics, Proceedings of the Second Mexican School on Gravitation, 1998)

Dimakis and Mueller-Hoissen have really pioneered this field (i.e. this particular corner of noncommutative geometry dealing with commutative algebras) and not many have followed. They have many papers on the arXiv, but this was a very nice intro (including $q$-calculus as a special case that John may be interested in).

@Martin #36:

> Is there any harmonic map involved in Burgers' eq.? (If yes, it calls for stochastics.)

There is no question stochastics are involved. The Ito formula is required for the continuum zero-curvature derivation.

I skimmed Sullivan's paper and noticed that I need to know a lot more about the infinite-dimensional setting first, to understand the analogy, for which I will consult the book

* V.I. Arnold, B.A. Khesin: Topological methods in hydrodynamics.

But I don't get the big picture, the "why should I care" part. What kind of problems is all this about?

Eric wrote:

> I'm with you on the distaste for coordinates.

As a physicist I'll wryly observe that in the end you'll need to compute numbers to check your theory against experiments, and you can calculate numbers only by using coordinates.

David wrote:

> Part of the problem is that floating point numbers have different "magnitudes" of error from rounding and truncation error depending which point in the number space you are...which probably don't match where the noise in the "real system" is.

That's the problem: if you simulate an SDE, the distribution of the noise is determined by the SDE and the approximation scheme and is fixed for each timestep, so it will be different from rounding errors (which you can see by comparing numerical solutions of ODEs with SDEs).

I'd be interested in going the other way 'round: having an implementation of decimal numbers with adjustable accuracy (like BigDecimal in Java), so that you can adjust your accuracy to the estimated rounding error in each step. This could be used to do numerical approximations to PDE that are **provably correct within prescribed error bounds**.
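Adjustable-accuracy arithmetic in the style of Java's BigDecimal is available in Python's standard `decimal` module, where the working precision is a context setting that can be changed mid-computation. A minimal sketch (the square-root example is my own illustration, not from this thread):

```python
# Adjustable-precision decimal arithmetic: the number of significant
# digits is a property of the arithmetic context, not of the type, so
# it can be raised until successive results agree to the digits needed.
from decimal import Decimal, getcontext

def sqrt_to_precision(x, digits):
    """Square root of x computed with `digits` significant digits.
    Note: this mutates the global decimal context for simplicity."""
    getcontext().prec = digits
    return Decimal(x).sqrt()

# Re-run the same computation at increasing precision; leading digits
# that agree between runs are trustworthy.
low = sqrt_to_precision(2, 10)
high = sqrt_to_precision(2, 50)
```

Verified interval or error-bound tracking, as Tim describes, would sit on top of this: re-run with more digits whenever the estimated rounding error exceeds the prescribed bound for that step.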

Tim wrote:

> I'd be interested in going the other way 'round: having an implementation of decimal numbers with adjustable accuracy (like BigDecimal in Java), so that you can adjust your accuracy to the estimated rounding error in each step. This could be used to do numerical approximations to PDEs that are provably correct within prescribed error bounds.

One popular approach (amongst the small number of people who think about such things) is to solve numerical problems making all non-input variables be *interval arithmetic* values (i.e., $[low, high]$) and then arrange for each elementary operation to try the various different rounding modes and update the intervals accordingly. (It turns out that with a bit of encoding you can mostly do this using standard SIMD instructions; see [this example](http://www.dagstuhl.de/Materials/Files/06/06021/06021.LambovBranimir.Slides.pdf).) Then at the end you know that the "unlimited accuracy" solution must lie within your variables' interval bounds, so if those are small you may have a useful bound on how the solution can behave. Of course this has problems in that

1. there's not much you can do if the interval bounds become big, which they easily can with certain patterns of operations; there's no simple way here to re-run with greater accuracy, and
2. interval arithmetic *really* benefits from manually rewriting the code to make it clear when correlated variables are being combined (so the library can use tighter interval bounds), which you may not want to do.

The other thing, of course, is that if measured data is being used there's no point being significantly more accurate in the computations than the errors in that data. This is why I'm personally not particularly interested in this kind of thing.
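A minimal sketch of the interval idea in Python, using a hypothetical `Interval` class (a real library such as MPFI would additionally direct the floating-point rounding mode outward at each bound, which plain Python cannot do). It also illustrates the correlated-variables problem from point 2: the arithmetic cannot know that `x - x` is exactly zero.

```python
# Hypothetical minimal interval type; endpoints are ordinary floats,
# so outward (directed) rounding of the bounds is NOT modelled here.
class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Worst case: smallest self minus largest other, and vice versa.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Sign of the endpoints is unknown, so take min/max of all products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.9, 1.1)
print(x - x)   # a symmetric interval of width ~0.4 around 0, not [0, 0]
print(x * x)   # contains 1.0, but wider than the true range of t*t for t in x
```

The widening of `x - x` is exactly why rewriting code to expose correlated variables (so that `x - x` can be simplified away before the interval evaluation) tightens the final bounds.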

Eric said:

> @Martin #32:
>
> > What's the latest and best paper on this noncommutative calculus?
>
> I don't know if it is the latest or best, but a decent place to start is my 2004 paper
>
> - [Financial Modelling Using Discrete Stochastic Calculus](http://phorgyphynance.files.wordpress.com/2008/06/discretesc.pdf)
>
> If the finance is a turn-off, you can just read the intro sections.

Actually the intro section turned me off (but I'm hard at work this week, just f*ed up my laptop by not installing Debian "Squeeze", etc. etc., so my patience is currently quite limited...).

Seriously: (2.7) is not a definition... Perhaps you wanted to avoid turning people off with basic algebraic topology lingo. But now I'll first recall that simplex stuff of 20 years ago. Then I might perhaps have a clearer picture when reading your paper. I'm quite confident your stuff is right, but if it were a student's paper (15 years ago) I would have handed it back and requested more precise language...

Maybe when you've simmered down you'll have something constructive to say...
