
Looking for references on differential equations and numerical methods

Can anyone point me to some references (preferably online) that would address the following kinds of questions:

  • What classes of differential equations are proven to be analytically solvable?

  • What classes of differential equations are proven to be analytically unsolvable?

  • For which classes of differential equations are there numerical algorithms whose error can be bounded by an explicit function of the "epsilons" involved in the algorithm and of the precision with which the calculations are made?

Background: I'm writing a blog article on the rate equation, which includes a "fireside chat" with the reader about differential equations in general: what are their prospects for being solved analytically, and what are their prospects for being reasonably approximated by numerical methods? For instance, if we take a general reaction network, write down its rate equation, and apply the Euler method, how can we be sure, at a given machine precision, which delta-T will bring the error within a given tolerance?
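
To make that question concrete, here is a minimal sketch in Python of the experiment I have in mind. The one-reaction network A → B, the rate constant $k = 1$, and the function names are all illustrative assumptions; the point is just that this rate equation, $[A]' = -k[A]$, has the known solution $[A](t) = e^{-kt}$ to measure the error against:

```python
import math

def euler(f, y0, t_end, n_steps):
    """Forward Euler for y'(t) = f(t, y): take n_steps equal steps
    from t = 0 to t = t_end and return the final value of y."""
    dt = t_end / n_steps
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += dt * f(t, y)
        t += dt
    return y

# Toy rate equation for the reaction A -> B: [A]' = -k [A].
k = 1.0
rate = lambda t, a: -k * a

exact = math.exp(-k)  # analytic value [A](1) = e^{-k}, taking [A](0) = 1
for n in (10, 20, 40, 80):
    err = abs(euler(rate, 1.0, 1.0, n) - exact)
    print(f"n = {n:3d}   dt = {1.0 / n:6.4f}   error = {err:.2e}")
```

The printed errors shrink roughly in proportion to delta-T, which matches the standard global error bound for forward Euler: in exact arithmetic the error at time $t$ is at most $\frac{M h}{2 L}\left(e^{L t} - 1\right)$, where $h$ is the step size (the delta-T above), $L$ is a Lipschitz constant for the right-hand side, and $M$ bounds the second derivative of the solution. Roundoff at machine precision adds a term that grows as $h$ shrinks, so there is a best achievable step size.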

I saw a lot of talk on the web about unsolvable differential equations, but I didn't find any clear explanation of which ones were proven to be unsolvable, and how they were so proven. One web page stated that all ordinary differential equations were solvable, whereas even some linear partial differential equations are unsolvable. Really? I was skeptical of this statement, also because it framed ordinary differential equations in the special form:

$$f'(t) = g(t,f(t))$$

But what about ordinary differential equations that don't express the derivative as a function of all the other terms?

Is there a theory, analogous to the differential Galois theory used to prove that certain integrals are "impossible," that applies to differential equations?

In going through this writing exercise, I learned that these are basic questions that I myself need to learn more about.

If anyone can offer clarifications here, that would also be very helpful.

Thanks very much.

Comments

  • 1.

    Really? I was skeptical of this statement, also because it framed ordinary differential equations in the special form

    This could be related to what you can find here:

    http://en.wikipedia.org/wiki/Painlevé_transcendents

  • 2.

    Thanks, that is informative and related.

  • 3.

    David Tanzer wrote:

    One web page stated that all ordinary differential equations were solvable, whereas even some linear partial differential equations are unsolvable. Really?

    What do you mean by "solvable" here?

    For a mathematician, this means there exists a function obeying the differential equation. We distinguish between local existence (i.e. the solution exists over a short period of time) and global existence (i.e. the solution exists over an arbitrarily long period of time). We also talk about uniqueness of solutions, and regularity (e.g., things like how smooth these solutions are).

    But you might be talking about the existence of an "analytical" solution, which is a different question. "Analytical" is a somewhat fuzzy concept, because the more special functions you know, the more things may count as analytical solutions.

    I was skeptical of this statement, also because it framed ordinary differential equations in the special form:

    $$f'(t) = g(t,f(t))$$

    That's the usual way to frame them, because otherwise you have to solve for $f'(t)$, which is an issue of algebra rather than calculus. If we write down an equation like

    $$ (f'(t))^2 = -1 $$

    or

    $$ e^{f'(t)} = 0 $$

    it won't have any real-valued solutions, but that has little to do with differential equations: it's an issue of algebra!

    Anyway, let me not talk about "analytical solutions" now - let me just talk about the existence of solutions.

    [Picard's existence and uniqueness theorem](http://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem) is the most fundamental result on local existence and uniqueness of solutions of equations like this:

    $$f'(t) = g(t,f(t))$$

    For global existence we need to know more about $g$.
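
    The proof of that theorem is constructive: it builds the solution as the limit of the Picard iterates $f_{n+1}(t) = f(0) + \int_0^t g(s, f_n(s)) \, ds$. Here is a minimal sketch of the iteration using sympy, with the illustrative choice $g(t,f) = f$, for which the iterates are just the Taylor polynomials of $e^t$:

    ```python
    import sympy as sp

    t, s = sp.symbols('t s')

    def picard(g, f0, n_iter):
        """Picard iterates for f'(t) = g(t, f(t)) with f(0) = f0:
        f_{k+1}(t) = f0 + integral of g(s, f_k(s)) for s from 0 to t."""
        f = sp.sympify(f0)
        for _ in range(n_iter):
            f = f0 + sp.integrate(g(s, f.subs(t, s)), (s, 0, t))
        return sp.expand(f)

    # f' = f with f(0) = 1: four iterations give the degree-4
    # Taylor polynomial of exp(t).
    print(picard(lambda s, f: f, 1, 4))
    # t**4/24 + t**3/6 + t**2/2 + t + 1
    ```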

    For partial differential equations, the key existence result is the [Cauchy-Kowalevski theorem](http://en.wikipedia.org/wiki/Cauchy%E2%80%93Kowalevski_theorem). The key nonexistence result is [Lewy's example](http://en.wikipedia.org/wiki/Lewy%27s_example).

    But I get the feeling you were interested in "analytical" solutions. Right?

  • 4.

    Anyway, let me get this over with while it's on my mind. As I said, the concept of "analytical solution" or "closed-form solution" is fuzzy. Does a Bessel function count as an analytical solution? Most mathematicians would say yes. How about Lambert's W function? How about a hypergeometric function? Etcetera. The more functions you know and love, the more solutions count as "analytical".

    On the other hand, the concept of an [elementary function](http://en.wikipedia.org/wiki/Elementary_function) is well-defined, so we can prove theorems saying a differential equation has no solution that is an elementary function. I've mainly seen this discussed in the very simplest case:

    $$ f'(t) = g(t) $$

    where the solution is the indefinite integral

    $$ f(t) = \int g(t) \, d t $$

    So, now we're asking things like "when is the integral of an elementary function an elementary function"? We all know sometimes it's not, as with

    $$ \int \exp(-t^2) \, d t $$

    but how do we prove this? [This paper](http://www.claymath.org/programs/outreach/academy/LectureNotes05/Conrad.pdf) seems to give a fairly readable proof, though I can't say I actually went through it! There are also some good comments and references [here](http://math.stackexchange.com/questions/155/how-can-you-prove-that-a-function-has-no-closed-form-integral).
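
    As a quick illustration (not a proof), a computer algebra system with a Risch-style integrator will answer this integral only in terms of a special function, the error function, rather than anything elementary. A minimal sympy check:

    ```python
    import sympy as sp

    t = sp.symbols('t')

    # sympy finds no elementary antiderivative here, so it answers
    # in terms of the special function erf.
    print(sp.integrate(sp.exp(-t**2), t))
    # sqrt(pi)*erf(t)/2
    ```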

  • 5.

    Nice info, thanks.

    I was referring to the existence of analytical solutions, but I was essentially hedging because I didn't want to commit to a specific definition of what an analytical solution is.

    The real question I have is this. Suppose we try to build up a toolbox of functions for expressing the solutions to differential equations. Initially, we start with the simplest collection of primitive functions, say, the arithmetic operations and the taking of rational roots. (The specifics don't exactly matter for what I am asking.) All we're allowed to do is to compose these functions.

    Now along comes our first differential equation:

    Y'(t) = Y(t).

    We're not able to express the solution using the functions in our toolbox.

    So we define a new special function to be the solution to this equation, which we can express by a power series. We name this new function the exponential, exp(t), and add it to our toolbox.
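
    (As a concrete sketch of "express it by a power series": the series solution of $Y' = Y$, $Y(0) = 1$ is $\sum_{n \ge 0} t^n/n!$, and a truncation of it already gives a usable numerical definition of the new toolbox function. The truncation order below is an arbitrary illustrative choice.)

    ```python
    import math

    def exp_series(t, n_terms=20):
        """Partial sum of the power-series solution of Y' = Y, Y(0) = 1:
        the sum of t**n / n! for n = 0 .. n_terms - 1."""
        total, term = 0.0, 1.0
        for n in range(n_terms):
            total += term
            term *= t / (n + 1)
        return total

    print(exp_series(1.0), math.exp(1.0))  # agree to about 15 digits
    ```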

    Now along comes another differential equation, and we find that we can't express the solution as a composition of functions even in our extended toolbox. So we define a new special function, and call it, say, a Bessel function, and add it to our toolbox.

    Now, it appears in practice that no finite toolbox will suffice, because an apparently countless number of special functions have been defined.

    Question: Is it proven that no finite collection of functions can express the solutions to all differential equations? Or is it still possible that there is such a "basis," but we don't know what it is? And supposing there were such a basis, can it be proven that there is no possible algorithm for expressing the solution of a given differential equation over this basis?

    My hunch is that there is no such finite basis, and there is not even a countable basis. It's based on a sense that differential equations can represent so much complexity, that no kind of reduction scheme exists. But I don't know, so I'm asking.

    Apart from the fact that I find this an interesting question, I have a blog-related motivation for understanding this (as the author of a blog article), which I will explain in a subsequent comment on this thread.

  • 6.

    Re: the result that $ \int \exp(-t^2) \, d t $ cannot be expressed as a composition of elementary functions.

    As you mention, this proves that $ f'(t) = \exp(-t^2) $ has no solution that can be expressed as a composition of elementary functions. So we definitely do have to add it to our toolbox, as a special function --

    But even if there were an infinite number of such functions for which we could prove this to be true, that would not answer the question I just posed.

    Suppose $ f'(t) = g(t) $ were the second equation whose solution we proved could not be expressed as a composition of elementary functions. But, for all we know, that solution could still be expressible as a composition of the elementary functions and the first special function that we added to our toolbox, viz., $ \int \exp(-t^2) \, d t $.

  • 7.

    My feeling is that you could prove your non-existence-of-a-basis result the opposite way: show that there is some sufficiently "big" set of "functions" (functions in the most basic sense, each domain value having only one range value) each of which is fully defined by an individual differential equation that you can rigorously write down. Then a "countable (say) basis" for solving differential equations is impossible if it's impossible to give such a basis for this function set. (There might actually be more solutions of differential equations than are conceived of in your function set, so the implication only goes one way.)

    One candidate for such a set is the set of otherwise unconstrained functions, each of which possesses enough derivatives for one to write down a differential equation that fully defines it, at which point the argument gets too close to being circular for me to complete it. I can clearly conceive of picking some number $n$ and there being a set of $n$-times differentiable functions with no countable basis, but is each of those fully specified by some individual partial differential equation of order $n$? If not, the argument doesn't go through, and I haven't got a handle on how to argue for that.

    (Incidentally, despite my recent posts on Azimuth, I actually know rather little about differential equations; I've recently started to get a tiny bit interested again after coming across some papers that apply algebraic/geometrical features of a system to improve numerical approximation schemes.)

  • 8.

    David Tanzer wrote:

    Question: Is it proven that no finite collection of functions can express the solutions to all differential equations?

    I'm sure something like this is true, but I haven't seen anyone try to prove it.

    (By the way, if you're asking around, the right word is not "basis" but "set of generators" - we use that whenever we have an algebraic structure and a set of elements from which all other elements can be obtained using a finite sequence of operations in this structure. If you ask about a "basis", someone will probably tell you that the monomials $x^n$ are a topological basis for the [topological vector space](http://en.wikipedia.org/wiki/Topological_vector_space) of Taylor series $\sum_{n=0}^\infty a_n x^n$. In other words, every Taylor series is a convergent linear combination (in a certain topology) of monomials. And this isn't at all what you're interested in.)

    Anyway, your question should be dealt with in the subject called [differential Galois theory](http://en.wikipedia.org/wiki/Differential_Galois_theory), and more specifically [Picard-Vessiot theory](http://en.wikipedia.org/wiki/Picard%E2%80%93Vessiot_theory). But I don't know enough to point you to an answer. One problem is that the basic stuff on Picard-Vessiot theory is about differential fields. These are closed under differentiation, addition, multiplication, subtraction and division by anything nonzero. But they're not necessarily closed under all the other operations you're interested in, like composition.

    Still, the Wikipedia article on Picard-Vessiot theory should be somewhat interesting.

  • 9.

    I wrote:

    I was skeptical of this statement, also because it framed ordinary differential equations in the special form:

    $$f'(t) = g(t,f(t))$$

    John replied:

    That's the usual way to frame them, because otherwise you have to solve for $f'(t)$, which is an issue of algebra rather than calculus. If we write down an equation like

    $$ (f'(t))^2 = -1 $$

    or

    $$ e^{f'(t)} = 0 $$

    it won't have any real-valued solutions, but that has little to do with differential equations: it's an issue of algebra!

    Yet the more general form of implicit differential equations does arise in some applications, as for example the [Legendre differential equation](http://mathworld.wolfram.com/LegendreDifferentialEquation.html). On that page, the solution is obtained not by algebra, but by a series expansion, and the answer is not expressed in closed form.

    The examples you gave here are extreme cases, where there are no solutions, and this is provable on account of the algebraic structure alone. But in other cases, such as the Legendre equation, we may have the headache of not being able to algebraically solve for the derivatives, yet this algebraic complexity is not so extreme as to prevent a solution from existing, it just further constrains the possible solutions.
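
    To spell out the series-expansion route on that page: substituting $f(t) = \sum_n a_n t^n$ into the Legendre equation $(1 - t^2) f'' - 2 t f' + l(l+1) f = 0$ gives the recurrence $a_{n+2} = \frac{n(n+1) - l(l+1)}{(n+1)(n+2)} a_n$. A minimal sketch of that recurrence in Python (the choice $l = 2$ and the seed coefficients are just for illustration):

    ```python
    from fractions import Fraction

    def legendre_coeffs(l, a0, a1, n_terms):
        """Coefficients a_0 .. a_{n_terms - 1} of a series solution of
        (1 - t^2) f'' - 2 t f' + l(l+1) f = 0, via the recurrence
        a_{n+2} = (n(n+1) - l(l+1)) / ((n+1)(n+2)) * a_n."""
        a = [Fraction(a0), Fraction(a1)]
        for n in range(n_terms - 2):
            a.append(Fraction(n * (n + 1) - l * (l + 1),
                              (n + 1) * (n + 2)) * a[n])
        return a

    # For integer l the factor n(n+1) - l(l+1) vanishes at n = l, so one
    # solution is a polynomial: l = 2 gives the coefficients of
    # P_2(t) = (3 t^2 - 1)/2, namely -1/2, 0, 3/2, 0, 0, ...
    print(legendre_coeffs(2, Fraction(-1, 2), 0, 6))
    ```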

  • 10.

    For the first question: at summer school we were recently introduced to "Sato theory" for dealing with certain types of PDEs. It's a bit technical, I suppose, but it may shed some light on the types of PDEs that can be solved exactly:

    http://hakotama.jp/laboratory/works/public/88ostt.pdf - Ohta, Yasuhiro, et al. "An elementary introduction to Sato theory." Progr. Theoret. Phys. Suppl 94 (1988): 210-241.

  • 11.

    Cool! I linkified that URL; you can click "edit" to see how I did it. The less obvious key step was to click "Markdown+Itex", which enables a bunch of features. I believe if you do it once and have cookies on, you'll never need to do it again.
