
When we solve a differential equation using software, the solution converges locally, but in our case of weather-related modeling we might need a global solution, especially if the data is noisy:

ANNs for solving ordinary and partial differential equations

Solving differential equations using neural networks

Moreover, most differential equation solvers are not parallelized, but we could parallelize the neural-network part of the iterations and thus solve larger multivariate differential equations.

Dara

## Comments

Here is a question I have on gradient-style optimization.

Given that we may want to fit to a factor that looks like cos(A*t+B), how do we most efficiently change A and B at the same time? The problem is that a large change in A also impacts the effective phase shift B, which leads to slow convergence.

Is it better, for example, to optimize against something like this? cos(A*(t+B/A))

So when A is changed, the phase offset is adjusted automatically.

Does that make sense?
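
The two forms are algebraically identical, cos(A*t+B) = cos(A*(t + B/A)) with phase offset phi = B/A; what differs is the coordinates the gradient moves in. A quick numerical check of the identity, and of how the A-gradient changes between holding B fixed and holding phi fixed (the values of A, B, and the grid are arbitrary illustrations):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 200)
A, B = 2.0, 0.7
phi = B / A   # phase offset in the reparameterized form

# The two parameterizations are pointwise identical...
f1 = np.cos(A * t + B)
f2 = np.cos(A * (t + phi))
print(np.max(np.abs(f1 - f2)))                      # essentially zero

# ...but the partial derivatives with respect to A differ, because
# (A, B) and (A, phi) are different coordinate systems.
dA_fixed_B   = -t * np.sin(A * t + B)               # d/dA of cos(A t + B)
dA_fixed_phi = -(t + phi) * np.sin(A * (t + phi))   # d/dA of cos(A (t + phi))
# In (A, phi) coordinates a step in A leaves the phase offset intact,
# which is the "automatic adjustment" suggested above.
```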


Paul, you make sense, but I need to code a specific example to answer your questions, as opposed to the usual pontification heh heh heh.

Let us work some examples tonight; I am a lot freer.

Dara


Paul, I tell you what: I'll turn our discussions and code into Enterprise CDF for educational purposes, so others will learn the math, numerical methods, and symbolic methods.

D


The solvers in Mathematica, or any other solvers, are sensitive to the simplification and factoring of the terms in the expressions. The results might vary even though the expressions are equivalent!

Especially with the example you gave, for very small or large values of A.

Also, some of these equations might have multiple solutions, of which only one is issued numerically by Mathematica. There is no guarantee that Mathematica finds all possible numerical solutions.
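
For instance, a local root-finder returns whichever solution its starting point leads to. A sketch with plain Newton iteration on cos(x) = 0, which has infinitely many roots (the starting points are arbitrary illustrations):

```python
import math

def newton(f, df, x0, steps=50):
    """Plain Newton iteration; converges to *a* root near x0, not all roots."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

f, df = math.cos, lambda x: -math.sin(x)

r1 = newton(f, df, 1.0)   # starts near pi/2
r2 = newton(f, df, 4.0)   # starts near 3*pi/2
print(r1, r2)             # two different roots of the same equation
```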

This was a puzzle when I wrote another global maximizer and compared it to Mathematica. To my surprise, and I did not see this in any papers, the solutions to the SVR global min/max are not unique! In other words, the error minimization John asked about might not have a unique solution most of the time.

This is actually very good, since it means our model of the dynamical system is incomplete, and therefore there are multiple solutions to the system of equations.

Imagine riding a bike: how your brain balances the bike and how mine would are totally different, in spite of the fact that we both ride quite similarly.

But if the model of the dynamical system of the bike were complete, then either I could ride the bike or you could, but not both of us.

Dara


I noticed another issue with the Mathematica Differential Evolution solver in that it prefers to work with range constraints, but it may not do the right shortcuts.

As an example, a good constraint for a phase constant is $[0, 2\pi]$.

But the solver seems to hit the range constraint and not know enough to wrap around, so many times the result is stuck at either $0$ or $2\pi$, which we know can't be right.

That forces me to either put in several cycles or to go with a formulation such as A*sin(kt)+B*cos(kt), which carries an implicit phase -- but this is not always good, because it rules out a lower constraint of 0 for both A and B (they must be allowed to go negative to cover all phases).
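
One workaround, sketched below, is to search in (A, B) coordinates and recover amplitude and phase afterwards via the identity A*sin(kt) + B*cos(kt) = R*cos(kt - phi), with R = sqrt(A^2 + B^2) and phi = atan2(A, B); the specific values of A, B, k are illustrative:

```python
import math

def to_amp_phase(A, B):
    """Convert A*sin(k t) + B*cos(k t) into R*cos(k t - phi)."""
    R = math.hypot(A, B)
    phi = math.atan2(A, B) % (2 * math.pi)   # fold the phase into [0, 2*pi)
    return R, phi

# Numerical check of the identity for arbitrary (possibly negative) A, B.
A, B, k = -1.3, 0.4, 2.0
R, phi = to_amp_phase(A, B)
for i in range(100):
    t = i * 0.07
    lhs = A * math.sin(k * t) + B * math.cos(k * t)
    rhs = R * math.cos(k * t - phi)
    assert abs(lhs - rhs) < 1e-9
print(R, phi)
```

This way the optimizer never sees the periodic boundary at all; the fold into $[0, 2\pi)$ happens only when reporting the phase.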