What makes it more difficult to evaluate the error between the LHS and RHS of a discrete-difference expansion of a differential equation is that both sides are heavily parameterized, so there is no fixed yardstick. That is in contrast to a data-versus-model error, where one side is fixed.

For example, in the DiffEq expansion the RHS forcing may be strong with lots of structure, or very weak with little structure, and that changes how much of the fit the LHS regression has to carry.

In the data-versus-solved-model case, the LHS=RHS combination is solved first, and that solution is compared to the data, whose shape is fixed apart from a scale factor.

I think this makes solving the DiffEq the more robust approach, with the expansion serving as a quick-and-dirty alternative.
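To make the contrast concrete, here is a minimal sketch of the two approaches for a toy model dy/dt = -k*y (the model, parameter name k, and noise level are illustrative assumptions, not from the original discussion). The expansion fit regresses a finite-difference LHS against the parameterized RHS, so both sides of the residual move with the parameters; the solve-then-compare fit integrates the equation and measures misfit against the fixed data:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

# Illustrative toy problem: estimate k in dy/dt = -k*y from noisy data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 41)
k_true = 1.5
y = np.exp(-k_true * t) + 0.01 * rng.standard_normal(t.size)

# (a) Discrete-difference expansion: regress the finite-difference LHS
# against the RHS forcing -k*y. Both sides depend on the fit, so the
# residual has no fixed yardstick. Linear least squares, no ODE solves;
# for this one-parameter model the solution is closed-form.
dy_dt = np.gradient(y, t)                         # LHS: numerical derivative
k_expansion = -np.sum(dy_dt * y) / np.sum(y * y)  # argmin_k sum((dy_dt + k*y)**2)

# (b) Solve-then-compare: integrate the ODE for a trial k and measure the
# misfit against the data, whose shape is fixed.
def data_misfit(k):
    sol = solve_ivp(lambda tt, yy: -k * yy, (t[0], t[-1]), [y[0]], t_eval=t)
    return np.sum((sol.y[0] - y) ** 2)

k_solved = minimize_scalar(data_misfit, bounds=(0.1, 5.0), method="bounded").x
```

The expansion route is cheap (one linear regression) but inherits the noise amplified by the numerical derivative; the solved route pays for repeated integrations but compares against a fixed target.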

Does that make sense?

I am "fishing" on this because of the possibility of using the two together, as in a directed search, where the DiffEq expansion is used to generate a preferred search direction for the full model solution. It may be that this has been done before, but I am not sure what the algorithm is called.
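For what it's worth, using a cheap derivative-regression fit to seed or steer the expensive solve-then-compare fit resembles what the ODE parameter-estimation literature sometimes calls gradient matching or two-stage estimation. A minimal sketch of that directed search, on an assumed illustrative model dy/dt = a*y + b*y**2 (the model and parameter names are mine, not from the discussion):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Illustrative two-stage search: the cheap expansion fit supplies the
# starting point for the expensive solve-then-compare fit.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 3.0, 61)
a_true, b_true = -1.0, 0.3
truth = solve_ivp(lambda tt, yy: a_true * yy + b_true * yy**2,
                  (t[0], t[-1]), [1.0], t_eval=t)
y = truth.y[0] + 0.005 * rng.standard_normal(t.size)

# Stage 1 (expansion): regress the finite-difference derivative on the
# basis (y, y**2). Linear least squares, no ODE solves.
dy_dt = np.gradient(y, t)
A = np.column_stack([y, y**2])
theta0, *_ = np.linalg.lstsq(A, dy_dt, rcond=None)

# Stage 2 (full solution): solve the ODE and minimize the data misfit,
# starting the search from the expansion estimate.
def misfit(theta):
    a, b = theta
    s = solve_ivp(lambda tt, yy: a * yy + b * yy**2,
                  (t[0], t[-1]), [y[0]], t_eval=t)
    return np.sum((s.y[0] - y) ** 2)

theta = minimize(misfit, theta0, method="Nelder-Mead").x
```

Stage 1 does not tell the optimizer the exact descent direction of the full-solution misfit, but landing it near the answer usually cuts the number of ODE solves substantially.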
