Monday, August 4, 2014

Q&A with Dr. Lawrence Shampine - Part II

Here is part II (see part I here) of the Q&A with Dr. Lawrence Shampine.


Q2:
In the absence of an analytical Jacobian, you have mentioned that accurate approximations of the Jacobian are difficult to compute using finite differences. Can you please elaborate? More specifically, is this a function of higher-order equations, or is it related to the stiffness of an ODE?

A:  No, it is just hard to approximate derivatives accurately and reliably by finite differences. The increment in the independent variable must be small enough to provide a reasonably accurate approximation to the derivative. On the other hand, if it is too small, there is a loss of significance in forming the difference. Indeed, if the increment is too small, the function values do not differ at all and the derivative is approximated as zero. This is bad enough, but a Jacobian matrix is formed numerically a column at a time, using a vector of function values returned by a call to the function defining the differential equations. This means that the same increment is used in approximating every component of the column. That increment might be too big to approximate some partial derivatives accurately and too small for others. The stronger algorithms test the changes in the function values and adjust the increment if necessary to obtain a reasonable approximation to the larger derivatives. Because the small partial derivatives in the column might still be approximated poorly, finite differences should be used only when this is tolerable.
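
To make the column-at-a-time structure concrete, here is a minimal Python sketch of a forward-difference Jacobian. It is illustrative only, not Shampine's code: the name fd_jacobian and the sqrt(eps) increment rule are assumptions, though the rule is a common one. A single increment delta perturbs y[j] and is shared by every partial derivative in column j, which is exactly why it can suit the large entries of a column and fail the small ones.

```python
import numpy as np

def fd_jacobian(f, t, y, f_of_y=None):
    """Forward-difference approximation to the Jacobian df/dy,
    formed one column at a time as described in the answer."""
    n = y.size
    if f_of_y is None:
        f_of_y = f(t, y)                     # function values at the base point
    J = np.empty((n, n))
    sqrt_eps = np.sqrt(np.finfo(float).eps)  # ~1.5e-8 in IEEE double precision
    for j in range(n):
        # A common rule of thumb: balance truncation error (delta too big)
        # against loss of significance in the difference (delta too small).
        delta = sqrt_eps * max(abs(y[j]), 1.0)
        yp = y.copy()
        yp[j] += delta
        # The same delta divides every component of the column, so it may
        # serve the large partial derivatives well and the small ones poorly.
        J[:, j] = (f(t, yp) - f_of_y) / delta
    return J
```

A stronger implementation, as the answer notes, would also test the change f(t, yp) - f_of_y and retry the column with a larger increment when that change is lost in the noise of the function values.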

Q3:
In the paper “Implementation of Rosenbrock Methods”, you suggest that it is best to rescale the matrix E, and that it is easier to work with fy – (2/h)I (i.e., γ = 1/2) than with E = I – hγfy. In the implementation of a Rosenbrock-type method, this is achieved by using (1/(hγ))I – fy in the linear algebra. What is the reason for this scaling?

A:  It is slightly less work to use the recommended form, but the real issue is one of scale. When the problem is very stiff, there are eigenvalues of the Jacobian for which the product of the step size and the eigenvalue is very large in magnitude compared to one. Accordingly, the term involving the Jacobian in the form E may be so big compared to I that when the subtraction is done in floating point arithmetic, the role of the identity matrix is lost.
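
That loss is easy to see in a one-line experiment. The Python snippet below is a minimal, hypothetical illustration (the values of h, gamma, and lam are made up for the demonstration): for a stiff eigenvalue lam of the Jacobian fy, the corresponding entry of E = I – hγfy is computed in double precision, and the contribution of the identity disappears entirely.

```python
# Illustrative values, chosen so that |h*gamma*lam| >> 1.
h, gamma = 1.0e-2, 0.5
lam = -1.0e19                       # a very stiff eigenvalue of the Jacobian fy

e_entry = 1.0 - h * gamma * lam     # the corresponding entry of E = I - h*gamma*fy
print(e_entry == -h * gamma * lam)  # True in IEEE double arithmetic: the 1
                                    # contributed by I is lost in the subtraction
```

The recommended form (1/(hγ))I – fy works instead with the Jacobian at its natural scale, with only the scalar 1/(hγ) added along the diagonal.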
