Saturday, May 31, 2014

More on the Semi-Implicit method

The matrix formulation of the semi-implicit methods, as indicated earlier, has the advantage of avoiding an iterative procedure to converge on a solution. However, the reformulation introduces two additional complications – matrix inversion and the computation of Jacobians.

Matrix inversion can be performed using LU-decomposition followed by LU-back substitution. The Jacobians can either be defined analytically or computed numerically. Analytical definitions are preferred because they avoid the approximation errors of numerical differentiation, with the caveat that they can become a little tedious to program.
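As an illustration of the numerical route, a Jacobian can be approximated with forward differences. The Python sketch below uses a made-up two-equation function that is not from any of the problems discussed here:

```python
import numpy as np

def numerical_jacobian(f, y, eps=1e-8):
    """Approximate J[i][j] = df_i/dy_j by forward differences.
    eps balances truncation error against round-off error."""
    y = np.asarray(y, dtype=float)
    f0 = np.asarray(f(y))
    J = np.zeros((f0.size, y.size))
    for j in range(y.size):
        step = eps * max(1.0, abs(y[j]))   # scale the step to the variable
        yp = y.copy()
        yp[j] += step
        J[:, j] = (np.asarray(f(yp)) - f0) / step
    return J

# Example: f = (y1*y2, y1**2) has the analytical Jacobian [[y2, y1], [2*y1, 0]]
f = lambda y: np.array([y[0] * y[1], y[0] ** 2])
print(numerical_jacobian(f, [2.0, 3.0]))   # close to [[3, 2], [4, 0]]
```

The perturbation size has to balance truncation error against round-off; scaling it to the magnitude of each variable is a common choice.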

Yet another generalized representation of the semi-implicit method can be stated as follows:

[Equations not recovered: the stage equations for the ki, each requiring a solve with the A matrix and the Jacobian, and the update y(n+1) = yn + Σ bi·ki.]

The above representation tends to look more like the generalized RK methods, with the obvious exception of the ‘A’ matrix and the Jacobians!

Thursday, May 29, 2014

A Note on Jacobians

The matrix representation for the Backward Euler solution introduced a partial derivative term in the integration algorithm. These partial derivatives are otherwise referred to as Jacobians.

How does a partial derivative differ from its “impartial” counterpart? A partial derivative is one in which a function of several variables is differentiated with respect to just one of them, while the others are held constant. Partial derivatives bring in the concept of dimensionality in a real-world problem. Heat transfer problems in Mechanical Engineering are a good intuitive example.

Mathematically, let us assume a function

y′ = f(x, y)

Here, f is a function of x and y. A normal derivative is usually taken with respect to x (assuming x is the independent variable), while the partial derivative is with respect to y. In the above case, the partial derivative, or the Jacobian of f with respect to y, is

∂f/∂y

The Jacobian of a function that has multiple dependent variables is thus a matrix of partial derivatives. For example, if f is a function defined by

f = [f1(y1, y2), f2(y1, y2)]

then the partial derivatives are

J = | ∂f1/∂y1   ∂f1/∂y2 |
    | ∂f2/∂y1   ∂f2/∂y2 |

Following the above logic, the Jacobian matrix for a series of n ODE’s y′i = fi(y1, …, yn) can be represented as the n×n matrix of partial derivatives:

Jij = ∂fi/∂yj,  for i, j = 1, …, n
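To make the matrix concrete, here is a small Python sketch of an analytically defined Jacobian for a hypothetical two-equation system (the system itself is made up purely for illustration):

```python
import numpy as np

# Hypothetical two-equation system (for illustration only):
#   y1' = f1 = -2*y1 + y1*y2
#   y2' = f2 =  y2 - y1**2
def f(y):
    y1, y2 = y
    return np.array([-2.0 * y1 + y1 * y2, y2 - y1 ** 2])

# Its Jacobian, written out analytically term by term:
#   J = [[df1/dy1, df1/dy2],     [[-2 + y2,  y1],
#        [df2/dy1, df2/dy2]]  =   [-2*y1,    1 ]]
def jacobian(y):
    y1, y2 = y
    return np.array([[-2.0 + y2, y1],
                     [-2.0 * y1, 1.0]])

print(jacobian([1.0, 2.0]))   # [[0, 1], [-2, 1]]
```

Even for this tiny system the bookkeeping is noticeable, which is why the analytical route gets tedious as the number of equations grows.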

Wednesday, May 28, 2014

Matrix representation of Semi-Implicit Method

Higher order equations, or series of ODE’s, are more easily solved using the Semi-Implicit Method if represented in matrix form. Thus,

A = [I – h·(∂f/∂y)]

Capital letters in the above formulation denote matrices, and I is the identity matrix. Having represented A as a matrix, the Backward Euler can now be formulated as:

y(n+1) = yn + h·A^(-1)·f(yn)

The matrix formulation of the Backward Euler avoids an iterative process, but introduces a matrix inversion. One matrix inversion suffices for the set of ODE’s at each time step, since the partial derivatives are held constant over the step. For a set of n ODE’s, the matrix A is of size n×n, and the variable vector yn is n×1. A matrix-vector multiplication thus needs to be performed to determine y(n+1).

Press et al (Press, Teukolsky, Vetterling & Flannery, Numerical Recipes: The Art of Scientific Computing) recommend LU-decomposition for the matrix inversion process, followed by LU-back substitution. The subroutines LUDCMP and LUBKSB can be used for this process.

The algorithm for the semi-implicit process can be summarized as:

1)    % Define step size (h), initial function y(0), initial time t0, final time tf
2)    nosteps ← (tf – t0)/h
3)    for i ← 1 to nosteps
-       Calculate the matrix A = I – h·(∂f/∂y), with the Jacobian evaluated at y[i-1]
-       Invoke subroutine LUDCMP to factor A
-       Invoke subroutine LUBKSB to solve A·Δ = h·f(y[i-1]) for Δ
-       y[i] ← y[i-1] + Δ
-       t0 ← t0 + h

4)    % Display results
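The algorithm above can be sketched in Python, with SciPy’s lu_factor and lu_solve playing the roles of LUDCMP and LUBKSB. This is a sketch under assumed conventions (f and the Jacobian as functions of y only), not a production integrator:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def semi_implicit_euler(f, jac, y0, t0, tf, h):
    """March y' = f(y) with the semi-implicit (linearized backward) Euler.
    At each step: factor A = I - h*J once, then solve A*dy = h*f(y)."""
    nosteps = int(round((tf - t0) / h))
    y = np.asarray(y0, dtype=float)
    I = np.eye(y.size)
    ys = [y.copy()]
    for _ in range(nosteps):
        A = I - h * jac(y)                    # the 'A' matrix
        lu, piv = lu_factor(A)                # analogous to LUDCMP
        dy = lu_solve((lu, piv), h * f(y))    # analogous to LUBKSB
        y = y + dy
        ys.append(y.copy())
    return np.array(ys)

# Stiff linear test system: y1' = -100*y1, y2' = -y2.  Explicit Euler with
# h = 0.1 would diverge on the first component; this step remains stable.
ys = semi_implicit_euler(
    lambda y: np.array([-100.0 * y[0], -y[1]]),
    lambda y: np.array([[-100.0, 0.0], [0.0, -1.0]]),
    y0=[1.0, 1.0], t0=0.0, tf=1.0, h=0.1)
```

For a linear system the linearization is exact, so this reproduces the Backward Euler solution while staying stable at a step size where the explicit Euler would blow up.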

Tuesday, May 27, 2014

Semi-Implicit Method Using Partial Derivatives

So, how does one avoid iterations and yet solve the Backward Euler method? Press et al (Press, Teukolsky, Vetterling & Flannery, Numerical Recipes: The Art of Scientific Computing) suggest using a linearized form to represent the term f(y(n+1)). Using the principles of basic calculus,
f(y(n+1)) ≈ f(yn) + (∂f/∂y)·(y(n+1) – yn)

The above representation introduces the partial derivative on the right hand side. Using this formulation and re-arranging the terms, the Backward Euler becomes:
[1 – h·(∂f/∂y)]·y(n+1) = [1 – h·(∂f/∂y)]·yn + h·f(yn)

For simple ODE’s that have just one equation, the above semi-implicit form can be easily solved by dividing the equation by the term in the square brackets, and then marching sequentially from one step to the next. The simplified formulation then becomes:

y(n+1) = yn + h·f(yn)/[1 – h·(∂f/∂y)]

with the partial derivative ∂f/∂y evaluated at yn.
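For a single equation, the marching formula above is only a few lines of code. A minimal Python sketch, with the stiff test problem y′ = −25y chosen purely for illustration:

```python
def semi_implicit_scalar(f, dfdy, y0, t0, tf, h):
    """One-equation semi-implicit Euler:
    y(n+1) = y(n) + h*f(y(n)) / (1 - h*dfdy(y(n)))."""
    nosteps = int(round((tf - t0) / h))
    y = y0
    for _ in range(nosteps):
        y = y + h * f(y) / (1.0 - h * dfdy(y))
    return y

# Stiff test problem y' = -25*y, y(0) = 1 with h = 0.1.  Explicit Euler
# would diverge here (|1 - 25*h| = 1.5 > 1), but this marches stably.
y_end = semi_implicit_scalar(lambda y: -25.0 * y, lambda y: -25.0,
                             y0=1.0, t0=0.0, tf=1.0, h=0.1)
print(y_end)   # small and positive, decaying toward zero
```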

Monday, May 26, 2014

The Semi-Implicit Euler's Method

As noted earlier, the solution of ODE’s using implicit methods (such as the Backward Euler’s method) requires an iterative procedure at each time step. Let’s take a second look at the Backward Euler’s method:
y(n+1) = yn + h·f(y(n+1))

 Atkinson et al (Atkinson, Han & Stewart, Numerical Solution of ODE’s) recommend a two-step approach to solving the above equation in order to avoid iterations. The two-step process can be summarized as follows:
ycap(n+1) = yn + h·f(yn)

y(n+1) = yn + h·f(ycap(n+1))

Atkinson et al also indicate that, solved as a two-step process, the method is no longer absolutely stable. That is, while the original Backward Euler method would converge to a solution for any step size, the two-step process would not.

Methods that solve implicit formulations without an iterative process are termed “semi-implicit” methods. The computational effort of iterating at each time step is avoided in the semi-implicit method, while at the same time “reasonable” stability can be expected. Press et al (Press, Teukolsky, Vetterling & Flannery, Numerical Recipes: The Art of Scientific Computing) state, “It is not guaranteed to be stable, but it usually is…”.

Saturday, May 24, 2014

The two-body orbit problem

Orbit equations refer to a class of ODE’s in Astronomy that describe the motion of planets due to the gravitational forces that each body exerts on the others. The two-body orbit equations are represented by the following differential equations (ref: Enright & Pryce, Two FORTRAN Packages for Assessing Initial Value Methods, ACM Transactions on Mathematical Software, Vol. 13, No. 1, pp 1-27, 1987):
u″ = –u/(u² + v²)^(3/2)

v″ = –v/(u² + v²)^(3/2)

The initial values are defined by

u(0) = 1 – ε,  u′(0) = 0,  v(0) = 0,  v′(0) = √((1 + ε)/(1 – ε))

The variable step Verner’s method was employed to solve the two-body problem using an eccentricity (ε) of 0.3 in the initial values. The solutions of velocity versus displacement for u and v are shown below:



[Figure: velocity vs. displacement for u and v]
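For readers who want to reproduce the orbit, the Python sketch below assumes the standard form of the two-body equations (u″ = −u/r³, v″ = −v/r³, r = √(u² + v²)) and the usual eccentricity-based initial values with ε = 0.3; a fixed-step RK4 stands in for the variable step Verner method:

```python
import math

def two_body(s):
    # State s = (u, v, u', v'); r = sqrt(u^2 + v^2)
    u, v, du, dv = s
    r3 = (u * u + v * v) ** 1.5
    return (du, dv, -u / r3, -v / r3)

# Fixed-step classical RK4 as a stand-in for the variable step Verner method.
def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

eps = 0.3
s = (1.0 - eps, 0.0, 0.0, math.sqrt((1.0 + eps) / (1.0 - eps)))
h = 0.001
for _ in range(int(2.0 * math.pi / h)):   # one orbital period is 2*pi
    s = rk4_step(two_body, s, h)
# s should now be close to the initial state again
```

Checking that the state returns to its starting values after one period is a convenient sanity test for any solver applied to this problem.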

Tuesday, May 20, 2014

The simple pendulum and its solution

The motion of an object suspended from a string that allows free swinging can be modeled using Newton’s law. The equation for a simple pendulum, which approximates the motion to a single plane, can be written as follows:
θ″ + (g/L)·sin(θ) = 0

In the above equation, g is the acceleration due to gravity (32.2 ft/s²) and L is the length of the string. Given an initial angular displacement and zero initial velocity, the above equation can be solved to obtain the angular displacement and velocity of the pendulum. The 2nd order DE has to be split into two first order ones in order to obtain a solution.
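The splitting into two first order equations can be sketched in Python as follows. The string length L = 3 ft is an assumed value (the post does not specify one), and a fixed-step RK4 stands in for the Dormand & Prince solver:

```python
import math

# Assumed: string length L = 3 ft (not given in the post); g = 32.2 ft/s^2.
g_over_L = 32.2 / 3.0

# Split the 2nd order DE  theta'' = -(g/L)*sin(theta)  into two 1st order ones:
#   y1' = y2              (y1 = angular displacement, y2 = angular velocity)
#   y2' = -(g/L)*sin(y1)
def pendulum(y):
    return (y[1], -g_over_L * math.sin(y[0]))

# Fixed-step classical RK4 as a stand-in for the Dormand & Prince solver.
def rk4_step(f, y, h):
    k1 = f(y)
    k2 = f((y[0] + 0.5 * h * k1[0], y[1] + 0.5 * h * k1[1]))
    k3 = f((y[0] + 0.5 * h * k2[0], y[1] + 0.5 * h * k2[1]))
    k4 = f((y[0] + h * k3[0], y[1] + h * k3[1]))
    return (y[0] + h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y[1] + h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

y = (math.radians(170.0), 0.0)   # 170 deg initial displacement, zero velocity
for _ in range(2000):            # integrate 2 seconds at h = 0.001
    y = rk4_step(pendulum, y, 0.001)
```

A good check on the solution is that the total energy 0.5·ω² − (g/L)·cos(θ) stays constant, and that the displacement never exceeds its initial amplitude.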

The variable-step method of Dormand & Prince was applied to solve the above IVP. The results for a 170 deg initial displacement are shown below.



[Figure: angular displacement and velocity for a 170 deg initial displacement]

A comparison of the velocity vs displacement for 45 deg and 170 deg initial values is shown below.      



[Figure: velocity vs. displacement for 45 deg and 170 deg initial displacements]

Wednesday, May 14, 2014

Variable step Fehlberg’s Method –example

Problem B2 from W.H. Enright & J.D. Pryce (Two Fortran Packages For Assessing Initial Value Methods, ACM Trans. On Mathematical Software, Vol. 13, No. 1, 1987, pp 1-27) was solved using the Variable-Step Fehlberg’s method. The equations are:

[Equations: the ODE’s of problem B2 – not recovered]

The initial conditions are 1 for all equations, with α = 3. A TOL of 1E-4 was used for the calculations, with a Maxbound of 1.1 and a Minbound of 0.9. The results are shown below:

[Figure: results for problem B2]

Tuesday, May 13, 2014

The method of Verner

An example of a 7-stage method with an 8-stage error estimation was developed by Verner (Explicit Runge-Kutta Methods with Estimates of the Local Truncation Error, SIAM Journal on Numerical Analysis, Vol. 15, No. 4, Aug. 1978, pp. 772-790). The Butcher tableau is shown below.

[Butcher tableau for Verner’s method – not recovered]

Once again, the generalized RK methodology can be used quite easily with the above table for Verner’s method. The error estimate is obtained using the ycap coefficients from the last row of the tableau, and can be used to develop a variable step algorithm.

Monday, May 12, 2014

The method of Dormand & Prince

Higher order RK methods that involve 6 or more stages of calculation at each time step are the next logical procedures. One of these methods, with a 6-stage calculation followed by a 7-stage error estimation, was developed by Dormand & Prince (Dormand & Prince, A family of embedded Runge-Kutta formulae, Journal of Computational and Applied Mathematics, Vol. 6, No. 1, pp 19-26, 1980). The Butcher tableau for the Dormand & Prince method is shown below.

[Butcher tableau for the Dormand & Prince method – not recovered]

Sunday, May 11, 2014

The Variable step Method of Fehlberg

In order to incorporate a variable step algorithm for Fehlberg’s method, an equation for error estimation has to be included. The Butcher tableau (ref: Hairer, Nørsett & Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems) with the ycap coefficients is:

[Butcher tableau for the Fehlberg method with the ycap coefficients – not recovered]

The generalized RK method can then be used with the Kaps-Rentrop algorithm for adaptive step size to solve ODEs.                                                                                                                       

Friday, May 9, 2014

The Brusselator Problem

The Brusselator is a class of ODE’s that describe an autocatalytic chemical reaction. The ODE’s that define the Brusselator, as presented in Hairer et al (Solving Ordinary Differential Equations I: Nonstiff Problems), are:

y1′ = 1 + y1²·y2 – 4·y1

y2′ = 3·y1 – y1²·y2

with initial values y1(0)=1.5 and y2(0)=3. The RK-Merson method with the Kaps & Rentrop variable step size algorithm was used to solve the Brusselator problem. An initial step size of 0.001 with a TOL of 0.001 was used.

The results are shown below. The first figure shows the solutions of y1 and y2 as a function of x. The power of the variable step size is evident from the “dots” in the figure. Extremely small steps are required to initiate the numerical process near the start (at t = 0), while fairly generous step sizes can be used as the integration progresses. Had a fixed step size algorithm been used for the Brusselator, the step size would have had to remain small throughout to avoid numerical instabilities, increasing the computational cost significantly.

[Figures: solutions of y1 and y2 vs. x, with dots marking the steps taken by the variable step algorithm]
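The behavior described above is easy to reproduce. The sketch below assumes the usual Brusselator parameters A = 1 and B = 3 (the post gives only the initial values), and uses SciPy’s adaptive RK45 in place of RK-Merson with Kaps-Rentrop step control:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Brusselator right-hand side with the assumed parameters A = 1, B = 3:
#   y1' = A + y1^2*y2 - (B + 1)*y1
#   y2' = B*y1 - y1^2*y2
def brusselator(t, y):
    y1, y2 = y
    return [1.0 + y1 * y1 * y2 - 4.0 * y1, 3.0 * y1 - y1 * y1 * y2]

# SciPy's adaptive RK45 stands in for RK-Merson with Kaps-Rentrop control.
sol = solve_ivp(brusselator, (0.0, 20.0), [1.5, 3.0], rtol=1e-6, atol=1e-9)

# The spread in the accepted step sizes shows the adaptation at work:
steps = np.diff(sol.t)
print(steps.min(), steps.max())
```

Printing the spacing between accepted steps shows the pattern described above: small steps where the solution changes rapidly, generous ones elsewhere.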

Thursday, May 8, 2014

Kaps-Rentrop’s Variable Step Size Procedure

The need for a variable stepsize control becomes evident when solving a series of ODE’s whose solutions have varying settling times. The stepsize requirement is dictated by the variable with the shortest settling time, while that small step may be overkill for the variables with slower settling times. Press et al (Numerical Recipes: The Art of Scientific Computing) state it beautifully: “many small steps should tiptoe through treacherous terrain, while a few great strides should speed through smooth uninteresting countryside”.

P. Kaps & P. Rentrop (Generalized Runge-Kutta Methods of Order Four with Stepsize Control for Stiff ODE’s, Numerische Mathematik, Vol. 33, 1979, pp. 55-68) have suggested the following stepsize control for adaptive steps in the integration of ODE’s.

Error estimation (and control) is performed by using two separate calculations with coefficients from the Butcher table as described in the Merson’s method. Having calculated y(n+1) and ycap(n+1),

1-      Define a suitable scaling factor

yscal_i = max(C, |yn_i|)

2-      Calculate the local Error estimate, EST

EST = max_i |y(n+1)_i – ycap(n+1)_i| / yscal_i

3-      Given a user supplied tolerance TOL, the new step size can be calculated as

hnew = SF · h · (TOL/EST)^(1/4)

4-      The following algorithm can then be used for stepsize control:

Do Until EST <= TOL
-       Calculate hnew
-       Recalculate y(n+1) and ycap(n+1) with hnew

The step is not accepted until EST <= TOL; hnew is progressively reduced until that condition is met. Once it is, the same formulation for hnew is applied one more time to choose the step size for advancing to the next step.


Kaps & Rentrop suggest C = 1 for the scaling vector, and a safety factor SF = 0.9 to make a conservative guess for the new step. They also suggest bounding the stepsize with min and max delimiters, as outlined in the Euler’s variable step algorithm.
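The control loop can be sketched in Python as follows. The scaling choice, the error exponent 1/(order+1), and the min/max bounds are typical values, not necessarily the exact ones from the Kaps & Rentrop paper, and the Euler/Heun pair in the demo is purely illustrative:

```python
def adaptive_step(step_pair, y, h, tol, order=4, sf=0.9,
                  minscale=0.2, maxscale=5.0):
    """Take one accepted step with Kaps-Rentrop style size control.
    step_pair(y, h) returns (y_new, ycap_new), the solution and its
    embedded estimate.  Returns (y_new, h_used, h_next)."""
    while True:
        y_new, ycap_new = step_pair(y, h)
        scal = max(abs(y), abs(y_new), 1.0)      # a simple scaling choice
        est = abs(y_new - ycap_new) / scal       # local error estimate EST
        if est == 0.0:
            return y_new, h, h * maxscale
        scale = sf * (tol / est) ** (1.0 / (order + 1))
        scale = min(max(scale, minscale), maxscale)   # bound the change
        if est <= tol:
            return y_new, h, h * scale   # accept; h*scale is tried next
        h = h * scale                    # reject; shrink and retry

# Demo with an illustrative Euler/Heun embedded pair on y' = -y, y(0) = 1:
def euler_heun(y, h):
    k1 = -y
    k2 = -(y + h * k1)
    return y + h * (k1 + k2) / 2.0, y + h * k1   # (2nd order, 1st order)

t, y, h = 0.0, 1.0, 0.5
while t < 5.0:
    y, h_used, h = adaptive_step(euler_heun, y, min(h, 5.0 - t),
                                 tol=1e-5, order=1)
    t += h_used
```

Starting from a deliberately large h = 0.5, the controller rejects and shrinks the first few steps, then grows the step as the solution flattens out.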

Tuesday, May 6, 2014

The RK-Merson Method

R.H. Merson (An operational method for the study of integration processes, Proc. Symp. Data Processing, Weapons Research Establishment, Salisbury, Australia, 1957, pp 110-125) came up with an RK formulation of order 4. The Butcher tableau for Merson’s method (ref: Hairer, Nørsett & Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, 2nd ed) is shown below:

0    |
1/3  | 1/3
1/3  | 1/6    1/6
1/2  | 1/8    0      3/8
1    | 1/2    0     -3/2    2
-----+-----------------------------------
y    | 1/6    0      0      2/3    1/6
ycap | 1/10   0      3/10   2/5    1/5


Notice the second row (ycap) at the end of the tableau. By using the 4th order formulation, followed by a 5th order estimate from the new coefficients in the last row, an error estimate can be calculated.

The generalized Butcher tableau with error estimation then becomes

[Generalized Butcher tableau with the additional row of ycap coefficients – not recovered]

The calculations for ki’s and y(n+1) remain the same from the Generalized RK procedure. The error estimation equation is defined by

ycap(n+1) = yn + h·Σ ycapi·ki

The difference between the two values of y then serves as an estimate of the error. A variable step algorithm can then be developed based on the error estimate, similar to the one presented for the Euler’s method.
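A single Merson step with its embedded error estimate can be sketched as below, using the coefficients as commonly tabulated for the method:

```python
import math

def merson_step(f, y, h):
    """One step of the RK-Merson pair: returns the 4th order solution
    y(n+1) and the embedded error estimate y(n+1) - ycap(n+1)."""
    k1 = f(y)
    k2 = f(y + h / 3.0 * k1)
    k3 = f(y + h / 6.0 * k1 + h / 6.0 * k2)
    k4 = f(y + h / 8.0 * k1 + 3.0 * h / 8.0 * k3)
    k5 = f(y + h / 2.0 * k1 - 3.0 * h / 2.0 * k3 + 2.0 * h * k4)
    y_next = y + h * (k1 + 4.0 * k4 + k5) / 6.0
    err = h * (2.0 * k1 - 9.0 * k3 + 8.0 * k4 - k5) / 30.0
    return y_next, err

# Quick check on y' = y, y(0) = 1, one step of h = 0.1:
y1, err = merson_step(lambda y: y, 1.0, 0.1)
print(abs(y1 - math.exp(0.1)), abs(err))   # both should be very small
```

The err value is exactly the difference between the two last-row combinations of the ki, so it can feed directly into a variable step controller like the one above.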

Sunday, May 4, 2014

The Nystrøm Method – An Example

Problem C1 (Class C: Non-linear coupling) from W.H. Enright & J.D. Pryce (Two Fortran Packages For Assessing Initial Value Methods, ACM Trans. On Mathematical Software, Vol. 13, No. 1, 1987, pp 1-27) was solved using the RK-Nystrøm’s method. The ODE’s for the IVP are:

[Equations: the four ODE’s of problem C1 – not recovered]


with initial values of 1 for all y’s. A constant step size of 1E-2 was used. The results are plotted below.

[Figure: results for problem C1]