A function that obeys linearity can be expressed as the sum of two other functions, given certain constraints. These constraints are typically the boundary values of the original function, from which the two secondary functions can be derived.
Consider a function f(x), defined on the interval [a, b]. Linearity allows two additional functions, f1(x) and f2(x), to be defined such that
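One such decomposition, taking δ as a single scaling constant (an assumed form, used for the rest of this discussion), is

\[
f(x) = f_1(x) + \delta\, f_2(x), \qquad x \in [a, b].
\]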
Several choices of f1(x) and f2(x) satisfy the above condition. Considering the simplest linear case, a value for the constant δ can be derived with a few general assumptions. Each of the functions f1(x) and f2(x) can be constructed so that it meets one of the boundary requirements individually. Namely,
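assuming, for instance, that f1(x) reproduces the boundary value at x = a while f2(x) reproduces the one at x = b (and vanishes at x = a so that the first condition is undisturbed), these requirements can be written as

\[
f_1(a) = f(a), \qquad f_2(b) = f(b), \qquad f_2(a) = 0.
\]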
Substituting either one of these conditions for f1(x) or
f2(x) into the original equation for f(x), an expression for the constant can
be obtained:
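Under the conditions assumed above, evaluating the decomposition at x = b gives one such expression (valid when f(b) ≠ 0):

\[
f(b) = f_1(b) + \delta\, f_2(b) = f_1(b) + \delta\, f(b)
\quad\Longrightarrow\quad
\delta = \frac{f(b) - f_1(b)}{f(b)}.
\]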
The original function f(x) can then be constructed as a linear sum of f1(x) and f2(x). An example of one such construction is sketched below. This concept of linear superposition comes in handy while solving boundary value ODEs.
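As a rough sketch of one such construction (an assumed set-up for illustration, not a specific worked example), the Python snippet below applies the superposition to a linear boundary value ODE, f'' + f = 0 on [0, π/2] with f(0) = 1 and f(π/2) = 3: f1 is integrated from x = a with the correct boundary value there, an auxiliary solution is scaled so that f2(b) = f(b) and f2(a) = 0, and δ follows from the expression above.

# Sketch: superposition construction for a linear BVP, f'' + f = 0 on [a, b],
# with assumed boundary values f(a) = fa and f(b) = fb.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 0.0, np.pi / 2
fa, fb = 1.0, 3.0                      # boundary values f(a), f(b)

def ode(x, y):
    # y[0] = f, y[1] = f'; the ODE f'' + f = 0 as a first-order system
    return [y[1], -y[0]]

xs = np.linspace(a, b, 201)

# f1: meets the boundary requirement at x = a (f1(a) = f(a)); slope chosen arbitrarily
sol1 = solve_ivp(ode, (a, b), [fa, 0.0], t_eval=xs, rtol=1e-9, atol=1e-12)
f1 = sol1.y[0]

# z: auxiliary solution with z(a) = 0, scaled so that f2(b) = f(b) and f2(a) = 0
solz = solve_ivp(ode, (a, b), [0.0, 1.0], t_eval=xs, rtol=1e-9, atol=1e-12)
z = solz.y[0]
f2 = fb * z / z[-1]

# Constant from the boundary condition at x = b: delta = (f(b) - f1(b)) / f(b)
delta = (fb - f1[-1]) / fb

# Superposition: f = f1 + delta * f2 satisfies both boundary values
f = f1 + delta * f2
print("f(a) =", f[0], " f(b) =", f[-1])   # expect ~1.0 and ~3.0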