8.2 The Levenberg-Marquardt Method
One of the methods currently implemented in FEBio's optimization module is the constrained Levenberg-Marquardt method, provided by the levmar library (see https://users.ics.forth.gr/~lourakis/levmar/ for more information on this library).
The Levenberg-Marquardt (LM) method is a numerical algorithm that minimizes a function defined as a sum of squares of nonlinear functions, i.e. the objective function as defined above. It combines the steepest-descent method with the Gauss-Newton method to find the parameters that minimize the objective function.
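For reference, the sum-of-squares objective referred to here (defined in Section 8.1) presumably has the standard form, where $y_i$ are the measured values and $f(x_i; \mathbf{a})$ are the corresponding model predictions for the parameter vector $\mathbf{a}$:

```latex
S(\mathbf{a}) = \sum_{i=1}^{m} \bigl( y_i - f(x_i; \mathbf{a}) \bigr)^2
```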
The LM method requires a set of measured values and an initial guess for the parameter vector $\mathbf{a}$. It then tries to find a better estimate for $\mathbf{a}$ by replacing it with $\mathbf{a} + \delta\mathbf{a}$. The function $\mathbf{f}$ is linearly approximated,

$\mathbf{f}(\mathbf{a} + \delta\mathbf{a}) \approx \mathbf{f}(\mathbf{a}) + \mathbf{J}\,\delta\mathbf{a},$
where $\mathbf{J}$ is the Jacobian of $\mathbf{f}$ with respect to $\mathbf{a}$. Substituting this in the objective function and minimizing with respect to $\delta\mathbf{a}$ leads to,

$\bigl(\mathbf{J}^T \mathbf{J}\bigr)\,\delta\mathbf{a} = \mathbf{J}^T (\mathbf{y} - \mathbf{f}),$
where $\mathbf{y}$ is the vector of measured values $y_i$ and $\mathbf{f}$ is the vector of model values $f(x_i; \mathbf{a})$.
The main idea of the LM method is to replace this linear system with the following damped version:

$\bigl(\mathbf{J}^T \mathbf{J} + \mu \mathbf{I}\bigr)\,\delta\mathbf{a} = \mathbf{J}^T (\mathbf{y} - \mathbf{f})$
Here, $\mu$ is a damping parameter that is controlled by the algorithm. When $\mu$ is small, the method approximates the Gauss-Newton method; when $\mu$ is large, it is closer to a steepest-descent method. The algorithm tries to modify $\mu$ such that an improvement to the parameter vector $\mathbf{a}$ is found in each iteration. The method terminates when the value of the objective function falls below a user-specified tolerance (the obj_tol parameter in FEBio).
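To make the iteration concrete, here is a minimal Python sketch of one common Levenberg-Marquardt strategy. This is an illustration only, not FEBio's or levmar's actual implementation: the model f(x; a) = a0*exp(a1*x), the halving/doubling rule for the damping parameter mu, and all numerical values are assumptions made for the example.

```python
import numpy as np

def f(x, a):
    # Hypothetical model to fit: f(x; a) = a0 * exp(a1 * x)
    return a[0] * np.exp(a[1] * x)

def jacobian(x, a):
    # Analytic Jacobian df/da, one row per data point.
    return np.column_stack([np.exp(a[1] * x), a[0] * x * np.exp(a[1] * x)])

def levmar_fit(x, y, a0, mu=1e-3, max_iter=100, obj_tol=1e-12):
    a = np.asarray(a0, dtype=float)
    obj = np.sum((y - f(x, a)) ** 2)
    for _ in range(max_iter):
        J = jacobian(x, a)
        r = y - f(x, a)
        # Damped normal equations: (J^T J + mu*I) da = J^T (y - f)
        da = np.linalg.solve(J.T @ J + mu * np.eye(len(a)), J.T @ r)
        new_obj = np.sum((y - f(x, a + da)) ** 2)
        if new_obj < obj:
            # Step accepted: shrink mu (move toward Gauss-Newton).
            a, obj, mu = a + da, new_obj, mu * 0.5
        else:
            # Step rejected: grow mu (move toward steepest descent).
            mu *= 2.0
        if obj < obj_tol:
            break
    return a

# Noise-free synthetic measurements for the example.
x = np.linspace(0.0, 1.0, 20)
true_a = np.array([2.0, -1.5])
y = f(x, true_a)
a_fit = levmar_fit(x, y, [1.0, -1.0])
```

Note how the single parameter mu interpolates between the two regimes: with mu = 0 the update is the pure Gauss-Newton step, while for large mu the system is dominated by mu*I and the step approaches a small steepest-descent step along J^T (y - f).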
The evaluation of the Jacobian requires the derivatives of $\mathbf{f}$ with respect to $\mathbf{a}$. These derivatives are approximated via forward difference formulas. For example, the $i$-th component of the gradient is approximated as follows:

$\dfrac{\partial f}{\partial a_i} \approx \dfrac{f(x; \mathbf{a} + \Delta a_i \mathbf{e}_i) - f(x; \mathbf{a})}{\Delta a_i}$
The value for $\Delta a_i$ is determined from the following formula:

$\Delta a_i = s\, a_i,$

where $s$ is the forward difference scale factor (the fdiff_scale option in FEBio). In FEBio, the initial value for the damping parameter $\mu$ can be set with the tau parameter.
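The forward-difference approximation of the Jacobian can be sketched as follows. This is an illustrative Python sketch, not FEBio's code: the step rule da = s * a_i follows the text above, but the fallback step used when a parameter is exactly zero is an assumption, since the manual does not cover that case here.

```python
import numpy as np

def forward_diff_jacobian(f, a, s=0.001):
    """Approximate the Jacobian df/da column by column with forward differences.

    s plays the role of the forward difference scale factor (fdiff_scale);
    the fallback step for a_i == 0 is an assumption, not documented behavior.
    """
    a = np.asarray(a, dtype=float)
    f0 = np.atleast_1d(f(a))
    J = np.empty((f0.size, a.size))
    for i in range(a.size):
        da = s * a[i] if a[i] != 0.0 else s  # assumed fallback for a_i == 0
        ap = a.copy()
        ap[i] += da                          # perturb one parameter
        J[:, i] = (np.atleast_1d(f(ap)) - f0) / da
    return J

# Example: f(a) = [a0*a1, a0**2]; the exact Jacobian at a = (2, 3)
# is [[3, 2], [4, 0]], so the approximation should be close to that.
J = forward_diff_jacobian(lambda a: np.array([a[0] * a[1], a[0] ** 2]), [2.0, 3.0])
```

Because forward differences have truncation error of order $\Delta a_i$, a smaller fdiff_scale gives a more accurate Jacobian, up to the point where floating-point cancellation in the numerator starts to dominate.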