**Gauss-Newton and Conjugate-Gradient Optimization**
This code implements Gauss-Newton
optimization of objective functions that can
be iteratively approximated by quadratics.
The approach is particularly well suited to
least-squares inversion of moderately
nonlinear transforms. The package also
includes conjugate-gradient and line-search
optimizations.
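The three ingredients above can be sketched together in a small standalone Java example. This is only an illustration of the general technique, not the API of this package: it fits the hypothetical model y = a*exp(b*x) to data by Gauss-Newton, solves each quadratic subproblem with conjugate gradients, and safeguards each update with a backtracking line search.

```java
// Illustrative sketch (not this package's API): a Gauss-Newton outer loop
// whose quadratic subproblem is solved by conjugate gradients, with a
// backtracking line search on each update. Fits y = a*exp(b*x) to data.
public class GaussNewtonDemo {

  // Sum of squared residuals for the model y = a*exp(b*x).
  static double sse(double[] x, double[] y, double a, double b) {
    double s = 0.0;
    for (int i = 0; i < x.length; ++i) {
      double r = a*Math.exp(b*x[i]) - y[i];
      s += r*r;
    }
    return s;
  }

  // Conjugate gradients on the 2x2 SPD system A dx = rhs, where
  // A = J'J is given by its entries (a00, a01, a11).
  static double[] cg2(double a00, double a01, double a11,
                      double rhs0, double rhs1) {
    double x0 = 0, x1 = 0;          // initial guess dx = 0
    double r0 = rhs0, r1 = rhs1;    // residual rhs - A*dx
    double p0 = r0, p1 = r1;        // first search direction
    double rr = r0*r0 + r1*r1;
    for (int k = 0; k < 2 && rr > 1e-30; ++k) { // exact in <= 2 steps
      double ap0 = a00*p0 + a01*p1;
      double ap1 = a01*p0 + a11*p1;
      double alpha = rr/(p0*ap0 + p1*ap1);
      x0 += alpha*p0;  x1 += alpha*p1;
      r0 -= alpha*ap0; r1 -= alpha*ap1;
      double rrNew = r0*r0 + r1*r1;
      double beta = rrNew/rr;
      p0 = r0 + beta*p0;  p1 = r1 + beta*p1;
      rr = rrNew;
    }
    return new double[]{x0, x1};
  }

  // Gauss-Newton: linearize the residuals, solve J'J dx = -J'r by CG,
  // then backtrack the step until the objective decreases.
  public static double[] fit(double[] x, double[] y, double a, double b) {
    for (int iter = 0; iter < 50; ++iter) {
      double jtj00 = 0, jtj01 = 0, jtj11 = 0, g0 = 0, g1 = 0;
      for (int i = 0; i < x.length; ++i) {
        double e = Math.exp(b*x[i]);
        double r = a*e - y[i];     // residual
        double j0 = e;             // d(residual)/da
        double j1 = a*x[i]*e;      // d(residual)/db
        jtj00 += j0*j0; jtj01 += j0*j1; jtj11 += j1*j1;
        g0 += j0*r;     g1 += j1*r;
      }
      double[] dx = cg2(jtj00, jtj01, jtj11, -g0, -g1);
      double f = sse(x, y, a, b), t = 1.0;
      while (t > 1e-12 && sse(x, y, a + t*dx[0], b + t*dx[1]) > f)
        t *= 0.5;                  // backtracking line search
      a += t*dx[0]; b += t*dx[1];
      if (t*(Math.abs(dx[0]) + Math.abs(dx[1])) < 1e-12) break;
    }
    return new double[]{a, b};
  }

  public static void main(String[] args) {
    double[] x = {0.0, 0.5, 1.0, 1.5, 2.0};
    double[] y = new double[x.length];
    for (int i = 0; i < x.length; ++i)
      y[i] = 2.0*Math.exp(0.5*x[i]);   // noise-free data: a=2, b=0.5
    double[] ab = fit(x, y, 1.0, 0.1); // deliberately poor starting guess
    System.out.printf("a=%.4f b=%.4f%n", ab[0], ab[1]);
  }
}
```

The line search matters here: a full Gauss-Newton step from a poor starting guess can overshoot on a nonlinear model, and halving the step until the sum of squares decreases keeps the iteration stable.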
Documentation of the algorithm is available here:
[[../../papers/inv/inv.html]] [[../../papers/inv.pdf]] [[../../papers/inv.ps.gz]]
Several papers describe ways to use this code:
[[../../papers/regularization.pdf]] [[../../papers/regularization/]]
[[../../papers/neural.pdf]] [[../../papers/neural/]]
[[../../papers/rmsinv.pdf]] [[../../papers/rmsinv/]]
An older C++ version is available at
[[../conjugate_gradients/]]. See the Java
documentation in the [[documentation/]] subdirectory.
The current version of this code is now part
of the Mines Java Toolkit at
https://github.com/MinesJTK/jtk
in the edu.mines.jtk.opt package,
with code in
https://github.com/MinesJTK/jtk/tree/master/core/src/main/java/edu/mines/jtk/opt
and documentation in
https://github.com/MinesJTK/jtk/tree/master/docs/opt_package
An older public version is available from
http://code.google.com/p/optimal/