Gauss-Newton and Conjugate-Gradient optimization

This code implements a Gauss-Newton optimization of objective functions that can be iteratively approximated by quadratics. This approach is particularly appropriate for least-squares inversions of moderately non-linear transforms. You will also find code for conjugate-gradient and line-search optimizations.
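To illustrate the idea, here is a minimal sketch of Gauss-Newton iteration with a backtracking line search, applied to a toy two-parameter exponential fit. This is not the API of this code or of the Mines Java Toolkit; the class, model, and parameter names are all assumptions for illustration.

```java
// Toy Gauss-Newton sketch: fit y = a*exp(b*x) to data by iteratively
// linearizing the model and solving the normal equations (J^T J) d = J^T r.
// Illustrative only; not this library's API.
public class GaussNewtonDemo {

    // Sum of squared residuals for the model y ~ a*exp(b*x).
    static double sse(double[] x, double[] y, double a, double b) {
        double s = 0.0;
        for (int i = 0; i < x.length; ++i) {
            double r = y[i] - a * Math.exp(b * x[i]);
            s += r * r;
        }
        return s;
    }

    // Gauss-Newton with a simple backtracking line search on the step.
    static double[] fit(double[] x, double[] y, double a, double b) {
        for (int iter = 0; iter < 100; ++iter) {
            // Accumulate the 2x2 normal equations.
            double jtj00 = 0, jtj01 = 0, jtj11 = 0, jtr0 = 0, jtr1 = 0;
            for (int i = 0; i < x.length; ++i) {
                double e = Math.exp(b * x[i]);
                double r = y[i] - a * e;      // residual
                double j0 = e;                // partial of model w.r.t. a
                double j1 = a * x[i] * e;     // partial of model w.r.t. b
                jtj00 += j0 * j0; jtj01 += j0 * j1; jtj11 += j1 * j1;
                jtr0  += j0 * r;  jtr1  += j1 * r;
            }
            // Solve the symmetric 2x2 system by Cramer's rule.
            double det = jtj00 * jtj11 - jtj01 * jtj01;
            double da = ( jtj11 * jtr0 - jtj01 * jtr1) / det;
            double db = (-jtj01 * jtr0 + jtj00 * jtr1) / det;
            // Line search: halve the step until the misfit decreases.
            double s0 = sse(x, y, a, b), t = 1.0;
            while (t > 1e-8 && sse(x, y, a + t * da, b + t * db) >= s0)
                t *= 0.5;
            a += t * da; b += t * db;
            if (t * (Math.abs(da) + Math.abs(db)) < 1e-12) break;
        }
        return new double[] {a, b};
    }

    public static void main(String[] args) {
        // Synthetic data from a = 2, b = 0.5; start from a poor guess.
        double[] x = {0, 1, 2, 3, 4};
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; ++i) y[i] = 2.0 * Math.exp(0.5 * x[i]);
        double[] ab = fit(x, y, 1.0, 0.1);
        System.out.printf("a=%.4f b=%.4f%n", ab[0], ab[1]);
    }
}
```

For larger problems, the normal equations would be solved iteratively (e.g. by conjugate gradients) rather than directly, which is the combination this code implements.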

Get documentation of the algorithm here: [ ../../papers/inv/inv.html ] [ ../../papers/inv.pdf ] [ ../../papers/ ]

Several papers describe ways to use this code: [ ../../papers/regularization.pdf ] [ ../../papers/regularization/ ] [ ../../papers/neural.pdf ] [ ../../papers/neural/ ] [ ../../papers/rmsinv.pdf ] [ ../../papers/rmsinv/ ]

See an older C++ version: [ ../conjugate_gradients/ ]

See the Java documentation in the documentation subdirectory: [ documentation/ ]

The current version of this code is now part of the Mines Java Toolkit, in the edu.mines.jtk.opt package.

An older public version is also available.

Files include:
  license.txt
  README
  src.tar.gz

