PhysicsBasedAnimationToolkit 0.0.10
Cross-platform C++20 library of algorithms and data structures commonly used in computer graphics research on physically-based simulation.
pbat::math::optimization::Newton< TScalar > Struct Template Reference

Newton's method for optimization. More...

#include <Newton.h>

Public Member Functions

 Newton (int nMaxIters=10, TScalar gtol=TScalar(1e-4), Index n=0)
 Construct a new Newton optimizer.
 
template<class FPrepareDerivatives, class FObjective, class FGradient, class FHessianInverseProduct, class TDerivedX>
TScalar Solve (FPrepareDerivatives prepareDerivatives, FObjective f, FGradient g, FHessianInverseProduct Hinv, Eigen::MatrixBase< TDerivedX > &xk, std::optional< BackTrackingLineSearch< TScalar > > lineSearch=std::nullopt)
 Solve the optimization problem using Newton's method.
 

Public Attributes

int nMaxIters
 Maximum number of iterations for the Newton solver.
 
TScalar gtol2
 Gradient squared norm threshold for convergence.
 
Eigen::Vector< TScalar, Eigen::Dynamic > dxk
 Step direction.
 
Eigen::Vector< TScalar, Eigen::Dynamic > gk
 Gradient at current iteration.
 

Detailed Description

template<class TScalar = Scalar>
struct pbat::math::optimization::Newton< TScalar >

Newton's method for optimization.

Template Parameters
TScalar	Scalar type

Constructor & Destructor Documentation

◆ Newton()

template<class TScalar>
pbat::math::optimization::Newton< TScalar >::Newton ( int nMaxIters = 10,
TScalar gtol = TScalar(1e-4),
Index n = 0 )
inline

Construct a new Newton optimizer.

Parameters
nMaxIters	Maximum number of iterations for the Newton solver
gtol	Gradient norm threshold for convergence
n	Number of degrees of freedom

Member Function Documentation

◆ Solve()

template<class TScalar>
template<class FPrepareDerivatives, class FObjective, class FGradient, class FHessianInverseProduct, class TDerivedX>
TScalar pbat::math::optimization::Newton< TScalar >::Solve ( FPrepareDerivatives prepareDerivatives,
FObjective f,
FGradient g,
FHessianInverseProduct Hinv,
Eigen::MatrixBase< TDerivedX > & xk,
std::optional< BackTrackingLineSearch< TScalar > > lineSearch = std::nullopt )
inline

Solve the optimization problem using Newton's method.

Template Parameters
FPrepareDerivatives	Callable type with signature prepareDerivatives(xk) -> void
FObjective	Callable type for the objective function with signature f(xk) -> fk
FGradient	Callable type for the gradient with signature g(xk) -> gk
FHessianInverseProduct	Callable type for the Hessian inverse product with signature Hinv(xk, gk) -> dxk
TDerivedX	Derived type for the input iterate
Parameters
prepareDerivatives	Derivative (pre)computation function
f	Objective function
g	Gradient function
Hinv	Hessian inverse product function
xk	Current iterate
lineSearch	Optional line search object
Returns
Squared norm of the gradient at the final iterate

The documentation for this struct was generated from the following file: Newton.h