Turi Create  4.0
Utility Functions

Functions

void turi::optimization::set_default_solver_options (const first_order_opt_interface &model, const DenseVector &point, const std::string solver, std::map< std::string, flexible_type > &opts)
 
double turi::optimization::compute_residual (const DenseVector &gradient)
 
double turi::optimization::compute_residual (const SparseVector &gradient)
 
bool turi::optimization::check_hessian (second_order_opt_interface &model, const DenseVector &point, const DenseMatrix &hessian)
 
bool turi::optimization::check_gradient (first_order_opt_interface &model, const DenseVector &point, SparseVector &gradient, const size_t mbStart=0, const size_t mbSize=(size_t)(-1))
 
bool turi::optimization::check_gradient (first_order_opt_interface &model, const DenseVector &point, const DenseVector &gradient, const size_t mbStart=0, const size_t mbSize=(size_t)(-1))
 
std::string turi::optimization::translate_solver_status (const OPTIMIZATION_STATUS &status)
 
void turi::optimization::log_solver_summary_stats (const solver_return &stats, bool simple_mode=false)
 
template<typename L , typename R >
void turi::optimization::vector_add (L &left, const R &right)
 

Detailed Description

Function Documentation

◆ check_gradient() [1/2]

bool turi::optimization::check_gradient ( first_order_opt_interface model,
const DenseVector &  point,
SparseVector &  gradient,
const size_t  mbStart = 0,
const size_t  mbSize = (size_t)(-1) 
)

Check the sparse gradient of first_order_opt_interface models at a point.

The function lets you check that model.compute_gradient is accurately implemented.

The check uses central differences to compute a numerical gradient. The analytic gradient must agree with it to within a 1e-3 relative tolerance. The notion of relative tolerance is tricky, especially when gradients are very large or very small.

Parameters
[in]modelAny model with a first order optimization interface.
[in]pointPoint at which to check the gradient.
[in]gradientSparse gradient computed analytically at "point"
[in]mbStartMinibatch start index
[in]mbSizeMinibatch size
Returns
bool True if gradient is correct to 1e-3 tolerance.
Note
The model cannot be const because model.gradient() need not be const.
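The central-difference check described above can be sketched in isolation. The function f and its analytic gradient below are illustrative stand-ins (the real check would call the model's compute_function_value and compute_gradient); the step size h is also an assumption, not necessarily the library's choice.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;

// Hypothetical stand-ins for a model's objective and analytic gradient.
double f(const Vec& x) { return x[0] * x[0] + 3.0 * x[0] * x[1]; }
Vec analytic_grad(const Vec& x) { return {2.0 * x[0] + 3.0 * x[1], 3.0 * x[0]}; }

// Per coordinate i, compare the analytic gradient against the central
// difference (f(x + h e_i) - f(x - h e_i)) / (2h), with a relative tolerance.
bool check_gradient_sketch(const Vec& point, double h = 1e-5,
                           double rtol = 1e-3) {
  Vec g = analytic_grad(point);
  for (std::size_t i = 0; i < point.size(); ++i) {
    Vec plus = point, minus = point;
    plus[i] += h;
    minus[i] -= h;
    double numeric = (f(plus) - f(minus)) / (2.0 * h);
    // Guard against division by tiny gradients when forming a relative error.
    double scale = std::max({1.0, std::fabs(numeric), std::fabs(g[i])});
    if (std::fabs(numeric - g[i]) / scale > rtol) return false;
  }
  return true;
}
```

For a quadratic objective the central difference is exact up to rounding, so the check passes comfortably within the 1e-3 tolerance.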

◆ check_gradient() [2/2]

bool turi::optimization::check_gradient ( first_order_opt_interface model,
const DenseVector &  point,
const DenseVector &  gradient,
const size_t  mbStart = 0,
const size_t  mbSize = (size_t)(-1) 
)

Check the dense gradient of first_order_opt_interface models at a point.

The function lets you check that model.compute_gradient is accurately implemented.

The check uses central differences to compute a numerical gradient. The analytic gradient must agree with it to within a 1e-3 relative tolerance. The notion of relative tolerance is tricky, especially when gradients are very large or very small.

Parameters
[in]modelAny model with a first order optimization interface.
[in]pointPoint at which to check the gradient.
[in]gradientDense gradient computed analytically at "point"
[in]mbStartMinibatch start index
[in]mbSizeMinibatch size
Returns
bool True if gradient is correct to 1e-3 tolerance.

◆ check_hessian()

bool turi::optimization::check_hessian ( second_order_opt_interface model,
const DenseVector &  point,
const DenseMatrix &  hessian 
)

Check the hessian of second_order_opt_interface models at a point.

The function lets you check that model.compute_hessian is accurately implemented.

The check uses central differences to compute a numerical hessian. The analytic hessian must agree with it to within a 1e-3 relative tolerance. The notion of relative tolerance is tricky, especially when gradients are very large or very small.

Parameters
[in]modelAny model with a second order optimization interface.
[in]pointPoint at which to check the hessian.
[in]hessianDense hessian computed analytically at "point"
Returns
bool True if hessian is correct to 1e-3 tolerance.
Note
The model cannot be const because model.compute_function_value() need not be const.
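A hessian check can be sketched the same way: each column of the hessian is compared against a central difference of the gradient. The gradient and hessian below are illustrative stand-ins (the real check works through the model interface), and the step size is an assumption.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Hypothetical stand-ins: gradient and hessian of f(x) = x0^2 + x0*x1 + 3*x1^2.
Vec grad(const Vec& x) { return {2.0 * x[0] + x[1], x[0] + 6.0 * x[1]}; }
Mat analytic_hessian(const Vec&) { return {{2.0, 1.0}, {1.0, 6.0}}; }

// Column j of the hessian is checked against
// (grad(x + h e_j) - grad(x - h e_j)) / (2h), entry by entry.
bool check_hessian_sketch(const Vec& point, double h = 1e-5,
                          double rtol = 1e-3) {
  Mat H = analytic_hessian(point);
  for (std::size_t j = 0; j < point.size(); ++j) {
    Vec plus = point, minus = point;
    plus[j] += h;
    minus[j] -= h;
    Vec gp = grad(plus), gm = grad(minus);
    for (std::size_t i = 0; i < point.size(); ++i) {
      double numeric = (gp[i] - gm[i]) / (2.0 * h);
      double scale = std::max({1.0, std::fabs(numeric), std::fabs(H[i][j])});
      if (std::fabs(numeric - H[i][j]) / scale > rtol) return false;
    }
  }
  return true;
}
```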

◆ compute_residual() [1/2]

double turi::optimization::compute_residual ( const DenseVector &  gradient)

Compute residual gradient.

Parameters
[in]gradientDense Gradient
Returns
Residual to check for termination.

◆ compute_residual() [2/2]

double turi::optimization::compute_residual ( const SparseVector &  gradient)

Compute residual gradient.

Parameters
[in]gradientSparse gradient
Returns
Residual to check for termination.
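One common residual for termination checks (an assumption here, not necessarily Turi Create's exact formula) is the infinity norm of the gradient, max_i |g_i|, which goes to zero as the solver approaches a stationary point:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Residual sketch: infinity norm of the gradient. A solver would compare this
// against a convergence threshold to decide whether to terminate.
double compute_residual_sketch(const std::vector<double>& gradient) {
  double r = 0.0;
  for (double g : gradient) r = std::max(r, std::fabs(g));
  return r;
}
```

The sparse overload would be identical except that it iterates only over the stored nonzeros.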

◆ log_solver_summary_stats()

void turi::optimization::log_solver_summary_stats ( const solver_return stats,
bool  simple_mode = false 
)

Log solver summary stats (useful for benchmarking).

Parameters
[in]statsSolver return stats to summarize in the log.

◆ set_default_solver_options()

void turi::optimization::set_default_solver_options ( const first_order_opt_interface model,
const DenseVector &  point,
const std::string  solver,
std::map< std::string, flexible_type > &  opts 
)

Basic solver error checking and default option handling.

This function takes a dictionary of solver options as input. Any key that is required by the solver but not present in opts is set to its default value.

Parameters
[in]modelAny model with a first order optimization interface.
[in]pointStarting point for the solver.
[in,out]optsSolver options.
[in]solverName of solver
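The "fill in missing defaults" behavior can be sketched with a plain std::map. The option names and default values below are illustrative, not Turi Create's actual solver options; the key property is that caller-supplied values are never overwritten.

```cpp
#include <cassert>
#include <map>
#include <string>

using Options = std::map<std::string, double>;

// For each required key, insert its default only if the caller did not
// already supply a value. std::map::insert is a no-op for existing keys,
// so caller-supplied options take precedence.
void set_defaults_sketch(Options& opts, const Options& defaults) {
  for (const auto& kv : defaults) {
    opts.insert(kv);
  }
}
```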

◆ translate_solver_status()

std::string turi::optimization::translate_solver_status ( const OPTIMIZATION_STATUS status)

Translate solver status to a string that a user can understand.

Parameters
[in]statusStatus of the solver
Returns
String with a meaningful interpretation of the solver status.

◆ vector_add()

template<typename L , typename R >
void turi::optimization::vector_add ( L &  left,
const R &  right 
)

Performs left = left + right across sparse and dense vectors.

Note
Although Eigen is an excellent library, this operation is horribly inefficient when left is a dense vector and right is a sparse vector.
Parameters
[in,out]leftVector that receives the sum.
[in]rightVector to add.
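The efficient direction of this operation can be sketched without Eigen by modeling the sparse vector as an index-to-value map (an assumption for illustration; the library uses Eigen's sparse types). Iterating only the stored nonzeros avoids scanning the whole dense vector:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <vector>

// left = left + right, where right is sparse (index -> value). Only the
// nonzero entries of right are touched, so the cost is O(nnz(right)).
void vector_add_sketch(std::vector<double>& left,
                       const std::map<std::size_t, double>& right) {
  for (const auto& [idx, val] : right) {
    left[idx] += val;
  }
}
```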