Hessians are needed and will be approximated by finite differences
Alias: none
Argument(s): none
Child Keywords:
| Required/Optional | Description of Group | Dakota Keyword | Dakota Keyword Description |
|---|---|---|---|
| Optional | | `fd_step_size` | Step size used when computing gradients and Hessians |
| Optional (Choose One) | Step Scaling (Group 1) | `relative` | (Default) Scale step size by the parameter value |
| | | `absolute` | Do not scale step size |
| | | `bounds` | Scale step size by the domain of the parameter |
| Optional (Choose One) | Finite Difference Type (Group 2) | `forward` | (Default) Use forward differences |
| | | `central` | Use central differences |
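
For illustration, a minimal `responses` specification selecting these options might look like the following sketch; the response count, step value, and gradient choice are placeholder assumptions, not Dakota defaults.

```
responses
  objective_functions = 1
  analytic_gradients
  numerical_hessians
    fd_step_size = 1.e-4
    relative
    forward
```

With `analytic_gradients` selected as here, the Hessian estimates come from first-order gradient differencing, as described below.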
The `numerical_hessians` specification means that Hessian information is needed and will be computed with finite differences using either first-order gradient differencing (for the cases of `analytic_gradients` or for the functions identified by `id_analytic_gradients` in the case of `mixed_gradients`) or first- or second-order function value differencing (all other gradient specifications). In the former case, the following expression

$$\nabla^2 f(\mathbf{x})_i \cong \frac{\nabla f(\mathbf{x} + h\mathbf{e}_i) - \nabla f(\mathbf{x})}{h}$$

estimates the $i$th Hessian column, and in the latter case, the following expressions

$$\nabla^2 f(\mathbf{x})_{i,j} \cong \frac{f(\mathbf{x} + h_i\mathbf{e}_i + h_j\mathbf{e}_j) - f(\mathbf{x} + h_i\mathbf{e}_i) - f(\mathbf{x} + h_j\mathbf{e}_j) + f(\mathbf{x})}{h_i h_j}$$

and

$$\nabla^2 f(\mathbf{x})_{i,j} \cong \frac{f(\mathbf{x} + h\mathbf{e}_i + h\mathbf{e}_j) - f(\mathbf{x} + h\mathbf{e}_i - h\mathbf{e}_j) - f(\mathbf{x} - h\mathbf{e}_i + h\mathbf{e}_j) + f(\mathbf{x} - h\mathbf{e}_i - h\mathbf{e}_j)}{4h^2}$$

provide first- and second-order estimates of the $(i,j)$ Hessian term. Prior to Dakota 5.0, Dakota always used second-order estimates. In Dakota 5.0 and newer, the default is to use first-order estimates (which honor bounds on the variables and require only about a quarter as many function evaluations as do the second-order estimates), but specifying `central` after `numerical_hessians` causes Dakota to use the old second-order estimates, which do not honor bounds. In optimization algorithms that use Hessians, there is little reason to use second-order differences in computing Hessian approximations.
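
To make the evaluation-count comparison concrete, the following is a rough sketch for a dense Hessian in $n$ variables, assuming shared points are evaluated only once; Dakota's exact bookkeeping may differ. First-order function differencing needs $f(\mathbf{x})$, one forward point per variable, and one offset point per index pair $i \le j$:

$$N_{\text{first}} = 1 + n + \frac{n(n+1)}{2} = \frac{(n+1)(n+2)}{2},$$

while second-order (central) differencing needs four points per off-diagonal pair plus two per diagonal term:

$$N_{\text{second}} \approx 4 \cdot \frac{n(n-1)}{2} + 2n + 1 = 2n^2 + 1,$$

so $N_{\text{first}}/N_{\text{second}} \to 1/4$ as $n$ grows, consistent with the quarter-cost statement above.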