variance


Fit MLMC sample allocation to control the variance of the estimator for the variance.

Specification

Alias: none

Argument(s): none

Child Keywords:

Required/Optional   Dakota Keyword   Dakota Keyword Description
Optional            optimization     Solve the sample allocation optimization problem numerically when the sampling estimator targets the variance.

Description

Computes the variance of the sampling estimator for the variance and determines the sample allocation by solving the corresponding optimization problem. By default, this problem is solved in closed form through an analytical approximation; alternatively, a numerical optimization can be used (see optimization).

Examples

The following method block

method,
  model_pointer = 'HIERARCH'
  multilevel_sampling
    pilot_samples = 20 seed = 1237
    convergence_tolerance = .01
    allocation_target = variance

uses the variance as the sample allocation target, i.e. the allocation is driven by the variance of the variance estimator.

Theory

A single level unbiased estimator for the variance of a generic QoI at the highest level $M_L$ of the hierarchy can be written, in terms of the sample mean $\hat{Q}_{M_L}$ over the $N_{M_L}$ samples, as

\[ \mathbb{V}ar\left[Q_{M_L}\right] \approx \frac{1}{N_{M_L} - 1} \sum_{i=1}^{N_{M_L}} \left( Q_{M_L}^{(i)} - \hat{Q}_{M_L} \right)^2. \]
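
For concreteness, this single level estimator is simply the standard unbiased sample variance. A minimal NumPy sketch (illustrative only, not Dakota code; names are assumptions):

import numpy as np

def single_level_variance(samples):
    # Unbiased sample variance at the highest level: ddof=1 applies the
    # 1/(N_{M_L} - 1) Bessel correction used in the formula above.
    return np.var(samples, ddof=1)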

The multilevel extension for this estimator is obtained by writing

\[ \mathbb{V}ar\left[Q_L\right] \approx \hat{Q}_{L,2}^{\mathrm{ML}} = \sum_{\ell=0}^L \left( \hat{Q}_{\ell,2} - \hat{Q}_{\ell-1,2} \right), \qquad \hat{Q}_{-1,2} \equiv 0, \]

where

\[ \hat{Q}_{\ell,2} = \frac{1}{N_\ell - 1} \sum_{i=1}^{N_\ell} \left( Q_\ell^{(i)} - \hat{Q}_\ell \right)^2 \mathrm{\quad and \quad} \hat{Q}_{\ell - 1,2} = \frac{1}{N_\ell - 1} \sum_{i=1}^{N_\ell} \left( Q_{\ell - 1}^{(i)} - \hat{Q}_{\ell - 1} \right)^2. \]
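
The telescoping sum can be sketched as follows, assuming that for each level $\ell$ the paired evaluations $Q_\ell^{(i)}$ and $Q_{\ell-1}^{(i)}$ on the same $N_\ell$ samples are available as arrays (an illustrative Python sketch, not Dakota code):

import numpy as np

def ml_variance_estimate(level_pairs):
    # level_pairs: one (Q_l, Q_lm1) pair of arrays per level l = 0..L, where
    # both arrays come from the same N_l samples and Q_lm1 is None at l = 0.
    total = 0.0
    for Q_l, Q_lm1 in level_pairs:
        total += np.var(Q_l, ddof=1)        # \hat{Q}_{l,2}
        if Q_lm1 is not None:
            total -= np.var(Q_lm1, ddof=1)  # \hat{Q}_{l-1,2}
    return total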

As in the expected value case, we seek an optimal sample allocation per level that minimizes the cost of obtaining an estimator with a prescribed variance. Since the levels are sampled independently, the variance of the multilevel estimator $ \hat{Q}_{L,2}^{\mathrm{ML}} $ is the sum of the per-level contributions, each of which involves terms of the form

\[ \mathbb{V}ar\left[ \hat{Q}_{\ell,2} \right] \approx \frac{1}{N_\ell} \left( \hat{Q}_{\ell,4} - \frac{N_\ell-3}{N_\ell-1} \left(\hat{Q}_{\ell,2}\right)^2 \right), \]

where $\hat{Q}_{\ell,4}$ denotes the sampling estimator of the fourth central moment. For the complete expression of each per-level term, please refer to the Theory Manual.
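
To make the moment combination concrete, a sketch of this formula for a single level is shown below. The plain fourth-moment estimator used here is an assumption for illustration; the exact estimators employed by Dakota are those documented in the Theory Manual.

import numpy as np

def var_of_level_variance(samples):
    # Approximate Var[\hat{Q}_{l,2}] at one level from its samples, combining
    # the second and fourth central sample moments as in the formula above.
    N = len(samples)
    centered = samples - np.mean(samples)
    m2 = np.sum(centered**2) / (N - 1)  # \hat{Q}_{l,2}
    m4 = np.sum(centered**4) / N        # \hat{Q}_{l,4} (plain moment estimator)
    return (m4 - (N - 3.0) / (N - 1.0) * m2**2) / N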

The final sample allocation is obtained by solving a minimization problem

\[ \min\limits_{N_\ell} \sum_{\ell=0}^L \mathcal{C}_{\ell} N_\ell \quad \mathrm{s.t.} \quad \mathbb{V}ar\left[ \hat{Q}_{L,2}^{\mathrm{ML}} \right] = \varepsilon^2/2. \]

This optimization problem can be solved in two ways: with an analytical approximation, or numerically as a non-linear optimization problem. The analytical approximation follows the approach described in [Pisaroni2017] and introduces the helper variable

\[ \hat{V}_{2, \ell} := \mathbb{V}ar\left[ \hat{Q}_{\ell,2} \right] \cdot N_{\ell}, \]

so that the constrained minimization problem can be recast in terms of the Lagrangian

\[ f(N_\ell,\lambda) = \sum_{\ell=0}^{L} N_\ell \, \mathcal{C}_{\ell} + \lambda \left( \sum_{\ell=0}^{L} N_\ell^{-1} \hat{V}_{2, \ell} - \varepsilon^2/2 \right). \]
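
Setting the derivative of $f$ with respect to each $N_\ell$ to zero and enforcing the constraint determines the multiplier (a brief sketch of the intermediate steps):

\[ \frac{\partial f}{\partial N_\ell} = \mathcal{C}_{\ell} - \lambda N_\ell^{-2} \hat{V}_{2, \ell} = 0 \quad \Rightarrow \quad N_\ell = \sqrt{\lambda} \sqrt{\frac{\hat{V}_{2, \ell}}{\mathcal{C}_{\ell}}}, \qquad \sqrt{\lambda} = \frac{2}{\varepsilon^2} \sum_{k=0}^L \left( \hat{V}_{2, k} \mathcal{C}_k \right)^{1/2}. \]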

This formulation admits a closed form solution (as in the expected value case):

\[ N_{\ell} = \frac{2}{\varepsilon^2} \left[ \, \sum_{k=0}^L \left( \hat{V}_{2, k} \mathcal{C}_k \right)^{1/2} \right] \sqrt{\frac{ \hat{V}_{2, \ell} }{\mathcal{C}_{\ell}}}. \]
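
As a small numerical illustration (a sketch independent of Dakota's internal implementation; variable names are illustrative), the closed form can be evaluated directly from per-level estimates of $\hat{V}_{2,\ell}$ and the costs $\mathcal{C}_\ell$:

import numpy as np

def analytic_allocation(V2, C, eps):
    # V2[l] ~ \hat{V}_{2,l}, C[l] ~ cost per sample at level l, eps = accuracy
    # target; returns the closed-form N_l, rounded up to integer sample counts.
    V2, C = np.asarray(V2, dtype=float), np.asarray(C, dtype=float)
    N = 2.0 / eps**2 * np.sum(np.sqrt(V2 * C)) * np.sqrt(V2 / C)
    return np.ceil(N).astype(int)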

If the option optimization is specified, the optimization problem above is solved numerically using either OPTPP (the default) or NPSOL (if available).
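
Dakota carries out this numerical solve internally with OPTPP or NPSOL; purely to illustrate the same constrained problem, a SciPy-based sketch of the continuous relaxation is given below (names and solver settings are illustrative, not Dakota API):

import numpy as np
from scipy.optimize import minimize

def numerical_allocation(V2, C, eps):
    # Minimize total cost sum(C_l * N_l) subject to sum(V2_l / N_l) = eps^2/2,
    # treating the per-level sample counts N_l as continuous variables.
    V2, C = np.asarray(V2, dtype=float), np.asarray(C, dtype=float)
    constraint = {"type": "eq",
                  "fun": lambda N: np.sum(V2 / N) - eps**2 / 2.0}
    result = minimize(lambda N: np.sum(C * N),
                      x0=np.full(len(C), 10.0),
                      bounds=[(2.0, None)] * len(C),
                      constraints=[constraint],
                      method="SLSQP")
    return np.ceil(result.x).astype(int)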

See Also

These keywords may also be of interest: