Dakota Reference Manual, Version 6.15
function_train


Global surrogate model based on functional tensor train decomposition

Specification

Alias: none

Argument(s): none

Child Keywords:

Required/Optional    Dakota Keyword    Dakota Keyword Description
Optional regression_type

Type of solver for forming function train approximations by regression

Optional max_solver_iterations

Maximum iterations in determining polynomial coefficients

Optional max_cross_iterations

Maximum number of iterations for cross-approximation during a rank adaptation.

Optional solver_tolerance

Convergence tolerance for the optimizer used during the regression solve.

Optional response_scaling

Perform bounds-scaling on response values prior to surrogate emulation

Optional tensor_grid

Use sub-sampled tensor-product quadrature points to build a polynomial chaos expansion.
Optional rounding_tolerance

An accuracy tolerance that is used to guide rounding during rank adaptation.

Optional arithmetic_tolerance

A secondary rounding tolerance used for post-processing

Optional start_order

(Initial) polynomial order of each univariate function within the functional tensor train.

Optional adapt_order

Activate adaptive procedure for determining the best basis order

Optional kick_order

Increment used when adapting the basis order in function train methods

Optional max_order

Maximum polynomial order of each univariate function within the functional tensor train.

Optional max_cv_order_candidates

Limit the number of cross-validation candidates for basis order

Optional start_rank

The initial rank used for the starting point during a rank adaptation.

Optional adapt_rank

Activate adaptive procedure for determining best rank representation

Optional kick_rank

The increment in rank employed during each iteration of the rank adaptation.

Optional max_rank

Limits the maximum rank that is explored during a rank adaptation.

Optional max_cv_rank_candidates

Limit the number of cross-validation candidates for rank

Description

Tensor train decompositions are approximations that exploit low-rank structure in an input-output mapping. The approximation can be written as a product of matrix-valued functions:

\[ f_r(x) = F_1(x_1) F_2(x_2) \dots F_d(x_d) \]

where the "cores" $F_k(x_k)$ expand to

\[ F_k(x_k) = \begin{bmatrix} f_k^{11}(x_k) & \cdots & f_k^{1r_k}(x_k)\\ \vdots & \ddots & \vdots\\ f_k^{r_{k-1}1}(x_k) & \cdots & f_k^{r_{k-1}r_k}(x_k) \end{bmatrix} \]

An example expansion over four random variables with rank vector (1,7,5,3,1) is

\[ f_r(x) = \begin{bmatrix} f_1^{11}(x_1) & \cdots & f_1^{17}(x_1) \end{bmatrix} \begin{bmatrix} f_2^{11}(x_2) & \cdots & f_2^{15}(x_2)\\ \vdots & \ddots & \vdots\\ f_2^{71}(x_2) & \cdots & f_2^{75}(x_2) \end{bmatrix} \begin{bmatrix} f_3^{11}(x_3) & \cdots & f_3^{13}(x_3)\\ \vdots & \ddots & \vdots\\ f_3^{51}(x_3) & \cdots & f_3^{53}(x_3) \end{bmatrix} \begin{bmatrix} f_4^{11}(x_4) \\ \vdots \\ f_4^{31}(x_4) \end{bmatrix} \]

In the current implementation, orthogonal polynomial basis functions (Hermite and Legendre) are employed as the basis functions $f_i^{jk}(x_i)$, although the C3 library will enable additional options in the future.
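As an illustration of the matrix-product structure only (this is not Dakota or C3 library code), the following Python sketch evaluates a function train whose core entries are univariate Legendre expansions. The random coefficients stand in for a regression solution, the rank vector matches the four-variable example above, and the uniform order of 2 is an arbitrary choice.

    # Minimal sketch of evaluating f_r(x) = F_1(x_1) F_2(x_2) ... F_d(x_d).
    # Coefficients are random placeholders; in Dakota they come from regression.
    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(0)
    ranks = [1, 7, 5, 3, 1]   # rank vector from the four-variable example above
    order = 2                 # polynomial order of each univariate function

    # coeffs[k][i, j, :] = Legendre coefficients of core entry f_{k+1}^{ij}
    coeffs = [rng.standard_normal((ranks[k], ranks[k + 1], order + 1))
              for k in range(len(ranks) - 1)]

    def ft_eval(x, coeffs):
        """Chain the core matrices F_k(x_k) to evaluate the function train at x."""
        result = np.ones((1, 1))
        for x_k, c_k in zip(x, coeffs):
            # Evaluate every entry of core F_k at x_k, forming an r_{k-1} x r_k matrix
            core = np.array([[legendre.legval(x_k, c_k[i, j])
                              for j in range(c_k.shape[1])]
                             for i in range(c_k.shape[0])])
            result = result @ core
        return result.item()

    print(ft_eval([0.1, -0.3, 0.5, 0.7], coeffs))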

The number of coefficients that must be computed by the regression solver can be inferred from the construction above. For each QoI, the regression size can be determined as follows (a small counting sketch follows the list):

  • For v variables, the orders o form a v-vector and the ranks r form a (v+1)-vector with $ r_0 = r_v = 1 $
  • the first core is a $ 1 \times r_1 $ row vector and contributes $ (o_0 + 1) r_1 $ terms
  • the last core is a $ r_{v-1} \times 1 $ column vector and contributes $ (o_{v-1}+1) r_{v-1} $ terms
  • the middle v-2 cores are $ r_i \times r_{i+1} $ matrices that contribute $ r_i r_{i+1} (o_i + 1) $ terms, $ i = 1, ..., v-2 $
  • neighboring vec/mat dimensions must match, so there are v-1 unique ranks
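As a concrete check (purely illustrative, not part of Dakota), the following Python snippet tallies the regression size from these rules for the four-variable example above, assuming a uniform order of 2:

    def regression_size(orders, ranks):
        """Coefficient count for v cores with orders o (v-vector) and ranks r ((v+1)-vector)."""
        v = len(orders)
        assert len(ranks) == v + 1 and ranks[0] == ranks[-1] == 1
        # Core i (0-based) is a ranks[i] x ranks[i+1] block of univariate
        # expansions, each carrying orders[i] + 1 coefficients.
        return sum(ranks[i] * ranks[i + 1] * (orders[i] + 1) for i in range(v))

    orders = [2, 2, 2, 2]       # uniform order of 2, assumed for illustration
    ranks  = [1, 7, 5, 3, 1]    # rank vector from the example expansion above
    print(regression_size(orders, ranks))   # (1*7 + 7*5 + 5*3 + 3*1) * 3 = 180

With the end ranks fixed at one, this reduces to the per-core contributions listed above: 21 + 105 + 45 + 9 = 180 coefficients per QoI.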

Usage Tips

This new capability is stabilizing and beginning to be embedded in higher-level strategies such as multilevel-multifidelity algorithms. It is not included in the Dakota build by default, as some C3 library dependencies (CBLAS) can induce small differences in our regression suite.

This capability is also being used as a prototype to explore model-based versus method-based specification of stochastic expansions. While the model specification is stand-alone, it currently requires a corresponding method specification to exercise the model, which can be a generic UQ strategy such as the surrogate_based_uq method or a sampling method. The intent is to migrate function train, polynomial chaos, and stochastic collocation toward model-only specifications that can then be employed in any surrogate/emulator context.

Examples

model,
    id_model = 'FT'
    surrogate global function_train
      start_order = 2
      start_rank  = 2  kick_rank = 2  max_rank = 10
      adapt_rank
    dace_method_pointer = 'SAMPLING'  # method that generates the build (training) samples

See Also

These keywords may also be of interest: