Dakota Reference Manual, Version 6.15

ksg2


Use the second Kraskov algorithm to compute mutual information

Specification

Alias: none

Argument(s): none

Description

This algorithm is derived in [58]. The mutual information between $m$ random variables is approximated by

\[ I_{2}(X_{1}, X_{2}, \ldots, X_{m}) = \psi(k) + (m-1)\psi(N) - \frac{m-1}{k} - \left\langle \psi(n_{x_{1}}) + \psi(n_{x_{2}}) + \ldots + \psi(n_{x_{m}}) \right\rangle, \]

where $\psi$ is the digamma function, $k$ is the number of nearest neighbors being used, and $N$ is the number of samples available for the joint distribution of the random variables. For each point $z_{i} = (x_{1,i}, x_{2,i}, \ldots, x_{m,i})$ in the joint distribution, $z_{i}$ and its $k$ nearest neighbors are projected into each marginal subspace. For each subspace $j = 1, \ldots, m$, $\epsilon_{j,i}$ is defined as the radius of the $l_{\infty}$-ball containing all $k+1$ points. Then, $n_{x_{j,i}}$ is the number of points in the $j$-th subspace within a distance of $\epsilon_{j,i}$ from the point $x_{j,i}$. The angular brackets denote that the average of $\psi(n_{x_{j,i}})$ is taken over all points $i = 1, \ldots, N$.
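To make the recipe concrete, the following is a minimal NumPy/SciPy sketch of the estimator as described above. It is only an illustration under the stated conventions (points on the boundary of the $\epsilon_{j,i}$-ball are counted, the query point itself is not), not Dakota's internal implementation; the function name ksg2_mutual_info and the default k = 6 are invented for this example.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma


def ksg2_mutual_info(samples, k=6):
    """Estimate I_2(X_1, ..., X_m) from an (N, m) array of joint samples.

    Each column is treated as one scalar random variable; k is the number
    of nearest neighbors. Assumes N > k and distinct sample points.
    """
    z = np.asarray(samples, dtype=float)
    N, m = z.shape

    # k nearest neighbors of each point in the joint space under the
    # l_inf (Chebyshev) norm; query k+1 because the closest "neighbor"
    # returned is the point itself.
    _, idx = cKDTree(z).query(z, k=k + 1, p=np.inf)
    neighbors = idx[:, 1:]

    psi_marginals = 0.0
    for j in range(m):
        xj = z[:, j]
        # eps_{j,i}: radius of the l_inf ball in subspace j containing the
        # projections of z_i and its k joint-space neighbors.
        eps = np.abs(xj[neighbors] - xj[:, None]).max(axis=1)
        # n_{x_j,i}: points of the j-th subspace within eps_{j,i} of
        # x_{j,i}; the point itself is excluded (a convention choice).
        xs = np.sort(xj)
        lo = np.searchsorted(xs, xj - eps, side="left")
        hi = np.searchsorted(xs, xj + eps, side="right")
        n_j = hi - lo - 1
        psi_marginals += digamma(np.maximum(n_j, 1)).mean()

    # psi(k) + (m-1) psi(N) - (m-1)/k - < sum_j psi(n_{x_j}) >
    return digamma(k) + (m - 1) * digamma(N) - (m - 1) / k - psi_marginals

As a quick sanity check, two independent columns of standard normal samples should yield an estimate near zero (in nats), while strongly correlated columns yield a clearly positive value.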

Examples
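
Both specifications below request the KSG2 estimator: the first as part of the mutual information reported with posterior_stats, the second within an experimental_design study.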

method
    bayes_calibration queso
      dram
      seed = 34785
      chain_samples = 1000
      posterior_stats mutual_info 
        ksg2

method
    bayes_calibration
      queso
      dram
      chain_samples = 1000 seed = 348
    experimental_design
      initial_samples = 5
      num_candidates = 10
      max_hifi_evaluations = 3
      ksg2