Consider $n$ time series of returns and make the usual assumption that returns are serially uncorrelated. Then, we can define a vector of zero-mean white noises $\epsilon_t = r_t - \mu$, where $r_t$ is the $n \times 1$ vector of returns and $\mu$ is the vector of expected returns.

Despite being serially uncorrelated, the returns may still exhibit contemporaneous correlation. That is:

$$\Sigma_t := \mathbb{E}_{t-1}\left[\left(r_t-\mu\right)\left(r_t-\mu\right)'\right]$$may not be a diagonal matrix. Moreover, this contemporaneous covariance matrix may be time-varying, depending on past information.
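As a toy illustration (hypothetical simulated data, not the V-Lab estimator), two series driven by a common shock are serially uncorrelated yet produce a clearly nonzero off-diagonal entry in the estimated covariance matrix:

```python
import numpy as np

# Two returns series sharing a common shock: serially uncorrelated,
# but contemporaneously correlated, so Sigma has nonzero off-diagonals.
rng = np.random.default_rng(5)
T = 2000
common = rng.standard_normal(T)
r1 = 0.8 * common + 0.6 * rng.standard_normal(T)
r2 = 0.8 * common + 0.6 * rng.standard_normal(T)
returns = np.column_stack([r1, r2])

# Unconditional sample analogue of E[(r - mu)(r - mu)'].
eps = returns - returns.mean(axis=0)
Sigma = (eps.T @ eps) / T
```

Here the population covariance of the two series is $0.8 \times 0.8 = 0.64$, so the sample off-diagonal entry is far from zero even though each series is white noise.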

The $\mathrm{GARCH-DCC}$ model involves two steps. The first step accounts for the conditional heteroskedasticity. It consists of estimating, for each of the $n$ series of returns $r_t^i$, its conditional volatility $\sigma_t^i$ using a $\mathrm{GARCH}$ model (see garch documentation). Let $D_t$ be the diagonal matrix of these conditional volatilities, i.e. $D_t^{i,i} = \sigma_t^i$ and $D_t^{i,j} = 0$ for $i \neq j$. Then the standardized residuals are:

$$\nu_t := D_t^{-1}\left(r_t-\mu\right)$$Notice that these standardized residuals have unit conditional volatility. Now, define the matrix:
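A minimal sketch of the standardization step, assuming the first-stage conditional volatilities are already available (here replaced by a constant stand-in; in practice they would come from the $\mathrm{GARCH}$ fits):

```python
import numpy as np

# Hypothetical inputs: T x n returns and T x n conditional volatilities.
rng = np.random.default_rng(0)
T, n = 500, 3
returns = rng.standard_normal((T, n)) * 0.01
mu = returns.mean(axis=0)
sigma = np.full((T, n), returns.std(axis=0))  # stand-in for GARCH sigmas

# nu_t = D_t^{-1} (r_t - mu): D_t is diagonal, so dividing each demeaned
# return by its own conditional volatility is an elementwise operation.
nu = (returns - mu) / sigma
```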

$$\bar{R} := \frac{1}{T}\sum_{t=1}^{T}\nu_t\nu_t'$$This is Bollerslev's Constant Conditional Correlation (CCC) estimator (Bollerslev, 1990).
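The CCC estimator is a simple average of outer products of the standardized residuals; a sketch with hypothetical residuals:

```python
import numpy as np

# bar_R = (1/T) * sum_t nu_t nu_t', computed from standardized
# residuals nu (T x n). Demeaning and scaling to unit sample
# volatility make the diagonal exactly one.
rng = np.random.default_rng(1)
T, n = 1000, 3
x = rng.standard_normal((T, n))
nu = (x - x.mean(axis=0)) / x.std(axis=0)

bar_R = (nu.T @ nu) / T  # Bollerslev's CCC estimator
```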

The second step consists of generalizing Bollerslev's CCC to capture dynamics in the correlation, hence the name Dynamic Conditional Correlation ($\mathrm{DCC}$). The $\mathrm{DCC}$ correlations are:

$$Q_t = \bar{R} + \alpha\left(\nu_{t-1}\nu_{t-1}' - \bar{R}\right) + \beta\left(Q_{t-1} - \bar{R}\right)$$So $Q_t^{i,j}$ is the correlation between $r_t^i$ and $r_t^j$ at time $t$, and that is what is plotted by V-Lab.
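The recursion can be sketched as follows, with hypothetical residuals and illustrative parameter values (in V-Lab, $\alpha$ and $\beta$ are estimated by maximum likelihood, as described below):

```python
import numpy as np

# DCC(1,1) recursion: Q_t = bar_R + alpha*(nu nu' - bar_R) + beta*(Q - bar_R).
rng = np.random.default_rng(2)
T, n = 250, 2
nu = rng.standard_normal((T, n))
bar_R = np.corrcoef(nu, rowvar=False)
alpha, beta = 0.05, 0.93  # illustrative values, alpha + beta < 1

Q = np.empty((T, n, n))
Q[0] = bar_R  # initialize at the unconditional correlation
for t in range(1, T):
    outer = np.outer(nu[t - 1], nu[t - 1])
    Q[t] = bar_R + alpha * (outer - bar_R) + beta * (Q[t - 1] - bar_R)

# Engle (2002) rescales Q_t by its diagonal so that each matrix has
# exact unit diagonal: R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}.
d = np.sqrt(np.einsum('tii->ti', Q))
R = Q / (d[:, :, None] * d[:, None, :])
```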

The estimation of one $\mathrm{GARCH}$ model for each of the $n$ time series of returns in the first step is standard. For details on $\mathrm{GARCH}$ estimation, see garch documentation.

For the second step, which is the $\mathrm{DCC}$ estimation per se, V-Lab estimates both parameters, $\alpha$ and $\beta$, simultaneously, by maximizing the log likelihood. The standardized residuals are assumed to be jointly Gaussian. To ease the computational cost of estimating a vast-dimensional time-varying correlation model, V-Lab uses a technique called composite likelihood (Engle et al., 2007).
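A hedged sketch of this second-step estimation for small $n$, maximizing the plain Gaussian quasi-log-likelihood over $(\alpha, \beta)$ with SciPy (the composite-likelihood refinement used by V-Lab for large cross-sections is not implemented here; `neg_loglik` and the simulated residuals are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical standardized residuals from the first stage.
rng = np.random.default_rng(3)
T, n = 400, 2
x = rng.standard_normal((T, n))
nu = (x - x.mean(axis=0)) / x.std(axis=0)
bar_R = (nu.T @ nu) / T

def neg_loglik(params):
    """Negative Gaussian quasi-log-likelihood of the DCC(1,1) step."""
    alpha, beta = params
    if alpha <= 0 or beta <= 0 or alpha + beta >= 1:
        return np.inf  # enforce the usual parameter restrictions
    Q = bar_R.copy()
    nll = 0.0
    for t in range(1, T):
        Q = bar_R + alpha * (np.outer(nu[t - 1], nu[t - 1]) - bar_R) \
                  + beta * (Q - bar_R)
        d = np.sqrt(np.diag(Q))
        R = Q / np.outer(d, d)  # rescale to a correlation matrix
        sign, logdet = np.linalg.slogdet(R)
        nll += 0.5 * (logdet + nu[t] @ np.linalg.solve(R, nu[t]))
    return nll

res = minimize(neg_loglik, x0=[0.05, 0.90], method='Nelder-Mead')
alpha_hat, beta_hat = res.x
```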

The $\mathrm{DCC}$ model captures a stylized fact of financial time series: correlation clustering. The correlation is more likely to be high at time $t$ if it was also high at time $t-1$. Another way of seeing this is to note that a shock at time $t-1$ also impacts the correlation at time $t$. However, if $\alpha + \beta < 1$, the correlation itself is mean reverting, fluctuating around $\bar{R}$, the unconditional correlation.

The usual restrictions on the parameters are $\alpha, \beta > 0$. It is possible, though, to have $\alpha + \beta = 1$; the conditional correlation is then an integrated process.

Notice that if we had written the $\mathrm{DCC}$ model in a fashion similar to the $\mathrm{GARCH}$ model:

$$Q_t = \Omega + \alpha\nu_{t-1}\nu_{t-1}' + \beta Q_{t-1}$$we would also have to estimate the matrix $\Omega$. That is, instead of estimating only two parameters, we would have to estimate $2 + \frac{n(n+1)}{2}$ parameters (not $2 + n^2$, because $\Omega$ is a symmetric matrix). The unconditional correlation implied by the model would then be:

$$\bar{R} = \frac{\Omega}{1-\alpha-\beta}$$Instead of estimating $\Omega$, notice that we have in effect substituted $\bar{R}\left(1-\alpha-\beta\right)$ for $\Omega$ in the $\mathrm{DCC}$ formula, which is a much more parsimonious way of writing the model. This technique, called variance targeting, was introduced by Engle and Mezrich (1995), and it is very useful when modeling vast-dimensional time-varying covariance or correlation models.
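The parameter savings from variance targeting grow quadratically in $n$; a small helper (hypothetical, for illustration) makes the count concrete:

```python
# Parameter counts for the DCC second step: Omega is symmetric,
# so estimating it directly adds n(n+1)/2 free parameters on top
# of alpha and beta; targeting replaces all of them with bar_R.
def dcc_param_count(n, targeting=True):
    return 2 if targeting else 2 + n * (n + 1) // 2

# For n = 100 assets, the difference is 2 parameters versus
# 2 + 100*101/2 = 5052 parameters.
full = dcc_param_count(100, targeting=False)
targeted = dcc_param_count(100)
```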

The specific model just described can be generalized in two ways.

In the first stage, the $\mathrm{GARCH}$ specification used to standardize each of the $n$ return time series can be generalized to a $\mathrm{GARCH}\left(p,q\right)$ model (see garch documentation), where $p$ and $q$ can be chosen differently for each return time series, for instance by the Bayesian Information Criterion (BIC), also known as the Schwarz Information Criterion (SIC), or by the Akaike Information Criterion (AIC). The former tends to be more parsimonious than the latter. V-Lab uses $p=1$ and $q=1$ though, because this is usually the option that best fits financial time series.
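A small sketch of how such order selection works, with made-up log-likelihood values for candidate $\mathrm{GARCH}(p,q)$ fits (the numbers are hypothetical, chosen only to show that BIC's $\log T$ penalty favors smaller models than AIC's constant penalty of 2 per parameter):

```python
import numpy as np

def aic(logL, k):
    return 2 * k - 2 * logL

def bic(logL, k, T):
    return k * np.log(T) - 2 * logL

# Hypothetical fits on T = 1000 observations: (p, q) -> log-likelihood.
T = 1000
candidates = {(1, 1): 3105.0, (2, 1): 3106.5, (2, 2): 3109.0}
# omega + p ARCH terms + q GARCH terms:
k = {pq: 1 + pq[0] + pq[1] for pq in candidates}

best_bic = min(candidates, key=lambda pq: bic(candidates[pq], k[pq], T))
best_aic = min(candidates, key=lambda pq: aic(candidates[pq], k[pq]))
```

With these numbers BIC selects the parsimonious $(1,1)$ while AIC selects $(2,2)$, illustrating the tendency noted above.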

In the second stage, the $\mathrm{DCC}$ model can be generalized to account for more lags in the conditional correlation. A $\mathrm{DCC}\left(p,q\right)$ model assumes that:

$$Q_t = \bar{R} + \sum_{i=1}^{p}\alpha_i\left(\nu_{t-i}\nu_{t-i}' - \bar{R}\right) + \sum_{j=1}^{q}\beta_j\left(Q_{t-j} - \bar{R}\right)$$where $p$ and $q$ can be chosen, for instance, by an information criterion. Again, V-Lab uses $p=1$ and $q=1$ though, because this is usually the option that best fits financial time series.
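The higher-order recursion only adds inner loops over the extra lags; a sketch with hypothetical residuals and illustrative coefficient lists (`dcc_pq` is not a V-Lab function):

```python
import numpy as np

def dcc_pq(nu, bar_R, alphas, betas):
    """DCC(p, q) recursion with p = len(alphas) ARCH-type lags
    and q = len(betas) autoregressive lags."""
    T, n = nu.shape
    p, q = len(alphas), len(betas)
    Q = np.empty((T, n, n))
    Q[:max(p, q)] = bar_R  # initialize the first max(p, q) matrices
    for t in range(max(p, q), T):
        Qt = bar_R.copy()
        for i, a in enumerate(alphas, start=1):
            Qt += a * (np.outer(nu[t - i], nu[t - i]) - bar_R)
        for j, b in enumerate(betas, start=1):
            Qt += b * (Q[t - j] - bar_R)
        Q[t] = Qt
    return Q

rng = np.random.default_rng(4)
nu = rng.standard_normal((300, 2))
# A DCC(2,1) example with illustrative coefficients summing below one.
Q = dcc_pq(nu, np.corrcoef(nu, rowvar=False), [0.03, 0.02], [0.9])
```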

Bollerslev, T., 1990. Modeling The Coherence in Short-Run Nominal Exchange Rates: A Multivariate Generalized ARCH Model. Review of Economics and Statistics 72: 498-505.

Engle, R. F., 2002. Dynamic Conditional Correlation: A Simple Class of Multivariate GARCH Models. Journal of Business and Economic Statistics 20(3).

Engle, R. F., 2009. Anticipating Correlations: A New Paradigm for Risk Management. Princeton University Press.

Engle, R. F. and J. Mezrich, 1995. Grappling with GARCH. Risk: 112-117.

Engle, R. F., N. Shephard, and K. Sheppard, 2007. Fitting and Testing Vast Dimensional Time-Varying Covariance Models. NYU Working Paper FIN-07-046.