The goal of this estimator is to make the GJR-DCC model robust in large dimensions. To this end, Engle et al. (2019) combine two tools. The first is the composite likelihood method of Pakel et al. (2017), which makes estimation of a DCC model computationally feasible when the number of assets is large. The second is the nonlinear shrinkage method of Ledoit and Wolf (2012), which improves estimation of the correlation targeting matrix, ensuring that DCC also performs well statistically when the number of assets is large.
Consider time series of returns and make the usual assumption that returns are serially uncorrelated. Then, we can define a vector of zero-mean white noises $\varepsilon_t := r_t - \mu_t$, where $r_t$ is the vector of returns and $\mu_t := \mathrm{E}[r_t \mid \mathcal{F}_{t-1}]$ is the vector of expected returns.
Despite being serially uncorrelated, the returns may exhibit contemporaneous correlation. That is, the conditional covariance matrix
$$\Sigma_t := \mathrm{Cov}(\varepsilon_t \mid \mathcal{F}_{t-1}) = \mathrm{E}[\varepsilon_t \varepsilon_t' \mid \mathcal{F}_{t-1}]$$
may not be a diagonal matrix. Moreover, this contemporaneous covariance may be time-varying, depending on past information.
The DCC-NL model involves three steps. The first step accounts for the conditional heteroskedasticity. It consists of estimating, for each of the series of returns $r_{i,t}$, its conditional volatility $d_{i,t}$ using a GJR-GARCH model (see GJR-GARCH documentation). Let $D_t$ be the diagonal matrix with these conditional volatilities, i.e. $D_{t,ii} = d_{i,t}$ and, if $i \neq j$, $D_{t,ij} = 0$. The standardized residuals are
$$s_t := D_t^{-1} \varepsilon_t,$$
and notice that these standardized residuals have unit conditional volatility.
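To make this step concrete, here is a minimal sketch assuming the third-party Python `arch` package, whose `arch_model` with `o=1` fits a GJR-GARCH; the function name `devolatize` and the data layout are illustrative, not part of the estimator's reference implementation.

```python
# Step 1 sketch: univariate GJR-GARCH volatilities and devolatized residuals.
import pandas as pd
from arch import arch_model

def devolatize(returns: pd.DataFrame) -> pd.DataFrame:
    """Fit a GJR-GARCH(1,1,1) to each return series and compute s_t = D_t^{-1} eps_t."""
    std_resid = {}
    for col in returns.columns:
        # o=1 adds the asymmetry (leverage) term, turning GARCH into GJR-GARCH.
        am = arch_model(returns[col], mean="Constant", vol="GARCH", p=1, o=1, q=1)
        res = am.fit(disp="off")
        # Residual divided by its conditional volatility has unit conditional variance.
        std_resid[col] = res.resid / res.conditional_volatility
    return pd.DataFrame(std_resid)
```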
The second step is to estimate the unconditional correlation matrix $C$ for the correlation targeting introduced in Engle and Mezrich (1996). The basic idea of DCC-NL is to use the nonlinear shrinkage estimator of Ledoit and Wolf (2012) to estimate $C$ instead of the sample correlation matrix $S$.
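For reference, the plain sample targeting estimate, which DCC-NL replaces with its shrunk counterpart (see the spectral sketch further below), can be computed as follows; the function name and array layout are assumptions.

```python
# Correlation-targeting sketch: the sample estimate that DCC-NL improves upon.
import numpy as np

def sample_correlation_target(s: np.ndarray) -> np.ndarray:
    """Sample correlation of devolatized residuals, S = (1/T) sum_t s_t s_t', rescaled."""
    T = s.shape[0]                 # s is a T x N array of devolatized residuals
    S = s.T @ s / T
    # Rescale to unit diagonal: s_t has unit conditional, not unit sample, variance.
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)
```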
The third step is to run a GJR-DCC-NL model, where the DCC-NL correlation dynamics are defined as follows:
$$Q_t := (1 - \alpha - \beta)\,\tilde{C} + \alpha\, s_{t-1} s_{t-1}' + \beta\, Q_{t-1},$$
where $\tilde{C}$ denotes the nonlinear shrinkage estimator of $C$, and $Q_t$ is a pseudo-correlation matrix, or a conditional covariance matrix of devolatized residuals. It cannot be used directly because its diagonal elements, although close to one, are not exactly equal to one. From this representation, we obtain the conditional correlation matrix:
$$R_t := \mathrm{diag}(Q_t)^{-1/2}\, Q_t\, \mathrm{diag}(Q_t)^{-1/2}.$$
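A minimal numpy sketch of this recursion, assuming $\alpha$ and $\beta$ have already been estimated (in DCC-NL, via the composite likelihood of Pakel et al. (2017)); the function name and array layout are illustrative.

```python
# DCC recursion sketch: given alpha, beta and the shrunk target C_tilde,
# produce the conditional correlation matrix R_t for each t.
import numpy as np

def dcc_correlations(s: np.ndarray, C_tilde: np.ndarray,
                     alpha: float, beta: float) -> np.ndarray:
    """Iterate Q_t = (1-a-b) C_tilde + a s_{t-1} s_{t-1}' + b Q_{t-1}, normalize to R_t."""
    T, N = s.shape
    R = np.empty((T, N, N))
    Q = C_tilde.copy()                 # standard initialization at the target matrix
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)      # R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}
        Q = (1 - alpha - beta) * C_tilde + alpha * np.outer(s[t], s[t]) + beta * Q
    return R
```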
It is widely acknowledged that the sample correlation matrix works poorly in large dimensions. The reason is that the sample correlation matrix has $N(N-1)/2$ parameters, while the data set has $N \times T$ noisy observations. When $N$ is of the same order of magnitude as $T$, these two quantities are similar-sized. It is not possible to estimate accurately $N(N-1)/2$ parameters from a similar number of noisy data points. This is the curse of dimensionality in action. Therefore, instead of using the sample covariance matrix of devolatized residuals
$$S = \sum_{i=1}^{N} \lambda_i\, u_i u_i',$$
where $\lambda_i$ is the $i$th sample eigenvalue and $u_i$ is its corresponding sample eigenvector, Engle et al. (2019) use the shrunk eigenvalues $\tilde{\lambda}_i$:
$$\tilde{C} = \sum_{i=1}^{N} \tilde{\lambda}_i\, u_i u_i'.$$
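In code, this rotation-equivariant construction (keep the sample eigenvectors, replace the sample eigenvalues) looks as follows. The toy eigenvalue transformation below, which pulls each eigenvalue toward their cross-sectional mean, is only a stand-in so the sketch runs; DCC-NL obtains the $\tilde{\lambda}_i$ from the analytical nonlinear shrinkage formula of Ledoit and Wolf (2019).

```python
# Spectral shrinkage sketch: reconstruct C_tilde = sum_i lambda_tilde_i u_i u_i'.
import numpy as np

def shrunk_correlation(S: np.ndarray, intensity: float = 0.5) -> np.ndarray:
    """Keep the eigenvectors of S; replace its eigenvalues by shrunk ones."""
    lam, U = np.linalg.eigh(S)                   # S = sum_i lam_i u_i u_i'
    # Toy stand-in for nonlinear shrinkage: pull eigenvalues toward their mean.
    lam_tilde = (1 - intensity) * lam + intensity * lam.mean()
    return (U * lam_tilde) @ U.T                 # U diag(lam_tilde) U'
```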
Note that, due to the in-sample bias of the sample correlation (or covariance) matrix, the small sample eigenvalues tend to be too small and the large ones tend to be too large. The correction is therefore a matter of pushing up the small values and pulling down the large ones. Since this transformation reduces the spread of the cross-sectional distribution of eigenvalues, it is generally called shrinkage. Whereas previous nonlinear shrinkage methods were numerical, for example the QuEST function of Ledoit and Wolf (2015), Ledoit and Wolf (2019) provide an analytical formula for optimal nonlinear shrinkage of large-dimensional covariance matrices.
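A quick simulation illustrates this dispersion: with true covariance equal to the identity (so every population eigenvalue equals one) and $N$ of the same order as $T$, the sample eigenvalues spread far above and below one. The printed bounds below are approximate and specific to this seed.

```python
# Eigenvalue dispersion of the sample covariance matrix in large dimensions.
import numpy as np

rng = np.random.default_rng(0)
T, N = 400, 200
X = rng.standard_normal((T, N))            # i.i.d. data, true covariance = I
lam = np.linalg.eigvalsh(X.T @ X / T)
# Roughly 0.09 and 2.91, close to the Marchenko-Pastur edges (1 +/- sqrt(N/T))^2,
# even though every population eigenvalue is exactly 1.
print(lam.min(), lam.max())
```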
Engle, R., Ledoit, O., and Wolf, M. (2019). Large Dynamic Covariance Matrices. Journal of Business & Economic Statistics, 37:363–375. doi: 10.1080/07350015.2017.1345683. http://www.econ.uzh.ch/static/wp/econwp231.pdf
Engle, R. F. and Mezrich, J. (1996). GARCH for groups. Risk, 9:36–40.
Ledoit, O. and Wolf, M. (2012). Nonlinear shrinkage estimation of large-dimensional covariance matrices. Annals of Statistics, 40(2):1024–1060.
Ledoit, O. and Wolf, M. (2015). Spectrum estimation: a unified framework for covariance matrix estimation and PCA in large dimensions. Journal of Multivariate Analysis, 139(2):360–384. https://www.sciencedirect.com/science/article/pii/S0047259X15000949
Ledoit, O. and Wolf, M. (2019). Analytical nonlinear shrinkage of large-dimensional covariance matrices. Annals of Statistics, forthcoming.
Pakel, C., Shephard, N., Sheppard, K., and Engle, R. F. (2017). Fitting vast dimensional time-varying covariance models. Working Paper FIN-08-009, NYU. https://ssrn.com/abstract=1354497