V-Lab

GJR-DCC-NL

Motivation

The goal of this estimator is to make the GJR-DCC model more robust in large dimensions. To this end, Engle et al. (2019) combine two tools. The first is the composite likelihood method of Pakel et al. (2017), which makes the estimation of a DCC model computationally feasible in large dimensions: composite likelihood ensures that DCC can be used when the number of assets is large. The second is the nonlinear shrinkage method of Ledoit and Wolf (2012), which improves the estimation of the correlation targeting matrix of a DCC model: nonlinear shrinkage ensures that DCC performs well when the number of assets is large.

Definition

Consider n time series of returns and make the usual assumption that returns are serially uncorrelated. Then we can define a vector of zero-mean white noise

ε_t = r_t − μ,

where r_t is the n×1 vector of returns and μ is the vector of expected returns.

Despite being serially uncorrelated, the returns may exhibit contemporaneous correlation. That is,

Σ_t ≡ 𝔼_{t−1}[(r_t − μ)(r_t − μ)′]

may not be a diagonal matrix. Moreover, this conditional covariance matrix may be time-varying, depending on past information.

The DCC-NL model involves three steps. The first step accounts for conditional heteroskedasticity. It consists of estimating, for each of the n series of returns r_t^i, its conditional volatility σ_t^i using a GJR-GARCH model (see the GJR-GARCH documentation). Let D_t be the diagonal matrix of these conditional volatilities, i.e. D_t^{i,i} = σ_t^i and D_t^{i,j} = 0 for i ≠ j. The standardized residuals are then

ν_t ≡ D_t^{−1}(r_t − μ),

and notice that these standardized residuals have unit conditional volatility.
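
As a concrete illustration of this first step, here is a minimal NumPy sketch of a GJR-GARCH(1,1) variance recursion followed by devolatilization. The function name and parameter values are hypothetical; in practice the parameters are estimated by quasi-maximum likelihood, which is omitted here.

```python
import numpy as np

def gjr_garch_volatility(eps, omega, alpha, gamma, beta):
    """GJR-GARCH(1,1) conditional variance recursion:
    sigma2[t] = omega + (alpha + gamma * 1{eps[t-1] < 0}) * eps[t-1]**2
                      + beta * sigma2[t-1].
    The parameters are taken as given (illustrative values below)."""
    T = len(eps)
    sigma2 = np.empty(T)
    # initialize at the unconditional variance implied by the parameters
    # (the gamma/2 term assumes symmetric innovations)
    sigma2[0] = omega / (1.0 - alpha - gamma / 2.0 - beta)
    for t in range(1, T):
        leverage = gamma if eps[t - 1] < 0 else 0.0
        sigma2[t] = omega + (alpha + leverage) * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)

# devolatilize one series: nu_t = eps_t / sigma_t has unit conditional volatility
rng = np.random.default_rng(0)
eps = rng.standard_normal(1000)                 # simulated zero-mean innovations
sigma = gjr_garch_volatility(eps, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85)
nu = eps / sigma
```

In the full model this devolatilization is applied separately to each of the n return series, and the resulting ν_t^i are stacked into the vector ν_t.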

The second step is to estimate the unconditional correlation matrix R used for correlation targeting, as introduced in Engle and Mezrich (1996). The basic idea of DCC-NL is to estimate R with the nonlinear shrinkage estimator of Ledoit and Wolf (2012) instead of the sample correlation matrix R̄.

The third step is to run the DCC recursion of the GJR-DCC-NL model. The DCC-NL correlation dynamics are defined as

Q_t = (1 − α − β) R̃ + α ν_{t−1} ν′_{t−1} + β Q_{t−1},

where R̃ denotes the nonlinear shrinkage estimator of R. The matrix Q_t is a pseudo-correlation matrix, or a conditional covariance matrix of the devolatilized residuals. It cannot be used directly because its diagonal elements, although close to one, are not exactly equal to one. From this representation, we obtain the conditional correlation matrix

C_t ≡ Diag(Q_t)^{−1/2} Q_t Diag(Q_t)^{−1/2}.
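
The recursion and the rescaling of this third step can be sketched in a few lines of NumPy. The function name, initialization at the targeting matrix, and parameter values below are illustrative assumptions, not V-Lab's implementation.

```python
import numpy as np

def dcc_correlations(nu, R_tilde, alpha, beta):
    """DCC recursion on devolatilized residuals nu (T x n):
    Q_t = (1 - alpha - beta) * R_tilde + alpha * nu_{t-1} nu_{t-1}' + beta * Q_{t-1},
    followed by C_t = Diag(Q_t)^{-1/2} Q_t Diag(Q_t)^{-1/2}.
    R_tilde is the (shrunk) correlation-targeting matrix."""
    T, n = nu.shape
    Q = R_tilde.copy()                  # initialize Q_1 at the target (an assumption)
    C = np.empty((T, n, n))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        C[t] = Q * np.outer(d, d)       # rescale so the diagonal is exactly one
        # update Q_{t+1} from the period-t devolatilized residual
        Q = (1 - alpha - beta) * R_tilde + alpha * np.outer(nu[t], nu[t]) + beta * Q
    return C

# usage on simulated devolatilized residuals (illustrative parameters)
rng = np.random.default_rng(1)
nu = rng.standard_normal((200, 3))
R_tilde = np.full((3, 3), 0.3)
np.fill_diagonal(R_tilde, 1.0)
C = dcc_correlations(nu, R_tilde, alpha=0.05, beta=0.90)
```

Since each Q_t is a positive semi-definite combination of positive semi-definite matrices, the rescaled C_t is a proper correlation matrix with unit diagonal and off-diagonal entries in [−1, 1].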

Correlation Targeting

It is widely acknowledged that the sample correlation matrix works poorly in large dimensions. The reason is that the sample correlation matrix has n(n − 1)/2 parameters, while the data set has n × T noisy observations. When n is of the same order of magnitude as T, these two quantities are of similar size, and it is not possible to accurately estimate O(n²) parameters from O(n²) noisy data points. This is the curse of dimensionality in action. Therefore, instead of using the sample correlation matrix of the devolatilized residuals

R̄ = (1/T) Σ_{t=1}^{T} ν_t ν′_t = Σ_{i=1}^{n} λ_i u_i u′_i,

where λ_i is the i-th sample eigenvalue and u_i is its corresponding sample eigenvector, Engle et al. (2019) use the shrunk eigenvalues λ̃_1 ≥ λ̃_2 ≥ ··· ≥ λ̃_n:

R̃ ≡ Σ_{i=1}^{n} λ̃_i u_i u′_i.
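
The rotation-equivariant structure of this estimator, keeping the sample eigenvectors while replacing the eigenvalues, can be sketched as follows. The simple linear pull toward the cross-sectional mean used below is only a stand-in for the actual Ledoit-Wolf nonlinear shrinkage formula, which is considerably more involved; `delta` is a hypothetical shrinkage intensity.

```python
import numpy as np

def shrunk_correlation(nu, delta=0.5):
    """Rotation-equivariant shrinkage sketch: keep the sample eigenvectors u_i,
    replace each sample eigenvalue lambda_i by a shrunk value lambda_tilde_i.
    Stand-in rule: pull each eigenvalue toward the cross-sectional mean with
    hypothetical intensity delta (the real DCC-NL estimator uses the
    Ledoit-Wolf nonlinear shrinkage formula instead)."""
    T, n = nu.shape
    R_bar = (nu.T @ nu) / T                  # sample second moment of devolatilized residuals
    lam, U = np.linalg.eigh(R_bar)           # eigenvalues ascending; columns of U are u_i
    lam_shrunk = (1 - delta) * lam + delta * lam.mean()   # push small up, pull large down
    return (U * lam_shrunk) @ U.T            # R_tilde = sum_i lambda_tilde_i u_i u_i'

# usage: the shrunk matrix keeps the trace but has less eigenvalue spread
rng = np.random.default_rng(2)
nu = rng.standard_normal((120, 40))          # T = 120 observations, n = 40 assets
R_bar = (nu.T @ nu) / len(nu)
R_tilde = shrunk_correlation(nu, delta=0.5)
```

Note that any estimator of this form preserves the trace of R̄ whenever the shrunk eigenvalues have the same mean as the sample eigenvalues.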

Note that due to the in-sample bias of the sample correlation (or covariance) matrix, the small sample eigenvalues tend to be too small and the large ones tend to be too large. The remedy is therefore to push the small eigenvalues up and pull the large ones down. Since this transformation reduces the spread of the cross-sectional distribution of eigenvalues, it is generally called shrinkage. Whereas previous nonlinear shrinkage methods were numerical, for example the QuEST function of Ledoit and Wolf (2015), Ledoit and Wolf (2019) provide an analytical formula for the optimal nonlinear shrinkage of large-dimensional covariance matrices.
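
The eigenvalue dispersion that shrinkage corrects is easy to demonstrate numerically. In the sketch below (dimensions and seed chosen arbitrarily for illustration), the true correlation matrix is the identity, so every true eigenvalue equals one, yet with n comparable to T the sample eigenvalues spread far from one.

```python
import numpy as np

# True correlation matrix = identity (all true eigenvalues equal 1),
# but with n = 100 assets and only T = 200 observations the sample
# eigenvalues are badly dispersed around 1.
rng = np.random.default_rng(42)
n, T = 100, 200
X = rng.standard_normal((T, n))              # iid data with identity covariance
sample_eigs = np.linalg.eigvalsh((X.T @ X) / T)
print(sample_eigs.min(), sample_eigs.max())  # smallest well below 1, largest well above 1
```

Shrinking these sample eigenvalues back toward one recovers a far better estimate of the true spectrum, which is exactly what the nonlinear shrinkage step exploits.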

References

Engle, R., Ledoit, O., and Wolf, M. (2019). Large Dynamic Covariance Matrices. Journal of Business & Economic Statistics, 37:363–375. doi: 10.1080/07350015.2017.1345683. http://www.econ.uzh.ch/static/wp/econwp231.pdf

Engle, R. F. and Mezrich, J. (1996). GARCH for groups. Risk, 9:36–40.

Ledoit, O. and Wolf, M. (2012). Nonlinear shrinkage estimation of large-dimensional covariance matrices. Annals of Statistics, 40(2):1024–1060.

Ledoit, O. and Wolf, M. (2015). Spectrum estimation: a unified framework for covariance matrix estimation and PCA in large dimensions. Journal of Multivariate Analysis, 139(2):360–384. https://www.sciencedirect.com/science/article/pii/S0047259X15000949

Ledoit, O. and Wolf, M. (2019). Analytical nonlinear shrinkage of large-dimensional covariance matrices. Annals of Statistics, forthcoming.

Pakel, C., Shephard, N., Sheppard, K., and Engle, R. F. (2017). Fitting vast dimensional time-varying covariance models. Working Paper FIN-08-009, NYU. https://ssrn.com/abstract=1354497