Consider a return time series $r_t = \mu + \epsilon_t$, where $\mu$ is the expected return and $\epsilon_t$ is a zero-mean white noise. Despite being serially uncorrelated, the series $\epsilon_t$ need not be serially independent; for instance, it can exhibit conditional heteroskedasticity. The Glosten-Jagannathan-Runkle GARCH ($\mathrm{GJR-GARCH}$) model assumes a specific parametric form for this conditional heteroskedasticity. More specifically, we say that $\epsilon_t \sim \mathrm{GJR-GARCH}$ if we can write $\epsilon_t = \sigma_t z_t$, where $z_t$ is standard Gaussian and:

$${\sigma}_{t}^{2}=\omega +\left(\alpha +\gamma {I}_{t-1}\right){\epsilon}_{t-1}^{2}+\beta {\sigma}_{t-1}^{2}$$

where

$$I_{t-1} := \begin{cases} 0 & \text{if } r_{t-1} \ge \mu \\ 1 & \text{if } r_{t-1} < \mu \end{cases}$$
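The recursion and the indicator defined above can be sketched with a short simulation. This is a minimal NumPy illustration, not V-Lab code; the function name and the parameter values are chosen purely for the example.

```python
import numpy as np

def simulate_gjr_garch(n, mu=0.0, omega=0.05, alpha=0.05, gamma=0.10,
                       beta=0.85, seed=0):
    """Simulate n returns r_t = mu + sigma_t z_t with GJR-GARCH(1,1) variance."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)                   # z_t ~ N(0, 1)
    sigma2 = np.empty(n)
    r = np.empty(n)
    # Start the recursion at the unconditional variance omega / (1 - alpha - gamma/2 - beta).
    sigma2[0] = omega / (1.0 - alpha - gamma / 2.0 - beta)
    r[0] = mu + np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        eps_prev = r[t - 1] - mu
        ind = 1.0 if r[t - 1] < mu else 0.0      # I_{t-1}: 1 after a negative shock
        sigma2[t] = omega + (alpha + gamma * ind) * eps_prev**2 + beta * sigma2[t - 1]
        r[t] = mu + np.sqrt(sigma2[t]) * z[t]
    return r, sigma2
```

With these illustrative values the persistence $\alpha + \gamma/2 + \beta$ equals $0.95$, so the simulated variance keeps fluctuating around its unconditional level of $1$.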

V-Lab estimates all the parameters $\left(\mu, \omega, \alpha, \gamma, \beta\right)$ simultaneously, by maximizing the log-likelihood. The assumption that $z_t$ is Gaussian does not imply that the returns are Gaussian. Even though their conditional distribution is Gaussian, it can be proved that their unconditional distribution presents excess kurtosis (fat tails). In fact, assuming that the conditional distribution is Gaussian is not as restrictive as it seems: even if the true distribution is different, the so-called Quasi-Maximum Likelihood (QML) estimator is still consistent under fairly mild regularity conditions.
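A maximum-likelihood fit can be sketched as follows. This is a simplified illustration using `scipy.optimize.minimize`, not V-Lab's actual estimation code; initializing $\sigma_1^2$ at the sample variance and the particular starting values and bounds are conventions chosen for the example.

```python
import numpy as np
from scipy.optimize import minimize

def gjr_garch_nll(params, r):
    """Gaussian negative log-likelihood of a GJR-GARCH(1,1) model."""
    mu, omega, alpha, gamma, beta = params
    eps = r - mu
    n = len(r)
    sigma2 = np.empty(n)
    sigma2[0] = np.var(r)                        # common initialization choice
    for t in range(1, n):
        ind = 1.0 if eps[t - 1] < 0 else 0.0     # I_{t-1}
        sigma2[t] = omega + (alpha + gamma * ind) * eps[t - 1]**2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + eps**2 / sigma2)

def fit_gjr_garch(r):
    """Estimate (mu, omega, alpha, gamma, beta) by (quasi-)maximum likelihood."""
    x0 = np.array([np.mean(r), 0.1 * np.var(r), 0.05, 0.05, 0.80])
    bounds = [(None, None), (1e-8, None), (1e-8, 1.0), (1e-8, 1.0), (1e-8, 1.0)]
    return minimize(gjr_garch_nll, x0, args=(r,), method="L-BFGS-B",
                    bounds=bounds, options={"maxiter": 200})
```

The positivity bounds keep every $\sigma_t^2$ in the recursion strictly positive, so the log-likelihood is always well defined along the optimization path.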

Besides leptokurtic returns, the $\mathrm{GJR-GARCH}$ model, like the $\mathrm{GARCH}$ model, captures other stylized facts of financial time series, such as volatility clustering: the volatility is more likely to be high at time $t$ if it was also high at time $t-1$. Equivalently, a shock at time $t-1$ also impacts the variance at time $t$. However, if $\alpha + \frac{\gamma}{2} + \beta < 1$, the volatility itself is mean reverting, and it fluctuates around $\sigma$, the square root of the unconditional variance

$${\sigma}^{2} := \text{Var}\left({r}_{t}\right)=\frac{\omega}{1-\alpha -\frac{\gamma}{2}-\beta}$$

where the $\frac{1}{2}$ multiplying $\gamma$ comes from the normality assumption on $z_t$. More intuitively, it comes from the assumption that the conditional distribution of the returns is symmetric around $\mu$.
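As a quick numeric check of the formula, with purely illustrative parameter values:

```python
# Unconditional variance implied by a GJR-GARCH(1,1) with symmetric z_t.
omega, alpha, gamma, beta = 0.05, 0.05, 0.10, 0.85

persistence = alpha + gamma / 2 + beta       # must be < 1 for mean reversion; here ~0.95
uncond_var = omega / (1 - persistence)       # omega / (1 - alpha - gamma/2 - beta); here ~1.0
```

Note that only half of $\gamma$ enters the persistence, because the indicator $I_{t-1}$ is active about half the time under a symmetric conditional distribution.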

Usual restrictions on the parameters are $\omega ,\alpha ,\gamma ,\beta >0$. The $\mathrm{GARCH}$ model is in fact a restricted version of the $\mathrm{GJR-GARCH}$, with $\gamma =0$.

Let $r_T$ be the last observation in the sample, and let $\hat{\omega}$, $\hat{\alpha}$, $\hat{\gamma}$ and $\hat{\beta}$ be the QML estimators of the parameters $\omega$, $\alpha$, $\gamma$ and $\beta$, respectively. The $\mathrm{GJR-GARCH}$ model implies that the forecast of the conditional variance at time $T+h$ is:

$${\hat{\sigma}}_{T+h}^{2}=\hat{\omega}+\left(\hat{\alpha}+\frac{\hat{\gamma}}{2}+\hat{\beta}\right){\hat{\sigma}}_{T+h-1}^{2}$$

and so, by applying the above formula iteratively, we can forecast the conditional variance for any horizon $h$. Then, the forecast of the compound volatility at time $T+h$ is

$${\hat{\sigma}}_{T+1:T+h}=\sqrt{\sum _{i=1}^{h}{\hat{\sigma}}_{T+i}^{2}}$$
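The iteration and the compound volatility can be sketched as below. The helper names are hypothetical, and `sigma2_next` stands for the one-step-ahead forecast $\hat{\sigma}^2_{T+1}$, computed from the last observation via the variance recursion.

```python
import numpy as np

def forecast_variances(sigma2_next, omega, alpha, gamma, beta, h):
    """Path of conditional-variance forecasts sigma^2_{T+1}, ..., sigma^2_{T+h}."""
    persistence = alpha + gamma / 2 + beta   # estimated persistence
    path = np.empty(h)
    path[0] = sigma2_next
    for i in range(1, h):
        path[i] = omega + persistence * path[i - 1]
    return path

def compound_volatility(sigma2_next, omega, alpha, gamma, beta, h):
    """Forecast of the compound volatility over T+1, ..., T+h."""
    return np.sqrt(forecast_variances(sigma2_next, omega, alpha, gamma, beta, h).sum())
```

Unrolling the recursion gives the closed form $\hat{\sigma}^2_{T+h} = \sigma^2 + \left(\hat{\alpha} + \frac{\hat{\gamma}}{2} + \hat{\beta}\right)^{h-1}\left(\hat{\sigma}^2_{T+1} - \sigma^2\right)$, which makes the geometric decay toward the unconditional variance explicit.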

Notice that, for large $h$, the forecast of the compound volatility converges to:

$$\sqrt{h}\sqrt{\frac{\hat{\omega}}{1-\hat{\alpha}-\frac{\hat{\gamma}}{2}-\hat{\beta}}}$$

scaling over the forecast horizon with the well-known square-root law, times the estimate of the unconditional volatility implied by the $\mathrm{GJR-GARCH}$ model. Again, the $\frac{1}{2}$ multiplying $\gamma$ comes from the assumption of a symmetric conditional distribution for the returns.

There is a stylized fact that the $\mathrm{GJR-GARCH}$ model captures but the $\mathrm{GARCH}$ model does not: the empirically observed fact that negative shocks at time $t-1$ have a stronger impact on the variance at time $t$ than positive shocks. This asymmetry used to be called the leverage effect, because the increase in risk was believed to come from the increased leverage induced by a negative shock, but nowadays we know that this channel is too small to account for the observed asymmetry. Notice that the effective coefficient associated with a negative shock is $\alpha + \gamma$. In financial time series, we generally find that $\gamma$ is statistically significant.

The specific model just described can be generalized to account for more lags in the conditional variance. A $\mathrm{GJR-GARCH}\left(p,q\right)$ model assumes that:

$${\sigma}_{t}^{2}=\omega +\sum _{i=1}^{p}\left({\alpha}_{i}+{\gamma}_{i}{I}_{t-i}\right){\epsilon}_{t-i}^{2}+\sum _{j=1}^{q}{\beta}_{j}{\sigma}_{t-j}^{2}$$
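Given a series of residuals, the $\left(p,q\right)$ recursion can be sketched as follows. This is a minimal illustration; initializing the pre-sample variances at the sample variance of the residuals is one common convention, not part of the model itself.

```python
import numpy as np

def gjr_garch_pq_variance(eps, omega, alphas, gammas, betas):
    """Conditional variances of a GJR-GARCH(p, q) model given residuals eps."""
    p, q = len(alphas), len(betas)
    n = len(eps)
    sigma2 = np.full(n, np.var(eps))         # pre-sample values: sample variance
    for t in range(max(p, q), n):
        s = omega
        for i in range(p):                   # asymmetric ARCH terms
            ind = 1.0 if eps[t - 1 - i] < 0 else 0.0
            s += (alphas[i] + gammas[i] * ind) * eps[t - 1 - i]**2
        for j in range(q):                   # GARCH terms
            s += betas[j] * sigma2[t - 1 - j]
        sigma2[t] = s
    return sigma2
```

With $p = q = 1$ this reduces exactly to the single-lag recursion given earlier.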

The best model (the orders $p$ and $q$) can be chosen, for instance, by the Bayesian Information Criterion (BIC), also known as the Schwarz Information Criterion (SIC), or by the Akaike Information Criterion (AIC). The former tends to be more parsimonious than the latter. V-Lab uses $p=1$ and $q=1$, however, because this is usually the option that best fits financial time series.
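Both criteria trade off fit against the number of estimated parameters $k$; with maximized log-likelihood $\hat{L}$ and sample size $n$, the standard formulas can be sketched as:

```python
import numpy as np

def aic(loglik, k):
    """Akaike Information Criterion: lower is better."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian (Schwarz) Information Criterion: lower is better."""
    return k * np.log(n) - 2 * loglik
```

Because $\ln n > 2$ once $n \ge 8$, BIC penalizes each extra parameter more heavily than AIC, which is why it tends to select the more parsimonious model.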

Glosten, L. R., R. Jagannathan, and D. E. Runkle, 1993. On the Relation between the Expected Value and the Volatility of the Nominal Excess Return on Stocks. Journal of Finance 48: 1779-1801. https://www.jstor.org/stable/2329067

Zakoian, J. M., 1994. Threshold Heteroscedastic Models. Journal of Economic Dynamics and Control 18: 931-955. https://doi.org/10.1016/0165-1889(94)90039-6