Time series unit root testing with Python ‘ARCH’ library: ADF, KPSS & Phillips-Perron tests for Goldman Sachs stock prices

Vsevolod Koteniov
Nov 16, 2021 · 6 min read


Testing stationarity is an essential part of any time series analysis, and nowadays we can apply a whole range of tests to determine whether a series is stationary or needs to be integrated.

While conducting econometric research, one can apply the tools of various packages that ship with a pre-installed testing toolkit, such as Gretl, EViews, and Stata. Still, as the majority of DS projects are realized in Python, it can be really useful to look into how we can follow the same testing paradigms using a Python toolkit. Specifically, we will take a look at the ‘ARCH’ library and find out how it may be of help.

Quick reminder: if a time series is non-stationary (i.e. has a unit root), we can’t build any ARMA models until it is differenced to stationarity. Moreover, Engle-Granger testing for a potential cointegrating (cause-effect) relationship between two time series is also impossible, as it requires establishing whether the series are stationary first.

Straight to the point. There are three key econometric tests we use to detect a unit root in a TS (time series, here and henceforth):

  1. Augmented Dickey-Fuller (ADF) stationarity test
  2. Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test
  3. Phillips-Perron test (for time series with instantaneous leaps or falls)

Let us briefly examine all of them and see how we can handle them with Python ‘ARCH’.

ADF — Augmented Dickey-Fuller

The standard test taught in econometrics classes all over the world. H0 (the null hypothesis) and H1 (the alternative hypothesis) are easily formulated:

H0: the time series has a unit root (i.e. is associated with non-stationary data)
H1: the time series is highly likely to lack a unit root and thus can be considered as associated with stationary data.

The method relies on OLS estimation of an auxiliary regression. Indeed, it is pretty simple:

A. You take your time series, calculate its first difference (i.e. differentiate it) and take this first difference as the dependent variable: Xt - Xt-1 = (1 - L)Xt, or d_Xt.

B. You take the first lag of your time series as a regressor: Xt-1 = LXt.

C. Additionally, you can add a constant and a trend variable: const, time. Note that these terms can be either included or not: as a researcher, you decide whether they are necessary.

Your regression will look something like this:

d_Xt = const + beta_0*time + beta_1*LXt + errors.

NB: There should be no autocorrelation in the model’s residuals. If the residuals turn out to be autocorrelated, you should extend your feature space with all the lags of d_Xt associated with the autocorrelation, i.e. use these lags of d_Xt as regressors (the ADF test realization in ‘ARCH’ does it for you under the hood).

Consider the example:

d_Xt = const + beta_0*time + beta_1*LXt + beta_2*L(d_Xt) + errors.

This example implies that the first lag is associated with autocorrelation in the auxiliary ADF regression’s residuals, which makes us augment our equation by including this lag.

The t-statistic for your beta_1 is indeed the ADF test statistic you are looking for. Then, comparing this statistic with the ADF critical values for your case, you reject or fail to reject the H0 hypothesis of a unit root. Note that the ADF distribution is left-tailed, meaning that all the critical ADF values are negative; to reject the null, your ADF test statistic should be negative and smaller than the ADF critical value.
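
To make the mechanics concrete, here is a minimal sketch of this auxiliary regression estimated by hand with statsmodels OLS (the adf_t_stat helper and the Series x are illustrative assumptions; as noted above, ‘ARCH’ builds this regression for you):

import numpy as np
import pandas as pd
import statsmodels.api as sm

def adf_t_stat(x: pd.Series, n_aug_lags: int = 0) -> float:
    dx = x.diff()                                     # d_Xt = Xt - Xt-1
    data = pd.DataFrame({'d_x': dx, 'lag_x': x.shift(1)})
    for k in range(1, n_aug_lags + 1):                # augmentation lags of d_Xt
        data[f'd_x_lag{k}'] = dx.shift(k)
    data['time'] = np.arange(len(data))               # deterministic trend term
    data = data.dropna()
    exog = sm.add_constant(data.drop(columns='d_x'))  # adds the constant
    result = sm.OLS(data['d_x'], exog).fit()
    return result.tvalues['lag_x']                    # t-stat for beta_1

Remember that the returned t-statistic has to be compared against the ADF critical values, not the standard Student’s t ones.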

We are lucky to have the arch.unitroot.ADF() class, which selects the number of lags for augmentation automatically.

Let us check how it works. Suppose we have a very simple time series: Goldman Sachs stock prices from 2019 up to 2021. The time series looks something like this:

import matplotlib.pyplot as plt
import seaborn as sns

# goldman_sachs is a DataFrame with a 'close_price' column
fig, ax = plt.subplots(figsize = (17, 7))
sns.lineplot(data = goldman_sachs,
             x = goldman_sachs.index,
             y = 'close_price')
ax.set_title('Goldman Sachs stock prices, 2019 - 2021'.upper(), pad = 16)
ax.set_ylabel('Close price')
ax.set_xlabel('Observations')
plt.show()
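
(The article does not show the data-loading step; one hypothetical way to reproduce the goldman_sachs frame, assuming the yfinance package and the ‘GS’ ticker, could be:)

# Hypothetical loading step, not shown in the original article
import yfinance as yf

goldman_sachs = (
    yf.download('GS', start = '2019-01-01', end = '2021-11-01')
      .rename(columns = {'Close': 'close_price'})[['close_price']]
)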

Now let us import all the functionality from arch.unitroot:

from arch.unitroot import *

Now we can apply the ADF() function to our time series. Let us consider three auxiliary regressions: 1. Constant & trend, 2. Constant, 3. No constant, no trend.

Case 1. Including both constant & trend
Apart from passing the time series itself, let’s pass the ‘ct’ value to the ‘trend’ argument. ‘ct’ stands for constant & trend:

adf_ct = ADF(goldman_sachs['close_price'], trend = 'ct')
adf_ct.summary()
The ADF results for Goldman Sachs stock prices

We can observe that the ADF test statistic equals -1.799, whereas the ADF critical value (generated automatically, btw) is -3.42 at the 5% significance level. Thus, our statistic has not managed to fall into the left tail of the ADF distribution (the critical zone), and we fail to reject the null: the process contains a unit root (the p-value is estimated at 70.5%). Note that the function has automatically selected the number of lags up to which the auxiliary regression was augmented (7 lags).
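
Besides the text summary, the test object exposes the results as attributes, which is handy when the tests are part of a pipeline:

# The same results, accessible programmatically
print(adf_ct.stat)             # ADF test statistic: -1.799
print(adf_ct.pvalue)           # p-value: ~0.705
print(adf_ct.lags)             # number of augmentation lags chosen: 7
print(adf_ct.critical_values)  # dict of the 1%/5%/10% critical values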

Are the results for the two other cases supposed to be different? Let’s see.

Cases 2 & 3. Constant / No constant, no trend
Let’s repeat the same procedure by including only the constant in our test equation (the auxiliary ADF regression):

adf_c = ADF(goldman_sachs['close_price'], trend = 'c')
adf_c.summary()
ADF test with constant
adf_n = ADF(goldman_sachs['close_price'], trend = 'n')
adf_n.summary()
ADF with no constant, no trend

The same here. The p-values are extremely high (92.8% and 95.3%), and the ADF test statistics are far away from the left tail of the distribution (-0.284 vs the -2.87 critical value; 1.332 vs -1.94). The process is considered to be non-stationary. The number of lags taken for augmenting both test equations is 8.
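
Before moving on, note that the three specifications can be compared in a single loop:

# Compare the three deterministic-term specifications side by side
for tr in ('ct', 'c', 'n'):
    res = ADF(goldman_sachs['close_price'], trend = tr)
    print(f"trend = '{tr}': stat = {res.stat:.3f}, p-value = {res.pvalue:.3f}, lags = {res.lags}")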

KPSS — Kwiatkowski–Phillips–Schmidt–Shin

What’s marvelous about this test is that it is resilient to autocorrelation and heteroskedasticity in the auxiliary regression’s residuals, i.e. you don’t need to manually augment your regression with extra lags of the dependent variable.

NB: here the null and alternative hypotheses are the opposite.
H0: the time series tested is considered to be stationary
H1: the TS should be considered as non-stationary (there is a unit root).

Let us take the same time series: Goldman Sachs stock prices, 2019–2021.
The KPSS testing function is stored in the same module: arch.unitroot.KPSS(). Since we have already imported everything from the module, no extra import is needed. Let us call the function (here we consider the case with constant & trend):

kpss_ct = KPSS(goldman_sachs['close_price'], trend = 'ct')
kpss_ct.summary()
KPSS for Goldman Sachs stock prices (Constant & Trend)

Let us analyse the results. The p-value is 0, which means we should reject the null and accept the alternative. H1 states that the time series is non-stationary. Thus, we consider our TS a unit-root process.
NB: the KPSS test results match the ADF test results.
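
If you want to mirror the ADF cases, the constant-only variant runs the same way (note that KPSS does not support the no-constant ‘n’ option):

kpss_c = KPSS(goldman_sachs['close_price'], trend = 'c')
kpss_c.summary()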

Phillips-Perron

The Phillips-Perron test is also considered to be resilient to autocorrelation and heteroskedasticity. However, unlike the ADF test, here this effect is achieved through a so-called non-parametric correction of the test statistic (we won’t go into the details of this aspect).

The null and alternative hypotheses are the same as in the ADF test:
H0: The process contains a unit root.
H1: The process is stationary.

Let’s take a look at how it works for the constant & trend case:

php_ct = PhillipsPerron(goldman_sachs['close_price'], trend = 'ct')
php_ct.summary()
Phillips-Perron for Constant & Trend

The p-value is 71.2%, so once again we consider our TS to be a unit-root process. The test results match the two previous ones.
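
As a side note, ‘ARCH’ lets you choose between the two classical forms of the PP statistic via the test_type argument: ‘tau’ (the default, t-statistic based) and ‘rho’ (normalized-bias based); both rely on the same non-parametric correction:

# Two flavours of the Phillips-Perron statistic
pp_tau = PhillipsPerron(goldman_sachs['close_price'], trend = 'ct', test_type = 'tau')
pp_rho = PhillipsPerron(goldman_sachs['close_price'], trend = 'ct', test_type = 'rho')
print(pp_tau.stat, pp_tau.pvalue)
print(pp_rho.stat, pp_rho.pvalue)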

To sum up,

It is wise to use all the arch.unitroot options for testing and compare the results. The module is wonderful in that it evaluates all the auxiliary regressions under the hood and, if necessary, augments them up to a certain lag.
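
For instance, all three tests can be run for the same specification in one pass (KPSS supports only the ‘c’ and ‘ct’ trend options, so ‘ct’ works for all three):

# Run all three unit root tests with constant & trend
for test_cls in (ADF, PhillipsPerron, KPSS):
    res = test_cls(goldman_sachs['close_price'], trend = 'ct')
    print(f'{test_cls.__name__}: stat = {res.stat:.3f}, p-value = {res.pvalue:.3f}')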

Speaking of the ARCH library, it’s worth mentioning that arch.unitroot is one of many useful modules which help to evaluate autoregressive conditional heteroskedasticity models, extremely useful for measuring the volatility of future forecasts (especially in finance).

Thanks for reading!
