Reputation: 21
In a time series analysis, I tested 30 time series of 62 observations each for a unit root with the ur.df test from the R package urca (Bernhard Pfaff), with the lag length selected by the AIC criterion. Without exception, a lag length of 1 was chosen, which seems highly implausible. Testing with the CADF test from the R package CADFtest (which performs an ordinary ADF test when x ~ 1 is specified), also with AIC lag selection, the number of lags varies between 0 and 7. Can someone explain the tendency towards a uniform and short lag length in urca?
Furthermore, even when the lag lengths in ur.df and CADFtest coincide, the test statistics do not. For instance, for the time series lcon (natural logarithm of consumption per head, Netherlands, 1950-2010), the test statistics (constant and trend) are -1.5378 (1 lag) with ur.df and -2.4331 (1 lag) with CADFtest. adf.test from the R package tseries computes a test statistic equal to that of ur.df (-1.5378, 1 lag). So rejection of a unit root depends on the package, which is not an optimal situation.
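For reference, a minimal sketch of how such a comparison might be set up in R (the exact calls used above are not shown; the series name lcon and the lag bounds here are illustrative assumptions):

library(urca)      # ur.df
library(CADFtest)  # CADFtest
library(tseries)   # adf.test

# ADF with constant and trend, AIC lag selection up to an assumed maximum of 7 (urca)
summary(ur.df(lcon, type = "trend", lags = 7, selectlags = "AIC"))

# Ordinary ADF via CADFtest (no covariates), constant and trend, AIC selection up to 7 lags
summary(CADFtest(lcon ~ 1, type = "trend", max.lag.y = 7, criterion = "AIC"))

# ADF from tseries with a fixed lag length of 1, for comparison
adf.test(lcon, k = 1)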
Upvotes: 2
Views: 2673
Reputation: 1
I had the same problem. You need to specify the maximum number of lags with the lags argument; otherwise the default maximum of 1 is used, so AIC selection can never pick more than one lag.
For example:
library(urca)
ur.df(variable, type = "drift", lags = 30, selectlags = "AIC")
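To see which lag length the AIC selection actually settled on, you can inspect the fitted object (a sketch; the slot names are assumed from the urca documentation, and the chosen lag can be read off the lagged-difference terms in the regression output):

fit <- ur.df(variable, type = "drift", lags = 30, selectlags = "AIC")
summary(fit)   # retained z.diff.lag terms indicate the selected lag length
fit@teststat   # test statistics
fit@cval       # critical values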
Upvotes: 0
Reputation: 31
There seems to be a severe problem due to the sensitivity of the results to the effective sample length: a few observations can change the outcome dramatically (e.g. when comparing lag lengths p = 3 and p = 4, the estimation sample starts at t = 3 for the former and at t = 4 for the latter). The candidate regressions should therefore all start at a common date, as is also recommended for IC-based selection of the lag length in VAR models in general. So if max.lag.y = 6, the supplied time series needs to be truncated accordingly (e.g. y[-c(1:5)]). Unfortunately this is not the default. Hope this helps. I am not sure this is the only issue with CADFtest, though (see also https://stat.ethz.ch/pipermail/r-help/2011-November/295519.html).
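A minimal sketch of what such a common-sample comparison could look like, using a plain constant-only ADF regression in base R (the vector y and the lag bound are assumptions; this is not how CADFtest itself selects the lag):

max.lag <- 6
dy <- diff(y)
n  <- length(dy)
aic <- sapply(0:max.lag, function(p) {
  idx <- (max.lag + 1):n                      # same estimation sample for every p
  X <- cbind(y.lag1 = y[idx],                 # level term y_{t-1}
             if (p > 0) sapply(1:p, function(j) dy[idx - j]))  # lagged differences
  AIC(lm(dy[idx] ~ X))
})
which.min(aic) - 1   # lag length chosen on the common sample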
Best,
Hannes
Upvotes: 0