Reputation: 37
I am trying to do some research on Apple (AAPL) data in order to learn more about the volatility and variance of the dataset. My idea was to use GARCH and ARCH models. However, when I make a prediction with the trained part, the results make no sense.
from arch import arch_model

# Hold out the last 100 observations for testing
n_test = 100
train, test = AAPL[:-n_test], AAPL[-n_test:]

# Zero-mean ARCH(20) model fitted on the training data
model = arch_model(train, mean='Zero', vol='ARCH', p=20)
model_fit = model.fit()
I fit the ARCH model, and then I do the prediction:
import matplotlib.pyplot as plt

# Forecast the conditional variance over the test horizon
yhat = model_fit.forecast(horizon=n_test)

# Reference line (0.00, 0.01, ..., 0.99) plotted against the forecast variance
var = [i*0.01 for i in range(0, 100)]
plt.plot(var[-n_test:])
plt.plot(yhat.variance.values[-1, :])
plt.show()
Upvotes: 0
Views: 1024
Reputation: 1
You have to check whether the time series you are forecasting is stationary or not. Use the ADF test (Augmented Dickey-Fuller test) to check stationarity: if the p-value of the test is greater than 0.05, the time series is not stationary. In that case, take the first difference (next value minus previous value) to make it stationary. Then you can apply the ARCH model to it, as in the sketch below.
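A minimal sketch of that workflow, assuming AAPL is a pandas Series of prices as in the question (the variable names are illustrative):

from statsmodels.tsa.stattools import adfuller
from arch import arch_model

# Augmented Dickey-Fuller test: a p-value above 0.05 suggests the
# series is not stationary
adf_stat, p_value = adfuller(AAPL.dropna())[:2]
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.3f}")

if p_value > 0.05:
    # First difference (next value minus previous value) to remove the trend
    series = AAPL.diff().dropna()
else:
    series = AAPL

# Fit the ARCH model on the (now stationary) series
model = arch_model(series, mean='Zero', vol='ARCH', p=20)
model_fit = model.fit(disp='off')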
Upvotes: 0