South American Research Journal, 4(2), 25-44  
https://sa-rj.net/index.php/sarj/article/view/61  
ISSN 2806-5638  
Comparison of ARCH-GARCH and stochastic approaches for estimating volatility. Application to a small stock market
Juan Carlos Abril¹ and María de las Mercedes Abril¹

¹ Universidad Nacional de Tucumán and Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET). Av. Independencia 1900, San Miguel de Tucumán, Tucumán, Argentina.
Correspondence: jabril@herrera.unt.edu.ar; mabrilblanco@hotmail.com
Received: December 18, 2024 - Accepted: January 30, 2025 - Published: February 10, 2025
https://doi.org/10.5281/zenodo.14845739
ABSTRACT  
Economic time series often exhibit volatility, where the variance of the observational error fluctuates over time. One of the most widely used methodologies for modeling these dynamics is the ARCH model, introduced by Engle (1982), and its extensions, such as GARCH models. These assume that conditional variance depends on past values of the series. In contrast, stochastic volatility models (SVM), first proposed by Taylor (1980, 1986), assume that volatility depends on past variances but not directly on past returns. This study compares both approaches in modeling the volatility of a small stock market. The objective is to evaluate the performance of ARCH-GARCH and stochastic volatility models in estimating market risk and identifying volatility patterns in the Merval index, which represents the Buenos Aires Stock Exchange (BASE). A longitudinal observational study was conducted using daily Merval index data from January 13, 2003, to May 22, 2015, covering 3006 observations. This period was chosen to avoid political shifts that could introduce market distortions. Statistical tests (ADF, Phillips-Perron) were performed to check stationarity, and models were estimated using maximum likelihood and Kalman filtering. GARCH models with heavy-tailed distributions provided better short-term volatility predictions, capturing volatility clustering, while stochastic volatility models were more effective at identifying regime shifts. The companies listed on the BASE have an average market value of $312 million, which confirms the characteristics of a small stock market, where volatility models play a crucial role in risk assessment. The choice between ARCH-GARCH and stochastic models depends on the forecasting horizon. GARCH models are optimal for short-term risk evaluation, whereas stochastic models are better suited for detecting long-term structural changes. Combining both approaches enhances volatility modeling in low-liquidity markets.

Keywords: Volatility, ARCH-GARCH models, State space models, Kalman filter, Stochastic volatility, Merval index.
1 INTRODUCTION
The study of the phenomenon of volatility has been developed mainly from the analysis of time series related to the economy. However, it must be emphasized that any time series may be subject to the presence of volatility.

Many economic time series do not have a constant mean, and in practical situations we often see that the variance of the observational error, conditional on past knowledge, is subject to substantial variability over time. This phenomenon is known as volatility.
To take into account the presence of volatility in an economic series it is necessary to resort to models known as conditional heteroscedastic models. In these models, the variance of a series at a given moment of time depends on past information and other data available up to that time, so a conditional variance must be defined, which is not constant and does not coincide with the overall variance of the observed series.

An important characteristic of financial time series is that they are not generally serially correlated, but rather dependent. Thus, linear models such as those belonging to the ARMA model family may not be appropriate to describe these series.
There is a very large variety of non-linear models in the literature, useful for the analysis of economic time series with volatility. An important class of them are the ARCH-type models introduced by Engle (1982) and their extensions. These models are non-linear with respect to the variance.
A cursory inspection of series such as the one presented in this paper suggests that they do not have a constant mean and variance. A stochastic variable in which the variance is constant is said to be homoscedastic, as opposed to a heteroscedastic variable. For those series in which there is volatility, the unconditional variance may be constant even though the conditional variance in some periods is unusually large and in others small.

As an application, the Merval index series is analyzed. The Merval is a stock market index that has been calculated in the Buenos Aires Stock Exchange (BASE), Argentina, since June 30, 1986. It measures the traded volume of the main shares listed on that exchange. The index is composed of a fixed nominal amount of shares of different listed companies, commonly known as "leading companies". The shares that make up the Merval index change every three (3) months, when this portfolio is recalculated, based on the participation in the traded volume and the number of operations of the last six (6) months. Those shares that are within the accumulated 80% of market participation are selected. In addition, the selected companies must meet the requirement of having traded in at least 80% of the trading sessions of the period considered.

The Buenos Aires Stock Exchange was founded on July 10, 1854. It is the largest stock exchange and the main business and financial centre of the Argentine Republic. Its transactions are basically shares of important national and foreign companies, bonds, currencies and futures contracts. It is a non-profit civil association run by representatives of the various business sectors. According to a study carried out by the International Finance Corporation, which is part of the World Bank, the average value of the companies listed on the BASE is 312 million dollars, a figure that places Argentina in 30th place among the countries that have stock markets. That is why we say that we are dealing with a small stock market.

As stated, in this paper we analyze the Merval index series. This is a series with information corresponding to all working days of the stock market. Specifically, we work with the returns of the quotes of this index, which consist of the first differences of the logarithm of the Merval levels. The period analyzed goes from January 13, 2003 to May 22, 2015. There are 3,006 observations. It covers a period in which there was no change in the government's affiliation. In fact, during that period the wing of Peronism called Kirchnerism governed. This eliminates the effects that could have been introduced into the market by changes in the governing group.

To perform the analysis we used two approaches: one based on ARCH-GARCH type models, and another based on stochastic volatility models. We then made comparisons between these two approaches.

The ARCH or GARCH family of models assumes that the conditional variance (volatility) depends on past observations. In other words, if σ_t² is the volatility, the ARCH-GARCH family assumes that it depends on the series y_j for j < t. On the other hand, the stochastic volatility model or SVM, first proposed by Taylor (1980, 1986), is not based on this assumption. This model assumes that the volatility σ_t² depends on its past values (σ_j² for j < t) but is independent of the past of the series under analysis (y_j for j < t).

2 ECONOMIC AND FINANCIAL TIME SERIES MODELLING

The basic idea of a time series is very simple: it consists of the recording of any fluctuating quantity measured at different points in time.

Specifically, a time series is a set of observations {y_1, ..., y_n} ordered in time. The basic and general model used to represent any time series is the additive model, given by

y_t = µ_t + γ_t + ε_t,   t = 1, ..., n,   (1)

where µ_t is a component that changes smoothly over time called trend, γ_t is a periodic component called seasonality and ε_t is an irregular component called error. As we can see, the common feature of all records belonging to the time series domain is that they are influenced, at least partially, by sources of random variation.

The main reason for modelling a time series is to enable prediction of its future values. The distinctive feature of a time series model, as opposed to, for example, an econometric model, is that no attempt is made to formulate a behavioural relationship between the time series under consideration and other explanatory variables. Movements of the series are explained solely in terms of its own past, or by its position relative to time or by its structure. Predictions are made by extrapolation.

Many economic time series do not have a constant mean, and in many cases there are periods of relative calm followed by periods of significant changes. Much of the current research in time series and econometrics is focused on extending the classical and commonly used methodology of Box and Jenkins to analyze this type of behaviour. However, there is a characteristic present in time series that refer to financial assets (or directly financial time series) and other series referring to economic activities, and it is what is known as volatility, which can be defined in various ways but is not directly observable. To take into account the presence of volatility groups in a financial or economic series it is necessary to resort to models known as conditional heteroscedastic models. In these models, the variance (or volatility) of a series at a given time depends on its past and other information available up to that time, so a conditional variance must be defined, which is not constant and does not coincide with the global or non-conditional variance of the observed series.
ARCH models, or autoregressive models with conditional heteroscedasticity, were first introduced by Engle in 1982 to estimate the variance of inflation in Britain. The basic idea of this model is that the price of an asset y_t is not serially correlated but depends on past prices via a quadratic function.

In conventional econometric models, the variance of the disturbance is assumed to be constant. However, it can be seen that many economic series exhibit periods of unusually large volatility followed by periods of relative calm. In these circumstances, the assumption of constant variance, also called homoscedasticity, is somewhat inappropriate. There are instances when one wishes to predict the conditional variance of a series. Asset holders may be interested in predictions of the rates of return and their variance for a given period. The unconditional variance (i.e. the long-term variance) would not be important if one plans to buy the asset at time t and sell it at t + 1.

3 THE VOLATILITY

Volatility is defined as the variance of a random variable, conditional on all past information. Since volatility cannot be measured directly, it can manifest itself in various ways in a time series.

Let y_t be the series under study whose dimension is p = 1. We define

µ_t = E(y_t | Y_{t−1}) = E_{t−1}(y_t),   (2)

σ_t² = var(y_t | Y_{t−1}) = E[(y_t − µ_t)² | Y_{t−1}] = E_{t−1}(y_t − µ_t)² = var_{t−1}(y_t),   (3)

as the conditional mean and variance of y_t given the information up to time t − 1 contained in Y_{t−1}, respectively.

A common model to take volatility into account is of the form

y_t = µ_t + √(h_t) ε_t,   (4)

where E_{t−1}(ε_t) = 0, var(ε_t) = 1 and typically the ε_t are independent and identically distributed (iid) with distribution function F. The unconditional mean and variance of y_t will be denoted as µ = E(y_t) and σ² = var(y_t), respectively, and let G be the distribution function of y_t. It is clear that (2), (3) and F determine µ, σ² and G, but not the opposite. More details about this formulation can be seen in Abril M. (2014).

4 MODELS OF THE ARCH-GARCH FAMILY

There is a very large variety of non-linear models available in the literature to deal with volatility, but we will concentrate on the ARCH type models, or autoregressive models with conditional heteroscedasticity, introduced by R. Engle (1982) and their extensions. These models are non-linear as far as the variance is concerned.

In the analysis of non-linear models the errors ε_t (also called innovations, because they represent the new part of the series that cannot be predicted from the past) are generally assumed to be iid and the model has the form

y_t = g(ε_{t−1}, ε_{t−2}, ...) + ε_t h(ε_{t−1}, ε_{t−2}, ...) = g_t + ε_t h_t = µ_t + ε_t h_t,   (5)

where g(·) = g_t = µ_t represents the conditional mean and h²(·) = h_t² is the conditional variance. If g(·) is non-linear, the model is said to be non-linear in the mean; on the other hand, if h(·) is non-linear, the model is said to be non-linear in the variance. For example, the model

y_t = ε_t + α ε²_{t−1}

is non-linear in the mean, since g(·) = α ε²_{t−1} and h(·) = 1, while the model

y_t = ε_t √(α y²_{t−1})

is non-linear in the variance, since g(·) = 0 and h(·) = √(α y²_{t−1}), and y_{t−1} depends on ε_{t−1}.

An ARCH(q) model can be expressed as

y_t = µ_t + ε_t h_t = µ_t + ε_t σ_t,   ε_t ∼ iid D(0, 1),   (6)

h_t² = σ_t² = ω + Σ_{i=1}^{q} α_i z²_{t−i},   (7)

where z_t = y_t − µ_t and D(·) is a probability density function with mean equal to zero and unit variance.

An ARCH model adequately describes volatility clustering. The conditional variance of y_t is an increasing function of the square of the shock occurring at time t − 1. Consequently, if y_{t−1} is large enough in absolute value, σ_t², and thus y_t, are expected to be large enough in absolute value as well. It should be noted that even though the conditional variance in an ARCH-type model varies over time, i.e., σ_t² = E(z_t² | Y_{t−1}), the unconditional variance of z_t is constant and, since ω > 0 and Σ_{i=1}^{q} α_i < 1, we have

σ² ≡ E[E(z_t² | Y_{t−1})] = ω / (1 − Σ_{i=1}^{q} α_i).   (8)

If ε_t is normally distributed, then E(ε_t³) = 0 and E(ε_t⁴) = 3. Therefore, E(z_t³) = 0 and the skewness of the variable z_t will be equal to zero. Thus, the kurtosis coefficient for an ARCH(1) is 3(1 − α_1²)/(1 − 3α_1²) if α_1 < √(1/3) ≈ 0.577. In this case, the conditional distribution of any series will have heavy tails if α_1 > 0.
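To make the clustering and the moment formulas in (7)-(8) concrete, the short simulation below (a sketch of ours, not part of the original analysis; Python with NumPy and the parameter values ω = 0.2, α_1 = 0.3 are arbitrary choices) generates an ARCH(1) series and compares its sample variance and kurtosis with ω/(1 − α_1) and 3(1 − α_1²)/(1 − 3α_1²).

```python
# Illustrative sketch: simulate an ARCH(1) with normal innovations and check
# the unconditional variance (8) and the kurtosis formula quoted above.
import numpy as np

rng = np.random.default_rng(0)
n, omega, alpha1 = 200_000, 0.2, 0.3      # alpha1 < sqrt(1/3), so the kurtosis is finite

z = np.empty(n)
sigma2 = np.empty(n)
sigma2[0] = omega / (1.0 - alpha1)        # start at the unconditional variance
z[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = omega + alpha1 * z[t - 1] ** 2   # conditional variance, eq. (7) with q = 1
    z[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

print("sample variance    :", z.var())
print("omega/(1 - alpha1) :", omega / (1.0 - alpha1))
print("sample kurtosis    :", np.mean(z ** 4) / z.var() ** 2)
print("3(1-a^2)/(1-3a^2)  :", 3 * (1 - alpha1 ** 2) / (1 - 3 * alpha1 ** 2))
```

The simulated series shows quiet stretches interrupted by bursts of large values, which is exactly the clustering behaviour that the ARCH specification is meant to capture.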
In most practical applications, excess kurtosis in an ARCH model means that a normal distribution is not adequate to explain the process generating the data. Therefore, we can make use of other distributions. For example, we can assume that ε_t follows a Student t distribution with mean 0, variance equal to 1 and υ degrees of freedom, that is, ε_t is ST(0, 1, υ). In this case, the unconditional kurtosis for the ARCH(1) is λ(1 − α_1²)/(1 − λα_1²), where λ = 3(υ − 2)/(υ − 4). Due to the additional coefficient υ, the ARCH(1) model based on a t distribution will have heavier tails than the one based on a normal distribution, which will be very useful when analyzing the data in our study. It is important to note that other distributions
for performing the analysis are available in some software packages.

The calculation of σ_t² in (7) depends on the past unobserved quadratic residuals z_t², for t = 0, −1, ..., −q + 1. To initialize the process, the unobserved quadratic residuals are set to a value equal to the sample mean of the observed ones.

The conditional mean µ_t can contain n_1 explanatory variables, which are specified as follows:

µ_t = µ + Σ_{i=1}^{n_1} δ_i x_{i,t}.   (9)

On the other hand, n_2 explanatory variables can be included in the conditional variance given in (7), as follows:

ω_t = ω + Σ_{i=1}^{n_2} ω_i x_{i,t},   (10)

where the x_{i,t} of (9) are not necessarily the same as those appearing in (10).

σ_t² must obviously be positive for all t. The sufficient conditions that ensure that the conditional variance is positive in (7) are given by ω > 0 and α_i ≥ 0 for all i. Furthermore, when explanatory variables enter the ARCH specification, these positivity restrictions no longer hold, although we still require that the conditional variance be non-negative.

A very simple device for reducing the number of parameters to be estimated, which we will not develop in this paper, is called variance orientation and was first developed by Engle and Mezrich (1996).

The conditional variance in an ARCH model, and in most of its generalizations, can be expressed in terms of the unconditional variance and other parameters. By this means, it is possible to reparameterize the model using the unconditional variance and to replace it by a consistent estimator before maximizing the likelihood.

Applying variance orientation to an ARCH model involves replacing ω by σ²(1 − Σ_{i=1}^{q} α_i), where σ² is the unconditional variance of y_t, which can be consistently estimated by its sample counterpart.

If explanatory variables appear in the ARCH equation, then ω is replaced by σ²(1 − Σ_{i=1}^{q} α_i) − Σ_{i=1}^{n_2} ω_i x̄_i, where x̄_i is the sample average of the variable x_{i,t}, assuming that there is stationarity in the n_2 explanatory variables. In other words, the explanatory variables are centered.

While Engle (1982) certainly made the major contribution to financial econometrics, ARCH-type models are rarely used in practice due to their simplicity. A good generalization of these models is found in the GARCH-type models introduced by Bollerslev (1986). These models are also a weighted average of past squared residuals, are more parsimonious than ARCH-type models and, even in their simplest form, have proven to be extremely successful in predicting conditional variances.

It should be noted that GARCH-type models are not the only extension of ARCH-type models, and there are at least twelve specifications related to them that will be the subject of future research.

The generalized ARCH models (or GARCH models, as they are also known) are based on an infinite ARCH specification and allow reducing the number of parameters to be estimated by imposing non-linear restrictions on them. The GARCH(p, q) model is expressed as follows:

σ_t² = ω + Σ_{i=1}^{q} α_i z²_{t−i} + Σ_{j=1}^{p} β_j σ²_{t−j}.   (11)

Using the lag operator L, the GARCH(p, q) model is transformed into

σ_t² = ω + α(L) z_t² + β(L) σ_t²,

where α(L) = α_1 L + α_2 L² + ⋯ + α_q L^q and β(L) = β_1 L + β_2 L² + ⋯ + β_p L^p.

If all the roots of the polynomial |1 − β(L)| = 0 lie outside the unit circle we have

σ_t² = ω |1 − β(L)|⁻¹ + α(L) |1 − β(L)|⁻¹ z_t²,   (12)

which can be seen as an ARCH(∞) model, since the conditional variance depends linearly on all previous quadratic residuals. In this case, the conditional variance of y_t can be larger than the unconditional variance given by

σ² ≡ E(z_t²) = ω / (1 − Σ_{i=1}^{q} α_i − Σ_{j=1}^{p} β_j),

if past realizations of z_t² are greater than σ² (Palm, 1996).

Applying the variance orientation procedure to a GARCH model involves replacing ω by σ²(1 − Σ_{i=1}^{q} α_i − Σ_{j=1}^{p} β_j), where σ² is the unconditional variance of z_t, which can be consistently estimated by means of its sample counterpart.

On the other hand, if explanatory variables appear in a GARCH-type formulation, ω is then replaced by σ²(1 − Σ_{i=1}^{q} α_i − Σ_{j=1}^{p} β_j) − Σ_{i=1}^{n_2} ω_i x̄_i, where x̄_i is the sample mean of the variable x_{i,t}, assuming stationarity of the n_2 explanatory variables.

Bollerslev (1986) showed that for a GARCH(1, 1) with normal innovations the kurtosis of y_t is 3[1 − (α_1 + β_1)²] / [1 − (α_1 + β_1)² − 2α_1²] > 3. The autocorrelations of z_t² were also derived by Bollerslev (1986). For a stationary GARCH(1, 1),

ρ_1 = α_1 + α_1² β_1 / (1 − 2α_1 β_1 − β_1²),
ρ_k = (α_1 + β_1)^{k−1} ρ_1,   for all k = 2, 3, ....

In other words, the autocorrelations decline exponentially with a decline factor equal to α_1 + β_1.
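The following sketch (again ours, with illustrative parameter values rather than estimates from the paper) simulates a GARCH(1, 1) with normal innovations and checks the sample kurtosis and the first autocorrelations of z_t² against the Bollerslev (1986) expressions quoted above.

```python
# Illustrative sketch: GARCH(1,1) simulation and comparison with the
# theoretical kurtosis and autocorrelations of the squared series.
import numpy as np

rng = np.random.default_rng(1)
n, omega, a1, b1 = 500_000, 0.05, 0.05, 0.90

z = np.empty(n)
s2 = np.empty(n)
s2[0] = omega / (1.0 - a1 - b1)
z[0] = np.sqrt(s2[0]) * rng.standard_normal()
for t in range(1, n):
    s2[t] = omega + a1 * z[t - 1] ** 2 + b1 * s2[t - 1]   # eq. (11) with p = q = 1
    z[t] = np.sqrt(s2[t]) * rng.standard_normal()

kurt_theory = 3 * (1 - (a1 + b1) ** 2) / (1 - (a1 + b1) ** 2 - 2 * a1 ** 2)
rho1 = a1 + a1 ** 2 * b1 / (1 - 2 * a1 * b1 - b1 ** 2)

def acf(x, k):
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

print("unconditional variance:", z.var(), "vs", omega / (1 - a1 - b1))
print("kurtosis              :", np.mean(z ** 4) / z.var() ** 2, "vs", kurt_theory)
print("acf of z^2, lags 1..3 :", [round(acf(z ** 2, k), 3) for k in (1, 2, 3)],
      "vs", [round(rho1 * (a1 + b1) ** (k - 1), 3) for k in (1, 2, 3)])
```

With α_1 + β_1 close to one, as is typical for daily return series, the autocorrelations of z_t² die out very slowly, which is the persistence that the empirical section will exploit.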
As in the case of ARCH models, it is necessary to impose some restrictions on σ_t² to ensure that it is positive for all t. Bollerslev (1986) showed that ensuring that ω > 0, α_i ≥ 0
(for i = 1, ..., q) and β_j ≥ 0 (for j = 1, ..., p) are sufficient to ensure that the conditional variance is positive. In practical situations, the parameters in a GARCH-type model are estimated without using positivity restrictions. Nelson and Cao (1992) stated that imposing the condition that all coefficients are non-negative is too restrictive and that some of them may, in practical situations, be negative while the conditional variance remains positive (reviewing a good number of real-life situations). They consequently relaxed this condition and established sufficient conditions for the GARCH(1, q) and GARCH(2, q) cases based on the infinite representation given in (12). Indeed, we can see that the conditional variance is strictly positive provided ω |1 − β(L)|⁻¹ is positive and all coefficients of the polynomial α(L) |1 − β(L)|⁻¹ in (12) are nonnegative. The positivity restrictions proposed by Bollerslev (1986) can be fixed during the estimation. If not, they, as well as those imposed by the ARCH(∞) representation, will be tested a posteriori if there is no explanatory variable in the conditional variance equation.

4.1 Distributions and estimation

There are basically three methods for estimating ARCH-GARCH type models:

1. The most commonly used one, and the one we will work with in this case, is the standard estimation by maximum likelihood. This uses the quasi-Newton method developed by Broyden, Fletcher, Goldfarb and Shanno (BFGS, according to its acronym).

2. An optimization technique that implements sequential quadratic programming to maximize a nonlinear function subject to nonlinear constraints, similar to Algorithm 18.7 in Nocedal and Wright (1999). This is particularly useful when imposing positivity or stationarity constraints such as α_1 > 0 in an ARCH model.

3. And finally, a simulation algorithm to optimize non-smooth functions with multiple possible local maxima.

In our case, the estimation in ARCH-GARCH type models is carried out using the quasi-maximum likelihood method, so it is necessary to make an additional assumption about the innovation process ε_t, that is, to choose the density function D(0, 1) that has a mean equal to zero and a unitary variance.

Weiss (1986) and Bollerslev and Wooldridge (1992) showed that under the assumption of normality, the quasi-maximum likelihood estimator (or QML, according to its acronym) is consistent if the conditional mean and the conditional variance are correctly specified. This estimator is, however, inefficient, with the degree of inefficiency increasing as the true distribution deviates from normality (Engle and González-Rivera, 1991).

As stated by Palm (1996), Pagan (1996) and Bollerslev, Chou and Kroner (1992), the use of heavy-tailed distributions is widespread in the literature. Bollerslev (1987), Hsieh (1989), Baillie and Bollerslev (1989) and Palm and Vlaar (1997), among others, showed that these distributions perform better when capturing higher order kurtosis.

For our problem, we will consider three distributions when approaching the estimation process: the normal distribution, the Student t distribution and the skewed-Student distribution.

The logic of the maximum likelihood method is to interpret the density as a function of the set of parameters, conditional on the set of sample observations. This function is called the likelihood function. It is evident from (8) that the recursive evaluation of this function is conditional on the observed values. For this reason, we will consider the approximate or conditional maximum likelihood and not the exact maximum likelihood.

The logarithm of the likelihood function for the standard normal distribution is given by

l_norm = −(1/2) Σ_{t=1}^{n} [log(2π) + log σ_t² + ε_t²],   (13)

where n is the number of observations.

For a Student t distribution this function is

l_Stud = n [log Γ((ν + 1)/2) − log Γ(ν/2) − (1/2) log(π(ν − 2))] − (1/2) Σ_{t=1}^{n} [log σ_t² + (1 + ν) log(1 + ε_t²/(ν − 2))],   (14)

where ν are the degrees of freedom, with 2 < ν ≤ ∞, and Γ(·) is the gamma function.

For a skewed Student distribution (with mean zero and variance one) this function is

l_SkSt = n {log Γ((η + 1)/2) − log Γ(η/2) − (1/2) log(π(η − 2)) + log[2/(ξ + 1/ξ)] + log(s)} − (1/2) Σ_{t=1}^{n} {log σ_t² + (1 + η) log[1 + ((s ε_t + m)²/(η − 2)) ξ^{−2 I_t}]},   (15)

where

I_t = 1 if ε_t ≥ −m/s,   I_t = −1 if ε_t < −m/s,

ξ is the asymmetry parameter, η are the degrees of freedom of the distribution,

m = [Γ((η − 1)/2) √(η − 2) / (√π Γ(η/2))] (ξ − 1/ξ),

and

s = √(ξ² + 1/ξ² − 1 − m²).

It is important to note that this last function is the one that will be used when analyzing our data set.

There are other distributions that can be used to carry out the estimation process, such as the generalized error distribution or GED, but we will not develop them in our work since they are not objects of our research.
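As an illustration of the conditional likelihood approach, the sketch below implements the Gaussian log-likelihood (13) for a GARCH(1, 1) with constant mean and maximizes it with a quasi-Newton routine, in the spirit of method 1 above. It is not the authors' code nor G@RCH; Python with NumPy and SciPy is assumed, starting values are arbitrary, and the heavier-tailed likelihoods (14)-(15) could be substituted in the same way.

```python
# Sketch of conditional (quasi-)maximum likelihood for a GARCH(1,1) with a
# constant mean, using the Gaussian log-likelihood (13).
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, y):
    mu, omega, alpha, beta = params
    z = y - mu                                   # z_t = y_t - mu_t
    n = z.size
    s2 = np.empty(n)
    s2[0] = z.var()                              # initialize at the sample variance
    for t in range(1, n):
        s2[t] = omega + alpha * z[t - 1] ** 2 + beta * s2[t - 1]
    eps2 = z ** 2 / s2                           # squared standardized residuals
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(s2) + eps2)   # minus (13)

def fit_garch11(y):
    start = np.array([y.mean(), 0.1 * y.var(), 0.05, 0.85])
    # L-BFGS-B: a quasi-Newton method that also accepts the positivity bounds
    res = minimize(garch11_negloglik, start, args=(y,), method="L-BFGS-B",
                   bounds=[(None, None), (1e-12, None), (0.0, 1.0), (0.0, 1.0)])
    return res.x                                 # (mu, omega, alpha, beta)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # simulate a GARCH(1,1) just to exercise the estimator
    n, omega, a, b = 4000, 0.05, 0.08, 0.90
    z, s2 = np.empty(n), np.empty(n)
    s2[0] = omega / (1 - a - b)
    z[0] = np.sqrt(s2[0]) * rng.standard_normal()
    for t in range(1, n):
        s2[t] = omega + a * z[t - 1] ** 2 + b * s2[t - 1]
        z[t] = np.sqrt(s2[t]) * rng.standard_normal()
    print(fit_garch11(z))
```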
In terms of the estimation process, we can say that many authors have proposed using a Student t distribution or a skewed
Student distribution in combination with a GARCH type model  
to adequately model the heavy tails in economic or financial time  
series whose data are of high frequency, which will be seen later.  
5 MODELS FOR STOCHASTIC VOLATILITY

We will say that the series y_t follows a stochastic volatility model (SVM) if

y_t = σ_t ε_t,   (16)

σ_t = e^{h_t/2},   (17)

where ε_t is a stationary series with mean equal to zero and variance one, and h_t is another stationary series with probability density given by a function f(h).

As we can see in (17), h_t is not equal to the volatility σ_t², as is usually the notation in the ARCH-GARCH family of models.

The simplest formulation of the model assumes that the logarithm of volatility is given by

h_t = α_0 + α_1 h_{t−1} + η_t,   (18)

where η_t is a stationary, Gaussian series, with mean zero, variance σ_η² and independent of ε_t. It follows from this that we must have |α_1| < 1.

5.1 Other SVM formulations

Other SVM formulations can be found in the literature, among which we highlight the following:

1. Canonical form of Kim, Shephard and Chib (1998). In this case the SVM is written as

y_t = β e^{h_t/2} ε_t,   (19)

h_t = µ + α_1(h_{t−1} − µ) + σ_η η_t,   (20)

with

h_t ∼ N(µ, σ_η²/(1 − α_1²)),   (21)

where ε_t and η_t are both N(0, 1) and independent of each other. If β = 1, then µ = 0.

2. The Jacquier, Polson and Rossi (1994) form of the SVM is

y_t = √(h_t) ε_t,   (22)

log(h_t) = α_0 + α_1 log(h_{t−1}) + σ_η η_t,   (23)

where ε_t and η_t are both N(0, 1) and independent of each other.

5.2 Properties of SVM

Let us return to the model defined in the equations (16), (17) and (18). Suppose that {ε_t} constitutes a sequence of independent random variables such that ε_t ∼ N(0, 1); then log(ε_t²) has a distribution called "log chi square", such that

E{log(ε_t²)} ≈ −1.27,   (24)

var{log(ε_t²)} = π²/2.   (25)

From (16), (17) and (18) we get

log(y_t²) = log(σ_t²) + log(ε_t²),   (26)

h_t = log(σ_t²) = α_0 + α_1 h_{t−1} + η_t.   (27)

Calling ξ_t = log(ε_t²) − E{log(ε_t²)} ≈ log(ε_t²) + 1.27, we have that E(ξ_t) = 0, var(ξ_t) = π²/2 and

log(y_t²) = −1.27 + h_t + ξ_t,   ξ_t ∼ iid (0, π²/2),   (28)

h_t = α_0 + α_1 h_{t−1} + η_t,   η_t ∼ iid N(0, σ_η²),   (29)

where iid means that the variables are independent and identically distributed. It is also assumed that ξ_t and η_t are independent of each other at all times.

From the equations (16), (17) and (18) let us calculate some moments of the SVM. Taking expectation of (16) we have

E(y_t) = E(σ_t ε_t) = E(σ_t) E(ε_t) = 0,   (30)

given that σ_t and ε_t are independent.

The variance of y_t is

var(y_t) = E(y_t²) = E(σ_t² ε_t²) = E(σ_t²) E(ε_t²) = E(σ_t²).   (31)

Since we assume that η_t ∼ N(0, σ_η²) and that h_t is stationary with

E(h_t) = α_0/(1 − α_1) = µ_h,   (32)

and with

var(h_t) = σ_η²/(1 − α_1²) = σ_h²,   (33)

we have that

h_t ∼ N(α_0/(1 − α_1), σ_η²/(1 − α_1²)).   (34)

Since h_t is normal or Gaussian, σ_t² = e^{h_t} is log-normal, and then we have

var(y_t) = E(y_t²) = E(σ_t²) = e^{µ_h + σ_h²/2}.   (35)

It is not difficult to show that

E(y_t⁴) = 3 e^{2µ_h + 2σ_h²},   (36)

from which we obtain that the kurtosis of y_t is

K(y_t) = 3 e^{2µ_h + 2σ_h²} / e^{2µ_h + σ_h²} = 3 e^{σ_h²} > 3,   (37)

as expected; that is, there are heavy tails for the SVM.

The autocovariance function of the series y_t is given by

γ_y(s) = E(y_t y_{t+s}) = E(σ_t σ_{t+s} ε_t ε_{t+s}) = 0,   (38)
since ε_t and η_t are independent. Then y_t is serially uncorrelated but not independent, since there is correlation in log(y_t²). Let us denote z_t = log(y_t²); then the autocovariance function of the series z_t is given by

γ_z(s) = E[(z_t − E(z_t))(z_{t+s} − E(z_{t+s}))].   (39)

As the first term in parentheses of (39) is equal to h_t − E(h_t) + ξ_t and h_t is independent of ξ_t, we get

γ_z(s) = E[(h_t − E(h_t) + ξ_t)(h_{t+s} − E(h_{t+s}) + ξ_{t+s})] = E[(h_t − E(h_t))(h_{t+s} − E(h_{t+s}))] + E(ξ_t ξ_{t+s}),   (40)

and calling γ_h(s) and γ_ξ(s) respectively the autocovariances of the two terms on the right hand side of (40), we have

γ_z(s) = γ_h(s) + γ_ξ(s),   (41)

for all s.

As we are assuming that (18) is satisfied, that is, we have an AR(1) model, we get

γ_h(s) = α_1^s σ_η²/(1 − α_1²),   s > 0.   (42)

Besides, γ_ξ(s) = 0 for s > 0. Then γ_z(s) = γ_h(s) for all s ≠ 0. With this we can write the autocorrelation function of z_t = log(y_t²) as

ρ_z(s) = γ_z(s)/γ_z(0) = [α_1^s σ_η²/(1 − α_1²)] / [γ_h(0) + γ_ξ(0)],   s > 0,   (43)

from which we get

ρ_z(s) = α_1^s / [1 + π²/(2σ_h²)],   s > 0,

which tends to zero exponentially from the lag s = 2, and this indicates that z_t = log(y_t²) can be modeled using an AR(1) model.

In practice we obtain values of α_1 close to one, which implies the appearance of high correlations for the volatility and, consequently, of volatility clusters in the series.

A general SVM can be obtained if an AR(p) model is admitted for h_t, that is

y_t = σ_t ε_t,   (44)

σ_t = e^{h_t/2},   (45)

(1 − α_1 B − α_2 B² − ⋯ − α_p B^p) h_t = α_0 + η_t,   (46)

where the lag operator is defined as B^j h_t = h_{t−j}, the assumptions about the innovations ε_t and η_t are the same as those made previously, but now we assume that the roots of the polynomial (1 − α_1 B − α_2 B² − ⋯ − α_p B^p) are outside the unit circle.

The SVM have been extended to include the fact that volatility has long memory, in the sense that the autocorrelation function of z_t = log(y_t²) decays slowly, although, as we saw in this case, the y_t have no serial correlation.
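The sketch below (ours; the parameter values are arbitrary) simulates the SVM (16)-(18) and checks two of the properties just derived: the kurtosis 3e^{σ_h²} of (37) and the autocorrelation function of log(y_t²) implied by (43). Python with NumPy is assumed.

```python
# Illustrative sketch: simulate an SVM and verify its kurtosis and the
# autocorrelations of log(y_t^2).
import numpy as np

rng = np.random.default_rng(3)
n, a0, a1, sig_eta = 400_000, -0.5, 0.95, 0.25

h = np.empty(n)
h[0] = a0 / (1 - a1)
for t in range(1, n):
    h[t] = a0 + a1 * h[t - 1] + sig_eta * rng.standard_normal()   # eq. (18)
y = np.exp(h / 2) * rng.standard_normal(n)                        # eqs. (16)-(17)

sig_h2 = sig_eta ** 2 / (1 - a1 ** 2)                             # eq. (33)
print("kurtosis:", np.mean(y ** 4) / np.var(y) ** 2, "vs", 3 * np.exp(sig_h2))

z = np.log(y ** 2)
def acf(x, k):
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)
rho_theory = lambda s: a1 ** s / (1 + np.pi ** 2 / (2 * sig_h2))
print("acf of log(y^2), lags 1, 5, 10:",
      [round(acf(z, k), 3) for k in (1, 5, 10)],
      "vs", [round(rho_theory(k), 3) for k in (1, 5, 10)])
```

With α_1 = 0.95 the autocorrelations of log(y_t²) start small but decay slowly, which is the pattern associated with the volatility clusters mentioned above.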
5.3 Estimation of the SVM

SVM are difficult to estimate. We can use the approach proposed by Durbin and Koopman (1997a, 1997b, 2000, 2001, 2012), which consists of using a quasi-maximum likelihood procedure by means of the Kalman filter and smoother. In this case, the model defined in the equations (16), (17) and (18) can be re-expressed in the form

y_t = σ ε_t e^{h_t/2},   (47)

σ_t = σ e^{h_t/2},   (48)

h_t = α_1 h_{t−1} + η_t,   (49)

where σ = exp(α_0/2) is a scale factor, α_1 is a parameter, and η_t is a disturbance term which in the simplest model is uncorrelated with ε_t. Literature reviews of this model were carried out by Shephard (1996, 2005) and Ghysels, Harvey and Renault (1996). This SVM has two main attractions. The first is that it is a discrete-time natural (Euler) analogue of the continuous-time model used in option pricing work, such as that of Hull and White (1987). The second is that its statistical properties are easy to determine. The disadvantage with respect to conditional variance models of the GARCH type is that likelihood-based estimation can only be performed by computationally intensive techniques such as those described in Kim, Shephard and Chib (1998) and Sandmann and Koopman (1998). However, a quasi-maximum likelihood method is relatively easy to implement and is usually reasonably efficient. The method is based on writing (47), (48) and (49) in the following equivalent form:

log y_t² = κ + h_t + ξ_t,   (50)

h_t = α_1 h_{t−1} + η_t,   (51)

where ξ_t = log ε_t² − E{log ε_t²} and κ = log σ² + E{log ε_t²}.

The equations (50) and (51) are expressed in state space form, as can be seen in Abril and Abril (2018). The formula (50) is called the observation equation or measurement equation, and the formula (51) is called the state equation or transition equation. The estimation process is therefore carried out using the Kalman filter and smoother in the same way as developed in the previous reference.
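A minimal sketch of this quasi-maximum likelihood scheme follows: the state space (50)-(51) is run through a scalar Kalman filter, ξ_t is treated as if it were Gaussian with variance π²/2, and (κ, α_1, σ_η²) are obtained by maximizing the resulting prediction-error likelihood. This is our own illustration, not STAMP or the authors' code; NumPy and SciPy are assumed, and the starting values are arbitrary.

```python
# Sketch of QML estimation of the SVM via the Kalman filter on (50)-(51).
import numpy as np
from scipy.optimize import minimize

VAR_XI = np.pi ** 2 / 2          # variance of xi_t = log(eps_t^2) - E{log(eps_t^2)}

def kalman_negloglik(params, x):
    """x is the observed series log(y_t^2)."""
    kappa, alpha1, log_var_eta = params
    var_eta = np.exp(log_var_eta)                 # keep sigma_eta^2 positive
    a = 0.0                                       # predicted mean of h_t
    p = var_eta / (1.0 - alpha1 ** 2) if abs(alpha1) < 1 else 1e6
    negll = 0.0
    for obs in x:
        f = p + VAR_XI                            # prediction-error variance
        v = obs - kappa - a                       # prediction error
        negll += 0.5 * (np.log(2 * np.pi) + np.log(f) + v ** 2 / f)
        a_filt = a + p * v / f                    # filtering step
        p_filt = p - p ** 2 / f
        a = alpha1 * a_filt                       # one-step prediction via (51)
        p = alpha1 ** 2 * p_filt + var_eta
    return negll

def fit_svm_qml(y):
    x = np.log(y ** 2)
    start = np.array([x.mean(), 0.9, np.log(0.1)])
    res = minimize(kalman_negloglik, start, args=(x,), method="Nelder-Mead",
                   options={"maxiter": 5000})
    kappa, alpha1, log_var_eta = res.x
    return kappa, alpha1, np.exp(log_var_eta)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n, a1, sig_eta, sigma = 3000, 0.95, 0.25, 0.01
    h = np.zeros(n)
    for t in range(1, n):
        h[t] = a1 * h[t - 1] + sig_eta * rng.standard_normal()
    y = sigma * np.exp(h / 2) * rng.standard_normal(n)     # eq. (47)
    print(fit_svm_qml(y))   # rough estimates of (kappa, alpha1, sigma_eta^2)
```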
It is important to make some observations here:

1. When α_1 in (51) is close to 1, the fit of an SVM is similar to that of a GARCH(1, 1) model with the sum of its coefficients close to 1.

2. When α_1 = 1 in (51), h_t is a random walk and the fit of an SVM is similar to that of an IGARCH(1, 1) model.

3. When some observations are equal to zero, which can occur in practice, the logarithmic transformation specified in (50) cannot be performed. One way to avoid this problem is to subtract the overall mean of the series y_t from each of the observations and take this result as the series to work on;
that is, taking as a working series

y_t − ȳ,   t = 1, ..., n,   (52)

where ȳ = n⁻¹ Σ_{t=1}^{n} y_t. Another solution, suggested by Wayne Fuller and analyzed by Breidt and Carriquiry (1996), is to make the following transformation based on a Taylor expansion:

log ỹ_t² = log(y_t² + c S_y²) − c S_y² / (y_t² + c S_y²),   t = 1, ..., n,   (53)

where S_y² is the sample variance of the series y_t and c is a small number. Versions prior to 8.3 of the STAMP program developed by Koopman, Harvey, Doornik, and Shephard incorporated the transformation defined in (53) with c = 0.02 as a pre-specified operation that could be used if needed. Starting with version 8.3 of that program (see Koopman, Harvey, Doornik, & Shephard, 2010), that transformation is no longer a pre-specified element, and that or other transformations, such as the one defined in (52), can be performed by using the calculator or the Algebra facility within the program, according to the user's requirements and needs.
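The two devices can be written down in a few lines; the following helper (ours, not STAMP's) applies either the demeaning in (52) or the transformation (53) with the pre-specified value c = 0.02 mentioned above.

```python
# Sketch: prepare log(y_t^2) when the series may contain zeros.
import numpy as np

def log_squared(y, c=0.02, demean=False):
    """Return a usable log(y_t^2) series.

    demean=True applies (52): work with y_t - ybar.
    Otherwise the Fuller / Breidt-Carriquiry transformation (53) is used.
    """
    y = np.asarray(y, dtype=float)
    if demean:
        y = y - y.mean()
        return np.log(y ** 2)
    s2 = y.var()                                   # sample variance S_y^2
    return np.log(y ** 2 + c * s2) - c * s2 / (y ** 2 + c * s2)
```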
As shown in Harvey, Ruiz and Shephard (1994), the state space form given by the equations (50) and (51) provides the basis for quasi-maximum likelihood estimation via the Kalman filter and smoother, and it also allows constructing smoothed estimates of the variance component h_t and making predictions. One of the attractions of the quasi-maximum likelihood approach is that it can be applied without an assumption about a particular distribution for ε_t. Another attraction of using a quasi-maximum likelihood procedure based on the Kalman filter and smoother to estimate SVM is that it can be carried out directly using standard computing packages such as STAMP by Koopman, Harvey, Doornik, and Shephard (2010). This is a major advantage compared to more labor-intensive simulation-based methods.

Shephard and Pitt (1997) proposed the use of importance sampling to estimate the likelihood function in the non-Gaussian case.

Since the SVM is a hierarchical model, Jacquier, Polson, and Rossi (1994) proposed a Bayesian analysis of the model. See also Shephard and Pitt (1997) and Kim, Shephard, and Chib (1998). An overview of the SVM estimation problem is provided by Motta (2001).

5.4 Series with errors following a SVM with structural components

The basic SVM given in (47), (48) and (49) captures only the salient features of changing conditional heteroscedasticity in a time series. In some cases the model is more accurate when the series y_t is modeled by incorporating structural components, explanatory variables and other characteristics that explain its behavior, all of this done through a state space scheme with errors that follow an SVM with structural components, for example with seasonality. Based on the above, for a univariate series y_t, this can be formulated as

y_t = Z_t β_t + ν_t,   (54)

β_t = T_t β_{t−1} + R_t ω_t,   ω_t ∼ N(0, Q_t),   t = 1, ..., n,   (55)

with

ν_t = σ e^{h_t/2} ε_t,   (56)

σ_t = σ e^{h_t/2},   (57)

h_t = α_1 h_{t−1} + η_t,   (58)

where β_t is the state vector of order m × 1 and ω_t are serially independent disturbances, independent of each other and independent of ν_t at all times. The system matrices Z_t, T_t, R_t and Q_t have dimensions 1 × m, m × m, m × m and m × m respectively, and if there are unknown elements in them, they are incorporated into the vector ψ of hyperparameters, which is estimated by maximum likelihood. (54) is called the measurement equation or observation equation, and (55) the transition equation or state equation. The equations (54) and (55) define a state space model with all the characteristics and properties presented in Abril and Abril (2018). In effect, there can be trends, seasonality, cycles, explanatory variables and other important characteristics that explain the behavior of the process {y_t}. The equations (56), (57) and (58) define an SVM with structural components (seasonality in this case) for the errors of the state space model given above, where ε_t is a stationary series with mean equal to zero and variance one, and η_t is a stationary, Gaussian series, with mean zero, variance σ_η² and independent of ε_t at all times.

The estimates of the state space models are performed using standard computing packages such as STAMP by Koopman, Harvey, Doornik, and Shephard (2010), which is the one used in this work.

In this case, the volatility is equal to

σ_t² = σ² e^{h_t}.   (59)

The practical treatment in these cases is as follows: given a series {y_t}, the linear components that can explain the behavior of its mean are identified, including the explanatory variables that may correspond, in such a way as to explicitly define the model of the equations (54) and (55). Kalman filtering and then smoothing are performed first, obtaining the smoothed estimator β̂_t of the state vector β_t. This estimator allows us to calculate the smoothed residuals as

ν̂_t = y_t − Z_t β̂_t,   t = 1, ..., n.   (60)

These smoothed residuals estimate the disturbances ν_t. The values of ν̂_t serve as a basis for testing the null hypothesis of lack of serial correlation of ν_t. If this hypothesis is accepted, it could be said that the model given in the equations (54) and (55) was adequately identified, defined and estimated. On the other hand, if log(ν̂_t²) shows serial correlation, it can be said that the errors ν_t follow an SVM of the form given in (56), (57) and (58). Therefore, ν̂_t is taken as the observed series and the following
state space model is estimated:

log ν̂_t² = κ + h_t + ξ_t,   (61)

h_t = α_1 h_{t−1} + η_t,   (62)

where ξ_t = log ε_t² − E{log ε_t²}, κ = log σ² + E{log ε_t²}, ε_t is a stationary series with mean equal to zero and variance one, and η_t is a stationary, Gaussian series, with mean zero, variance σ_η² and independent of ε_t at all times. The process of estimating (61), (62) is done using the Kalman filter and smoother.

As shown in Harvey, Ruiz and Shephard (1994), the state space form given by the equations (61) and (62) provides the basis for quasi-maximum likelihood estimation via the Kalman filter and smoother, and it also allows constructing smoothed estimates of the variance component h_t and making predictions. One of the attractions of the quasi-maximum likelihood approach is that it can be applied without an assumption about a particular distribution for ε_t. Another attraction of using a quasi-maximum likelihood procedure based on the Kalman filter and smoother to estimate SVM is that it can be carried out directly using standard computing packages such as STAMP by Koopman, Harvey, Doornik, and Shephard (2010). This is a major advantage compared to more labor-intensive simulation-based methods.
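A rough sketch of this two-step treatment is given below, using statsmodels' UnobservedComponents (a local level model) as a stand-in for the STAMP estimation of (54)-(55); this substitution is our assumption, not the authors' procedure. Step 1 produces the smoothed residuals of (60); step 2 inspects the autocorrelations of log(ν̂_t²) for the serial correlation that signals SVM-type errors.

```python
# Sketch of the two-step procedure: structural model first, then a check on
# log(nu_hat^2). statsmodels is used here only as an illustration.
import numpy as np
import statsmodels.api as sm

def smoothed_residuals(y):
    """Fit a local-level structural model and return nu_hat_t = y_t - Z_t beta_hat_t."""
    mod = sm.tsa.UnobservedComponents(y, level="local level")
    res = mod.fit(disp=False)
    level_hat = res.smoothed_state[0]          # smoothed level component
    return np.asarray(y) - level_hat

def svm_signature(nu_hat, nlags=10):
    """Autocorrelations of log(nu_hat^2): clearly non-zero values suggest SVM errors."""
    x = np.log(nu_hat ** 2 + 1e-12)            # small offset guards against zeros
    x = x - x.mean()
    denom = np.dot(x, x)
    return [float(np.dot(x[:-k], x[k:]) / denom) for k in range(1, nlags + 1)]
```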
6 ANALYSIS OF THE SERIES UNDER STUDY

The data we handle belong to a very special field within statistical science, which is that of time series. The common characteristic of all records belonging to the domain of time series is that they are influenced, even if only partially, by non-observable components that contain random variations, that is, the occurrence of unplanned events.

As an application, the Merval index series is analyzed. This is a series with information corresponding to all working days of the stock market. Specifically, we work with the returns of the quotes of this index, which consist of the first differences of the logarithm of the Merval levels. The period analyzed goes from January 13, 2003 to May 22, 2015. There are 3006 observations. It covers a period in which there was no change in the government's affiliation. In fact, during that period the wing of Peronism called Kirchnerism governed. This eliminates the effects that could have been introduced in the market by changes in the governing group.

It is important to note that although this is a very long period to analyze, it is possible to carry out a very interesting study in which the main characteristics of the series can be appreciated.

First, we proceed to graph it. Within the study of a series, graphical methods are an excellent way to begin an investigation and then be able to dive into a detailed study of the subject under consideration. Among the functions that tables and graphs perform are the following:

1. They make the data under study more visible, and they systematize and synthesize them.

2. They reveal their variations and their historical or spatial evolution.

3. They can show the relationships between the various elements of a system or process and provide clues to future correlations between two or more variables.

Furthermore, the application of these methods suggests new research hypotheses and allows the subsequent implementation of statistical models ranging from the simplest to those that are much more refined, thus achieving a better analysis of the data and its fluctuations over time.

In Figure 1 the daily series of the Merval from January 13, 2003 to May 22, 2015 is shown. The upper left box shows the levels, the upper right box shows the first differences of the logarithm of the levels, called returns, the lower left box shows the histogram with the distribution of the returns compared to a normal distribution, and the lower right box shows the QQ plot of the returns. As can be seen, the returns are not normal; they have a distribution with some degree of negative asymmetry and with kurtosis. Carrying out a careful inspection of the graph of the series of returns, we can see that there are periods where the volatility is less pronounced than in others, such as the one corresponding to the year 2009.
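The exploratory step can be reproduced along the following lines (a sketch of ours; the file name merval.csv and its close column are hypothetical placeholders for the Merval closing levels, and Python with pandas/NumPy is assumed).

```python
# Sketch: log returns of the Merval levels and their sample autocorrelations.
import numpy as np
import pandas as pd

levels = pd.read_csv("merval.csv", parse_dates=True, index_col=0)["close"]
returns = np.log(levels).diff().dropna()          # first differences of the log levels

print(returns.describe())
print("skewness:", returns.skew(), "excess kurtosis:", returns.kurtosis())

def acf(x, k):
    x = np.asarray(x) - np.mean(x)
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

# lags 5 and 19/20 are the ones singled out in Figures 2 and 3
for k in (1, 5, 19, 20):
    print(f"lag {k}: r(returns) = {acf(returns, k):.3f},"
          f"  r(returns^2) = {acf(returns**2, k):.3f}")
```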
6.1 Analysis using models from the ARCH-GARCH family

We begin the study of the Merval series by focusing first on the models of the ARCH-GARCH family, using the Estimating and Forecasting ARCH Models Using G@RCH 7 package developed by Laurent (2013).

In Figure 2 we observe that, in general, the correlations and partial correlations are close to zero, except for those of order five and nineteen. This can be interpreted as the presence of two periodic components, one that coincides with the working week, which is five days long, and another with the working month, which is almost twenty days long, which leads us to think of an autoregressive model of order 20 for the conditional mean.

In Figure 3 we observe the square of the Merval returns, its distribution compared to a normal one with zero mean and variance (0.00966)², the autocorrelation function of that series and also the partial autocorrelation function of the same series under study. We see that a high-degree ARCH model or a more parsimonious GARCH model may be suitable to be applied.

Different alternatives were tested regarding the modeling of the Merval returns series for the period between January 13, 2003, and May 22, 2015. After analyzing them and comparing the values of different goodness-of-fit statistics, such as the Akaike criterion, the Schwarz criterion, the Shibata criterion, or the Hannan-Quinn criterion, we were left with a model based on the equation (5) of our work, where y_t is the series under study and its explicit specification is given by

y_t = µ_t + ε_t h_t,   (63)

where ε_t is independent with a skewed Student distribution whose degrees of freedom are 5.82627 and whose asymmetry is 0.103452. The conditional mean µ_t is equal to a general mean given by µ plus an autoregressive process of order 19, but with all coefficients equal to zero except those of order 1, 5 and 19,
Figure 1. Exploratory Analysis of the Merval Index (2003-2015)  
Note. The figure presents an exploratory analysis of the Merval index series from January 13, 2003, to May 22, 2015. The top left panel shows the  
index levels, while the top right panel displays the logarithmic first differences (returns). The bottom left panel illustrates the histogram of returns  
compared to a normal distribution, and the bottom right panel presents a QQ plot for assessing normality.  
which is explicitly

µ_t = µ + φ_1 y_{t−1} + φ_5 y_{t−5} + φ_19 y_{t−19} + ν_t,   (64)

where ν_t = ε_t h_t of (63). Furthermore, the conditional variance in (63) is given by

h_t² = σ_t² = ω + α (y_{t−1} − µ_{t−1})² + β σ²_{t−1},   (65)

that is, it is a GARCH(1, 1) model with a constant given by ω. The estimated values of the parameters and their corresponding standard errors for the model formulated in (63), (64) and (65) are shown in Table 1.
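For readers working in Python, a specification analogous to (63)-(65) can be written with the arch package, as sketched below. This is our assumption about a substitute tool; the paper's estimates in Table 1 were obtained with G@RCH 7, and the sketch is not expected to reproduce them exactly. The file merval.csv and its close column are again hypothetical placeholders.

```python
# Sketch: AR(1,5,19) mean + GARCH(1,1) variance with skewed Student errors.
import numpy as np
import pandas as pd
from arch import arch_model

levels = pd.read_csv("merval.csv", parse_dates=True, index_col=0)["close"]
returns = 100 * np.log(levels).diff().dropna()     # per cent log returns help the optimizer

am = arch_model(returns,
                mean="AR", lags=[1, 5, 19],        # only the lags kept in (64)
                vol="GARCH", p=1, q=1,             # eq. (65)
                dist="skewt")                      # skewed Student innovations
res = am.fit(disp="off")
print(res.summary())

fc = res.forecast(horizon=10)                      # cf. the predictions shown in Figure 5
print(fc.mean.iloc[-1])
print(fc.variance.iloc[-1])
```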
In Table 1 we see that the values of the t statistic for testing the null hypothesis that the coefficients φ_1 and φ_5 are equal to zero lead us to accept this hypothesis. Consequently, models were tested in which one of the coefficients was first removed and the other was left, then the removed and retained coefficients were exchanged, and finally both coefficients were removed. In all cases, the goodness-of-fit statistics or information criteria such as Akaike, Shibata, Schwarz and Hannan-Quinn gave worse results than those obtained by including these coefficients. Therefore, it was decided to leave them in the model to be estimated.

In Figure 4 we have, at the top, the conditional variance (volatility) of the Merval returns series for the period between January 13, 2003, and May 22, 2015, and the distribution of the standardized residuals after the adjustment compared with an asymmetric Student distribution with mean zero, variance one, whose degrees of freedom are 5.82627 and whose asymmetry is 0.103452. From here we see that the adjustment is adequate due to the similarity between the distribution of the standardized residuals and the skewed Student distribution.

At the top of Figure 5 the last ten observations of the series under study can be seen in blue, and the corresponding predictions (in red) of the conditional mean. The vertical bars correspond to the 95% confidence intervals that serve to compare the predicted value with the one actually observed. In the lower part, the conditional variance corresponding to the last ten observations of our series under study is predicted. As can be seen, the predicted conditional variance, or volatility, increases smoothly over time, which is reasonable for a small market where there are usually no major changes in the values of the financial assets listed on the stock exchange.

6.2 Analysis using SVM

We continue with the study of the Merval, now using SVM. To perform the analysis and the estimates we use the STAMP computing package by Koopman, Harvey, Doornik, and Shephard (2010). It should be remembered that this program performs the estimates by quasi-maximum likelihood via the Kalman filter and smoother. After analyzing the series (see Figures ?? and ??), and after studying several alternative models, the following
Figure 2. Autocorrelation and Partial Autocorrelation of the Merval Returns (2003-2015)  
Note. The figure illustrates the autocorrelation (top panel) and partial autocorrelation (bottom panel) functions of the Merval returns series for the  
period between January 13, 2003, and May 22, 2015. These functions help identify the time dependence structure of the series.  
Table 1. Estimated Values, Standard Deviations, and t-Statistics for the GARCH(1,1) Model of Merval Returns (2003-2015)

Estimator            Estimated value   Standard deviation   t value   Probability
µ̂                    0.001164          0.000319             3.648     0.0003
φ̂_1                  0.027376          0.018400             1.488     0.1369
φ̂_5                  0.033687          0.017660             1.908     0.0565
φ̂_19                 0.042393          0.017573             2.412     0.0159
ω̂                    0.105309          0.037825             2.784     0.0054
α̂_ARCH               0.092285          0.019477             4.738     0.0000
β̂_GARCH              0.883313          0.025945             34.05     0.0000
Asymmetry            0.103452          0.024290             4.259     0.0000
Degrees of freedom   5.82627           0.61778              9.431     0.0000

Note. This table presents the estimated parameters for the GARCH(1,1) model fitted to the Merval returns series from January 13, 2003, to May 22, 2015. The parameters include the mean (µ̂), the autoregressive coefficients (φ̂), the conditional variance parameters (ω̂, α̂_ARCH, β̂_GARCH), and the asymmetry term. The degrees of freedom estimate corresponds to the assumed residual distribution.
Figure 3. Analysis of the Squared Merval Returns (2003-2015)  
Note. The figure presents an analysis of the squared Merval returns from January 13, 2003, to May 22, 2015. The top left panel shows the squared returns, while the top right panel compares their distribution to a normal one with zero mean and variance (0.000966)². The bottom left panel displays
the autocorrelation function of the squared returns, and the bottom right panel presents the corresponding partial autocorrelation function.  
Figure 4. Conditional Variance and Residual Distribution of Merval Returns (2003-2015)  
Note. The top panel displays the conditional variance (volatility) of the Merval returns series from January 13, 2003, to May 22, 2015. The bottom  
panel shows the distribution of the standardized residuals after the adjustment, compared with a skewed Student-t distribution. The estimated degrees  
of freedom for this distribution are 5.82627, and the asymmetry parameter is 0.103452.  
was decided upon; that is, the following model is estimated:

y_t = µ + θ_1 y_{t-1} + θ_5 y_{t-5} + θ_{19} y_{t-19} + ν_t,   (66)

with

ν_t = σ e^{h_t/2} ε_t,   (67)
σ_t = σ e^{h_t/2},   (68)
h_t = α_1 h_{t-1} + η_t,   (69)

where equations (67), (68) and (69) define a SVM, ε_t is a stationary series with mean equal to zero and variance one, and η_t is a stationary Gaussian series with mean zero and variance σ_η², independent of ε_t at all times.

This part of the study begins by estimating the model given in equation (66). The first thing to be observed is that the Doornik-Hansen normality statistic, whose distribution under the null hypothesis of normality of the errors is a χ² with 2 degrees of freedom, gives a value of 652.86, which is very high and leads to rejecting the null hypothesis. This is not surprising, since there is no Gaussianity (see Figure ??) and volatility is present. The H(994) test for heteroscedasticity, which is distributed as an F(994, 994) under the null hypothesis of homoscedasticity, gives a value of 1.4193, which leads to rejecting that hypothesis and confirms the presence of heteroscedasticity in the series. Finally, the Box-Ljung Q statistic, which in this case is distributed as a χ² with 53 degrees of freedom, gives a value of 74.908, which leads to accepting the hypothesis of lack of serial correlation in the residuals. On the other hand, σ̂²_y = 0.0004 and σ̂²_ν = 0.00039523.

In Table 2 the estimated values of the parameters of equation (66) are shown, together with the respective standard deviations, the values of the t statistic for testing the null hypothesis that the respective parameter is equal to zero, and the tail probabilities corresponding to that test. If these probabilities are less than 0.05, the respective hypothesis is rejected at that level of significance. As can be seen, except for the coefficient θ_1, all the coefficients are significantly different from zero. In the case of the coefficient θ_1, models were tested in which it was eliminated, but the goodness-of-fit criteria (Akaike and others) always gave worse results than those obtained when it was included. Therefore, it was decided to continue working with this specification, which suggests that the adopted model is the appropriate one.

After estimating the model given in (66), its standardized residuals are shown in the upper left part of Figure 6; in the upper right part is the estimated autocorrelation function; in the lower left part is the estimated spectral density; and in the lower right part is the estimated density function, represented by a red line, compared with the normal density function, represented by a green line. In this figure we see that the residuals do not differ significantly from a series without serial correlation and approximately normal. Thus, for example, the estimated autocorrelations are practically within the confidence band, which implies that the respective parameters of a possible model do not differ from zero, and the oscillations of the spectral density are insignificant compared to its scale. The respective statistics also support these assertions.

To study volatility, because the errors ν_t of the model (66) are not observable, they are estimated by the residuals of that model after estimation, denoted ν̂_t. The latter is the series with which we work to estimate everything related to volatility.

To apply the state space scheme and to be able to make the respective estimates of volatility, the series of residuals of the model given in (67), (68) and (69) must be squared and then logarithms must be taken. With this, the model is ready for the state space scheme; that is, the following model is estimated:

log ν̂_t² = κ_t + h_t + ξ_t,   (70)
h_t = α_1 h_{t-1} + η_t,   (71)

where κ_t is the stochastic level.
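The passage from (67) to (70) can be made explicit. The following standard linearization, not spelled out in the text, shows why squaring the residuals and taking logarithms yields a linear state space form:

ν_t² = σ² e^{h_t} ε_t²   ⇒   log ν_t² = log σ² + h_t + log ε_t².

Writing κ_t for a (possibly stochastic) level that absorbs log σ² + E(log ε_t²), and ξ_t = log ε_t² − E(log ε_t²) for the zero-mean, non-Gaussian irregular, gives the observation equation (70), while the transition equation (71) for h_t is unchanged. This is why quasi-maximum likelihood via the Kalman filter can be applied even though ξ_t is not Gaussian.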
In Figure 7 the logarithm of the squared residuals of the model given in (66), after estimation, is shown at the top left; at the top right is the estimated autocorrelation function; at the bottom left is the estimated partial autocorrelation function; and at the bottom right is the estimated density function, represented by a red line, compared with the normal density function, represented by a green line. It is clearly seen there that the model given in (61) and (62) is the right one.

By estimating the model given in (61) and (62) we found that σ̂²(log ν̂_t²) = 5.4272, σ̂²_ξ = 5.12422 and σ̂²_η = 0.296033; the level κ_t is stochastic, with variance equal to 0.00162343, and κ̂ = 0.89350 at the end of the period. The normality statistic gives a value of 624.44, which is high. This is inevitable, because the transformed model (61) and (62) is not Gaussian, and it should not worry us. On the other hand, the estimate of α_1 is α̂_1 = 0.91838, which is high, as expected.

In Figure 8 the logarithm of the squared residuals of the model given in (66) is shown at the top, after estimation (black line), together with the estimated level (red line); in the central part is the estimated AR(1) component, whose equation is given in (62); and at the bottom are the estimates of the irregular component of (61).

From (68) it follows that the volatility is equal to

σ_t² = σ² e^{h_t},   (72)

which has two multiplicative components: a scale constant σ² and the basic volatility e^{h_t}. Of these two components, the basic volatility is obviously the more important one, since the other is a multiplicative constant. The basic volatility is estimated as e^{ĥ_t}, where ĥ_t is

ĥ_t = α̂_1 ĥ_{t-1},   (73)

with α̂_1 = 0.91838, the estimated value of α_1. To estimate σ², the estimated series of the irregular component of (66), corrected for basic heteroscedasticity, is calculated, i.e.

ν̃_t = ν̂_t exp(−ĥ_t/2),   (74)

and then the variance σ̃² of ν̃_t is computed, which is an estimate of σ².
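A rough sketch of this whole route (fit the mean equation (66), transform the residuals, estimate the level-plus-AR(1) state space model by quasi-maximum likelihood with the Kalman filter, and recover the volatility as in (73)-(75)) is given below. It uses statsmodels purely as an illustration; the authors work with packages such as STAMP, and the simulated series is again a placeholder for the Merval data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
returns = pd.Series(rng.standard_t(df=6, size=3006) * 0.01)  # placeholder for the Merval log-returns

# Mean equation (66): autoregressive terms at lags 1, 5 and 19
resid = AutoReg(returns, lags=[1, 5, 19]).fit().resid         # nu_hat_t

# Observation equation (70): log squared residuals = stochastic level + h_t + irregular
y = np.log(resid**2 + 1e-12)                                  # small offset guards against log(0)
uc = sm.tsa.UnobservedComponents(y, level="llevel", autoregressive=1)
res_uc = uc.fit(disp=False)                                   # quasi-ML via the Kalman filter and smoother

h_hat = res_uc.autoregressive.smoothed                        # smoothed basic-volatility state h_t
nu_tilde = resid * np.exp(-h_hat / 2)                         # eq. (74): residuals corrected for basic volatility
sigma2_scale = nu_tilde.var()                                 # estimate of the scale constant sigma^2
volatility = sigma2_scale * np.exp(h_hat)                     # eq. (75): estimated volatility series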
Table 2: Parameter Estimates for the Model of Merval Returns (2003-2015)

Estimator   Estimated value   Standard deviation   t value   Probability
θ̂_1         0.03518           0.01824              1.92877   0.05385
θ̂_5         0.04962           0.01825              2.71835   0.00660
θ̂_19        0.06633           0.01824              3.63729   0.00028

Note. This table presents the estimated parameters for the model fitted to the Merval returns series from January 13, 2003, to May 22, 2015. The parameters include the autoregressive coefficients (θ̂_1, θ̂_5, θ̂_19), their standard deviations, t-values, and the corresponding probabilities, which indicate their statistical significance.
Figure 5. Conditional Variance and Residual Distribution of Merval Returns (2003-2015)  
Note. The top panel displays the conditional variance (volatility) of the Merval returns series from January 13, 2003, to May 22, 2015. The bottom  
panel shows the distribution of the standardized residuals after the adjustment, compared with a skewed Student-t distribution. The estimated degrees  
of freedom for this distribution are 5.82627, and the asymmetry parameter is 0.103452.  
Figure 6. Residual Analysis of the Merval Returns Model (2003-2015)  
Note. The figure presents the residual analysis of the model defined in Equation (66) for the Merval returns series from January 13, 2003, to May 22, 2015. The top panel displays the estimated autocorrelation function of the residuals, while the bottom panels show the estimated spectral density and the residual distribution. These diagnostics help assess the adequacy of the fitted model.
Figure 7. Analysis of the Log-Squared Residuals of the Merval Returns Model (2003-2015)  
Note. The figure presents the analysis of the log-squared residuals of the model defined in Equation (66) for the Merval returns series from January 13, 2003, to May 22, 2015. The top panel displays the estimated autocorrelation function, while the middle panel presents the estimated partial autocorrelation function. The bottom panel shows the residual distribution, helping assess the properties of the squared residuals.
The calculated value in our case is σ̃² = 0.819646. With this, and based on (68), we have that the estimated volatility is

σ̂_t² = σ̃² e^{ĥ_t}.   (75)

Figure 9 shows the estimated conditional variance, σ̃² exp{ĥ_t}, for the entire period studied. It is important to note that there is a peak corresponding to October 22, 2008. At that time, between October 14 and 24 of that year (between observation 1403 and observation 1411), there was a sharp fall in the stock market, capital flight, and a significant rise in the value of the dollar. This is also seen when working with the ARCH-GARCH family of models, as can be observed in the upper part of Figure 4. Also visible in Figure 9 are a minor depression close to September 19, 2008 (observation 1387), corresponding to the rise in stocks on Wall Street due to the US bailout plan; a peak between June 16 and 26, 2014 (between observation 2781 and observation 2788), corresponding to an adverse ruling by the US Supreme Court on Argentina's debt; and others.

In order to detect possible relationships between the series studied and its volatility, Figure 10 shows the estimated conditional standard deviation, σ̂_t, versus the standardized residuals of the model defined in equation (66) for the Merval returns series for the period between January 13, 2003 and May 22, 2015. It is clear that no structure relating them is observed.

In Figure 11 we can see, at the top, the last ten observations of the series under study (in red) and the corresponding predictions of the conditional mean from the model given in (66) (in blue); in the middle part are the residuals of that estimated model with the corresponding 95% confidence band, which serves to determine whether the residuals differ significantly from zero, and it is observed that they do not; and in the lower part these residuals are shown standardized, that is, transformed so that they have mean zero and variance one. With this we can say that the fit is adequate.

Figure 12 shows, at the top, the prediction of the log ν̂_t² series, where the ν̂_t are the residuals of the estimation of (66) (in blue), versus the observed series of log ν̂_t² for the period between January 13, 2003 and May 22, 2015 (the last ten observations are shown); in the middle part, the residuals of the fit of the model (70) (in blue) with a 95% confidence band; and, at the bottom, the graph of these residuals after standardization. As we can see, it can be concluded that the fit is adequate.

7 FINAL CONSIDERATIONS

The series studied is made up of the first differences of the logarithm of the level of the Merval index. This is a stock market index calculated at the Buenos Aires Stock Exchange (BCBA), Argentina. It is a series with information corresponding to all working days of the stock market. The period analyzed goes from January 13, 2003 to May 22, 2015, comprising 3006 observations. It covers a period in which there was no change in government affiliation, which eliminates the effects that changes in the governing group could have introduced into the market.

In our research, we set out to analyze methods for dealing with the wide variety of irregularities that occur in time series data.

Autoregressive integrated moving average (ARIMA) models are often considered to provide the main basis for modeling any time series. However, given the current state of development of time series research, there may be more attractive and, above all, more efficient alternatives. Many economic time series do not have a constant mean, and in most cases there are phases of relative calm followed by periods of significant change; that is, variability changes over time. This behavior is what is called volatility.

To remedy this and to take into account the presence of volatility in an economic series, it is necessary to resort to models known as conditional heteroscedastic models. In these models, the variance of the series at a given point in time depends on past information and other data available up to that point, so a conditional variance must be defined, which is not constant and does not coincide with the overall variance of the observed series.

Among the models we have presented are those of the ARCH family. The ARCH models, or autoregressive models with conditional heteroscedasticity, were first presented by Engle in 1982 with the objective of estimating the variance of inflation. The basic idea of this model is that y_t is not serially correlated, but the volatility, or conditional variance, of the series depends on the past of the series by means of a quadratic function. However, these models are rarely used in practice because of their simplicity. A good generalization is found in the GARCH-type models introduced by Bollerslev (1986). This model is also a weighted average of a quadratic function of the past of the series, but it is more parsimonious than the ARCH-type models, and even in its simplest form it has proven extremely successful in predicting conditional variances, so we decided to use it when working with our data.

The old saying that a picture is worth a thousand words is quite true in the analysis of any set of information. Before applying any statistical method to the data under study, it is essential to examine them graphically in order to become familiar with them. This can have numerous benefits, as we explained at the beginning of our analysis, since this process serves as a source of ideas for a more detailed later study. This was the first step in our work; it allowed us to see the main characteristics of the series and helped us to make an appropriate adjustment to it.

Firstly, we decided to fit a GARCH-type model that captures the main characteristics of the data. We saw that it adequately takes the volatility of the series into account. With this analysis we were able to capture some situations where volatility has a very important significance. We understand that the model used is adequate to predict the series and its components, in particular the volatility.

In the second part of this work it was decided to use a stochastic volatility approach to analyze the series under study, which turned out to be very useful in capturing the main characteristics of the data.

The ARCH or GARCH family models assume that the conditional variance (volatility) depends on past values. In other words, and using the notation we saw above, if σ_t² is the volatility, the ARCH-GARCH family assumes that it depends on the series y_j for j < t.
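In compact form, and only as a restatement of the specifications already introduced, the contrast can be written as

GARCH(1,1):  σ_t² = ω + α ν_{t-1}² + β σ_{t-1}²,
SVM:         σ_t² = σ² e^{h_t},  with  h_t = α_1 h_{t-1} + η_t,

so that in the first case the volatility is driven by the past squared innovations of the observed series, while in the second it is driven only by its own past and an independent disturbance η_t.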
Figure 8. Decomposition of the Log-Squared Residuals of the Merval Returns Model (2003-2015)  
Note. The figure presents the decomposition of the log-squared residuals of the model defined in Equation (66) for the Merval returns series from  
January 13, 2003, to May 22, 2015. The top panel shows the log-squared residuals with the level estimate. The middle panel displays the estimated  
AR(1) component, whose equation is given in Equation (62). The bottom panel presents the estimates of the irregular component, corresponding to  
Equation (61).  
Figure 9. Estimated Volatility of the Merval Returns (2003-2015)  
Note. The figure illustrates the estimated volatility of the Merval returns series from January 13, 2003, to May 22, 2015. The volatility is modeled  
using a conditional heteroskedasticity approach, capturing periods of high and low market fluctuations.  
Figure 10. Conditional Standard Deviation and Standardized Residuals of the Merval Returns Model (2003-2015)  
Note. The figure compares the estimated conditional standard deviation with the standardized residuals of the model defined in Equation (66) for the  
Merval returns series from January 13, 2003, to May 22, 2015. This comparison helps assess whether the standardized residuals exhibit homoskedas-  
ticity, validating the adequacy of the volatility model.  
Figure 11. Conditional Mean Prediction and Residual Analysis of the Merval Returns Model (2003-2015)  
Note. The figure presents the conditional mean prediction from the model defined in Equation (66) for the Merval returns series from January 13, 2003,  
to May 22, 2015. The top panel compares the predicted conditional mean (blue) with the observed returns (red), highlighting the last ten observations.  
The middle panel shows the model residuals with a 95% confidence band (blue). The bottom panel displays the standardized residuals for further  
evaluation of model adequacy.  
Figure 12. Prediction of the Log-Squared Residuals of the Merval Returns Model (2003-2015)  
Note. The figure presents the prediction of the log-squared residuals, log ν̂_t², from the model defined in Equation (66) for the Merval returns series from January 13, 2003, to May 22, 2015. The top panel compares the predicted values (blue) with the observed series (red), highlighting the last ten observations. The middle panel displays the residuals of the model fit in Equation (70) with a 95% confidence band (blue). The bottom panel presents the standardized residuals for further diagnostic analysis.
On the other hand, the stochastic volatility model or SVM, proposed for the first time by Taylor (1980, 1986, 1994), does not start from this assumption. This model is based on the fact that the volatility σ_t² depends on its own past values (σ_j² for j < t) but is independent of the past values of the series under analysis (y_j for j < t). Shephard and Pitt (1997) proposed the use of importance sampling to estimate the likelihood function in the non-Gaussian case. Since the SVM is a hierarchical model, Jacquier, Polson, and Rossi (1994) proposed a Bayesian analysis of it. See also Shephard (2005), Shephard and Pitt (1997), Kim, Shephard, and Chib (1998), and Ghysels, Harvey, and Renault (1996). An overview of the SVM estimation problem is given by Motta (2001).

As shown in Harvey, Ruiz, and Shephard (1994), the state space form provides the basis for quasi-maximum likelihood estimation via the Kalman filter and smoother, and also allows smoothed estimates of the variance component h_t to be constructed and predictions to be made. One of the attractions of the quasi-maximum likelihood approach is that it can be applied without an assumption about a particular distribution for ε_t. Another attraction of using a quasi-maximum likelihood procedure via the Kalman filter and smoother to estimate an SVM is that it can be carried out directly using standard computing packages such as STAMP by Koopman, Harvey, Doornik, and Shephard (2010). This is a great advantage compared to more labor-intensive simulation-based methods. In addition, by using an SVM, it was possible to estimate the different parts of the volatility (the scale constant and the basic volatility).

Finally, we can say that both the ARCH-GARCH family models and the SVM work very well to estimate and predict volatility. But the SVM has the advantage that both in the definition of its conditional mean, given in equations (54) and (55), and in the corresponding conditional variance, given in equations (56), (57) and (58), it is possible to introduce non-observable but estimable components such as trend, seasonality, cycles, structural changes, etc., as well as explanatory variables, all because these models are put in state space form, which provides great generality when carrying out the work. Another important advantage of this approach is that it is estimated using a procedure based on quasi-maximum likelihood, which gives great flexibility to the procedure and considerable robustness to the estimators.
REFERENCES

Abril, J. C. (1999). Análisis de series de tiempo basado en modelos de espacio de estado. EUDEBA.
Abril, J. C. (2004). Modelos para el análisis de las series de tiempo. Ediciones Cooperativas.
Abril, M. de las M. (2014). El enfoque de espacio de estado de las series de tiempo para el estudio de los problemas de volatilidad (Doctoral thesis). Universidad Nacional de Tucumán, Argentina.
Abril, J. C., & Abril, M. de las M. (2017). La heterocedasticidad condicional en la inflación de la Argentina: Un análisis para el período 1943-2013. Revista de Investigaciones del Departamento de Ciencias Económicas (RINCE), 8. https://rince.unlam.edu.ar
Abril, J. C., & Abril, M. de las M. (2018). Métodos modernos de series de tiempo y sus aplicaciones. Editorial Académica Española.
Abril, J. C., & Abril, M. de las M. (2024). Modelado y estimación de la volatilidad estocástica: Aplicación a la inflación. South American Research Journal, 4(1), 35-51.
Baillie, R. T., & Bollerslev, T. (1989). The message in daily exchange rates: A conditional-variance tale. Journal of Business and Economic Statistics, 7(3), 297-305.
Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31(3), 307-327.
Bollerslev, T. (1987). A conditionally heteroskedastic time series model for speculative prices and rates of return. Review of Economics and Statistics, 69(3), 542-547.
Bollerslev, T., Chou, R. Y., & Kroner, K. F. (1992). ARCH modeling in finance: A review of the theory and empirical evidence. Journal of Econometrics, 52(1-2), 5-59.
Bollerslev, T., & Wooldridge, J. M. (1992). Quasi-maximum likelihood estimation and inference in dynamic models with time-varying covariances. Econometric Reviews, 11(2), 143-172.
Box, G. E. P., & Jenkins, G. M. (1976). Time series analysis: Forecasting and control (Revised ed.). Holden-Day.
Broyden, C. G. (1970). The convergence of a class of double-rank minimization algorithms. Journal of the Institute of Mathematics and its Applications, 6(1), 76-90.
Bryan, M. F., & Cecchetti, S. G. (1994). Measuring core inflation. In N. G. Mankiw (Ed.), Monetary policy (pp. 195-219). University of Chicago Press.
Bryan, M. F., Cecchetti, S. G., & Wiggins II, R. L. (1997). Efficient inflation estimation (Working Paper No. 6183). National Bureau of Economic Research.
Durbin, J., & Koopman, S. J. (1997a). Monte Carlo maximum likelihood estimation for non-Gaussian state space models. Biometrika, 84(3), 669-684.
Durbin, J., & Koopman, S. J. (1997b). Time series analysis of non-Gaussian observations based on state space models. Preprint, London School of Economics.
Durbin, J., & Koopman, S. J. (2000). Time series analysis of non-Gaussian observations based on state space models from both classical and Bayesian perspectives (with discussion). Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(1), 3-56.
Durbin, J., & Koopman, S. J. (2001). Time series analysis by state space methods. Oxford University Press.
Durbin, J., & Koopman, S. J. (2012). Time series analysis by state space methods (2nd ed.). Oxford University Press.
Engle, R. F. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50(4), 987-1007.
Engle, R. F., & Gonzalez-Rivera, G. (1991). Semiparametric ARCH models. Journal of Business and Economic Statistics, 9(4), 345-360.
Engle, R., & Mezrich, J. (1996). GARCH for groups. Risk, 9(8), 36-40.
Fletcher, R. (1970). A new approach to variable metric algorithms. Computer Journal, 13(3), 317-322.
Fletcher, R. (1987). Practical methods of optimization (2nd ed.). John Wiley & Sons.
Ghysels, E., Harvey, A. C., & Renault, E. (1996). Stochastic volatility. In C. R. Rao & G. S. Maddala (Eds.), Statistical methods in finance (pp. 119-191). North-Holland.
Goldfarb, D. (1970). A family of variable metric updates derived by variational means. Mathematics of Computation, 24(109), 23-26.
Harvey, A. C. (1989). Forecasting, structural time series models and the Kalman filter. Cambridge University Press.
Harvey, A. C., Ruiz, E., & Shephard, N. (1994). Multivariate stochastic variance models. Review of Economic Studies, 61(2), 247-264.
Hsieh, D. A. (1989). Modeling heteroskedasticity in daily foreign exchange rates. Journal of Business and Economic Statistics, 7(3), 307-317.
Kim, S., Shephard, N., & Chib, S. (1998). Stochastic volatility: Likelihood inference and comparison with ARCH models. Review of Economic Studies, 65(2), 361-393.
Koopman, S. J., Harvey, A. C., Doornik, J. A., & Shephard, N. (2010). STAMP 8.3: Structural time series analyser, modeller and predictor. Timberlake Consultants.
Laurent, S. (2013). Estimating and forecasting ARCH models using G@RCH 7. Timberlake Consultants.
Motta, A. C. O. (2001). Modelos do espaço de estados não-Gaussianos e o modelo de volatilidade estocástica (Master's thesis). IMECC-UNICAMP.
Pagan, A. (1996). The econometrics of financial markets. Journal of Empirical Finance, 3(1), 15-102.
Sandmann, G., & Koopman, S. J. (1998). Estimation of stochastic volatility models via Monte Carlo maximum likelihood. Journal of Econometrics, 87(2), 271-301.
Shephard, N. (2005). Stochastic volatility: Selected readings. Oxford University Press.
Shephard, N., & Pitt, M. K. (1997). Likelihood analysis of non-Gaussian measurement time series. Biometrika, 84(3), 653-667.
Stock, J. H., & Watson, M. W. (2015). Core inflation and trend inflation (Working Paper No. 21282). National Bureau of Economic Research.
Taylor, S. J. (1980). Conjectured models for trend in financial prices tests as forecasts. Journal of the Royal Statistical Society: Series B (Methodological), 42(3), 338-362.
Taylor, S. J. (1986). Modelling financial time series. John Wiley.
Taylor, S. J. (1994). Modelling stochastic volatility. Mathematical Finance, 4(2), 183-204.
Weiss, A. A. (1986). Asymptotic theory for ARCH models: Estimation and testing. Econometric Theory, 2(1), 107-131.