Title: | Assessing Normality of Stationary Process |
---|---|
Description: | Although several tests for normality in stationary processes have been proposed in the literature, consistent implementations of these tests in programming languages are limited. Seven normality tests are implemented: the asymptotic Lobato and Velasco test, the asymptotic Epps test, the Psaradakis and Vávra test, the sieve bootstrap approximations of the Lobato and Velasco and the Epps tests, the El Bouch et al. test, and the random projections test for univariate stationary processes. Other diagnostics, such as unit root tests for stationarity, seasonal tests for seasonality, and an ARCH-effect test for volatility, are also provided. Additionally, the El Bouch test covers bivariate time series. The package also offers residual diagnostics for linear time series models developed in several packages. |
Authors: | Asael Alonzo Matamoros [aut, cre], Alicia Nieto-Reyes [aut], Rob Hyndman [ctb], Mitchell O'Hara-Wild [ctb], Trapletti A. [ctb] |
Maintainer: | Asael Alonzo Matamoros <[email protected]> |
License: | GPL-2 |
Version: | 1.1.2 |
Built: | 2025-01-26 06:12:00 UTC |
Source: | https://github.com/asael697/nortstest |
We present several functions for testing the hypothesis of normality in univariate stationary processes: epps.test, lobato.test, rp.test, lobato_bootstrap.test, epps_bootstrap.test, elbouch.test, and vavra.test. Additionally, the elbouch.test function performs a bivariate normality test when the user provides a second time series. For model diagnostics, we provide functions for unit root, seasonality, and ARCH-effect tests for stationarity, as well as methods for visual checks using the ggplot2 and forecast packages.
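As a quick sketch of the intended workflow, the tests are applied directly to a numeric vector or ts object. The call to library(nortsTest) assumes the package name from the project repository; the simulated series is only illustrative.

library(nortsTest)

set.seed(169)
y <- arima.sim(n = 250, model = list(ar = 0.3))  # stationary Gaussian AR(1)

lobato.test(y)   # asymptotic Lobato and Velasco test
epps.test(y)     # asymptotic Epps test
rp.test(y)       # random projections test
vavra.test(y)    # Psaradakis and Vavra sieve bootstrap test
elbouch.test(y)  # El Bouch et al. test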
Epps, T.W. (1987). Testing that a stationary time series is Gaussian. The Annals of Statistics, 15(4), 1683-1698. https://projecteuclid.org/euclid.aos/1176350618.

Lobato, I. & Velasco, C. (2004). A simple test of normality in time series. Journal of Econometric Theory, 20(4), 671-689. doi:10.1017/S0266466604204030.

Psaradakis, Z. & Vávra, M. (2017). A distance test of normality for a wide class of stationary processes. Journal of Econometrics and Statistics, 2, 50-60. doi:10.1016/j.ecosta.2016.11.005.

Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, 75(C), 124-141.

Hyndman, R. & Khandakar, Y. (2008). Automatic time series forecasting: the forecast package for R. Journal of Statistical Software, 27(3), 1-22. doi:10.18637/jss.v027.i03.

Wickham, H. (2008). ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York.
Performs the Portmanteau Q and Lagrange Multiplier tests for homoscedasticity in a univariate stationary process. The null hypothesis (H0) is that the process is homoscedastic.
arch.test(y, arch = c("box","Lm"), alpha = 0.05, lag.max = 2)
y |
a numeric vector or an object of the |
arch |
A character string naming the desired test for checking homoscedasticity. Valid values are
|
alpha |
Level of the test, possible values range from 0.01 to 0.1. By default |
lag.max |
an integer with the number of used lags. |
Several different tests are available: Portmanteau Q and Lagrange Multiplier tests for the null hypothesis that the residuals of an ARIMA model are homoscedastic. The ARCH test is based on the fact that if the residuals (defined as e[t]) are heteroscedastic, the squared residuals (e[t]^2) are autocorrelated. The first type of test examines whether the squared residuals form a white-noise sequence; this is the Portmanteau Q test, similar to the Ljung-Box test applied to the squared residuals. By default, alpha = 0.05 is used to select the more likely hypothesis.
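For instance, a brief sketch contrasting the two available diagnostics on the same simulated series (argument values follow the usage line above; the data are illustrative):

set.seed(169)
y <- arima.sim(n = 200, model = list(ar = 0.3))

arch.test(y, arch = "box", lag.max = 2)  # Portmanteau Q test on the squared series
arch.test(y, arch = "Lm", lag.max = 2)   # Lagrange Multiplier test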
A list with class "h.test"
containing the following components:
statistic: |
the test statistic. |
parameter: |
the test degrees of freedom. |
p.value: |
the p-value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string with the test name. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros
Engle, R. F. (1982). Auto-regressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation. Econometrica. 50(4), 987-1007.
McLeod, A. I. & W. K. Li. (1984). Diagnostic Checking ARMA Time Series Models Using Squared-Residual Auto-correlations. Journal of Time Series Analysis. 4, 269-273.
normal.test, seasonal.test, uroot.test
# stationary ar process
y = arima.sim(100, model = list(ar = 0.3))
arch.test(y)
autoplot takes an object of type ts or mts and creates a ggplot object suitable for usage with stat_forecast.
## S3 method for class 'ts'
autoplot(
  object,
  series = NULL,
  xlab = "Time",
  ylab = deparse(substitute(object)),
  main = NULL,
  facets = FALSE,
  colour = TRUE,
  ...
)

## S3 method for class 'numeric'
autoplot(
  object,
  series = NULL,
  xlab = "Time",
  ylab = deparse(substitute(object)),
  main = NULL,
  ...
)

## S3 method for class 'ts'
fortify(model, data, ...)
object |
Object of class “ |
series |
Identifies the time series with a colour, which integrates well with the functionality of geom_forecast. |
xlab |
a string with the plot's x axis label. By default a NULL value. |
ylab |
a string with the plot's y axis label. By default, the deparsed name of the object. |
main |
a string with the plot's title. |
facets |
If TRUE, multiple time series will be faceted (and unless specified, colour is set to FALSE). If FALSE, each series will be assigned a colour. |
colour |
If TRUE, the time series will be assigned a colour aesthetic. |
... |
Other plotting parameters to affect the plot. |
model |
Object of class “ |
data |
Not used (required for fortify method). |
fortify.ts takes a ts object and converts it into a data frame (for usage with ggplot2).
None. Function produces a ggplot2 graph.
Mitchell O'Hara-Wild
library(ggplot2)
autoplot(USAccDeaths)
lungDeaths <- cbind(mdeaths, fdeaths)
autoplot(lungDeaths)
autoplot(lungDeaths, facets = TRUE)
Generic function for a visual check of residuals in time series models. These methods are inspired by the checkresiduals function provided by the forecast package.
## S3 method for class 'ts'
check_plot(y, model = " ", ...)
y |
a numeric vector or an object of the |
model |
A string with the model name. |
... |
Other plotting parameters to affect the plot. |
A graph object from ggplot2.
Asael Alonzo Matamoros.
check_residuals
y = arima.sim(100, model = list(ar = 0.3))
check_plot(y)
Generic function for residual check analysis. These methods are inspired by the checkresiduals function provided by the forecast package.
## S3 method for class 'ts'
check_residuals(
  y,
  normality = "epps",
  unit_root = NULL,
  seasonal = NULL,
  arch = NULL,
  alpha = 0.05,
  plot = FALSE,
  ...
)
y |
Either a time series model,the supported classes are |
normality |
A character string naming the desired test for checking a Gaussian distribution.
Valid values are |
unit_root |
A character string naming the desired unit root test for checking stationarity.
Valid values are |
seasonal |
A character string naming the desired unit root test for checking seasonality.
Valid values are |
arch |
A character string naming the desired test for checking homoscedasticity. Valid values are
|
alpha |
Level of the test, possible values range from 0.01 to 0.1. By default |
plot |
A boolean value. If |
... |
Other testing parameters |
The function performs a residual analysis: it prints a unit root test and a seasonal test to check stationarity, and a normality test to check the Gaussian distribution assumption. In addition, if the plot option is TRUE, a time plot, the ACF, and a histogram of the series are presented.
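A minimal sketch of a full check on a simulated series (the test selections are illustrative; argument names follow the usage line above):

set.seed(169)
y <- arima.sim(n = 250, model = list(ar = 0.3))
check_residuals(y, normality = "lobato", unit_root = "adf",
                arch = "Lm", alpha = 0.05, plot = TRUE)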
The function does not return any value
Asael Alonzo Matamoros
Dickey, D. & Fuller, W. (1979). Distribution of the Estimators for Autoregressive Time Series with a Unit Root. Journal of the American Statistical Association. 74, 427-431.
Epps, T.W. (1987). Testing that a stationary time series is Gaussian. The Annals of Statistics, 15(4), 1683-1698. doi:10.1214/aos/1176350618.
Osborn, D., Chui, A., Smith, J., & Birchenhall, C. (1988). Seasonality and the order of integration for consumption. Oxford Bulletin of Economics and Statistics. 50(4), 361-377.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
check_residuals(y, unit_root = "adf")
Performs the approximated Cramer Von Mises test of normality for univariate time series. Computes the p-value using Psaradakis and Vavra's (2020) sieve bootstrap procedure.
cvm_bootstrap.test(y, reps = 1000, h = 100, seed = NULL)
y |
a numeric vector or an object of the |
reps |
an integer with the total bootstrap repetitions. |
h |
an integer with the first |
seed |
An optional |
Employs Cramer Von Mises test approximating the p-value using a sieve-bootstrap procedure, Psaradakis, Z. and Vávra, M. (2020).
A list with class "h.test"
containing the following components:
statistic: |
the sieve bootstrap Cramer Von Mises' statistic. |
p.value: |
the p value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “Sieve-Bootstrap Cramer Von Mises' test”. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros.
Psaradakis, Z. and Vávra, M. (2020) Normality tests for dependent data: large-sample and bootstrap approaches. Communications in Statistics-Simulation and Computation 49 (2). ISSN 0361-0918.
Bühlmann, P. (1997). Sieve bootstrap for time series. Bernoulli. 3(2), 123-148.
Stephens, M.A. (1986): Tests based on EDF statistics. In: D'Agostino, R.B. and Stephens, M.A., eds.: Goodness-of-Fit Techniques. Marcel Dekker, New York.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
cvm_bootstrap.test(y)
Computes the El Bouch, Michel, & Comon's z test statistic for normality of a univariate or bivariate time series.
elbouch.statistic(y, x = NULL)
y |
a numeric vector or an object of the |
x |
a numeric vector or an object of the |
This function computes Mardia's standardized 'z = (B - E_B)/ sd_B' statistic corrected by El Bouch, et al. (2022) for stationary bivariate time series. Where: 'B' is the square of a quadratic form of the process 'c(y, x)'; 'E_B' and 'sd_B' are the estimator's expected value and standard error respectively. If 'x' is set to 'NULL', the test computes the univariate counterpart.
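A small sketch relating the statistic to the corresponding test; it assumes the value reported by elbouch.test is this standardized statistic:

set.seed(169)
y <- arima.sim(n = 250, model = list(ar = 0.3))
z <- elbouch.statistic(y)  # standardized z = (B - E_B)/sd_B
z
elbouch.test(y)            # the reported statistic should agree with z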
a real value with El Bouch test's statistic.
Asael Alonzo Matamoros.
El Bouch, S., Michel, O. & Comon, P. (2022). A normality test for Multivariate dependent samples. Journal of Signal Processing. Volume 201.
Mardia, K. (1970). Measures of multivariate skewness and kurtosis with applications. Biometrika, 57 519-530
Lobato, I., & Velasco, C. (2004). A simple test of normality in time series. Journal of econometric theory. 20(4), 671-689.
# Generate a univariate stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
elbouch.statistic(y)

# Generate a bivariate Gaussian random vector
x = rnorm(200)
y = rnorm(200)
elbouch.statistic(y = y, x = x)
Computes the El Bouch, Michel, & Comon's test for normality of bivariate dependent samples.
elbouch.test(y, x = NULL)
y |
a numeric vector or an object of the |
x |
a numeric vector or an object of the |
This function computes El Bouch, et al. (2022) test for normality of bivariate dependent samples. If 'x' is set to 'NULL', the test computes the univariate counterpart. This test is a correction of Mardia's, (1970) multivariate skewness and kurtosis test for multivariate samples.
A list with class "h.test"
containing the following components:
statistic: |
the El Bouch Z statistic. |
p.value: |
the p value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “El Bouch, Michel & Comon's test”. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros.
El Bouch, S., Michel, O. & Comon, P. (2022). A normality test for Multivariate dependent samples. Journal of Signal Processing. Volume 201.
Mardia, K. (1970). Measures of multivariate skewness and kurtosis with applications. Biometrika, 57 519-530
Lobato, I., & Velasco, C. (2004). A simple test of normality in time series. Journal of econometric theory. 20(4), 671-689.
# Generate a univariate stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
elbouch.test(y)

# Generate a bivariate Gaussian random vector
x = rnorm(200)
y = rnorm(200)
elbouch.test(y = y, x = x)
Performs the approximated Epps and Pulley's test of normality for univariate time series. Computes the p-value using Psaradakis and Vavra's (2020) sieve bootstrap procedure.
epps_bootstrap.test(y, lambda = c(1,2), reps = 500, h = 100, seed = NULL)
y |
a numeric vector or an object of the |
lambda |
a numeric vector for evaluating the characteristic function. |
reps |
an integer with the total bootstrap repetitions. |
h |
an integer with the first |
seed |
An optional |
The Epps test minimizes a quadratic loss between the process' empirical characteristic function and the one implied by its first two moments (Epps, T.W., 1987). The p-value is approximated using the sieve bootstrap procedure of Psaradakis, Z. and Vávra, M. (2020).
A list with class "h.test"
containing the following components:
statistic: |
the sieve bootstrap Epps and Pulley's statistic. |
p.value: |
the p value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “Sieve-Bootstrap Epps' test”. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros and Alicia Nieto-Reyes.
Psaradakis, Z. and Vávra, M. (2020) Normality tests for dependent data: large-sample and bootstrap approaches. Communications in Statistics-Simulation and Computation 49 (2). ISSN 0361-0918.
Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, Elsevier, vol. 75(C), pages 124-141.
Epps, T.W. (1987). Testing that a stationary time series is Gaussian. The Annals of Statistic. 15(4), 1683-1698.
# Generating a stationary ARMA process
y = arima.sim(300, model = list(ar = 0.3))
epps_bootstrap.test(y, reps = 1000)
Estimates the Epps statistic minimizing the quadratic loss of the process' characteristic function in terms of the first two moments.
epps.statistic(y, lambda = c(1,2))
y |
a numeric vector or an object of the |
lambda |
a numeric vector for evaluating the characteristic function. These values can be selected by the user for better test performance. By default, the values are 'c(1,2)'; another plausible option is to select random values. |
The Epps test minimizes a quadratic loss between the process' empirical characteristic function and the one implied by its first two moments. Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014) extended the test implementation by allowing the option of evaluating the characteristic function at random values.

This function is the equivalent of Sub in Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). It uses a quadratic optimization solver implemented by Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (2007).
a real value with the Epps test's statistic.
Alicia Nieto-Reyes and Asael Alonzo Matamoros.
Epps, T.W. (1987). Testing that a stationary time series is Gaussian. The Annals of Statistic. 15(4), 1683-1698.
Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, Elsevier, vol. 75(C), pages 124-141.
Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (2007). Numerical Recipes. The Art of Scientific Computing. Cambridge University Press.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
epps.statistic(y)
Performs the asymptotic Epps test of normality for univariate time series. Computes the p-value using the asymptotic Gamma Distribution.
epps.test(y, lambda = c(1,2))
y |
a numeric vector or an object of the |
lambda |
a numeric vector for evaluating the characteristic function. These values can be selected by the user for better test performance. By default, the values are 'c(1,2)'; another plausible option is to select random values. |
The Epps test minimizes a quadratic loss between the process' empirical characteristic function and the one implied by its first two moments. Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014) extended the test implementation by allowing the option of evaluating the characteristic function at random values. The amoebam() function of Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (2007) performs the optimal search.
A list with class "h.test"
containing the following components:
statistic: |
the Epps statistic. |
parameter: |
the test degrees of freedom. |
p.value: |
the p value. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “Epps test”. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros and Alicia Nieto-Reyes.
Epps, T.W. (1987). Testing that a stationary time series is Gaussian. The Annals of Statistic. 15(4), 1683-1698.
Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, Elsevier, vol. 75(C), pages 124-141.
Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (2007). Numerical Recipes. The Art of Scientific Computing. Cambridge University Press.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
epps.test(y)

# Epps test with random lambda values
y = arima.sim(100, model = list(ar = c(0.3, 0.2)))
epps.test(y, lambda = rnorm(2, mean = 1, sd = 0.1))
acf plot. Plot of the auto-correlation function for a univariate time series.
ggacf(y, title = NULL)
y |
a numeric vector or an object of the |
title |
a string with the plot's title. |
None.
Asael Alonzo Matamoros
x = rnorm(100)
ggacf(x)
Plots a histogram and density estimates using ggplot.
gghist(y, title = NULL, xlab = NULL, ylab = "counts", bins, add.normal = TRUE)
y |
a numeric vector or an object of the |
title |
a string with the plot's title. |
xlab |
a string with the plot's x axis label. By default a NULL value. |
ylab |
a string with the plot's y axis label. By default a "counts" value. |
bins |
the number of bins to use for the histogram. Selected by default using the Friedman-Diaconis rule. |
add.normal |
a boolean value. Add a normal density function for comparison,
by default |
None.
Rob J Hyndman
x = rnorm(100)
gghist(x, add.normal = TRUE)
qqplot with normal qqline. Plot the quantile-quantile plot and quantile-quantile line using ggplot.
ggnorm(y, title = NULL, add.normal = TRUE)
y |
a numeric vector or an object of the |
title |
a string with the plot's title. |
add.normal |
Add a normal density function for comparison. |
None.
Asael Alonzo Matamoros
x = rnorm(100)
ggnorm(x)
pacf plot. Plot of the partial autocorrelation function for a univariate time series.
ggpacf(y, title = NULL)
y |
a numeric vector or an object of the |
title |
a string with the plot's title. |
None.
Mitchell O'Hara-Wild and Asael Alonzo Matamoros
x = rnorm(100)
ggpacf(x)
Performs the approximated Jarque Bera test of normality for univariate time series. Computes the p-value using Psaradakis and Vavra's (2020) sieve bootstrap procedure.
jb_bootstrap.test(y, reps = 1000, h = 100, seed = NULL)
y |
a numeric vector or an object of the |
reps |
an integer with the total bootstrap repetitions. |
h |
an integer with the first |
seed |
An optional |
Employs Jarque Bera skewness-kurtosis test approximating the p-value using a sieve-bootstrap procedure, Psaradakis, Z. and Vávra, M. (2020).
A list with class "h.test"
containing the following components:
statistic: |
the sieve bootstrap Jarque Bera's statistic. |
p.value: |
the p value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “Sieve-Bootstrap Jarque Bera's test”. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros.
Psaradakis, Z. and Vávra, M. (2020) Normality tests for dependent data: large-sample and bootstrap approaches. Communications in Statistics-Simulation and Computation 49 (2). ISSN 0361-0918.
Bühlmann, P. (1997). Sieve bootstrap for time series. Bernoulli. 3(2), 123-148.
J. B. Cromwell, W. C. Labys and M. Terraza (1994): Univariate Tests for Time Series Models, Sage, Thousand Oaks, CA, pages 20–22.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
jb_bootstrap.test(y)
Performs the Lagrange Multiplier test for homoscedasticity in a stationary process. The null hypothesis (H0) is that the process is homoscedastic.
Lm.test(y,lag.max = 2,alpha = 0.05)
y |
a numeric vector or an object of the |
lag.max |
an integer with the number of used lags. |
alpha |
Level of the test, possible values range from 0.01 to 0.1. By default
|
The Lagrange Multiplier test proposed by Engle (1982) fits a linear regression model for the squared residuals and examines whether the fitted model is significant. So the null hypothesis is that the squared residuals are a sequence of white noise, namely, the residuals are homoscedastic.
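As an illustrative sketch only (not the package's internal code), the Lagrange Multiplier idea can be mimicked by regressing the squared centred series on two of its lags and comparing n*R^2 with a chi-squared distribution:

# illustrative LM-type check with lag.max = 2
set.seed(169)
y   <- arima.sim(n = 300, model = list(ar = 0.3))
e2  <- as.numeric(y - mean(y))^2      # squared centred observations
X   <- embed(e2, 3)                   # columns: e2[t], e2[t-1], e2[t-2]
fit <- lm(X[, 1] ~ X[, 2] + X[, 3])
LM  <- nrow(X) * summary(fit)$r.squared
pchisq(LM, df = 2, lower.tail = FALSE)  # compare with Lm.test(y, lag.max = 2)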
A list with class "h.test"
containing the following components:
statistic: |
the Lagrange multiplier statistic. |
parameter: |
the test degrees of freedom. |
p.value: |
the p value. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “Lagrange Multiplier test”. |
data.name: |
a character string giving the name of the data. |
A. Trapletti and Asael Alonzo Matamoros.
Engle, R. F. (1982). Auto-regressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation. Econometrica. 50(4), 987-1007.
McLeod, A. I. and W. K. Li. (1984). Diagnostic Checking ARMA Time Series Models Using Squared-Residual Auto-correlations. Journal of Time Series Analysis. 4, 269-273.
# generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
Lm.test(y)
Performs the approximated Lobato and Velasco's test of normality for univariate time series. Computes the p-value using Psaradakis and Vavra's (2020) sieve bootstrap procedure.
lobato_bootstrap.test(y, c = 1, reps = 1000, h = 100, seed = NULL)
y |
a numeric vector or an object of the |
c |
a positive real value that identifies the total amount of values used in the cumulative sum. |
reps |
an integer with the total bootstrap repetitions. |
h |
an integer with the first |
seed |
An optional |
This test assesses the normality assumption in correlated data, employing the skewness-kurtosis test statistic proposed by Lobato, I. & Velasco, C. (2004) and approximating the p-value with the sieve bootstrap procedure of Psaradakis, Z. and Vávra, M. (2020).
A list with class "h.test"
containing the following components:
statistic: |
the sieve bootstrap Lobato and Velasco's statistic. |
p.value: |
the p value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “Sieve-Bootstrap Lobato's test”. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros and Alicia Nieto-Reyes.
Psaradakis, Z. and Vávra, M. (2020) Normality tests for dependent data: large-sample and bootstrap approaches. Communications in Statistics-Simulation and Computation 49 (2). ISSN 0361-0918.
Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, Elsevier, vol. 75(C), pages 124-141.
Lobato, I., & Velasco, C. (2004). A simple test of normality in time series. Journal of econometric theory. 20(4), 671-689.
# Generating a stationary ARMA process
y = arima.sim(1000, model = list(ar = 0.3))
lobato_bootstrap.test(y, reps = 1000)
Computes the Lobato and Velasco's statistic. This test assesses the normality assumption in correlated data, employing the skewness-kurtosis test statistic studentized by standard error estimates that are consistent under serial dependence of the observations.
lobato.statistic(y, c = 1)
y |
a numeric vector or an object of the |
c |
a positive real value that identifies the total amount of values used in the cumulative sum. |
This function is the equivalent of GestadisticoVn of Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014).
A real value with the Lobato and Velasco test's statistic.
Alicia Nieto-Reyes and Asael Alonzo Matamoros.
Lobato, I., & Velasco, C. (2004). A simple test of normality in time series. Journal of econometric theory. 20(4), 671-689.
Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, Elsevier, vol. 75(C), pages 124-141.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
lobato.statistic(y, 3)
Performs the asymptotic Lobato and Velasco's test of normality for univariate time series. Computes the p-value using the asymptotic Gamma Distribution.
lobato.test(y,c = 1)
y |
a numeric vector or an object of the |
c |
a positive real value that identifies the total amount of values used in the cumulative sum. |
This test assesses the normality assumption in correlated data, employing the skewness-kurtosis test statistic studentized by standard error estimates that are consistent under serial dependence of the observations. The test was proposed by Lobato, I. & Velasco, C. (2004) and implemented by Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014).
A list with class "h.test"
containing the following components:
statistic: |
the Lobato and Velasco's statistic. |
parameter: |
the test degrees of freedom. |
p.value: |
the p-value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “Lobato and Velasco's test”. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros and Alicia Nieto-Reyes.
Lobato, I., & Velasco, C. (2004). A simple test of normality in time series. Journal of econometric theory. 20(4), 671-689.
Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, Elsevier, vol. 75(C), pages 124-141.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
lobato.test(y)
Perform a normality test. The null hypothesis (H0) is that the given data follows a stationary Gaussian process.
normal.test(y, normality = c("epps","lobato","vavra","rp","jb","ad","shapiro"), alpha = 0.05)
y |
a numeric vector or an object of the |
normality |
A character string naming the desired test for checking normality. Valid values are
|
alpha |
Level of the test, possible values range from 0.01 to 0.1. By default |
"lobato"
, "epps"
, "vavras"
and "rp"
test are for testing normality
in stationary process. "jb"
, "ad"
, and "shapiro"
tests are for numeric data.
In all cases, the alternative hypothesis is that y
follows a Gaussian process. By default,
alpha = 0.05
is used to select the more likely hypothesis.
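A short sketch exercising a few of the available options on a stationary series (option names follow the usage line above):

set.seed(169)
y <- arima.sim(n = 250, model = list(ar = 0.3))

normal.test(y, normality = "lobato")  # asymptotic Lobato and Velasco test
normal.test(y, normality = "rp")      # random projections test
normal.test(y, normality = "vavra")   # sieve bootstrap Psaradakis and Vavra test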
A list with class "h.test"
containing the following components:
statistic: |
the test statistic. |
parameter: |
the test degrees of freedom. |
p.value: |
the p-value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string with the test name. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros
Epps, T.W. (1987). Testing that a stationary time series is Gaussian. The Annals of Statistic. 15(4), 1683-1698.
Lobato, I., & Velasco, C. (2004). A simple test of normality in time series. Journal of econometric theory. 20(4), 671-689.
Psaradakis, Z. & Vávra, M. (2017). A distance test of normality for a wide class of stationary process. Journal of Econometrics and Statistics. 2, 50-60.
Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, Elsevier, vol. 75(C), pages 124-141.
Royston, P. (1982). An extension of Shapiro and Wilk's W test for normality to large samples. Applied Statistics, 31, 115-124.
Cromwell, J. B., Labys, W. C. & Terraza, M. (1994). Univariate Tests for Time Series Models. Sage, Thousand Oaks, CA. 20-22.
# stationary ar process
y = arima.sim(100, model = list(ar = 0.3))
normal.test(y)  # epps test

# normal random sample
y = rnorm(100)
normal.test(y, normality = "shapiro")

# exponential random sample
y = rexp(100)
normal.test(y, normality = "ad")
Generates a random projection of a univariate stationary stochastic process, using a beta(shape1, shape2) distribution.
random.projection(y,shape1,shape2,seed = NULL)
y |
a numeric vector or an object of the |
shape1 |
an optional real value with the first shape parameters of the beta distribution. |
shape2 |
an optional real value with the second shape parameters of the beta distribution. |
seed |
An optional |
Generates one random projection of a stochastic process using a beta distribution. For more details, see: Nieto-Reyes, A.,Cuesta-Albertos, J. & Gamboa, F. (2014).
a real vector with the projected stochastic process.
Alicia Nieto-Reyes and Asael Alonzo Matamoros.
Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, Elsevier, vol. 75(C), pages 124-141.
Epps, T.W. (1987). Testing that a stationary time series is Gaussian. The Annals of Statistic. 15(4), 1683-1698.
Lobato, I., & Velasco, C. (2004). A simple test of normality in time series. Journal of econometric theory. 20(4), 671-689.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
random.projection(y, shape1 = 100, shape2 = 1)
Generates a sample of '2k' test statistics by projecting the stationary process using the random projections procedure.
rp.sample(y, k = 1, pars1 = c(100,1), pars2 = c(2,7), seed = NULL)
y |
a numeric vector or an object of the |
k |
an integer k such that '2k' random projections are used in total ('k' for each type of test statistic). The 'pars1' argument generates the first 'k' projections, and 'pars2' generates the later 'k' projections. By default, |
pars1 |
an optional real vector with the shape parameters of the beta
distribution used for the first 'k' random projections By default,
|
pars2 |
an optional real vector with the shape parameters of the beta
distribution used to compute the last 'k' random projections. By default,
|
seed |
An optional |
The rp.sample function generates '2k' test statistics by projecting the time series using '2k' stick breaking processes. First, the function samples 'k' stick breaking processes using the pars1 argument. Then, it projects the time series using the sampled stick processes. Later, it applies the Epps statistic to the odd projections and the Lobato and Velasco statistic to the even ones. Analogously, the function performs the same three steps using the pars2 argument.

The function uses beta distributions to generate the '2k' random projections. By default, it uses a beta(shape1 = 100, shape2 = 1) distribution contained in the pars1 argument to generate the first 'k' projections. For the later 'k' projections, the function uses a beta(shape1 = 2, shape2 = 7) distribution contained in the pars2 argument.

The test was proposed by Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014).
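A small sketch of the returned object (k = 2 is illustrative):

set.seed(169)
y <- arima.sim(n = 250, model = list(ar = 0.3))
s <- rp.sample(y, k = 2)
str(s)             # list with components 'lobato' and 'epps'
summary(s$lobato)  # Lobato and Velasco statistics from the projections
summary(s$epps)    # Epps statistics from the projections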
A list with 2 real value vectors:
lobato: |
A vector with the Lobato and Velasco's statistics sample. |
epps: |
A vector with the Epps statistics sample. |
Alicia Nieto-Reyes and Asael Alonzo Matamoros
Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, Elsevier, vol. 75(C), pages 124-141.
Epps, T.W. (1987). Testing that a stationary time series is Gaussian. The Annals of Statistic. 15(4), 1683-1698.
Lobato, I., & Velasco, C. (2004). A simple test of normality in time series. Journal of econometric theory. 20(4), 671-689.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
rp.sample(y)
Performs the random projection test for normality. The null hypothesis (H0) is that the given data follows a stationary Gaussian process.
rp.test(y, k = 1, FDR = TRUE, pars1 = c(100,1), pars2 = c(2,7), seed = NULL)
y |
a numeric vector or an object of the |
k |
an integer k such that '2k' random projections are used in total ('k' for each type of test statistic). The 'pars1' argument generates the first 'k' projections, and 'pars2' generates the later 'k' projections. By default, |
FDR |
a logical value for mixing the p.values using a False discovery
rate method. If |
pars1 |
an optional real vector with the shape parameters of the beta
distribution used for the first 'k' random projections By default,
|
pars2 |
an optional real vector with the shape parameters of the beta
distribution used to compute the last 'k' random projections. By default,
|
seed |
An optional |
The random projection test generates '2k' random projections of 'y'. It applies the Epps statistic to the odd projections and the Lobato and Velasco statistic to the even ones, computes the '2k' p-values using an asymptotic chi-square distribution with two degrees of freedom, and finally mixes the p-values using a false discovery rate procedure. By default, the p-values are mixed using Benjamini and Yekutieli's (2001) method.
The function uses beta distributions to generate the '2k' random projections. By default, it uses a beta(shape1 = 100, shape2 = 1) distribution contained in the pars1 argument to generate the first 'k' projections. For the later 'k' projections, the function uses a beta(shape1 = 2, shape2 = 7) distribution contained in the pars2 argument.

The test was proposed by Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014).
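A brief sketch with more projections and both p-value mixing options (k = 5 is illustrative; argument names follow the usage line above):

set.seed(169)
y <- arima.sim(n = 250, model = list(ar = 0.3))
rp.test(y, k = 5)               # 10 projections, p-values mixed by FDR
rp.test(y, k = 5, FDR = FALSE)  # same projections, alternative p-value mixing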
A list with class "h.test"
containing the following components:
statistic: |
an integer value with the amount of projections per test. |
parameter: |
a text that specifies the p.value mixing FDR method. |
p.value: |
the FDR mixed p-value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “k random projections test”. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros and Alicia Nieto-Reyes.
Nieto-Reyes, A., Cuesta-Albertos, J. & Gamboa, F. (2014). A random-projection based test of Gaussianity for stationary processes. Computational Statistics & Data Analysis, Elsevier, vol. 75(C), pages 124-141.
Lobato, I., & Velasco, C. (2004). A simple test of normality in time series. Journal of econometric theory. 20(4), 671-689.
Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics. 29, 1165–1188. Doi:10.1214/aos/1013699998.
Hochberg, Y. (1988). A sharper Bonferroni procedure for multiple tests of significance. Biometrika. 75, 800–803. Doi:10.2307/2336325.
Epps, T.W. (1987). Testing that a stationary time series is Gaussian. The Annals of Statistic. 15(4), 1683-1698.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
rp.test(y)
Performs a seasonal unit root test to check for seasonality in a linear stochastic process.
seasonal.test(y, seasonal = c("ocsb","ch","hegy"), alpha = 0.05)
y |
a numeric vector or an object of the |
seasonal |
A character string naming the desired seasonal unit root test for checking seasonality.
Valid values are |
alpha |
Level of the test, possible values range from 0.01 to 0.1. By default |
Several different tests are available: In the "ch" test, the null hypothesis is that y has a stable seasonal pattern, against a seasonal unit-root alternative. In the "ocsb" and "hegy" tests, the null hypothesis is that y has a seasonal unit root, against a stationary seasonal root alternative. By default, alpha = 0.05 is used to select the more likely hypothesis.
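A small sketch on a simulated quarterly series (the white-noise series and test choices are illustrative; option names follow the usage line above):

set.seed(169)
y <- ts(rnorm(120), frequency = 4)
seasonal.test(y)                   # "ocsb" test, the default
seasonal.test(y, seasonal = "ch")  # Canova and Hansen test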
A list with class "h.test"
containing the following components:
statistic: |
the test statistic. |
parameter: |
the test degrees of freedom. |
p.value: |
the p-value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string with the test name. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros
Osborn, D., Chui, A., Smith, J., & Birchenhall, C. (1988). Seasonality and the order of integration for consumption. Oxford Bulletin of Economics and Statistics. 50(4), 361-377.
Canova, F. & Hansen, B. (1995). Are Seasonal Patterns Constant over Time? A Test for Seasonal Stability. Journal of Business and Economic Statistics. 13(3), 237-252.
Hylleberg, S., Engle, R., Granger, C. & Yoo, B. (1990). Seasonal integration and cointegration. Journal of Econometrics 44(1), 215-238.
# stationary seasonal series
y = ts(rnorm(100), frequency = 6)
seasonal.test(y)
Performs the approximated Shapiro test for normality for univariate time series. Computes the p-value using Psaradakis and Vavra's (2020) sieve bootstrap procedure.
shapiro_bootstrap.test(y, reps = 1000, h = 100, seed = NULL)
y |
a numeric vector or an object of the |
reps |
an integer with the total bootstrap repetitions. |
h |
an integer with the first |
seed |
An optional |
Employs the Shapiro test approximating the p-value using a sieve-bootstrap procedure, Psaradakis, Z. and Vávra, M. (2020).
A list with class "h.test"
containing the following components:
statistic: |
the sieve bootstrap Shapiro's statistic. |
p.value: |
the p value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “Sieve-Bootstrap Shapiro's test”. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros.
Psaradakis, Z. and Vávra, M. (2020) Normality tests for dependent data: large-sample and bootstrap approaches. Communications in Statistics-Simulation and Computation 49 (2). ISSN 0361-0918.
Bühlmann, P. (1997). Sieve bootstrap for time series. Bernoulli. 3(2), 123-148.
Patrick Royston (1982). An extension of Shapiro and Wilk's W test for normality to large samples. Applied Statistics, 31, 115–124. Doi:10.2307/2347973.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
shapiro_bootstrap.test(y)
The function generates a sieve bootstrap sample for a univariate linear stochastic process.
sieve.bootstrap(y,reps = 1000,pmax = NULL,h = 100,seed = NULL)
y |
a numeric vector or an object of the |
reps |
an integer with the total bootstrap repetitions. |
pmax |
an integer with the max considered lags for the generated
|
h |
an integer with the first |
seed |
An optional |
Simulates bootstrap samples for the stochastic process y, using a stationary autoregressive model of order "pmax", AR(pmax). If pmax = NULL (the default), the function estimates the maximum lag of the process using the AIC as a model selection criterion.
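A minimal sketch (the reps and pmax values are illustrative; as described in the Value section below, each bootstrap replicate is stored as a row):

set.seed(169)
y <- arima.sim(n = 150, model = list(ar = 0.3))
M <- sieve.bootstrap(y, reps = 200, pmax = 2)
dim(M)  # 200 bootstrap series, each of length 150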
A matrix with reps rows and n columns containing the sieve bootstrap samples, where n is the length of the time series.
Asael Alonzo Matamoros.
Bühlmann, P. (1997). Sieve bootstrap for time series. Bernoulli. 3(2), 123-148.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
M = sieve.bootstrap(y)
Performs a unit root test to check stationarity in a linear stochastic process.
uroot.test(y, unit_root = c("adf","kpss","pp","box"), alpha = 0.05)
y |
a numeric vector or an object of the |
unit_root |
A character string naming the desired unit root test for checking stationarity.
Valid values are |
alpha |
Level of the test, possible values range from 0.01 to 0.1. By default |
Several different tests are available: In the "kpss" test, the null hypothesis is that y has a stationary root, against a unit-root alternative. In the "adf" and "pp" tests, the null hypothesis is that y has a unit root, against a stationary root alternative. In the "box" (Ljung-Box) test, the null hypothesis is that the series is uncorrelated. By default, alpha = 0.05 is used to select the more likely hypothesis.
A list with class "h.test"
containing the following components:
statistic: |
the test statistic. |
parameter: |
the test degrees of freedom. |
p.value: |
the p-value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string with the test name. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros and A. Trapletti.
Dickey, D. & Fuller, W. (1979). Distribution of the Estimators for Autoregressive Time Series with a Unit Root. Journal of the American Statistical Association. 74, 427-431.
Kwiatkowski, D., Phillips, P., Schmidt, P. & Shin, Y. (1992). Testing the Null Hypothesis of Stationarity against the Alternative of a Unit Root, Journal of Econometrics. 54, 159-178.
Phillips, P. & Perron, P. (1988). Testing for a unit root in time series regression, Biometrika. 72(2), 335-346.
Ljung, G. M. & Box, G. E. P. (1978). On a measure of lack of fit in time series models. Biometrika. 65, 297-303.
# stationary ar process
y = arima.sim(100, model = list(ar = 0.3))
uroot.test(y)

# a random walk process
y = cumsum(y)
uroot.test(y, unit_root = "pp")
Generates a sieve bootstrap sample of the selected normality test statistic (the Anderson-Darling statistic by default).
vavra.sample(y, normality = c("ad","lobato","jb","cvm","shapiro","epps"), reps = 1000, h = 100, seed = NULL, c = 1, lambda = c(1,2))
y |
a numeric vector or an object of the |
normality |
A character string naming the desired test for checking normality.
Valid values are |
reps |
an integer with the total bootstrap repetitions. |
h |
an integer with the first |
seed |
An optional |
c |
a positive real value used as argument for the Lobato's test. |
lambda |
a numeric vector used as argument for the Epps's test. |
The Vávra test approximates the empirical distribution function of the Anderson-Darling statistic using a sieve bootstrap approximation. The test was proposed by Psaradakis, Z. & Vávra, M. (2017).
This function is the equivalent of xarsieve of Psaradakis, Z. & Vávra, M. (2017).
A numeric array with the Anderson Darling sieve bootstrap sample
Asael Alonzo Matamoros.
Psaradakis, Z. and Vávra, M. (2020) Normality tests for dependent data: large-sample and bootstrap approaches. Communications in Statistics-Simulation and Computation 49 (2). ISSN 0361-0918.
Psaradakis, Z. & Vávra, M. (2017). A distance test of normality for a wide class of stationary process. Journal of Econometrics and Statistics. 2, 50-60.
Bühlmann, P. (1997). Sieve bootstrap for time series. Bernoulli. 3(2), 123-148.
epps.statistic, lobato.statistic
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
adbs = vavra.sample(y)
mean(adbs)
Performs the Psaradakis and Vávra distance test for normality. The null hypothesis (H0) is that the given data follows a Gaussian process.
vavra.test(y, normality = c("ad","lobato","jb","cvm","epps"), reps = 1000, h = 100, seed = NULL, c = 1, lambda = c(1,2))
y |
a numeric vector or an object of the |
normality |
A character string naming the desired test for checking
normality. Valid values are |
reps |
an integer with the total bootstrap repetitions. |
h |
an integer with the first |
seed |
An optional |
c |
a positive real value used as argument for the Lobato's test. |
lambda |
a numeric vector used as argument for the Epps's test. |
The Psaradakis and Vávra test approximates the empirical distribution function of the Anderson-Darling statistic using a sieve bootstrap approximation. The test was proposed by Psaradakis, Z. & Vávra, M. (2017).
A list with class "h.test"
containing the following components:
statistic: |
the sieve bootstrap A statistic. |
p.value: |
the p value for the test. |
alternative: |
a character string describing the alternative hypothesis. |
method: |
a character string “Psaradakis and Vávra test”. |
data.name: |
a character string giving the name of the data. |
Asael Alonzo Matamoros.
Psaradakis, Z. and Vávra, M. (2020) Normality tests for dependent data: large-sample and bootstrap approaches. Communications in Statistics-Simulation and Computation 49 (2). ISSN 0361-0918.
Psaradakis, Z. & Vávra, M. (2017). A distance test of normality for a wide class of stationary process. Journal of Econometrics and Statistics. 2, 50-60.
Bühlmann, P. (1997). Sieve bootstrap for time series. Bernoulli. 3(2), 123-148.
# Generating a stationary ARMA process
y = arima.sim(100, model = list(ar = 0.3))
vavra.test(y)