fracdiff/0000755000176200001440000000000014556665642012041 5ustar liggesusersfracdiff/NAMESPACE0000644000176200001440000000135413610314772013244 0ustar liggesusersuseDynLib(fracdiff, .registration=TRUE) importFrom("stats", AIC, arima, as.ts , fft, nextn , lm.fit , coef, logLik, resid , na.fail , printCoefmat , pnorm, qnorm, rnorm , symnum , tsp, "tsp<-" ) export("fracdiff", "fracdiff.sim", "fracdiff.var", "diffseries", "fdGPH", "fdSperio") ###---- Methods ---- all documented but not exported S3method(coef, fracdiff) S3method(confint, fracdiff) ; export("confint.fracdiff")# has been advertized S3method(fitted, fracdiff) S3method(logLik, fracdiff) S3method(print, fracdiff) S3method(residuals, fracdiff) S3method(vcov, fracdiff) S3method(summary, fracdiff) S3method(print, summary.fracdiff) fracdiff/README0000644000176200001440000000211513345443770012707 0ustar liggesusersfracdiff Maximum likelihood estimation of the parameters of a fractionally differenced ARIMA (p,d,q) model. For long-memory dependence in time series. (Haslett and Raftery, Applied Statistics 38, 1989, 1-50). See the help files for details. The original S/S-plus package by Chris Fraley, Department of Statistics, University of Washington, has been converted for usage with R; see README.orig, also for copyright. I've converted all single precision floats to double precision (both in the R file and the Fortran sources), as R didn't support single precision (in 1999). Fritz Leisch, TU Wien, Austria ------------ The package was _orphaned_ in Summer 2003, and after asking Fritz and Chris Fraley, I have become the new maintainer in December 2003. I've managed to locate and eradicate the bug leading to wrong hessian, covariance and correlation matrix estimates. Martin Maechler, ETH Zurich, Switzerland ------ See the files ./TODO and ./Done and ./ChangeLog on TODOs and ideas See the file ./Calling (and then src/ftn-struc) about code organization ~~~~~~~~~ fracdiff/ChangeLog0000644000176200001440000001622314556655056013615 0ustar liggesusers2022-10-18 Martin Maechler * DESCRIPTION (Version): 1.5-2: * src/fracdiff.h, src/fdcore.c: (S_fp) pointer *with* argument list [clang15 warnings]. 2020-01-17 Martin Maechler * DESCRIPTION (Version): 1.5-1; for CRAN 'Additional issues', fiddling w/ * tests/Valderio-ex.R: show all.equal(*, tol=0) only if "surprising" 2019-12-09 Martin Maechler * DESCRIPTION (Authors@R): using new format; ready for CRAN * R/fd-methods.R (summary.fracdiff): renamed '$ coefficients' such that 'coef()' works. 2019-12-09 Rob Hyndman * R/fracdiff.R, fd-methods.R (fracdiff): provide 'residuals()' and 'fitted'()' methods. 2018-09-10 Martin Maechler * R/diffseries.R (diffseries): Finally found bug in the fft() based version {"forgotten" centering} and fixed it. 2018-09-06 Martin Maechler * DESCRIPTION (Version): 1.5-0 (for new release with proper Imports) 2012-12-01 Martin Maechler * DESCRIPTION (Version): 1.4-2 * R/fracdiff.R (fracdiff.sim): new argument 'start.innov' in order to become even closer to arima.sim(). New arg 'backComp' which should allow to get *the same* results are arima.sima(). 2011-08-09 Martin Maechler * DESCRIPTION (Version): 1.4-0 * R/fd-methods.R (summary.fracdiff): fix bug that gave wrong 'df' * R/fracdiff.R (fracdiff): new 'trace' argument; further: now return estimated sigma (of white noise). 
* src/fdcore.c: -> verbose argument for iteration monitoring * src/init.c: add (dll symbol) "registration", for implied consistency checking * NAMESPACE: ditto * src/*.[ch]: a bit of cleanup; no longer using global fd_min_fnorm 2011-04-29 Martin Maechler * src/fdcore.c: remove set but unused variables (R 2.14.0 on CRAN now warns). 2009-06-09 Martin Maechler * DESCRIPTION (Version): 1.3-2, released to CRAN 2009-06-08 Martin Maechler * R/fracdiff.R (fracdiff): save *both* kind of warnings; * R/fd-methods.R (print.fracdiff): print them 2009-05-07 Martin Maechler src/ * NAMESPACE: add namespace, "just for fun" * R/fracdiff.R (fracdiff): first steps in *saving* warning messages (from C calls). 2006-09-08 Martin Maechler * DESCRIPTION (Enhances): longmemo 2006-09-08 Martin Maechler * released 1.3-1 to CRAN * tests/ex.R: update; use summary() * tests/ex.Rout.save: * R/fd-methods.R (summary.fracdiff): logLik + AIC (also in print). 2006-09-07 Martin Maechler * R/fd-methods.R (summary.fracdiff): also add summary(), print() vcov() and logLik() methods for that. * R/fracdiff.R (fracdiff.var): * DESCRIPTION (Version): 1.3-1 * R/fracdiff.R (fracdiff): finally add *class* "fracdiff" * src/Makevars: add missing FLIBS * R/fd-methods.R (confint.fracdiff): new, based on Spencer Graves' code (R-help, 23 Jul 2006) * R/fd-methods.R: (print.fracdiff), etc; new; just a stub * man/fd-methods.Rd: new. 2006-02-06 Martin Maechler * DESCRIPTION (Version): 1.3-0 released to CRAN 2005-12-27 Martin Maechler * R/diffseries.R (diffseries): new functions from Valderio Reisen * R/Sperio.R (Sperio): 2005-07-19 Martin Maechler * DESCRIPTION (Version): 1.2-2 * R/fracdiff.R (fracdiff): use .C(), no longer .Fortran() * src/fdcore.c (fracdf): dito * Calling: update * tests/sim-ex.R: if(FALSE) library(*, lib="..MM..") 2005-07-06 Martin Maechler * src/fdhess.c (hesspq_): move "inline" declarations to beginning of loop: against warning "ISO C89 forbids mixed declarations and code" 2005-07-02 Martin Maechler * DESCRIPTION (Version): 1.2-1 * src/fdsim.c (fdsim): finally found "off by 1" indexing bug {introduced only in 1.1-2, two weeks ago} which accessed s[0] and hence sometimes gave huge garbage initially. * tests/sim-ex.R: new: for fracdiff.sim() bug search now also a speed test. 2005-06-30 Martin Maechler * R/fracdiff.R (fracdiff.sim): add 'n.start', 'rand.gen', etc; similar as in arima.sim. NOTA BENE: changes the default fracdiff.sim() result as soon as --------- p + q >= 1 ! 2005-06-29 Martin Maechler * DESCRIPTION (Version): 1.2-0 {never released} * src/fdcore.c: using FD_EXTERNAL and including all the ``common block'' declarations: * src/mach_comm.h: all these are new, and included by * src/maux_comm.h: the *.c files that need them. * src/tols_comm.h: * src/gamm_comm.h: * src/hess_comm.h: * README: added several general notes * R/fracdiff.R (fracdiff): .C("fdhpq"): 'x' is not neeeded * src/fdcore.c (fdcom): move fdcom() {Common Block Initialization} * src/fdhess.c (fdcom): from fdhess.c to fdcore.c 2005-06-17 Martin Maechler * src/Makevars : drop the non-portable "-O3 -Wall" flags 2005-06-17 Martin Maechler * DESCRIPTION (Version): 1.1-2 * DESCRIPTION (Date): 2004-10-02 --- never released ---> now to CRAN * tests/ex-Vinod.R: add the example that failed (memory error) 2004-10-02 Martin Maechler * R/fracdiff.R (fracdiff): new integer work array passed to C. * src/fdcore.c (fracdf_): etc: new iw[] integer work array. * README: explain the "cast" bug in pqopt_()'s call to lmder1_()... 
* src/fracdiff.h: new for common declarations 2004-09-18 Martin Maechler * src/fdcore.c: now translated from fortran * src/fdgam.c: using f2c, my "f2c-clean" * src/fdhess.c: and lots of manual cleaning. * src/fdmin.c: * src/fdsim.c: 2004-04-29 Martin Maechler * R/fracdiff.R (fracdiff): make 'lenw' compatible to check in src/fdcore.f; {also make "info = 1" message more informative} * src/fdcore.f (fracdf): in the case of too small workspace, return the desired size to R. 2004-01-12 Martin Maechler * DESCRIPTION (Version): 1.1-1 * tests/ex.R: changed version; less precision - portable? * tests/ex.Rout.save: 2004-01-07 Martin Maechler * man/fracdiff.Rd: explain `h' argument and its default; now list all components in \value{}; also 'dtol': * R/fracdiff.R: dtol < 0 now does work as the docs have always said. * src/Makevars (PKG_LIBS): new; use BLAS_LIBS. 2003-12-29 Martin Maechler * DESCRIPTION (Version): 1.1-0 -> released to CRAN * tests/ex.R: new - first regression tests * R/fracdiff.R (fracdiff): found *the* bug!! : use result$w ! * src/fdmin.f: declared all; checked with "implicit none" * src/fdhess.f: * src/fdcore.f: * src/fdgam.f (d9lgmc): fixed typo: s/d9gmlc/d9lgmc/ checked all with "implicit none" * DESCRIPTION (Maintainer): Martin Maechler (was ORPHANED) * DESCRIPTION (License): GPL fracdiff/man/0000755000176200001440000000000014327743202012575 5ustar liggesusersfracdiff/man/fd-methods.Rd0000644000176200001440000000475213610314772015126 0ustar liggesusers\name{fracdiff-methods} \alias{coef.fracdiff} \alias{logLik.fracdiff} \alias{print.fracdiff} \alias{fitted.fracdiff} \alias{residuals.fracdiff} \alias{vcov.fracdiff} \alias{summary.fracdiff} \alias{print.summary.fracdiff} % \title{Many Methods for "fracdiff" Objects} \description{ Many \dQuote{accessor} methods for \code{\link{fracdiff}} objects, notably \code{\link{summary}}, \code{\link{coef}}, \code{\link{vcov}}, and \code{\link{logLik}}; further \code{\link{print}()} methods were needed. } \usage{ \method{coef}{fracdiff}(object, \dots) \method{logLik}{fracdiff}(object, \dots) \method{print}{fracdiff}(x, digits = getOption("digits"), \dots) \method{summary}{fracdiff}(object, symbolic.cor = FALSE, \dots) \method{print}{summary.fracdiff}(x, digits = max(3, getOption("digits") - 3), correlation = FALSE, symbolic.cor = x$symbolic.cor, signif.stars = getOption("show.signif.stars"), \dots) \method{fitted}{fracdiff}(object, \dots) \method{residuals}{fracdiff}(object, \dots) \method{vcov}{fracdiff}(object, \dots) } \arguments{ \item{x, object}{object of class \code{fracdiff}.} \item{digits}{the number of significant digits to use when printing.} \item{\dots}{further arguments passed from and to methods.} \item{correlation}{logical; if \code{TRUE}, the correlation matrix of the estimated parameters is returned and printed.} \item{symbolic.cor}{logical. If \code{TRUE}, print the correlations in a symbolic form (see \code{\link{symnum}}) rather than as numbers.} \item{signif.stars}{logical. If \code{TRUE}, \dQuote{significance stars} are printed for each coefficient.} } % \value{ % ~Describe the value returned % If it is a LIST, use % \item{comp1 }{Description of 'comp1'} % \item{comp2 }{Description of 'comp2'} % ... % } \author{Martin Maechler; Rob Hyndman contributed the \code{\link{residuals}()} and \code{\link{fitted}()} methods.} \seealso{\code{\link{fracdiff}} to get \code{"fracdiff"} objects, \code{\link{confint.fracdiff}} for the \code{\link{confint}} method; further, \code{\link{fracdiff.var}}. 
} \examples{ set.seed(7) ts4 <- fracdiff.sim(10000, ar = c(0.6, -.05, -0.2), ma = -0.4, d = 0.2) modFD <- fracdiff( ts4$series, nar = length(ts4$ar), nma = length(ts4$ma)) ## -> warning (singular Hessian) %% FIXME ??? coef(modFD) # the estimated parameters vcov(modFD) smFD <- summary(modFD) smFD coef(smFD) # gives the whole table AIC(modFD) # AIC works because of the logLik() method stopifnot(exprs = { }) } \keyword{print} \keyword{models} fracdiff/man/confint.fracdiff.Rd0000644000176200001440000000322213345443770016274 0ustar liggesusers\name{confint.fracdiff} \alias{confint.fracdiff} \title{Confidence Intervals for Fracdiff Model Parameters} \description{ Computes (Wald) confidence intervals for one or more parameters in a fitted fracdiff model, see \code{\link{fracdiff}}. } \usage{ \method{confint}{fracdiff}(object, parm, level = 0.95, \dots) } \section{Warning}{ As these confidence intervals use the standard errors returned by \code{\link{fracdiff}()} (which are based on finite difference approximations to the Hessian) they may end up being much too narrow, see the example in \code{\link{fracdiff.var}}. } \arguments{ \item{object}{an object of class \code{fracdiff}, typically result of \code{\link{fracdiff}(..)}.} \item{parm}{a specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If missing, all parameters are considered.} \item{level}{the confidence level required.} \item{\dots}{additional argument(s) for methods.} } \value{ A matrix (or vector) with columns giving lower and upper confidence limits for each parameter. These will be labelled as (1-level)/2 and 1 - (1-level)/2 in \% (by default 2.5\% and 97.5\%). } \author{Spencer Graves posted the initial version to R-help.} \seealso{the generic \code{\link{confint}}; \code{\link{fracdiff}} model fitting, notably \code{\link{fracdiff.var}()} for re-estimating the variance-covariance matrix on which \code{confint()} builds entirely. } \examples{ set.seed(101) ts2 <- fracdiff.sim(5000, ar = .2, ma = -.4, d = .3) mFD <- fracdiff( ts2$series, nar = length(ts2$ar), nma = length(ts2$ma)) coef(mFD) confint(mFD) } \keyword{models} fracdiff/man/fdGPH.Rd0000644000176200001440000000224113345443770014021 0ustar liggesusers\name{fdGPH} \alias{fdGPH} \title{Geweke and Porter-Hudak Estimator for ARFIMA(p,d,q)} \description{ Estimate the fractional (or \dQuote{memory}) parameter \eqn{d} in the ARFIMA(p,d,q) model by the method of Geweke and Porter-Hudak (GPH). The GPH estimator is based on the regression equation using the periodogram function as an estimate of the spectral density. } \usage{ fdGPH(x, bandw.exp = 0.5) } \arguments{ \item{x}{univariate time series} \item{bandw.exp}{the bandwidth used in the regression equation} } \details{ The function also provides the asymptotic standard deviation and the standard error deviation of the fractional estimator. The bandwidth is \code{bw = trunc(n ^ bandw.exp)}, where 0 < bandw.exp < 1 and n is the sample size. Default \code{bandw.exp = 0.5}. } \value{ \item{d}{GPH estimate} \item{sd.as}{asymptotic standard deviation} \item{sd.reg}{standard error deviation} } \references{see those in \code{\link{fdSperio}}. } \author{Valderio A. Reisen and Artur J. 
Lemonte}
\seealso{\code{\link{fdSperio}}, \code{\link{fracdiff}}}
\examples{
memory.long <- fracdiff.sim(1500, d = 0.3)
fdGPH(memory.long$series)
}
\keyword{ts}
fracdiff/man/diffseries.Rd0000644000176200001440000000350113345443770015214 0ustar liggesusers\name{diffseries}
\alias{diffseries}
\title{Fractionally Difference Data}
\description{
  Differences the time series data using the approximated binomial
  expression of the long-memory filter and an estimate of the memory
  parameter in the ARFIMA(p,d,q) model.
}
\usage{
diffseries(x, d)
}
\arguments{
  \item{x}{numeric vector or univariate time series.}
  \item{d}{number specifying the fractional difference order.}
}
\value{the fractionally differenced series \code{x}.}
\details{
  Since 2018, we are using (an important correction of) the fast algorithm
  based on the discrete Fourier transform (\code{\link{fft}}) by Jensen and
  Nielsen which is significantly faster for large \code{n = length(x)}.
}
\references{
  See those in \code{\link{fdSperio}}; additionally

  Reisen, V. A. and Lopes, S. (1999)
  Some simulations and applications of forecasting long-memory time series models;
  \emph{Journal of Statistical Planning and Inference} \bold{80}, 269--287.

  Reisen, V. A., Cribari-Neto, F. and Jensen, M.J. (2003)
  Long Memory Inflationary Dynamics. The case of Brazil.
  \emph{Studies in Nonlinear Dynamics and Econometrics} \bold{7}(3), 1--16.

  Jensen, Andreas Noack and Nielsen, Morten \enc{Ørregaard}{Oerregaard} (2014)
  A Fast Fractional Difference Algorithm.
  \emph{Journal of Time Series Analysis} \bold{35}(5), 428--436;
  \doi{10.1111/jtsa.12074}.
}
\author{Valderio A. Reisen \email{valderio@cce.ufes.br} and Artur J. Lemonte
  (first slow version), now hidden as \code{diffseries.0()}.

  Current version: Jensen and Nielsen (2014); tweaks by Martin Maechler, 2018.
}
\seealso{\code{\link{fracdiff.sim}}}
\examples{
memory.long <- fracdiff.sim(80, d = 0.3)
str(mGPH <- fdGPH(memory.long$series))
r <- diffseries(memory.long$series, d = mGPH$d)
#acf(r) # shouldn't show structure - ideally
}
\keyword{ts}
fracdiff/man/fdSperio.Rd0000644000176200001440000000375513345443770014657 0ustar liggesusers\name{fdSperio}
\alias{fdSperio}
\title{Sperio Estimate for 'd' in ARFIMA(p,d,q)}
\description{
  This function uses the Reisen (1994) estimator to estimate the memory
  parameter d in the ARFIMA(p,d,q) model.  It is based on the regression
  equation using the smoothed periodogram function as an estimate of the
  spectral density.
}
\usage{
fdSperio(x, bandw.exp = 0.5, beta = 0.9)
}
\arguments{
  \item{x}{univariate time series data.}
  \item{bandw.exp}{numeric: exponent of the bandwidth used in the regression
    equation.}
  \item{beta}{numeric: exponent of the bandwidth used in the lag Parzen window.}
}
\details{
  The function also provides the asymptotic standard deviation and the
  standard error deviation of the fractional estimator.

  The bandwidths are \code{bw = trunc(n ^ bandw.exp)}, where
  0 < bandw.exp < 1 and n is the sample size.  Default \code{bandw.exp = 0.5};
  \cr
  and \code{bw2 = trunc(n ^ beta)}, where 0 < beta < 1 and n is the sample
  size.  Default \code{beta = 0.9}.
}
\value{
  a list with components
  \item{d}{Sperio estimate}
  \item{sd.as}{asymptotic standard deviation}
  \item{sd.reg}{standard error deviation}
}
\references{
  Geweke, J. and Porter-Hudak, S. (1983)
  The estimation and application of long memory time series models.
  \emph{Journal of Time Series Analysis} \bold{4}(4), 221--238.

  Reisen, V. A.
(1994)
  Estimation of the fractional difference parameter in the ARFIMA(p,d,q)
  model using the smoothed periodogram.
  \emph{Journal of Time Series Analysis}, \bold{15}(1), 335--350.

  Reisen, V. A., B. Abraham, and E. M. M. Toscano (2001)
  Parametric and semiparametric estimations of stationary univariate
  ARFIMA model.
  \emph{Brazilian Journal of Probability and Statistics} \bold{14}, 185--206.
}
\author{Valderio A. Reisen \email{valderio@cce.ufes.br} and Artur J. Lemonte}
\seealso{\code{\link{fdGPH}}, \code{\link{fracdiff}}
}
\examples{
memory.long <- fracdiff.sim(1500, d = 0.3)
spm <- fdSperio(memory.long$series)
str(spm, digits=6)
}
\keyword{ts}
fracdiff/man/fracdiff.sim.Rd0000644000176200001440000000707213345443770015433 0ustar liggesusers\name{fracdiff.sim}
\alias{fracdiff.sim}
\title{Simulate fractional ARIMA Time Series}
\description{
  Generates simulated long-memory time series data from the fractional
  ARIMA(p,d,q) model.  This is a test problem generator for
  \code{\link{fracdiff}}.

  Note that the MA coefficients have \emph{inverted} signs compared to
  other parametrizations, see the details in \code{\link{fracdiff}}.
}
\usage{
fracdiff.sim(n, ar = NULL, ma = NULL, d,
             rand.gen = rnorm, innov = rand.gen(n+q, ...),
             n.start = NA, backComp = TRUE, allow.0.nstart = FALSE,
             start.innov = rand.gen(n.start, ...),
             ..., mu = 0)
}
\arguments{
  \item{n}{length of the time series.}
  \item{ar}{vector of autoregressive parameters; empty by default.}
  \item{ma}{vector of moving average parameters; empty by default.}
  \item{d}{fractional differencing parameter.}
  \item{rand.gen}{a function to generate the innovations; the default,
    \code{\link{rnorm}} generates white N(0,1) noise.}
  \item{innov}{an optional time series of innovations.  If not provided,
    \code{rand.gen()} is used.}
  \item{n.start}{length of \dQuote{burn-in} period.  If \code{NA}, the
    default, the same value as in \code{\link{arima.sim}} is computed.}
  \item{backComp}{logical indicating if back compatibility with older
    versions of \code{fracdiff.sim} is desired.  Otherwise, for
    \code{d = 0}, compatibility with \R's \code{\link{arima.sim}} is
    achieved.}
  \item{allow.0.nstart}{logical indicating if \code{n.start = 0} should be
    allowed even when \eqn{p + q > 0}.  This is not recommended, except for
    reproducing the same series as with older versions of
    \code{fracdiff.sim}.}
  \item{start.innov}{an optional vector of innovations to be used for the
    burn-in period.  If supplied there must be at least \code{n.start}
    values.}
  \item{\dots}{additional arguments for \code{rand.gen()}.
Most usefully, the standard deviation of the innovations generated by \code{rnorm} can be specified by \code{sd}.} \item{mu}{time series mean (added at the end).} } \value{ a list containing the following elements : \item{series}{time series} \item{ar, ma, d, mu, n.start}{same as input} } \seealso{ \code{\link{fracdiff}}, also for references; \code{\link[stats]{arima.sim}} } \examples{ ## Pretty (too) short to "see" the long memory fracdiff.sim(100, ar = .2, ma = .4, d = .3) ## longer with "extreme" ar: r <- fracdiff.sim(n=1500, ar=-0.9, d= 0.3) plot(as.ts(r$series)) ## Show that MA coefficients meaning is inverted ## compared to stats :: arima : AR <- 0.7 MA <- -0.5 n.st <- 2 AR <- c(0.7, -0.1) MA <- c(-0.5, 0.4) n <- 512 ; sd <- 0.1 n.st <- 10 set.seed(101) Y1 <- arima.sim(list(ar = AR, ma = MA), n = n, n.start = n.st, sd = sd) plot(Y1) # For our fracdiff, reverse the MA sign: set.seed(101) Y2 <- fracdiff.sim(n = n, ar = AR, ma = - MA, d = 0, n.start = n.st, sd = sd)$series lines(Y2, col=adjustcolor("red", 0.5)) ## .. no, you don't need glasses ;-) Y2 is Y1 shifted slightly ##' rotate left by k (k < 0: rotate right) rot <- function(x, k) { stopifnot(k == round(k)) n <- length(x) if(k <- k \%\% n) x[c((k+1):n, 1:k)] else x } k <- n.st - 2 Y2.s <- rot(Y2, k) head.matrix(cbind(Y1, Y2.s)) plot(Y1, Y2.s); i <- (n-k+1):n text(Y1[i], Y2.s[i], i, adj = c(0,0)-.1, col=2) ## With backComp = FALSE, get *the same* as arima.sim(): set.seed(101) Y2. <- fracdiff.sim(n = n, ar = AR, ma = - MA, d = 0, n.start = n.st, sd = sd, backComp = FALSE)$series stopifnot( all.equal( c(Y1), Y2., tolerance= 1e-15)) } \keyword{ts} fracdiff/man/fracdiff.Rd0000644000176200001440000001372414327743202014637 0ustar liggesusers\name{fracdiff} \alias{fracdiff} \title{ML Estimates for Fractionally-Differenced ARIMA (p,d,q) models} \description{ Calculates the maximum likelihood estimators of the parameters of a fractionally-differenced ARIMA (p,d,q) model, together (if possible) with their estimated covariance and correlation matrices and standard errors, as well as the value of the maximized likelihood. The likelihood is approximated using the fast and accurate method of Haslett and Raftery (1989). } \usage{ fracdiff(x, nar = 0, nma = 0, ar = rep(NA, max(nar, 1)), ma = rep(NA, max(nma, 1)), dtol = NULL, drange = c(0, 0.5), h, M = 100, trace = 0) } \arguments{ \item{x}{time series (numeric vector) for the ARIMA model} \item{nar}{number of autoregressive parameters \eqn{p}.} \item{nma}{number of moving average parameters \eqn{q}.} \item{ar}{initial autoregressive parameters.} \item{ma}{initial moving average parameters.} \item{dtol}{interval of uncertainty for \eqn{d}. If \code{dtol} is negative or NULL, the fourth root of machine precision will be used. \code{dtol} will be altered if necessary by the program.} \item{drange}{interval over which the likelihood function is to be maximized as a function of \eqn{d}.} \item{h}{size of finite difference interval for numerical derivatives. By default (or if negative), %% Only found the following by reading ../src/fdhess.f : \code{h = min(0.1, eps.5 * (1+ abs(cllf)))}, where \code{clff := log. max.likelihood} (as returned) and \code{eps.5 := sqrt(.Machine$double.neg.eps)} (typically 1.05e-8). This is used to compute a finite difference approximation to the Hessian, and hence only influences the cov, cor, and std.error computations; use \code{\link{fracdiff.var}()} to change this \emph{after} the estimation process. 
} \item{M}{number of terms in the likelihood approximation (see Haslett and Raftery 1989).} \item{trace}{optional integer, specifying a trace level. If positive, currently the \dQuote{outer loop} iterations produce one line of diagnostic output.} } \details{ The \pkg{fracdiff} package has --- for historical reason, namely, S-plus \code{arima()} compatibility --- used an unusual parametrization for the MA part, see also the \sQuote{Details} section in \code{\link[stats]{arima}} (in standard \R's \pkg{stats} package). The ARMA (i.e., \eqn{d = 0}) model in \code{fracdiff()} and \code{\link{fracdiff.sim}()} is \deqn{X_t - a_1X_{t-1} - \cdots - a_pX_{t-p} = e_t - b_1e_{t-1} - \dots - b_qe_{t-q},}{% X[t] - a[1]X[t-1] - \dots - a[p]X[t-p] = e[t] - b[1]e[t-1] - \dots - b[q]e[t-q],} where \eqn{e_i}{e[i]} are mean zero i.i.d., for \code{fracdiff()}'s estimation, \eqn{e_i \sim \mathcal{N}(0,\sigma^2)}{e[i] ~ N(0, s^2)}. This model indeed has the signs of the MA coefficients \eqn{b_j}{b[j]} \emph{inverted}, compared to other parametrizations, including Wikipedia's \url{https://en.wikipedia.org/wiki/Autoregressive_moving-average_model} and the one of \code{\link[stats]{arima}}. Note that \code{NA}'s in the initial values for \code{ar} or \code{ma} are replaced by \eqn{0}'s. } \value{ an object of S3 \code{\link{class}} \code{"fracdiff"}, which is a list with components: \item{log.likelihood}{logarithm of the maximum likelihood} \item{d}{optimal fractional-differencing parameter} \item{ar}{vector of optimal autoregressive parameters} \item{ma}{vector of optimal moving average parameters} \item{covariance.dpq}{covariance matrix of the parameter estimates (order : d, ar, ma).} \item{stderror.dpq}{standard errors of the parameter estimates \code{c(d, ar, ma)}.} \item{correlation.dpq}{correlation matrix of the parameter estimates (order : d, ar, ma).} \item{h}{interval used for numerical derivatives, see \code{h} argument.} \item{dtol}{interval of uncertainty for d; possibly altered from input \code{dtol}.} \item{M}{as input.} \item{hessian.dpq}{the approximate Hessian matrix \eqn{H} of 2nd order partial derivatives of the likelihood with respect to the parameters; this is (internally) used to compute \code{covariance.dpq}, the approximate asymptotic covariance matrix as \eqn{C = (-H)^{-1}}.} } \note{ Ordinarily, \code{nar} and \code{nma} should not be too large (say < 10) to avoid degeneracy in the model. The function \code{\link{fracdiff.sim}} is available for generating test problems. } \section{Method}{ The optimization is carried out in two levels:\cr an outer univariate unimodal optimization in d over the interval \code{drange} (typically [0,.5]), using Brent's \code{fmin} algorithm), and\cr an inner nonlinear least-squares optimization in the AR and MA parameters to minimize white noise variance (uses the MINPACK subroutine \code{lm}DER). written by Chris Fraley (March 1991). } \section{Warning}{ The variance-covariance matrix and consequently the standard errors may be quite inaccurate, see the example in \code{\link{fracdiff.var}}. } \references{ J. Haslett and A. E. Raftery (1989) Space-time Modelling with Long-memory Dependence: Assessing Ireland's Wind Power Resource (with Discussion); \emph{Applied Statistics} \bold{38}, 1--50. R. Brent (1973) \emph{Algorithms for Minimization without Derivatives}, Prentice-Hall J. J. More, B. S. Garbow, and K. E. Hillstrom (1980) \emph{Users Guide for MINPACK-1}, Technical Report ANL-80-74, Applied Mathematics Division, Argonne National Laboratory. 
} \seealso{ \code{\link{coef.fracdiff}} and other methods for \code{"fracdiff"} objects; \code{\link{fracdiff.var}()} for re-estimation of variances or standard errors; \code{\link{fracdiff.sim}} } \examples{ ts.test <- fracdiff.sim( 5000, ar = .2, ma = -.4, d = .3) fd. <- fracdiff( ts.test$series, nar = length(ts.test$ar), nma = length(ts.test$ma)) fd. ## Confidence intervals confint(fd.) ## with iteration output fd2 <- fracdiff(ts.test$series, nar = 1, nma = 1, trace = 1) all.equal(fd., fd2) } \keyword{ts} fracdiff/man/fracdiff.var.Rd0000644000176200001440000000322613345443770015430 0ustar liggesusers\name{fracdiff.var} \alias{fracdiff.var} \title{Recompute Covariance Estimate for fracdiff} \usage{ fracdiff.var(x, fracdiff.out, h) } \arguments{ \item{x}{a univariate time series or a vector. Missing values (NAs) are not allowed.} \item{fracdiff.out}{output from \code{fracdiff} for time series \code{x}.} \item{h}{finite-difference interval for approximating partial derivatives with respect to the \code{d} parameter.} } \description{ Allows the finite-difference interval to be altered for recomputation of the covariance estimate for \code{fracdiff}. } \value{ an object of S3 \code{\link{class}} \code{"fracdiff"}, i.e., basically a list with the same elements as the result from \code{\link{fracdiff}}, but with possibly different values for the hessian, covariance, and correlation matrices and for standard error, as well as for \code{h}. } \seealso{ \code{fracdiff}, also for references. } \examples{ ## Generate a fractionally-differenced ARIMA(1,d,1) model : ts.test <- fracdiff.sim(10000, ar = .2, ma = .4, d = .3) ## estimate the parameters in an ARIMA(1,d,1) model for the simulated series fd.out <- fracdiff(ts.test$ser, nar= 1, nma = 1) ## Modify the covariance estimate by changing the finite-difference interval (fd.o2 <- fracdiff.var(ts.test$series, fd.out, h = .0001)) ## looks identical as print(fd.out), ## however these (e.g.) differ : vcov(fd.out) vcov(fd.o2) ## A case, were the default variance is *clearly* way too small: set.seed(1); fdc <- fracdiff(X <- fracdiff.sim(n=100,d=0.25)$series) fdc # Confidence intervals just based on asymp.normal approx. and std.errors: confint(fdc) # ridiculously too narrow } \keyword{ts} fracdiff/TODO0000644000176200001440000000332313345443770012521 0ustar liggesusersTODO / Ideas see ---> ./Done for things finished ------------ ~~~~~~ 1. Now have class, but not yet residuals() & fitted(); predict() 1a. In any case, we want $residuals (as "arima"): ``the fitted innovations'' 1c. fracdiff.sim(): think about making it an *generalization* of arima.sim, maybe call the new function arfima.sim() and keep the old one as is. 2. call R's gammafn() and minimizers (Brent is there!), instead of "our own" 4. Consider the diverse filters, e.g. (0,d,0) --> (p,d,q) Now started implementing and testing in *R* : --> ./filters.R ~~~~~~~~~~~ 5. fracdiff() and fracdiff.var() share much code, including warning message generation. Clean up! See 'FIXME' in R/fracdiff.R ! 6. fracdiff.sim(): This really is a *filter* of the innovations. in C: 1) eps_t --> fARIMA(0, d, 0) =: Y_t 2) Y_t --> fARIMA(p, d, q) =: Z_t i.e. a simple ARMA() filter --> we should provide the "filter 1)" as a *separate* R function 8. Long-standing "Bug" / Problem: set.seed(1); (fdc <- fracdiff(X <- fracdiff.sim(n=100,d=0.25)$series))$covariance.dpq # d # d 1.901027e-12 This is *clearly* too small: At least now added warning Hessian --> covariance ====================== 3. 
For the hessian / covariance { src/fdhess.c } : Think about trying several step-sizes and use stable ("optimal"?) one. 6. Currently C/Fortran uses old Linpack SVD and its own inverse, and just returns warnings if things "fail" there. Possibly rather do these in R, and possibly use using chol() and chol2inv() rather than svd. 7. We should return \hat{\sigma_\epsilon} or \hat{\sigma^2_\epsilon} ---> is the new "wnv" (= white noise variance) ??? fracdiff/DESCRIPTION0000644000176200001440000000356714556665642013562 0ustar liggesusersPackage: fracdiff Version: 1.5-3 VersionNote: Released 1.5-0 on 2019-12-09, 1.5-1 on 2020-01-20, 1.5-2 on 2022-10-31 Date: 2024-02-01 Title: Fractionally Differenced ARIMA aka ARFIMA(P,d,q) Models Authors@R: c(person("Martin","Maechler", role=c("aut","cre"), email="maechler@stat.math.ethz.ch", comment = c(ORCID = "0000-0002-8685-9910")) , person("Chris", "Fraley", role=c("ctb","cph"), comment = "S original; Fortran code") , person("Friedrich", "Leisch", role = "ctb", comment = c("R port", ORCID = "0000-0001-7278-1983")) , person("Valderio", "Reisen", role="ctb", comment = "fdGPH() & fdSperio()") , person("Artur", "Lemonte", role="ctb", comment = "fdGPH() & fdSperio()") , person("Rob", "Hyndman", email="Rob.Hyndman@monash.edu", role="ctb", comment = c("residuals() & fitted()", ORCID = "0000-0002-2140-5352")) ) Description: Maximum likelihood estimation of the parameters of a fractionally differenced ARIMA(p,d,q) model (Haslett and Raftery, Appl.Statistics, 1989); including inference and basic methods. Some alternative algorithms to estimate "H". Imports: stats Suggests: longmemo, forecast, urca License: GPL (>= 2) URL: https://github.com/mmaechler/fracdiff BugReports: https://github.com/mmaechler/fracdiff/issues Encoding: UTF-8 NeedsCompilation: yes Packaged: 2024-02-01 08:45:38 UTC; maechler Author: Martin Maechler [aut, cre] (), Chris Fraley [ctb, cph] (S original; Fortran code), Friedrich Leisch [ctb] (R port, ), Valderio Reisen [ctb] (fdGPH() & fdSperio()), Artur Lemonte [ctb] (fdGPH() & fdSperio()), Rob Hyndman [ctb] (residuals() & fitted(), ) Maintainer: Martin Maechler Repository: CRAN Date/Publication: 2024-02-01 10:00:02 UTC fracdiff/build/0000755000176200001440000000000014556655062013133 5ustar liggesusersfracdiff/build/partial.rdb0000644000176200001440000000007514556655062015262 0ustar liggesusers‹‹àb```b`afb`b1…À€… H02°0piÖ¼ÄÜÔb C"Éð§%!ˆ7fracdiff/tests/0000755000176200001440000000000014556655062013176 5ustar liggesusersfracdiff/tests/ex.Rout.save0000644000176200001440000001670213611356442015417 0ustar liggesusers R version 3.6.2 Patched (2019-12-14 r77587) -- "Dark and Stormy Night" Copyright (C) 2019 The R Foundation for Statistical Computing Platform: x86_64-pc-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. 
> library(fracdiff) > > doExtras <- interactive() # for now > > .proctime00 <- proc.time() > > set.seed(107) > options(digits = 5) > > ## 1) > > x1 <- fracdiff.sim(5000, ar = .2, ma = -.4, d = .3, n.start=0, allow.0 = TRUE) > (fd1 <- fracdiff(x1$series, nar = 1, nma = 1, dtol = 1e-10)) Call: fracdiff(x = x1$series, nar = 1, nma = 1, dtol = 1e-10) Coefficients: d ar ma 0.26721 0.27824 -0.36951 sigma[eps] = 0.99138 a list with components: [1] "log.likelihood" "n" "msg" "d" [5] "ar" "ma" "covariance.dpq" "fnormMin" [9] "sigma" "stderror.dpq" "correlation.dpq" "h" [13] "d.tol" "M" "hessian.dpq" "length.w" [17] "residuals" "fitted" "call" > vcov(fd1) d ar1 ma1 d 0.00059661 -0.00080522 -0.00018971 ar1 -0.00080522 0.00161219 0.00066399 ma1 -0.00018971 0.00066399 0.00054849 > logLik(fd1) 'log Lik.' -7051.5 (df=4) > > fdCOVcomp <- + c("h", "covariance.dpq", "stderror.dpq", "correlation.dpq", "hessian.dpq") > fd1. <- fracdiff.var(x1$series, fd1, h = fd1$h / 2) > dns <- dimnames(fd1.$covariance.dpq) > > ## dput(sapply(fd1.[fdCOVcomp], signif, digits = 4)) # edited: > fd1.L <- list( + h = 3.7155e-05, + covariance.dpq = matrix(c(0.0005966, -0.0008052, -0.0001897, + -0.0008052, 0.001612, 0.000664, + -0.0001897, 0.000664, 0.0005485), + 3L, 3L, dimnames = dns), + stderror.dpq = c(0.02443, 0.04015, 0.02342), + correlation.dpq = matrix(c(1, -0.821, -0.3316, + -0.821, 1, 0.7061, + -0.3316, 0.7061, 1), 3), + hessian.dpq = matrix(c(-8252, -5875, 4258, + -5875, -5420, 4529, + 4258, 4529, -5834), + 3L, 3L, dimnames = dns)) > stopifnot(all.equal(fd1.[fdCOVcomp], fd1.L, tolerance = 2e-4)) > > fd1u <- fracdiff.var(x1$series, fd1, h = fd1$h * 8) > ## dput(sapply(fd1u[fdCOVcomp], signif, digits = 4)) : > fd1uL <- list( + h = 0.0005945, + covariance.dpq = matrix(c(0.0005966, -0.0008052, -0.0001897, + -0.0008052, 0.001612, 0.000664, + -0.0001897, 0.000664, 0.0005485), + 3L, 3L, dimnames = dns), + stderror.dpq = c(0.02443, 0.04015, 0.02342), + correlation.dpq = matrix(c(1, -0.821, -0.3316, + -0.821, 1, 0.7061, + -0.3316, 0.7061, 1), 3), + hessian.dpq = matrix(c(-8252, -5875, 4258, + -5875, -5420, 4529, + 4258, 4529, -5834), + 3L, 3L, dimnames = dns)) > stopifnot( all.equal(fd1u[fdCOVcomp], fd1uL, tolerance = 2e-4) ) > > ## 2) > > x2 <- fracdiff.sim( 2048, ar = .8, ma = -.4, d = .3, n.start=0, allow.0 = TRUE) > ## -> NA's and problems > fd2 <- fracdiff(x2$series, nar = length(x2$ar), nma = length(x2$ma)) > sfd2 <- summary(fd2) > ss2 <- lapply(sfd2[setdiff(names(sfd2), c("residuals", "fitted"))], + function(.) if(is.numeric(.)) signif(., digits = 7) else .) 
> ss2$coefficients <- ss2$coefficients[, -4] # drop p values > ss2S <- list( + log.likelihood = -2924.262, n = 2048, + msg = c(fracdf = "ok", fdcov = "ok"), + covariance.dpq = matrix(c(0.0004182859, -0.0007078449, -6.753008e-05, + -0.0007078449, 0.001712827, 0.0002692938, + -6.753008e-05, 0.0002692938, 0.0002572701), 3L, + dimnames = dns), + fnormMin = 45.62935, sigma = 1.008768, + correlation.dpq = matrix(c(1, -0.8362667, -0.2058572, + -0.8362667, 1, 0.405672, + -0.2058572, 0.405672, 1), 3, dimnames = dns), + h = 3.082264e-05, d.tol = 0.0001220703, M = 100, + hessian.dpq = matrix(c(-8557.83, -3810.55, 1742.32, + -3810.55, -2395.564, 1507.303, + 1742.32, 1507.303, -5007.377), 3, dimnames = dns), + length.w = 10254, + call = quote( + fracdiff(x = x2$series, nar = length(x2$ar), nma = length(x2$ma))), + coefficients = matrix(c(0.3374173, 0.7709664, -0.3810478, + 0.02045204, 0.04138631, 0.01603964, + 16.49798, 18.62854, -23.75663), + 3, 3, dimnames = dimnames(ss2$coefficients)), + df = 4, aic = 5856.524, symbolic.cor = FALSE) > ## > if(doExtras) + print(all.equal(ss2S, ss2, tol = 0)) # 0.0001273 (32b Win); TRUE (64b F30, gcc) > stopifnot(all.equal(ss2S, ss2, tol = 4e-4)) > > fd2. <- fracdiff.var(x2$series, fd2, h = fd2$h / 2) > sfd2. <- sapply(fd2.[fdCOVcomp], signif, digits = 4) > sfd2S <- ## dput(sapply(fd2.[fdCOVcomp], signif, digits = 5)) + list(h = 1.5411e-05, + covariance.dpq = matrix(c( 5.4726e-05,-9.261e-05, -8.8353e-06, + -9.261e-05, 0.0006717, 0.00016997, + -8.8353e-06, 0.00016997, 0.00024779), 3, dimnames=dns), + stderror.dpq = c(0.0073977, 0.025917, 0.015741), + correlation.dpq = matrix(c(1, -0.48303, -0.075871, + -0.48303, 1, 0.41661, + -0.075871, 0.41661, 1), 3), + hessian.dpq = matrix(c(-24440, -3810.6, 1742.3, + -3810.6, -2395.6, 1507.3, + 1742.3, 1507.3,-5007.4), 3, dimnames=dns)) > ## > if(doExtras) + print(all.equal(sfd2S, sfd2., tol = 1e-6, countEQ=TRUE)) # 8.7655e-5 > stopifnot(all.equal(sfd2S, sfd2., tol = 2e-4, countEQ=TRUE)) > > fd2u <- fracdiff.var(x2$series, fd2, h = fd2$h * 8)#-> warning, unable .. corr... 
Warning message: In fracdiff.var(x2$series, fd2, h = fd2$h * 8) : unable to compute correlation matrix > sd2u <- sapply(fd2u[fdCOVcomp], signif, digits = 4) > sd2uS <- list( ## dput(sapply(sd2u[fdCOVcomp], signif, digits = 5)) + h = 0.0002466, + covariance.dpq = matrix(c(-0.0003545, 6e-04, 5.724e-05, + 6e-04, -0.0005003, 5.816e-05, + 5.724e-05, 5.816e-05, 0.0002371), 3, dimnames=dns), + stderror.dpq = c(0, 0, 0.0154), + correlation.dpq = matrix(0, 3,3), + hessian.dpq = matrix(c(-3347, -3811, 1742, + -3811, -2396, 1507, + 1742, 1507,-5007), 3, dimnames=dns)) > ## > if(doExtras) + print(all.equal(sd2uS, sd2u, tol = 1e-8, countEQ=TRUE))# 0.000103 (32b Win); T.(64b F30) > stopifnot(all.equal(sd2uS, sd2u, tol = 4e-4, countEQ=TRUE)) > > > proc.time() user system elapsed 0.168 0.030 0.248 fracdiff/tests/Valderio-ex.R0000644000176200001440000000237713610314772015477 0ustar liggesuserslibrary(fracdiff) set.seed(1) ## examples(fdSperio) mem.long <- fracdiff.sim(1500, d = 0.3) spm <- fdSperio(mem.long$series) str(spm, digits=6) set.seed(8) ## examples(fdGPH) mem.l2 <- fracdiff.sim(1024, d = 0.25) fdGPH(mem.l2$series) diffserie0 <- fracdiff:::diffseries0 # the old slow for()-loop one stopifnot(exprs = { all.equal(diffserie0(1:20, d = 1), c(-9.5, rep(1, 20-1)), tol = 1e-15) all.equal(diffseries(1:20, d = 1), c(-9.5, rep(1, 20-1)), tol = 1e-13) # fft all.equal(diffserie0(-10:10, d = 0), -10:10, tol = 1e-15) all.equal(diffseries(-10:10, d = 0), -10:10, tol = 1e-13) all.equal(diffserie0(-10:10, d = 1/2), diffseries(-10:10, d = 1/2), tol = 1e-13) # see 4.3e-16 on 64b-Lnx }) set.seed(123) ## example(diffseries) mem.l3 <- fracdiff.sim(80, d = 0.3) mGPH <- fdGPH(mem.l3$series) r0 <- diffserie0(mem.l3$series, d = mGPH$d) r. <- diffseries(mem.l3$series, d = mGPH$d) print(r0, digits = 4) r <- all.equal(r0, r., tol = 0, countEQ = TRUE) # average rel.error, seen ~ 3.5e-16 if(is.character(r) && as.numeric(sub(".*: ", '', r)) > 4e-15) print(r) stopifnot(all.equal(r0, r., tol = 1e-13)) print(acf(r0)) # mtext("(shouldn't show structure - ideally)") cat("Time used: ", proc.time(),"\n") # for ``statistical reasons'' fracdiff/tests/Valderio-ex.Rout.save0000644000176200001440000000660113610314772017156 0ustar liggesusers R Under development (unstable) (2020-01-16 r77667) -- "Unsuffered Consequences" Copyright (C) 2020 The R Foundation for Statistical Computing Platform: x86_64-pc-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. 
> library(fracdiff) > > set.seed(1) > ## examples(fdSperio) > mem.long <- fracdiff.sim(1500, d = 0.3) > spm <- fdSperio(mem.long$series) > str(spm, digits=6) List of 3 $ d : num 0.189757 $ sd.as : num 0.048145 $ sd.reg: num 0.0319748 > > set.seed(8) > ## examples(fdGPH) > mem.l2 <- fracdiff.sim(1024, d = 0.25) > fdGPH(mem.l2$series) $d [1] 0.2357737 $sd.as [1] 0.1346387 $sd.reg [1] 0.1209971 > > diffserie0 <- fracdiff:::diffseries0 # the old slow for()-loop one > stopifnot(exprs = { + all.equal(diffserie0(1:20, d = 1), c(-9.5, rep(1, 20-1)), tol = 1e-15) + all.equal(diffseries(1:20, d = 1), c(-9.5, rep(1, 20-1)), tol = 1e-13) # fft + all.equal(diffserie0(-10:10, d = 0), -10:10, tol = 1e-15) + all.equal(diffseries(-10:10, d = 0), -10:10, tol = 1e-13) + all.equal(diffserie0(-10:10, d = 1/2), + diffseries(-10:10, d = 1/2), tol = 1e-13) # see 4.3e-16 on 64b-Lnx + }) > > set.seed(123) > ## example(diffseries) > mem.l3 <- fracdiff.sim(80, d = 0.3) > mGPH <- fdGPH(mem.l3$series) > r0 <- diffserie0(mem.l3$series, d = mGPH$d) > r. <- diffseries(mem.l3$series, d = mGPH$d) > print(r0, digits = 4) [1] -0.761863 -0.648357 1.156142 0.254446 0.205702 1.790090 1.061263 [8] -0.727289 -0.711688 -0.641284 0.969069 0.581372 0.643513 0.406330 [15] -0.308041 1.804514 1.121691 -1.376291 0.516945 -0.296126 -1.079355 [22] -0.539937 -1.294863 -1.253349 -1.235765 -2.351131 -0.223510 -0.376664 [29] -1.579870 0.473816 0.201878 -0.440213 0.602163 0.889618 1.028929 [36] 1.031588 0.971153 0.386139 -0.015032 -0.256695 -0.707172 -0.421774 [43] -1.471215 1.610035 1.508302 -0.597359 -0.429911 -0.591076 0.550370 [50] 0.002532 0.255906 0.042294 -0.017154 1.366683 0.206105 1.723059 [57] -0.901849 0.538978 0.339388 0.422938 0.616481 -0.192038 -0.254046 [64] -1.031030 -1.378216 -0.226398 0.183380 -0.022056 0.848218 2.268592 [71] 0.286371 -1.954859 0.551967 -0.638054 -0.878345 0.681981 -0.196609 [78] -1.287879 -0.267172 -0.392858 > r <- all.equal(r0, r., tol = 0, countEQ = TRUE) # average rel.error, seen ~ 3.5e-16 > if(is.character(r) && as.numeric(sub(".*: ", '', r)) > 4e-15) + print(r) > stopifnot(all.equal(r0, r., tol = 1e-13)) > print(acf(r0)) # Autocorrelations of series 'r0', by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 0.239 0.045 0.198 -0.051 -0.039 -0.024 -0.055 -0.093 -0.115 -0.084 11 12 13 14 15 16 17 18 19 -0.067 -0.197 -0.168 0.081 -0.068 -0.016 0.098 -0.061 -0.042 > mtext("(shouldn't show structure - ideally)") > > cat("Time used: ", proc.time(),"\n") # for ``statistical reasons'' Time used: 0.153 0.026 0.233 0.002 0.004 > > proc.time() user system elapsed 0.155 0.030 0.233 fracdiff/tests/ex-Vinod.R0000644000176200001440000000376313345443771015020 0ustar liggesusers## From: VINOD@FORDHAM.EDU ## To: maechler@stat.math.ethz.ch ## X-Spam-Level: * ## Subject: fracdiff in R does not work for gnp series "insufficient workspace" ## Date: Sun, 15 May 2005 13:24:46 -0400 ## Dear Martin Maechler ## I teach econometrics at Fordham. For some reason the fracdiff ## does not work for the basic gnp series. 
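## Regression test for the "insufficient workspace" failure quoted above
## (reported May 2005); the ChangeLog entry of 2005-06-17 notes
## "tests/ex-Vinod.R: add the example that failed (memory error)".
## With current versions, the fracdiff() call on 'realgnp2' below is
## expected to run, and its summary() is printed at the end.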
library(fracdiff) if(FALSE) { ##MM library(urca) ##MM data(npext) data(npext, package = "urca") # Nelson Plosser data ## "bad practice": attach(npext) realgnp2 <- npext[50:129, "realgnp"] # to exclude missing data } else { ## keep test independent: realgnp2 <- c(4.7604631, 4.7883247, 4.8138091, 4.8690717, 4.8782461, 4.8331023, 4.8243057, 4.9000761, 4.9067552, 5.0225639, 4.9863426, 4.9416424, 4.8504665, 4.9972123, 5.1113852, 5.1089712, 5.1896179, 5.2470241, 5.2459709, 5.2517497, 5.3161573, 5.2122147, 5.1316723, 4.9712012, 4.9522997, 5.0388988, 5.1328529, 5.2626902, 5.3141907, 5.2621719, 5.3442463, 5.4258307, 5.5748121, 5.6964221, 5.8203796, 5.8897086, 5.872681, 5.7449244, 5.7362497, 5.7798172, 5.7810521, 5.8729625, 5.9490788, 5.9791389, 6.0229632, 6.0088132, 6.0822189, 6.1005431, 6.1147878, 6.1032295, 6.1652077, 6.1897005, 6.2089924, 6.2724996, 6.3117348, 6.3649229, 6.4261648, 6.4893569, 6.5150089, 6.5604647, 6.5857578, 6.5792512, 6.6067, 6.6552832, 6.705961, 6.700553, 6.687906, 6.7356178, 6.781224, 6.8328012, 6.8572808, 6.8556192, 6.8747935, 6.8489768, 6.8840768, 6.9496707, 6.9826227, 7.0096668, 7.0455416, 7.0888837) } fr1 <- fracdiff(realgnp2, nar = 0, nma = 0, M = 100) ## COMPUTER SAYS ## Error in switch(result$info, stop("insufficient workspace"), stop("error in ## gamma function"), : ## insufficient workspace fr1 ## ... ## Hrishikesh D. Vinod ## Professor of Economics, Fordham University ## E-Mail: Vinod@fordham.edu ## Web page: http://www.fordham.edu/economics/vinod summary(fr1) fracdiff/tests/sim-ex.Rout.save0000644000176200001440000002625013345443771016212 0ustar liggesusers R version 2.4.0 alpha (2006-09-07 r39185) Copyright (C) 2006 The R Foundation for Statistical Computing ISBN 3-900051-07-0 R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. 
> library(fracdiff) > if(FALSE) # manual testing + library(fracdiff, lib="/u/maechler/R/Pkgs/fracdiff.Rcheck") > > .ptime <- proc.time() > ## Test if the default 'n.start' is ok, i.e., if the > ## "burn in" period is long enough : > > n <- 512 > > set.seed(101) ; ok <- TRUE > for(i in 1:2000) { + r <- fracdiff.sim(n, ar = -0.9, ma = NULL, d = 0.3)$series + if(max(abs(r)) > 10) { + cat("OOps : indices ",str(ibig <- which(big <- abs(r) > 10)), + "\n are > 10\n") + if(any(ibig < 200) && (length(ibig) > 5 || abs(r)[big] > 20)) { + cat("Some have index < 200 --> BREAK\n") + ok <- FALSE + break + } + } + if(i %% 100 == 0) { + cat(i,": ACF = \n") + print(acf(r, plot=FALSE)) + } + } 100 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.810 0.793 -0.668 0.635 -0.542 0.516 -0.446 0.445 -0.395 0.401 11 12 13 14 15 16 17 18 19 20 21 -0.361 0.374 -0.332 0.329 -0.304 0.319 -0.312 0.319 -0.289 0.314 -0.300 22 23 24 25 26 27 0.302 -0.288 0.282 -0.256 0.235 -0.218 int 443 OOps : indices are > 10 200 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.792 0.775 -0.613 0.560 -0.421 0.366 -0.267 0.228 -0.148 0.124 11 12 13 14 15 16 17 18 19 20 21 -0.068 0.047 0.004 0.000 0.040 -0.027 0.041 -0.013 0.036 0.001 -0.014 22 23 24 25 26 27 0.058 -0.056 0.079 -0.068 0.085 -0.080 300 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.842 0.817 -0.700 0.666 -0.572 0.546 -0.456 0.445 -0.385 0.386 11 12 13 14 15 16 17 18 19 20 21 -0.336 0.342 -0.289 0.303 -0.261 0.266 -0.226 0.227 -0.185 0.199 -0.165 22 23 24 25 26 27 0.172 -0.136 0.135 -0.103 0.121 -0.104 400 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.864 0.832 -0.739 0.698 -0.624 0.567 -0.498 0.443 -0.372 0.341 11 12 13 14 15 16 17 18 19 20 21 -0.286 0.254 -0.212 0.157 -0.115 0.066 -0.039 0.013 0.011 -0.013 0.020 22 23 24 25 26 27 -0.035 0.029 -0.032 0.026 -0.053 0.051 500 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.766 0.729 -0.578 0.549 -0.427 0.396 -0.289 0.277 -0.196 0.189 11 12 13 14 15 16 17 18 19 20 21 -0.123 0.131 -0.090 0.116 -0.084 0.103 -0.042 0.060 -0.036 0.055 -0.043 22 23 24 25 26 27 0.041 -0.032 0.033 -0.056 0.063 -0.092 600 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.822 0.765 -0.634 0.607 -0.513 0.478 -0.384 0.357 -0.295 0.283 11 12 13 14 15 16 17 18 19 20 21 -0.220 0.194 -0.153 0.136 -0.089 0.072 -0.044 0.028 0.000 -0.006 0.039 22 23 24 25 26 27 -0.053 0.082 -0.098 0.110 -0.097 0.086 700 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.855 0.824 -0.724 0.698 -0.627 0.610 -0.554 0.516 -0.466 0.441 11 12 13 14 15 16 17 18 19 20 21 -0.391 0.359 -0.302 0.275 -0.234 0.233 -0.204 0.205 -0.197 0.198 -0.187 22 23 24 25 26 27 0.186 -0.181 0.169 -0.165 0.172 -0.180 800 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.809 0.748 -0.649 0.598 -0.546 0.510 -0.450 0.414 -0.373 0.337 11 12 13 14 15 16 17 18 19 20 21 -0.298 0.274 -0.249 0.238 -0.201 0.172 -0.144 0.138 -0.113 0.090 -0.061 22 23 24 25 26 27 0.043 -0.045 0.040 -0.051 0.052 -0.035 900 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.834 0.824 -0.696 0.679 -0.572 0.562 -0.473 0.460 -0.375 0.371 11 12 13 14 15 16 17 18 19 20 21 -0.307 0.318 -0.261 0.262 -0.202 0.200 -0.136 0.139 -0.086 0.097 -0.034 22 23 24 25 26 27 0.036 0.012 0.002 0.037 -0.011 0.037 1000 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 
-0.844 0.813 -0.699 0.668 -0.594 0.565 -0.502 0.477 -0.426 0.386 11 12 13 14 15 16 17 18 19 20 21 -0.343 0.319 -0.299 0.270 -0.233 0.193 -0.153 0.127 -0.104 0.093 -0.083 22 23 24 25 26 27 0.086 -0.083 0.059 -0.048 0.036 -0.035 1100 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.713 0.713 -0.547 0.555 -0.422 0.443 -0.358 0.368 -0.297 0.289 11 12 13 14 15 16 17 18 19 20 21 -0.241 0.238 -0.152 0.145 -0.092 0.124 -0.075 0.099 -0.056 0.078 -0.027 22 23 24 25 26 27 0.022 -0.013 0.045 -0.024 0.042 -0.059 1200 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.785 0.726 -0.594 0.522 -0.445 0.402 -0.366 0.349 -0.313 0.297 11 12 13 14 15 16 17 18 19 20 21 -0.231 0.204 -0.124 0.107 -0.052 0.002 0.064 -0.088 0.120 -0.106 0.128 22 23 24 25 26 27 -0.150 0.166 -0.183 0.217 -0.231 0.256 1300 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.830 0.830 -0.705 0.700 -0.593 0.575 -0.487 0.468 -0.383 0.361 11 12 13 14 15 16 17 18 19 20 21 -0.287 0.289 -0.229 0.235 -0.173 0.176 -0.147 0.141 -0.094 0.077 -0.034 22 23 24 25 26 27 0.025 0.021 -0.013 0.045 -0.030 0.057 1400 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.841 0.800 -0.702 0.662 -0.587 0.546 -0.467 0.428 -0.372 0.360 11 12 13 14 15 16 17 18 19 20 21 -0.315 0.293 -0.251 0.254 -0.234 0.233 -0.220 0.222 -0.220 0.219 -0.204 22 23 24 25 26 27 0.194 -0.167 0.181 -0.157 0.162 -0.151 1500 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.807 0.811 -0.670 0.677 -0.562 0.551 -0.442 0.421 -0.338 0.321 11 12 13 14 15 16 17 18 19 20 21 -0.258 0.259 -0.224 0.225 -0.197 0.189 -0.178 0.174 -0.179 0.180 -0.191 22 23 24 25 26 27 0.179 -0.188 0.169 -0.166 0.142 -0.121 1600 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.825 0.800 -0.662 0.629 -0.527 0.487 -0.409 0.378 -0.303 0.275 11 12 13 14 15 16 17 18 19 20 21 -0.195 0.181 -0.140 0.154 -0.129 0.141 -0.110 0.108 -0.078 0.070 -0.047 22 23 24 25 26 27 0.046 -0.001 0.003 0.061 -0.042 0.084 1700 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.830 0.798 -0.675 0.624 -0.527 0.487 -0.416 0.391 -0.325 0.297 11 12 13 14 15 16 17 18 19 20 21 -0.252 0.210 -0.161 0.130 -0.108 0.093 -0.076 0.081 -0.068 0.087 -0.054 22 23 24 25 26 27 0.041 -0.025 0.017 0.007 -0.001 0.028 1800 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.704 0.679 -0.530 0.486 -0.400 0.381 -0.307 0.314 -0.271 0.273 11 12 13 14 15 16 17 18 19 20 21 -0.257 0.241 -0.230 0.223 -0.166 0.185 -0.146 0.151 -0.121 0.136 -0.120 22 23 24 25 26 27 0.154 -0.118 0.142 -0.107 0.095 -0.074 1900 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.856 0.816 -0.685 0.630 -0.535 0.493 -0.444 0.427 -0.400 0.384 11 12 13 14 15 16 17 18 19 20 21 -0.361 0.348 -0.326 0.304 -0.280 0.252 -0.203 0.172 -0.123 0.103 -0.061 22 23 24 25 26 27 0.073 -0.040 0.042 -0.005 -0.016 0.051 2000 : ACF = Autocorrelations of series ‘r’, by lag 0 1 2 3 4 5 6 7 8 9 10 1.000 -0.824 0.804 -0.640 0.620 -0.508 0.495 -0.390 0.380 -0.280 0.281 11 12 13 14 15 16 17 18 19 20 21 -0.208 0.223 -0.160 0.173 -0.133 0.160 -0.140 0.172 -0.137 0.162 -0.112 22 23 24 25 26 27 0.140 -0.111 0.170 -0.159 0.217 -0.196 > if(!ok) { + cat("i=",i," gave series \n") + print(head(r)) ; cat(".......\n") + plot(as.ts(r)) ## clearly did show problem {when we had bug} + } > > ## Try to find an example more quickly with setting `one seed': > .AR <- c(-.75, -.9) > .MA <- 
c(0.2, 0.1) > ok <- TRUE > set.seed(1) > r0 <- fracdiff.sim(100, d = 0.3) > r1 <- fracdiff.sim(100, ar = .AR, d = 0.25) > r2 <- fracdiff.sim(100, ar = .AR, ma = .MA, d = 0.2) > for(i in 1:1000) { + set.seed(1)# yes; identical ones + r0i <- fracdiff.sim(100, d = 0.3) + r1i <- fracdiff.sim(100, ar = .AR, d = 0.25) + r2i <- fracdiff.sim(100, ar = .AR, ma = .MA, d = 0.2) + stopifnot(identical(r0, r0i), + identical(r1, r1i), + identical(r2, r2i)) + } > > ## Last Line: > cat('Time elapsed: ', proc.time() - .ptime,'\n') Time elapsed: 7.181 0.012 7.34 0 0 > fracdiff/tests/sim-ex.R0000644000176200001440000000275513345443771014531 0ustar liggesuserslibrary(fracdiff) if(FALSE) # manual testing library(fracdiff, lib="/u/maechler/R/Pkgs/fracdiff.Rcheck") .ptime <- proc.time() ## Test if the default 'n.start' is ok, i.e., if the ## "burn in" period is long enough : n <- 512 set.seed(101) ; ok <- TRUE for(i in 1:2000) { r <- fracdiff.sim(n, ar = -0.9, ma = NULL, d = 0.3)$series if(max(abs(r)) > 10) { cat("OOps : indices ",str(ibig <- which(big <- abs(r) > 10)), "\n are > 10\n") if(any(ibig < 200) && (length(ibig) > 5 || abs(r)[big] > 20)) { cat("Some have index < 200 --> BREAK\n") ok <- FALSE break } } if(i %% 100 == 0) { cat(i,": ACF = \n") print(acf(r, plot=FALSE)) } } if(!ok) { cat("i=",i," gave series \n") print(head(r)) ; cat(".......\n") plot(as.ts(r)) ## clearly did show problem {when we had bug} } ## Try to find an example more quickly with setting `one seed': .AR <- c(-.75, -.9) .MA <- c(0.2, 0.1) ok <- TRUE set.seed(1) r0 <- fracdiff.sim(100, d = 0.3) r1 <- fracdiff.sim(100, ar = .AR, d = 0.25) r2 <- fracdiff.sim(100, ar = .AR, ma = .MA, d = 0.2) for(i in 1:1000) { set.seed(1)# yes; identical ones r0i <- fracdiff.sim(100, d = 0.3) r1i <- fracdiff.sim(100, ar = .AR, d = 0.25) r2i <- fracdiff.sim(100, ar = .AR, ma = .MA, d = 0.2) stopifnot(identical(r0, r0i), identical(r1, r1i), identical(r2, r2i)) } ## Last Line: cat('Time elapsed: ', proc.time() - .ptime,'\n') fracdiff/tests/ex.R0000644000176200001440000001325113611356442013726 0ustar liggesuserslibrary(fracdiff) doExtras <- interactive() # for now .proctime00 <- proc.time() set.seed(107) options(digits = 5) ## 1) x1 <- fracdiff.sim(5000, ar = .2, ma = -.4, d = .3, n.start=0, allow.0 = TRUE) (fd1 <- fracdiff(x1$series, nar = 1, nma = 1, dtol = 1e-10)) vcov(fd1) logLik(fd1) fdCOVcomp <- c("h", "covariance.dpq", "stderror.dpq", "correlation.dpq", "hessian.dpq") fd1. 
<- fracdiff.var(x1$series, fd1, h = fd1$h / 2) dns <- dimnames(fd1.$covariance.dpq) ## dput(sapply(fd1.[fdCOVcomp], signif, digits = 4)) # edited: fd1.L <- list( h = 3.7155e-05, covariance.dpq = matrix(c(0.0005966, -0.0008052, -0.0001897, -0.0008052, 0.001612, 0.000664, -0.0001897, 0.000664, 0.0005485), 3L, 3L, dimnames = dns), stderror.dpq = c(0.02443, 0.04015, 0.02342), correlation.dpq = matrix(c(1, -0.821, -0.3316, -0.821, 1, 0.7061, -0.3316, 0.7061, 1), 3), hessian.dpq = matrix(c(-8252, -5875, 4258, -5875, -5420, 4529, 4258, 4529, -5834), 3L, 3L, dimnames = dns)) stopifnot(all.equal(fd1.[fdCOVcomp], fd1.L, tolerance = 2e-4)) fd1u <- fracdiff.var(x1$series, fd1, h = fd1$h * 8) ## dput(sapply(fd1u[fdCOVcomp], signif, digits = 4)) : fd1uL <- list( h = 0.0005945, covariance.dpq = matrix(c(0.0005966, -0.0008052, -0.0001897, -0.0008052, 0.001612, 0.000664, -0.0001897, 0.000664, 0.0005485), 3L, 3L, dimnames = dns), stderror.dpq = c(0.02443, 0.04015, 0.02342), correlation.dpq = matrix(c(1, -0.821, -0.3316, -0.821, 1, 0.7061, -0.3316, 0.7061, 1), 3), hessian.dpq = matrix(c(-8252, -5875, 4258, -5875, -5420, 4529, 4258, 4529, -5834), 3L, 3L, dimnames = dns)) stopifnot( all.equal(fd1u[fdCOVcomp], fd1uL, tolerance = 2e-4) ) ## 2) x2 <- fracdiff.sim( 2048, ar = .8, ma = -.4, d = .3, n.start=0, allow.0 = TRUE) ## -> NA's and problems fd2 <- fracdiff(x2$series, nar = length(x2$ar), nma = length(x2$ma)) sfd2 <- summary(fd2) ss2 <- lapply(sfd2[setdiff(names(sfd2), c("residuals", "fitted"))], function(.) if(is.numeric(.)) signif(., digits = 7) else .) ss2$coefficients <- ss2$coefficients[, -4] # drop p values ss2S <- list( log.likelihood = -2924.262, n = 2048, msg = c(fracdf = "ok", fdcov = "ok"), covariance.dpq = matrix(c(0.0004182859, -0.0007078449, -6.753008e-05, -0.0007078449, 0.001712827, 0.0002692938, -6.753008e-05, 0.0002692938, 0.0002572701), 3L, dimnames = dns), fnormMin = 45.62935, sigma = 1.008768, correlation.dpq = matrix(c(1, -0.8362667, -0.2058572, -0.8362667, 1, 0.405672, -0.2058572, 0.405672, 1), 3, dimnames = dns), h = 3.082264e-05, d.tol = 0.0001220703, M = 100, hessian.dpq = matrix(c(-8557.83, -3810.55, 1742.32, -3810.55, -2395.564, 1507.303, 1742.32, 1507.303, -5007.377), 3, dimnames = dns), length.w = 10254, call = quote( fracdiff(x = x2$series, nar = length(x2$ar), nma = length(x2$ma))), coefficients = matrix(c(0.3374173, 0.7709664, -0.3810478, 0.02045204, 0.04138631, 0.01603964, 16.49798, 18.62854, -23.75663), 3, 3, dimnames = dimnames(ss2$coefficients)), df = 4, aic = 5856.524, symbolic.cor = FALSE) ## if(doExtras) print(all.equal(ss2S, ss2, tol = 0)) # 0.0001273 (32b Win); TRUE (64b F30, gcc) stopifnot(all.equal(ss2S, ss2, tol = 4e-4)) fd2. <- fracdiff.var(x2$series, fd2, h = fd2$h / 2) sfd2. <- sapply(fd2.[fdCOVcomp], signif, digits = 4) sfd2S <- ## dput(sapply(fd2.[fdCOVcomp], signif, digits = 5)) list(h = 1.5411e-05, covariance.dpq = matrix(c( 5.4726e-05,-9.261e-05, -8.8353e-06, -9.261e-05, 0.0006717, 0.00016997, -8.8353e-06, 0.00016997, 0.00024779), 3, dimnames=dns), stderror.dpq = c(0.0073977, 0.025917, 0.015741), correlation.dpq = matrix(c(1, -0.48303, -0.075871, -0.48303, 1, 0.41661, -0.075871, 0.41661, 1), 3), hessian.dpq = matrix(c(-24440, -3810.6, 1742.3, -3810.6, -2395.6, 1507.3, 1742.3, 1507.3,-5007.4), 3, dimnames=dns)) ## if(doExtras) print(all.equal(sfd2S, sfd2., tol = 1e-6, countEQ=TRUE)) # 8.7655e-5 stopifnot(all.equal(sfd2S, sfd2., tol = 2e-4, countEQ=TRUE)) fd2u <- fracdiff.var(x2$series, fd2, h = fd2$h * 8)#-> warning, unable .. corr... 
sd2u <- sapply(fd2u[fdCOVcomp], signif, digits = 4) sd2uS <- list( ## dput(sapply(sd2u[fdCOVcomp], signif, digits = 5)) h = 0.0002466, covariance.dpq = matrix(c(-0.0003545, 6e-04, 5.724e-05, 6e-04, -0.0005003, 5.816e-05, 5.724e-05, 5.816e-05, 0.0002371), 3, dimnames=dns), stderror.dpq = c(0, 0, 0.0154), correlation.dpq = matrix(0, 3,3), hessian.dpq = matrix(c(-3347, -3811, 1742, -3811, -2396, 1507, 1742, 1507,-5007), 3, dimnames=dns)) ## if(doExtras) print(all.equal(sd2uS, sd2u, tol = 1e-8, countEQ=TRUE))# 0.000103 (32b Win); T.(64b F30) stopifnot(all.equal(sd2uS, sd2u, tol = 4e-4, countEQ=TRUE)) fracdiff/tests/sim-2.R0000644000176200001440000000160513345443771014247 0ustar liggesusersrequire(fracdiff) ## confirm that we guessed right: ## fracdiff.sim(....., d = 0, backComp = FALSE) <===> arima.sim(....) AR <- c(0.7, -0.1, 0.2) MA <- c(-0.5, 0.4, 0.4) n <- 512 ; sd <- 0.1 n.st <- 10 set.seed(1) for(i in 1:200) { cat(sprintf("%3d ", i)) p <- sample(0:length(AR), 1) q <- sample(0:length(MA), 1) .ar <- AR[seq_len(p)] .ma <- MA[seq_len(q)] n.st <- p+q+ rpois(1, lambda = 2) sid <- round(runif(1)* 1000) set.seed(sid) y1 <- arima.sim(list(ar = .ar, ma = .ma), n = n, n.start = n.st, sd = sd) set.seed(sid) y2 <- fracdiff.sim(n = n, ar = .ar, ma = - .ma, d = 0, n.start = n.st, sd = sd, backComp = FALSE) if(!isTRUE(aeq <- all.equal(c(y1), y2$series, tol = 1e-15))) cat("y1 and y2 are not equal: ", aeq,"\n") if(!(i %% 10)) cat("\n") } ## Last Line: cat('Time elapsed: ', proc.time(),'\n') fracdiff/src/0000755000176200001440000000000014323537676012625 5ustar liggesusersfracdiff/src/maux_comm.h0000644000176200001440000000015113345443770014752 0ustar liggesusersFD_EXTERNAL struct { double epsp25, epspt3, epspt5, epsp75, bignum; } mauxfd_; #define mauxfd_1 mauxfd_ fracdiff/src/fdgam.c0000644000176200001440000004701413345443770014047 0ustar liggesusers/* fdgam.f -- translated by f2c (version 20031025). * * and produced by * * and manually pretty edited by Martin Maechler, 2004-10-01 */ #include #ifndef max # define max(a, b) ((a) < (b) ? (b) : (a)) #endif #ifndef min # define min(a, b) ((a) > (b) ? (b) : (a)) #endif #ifndef abs # define abs(x) ((x) >= 0 ? 
(x) : -(x)) #endif /* EXPORTS */ double dgamma_(double *x); double dgamr_ (double *x); static int dlgams_(double *, double *, double *); static double dlngam_(double *); static void d9gaml_(double *xmin, double *xmax); static double d9lgmc_(double *); static double dcsevl_(double *x, double *a, int *n); static int initds_(double *, int *, float *); /* Common Block Declarations --- included as "extern" */ #define FD_EXTERNAL extern #include "mach_comm.h" #include "gamm_comm.h" /* Table of constant values */ static int c__42 = 42; static int c__15 = 15; double dgamma_(double *x) { /* Initialized data */ static double gamcs[42] = { .008571195590989331421920062399942, .004415381324841006757191315771652, .05685043681599363378632664588789, -.004219835396418560501012500186624, .001326808181212460220584006796352, -1.893024529798880432523947023886e-4, 3.606925327441245256578082217225e-5, -6.056761904460864218485548290365e-6, 1.055829546302283344731823509093e-6, -1.811967365542384048291855891166e-7, 3.117724964715322277790254593169e-8, -5.354219639019687140874081024347e-9, 9.19327551985958894688778682594e-10, -1.577941280288339761767423273953e-10, 2.707980622934954543266540433089e-11, -4.646818653825730144081661058933e-12, 7.973350192007419656460767175359e-13, -1.368078209830916025799499172309e-13, 2.347319486563800657233471771688e-14, -4.027432614949066932766570534699e-15, 6.910051747372100912138336975257e-16, -1.185584500221992907052387126192e-16, 2.034148542496373955201026051932e-17, -3.490054341717405849274012949108e-18, 5.987993856485305567135051066026e-19, -1.027378057872228074490069778431e-19, 1.762702816060529824942759660748e-20, -3.024320653735306260958772112042e-21, 5.188914660218397839717833550506e-22, -8.902770842456576692449251601066e-23, 1.527474068493342602274596891306e-23, -2.620731256187362900257328332799e-24, 4.496464047830538670331046570666e-25, -7.714712731336877911703901525333e-26, 1.323635453126044036486572714666e-26, -2.270999412942928816702313813333e-27, 3.896418998003991449320816639999e-28, -6.685198115125953327792127999999e-29, 1.146998663140024384347613866666e-29, -1.967938586345134677295103999999e-30, 3.376448816585338090334890666666e-31, -5.793070335782135784625493333333e-32 }; static double pi = 3.1415926535897932384626433832795; static double sq2pil = .91893853320467274178032973640562; static int ngam = 0; static double xmin = 0.; static double xmax = 0.; static double xsml = 0.; static double dxrel = 0.; /* System generated locals */ int i__1; float r__1; double ret_val, d__1, d__2; /* Local variables */ static int i__, n; static double y, temp, sinpiy; /* jan 1984 edition. w. fullerton, c3, los alamos scientific lab. */ /* double precision x, gamcs(42), dxrel, pi, sinpiy, sq2pil, xmax, */ /* 1 xmin, y, d9lgmc, dcsevl, d1mach, dexp, dint, dlog, */ /* 2 dsin, dsqrt */ /* external d1mach, d9lgmc, dcsevl, dexp, dint, dlog, dsin, dsqrt, */ /* 1 initds */ /* series for gam on the interval 0. 
to 1.00000e+00 */ /* with weighted error 5.79e-32 */ /* log weighted error 31.24 */ /* significant figures required 30.00 */ /* decimal places required 32.05 */ /* sq2pil is 0.5*alog(2*pi) = alog(sqrt(2*pi)) */ ret_val = -999.; if (ngam == 0) { /* ngam = initds (gamcs, 42, 0.1*sngl( d1mach) ) */ r__1 = (float) machfd_.epsmin * .1f; ngam = initds_(gamcs, &c__42, &r__1); d9gaml_(&xmin, &xmax); if (gammfd_.igamma != 0) { return ret_val; } /* xsml = dexp (dmax1 (dlog(d1mach(1)), -dlog(d1mach(2)))+0.01d0) */ /* Computing MAX */ d__1 = log(machfd_.fltmin), d__2 = -log(machfd_.fltmax); xsml = exp(max(d__1,d__2) + .01); /* dxrel = dsqrt (d1mach(4)) */ dxrel = sqrt(machfd_.epsmax); } /* y = fabs(x) */ y = abs(*x); if (y > 10.) { goto L50; } /* compute gamma(x) for -xbnd .le. x .le. xbnd. reduce interval and find */ /* gamma(1+y) for 0.0 .le. y .lt. 1.0 first of all. */ n = (int) (*x); if (*x < 0.) { --n; } y = *x - (double) ((float) n); --n; /* dgamma = 0.9375d0 + dcsevl (2.d0*y-1.d0, gamcs, ngam) */ d__1 = y * 2. - 1.; temp = dcsevl_(&d__1, gamcs, &ngam); if (gammfd_.igamma != 0) { return ret_val; } ret_val = temp + .9375; if (n == 0) { return ret_val; } if (n > 0) { goto L30; } /* compute gamma(x) for x .lt. 1.0 */ n = -n; /* if (x.eq.0.d0) call seteru (14hdgamma x is 0, 14, 4, 2) */ /* if (x.lt.0d0 .and. x+dble(float(n-2)).eq.0.d0) call seteru ( */ /* 1 31hdgamma x is a negative integer, 31, 4, 2) */ /* if (x.lt.(-0.5d0) .and. fabs((x-dint(x-0.5d0))/x).lt.dxrel) call */ /* 1 seteru (68hdgamma answer lt half precision because x too near n */ /* 2egative integer, 68, 1, 1) */ /* if (y.lt.xsml) call seteru ( */ /* 1 54hdgamma x is so close to 0.0 that the result overflows, */ /* 2 54, 5, 2) */ if (*x == 0.) { /* write(6,*) 'dgamma : x is 0' */ gammfd_.igamma = 11; return ret_val; } if (*x < 0. && *x + (double) ((float) (n - 2)) == 0.) { /* write( 6, *) 'dgamma : x is a negative integer' */ gammfd_.igamma = 12; return ret_val; } if (*x < -.5 && (d__1 = (*x - (double) ((int) (*x - .5))) / *x, abs(d__1)) < dxrel) { gammfd_.jgamma = 11; } /* 1 write(6,*) 'dgamma : answer lt half precision because */ /* 2 x too near a negative integer' */ if (y < xsml) { /* write(6,*) 'dgamma :, */ /* 1 x is so close to 0.0 that the result overflows' */ gammfd_.igamma = 13; return ret_val; } i__1 = n; for (i__ = 1; i__ <= i__1; ++i__) { ret_val /= *x + (double) ((float) (i__ - 1)); /* L20: */ } return ret_val; /* gamma(x) for x .ge. 2.0 and x .le. 10.0 */ L30: i__1 = n; for (i__ = 1; i__ <= i__1; ++i__) { ret_val = (y + (double) ((float) i__)) * ret_val; /* L40: */ } return ret_val; /* gamma(x) for fabs(x) .gt. 10.0. recall y = fabs(x). */ L50: if (*x > xmax) { /* write(6,*) 'dgamma : x so big gamma overflows' */ gammfd_.igamma = 14; return ret_val; } ret_val = 0.; if (*x < xmin) { /* write(6,*) 'dgamma : x so small gamma underflows' */ gammfd_.jgamma = 12; return ret_val; } /* dgamma = dexp ((y-0.5d0)*dlog(y) - y + sq2pil + d9lgmc(y) ) */ temp = d9lgmc_(&y); if (gammfd_.igamma != 0) { return ret_val; } ret_val = exp((y - .5) * log(y) - y + sq2pil + temp); if (*x > 0.) { return ret_val; } /* if (fabs((x-dint(x-0.5d0))/x).lt.dxrel) call seteru ( */ /* 1 61hdgamma answer lt half precision, x too near negative integer */ /* 2 , 61, 1, 1) */ if ((d__1 = (*x - (double) ((int) (*x - .5))) / *x, abs(d__1)) < dxrel) { gammfd_.jgamma = 11; } /* sinpiy = dsin (pi*y) */ sinpiy = sin(pi * y); if (sinpiy == 0.) 
{ /* write(6,*) 'dgamma : x is a negative integer' */ gammfd_.igamma = 12; return ret_val; } ret_val = -pi / (y * sinpiy * ret_val); return ret_val; } /* dgamma_ */ double dgamr_(double *x) { /* System generated locals */ double ret_val; /* Local variables */ static double temp, alngx, sgngx; /* july 1977 edition. w. fullerton, c3, los alamos scientific lab. */ /* this routine, not dgamma(x), should be the fundamental one. */ /* ============ ============= */ /* Calls dgamma(), only if |x| < 10; otherwise dlgams() -> dlngam() -> d9lgmc() */ /* external dexp, dgamma, dint, d1mach */ ret_val = 0.; if (*x <= 0. && (double) ((int) (*x)) == *x) { return ret_val; } if (abs(*x) <= 10.) { /* dgamr = 1.0d0/dgamma(x) */ temp = dgamma_(x); if (gammfd_.igamma != 0) { ret_val = machfd_.fltmax; return ret_val; } ret_val = 1. / temp; } else { /* x > 10. : */ dlgams_(x, &alngx, &sgngx); if (gammfd_.igamma != 0) { return ret_val; } ret_val = sgngx * exp(-alngx); } return ret_val; } /* dgamr_ */ /* Subroutine */ int dlgams_(double *x, double *dlgam, double *sgngam) { /* july 1977 edition. w. fullerton, c3, los alamos scientific lab. */ /* evaluate log abs (gamma(x)) and return the sign of gamma(x) in sgngam. */ /* sgngam is either +1.0 or -1.0. */ int intx; *dlgam = dlngam_(x); if (gammfd_.igamma != 0) { return 0; } *sgngam = 1.; if (*x > 0.) { return 0; } intx = (int) (fmod(-((double) ((int) (*x))), 2.) + .1); if (intx == 0) { *sgngam = -1.; } return 0; } /* dlgams_ */ int initds_(double *dos, int *nos, float *eta) { /* System generated locals */ int ret_val, i__1; float r__1; /* Local variables */ static int i__, ii; static double err; /* june 1977 edition. w. fullerton, c3, los alamos scientific lab. */ /* initialize the double precision orthogonal series dos so that initds */ /* is the number of terms needed to insure the error is no larger than */ /* eta. ordinarily eta will be chosen to be one-tenth machine precision. */ /* input arguments -- */ /* dos dble prec array of nos coefficients in an orthogonal series. */ /* nos number of coefficients in dos. */ /* eta requested accuracy of series. */ /* if (nos.lt.1) call seteru ( */ /* 1 35hinitds number of coefficients lt 1, 35, 2, 2) */ /* Parameter adjustments */ --dos; /* Function Body */ if (*nos < 1) { gammfd_.jgamma = 31; } i__ = -1; err = 0.f; i__1 = *nos; for (ii = 1; ii <= i__1; ++ii) { i__ = *nos + 1 - ii; err += (r__1 = (float) dos[i__], fabs(r__1)); if (err > *eta) { goto L20; } /* L10: */ } /* 20 if (i.eq.nos) call seteru (28hinitds eta may be too small, 28, */ /* 1 1, 2) */ L20: /* if (i.eq.nos) write(6,*) 'initds : eta may be too small' */ if (i__ == *nos) { gammfd_.jgamma = 32; } ret_val = i__; return ret_val; } /* initds_ */ /* Subroutine */ static void d9gaml_(double *xmin, double *xmax) { /* System generated locals */ double d__1, d__2; /* Local variables */ static int i__; static double xln, xold, alnbig, alnsml; /* june 1977 edition. w. fullerton, c3, los alamos scientific lab. */ /* calculate the minimum and maximum legal bounds for x in gamma(x). */ /* xmin and xmax are not the only bounds, but they are the only non- */ /* trivial ones to calculate. */ /* output arguments -- */ /* xmin dble prec minimum legal value of x in gamma(x). any smaller */ /* value of x might result in underflow. */ /* xmax dble prec maximum legal value of x in gamma(x). any larger */ /* value of x might cause overflow. 
*/ /* double precision xmin, xmax, alnbig, alnsml, xln, xold, d1mach, */ /* 1 dlog */ /* external d1mach, dlog */ /* alnsml = dlog(d1mach(1)) */ alnsml = log(machfd_.fltmin); *xmin = -alnsml; for (i__ = 1; i__ <= 10; ++i__) { xold = *xmin; /* xln = dlog(xmin) */ xln = log(*xmin); *xmin -= *xmin * ((*xmin + .5) * xln - *xmin - .2258 + alnsml) / (* xmin * xln + .5); /* if (fabs(xmin-xold).lt.0.005d0) go to 20 */ if ((d__1 = *xmin - xold, abs(d__1)) < .005) { goto L20; } /* L10: */ } /* call seteru (27hd9gaml unable to find xmin, 27, 1, 2) */ /* write(6,*) 'd9gaml : unable to find xmin' */ gammfd_.igamma = 21; return; L20: *xmin = -(*xmin) + .01; /* alnbig = dlog (d1mach(2)) */ alnbig = log(machfd_.fltmax); *xmax = alnbig; for (i__ = 1; i__ <= 10; ++i__) { xold = *xmax; /* xln = dlog(xmax) */ xln = log(*xmax); *xmax -= *xmax * ((*xmax - .5) * xln - *xmax + .9189 - alnbig) / (* xmax * xln - .5); /* if (fabs(xmax-xold).lt.0.005d0) go to 40 */ if ((d__1 = *xmax - xold, abs(d__1)) < .005) { goto L40; } /* L30: */ } /* call seteru (27hd9gaml unable to find xmax, 27, 2, 2) */ /* write(6,*) 'd9gaml : unable to find xmax' */ gammfd_.igamma = 22; return; L40: *xmax += -.01; /* Computing MAX */ d__1 = *xmin, d__2 = -(*xmax) + 1.; *xmin = max(d__1,d__2); return; } /* d9gaml_ */ double d9lgmc_(double *x) { /* Initialized data */ static double algmcs[15] = { .1666389480451863247205729650822, -1.384948176067563840732986059135e-5, 9.810825646924729426157171547487e-9, -1.809129475572494194263306266719e-11, 6.221098041892605227126015543416e-14, -3.399615005417721944303330599666e-16, 2.683181998482698748957538846666e-18, -2.868042435334643284144622399999e-20, 3.962837061046434803679306666666e-22, -6.831888753985766870111999999999e-24, 1.429227355942498147573333333333e-25, -3.547598158101070547199999999999e-27,1.025680058010470912e-28, -3.401102254316748799999999999999e-30, 1.276642195630062933333333333333e-31 }; static int nalgm = 0; static double xbig = 0.; static double xmax = 0.; /* System generated locals */ float r__1; double ret_val, d__1, d__2; /* Local variables */ static double temp; /* august 1977 edition. w. fullerton, c3, los alamos scientific lab. */ /* compute the log gamma correction factor for x .ge. 10. so that */ /* dlog (dgamma(x)) = dlog(dsqrt(2*pi)) + (x-.5)*dlog(x) - x + d9lgmc(x) */ /* double precision x, algmcs(15), xbig, xmax, dcsevl, d1mach, */ /* 1 dexp, dlog, dsqrt */ /* external d1mach, dcsevl, dexp, dlog, dsqrt, initds */ /* series for algm on the interval 0. to 1.00000e-02 */ /* with weighted error 1.28e-31 */ /* log weighted error 30.89 */ /* significant figures required 29.81 */ /* decimal places required 31.48 */ if (nalgm != 0) { goto L10; } /* nalgm = initds (algmcs, 15, sngl(d1mach(3)) ) */ r__1 = (float) machfd_.epsmin; nalgm = initds_(algmcs, &c__15, &r__1); /* xbig = 1.0d0/dsqrt(d1mach(3)) */ xbig = 1. / sqrt(machfd_.epsmin); /* xmax = dexp (dmin1(dlog(d1mach(2)/12.d0), -dlog(12.d0*d1mach(1)))) */ /* Computing MIN */ d__1 = log(machfd_.fltmax / 12.), d__2 = -log(machfd_.fltmin * 12.); xmax = exp((min(d__1,d__2))); /* 10 if (x.lt.10.d0) call seteru (23hd9lgmc x must be ge 10, 23, 1, 2) */ L10: if (*x < 10.) { /* write(6,*) 'd9lgmc : x must be ge 10' */ gammfd_.igamma = 51; /* d9lgmc = d1mach(2) */ ret_val = machfd_.fltmax; return ret_val; } if (*x >= xmax) { goto L20; } ret_val = 1. / (*x * 12.); /* if (x.lt.xbig) d9lgmc = dcsevl (2.0d0*(10.d0/x)**2-1.d0, algmcs, */ /* 1 nalgm) / x */ if (*x < xbig) { /* Computing 2nd power */ d__2 = 10. / *x; d__1 = d__2 * d__2 * 2. 
- 1.; temp = dcsevl_(&d__1, algmcs, &nalgm); if (gammfd_.igamma != 0) { /* d9lgmc = d1mach(2) */ ret_val = machfd_.fltmax; } else { ret_val = temp / *x; } } return ret_val; L20: ret_val = 0.; /* call seteru (34hd9lgmc x so big d9lgmc underflows, 34, 2, 0) */ /* write(6,*) 'd9lgmc : x so big d9lgmc underflows' */ gammfd_.jgamma = 51; return ret_val; } /* d9lgmc_ */ double dcsevl_(double *x, double *a, int *n) { /* System generated locals */ int i__1; /* Local variables */ int i__, ni; double b0, b1, b2, twox; /* evaluate the n-term chebyshev series a at x. adapted from */ /* r. broucke, algorithm 446, c.a.c.m., 16, 254 (1973). */ /* input arguments -- */ /* x dble prec value at which the series is to be evaluated. */ /* a dble prec array of n terms of a chebyshev series. in eval- */ /* uating a, only half the first coef is summed. */ /* n number of terms in array a. */ /* double precision d1mach */ /* external d1mach */ /* Parameter adjustments */ --a; /* Function Body */ b2 = 0.f; /* if (n.lt.1) call seteru (28hdcsevl number of terms le 0, 28, 2,2) */ /* if (n.gt.1000) call seteru (31hdcsevl number of terms gt 1000, */ /* 1 31, 3, 2) */ /* if (x.lt.(-1.1d0) .or. x.gt.1.1d0) call seteru ( */ /* 1 25hdcsevl x outside (-1,+1), 25, 1, 1) */ if (*n < 1) { /* 'dcsevl : number of terms le 0' */ gammfd_.igamma = 41; return machfd_.fltmax; } if (*n > 1000) { /* 'dcsevl : number of terms gt 1000' */ gammfd_.igamma = 42; return machfd_.fltmax; } if (*x < -1.1 || *x > 1.1) { /* 'dcsevl : x outside (-1,+1)' */ gammfd_.igamma = 43; return machfd_.fltmax; } twox = *x * 2.; b1 = 0.; b0 = 0.; i__1 = *n; for (i__ = 1; i__ <= i__1; ++i__) { b2 = b1; b1 = b0; ni = *n - i__ + 1; b0 = twox * b1 - b2 + a[ni]; } return (b0 - b2) * .5; } /* dcsevl_ */ double dlngam_(double *x) { /* Initialized data */ static double sq2pil = .91893853320467274178032973640562; static double sqpi2l = .225791352644727432363097614947441; static double pi = 3.1415926535897932384626433832795; static double xmax = 0.; static double dxrel = 0.; /* System generated locals */ double ret_val, d__1; /* Local variables */ static double y, temp, sinpiy; /* august 1980 edition. w. fullerton, c3, los alamos scientific lab. */ /* double precision x, dxrel, pi, sinpiy, sqpi2l, sq2pil, */ /* 1 y, xmax, dint, dgamma, d9lgmc, d1mach, dlog, dsin, dsqrt */ /* external d1mach, d9lgmc, dgamma, dint, dlog, dsin, dsqrt */ /* sq2pil = alog (sqrt(2*pi)), sqpi2l = alog(sqrt(pi/2)) */ ret_val = 0.; if (xmax == 0.) { /* xmax = d1mach(2)/dlog(d1mach(2)) */ xmax = machfd_.fltmax / log(machfd_.fltmax); /* dxrel = dsqrt (d1mach(4)) */ dxrel = sqrt(machfd_.fltmax); } y = abs(*x); if (y <= 10.) { /* |x| <= 10 : Compute dlngam := dlog (fabs (dgamma(x)) ) */ temp = dgamma_(x); if (gammfd_.igamma != 0) { ret_val = machfd_.fltmax; return ret_val; } ret_val = log((abs(temp))); return ret_val; } /* ELSE |x| > 10 : Compute dlog ( fabs (dgamma(x)) ) */ if (y > xmax) { /* write(6,*) 'dlngam : abs(x) so big dlngam overflows' */ gammfd_.igamma = 61; ret_val = machfd_.fltmax; return ret_val; } /* if (x.gt.0.d0) dlngam = sq2pil + (x-0.5d0)*dlog(x) - x + d9lgmc(y) */ temp = d9lgmc_(&y); if (gammfd_.igamma != 0) { ret_val = machfd_.fltmax; return ret_val; } if (*x > 0.) { ret_val = sq2pil + (*x - .5) * log(*x) - *x + temp; } if (*x > 0.) { return ret_val; } sinpiy = (d__1 = sin(pi * y), abs(d__1)); if (sinpiy == 0.) 
{ /* write(6,*) 'dlngam : x is a negative integer' */ gammfd_.igamma = 62; ret_val = machfd_.fltmax; return ret_val; } /* dlngam = sqpi2l + (x-0.5d0)*dlog(y) - x - dlog(sinpiy) - d9lgmc(y) */ temp = d9lgmc_(&y); if (gammfd_.igamma != 0) { ret_val = machfd_.fltmax; return ret_val; } ret_val = sqpi2l + (*x - .5) * log(y) - *x - log(sinpiy) - temp; /* if (fabs((x-dint(x-0.5d0))*dlngam/x).lt.dxrel) call seteru ( */ /* 1 68hdlngam answer lt half precision because x too near negative */ /* 2integer, 68, 1, 1) */ if ((d__1 = (*x - (double) ((int) (*x - .5))) * ret_val / *x, abs( d__1)) < dxrel) { gammfd_.jgamma = 61; } return ret_val; } /* dlngam_ */ fracdiff/src/mach_comm.h0000644000176200001440000000010713345443770014711 0ustar liggesusersFD_EXTERNAL struct { double fltmin, fltmax, epsmin, epsmax; } machfd_; fracdiff/src/init.c0000644000176200001440000000316014323537676013734 0ustar liggesusers#include #include #include "fracdiff.h" #include #define CDEF(name) {#name, (DL_FUNC) &name, sizeof(name ## _typ)/sizeof(name ## _typ[0]), name ##_typ} // -- ./fdsim.c -- static R_NativePrimitiveArgType fdsim_typ[13] = { /*n:*/ INTSXP, INTSXP, INTSXP, REALSXP, REALSXP, /*d__:*/ REALSXP, REALSXP, REALSXP, REALSXP, /*flmin__:*/ REALSXP, REALSXP, REALSXP, REALSXP }; // -- ./fdhess.c -- static R_NativePrimitiveArgType fdhpq_typ[3] = { REALSXP, INTSXP, REALSXP }; static R_NativePrimitiveArgType fdcov_typ[11] = { REALSXP, REALSXP, REALSXP, REALSXP, REALSXP, INTSXP, REALSXP, INTSXP, REALSXP, REALSXP, INTSXP }; // -- ./fdcore.c -- static R_NativePrimitiveArgType fracdf_typ[19] = { /* x */ REALSXP, INTSXP, INTSXP, INTSXP, INTSXP, /* dtol */REALSXP, REALSXP, REALSXP, /* d__*/ REALSXP, REALSXP, REALSXP, REALSXP, /* lenw */INTSXP, INTSXP, INTSXP, /* flmin*/REALSXP, REALSXP, REALSXP, REALSXP }; static R_NativePrimitiveArgType fdcom_typ[9] = { /* n */ INTSXP, INTSXP, INTSXP, INTSXP, /* hood */ REALSXP, REALSXP, REALSXP, /*epmin */ REALSXP, REALSXP }; static const R_CMethodDef CEntries[] = { CDEF(fdsim), CDEF(fdhpq), CDEF(fdcov), CDEF(fracdf), CDEF(fdcom), {NULL, NULL, 0} }; /* static R_CallMethodDef CallEntries[] = { * {NULL, NULL, 0} * }; */ /* static R_FortranMethodDef FortEntries[] = { * {NULL, NULL, 0} * }; */ void R_init_fracdiff(DllInfo *dll) { R_registerRoutines(dll, CEntries, NULL/*CallEntries*/, NULL/*FortEntries*/, NULL); R_useDynamicSymbols(dll, FALSE); } fracdiff/src/hess_comm.h0000644000176200001440000000063213345443770014746 0ustar liggesusers/* included only by ./fdcore.c and ./fdhess.c : */ FD_EXTERNAL struct { int n, m, p, q, pq, pq1, maxpq, maxpq1, minpq, nm; } Dims; FD_EXTERNAL struct { double hatmu, wnv, cllf; } filtfd_; FD_EXTERNAL struct { int ksvd, kcov, kcor; } hessfd_; FD_EXTERNAL struct { int ly, lamk, lak, lvk, lphi, lpi; } w_fil; FD_EXTERNAL struct { int lqp, la, lajac, ipvt, ldiag, lqtf, lwa1, lwa2, lwa3, lwa4; } w_opt; fracdiff/src/fdsim.c0000644000176200001440000000730313345443770014070 0ustar liggesusers/*-*- mode: C; kept-old-versions: 12; kept-new-versions: 20; -*- * * fdsim.f -- translated by f2c (version 20031025). 
* * and produced by f2c-clean,v 1.10 2002/03/28 16:37:27 maechler */ #include #include "fracdiff.h" extern double dgamr_(double *); extern double dgamma_(double *); /* Common Block Declarations --- included as "extern" */ #define FD_EXTERNAL extern #include "mach_comm.h" #include "gamm_comm.h" void fdsim(int *n, int *ip, int *iq, double *ar, double *ma, double *d__, double *mu, double *y, double *s, double *flmin, double *flmax, double *epmin, double *epmax) { /* Generates a random time series ``for use with fracdif'', * i.e., filters a white noise series y[] into an ARIMA(p,d,q) series s[] Input : n int length of the time series ip int number of autoregressive parameters iq int number of moving average parameters ar float (ip) autoregressive parameters ma float (iq) moving average parameters d float fractional differencing parameter mu float time series mean y float (n+iq) 1st n : normalized random numbers s float (n+iq) workspace Output : s float (n) the generated time series ----------------------------------------------------------------------------- Simulates a series of length n from an ARIMA (p,d,q) model with fractional d (0 < d < 0.5). ----------------------------------------------------------------------------- float ar(ip), ma(iq), d, mu float y(n+iq), s(n+iq) -------------------------------------------------------------------------- */ /* System generated locals */ double d__1; /* Local variables */ int i, j, k; double dj, vk, dk1, amk, sum, dk1d, temp; /* Parameter adjustments */ --y; --s; /* Common Block -- Initializations: Input & Output for gamma() functions */ gammfd_.igamma = 0; gammfd_.jgamma = 0; machfd_.fltmin = *flmin; machfd_.fltmax = *flmax; machfd_.epsmin = *epmin; machfd_.epsmax = *epmax; /* Calculate vk[0] = 'g0' */ d__1 = 1. - *d__; temp = dgamr_(&d__1); if (gammfd_.igamma != 0) { for (i = 1; i <= *n; ++i) s[i] = 0.; return; } /* else : */ d__1 = 1. - *d__ * 2.; vk = dgamma_(&d__1) * (temp * temp); if (gammfd_.igamma != 0) { for (i = 1; i <= *n; ++i) s[i] = 0.; return; } /* else -- Gamma values ok, compute : */ /* Generate y(1) */ y[1] *= sqrt(vk); /* Generate y(2) and initialise vk,phi(j) */ temp = *d__ / (1. - *d__); vk *= 1. - temp * temp; amk = temp * y[1]; s[1] = temp; y[2] = amk + y[2] * sqrt(vk); /* Generate y(3),...,y(n+iq) */ for (k = 3; k <= (*n + *iq); ++k) { dk1 = (double) k - 1.; dk1d = dk1 - *d__; /* Update the phi(j) using the recursion formula on W498 */ for (j = 1; j <= (k - 2); ++j) { dj = dk1 - (double) j; s[j] *= dk1 * (dj - *d__) / (dk1d * dj); } temp = *d__ / dk1d; s[k - 1] = temp; /* Update vk */ vk *= 1. - temp * temp; /* Form amk */ amk = 0.; for (j = 1; j <= (k - 1); ++j) amk += s[j] * y[k - j]; /* Generate y(k) */ y[k] = amk + y[k] * sqrt(vk); } /* We now have an ARIMA (0,d,0) realisation of length n+iq in * y[k], k=1,..,n+iq. We now run this through an inverse ARMA(p,q) filter to get the final output in s[k], k=1,..,n. */ for (k = 1; k <= *n; ++k) { sum = 0.; j = imin2(*ip, k-1); /* i < j <= k-1 ==> (k - i - 1) >= 1 */ for (i = 0; i < j; ++i) sum += ar[i] * s[k - i - 1]; for (j = 0; j < *iq; ++j) sum -= ma[j] * y[k + *iq - j - 1]; s[k] = sum + y[k + *iq]; } /* now add the global mean */ if (*mu != 0.) { for (i = 1; i <= *n; ++i) s[i] += *mu; } return; } /* fdsim */ fracdiff/src/fdmin.c0000644000176200001440000011170513345443770014065 0ustar liggesusers/*-*- mode: C; kept-old-versions: 12; kept-new-versions: 20; -*- * * fdmin.f -- translated by f2c (version 20031025). 
* and produced by f2c-clean,v 1.10 2002/03/28 16:37:27 maechler * * and manually pretty edited by Martin Maechler, 2004-10-01, ff. */ #include // for warning(): #include #include "fracdiff.h" /* Common Block Declarations --- included as "extern" */ #define FD_EXTERNAL extern #include "mach_comm.h" #include "maux_comm.h" // #include "tols_comm.h" /* Constant (used to pass pointer) */ static /*logical*/int c_true = (1); static void qrfac(int *, int *, double *, int *, /*logical*/int *, int *, int *, double *, double *, double *); static void qrsolv(int, double *, int *, int *, double *, double *, double *, double *, double *); /* --------- EXPORTS ------------------- */ static double enorm(int, double *); static double lmpar(int, double *, int *, int *, double *, double *, double *, double, double *, double *, double *, double *); /* and this : */ double lmder1(S_fp fcn, int m, int n, double *x, double *fvec, double *fjac, int ldfjac, double ftol, double xtol, double gtol, int maxfev, double *diag, int mode, double factor, int *info, int *nfev, int *njev, int *ipvt, double *qtf, double *wa1, double *wa2, double *wa3, double *wa4, double *y) { // THE return value (since 2011-08-08): double fd_min_fnorm = -99.; // Wall /* subroutine lmder the purpose of lmder is to minimize the sum of the squares of m nonlinear functions in n variables by a modification of the levenberg-marquardt algorithm. the user must provide a subroutine which calculates the functions and the jacobian. the subroutine statement is subroutine lmder(fcn,m,n,x,fvec,fjac,ldfjac,ftol,xtol,gtol, maxfev,diag,mode,factor,nprint,info,nfev, njev,ipvt,qtf,wa1,wa2,wa3,wa4) where fcn is the name of the user-supplied subroutine which calculates the functions and the jacobian. fcn must be declared in an external statement in the user calling program, and should be written as follows. subroutine fcn(m, n, x,fvec,fjac, ldfjac,iflag) int m,n, ldfjac,iflag double precision x(n), fvec(m), fjac(ldfjac,n) ---------- if iflag = 1 calculate the functions at x and return this vector in fvec. do not alter fjac. if iflag = 2 calculate the jacobian at x and return this matrix in fjac. do not alter fvec. ---------- return end the value of iflag should not be changed by fcn unless the user wants to terminate execution of lmder. in this case set iflag to a negative int. m is a positive int input variable set to the number of functions. n is a positive int input variable set to the number of variables. n must not exceed m. x is an array of length n. on input x must contain an initial estimate of the solution vector. on output x contains the final estimate of the solution vector. fvec is an output array of length m which contains the functions evaluated at the output x. fjac is an output m by n array. the upper n by n submatrix of fjac contains an upper triangular matrix r with diagonal elements of nonincreasing magnitude such that t t t p *(jac *jac)*p = r *r, where p is a permutation matrix and jac is the final calculated jacobian. column j of p is column ipvt(j) (see below) of the identity matrix. the lower trapezoidal part of fjac contains information generated during the computation of r. ldfjac is a positive int input variable not less than m which specifies the leading dimension of the array fjac. ftol is a nonnegative input variable. termination occurs when both the actual and predicted relative reductions in the sum of squares are at most ftol. therefore, ftol measures the relative error desired in the sum of squares. 
xtol is a nonnegative input variable. termination occurs when the relative error between two consecutive iterates is at most xtol. therefore, xtol measures the relative error desired in the approximate solution. gtol is a nonnegative input variable. termination occurs when the cosine of the angle between fvec and any column of the jacobian is at most gtol in absolute value. therefore, gtol measures the orthogonality desired between the function vector and the columns of the jacobian. maxfev is a positive int input variable. termination occurs when the number of calls to fcn with iflag = 1 has reached maxfev. diag is an array of length n. if mode = 1 (see below), diag is internally set. if mode = 2, diag must contain positive entries that serve as multiplicative scale factors for the variables. mode is an int input variable. if mode = 1, the variables will be scaled internally. if mode = 2, the scaling is specified by the input diag. other values of mode are equivalent to mode = 1. factor is a positive input variable used in determining the initial step bound. this bound is set to the product of factor and the euclidean norm of diag*x if nonzero, or else to factor itself. in most cases factor should lie in the interval (.1,100.).100. is a generally recommended value. nprint is an int input variable that enables controlled printing of iterates if it is positive. in this case, fcn is called with iflag = 0 at the beginning of the first iteration and every nprint iterations thereafter and immediately prior to return, with x, fvec, and fjac available for printing. fvec and fjac should not be altered. if nprint is not positive, no special calls of fcn with iflag = 0 are made. info is an int output variable. if the user has terminated execution, info is set to the (negative) value of iflag. see description of fcn. otherwise, info is set as follows. info = 0 improper input parameters. info = 1 both actual and predicted relative reductions in the sum of squares are at most ftol. info = 2 relative error between two consecutive iterates is at most xtol. info = 3 conditions for info = 1 and info = 2 both hold. info = 4 the cosine of the angle between fvec and any column of the jacobian is at most gtol in absolute value. info = 5 number of calls to fcn with iflag = 1 has reached maxfev. info = 6 ftol is too small. no further reduction in the sum of squares is possible. info = 7 xtol is too small. no further improvement in the approximate solution x is possible. info = 8 gtol is too small. fvec is orthogonal to the columns of the jacobian to machine precision. nfev is an int output variable set to the number of calls to fcn with iflag = 1. njev is an int output variable set to the number of calls to fcn with iflag = 2. ipvt is an int output array of length n. ipvt defines a permutation matrix p such that jac*p = q*r, where jac is the final calculated jacobian, q is orthogonal (not stored), and r is upper triangular with diagonal elements of nonincreasing magnitude. column j of p is column ipvt(j) of the identity matrix. qtf is an output array of length n which contains the first n elements of the vector (q transpose)*fvec. wa1, wa2, and wa3 are work arrays of length n. wa4 is a work array of length m. subprograms called user-supplied ...... fcn minpack-supplied ... dpmpar,enorm,lmpar,qrfac fortran-supplied ... fabs,dmax1,dmin1,dsqrt,mod argonne national laboratory. minpack project. march 1980. burton s. garbow, kenneth e. hillstrom, jorge j. more epsmch is the machine precision. 
epsmch = dpmpar(1) */ /* Initialized data */ static double p1 = .1; static double p5 = .5; static double p25 = .25; static double p75 = .75; static double p0001 = 1e-4; /* System generated locals */ int fjac_offset; double d__1; /* Local variables */ int i__, j, l, iter, iflag, nprint; double par, sum, temp, temp1, temp2, ratio, enorm_n, xnorm, fnorm1, actred, dirder, prered, T_gnorm, delta = 0.; /* Wall*/ /* Parameter adjustments */ --wa4; --fvec; --wa3; --wa2; --wa1; --qtf; --ipvt; --diag; --x; --y; fjac_offset = 1 + ldfjac; fjac -= fjac_offset; /* Function Body */ temp = 0.; nprint = 0; *info = 0; iflag = 0; *nfev = 0; *njev = 0; /* check the input parameters for errors. */ if (n <= 0 || m < n || ldfjac < m || ftol < 0. || xtol < 0. || gtol < 0. || maxfev <= 0 || factor <= 0.) { warning("lmder1(): invalid (scalar) input"); goto L_end; } if (mode == 2) { /* check diag[] */ for (j = 1; j <= n; ++j) if (diag[j] <= 0.) goto L_end; } /* evaluate the function at the starting point and calculate its norm. */ iflag = 1; (*fcn)(&x[1], &fvec[1], &fjac[fjac_offset], ldfjac, iflag, &y[1]); *nfev = 1; if (iflag < 0) { warning("lmder1(): problem in function evaluation at starting point"); goto L_end; } fd_min_fnorm = fmin2(enorm(m, &fvec[1]), mauxfd_1.bignum); /* initialize levenberg-marquardt parameter and iteration counter. */ par = 0.; iter = 1; /* ==== beginning of the outer loop. ==========================================*/ L30: /* calculate the jacobian matrix. */ iflag = 2; (*fcn)(&x[1], &fvec[1], &fjac[fjac_offset], ldfjac, iflag, &y[1]); ++(*njev); if (iflag < 0) goto L_end; /* if requested, call fcn to enable printing of iterates. */ if (nprint > 0) { iflag = 0; if ((iter - 1) % nprint == 0) (*fcn)(&x[1], &fvec[1], &fjac[fjac_offset], ldfjac, iflag, &y[1]); if (iflag < 0) goto L_end; } /* L40: */ /* compute the qr factorization of the jacobian. */ qrfac(&m, &n, &fjac[fjac_offset], &ldfjac, &c_true, &ipvt[1], &n, &wa1[1], &wa2[1], &wa3[1]); /* on the first iteration -- do a some initializations : */ if (iter == 1) { /* if mode is 1, scale according to the norms of the columns of the initial jacobian. */ if (mode == 1) { for (j = 1; j <= n; ++j) diag[j] = ((wa2[j] != 0.)? wa2[j] : 1.); } /* calculate the norm of the scaled x and initialize the step bound delta. */ for (j = 1; j <= n; ++j) wa3[j] = diag[j] * x[j]; xnorm = enorm(n, &wa3[1]); delta = factor * xnorm; if (delta == 0.) { delta = factor; } } /* L80: */ /* form (q transpose)*fvec and store the first n components in qtf. */ for (i__ = 1; i__ <= m; ++i__) { wa4[i__] = fvec[i__]; } for (j = 1; j <= n; ++j) { if (fjac[j + j * ldfjac] != 0.) { sum = 0.; for (i__ = j; i__ <= m; ++i__) sum += fjac[i__ + j * ldfjac] * wa4[i__]; temp = -sum / fjac[j + j * ldfjac]; for (i__ = j; i__ <= m; ++i__) wa4[i__] += fjac[i__ + j * ldfjac] * temp; } /* L120: */ fjac[j + j * ldfjac] = wa1[j]; qtf[j] = wa4[j]; } /* compute the norm of the scaled gradient. */ T_gnorm = 0.; if (fd_min_fnorm != 0.) { for (j = 1; j <= n; ++j) { l = ipvt[j]; if (wa2[l] != 0.) { sum = 0.; for (i__ = 1; i__ <= j; ++i__) sum += fjac[i__ + j * ldfjac] * (qtf[i__] / fd_min_fnorm); T_gnorm = fmax2(T_gnorm, fabs(sum / wa2[l])); } } } /* L170: */ /* test for convergence of the gradient norm. */ if (T_gnorm <= gtol) *info = 4; if (*info != 0) goto L_end; /* rescale if necessary. */ if (mode == 1) { for (j = 1; j <= n; ++j) diag[j] = fmax2(diag[j], wa2[j]); } /* L190: */ do { // ------------- the inner loop. 
------------------------------------ /* determine the levenberg-marquardt parameter. */ par = lmpar(n, &fjac[fjac_offset], &ldfjac, &ipvt[1], &diag[1], &qtf[1], &delta, par, &wa1[1], &wa2[1], &wa3[1], &wa4[1]); /* store the direction p and x + p. calculate the norm of p. */ for (j = 1; j <= n; ++j) { wa1[j] = -wa1[j]; wa2[j] = x[j] + wa1[j]; wa3[j] = diag[j] * wa1[j]; } enorm_n = enorm(n, &wa3[1]); /* on the first iteration, adjust the initial step bound. */ if (iter == 1) { delta = fmin2(delta,enorm_n); } /* evaluate the function at x + p and calculate its norm. */ iflag = 1; (*fcn)(&wa2[1], &wa4[1], &fjac[fjac_offset], ldfjac, iflag, &y[1]); ++(*nfev); if (iflag < 0) goto L_end; fnorm1 = fmin2(enorm(m, &wa4[1]), mauxfd_1.bignum); /* compute the scaled actual reduction. */ actred = -1.; if (p1 * fnorm1 < fd_min_fnorm) { d__1 = fnorm1 / fd_min_fnorm; actred = 1. - d__1 * d__1; } /* actred = (fnorm*fnorm - fnorm1*fnorm1) */ /* compute the scaled predicted reduction and the scaled directional derivative. */ for (j = 1; j <= n; ++j) { wa3[j] = 0.; l = ipvt[j]; temp = wa1[l]; for (i__ = 1; i__ <= j; ++i__) { wa3[i__] += fjac[i__ + j * ldfjac] * temp; } } temp1 = enorm(n, &wa3[1]) / fd_min_fnorm; temp2 = sqrt(par) * enorm_n / fd_min_fnorm; prered = temp1 * temp1 + temp2 * temp2 / p5; /* temp1 = enorm(n,wa3) temp2 = (dsqrt(par)*enorm_n) prered = (temp1**2 + 2.d0*temp2**2) */ dirder = -(temp1 * temp1 + temp2 * temp2); /* compute the ratio of the actual to the predicted reduction. */ if (prered != 0.) ratio = actred / prered; else ratio = 0.; /* update the step bound. */ if (ratio <= p25) { if (actred >= 0.) temp = p5; else /* (actred < 0.) */ temp = p5 * dirder / (dirder + p5 * actred); if (p1 * fnorm1 >= fd_min_fnorm || temp < p1) temp = p1; delta = temp * fmin2(delta, enorm_n / p1); par /= temp; } else { /* ratio > p25 */ if (par == 0. || ratio >= p75) { delta = enorm_n / p5; par = p5 * par; } } /* L260: */ /* test for successful iteration. */ if (ratio >= p0001) { /* successful iteration. update x, fvec, and their norms. */ for (j = 1; j <= n; ++j) { x[j] = wa2[j]; wa2[j] = diag[j] * x[j]; } for (i__ = 1; i__ <= m; ++i__) fvec[i__] = wa4[i__]; xnorm = enorm(n, &wa2[1]); fd_min_fnorm = fnorm1; ++iter; } /* L290: tests for convergence. */ if((fabs(actred) <= ftol && prered <= ftol && p5 * ratio <= 1.) || (fd_min_fnorm <= ftol)) *info = 1; if (delta <= xtol) { *info = 2; if (fabs(actred) <= ftol && prered <= ftol && p5 * ratio <= 1.) *info = 3; } if (*info != 0) goto L_end; /* tests for termination and stringent tolerances. */ if (*nfev >= maxfev) *info = 5; if (fabs(actred) <= machfd_.epsmax && prered <= machfd_.epsmax && p5 * ratio <= 1.) *info = 6; if (delta <= machfd_.epsmax) *info = 7; if (T_gnorm <= machfd_.epsmax) *info = 8; if (*info != 0) goto L_end; /* end of the inner loop. repeat if iteration unsuccessful. */ } while (ratio < p0001); /* end of the outer loop. */ goto L30; L_end: // termination, either normal or user imposed. if (iflag < 0) { *info = iflag; } iflag = 0; if (nprint > 0) { (*fcn)(&x[1], &fvec[1], &fjac[fjac_offset], ldfjac, iflag, &y[1]); } return fd_min_fnorm; } /* lmder1 */ double enorm(int n, double *x) { /* Initialized data */ static double rdwarf = 3.834e-20; static double rgiant = 1.304e19; /* System generated locals */ double ret_val, d__1; /* Local variables */ static int i__; static double s1, s2, s3, xabs, x1max, x3max, agiant, floatn; /* ********** function enorm given an n-vector x, this function calculates the euclidean norm of x. 
the euclidean norm is computed by accumulating the sum of squares in three different sums. the sums of squares for the small and large components are scaled so that no overflows occur. non-destructive underflows are permitted. underflows and overflows do not occur in the computation of the unscaled sum of squares for the intermediate components. the definitions of small, intermediate and large components depend on two constants, rdwarf and rgiant. the main restrictions on these constants are that rdwarf**2 not underflow and rgiant**2 not overflow. the constants given here are suitable for every known computer. the function statement is double precision function enorm(n,x) where n is a positive int input variable. x is an input array of length n. subprograms called fortran-supplied ... fabs,dsqrt argonne national laboratory. minpack project. march 1980. burton s. garbow, kenneth e. hillstrom, jorge j. more ********** Parameter adjustments */ --x; /* Function Body */ ret_val = -1.; s1 = 0.; s2 = 0.; s3 = 0.; x1max = 0.; x3max = 0.; floatn = (double) (n); agiant = rgiant / floatn; for (i__ = 1; i__ <= n; ++i__) { xabs = fabs(x[i__]); if (xabs > rdwarf && xabs < agiant) {/* sum for intermediate components.*/ s2 += xabs * xabs; } else if (xabs > rdwarf) { /* sum for large components. */ if (xabs <= x1max) { /* Computing 2nd power */ d__1 = xabs / x1max; s1 += d__1 * d__1; } else { /* Computing 2nd power */ d__1 = x1max / xabs; s1 = 1. + s1 * (d__1 * d__1); x1max = xabs; } } else { /* sum for small components. */ if (xabs <= x3max) { if (xabs != 0.) { /* Computing 2nd power */ d__1 = xabs / x3max; s3 += d__1 * d__1; } } else { /* Computing 2nd power */ d__1 = x3max / xabs; s3 = 1. + s3 * (d__1 * d__1); x3max = xabs; } } } /* for(i ) */ /* calculation of norm. */ if (s1 == 0.) { if (s2 == 0.) { ret_val = x3max * sqrt(s3); } else { if (s2 >= x3max) ret_val = sqrt(s2 * (1. + x3max / s2 * (x3max * s3))); else /* (s2 < x3max) */ ret_val = sqrt(x3max * (s2 / x3max + x3max * s3)); } } else { ret_val = x1max * sqrt(s1 + s2 / x1max / x1max); } return ret_val; } /* enorm */ static void qrfac(int *m, int *n, double *a, int *lda, /*logical*/int *pivot, int *ipvt, int *lipvt, double *rdiag, double *acnorm, double *wa) { /* Initialized data */ static double p05 = .05; /* System generated locals */ int a_dim1, a_offset; double d__1; /* Local variables */ int i__, j, k, jp1, minmn; double sum, temp, ajnorm; /* ********** subroutine qrfac this subroutine uses householder transformations with column pivoting (optional) to compute a qr factorization of the m by n matrix a. that is, qrfac determines an orthogonal matrix q, a permutation matrix p, and an upper trapezoidal matrix r with diagonal elements of nonincreasing magnitude, such that a*p = q*r. the householder transformation for column k, k = 1,2,...,min(m,n), is of the form t i - (1/u(k))*u*u where u has zeros in the first k-1 positions. the form of this transformation and the method of pivoting first appeared in the corresponding linpack subroutine. the subroutine statement is subroutine qrfac(m,n,a,lda,pivot,ipvt,lipvt,rdiag,acnorm,wa) where m is a positive int input variable set to the number of rows of a. n is a positive int input variable set to the number of columns of a. a is an m by n array. on input a contains the matrix for which the qr factorization is to be computed. 
on output the strict upper trapezoidal part of a contains the strict upper trapezoidal part of r, and the lower trapezoidal part of a contains a factored form of q (the non-trivial elements of the u vectors described above). lda is a positive int input variable not less than m which specifies the leading dimension of the array a. pivot is a *logical* input variable. if pivot is set true, then column pivoting is enforced. if pivot is set false, then no column pivoting is done. ipvt is an int output array of length lipvt. ipvt defines the permutation matrix p such that a*p = q*r. column j of p is column ipvt(j) of the identity matrix. if pivot is false, ipvt is not referenced. lipvt is a positive int input variable. if pivot is false, then lipvt may be as small as 1. if pivot is true, then lipvt must be at least n. rdiag is an output array of length n which contains the diagonal elements of r. acnorm is an output array of length n which contains the norms of the corresponding columns of the input matrix a. if this information is not needed, then acnorm can coincide with rdiag. wa is a work array of length n. if pivot is false, then wa can coincide with rdiag. subprograms called minpack-supplied ... dpmpar,enorm fortran-supplied ... dmax1,dsqrt,min0 argonne national laboratory. minpack project. march 1980. burton s. garbow, kenneth e. hillstrom, jorge j. more ********** double precision dpmpar,enorm Parameter adjustments */ --wa; --acnorm; --rdiag; a_dim1 = *lda; a_offset = 1 + a_dim1; a -= a_offset; --ipvt; // compute the initial column norms and initialize several arrays. for (j = 1; j <= *n; ++j) { acnorm[j] = enorm(*m, &a[j * a_dim1 + 1]); rdiag[j] = acnorm[j]; wa[j] = rdiag[j]; if (*pivot) ipvt[j] = j; } /* reduce a to r with householder transformations. */ minmn = imin2(*m,*n); for (j = 1; j <= minmn; ++j) { if (*pivot) { // bring the column of largest norm into the pivot position. int kmax = j; for (k = j; k <= *n; ++k) { if (rdiag[k] > rdiag[kmax]) kmax = k; } if (kmax != j) { // swap a[,j] and a[,kmax] : for (i__ = 1; i__ <= *m; ++i__) { double t = a[i__ + j * a_dim1]; a[i__ + j * a_dim1] = a[i__ + kmax * a_dim1]; a[i__ + kmax * a_dim1] = t; } rdiag[kmax] = rdiag[j]; wa[kmax] = wa[j]; k = ipvt[j]; ipvt[j] = ipvt[kmax]; ipvt[kmax] = k; } } // L40: /* compute the householder transformation to reduce the j-th column of a to a multiple of the j-th unit vector. */ ajnorm = enorm(*m - j + 1, &a[j + j * a_dim1]); if (ajnorm == 0.) { goto L100; } if (a[j + j * a_dim1] < 0.) { ajnorm = -ajnorm; } for (i__ = j; i__ <= *m; ++i__) { a[i__ + j * a_dim1] /= ajnorm; } a[j + j * a_dim1] += 1.; /* apply the transformation to the remaining columns and update the norms. */ jp1 = j + 1; for (k = jp1; k <= *n; ++k) { sum = 0.; for (i__ = j; i__ <= *m; ++i__) { sum += a[i__ + j * a_dim1] * a[i__ + k * a_dim1]; } temp = sum / a[j + j * a_dim1]; for (i__ = j; i__ <= *m; ++i__) { a[i__ + k * a_dim1] -= temp * a[i__ + j * a_dim1]; } if (*pivot && rdiag[k] != 0.) { temp = a[j + k * a_dim1] / rdiag[k]; rdiag[k] *= sqrt((fmax2(0., 1. 
- temp * temp))); /* Computing 2nd power */ d__1 = rdiag[k] / wa[k]; if (p05 * (d__1 * d__1) < machfd_.epsmax) { rdiag[k] = enorm(*m - j, &a[jp1 + k * a_dim1]); wa[k] = rdiag[k]; } } } L100: rdiag[j] = -ajnorm; } return; } /* qrfac */ double lmpar(int n, double *r__, int *ldr, int *ipvt, double *diag, double *qtb, double *delta, double par_init, double *x, double *sdiag, double *wa1, double *wa2) { double par = par_init; // ---- the return_value /* subroutine lmpar given an m by n matrix a, an n by n nonsingular diagonal matrix d, an m-vector b, and a positive number delta, the problem is to determine a value for the parameter par such that if x solves the system a*x = b , sqrt(par)*d*x = 0 , in the least squares sense, and dxnorm is the euclidean norm of d*x, then either par is 0. and (dxnorm-delta) <= 0.1*delta , or par is positive and abs(dxnorm-delta) <= 0.1*delta . this subroutine completes the solution of the problem if it is provided with the necessary information from the qr factorization, with column pivoting, of a. that is, if a*p = q*r, where p is a permutation matrix, q has orthogonal columns, and r is an upper triangular matrix with diagonal elements of nonincreasing magnitude, then lmpar expects the full upper triangle of r, the permutation matrix p, and the first n components of (q transpose)*b. on output lmpar also provides an upper triangular matrix s such that t t t p *(a *a + par*d*d)*p = s *s . s is employed within lmpar and may be of separate interest. only a few iterations are generally needed for convergence of the algorithm. if, however, the limit of 10 iterations is reached, then the output par will contain the best value obtained so far. the subroutine statement is subroutine lmpar(n,r,ldr,ipvt,diag,qtb,delta,par,x,sdiag, wa1,wa2) where n is a positive int input variable set to the order of r. r is an n by n array. on input the full upper triangle must contain the full upper triangle of the matrix r. on output the full upper triangle is unaltered, and the strict lower triangle contains the strict upper triangle (transposed) of the upper triangular matrix s. ldr is a positive int input variable not less than n which specifies the leading dimension of the array r. ipvt is an int input array of length n which defines the permutation matrix p such that a*p = q*r. column j of p is column ipvt(j) of the identity matrix. diag is an input array of length n which must contain the diagonal elements of the matrix d. qtb is an input array of length n which must contain the first n elements of the vector (q transpose)*b. delta is a positive input variable which specifies an upper bound on the euclidean norm of d*x. par is a nonnegative variable. on input par contains an initial estimate of the levenberg-marquardt parameter. on output par contains the final estimate. x is an output array of length n which contains the least squares solution of the system a*x = b, sqrt(par)*d*x = 0, for the output par. sdiag is an output array of length n which contains the diagonal elements of the upper triangular matrix s. wa1 and wa2 are work arrays of length n. subprograms called minpack-supplied ... dpmpar,enorm,qrsolv fortran-supplied ... fabs,dmax1,dmin1,dsqrt argonne national laboratory. minpack project. march 1980. burton s. garbow, kenneth e. hillstrom, jorge j. 
more ***********/ /* Initialized data */ static double p1 = .1; static double p001 = .001; /* System generated locals */ int r_dim1, r_offset; /* Local variables */ int i__, j, k, l, jp1, iter, nsing; double fp, sum, parc, parl, temp, paru, dwarf, gnorm, dxnorm; /* Parameter adjustments */ --wa2; --wa1; --sdiag; --x; --qtb; --diag; --ipvt; r_dim1 = *ldr; r_offset = 1 + r_dim1; r__ -= r_offset; // dwarf is the smallest positive magnitude : dwarf = machfd_.fltmin; /* compute and store in x the gauss-newton direction. if the jacobian is rank-deficient, obtain a least squares solution. */ nsing = n; for (j = 1; j <= n; ++j) { wa1[j] = qtb[j]; if (r__[j + j * r_dim1] == 0. && nsing == n) { nsing = j - 1; } if (nsing < n) { wa1[j] = 0.; } } for (k = 1; k <= nsing; ++k) { j = nsing - k + 1; wa1[j] /= r__[j + j * r_dim1]; temp = wa1[j]; for (i__ = 1; i__ <= j-1; ++i__) { wa1[i__] -= r__[i__ + j * r_dim1] * temp; } } for (j = 1; j <= n; ++j) { l = ipvt[j]; x[l] = wa1[j]; } /* initialize the iteration counter. evaluate the function at the origin, and test for acceptance of the gauss-newton direction. */ iter = 0; for (j = 1; j <= n; ++j) { wa2[j] = diag[j] * x[j]; } dxnorm = enorm(n, &wa2[1]); fp = dxnorm - *delta; if (fp <= p1 * *delta) { goto L220; } /* if the jacobian is not rank deficient, the newton step provides a lower bound, parl, for the zero of the function. Otherwise set this bound to 0. */ parl = 0.; if (nsing >= n) { for (j = 1; j <= n; ++j) { l = ipvt[j]; wa1[j] = diag[l] * (wa2[l] / dxnorm); } for (j = 1; j <= n; ++j) { sum = 0.; for (i__ = 1; i__ <= j-1; ++i__) { sum += r__[i__ + j * r_dim1] * wa1[i__]; } wa1[j] = (wa1[j] - sum) / r__[j + j * r_dim1]; } temp = enorm(n, &wa1[1]); parl = fp / *delta / temp / temp; } // L120: /* calculate an upper bound, paru, for the 0. of the function. */ for (j = 1; j <= n; ++j) { sum = 0.; for (i__ = 1; i__ <= j; ++i__) { sum += r__[i__ + j * r_dim1] * qtb[i__]; } l = ipvt[j]; wa1[j] = sum / diag[l]; } gnorm = enorm(n, &wa1[1]); paru = gnorm / *delta; if (paru == 0.) { paru = dwarf / fmin2(*delta,p1); } /* if the input par lies outside of the interval (parl,paru), set par to the closer endpoint. */ par = fmax2(par, parl); par = fmin2(par, paru); if (par == 0.) { par = gnorm / dxnorm; } /* beginning of an iteration. */ L150: ++iter; /* evaluate the function at the current value of par. */ if (par == 0.) par = fmax2(dwarf, p001 * paru); temp = sqrt(par); for (j = 1; j <= n; ++j) { wa1[j] = temp * diag[j]; } qrsolv(n, &r__[r_offset], ldr, &ipvt[1], &wa1[1], &qtb[1], &x[1], &sdiag[1], &wa2[1]); for (j = 1; j <= n; ++j) { wa2[j] = diag[j] * x[j]; } dxnorm = enorm(n, &wa2[1]); temp = fp; fp = dxnorm - *delta; /* if the function is small enough, accept the current value of par. also test for the exceptional cases where parl is 0. or the number of iterations has reached 10. */ if (fabs(fp) <= p1 * *delta || (parl == 0. && fp <= temp && temp < 0.) || iter == 10) { // << FIXME: give warning for iter == 10 !! goto L220; } /* compute the newton correction. */ for (j = 1; j <= n; ++j) { l = ipvt[j]; wa1[j] = diag[l] * (wa2[l] / dxnorm); } for (j = 1; j <= n; ++j) { wa1[j] /= sdiag[j]; temp = wa1[j]; jp1 = j + 1; for (i__ = jp1; i__ <= n; ++i__) { wa1[i__] -= r__[i__ + j * r_dim1] * temp; } } temp = enorm(n, &wa1[1]); parc = fp / *delta / temp / temp; /* depending on the sign of the function, update parl or paru. */ if (fp > 0.) { parl = fmax2(parl,par); } if (fp < 0.) { paru = fmin2(paru,par); } // compute an improved estimate for par. 
par = fmax2(parl, par + parc); /* end of an iteration. */ goto L150; L220: // termination. if (iter == 0) { par = 0.; } return par; } /* lmpar */ /* Subroutine */ static void qrsolv(int n, double *r__, int *ldr, int *ipvt, double *diag, double *qtb, double *x, double *sdiag, double *wa) { /* Initialized data */ static double p5 = .5; static double p25 = .25; /* System generated locals */ int r_dim1, r_offset; /* Local variables */ int i__, j, k, l, nsing; double tan__, cos__, sin__, temp, cotan, qtbpj; /* ********** subroutine qrsolv given an m by n matrix a, an n by n diagonal matrix d, and an m-vector b, the problem is to determine an x which solves the system a*x = b , d*x = 0 , in the least squares sense. this subroutine completes the solution of the problem if it is provided with the necessary information from the qr factorization, with column pivoting, of a. that is, if a*p = q*r, where p is a permutation matrix, q has orthogonal columns, and r is an upper triangular matrix with diagonal elements of nonincreasing magnitude, then qrsolv expects the full upper triangle of r, the permutation matrix p, and the first n components of (q transpose)*b. the system a*x = b, d*x = 0, is then equivalent to t t r*z = q *b , p *d*p*z = 0 , where x = p*z. if this system does not have full rank, then a least squares solution is obtained. on output qrsolv also provides an upper triangular matrix s such that t t t p *(a *a + d*d)*p = s *s . s is computed within qrsolv and may be of separate interest. the subroutine statement is subroutine qrsolv(n,r,ldr,ipvt,diag,qtb,x,sdiag,wa) where n is a positive int input variable set to the order of r. r is an n by n array. on input the full upper triangle must contain the full upper triangle of the matrix r. on output the full upper triangle is unaltered, and the strict lower triangle contains the strict upper triangle (transposed) of the upper triangular matrix s. ldr is a positive int input variable not less than n which specifies the leading dimension of the array r. ipvt is an int input array of length n which defines the permutation matrix p such that a*p = q*r. column j of p is column ipvt(j) of the identity matrix. diag is an input array of length n which must contain the diagonal elements of the matrix d. qtb is an input array of length n which must contain the first n elements of the vector (q transpose)*b. x is an output array of length n which contains the least squares solution of the system a*x = b, d*x = 0. sdiag is an output array of length n which contains the diagonal elements of the upper triangular matrix s. wa is a work array of length n. subprograms called fortran-supplied ... fabs,dsqrt argonne national laboratory. minpack project. march 1980. burton s. garbow, kenneth e. hillstrom, jorge j. more ********** Parameter adjustments */ --wa; --sdiag; --x; --qtb; --diag; --ipvt; r_dim1 = *ldr; r_offset = 1 + r_dim1; r__ -= r_offset; /* Function Body copy r and (q transpose)*b to preserve input and initialize s. in particular, save the diagonal elements of r in x. */ for (j = 1; j <= n; ++j) { for (i__ = j; i__ <= n; ++i__) { r__[i__ + j * r_dim1] = r__[j + i__ * r_dim1]; } x[j] = r__[j + j * r_dim1]; wa[j] = qtb[j]; } /* eliminate the diagonal matrix d using a givens rotation. */ for (j = 1; j <= n; ++j) { /* prepare the row of d to be eliminated, locating the diagonal element using p from the qr factorization. */ l = ipvt[j]; if (diag[l] == 0.) 
{ goto L90; } for (k = j; k <= n; ++k) { sdiag[k] = 0.; } sdiag[j] = diag[l]; /* the transformations to eliminate the row of d modify only a single element of (q transpose)*b beyond the first n, which is initially 0.. */ qtbpj = 0.; for (k = j; k <= n; ++k) if(sdiag[k] != 0.) { /* determine a givens rotation which eliminates the appropriate element in the current row of d. */ if (fabs(r__[k + k * r_dim1]) < fabs(sdiag[k])) { cotan = r__[k + k * r_dim1] / sdiag[k]; sin__ = p5 / sqrt(p25 + p25 * (cotan * cotan)); cos__ = sin__ * cotan; } else { tan__ = sdiag[k] / r__[k + k * r_dim1]; cos__ = p5 / sqrt(p25 + p25 * (tan__ * tan__)); sin__ = cos__ * tan__; } /* compute the modified diagonal element of r and the modified element of ((q transpose)*b,0). */ r__[k + k * r_dim1] = cos__ * r__[k + k * r_dim1] + sin__ * sdiag[k]; temp = cos__ * wa[k] + sin__ * qtbpj; qtbpj = -sin__ * wa[k] + cos__ * qtbpj; wa[k] = temp; /* accumulate the tranformation in the row of s. */ for (i__ = k+1; i__ <= n; ++i__) { double r_n = cos__ * r__[i__ + k * r_dim1] + sin__ * sdiag[i__]; sdiag[i__] = -sin__ * r__[i__ + k * r_dim1] + cos__ * sdiag[i__]; r__[i__ + k * r_dim1] = r_n; } } L90: /* store the diagonal element of s and restore the corresponding diagonal element of r. */ sdiag[j] = r__[j + j * r_dim1]; r__[j + j * r_dim1] = x[j]; } // for( j = 1 .. n ) /* solve the triangular system for z. if the system is singular, then obtain a least squares solution. */ nsing = n; for (j = 1; j <= n; ++j) { if (sdiag[j] == 0. && nsing == n) { nsing = j - 1; } if (nsing < n) { wa[j] = 0.; } } for (k = 1; k <= nsing; ++k) { j = nsing - k + 1; double sum = 0.; for (i__ = j + 1; i__ <= nsing; ++i__) sum += r__[i__ + j * r_dim1] * wa[i__]; wa[j] = (wa[j] - sum) / sdiag[j]; } /* permute the components of z back to components of x. */ for (j = 1; j <= n; ++j) { l = ipvt[j]; x[l] = wa[j]; } return; } /* qrsolv */ fracdiff/src/gamm_comm.h0000644000176200001440000000006413345443770014724 0ustar liggesusersFD_EXTERNAL struct { int igamma, jgamma; } gammfd_; fracdiff/src/pmult.c0000644000176200001440000000176313345443770014133 0ustar liggesusers/* * Copyright (C) 2004 Martin Maechler * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * A copy of the GNU General Public License is available via WWW at * http://www.gnu.org/copyleft/gpl.html. You can also obtain it by * writing to the Free Software Foundation, Inc., 59 Temple Place, * Suite 330, Boston, MA 02111-1307 USA. 
*/ #include #include SEXP poly_mult(SEXP a, SEXP b) { SEXP prod; /* TODO: implement polynomial multiplication */ prod = a; /* -Wall : for now */ return prod; } fracdiff/src/fracdiff.h0000644000176200001440000000324614323537676014547 0ustar liggesusers // fdsim.c -------------------------------------- void fdsim(int *n, int *ip, int *iq, double *ar, double *ma, double *d__, double *mu, double *y, double *s, double *flmin, double *flmax, double *epmin, double *epmax); // fdcore.c -------------------------------------- void fracdf(double *x, int *n, int *m, int *nar, int *nma, double *dtol, double *drange, double *hood_etc, double *d__, double *ar, double *ma, double *w, int *lenw, int *iw, int *inform, // <- also use as input double *flmin, double *flmax, double *epmin, double *epmax); void fdfilt(double *x, double d, /* output : */ double *y, double *slogvk, /* using */ double *amk, double *ak, double *vk, double *phi, double *pi); void fdcom(int *n, int *m, int *nar, int *nma, double *hood, double *flmin, double *flmax, double *epmin, double *epmax); void ajqp_(double *qp, double *a, double *ajac, int lajac, int op_code, double *y); // fdhess.c -------------------------------------- void fdhpq(double *h, int *lh, double *w); void fdcov(double *x, double *d__, double *hh, double *hd, double *cov, int *lcov, double *cor, int *lcor, double *se, double *w, int *info); // fdmin.c -------------------------------------- typedef /* Subroutine */ void (*S_fp)(double *, double *, double *, int, int, double *); double lmder1(S_fp fcn, int m, int n, double *x, double *fvec, double *fjac, int ldfjac, double ftol, double xtol, double gtol, int maxfev, double *diag, int mode, double factor, int *info, int *nfev, int *njev, int *ipvt, double *qtf, double *wa1, double *wa2, double *wa3, double *wa4, double *y); fracdiff/src/Makevars0000644000176200001440000000004113345443770014306 0ustar liggesusersPKG_LIBS = $(BLAS_LIBS) $(FLIBS) fracdiff/src/fdcore.c0000644000176200001440000006312314323537676014240 0ustar liggesusers/*-*- mode: C; kept-old-versions: 12; kept-new-versions: 20; -*- * * fdcore.f -- translated by f2c (version 20031025). * and produced by f2c-clean,v 1.10 2002/03/28 16:37:27 maechler * * and manually pretty edited by Martin Maechler, 2004-09-18, ff. */ #include // for warning(), and "monitoring" output #include /* dcopy() and ddot() only:*/ #include #include "fracdiff.h" extern double dgamr_(double *); extern double dgamma_(double *); static double dopt(double *x, double dinit, double *drange, int verbose, double *hood, double *delta, double *w, int *iw, double *min_fnorm); static double pqopt(double *x, double d__, double *w, int *iw, double *min_fnorm); /* These + ajqp_(..) 
are passed to LMDER1() to be optimized: */ static void ajp_(double *p, double *a, double *ajac, int lajac, int op_code, double *y); static void ajq_(double *qp, double *a, double *ajac, int lajac, int op_code, double *y); /* Common Block Declarations */ /* 1 - local ones --- MM: maybe get rid of (some of) them : */ static struct { int maxopt, maxfun, nopt, nfun, ngrd, ifun, igrd, info; } OP; static struct { double d, f, x, g; } TOL; static struct { int iminpk, jminpk; } MinPck; static struct { int ilimit, jlimit; } limsfd_; /* 2 - global ones --- * all defined here :*/ #define FD_EXTERNAL #include "mach_comm.h" #include "maux_comm.h" #include "gamm_comm.h" #include "hess_comm.h" /* Table of constant values (used as pointers) */ static double c_m99 = -99.; static int ic__1 = 1; static int ic__0 = 0; static double c__1 = 1.; /***************************************************************************** ******************************************************************************/ void fracdf(double *x, int *n, int *m, int *nar, int *nma, double *dtol, double *drange, double *hood_etc, double *d__, double *ar, double *ma, double *w, int *lenw, int *iw, int *inform, // <- also use as input for verbose double *flmin, double *flmax, double *epmin, double *epmax) { /* ---------------------------------------------------------------------------- Input : x(n) double time series for the ARIMA model n int length of the time series m int number of terms in the likelihood approximation suggested value 100 (see Haslett and Raftery 1989) nar int number of autoregressive parameters nma int number of moving average parameters dtol double desired length of final interval of uncertainty for d suggested value : 4th root of machine precision if dtol < 0 it is automatically set to this value dtol will be altered if necessary by the program drange(2) double array of length 2 giving minimum and maximum values f for the fractional differencing parameter d double initial guess for optimal fractional differencing parameter w double work array lenw int length of double precision workspace w, must be at least max( p+q+2*(n+M), 3*n+(n+6.5)*(p+q) +1, (3+2*(p+q+1))*(p+q+1)+1) MM: max( p+q+2*(n+M), 3*n+(n+6.5)*(p+q) +1, 31 * 12) is what the code below rather checks Output : dtol double value of dtol ultimately used by the algorithm d double final value optimal fractional differencing parameter hood_etc double[3] [1]: logarithm of the maximum likelihood [2]: minimal objective value [3]: estimated noise variance ar double optimal autoregressive parameters ma double optimal moving average parameters ---------------------------------------------------------------------------- copyright 1991 Department of Statistics, University of Washington written by Chris Fraley ----------------------------------------------------------------------------*/ /* Local variables */ double delta; int lfree, lwfree, verbose = inform[0], w_lqp; if (*m <= 0) /* default: */ *m = 100; /* MM: Using 'fdcom' instead of 'code copy' -- FIXME: use #include in C * initialize several of the above common blocks: */ fdcom(n, m, nar, nma, &c_m99, flmin, flmax, epmin, epmax); w_lqp = w_opt.lqp - 1;// '-1' : so we do *not* need 'w--' lfree = w_opt.lwa4 + *n - Dims.minpq; /* = 1+ ipvt + 5.5*npq + n - minpq = 2+ 6.5*npq + 3*n - 2*minpq + (n-maxpq)*npq and lvk+M = 1 + npq + 2(n + M) */ lwfree = imax2((12*31), imax2(w_fil.lvk + *m, lfree)); /* ^^^^^^^ MM: where is this needed? 
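(note: R/fracdiff.R enforces the same '31 * 12' lower bound when it computes 'lenw' on the R side)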
*/ if (lwfree > *lenw + 1) { limsfd_.ilimit = lwfree - *lenw; REprintf("** Insufficient storage : Increase length of w by at least %d\n", limsfd_.ilimit); *inform = 1; /* return the *desired* workspace storage: */ *lenw = lwfree; return; } OP.maxopt = 100; OP.maxfun = 100; /* set error and warning flags */ *inform = 0; gammfd_.igamma = 0; MinPck.iminpk = 0; limsfd_.ilimit = 0; gammfd_.jgamma = 0; MinPck.jminpk = 0; limsfd_.jlimit = 0; if (*dtol > .1) *dtol = .1; if (*dtol <= 0.) { TOL.d = mauxfd_.epsp25; TOL.f = mauxfd_.epspt3; } else { TOL.d = fmax2(*dtol, mauxfd_.epspt5); TOL.f = fmax2(*dtol / 10., mauxfd_.epsp75); } TOL.g = TOL.f; TOL.x = TOL.d; *dtol = TOL.d; /* if (npq != 0) call dcopy( npq, zero, 0, w(lqp), 1) */ if (Dims.pq != 0) { F77_CALL(dcopy)(&Dims.p, ar, &ic__1, &w[w_lqp + Dims.q], &ic__1); F77_CALL(dcopy)(&Dims.q, ma, &ic__1, &w[w_lqp], &ic__1); } OP.nopt = 0; OP.nfun = 0; OP.ngrd = 0; /* ==== */ *d__ = dopt(x, *d__, drange, verbose, /* ===*/ &hood_etc[0], &delta, w, iw, /* min_fnorm = */&hood_etc[1]); hood_etc[2] = filtfd_.wnv; if (OP.nopt >= OP.maxopt) { limsfd_.jlimit = 1; warning("fracdf(): optimization iteration limit %d reached", OP.maxopt); } if (gammfd_.igamma != 0 || MinPck.iminpk != 0) { *d__ = machfd_.fltmax; hood_etc[0] = machfd_.fltmax; F77_CALL(dcopy)(&Dims.p, &machfd_.fltmax, &ic__0, ar, &ic__1); F77_CALL(dcopy)(&Dims.q, &machfd_.fltmax, &ic__0, ma, &ic__1); if (gammfd_.igamma != 0) { *inform = 2; return; } if (MinPck.iminpk != 0) { *inform = 3; return; } } F77_CALL(dcopy)(&Dims.p, &w[w_lqp + Dims.q], &ic__1, ar, &ic__1); F77_CALL(dcopy)(&Dims.q, &w[w_lqp], &ic__1, ma, &ic__1); if (gammfd_.jgamma != 0) { *inform = 4; return; } if (MinPck.jminpk != 0) { *inform = 5; return; } if (limsfd_.jlimit != 0) { *inform = 6; } return; /* 900 format( 4h itr, 14h d , 14h est mean , * 16h white noise, 17h log likelihd, * 4h nf, 3h ng) */ } /* fracdf() {main} */ /****************************************************************************** ***************************************************************************** optimization with respect to d based on Brent's fmin algorithm */ static double dopt(double *x, double dinit, double *drange, int verbose, double *hood, double *delta, double *w, int *iw, double *min_fnorm) { /* float x(n) */ /* cc is the squared inverse of the golden ratio, cc := (3-sqrt(5.))/2 : */ static double cc = .38196601125011; static double aa, bb, dd, ee, hh, fu, fv, fw, fx, rr, ss, tt, uu, vv, ww, xx, eps, tol, tol1, tol2, tol3; /* copyright 1991 Department of Statistics, University of Washington written by Chris Fraley ------------------------------------------------------------------------------ */ /* eps is approximately the square root of the relative machine precision. */ eps = machfd_.epsmax; tol1 = eps + 1.; eps = sqrt(eps); aa = drange[0]; bb = drange[1]; if (dinit > aa + TOL.d && dinit < bb - TOL.d) { vv = dinit; } else { vv = aa + cc * (bb - aa); } ww = vv; xx = vv; uu = xx; dd = 0.; ee = 0.; OP.nopt = 1; fx = pqopt(x, xx, w, iw, min_fnorm); /* ===== */ if(verbose) { REprintf("dopt() debugging: dinit = %g ==> xx = %g, fx = pqopt(x[], xx) = %g; min_fnorm = %g\n", dinit, xx, fx, *min_fnorm); REprintf(" it. | uu | pqopt(uu) | delta |\n"); // | 123456789012 | 123456789012 | 1234567890 |\n", // REprintf(" .. 
DBG dopt() [%2d]:| %12g | %12g | %10.6e |\n", } fv = fx; fw = fx; tol = fmax2(TOL.d, 0.); tol3 = tol / 3.; /* main loop starts here ======================================================*/ L10: if (gammfd_.igamma != 0 || MinPck.iminpk != 0) { *hood = machfd_.fltmax; warning("** dopt() ERROR: invalid gamma (%d) or Minpack (%d) codes", gammfd_.igamma, MinPck.iminpk); return -1.; } hh = (aa + bb) * .5; tol1 = eps * (fabs(xx) + 1.) + tol3; tol2 = tol1 * 2.; /* check stopping criterion */ *delta = fabs(xx - hh) + (bb - aa) * .5; /* if (abs(xx-hh) .le. (tol2-half*(bb-aa))) goto 100 */ if(verbose && OP.nopt > 1) REprintf(" .. DBG dopt() [%2d]:| %12g | %12g | %10.6e |\n", OP.nopt, uu, fu, *delta); if (*delta <= tol2) { goto L_end; } if (OP.nopt >= OP.maxopt) { goto L_end; } /* Maybe another check : * if (delpq <= EPSMAX*(one+pqnorm)) goto 100 */ rr = 0.; ss = 0.; tt = 0.; if (fabs(ee) > tol1) { /* fit parabola */ rr = (xx - ww) * (fx - fv); ss = (xx - vv) * (fx - fw); tt = (xx - vv) * ss - (xx - ww) * rr; ss = (ss - rr) * 2.; if (ss <= 0.) { ss = -ss; } else { tt = -tt; } rr = ee; ee = dd; } if (fabs(tt) >= fabs(ss * .5 * rr) || tt <= ss * (aa - xx) || tt >= ss * (bb - xx)) { /*--- a golden-section step ---*/ if (xx >= hh) { ee = aa - xx; } else { ee = bb - xx; } dd = cc * ee; } else { /*--- a parabolic-interpolation step ---*/ dd = tt / ss; uu = xx + dd; /* f must not be evaluated too close to aa or bb */ if (uu - aa < tol2 || bb - uu < tol2) { dd = tol1; if (xx >= hh) { dd = -dd; } } } /* f must not be evaluated too close to xx */ if (fabs(dd) >= tol1) { uu = xx + dd; } else { if (dd <= 0.) { uu = xx - tol1; } else { uu = xx + tol1; } } ++OP.nopt; fu = pqopt(x, uu, w, iw, min_fnorm); /* update aa, bb, vv, ww, and xx */ if (fx >= fu) { if (uu >= xx) { aa = xx; } else { bb = xx; } vv = ww; fv = fw; ww = xx; fw = fx; xx = uu; fx = fu; } else { if (uu >= xx) { bb = uu; } else { aa = uu; } if (fu > fw && ww != xx) { if (fu <= fv || vv == xx || vv == ww) { vv = uu; fv = fu; } } else { vv = ww; fv = fw; ww = uu; fw = fu; } } goto L10; /* end of main loop */ L_end: *hood = -fx; filtfd_.cllf = *hood; return xx; /* 900 format( i4, 2(1pe14.6), 1pe16.7, 1pe17.8, 1x, 2(i3)) 901 format( i4, 3(1pe10.2), 1pe11.2, 2(i3), 3(1pe8.1), i2) */ } /* dopt */ /* **************************************************************************** ******************************************************************************/ void fdcom(int *n, int *m, int *nar, int *nma, double *hood, double *flmin, double *flmax, double *epmin, double *epmax) /* is also called from R --> need all pointers */ { /* Fill "parameter"s into global variables (Common blocks) needed later: * * copyright 1991 Department of Statistics, University of Washington written by Chris Fraley -----------------------------------------------------------------------------*/ filtfd_.cllf = *hood; /* machine constants */ machfd_.fltmin = *flmin; machfd_.fltmax = *flmax; machfd_.epsmin = *epmin; machfd_.epsmax = *epmax; mauxfd_.epspt5 = sqrt(machfd_.epsmin); mauxfd_.epsp25 = sqrt(mauxfd_.epspt5); mauxfd_.epspt3 = pow(machfd_.epsmin, 0.3); mauxfd_.epsp75 = pow(machfd_.epsmin, 0.75); mauxfd_.bignum = 1. 
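/* (bignum = 1/epsmin is the clamp bound used in ajqp_() and ajq_()
 *  below to keep residual and jacobian entries finite) */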
/ machfd_.epsmin; /* useful quantities -- integer "dimensions" : */ Dims.n = *n; Dims.m = *m; Dims.p = *nar; Dims.q = *nma; Dims.pq = Dims.p + Dims.q; Dims.pq1 = Dims.pq + 1; if(Dims.p >= Dims.q) { Dims.maxpq = Dims.p; Dims.minpq = Dims.q; } else { Dims.maxpq = Dims.q; Dims.minpq = Dims.p; } Dims.maxpq1 = Dims.maxpq + 1; Dims.nm = *n - Dims.maxpq; /* workspace allocation */ w_opt.lqp = 1; w_fil.ly = w_opt.lqp + Dims.pq; w_fil.lamk = w_fil.ly; w_fil.lak = w_fil.lamk + *n; w_fil.lphi= w_fil.lak + *n; w_fil.lvk = w_fil.lphi + *m; /* = lamk + 2*n + M = 1 + npq + 2n + M */ w_fil.lpi = w_fil.lphi; w_opt.la = w_fil.ly + *n; w_opt.lajac = w_opt.la + *n - Dims.minpq; /* old ipvt = lajac + max( (n-np)*np, (n-nq)*nq, (n-maxpq)*npq) */ w_opt.ipvt = w_opt.lajac + (*n - Dims.maxpq) * Dims.pq; w_opt.ldiag= w_opt.ipvt + Dims.pq / 2 + 1; w_opt.lqtf = w_opt.ldiag + Dims.pq; w_opt.lwa1 = w_opt.lqtf + Dims.pq; w_opt.lwa2 = w_opt.lwa1 + Dims.pq; w_opt.lwa3 = w_opt.lwa2 + Dims.pq; w_opt.lwa4 = w_opt.lwa3 + Dims.pq; /* lfree = lwa4 + n - minpq */ return; } /* fdcom */ /************************************************************************** ************************************************************************** */ static double pqopt(double *x, double d__, double *w, int *iw, double *min_fnorm) { /* x: double x(n) */ /* w: work array exactly as in main fracdf() */ /* 'const' (but need to pass pointers of these): */ static int modelm = 1; static double factlm = 100.; /* Local variables */ double t, u, slogvk; /* Parameter adjustments */ --w; /* copyright 1991 Department of Statistics, University of Washington * written by Chris Fraley ---------------------------------------------------------------------------- */ fdfilt(x, d__, &w[(0 + (0 + (w_fil.ly << 3))) / 8], &slogvk, &w[(0 + (0 + (w_fil.lamk << 3))) / 8], &w[(0 + (0 + (w_fil.lak << 3))) / 8], &w[(0 + (0 + (w_fil.lvk << 3))) / 8], &w[(0 + (0 + (w_fil.lphi << 3))) / 8], &w[(0 + (0 + (w_fil.lpi << 3))) / 8]); if (gammfd_.igamma != 0) { filtfd_.wnv = machfd_.fltmax; filtfd_.cllf = -machfd_.fltmax; warning("** pqopt() gamma error (%d)", gammfd_.igamma); return machfd_.fltmax; } t = (double) Dims.n; if (Dims.pq == 0) { /* trivial case --- p = q = 0 : */ filtfd_.wnv = F77_CALL(ddot)(&Dims.n, &w[w_fil.ly], &ic__1, &w[w_fil.ly], &ic__1) / t; OP.ifun = 0; OP.igrd = 0; OP.info = -1; } else { /* optimize as an unconstrained optimization problem */ if (modelm == 2) { F77_CALL(dcopy)(&Dims.pq, &c__1, &ic__0, &w[w_opt.ldiag], &ic__1); } if (OP.nopt < 0) { // (never used ??) REprintf("pqopt() -- nopt < 0 case --- should never happen. 
Please report!"); if (Dims.p != 0) { int n_p = Dims.n - Dims.p; lmder1((S_fp)ajp_, n_p, Dims.p, &w[w_opt.lqp + Dims.q], &w[w_opt.la], &w[w_opt.lajac], n_p, TOL.f, TOL.x, TOL.g, OP.maxfun, &w[w_opt.ldiag], modelm, factlm, &OP.info, &OP.ifun, &OP.igrd, iw /* was &w[w_opt.ipvt] */, &w[w_opt.lqtf], &w[w_opt.lwa1], &w[w_opt.lwa2], &w[w_opt.lwa3], &w[w_opt.lwa4], &w[w_fil.ly]); } if (Dims.q != 0) { int n_q = Dims.n - Dims.q; lmder1((S_fp)ajq_, n_q, Dims.q, &w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], n_q, TOL.f, TOL.x, TOL.g, OP.maxfun, &w[w_opt.ldiag], modelm, factlm, &OP.info, &OP.ifun, &OP.igrd, iw /* was &w[w_opt.ipvt] */, &w[w_opt.lqtf], &w[w_opt.lwa1], &w[w_opt.lwa2], &w[w_opt.lwa3], &w[w_opt.lwa4], &w[w_fil.ly]); } } *min_fnorm = lmder1((S_fp)ajqp_, Dims.nm, Dims.pq, &w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, TOL.f, TOL.x, TOL.g, OP.maxfun, &w[w_opt.ldiag], modelm, factlm, &OP.info, &OP.ifun, &OP.igrd, iw /* was &w[w_opt.ipvt] */, &w[w_opt.lqtf], &w[w_opt.lwa1], &w[w_opt.lwa2], &w[w_opt.lwa3], &w[w_opt.lwa4], &w[w_fil.ly]); if (OP.info == 0) { /* 'MINPACK : improper input parameters */ MinPck.iminpk = 10; filtfd_.wnv = machfd_.fltmax; filtfd_.cllf = -machfd_.fltmax; return machfd_.fltmax; } if(OP.info== 5) MinPck.jminpk = 5; /* MINPACK : function evaluation limit reached */ if(OP.info== 6) MinPck.jminpk = 6; /* MINPACK : ftol is too small */ if(OP.info== 7) MinPck.jminpk = 7; /* MINPACK : xtol is too small */ if(OP.info== 8) MinPck.jminpk = 8; /* MINPACK : gtol is too small */ /* call daxpy( npq, (-one), w(lpq), 1, w(lqp), 1 delpq = sqrt(ddot( npq, w(lqp), 1, w(lqp), 1)) pqnorm = sqrt(ddot( npq, w(lpq), 1, w(lpq), 1)) */ filtfd_.wnv = *min_fnorm * *min_fnorm / (double) (Dims.nm - 1); } u = t * (log(filtfd_.wnv) + 2.8378) + slogvk; /* unused: BIC = u + (double) (Dims.p + Dims.q + 1) * log(t); */ filtfd_.cllf = -u / 2.; return u / 2; } /* End pqopt() */ /*************************************************************************** */ void fdfilt(double *x, double d__, /* -> output */ double *y, double *slogvk, /* using */ double *amk, double *ak, double *vk, double *phi, double *pi) { /* called as fdfilt( x, d, w(ly), slogvk, w(lamk), w(lak), w(lvk), w(lphi), w(lpi)) float x(n) double precision y(n), amk(n), ak(n) double precision vk(M), phi(M), pi(M) ************************************************************************** input : x float original time series d double estimated value of d output : y double filtered series slogvk double the sum of the logarithms of the vk notes : y can use the same storage as either ak or amk phi and pi can use the same storage can be arranged so that phi, pi and vk share the same storage MM: Which filtering exactly ???? -- --> look at ./fdsim.c which is similar (but simpler) and ../filters.R ************************************************************************** copyright 1991 Department of Statistics, University of Washington written by Chris Fraley -----------------------------------------------------------------------*/ /* System generated locals */ double d__1; /* Local variables */ int j, k, km, mcap; double r__, s, t, u, v, z__, g0; /* Parameter adjustments */ --pi; --phi; --vk; --ak; --amk; --y; --x; /* Function Body */ mcap = imin2(Dims.m, Dims.n); /* calculate amk(k), vk(k), and ak(k) for k=1,n (see W522-4 for notation). */ /* k = 1 */ amk[1] = 0.; ak[1] = 1.; /* k = 2 ; initialize phi(1) */ z__ = d__ / (1. - d__); amk[2] = z__ * x[1]; ak[2] = 1. - z__; phi[1] = z__; d__1 = 1. 
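/* (the next few statements compute g0 = Gamma(1-2d) / Gamma(1-d)^2,
 *  the lag-0 autocovariance of an ARFIMA(0,d,0) process with unit
 *  innovation variance) */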
- d__; t = dgamr_(&d__1); if (gammfd_.igamma != 0) { return; } d__1 = 1. - d__ * 2.; g0 = dgamma_(&d__1) * (t * t); if (gammfd_.igamma != 0) { return; } vk[1] = g0; vk[2] = g0 * (1. - z__ * z__); /* k = 3, mcap */ for (k = 3; k <= mcap; ++k) { km = k - 1; t = (double) km; u = t - d__; /* calculate phi() and vk() using the recursion formula on W498 */ for (j = 1; j <= (km - 1); ++j) { s = t - (double) j; phi[j] *= t * (s - d__) / (u * s); } v = d__ / u; phi[km] = v; vk[k] = vk[km] * (1. - v * v); /* form amk(k) and ak(k) */ u = 0.; v = 1.; for (j = 1; j <= km; ++j) { t = phi[j]; u += t * x[k - j]; v -= t; } amk[k] = u; ak[k] = v; } /* k = mcap+1, n */ if (Dims.m < Dims.n) { /* i.e. mcap = min(M,n) != n */ /* calculate pi(j), j = 1,mcap */ pi[1] = d__; s = d__; for (j = 2; j <= mcap; ++j) { u = (double) j; t = pi[j - 1] * ((u - 1. - d__) / u); s += t; pi[j] = t; } s = 1. - s; r__ = 0.; u = (double) mcap; t = u * pi[mcap]; for (k = mcap+1; k <= Dims.n; ++k) { km = k - mcap; z__ = 0.; for (j = 1; j <= mcap; ++j) { z__ += pi[j] * x[k - j]; } if (r__ == 0.) { amk[k] = z__; ak[k] = s; } else { v = t * (1. - pow(u / k, d__)) / d__; amk[k] = z__ + v * r__ / ((double) km - 1.); ak[k] = s - v; } r__ += x[km]; } } /* form muhat - see formula on W523. */ r__ = 0.; s = 0.; for (k = 1; k <= (Dims.n); ++k) { t = ak[k]; u = (x[k] - amk[k]) * t; v = t * t; if (k <= mcap) { z__ = vk[k]; u /= z__; v /= z__; } r__ += u; s += v; } filtfd_.hatmu = r__ / s; /* form filtered version */ s = 0.; for (k = 1; k <= mcap; ++k) s += log(vk[k]); *slogvk = s; s = 0.; for (k = 1; k <= (Dims.n); ++k) { t = x[k] - amk[k] - filtfd_.hatmu * ak[k]; if (k <= mcap) t /= sqrt(vk[k]); s += t; y[k] = t; } if (Dims.pq == 0) { return; } t = (double) Dims.n; u = z__ / t; for (k = 1; k <= Dims.n; ++k) y[k] -= u; return; } /* fdfilt */ /**************************************************************************** *****************************************************************************/ /* Passed to lmder1() minimizer, but also called from * hesdpq() in ./fdhess.c : */ void ajqp_(double *qp, double *a, double *ajac, int lajac, int op_code, double *y) { /* System generated locals */ int ajac_dim1, ajac_offset; /* Local variables */ static int i, k, l; static double s, t; static int km; /* double precision qp(npq), a(nm), ajac(nm,npq), y(n) copyright 1991 Department of Statistics, University of Washington written by Chris Fraley -------------------------------------------------------------------------- Parameter adjustments */ --qp; --a; ajac_dim1 = lajac; ajac_offset = 1 + ajac_dim1; ajac -= ajac_offset; --y; if (op_code == 1) { /* objective calculation */ for (k = Dims.maxpq1; k <= (Dims.n); ++k) { km = k - Dims.maxpq; t = 0.; if (Dims.p != 0) { for (l = 1; l <= (Dims.p); ++l) { t -= qp[Dims.q + l] * y[k - l]; } } s = 0.; if (Dims.q != 0) { for (l = 1; l <= (Dims.q); ++l) { if (km <= l) break; s += qp[l] * a[km - l]; } } s = y[k] + (t + s); if (fabs(s) <= mauxfd_.bignum) { a[km] = s; } else { a[km] = sign(s) * mauxfd_.bignum; } } ++OP.nfun; } else if (op_code == 2) { /* jacobian calculation */ for (i = 1; i <= (Dims.pq); ++i) { for (k = Dims.maxpq1; k <= (Dims.n); ++k) { km = k - Dims.maxpq; t = 0.; if (Dims.q != 0) { for (l = 1; l <= (Dims.q); ++l) { if (km <= l) break; t += qp[l] * ajac[km - l + i * ajac_dim1]; } } if (i <= Dims.q) { if (km > i) { s = a[km - i] + t; } else { s = t; } } else { s = -y[k - (i - Dims.q)] + t; } if (fabs(s) <= mauxfd_.bignum) { ajac[km + i * ajac_dim1] = s; } else { ajac[km + i * ajac_dim1] = sign(s) * 
mauxfd_.bignum; } } } ++OP.ngrd; } else { // invalid op_code warning("ajqp_(): invalid op_code = %d", op_code); } return; } /* ajqp_ */ /**************************************************************************** ****************************************************************************/ static void ajp_(double *p, double *a, double *ajac, int lajac, int op_code, double *y) /* p(np), a(nm), ajac(nm,npq), y(n) */ { /* copyright 1991 Department of Statistics, University of Washington written by Chris Fraley ------------------------------------------------------------------------- */ /* Local variables */ int i, k; /* Parameter adjustments */ --p; --a; --y; /* Function Body */ if (op_code == 1) { /* objective calculation */ if (Dims.p == 0) { } for (k = Dims.p + 1; k <= (Dims.n); ++k) { double t = 0; for (i = 1; i <= (Dims.p); ++i) t -= p[i] * y[k - i]; a[k - Dims.p] = y[k] + t; } } else if (op_code == 2) { /* jacobian calculation */ /* L200: */ /* Matrix 1-indexing adjustments (System generated): */ int ajac_dim1 = lajac; ajac -= (1 + ajac_dim1); for (i = 1; i <= Dims.p; ++i) for (k = Dims.p + 1; k <= (Dims.n); ++k) ajac[k - Dims.p + i * ajac_dim1] = - y[k - i]; } return; } /* ajp_ **************************************************************************** ****************************************************************************/ static void ajq_(double *qp, double *a, double *ajac, int lajac, int op_code, double *y) /* double precision qp(npq), a(nm), ajac(nm,npq), y(n) */ { /* copyright 1991 Department of Statistics, University of Washington written by Chris Fraley ------------------------------------------------------------------- */ /* Local variables */ int i, k, l, km; double s, t; /* Parameter adjustments */ --qp; --a; --y; if (op_code == 1) { /*--- objective calculation ---*/ if (Dims.q == 0) return; for (k = Dims.maxpq1; k <= (Dims.n); ++k) { km = k - Dims.maxpq; t = 0.; if (Dims.p != 0) { for (l = 1; l <= (Dims.p); ++l) { t -= qp[Dims.q + l] * y[k - l]; } } s = 0.; if (Dims.q != 0) { for (l = 1; l <= (Dims.q); ++l) { if (km <= l) break; s += qp[l] * a[km - l]; } } a[km] = y[k] + (t + s); } ++OP.nfun; } else if (op_code == 2) { /*--- jacobian calculation ---*/ /* L200: */ /* Matrix 1-indexing adjustments (System generated): */ int ajac_dim1 = lajac; ajac -= (1 + ajac_dim1); for (i = 1; i <= (Dims.pq); ++i) { for (k = Dims.maxpq1; k <= (Dims.n); ++k) { km = k - Dims.maxpq; t = 0.; if (Dims.q != 0) { for (l = 1; l <= (Dims.q); ++l) { if (km <= l) break; t += qp[l] * ajac[km - l + i * ajac_dim1]; } } if (i <= Dims.q) { if (km > i) { ajac[km + i * ajac_dim1] = a[km - i] + t; } else { ajac[km + i * ajac_dim1] = t; } } else { ajac[km + i * ajac_dim1] = -y[k - (i - Dims.q)] + t; } } } ++OP.ngrd; } return; } /* ajq_ */ fracdiff/src/fdhess.c0000644000176200001440000004037113345443770014244 0ustar liggesusers/*-*- mode: C; kept-old-versions: 12; kept-new-versions: 20; -*- * * fdhess.f -- translated by f2c (version 20031025). 
* and produced by f2c-clean,v 1.10 2002/03/28 16:37:27 maechler * * and manually pretty edited by Martin Maechler, 2004-10-01 */ #include #include "fracdiff.h" /* ddot(), daxpy(), dcopy(), dscal() : */ #include /* dsvdc: */ #include /*----------------------------------------------------------- * local to this file: */ static void hesdpq(double *, double, double *, double *, double *); static void hesspq_(double *qp, double *a, double *ajac, int *lajac, double *h__, int *lh, double *aij, double *g); static void invsvd_(double *, double *, int *, double *, int *, double *, int *); static void gradpq(double *g, double a[], double ajac[], int l_ajac); /* Common Block Declarations --- included as "extern" */ #define FD_EXTERNAL extern #include "mach_comm.h" /*-> machfd_ */ #include "maux_comm.h" /*-> mauxfd_ */ #include "gamm_comm.h" /*-> gammfd_ */ #include "hess_comm.h" /*-> Dims, filtfd_, hessfd_, w_fil, w_opt */ /* Table of constant values */ static int c__0 = 0; static int c__1 = 1; static double c_0d = 0.; static double c_m1 = -1.; /******************************************************************************* *******************************************************************************/ /* Called from R: Analytic Hessian with respect to p and q variables : */ void fdhpq(double *h, int *lh, double *w) { /* double precision H(lH, pq1) copyright 1991 Department of Statistics, University of Washington written by Chris Fraley ----------------------------------------------------------------------------- Parameter adjustments */ --w; hesspq_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], &Dims.nm, h, lh, &w[w_opt.lwa4], &w[w_opt.lwa1]); /* call dcopy( pq1, zero, 0, H(1,1), lH) */ /* call dcopy( pq , zero, 0, H(2,1), 1) */ return; } /* fdhpq */ /******************************************************************************* *******************************************************************************/ /* called from R : */ void fdcov(double *x, double *d__, double *hh, double *hd, double *cov, int *lcov, double *cor, int *lcor, double *se, double *w, int *info) { /* float x(n) double precision d, hh, hd(pq1), cov(lcov,pq1), cor(lcor,pq1), se(pq1), w(*) copyright 1991 Department of Statistics, University of Washington written by Chris Fraley ----------------------------------------------------------------------------*/ const int c__11 = 11; int i, j, k, le, ls, lu, lv, lwork, pq1 = Dims.pq1; double temp; /* Parameter adjustments */ int cov_dim1, cov_offset, cor_dim1, cor_offset; cov_dim1 = *lcov; cov_offset = 1 + cov_dim1; cov -= cov_offset; cor_dim1 = *lcor; cor_offset = 1 + cor_dim1; cor -= cor_offset; --se; --w; hesdpq(x, *d__, hh, hd, &w[1]); /* ====== ^^ */ F77_CALL(dcopy)(&pq1, hd, &c__1, &cov[cov_offset], lcov); gammfd_.igamma = 0; gammfd_.jgamma = 0; /* hessfd_.ksvd = 0; */ hessfd_.kcov = 0; hessfd_.kcor = 0; *info = 0; for (i = 1; i <= pq1; ++i) { for (j = i + 1; j <= pq1; ++j) { cov[j + i * cov_dim1] = cov[i + j * cov_dim1]; } } ls = w_fil.ly; lu = ls + pq1 + 1; lv = lu + pq1 * pq1; le = lv + pq1 * pq1; lwork = le + pq1; /* lfree = lwork + pq1 */ /*Linpack: dsvdc(x, ldx, n,p, s, e,u,ldu, v,ldv, work, job,info) */ F77_CALL(dsvdc)(&cov[cov_offset], lcov, &pq1, &pq1, &w[ls], &w[le], &w[lu], &pq1, &w[lv], &pq1, &w[lwork], (int*)&c__11, info); if (*info != 0) { F77_CALL(dcopy)(&pq1, &c_0d, &c__0, &se[1], &c__1); for (j = 1; j <= pq1; ++j) { F77_CALL(dcopy)(&pq1, &c_0d, &c__0, &cov[j * cov_dim1 + 1], &c__1); } /* hessfd_.ksvd = 1; */ *info = 3; return; } invsvd_(&w[ls], &w[lu], &pq1, 
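/* (invsvd_() below inverts the SVD-factored Hessian, with a sign change,
 *  so that cov = solve(-H), as noted in R/fracdiff.R) */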
&w[lv], &pq1, &cov[cov_offset], lcov); for (i = 1; i <= pq1; ++i) { for (j = i + 1; j <= pq1; ++j) { cov[j + i * cov_dim1] = cov[i + j * cov_dim1]; } } temp = 1.; for (j = 1; j <= pq1; ++j) { if (cov[j + j * cov_dim1] > 0.) { se[j] = sqrt(cov[j + j * cov_dim1]); } else { temp = fmin2(temp, cov[j + j * cov_dim1]); se[j] = 0.; } } if (temp == 1.) { double d__1; for (k = 1; k <= pq1; ++k) { F77_CALL(dcopy)(&k, &cov[k * cov_dim1 + 1], &c__1, &cor[k * cor_dim1 + 1], &c__1); } for (i = 1; i <= pq1; ++i) { int i2 = pq1 - i + 1; d__1 = 1. / se[i]; F77_CALL(dscal)(&i2, &d__1, &cor[i + i * cor_dim1], lcor); } for (j = 1; j <= pq1; ++j) { d__1 = 1. / se[j]; F77_CALL(dscal)(&j, &d__1, &cor[j * cor_dim1 + 1], &c__1); } } else { /* cov() contains non-positive diagonal entry */ hessfd_.kcor = 1; for (j = 1; j <= pq1; ++j) { F77_CALL(dcopy)(&pq1, &c_0d, &c__0, &cor[j * cor_dim1 + 1], &c__1); } } for (i = 1; i <= pq1; ++i) for (j = i + 1; j <= pq1; ++j) cor[j + i * cor_dim1] = cor[i + j * cor_dim1]; if (gammfd_.igamma != 0) *info = 4; if (gammfd_.jgamma != 0) *info = 1; /* if (hessfd_.ksvd != 0) *info = 3; */ if (hessfd_.kcov != 0) *info = 2; /* error in invsvd() */ if (hessfd_.kcor != 0) *info = 3; return; } /* fdcov */ /****************************************************************************** *******************************************************************************/ static void invsvd_(double *s, double *u, int *lu, double *v, int *lv, double *cov, int *lcov) { /* double precision s(pq1), u(lu,pq1), v(lv,pq1), cov(lcov,pq1) copyright 1991 Department of Statistics, University of Washington written by Chris Fraley ---------------------------------------------------------------------------*/ /* System generated locals */ int u_dim1, u_offset, v_dim1, v_offset, cov_dim1, cov_offset; double d__1; /* Local variables */ int i__, j, k, krank, pq1 = Dims.pq1; double ss; /* Parameter adjustments */ --s; u_dim1 = *lu; u_offset = 1 + u_dim1; u -= u_offset; v_dim1 = *lv; v_offset = 1 + v_dim1; v -= v_offset; cov_dim1 = *lcov; cov_offset = 1 + cov_dim1; cov -= cov_offset; /* Function Body */ krank = pq1; for (i__ = 1; i__ <= pq1; ++i__) { ss = s[i__]; for (j = 1; j <= pq1; ++j) { if (ss < 1.) { if (fabs(u[i__ + j * u_dim1]) > ss * machfd_.fltmax) { krank = i__ - 1; hessfd_.kcov = 1; goto L100; } } } } L100: for (k = 1; k <= pq1; ++k) { F77_CALL(dcopy)(&k, &c_0d, &c__0, &cov[k * cov_dim1 + 1], &c__1); } if (krank == 0) { return; } /* do k = 1, pq1 */ /* do i = 1, pq1 */ /* do j = i, pq1 */ /* H(i,j) = H(i,j) + s(k)*u(i,k)*v(j,k) */ /* end do */ /* end do */ /* end do */ /* do k = 1, pq1 */ /* ss = s(k) */ /* do j = 1, pq1 */ /* call daxpy( j, ss*v(j,k), u(1,k), 1, H(1,j), 1) */ /* end do */ /* end do */ for (k = 1; k <= krank; ++k) { ss = -1. / s[k]; for (j = 1; j <= pq1; ++j) { d__1 = ss * u[j + k * u_dim1]; F77_CALL(daxpy)(&j, &d__1, &v[k * v_dim1 + 1], &c__1, &cov[j * cov_dim1 + 1], &c__1); } } return; } /* invsvd_ ****************************************************************************** *****************************************************************************/ /* analytic Hessian with respect to p and q variables */ void hesspq_(double *qp, double *a, double *ajac, int *lajac, /* output: h[.,.], aij[.], g[.] 
: */ double *h__, int *lh, double *aij, double *g) { /* double precision qp(pq), a(nm), ajac(nm,pq) double precision H(lH,pq1), aij(nm), g(pq) copyright 1991 Department of Statistics, University of Washington written by Chris Fraley ----------------------------------------------------------------------------*/ int i, j, k, l, km; double s, t, u, fac; int n = Dims.n, p = Dims.p, q = Dims.q; /* Parameter adjustments */ int ajac_dim1 = *lajac, ajac_offset; int h_dim1 = *lh; --qp; ajac_offset = 1 + ajac_dim1; ajac -= ajac_offset; --aij; --g; fac = 1. / (filtfd_.wnv * (double) (Dims.nm - 1)); if (q != 0 && p != 0) { for (k = 1; k <= Dims.pq; ++k) { g[k] = F77_CALL(ddot)(&Dims.nm, a, &c__1, &ajac[k * ajac_dim1 + 1], &c__1); } for (i = 1; i <= p; ++i) { int i_aj = (q + i)* ajac_dim1; u = g[q + i]; for (j = 1; j <= q; ++j) { u *= g[j]; for (k = Dims.maxpq1; k <= n; ++k) { km = k - Dims.maxpq; t = 0.; for (l = 1; l < km && l <= q; ++l) t += qp[l] * aij[km - l]; aij[km] = (km > j) ? ajac[km - j + i_aj] + t : t; } s = F77_CALL(ddot)(&Dims.nm, &ajac[i_aj + 1], &c__1, &ajac[j * ajac_dim1 + 1], &c__1); t = F77_CALL(ddot)(&Dims.nm, a, &c__1, &aij[1], &c__1); h__[i + (p + j) * h_dim1] = - n * (s + t - 2 * fac * u) * fac; } } } if (q != 0) { for (i = 1; i <= q; ++i) { int i_aj = i * ajac_dim1; u = g[i]; for (j = i; j <= q; ++j) { int j_aj = j * ajac_dim1; u *= g[j]; for (k = Dims.maxpq1; k <= n; ++k) { km = k - Dims.maxpq; t = 0.; for (l = 1; l < km && l <= q; ++l) t += qp[l] * aij[km - l]; s = 0.; if (km > i) s += ajac[km - i + j_aj]; if (km > j) s += ajac[km - j + i_aj]; aij[km] = s + t; } s = F77_CALL(ddot)(&Dims.nm, &ajac[i_aj + 1], &c__1, &ajac[j_aj + 1], &c__1); t = F77_CALL(ddot)(&Dims.nm, a, &c__1, &aij[1], &c__1); h__[p + i + (p + j) * h_dim1] = -n * (s + t - 2 * fac * u) * fac; } } } if (p != 0) { for (i = 1; i <= p; ++i) { u = g[q + i]; for (j = i; j <= p; ++j) { u = g[q + j] * u; /* do k = maxpq1, n */ /* km = k - maxpq */ /* t = zero */ /* if (nq .ne. 0) then */ /* do l = 1, nq */ /* if (km .le. l) goto 303 */ /* t = t + qp(l)*aij(km-l) */ /* end do */ /* end if */ /* 303 continue */ /* aij(km) = t */ /* end do */ /* t = ddot( nm, a , 1, aij , 1) */ s = F77_CALL(ddot)(&Dims.nm, &ajac[(q+ i)*ajac_dim1 + 1], &c__1, &ajac[(q+ j)*ajac_dim1 + 1], &c__1); /* H(i+1,j+1) = -dble(n)*((s + t) - two*fac*u)*fac */ h__[i + (j) * h_dim1] = - n * (s - 2 * fac * u) * fac; } } } return; } /* hesspq_ */ /****************************************************************************** *****************************************************************************/ void hesdpq(double *x, double d_, double *hh, double *hd, double *w) { /* float x(n) double precision d, hh, hd(pq1), w(*) * copyright 1991 Department of Statistics, University of Washington written by Chris Fraley ---------------------------------------------------------------------------*/ double fa, fb, slogvk, d__1; /* Parameter adjustments */ --w; /* Function Body */ if (*hh <= 0.) { *hh = (fabs(filtfd_.cllf) + 1.) * mauxfd_.epspt5; } if(*hh > 0.1) *hh = 0.1; if (d_ - *hh > 0.) { fdfilt(x, d_ - *hh, &w[w_fil.ly], &slogvk, &w[w_fil.lamk], &w[w_fil.lak], &w[w_fil.lvk], &w[w_fil.lphi], &w[w_fil.lpi]); if (Dims.pq != 0) { ajqp_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, 1, &w[w_fil.ly]); ajqp_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, 2, &w[w_fil.ly]); gradpq(&w[w_opt.lwa1], &w[w_opt.la], &w[w_opt.lajac],Dims.nm); filtfd_.wnv = F77_CALL(ddot)(&Dims.nm, &w[w_opt.la], &c__1, &w[w_opt.la], &c__1); d__1 = 1. 
/ filtfd_.wnv; F77_CALL(dscal)(&Dims.pq, &d__1, &w[w_opt.lwa1], &c__1); filtfd_.wnv /= (Dims.nm - 1); } else { filtfd_.wnv = F77_CALL(ddot)(&Dims.nm, &w[w_fil.ly], &c__1, &w[w_fil.ly], &c__1) / (Dims.nm - 1); } fa = -(Dims.n * (log(filtfd_.wnv) + 2.8378) + slogvk) / 2.; if (d_ + *hh < .5) { fdfilt(x, d_ + *hh, &w[w_fil.ly], &slogvk, &w[w_fil.lamk], &w[w_fil.lak], &w[w_fil.lvk], &w[w_fil.lphi], &w[w_fil.lpi]); if (Dims.pq != 0) { ajqp_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, 1, &w[w_fil.ly]); ajqp_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, 2, &w[w_fil.ly]); gradpq(&w[w_opt.lwa2], &w[w_opt.la], &w[w_opt.lajac], Dims.nm); filtfd_.wnv = F77_CALL(ddot)(&Dims.nm, &w[w_opt.la], &c__1, &w[w_opt.la], &c__1); d__1 = 1. / filtfd_.wnv; F77_CALL(dscal)(&Dims.pq, &d__1, &w[w_opt.lwa2], &c__1); filtfd_.wnv /= (Dims.nm - 1); } else { filtfd_.wnv = F77_CALL(ddot)(&Dims.nm, &w[w_fil.ly], &c__1, &w[w_fil.ly], &c__1) / (Dims.nm - 1); } fb = -(Dims.n * (log(filtfd_.wnv) + 2.8378) + slogvk)/ 2.; hd[0] = (fa + fb - filtfd_.cllf * 2.) / (*hh * *hh); } else { fdfilt(x, d_ - *hh * 2., &w[w_fil.ly], &slogvk, &w[w_fil.lamk], &w[w_fil.lak], &w[w_fil.lvk], &w[w_fil.lphi], &w[w_fil.lpi]); if (Dims.pq != 0) { ajqp_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, 1, &w[w_fil.ly]); ajqp_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, 2, &w[w_fil.ly]); gradpq(&w[w_opt.lwa2], &w[w_opt.la], &w[w_opt.lajac], Dims.nm); filtfd_.wnv = F77_CALL(ddot)(&Dims.nm, &w[w_opt.la], &c__1, &w[w_opt.la], &c__1); d__1 = 1. / filtfd_.wnv; F77_CALL(dscal)(&Dims.pq, &d__1, &w[w_opt.lwa2], &c__1); filtfd_.wnv /= (Dims.nm - 1); } else { filtfd_.wnv = F77_CALL(ddot)(&Dims.nm, &w[w_fil.ly], &c__1, &w[w_fil.ly], &c__1) / (Dims.nm - 1); } fb = -(Dims.n * (log(filtfd_.wnv) + 2.8378) + slogvk) / 2.; hd[0] = (filtfd_.cllf + fb - fa * 2.) / (*hh * 2. * *hh); } } else { /* (d_ <= *hh ) : */ fdfilt(x, d_ + *hh, &w[w_fil.ly], &slogvk, &w[w_fil.lamk], &w[w_fil.lak], &w[w_fil.lvk], &w[w_fil.lphi], &w[w_fil.lpi]); if (Dims.pq != 0) { ajqp_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, 1, &w[w_fil.ly]); ajqp_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, 2, &w[w_fil.ly]); gradpq(&w[w_opt.lwa1], &w[w_opt.la], &w[w_opt.lajac],Dims.nm); filtfd_.wnv = F77_CALL(ddot)(&Dims.nm, &w[w_opt.la], &c__1, &w[w_opt.la], &c__1); d__1 = 1. / filtfd_.wnv; F77_CALL(dscal)(&Dims.pq, &d__1, &w[w_opt.lwa1], &c__1); filtfd_.wnv /= (Dims.nm - 1); } else { filtfd_.wnv = F77_CALL(ddot)(&Dims.nm, &w[w_fil.ly], &c__1, &w[w_fil.ly], &c__1) / (Dims.nm - 1); } fa = -(Dims.n * (log(filtfd_.wnv) + 2.8378) + slogvk) / 2.; fdfilt(x, d_ + *hh * 2., &w[w_fil.ly], &slogvk, &w[w_fil.lamk], &w[w_fil.lak], &w[w_fil.lvk], &w[w_fil.lphi], &w[w_fil.lpi]); if (Dims.pq != 0) { ajqp_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, 1, &w[w_fil.ly]); ajqp_(&w[w_opt.lqp], &w[w_opt.la], &w[w_opt.lajac], Dims.nm, 2, &w[w_fil.ly]); gradpq(&w[w_opt.lwa1], &w[w_opt.la], &w[w_opt.lajac],Dims.nm); filtfd_.wnv = F77_CALL(ddot)(&Dims.nm, &w[w_opt.la], &c__1, &w[w_opt.la], &c__1); d__1 = 1. / filtfd_.wnv; F77_CALL(dscal)(&Dims.pq, &d__1, &w[w_opt.lwa1], &c__1); filtfd_.wnv /= (Dims.nm - 1); } else { filtfd_.wnv = F77_CALL(ddot)(&Dims.nm, &w[w_fil.ly], &c__1, &w[w_fil.ly], &c__1) / (Dims.nm - 1); } fb = -(Dims.n * (log(filtfd_.wnv) + 2.8378) + slogvk) / 2.; hd[0] = (filtfd_.cllf + fb - fa * 2.) / (*hh * 2. 
* *hh); } if (Dims.pq == 0) { return; } F77_CALL(daxpy)(&Dims.pq, &c_m1, &w[w_opt.lwa2], &c__1, &w[w_opt.lwa1], &c__1); d__1 = Dims.n / (*hh * 2.); F77_CALL(dscal)(&Dims.pq, &d__1, &w[w_opt.lwa1], &c__1); F77_CALL(dcopy)(&Dims.pq, &w[w_opt.lwa1], &c__1, &hd[+1], &c__1); return; } /* hesdpq */ /****************************************************************************** *****************************************************************************/ void gradpq(double *g, double a[], double ajac[], int l_ajac) { /* double precision g(pq), a(nm), ajac(nm,pq) copyright 1991 Department of Statistics, University of Washington written by Chris Fraley -----------------------------------------------------------------------------*/ int i, j; for (i = 0; i < Dims.p; ++i) g[i] = F77_CALL(ddot)(&Dims.nm, a, &c__1, &ajac[(Dims.q + i) * l_ajac], &c__1); for (j = 0; j < Dims.q; ++j) g[Dims.p + j] = F77_CALL(ddot)(&Dims.nm, a, &c__1, &ajac[j * l_ajac], &c__1); return; } /* gradpq */ fracdiff/R/0000755000176200001440000000000013610314772012223 5ustar liggesusersfracdiff/R/diffseries.R0000644000176200001440000000445613610314772014502 0ustar liggesusers#### Fractional differentiation -- the inverse of fractional integration #### -------------------------- ---------------------------------------- ## by Valderio Reisen -- Dec.2005-- ## MM: This is 'not optimal' -- and I may have better in ../filters.R ? <<< FIXME >>> diffseries0 <- function(x, d) { x <- as.data.frame(x) names(x) <- "series" x <- x$series if (NCOL(x) > 1) stop("only implemented for univariate time series") if (any(is.na(x))) stop("NAs in x") n <- length(x) stopifnot(n >= 2) x <- x - mean(x) PI <- numeric(n) PI[1] <- -d for (k in 2:n) { PI[k] <- PI[k-1]*(k - 1 - d)/k } ydiff <- x for (i in 2:n) { ydiff[i] <- x[i] + sum(PI[1:(i-1)]*x[(i-1):1]) } ## return numeric! ydiff } ## From: alexios ghalanos ## Date: Mon, 13 Jan 2014 19:58:48 +0000 ## To: ## Subject: fracdiff ## Dear Martin, ## Just a quick note, should it be of interest, that a very fast algorithm ## for diffseries was recently published (1st version, 2013; 2nd: March 2014): ## http://qed.econ.queensu.ca/working_papers/papers/qed_wp_1307.pdf‎ (ok, but wget fails!) ## (MM: This is now published ===> see ../man/diffseries.Rd) ## Page 6 contains the R code and page 7 the benchmark timings. 
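## [not part of the quoted mail -- a small consistency-check sketch with
##  made-up values: the recursive diffseries0() above and the FFT-based
##  diffseries() below should agree up to numerical error, e.g.
##      x <- fracdiff.sim(500, d = 0.3)$series
##      all.equal(diffseries0(x, d = 0.3), diffseries(x, d = 0.3)) ]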
## Quick check (win 7 x64, R 3.02) shows a large performance boost (for 'large' n): #-------------------------------------- ## library(microbenchmark) ## library(fracdiff) ## memory.long <- fracdiff.sim(8000, d = 0.3) ## mGPH <- fdGPH(memory.long$series) ## Jensen and Nielsen code: ## (slightly improved by MM) diffseries <- function(x, d) { stopifnot((iT <- length(x)) >= 2) x <- x - mean(x) ## <<-- Missing in J+N(2014) np2 <- nextn(iT+iT - 1L)# changed from J+N: also factors 3 and 5 pad <- rep.int(0, np2-iT) k <- seq_len(iT - 1L) b <- c(1, cumprod((k - (d+1))/ k), pad) ## ~= convolve(x, b, type = "filter") : dx <- fft(fft(b) * fft(c(x, pad)), inverse =TRUE)[seq_len(iT)] / np2 Re(dx) } ## microbenchmark(diffseries(memory.long$series, d = mGPH$d), ## diffseries2(memory.long$series, d = mGPH$d)) # Unit: milliseconds # diffseries 852.314992 (median) # diffseries2 3.181065 (median) #------------------------------------------------ ## Best Regards, ## Alexios fracdiff/R/fracdiff.R0000644000176200001440000002532413610314772014120 0ustar liggesusers ### Original file: ### copyright 1991 Department of Statistics, Univeristy of Washington ### Patched by Friedrich.Leisch, for use with R, 22.1.1997 ### fixed & changed by Martin Maechler, since Dec 2003 if(getRversion() < "2.15") paste0 <- function(...) paste(..., sep="") .fdcov <- function(x, d, h, nar, nma, hess, fdf.work) { npq <- as.integer(nar + nma) npq1 <- npq + 1L # integer, too stopifnot(length(di <- dim(hess)) == 2, di == c(npq1, npq1)) fdc <- .C(fdcov, ## --> ../src/fdhess.c x, d, h = as.double(if(missing(h)) -1 else h), hd = double(npq1), cov = hess, npq1, cor = hess, npq1, se = double(npq1), fdf.work, info = integer(1))[c("h","hd", "cov","cor", "se", "info")] f.msg <- if(fdc$info) { msg <- switch(fdc$info, "fdcov problem in gamma function", # 1 "singular Hessian", # 2 ## FIXME improve: different reasons for info = 3 : "unable to compute correlation matrix; maybe change 'h'", # 3 stop("error in gamma function")) # 4 warning(msg, call. = FALSE) msg } else "ok" se.ok <- fdc$info %in% 0:2 nam <- "d" if(nar) nam <- c(nam, paste0("ar", 1:nar)) if(nma) nam <- c(nam, paste0("ma", 1:nma)) dimnames(fdc$cov) <- dn <- list(nam, nam) if(se.ok) dimnames(fdc$cor) <- dn list(msg = f.msg, d = d, nam = nam, h = fdc$h, hd = fdc$hd, se.ok = se.ok, covariance.dpq = fdc$cov, stderror.dpq = if(se.ok) fdc$se, # else NULL correlation.dpq= if(se.ok) fdc$cor) }## end{.fdcov} fracdiff <- function(x, nar = 0, nma = 0, ar = rep(NA, max(nar, 1)), ma = rep(NA, max(nma, 1)), dtol = NULL, drange = c(0, 0.5), h, M = 100, trace = 0) { ## ######################################################################### ## ## x - time series for the ARIMA model ## nar - number of autoregressive parameters ## nma - number of moving average parameters ## ar - initial autoregressive parameters ## ma - initial moving average parameters ## dtol - desired accurcay for d ## by default (and if negative), (4th root of machine precision) ## is used. 
dtol will be changed internally if necessary ## drange - interval over which the likelihood function is to be maximized ## as a function of d ## h - finite difference interval ## M - number of terms in the likelihood approximation ## ## (see Haslett and Raftery 1989) ## ## ######################################################################## cl <- match.call() if(any(is.na(x))) stop("missing values not allowed in time series") if(is.matrix(x) && ncol(x) > 2) stop("multivariate time series not allowed") n <- length(x) if(round(nar) != nar || nar < 0 || round(nma) != nma || nma < 0) stop("'nar' and 'nma' must be non-negative integer numbers") npq <- as.integer(nar + nma) npq1 <- npq + 1L # integer, too lenw <- max(npq + 2*(n + M), 3*n + (n+6)*npq + npq %/% 2 + 1, 31 * 12, ## << added because checked in ../src/fdcore.f (3 + 2*npq1) * npq1 + 1)## << this is *not* checked (there) lenw <- as.integer(lenw) ar[is.na(ar)] <- 0 ma[is.na(ma)] <- 0 if(is.null(dtol)) dtol <- .Machine$double.eps^0.25 # ~ 1.22e-4 ## if dtol < 0: the fortran code will choose defaults tspx <- tsp(x) # Added by RJH. 9 Dec 2019 x <- as.double(x) ## this also initializes "common blocks" that are used in .C(.) calls : fdf <- .C(fracdf, x, n, as.integer(M), as.integer(nar), as.integer(nma), dtol = as.double(dtol), drange = as.double(drange), hood.etc = double(3), d = double(1), ar = as.double(ar), ma = as.double(ma), w = double(lenw), lenw = lenw, iw = integer(npq), ## <<< new int-work array info = as.integer(trace > 0),## <- "verbose" [input] .Machine$double.xmin, .Machine$double.xmax, .Machine$double.neg.eps, .Machine$double.eps)[c("dtol","drange","hood.etc", "d", "ar", "ma", "w", "lenw", "info")] fd.msg <- if(fdf$info) { msg <- switch(fdf$info, stop("insufficient workspace; need ", fdf$lenw, " instead of just ", lenw), # 1 stop("error in gamma function"), # 2 stop("invalid MINPACK input"), # 3 "warning in gamma function", # 4 "C fracdf() optimization failure", # 5 "C fracdf() optimization limit reached") # 6 ## otherwise ## stop("unknown .C(fracdf, *) info -- should not happen") warning(msg, call. = FALSE, immediate. = TRUE) msg } else "ok" if(nar == 0) fdf$ar <- numeric(0) if(nma == 0) fdf$ma <- numeric(0) hess <- .C(fdhpq, hess = matrix(double(1), npq1, npq1), npq1, fdf$w)$hess ## NOTA BENE: The above hess[.,.] is further "transformed", ## well, added to and inverted in fdcov : ## Cov == (-H)^{-1} == solve(-H) ## Note that the following can be "redone" using fracdiff.var() : fdc <- .fdcov(x, fdf$d, h, nar=nar, nma=nma, hess=hess, fdf.work = fdf$w) dimnames(hess) <- dimnames(fdc$covariance.dpq) hess[1, ] <- fdc$hd hess[row(hess) > col(hess)] <- hess[row(hess) < col(hess)] hstat <- fdf[["hood.etc"]] var.WN <- hstat[3] ## Following lines added by RJH. 
9 Dec 2019 diffx <- diffseries(x, d = fdf$d) armafit <- arima(diffx, order = c(length(fdf$ar), 0L, length(fdf$ma)), include.mean = FALSE, fixed = c(fdf$ar, -fdf$ma)) res <- armafit$residuals tsp(res) <- tspx structure(list(log.likelihood = hstat[1], n = n, msg = c(fracdf = fd.msg, fdcov = fdc$msg), d = fdf$d, ar = fdf$ar, ma = fdf$ma, covariance.dpq = fdc$covariance.dpq, fnormMin = hstat[2], sigma = sqrt(var.WN), stderror.dpq = if(fdc$se.ok) fdc$stderror.dpq, # else NULL correlation.dpq= if(fdc$se.ok) fdc$correlation.dpq, h = fdc$h, d.tol = fdf$dtol, M = M, hessian.dpq = hess, length.w = lenw, residuals = res, fitted = x - res, ## by RJH call = cl), class = "fracdiff") } ### FIXME [modularity]: a lot of this is "cut & paste" also in fracdiff() itself ### ----- NOTABLY, now use .fdcov() ! fracdiff.var <- function(x, fracdiff.out, h) { if(!is.numeric(h)) stop("h must be numeric") if(!is.list(fracdiff.out) || !is.numeric(M <- fracdiff.out$M)) stop("invalid ", sQuote("fracdiff.out")) p <- length(fracdiff.out$ar) q <- length(fracdiff.out$ma) n <- length(x) npq <- p + q npq1 <- npq + 1 lwork <- max(npq + 2 * (n + M), 3 * n + (n + 6) * npq + npq %/% 2 + 1, (3 + 2 * npq1) * npq1 + 1) ## Initialize .C(fdcom, n, as.integer(M), (p), (q), as.double(fracdiff.out$log.likelihood), .Machine$double.xmin, .Machine$double.xmax, .Machine$double.neg.eps, .Machine$double.eps) ## Re compute Covariance Matrix: fdc <- .C(fdcov, as.double(x), as.double(fracdiff.out$d), h = as.double(h), hd = double(npq1), cov = as.double(fracdiff.out$hessian.dpq), as.integer(npq1), cor = as.double(fracdiff.out$hessian.dpq), as.integer(npq1), se = double(npq1), as.double(c(fracdiff.out$ma, fracdiff.out$ar, rep(0, lwork))), info = integer(1)) ## FIXME: should be *automatically* same messages as inside fracdiff() above! fracdiff.out$msg <- if(fdc$info) { msg <- switch(fdc$info, "warning in gamma function", "singular Hessian", "unable to compute correlation matrix", stop("error in gamma function")) warning(msg) } else "ok" se.ok <- fdc$info != 0 || fdc$info < 3 ## << FIXME -- illogical!! nam <- "d" if(p) nam <- c(nam, paste0("ar", 1:p)) if(q) nam <- c(nam, paste0("ma", 1:q)) fracdiff.out$h <- fdc$h fracdiff.out$covariance.dpq <- array(fdc$cov, c(npq1,npq1), list(nam,nam)) fracdiff.out$stderror.dpq <- if(se.ok) fdc$se # else NULL fracdiff.out$correlation.dpq <- if(se.ok) array(fdc$cor, c(npq1, npq1)) fracdiff.out$hessian.dpq[1, ] <- fdc$hd fracdiff.out$hessian.dpq[, 1] <- fdc$hd fracdiff.out }## end{ fracdiff.var() } ## MM: Added things for more arima.sim() compatibility. ## really, 'mu' is nonsense since can be done separately (or via 'innov'). 
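## A minimal usage sketch (hypothetical values; kept as a comment so nothing
## is executed at package load): simulate, fit, and fractionally difference --
##     set.seed(1)
##     x  <- fracdiff.sim(n = 1000, ar = 0.4, ma = -0.3, d = 0.25)$series
##     fd <- fracdiff(x, nar = 1, nma = 1)
##     summary(fd); confint(fd)
##     dx <- diffseries(x, d = coef(fd)[["d"]])
## fracdiff.sim() below mirrors arima.sim(): 'innov' / 'start.innov' supply
## the innovations and 'n.start' the burn-in length.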
fracdiff.sim <- function(n, ar = NULL, ma = NULL, d, rand.gen = rnorm, innov = rand.gen(n+q, ...), n.start = NA, backComp = TRUE, allow.0.nstart = FALSE, # <- for back-compatibility start.innov = rand.gen(n.start, ...), ..., mu = 0) { p <- length(ar) q <- length(ma) if(p) { minroots <- min(Mod(polyroot(c(1, -ar)))) if(minroots <= 1) { warning("'ar' part of fracdiff model is not stationary!!") minroots <- 1.01 # -> n.start= 603 by default } } if(is.na(n.start)) n.start <- p + q + ifelse(p > 0, ceiling(6/log(minroots)), 0) if(n.start < p + q && !allow.0.nstart) stop("burn-in 'n.start' must be as long as 'ar + ma'") if(missing(start.innov)) { if(!backComp) force(start.innov) } else if(length(start.innov) < n.start) stop(gettextf("'start.innov' is too short: need %d points", n.start), domain = NA) if(length(innov) < n+q) stop("'innov' must have length >= n + q") y <- c(start.innov[seq_len(n.start)], innov[1:(n+q)]) stopifnot(is.double(y), length(y) == n + q + n.start) if(d < -1/2 || d > 1/2) stop("'d' must be in [-1/2, 1/2]. Consider using cumsum(.) or diff(.) for additional integration or differentiation") ii <- n.start - (if(backComp) 0L else q) + 1:n y <- .C(fdsim, as.integer(n + n.start), (p), (q), as.double(ar), as.double(ma), as.double(d), as.double(mu), y = y, s = double(length(y)), .Machine$double.xmin, .Machine$double.xmax, .Machine$double.neg.eps, .Machine$double.eps)[["s"]][ii] list(series = y, ar = ar, ma = ma, d = d, mu = mu, n.start = n.start) } fracdiff/R/fdSperio.R0000644000176200001440000000263213345443770014132 0ustar liggesusers#### by Valderio Reisen -- Dec.2005-- #### Tweaks by MM ## MM(FIXME): This is "in parallel" to fdGPH() , see ./fdGPH.R fdSperio <- function(x, bandw.exp = 0.5, beta = 0.9) { if(NCOL(x) > 1) stop("only implemented for univariate time series") x <- as.numeric(na.fail(as.ts(x))) if (any(is.na(x))) stop("NAs in x") n <- length(x) ## Compute "smoothed" periodogram -- MM (FIXME): use spec.pgram() ! g <- trunc(n^bandw.exp) j <- 1:g kk <- 1:(n-1) w <- 2*pi*j/n x <- x - mean(x) var.x <- sum(x^2)/n # not /(n-1) cov.x <- numeric(n-1) for (k in kk) cov.x[k] <- sum(x[1:(n-k)] * x[(1+k):n]) / n M <- trunc(n^beta) M2 <- M %/% 2 pw <- numeric(n-1) for (k in kk) { A_k <- k/M pw[k] <- if (k <= M2) 1 - 6*A_k^2 *(1 - A_k) else if (k <= M) 2*(1 - A_k)^3 else 0 } periodogram <- numeric(g) for (i in 1:g) # unscaled (will scale below) periodogram[i] <- var.x + 2*sum(cov.x* pw * cos(w[i]*kk)) pos <- j[periodogram > 0] y.reg <- log(periodogram[pos] / (2*pi)) x.reg <- 2*log(2*sin(w[pos]/2)) ## = log( (2*sin(..)) ^ 2) fit <- lm.fit(cbind(1, x.reg), y.reg) d.GPH <- coef(fit)[["x.reg"]] x.r2 <- sum((x.reg - mean(x.reg))^2) var.d <- (0.539285*M/n)/ x.r2 var.reg <- sum(resid(fit)^2) / ((g - 1) * x.r2) list(d = -d.GPH, sd.as = sqrt(var.d), sd.reg = sqrt(var.reg)) } fracdiff/R/fd-methods.R0000644000176200001440000000777513610314772014420 0ustar liggesusers#### Methods for "fracdiff" objects #### ------------------------------- coef.fracdiff <- function(object, ...) unlist(object[c("d", "ar", "ma")]) vcov.fracdiff <- function(object, ...) object$covariance.dpq ## Lines added by RJH. 9 Dec 2019 residuals.fracdiff <- function(object, ...) object$residuals fitted.fracdiff <- function(object, ...) object$fitted ## End of RJH addition logLik.fracdiff <- function(object, ...) 
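## (returns the maximized log-likelihood with 'df' and 'nobs' attributes,
##  so that e.g. AIC() in summary.fracdiff() works directly on the result)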
{ r <- object$log.likelihood attr(r, "df") <- length(coef(object)) + 1:1 # "+ 1" : sigma^2 attr(r, "nobs") <- attr(r, "nall") <- object$n class(r) <- "logLik" r } print.fracdiff <- function(x, digits = getOption("digits"), ...) { cat("\nCall:\n ", deparse(x$call), "\n") if(any(not.ok <- x$msg != "ok")) cat(sprintf("\n*** Warning during (%s) fit: %30s\n", names(x$msg)[not.ok], x$msg[not.ok])) cat("\nCoefficients:\n") print(coef(x), digits = digits, ...) ## print.default(x, digits = digits, ...)too cheap to be true cat("sigma[eps] =", format(x$sigma), "\n") cat("a list with components:\n") print(names(x), ...) invisible(x) } summary.fracdiff <- function(object, symbolic.cor = FALSE, ...) { ## add a 'coef' matrix (and not much more): cf <- coef(object) se <- object$stderror.dpq cf <- cbind("Estimate" = cf, "Std. Error"= se, "z value" = cf / se, "Pr(>|z|)" = 2 * pnorm(-abs(cf / se))) object$coefficients <- cf # 'long name' such that coef(.) works logl <- logLik(object) object$df <- attr(logl, "df") object$aic <- AIC(logl) object$symbolic.cor <- symbolic.cor ## remove those components we have in 'coef' anyway object$d <- object$ar <- object$ma <- object$stderror.dpq <- NULL class(object) <- "summary.fracdiff" object } print.summary.fracdiff <- function(x, digits = max(3, getOption("digits") - 3), correlation = FALSE, symbolic.cor = x$symbolic.cor, signif.stars = getOption("show.signif.stars"), ...) { cat("\nCall:\n ", deparse(x$call), "\n") if(any(not.ok <- x$msg != "ok")) cat(sprintf("\n*** Warning during (%s) fit: %30s\n", names(x$msg)[not.ok], x$msg[not.ok])) cat("\nCoefficients:\n") printCoefmat(x$coef, digits = digits, signif.stars = signif.stars, ...) cat("sigma[eps] =", format(x$sigma), "\n") cat("[d.tol = ", formatC(x$d.tol),", M = ", x$M,", h = ",formatC(x$h), ## really not much informative: "length.w = ", x$length.w, "]\n", sep='') cat("Log likelihood: ", formatC(x$log.likelihood, digits=digits), " ==> AIC = ", x$aic," [", x$df," deg.freedom]\n", sep='') if (correlation && !is.null(correl <- x$correlation.dpq)) { p <- NCOL(correl) if (p > 1) { cat("\nCorrelation of Coefficients:\n") if (is.logical(symbolic.cor) && symbolic.cor) { print(symnum(correl, abbr.colnames = NULL)) } else { correl <- format(round(correl, 2), nsmall = 2, digits = digits) correl[!lower.tri(correl)] <- "" print(correl[-1, -p, drop = FALSE], quote = FALSE) } } } invisible(x) } ### This and coef.fracdiff() were supplied ## From: Spencer Graves ## To: Melissa Ann Haltuch ## CC: r-help@stat.math.ethz.ch, Martin Maechler ## Subject: Re: [R] fracdiff ## Date: Sun, 23 Jul 2006 03:40:08 +0800 confint.fracdiff <- function(object, parm, level = 0.95, ...) { p <- length(cf <- coef(object)) stopifnot(p >= 1, length(level) == 1, 0 < level, level < 1) se <- object$stderror.dpq pnames <- names(cf) names(se) <- pnames if (missing(parm)) parm <- 1:p else if (is.character(parm)) parm <- match(parm, pnames, nomatch = 0) cf <- cf[parm] se <- se[parm] a <- (1-level)/2 a <- c(a, 1 - a) CI <- cf + outer(se, qnorm(a)) dimnames(CI)[[2]] <- paste(format(100*a), "%") CI } fracdiff/R/fdGPH.R0000644000176200001440000000226613345443770013312 0ustar liggesusers#### by Valderio Reisen -- Dec.2005-- #### Tweaks by MM ## MM(FIXME): This is "in parallel" to fdSperio() , see ./fdSperio.R fdGPH <- function(x, bandw.exp = 0.5) { if(NCOL(x) > 1) stop("only implemented for univariate time series") x <- as.numeric(na.fail(as.ts(x))) if (any(is.na(x))) stop("NAs in x") n <- length(x) ## Compute "smoothed" periodogram -- MM (FIXME): use spec.pgram() ! 
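    ## GPH log-periodogram regression (descriptive note): using the first
    ## g = trunc(n^bandw.exp) Fourier frequencies w_j = 2*pi*j/n, regress
    ## log I(w_j) on 2*log(2*sin(w_j/2)); the estimate is d = -slope, with
    ## asymptotic variance pi^2 / (6 * sum((x.reg - mean(x.reg))^2)).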
g <- trunc(n^bandw.exp) j <- 1:g kk <- 1:(n-1) w <- 2*pi*j/n x <- x - mean(x) var.x <- sum(x^2)/n # not /(n-1) cov.x <- numeric(n-1L) for (k in kk) cov.x[k] <- sum(x[1:(n-k)] * x[(1+k):n]) / n periodogram <- numeric(g) for (i in 1:g) # unscaled (will scale below) periodogram[i] <- var.x + 2*sum(cov.x * cos(w[i]*kk)) pos <- j[periodogram > 0] y.reg <- log(periodogram[pos] / (2*pi)) x.reg <- 2*log(2*sin(w[pos]/2)) ## = log( (2*sin(..)) ^ 2) fit <- lm.fit(cbind(1, x.reg), y.reg) d.GPH <- coef(fit)[["x.reg"]] x.r2 <- sum((x.reg - mean(x.reg))^2) var.d <- pi^2 / (6*x.r2) var.reg <- sum(resid(fit)^2) / ((g - 1) * x.r2) ## return list(d = -d.GPH, sd.as = sqrt(var.d), sd.reg = sqrt(var.reg)) } fracdiff/MD50000644000176200001440000000364514556665642012361 0ustar liggesusers2a62b3146b00498f5f943c31c7fc0013 *ChangeLog fb30fa10b0cacb760fd5525718c3fd7b *DESCRIPTION 3697b0d3de7479c51c93d61fe20f68b3 *NAMESPACE f4f0fc84ffeebfaf33a2dea0dfe23290 *R/diffseries.R 1f8574cf20a4f34ef2018faee0c009fe *R/fd-methods.R 034172c45e04214edb6dab3190d4e5f4 *R/fdGPH.R 702802b3e795a3e8397810100d80feda *R/fdSperio.R 7c03205f5af65e87838245890fbef68f *R/fracdiff.R cc4b16ee40609adbf6d5eea91aa31668 *README 05c4ec7d8644cbd6c467db3108ec2f0c *TODO 8faf250d78a120ba4c7ea2186efddd24 *build/partial.rdb 8519303ab6180ae5b72bfb7e08b466b1 *man/confint.fracdiff.Rd 8113e7c8baf0419d7962ca1327b46bee *man/diffseries.Rd ff5a7e6439ede40296b4a5f261dde29e *man/fd-methods.Rd 309734764ac1d753dd07ff27c6192a1c *man/fdGPH.Rd 2876ba0566b2d9f40ba714682401138a *man/fdSperio.Rd 391c7dc08e91fbfccc4d41e653f61ab3 *man/fracdiff.Rd c0ef6230721c3be752b853d70f0cf9dd *man/fracdiff.sim.Rd d0b12302dc3d328b994881df65715a09 *man/fracdiff.var.Rd 2fa4c7011c2bc0f7449ae151d5cc44ae *src/Makevars 38bc80bfe58dfaf10ac11a209e2976c9 *src/fdcore.c b67ea7cf831fd635a1be01b60e83adcc *src/fdgam.c 79d10b971d0015b5e6aa64ec115ffdcc *src/fdhess.c 9403abe2144eab3a0f708292ed0e4db8 *src/fdmin.c 8c2630ad0887fc72056dacc0b54f9a28 *src/fdsim.c 9195f85e79f1f3de95cf2bce438950e1 *src/fracdiff.h 6fe329996ca0b045dffbe8d81944f4de *src/gamm_comm.h 017be9fbeaaeff60740269095697c364 *src/hess_comm.h 75f65296169f12ec139d48c465a8779f *src/init.c 51000f9ec70c6503b84239c646ac4241 *src/mach_comm.h 3ef6556fc2a65de62819a3c1d9eb01b6 *src/maux_comm.h 52bd746de8e016767e4cd0ee28dd0e56 *src/pmult.c 2a1147e176a7336890c1818c9fc44a14 *tests/Valderio-ex.R 35da85906e36c1848b1e41415c092d7e *tests/Valderio-ex.Rout.save e354d21300653d00a2f80ecc1f107d1c *tests/ex-Vinod.R 2acd1e65a525b3f793fefa8f5667fe4a *tests/ex.R 5a58202c48a34f0ede0e7992c5a977a8 *tests/ex.Rout.save e1dada860c4125548720aa2a9411985b *tests/sim-2.R 175f12755ef2a091e477a89551cf7403 *tests/sim-ex.R 36ef50f16b9f1f3faed8b9c3f82a73af *tests/sim-ex.Rout.save