boot/ChangeLog

version 1.3-9  2013-03-20
    Add ko translations
version 1.3-8  2013-02-09
    Update reference output for R 3.0.0
version 1.3-7  2012-10-12
    Force byte-compilation for compatibility with versions in R releases
version 1.3-6  2012-10-08
    Work on messages.  Update translations.
version 1.3-5  2012-06-27
    inst/CITATION: protect against TRE bug in UTF-8 locales.
    Add Polish translations.
version 1.3-4  2012-01-16
    DESCRIPTION: needs Suggests: MASS for data(package=)
version 1.3-3  2011-10-06
    Use package 'parallel' rather than multicore/snow.
    Update tests/Examples/boot-Ex.Rout.save for byte-compiled R >= 2.14.0.
    Try to force promises in the ... argument with parallel = "snow"
version 1.3-2  2011-06-06
    Add 'parallel' argument for tsboot().
    Allow a 'snow' cluster to be passed in for parallel operations in
    boot(), censboot() and tsboot().  Use a wrapper rather than ... to
    avoid problems with argument names in parLapply etc.
    Force promises in several places to allow parallel operation with
    'snow' (which was passing unevaluated promises to worker processes).
version 1.3-1  2011-05-23 (formerly 1.22-44)
    Much faster version of nested.corr.
    c() method for class "boot".
    'parallel' argument for boot() and censboot() using 'multicore' or 'snow'.
    Code cleanup.
version 1.2-43  2010-09-25
    Update CITATION
version 1.2-42  2010-03-28
    Various clean ups, including using sample.int for safety.
    Update boot-Ex.Rout.save for 2.11.0.
version 1.2-41  2009-10-15
    Update German and Russian translations.
version 1.2-40  2009-10-07
    Add error message in bca.ci for misuse when all the samples are the same.
    Add German translation.
    Update boot-Ex.Rout.save for help rendering changes
version 1.2-39  2009-09-04
    Update boot-Ex.Rout.save for survival change.
version 1.2-38  2009-07-28
    Add boot-Ex.Rout.save file, remove timing call in censboot.Rd
version 1.2-37  2009-06-16
    example(censboot) returned the value of a for() loop: need to change
    for R 2.10.x.
version 1.2-36  2009-03-12
    guard against (some uses of) sample() on length-1 input
version 1.2-35  2009-01-01
    make use of integer constants
    spelling corrections, formats for references, 'the the', Rd markup
version 1.2-34  2008-09-05
    antithetic.array, ordinary.array: return integer matrix.
    ordinary.array: use as little memory as possible.
    boot: add 'simple' argument.
    const: use mean(x, na.rm=TRUE)
version 1.2-33  2008-05-05
    glm.diag: 'gaussian' not 'Gaussian' for family name.
    cv.glm.Rd: typo
version 1.2-32  2008-03-30
    Add inst/CITATION file
version 1.2-31  2008-02-18
    print.simplex now returns its result invisibly.
version 1.2-30  2007-10-03
    po/R-ru.po: new file
    *.Rd: remove old \non_function{} markup.
version 1.2-29  2007-08-29
    R/*.q: add licence/copyright statements
    melanoma.Rd: correct the coding of 'sex'
    beaver.Rd, bootfuns.q, cv.glm.Rd, EEF.profile.Rd: spelling corrections
    glm.diag.Rd: add \link markup.
version 1.2-28  2007-06-12
    INDEX, man: change 'library' to 'package' and remove some references
    to S-PLUS.
    bootfuns.q: avoid abbreviating arguments, switch to seq.int, add drop=TRUE.
    DESCRIPTION: R >= 2.4.0 because of seq.int.
version 1.2-27  2006-11-29
    bootfuns.q: use control=NULL for deparsed calls.
version 1.2-26  2006-09-05
    smooth.f.Rd, tilt.boot.Rd: omit empty argument
version 1.2-25  2006-07-25
    censboot.Rd: unique() in mel.fun was not working correctly due to
    rounding error on some P4 Linux machines.
version 1.2-24  2005-12-09
    catsM.Rd: correct units for hearts to that given (correctly) in MASS.
    Add French and en@quot translation.
version 1.2-23  2005-07-26
    Add error message for a = NA in bca.ci.
    In cv.glm, eval updated formula in the parent.
version 1.2-22  2005-02-01
    Improve error messages for possible translation.
version 1.2-21  2005-01-21
    remove <> in ?boot.array
version 1.2-20  2004-11-03
    remove require(ts) call in tsboot
version 1.2-19  2004-08-12
version 1.2-18  2004-08-03
    versions for R 2.0.0.
version 1.2-17  2004-04-20
    remake datasets: looks like wool, coal and manaus were corrupt.
version 1.2-16  2003-12-08
    updates for R 1.9.0's namespaces
version 1.2-15  2003-10-22
    consistency in documentation of default args.
    tapply(foo)[bar] is now still an array in R.
version 1.2-14  2003-08-10
    remove unused vars
    add local definition for is.missing
    use \dQuote markup in help files.
    ensure .Random.seed is saved/retrieved in the workspace.
    remove/comment redundant assignments.
version 1.2-13  2003-03-09
    update NAMESPACE file
    censboot.Rd: call generic, not predict.smooth.spline
version 1.2-12  2003-03-01
    Use namespace, remove help for internal functions.
    glm.diag.plots had one set of axis labels reversed.
version 1.2-11  2003-01-29
    Replace inv.logit with overflow-proof plogis.
    Remove birthwt (from MASS) and help page for lynx.
version 1.2-10  2002-11-30
    Use optim(method="BFGS") for 1D optimization
version 1.2-9  2002-11-05
    Remove datasets taken from MASS package.
version 1.2-8  2002-04-11
    Remove arima.sim.
    Converted data sets to version 2 .rda, compressed larger ones.
    More care over .Random.seed.
version 1.2-7  2002-01-29
    Added some help pages from Angelo Canty.
    Bug fix to simplex() from Duncan Murdoch.
version 1.2-6  2001-11-27
    Start documenting rest of the objects and arguments.
version 1.2-5  2001-08-16
version 1.2-4  2001-08-08
    Use TRUE and FALSE on help pages, add ... to arg lists.
version 1.2-3  2001-06-16
    Change to survival from survival5.
version 1.2-2  2001/03/31
    Add priority: recommended.
    Remove test directory, as R CMD check works (but takes a long time).
version 1.2-1  2001/03/03
    Mods from Angelo Canty for version 1.2 of the library section.
    Dataset beaver was corrupted and has been repaired.
    Change T, F to TRUE, FALSE in R code.
version 1.1-7  2000/12/28
    Changes for R 1.2.0; this is really boot version 1.1 now.
version 1.0-6  2000/11/27
    Re-port all the help files with Sd2Rd 1.15.
    R-specific change in glm.diag.
version 1.0-5  2000/02/06
    Change tsboot for R, make tsboot examples work.
    Revise censboot examples to avoid conflicts with survival5 datasets.
    Help files for datasets now use keyword `datasets' not `sysdata'.
    Replace nlm by optim, as it works much better.  One example still
    fails, but it diverges on S and the authors did not notice.
version 1.0-4  1999/11/21
version 1.0-3
    Adjustments for later versions of R (removing functions added here).
version 1.0-2  1999/02/24
    Removed sample, model.response from zzz.R
    Changed attributes(out$spa)$name<- to names(out$spa)<- in saddle.
    Altered test/test-examples to make use of the examples now in R-ex.
    Note that saddle.distn and functions that depend on it (control) may
    fail due to lack of robustness of nlm.
version 1.0-1  1998/07/24
    boot/R:
    Many funs store the .Random.seed but that need not exist, so call
    runif(1) first.
    Add missing function union in zzz.R.
    Added version of sample that uses prob= in zzz.R.
    Replace is.matrix by isMatrix <- function(x) length(dim(x)) == 2 as R
    does not think a data frame is a matrix.
    Replace reading of a number in glm.diag.plots by readline.
    Replace nlmin by nlm, with appropriate changes to returned components.
    (This is not wholly successful, as nlm seems less tolerant.)
    Comment out assigns to frame=1.
    split.screen etc are not in R yet: use par(mfrow) in plot.glm.diag
    and layout in plot.boot.
    add drop=F in cv.glm, cens.weird.
    change is.inf to is.infinite.
    change maxit= in glm calls in saddle to calls to control=
    boot/data:
    add a dummy rts: rts <- function(units, name, ...) ts(...) to top of
    bootdata.q
    edit n= to nmax= in scan (carefully: also occurs in lists in scan
    commands).
    add for(obj in ls()) save(obj, file = paste(obj, ".rda", sep=""), ascii=T)
    to end of bootdata.q
    R -v 20 --no-save < bootdata.q
    cd4.nested is made by bd.q in the top directory.
    boot/man:
    convert files by Sd2Rd.  Add data() lines.
    boot/test:
    script to test the examples in the help files.  Not all will run, as
    far as I know due to things missing in R or nlm not coping as well
    as nlmin.

boot/DESCRIPTION

Package: boot
Priority: recommended
Version: 1.3-9
Date: 2013-03-20
Authors@R: c(person("Angelo", "Canty", role = "aut",
                    email = "cantya@mcmaster.ca"),
             person("Brian", "Ripley", role = c("aut", "trl", "cre"),
                    email = "ripley@stats.ox.ac.uk",
                    comment = "author of parallel support"))
Author: S original by Angelo Canty <cantya@mcmaster.ca>.  R port and many
        enhancements by Brian Ripley <ripley@stats.ox.ac.uk>.
Maintainer: Brian Ripley <ripley@stats.ox.ac.uk>
Note: Maintainers are not available to give advice on using a package
        they did not author.
Description: functions and datasets for bootstrapping from the book
        "Bootstrap Methods and Their Applications" by A. C. Davison and
        D. V. Hinkley (1997, CUP).
Title: Bootstrap Functions (originally by Angelo Canty for S)
Depends: R (>= 3.0.0), graphics, stats
Suggests: MASS, survival
LazyData: yes
ByteCompile: yes
License: Unlimited
Packaged: 2013-03-20 07:26:53 UTC; ripley

boot/INDEX

Bootstrap R Functions
=====================

Version 1.3; May 2011: The main change is support for parallel computation
in functions boot(), censboot() and tsboot().  Many of the examples have
been tidied up.

Version 1.2; March 2001: This version corrects some minor errors in
Version 1.0 of the code distributed with the first printing of Davison and
Hinkley (1997).  The author would like to thank those users who pointed
out errors or possible improvements to the code.  Any further errors found
should be reported to the author (Angelo Canty) for correction in the next
version.

Version 1.3; May 2011: Added parallel support, some code cleanup for
efficiency.

The package contains the following functions, all of which have online
help available.

abc.ci              ABC confidence intervals
boot                Main bootstrap function
boot.array          Generate a bootstrap frequency/index array
boot.ci             Bootstrap simulation confidence intervals
censboot            Bootstrap for censored data and Cox regression models.
control             Control variate calculations
corr                Weighted form of correlation coefficient
cum3                Estimate the skewness
cv.glm              Cross-validation for generalized linear models
empinf              Calculate empirical influence values
envelope            Confidence envelopes for functions
exp.tilt            Exponential tilting
freq.array          Convert an index array into a frequency array
glm.diag            Diagnostics for generalized linear models
glm.diag.plots      Diagnostic plots for glm's
imp.moments         Importance resampling estimates of moments
imp.prob            Importance resampling estimates of probabilities
imp.quantile        Importance resampling estimates of quantiles
imp.weights         Weights for importance resampling
inv.logit           Inverse logit function
jack.after.boot     Jackknife after bootstrap plots
k3.linear           Linear skewness approximation
linear.approx       Linear approximation to a statistic
lines.saddle.distn  Lines method for a saddlepoint distribution object
logit               Logit of a proportion
norm.ci             Normal approximation confidence intervals
plot.boot           Plot method for a bootstrap object
print.boot          Print method for a bootstrap object
print.bootci        Print method for a bootstrap confidence interval object
print.saddle.distn  Print method for a saddlepoint distribution object
print.simplex       Print method for a simplex object
saddle              Simple and conditional saddlepoint calculations
saddle.distn        Approximate a distribution by saddlepoint
simplex             Tableau simplex method for linear programming
smooth.f            Frequency smoothing
tilt.boot           Tilted bootstrap
tsboot              Bootstrap for time series
var.linear          Linear variance approximation

The package also contains some items used in the practicals of
"Bootstrap Methods and Their Applications" by A.C. Davison and
D.V. Hinkley (1997, Cambridge University Press).  The objects are not
intended to be used except in the context of these practicals.  The
objects are

cd4.nested          A nested bootstrap referred to in Practical 5.5
corr.nested         The statistic used in cd4.nested
EL.profile, EEF.profile, and lik.CI
                    functions defined for use in the practicals of
                    chapter 10.

boot/NAMESPACE

export(abc.ci, boot, boot.array, boot.ci, censboot, control, corr, cum3,
       cv.glm, EEF.profile, EL.profile, empinf, envelope, exp.tilt,
       freq.array, glm.diag, glm.diag.plots, imp.moments, imp.prob,
       imp.quantile, imp.weights, inv.logit, jack.after.boot, k3.linear,
       lik.CI, linear.approx, logit, nested.corr, norm.ci, saddle,
       saddle.distn, simplex, smooth.f, tilt.boot, tsboot, var.linear)

# documented but not exported
# export(lines.saddle.distn, plot.boot, print.boot, print.bootci, print.simplex)

importFrom(graphics, lines, plot)

S3method(c, boot)
S3method(lines, saddle.distn)
S3method(plot, boot)
S3method(print, boot)
S3method(print, bootci)
S3method(print, saddle.distn)
S3method(print, simplex)

boot/R/bootfuns.q

# part of R package boot
# copyright (C) 1997-2001 Angelo J. Canty
# corrections (C) 1997-2011 B. D. Ripley
#
# Unlimited distribution is permitted

# safe version of sample
# needs R >= 2.9.0
# only works if size is not specified in R >= 2.11.0, but it always is in boot
sample0 <- function(x, ...) x[sample.int(length(x), ...)]

bsample <- function(x, ...) x[sample.int(length(x), replace = TRUE, ...)]

isMatrix <- function(x) length(dim(x)) == 2L

## random permutation of x.
rperm <- function(x) sample0(x, length(x)) antithetic.array <- function(n, R, L, strata) # # Create an array of indices by antithetic resampling using the # empirical influence values in L. This function just calls anti.arr # to do the sampling within strata. # { inds <- as.integer(names(table(strata))) out <- matrix(0L, R, n) for (s in inds) { gp <- seq_len(n)[strata == s] out[, gp] <- anti.arr(length(gp), R, L[gp], gp) } out } anti.arr <- function(n, R, L, inds=seq_len(n)) { # R x n array of bootstrap indices, generated antithetically # according to the empirical influence values in L. unique.rank <- function(x) { # Assign unique ranks to a numeric vector ranks <- rank(x) if (any(duplicated(ranks))) { inds <- seq_along(x) uniq <- sort(unique(ranks)) tab <- table(ranks) for (i in seq_along(uniq)) if (tab[i] > 1L) { gp <- inds[ranks == uniq[i]] ranks[gp] <- rperm(inds[sort(ranks) == uniq[i]]) } } ranks } R1 <- floor(R/2) mat1 <- matrix(bsample(inds, R1*n), R1, n) ranks <- unique.rank(L) rev <- inds for (i in seq_len(n)) rev[i] <- inds[ranks == (n+1-ranks[i])] mat1 <- rbind(mat1, matrix(rev[mat1], R1, n)) if (R != 2*R1) mat1 <- rbind(mat1, bsample(inds, n)) mat1 } balanced.array <- function(n, R, strata) { # # R x n array of bootstrap indices, sampled hypergeometrically # within strata. # output <- matrix(rep(seq_len(n), R), n, R) inds <- as.integer(names(table(strata))) for(is in inds) { group <- seq_len(n)[strata == is] if(length(group) > 1L) { g <- matrix(rperm(output[group, ]), length(group), R) output[group, ] <- g } } t(output) } boot <- function(data, statistic, R, sim = "ordinary", stype = c("i", "f", "w"), strata = rep(1, n), L = NULL, m = 0, weights = NULL, ran.gen = function(d, p) d, mle = NULL, simple = FALSE, ..., parallel = c("no", "multicore", "snow"), ncpus = getOption("boot.ncpus", 1L), cl = NULL) { # # R replicates of bootstrap applied to statistic(data) # Possible sim values are: "ordinary", "balanced", "antithetic", # "permutation", "parametric" # Various auxilliary functions find the indices to be used for the # bootstrap replicates and then this function loops over those replicates. # call <- match.call() stype <- match.arg(stype) if (missing(parallel)) parallel <- getOption("boot.parallel", "no") parallel <- match.arg(parallel) have_mc <- have_snow <- FALSE if (parallel != "no" && ncpus > 1L) { if (parallel == "multicore") have_mc <- .Platform$OS.type != "windows" else if (parallel == "snow") have_snow <- TRUE if (!have_mc && !have_snow) ncpus <- 1L } if (simple && (sim != "ordinary" || stype != "i" || sum(m))) { warning("'simple=TRUE' is only valid for 'sim=\"ordinary\", stype=\"i\", n=0', so ignored") simple <- FALSE } if (!exists(".Random.seed", envir = .GlobalEnv, inherits = FALSE)) runif(1) seed <- get(".Random.seed", envir = .GlobalEnv, inherits = FALSE) n <- NROW(data) if ((n == 0) || is.null(n)) stop("no data in call to 'boot'") temp.str <- strata strata <- tapply(seq_len(n),as.numeric(strata)) t0 <- if (sim != "parametric") { if ((sim == "antithetic") && is.null(L)) L <- empinf(data = data, statistic = statistic, stype = stype, strata = strata, ...) 
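    # Non-parametric case: the code that follows validates the
    # prediction-error argument 'm', checks that any importance weights
    # conform with 'R', normalizes the weights within strata, builds the
    # bootstrap index array with index.array(), and evaluates the
    # statistic on the original data to give t0.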
if (sim != "ordinary") m <- 0 else if (any(m < 0)) stop("negative value of 'm' supplied") if ((length(m) != 1L) && (length(m) != length(table(strata)))) stop("length of 'm' incompatible with 'strata'") if ((sim == "ordinary") || (sim == "balanced")) { if (isMatrix(weights) && (nrow(weights) != length(R))) stop("dimensions of 'R' and 'weights' do not match")} else weights <- NULL if (!is.null(weights)) weights <- t(apply(matrix(weights, n, length(R), byrow = TRUE), 2L, normalize, strata)) if (!simple) i <- index.array(n, R, sim, strata, m, L, weights) original <- if (stype == "f") rep(1, n) else if (stype == "w") { ns <- tabulate(strata)[strata] 1/ns } else seq_len(n) t0 <- if (sum(m) > 0L) statistic(data, original, rep(1, sum(m)), ...) else statistic(data, original, ...) rm(original) t0 } else # "parametric" statistic(data, ...) pred.i <- NULL fn <- if (sim == "parametric") { ## force promises, so values get sent by parallel ran.gen; data; mle function(r) { dd <- ran.gen(data, mle) statistic(dd, ...) } } else { if (!simple && ncol(i) > n) { pred.i <- as.matrix(i[ , (n+1L):ncol(i)]) i <- i[, seq_len(n)] } if (stype %in% c("f", "w")) { f <- freq.array(i) rm(i) if (stype == "w") f <- f/ns if (sum(m) == 0L) function(r) statistic(data, f[r, ], ...) else function(r) statistic(data, f[r, ], pred.i[r, ], ...) } else if (sum(m) > 0L) function(r) statistic(data, i[r, ], pred.i[r,], ...) else if (simple) function(r) statistic(data, index.array(n, 1, sim, strata, m, L, weights), ...) else function(r) statistic(data, i[r, ], ...) } RR <- sum(R) res <- if (ncpus > 1L && (have_mc || have_snow)) { if (have_mc) { parallel::mclapply(seq_len(RR), fn, mc.cores = ncpus) } else if (have_snow) { list(...) # evaluate any promises if (is.null(cl)) { cl <- parallel::makePSOCKcluster(rep("localhost", ncpus)) if(RNGkind()[1L] == "L'Ecuyer-CMRG") parallel::clusterSetRNGStream(cl) res <- parallel::parLapply(cl, seq_len(RR), fn) parallel::stopCluster(cl) res } else parallel::parLapply(cl, seq_len(RR), fn) } } else lapply(seq_len(RR), fn) t.star <- matrix(, RR, length(t0)) for(r in seq_len(RR)) t.star[r, ] <- res[[r]] if (is.null(weights)) weights <- 1/tabulate(strata)[strata] boot.return(sim, t0, t.star, temp.str, R, data, statistic, stype, call, seed, L, m, pred.i, weights, ran.gen, mle) } normalize <- function(wts, strata) { # # Normalize a vector of weights to sum to 1 within each strata. # n <- length(strata) out <- wts inds <- as.integer(names(table(strata))) for (is in inds) { gp <- seq_len(n)[strata == is] out[gp] <- wts[gp]/sum(wts[gp]) } out } boot.return <- function(sim, t0, t, strata, R, data, stat, stype, call, seed, L, m, pred.i, weights, ran.gen, mle) # # Return the results of a bootstrap in the form of an object of class # "boot". # { out <- list(t0=t0, t=t, R=R, data=data, seed=seed, statistic=stat, sim=sim, call=call) if (sim == "parametric") out <- c(out, list(ran.gen=ran.gen, mle=mle)) else if (sim == "antithetic") out <- c(out, list(stype=stype, strata=strata, L=L)) else if (sim == "ordinary") { if (sum(m) > 0) out <- c(out, list(stype=stype, strata=strata, weights=weights, pred.i=pred.i)) else out <- c(out, list(stype=stype, strata=strata, weights=weights)) } else if (sim == "balanced") out <- c(out, list(stype=stype, strata=strata, weights=weights )) else out <- c(out, list(stype=stype, strata=strata)) class(out) <- "boot" out } c.boot <- function (..., recursive = TRUE) { args <- list(...) 
nm <- lapply(args, names) if (!all(sapply(nm, function(x) identical(x, nm[[1]])))) stop("arguments are not all the same type of \"boot\" object") res <- args[[1]] res$R <- sum(sapply(args, "[[", "R")) res$t <- do.call(rbind, lapply(args, "[[", "t")) res } boot.array <- function(boot.out, indices=FALSE) { # # Return the frequency or index array for the bootstrap resamples # used in boot.out # This function recreates such arrays from the information in boot.out # if (exists(".Random.seed", envir=.GlobalEnv, inherits = FALSE)) temp <- get(".Random.seed", envir=.GlobalEnv, inherits = FALSE) else temp<- NULL assign(".Random.seed", boot.out$seed, envir=.GlobalEnv) n <- NROW(boot.out$data) R <- boot.out$R sim <- boot.out$sim if (boot.out$call[[1L]] == "tsboot") { # Recreate the array for an object created by tsboot, The default for # such objects is to return the index array unless index is specifically # passed as F if (missing(indices)) indices <- TRUE if (sim == "model") stop("index array not defined for model-based resampling") n.sim <- boot.out$n.sim i.a <- ts.array(n, n.sim, R, boot.out$l, sim, boot.out$endcorr) out <- matrix(NA,R,n.sim) for(r in seq_len(R)) { if (sim == "geom") ends <- cbind(i.a$starts[r, ], i.a$lengths[r, ]) else ends <- cbind(i.a$starts[r,], i.a$lengths) inds <- apply(ends, 1L, make.ends, n) if (is.list(inds)) inds <- unlist(inds)[seq_len(n.sim)] out[r,] <- inds } } else if (boot.out$call[[1L]] == "censboot") { # Recreate the array for an object created by censboot as long # as censboot was called with sim = "ordinary" if (sim == "ordinary") { strata <- tapply(seq_len(n), as.numeric(boot.out$strata)) out <- cens.case(n,strata,R) } else stop("boot.array not implemented for this object") } else { # Recreate the array for objects created by boot or tilt.boot if (sim == "parametric") stop("array cannot be found for parametric bootstrap") strata <- tapply(seq_len(n),as.numeric(boot.out$strata)) if (boot.out$call[[1L]] == "tilt.boot") weights <- boot.out$weights else { weights <- boot.out$call$weights if (!is.null(weights)) weights <- boot.out$weights } out <- index.array(n, R, sim, strata, 0, boot.out$L, weights) } if (!indices) out <- freq.array(out) if (!is.null(temp)) assign(".Random.seed", temp, envir=.GlobalEnv) else rm(.Random.seed, pos=1) out } plot.boot <- function(x,index=1, t0=NULL, t=NULL, jack=FALSE, qdist="norm",nclass=NULL,df, ...) { # # A plot method for bootstrap output objects. It produces a histogram # of the bootstrap replicates and a QQ plot of them. Optionally it can # also produce a jackknife-after-bootstrap plot. # boot.out <- x t.o <- t if (is.null(t)) { t <- boot.out$t[,index] if (is.null(t0)) t0 <- boot.out$t0[index] } t <- t[is.finite(t)] if (const(t, min(1e-8,mean(t, na.rm=TRUE)/1e6))) { print(paste("All values of t* are equal to ", mean(t, na.rm=TRUE))) return(invisible(boot.out)) } if (is.null(nclass)) nclass <- min(max(ceiling(length(t)/25),10),100) if (!is.null(t0)) { # Calculate the breakpoints for the histogram so that one of them is # exactly t0. 
rg <- range(t) if (t0rg[2L]) rg[2L] <- t0 rg <- rg+0.05*c(-1,1)*diff(rg) lc <- diff(rg)/(nclass-2) n1 <- ceiling((t0-rg[1L])/lc) n2 <- ceiling((rg[2L]-t0)/lc) bks <- t0+(-n1:n2)*lc } R <- boot.out$R if (qdist == "chisq") { qq <- qchisq((seq_len(R))/(R+1),df=df) qlab <- paste("Quantiles of Chi-squared(",df,")",sep="") } else { if (qdist!="norm") warning(gettextf("%s distribution not supported: using normal instead", sQuote(qdist)), domain = NA) qq <- qnorm((seq_len(R))/(R+1)) qlab <-"Quantiles of Standard Normal" } if (jack) { layout(mat = matrix(c(1,2,3,3), 2L, 2L, byrow=TRUE)) if (is.null(t0)) hist(t,nclass=nclass,probability=TRUE,xlab="t*") else hist(t,breaks=bks,probability=TRUE,xlab="t*") if (!is.null(t0)) abline(v=t0,lty=2) qqplot(qq,t,xlab=qlab,ylab="t*") if (qdist == "norm") abline(mean(t),sqrt(var(t)),lty=2) else abline(0,1,lty=2) jack.after.boot(boot.out,index=index,t=t.o, ...) } else { par(mfrow=c(1,2)) if (is.null(t0)) hist(t,nclass=nclass,probability=TRUE,xlab="t*") else hist(t,breaks=bks,probability=TRUE,xlab="t*") if (!is.null(t0)) abline(v=t0,lty=2) qqplot(qq,t,xlab=qlab,ylab="t*") if (qdist == "norm") abline(mean(t),sqrt(var(t)),lty=2) else abline(0,1,lty=2) } par(mfrow=c(1,1)) invisible(boot.out) } print.boot <- function(x, digits = getOption("digits"), index = 1L:ncol(boot.out$t), ...) { # # Print the output of a bootstrap # boot.out <- x sim <- boot.out$sim cl <- boot.out$call t <- matrix(boot.out$t[, index], nrow = nrow(boot.out$t)) allNA <- apply(t,2L,function(t) all(is.na(t))) ind1 <- index[allNA] index <- index[!allNA] t <- matrix(t[, !allNA], nrow = nrow(t)) rn <- paste("t",index,"*",sep="") if (length(index) == 0L) op <- NULL else if (is.null(t0 <- boot.out$t0)) { if (is.null(boot.out$call$weights)) op <- cbind(apply(t,2L,mean,na.rm=TRUE), sqrt(apply(t,2L,function(t.st) var(t.st[!is.na(t.st)])))) else { op <- NULL for (i in index) op <- rbind(op, imp.moments(boot.out,index=i)$rat) op[,2L] <- sqrt(op[,2]) } dimnames(op) <- list(rn,c("mean", "std. error")) } else { t0 <- boot.out$t0[index] if (is.null(boot.out$call$weights)) { op <- cbind(t0,apply(t,2L,mean,na.rm=TRUE)-t0, sqrt(apply(t,2L,function(t.st) var(t.st[!is.na(t.st)])))) dimnames(op) <- list(rn, c("original"," bias "," std. error")) } else { op <- NULL for (i in index) op <- rbind(op, imp.moments(boot.out,index=i)$rat) op <- cbind(t0,op[,1L]-t0,sqrt(op[,2L]), apply(t,2L,mean,na.rm=TRUE)) dimnames(op) <- list(rn,c("original", " bias ", " std. 
error", " mean(t*)")) } } if (cl[[1L]] == "boot") { if (sim == "parametric") cat("\nPARAMETRIC BOOTSTRAP\n\n") else if (sim == "antithetic") { if (is.null(cl$strata)) cat("\nANTITHETIC BOOTSTRAP\n\n") else cat("\nSTRATIFIED ANTITHETIC BOOTSTRAP\n\n") } else if (sim == "permutation") { if (is.null(cl$strata)) cat("\nDATA PERMUTATION\n\n") else cat("\nSTRATIFIED DATA PERMUTATION\n\n") } else if (sim == "balanced") { if (is.null(cl$strata) && is.null(cl$weights)) cat("\nBALANCED BOOTSTRAP\n\n") else if (is.null(cl$strata)) cat("\nBALANCED WEIGHTED BOOTSTRAP\n\n") else if (is.null(cl$weights)) cat("\nSTRATIFIED BALANCED BOOTSTRAP\n\n") else cat("\nSTRATIFIED WEIGHTED BALANCED BOOTSTRAP\n\n") } else { if (is.null(cl$strata) && is.null(cl$weights)) cat("\nORDINARY NONPARAMETRIC BOOTSTRAP\n\n") else if (is.null(cl$strata)) cat("\nWEIGHTED BOOTSTRAP\n\n") else if (is.null(cl$weights)) cat("\nSTRATIFIED BOOTSTRAP\n\n") else cat("\nSTRATIFIED WEIGHTED BOOTSTRAP\n\n") } } else if (cl[[1L]] == "tilt.boot") { R <- boot.out$R th <- boot.out$theta if (sim == "balanced") cat("\nBALANCED TILTED BOOTSTRAP\n\n") else cat("\nTILTED BOOTSTRAP\n\n") if ((R[1L] == 0) || is.null(cl$tilt) || eval(cl$tilt)) cat("Exponential tilting used\n") else cat("Frequency Smoothing used\n") i1 <- 1 if (boot.out$R[1L]>0) cat(paste("First",R[1L],"replicates untilted,\n")) else { cat(paste("First ",R[2L]," replicates tilted to ", signif(th[1L],4),",\n",sep="")) i1 <- 2 } if (i1 <= length(th)) { for (j in i1:length(th)) cat(paste("Next ",R[j+1L]," replicates tilted to ", signif(th[j],4L), ifelse(j!=length(th),",\n",".\n"),sep="")) } op <- op[, 1L:3L] } else if (cl[[1L]] == "tsboot") { if (!is.null(cl$indices)) cat("\nTIME SERIES BOOTSTRAP USING SUPPLIED INDICES\n\n") else if (sim == "model") cat("\nMODEL BASED BOOTSTRAP FOR TIME SERIES\n\n") else if (sim == "scramble") { cat("\nPHASE SCRAMBLED BOOTSTRAP FOR TIME SERIES\n\n") if (boot.out$norm) cat("Normal margins used.\n") else cat("Observed margins used.\n") } else if (sim == "geom") { if (is.null(cl$ran.gen)) cat("\nSTATIONARY BOOTSTRAP FOR TIME SERIES\n\n") else cat(paste("\nPOST-BLACKENED STATIONARY", "BOOTSTRAP FOR TIME SERIES\n\n")) cat(paste("Average Block Length of",boot.out$l,"\n")) } else { if (is.null(cl$ran.gen)) cat("\nBLOCK BOOTSTRAP FOR TIME SERIES\n\n") else cat(paste("\nPOST-BLACKENED BLOCK", "BOOTSTRAP FOR TIME SERIES\n\n")) cat(paste("Fixed Block Length of",boot.out$l,"\n")) } } else { cat("\n") if (sim == "weird") { if (!is.null(cl$strata)) cat("STRATIFIED ") cat("WEIRD BOOTSTRAP FOR CENSORED DATA\n\n") } else if ((sim == "ordinary") || ((sim == "model") && is.null(boot.out$cox))) { if (!is.null(cl$strata)) cat("STRATIFIED ") cat("CASE RESAMPLING BOOTSTRAP FOR CENSORED DATA\n\n") } else if (sim == "model") { if (!is.null(cl$strata)) cat("STRATIFIED ") cat("MODEL BASED BOOTSTRAP FOR COX REGRESSION MODEL\n\n") } else if (sim == "cond") { if (!is.null(cl$strata)) cat("STRATIFIED ") cat("CONDITIONAL BOOTSTRAP ") if (is.null(boot.out$cox)) cat("FOR CENSORED DATA\n\n") else cat("FOR COX REGRESSION MODEL\n\n") } } cat("\nCall:\n") dput(cl, control=NULL) cat("\n\nBootstrap Statistics :\n") if (!is.null(op)) print(op,digits=digits) if (length(ind1) > 0L) for (j in ind1) cat(paste("WARNING: All values of t", j, "* are NA\n", sep="")) invisible(boot.out) } corr <- function(d, w=rep(1,nrow(d))/nrow(d)) { # The correlation coefficient in weighted form. 
s <- sum(w) m1 <- sum(d[, 1L] * w)/s m2 <- sum(d[, 2L] * w)/s (sum(d[, 1L] * d[, 2L] * w)/s - m1 * m2)/sqrt((sum(d[, 1L]^2 * w)/s - m1^2) * (sum(d[, 2L]^2 * w)/s - m2^2)) } extra.array <- function(n, R, m, strata=rep(1,n)) { # # Extra indices for predictions. Can only be used with # types "ordinary" and "stratified". For type "ordinary" # m is a positive integer. For type "stratified" m can # be a positive integer or a vector of the same length as # strata. # if (length(m) == 1L) output <- matrix(sample.int(n, m*R, replace=TRUE), R, m) else { inds <- as.integer(names(table(strata))) output <- matrix(NA, R, sum(m)) st <- 0 for (i in inds) { if (m[i] > 0) { gp <- seq_len(n)[strata == i] inds1 <- (st+1):(st+m[i]) output[,inds1] <- matrix(bsample(gp, R*m[i]), R, m[i]) st <- st+m[i] } } } output } freq.array <- function(i.array) { # # converts R x n array of bootstrap indices into # R X n array of bootstrap frequencies # result <- NULL n <- ncol(i.array) result <- t(apply(i.array, 1, tabulate, n)) result } importance.array <- function(n, R, weights, strata){ # # Function to do importance resampling within strata based # on the weights supplied. If weights is a matrix with n columns # R must be a vector of length nrow(weights) otherwise weights # must be a vector of length n and R must be a scalar. # imp.arr <- function(n, R, wts, inds=seq_len(n)) matrix(bsample(inds, n*R, prob=wts), R, n) output <- NULL if (!isMatrix(weights)) weights <- matrix(weights, nrow=1) inds <- as.integer(names(table(strata))) for (ir in seq_along(R)) { out <- matrix(rep(seq_len(n), R[ir]), R[ir], n, byrow=TRUE) for (is in inds) { gp <- seq_len(n)[strata == is] out[, gp] <- imp.arr(length(gp), R[ir], weights[ir,gp], gp) } output <- rbind(output, out) } output } importance.array.bal <- function(n, R, weights, strata) { # # Function to do balanced importance resampling within strata # based on the supplied weights. Balancing is achieved in such # a way that each index appears in the array approximately in # proportion to its weight. # imp.arr.bal <- function(n, R, wts, inds=seq_len(n)) { if (sum (wts) != 1) wts <- wts / sum(wts) nRw1 <- floor(n*R*wts) nRw2 <- n*R*wts - nRw1 output <- rep(inds, nRw1) if (any (nRw2 != 0)) output <- c(output, sample0(inds, round(sum(nRw2)), prob=nRw2)) matrix(rperm(output), R, n) } output <- NULL if (!isMatrix(weights)) weights <- matrix(weights, nrow = 1L) inds <- as.integer(names(table(strata))) for (ir in seq_along(R)) { out <- matrix(rep(seq_len(n), R[ir]), R[ir], n, byrow=TRUE) for (is in inds) { gp <- seq_len(n)[strata == is] out[,gp] <- imp.arr.bal(length(gp), R[ir], weights[ir,gp], gp) } output <- rbind(output, out) } output } index.array <- function(n, R, sim, strata=rep(1,n), m=0, L=NULL, weights=NULL) { # # Driver function for generating a bootstrap index array. This function # simply determines the type of sampling required and calls the appropriate # function. 
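    # Dispatch performed by the body below: with no weights, sim =
    # "ordinary", "balanced", "antithetic" and "permutation" map to
    # ordinary.array, balanced.array, antithetic.array and
    # permutation.array respectively, with extra.array() columns appended
    # when sum(m) > 0; with weights, "ordinary" and "balanced" map to
    # importance.array and importance.array.bal.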
# indices <- NULL if (is.null (weights)) { if (sim == "ordinary") { indices <- ordinary.array(n, R, strata) if (sum(m) > 0) indices <- cbind(indices, extra.array(n, R, m, strata)) } else if (sim == "balanced") indices <- balanced.array(n, R, strata) else if (sim == "antithetic") indices <- antithetic.array(n, R, L, strata) else if (sim == "permutation") indices <- permutation.array(n, R, strata) } else { if (sim == "ordinary") indices <- importance.array(n, R, weights, strata) else if (sim == "balanced") indices <- importance.array.bal(n, R, weights, strata) } indices } jack.after.boot <- function(boot.out, index=1, t=NULL, L=NULL, useJ=TRUE, stinf = TRUE, alpha=NULL, main = "", ylab=NULL, ...) { # jackknife after bootstrap plot t.o <- t if (is.null(t)) { if (length(index) > 1L) { index <- index[1L] warning("only first element of 'index' used") } t <- boot.out$t[, index] } fins <- seq_along(t)[is.finite(t)] t <- t[fins] if (is.null(alpha)) { alpha <- c(0.05, 0.1, 0.16, 0.5, 0.84, 0.9, 0.95) if (is.null(ylab)) ylab <- "5, 10, 16, 50, 84, 90, 95 %-iles of (T*-t)" } if (is.null(ylab)) ylab <- "Percentiles of (T*-t)" data <- boot.out$data n <- NROW(data) f <- boot.array(boot.out)[fins, , drop=TRUE] percentiles <- matrix(data = NA, length(alpha), n) J <- numeric(n) for(j in seq_len(n)) { # Find the quantiles of the bootstrap distribution on omitting each point. values <- t[f[, j] == 0] J[j] <- mean(values) percentiles[, j] <- quantile(values, alpha) - J[j] } # Now find the jackknife values to be plotted, and standardize them, # if required. if (!useJ) { if (is.null(L)) J <- empinf(boot.out, index=index, t=t.o, ...) else J <- L } else J <- (n - 1) * (mean(J) - J) xtext <- "jackknife value" if (!useJ) { if (!is.null(L) || (is.null(t.o) && (boot.out$stype == "w"))) xtext <- paste("infinitesimal", xtext) else xtext <- paste("regression", xtext) } if (stinf) { J <- J/sqrt(var(J)) xtext <- paste("standardized", xtext) } top <- max(percentiles) bot <- min(percentiles) ylts <- c(bot - 0.35 * (top - bot), top + 0.1 * (top - bot)) percentiles <- percentiles[,order(J)]# # Plot the overall quantiles and the delete-1 quantiles against the # jackknife values. plot(sort(J), percentiles[1, ], ylim = ylts, type = "n", xlab = xtext, ylab = ylab, main=main) for(j in seq_along(alpha)) lines(sort(J), percentiles[j, ], type = "b", pch = "*") percentiles <- quantile(t, alpha) - mean(t) for(j in seq_along(alpha)) abline(h=percentiles[j], lty=2) # Now print the observation numbers below the plotted lines. They are printed # in five rows so that all numbers can be read easily. text(sort(J), rep(c(bot - 0.08 * (top - bot), NA, NA, NA, NA), n, n), order(J), cex = 0.5) text(sort(J), rep(c(NA, bot - 0.14 * (top - bot), NA, NA, NA), n, n), order(J), cex = 0.5) text(sort(J), rep(c(NA, NA, bot - 0.2 * (top - bot), NA, NA), n, n), order(J), cex = 0.5) text(sort(J), rep(c(NA, NA, NA, bot - 0.26 * (top - bot), NA), n, n), order(J), cex = 0.5) text(sort(J), rep(c(NA, NA, NA, NA, bot - 0.32 * (top - bot)), n, n), order(J), cex = 0.5) invisible() } ordinary.array <- function(n, R, strata) { # # R x n array of bootstrap indices, resampled within strata. # This is the function which generates a regular bootstrap array # using equal weights within each stratum. 
# inds <- as.integer(names(table(strata))) if (length(inds) == 1L) { output <- sample.int(n, n*R, replace=TRUE) dim(output) <- c(R, n) } else { output <- matrix(as.integer(0L), R, n) for(is in inds) { gp <- seq_len(n)[strata == is] output[, gp] <- if (length(gp) == 1) rep(gp, R) else bsample(gp, R*length(gp)) } } output } permutation.array <- function(n, R, strata) { # # R x n array of bootstrap indices, permuted within strata. # This is similar to ordinary array except that resampling is # done without replacement in each row. # output <- matrix(rep(seq_len(n), R), n, R) inds <- as.integer(names(table(strata))) for(is in inds) { group <- seq_len(n)[strata == is] if (length(group) > 1L) { g <- apply(output[group, ], 2L, rperm) output[group, ] <- g } } t(output) } cv.glm <- function(data, glmfit, cost=function(y,yhat) mean((y-yhat)^2), K=n) { # cross-validation estimate of error for glm prediction with K groups. # cost is a function of two arguments: the observed values and the # the predicted values. call <- match.call() if (!exists(".Random.seed", envir=.GlobalEnv, inherits = FALSE)) runif(1) seed <- get(".Random.seed", envir=.GlobalEnv, inherits = FALSE) n <- nrow(data) out <- NULL if ((K > n) || (K <= 1)) stop("'K' outside allowable range") K.o <- K K <- round(K) kvals <- unique(round(n/(1L:floor(n/2)))) temp <- abs(kvals-K) if (!any(temp == 0)) K <- kvals[temp == min(temp)][1L] if (K!=K.o) warning(gettextf("'K' has been set to %f", K), domain = NA) f <- ceiling(n/K) s <- sample0(rep(1L:K, f), n) n.s <- table(s) # glm.f <- formula(glmfit) glm.y <- glmfit$y cost.0 <- cost(glm.y, fitted(glmfit)) ms <- max(s) CV <- 0 Call <- glmfit$call for(i in seq_len(ms)) { j.out <- seq_len(n)[(s == i)] j.in <- seq_len(n)[(s != i)] ## we want data from here but formula from the parent. Call$data <- data[j.in, , drop=FALSE] d.glm <- eval.parent(Call) p.alpha <- n.s[i]/n cost.i <- cost(glm.y[j.out], predict(d.glm, data[j.out, , drop=FALSE], type = "response")) CV <- CV + p.alpha * cost.i cost.0 <- cost.0 - p.alpha * cost(glm.y, predict(d.glm, data, type = "response")) } list(call = call, K = K, delta = as.numeric(c(CV, CV + cost.0)), # drop any names seed = seed) } boot.ci <- function(boot.out,conf = 0.95,type = "all", index = 1L:min(2L, length(boot.out$t0)), var.t0 = NULL ,var.t = NULL, t0 = NULL, t = NULL, L = NULL, h = function(t) t, hdot = function(t) rep(1, length(t)), hinv = function(t) t, ...) # # Main function to calculate bootstrap confidence intervals. # This function calls a number of auxilliary functions to do # the actual calculations depending on the type of interval(s) # requested. 
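    # The admissible values of 'type' are "norm", "basic", "stud", "perc"
    # and "bca" (or "all").  Illustrative call (not run here; assumes 'b'
    # is an object returned by boot()):
    #   boot.ci(b, conf = 0.95, type = c("norm", "basic", "perc", "bca"))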
# { call <- match.call() # Get and transform the statistic values and their variances, if ((is.null(t) && !is.null(t0)) || (!is.null(t) && is.null(t0))) stop("'t' and 't0' must be supplied together") t.o <- t; t0.o <- t0 # vt.o <- var.t vt0.o <- var.t0 if (is.null(t)) { if (length(index) == 1L) { t0 <- boot.out$t0[index] t <- boot.out$t[,index] } else if (ncol(boot.out$t)= 4) digs <- 0 else digs <- 4-digs intlabs <- NULL basrg <- strg <- perg <- bcarg <- NULL if (!is.null(ci.out$normal)) intlabs <- c(intlabs," Normal ") if (!is.null(ci.out$basic)) { intlabs <- c(intlabs," Basic ") basrg <- range(ci.out$basic[,2:3]) } if (!is.null(ci.out$student)) { intlabs <- c(intlabs," Studentized ") strg <- range(ci.out$student[,2:3]) } if (!is.null(ci.out$percent)) { intlabs <- c(intlabs," Percentile ") perg <- range(ci.out$percent[,2:3]) } if (!is.null(ci.out$bca)) { intlabs <- c(intlabs," BCa ") bcarg <- range(ci.out$bca[,2:3]) } level <- 100*ci.out[[4L]][, 1L] if (ntypes == 4L) n1 <- n2 <- 2L else if (ntypes == 5L) {n1 <- 3L; n2 <- 2L} else {n1 <- ntypes; n2 <- 0L} ints1 <- matrix(NA,nints,2L*n1+1L) ints1[,1L] <- level n0 <- 4L # Re-organize the intervals and coerce them into character data for (i in n0:(n0+n1-1)) { j <- c(2L*i-6L,2L*i-5L) nc <- ncol(ci.out[[i]]) nc <- c(nc-1L,nc) if (is.null(hinv)) ints1[,j] <- ci.out[[i]][,nc] else ints1[,j] <- hinv(ci.out[[i]][,nc]) } n0 <- 4L+n1 ints1 <- format(round(ints1,digs)) ints1[,1L] <- paste("\n",level,"% ",sep="") ints1[,2*(1L:n1)] <- paste("(",ints1[,2*(1L:n1)],",",sep="") ints1[,2*(1L:n1)+1L] <- paste(ints1[,2*(1L:n1)+1L],") ") if (n2 > 0) { ints2 <- matrix(NA,nints,2L*n2+1L) ints2[,1L] <- level j <- c(2L,3L) for (i in n0:(n0+n2-1L)) { if (is.null(hinv)) ints2[,j] <- ci.out[[i]][,c(4L,5L)] else ints2[,j] <- hinv(ci.out[[i]][,c(4L,5L)]) j <- j+2L } ints2 <- format(round(ints2,digs)) ints2[,1L] <- paste("\n",level,"% ",sep="") ints2[,2*(1L:n2)] <- paste("(",ints2[,2*(1L:n2)],",",sep="") ints2[,2*(1L:n2)+1L] <- paste(ints2[,2*(1L:n2)+1L],") ") } R <- ci.out$R # # Print the intervals cat("BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS\n") cat(paste("Based on",R,"bootstrap replicates\n\n")) cat("CALL : \n") dput(cl, control=NULL) cat("\nIntervals : ") cat("\nLevel",intlabs[1L:n1]) cat(t(ints1)) if (n2 > 0) { cat("\n\nLevel",intlabs[(n1+1):(n1+n2)]) cat(t(ints2)) } if (!is.null(cl$h)) { if (is.null(cl$hinv) && is.null(hinv)) cat("\nCalculations and Intervals on ", "Transformed Scale\n") else cat("\nCalculations on Transformed Scale;", " Intervals on Original Scale\n") } else if (is.null(cl$hinv) && is.null(hinv)) cat("\nCalculations and Intervals on Original Scale\n") else cat("\nCalculations on Original Scale", " but Intervals Transformed\n")# # Print any warnings about extreme values. 
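    # For each interval type the endpoints are checked against the order
    # statistics actually used: an endpoint at or beyond the most extreme
    # replicate (<= 1 or >= R) triggers the "used Extreme Quantiles"
    # warning, and one within 10 of either extreme triggers the
    # "may be unstable" note.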
if (!is.null(basrg)) { if ((basrg[1L] <= 1) || (basrg[2L] >= R)) cat("Warning : Basic Intervals used Extreme Quantiles\n") if ((basrg[1L] <= 10) || (basrg[2L] >= R-9)) cat("Some basic intervals may be unstable\n") } if (!is.null(strg)) { if ((strg[1L] <= 1) || (strg[2L] >= R)) cat("Warning : Studentized Intervals used Extreme Quantiles\n") if ((strg[1L] <= 10) || (strg[2L] >= R-9)) cat("Some studentized intervals may be unstable\n") } if (!is.null(perg)) { if ((perg[1L] <= 1) || (perg[2L] >= R)) cat("Warning : Percentile Intervals used Extreme Quantiles\n") if ((perg[1L] <= 10) || (perg[2L] >= R-9)) cat("Some percentile intervals may be unstable\n") } if (!is.null(bcarg)) { if ((bcarg[1L] <= 1) || (bcarg[2L] >= R)) cat("Warning : BCa Intervals used Extreme Quantiles\n") if ((bcarg[1L] <= 10) || (bcarg[2L] >= R-9)) cat("Some BCa intervals may be unstable\n") } invisible(ci.out) } norm.ci <- function(boot.out = NULL,conf = 0.95,index = 1,var.t0 = NULL, t0 = NULL, t = NULL, L = NULL, h = function(t) t, hdot = function(t) 1, hinv = function(t) t) # # Normal approximation method for confidence intervals. This can be # used with or without a bootstrap object. If a bootstrap object is # given then the intervals are bias corrected and the bootstrap variance # estimate can be used if none is supplied. # { if (is.null(t0)) { if (!is.null(boot.out)) t0 <-boot.out$t0[index] else stop("bootstrap output object or 't0' required") } if (!is.null(boot.out) && is.null(t)) t <- boot.out$t[,index] if (!is.null(t)) { fins <- seq_along(t)[is.finite(t)] t <- h(t[fins]) } if (is.null(var.t0)) { if (is.null(t)) { if (is.null(L)) stop("unable to calculate 'var.t0'") else var.t0 <- sum((hdot(t0)*L/length(L))^2) } else var.t0 <- var(t) } else var.t0 <- var.t0*hdot(t0)^2 t0 <- h(t0) if (!is.null(t)) bias <- mean(t)-t0 else bias <- 0 merr <- sqrt(var.t0)*qnorm((1+conf)/2) out <- cbind(conf,hinv(t0-bias-merr),hinv(t0-bias+merr)) out } norm.inter <- function(t,alpha) # # Interpolation on the normal quantile scale. For a non-integer # order statistic this function interpolates between the surrounding # order statistics using the normal quantile scale. 
See equation # 5.8 of Davison and Hinkley (1997) # { t <- t[is.finite(t)] R <- length(t) rk <- (R+1)*alpha if (!all(rk>1 & rk0 & k 1L) { if (parallel == "multicore") have_mc <- .Platform$OS.type != "windows" else if (parallel == "snow") have_snow <- TRUE if (!have_mc && !have_snow) ncpus <- 1L } if (!exists(".Random.seed", envir = .GlobalEnv, inherits = FALSE)) runif(1) seed <- get(".Random.seed", envir = .GlobalEnv, inherits = FALSE) call <- match.call() if (isMatrix(data)) n <- nrow(data) else stop("'data' must be a matrix with at least 2 columns") if (ncol(data) < 2L) stop("'data' must be a matrix with at least 2 columns") if (length(index) < 2L) stop("'index' must contain 2 elements") if (length(index) > 2L) { warning("only first 2 elements of 'index' used") index <- index[1L:2L] } if (ncol(data) < max(index)) stop("indices are incompatible with 'ncol(data)'") if (sim == "weird") { if (!is.null(cox)) stop("sim = \"weird\" cannot be used with a \"coxph\" object") if (ncol(data) > 2L) warning(gettextf("only columns %s and %s of 'data' used", index[1L], index[2L]), domain = NA) data <- data[,index] } if (!is.null(cox) && is.null(cox$coefficients) && ((sim == "cond") || (sim == "model"))) { warning("no coefficients in Cox model -- model ignored") cox <- NULL } if ((sim != "ordinary") && missing(F.surv)) stop("'F.surv' is required but missing") if (missing(G.surv) && ((sim == "cond") || (sim == "model"))) stop("'G.surv' is required but missing") if (NROW(strata) != n) stop("'strata' of wrong length") if (!isMatrix(strata)) { if (!((sim == "weird") || (sim == "ordinary"))) strata <- cbind(strata, 1) } else { if ((sim == "weird") || (sim == "ordinary")) strata <- strata[, 1L] else strata <- strata[, 1L:2L] } temp.str <- strata strata <- if (isMatrix(strata)) apply(strata, 2L, function(s, n) tapply(seq_len(n), as.numeric(s)), n) else tapply(seq_len(n), as.numeric(strata)) t0 <- if ((sim == "weird") && !mstrata) statistic(data, temp.str, ...) else statistic(data, ...) ## Calculate the resampled data sets. For ordinary resampling this ## involves finding the matrix of indices of the case to be resampled. ## For the conditional bootstrap or model-based we must find an array ## consisting of R matrices containing the resampled times and their ## censoring indicators. The data sets for the weird bootstrap must be ## calculated individually. fn <- if (sim == "ordinary") { bt <- cens.case(n, strata, R) function(r) statistic(data[sort(bt[r, ]), ], ...) } else if (sim == "weird") { ## force promises data; F.surv if (!mstrata) { function(r) { bootdata <- cens.weird(data, F.surv, strata) statistic(bootdata[, 1:2], bootdata[, 3L], ...) } } else { function(r) { bootdata <- cens.weird(data, F.surv, strata) statistic(bootdata[, 1:2], ...) } } } else { bt <- cens.resamp(data, R, F.surv, G.surv, strata, index, cox, sim) function(r) { bootdata <- data bootdata[, index] <- bt[r, , ] oi <- order(bt[r, , 1L], 1-bt[r, , 2L]) statistic(bootdata[oi, ], ...) } } rm(mstrata) res <- if (ncpus > 1L && (have_mc || have_snow)) { if (have_mc) { parallel::mclapply(seq_len(R), fn, ..., mc.cores = ncpus) } else if (have_snow) { list(...) 
# evaluate any promises if (is.null(cl)) { cl <- parallel::makePSOCKcluster(rep("localhost", ncpus)) if(RNGkind()[1L] == "L'Ecuyer-CMRG") parallel::clusterSetRNGStream(cl) parallel::clusterEvalQ(cl, library(survival)) res <- parallel::parLapply(cl, seq_len(R), fn) parallel::stopCluster(cl) res } else { parallel::clusterEvalQ(cl, library(survival)) parallel::parLapply(cl, seq_len(R), fn) } } } else lapply(seq_len(R), fn) t <- matrix(, R, length(t0)) for(r in seq_len(R)) t[r, ] <- res[[r]] cens.return(sim, t0, t, temp.str, R, data, statistic, call, seed) } cens.return <- function(sim, t0, t, strata, R, data, statistic, call, seed) { # # Create an object of class "boot" from the output of a censored bootstrap. # out <- list(t0 = t0, t = t, R = R, sim = sim, data = data, seed = seed, statistic = statistic, strata = strata, call = call) class(out) <- "boot" out } cens.case <- function(n, strata, R) { # # Simple case resampling. # out <- matrix(NA, nrow = R, ncol = n) for (s in seq_along(table(strata))) { inds <- seq_len(n)[strata == s] ns <- length(inds) out[, inds] <- bsample(inds, ns*R) } out } cens.weird <- function(data, surv, strata) { # # The weird bootstrap. Censoring times are fixed and the number of # failures at each failure time are sampled from a binomial # distribution. See Chapter 3 of Davison and Hinkley (1997). # # data is a two column matrix containing the times and censoring # indicator. # surv is a survival object giving the failure time distribution. # strata is a the strata vector used in surv or a vector of 1's if no # strata were used. # m <- length(surv$time) if (is.null(surv$strata)) { nstr <- 1 str <- rep(1, m) } else { nstr <- length(surv$strata) str <- rep(1L:nstr, surv$strata) } n.ev <- rbinom(m, surv$n.risk, surv$n.event/surv$n.risk) while (any(tapply(n.ev, str, sum) == 0)) n.ev <- rbinom(m, surv$n.risk, surv$n.event/surv$n.risk) times <- rep(surv$time, n.ev) str <- rep(str, n.ev) out <- NULL for (s in 1L:nstr) { temp <- cbind(times[str == s], 1) temp <- rbind(temp, as.matrix(data[(strata == s&data[, 2L] == 0), , drop=FALSE])) temp <- cbind(temp, s) oi <- order(temp[, 1L], 1-temp[, 2L]) out <- rbind(out, temp[oi, ]) } if (is.data.frame(data)) out <- as.data.frame(out) out } cens.resamp <- function(data, R, F.surv, G.surv, strata, index = c(1,2), cox = NULL, sim = "model") { # # Other types of resampling for the censored bootstrap. This function # uses some local functions to implement the conditional bootstrap for # censored data and resampling based on a Cox regression model. This # latter method of sampling can also use conditional sampling to get the # censoring times. # # data is the data set # R is the number of replicates # F.surv is a survfit object for the failure time distribution # G.surv is a survfit object for the censoring time distribution # strata is a two column matrix, the first column gives the strata # gives the strata for the failure times and the second for the # censoring times. # index is a vector with two integer components giving the position # of the times and censoring indicators in data # cox is an object returned by the coxph function to give the Cox # regression model for the failure times. # sim is the simulation type which will always be "model" or "cond" # gety1 <- function(n, R, surv, inds) { # Sample failure times from the product limit estimate of the failure # time distribution. 
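        # The resampling probabilities are the jumps of the product-limit
        # (Kaplan-Meier) survivor curve, probs = diff(-c(1, survival)); if
        # the curve does not reach zero, the remaining mass is placed on an
        # artificial failure time of +Inf, which is handled later in
        # cens.resamp().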
survival <- surv$surv[inds] time <- surv$time[inds] n1 <- length(time) if (survival[n1] > 0L) { survival <- c(survival, 0) time <- c(time, Inf) } probs <- diff(-c(1, survival)) matrix(bsample(time, n*R, prob = probs), R, n) } gety2 <- function(n, R, surv, eta, inds) { # Sample failure times from the Cox regression model. F0 <- surv$surv[inds] time <- surv$time[inds] n1 <- length(time) if (F0[n1] > 0) { F0 <- c(F0, 0) time <- c(time, Inf) } ex <- exp(eta) Fh <- 1 - outer(F0, ex, "^") apply(rbind(0, Fh), 2L, function(p, y, R) bsample(y, R, prob = diff(p)), time, R) } getc1 <- function(n, R, surv, inds) { # Sample censoring times from the product-limit estimate of the # censoring distribution. cens <- surv$surv[inds] time <- surv$time[inds] n1 <- length(time) if (cens[n1] > 0) { cens <- c(cens, 0) time <- c(time, Inf) } probs <- diff(-c(1, cens)) matrix(bsample(time, n*R, prob = probs), nrow = R) } getc2 <- function(n, R, surv, inds, data, index) { # Sample censoring times form the conditional distribution. If a failure # was observed then sample from the product-limit estimate of the censoring # distribution conditional on the time being greater than the observed # failure time. If the observation is censored then resampled time is the # observed censoring time. cens <- surv$surv[inds] time <- surv$time[inds] n1 <- length(time) if (cens[n1] > 0) { cens <- c(cens, 0) time <- c(time, Inf) } probs <- diff(-c(1, cens)) cout <- matrix(NA, R, n) for (i in seq_len(n)) { if (data[i, 2] == 0) cout[, i] <- data[i, 1L] else { pri <- probs[time > data[i, 1L]] ti <- time[time > data[i, 1L]] if (length(ti) == 1L) cout[, i] <- ti else cout[, i] <- bsample(ti, R, prob = pri) } } cout } n <- nrow(data) Fstart <- 1 Fstr <- F.surv$strata if (is.null(Fstr)) Fstr <- length(F.surv$time) Gstart <- 1 Gstr <- G.surv$strata if (is.null(Gstr)) Gstr <- length(G.surv$time) out <- array(NA, c(R, n, 2)) y0 <- matrix(NA, R, n) for (s in seq_along(table(strata[, 1L]))) { # Find the resampled failure times within strata for failures ns <- sum(strata[, 1L] == s) inds <- Fstart:(Fstr[s]+Fstart-1) y0[, strata[, 1L] == s] <- if (is.null(cox)) gety1(ns, R, F.surv, inds) else gety2(ns, R, F.surv, cox$linear.predictors[strata[, 1L] == s], inds) Fstart <- Fstr[s]+Fstart } c0 <- matrix(NA, R, n) for (s in seq_along(table(strata[, 2L]))) { # Find the resampled censoring times within strata for censoring times ns <- sum(strata[, 2] == s) inds <- Gstart:(Gstr[s]+Gstart-1) c0[, strata[, 2] == s] <- if (sim != "cond") getc1(ns, R, G.surv, inds) else getc2(ns, R, G.surv, inds, data[strata[,2] == s, index]) Gstart <- Gstr[s]+Gstart } infs <- (is.infinite(y0) & is.infinite(c0)) if (sum(infs) > 0) { # If both the resampled failure time and the resampled censoring time # are infinite then set the resampled time to be a failure at the largest # failure time in the failure time stratum containing the observation. evs <- seq_len(n)[data[, index[2L]] == 1] maxf <- tapply(data[evs, index[1L]], strata[evs, 1L], max) maxf <- matrix(maxf[strata[, 1L]], nrow = R, ncol = n, byrow = TRUE) y0[infs] <- maxf[infs] } array(c(pmin(y0, c0), 1*(y0 <= c0)), c(dim(y0), 2)) } empinf <- function(boot.out = NULL, data = NULL, statistic = NULL, type = NULL, stype = NULL ,index = 1, t = NULL, strata = rep(1, n), eps = 0.001, ...) { # # Calculation of empirical influence values. 
Possible types are # "inf" = infinitesimal jackknife (numerical differentiation) # "reg" = regression based estimation # "jack" = usual jackknife estimates # "pos" = positive jackknife estimates # if (!is.null(boot.out)) { if (boot.out$sim == "parametric") stop("influence values cannot be found from a parametric bootstrap") data <- boot.out$data if (is.null(statistic)) statistic <- boot.out$statistic if (is.null(stype)) stype <- boot.out$stype if (!is.null(boot.out$strata)) strata <- boot.out$strata } else { if (is.null(data)) stop("neither 'data' nor bootstrap object specified") if (is.null(statistic)) stop("neither 'statistic' nor bootstrap object specified") if (is.null(stype)) stype <- "w" } n <- NROW(data) if (is.null(type)) { if (!is.null(t)) type <- "reg" else if (stype == "w") type <- "inf" else if (!is.null(boot.out) && (boot.out$sim != "parametric") && (boot.out$sim != "permutation")) type <- "reg" else type <- "jack" } if (type == "inf") { # calculate the infinitesimal jackknife values by numerical differentiation if (stype !="w") stop("'stype' must be \"w\" for type=\"inf\"") if (length(index) != 1L) { warning("only first element of 'index' used") index <- index[1L] } if (!is.null(t)) warning("input 't' ignored; type=\"inf\"") L <- inf.jack(data, statistic, index, strata, eps, ...) } else if (type == "reg") { # calculate the regression estimates of the influence values if (is.null(boot.out)) stop("bootstrap object needed for type=\"reg\"") if (is.null(t)) { if (length(index) != 1L) { warning("only first element of 'index' used") index <- index[1L] } t <- boot.out$t[,index] } L <- empinf.reg(boot.out, t) } else if (type == "jack") { if (!is.null(t)) warning("input 't' ignored; type=\"jack\"") if (length(index) != 1L) { warning("only first element of 'index' used") index <- index[1L] } L <- usual.jack(data, statistic, stype, index, strata, ...) } else if (type == "pos") { if (!is.null(t)) warning("input 't' ignored; type=\"pos\"") if (length(index) != 1L) { warning("only first element of 'index' used") index <- index[1L] } L <- positive.jack(data, statistic, stype, index, strata, ...) } L } inf.jack <- function(data, stat, index = 1, strata = rep(1, n), eps = 0.001, ...) { # # Numerical differentiation to get infinitesimal jackknife estimates # of the empirical influence values. # n <- NROW(data) L <- seq_len(n) eps <- eps/n strata <- tapply(strata, as.numeric(strata)) w.orig <- 1/table(strata)[strata] tobs <- stat(data, w.orig, ...)[index] for(i in seq_len(n)) { group <- seq_len(n)[strata == strata[i]] w <- w.orig w[group] <- (1 - eps)*w[group] w[i] <- w[i] + eps L[i] <- (stat(data, w, ...)[index] - tobs)/eps } L } empinf.reg <- function(boot.out, t = boot.out$t[,1L]) # # Function to estimate empirical influence values using regression. # This method regresses the observed bootstrap values on the bootstrap # frequencies to estimate the empirical influence values # { fins <- seq_along(t)[is.finite(t)] t <- t[fins] R <- length(t) n <- NROW(boot.out$data) strata <- boot.out$strata if (is.null(strata)) strata <- rep(1,n) else strata <- tapply(strata,as.numeric(strata)) ns <- table(strata) # S <- length(ns) f <- boot.array(boot.out)[fins,] X <- f/matrix(ns[strata], R, n ,byrow=TRUE) out <- tapply(seq_len(n), strata, min) inc <- seq_len(n)[-out] X <- X[,inc] beta <- coefficients(glm(t ~ X))[-1L] l <- rep(0, n) l[inc] <- beta l <- l - tapply(l,strata,mean)[strata] l } usual.jack <- function(data, stat, stype = "w", index = 1, strata = rep(1, n), ...) 
# # Function to use the normal (delete 1) jackknife method to estimate the # empirical influence values # { n <- NROW(data) l <- rep(0,n) strata <- tapply(strata,as.numeric(strata)) if (stype == "w") { w0 <- rep(1, n)/table(strata)[strata] tobs <- stat(data, w0, ...)[index] for (i in seq_len(n)) { w1 <- w0 w1[i] <- 0 gp <- strata == strata[i] w1[gp] <- w1[gp]/sum(w1[gp]) l[i] <- (sum(gp)-1)*(tobs - stat(data,w1, ...)[index]) } } else if (stype == "f") { f0 <- rep(1,n) tobs <- stat(data, f0, ...)[index] for (i in seq_len(n)) { f1 <- f0 f1[i] <- 0 gp <- strata == strata[i] l[i] <- (sum(gp)-1)*(tobs - stat(data, f1, ...)[index]) } } else { i0 <- seq_len(n) tobs <- stat(data, i0, ...)[index] for (i in seq_len(n)) { i1 <- i0[-i] gp <- strata == strata[i] l[i] <- (sum(gp)-1)*(tobs - stat(data, i1, ...)[index]) } } l } positive.jack <- function(data, stat, stype = "w", index = 1, strata = rep(1 ,n), ...) { # # Use the positive jackknife to estimate the empirical influence values. # The positive jackknife includes one observation twice to find its # influence. # strata <- tapply(strata,as.numeric(strata)) n <- NROW(data) L <- rep(0, n) if (stype == "w") { w0 <- rep(1, n)/table(strata)[strata] tobs <- stat(data, w0, ...)[index] for (i in seq_len(n)) { st1 <- c(strata,strata[i]) w1 <- 1/table(st1)[strata] w1[i] <- 2*w1[i] gp <- strata == strata[i] w1[gp] <- w1[gp]/sum(w1[gp]) L[i] <- (sum(gp)+1)*(stat(data, w1, ...)[index] - tobs) } } else if (stype == "f") { f0 <- rep(1,n) tobs <- stat(data, f0, ...)[index] for (i in seq_len(n)) { f1 <- f0 f1[i] <- 2 gp <- strata == strata[i] L[i] <- (sum(gp)+1)*(stat(data, f1, ...)[index] - tobs) } } else if (stype == "i") { i0 <- seq_len(n) tobs <- stat(data, i0, ...)[index] for (i in seq_len(n)) { i1 <- c(i0, i) gp <- strata == strata[i] L[i] <- (sum(gp)+1)*(stat(data, i1, ...)[index] - tobs) } } L } linear.approx <- function(boot.out, L = NULL, index = 1, type = NULL, t0 = NULL, t = NULL, ...) # # Find the linear approximation to the bootstrap replicates of a # statistic. L should be the linear influence values which will # be found by empinf if they are not supplied. # { f <- boot.array(boot.out) n <- length(f[1, ]) if ((length(index) > 1L) && (is.null(t0) || is.null(t))) { warning("only first element of 'index' used") index <- index[1L] } if (is.null(t0)) { t0 <- boot.out$t0[index] if (is.null(L)) L <- empinf(boot.out, index=index, type=type, ...) } else if (is.null(t) && is.null(L)) { warning("input 't0' ignored: neither 't' nor 'L' supplied") t0 <- t0[index] L <- empinf(boot.out, index=index, type=type, ...) } else if (is.null(L)) L <- empinf(boot.out, type=type, t=t, ...) tL <- rep(t0, boot.out$R) strata <- boot.out$strata if (is.null(strata)) strata <- rep(1, n) else strata <- tapply(strata,as.numeric(strata)) S <- length(table(strata)) for(s in 1L:S) { i.s <- seq_len(n)[strata == s] tL <- tL + f[, i.s] %*% L[i.s]/length(i.s) } as.vector(tL) } envelope <- function(boot.out = NULL, mat = NULL, level = 0.95, index = 1L:ncol(mat)) # # Function to estimate pointwise and overall confidence envelopes for # a function. # # mat is a matrix of bootstrap values of the function at a number of # points. The points at which they are evaluated are assumed to # be constant over the rows. # { emperr <- function(rmat, p = 0.05, k = NULL) # Local function to estimate the overall error rate of an envelope. 
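    # For a given k the pointwise envelope in each column is formed from
    # the k-th smallest and k-th largest of the R values, so the nominal
    # pointwise error is p = 2*k/(R+1); emperr() returns k, p and the
    # proportion of bootstrap curves that breach the envelope somewhere,
    # which is the empirical overall error rate.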
{ R <- nrow(rmat) if (is.null(k)) k <- p*(R+1)/2 else p <- 2*k/(R+1) kf <- function(x, k, R) 1*((min(x) <= k)|(max(x) >= R+1L-k)) c(k, p, sum(apply(rmat, 1L, kf, k, R))/(R+1)) } kfun <- function(x, k1, k2) # Local function to find the cut-off points in each column of the matrix. sort(x ,partial = sort(c(k1, k2)))[c(k1, k2)] if (!is.null(boot.out) && isMatrix(boot.out$t)) mat <- boot.out$t if (!isMatrix(mat)) stop("bootstrap output matrix missing") n <- ncol(mat) if (length(index) < 2L) stop("use 'boot.ci' for scalar parameters") mat <- mat[,index] rmat <- apply(mat,2L,rank) R <- nrow(mat) if (length(level) == 1L) level <- rep(level,2L) k.pt <- floor((R+1)*(1-level[1L])/2+1e-10) k.pt <- c(k.pt, R+1-k.pt) err.pt <- emperr(rmat,k = k.pt[1L]) ov <- emperr(rmat,k = 1) ee <- err.pt al <- 1-level[2L] if (ov[3L] > al) warning("unable to achieve requested overall error rate") else { continue <- !(ee[3L] < al) while(continue) { # If the observed error is greater than the level required for the overall # envelope then try another envelope. This loop uses linear interpolation # on the integers between 1 and k.pt[1L] to find the required value. kk <- ov[1L]+round((ee[1L]-ov[1L])*(al-ov[3L])/ (ee[3L]-ov[3L])) if (kk == ov[1L]) kk <- kk+1 else if (kk == ee[1L]) kk <- kk-1 temp <- emperr(rmat, k = kk) if (temp[3L] > al) ee <- temp else ov <- temp continue <- !(ee[1L] == ov[1L]+1) } } k.ov <- c(ov[1L], R+1-ov[1L]) err.ov <- ov[-1L] out <- apply(mat, 2L, kfun, k.pt, k.ov) list(point = out[2:1,], overall = out[4:3,], k.pt = k.pt, err.pt = err.pt[-1L], k.ov = k.ov, err.ov = err.ov, err.nom = 1-level) } glm.diag <- function(glmfit) { # # Calculate diagnostics for objects of class "glm". The diagnostics # calculated are various types of residuals as well as the Cook statistics # and the leverages. # w <- if (is.null(glmfit$prior.weights)) rep(1,length(glmfit$residuals)) else glmfit$prior.weights sd <- switch(family(glmfit)$family[1L], "gaussian" = sqrt(glmfit$deviance/glmfit$df.residual), "Gamma" = sqrt(sum(w*(glmfit$y/fitted(glmfit) - 1)^2)/ glmfit$df.residual), 1) ## sd <- ifelse(family(glmfit)$family[1L] == "gaussian", ## sqrt(glmfit$deviance/glmfit$df.residual), 1) ## sd <- ifelse(family(glmfit)$family[1L] == "Gamma", ## sqrt(sum(w*(glmfit$y/fitted(glmfit) - 1)^2)/glmfit$df.residual), sd) dev <- residuals(glmfit, type = "deviance")/sd pear <- residuals(glmfit, type = "pearson")/sd ## R change: lm.influence drops 0-wt cases. 
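##
## --- Usage sketch (illustration only; not part of the original sources) -----
## envelope() needs a bootstrap object whose statistic returns a vector, one
## element per point at which the function is evaluated.  Here the (invented)
## statistic is a set of sample deciles.
##   qfun <- function(d, i) quantile(d[i], probs = (1:9)/10)
##   qb <- boot(rnorm(100), qfun, R = 999)
##   env <- envelope(qb, level = 0.95)
##   env$point      # pointwise 95% limits at each decile
##   env$overall    # wider limits with approximate 95% overall coverage
## ----------------------------------------------------------------------------
##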
h <- rep(0, length(w)) h[w != 0] <- lm.influence(glmfit)$hat p <- glmfit$rank rp <- pear/sqrt(1 - h) rd <- dev/sqrt(1 - h) cook <- (h * rp^2)/((1 - h) * p) res <- sign(dev) * sqrt(dev^2 + h * rp^2) list(res = res, rd = rd, rp = rp, cook = cook, h = h, sd = sd) } glm.diag.plots <- function(glmfit, glmdiag = glm.diag(glmfit), subset = NULL, iden = FALSE, labels = NULL, ret = FALSE) { # Diagnostic plots for objects of class "glm" if (is.null(glmdiag)) glmdiag <- glm.diag(glmfit) if (is.null(subset)) subset <- seq_along(glmdiag$h) else if (is.logical(subset)) subset <- seq_along(subset)[subset] else if (is.numeric(subset) && all(subset<0)) subset <- (1L:(length(subset)+length(glmdiag$h)))[subset] else if (is.character(subset)) { if (is.null(labels)) labels <- subset subset <- seq_along(subset) } # close.screen(all = T) # split.screen(c(2, 2)) # screen(1) # par(mfrow = c(2,2)) # Plot the deviance residuals against the fitted values x1 <- predict(glmfit) plot(x1, glmdiag$res, xlab = "Linear predictor", ylab = "Residuals") pars <- vector(4L, mode="list") pars[[1L]] <- par("usr") # screen(2) # # Plot a normal QQ plot of the standardized deviance residuals y2 <- glmdiag$rd x2 <- qnorm(ppoints(length(y2)))[rank(y2)] plot(x2, y2, ylab = "Quantiles of standard normal", xlab = "Ordered deviance residuals") abline(0, 1, lty = 2) pars[[2L]] <- par("usr") # screen(3) # # Plot the Cook statistics against h/(1-h) and draw line to highlight # possible influential and high leverage points. hh <- glmdiag$h/(1 - glmdiag$h) plot(hh, glmdiag$cook, xlab = "h/(1-h)", ylab = "Cook statistic") rx <- range(hh) ry <- range(glmdiag$cook) rank.fit <- glmfit$rank nobs <- rank.fit + glmfit$df.residual cooky <- 8/(nobs - 2 * rank.fit) hy <- (2 * rank.fit)/(nobs - 2 * rank.fit) if ((cooky >= ry[1L]) && (cooky <= ry[2L])) abline(h = cooky, lty = 2) if ((hy >= rx[1L]) && (hy <= rx[2L])) abline(v = hy, lty = 2) pars[[3L]] <- par("usr") # screen(4) # # Plot the Cook statistics against the observation number in the original # data set. plot(subset, glmdiag$cook, xlab = "Case", ylab = "Cook statistic") if ((cooky >= ry[1L]) && (cooky <= ry[2L])) abline(h = cooky, lty = 2) xx <- list(x1,x2,hh,subset) yy <- list(glmdiag$res, y2, glmdiag$cook, glmdiag$cook) pars[[4L]] <- par("usr") if (is.null(labels)) labels <- names(x1) while (iden) { # If interaction with the plots is required then ask the user which plot # they wish to interact with and then run identify() on that plot. # When the user terminates identify(), reprompt until no further interaction # is required and the user inputs a 0. cat("****************************************************\n") cat("Please Input a screen number (1,2,3 or 4)\n") cat("0 will terminate the function \n") # num <- scan(nmax=1) num <- as.numeric(readline()) if ((length(num) > 0L) && ((num == 1)||(num == 2)||(num == 3)||(num == 4))) { cat(paste("Interactive Identification for screen", num,"\n")) cat("left button = Identify, center button = Exit\n") # screen(num, new=F) nm <- num+1 par(mfg = c(trunc(nm/2),1 +nm%%2, 2, 2)) par(usr = pars[[num]]) identify(xx[[num]], yy[[num]], labels) } else iden <- FALSE } # close.screen(all=T) par(mfrow = c(1, 1)) if (ret) glmdiag else invisible() } exp.tilt <- function(L, theta = NULL, t0 = 0, lambda = NULL, strata = rep(1, length(L)) ) { # exponential tilting of linear approximation to statistic # to give mean theta. 
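##
## --- Usage sketch (illustration only; not part of the original sources) -----
## Typical use of glm.diag() and glm.diag.plots() on a fitted glm.  The model
## and the simulated data are invented.
##   x <- runif(50); y <- rpois(50, exp(1 + 2 * x))
##   fit <- glm(y ~ x, family = poisson)
##   gd <- glm.diag(fit)          # residuals, Cook statistics, leverages
##   glm.diag.plots(fit, gd)      # the four diagnostic plots
## ----------------------------------------------------------------------------
##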
# tilt.dis <- function(lambda) { # Find the squared error in the mean using the multiplier lambda # This is then minimized to find the correct value of lambda # Note that the function should have minimum 0. L <- para[[2L]] theta <- para[[1L]] strata <- para[[3L]] ns <- table(strata) tilt <- rep(NA, length(L) ) for (s in seq_along(ns)) { p <- exp(lambda*L[strata == s]/ns[s]) tilt[strata == s] <- p/sum(p) } (sum(L*tilt) - theta)^2 } tilted.prob <- function(lambda, L, strata) { # Find the tilted probabilities for a given value of lambda ns <- table(strata) m <- length(lambda) tilt <- matrix(NA, m, length(L)) for (i in 1L:m) for (s in seq_along(ns)) { p <- exp(lambda[i]*L[strata == s]/ns[s]) tilt[i,strata == s] <- p/sum(p) } if (m == 1) tilt <- as.vector(tilt) tilt } strata <- tapply(strata, as.numeric(strata)) if (!is.null(theta)) { theta <- theta-t0 m <- length(theta) lambda <- rep(NA,m) for (i in 1L:m) { para <- list(theta[i],L,strata) # assign("para",para,frame=1) # lambda[i] <- nlmin(tilt.dis, 0 )$x lambda[i] <- optim(0, tilt.dis, method = "BFGS")$par msd <- tilt.dis(lambda[i]) if (is.na(msd) || (abs(msd) > 1e-6)) stop(gettextf("unable to find multiplier for %f", theta[i]), domain = NA) } } else if (is.null(lambda)) stop("'theta' or 'lambda' required") probs <- tilted.prob( lambda, L, strata ) if (is.null(theta)) theta <- t0 + sum(probs * L) else theta <- theta+t0 list(p = probs, theta = theta, lambda = lambda) } imp.weights <- function(boot.out, def = TRUE, q = NULL) { # # Takes boot.out object and calculates importance weights # for each element of boot.out$t, as if sampling from multinomial # distribution with probabilities q. # If q is NULL the weights are calculated as if # sampling from a distribution with equal probabilities. # If def=T calculates weights using defensive mixture # distribution, if F uses weights knowing from which element of # the mixture they come. # R <- boot.out$R if (length(R) == 1L) def <- FALSE f <- boot.array(boot.out) n <- ncol(f) strata <- tapply(boot.out$strata,as.numeric(boot.out$strata)) # ns <- table(strata) if (is.null(q)) q <- rep(1,ncol(f)) if (any(q == 0)) stop("0 elements not allowed in 'q'") p <- boot.out$weights if ((length(R) == 1L) && all(abs(p - q)/p < 1e-10)) return(rep(1, R)) np <- length(R) q <- normalize(q, strata) lw.q <- as.vector(f %*% log(q)) if (!isMatrix(p)) p <- as.matrix(t(p)) p <- t(apply(p, 1L, normalize, strata)) lw.p <- matrix(NA, sum(R), np) for(i in 1L:np) { zz <- seq_len(n)[p[i, ] > 0] lw.p[, i] <- f[, zz] %*% log(p[i, zz]) } if (def) w <- 1/(exp(lw.p - lw.q) %*% R/sum(R)) else { i <- cbind(seq_len(sum(R)), rep(seq_along(R), R)) w <- exp(lw.q - lw.p[i]) } as.vector(w) } const <- function(w, eps=1e-8) { # Are all of the values of w equal to within the tolerance eps. all(abs(w-mean(w, na.rm=TRUE)) < eps) } imp.moments <- function(boot.out=NULL, index=1, t=boot.out$t[,index], w=NULL, def=TRUE, q=NULL ) { # Calculates raw, ratio, and regression estimates of mean and # variance of t using importance sampling weights in w. 
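##
## --- Usage sketch (illustration only; not part of the original sources) -----
## exp.tilt() turns empirical influence values into resampling probabilities
## whose linear approximation has mean theta; these can then be passed to
## boot() through its 'weights' argument.  Data and statistic are invented.
##   y <- rnorm(20)
##   wmean <- function(d, w) sum(d * w)/sum(w)
##   L <- empinf(data = y, statistic = wmean, type = "inf")
##   tlt <- exp.tilt(L, theta = quantile(y, c(0.1, 0.9)), t0 = mean(y))
##   tlt$p          # one set of tilted probabilities per value of theta
## ----------------------------------------------------------------------------
##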
if (missing(t) && is.null(boot.out$t)) stop("bootstrap replicates must be supplied") if (is.null(w)) if (!is.null(boot.out)) w <- imp.weights(boot.out, def, q) else stop("either 'boot.out' or 'w' must be specified.") if ((length(index) > 1L) && missing(t)) { warning("only first element of 'index' used") t <- boot.out$t[,index[1L]] } fins <- seq_along(t)[is.finite(t)] t <- t[fins] w <- w[fins] if (!const(w)) { y <- t*w m.raw <- mean( y ) m.rat <- sum( y )/sum( w ) t.lm <- lm( y~w ) m.reg <- mean( y ) - coefficients(t.lm)[2L]*(mean(w)-1) v.raw <- mean(w*(t-m.raw)^2) v.rat <- sum(w/sum(w)*(t-m.rat)^2) x <- w*(t-m.reg)^2 t.lm2 <- lm( x~w ) v.reg <- mean( x ) - coefficients(t.lm2)[2L]*(mean(w)-1) } else { m.raw <- m.rat <- m.reg <- mean(t) v.raw <- v.rat <- v.reg <- var(t) } list( raw=c(m.raw,v.raw), rat = c(m.rat,v.rat), reg = as.vector(c(m.reg,v.reg))) } imp.reg <- function(w) { # This function takes a vector of importance sampling weights and # returns the regression importance sampling weights. The function # is called by imp.prob and imp.quantiles to enable those functions # to find regression estimates of tail probabilities and quantiles. if (!const(w)) { R <- length(w) mw <- mean(w) s2w <- (R-1)/R*var(w) b <- (1-mw)/s2w w <- w*(1+b*(w-mw))/R } cumsum(w)/sum(w) } imp.quantile <- function(boot.out=NULL, alpha=NULL, index=1, t=boot.out$t[,index], w=NULL, def=TRUE, q=NULL ) { # Calculates raw, ratio, and regression estimates of alpha quantiles # of t using importance sampling weights in w. if (missing(t) && is.null(boot.out$t)) stop("bootstrap replicates must be supplied") if (is.null(alpha)) alpha <- c(0.01,0.025,0.05,0.95,0.975,0.99) if (is.null(w)) if (!is.null(boot.out)) w <- imp.weights(boot.out, def, q) else stop("either 'boot.out' or 'w' must be specified.") if ((length(index) > 1L) && missing(t)){ warning("only first element of 'index' used") t <- boot.out$t[,index[1L]] } fins <- seq_along(t)[is.finite(t)] t <- t[fins] w <- w[fins] o <- order(t) t <- t[o] w <- w[o] cum <- cumsum(w) o <- rev(o) w.m <- w[o] t.m <- -rev(t) cum.m <- cumsum(w.m) cum.rat <- cum/mean(w) cum.reg <- imp.reg(w) R <- length(w) raw <- rat <- reg <- rep(NA,length(alpha)) for (i in seq_along(alpha)) { if (alpha[i]<=0.5) raw[i] <- max(t[cum<=(R+1)*alpha[i]]) else raw[i] <- -max(t.m[cum.m<=(R+1)*(1-alpha[i])]) rat[i] <- max(t[cum.rat <= (R+1)*alpha[i]]) reg[i] <- max(t[cum.reg <= (R+1)*alpha[i]]) } list(alpha=alpha, raw=raw, rat=rat, reg=reg) } imp.prob <- function(boot.out=NULL, index=1, t0=boot.out$t0[index], t=boot.out$t[,index], w=NULL, def=TRUE, q=NULL) { # Calculates raw, ratio, and regression estimates of tail probability # pr( t <= t0 ) using importance sampling weights in w. 
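##
## --- Usage sketch (illustration only; not part of the original sources) -----
## A weighted (tilted) bootstrap followed by importance re-weighting:
## imp.weights() gives one weight per replicate, and imp.moments(),
## imp.quantile() and imp.prob() use them to estimate moments, quantiles and
## tail probabilities under equal-probability resampling.  All names and
## tuning values below are invented.
##   y <- rnorm(50)
##   wmean <- function(d, w) sum(d * w)/sum(w)
##   L <- empinf(data = y, statistic = wmean, type = "inf")
##   p <- exp.tilt(L, theta = quantile(y, 0.95), t0 = mean(y))$p
##   yb <- boot(y, wmean, R = 999, stype = "w", weights = p)
##   w <- imp.weights(yb)
##   imp.moments(yb, w = w)
##   imp.quantile(yb, alpha = c(0.05, 0.95), w = w)
## ----------------------------------------------------------------------------
##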
is.missing <- function(x) length(x) == 0L || is.na(x) if (missing(t) && is.null(boot.out$t)) stop("bootstrap replicates must be supplied") if (is.null(w)) if (!is.null(boot.out)) w <- imp.weights(boot.out, def, q) else stop("either 'boot.out' or 'w' must be specified.") if ((length(index) > 1L) && (missing(t) || missing(t0))) { warning("only first element of 'index' used") index <- index[1L] if (is.missing(t)) t <- boot.out$t[,index] if (is.missing(t0)) t0 <- boot.out$t0[index] } fins <- seq_along(t)[is.finite(t)] t <- t[fins] w <- w[fins] o <- order(t) t <- t[o] w <- w[o] raw <- rat <- reg <- rep(NA,length(t0)) cum <- cumsum(w)/sum(w) cum.r <- imp.reg(w) for (i in seq_along(t0)) { raw[i] <-sum(w[t<=t0[i]])/length(w) rat[i] <- max(cum[t<=t0[i]]) reg[i] <- max(cum.r[t<=t0[i]]) } list(t0=t0, raw=raw, rat=rat, reg=reg ) } smooth.f <- function(theta, boot.out, index=1, t=boot.out$t[,index], width=0.5 ) { # Does frequency smoothing of the frequency array for boot.out with # bandwidth A to give frequencies for 'typical' distribution at theta if ((length(index) > 1L) && missing(t)) { warning("only first element of 'index' used") t <- boot.out$t[,index[1L]] } if (isMatrix(t)) { warning("only first column of 't' used") t <- t[,1L] } fins <- seq_along(t)[is.finite(t)] t <- t[fins] m <- length(theta) v <- imp.moments(boot.out, t=t)$reg[2L] eps <- width*sqrt(v) if (m == 1) w <- dnorm((theta-t)/eps )/eps else { w <- matrix(0,length(t),m) for (i in 1L:m) w[,i] <- dnorm((theta[i]-t)/eps )/eps } f <- crossprod(boot.array(boot.out)[fins,] , w) strata <- boot.out$strata strata <- tapply(strata, as.numeric(strata)) ns <- table(strata) out <- matrix(NA,ncol(f),nrow(f)) for (s in seq_along(ns)) { ts <- matrix(f[strata == s,],m,ns[s],byrow=TRUE) ss <- apply(ts,1L,sum) out[,strata == s] <- ts/matrix(ss,m,ns[s]) } if (m == 1) out <- as.vector(out) out } tilt.boot <- function(data, statistic, R, sim="ordinary", stype="i", strata = rep(1, n), L = NULL, theta=NULL, alpha=c(0.025,0.975), tilt=TRUE, width=0.5, index=1, ... ) { # Does tilted bootstrap sampling of stat applied to data with strata strata # and simulation type sim. # The levels of R give the number of simulations at each level. For example, # R=c(199,100,50) will give three separate bootstraps with 199, 100, 50 # simulations. If R[1L]>0 the first simulation is assumed to be untilted # and L can be estimated from it by regression, or it can be frequency # smoothed to give probabilities p. # If tilt=T use exponential tilting with empirical influence value L # given explicitly or estimated from boot0, but if tilt=F # (in which case R[1L] should be large) frequency smoothing of boot0 is used # with bandwidth A. # Tilting/frequency smoothing is to theta (so length(theta)=length(R)-1). # The function assumes at present that q=0 is the median of the distribution # of t*. if ((sim != "ordinary") && (sim != "balanced")) stop("invalid value of 'sim' supplied") if (!is.null(theta) && (length(R) != length(theta)+1)) stop("'R' and 'theta' have incompatible lengths") if (!tilt && (R[1L] == 0)) stop("R[1L] must be positive for frequency smoothing") call <- match.call() n <- NROW(data) if (R[1L]>0) { # If required run an initial bootstrap with equal weights. if (is.null(theta) && (length(R) != length(alpha)+1)) stop("'R' and 'alpha' have incompatible lengths") boot0 <- boot(data, statistic, R = R[1L], sim=sim, stype=stype, strata = strata, ... 
) if (is.null(theta)) { if (any(c(alpha,1-alpha)*(R[1L]+1) <= 5)) warning("extreme values used for quantiles") theta <- quantile(boot0$t[,index],alpha) } } else { # If no initial bootstrap is run then exponential tilting must be # used. Also set up a dummy bootstrap object to hold the output. tilt <- TRUE if (is.null(theta)) stop("'theta' must be supplied if R[1L] = 0") if (!missing(alpha)) warning("'alpha' ignored; R[1L] = 0") if (stype == "i") orig <- seq_len(n) else if (stype == "f") orig <- rep(1,n) else orig <- rep(1,n)/n boot0 <- boot.return(sim=sim,t0=statistic(data,orig, ...), t=NULL, strata=strata, R=0, data=data, stat=statistic, stype=stype,call=NULL, seed=get(".Random.seed", envir=.GlobalEnv, inherits = FALSE), m=0,weights=NULL) } # Calculate the weights for the subsequent bootstraps if (is.null(L) & tilt) if (R[1L] > 0) L <- empinf(boot0, index, ...) else L <- empinf(data=data, statistic=statistic, stype=stype, index=index, ...) if (tilt) probs <- exp.tilt(L, theta, strata=strata, t0=boot0$t0[index])$p else probs <- smooth.f(theta, boot0, index, width=width)# # Run the weighted bootstraps and collect the output. boot1 <- boot(data, statistic, R[-1L], sim=sim, stype=stype, strata=strata, weights=probs, ...) boot0$t <- rbind(boot0$t, boot1$t) boot0$weights <- rbind(boot0$weights, boot1$weights) boot0$R <- c(boot0$R, boot1$R) boot0$call <- call boot0$theta <- theta boot0 } control <- function(boot.out, L=NULL, distn=NULL, index=1, t0=NULL, t=NULL, bias.adj=FALSE, alpha=NULL, ... ) { # # Control variate estimation. Post-simulation balance can be used to # find the adjusted bias estimate. Alternatively the linear approximation # to the statistic of interest can be used as a control variate and hence # moments and quantiles can be estimated. # if (!is.null(boot.out$call$weights)) stop("control methods undefined when 'boot.out' has weights") if (is.null(alpha)) alpha <- c(1,2.5,5,10,20,50,80,90,95,97.5,99)/100 tL <- dL <- bias <- bias.L <- var.L <- NULL k3.L <- q.out <- distn.L <- NULL stat <- boot.out$statistic data <- boot.out$data R <- boot.out$R f <- boot.array(boot.out) if (bias.adj) { # Find the adjusted bias estimate using post-simulation balance. if (length(index) > 1L) { warning("only first element of 'index' used") index <- index[1L] } f.big <- apply(f, 2L, sum) if (boot.out$stype == "i") { n <- ncol(f) i.big <- rep(seq_len(n),f.big) t.big <- stat(data, i.big, ...)[index] } else if (boot.out$stype == "f") t.big <- stat(data, f.big, ...)[index] else if (boot.out$stype == "w") t.big <- stat(data, f.big/R, ...)[index] bias <- mean(boot.out$t[, index]) - t.big out <- bias } else { # Using the linear approximation as a control variable, find estimates # of the moments and quantiles of the statistic of interest. if (is.null(t) || is.null(t0)) { if (length(index) > 1L) { warning("only first element of 'index' used") index <- index[1L] } if (is.null(L)) L <- empinf(boot.out, index=index, ...) tL <- linear.approx(boot.out, L, index, ...) t <- boot.out$t[,index] t0 <- boot.out$t0[index] } else { if (is.null(L)) L <- empinf(boot.out, t=t, ...) tL <- linear.approx(boot.out, L, t0=t0, ...) } fins <- seq_along(t)[is.finite(t)] t <- t[fins] tL <- tL[fins] R <- length(t) dL <- t - tL # # Find the moments of the statistic of interest. 
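##
## --- Usage sketch (illustration only; not part of the original sources) -----
## tilt.boot() automates the tilted-sampling scheme: an initial untilted
## bootstrap of R[1] replicates estimates where the tails are, and the
## remaining bootstraps are tilted towards those points.  Data, statistic and
## replicate counts are invented.
##   y <- rnorm(50)
##   wmean <- function(d, w) sum(d * w)/sum(w)
##   tb <- tilt.boot(y, wmean, R = c(499, 250, 250), stype = "w",
##                   alpha = c(0.025, 0.975))
##   tb$theta       # the two values the later bootstraps were tilted to
## ----------------------------------------------------------------------------
##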
bias.L <- mean(dL) strata <- tapply(boot.out$strata, as.numeric(boot.out$strata)) var.L <- var.linear(L, strata) + 2*var(tL, dL) + var(dL) k3.L <- k3.linear(L, strata) + 3 * cum3(tL, dL) + 3 * cum3(dL, tL) + cum3(dL) if (is.null(distn)) { # If distn is not supplied then calculate the saddlepoint approximation to # the distribution of the linear approximation. distn <- saddle.distn((t0+L)/length(L), alpha = (1L:R)/(R + 1), t0=c(t0,sqrt(var.L)), strata=strata) dist.q <- distn$quantiles[,2] distn <- distn$distn } else dist.q <- predict(distn, x=qnorm((1L:R)/(R+1)))$y# # Use the quantiles of the distribution of the linear approximation and # the control variates to estimate the quantiles of the statistic of interest. distn.L <- sort(dL[order(tL)] + dist.q) q.out <- distn.L[(R + 1) * alpha] out <- list(L=L, tL=tL, bias=bias.L, var=var.L, k3=k3.L, quantiles=cbind(alpha,q.out), distn=distn) } out } var.linear <- function(L, strata = NULL) { # estimate the variance of a statistic using its linear approximation vL <- 0 n <- length(L) if (is.null(strata)) strata <- rep(1, n) else strata <- tapply(seq_len(n),as.numeric(strata)) S <- length(table(strata)) for(s in 1L:S) { i.s <- seq_len(n)[strata == s] vL <- vL + sum(L[i.s]^2/length(i.s)^2) } vL } k3.linear <- function(L, strata = NULL) { # estimate the skewness of a statistic using its linear approximation k3L <- 0 n <- length(L) if (is.null(strata)) strata <- rep(1, n) else strata <- tapply(seq_len(n),as.numeric(strata)) S <- length(table(strata)) for(s in 1L:S) { i.s <- seq_len(n)[strata == s] k3L <- k3L + sum(L[i.s]^3/length(i.s)^3) } k3L } cum3 <- function(a, b=a, c=a, unbiased=TRUE) # calculate third order cumulants. { n <- length(a) if (unbiased) mult <- n/((n-1)*(n-2)) else mult <- 1/n mult*sum((a - mean(a)) * (b - mean(b)) * (c - mean(c))) } logit <- function(p) qlogis(p) # # Calculate the logit of a proportion in the range [0,1] # ## { ## out <- p ## inds <- seq_along(p)[!is.na(p)] ## if (any((p[inds] < 0) | (p[inds] > 1))) ## stop("invalid proportions input") ## out[inds] <- log(p[inds]/(1-p[inds])) ## out[inds][p[inds] == 0] <- -Inf ## out[inds][p[inds] == 1] <- Inf ## out ## } inv.logit <- function(x) # # Calculate the inverse logit of a number # # { # out <- exp(x)/(1+exp(x)) # out[x==-Inf] <- 0 # out[x==Inf] <- 1 # out # } plogis(x) iden <- function(n) # # Return the identity matrix of size n # if (n > 0) diag(rep(1,n)) else NULL zero <- function(n,m) # # Return an n x m matrix of 0's # if ((n > 0) & (m > 0)) matrix(0,n,m) else NULL simplex <- function(a,A1=NULL,b1=NULL,A2=NULL,b2=NULL,A3=NULL,b3=NULL, maxi=FALSE, n.iter=n+2*m, eps=1e-10) # # This function calculates the solution to a linear programming # problem using the tableau simplex method. The constraints are # given by the matrices A1, A2, A3 and the vectors b1, b2 and b3 # such that A1%*%x <= b1, A2%*%x >= b2 and A3%*%x = b3. The 2-phase # Simplex method is used. # { call <- match.call() if (!is.null(A1)) if (is.matrix(A1)) m1 <- nrow(A1) else m1 <- 1 else m1 <- 0 if (!is.null(A2)) if (is.matrix(A2)) m2 <- nrow(A2) else m2 <- 1 else m2 <- 0 if (!is.null(A3)) if (is.matrix(A3)) m3 <- nrow(A3) else m3 <- 1 else m3 <- 0 m <- m1+m2+m3 n <- length(a) a.o <- a if (maxi) a <- -a if (m2+m3 == 0) # If there are no >= or = constraints then the origin is a feasible # solution, and so only the second phase is required. 
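##
## --- Usage sketch (illustration only; not part of the original sources) -----
## control() uses the linear approximation as a control variate to give
## improved moment and quantile estimates, or an adjusted bias estimate via
## post-simulation balance.  Data and statistic are invented.
##   yb <- boot(rnorm(50), function(d, w) sum(d * w)/sum(w), R = 999, stype = "w")
##   ctl <- control(yb)               # control-variate moments and quantiles
##   ctl$bias; ctl$var
##   control(yb, bias.adj = TRUE)     # adjusted bias estimate only
## ----------------------------------------------------------------------------
##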
out <- simplex1(c(a,rep(0,m1)), cbind(A1,iden(m1)), b1, c(rep(0,m1),b1), n+(1L:m1), eps=eps) else { if (m2 > 0) out1 <- simplex1(c(a,rep(0,m1+2*m2+m3)), cbind(rbind(A1,A2,A3), rbind(iden(m1),zero(m2+m3,m1)), rbind(zero(m1,m2),-iden(m2), zero(m3,m2)), rbind(zero(m1,m2+m3), iden(m2+m3))), c(b1,b2,b3), c(rep(0,n),b1,rep(0,m2),b2,b3), c(n+(1L:m1),(n+m1+m2)+(1L:(m2+m3))), stage=1, n1=n+m1+m2, n.iter=n.iter, eps=eps) else out1 <- simplex1(c(a,rep(0,m1+m3)), cbind(rbind(A1,A3), iden(m1+m3)), c(b1,b3), c(rep(0,n),b1,b3), n+(1L:(m1+m3)), stage=1, n1=n+m1, n.iter=n.iter, eps=eps) # In phase 1 use 1 artificial variable for each constraint and # minimize the sum of the artificial variables. This gives a # feasible solution to the original problem as long as all # artificial variables are non-basic (and hence the value of the # new objective function is 0). If this is true then optimize the # original problem using the result as the original feasible solution. if (out1$val.aux > eps) out <- out1 else out <- simplex1(out1$a[1L:(n+m1+m2)], out1$A[,1L:(n+m1+m2)], out1$soln[out1$basic], out1$soln[1L:(n+m1+m2)], out1$basic, val=out1$value, n.iter=n.iter, eps=eps) } if (maxi) out$value <- -out$value out$maxi <- maxi if (m1 > 0L) out$slack <- out$soln[n+(1L:m1)] if (m2 > 0L) out$surplus <- out$soln[n+m1+(1L:m2)] if (out$solved == -1) out$artificial <- out$soln[-(1L:n+m1+m2)] out$obj <- a.o names(out$obj) <- paste("x",seq_len(n),sep="") out$soln <- out$soln[seq_len(n)] names(out$soln) <- paste("x",seq_len(n),sep="") out$call <- call class(out) <- "simplex" out } simplex1 <- function(a,A,b,init,basic,val=0,stage=2, n1=N, eps=1e-10, n.iter=n1) # # Tableau simplex function called by the simplex routine. This does # the actual calculations required in each phase of the simplex method. # { pivot <- function(tab, pr, pc) { # Given the position of the pivot and the tableau, complete # the matrix operations to swap the variables. pv <- tab[pr,pc] pcv <- tab[,pc] tab[-pr,]<- tab[-pr,] - (tab[-pr,pc]/pv)%o%tab[pr,] tab[pr,] <- tab[pr,]/(-pv) tab[pr,pc] <- 1/pv tab[-pr,pc] <- pcv[-pr]/pv tab } N <- ncol(A) M <- nrow(A) nonbasic <- (1L:N)[-basic] tableau <- cbind(b,-A[,nonbasic,drop=FALSE]) # If in the first stage then find the artifical objective function, # otherwise use the original objective function. if (stage == 2) { tableau <- rbind(tableau,c(val,a[nonbasic])) obfun <- a[nonbasic] } else { obfun <- apply(tableau[(M+n1-N+1):M,,drop=FALSE],2L,sum) tableau <- rbind(c(val,a[nonbasic]),tableau,obfun) obfun <- obfun[-1L] } it <- 1 while (!all(obfun> -eps) && (it <= n.iter)) # While the objective function can be reduced # Find a pivot # complete the matrix operations required # update the lists of basic and non-basic variables { pcol <- 1+order(obfun)[1L] if (stage == 2) neg <- (1L:M)[tableau[1L:M,pcol]< -eps] else neg <- 1+ (1L:M)[tableau[2:(M+1),pcol] < -eps] ratios <- -tableau[neg,1L]/tableau[neg,pcol] prow <- neg[order(ratios)[1L]] tableau <- pivot(tableau,prow,pcol) if (stage == 1) { temp <- basic[prow-1L] basic[prow-1L] <- nonbasic[pcol-1L] nonbasic[pcol-1L] <- temp obfun <- tableau[M+2L,-1L] } else { temp <- basic[prow] basic[prow] <- nonbasic[pcol-1L] nonbasic[pcol-1L] <- temp obfun <- tableau[M+1L,-1L] } it <- it+1 } # END of while loop if (stage == 1) { val.aux <- tableau[M+2,1L] # If the value of the auxilliary objective function is zero but some # of the artificial variables are basic (with value 0) then switch # them with some nonbasic variables (which are not artificial). 
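##
## --- Usage sketch (illustration only; not part of the original sources) -----
## A small linear programme solved with simplex(): maximize 2x + 3y subject to
## x + y <= 4, x + 3y <= 6 and x, y >= 0 (the optimum is x = 3, y = 1, value 9).
## The numbers are invented for the example.
##   smp <- simplex(a = c(2, 3), A1 = rbind(c(1, 1), c(1, 3)), b1 = c(4, 6),
##                  maxi = TRUE)
##   smp$soln       # optimal x and y
##   smp$value      # value of the objective at the optimum
## ----------------------------------------------------------------------------
##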
if ((val.aux < eps) && any(basic>n1)) { ar <- (1L:M)[basic>n1] for (j in seq_along(temp)) { prow <- 1+ar[j] pcol <- 1 + order( nonbasic[abs(tableau[prow,-1L])>eps])[1L] tableau <- pivot(tableau,prow,pcol) temp1 <- basic[prow-1L] basic[prow-1L] <- nonbasic[pcol-1L] nonbasic[pcol-1L] <- temp1 } } soln <- rep(0,N) soln[basic] <- tableau[2:(M+1L),1L] val.orig <- tableau[1L,1L] A.out <- matrix(0,M,N) A.out[,basic] <- iden(M) A.out[,nonbasic] <- -tableau[2L:(M+1L),-1L] a.orig <- rep(0,N) a.orig[nonbasic] <- tableau[1L,-1L] a.aux <- rep(0,N) a.aux[nonbasic] <- tableau[M+2,-1L] list(soln=soln, solved=-1, value=val.orig, val.aux=val.aux, A=A.out, a=a.orig, a.aux=a.aux, basic=basic) } else { soln <- rep(0,N) soln[basic] <- tableau[1L:M,1L] val <- tableau[(M+1L),1L] A.out <- matrix(0,M,N) A.out[,basic] <- iden(M) A.out[,nonbasic] <- tableau[1L:M,-1L] a.out <- rep(0,N) a.out[nonbasic] <- tableau[M+1L,-1L] if (it <= n.iter) solved <- 1L else solved <- 0L list(soln=soln, solved=solved, value=val, A=A.out, a=a.out, basic=basic) } } print.simplex <- function(x, ...) { # # Print the output of a simplex solution to a linear programming problem. # simp.out <- x cat("\nLinear Programming Results\n\n") cl <- simp.out$call cat("Call : ") dput(cl, control=NULL) if (simp.out$maxi) cat("\nMaximization ") else cat("\nMinimization ") cat("Problem with Objective Function Coefficients\n") print(simp.out$obj) if (simp.out$solved == 1) { cat("\n\nOptimal solution has the following values\n") print(simp.out$soln) cat(paste("The optimal value of the objective ", " function is ",simp.out$value,".\n",sep="")) } else if (simp.out$solved == 0) { cat("\n\nIteration limit exceeded without finding solution\n") cat("The coefficient values at termination were\n") print(simp.out$soln) cat(paste("The objective function value was ",simp.out$value, ".\n",sep="")) } else cat("\nNo feasible solution could be found\n") invisible(x) } saddle <- function(A = NULL, u = NULL, wdist = "m", type = "simp", d = NULL, d1 = 1, init = rep(0.1, d), mu = rep(0.5, n), LR = FALSE, strata = NULL, K.adj = NULL, K2 = NULL) # # Saddle point function. Standard multinomial saddlepoints are # computed using nlmin whereas the more complicated conditional # saddlepoints for Poisson and Binary cases are done by fitting # a GLM to a set of responses which, in turn, are derived from a # linear programming problem. # { det <- function(mat) { # absolute value of the determinant of a matrix. if (any(is.na(mat))) NA else if (!all(is.finite(mat))) Inf else abs(prod(eigen(mat,only.values = TRUE)$values)) } sgn <- function(x, eps = 1e-10) # sign of a real number. if (abs(x) < eps) 0 else 2*(x > 0) - 1 if (!is.null(A)) { A <- as.matrix(A) d <- ncol(A) if (length(u) != d) stop(gettextf("number of columns of 'A' (%d) not equal to length of 'u' (%d)", d, length(u)), domain = NA) n <- nrow(A) } else if (is.null(K.adj)) stop("either 'A' and 'u' or 'K.adj' and 'K2' must be supplied") if (!is.null(K.adj)) { # If K.adj and K2 are supplied then calculate the simple saddlepoint. if (is.null(d)) d <- 1 type <- "simp" wdist <- "o" speq <- suppressWarnings(optim(init, K.adj)) if (speq$convergence == 0) { ahat <- speq$par Khat <- K.adj(ahat) K2hat <- det(K2(ahat)) gs <- 1/sqrt((2*pi)^d*K2hat)*exp(Khat) if (d == 1) { r <- sgn(ahat)*sqrt(-2*Khat) v <- ahat*sqrt(K2hat) if (LR) Gs <- pnorm(r)+dnorm(r)*(1/r + 1/v) else Gs <- pnorm(r+log(v/r)/r) } else Gs <- NA } else gs <- Gs <- ahat <- NA } else if (wdist == "m") { # Calculate the standard simple saddlepoint for the multinomial case. 
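##
## --- Usage sketch (illustration only; not part of the original sources) -----
## Simple saddlepoint approximation for the resampled mean of a small
## (invented) data set: A holds the values divided by n, and u is the point at
## which the density and distribution function are wanted.
##   y <- 1:10
##   sad <- saddle(A = y/length(y), u = 6)
##   sad$spa        # c(pdf, cdf) approximations at u = 6
## ----------------------------------------------------------------------------
##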
type <- "simp" if (is.null(strata)) { p <- mu/sum(mu) para <- list(p,A,u,n) K <- function(al) { w <- para[[1L]]*exp(al%*%t(para[[2L]])) para[[4L]]*log(sum(w))-sum(al*para[[3L]]) } speq <- suppressWarnings(optim(init, K)) ahat <- speq$par w <- as.vector(p*exp(ahat%*%t(A))) Khat <- n*log(sum(w))-sum(ahat*u) sw <- sum(w) if (d == 1) K2hat <- n*(sum(w*A*A)/sw-(sum(w*A)/sw)^2) else { saw <- w %*% A sa2w <- t(matrix(w,n,d)*A) %*% A K2hat <- det(n/sw*(sa2w-(saw%*%t(saw))/sw)) } } else { sm <- as.vector(tapply(mu,strata,sum)[strata]) p <- mu/sm ns <- table(strata) para <- list(p,A,u,strata,ns) K <- function(al) { w <- para[[1L]]*exp(al%*%t(para[[2L]])) sum(para[[5]]*log(tapply(w,para[[4L]],sum))) - sum(al*para[[3L]]) } speq <- suppressWarnings(optim(init, K)) ahat <- speq$par w <- p*exp(ahat%*%t(A)) Khat <- sum(ns*log(tapply(w,strata,sum)))-sum(ahat*u) temp <- matrix(0,d,d) for (s in seq_along(ns)) { gp <- seq_len(n)[strata == s] sw <- sum(w[gp]) saw <- w[gp]%*%A[gp,] sa2w <- t(matrix(w[gp],ns[s],d)*A[gp,])%*%A[gp,] temp <- temp+ns[s]/sw*(sa2w-(saw%*%t(saw))/sw) } K2hat <- det(temp) } if (speq$convergence == 0) { gs <- 1/sqrt(2*pi*K2hat)^d*exp(Khat) if (d == 1) { r <- sgn(ahat)*sqrt(-2*Khat) v <- ahat*sqrt(K2hat) if (LR) Gs <- pnorm(r)+dnorm(r)*(1/r - 1/v) else Gs <- pnorm(r+log(v/r)/r) } else Gs <- NA } else gs <- Gs <- ahat <- NA } else if (wdist == "p") { if (type == "cond") { # Conditional Poisson and Binary saddlepoints are caculated by first # solving a linear programming problem and then fitting a generalized # linear model to find the solution to the saddlepoint equations. smp <- simplex(rep(0, n), A3 = t(A), b3 = u) if (smp$solved == 1) { y <- smp$soln A1 <- A[,1L:d1] A2 <- A[,-(1L:d1)] mod1 <- summary(glm(y ~ A1 + A2 + offset(log(mu)) - 1, poisson, control = glm.control(maxit=100))) mod2 <- summary(glm(y ~ A2 + offset(log(mu)) - 1, poisson, control = glm.control(maxit=100))) ahat <- mod1$coefficients[,1L] ahat2 <- mod2$coefficients[,1L] temp1 <- mod2$deviance - mod1$deviance temp2 <- det(mod2$cov.unscaled)/det(mod1$cov.unscaled) gs <- 1/sqrt((2*pi)^d1*temp2)*exp(-temp1/2) if (d1 == 1) { r <- sgn(ahat[1L])*sqrt(temp1) v <- ahat[1L]*sqrt(temp2) if (LR) Gs<-pnorm(r)+dnorm(r)*(1/r-1/v) else Gs <- pnorm(r+log(v/r)/r) } else Gs <- NA } else { ahat <- ahat2 <- NA gs <- Gs <- NA } } else stop("this type not implemented for Poisson") } else if (wdist == "b") { if (type == "cond") { smp <- simplex(rep(0, n), A1 = iden(n), b1 = rep(1-2e-6, n), A3 = t(A), b3 = u - 1e-6*apply(A, 2L, sum)) # For the binary case we require that the values are in the interval (0,1) # since glm code seems to have problems when there are too many 0's or 1's. 
if (smp$solved == 1) { y <- smp$soln+1e-6 A1 <- A[, 1L:d1] A2 <- A[, -(1L:d1)] mod1 <- summary(glm(cbind(y, 1-y) ~ A1+A2+offset(qlogis(mu))-1, binomial, control = glm.control(maxit=100))) mod2 <- summary(glm(cbind(y, 1-y) ~ A2+offset(qlogis(mu))-1, binomial, control = glm.control(maxit=100))) ahat <- mod1$coefficients[,1L] ahat2 <- mod2$coefficients[,1L] temp1 <- mod2$deviance-mod1$deviance temp2 <- det(mod2$cov.unscaled)/det(mod1$cov.unscaled) gs <- 1/sqrt((2*pi)^d1*temp2)*exp(-temp1/2) if (d1 == 1) { r <- sgn(ahat[1L])*sqrt(temp1) v <- ahat[1L]*sqrt(temp2) if (LR) Gs<-pnorm(r)+dnorm(r)*(1/r-1/v) else Gs <- pnorm(r+log(v/r)/r) } else Gs <- NA } else { ahat <- ahat2 <- NA gs <- Gs <- NA } } else stop("this type not implemented for Binary") } if (type == "simp") out <- list(spa = c(gs, Gs), zeta.hat = ahat) else #if (type == "cond") out <- list(spa = c(gs, Gs), zeta.hat = ahat, zeta2.hat = ahat2) names(out$spa) <- c("pdf", "cdf") out } saddle.distn <- function(A, u = NULL, alpha = NULL, wdist = "m", type = "simp", npts = 20, t = NULL, t0 = NULL, init = rep(0.1, d), mu = rep(0.5, n), LR = FALSE, strata = NULL, ...) # # This function calculates the entire saddlepoint distribution by # finding the saddlepoint approximations at npts values and then # fitting a spline to the results (on the normal quantile scale). # A may be a matrix or a function of t. If A is a matrix with 1 column # u is not used (u = t), if A is a matrix with more than 1 column u must # be a vector with ncol(A)-1 elements, if A is a function of t then u # must also be a function returning a vector of ncol(A(t, ...)) elements. { call <- match.call() if (is.null(alpha)) alpha <- c(0.001,0.005,0.01,0.025,0.05,0.1,0.2,0.5, 0.8,0.9,0.95,0.975,0.99,0.995,0.999) if (is.null(t) && is.null(t0)) stop("one of 't' or 't0' required") ep1 <- min(c(alpha,0.01))/10 ep2 <- (1-max(c(alpha,0.99)))/10 d <- if (type == "simp") 1 else if (is.function(u)) { if (is.null(t)) length(u(t0[1L], ...)) else length(u(t[1L], ...)) } else 1L+length(u) i <- nsads <- 0 if (!is.null(t)) npts <- length(t) zeta <- matrix(NA,npts,2L*d-1L) spa <- matrix(NA,npts,2L) pts <- NULL if (is.function(A)) { n <- nrow(as.matrix(A(t0[1L], ...))) if (is.null(u)) stop("function 'u' missing") if (!is.function(u)) stop("'u' must be a function") if (is.null(t)) { t1 <- t0[1L]-2*t0[2L] sad <- saddle(A = A(t1, ...), u = u(t1, ...), wdist = wdist, type = type, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) bdu <- bdl <- NULL while (is.na(sad$spa[2L]) || (sad$spa[2L] > ep1) || (sad$spa[2L] < ep1/100)) { nsads <- nsads+1 # Find a lower bound on the effective range of the saddlepoint distribution if (!is.na(sad$spa[2L]) && (sad$spa[2L] > ep1)) { i <- i+1 zeta[i,] <- c(sad$zeta.hat, sad$zeta2.hat) spa[i,] <- sad$spa pts <- c(pts,t1) bdu <- t1 } else bdl <- t1 if (nsads == npts) stop("unable to find range") if (is.null(bdl)) { t1 <- 2*t1-t0[1L] sad <- saddle(A = A(t1, ...), u = u(t1, ...), wdist = wdist, type = type, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } else if (is.null(bdu)) { t1 <- (t0[1L]+bdl)/2 sad <- saddle(A = A(t1, ...), u = u(t1, ...), wdist = wdist, type = type, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } else { t1 <- (bdu+bdl)/2 sad <- saddle(A = A(t1, ...), u = u(t1, ...), wdist = wdist, type = type, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } } i1 <- i <- i+1 nsads <- 0 zeta[i,] <- c(sad$zeta.hat, sad$zeta2.hat) spa[i,] <- sad$spa pts <- c(pts,t1) t2 <- t0[1L]+2*t0[2L] sad <- saddle(A = A(t2, ...), u = u(t2, ...), 
wdist = wdist, type = type, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) bdu <- bdl <- NULL while (is.na(sad$spa[2L]) || (1-sad$spa[2L] > ep2) || (1-sad$spa[2L] < ep2/100)){ # Find an upper bound on the effective range of the saddlepoint distribution nsads <- nsads+1 if (!is.na(sad$spa[2L])&&(1-sad$spa[2L] > ep2)) { i <- i+1 zeta[i,] <- c(sad$zeta.hat, sad$zeta2.hat) spa[i,] <- sad$spa pts <- c(pts,t2) bdl <- t2 } else bdu <- t2 if (nsads == npts) stop("unable to find range") if (is.null(bdu)) { t2 <- 2*t2-t0[1L] sad <- saddle(A = A(t2, ...), u = u(t2, ...), wdist = wdist, type = type, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } else if (is.null(bdl)) { t2 <- (t0[1L]+bdu)/2 sad <- saddle(A = A(t2, ...), u = u(t2, ...), wdist = wdist, type = type, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } else { t2 <- (bdu+bdl)/2 sad <- saddle(A = A(t2, ...), u = u(t2, ...), wdist = wdist, type = type, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } } i <- i+1 zeta[i,] <- c(sad$zeta.hat, sad$zeta2.hat) spa[i,] <- sad$spa pts <- c(pts,t2) # Now divide the rest of the npts points so that about half are at # either side of t0[1L]. if ((npts %% 2) == 0) { tt1<- seq.int(t1, t0[1L], length.out = npts/2-i1+2)[-1L] tt2 <- seq.int(t0[1L], t2, length.out = npts/2+i1-i+2)[-1L] t <- c(tt1[-length(tt1)], tt2[-length(tt2)]) } else { ex <- 1*(t1+t2 > 2*t0[1L]) ll <- floor(npts/2)+2 tt1 <- seq.int(t1, t0[1L], length.out = ll-i1+1-ex)[-1L] tt2 <- seq.int(t0[1L], t2, length.out = ll+i1-i+ex)[-1L] t <- c(tt1[-length(tt1)], tt2[-length(tt2)]) } } init1 <- init for (j in (i+1):npts) { # Calculate the saddlepoint approximations at the extra points. sad <- saddle(A = A(t[j-i], ...), u = u(t[j-i], ...), wdist = wdist, type = type, d1 = 1, init = init1, mu = mu, LR = LR, strata = strata) zeta[j,] <- c(sad$zeta.hat, sad$zeta2.hat) init1 <- sad$zeta.hat spa[j,] <- sad$spa } } else { A <- as.matrix(A) n <- nrow(A) if (is.null(t)) { # Find a lower bound on the effective range of the saddlepoint distribution t1 <- t0[1L]-2*t0[2L] sad <- saddle(A = A, u = c(t1,u), wdist = wdist, type = type, d = d, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) bdu <- bdl <- NULL while (is.na(sad$spa[2L]) || (sad$spa[2L] > ep1) || (sad$spa[2L] < ep1/100)) { if (!is.na(sad$spa[2L]) && (sad$spa[2L] > ep1)) { i <- i+1 zeta[i,] <- c(sad$zeta.hat, sad$zeta2.hat) spa[i,] <- sad$spa pts <- c(pts,t1) bdu <- t1 } else bdl <- t1 if (i == floor(npts/2)) stop("unable to find range") if (is.null(bdl)) { t1 <- 2*t1-t0[1L] sad <- saddle(A = A, u = c(t1,u), wdist = wdist, type = type, d = d, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } else if (is.null(bdu)) { t1 <- (t0[1L]+bdl)/2 sad <- saddle(A = A, u = c(t1,u), wdist = wdist, type = type, d = d, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } else { t1 <- (bdu+bdl)/2 sad <- saddle(A = A, u = c(t1,u), wdist = wdist, type = type, d = d, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } } i1 <- i <- i+1 zeta[i,] <- c(sad$zeta.hat, sad$zeta2.hat) spa[i,] <- sad$spa pts <- c(pts,t1) # Find an upper bound on the effective range of the saddlepoint distribution t2 <- t0[1L]+2*t0[2L] sad <- saddle(A = A, u = c(t2,u), wdist = wdist, type = type, d = d, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) bdu <- bdl <- NULL while (is.na(sad$spa[2L]) || (1-sad$spa[2L] > ep2) || (1-sad$spa[2L] < ep2/100)) { if (!is.na(sad$spa[2L])&&(1-sad$spa[2L] > ep2)) { i <- i+1 zeta[i,] <- c(sad$zeta.hat, sad$zeta2.hat) spa[i,] <- sad$spa pts <- 
c(pts, t2) bdl <- t2 } else bdu <- t2 if ((i-i1) == floor(npts/2)) stop("unable to find range") if (is.null(bdu)) { t2 <- 2*t2-t0[1L] sad <- saddle(A = A, u = c(t2, u), wdist = wdist, type = type, d = d, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } else if (is.null(bdl)) { t2 <- (t0[1L]+bdu)/2 sad <- saddle(A = A, u = c(t2, u), wdist = wdist, type = type, d = d, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } else { t2 <- (bdu+bdl)/2 sad <- saddle(A = A, u = c(t2, u), wdist = wdist, type = type, d = d, d1 = 1, init = init, mu = mu, LR = LR, strata = strata) } } i <- i+1 zeta[i,] <- c(sad$zeta.hat, sad$zeta2.hat) spa[i,] <- sad$spa pts <- c(pts, t2) # Now divide the rest of the npts points so that about half are at # either side of t0[1L]. if ((npts %% 2) == 0) { tt1 <- seq.int(t1, t0[1L], length.out=npts/2-i1+2)[-1L] tt2 <- seq.int(t0[1L], t2, length.out=npts/2+i1-i+2)[-1L] t <- c(tt1[-length(tt1)], tt2[-length(tt2)]) } else { ex <- 1*(t1+t2 > 2*t0[1L]) ll <- floor(npts/2)+2 tt1 <- seq.int(t1, t0[1L], length.out=ll-i1+1-ex)[-1L] tt2 <- seq.int(t0[1L], t2, length.out=ll+i1-i+ex)[-1L] t <- c(tt1[-length(tt1)], tt2[-length(tt2)]) } } init1 <- init for (j in (i+1):npts) { # Calculate the saddlepoint approximations at the extra points. sad <- saddle(A=A, u=c(t[j-i],u), wdist=wdist, type=type, d=d, d1=1, init=init, mu=mu, LR=LR, strata=strata) zeta[j,] <- c(sad$zeta.hat, sad$zeta2.hat) init1 <- sad$zeta.hat spa[j,] <- sad$spa } } # Omit points too close to the center as the distribution approximation is # not good at those points. pts.in <- (1L:npts)[(abs(zeta[,1L]) > 1e-6) & (abs(spa[, 2L] - 0.5) < 0.5 - 1e-10)] pts <- c(pts,t)[pts.in] zeta <- as.matrix(zeta[pts.in, ]) spa <- spa[pts.in, ] # Fit a spline to the approximations and predict at the required quantile # values. distn <- smooth.spline(qnorm(spa[,2]), pts) quantiles <- predict(distn, qnorm(alpha))$y quans <- cbind(alpha, quantiles) colnames(quans) <- c("alpha", "quantile") inds <- order(pts) psa <- cbind(pts[inds], spa[inds,], zeta[inds,]) if (d == 1) anames <- "zeta" else { anames <- rep("",2*d-1) for (j in 1L:d) anames[j] <- paste("zeta1.", j ,sep = "") for (j in (d+1):(2*d-1)) anames[j] <- paste("zeta2.", j-d, sep = "") } dimnames(psa) <- list(NULL,c("t", "gs", "Gs", anames)) out <- list(quantiles = quans, points = psa, distn = distn, call = call, LR = LR) class(out) <- "saddle.distn" out } print.saddle.distn <- function(x, ...) { # # Print the output from saddle.distn # sad.d <- x cl <- sad.d$call rg <- range(sad.d$points[,1L]) mid <- mean(rg) digs <- ceiling(log10(abs(mid))) if (digs <= 0) digs <- 4 else if (digs >= 4) digs <- 0 else digs <- 4-digs rg <- round(rg,digs) level <- 100*sad.d$quantiles[,1L] quans <- format(round(sad.d$quantiles,digs)) quans[,1L] <- paste("\n",format(level),"% ",sep="") cat("\nSaddlepoint Distribution Approximations\n\n") cat("Call : \n") dput(cl, control=NULL) cat("\nQuantiles of the Distribution\n") cat(t(quans)) cat(paste("\n\nSmoothing spline used ", nrow(sad.d$points), " points in the range ", rg[1L]," to ", rg[2L], ".\n", sep="")) if (sad.d$LR) cat("Lugananni-Rice approximations used\n") invisible(sad.d) } lines.saddle.distn <- function(x, dens = TRUE, h = function(u) u, J = function(u) 1, npts = 50, lty = 1, ...) 
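##
## --- Usage sketch (illustration only; not part of the original sources) -----
## saddle.distn() maps out the whole saddlepoint distribution; t0 gives a
## centre and scale (here the sample mean and its standard error) used to
## search for the effective range.  The data are invented.
##   y <- 1:10
##   sd.d <- saddle.distn(A = y/length(y),
##                        t0 = c(mean(y), sd(y)/sqrt(length(y))))
##   sd.d$quantiles              # saddlepoint quantiles of the resampled mean
##   # lines.saddle.distn(sd.d) would then add the density to an open plot
## ----------------------------------------------------------------------------
##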
{ # # Add lines corresponding to a saddlepoint approximation to a plot # sad.d <- x tt <- sad.d$points[,1L] rg <- range(h(tt, ...)) tt1 <- seq.int(from = rg[1L], to = rg[2L], length.out = npts) if (dens) { gs <- sad.d$points[,2] spl <- smooth.spline(h(tt, ...),log(gs*J(tt, ...))) lines(tt1,exp(predict(spl, tt1)$y), lty = lty) } else { Gs <- sad.d$points[,3] spl <- smooth.spline(h(tt, ...),qnorm(Gs)) lines(tt1,pnorm(predict(spl ,tt1)$y)) } invisible(sad.d) } ts.array <- function(n, n.sim, R, l, sim, endcorr) { # # This function finds the starting positions and lengths for the # block bootstrap. # # n is the number of data points in the original time series # n.sim is the number require in the simulated time series # R is the number of simulated series required # l is the block length # sim is the simulation type "fixed" or "geom". For "fixed" l is taken # to be the fixed block length, for "geom" l is the average block # length, the actual lengths having a geometric distribution. # endcorr is a logical specifying whether end-correction is required. # # It returns a list of two components # starts is a matrix of starts, it has R rows # lens is a vector of lengths if sim="fixed" or a matrix of lengths # corresponding to the starting points in starts if sim="geom" endpt <- if (endcorr) n else n-l+1 cont <- TRUE if (sim == "geom") { len.tot <- rep(0,R) lens <- NULL while (cont) { # inds <- (1L:R)[len.tot < n.sim] temp <- 1+rgeom(R, 1/l) temp <- pmin(temp, n.sim - len.tot) lens <- cbind(lens, temp) len.tot <- len.tot + temp cont <- any(len.tot < n.sim) } dimnames(lens) <- NULL nn <- ncol(lens) st <- matrix(sample.int(endpt, nn*R, replace = TRUE), R) } else { nn <- ceiling(n.sim/l) lens <- c(rep(l,nn-1), 1+(n.sim-1)%%l) st <- matrix(sample.int(endpt, nn*R, replace = TRUE), R) } list(starts = st, lengths = lens) } make.ends <- function(a, n) { # Function which takes a matrix of starts and lengths and returns the # indices for a time series simulation. (Viewing the series as circular.) mod <- function(i, n) 1 + (i - 1) %% n if (a[2L] == 0) numeric() else mod(seq.int(a[1L], a[1L] + a[2L] - 1, length.out = a[2L]), n) } tsboot <- function(tseries, statistic, R, l = NULL, sim = "model", endcorr = TRUE, n.sim = NROW(tseries), orig.t = TRUE, ran.gen = function(tser, n.sim, args) tser, ran.args = NULL, norm = TRUE, ..., parallel = c("no", "multicore", "snow"), ncpus = getOption("boot.ncpus", 1L), cl = NULL) { # # Bootstrap function for time series data. Possible resampling methods are # the block bootstrap, the stationary bootstrap (these two can also be # post-blackened), model-based resampling and phase scrambling. # if (missing(parallel)) parallel <- getOption("boot.parallel", "no") parallel <- match.arg(parallel) have_mc <- have_snow <- FALSE if (parallel != "no" && ncpus > 1L) { if (parallel == "multicore") have_mc <- .Platform$OS.type != "windows" else if (parallel == "snow") have_snow <- TRUE if (!have_mc && !have_snow) ncpus <- 1L } ## This does not necessarily call statistic, so we force a promise. statistic tscl <- class(tseries) R <- floor(R) if (R <= 0) stop("'R' must be positive") call <- match.call() if (!exists(".Random.seed", envir = .GlobalEnv, inherits = FALSE)) runif(1) seed <- get(".Random.seed", envir = .GlobalEnv, inherits = FALSE) t0 <- if (orig.t) statistic(tseries, ...) 
else NULL ts.orig <- if (!isMatrix(tseries)) as.matrix(tseries) else tseries n <- nrow(ts.orig) if (missing(n.sim)) n.sim <- n class(ts.orig) <- tscl if ((sim == "model") || (sim == "scramble")) l <- NULL else if ((is.null(l) || (l <= 0) || (l > n))) stop("invalid value of 'l'") fn <- if (sim == "scramble") { rm(ts.orig) ## Phase scrambling function(r) statistic(scramble(tseries, norm), ...) } else if (sim == "model") { rm(ts.orig) ## Model-based resampling ## force promises ran.gen; ran.args function(r) statistic(ran.gen(tseries, n.sim, ran.args), ...) } else if (sim %in% c("fixed", "geom")) { ## Otherwise generate an R x n matrix of starts and lengths for blocks. ## The actual indices of the blocks can then easily be found and these ## indices used for the resampling. If ran.gen is present then ## post-blackening is required when the blocks have been formed. if (sim == "geom") endcorr <- TRUE i.a <- ts.array(n, n.sim, R, l, sim, endcorr) ## force promises ran.gen; ran.args function(r) { ends <- if (sim == "geom") cbind(i.a$starts[r, ], i.a$lengths[r, ]) else cbind(i.a$starts[r, ], i.a$lengths) inds <- apply(ends, 1L, make.ends, n) inds <- if (is.list(inds)) matrix(unlist(inds)[1L:n.sim], n.sim, 1L) else matrix(inds, n.sim, 1L) statistic(ran.gen(ts.orig[inds, ], n.sim, ran.args), ...) } } else stop("unrecognized value of 'sim'") res <- if (ncpus > 1L && (have_mc || have_snow)) { if (have_mc) { parallel::mclapply(seq_len(R), fn, mc.cores = ncpus) } else if (have_snow) { list(...) # evaluate any promises if (is.null(cl)) { cl <- parallel::makePSOCKcluster(rep("localhost", ncpus)) if(RNGkind()[1L] == "L'Ecuyer-CMRG") parallel::clusterSetRNGStream(cl) res <- parallel::parLapply(cl, seq_len(R), fn) parallel::stopCluster(cl) res } else parallel::parLapply(cl, seq_len(R), fn) } } else lapply(seq_len(R), fn) t <- matrix(, R, length(res[[1L]])) for(r in seq_len(R)) t[r, ] <- res[[r]] ts.return(t0 = t0, t = t, R = R, tseries = tseries, seed = seed, stat = statistic, sim = sim, endcorr = endcorr, n.sim = n.sim, l = l, ran.gen = ran.gen, ran.args = ran.args, call = call, norm = norm) } scramble <- function(ts, norm = TRUE) # # Phase scramble a time series. If norm = TRUE then normal margins are # used otherwise exact empirical margins are used. # { cl <- class(ts) if (isMatrix(ts)) stop("multivariate time series not allowed") st <- start(ts) dt <- deltat(ts) frq <- frequency(ts) y <- as.vector(ts) e <- y - mean(y) n <- length(e) if (!norm) e <- qnorm( rank(e)/(n+1) ) f <- fft(e) * complex(n, argument = runif(n) * 2 * pi) C.f <- Conj(c(0, f[seq(from = n, to = 2L, by = -1L)])) # or n:2 e <- Re(mean(y) + fft((f + C.f)/sqrt(2), inverse = TRUE)/n) if (!norm) e <- sort(y)[rank(e)] ts(e, start = st, freq = frq, deltat = dt) } ts.return <- function(t0, t, R, tseries, seed, stat, sim, endcorr, n.sim, l, ran.gen, ran.args, call, norm) { # # Return the results of a time series bootstrap as an object of # class "boot". # out <- list(t0 = t0,t = t, R = R, data = tseries, seed = seed, statistic = stat, sim = sim, n.sim = n.sim, call = call) if (sim == "scramble") out <- c(out, list(norm = norm)) else if (sim == "model") out <- c(out, list(ran.gen = ran.gen, ran.args = ran.args)) else { out <- c(out, list(l = l, endcorr = endcorr)) if (!is.null(call$ran.gen)) out <- c(out,list(ran.gen = ran.gen, ran.args = ran.args)) } class(out) <- "boot" out } boot/R/bootpracs.q0000644000076600000240000000751712027641107013642 0ustar00ripleystaff# part of R package boot # copyright (C) 1997-2001 Angelo J. 
Canty # corrections (C) 1997-2011 B. D. Ripley # # Unlimited distribution is permitted # empirical log likelihood --------------------------------------------------------- EL.profile <- function(y, tmin = min(y) + 0.1, tmax = max(y) - 0.1, n.t = 25, u = function(y, t) y - t ) { # Calculate the profile empirical log likelihood function EL.loglik <- function(lambda) { temp <- 1 + lambda * EL.stuff$u if (any(temp <= 0)) NA else - sum(log(1 + lambda * EL.stuff$u)) } EL.paras <- matrix(NA, n.t, 3) lam <- 0.001 for(it in 0:(n.t-1)) { t <- tmin + ((tmax - tmin) * it)/(n.t-1) EL.stuff <- list(u = u(y, t)) EL.out <- nlm(EL.loglik, lam) i <- 1 while (EL.out$code > 2 && (i < 20)) { i <- i+1 lam <- lam/5 EL.out <- nlm(EL.loglik, lam) } EL.paras[1 + it, ] <- c(t, EL.loglik(EL.out$x), EL.out$x) lam <- EL.out$x } EL.paras[,2] <- EL.paras[,2]-max(EL.paras[,2]) EL.paras } EEF.profile <- function(y, tmin = min(y)+0.1, tmax = max(y) - 0.1, n.t = 25, u = function(y,t) y - t) { EEF.paras <- matrix( NA, n.t+1, 4) for (it in 0:n.t) { t <- tmin + (tmax-tmin)*it/n.t psi <- as.vector(u( y, t )) fit <- glm(zero~psi -1,poisson(log)) f <- fitted(fit) EEF.paras[1+it,] <- c(t, sum(log(f)-log(sum(f))), sum(f-1), coefficients(fit)) } EEF.paras[,2] <- EEF.paras[,2] - max(EEF.paras[,2]) EEF.paras[,3] <- EEF.paras[,3] - max(EEF.paras[,3]) EEF.paras } lik.CI <- function(like, lim ) { # # Calculate an interval based on the likelihood of a parameter. # The likelihood is input as a matrix of theta values and the # likelihood at those points. Also a limit is input. Values of # theta for which the likelihood is over the limit are then used # to estimate the end-points. # # Not that the estimate only works for unimodal likelihoods. # L <- like[, 2] theta <- like[, 1] n <- length(L) i <- min(c(1L:n)[L > lim]) if (is.na(i)) stop(gettextf("likelihood never exceeds %f", lim), domain = NA) j <- max(c(1L:n)[L > lim]) if (i ==j ) stop(gettextf("likelihood exceeds %f at only one point", lim), domain = NA) if (i == 1) bot <- -Inf else { i <- i + c(-1, 0, 1) x <- theta[i] y <- L[i]-lim co <- coefficients(lm(y ~ x + x^2)) bot <- (-co[2L] + sqrt( co[2L]^2 - 4*co[1L]*co[3L]))/(2*co[3L]) } if (j == n) top <- Inf else { j <- j + c(-1, 0, 1) x <- theta[j] y <- L[j] - lim co <- coefficients(lm(y ~ x + x^2)) top <- (-co[2L] - sqrt(co[2L]^2 - 4*co[1L]*co[3L]))/(2*co[3L]) } out <- c(bot, top) names(out) <- NULL out } nested.corr <- function(data,w,t0,M) { ## Statistic for the example nested bootstrap on the cd4 data. 
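##
## --- Usage sketch (illustration only; not part of the original sources) -----
## Profile empirical log likelihood for a mean, with a likelihood-ratio style
## confidence interval from lik.CI().  The data are invented.  Note that
## EEF.profile(), as written above, uses a response 'zero' in its glm() call
## which it does not create itself, so a vector of zeros of length(y) needs to
## exist in the calling environment before it is used.
##   y <- rgamma(20, shape = 2)
##   el <- EL.profile(y, n.t = 30)
##   lik.CI(el, lim = -0.5 * qchisq(0.95, 1))   # approximate 95% interval
## ----------------------------------------------------------------------------
##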
## Indexing a bare matrix is much faster
    data <- unname(as.matrix(data))
    corr.fun <- function(d, w = rep(1, nrow(d))/nrow(d))
    {
        x <- d[, 1L]; y <- d[, 2L]
        w <- w/sum(w)
        n <- nrow(d)
        m1 <- sum(x * w)
        m2 <- sum(y * w)
        v1 <- sum(x^2 * w) - m1^2
        v2 <- sum(y^2 * w) - m2^2
        rho <- (sum(x * y * w) - m1 * m2)/sqrt(v1 * v2)
        i <- rep(1L:n, round(n * w))
        us <- (x[i] - m1)/sqrt(v1)
        xs <- (y[i] - m2)/sqrt(v2)
        L <- us * xs - 0.5 * rho * (us^2 + xs^2)
        c(rho, sum(L^2)/nrow(d)^2)
    }
    n <- nrow(data)
    i <- rep(1L:n, round(n*w))
    t <- corr.fun(data, w)
    z <- (t[1L]-t0)/sqrt(t[2L])
    nested.boot <- boot(data[i,], corr.fun, R = M, stype = "w")
    z.nested <- (nested.boot$t[,1L]-t[1L])/sqrt(nested.boot$t[,2L])
    c(z, sum(z.nested < z))  ## assumed completion; the original expression was truncated here
}
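##
## --- Usage sketch (illustration only; not part of the original sources) -----
## nested.corr() is meant to be used as the statistic of an outer weighted
## bootstrap, each replicate of which runs M further nested bootstraps.  The
## bivariate data below stand in for the cd4 data.
##   xy <- matrix(rnorm(40), 20, 2)
##   nb <- boot(xy, nested.corr, R = 199, stype = "w",
##              t0 = cor(xy[, 1], xy[, 2]), M = 99)
## ----------------------------------------------------------------------------
##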
ðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛß~øö÷¾ýðí‡o?|ûáÛïÍ·¿Ùß==}~m¾þ|äbý—ÝE?Ú°ÏgD÷Ưå]ã­ùçï~ßÿ_1„ÿ¥qµŠ+G©>@~¹² ½ýŒ#òy´JÑ‹ÉúkcT('ëïÖÏ*uÌ.‰$ñH:$]’‰OÒ'yF±Ü6›Ë&Ù<¶[—­Çæ³õÙ¸‡ä’{Hî!=óüæK•ÒŸ Lsj«Zï‰ÊNÙsxŒV=åboot/data/beaver.rda0000644000076600000240000000140611110552530014110 0ustar00ripleystaff‹ í—MH”A€gWMA#OÖ¡ƒA‡(]¬a>A딞Œ,\u£7´˲°ƒà%vÍßõgw5¤C‡¥“T;õ…Ö!ɃŒ ’¾ižñ‹…]‚È9øÌÏ;ó>óβuUõûóëó…~áÛ*„/ÛífûÝ?>!JÜ ±¥)è ž"·ÐŒÉÈ>¡Û&7ùoQþ^+‚ð–a”õh .ÆJ!ë±3†ýx÷×Ãx.À/†×çà²á!>}†ƒÇà˜‚àªá~CøçAü†ñÆo¿‘68 ñÁo¿Qüâ–CüâøÅñ‹ã7FÇÒ¿qüÆñÇob¬øMà7ß~“¼Ûä3Ã~ üø%ðKJÈû%ñKâ—Ä/• ‰KuCâS3}©yÃ)î3U¹×T+ìƒïbºrÿé H¦įMî†ð¬‚Gàqx‚¼§7R‡óMs†!ÖÕÑʶOó¥:©Q)Õù|=!ÕÀݤšd>‘®]éÝé1ÂzWÜmcRµ¯ô–¥ë¼1ù6â®°ÏòjõlÉÅ%©Ü]î„Ç~Îé[»mØ#ÕeòÛøîr½à1Ú ;RÅtX»·ßú¹Q埽8K;¯·µÍxÔei©î’ï çÙñwããøõA÷¥C2s/§`Þm½q¡»«zÖ‹·cç}Òé¾i\’N–ñp¶?§Ôø8;Ì»8Åæ^NõbŸÒQµï¥zÇ{Úñ+ê½D¾¥Œ{-˜º9¹zºÆË·—{ðªÆ[}å}-íyó&^}ä\ÿ¿7Ôw™wµûÞš¼Ö×Öe£ŽÜW­ñ^Öû9ùïpþmêhÇ9÷ÁiݤZä6ïK¾«×¼‹ÍK]¼Ÿžb³ýQ;¼ºÉÿ‰¿N²ZBawj›Ð?`„hÑë!ynÈÙ@8qûÅÄ­»1EÄšc]t³;BáàF?n£Ÿhîuf¦ïˆèõBݵ†ö?ãoŒsš[‘ùý6¸#Bׯ{ZP¬ÿµêÖÚ boot/data/bigcity.rda0000644000076600000240000000074411110552530014302 0ustar00ripleystaff‹ ]“iJA…{BDÌôž@Lw›å‡Ü÷%q 1JÆ%"¸1‰$‚bÀ‹x#xO"NìWO{’÷ª¦º¿š®™òbUgª¥TJ#J騦Sñ_ ÔDœPÃg×Wç×í§Øÿ–©ôh¬ÙRXRƒ«tÝî9m6œ®*§óÈÏ"ÎCç  º]îõGïNËØo ù ´óé´;<ê¶qÿõ#±}ï"^„†Ð;<Çâ#ì7xñ.úw3Á?`ߥž¯“^óørùÖï÷-c¿ì#ç s:îûùuQÔWpq ç-óßBý>ôúŒs¯õüõoŸþ9b™‹Ì-l80×ùÔÉùJŸ2:´ù…z¹uõDßr.aß﫺šô…:9§[è†Âå}>Cwáíe+NŽ)÷ñ ’Á£˜n¢|$ºïLý_’•J-ƈ±bfÄäÄäÅÄaRÙiº,¦3t–n†.G—§+Б¡ÉÐdh24š M†&C“¡ÉÐd2 † C†!ÃaÈ0d2 – K†%Ã’aɰdX2,– [L¾0ç7aK¦ ,s¶Ã©f¿qô=øýá‹Ëºboot/data/brambles.rda0000644000076600000240000001452511110552530014441 0ustar00ripleystaff‹ íœ{ŒågYǧÛˆ4 4^P+ Æ÷ú;1äU "V¨!i·R$M[´­R´bˆ¢1Hˆ!€Ph)m)í¶´e{Û{·ÝÛÜvfïsæÌ™™33gf¨xà q~¿ý~Ÿ¯ž ñ?ÿñœ¤}çÌ9¿ç÷>ïó|žËûfßû¶kݶk·ŒŒl¹èå##mݼܺeóŒ\¾ùÁÈ%7ܶý–n¾ñö‘‘ïÕæû‹7ÿ~éæËƒÿ¾tó'ÞV¾òâ®?yå‡ÊB=Üú×eáôc ¿ù ¥7õŸß~üí÷”åÞöðk÷}±¬Çr×Ú_}°¬¿oë}úŽ§ËÆOßôÑïœ.ŸüµÙŸ}ÿo”ûæ¯ù—£'Ëú›.­¿QÖßÞܸl¸ãov¼IãC—ÿÁÜ;ÒïpŸµ×oJX/k;êüpYÿ@ó*ëßi¨,_{Ù ïšø‡²TßåÒ÷•¥ú×?ú÷oþÄUŸùØËÊâ+ÞYßÉæ³Øˆ»¼,¾kâ¥Í)—ù͇ü?ÿ™ÒmÕ7œ,ÍÙÝ}ß|9û±—½îì[~¤œúÒ³¿ð±V9ùùê÷«œúhóöù)|ÿðÿiϧ®ûé2=Ž4¯rÏû(æõõæg×–øÝCO¾gõã¯yÒÆGoUÆ~¾ø@«§ùÒß•%<çúòõJ”•Í›n~ÅÆç¡‡CÐû̶·þν3ÛÊn<ωO×ðoåôgš‰•ÓçTÚõrÞýµÒÞÔÎç>ùîÒ¾»¾Á±Ò¾·¾ÃWK{s‘o{ø'J·yüŸ¼Ð.ð½èyåS×Õš(+×7RÖöÔì-k°«5èuãÕm> o¨/þQcc^·—6æß®Å_1^¦?×J™Æï¦k­lyE™®ÿÿÎeüµAýRÇ}vÃ^ï¹µù¡Ïa]Ñ(°»«™h™ÄûÉ«Ei¬­ê’Ñ2yM³åø[ê ¼PŽ7æý{öþT£ö·–Sõ·ß°¬ri/sxÎ9Ìs®6¯¿-ÝÚ›núã²û§-Cß+Íc_VVàOËÍô¹ôõÿQYÆsð}ë»ÿZ¨ÍáªÏ–EØÛæ=ûu`ßôƒN#î†Òÿu^j ªÌÕ¿zíþÒ…½tׯ#¶¾køû2æµ\K{O¿¬úæUVsÞbv°ñÊFAšæÝkã×Kv°ˆ¿/¾zµÛ½ñŸ°kÁƒ1Øÿxýx¯þrYÇ<̰®ç,]ZfÁ3à žsæê\eÏ?»<9ûð}úýQpòÖå Öc÷éã¹ú™|ÄÖsëÒ¯§Í¿–>Ös|Ø€n4b/1?[8W©oŽøÝ8a|†¯AŸëà%ï¿ =¬6¸ÙZV¡¿yèoo7fó«¥Çs̃û‹°³n#î§Jz8 =m¾þcå,ü‹ïÏÁgá¿ç çsàÁ9ð`¼iƒ£KÐ÷Rƒ‰ëËÂkjøñÒÅß` °Û<Çü–ó`›÷:YÞT:¯kž´Ìƒ»Œô_~>ñ¹Ï`¾gêå¾óx9ƒïwa]ØS·6÷ÿÞŸÏcÝ:ð¹o×±»ÌÁ¿æàsX¿9ø;9:‹çš…½’«Œ³Xר?íý$îwþ<èO/b=÷a>ëX÷u|Nþ÷OÖð»Uðuë×EžÁxÃ÷Ä[ò©‹ù·³Mع®|~Äxû$žó)øËSàÂsð¯‡7ù•²yÊWsÿÃòÈãߟž÷€‹{qÿ'`×{À“=È+ö"^O¨—ýÐÃøã?¿ëwî·|Ý~ú;;Ü¿?ëlŽœødyü£uY…ÛºBŸ«x^®ý³; ÿÏŸPâü¾s¾Ð½ä=îÛF\é NXÝ»ZD¾cõÖ—Ÿ3ž“]ø3ógÆ•³˜?û;gñÌÏ鿳XW¾oã~ö¨‹ºð›.ôiõâúüÀúj¬;Œó¬°Î]p‘õGuä<ì}vhñÏÁ¿S¿sˆöq}ÖKÆ1è¯=³®è­ÎÀüX_Ù}aŸmÌËò ŒÌÓgà'áïcàë‰1ØÏ`}Á>×óXŸ½àócæÝ¬{Çw€âïÌëÃߟ„üˆ×OC;áO€SV/à~_Cú—ÐÏ]D|v¹ñ’õ³à ǯCÌçö xqc/ø°vÃû‚Ì÷د¶>5ã8Éütóf°þ4»c½Âü}ÂEÜgœZÁ:2ÞZŸ¼„¼¾Gîö0oÆ¡ÜÏú@°»>üú‚8‡‘ùµå9ìŸÁŽÎ3sëμe öE=õ±þŒwgß¡ï>8¾ˆ¼© »$ÿÈ—Yøù:‡úÊúxþqðy/ôöôpzÃýÛ¬‡˜W³Â÷§-bý'zð?ö1¹?Áºz ?±OîÃNh7°Û¿y7æùYðèKÐß½XöE¾ ~ý¾÷§à÷¾|ùyøü‰~8yŒ°›=à¹Å¾‹qúâ¼ì=ìþÀúÌêpä›Ó°³Cðoòäæ7…<–~O9Ö—_DžÂx@“¿äˆñe 
.g¿Œœ°úqõトw´3Æö¹/DrÝLðgŽ»°ÿ² ó&÷ù9ëdö«Ø'àº0nZ ëÆñ›à!ןëÈïs4ù˜'¹¶ñÏâ*ôñ üœñ•ŸÛóáyYïØ> ütœd‰ü'§MASXWÖýãÈ&‡Û} ®/û”K;˜@Ý=ç§Ž ôߨ—8»²ºë3Ž<žñv68’W‡‘ò=ó1{^èÏæƒ8ÊûÌ¡gõ(÷'˜gÑN9¿IìÓL QÃ8ê}ñƒû©ìË×ÚÈ_È[ö§N!ßâø,âóqäµÇÁ-öõØ_? î[üæ~ÜØÇôuÎ!NÍÂÿgQ³ßÎçâ÷¸ay1ó`p¨ÍüžûeÐó"ônû"x¿ˆºŸûsK߇¯b}¬~†^Ùoä¾ì2ôÌxÎþÈ ôÇ‘õ]~Ⱦ=û!+Ð× ü…ý!«ç1ÖÍVrÿz±þê FÌg…yŸþi#ün ~Eýp_dŸ³o¹Œ}¤Á‘ûÌ àë.Ö;ì?r½m|€ï¶oÃúõÝÀß—àÿKà\ùÅ"Öa°ß¶ÿ³ý+ä7¹Ï¾‚xÈõ¶ó1¿5øóÔ5ðÙÞ#ÿî Ÿï >Ð>Y²¾g=Lý¯b~Ödž°oÀ‘v°¸¹¾ÒÞ¸¯k}Jؽõ !ßêo¬3ûV7sЯ½ç¾6ýúµõãÈõÁ÷¬ ûëñ\¸`çQPgòy©›?ìš~C¿fŽù!û¬3óyÛhŸ³àùiû€à=÷ý­ž¨+X?°ŸÑA|àùösºì×P˜ßïáÇö|°Û3ä'úÉÜ<…ça™¿·ó¼/~7ú„ý¥.ìeÜYÿÐOä|i—§Ï—¬…ÿ?=<…|àÄkö˜±À|î›Ðû#ÐûNÔ–ïàþ™W1®±ŸË|Žyšå}̳ÁcÛ—ÈïX×°îç~¤ýã3à8ó0Îûqö`‡»Q—ó\ ó;ö-8ö'˜r~¼ÿãàóJÊgfpì‹s?t?Ö•} îS²N`ÞÁýBÛWDÁüh õä88iy ä[¾fýÅ|†ù%óÀqö³0ï1ÄaÖ5Ì­³~¢<æ…v®‰ýø‡=묓=/ôÍ<ñâóeÖ+Ìçð=žkbÅ|öü}°Nfßi ~Ìçd¿…y#óxÖM'PO²sëÀ>ÌIØÏM0_d½Å<Žëȼ“ûk<ÇÆ<ŽœÌY7²ŸÁ¾0ÏŸ0o<ƒyZþ »'Ϙ?2ÿ;‡ù[_Ï1u·üþf}Rp„ñ‰yÇž§ó…väùöax_öÛ°öGxžk°þ0óZžƒa?¥ƒ:ˆüí`^Œmögà¿gy¾}Bî, Þ³sšðGæÝŒK–òü Ï%‚û̧G¹?ÈüËÎU!Ž­Â/WáG܇²¼ñá‚_8WÇürÜgŸ§}ôaÏv<Ÿ½Çz2Ob^Î<Žy‘í£ÃŽ7¹ÿ`y%ò;ö ¹Ïó86äqÜwe¿ù ÷ù-D^ÂócV—°nÀ}z¸õ'ñ<¶/ ¿±}[èŸçû°ËS™çb\EÝkõׇëççþËÖgq•ûȨ̀XGp„ç!XW0¿·óÃXWÛ‡Âz1?ä¼ì¼ìœïi×Ì«ÏEt¿GýØFžÜf½‚y.Á©ï%ð„ç)mÿ^DÜä¾½Ù-ê@žïå¹³p€û›ƒçÂéG븿ՅÈÇ8Ò×_Ì™GcžÜÿ]Eì|:ü‡ýâ%ÄGÖ7¬'¾g1X?pŸ˜Ÿsã í&÷!YOAOŒ?ä9çøÂþÊYžC„]3ßç9J«+`¿ŒÜ´zúŒ[g‘‡œgyîG³>`ÜÜGž‡X=0Ðä¹*ž—ãy;žÜ »ã~ûp‡Á·Áþáê‚ø?÷Ãì|ž‹ù*ó6ž'²<ý.|Î<•ù-ó“Óàà)èÁúìG±Ÿû=Žý£ãˆ›Ô«õé wž³c^ÀsKv^uóö'·xŽÝÎé°OÅx ý‘ß5p˜ùt¾öÊKÜiBÍœn¢E/k^bnj1PP¹ A–’ÌÜT(›ÿhz8‹òËõõICÕ2ÂF0†1Œac˜Âf0†9ŒacXBL†p–!œegÃY&p–)œeg™ÃYpÜ#¸Fp;ŒàvÁí0‚Ûa·Ãn‡‘9zÈ&ç$ÃBˆªŒ+%±$Q/­(¾ ÿ@ò…I¦ƒboot/data/cane.rda0000644000076600000240000000327511110552530013560 0ustar00ripleystaff‹ í˜ûnUUÆ÷é…›7TÔª(kŦû~¢nîE åRîçôp ¡µ$… öQxÀGá|tmÎ÷}ãž´‰‰ Iš,fÍZ³æ73ë²É™;±ìZØEÑHÔÚE­±Ð ÿ´¢hÏÎZé÷VQ´ýýÐêÛA>«.ÿÕÕ-È«ëл`‡ù‹]Œ_®†ròŒß€þ`r(¯@ V«Ý¦Ýø]‚ýøé?z\ØÏ!Îk_„ß3˜¿áòYÂümØÀ;ûëŒò>ìX‡°›‡?Öoö]ø›ƒ=ëu§jú¿Žù›X¿ë⥻ÛX7çòb+Ïá¯j®_†ë|ñÝt~ïa=÷³ýÒz3îô»˜??—è’ûË}eœÝ§C©s5×ù¼OCr_»°[„_îû\Õœ¿ÿ¬3뻀ñóëÍ|ÏAgý8ë¯A_FÞWà‡ç‡Ük_€ýùöá§û ó·a%jÆ×‡]~ça vÜÇ{Ð/®÷´™Gw½™×ñüÐ/ëÅ{~~È—çuâ==ÃxÀ]„Ÿ.ìy/xOÏ®7çY×_ ÏÁ_þø. 
yy8²‡¸xþ¨óÜŠšÜEÄÁz]@œº‡°[^oÎ/ÂþƯBç»Àû±ÞÆ…=÷•ç„ïΑÞéy¬Ÿ„äý?møÇñª¢„ý!è;œ<ÉóÁñYçþÇù¨é÷¨‹cr/ä”›/ ÷@Æn~2ƒü>Ú𯚀ì¸8ù¾í‡<Ð\÷Ó_M¿z§sèüR'‡ub^̃ß?ŸçaH¾#<¿ ôª§Ösñ£>oJæíã¤îó<ìÖ‘OÿG×›:íPÿ(ÚþVøçYôòw‘èÍÐ> ­¦~ZÅ·¡Õ«¾‹êßR¢h4´úÇ”ñжÕB«£©S©oy#´Úgýþ¿Zù»¡½Zý[K}"?íÃÐê]ù8´OBû4´ú”|Úç¡í ­Žú‹Ð¾ ­®äW¡Õø:´©×1¿Žyó˜¿õm[<¬< £»‡ÑŽÕ£­‘xV=KÔKÕËÔËÕ+Ô+Õk«×!LîÁÁ›,,,,,,LNRÁRÁRÁR³,,,,,LK3Á2Á2Á2Á2[AXKB[d¶ˆluܶŽ÷Wz¸«-Xm[êõ?\ ½îuÛjmd ¶Ñ-ØÆ¶^«¯ÁÈß8ùc¼GÙ9ÆÎqvN ׌r͆·¥y¹V{¿ x¹Æég•5vþ@gôIcã‹+ûËÎßε‡¿ÏüÛç3ÿ ë‰Ôó¥§å??¯ê9¯ø«x¼ÿÏ›=’‰‘‰‘‰‘‰‘‹‘‹‘‹‘‹‘‹‘‹‘‹‘‹‘‹‘‹QˆQˆQˆQˆQˆQˆQˆQˆQˆQˆQŠQŠQŠQŠQŠQŠQŠQŠQŠQŠÑ£-F[Œ¶m1Úb´Åh‹Ñ£-FGŒŽ1:btÄèˆÑ£#FGŒ£ñì¬ucë&ÖM­›Y7·naÝÒºmë-6Zl´Øh±Ñb£ÅF‹-6Zl´Äh‰Ñ£%FKŒ–-1Zb´Äh‰ÑR£¥FK–-5Zj´Ôh©ÑR£¥FËŒ–-3Zf´Ìh™Ñ2£eFËŒ–-7Zn´Üh¹Ñr£åFË–-7Zn´Âh…Ñ £F+ŒV­0Za´Âh…ÑJ£•F+V­4Zi´Òh¥ÑJ£•F{ùplô‰ßu·÷¸7³´>½ÑËÏ|ôâ93z!boot/data/capability.rda0000644000076600000240000000074211110552530014767 0ustar00ripleystaff‹ eÓ¿OAÆñ¹ËÆD2¥µ4Ä›_‹…É&˜PIEA³¢$$ '°¶¦ÖVkjjü#¨¯¶¶5Î.<ß5ã&·ûÌλïgv÷vçå®kvcÌÔLVŒ™ÌJœMËnbÌZ9ašýnѽ9<:<;7æáã»™Ù£rÜÎöûmóbñ#[3lÙ~í·oÙî=ý}s¹—íå²]ô…³=¶lý°e{ýú×—'×Ù–]IÙÃl/–[WkŸ³-û‹å«ñ¼ê†öë£Sµy}›­«ÑU_­Kžúk¬ûÓõ꯱浞º}^×ë:ùZ¯ê4¯uè>4V½Ö¡>zNš×Q÷Uº–ʱNcõ×ûÐ|}ßêwÓ¿ØŸÿ?Õ-‡0ºzš¯ûëx×·ú>xß¿;-'Wïÿ|ýÉÉyUµròáÓÆ¿•Ûªœ+8¯¢BRh6žß‡éüiNr$O ¤HJ¤–´IÂpÃa8 ‡á0†ÃpÃcx á1<†ÇðÃcxŒ€0FÀ#`Œ€0"FĈ#bDŒˆ1"FÄH #a$Œ„‘0FÂH £Åh1ZŒ£Åhcý­ìu§õ·Ò¼íꃓòq”ÑŸþ÷àSŒøboot/data/catsM.rda0000644000076600000240000000125711110552530013717 0ustar00ripleystaff‹ í–KoÓP…omI  VJyÑGÚÜ{íkï*„›n`Ó­UÒUJ¥$¢,ùð›à/œäœC1B í‚ –Òs<ž™oÆnä¼yybë'ucLÙ”jÆ”ª¹­–ó?%cå³tšGÇÆ¬<ÌO*¹®æšM þþñç—µ<èèFyô®N¢¥W4Ç¿=ØA6bz YËgÙéøb˜»«<²6yÔGfvüÔ¯“ãÛQéóäørc-ŸM£Š›sµ þs/ªK˜ó¶ºŒ=ÿVW°÷<½ƒýn­Ø{Q­a¿EµŽ½æé]ì5Oïayÿ¤«˜o |ê}p ú>´þsp¨ë˜wƒù8§6Q׆nphqmôi ¾…x‡üÂ}hðþaž꘿ƒ~»¬ÃuÞï&´…¼6ùìË=׿<àup÷C<è6®sÿê¹Çt|r¸âÌã»Ìc_öA|õ]ÞGÎ[œý÷8'Ÿ/ÎÐïñ¹œ ºíbþ¿0~ˆ~äøO³„Âëä}vÞç뤂×Iåmÿ#í‹Ë1í멽^]^\v¯wÈøÚêÑXGãi"š˜&Ð$4)L¹w(ד³rNÎËEr±\KäİbX1¬V +†ÊaŰbX1œN '†ÉáÄpb81œN /†ËáÅðbx1¼^ /†#####################ˆÄb1‚AŒ F#ˆÄHÄHÄHÄHÄHÄHÄHÄHÄHÄHÄHÅHÅHÅHÅHÅHÅHÅHÃì^ù^ø…Y—³îÙ0ÿ®›é¯Lsõ(„tM… boot/data/cav.rda0000644000076600000240000000207611110552530013421 0ustar00ripleystaff‹ u–Yn\U†¯#? 
$X@ÄÊSÔ÷œ;":1É`‡8 àn·Û6žèÉÝíö?ç)KÈX‚—%x ¬ÑÎýê?\K´dÿUuêÔWçÕýó£u·¼¾EÑb´p7Š–ææÒâüßB}8Dw:íq½ûÁÛ”hé½¹¾l¾¸Ý|šç¯ÑW•žŸ5+=y¿ÒiTéÜ«tt]iŸõA Ÿ:=òð·àu¯*íßfý9ñy?R÷Kô tý²Ò_èó1þ*þ u¾&þ”}O‰ÿ†¿n¢Ûäï ]úØÅßD÷è÷ÿˆýýkÎý¦Ò!çþ]éˆüþ=fŸ®k‹ø½zÞÿ„<»_§p.Vðé{ÆúŒýgœÒ n«®ÇœcÐä<Ô=¢ïõÑ=âv÷àöÐâmêvá<¦þ÷ìÿ”õ ýmÜÒýµça5ª×]Ç·ççç¶çk“¼v³Þç.ëô©ûJŸ}Î5æþNñ‡\ÏùS®÷Œë÷Âîù'וžþUéîÈê˜wÈý²o@ý>ññ#Îc÷c›:[¨=ÇfýzlàÿJÿkø«¬Cü!êÐï">Ì•sÎcóåŒsŸ?µëbçg}öŸó_ 3ú=åz_ gÔ¹¸¬óÎ-/ªï³ë|Ñb?üsüùv_¬î”}câ6ïF¼gÇè˜üI“¸í³÷ÎôU=~ü¦^wȾáÕ-.þýÓF}}Jš×ÔŸ\ÖóŽm¾Duµsœ´êul.Ùû<°çúGÄ÷èïíSgÀyzôÛ§›/¸ý«zþ€<›?¶ßæ­ÍÙ}öís›ËûÍz]ëÿ}Nþõ6Éßf½‹vØß†×bŸÍ“ßٿú}oX¼Mû^1޽Ýf½Ÿ-ú·ï'››Öç.œ?ˆo“oïã}=ÿ~†Ú÷è õÖðŸØ~úù‰u›Ç£_ÿ§ÎCüÏñmþ>¡î·¬býà'¶ežßú)óÎQû°;œožŠ›37Á…©'·Òïþœ<øï–—–›áÌðf$f¤fdfäff”‹qCV,ËÉò²Y©¬LV.«%†ÉáÄpb81œN '†ÉáÅðbx1¼^ /†ËáÅðb$b$b$b$b$b$b$b$b$b$b¤b¤b¤b¤b¤b¤b¤b¤b¤b¤bdbdbdbdbdbdbdbdbdbdbäbäbäbäbäbäbäbäbäbäbbbbbbbbbbb”b”b”b”b”b”b”b”b”b”Ƹ7ÁŒƒé‚郙3 fÌ<˜E0-´8Ðâ@‹-´8Ðâ@‹-´8Ð\ ¹@sæÍš 4h.Ð\ ¹@óæÍš4h>Ð| ù@{;j½sÐÚx^ oy«=j?ØÌ'÷Üûçæï_+};çboot/data/cd4.nested.rda0000644000076600000240000001474511110552530014611 0ustar00ripleystaff‹ Íœyp]wuÇß{’mYq;qV³$!$ÔNì$8˜ßm¥-¤@è0™–ÉàØNãŒ7$e…Òv:(…”I[hC!@ÈM³@⟼Æ[¼Hò*/ïi_¬}·l©ïÞóý\=?ÉKBÿÀ3 «ûîr~ç|Ï÷|Ïù=åöß1¿øŽâD"‘J$§&ÉÂìaa*ûÉDbVöD¢xÑâ›æ®XRZ¶dq"1å‚ì™s²WLîÐ?—™|Õ¡ë/{<‘˜œ=?e»ë¾­¢ë™áF×õžÕŸkýú{}óô/^òè}¾õû7ýêÃw¸ºwe¯x6˜92T}åÎz×ÿ³½Å[õ„?ä¾Úö½{J]oö®÷¬¾Ý d²ìÛ÷MÙ›fÜõ¾fStàg>ùܬ‡3îW‡_q s¢ø¾ìþôGßrUEó®n~p·k²Ÿ¾uãSÿýâ¿÷‹×ö­}ä‹nŸž·>üq[¥Ëô¼ü‰ŸlYãÃÿ/ÿÇó]»½Çõ7?Xu¢çe¿îóÛ÷_ô w@ïí´Ÿ¾'¼½¢Ûw]^Øë~óãh~[ôï-×üŸá¿ÿr‡¿q]jÚŸýÆu-ûö­ÿþ“}ǧ"ƒÜ1»>H¬ ð娾ó_0í]¿ WàÛ"3þÄu-oü‘?rYt£[ó­ðD§ïøÈâ¿Ê^êºísGý-$˜”uÆ‚O½é>ºû™ãnÇ‘èB8 ×ßp™Ï„'Þíõ×™]üŠïÇo¼;úç²Fßy³ó[´®¡ð®û7»Ÿ¿|ýc?¯õ=áÕŸ9ë¹Oœ¹Õ§C·W¸&{¯oüz×g~v¯5FŽðCŠK³Ö‘¹(tÔŸûZá¤ÞÖí g¾Òâà›m]AêѲ×ïýÚ1W™Š ¦„—â§nmdÖGÜ[ß»§äùË7¸È=ó\K·/ùtxûƧ}üÒ+¼òþݵá‹.v뵞ά²Ètµß‰ævžƒ‚КëR~³áÀ­3¿ûÐ9ŸØá8Ÿ6¼»îè1÷ø]–/¾Ëâævgƒ~mß:÷ÝÕð]¥ùŵ›½®&zÍ4ßjïsu!J®í÷CÊ«ÁðGÙën—ü¥ü ’LÿÔ·®|WøÔé_p-?×nö8ÖÛ^þÉç|k¤Y ]&L¿[ÿÃwÚó} ö‡n]ÿCWñµÈÀ ©|¨ÍfqÉþH÷;}SôÚ¸ê0LCÕ~ØâàG\Fx®ˆnû˜Û%üŒ`÷·®Mxy5réèök|C~—ìk¶uºÊŸG q#ú¼ÞžïŽFËýKßf~öÿ¥ÕC.£¼Ú,¿uǵòÃ>»/(RÞ¾Ád¹+~ö›]¾ËòÈõî|‹øc$Zöû\·ñß.»öZþúÍZW‘ìoýòmHùÖù ç-N>-œ†Qº÷ïü+ÂÉ ü/;ÝSŠO‹x®/tóWÛ½üæ„“}†¿ÃòÐ7+Þ¯)2†;wÀòÂw‹/ŽÊŽ·B”^êê#óf»>ãuß">_+3Æ“~X8ÙgyL‹àZèºßñ£ìr«•ïÇ"·/öÇU:ì9Añ­‘ƒüù»GõeØòÜí6;ý°ìÕzO(® æ§ qcôÏ ë§øÅ·ë½ýâÓ=lf¹£âÿáûê`ƒå±Ûn×ù~«CnÈò2HÉÏàïXXÍ|Ú2¾óå·nÝßÑÄt?lyãúÄ?kψkT­Ñ}ƒÂoÿ#‘C±Ïµ Ãæ· eþõÝâ›fÃ]”ß¾}—Õßa¸ Î3¾÷â‘N­³Bñ¯Ôç}V÷Ü‹7·×òÏ5†a*yÞ Fåè³Þ+Þû·Îèã+\[?p'ÄË-âéûÝïïvÚ:~©2ݤ´Þ«o®M8­±ëƒBãi÷¸ÅÁÿZ<½Ïìr}Æ?nâ?bë &nƒ„xaŸü×]ö€«~zÄ“kÅÇ:ß$ýóšìÛduÎo²ç¸Ñ–+ݰp¡üs]VW];~¯ X}ôOJ‡¤Å »TŸ6…°{ìIwDþo•¾Ø"=P†ùÊ]¾Kñª·÷û6ñP½êŒtžk²úçI¯Õ‰Ï»Ÿ\½tÇAù; ÷îêˆò‘ú0 >P]xA|ù†ñH0Uºn“x¡ÁââÓ²‡ü¯±¼!~Äß¿bº,ΟZ‹«V•¿ádêuE$;¸NážjèŠÏí—ÉmÊ·nñÕűAõ^8t#²³QzàׯS¾Gx5¾ðmÒ9k îY鯴pó”®7wÎpCÊß~‹k0I8©–Îm”.éÌ‹Ïá§ÃâãúM…Zÿ6áä¸ìØoññ5‡ ¡ç½a:"H 誵æ¯`²t9º1­ú¹KÏ“®ò›_ƒkæGøõÒeþMåU—x?Ö-ª#; ï~ê±xÅ× Wô!CâÕ~Ù]«þ¢ÎôCPh<êz¤غËSÆçåâÛu{#‡Iê©êA¯ú—.«Ã®\}P“øe¿øí1á¯Cñ9¤üiï´“â«QåûåiZy±Sö¤o& —i[gR}è” â§þHØú!]ÿ’Õ§üŽuu—ìî·ç;Õ-¿Ûü+ê}Ÿú”íVO|¶1‚×½~§õ¾M~Ú+üU[^©Èýïwôoª/îiõ[”õâ§VÕ醠ØpçëăÜ'ýïFM÷•«î¹Zé®6øSö I?Á#ÝÒí=ú]ëpÇÍ^_k ñÕÂzµVñ«Q_rB~îVþ¡·Ž+~窟N OǤ À¿òÊ Šçª¥¶H¿§´þ&áã5كΦoEo‡VÙ¿_:òØÉüî^îP~{â›‘Ž®’^•èãqØ-]&~t£Ò÷¯‹gzT¶J—>ixqƒòÏá«ÍðÌT^?*¹K}ï1åsRzþºOú±OyUo¸fß`ý¯—._NRšy“éÃâŒø*#]Ù¤ø£{êUW?zDx^½úw¿GëÛ#ˆÎ7º)ô[…ûñE£üZE¢<é–~R|ÊSŠ'}Á æ”÷ÕÒwÊÿÍŠ÷úŠùéå=ªc?R}Ü¢>aH¼7¢|¬~6Kÿ· /kŒwƒK¥¨ŸêS‰·ê¾g.’–¥ÏÚdñô•Â{Æâ$õ¾NÕù—å×ÍwÒÂW›pœ¿I¿ùNÕõª 'Ô¨_õè§võͤÐñ;T_ë…ëØŸ|.½K¿ó?}tH‡ê¦ô…«’}5â‰UÂM­p¡ø¸u²÷„æ)ðìñM¯ú8͋ܮkÔï»5IF§¯ó•ÂÇVá@z0HHS>ת¿ÖܧB8ø™úµ„ø«ÓpÌVëÎ ·¿µúLR´W}Gô\F8Û#Q^õk~ð}ùþ¿ôO3:LüùOæGÄ®/O*žG5_ÔüÈ·[ÞøÓËAÒðãú¥çáÃ×MwS59ªëØ}î˜tO«ðÚ ¾þ]«~í€ø|TuA}‚ë6>wCÊæ#Zw“p]gó´ !^f¾Ù  {[ÅCâ«x^Xgyjý=šƒU)>ä}¿t[ƒìê1¼¸.ñRŸpżAõÄÅý¤ê4}ú€úãÝV?¼êº«Q¾7«£“;•?=êCù¼AùÑ£÷K70ÇõêÇ ezcù³æŸê݃ÒÙe†_ßi|$”ß«Õϵ‹ï+T‡;…Kæpê3]¯ú%=?(6¿ é´tU·tÊ øµS¼ý’ò>­º6¬¹sìh_ )+_ÚÕ§ÿÂt@p¡éØ 
)~iQ]ëTÛ¡¾¬C¼¼Aù×"^Ý$?’þªžé×ï»=ê ŠÕw«ŸN©/ìÕ:™³þB¼˜>7ª~½¬¼­Ÿ‹wµÇ“ƒ©Æ^uÖ}S~¬Ñú¥/‚"ùáóhé…åßAËg_i¼çöÛ:‚"å‘tºQè×õƒÂúPqs{Ä/iÍ»ûÔÇdd¯žKŸ¦| Âq¿éò¤ò‡ú§~ÍwªQÝŒõ—êýcPœ4 ’òƒöUâùÞß+ÿ›¥³ßxôÂ^õ;ë”÷½ò›æÞô9¾Nv4XéÓšÏW<éß~`q .¡ÏUÝs-ú½Bq—¾uíâåišßÐ7½¨}‚ª×Ò¿Áß5ÝE¿7*]¢¹¤ßkó©˜_Û4:Æ\Xû>ÚGqiÕ¿ Ë» @ù8 :Œ.×ó5·ˆùWº"ž»Á-š?Ê/®]¼R%ÿ©óÏyæIÃÌ÷ÝÚêÓܱ_ùS'ž/ÌûϾûÿ¬¼m?÷hn‘Q¿8¬:žîÔƒý»Få¡ð㻄cæõâ½mªÛ=š#1oïÕ\%¡þ³[ûƒ}ÒÁèõåO‹x‹ùÃF}Þ ø•ÎjNéï´ÏÍ|Ã)~Ì›ÝÒ×ꃨ‡1Ö Úowë ÿq~ÖéþÚ#Ò·jÈ5 oGU÷i^P+žA½ xˆï\F¿ï“½MâË:ñ:vâfñm›êTóBéõ]î åóôJñmÚêGü9õýyô˜Î³®Z«gN}dì'ÕêNüý€íy÷óÜ´Í÷øž@¼ŸU-\j>æµOKmeßMóHž¯ÐOn—Ž$ÌöªßÁ.í[9æKíê'±/£ùý‰ð¯³öaø~q‡‡˜Ë0GÒ¼¢F8kS=`ݼw¯ÞÃïõÊÃV=‡þï}dÔ÷µÂk³æhu²‡<¨—*;žË»< ¿vªþÑ_5ªÿÌ­À#ùBü^‘ßÉ¿*áWóCÇ|AûÞn§êú›º¯Fu€x¿®ºNÞ±¯[£>[ûZ±øû¨xIz0žé{Nú+Χ Í}št½æ1žë¬Ÿtí¿—øKýxü^âvH¼ˆÀo«ô3xwê§caþÁ-ñ$óø‘¼×þAlyóCù^;ª¼¦Î×ëòðï°¯V)^`Ÿ„ëÓš£ƒ?òþySuœü‡g™3¼¥|‚/ÙW—Þ‰×ÏܨCû§Ôðoayõ¬®C§“Ÿøý·Š ûLuZ7~Çä%ör=ù”–½|?ç—ºüÁ3-|Dz`§ø¿‘·ì?¶I?knóëe¾Nžb~kÕ¾éAánýºø^%Ÿ±›x5¨CWKüA~àWpǾöÕ?ÍqcÞÄ^ò»¸Oû-NûÕñ<±Y¼@RC>«®S‡Ûõ}êm\·©ÃÚ¯€g™ƒPãïyè½â+tvcG­òPß3ˆçä;~ãsx\£¤;ã<ÇOð¦úçØ_Ú¿× ‰'y‚_Ð_Äž¥Îa7u²Nz®Qþ!ø‹¼…—Á½tó6á ýÂþëÆž8ïųð;uRß§‰?‡ŸÐƒÄ€ü}@:†zÍsðƒæƒcü)ûxú‡|…5‹ùøúF>Á3ð}Ù ®ˆ;8àû7¬^‡Çø¼\õ^9"Þ¤NÀìÛ7©CG“¬‡¹"×ÿXÿÊ^ð¿t1ú`›ê<‡}üN½áþ£øO<^¨|†8éûnq½FÏ¢w¨/ètúu3‡¥¾µåÕqòž ›evÁ«š+Åzƒ|…Á u‚z†ø ÎRxN£p“¯7XoŒæƒÊkp‹~×ð8#/‰ûKò_Œ ášüØÁ{x?~ ^â®ã¾jÕ³}ÒoÔpÎ:Ð{ð*}úý ÁþDgÀèÁŒê6õUžn¡~Ñwwê/ýÑzåø€wècð yN߃ÎE—4é:ð¼Lß›Žû!øüÃßè0ö£©øþÁà À?Íš³Ãwèlty¯}ã8o±þ¥¥¯ÀßÜO|ñ/ºCß·ë÷•'Gäx‘õi6Æ9ë€wÀ5ÏG·¢+é[˜[ o(¾ûÅç|?xÀ#•Êtz:îËU_ð뇷Èî'>ø\€/ê ü?¯xî¿R×¥[Á;}:<¥¾¢;¨÷ôÁÄ™<À^ô9¸…Ÿàkx¿âgðO“ŸìaõšzG=ðüC¼Ð_q½Ñ÷É?ò‘ü€7°Ÿ:F?…ÎFw¢ÛÁ¸¬Ï‹'z‰¾‹zÃõ¼|7ëû<ÜÐSO7åù›:Cý"ð/ß?…é+Ð÷ä)ßëç>ü îð ÷ãWúxž Þ6)~Ôõ¸>¨Î é—¨çèòš¿Gà}è®#yùI½Àî­Z~!Á/uˆ|GôqOƒpÙ®çòÞŸÈSæ¥ÌýÀyžÏ‡èFt~ ~°>ú6ì@GÁçä:üÀSôÄŸøÐ§€gæ!¼—º?ò‚|#o´ÿ÷cøŸùa<—•ÿÀ=ú’yOÌßèqù~þa~A_ƒ.fîNÞÁ—ô%¼ùvãw>g…½ôSÄþ¿ÐÁƒèêSÀ÷ø‘¾„n¸î ó ­—ç£ûÀ'ëÀOÌ':´ÏFþQoÀ?ú¿SG㺯:H¿ þÐ#è·xÿ@ùH?Ç÷ÈGx›ŸùûÌýÀ<ʼ}H]‡/àmx:KŸK^3/%~è-pDB߃g~÷A:O^üA|éëÐýð)úø¡+á]t)ýøÈÇö¿ãõ¢¿˜WÅ:MþlÈó/~%~ÌÁ©×ð~‚'°ƒ¾=÷õÊgæÒäñæçyüA?ÞÐûô'ð¼É Ÿ¨Ëì+Àƒø•¸1 aýÔìÀÿð>óÞ€+ú¨ü¹h]©õ“¼]ÇŸ¼÷øÜ0?ƒ‡ÿM?Á/ûvð-óæWð:#ž(Žø]ÍœÍë3™Ÿ¡Ðçà€¾¡ûÑ«Ôõmys®'¿™{£‡¨›ø þƒ_Ù'‡©ÇñüQýº—> <€{êÈÖ<¾G/¢à;únp†>eÿ€z‰®B÷s=?Óy:þ#>è7ú{úê7z {ˆ[<ÿåϼ¿X/X¼työÔ¹ö'êúSõýÙz2xäýázôçì)Ÿ$Lþ3‚sÌA¡¾þ6C_ßœ¥Ï§ë÷b“7Á ·ðÓôçfzžÑó u~ŠÁ5(Ðמ‹Lþ“ô5Ú"þÐ^v˜<Î××`/4wE²çR¿ØÒ/8O¿Ï0.°t ¦ëýçkÝÅú³á‹ŒžƒÉú3¸²kªþŒ»gÊΙú½ÀîË Ç¤ —/)Ížœ!g‡'‹îZXºdÙÒKôû”•+–<´daIÞ½SKV>07÷þYº>9ƒùÜÈÁMÜÌÁ‡8XÀÁ-|X©y7ÄGóâ£ùñÑñÑMñÑÍñчâ£ñÑ-ñQüŽù7äûfѲ…¥¬-©ËŠ/,[8÷î’ì² ¶t ²?‡„ƒ”n/¼kåÊ2Ý5yÞÜùs"Cr_P>+ûs8ï|ò;yUîÉTÙ œMÞf'#K&ç~ðeVîÉÔGçè,gŠ­,)™{÷}+&¸º({zQÙÒ•úìdÙ]4šÿîë'xZAÉ’U9 t$θK Wd‘eçÏó2×?:Ág9ŸŸ3:voìÓ &²&ùÁ‰Nþµ8;†Oáב·¹ÔòùÛsQÎñÅ£§{{xço¿?Çc‹“8£Icßy¶vŽ÷ÒYÙ¯gÏÖ3/=ͽg^oŽ·/ËYΩ–9ûwˆÂ;YÝ%¿Ëê JîY9öñŒ £uíÛ[ÿÙ£p|tß¶ÏN㟋ò¢É)>Æg¥_.)ÿ¥úù®üûOK6KÇ>œ9¡WÿðžëÒhbb '•¬¼oÅâñ&Îʥʳ51u_éÄ‘OÎñe§ æå'o|0/œ (ïÍuêYYùàÿ«•yðɇEž•ãB~ÊpzbcÐûôóª¼ÈN”7rg:ïλ/wñcF\7vrvÞ‹OµúÜk¯:"9]%åX;AÈrƒ—¾ÿL/€UÇUï³1:uúPΚàùSN£.Ϲ>7Á¯8URL×WNýw”ÊÉÚ:7×øSPãø<.­üúÉóßß8ÿúþÝãÍ5¢½v÷WÖÏo¨vôуë¿ïÆϯ¿ÎkGxí;ë/^|õµý«¿ìßùÌþƒß ·¾×¼÷ή”㽃rrEõ?«~ê Üy[åír|çA9=¡ûC½÷K=ß=(ç¿ÖýwÔ¾PŽôÞôA=—Îö1õ÷ÕÕ~ú–ê²s´¥úÕßlÛ1åþ»j§çÛÔeÏöq•òËHvÏ4ÎmÆ¿°ã÷ßIéÍõ|Eíå¿mÚ¯é¾Æ9’Ÿ¦Þ“½ÌÇXºó[ÔîjÛ/èM5®‰Æ5—_gê'ü+»&ïXãêþTí粿cÇ”÷¤Ç<ÎÖ¶›ù›Ÿ—²ÿÎä÷âì7m¿?—¤KŽoÓ{êï’ÚM^jÛÇ8¶ßn¿7¿§mÏ¿gøQq=ª~¥}Ÿ¸]•žÚEÜ).w޵ï3ùa¡ø™±^ð“üBœíÈ_3ÙÇzšnª½ú'Næ¿’ŽúÝ®ãŸñ⿽¶;²o¢ñ±Þvä_Öq=}»Ý.ì¼[Kã%žbž×ÔþD{üÛ¬Í/ñ[ûƒ8œÈ#=ëýq«ç[{•êÿq4ÌsøQí&ÒŸ©ÝDþ±.¥3£_ù}Tùk¶);䟉âr Wñ˰­3–ßFÇDïÃÁxÿ­ª®yÞ‘ÿ¶ˆ;Å/q±³«ñ ®ü2ÁOÄtæßnë7åo8;Ñúÿ²íòE̻윽Ôn#ù™ùÝž·Ç˺c^Ñ%¯À¯©ž“Æ*‰+ú%ïŒÕ~Š_ˆGùgŒ½C½'ÝmüLnê}ÍßHý“‡&&_ÜÕ¶Ÿ¸.ÉžméÂ…1y~W÷õœˆùWœÌà!y“øa°OeÍ¡Ñ[íþˆ£ñf{¼ìGFÄMÅ¡Q×sæãÍö}òË¥=½§qNäçé•¶p ?!žäOân¬uqñ`û}ü‹îXöÃòë¾1¯[êýB¬OÅÛÎírKóÿ“zŸ#û·XgêwKöLyŸ~Y7šü:…ÓÌ+ë›}C]Â'Ú¯¨®~¶ˆsâVö²nÉ'ŒcL^&Oh]ŽØ¯¬©Ô¼EÞ ÿ“‰/Öu£þÅë9~“¶àÕJà օîÏÔoì£á—Æÿáó ÿ.á?Ö#y†xÜl?}þ'?ïphOãªòû¥ØÁgâœ}‹âh&?×™¯YµŸÄ¿3k‡ˆï½¶Ÿà_Ä9û*õ»µRõO?Õ~ÆþP~b]÷àë® 
ÏþsWe5nò!ûMâ›};ý²ï¥¬÷Ùì/ˆöuÄ#ãàÜA|,TÆ>| ~à&yîq®ê>ë‹óçÅEì{ÙgWþS¿qî$»zi¯¸™éùüG*ª|DíÅÝ™JÎyó;ä?Ù±ø­î“þÔçŒ8×:bŸ6¿½Ò?ä7ò²ì$±/_`çšüÇ<¨_âîľûö89/,nÕ8yÎ>Ÿ8a«œË®ýÃküÕ¨ÝÙ¶?àQœÕ~~Zω?ÅÁÎâ7½Yóµ`~‡òœ&~5®ùE•Wõ\íÙ¿À7ÖñTv,´ÎˆcÖå|";UÇÿ´‡×ìÛ·«zäY¸J|iþ¶Ø7œ·áøŠž3NòÆiœð¿ò>\aÙwW8ï+îç›jϹ\ã_ÇW¥¿×nOœ—¬Ëó¨süeõÇ:‚£‹·ÛýΈÎûzo6l×?|"~fìdù]ø·Ð¸ãûý²^ˆæ“õBSÿñýr®­x=—¿¿Pý+yPvÆ>rMïK—¼E;üƒ;UÉ~bv¥]Îá ñ%NF>¢$>Ù/©_â€ï‹[Úõ)<àü¤qÄ9‚ýj£÷´ÞÈ÷¼Ï9*8DÉzØÓø/;ªg4¯ ÍOħü¾ÿr޾L¼Ã3tÉçòy!8CÿŒ«¨ý®êây™2¾×`7ýë; œ >3.ÍÜ!^ç·µÇ<`^eOÄ3ó¡ñïÔqŒ?ÖJÎ-qîP\ötÿxÛ_œãÙoÍ®:ßAd/\cœäex}‘OÆë›ó©ž3ÿ¬Çóʺcþد×ïh#øL¾`¿;lÛɾ^q‹ñÿTuüªùc].˜g¾ÃUçÛÈóÌߞ귷ljÿâ{J£’sðšÚ±ÎwÛzÁ]ö­œX7zßÙßa‡ì"ϱÿX¥ò~yf]¿ÜçE¾«ç™óΦú#NÉszNüðŒý ß[ÈK”ŒŸ}.ºœÓ8ÿÌ᫾Ÿ°¿„+äéðû4ÎôóÇÊÄ­î_þƒÚ­”|ˆ¸ƒëú®N>cÿ¿Ú—-鱟J—ü oõ^pZvóÝüΪÇw;âŒþ”7&»%ßÇ8GÅùDãbòýŸ¼¹ÅºÔ8æ¬ÙÇ÷‚ø^¤þˆ‹Ø?ÃOé²of\œ ‰÷È߬_q:ò-|Â+ò—ÆÏ9 ?Áöq¬ÃØçq.Tÿ|‡cß)×tŸ|K\`üD>üÃzâ¼S­;ÖöÆ÷õ«*•àù™óü÷Å=þ¿,{â¿sTñß3ñkµÏ˜VóŠ¿bÝËo òÙ»í’ý=ÿFy!Î/hÏÉγêÿ~ÕÏ©|V~yFõÓU÷ï×ý¯ª<«rUÏO©þœtŸW}}å 6í:å“j÷„JŸ‡OªübõÞÚÕªìܼ~ûgdÏ——èmÉýZç˜Ú}©jÿÂæõßïVíž’øõé½öó{uÿ‘ƒþø OVý< úWßg<+*‹ÊSÃòñaëý{ßÕó¯«<Ž®Æ•š¶}ËîÓ{mìê½'¤óÝ¢óÍ|=Vµ{JïãÇ[›öøî«ìü¦ÊNÓßpMýÝNížmªö+²GíÏêy¬{•/0.=ÿ\Ó¶{àÃ]ëçgdWÍ¡÷+ÉCâ'¿¥íï_òüýÖ1%qÈüÕ¬ç“xLý¯4×ýÿŸ¦_µVï³NoSɼÿ¾Ñ´íÀ>æ•÷‰÷¯ë=ö7Ø ‡é‡Å^öì#Ø÷‘ÿžTü‡½ì×Èw‘ÿöÚz÷UíNª~Zý²žÉc¬öÌçâ;7çòvÐ?ýaת7*áûkÎE¬xü€êøñáê>û{ô蟼Í>Šù»+;WTÔiGþ$®Ø_£ÿ´žãÖGgÊkœ;9Oü¯ËëïãnýûWçë¿=ÿ•~Ëwyøûp~‡þ=üþþóßGu}TíúOÿ¯ãâ÷Aó߇µªÛÿ«ïXùÿ£²?ü öÔ¿kÿûAu–Ù÷ïû¡ýÏA/¬¿²Á?½Qÿô¯o¼¡Ë7.\|í‡ü{Ò7^¾ÈõÅ—_‰gúâÆ…×«ŽozíÕÜýw¿÷üp­ñ‘U.™‹ÂE‡‹.=.ú\ tqÃꉸZ«W9®J\uâªW½¸êÇUh¤ÐH¡‘B#…F )4Rh¤ÐH¡‘C#‡F94rhäÐÈ¡‘C#‡F %4Jh”Ð(¡QB£„F Ðè„F'4:¡Ñ NhtB£Ðè„F74º¡Ñ nhtC£ÝÐè†F74º¡Ñ ^hôB£½Ðè…F/4z¡Ñ ^hôC£ýÐè‡F?4ú¡Ñ~hôC£ƒÐ„Æ 4¡1Ah BcƒÐ ñ‰Õ'|¹êËäËìËâËŽ/»¾ìù²ïK«­ZmÕj«V[µÚªÕV­¶jµU«­ZmÕjÉjÉjÉjÉjÉjÉjÉjÉjÉjÉjÙjÙjÙjÙjÙjÙjÙjÙjÙjÙjÅjÅjÅjÅjÅjÅjÅjÅjÅjÅj«u¬Ö±ZÇj«u¬Ö±ZÇj«u¬ÖµZ×j]«u­ÖµZ×j]«u­ÖµZ×j=«õ¬Ö³ZÏj=«õ¬Ö³ZÏj=«õ¬Ö·Zßj}«õ­Ö·Zßj}«õ­Ö·Zßj« ¬6°ÚÀj« ¬6°ÚÀj«™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Ì’d–$³$™%É,IfI2K’Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Ù,ÉfI6K²Y’Í’l–d³$›%Å,)fI1KŠYRÌ’b–³¤˜%Å,)fI1KŠYRÌ’b–³¤˜%Å,)fI1KŠYRÌ’b–³¤˜%Å,)fI1KŠYRÌ’b–³¤˜%Å,)fI1KŠYRÌ’b–³¤˜%Å,)fI1KŠYRÌ’b–³¤˜%Å,)fI1KŠYRÌ’b–³¤˜%Å,)fI1KŠYRÌ’b–”k,¹aÿê¯Mûÿéíæo¯_\¿û;¯­¿÷ñî½ÿ··æ/»œ¾ÌUboot/data/city.rda0000644000076600000240000000032611110552530013614 0ustar00ripleystaff‹ r‰0âŠàb```b`äd``d2Y˜€#ƒ'ˆ“œYRÉÀÀ. 
VÃÀ¤¹@À!Jû5@è´íÁ¡ â¶P¾9”¶ƒÒ 03÷D< *5¿J€ÐþPs=Pí‹>€"ŽæÖ¼ÄÜÔb  Ô3 AÆR£M9gQ~¹².˜JCÃÆ0†1L` Sà Æ0‡1,` K(ƒÉÐÝ©É9‰Å0{¡Ê¸RKõÒŠ€Nòþ0Iü%_Áboot/data/claridge.rda0000644000076600000240000000046611110552530014423 0ustar00ripleystaff‹ ¥”MKÃ@†'!b›ƒzðè¥ÅÌ&MDmW(þO½.¦~€FH¯þd¸ig^L@j yŸÙLö Ù%wóÇ‹˜ˆB †DAä1 ý% :ö4¸qõsù¸$Ú?Z÷QtàsdÏh}X–L%³N¼”¼úg^ï˜Ó-9ë™ö³72>—ú–äØ|¯ÙצҴƒvÝ7-m©w›¯;O_ïùOþ¸/ïkO¥±½ÿö*÷º\ùÁ¦­Ù|Í`TV®R~rUÙyhX¿}Œ?8’æ Q`£*d …\¡P¸“sPb¥  4å ÃÁp0 ÃÁp0 ÃÁp8 ‡ÃÀaà0ywùüïc¥«H[\ºw7~¨ýùê»9XŪʉboot/data/cloth.rda0000644000076600000240000000064311110552530013757 0ustar00ripleystaff‹ u“=HÃ@Ç/¥EPªIÅ”´¤µƒƒsé— ô ‚®vrp jqP+mÁ½³³®:wÎ]\\©‹ ^’w=éƒæÿ»÷qÿË‘67w¹¾«3ÆbLK2¦ÅÆcâ¡1fŠKì·{GŒMa‹Ï uýÜpç­ßøò€ÄÀ¼zÙÀŒÔ‡ë½åñ#Ý" …ù¹»‘¾qvób87l‚këÆSÏ`Š…È@óéa8 ¢[Œ€…¾Ö-œË`=Žoƒ}Ä-Øè“m…ÙèüýëÎ(Üòâô¢ùר¿`Eû,EûÓ=¼Ë÷C5QWPmÔ*Cu”9{ÂzQ™w”¼ì+*ù”2'Ï3ê*õℼô[G­ý­«Ë©rØÉÀ>øX‚¤v!áRiOvÚç¥ß#Žìt%p ž„²„Š„ª„š„º„5„˜»Jäq"¨LT!ªÕˆêDäÁɃ“'Nœ<8ypòàäÁɃ“‡GyxüÿÖïÊ;Õ°M?ð{~©Õ×-VßÁï³?[þboot/data/co.transfer.rda0000644000076600000240000000030711110552530015067 0ustar00ripleystaff‹ ]ŽM ‚@†gecÑ ‚úBöuõà¥èC¤Àëbë¥RXéæ/&ra÷}öÝyg'‰ÒÀK=p@¸B"J0GÆYé×VUn,€šu¥ '¨*Œ [áŽô@z$=‘žIã¶Î'aïïéýBz%?æ|û×o0â¨ÐOS¡9¥á:Óµ}ÑE6ÆÜ)×–ÿ›TT, Ã’aŰfØ0l‡Se]qoAUÞM×ÚÏ-~‹·÷w~oñ“ƒboot/data/coal.rda0000644000076600000240000000357611110552530013574 0ustar00ripleystaff‹ ]ÖyxMwð‰ŽÄ  Z*µ’ܳ_RÞˆÐXŠÐº1Q"–h&šVh¤’ÊT£j»ï!íPKE ÚZ&xL(õÄØB[K­ Š1eÞ[Éûvü‘ûýžûÜç|~'¿ßù×Û­¸G‡¿ÃáãGÕ¯}ø8Mü½I“SŽ?=ûί!exn™¶¢€çÖGËû<ÏíÊ̲ó¡¼Ü8ôq3ðTEmqg7eRüØ•2}þö“«( –TŒž»WC§Î¹ž{û§ÆûÒñ½_š]ܰ <÷û`|þeʧ«b{hàyXÔ'úK@Ÿ…G¿Y~°NÝû­ÏÏ¥ì°d]Á=Ê¿^éZ èWøÙÄKëbÞ¸¾C_ºÛ¾ €õFš?ü%–ÂjÐùŠõÇ›Ößw³¼’ΘÛ8žrý«k¼øbÂù°ewƒ&h;ïÜ¢\ߎ» å¥ùõ&¶l²óõœÎßý¤cHÀ¦îì=íf6›Wùß8ÀWŠÖD¡ó¾R–û÷-4Î[»»VÞlÕµ^dŸ”Þa½IYõ†Ÿö)`»‹íñ%€.ž]úóÇ€£Âöì”0Ü“Ey$voùÀÐaí”¶gß&'ôP¾òÛ(À°‚ÔßêN wËÅÙ?Qf´ò]ÿPyš•vŽ•Îâs“òî˜en@Ý‘WÔz%50 Ð0¿›=²ÐŒÌX¹ðåÌ[o}h}p6= 3`·Fÿ ¸¿˜Æ‘TžwýtàØÂ…†?ꘜ;ýßí/N Ïî¾`$¥[U¼’®*!0%yÙâ‡-'…Ÿ¨~«#åº1ò'÷{Rî yÏH=Ú¹Á:À©_§Df¬Ì<£|˜8ý—˜+gLÏ™Üv e¹#~ëÀ÷|×?©>KY²iô¨8ÀìüQ»ê¾ï]v9€³"~ ^:0W=·ÚŸæ#÷`¯FK? 
-Nì¹ pnrúÎE%­ê$`¡ÏÍ´Nÿæ]ÖÙ€EÞÛ“üÏë5J™8pÉÕ9¡}×bŒñÁlZ+Þp6ª¢ù.¦»4n3eœwƒ,É=6ÿ*­»-Eƒ TnÛ@7 ù{]YMÚRîªoT¸Ëi(›x°éÀGÃ'ýxÆkýhýkrú?çç¾ÚfÜË€ÿš=òSÍÓ©1×n¼x:­SåZg迟H÷怒GÛ¼ÒÆêÙ„®ûÚéI•´>«§5_Ø’æûa«xm%Ø-(q¯¥ÜöÒèy`7ìÛæú׋Án±2ùþ m`‡dþyò©J°;>Û?ìÎO^,:Ù ì.oμV4ìð!3ÞóùŒ2Þæ*°#z-üòDO°ÕL׫œÒ;=tlÎmwºù°»Ñªvìt–ˆ|Êý/5ì^c²–~·ì˜gûžûnBçÝÍÀÛØ}2 ìÁN{tì!O Ïû­÷Ìûí`ï…Rö§ˆÒí(MûÆï~¶îìø[!‹ÆN;åÙ~g§6ï>~;Ý˹( »•¢,Û˜œvÍ:±ÕŒ·Ø»Ý{µw{kðܳ¡nzbÚ¸ ú2°æáðûclâÔqÏýÐʤéaüqYÍ}œµE©-jmÑj‹^[ŒÚbÖ«¶¸jJg7'7…›ÊMã¦s3¸™Ü,nl(l(l(l(l(l(l(l(l(l(l¨l¨l¨l¨l¨l¨l¨l¨l¨l¨lhlhlhlhlhlhlhlhlhlhlèlèlèlèlèlèlèlèlèlèlllllllllll˜l˜l˜l˜l˜l˜l˜l˜l˜l˜lXlXlXlXlXlXlXlXlXlXl¸Øp±ábÃņ‹ .6\l¸ØpվΈ©N©ŠTUª&U—jH5¥ZREsŠæÍ)šS4§hNÑœ¢9EsŠæMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍÍ%Úï{Èÿ½$¥&f<ÿJ@¯‰aã§Ð =ñþý8Lgkyboot/data/darwin.rda0000644000076600000240000000032411110552530014126 0ustar00ripleystaff‹ ]Í ‚@€G±BÔ¡câoxðà!"?m‘uǾiØ£µî_ï¾ Ë4ìÐboot/data/ducks.rda0000644000076600000240000000033211110552530013752 0ustar00ripleystaff‹ ]K ‚@€geÃTˆ <„D‡I{ß<ô :yÝÔôD³þ}´ÊŒä.¸ß7Ã̸³ÛF™ 3—Ê5y1[& •ñ%Ð{Uðޤ:PpŒœ ûJì!»È’úÝf}=¿ù!r„´•¹DG©§ÿ¹Í~u¹»¸¥r9ÞÅåʤþ¼7qL14öéI¼Ï"SšìññþXØÁ|’€dF2'Y,IV$k’ ŠæOkóÕâ«Èé ËÌD¼„wÈäãdô-¿¥ƒ êboot/data/fir.rda0000644000076600000240000000054011110552530013422 0ustar00ripleystaff‹ Í“ÍNÂ@ǧHi¢!‘£Ï@èìð$ŸÀצJb‚4)_Û'0níÎ?t I6æ7³¿-,/Ï[N·)…Lˆ‚ØbÚ€hfíÞk¢ñ}ƒvèÎf&÷<}·yãêMâòôJMݬëÉëûuo}âeo\÷½–u>êÙåóùû÷Þ×[‡s'Ý~ÏŸ\>ß÷xü‘ø–Úß%áÀˆF<0Fâó>ᙣ¿±1¶Ñ|[ÍMÿÿ¼Î_jt(>ÞŽ¶9mÆͲú<œ\ÕÕ—bYí½õ;:?߃ÝÜ S@Ç(ˆB®°TX)¬„Ù”d@ÊAKÐ ´ÁÁp0 ÃÁp0 ÃÁp0‡ÃÀaà0p8 ‡À!p‡À!p‡À!pä ÿ.•ûâ¨÷ pÓÒ×âTÌwµ½"¶úiâœç<5-øJÄ9'Í÷ÓœÇ9k/àÓØüÚ#x*Ôäex¹Ÿ-ûé݇óМO]ëÏ—ã<÷çÎýr?ÊcøÒ¤{Vü¾ßø¸'ÕyŸÎãúðïex\Ã;kWãìb¢Lhçäs=’›ýç×ä?Ñ©«#GS‚Áœ`A°$Xùü«‹EÞÏçn.Ñ퓜º8³}¯6_H´u‹ÜéËûý’6/Üÿ;þÍýîÚ{µðuû÷Õ›ñ *eÕ˜Ý{Úîz±@"0˜ ¤J€2Çr½ªt#ÏV 6zµßÕ†Nnù‡~/¥Ü—n¥¯)ë·óñeOXï>G—}C[î§ Ä  ô,¤Æ ”€& )(Á¡àPp(8†ƒá`8†ƒá`8†#ƒ#‹»!]};£Öz¯G›ÚŒ«?~©°Qboot/data/gravity.rda0000644000076600000240000000100011110552530014317 0ustar00ripleystaff‹ ­Ó]kAÆñÙmšÚ€"èç(f^ãÝ ‚ˆš7õj©i)D ›Pñ.­Ÿ¬:Mæü镱09ÏÌžÙßdÓ½žéÞ¬§”*Uq¬Tщ±SÆB©çqA]4õõåêgLÏ6mªó$Öa5V›¿j²Nõf[ߥõWiýMšOÓõa6?Mu–ú?¥ë_ªv•õIš²û}HóÏUûŽwO±Ø1Ê=ÆÁ_ŒÎŽq¸ct÷Gÿ0í?Z¯~w1¿ž/–qõiºz·Zô%h F‚•à$x A SÏõR"uuÏë³ÕUÓmÞþ½þ6—öRnz!û–óærsùþžãæêÇÉý}Ãÿø]6áe eÿ©OÒ$C²$Gò¤@04†ÆÐCch ¡14†Æ0Ã` ƒa0 †Á0ÃbX ‹a1,†Å°ÃbX ‡á0†ÃpÃa8 ‡á0<†ÇðÃcx á1<†Ç#`Œ€0FÀc€1èoß ƒ_Ù{ÙûZ¯ê“ó&¾KÛwSÝþ¥”¾Cboot/data/hirose.rda0000644000076600000240000000076211110552530014141 0ustar00ripleystaff‹ Ý”1H[AÇïÅhɤ4DŒup(Tæî’¨zƒ:*HÕê#&(D£I¨¨‹‹³8¦›ˆ«K‡ºT' …,]ì H‡•Þyw$è ^þ¿ûîÞ÷{\ob,KݬK 'Dˆ” ȇˆ,Žù…r©’'äEXÎÚäj§Ì!7£eÙóDÙßâôZ“8ço?týûŽÉ¯›7yÕóÃ]“û&möÝ™ïûôúä…ͽ37µ"&/Ó‘õ†È ×äø,2#©e¹"2ÿT¤Ä‡0S9½¦÷}ü«sÆÔg5Äì•™›ô2®j(ØÏúøùÖkÁ=iKˆø¦~ŽxCû# 6D²½WmÉ_ºßèÛ-Yø-R_t¿6frêÍŸ£í)2ÙiÎñUM?WøDSÑu®ïïV]{ë"ªï·çüNyhÞ7Ûï¹fÓW¥}É_ÌWdñ%ÑU ~*«–« ‹y˹üR¥©A¨\Zõn70›˜jYàâ’†, ÄA11ÅA P4‚ƒÂAá pP8( …ƒÂAá`p08 ƒƒÁÁà`p088‡ƒóæ"Wô+ö¿tÌ6wίú^¡ì«W\«ë?‡o“±boot/data/islay.rda0000644000076600000240000000034611110552530013767 0ustar00ripleystaff‹ ]K Â0†'¥¢v!¢ÂU±­ "]ˆ WÒÝIЊ‚Ï6*î<’Gð(žDŒÎ$HÉ÷eæ'dâQâ;‰°2³¥Ú–<@C °É·üP¬cÑ®HÖ¢&üVt©HñDž§ÈìR>$NîÈåbLýq!çÄü…,QŒï˜ŸÞó]šËb•>ý+Šu*¸‘,g‡«ûŸ®QšyJ|%’¶’Ž’®’ž’PIŸÄòZÚÿß>ʳëÎÜ9÷œO]f9Ïh6ÏL$é‹ôLE"=1ùQ¬OþÒ‰ìÈ"£MË7/ßá&[‘O†¡¢îw›y]¢2¨Åþ½F¹¨¾T±ÄÜ¥ ÊF›¯4;çŠù%ó-Ìbñ[‘ʻwðcMi„Iªãr®-¹ªvzo×äc¥ÔÁæ$ŒbEVÜå P‚§—pƒïßoû·o“ ûá›LíS Ú}Ro (Uê~ŸWoå×ÜäN§A¹iäÑë'/€êwCs¨ñ>jŠ×W2zvÆ'–êS_;$Ä¿WR¿ìŒë¢ß$Ú;cÍÍ»]/]jÄ\«úȈô³X°nþRé½åXž~§îNO)Vœ(>õëå>¬ªîž™–€Õ¡o«ŸXÊò_Ž|zJÃ¥ì»4×î=V˜'¯ðˆX ÅÖN/ vŒ¹ó„/–ý׃jr–¦c˜>’ŵO»ÈŠB½¦}ÔT°"óÎí·¾EuÜå[/F`MúÖÓ™³_³b«3Ï̲Ác¸•²¶>E=ôYÌZÔ>fýÍ;³VSNÖÖ­ÍZùõ5m8v˜µØbV0t†+Zù4'}+žÑ¢hîùÓ˜÷üé»ß˜Ø´×œòv™˜:ôŒÉøv˜=bÛ¯Ÿ?*nn}ƒß°ú˜á¿¦h×u—[Vi¢alúš&‡ÏŽ…ãó•a gÈ·ÜEP±ÿØa§ePÓ­û µÿ ¨ö9çêYãjÝ•ÌÊP^è»éXhÜͺ{È\±ƒs¡È\ÞíÌx \ÇIúóá-»‚¤ßüY¶n²¶SçCWƒúm_ÃÀ›PøMÔC±¡ 
(\ù9†š¡WŽÜÌ·eL®Çíþ†±î3Q¯j¦!cè¡¿¬÷ÙCŒ¸c§œcþC@Ù»Û=Ç¡¿1|À»Ð-†Bˆ +;e£nù‹bÃ6'{@‰LM*ûäj*ÜÄ× )x`ÖŒÅ6 ô¬¹œ3ð5H͹ÆÇ¼gIƒ_Àú93“O®ë‡á}ªœ,Úƒ$~Rþô›3A•þtWCùRù†¯#…«£—ŽË~ o<ž®2žŒá'ãÍjúa7mã±èX™}éST¶Ÿ9·oC kýÞ»ëCxlö`v^®™¹éYÆaÈzgõ Pö ºêGš´é‹o7‡ï¯Ðǧ™=¿tD髜¶¬CeÒ¶)×C²†_9 Kc›ôuéšx ks/úÖŸåa¬~9YFR+n=õóŒí&¨HHÿ#r ÖtuúK¹–5f?|ZÓÈêS|¨Ü±ùiq,õØêYØ«~J×dífÂ>îi? †Õ³“ '%@EÏÓ¯VNÚM¼c¬ÃŸxH˜–íSþ"$*z_mhüjé|©:÷”6½8@‡ŸûGåÞï´ý‰Éae!oïþÆøO¯_$&êãÞaÏ&‡½…ÐþzÓ}ç†bð;Aé¤êë(÷pâ:¥ƒçýjÿ ¥Ü¸ú†ªØA#ö…`õª_îç=í°ÊWâmû«ésVW-<Ø¿ùDÌá¶Ç0‹…z‚ll↮xÔd÷™hÝþ*è\ÜÙSü¡ˆö³2¢û´º¼fPÏ$«ÃZ3âÇAk’2^ƒŽ77 ÍkIöõ†ä~)KÚÿ…—P>Î# 2~ðí{ý§vA~êtäÂF±*[ÏíK òÁÑ]ŒÛƒ²MƒLGÿèòÕ^¡WŽa¬€cX>æIÜð±- ¤cv/k6ƒ\ÀPÆj =UÞ_TÙŽ–P?`u•-÷¡–Ö¿î aÁ÷¡îé㯇è1ú.^êÃw‹d¿ŸŒ­7Œáðsæs†j¡~Œ(õv?çs=ãfé‘+½c ‚<“@Më#áÛÅr§8äŽý÷2Æ4ø¸ä”pãtä$ÄŽ¼ziL¦;u JTÂý qü÷UÊsâRÌ·ÞsÃÈ:³[O8ÒÉšÌòìÃW.3YKÏCz¿–^e-‹_PôîÎÚÏåy›m]M`Ì&”m±À­Ú0޵ïÏ߇m¹}>©DÛdùÁÆ/]YëÖFMëÕ«þ5F½ é’š av«%ç» €R6BÎ@Ù¡^aCüкl©³…gµ"2Ð:üǸ½.¦1žøáy Ü.ØíR éVö„x‚"fêæM× @9ïóÙ™^(',˜°Sø²¹‚jéï3ÒóAqxæ¼·Ýj@VÖ;Õcö^svnÜ.È»þsßhw€ßý ÒYXÙãïöE#e˜'è¼vÂrÁB¿¨üª~ìwï&jNö7­¨„…“écrXƒ6¯:/]ˆÅ]mÛY ³œJ˜m ™ Oþ[üfÐ>‡ôŽv]¶ùBï­ž…XJ¯_Ql¸ÏþP" ü‰E½c6äí+†«“œNT˜u‡_DMí½òT‡ÿúÍŸ·Ù!îû,Y:bú_lsÈ2¯«9àƒRB>kœv@‘Äõ p™T=%À KBn?n¿¥†äŸòÌÜŠ± ̹ÀbÉKÌqå72â8æqãÑãÊêÎ tôyŽQð×GXqïå³znÖ8'Ì 5ª÷îN:ãÞžMºûn%æ¹ì~óïXÈpêò lçßê€QIÝ :èêŸ×¬ã=•º?üðæ€è-àŽ·MõVªwá·å¸1\ß§míþ,§õId GõY1¾”5\Û£(†„‚À^â*!ñëe—ÕŸ‚ ‡YG]ýsç´ýïÛù*ŒEÏX#³g?|&HYý3‰¼„ÊDŸk1‰ˆÝ­.ʰÐ[³üžå8Ý|åž5Rf«<âB|U“V¯Þô¿þš9>ŠóP®œ~áe0£wZg=ó1c(à'cÒoö"-ñÒ‹GßrƒÊ`iÁ”Iºþ/²=^ÄnnÈ"L/=OéÞ-»ÔñïÛ¿ƒ¶·uý¿ËyÝŠß_Oª&“‹ª£êG(ôÉè`¦,[õsÔ\ÌYÏ+wÃgDL"ÅX2œâ1G1R܇t® Æñ_›Ž Š ©cEˆëð¬eM‡Þøü{üd¶ùγæù~BÍ8æú³úó>®é¶w€æ÷WÞß«Ó/,¤þ¥Œú Õ›…ž`™÷™c­–%²¢mö}åO\uú¹ªëÎö²S¶¬¹€ß¬EÃZ«ÞMž`㦂)ž`zß?¯þ:à­.lå_¦B*'‹•CJ—+\¥tõ/ö-]ØgÙ‚ÎFŠÛ˜JžjÒÙMN¦'òi.ÄÝ[Þ}¸¸/äp·µQã+†'.|µš ºÄr²¶"þÜÜ3¤ç‘a:ü¯>ÿy÷J·MŒI©í·¸SƯ|ñ?dô‹îqBd”/¤ I¡‰ñˆÎ ï™Ú* eK[—OŠ™‹-¶¢ú~PÖ»a×P.+ÝJ¨‹ Ê5ï'9ù@.ÅñZŸ}¡ v¯uü®B–y] eý™áUà õzoòŠ]p•sK&cÆœ Üù„±N`ŽÁÖ_?vLGÕ6Ÿc)톲¢± “š”mÄì¿×Çÿ}Ž5‚ÿÀ¯†=?zZ…i‚-Ãò%Ço1õ¡™ØXL=m0x>†Rܤó‹_«>‘^[S=hq”Óý¥úeÄ}®ÿ˜ÏPÞ)ãTÄœmáÏ šwED<‘†ËÙN§LÈò8ºýZwkÐò`UïEMë- Rз˜zpq _QEù%‰>/]¿Nʨ>WË& é#(¹ïõF§ß*¨>©±¼ýÙö¸4ÙË&\ie”G´ý#ô;–M9N ‡ 6:Õçq+_=zZ™0ËIÄBÞεÀ<ª?UT•ûËÜÆHÃ`ÌøÁ—ߌžŒ)^'ï·<óëó?ÿx—¨¢Å• ð”3ÅIdg —òCä çÆîÿyCQƒÃèúGq<Ã¹éœ Ãë+‚^ÿÀ/dšG±ý ´%7åH÷ópñ®ƒ­°"áv™¾²å¹}ç, Æ"·¤÷/Z4ÇѨ’öWC æ<¯Su××üÎ ú™1¼%_°õt&£/ã¶q#üñMZ_[Ëè ÏÏèe'þÛ×´orv£U¦sS±íTP¿P#ð5T¹­Xñ3x¨¨þ¯¼eJLK¨hɤN¿I‡®nþÇ2$~Ý9a )œëÙU‚…tßµùRÑùèÊOŽYõocx³¬ÕøsKÏãÜà+‡Æ³DPBH(ÿ-£y…Lðã Åk­¿Sú‡Õ æùëh%Ï Ì'.ï}@;(§üž¹´Qã¼èÛ(æ”´N¿E¿Ÿ<ÍËZ·þzbžìV3ëÉ]Ža _{¡VÈŸ´þ?’ê©‚¦a¶“a)ͯ$áE~{Ÿã_Ë+ãB¿ì„µ¼=Ý…ô~Ø@õmãZ²×wX èmÈŠÜZõ.{9nãao?~5á뎅\{€/g˜rÊTÛÿø\¨/¦?—u²n3–ü‡õ)réÈ£¬Åè]D"`îŽ\ñÆS´þséÿ׿Ÿ…”cë4–_;â>N­½>$†Œˆ;ÕE¦«”úðlšÏ”ÙRÔâþ&PŒ è5¨doô—õ>RŽ^ECž€Ÿ˜²r‰;üÉÉñíåZþÒåùÂ~c‰ÀÓ*äHÏ¥ [@w¶å™¶£’ºcÐ…—÷|[ÞÁ’´ú¡Uyñ(ߎ«*§r X̹IÃó¶á̵¦ÍC¶ÄnFùßëXÊ¥ŽæÃYñÜäÐW`þËY÷ÇðF<Ï+dÙ‹ÇŽ¾qw.üdnCß.ÍtùR7®m/@n€ÙÚVÅ:]³ôÌ„þ̨ð‹:” ¹°4ÿpª}VrªbçPÖT›‡”¾ ¢Ë .ÙÃìØµ'E®yºü3Á^X_n$ÈuÇÚŒœE÷¢Ä¬Y©f'QŠl}µ€Âžµpäõ06†½uRÆø¡ZÈXk'‚Zs±6œ]q`hnѸaÀ?¹}—`ƒ7c¥ÿ ”úÕpf—_Ó„6 rcH뚺9Ü slÍšz ûŠ5÷¨>dÅÝøU‹Â>¿sÁr®G_ \ÀKT.8k »e†rŠoê'µ=ŸôpG•€KXo[ ÷2ëðM¾'`|âv;l¦aÖíVSþܼ‹%Þ¶ÏÓz@Õ5Õ⨤=ß«´úkLxÊÖÅUêyîÃÆ«&­Ù³i®Ž5ó¹XÌUùÄu÷BŒæb𪻠^Â3 ¨i>¯å—ÄÜËÃ7™¶š·A<Í VðõÖæ«¨x¥œ–Ž*{¢Š–(QÂ_ËüA¥o¨t­‡ *ˆ('Öæ·H¯§Õ'¨Í%žMI§Ê³r~^qߌˆæ“Ñ… K̃Ôpkì¿~>ƒðKsÓÙd5‘*yeûBÊ6ܺü¼„î—Úå(ÇTX!ä«Z|F¹à/°QÈŸ°¸ÅlbÈ’à¬ùÏÂ6¥Pôz‰¢>ó”riÉxòv´5q)d§U:}(ü¹V¿1¢?‹ö¦ú¿Ä°Åo8×¢Šž_ ú$ "ÿ/_"*ƒ TÖr²ß*%¿&YA­€“Œ¾3f]·eÌNðxÉváè§?£ïí3XRm¾õÂþ2¢÷OLuÞ jz}Årîg (¨^¢9*T ù #¦×7¡ú¤R&fçwƒ*’tUërP} ‡ØØÿà'‡*Ã/Ç•“ö¸ÛÝ…í¼mØ Ÿßó öÏj¶óD”ö|‡ÑëÇ µe¬Ž¹´ûc×úó»iMã@Mǰ²‡ŒXÈ_½ß«?Ý|ßjecC<Þ@õWùAUüN/‡"nœLÅK <Ȉºø ¬˜Ô”1Îtù›¥ ï½j>x†:aΠ–ò{-­¿†ˆ6âuú¼ÊÖîLZ½#Ô øÆn6P6cÛ'ý·Ìz$cS/à_Ån<Ü!î•P/9Ícµë×ç/Œžà½ÙDFâF$äÐx–÷…ŒÈÖz棫 Ð èÒ9¢úepÄú¾ò˘€Ê]¤l!ï°è^”Aij@ˆêgÈ9{ÈãìëãC@ùäBÞ*ޏ3AÙݺ q’P/œ¯1ƾ„†Fù1bÁ0úÛù€ʸX(áˆ.ßKØWÌ%Ì1øÐ'ßðõ-èeÈgnߟŸ …B¾¥´Ï4Å|е4¿•S}UGëóˆ·e[uëÿ¹óeØá˜>ºõçq,èu 
2ë—»ä#åT}Ùi®NñÆrªsx¸œƒ”ïþDñ.Va,Ç(¬1ÍõË6ü&€…üyÖ<ßí¢9‰5>³zÂù kôEÿ•õ𻍠}(óâ‚ø‡Zý†)ªaÛ–]ÚˆAÿÃ%þŸõxÕBA ¸sM/…T)G åÛ¼ÖÇ6 ÙiB ”—; X&ð­Î©¿ó?ýLñY^½6ªÄ}?Èijõ§šÎ£’î—6ÖêsªOtù¿4xBªa ÔÞ3®Uë ¿yZîŠÅ¼]¦;?-¡ëW õaE°6®¼QbÍh~RIÏ_²}q$¾¦ºYêbK Ùyñµ*/wne’. &5…?e‰!6Ù!}Þ¯öOoÉ!Ë—' y TœÞáÙ=; ª*x#*à:Ò~n0œ~¤| 逅–ý×÷²7Ä×·D!e°Ó:ú<‹ÄZšoërf!ÿÑÖ çPEû³Šž‡Ð~{MÀ¼Éü@H'›8íR4Qý‰,äð^Ð@sT çÍPCçÜpôç¼Q-+#î¸Î–1¶hmV£/ønÐPª¡ç;2âfnîz~ÂëoLεP&œO`ÙÝ;ëR÷ÃôQ}B[?Óå“/(.§i¦>žûÁ J¹ÓÏÖƒ@.\_ë¿0:ÈQur¶#ÜÎ ‰îÕëXÊ©¦YEXFqò5·+;¡z¥ÈNóú„ߪÇè»~êXÙœù¿(& U?ôªH Bß ‰½$‘ 3±’7>Œ™p¾Ìˆ…¾bL7¨ ®Ïê º ¿p.i™ƒrÁ'•÷ówk|Ô$Ü1D‰´û™KóL—î¶²„r!çÕž_ zå#Úÿõ4Ÿçdã,<Ä©ß×? MÕñ—6ߣ9'¨ýŠÆ–39Œ@z_2<b•ð9+:Ìmà Ö¤-CŽb ÎhÕ¡´®kN–è³!÷KÇögÍ…sXÖ\à?l¤¿_)ì#–|çº Ñy¬­øû~»ÃÀÝm+yËŠû‘ ø â^L`§òzÈtÑ×ÿ÷Õ—»¹‘7m„7ÅÜ›úîäQ÷çÿ°Dð S"boot/data/melanoma.rda0000644000076600000240000000474011110552530014441 0ustar00ripleystaff‹ Í›ol•õÇŸ–"ÐÒ¬ºa—-K‰Žô>ÿ'»X‘)T^ðb7X•IÛº1ö¯É¦S·(™ÆÒRKµu””)&5é’×,¼€…ÍÆ‹ÅDË9Mfp’ÝmßsΞ3oîóÜ–çßßßó9çüÎó»¥ÚÛ¶ÛõÛë-˪µj–XVM]©YW[ú£Æ²šJÖâÎŽÝ…={; –µ¨±Ô_Tš_VÒ³ñ׬Ù'þtM¯ÑíE£4Ý…ñïÆF;­¤>¼Ïhô{­F÷cÿa£?Þlô'çŒö>nôgBÇŒþüô¼ÑÇ`÷ðãñíÐÓFŸ€'`÷ÉÅPôÙ…ÿO…Pì{ê¤ÑƒmÐAáÇÁ3FÝ`ôÄ÷ÌAè_> ÿžÛÝm´¼>ìëƒý>ä§~ôM=t=¼CÛ ßÂî¡¢Ñþ&(Öõc¾ÜþQ¥´yê?ŸÔ¥Päkvp^J?2zy< îaäïðûI¬ƒ"¾AÔÇ ò08©öŸ· 8盡' ƒ~lt뇖C](êhèWPœËÎeö†PgGÿ‘Pä÷È0~Aþ†áßðµÐ/BáǰÝ…½á=PÔýð~h/ù&Îß¾°z¿Rø÷ü{q´ ú(Þ¿ÿ½¨ô£#¨‡Ø# ¯Bñ^Œ`ßh=u:мޢÞGqn/ÝŹ¿Ž¢>Ž"Ž£EèôC£cË 7Cq®cÈÃê ùƒŸcï@áï1Äy vŽ Bß4:nAQÇãˆkç7ŽóGžÇ‘§ñg¡àÇþã×(Žuü1£'pN'ÞJêo{•âýz¹CŸ4ú ü~õrrô:(êóä^èŸþîëПBñÞ¼z çóšù@‰_ÿ½ÑçÍbpI­d?ÿÁg땞¯V¯”ݹú[ŸÝת÷—Û—µ_­–³“uüŠ©®Û”ú?÷Sqÿ‹Ÿñ¼«åÎ׺«m}æü”¹ïRïŸ/?晵*}Þ”»/éI;_éIk7«iùåü©Oµqg/­ÝJ¼¹®›ëyVZ_-?í¹TâÎW~³žß\ë-«•Æ+ñ«}ÒÆ—Õ^Ö¼d½·æëËêWZNµ~eµ›5¾ô~âï7[M/Þmë5º¥ht#ÆonÞ‰þZþFô7[Éñ»0~úíÐõª/¸w xúkaç6ÅÝŒ}w÷&ãi…ÞFëâd\ß¶’ñ½vØshŸ²»^ío§¸ãdüߤù8¹Ÿò±ã”×m˜§s!{wbýÍß„þ:hãk•:O´ôŽÞd.qÿ:Šëbô·“ûWÑ:â“}+¹îVôÛÈô7XIÞÌ·¡Ïu'íRR‡òJ~Òù’=Ê7ýþ˜òµÆJî§}wCuPPÞu]ÝJþ(iùKvÉʽdâŽiô.µþöÞdœT”?ª?Šïv(å‡ÎŸò@õMy¦ó¥÷Šü">ùEvè|)¿Õ-Ù U¼[U|T”'z/ˆ¿šü·’~ÒþØJÆ©>í£ó$?èÞ!qóªOùÔï]ƒ•œ§|Óû¦ýߤÖóû ]£Öµ@=µžâ¢|Ñ~Ê /Õ#)}¿â“XO÷Ý‹ä/Õ Ùçz2ûøófèœ*B/¨þ)(~_ÌëÏ«u4?¥ÖÑþÓjþŒRí‡îã÷¸CÓªOv'Êø;©Æ‹jŸ¶3©tBÙ£uÓj\Ï“ÿSj½Þ¯Uû7­ì•Û—u}Z=“q¼R|iõ\JÕõvµkzJ뺙Pý´qêz̪ºÞ+©~ßÊéé*uªL_ëd…ùJ:‘q}±ŒNV˜Ÿë>}?Í—N¤ÔS)õ$Ô|ÈŸ7_ª_½oä/õù gfž³ùK3½ÕÇ‹œÙ'ni¹ôæÓ;VÆ;fqÓÌô¾ÑxI©ÓréñJè—1Þ`úù©o¯¹é‡ñ²™]-òþЕ+&¶¼÷è ñuà,Ÿucuüù÷-ͼ¯€Þøš™¯Y¾b‚ìÇ I¿òÓà}¥i¶×ÁŸ¥_:»½=ÿo܈þep¾»ß0㼟ü§ý”ŸìSóqî¡ÙñüÅdž¸ÿ9ð›1þ)òOù—ÖÓ8ùÉv,œâο‹ù"~œï»ý“É#¯¿¨xÈÛÁùðü[ØOy«5ëòï<={œ_ŠÿüÇð›â×ùúóU¤tÞtþÊÿx±:wò—ü"î»ÉzãýT×T à\€_´Žü½œ¬>ò“Þä‹ý#èýCÞù|8_¦ÞÙoZO~\kòñ¾Ñ?ó ðûDÜFu´¿I½ÿªSÊŸÿ¿°¾±Ô9ðö>ÏPºè¼.$óCv¸Î4wJÕæÙ.å ù‰—ãÜô{¤ß§ë1O\íקɺÖõÍñÑz}î 1Oõz9é'ÇM}U÷|¿}Z÷8ËÕ{@óxÏâh³¹wÙ_Ü#|δOÕß74N÷ÝÛ*\o¨#ýýM¹'í÷ÿo-çoÖ¸*ÍWû½LZÿæú=ƒ¶3×g®¿o/gçJ}ÿ‘Õ¯´vÒîËêç|ÿÞ¿Ò¾¬q”ëWZ—õý˜kÝÏW½§µ[í¹g=ï´Ï|ç7­ô~&þ½ôÂ=…ÎŽîÒ`ƒeþ±ôÌ`]Ϯδ¯éî)ô<ÒÞ‚îŽýÔ,<@kê~ÐQèB{Iσ»v>´§£›¶,|d÷ÎŽ.…]Òµ÷û«þ}«krÔ°©áPÃ¥†G Ÿ5BjDhÔæZ¹•ã–Í-‡[.·3|føÌð™á3Ãg†Ï Ÿ>3|f̘0#`FÀŒ€3f̘2#dFÈŒ!3Bf„Ì™2#dFÄŒˆ3"fD̈˜1#bFÄŒˆ r­­ÒÌIÓ–¦#MWšž4}iÒ ¥)´œÐrBË -'´œÐrBË -'´œÐrB³…f Íš-4[h¶Ðl¡ÙB³…f Íš#4GhŽÐ¡9Bs„æÍš#4Wh®Ð\¡¹Bs…æ Íš+4Wh®Ð<¡yBó„æ Íš'4OhžÐ<¡yBó…æ Íš/4_h¾Ð|¡ùBó…æ -Z ´@hСB „-Z ´Ph¡ÐB¡…B … -Z(´Ph¡Ð"¡EB‹„ -Z$´Hh‘Ð"¡É]bË]bË]bË]bË]bË]bÏÞ%‰}vî.tÓÏ5XW_¡§°êþ®ÂÌ@ÖÌøäߌ…Øu6boot/data/motor.rda0000644000076600000240000000155711110552530014013 0ustar00ripleystaff‹ í–ËNQ†{¸¨5&Hâe!F LŸ¾ÈôÀÌ € ñ†Ldf¢qg| } À^¹Ò×píÂ…+cOŸ¿þ#ËI„PUÿ©úªºûtÓÕM5¸9hYVU°¬B_êöõ¤ –u1¬þƒÃöaÓ²NuVÒŸó©ÝŠ{ìˆû>wŽ/ñÀ»Îñ>>‹ø‚•ñ%ÄWö²#¾Šºk°#Я£þôQÔb} ú8ôqp'°>‰õIp¦O!žFþ ôèEðŠÐ‹àeu6b±Bì ÎAÝ…îB÷{ƒã£€õë!ôzn=B^½½„üÖg¡ÏBŸEþâ9äÍA¿…xëeÔ—LjcäÅ Õqy¬WÀ­ ¯‚ó^¸¬õEð‘·ˆ¼*ô*xUÔÕ×À¯½Ñyuä×±^‡¾´ªë–0ßmÔݾŒºeô_AÝ æXÕ}eß[8>~×zÒ¯ëD—8oóùÿÒ“á?ÇëC}"Wçƒ_Õ×;ñôù%µVv~Œ¥ß]Ä뉶 °›Ÿ²ë–<Ññâ{úz&OÑGô'¯5oõ?Â’'v[¿0È¿}õk¨ÛÒs—ÐOæ~ˆü5̃ç:©çÞG7¡{²?plì Ù—ºo<‚ý´¢ó$Ÿï ð˸R'ýãuy0®'9ù} ÷÷1žÏ¥ÜçxJÇRG}LÞ—è'×iâè>á:8ùý$ë²ïËßt]·mlØcaÏt×ò;瞇nÛ·?ô|'öÿ¶¸«6ÿñÿ¢q°ÛJÅοÏÎÇ&¶Ÿk1 ;;»ûH?Õj7í¢ÂËm yøjúwâ 
[%Ž#Ž+Ž'Ž/N N(N§Ç.Ò³é9ô\z=Ÿ^@/¤G²"Y‘¬=V(V8ìë°›Ãó\R\Îç²Â%ÙcžÇ<Ó{${$ûœÙçÌ>)>çóÉóÙ×'Ù'Ùç,>{$$䤤¤„œ/$%$%ä,!)!)!'ˆH‰H‰H‰x–§ŠÈ‹È‹8U$ä^»X4®m\Ǹ®q}ã†tmC° ÁVÆ50ÛÀlã¶L]3™2\e¸Êp•á*ÃU†« W™y•iᘎáf»úÈ»bg¿Ñ’'»€¼Ágéû`z¯™>ôiô³óû F*"Äsboot/data/neuro.rda0000644000076600000240000001224511110552530013777 0ustar00ripleystaff‹ íœmˆWv†Û–W»6ÞÄ‹6Š•lyeËöF¶%¾?¬cÏH²¤‘ÔÓÝóÑ=]ÕÕÝÕU=KLC Á?ƒÃþY,øG`!°° ƒ! ƒÙ€Á? ƒ~LLI÷yk¸­ñìH²F&X°[®ª{Ï=ç=ïyï­ê[S›ÙùÈÌ#…BáÁ g©¿Ë—­ÿ,ÙpwëïÔnüÇ7¾¾ÍÒOoÚÍú¿þV×í=ËoÿÎê/luób2µ*½ÏâÕéoÖõ¶x’õÝ|“¥o­ÌÓAù¦Ý¯Ó›l°ü|—õþÉÙïŽ'‹›Ý1ùüæqÉ÷¼—µÞuëƒþoÜysëßÙèá”u¹¯õD‚ý9ÎÓGÝz£{•#~Doz~fýg]ûûã77’Ÿu~û^•öãÎn°ÿ¶¹óèCçGÞÅ~ç=w¿qÁÝoÑ?æzDüƒ+äc<œžesý‚sàéæ¡¬õ‘k×ùOÆ!Þ–[OgóÄÓeœä²;*Ž6yMŠÎ~^v?Wpl3~ŠŸé¿pî¯ã¤¿YLûǦü^Gÿ÷‰ûýí®]„ ¯—Ï+—Ç“ ?pNß`îYò ùõVQÿý¸X7d1u ø•߸jþNágÄM]k½Ò"~ñW<—Φð®›èbG:®õ©×¦ô˜y´F}vÁW|©¡G1uʺg Ÿ Cõ®}Í×x“¯=U7Ñ‹zPÇpÔÏ[þ§Mï~Þ®íû³´þçyCózûtÞDð­>}拌È{È|?ŸîȲþ,O»ñ·Ég—<öàeÝ ñ«!Ü™Gš¿c~U=‰ÿÔ5ë¢Û}ŽËšÔþ =gd-ô°Ë¼ âSÜ´nŸßê÷ohÞÖ|EÝÔ˜÷ð5Çg=ó$q¥âí¥!º¢·Â'b¾ë¼A}ÀóÞ>jþik÷€¼ÌS§mpI©ß>úÖW<à0þ"q°>ÊúØæuB^cì†ð??ѧü$Î[tWÏCš¿úÔK‚u«¾Ž´¯ ß´Þîsa?:DÞStGó~,\É ÏƒYíæCÆhß@_cx®õÛœx‡½X:?bìÕ°§uY„ݶžŸàC[zOêð'Ž:Ø#Ÿ!þHŸÅ§†tTz ï#ø1 ?gÀ{Zçð? ̯Ï0n¿wà×qâÔº`Ç—À¥Æ¼ðí«Äu§¯o&áI^¾€½mðïeü•)Çcðp7y8‚ <.‘ÿòQÁï³àßú×X7÷瓌g´;…þïÔ}ò8Âø§ˆs”뻉çgÄsV~ÁÛ—°{‚öø2nã\¯aG~Ï€W…ó íæ®à?ü›¢ŽŽã¯áßħ®ß´®cg~•.¸ã~ìUÈÛœúa¿’¸þ5ò’ã,öß¼¬ƒgëi\øXE?¦°³ÿd¯þ“Œ_EFÀ÷0ýJø}„x§ÿç ð\y{Q|Ò“þNÕÇ)øw>=ofüž·¿Â?åI¼z”øá×ÎW•þâÕ˲Ãõqâ=?%ây;;ÅoêrÜ^‚‡Ï`oœøÕ»Æqú¼<+ŒS&®2¼)‚_ ÜfÈë.é þîã|~T…öGL~1ÎOéwŠû‡ˆs½ØCþOÓÏè7¿ñÐn^í‡3Ò+ñ¼ÆÈ÷ãL*~ò9Mÿ)Æ{ùŠÏ›ÚWÀý øTËîüi®#.ÕÉLÕ×ϽÄuÎnòu¦Æù_r^ÅïëœûþÎr¿?&ÐÍip™uþ]ú-yù1ãÎp¿È±„ýIt¬~;ˆç(ífБ2<9ÅøÏa‡òOü»Å{p*ç.â8¬vâí´>?JÞöÒîï±7Fžªà{šëÇÁWÏg?Nà³™ómŒ·kŒ3+ý@§'.ûøüˆöÓWÝõç°³.uTÆÿ:{–~ÒŸ-ôÓ<:Î3Œ#^ÎàÏ^ðª‰ò<§±w2N™ñ¥«š¯jô«Q“èÌÏÁç(þ>‡½ixUsð^œ$Ž*vJëÝý€÷;¯àO\6c÷)â™#%ê¼HüOâêdšñ+ØŸ…šÏ•oáT\ïëĤì 7êòy×þÒWàø Çiômœ~ŠýIâØGÜMøPÃ&ñâ÷$ýÄÏIx^Gåã<8éì\ú?4ÏÌop÷ŸàúØU]†œ7¹¯úÑsTÈúò…Ë~}L€ƒÖeâ>ƒ^j^œ?ÍEâÙ ¿f肼4àoˆZÁë4í â¬bÿøì——9ŽÑï~"Î*÷ϯ)üÁm†qêÄ¡z?ÿšØÑºc+þVð¿t?ðó4~î"þ2q•?òët‡pŸãøWnΟK_2Þ¨ôà¢Ã1x¬õ¡ž^!žqðx‰~'¥Â¿æÉ«ÖƒZG—Ï‘Wüz?žåþ ԕ☃ïÓø'ýš¢î¤«5ì–ñ£Š¿Û±ðþhVzƒ»Áù8#/ú?Íuý® ug¿´®¬ÿÌ×o¿ø >ÒËŽÆøsîüÒcw–ü/Ðïï¤3ô›eœ¹œ}Íë/2Ï-—®b¯¢<`w'í¶ç<)q^eœíœrù=ñ6íØãÈÎAáÈõ…/qòa¬£r'hwÿ„[ÉñÜvÊ.Ç~Gvo¿ðsue»ðSüÓyå=ìa¼÷)®w}ÿă<à#^ìÀzn%§—y¼EîÏë öždœªò y.·¿V~çíN’¿‰kÄ n¬gìyÚ=‡ÿ$ÎX‡åþ‹ÿ<¯ZÉÈ»êU8‹Gª‹ÄÇ©(^áâ.ã'ÏÅy\Â}‚ó"þ Ï—v¬r¿u¿ oåÇ)Ý'.åOq¨¿ôC÷Y÷ëã9ÎJâãó|j‡•/ù-?¯ù<ž‡ÓŠó"|_éS‰¼¨îKâµê€ñxžÎëlþé¾på½™mÎçá!íxþ·ç¥‹C¼¯ŒûþìVÞ†ôrû‘÷~ÆzÅö˸ñ¾ÌæàÏfðyŠëÊévÄÿŸ«?82/æ~‘^Å£:¨KÄ_®«};ʇê˜÷¹ŠÇy>…3z>½q6£Çà¥ùm¿fõÇSœâ/í·ᨺeÞ¶ ýÊâ?qŽ_âÜ'žpç㽄±Þ²-ÂEüâxh¯ñ*ølCOÁåqðxZyà¸YuC{éÔ“´vÄÏç!ÕLj?¾ü-bG×bÅ©ùK<ÍõíŸWÅ3éã>£qÉß“·Ì}Ox”çG:Ès½…ðHõrˆvÊÇzü<;âëˆtëôǾN ×CÒ éçóèz>þHüßï®OƒßíyîÎýâýŒm”¾ÁÍëZoH‡Ž«î‰O<'ÞóÚYò¢uTÀ¸Z·(:ç½ñÞÇxd?#^Þ‹[à~'1Þ÷æë å©LÞÅ|>H|©ž”ÞwæüØF¤‡\ï¢ë¼—ÈûWh5/J_‹èϧvw<ò¿ºO?ÕÏÿ¹~¿€?¼—6þªõƒÏOÐß¡ì¤êC<·Óê÷Ã÷¤óø÷ìýÍPñ¾?¯­§'‰÷é‚oWyÿ¶‹ßÒ ®k.”>mâúSÃú£yãœ;*oüîa?Àž|Î8ZgåzŒŸÊ£æ­—öpŸßÅŒ÷‰Æ{ |ýÌ{c[À¿œkÎû`ãýžñû[®¿5å•x¶ã_‘ëÌ÷úØÆ±·‘¸ªÔ¿·¿WX‰þ?T½Jà+ïWsÝÓü+Þ —|†žœe|ñB8ò;Bî'ûµ,âù’}(Ö¿.zÃ~k¸¼¬?±ûÙB·o#·›Ð/¯q6§ƒŸ!ã·ýý%Öp„0ökYÏýoì³6vØIÜþãwn ÿH^È#û-|˜øˆ#—Îk>.Í÷ úý ;ð8ý%ã¾çŽÒöõZž³ïÒØÿl‰›hŒßÙ­ó‘³Ë¾ac‡…nÿ®5‰«ãöïXŸcñâ=àGþ›ë±ëöÁûO­?ýð·‡ìË1öûXÌ|Ð/öY¹üˆ£Î±ÛØŸg쯶yê¨ Þ¼ŸÌó5}ÍÏ¿ÖìWÞ?cì³P¸c¯{ÂËcÞž}?Æ~¶[îó{ª¥øŸ€cƒëì •?{<ÅÊ;uÌ~?c_‚àL^—âxëàÄw$ÖEÇcøÇ{´œÇáG¿.uȾ_ë¼ãç‡ý3Æ÷~9ïÆg†Åð4d\Å=x¾0ŽtCõ؇ø5ÿD;tDï¯Bî×É{žÍƒsô†ß>büH|…×Ý)xÍ¿ ûmâ`ºuQ̯mÕùa_½5é׆çmô˜}5Æ~•|½€[f¿§…Š‹|Øé© êµãö÷Ù8IÏšªKô¯“¹cÞŽzdN®¯âQ犯ÇuŸ÷¶ ¼‚Ë,ãNQ§x#Ý/—ý–^ôí†èY<°ÿÒBò§õí"vØÏiMü–^õÀ3‚wì+³uÕ¿ff®‡MÆa?–±Ûº\»9ÕãÅŸùu$>t¥ÄÓ"±ê‡ù²ßôõ”}韞Sªà<‹½\»Ô[ üØOg³ôoàGq¤]@¾Zª‹/}>Öà ¿ûûál;|?`u긎þÔ©éß¡Xœüš%¿ì'¶–ÞSï‚øC}+ÏšÿûøÉïcÖѼG]ôÈ¿ÖA|ÏcÕ7y`±uð#¾ªG­;üï@}ª¹žÔ‰k4EÒ3î³_Ô²/>‚£pS>ø$×3=÷µßBwÝþVcÿ±O{©®À9€ï|ÿd!uÑ;Ç9|n‘Ç9é*õÜÀ/ö±ZÄz`6ñqá;Œ|¾ã»µ|ýS?óø»@¾S?¬µ_ëÆà;§úÚàåÇšàÃCÍÏ|‘¯ÃËè®êÂåyi}«yAëê¡©qX'ò–5˜#ꢮeÅ¡y‡ûùº [ä£ /;W ópYZ'Ã'¾£±Þvx 
î컵YtGëöUY |ø>ÈÔM•|OÃ7­ÇÙ—amüfÿfÎo½¿ȇ淹óðz`Ÿy¾.^àþ,ãà(]™"OZo¨ÞºŒÓb]Ð…gìÿËç¡Põ)ÝÓz ¿CÆÑíô>TüO»ðˆï:¬‹N-õ:K»>qñ{âRýÁ¯:Òf>‡×|÷guùE>ùýßø¾ÀøÞËêûýº¿?f ƒC—¸#ì'øÓѺ„<µÁuÑÿÄò£ß:ØÏ×!ðFx²¯/Ÿwzè`Þ‡Z·P_5Õ·Ö£Òaøjýƒóäùù~?¼VG­Ïóó[_}ãã¤w÷wlÜ¿¾-GKW÷½÷wÇ5Î ßå~ýýw•7øë–%»[î‰ÙàÜšðÌø;L¶è'o)ójÊ<’ޝJlðæ=Õeãïüy?.Þe¾ýyö–ûñ»+Þ_3Þ÷¿XS=²”u‚ž³½ïõuÑbÿ;t¬ü÷),]–7Æw—3NâçÕ†þþŠ X/òݳ%ï¯ÈG˜³ËwW·Üç;Òáqï¿dÈßþ½ù»C6 O½Ûû;Œ6Èà÷gÛGW—m§÷£wî¿ÿ÷E,YÝß§1¾]ó:\µ>.×­í>ñŸ·¿¡õžuVÖµ{Ž“ÞÛ¥<ßÄëV®Ï¾Þ‡¾=„G°"Æwt6ðŸ¬ÿÉÊë„AÄóþ ý]Cë½w[õªõ-._Ã<_=ßV®ãï’|½­|?o×}gHoýùÝ’;ûûIÆ÷ª«—÷Ÿ‹{¾GÞ÷4O ÍËë«Å<_ª«Ô[OêÓé.yçï’ ý1àu­^zýÒ 7þ^pá?\?®¿þ¿¯nüïÿ¿Ö­KXboot/data/nitrofen.rda0000644000076600000240000000101111110552530014460 0ustar00ripleystaff‹ ­–AkZA…牶ÆE)ÔE.¤„B£MhcJAé"˾j…Ä*tÛ_^:Ò{ñ´S ¼=çÞ™;ß0w?ºñ›Žs®áŠ#çŠf²ÍFú*œë¦„k/¿oVÕÝíÒ¹ç¯RÜJó/’zWó˜}©Y5ëÃ×zu]¯¢/³¶·kú:£:¸ýÄõ}©C¾wà¾nÏ9s|èË'î×ËÔõþ¿ƒçÐóè~]TX_Î,búVâ©®;5=–:­×uoDuþ$£ý WëŽ3y=7ïcw^Ç Úÿ÷|n€sø°¾à|¸ÿ‘Ä9Õ{G߆¢^bí³ÞÛ™¨öû4kÿ0¯ýûë}J¾/uÈãýôvçëÖ—KãÌL¯ÿü£3ùOM¯dz-1ö»0½ÔzÓ÷HÎq%ñÅžú±)îýľxGèû@êð~÷ýŽ¡îðß±Çÿ ZËòáv’ÛåÛ?Ûds^-çæŸ}[UÕb´ù(XÔÚT›ò^ö?ZU?¨-°eÁ 6*"Ìæf3…yg¦1Òè<] ‹tcºsº Ý”Ž O†'ÓáÉðdx2<ž O†'#Èd2Œ@F #ȈdD2"‘ŒHF$#’ɈdD2ÆC}kóûrwPز΢ܔƒ»Uz")úµýü ØÛvÎ boot/data/nodal.rda0000644000076600000240000000061311110552530013740 0ustar00ripleystaff‹ í–ßjÂ0‡O¥"z1ÛsÈzN¢îj7{‚]y¬Ê@'TaÛ[ï Æâìù9 ¥v@ó6É—6iÈÓã”Óu(è¡Ç°ãÿ¢[º/›Ø­ˆz7>éù›W¾´ôÿåe—UçK#›7Ï12‡}·žoýÅk:ö÷ƒµB’Bè–ó8åîvç3M–‰‹5 ß÷޳ç8#ë'›×áO¡UO¤À ¢`Py¤0V˜(ܧЉî@ˆA2 Æ  †ƒá`8†ƒá`8†ƒá8Cà8Cà8‡ÃÀaà0p8 ‡ÃÂaá°pXÉ.ËÙÊmu•iµAìvn¸HüòÙçþ÷332Îboot/data/nuclear.rda0000644000076600000240000000154211110552530014276 0ustar00ripleystaff‹ ÍV]H“Q>[šº Š„DŒ¨°áÎü+i–¹oxáEs)–:çÜðoÉDW3ÝM·t%B7ÑEˆÑEÑDx›·Ù…7uÏó¾/~§ært`{Þ÷9Ïû>ç;ßgþ³ÍÜÑì`ŒÙ™­ˆ1[ž óìòËÆ˜S¬  vµ"2Ú'Ó]rz·D—ˆ/¼–ãˆ7.9j³bøqòÐã‹oj)Y{.Æ÷*>v¯&ü`É!zóW¤À/ú>ÈéÅ)O«<™Vu±ÏΡåÚ91xܔ׈ÁE™9‡ÅÈ7 CËbàá¢lüU Ť|%)n1Û´ˆH±ò›œPýFæTŸÑg†IˆñwŠë¼/ÇŒÛÙnqsF­+ùRõ»¾ªô‘³Aè}«úô&•og©ò ¯¬¯ŸöÁ8ªê}­h°õAèû®žÇpk¸ê ŸQõ8JCä¯jºzM‡}žB~0 >3¿€õ¿‡õ¾R¼o ø5m^å´¥ðœG4<¦á À@¬sk|‰¦ÇyÆc_®éË4]¥Æ{5,×ú⮃ ìC-ð~¡ð  oAaSBáyDÐ7@~нæko„¼Чõ¿|³òØß¼˜°ò  ?Ǭó€uÌšãúüÀ_f0`’ŠŸžÝ{NÌ+ŒO[óøŒ‚o øè¦;Þ<僼êKZõwSàó êÐ÷ÔÁ9OC~»Î:?9oEœ‡÷iãôêæy6}®#›ïVו©¿Ž¸™úe®Û\¿Õun·nëÃz2=w®ûϵ>Sžûyùýï"×ñ¯Þëßžƒìûd½7uý/÷M! 
SÓA.˜¦; ñpOѽ‡:UG}]€x⽈÷,Öã½Y¡Õ¡Ÿ~ÿ»¬ùvŸ‡ÿeüéï{ÃyØø·;?èn들y;šºM2/ØÓÅøZ Ú±=ꡈC´##Ž`¢’ 6²·öC”Œu»CTÕ–Tééwo\– ¤6t·¡¹Í‹A9TbP…A5'ÑÛSF=™‡S䥨œ¢ Š*)ª¢¨š"òàäÁɃ“'Nœ<8ypòàäÁÉÃK^òðrýU»}¸§69äë ¸Û#r»eöÃüü^jV¬ boot/data/paulsen.rda0000644000076600000240000000304611110552530014315 0ustar00ripleystaff‹ u—ÉŽUU†o!†@lо‘‚¢‡âž6qP‡Žg…¢qækàkðø>†ñB­}²#7©ÚûìîÛÍÚß¹÷Û¯ëÄã«ÕêÈjëøjµut“=zdóokµÚÞ¬Žý|ðë³—Ožor§‹~¶I¾Ûßyúî³=Þ}ödz-Óu¦w²Ýú«ÍçÏýõ7o6Ÿ¿ö¯¿:|¾—élwÛý_çxYuõÎv7³üJ¶¿üæpÜGY%ëïgùÃL¯gý¬ßÉqû ïs¼¾/³Ýµ,wêuì&¿Ëþg“³“ýö¼/9žÓݬ÷|¯6óð:fyÍ;Ç¿›ãÞÎgïÓN>_Î~›uxŸîeŸß™æüîdûKžgö3?²ÝUï¯ù™Ö¾üýþ:<[ÙÏý•õÞGóv³üvöûªéÖßwäzÏ?ýîŽ×ÏNOyÿ›8¼Ÿå^o»¿·<^>?Èñ¼ïÊõ{?Ôì¿ãÌqèst|x³þ\–o7ñèv®_{Ÿ“ã}8Ÿé^ÓÞó ïSs¿ï{ܼ—Ž×»¸‡{æåó™Ü#ßÞñ—íng½ãíªïO>wŽûæ¾_Èq.}àù‘½åÔùzý;ûÜÝÞóòyz~öŸïƒ=x3ëx²|Î8vûm{!ÛÒìç:S¯ûóäoבώÇѧM|Ýkûg½=ëgŸ£÷ïbö¿åspä8žß©lç{æø-ÿd»þõûûvªY÷iŸëÎ×\Ÿ_½²Þ~q|=Ìvžy§sÜs™:îí‘mÇe¶÷þÛ£gÏÜìÿ0ëõžiöa»™×Ù&Þ~:޼þ;îçx±ï²¼kÖã¸Øiæo¯Ÿôû'ë}®ž¯ç3Úwî×øÚûu¶YßÍ÷t3kžëw›qö‘ý_÷8Ç·÷ìcï߯>ÏÓýìÇvöª¿Oøýés÷¹ú^Ôùå³¼îL¯7ó_çøŽ+ßϋͺÿ» ç|Ãó¹´çsÉõÍú=¾}Wïµ¼G~{~7=אָòÇçþE3oïçïïW>ð{Èqêño8~›{íûàï~?Ë~‡I4_{?~~ðã“—›Â“‡…GßnýÞ´:þâ§ßöþÓrë;·ìœ‘3áLïÌàÌèÌäÌìÌ’™#ݺr]åT¹¨\_¹¡rcå¦ÊÍ•+†Š¡b¨*†Š¡b¨*†Š¡bD1¢QŒ(F#ŠňbD1¢}1úbôÅè‹Ñ£/F_Œ¾}1úb ÅŠ1c(ÆPŒ¡C1†b ÅŠ1c,ÆXŒ±c1ÆbŒÅ‹1c,ÆTŒ©S1¦bLŘŠ1c*ÆTŒ©s1æbÌŘ‹1c.Æ\Œ¹s1æb,ÅXбc)ÆRŒ¥K1–b,ÅXÌø¨[¯ÉvdE6Èöd²#Ù‰ìLZ­ƒÖAë uÐ:h´Z­ƒ&h‚&h‚&h‚&h‚&h‚ÐZ@ h- ´€ÐZ­‡ÖCë¡õÐzh=´Z­‡6@  Ðh´Úm€6@ ÐFh#´Úm„6B¡ÐFh´ Úm‚6A› MÐ&h´ Ú m†6C›¡ÍÐfh3´Ú m†¶@[ -Ðh ´Úm¶@Ã%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%Â%K—. \¸$pIà’À%K—. \¸$pIà’À%K—. \¸$pIà’À%K—. \¸$pIà’À%K—. \¸$pIà’À%K—. \¸$pI¼sÉ{?\¿vð²ýáz⇃_öž¾ØüRÝ<ýóöï__Ò*úboot/data/poisons.rda0000644000076600000240000000103011110552530014327 0ustar00ripleystaff‹ Í”OHAÆg7«â‚¥PÏz­IfFMah<ëÉC.‹Ý€M–z÷ì¹½öî9ç^ E¥Tj•ì?Z –âUÙ}ï£YhBoxÉ73ï½ß{3Ã./¬Èp%BøÂ ¬ |ûã 1nÄÈfc­ÕØhYuÏN vûŽý/šƒNXÙ|Õ1§»nì™3; +OÌÉÖD{éǖɶCó~»zÿêõ¶9~áÆK³G~»nµ:eÞvwÆŸwÍòçýïé´b~QÜù%¿ŸäwN¼)¦j~S]*¦ã²/î˜#ë4Ñ^F=\'×ÿ‰ò#¿SšÖÒ~ó}¼é0ûí4}qÞ/´þ9ÅMšsÊÓu«Kæ’xìÇu¢~ê—ù²<ö>ÆÜ=dWÕc~Î 9ûÏü{žÛp=~×ík îÒ®[õJ,$ • Z­G-ŽóÈk¸­&¦U×}ÎkåëdùódÁ¿™kÛ¿¡N>‡,±˜g±ÅnúNïanDëqþ‚dm=æÐì«@³¡¤GI.Çh³ñlúÏ<Åþ—™ Íb†Å,‹9eHø¥"T JB)( 55 5U†C‚!Á`H0$ †C‚!ÁP`(0 †C¡ÀP`(04 †Cƒ¡ÁÐ`h0tù/¯+|%Ñt­¹Gã^˜¸¾à~”=nboot/data/polar.rda0000644000076600000240000000114411110552530013760 0ustar00ripleystaff‹ ]“Ë‹AÆ{²IAÁ“ è²éîlÖÅ…ƒ¸!1ÉdvÍ>fQpÌÅ›'OO«"Þiï@·Ð4øÁqæ·ñ¾´†û¤½-Ø;?ao¾¿Îùü÷°7FÈëã>i ïB+èÚFÿÐ:Ÿ˜·Ãñ5¼3 Ù–ÑTã:.ó}5ÑOÓS7¼·7J‚Çøcšg¢pÌ2õº·§ö䆽sÿïÓ¼Ø+8!ã„u¢èÄ‚%'¸È"S˜U¥EQVTQÔ‚¨’¨EQÂÐÂÐÂÐÂÐÂÐÂÐÂÐÂÐÂÐÂÐÂ0Â0Â0Â0Â0Â0Â0Â0Â0Â0°°°°°°°°°°Â(ÎO÷ÏNŽ\x¼,¿ŽÃ¹Î0i‘Äû;ùýÔêŒm×boot/data/remission.rda0000644000076600000240000000045111110552530014653 0ustar00ripleystaff‹ r‰0âŠàb```b`äd``d2Y˜€#ƒP€³(57³¸83?](À TÀ¤¥íoÎYpúÀéÇÆ`€A?K ú%ÔtúÍ8kÿj.:ýªF†ÚóªïÔüŸPùßPùPq4î?\ö qæ?:„½¸ÜCmu( ˜5/17µ((ÀI¼ A&O(‹1Æ(ÂHùùåzÈš¥a* a #ÃÆ01La 3Ãư€1,a®14€³ á,#8ËÎ2³Lá,38Ëβ€³àvÁí0‚Ûa·Ãn‡Ü#¸Fp;ŒÌу79'±BŒPe\)‰%‰ziEÀÀòþ0C@ðQjboot/data/salinity.rda0000644000076600000240000000077711110552530014512 0ustar00ripleystaff‹ µS=hSQ¾/¶´}CQ(Hyñ5–`nò~Rž¡Eq‘:uÈòhZ 1&b÷ÎuÕY3[\D7¡‹ºt©ˆKWñÞ¼ïåu0 ’œï|ç»çç†ûxmS»›®R*§œ¥œ 'ræÇQ*o5ÝOÚ­NkðB©©9›4ŸYãÈÛyŸ¬}¦k•‘ÑüKk¯(¯FF7àÁ/â\~ç ¨s º"x±äQû.!öá¹ÞøŽ‘¿ÑáYWJ½ìÉóò¾×áÇݷݸûÿ²o!3¿ŸÙ·~Ì}eÏ«à±Û½àÓ4üÖ_Êÿs?ìåþÅ…'Ã󃛴º{à 7¾QŒ¸~–þO1î3Š í )üê_¼?lP¸îY!E_öOï¿ÍSm騭wßPØÉ[‚‚õîëc—ÂÄd÷O)úžò4LëÄ\çQÕŸÿÖ»©¦ì±`ÎÏù:Ô 5l!ª~Hç ¶FsdÞïd'yºÝ7¤½ûx-yÅCvYùñ5|Wn¢5©sÚA!!ÕçTÕ½õݢÛ2ØcR“¬“dÁfiø$ÆÜ ¦·¨¹u÷"Äd·ƒn••¹eõì,Œ¯¿õÕt9Êê5Öªÿ ç'ØÿrÆç²:Ãú»]޲ú÷òŠûq]à>êqÜ£žЇÐG¸÷1ÖO¦‘ÿTù8AsUÄåˆ룢Aœ|‹j‘oQ·Ãy]G]Cå Ž<¼[¹ùiäQ‘/È¿T*â,îõˆ/ºŠó‘þ÷Úò?Ç6¸òìÿcfÿ™é5gïfç_­¿2kÞ7³ƒ°{'¾ [î&¹š&Ý;T÷n®×¾w®ëOfõÞ– neZOó6¸ónø~½×hxŠð¬­Šíï4“ú¨“µÚÎ? 
ÿÌܺT¶¨±jœ¯¦P³©f¬æ¹¾5ÑåtBgé§+è6éÆtdB†!dB†!dB†%Ã’aɰdX2,– K†%Ã’áÈpd82Ž G†#ÑáÈpdx2<ž O†'ÓáÉðdx2<Å(ö[ïg§;õ¢NÛÐy±—Íù/5‘ò¸\boot/data/tuna.rda0000644000076600000240000000105411110552530013612 0ustar00ripleystaff‹ ]’?LTAÆ÷gÎ;“ã„—ˆ‰z!„nÿ<° ;1&XØ@EqÍ‹J£"ƈ5 6XSSC¡…ÚÐØ\Bar†pF Ñ·»³Ÿæ6¹›owgæ7³on.ÊÚbMQIUˆ¤\Èr©øK„H«n³þd9¢2ÎÊõÂ’}?ßݼ´»`?93ÿÍ~Þjõö¶ì—n}´‡¹ô¹=äý×â²=öÃv•_ö»ðËqÜÑ®Oh·j³+oí1û÷VÜÁcû“óüzåÖkû»È¾Ñ¹eO8þ+£»IbÉ/ÁŸݦҬOD!Žœ™Û¡²7)U|xóŸ uR…ãOs|•9g\™ÛTï@õP7 òý`¨“¡~j„:©áÚêíÓY>bÞpÇF)דržsáh„ýF8>ÚóÜçö¿Èý79O“û¸ü&Ä2wôe¸¿Âu]½ò\ã>ÆïûïJã/|ã4ñ.¼ÛÔ Ï훑SËùÃ{kÅaƒ‡Ä&Ïú¼ª«žNþïIѳ…ŒBE¡£0QdQLG1Åu¥ÖT JB)( e 2¨i¨(0$ †C‚!Á`H0$ †C¡ÀP`(0 †Cƒ¡ÁÐ`h04 †Cƒ¡Á0`0  †ÀaÀ0`0202020202Ý?½wäkýÓ[»›¯ç“K«Å¸»÷û 8küO,boot/data/urine.rda0000644000076600000240000000336511110552530013774 0ustar00ripleystaff‹ í˜éoUUÀ_k‘R [Ûו¶ìÙúÖÖZï!h”ÅJˆPZ èòZÚR ( !FÑâL !Ñ„CpÁÔ„Æ®à’\E FC|÷ßÌ£ïð79gΜùÍÌ9÷vÚÚëYuY>Ÿ/Ý—6ÌçKˈ3Òãÿ¥ù|9q…oÈÆØÚæU>ßÐìødh|qD\.ôÝyþ—ÇqïÈÛ¥Ü?Ç]’‘¤ã=Ç}è•ÏÎìã¸e¾ôÆ÷9îŒÞŠãæô\~øDŽã.þuWÉ©ZÇ­˜|µû«[ŽÛùé»Uu£·¦õµ‹Y5I?²ouâq\O»n›ã¿ßš\Ÿwú©°³%©{áǯ\ð¥ã®\ðÅoý븞øçç¤^â‰{éýýp2~‘å×Ï\65éOìe]ü¥æ-þÄ>uÿÄof¾ü~2~É[ôRáË>±“|D¦ê×mýûëñ7FSÒÔm©W°)ɹg~þÇ$Ç£·¾žô+u¿ûç>·ýîIŽ»çÞôáóN'ãžÔWÎ9uìd¼"o:nóó~[¬ƒÄ%öRWÉkÎÞ裞Ü?“mã5~/Íò¦øœ÷|nüY CShïŸÉEúb³ñŸH(L.ûóãÙÇK`ò=7×?0þÄ´ÔcŸkã29ŽMöQïyѲ¯ÈÆeŠÄçæìASL<&×sÓsÙŒe=?þS‰Â›"â-I¸i3…È<áW)qçy»ÿf áæ±žGœ~ò* ÎôøË‡›kÏÝø‰·ÈÞ“G¼Ÿú—: ‡:åoÜ<æØ“_>yK½ýäQŠÿüùÉË/ö¬çpÎ~â–:èœ|‹ìWÊIþä•M¹èsÈ;'!–™RôÙðÅNÎ7߯§÷o÷qËÙ‘ieÇ(+7õY¹f…•O_²²÷¼•‡ç[Ù7`e•Yù ófceÛIö—Y¹ ¿-Ǭ<ð‚•ÝØm'®ý‹¬Üsfpœ;O¡oÂo+ñt[¹N3²»]ij#ö­ÔûYx»\òa¾›õY'νìï†Ów…}ijŸ<ûg³úí§Îý•Vî#ß½øí£]ìÛlõÛåúªé<3¸>Û©Ç>ìŸæöáïÀ%+`ˆx·Jݘ÷öYÙBÜúñCÝÛYßgȇóßC}vc¿éùPçÄÓãã±÷Oó ±ß¿©|wJÐWÍ·ú ÞÛ9×ìzø';¯zÇΧ`?v“¹ÿð[ý}V_†Þà_ôQöÕ`o°>ÿ5ðÆa_…¿ ò~¦ØI³˜Ï®Œ|ç¢þ ô•Ô½Q‰û òç4â˜Ã¼ÄO ÒÁ5<ñ–ú!eÂnÒ¹Áç„?ýèàúUãOüWw uœ>$þR7Ãì°—ïf'ùw²#>ùžÅˆ§ þì–ã¿ûÒà¼ä»µ»­ðåü$¾-仓õö7Á]ƒ}#~êáw³ÞËþ‰â—ójcÞS–?œnü´À[mì¹wqOGÓ¤Ó·¤ÙûîüÉÏ÷,ÛÇ87¸ÇeÖ^×Ól_åܲû¿¤¶ý’sÓö+Îýý‚ãò~øèJàÜä}£r®Ú>ŹnûgÀîsà\¡oJÃïG¶ßq~!¾L‰‡g¤í›¤¯7÷ß5úé÷†$Ò}ņŸ!ô_éä)u-¢›ÆyùáH_6þtúÓ1ì˰öRO3Š~2“ºeÒJUF\éÔ£{é×§Ò§ãœ‡ÙøMçZĹþ€á¦Þ°/#ýû¸‘v>žßW¿c½{ù¹1þ“Ä<õ3ÍõVµÇ•ÞÛâýqÆS¦Å ]«ï\Û±‰izk£Œ–ö -2nhin”ñÆØªzÕׯoHÁ ‹µtͼ¹P2È (ƒ Â2ˆÈ *ƒJTI¬³uT¡£€Ž‚: é(¬£ˆŽ¢:ªÔ‘2Ê(# Œ€2Ê(# Œ€2Ê(#¨Œ 2‚Ê*#¨Œ 2‚Ê*#¨Œ 2BÊ)#¤Œ2BÊ)#¤Œ2BÊ)#¬Œ°2ÂÊ+#¬Œ°2ÂÊ+#¬Œ°2"ʈ(#¢Œˆ2"ʈ(#¢Œˆ2"ʈ(#ªŒ¨2¢Êˆ*#ªŒ¨2¢Êˆ*#ªŒhUêËÙ°¾¾]Þ”4̲ë;êg®ŽÅ_¢øì–÷ï?ÿC" „boot/data/wool.rda0000644000076600000240000000325011110552530013623 0ustar00ripleystaff‹ eVXG=P± vÅ®1*–`}ÄÅBb°ìb5*Å{‰=vQDØ»”»ãnwï84zØ{‰Kf&s{ßgöûà¿ùgvgæ½÷¿™!!~ž!ž*•ÊYåä¡R9¹Ÿ.Î䟓JUÙƒ6b###T*w/’j‡Ì¾©¿´Ãd8#Î_vÙÃ\|6D§A{»@^aœÍ£¼ ¦îÅÐÚ6¿m>å14$Ù±dÔƒ—ÞPúÔ;å¶;‚´ŒS£†w=ŠÔ×E×ÄÊHyRîì޽r0AÓhñ\˜Óåú¢1)¸0~㉠²Ëp~{p¦ {p6mÊÂ7êã|[Ëê»Ò‘ZóØw­=Á%³6¼R· ¤öüº^¨´3:I†@ˆðäaIVÚJ¾TàͰ±+ ”3@~uKçõ«ßA ¡ÙÖVée_@Óeƾ|Ï.ÐVœ¾yMÐ-%fí\z¿á_‘=·ÜºM—¶"‹®&&G‰™ÃòLGrw)íì±uVy8(1ë@…í·g¿SÆef£rEW‚çÓ3wð€–.CxÍɺ™¸M›í^é·öC¸[m|s³BÙ3g¯õEF¤\§ø~„})Òq–/J[}çäÊ݇@3¸šÑ}ô h¿?07.Â÷Qs¹sû m]•qjÊRû¹ oêã=´9/ PSTš¬Qæ ÛZ÷„?€šìæíg_Ç|7 ƒ±)4-¼¨b”¨ÎôÓ ?˜!aD?"£ÝÞn Œ†påî’n)‹¡¡„#Áì§ðóm[áí^íã2굈m”åˆÅ÷×®šY!õJy}ÓÑŽhŸŸëAù^zÌÕ³B”˜3úÚñcHg°Ì@w¡Aé8ò*4uè³ Nñ!TA{‚)ꎿïsÚFo´iæ‘Ë7"sZ–Õ?a2 ùNû¦ãªåó?ç{í…Ž ÷Èë@$Å6eáGˆ?žÝÛf{9ˆ¼ŽÄë'Í:Y&¾^#ÑȃœZ0m …8FŽCîÐFÞêË`¼0øÙòÚ`$»8ºkLžŒ( ½`ºÅ>Sÿ샅;NCäüÈßÊ_iCiRñ+!ÝúÎxÝÒ°´b$$®g™¸Äüë!óuÈÕ¨øk’²\â#Uƒ=2Nó2“»3$j'çlÊ>¤—ú’*ç¢a¸ÿ:X çÃÂÊ Vþ~þ Œ¬&o ¬élBXÙô“ÏëÂÊ})¿Ê¹¨§gH¾‹ûÕþaäÿ¼êï‹o_Â:`mý~÷º·amB‰ì<7j0»‘ÇëÀr±þÞí±ÛgIöi¶µ5,±Û¦:Y ó£ˆßz&ÇÃÂù1·§†¦à%_ÚBÝR™ÓÉOÄýÐŽ«‘n3'F™& =YA°ó9eº/YÓB?wä©z'Wú T.{ôȽI Ä &¶7ˆ9Ë‹"'@Ê©Uù^šRV ¤eœˆ!µuyÚouˆÔ†+†i>#¦!`-Œ“iÁ"—7Rwî;D‰¦Ð&o4I¡Èá|éæ&+ S¢žã­ç<陜Z@ÿª{ÝQBuGžÊ¨j"ôdDIШ¤ïÐ%±  {V“îú³”qzZFSSa`Ç@¬#óï”Ûz*ÄåÁ@Ô—ã7V¦aÈ ýxìúã®ê}ý¡Ïãƒ Í ñ:‘ìý˜±Aâ:â©0j@¤£[ý ‘G¾D>òþ]ÙyvÃ@Ï?!q?$f¨ù¹%÷ù’ ¸¿JL†eõÎug"à焘ø‰f”¶‘ë]$.|ÿÝHÌæ\•õ‹ƒúT-r[ ±3Œ#na†ñuâŒ~‰‹.y¤Ì/óz°×£Ò¶÷óyezªú4‡e¢­} S”{dÂLÑñ ‡…Z!óóÞÌy4‡Æ)@.Y¸îëæ‰š;—™ïË,3#…\1?ÊçÓ 
ˆTmMÞ:"-›åE¹_Ú£D.þ…íýÌ.ü!¶eF ‘Ÿ/ÒO G;˜ $z qøÝŽVyuò†;̦T­ƒ¼Òå_jù×5pƒß ®OcÀãF2Š 8Ï6~®Ù8O6î¶j$›À®Oò`£×­ù/a£Ã\ây{A™sôÓ*™ðÍ®TtÔ ’ò¢?Q¤bO·¨A,ÚÛß¼â:6"<*Š$Ëÿ—t¡Içh’Q}¡ÿhŸ o9 boot/extra-tests/0000755000076600000240000000000011663151666013547 5ustar00ripleystaffboot/extra-tests/README0000644000076600000240000000016511643306261014420 0ustar00ripleystaffThis directory is for extra tests which are not to be run routinely. It was started for tests of parallel operation. boot/extra-tests/parallel.R0000644000076600000240000000213711647215071015462 0ustar00ripleystaff## Reproducibility of parallel simulation library(boot) set.seed(123, "L'Ecuyer-CMRG") cd4.rg <- function(data, mle) MASS::mvrnorm(nrow(data), mle$m, mle$v) cd4.mle <- list(m = colMeans(cd4), v = var(cd4)) ## serial version cd4.boot <- boot(cd4, corr, R = 999, sim = "parametric", ran.gen = cd4.rg, mle = cd4.mle) boot.ci(cd4.boot, type = c("norm", "basic", "perc"), conf = 0.9, h = atanh, hinv = tanh) for (iter in 1:2) { set.seed(123, "L'Ecuyer-CMRG") cd4.boot <- boot(cd4, corr, R = 999, sim = "parametric", ran.gen = cd4.rg, mle = cd4.mle, ncpus = 4, parallel = "multicore") print(boot.ci(cd4.boot, type = c("norm", "basic", "perc"), conf = 0.9, h = atanh, hinv = tanh)) } for (iter in 1:2) { set.seed(123, "L'Ecuyer-CMRG") cd4.boot <- boot(cd4, corr, R = 999, sim = "parametric", ran.gen = cd4.rg, mle = cd4.mle, ncpus = 4, parallel = "snow") print(boot.ci(cd4.boot, type = c("norm", "basic", "perc"), conf = 0.9, h = atanh, hinv = tanh)) } boot/inst/0000755000076600000240000000000011725143510012225 5ustar00ripleystaffboot/inst/CITATION0000644000076600000240000000231711716773365013406 0ustar00ripleystaffcitHeader("To cite the 'boot' package in publications use:") year <- sub(".*(2[[:digit:]]{3})-.*", "\\1", meta$Date, perl = TRUE) vers <- paste("R package version", meta$Version) citEntry(entry="Manual", title = "boot: Bootstrap R (S-Plus) Functions", author = personList(as.person("Angelo Canty"), as.person("B. D. Ripley")), year = year, note = vers, textVersion = paste("Angelo Canty and Brian Ripley (", year, "). boot: Bootstrap R (S-Plus) Functions. ", vers, ".", sep="")) citEntry(entry="Book", title = "Bootstrap Methods and Their Applications", author = personList(as.person("A. C. Davison"), as.person("D. V. Hinkley")), publisher = "Cambridge University Press", address = "Cambridge", year = "1997", note = "ISBN 0-521-57391-2", url = "http://statwww.epfl.ch/davison/BMA/", textVersion = paste("Davison, A. C. & Hinkley, D. V. (1997)", "Bootstrap Methods and Their Applications.", "Cambridge University Press, Cambridge. 
boot/inst/CITATION

citHeader("To cite the 'boot' package in publications use:")

year <- sub(".*(2[[:digit:]]{3})-.*", "\\1", meta$Date, perl = TRUE)
vers <- paste("R package version", meta$Version)

citEntry(entry="Manual",
         title = "boot: Bootstrap R (S-Plus) Functions",
         author = personList(as.person("Angelo Canty"),
                             as.person("B. D. Ripley")),
         year = year,
         note = vers,
         textVersion = paste("Angelo Canty and Brian Ripley (", year,
                             "). boot: Bootstrap R (S-Plus) Functions. ",
                             vers, ".", sep=""))

citEntry(entry="Book",
         title = "Bootstrap Methods and Their Applications",
         author = personList(as.person("A. C. Davison"),
                             as.person("D. V. Hinkley")),
         publisher = "Cambridge University Press",
         address = "Cambridge",
         year = "1997",
         note = "ISBN 0-521-57391-2",
         url = "http://statwww.epfl.ch/davison/BMA/",
         textVersion = paste("Davison, A. C. & Hinkley, D. V. (1997)",
                             "Bootstrap Methods and Their Applications.",
                             "Cambridge University Press, Cambridge. 
ISBN 0-521-57391-2")
)

[ boot/inst/po/ : compiled gettext message catalogs (.mo files).  The binary
  payloads cannot be reproduced as text; only the identifying header fields
  legible in the listing are kept.

      de/LC_MESSAGES/R-boot.mo       German translations of the package's
                                     diagnostic messages
                                     (Project-Id-Version: R 2.15.2 / boot 1.3-6-,
                                      PO-Revision-Date: 2012-10-11,
                                      Last-Translator: Chris Leick,
                                      Language-Team: German)

      en@quot/LC_MESSAGES/R-boot.mo  English catalog with directional quotation
                                     marks
                                     (Project-Id-Version: boot 1.3-9,
                                      POT-Creation-Date: 2013-03-20 07:24,
                                      Last-Translator: Automatically generated);
                                     the archive listing breaks off within this
                                     file. ]
bootstrapinput ‘t’ ignored; type="inf"input ‘t’ ignored; type="jack"input ‘t’ ignored; type="pos"input ‘t0’ ignored: neither ‘t’ nor ‘L’ suppliedinvalid value of ‘l’invalid value of ‘sim’ suppliedlength of ‘m’ incompatible with ‘strata’likelihood exceeds %f at only one pointlikelihood never exceeds %fmissing values not allowed in ‘data’multivariate time series not allowednegative value of ‘m’ suppliedneither ‘data’ nor bootstrap object specifiedneither ‘statistic’ nor bootstrap object specifiedno coefficients in Cox model -- model ignoredno data in call to ‘boot’number of columns of ‘A’ (%d) not equal to length of ‘u’ (%d)one of ‘t’ or ‘t0’ requiredonly columns %s and %s of ‘data’ usedonly first 2 elements of ‘index’ usedonly first column of ‘t’ usedonly first element of ‘index’ usedonly first element of ‘index’ used in ‘abc.ci’sim = "weird" cannot be used with a "coxph" objectthis type not implemented for Binarythis type not implemented for Poissonunable to achieve requested overall error rateunable to calculate ‘var.t0’unable to find multiplier for %funable to find rangeunknown value of ‘sim’unrecognized value of ‘sim’use ‘boot.ci’ for scalar parametersvariance required for studentized intervalsboot/inst/po/fr/0000755000076600000240000000000011663151666013266 5ustar00ripleystaffboot/inst/po/fr/LC_MESSAGES/0000755000076600000240000000000011772542456015055 5ustar00ripleystaffboot/inst/po/fr/LC_MESSAGES/R-boot.mo0000644000076600000240000002067712122262107016546 0ustar00ripleystaffÞ•NŒkü¨3© Ý þ6)R)|¦»/ÖL&s"Œ&¯Ö%ì / F 4d .™ 4È .ý *, &W ~ (ž %Ç 4í 5" ,X 7… +½ é $ *- !X z 2 - *ð < X v • 0³ ä ù ('Bj$†$«Ð-ï2-P~=˜Ö%ò%>"\.2®$á%.,[ x™®Å#á+¯1Pá!2!Tv ˆ.©.Ø:8 sd”ù%,=j%‡­ÈãV6Y?/Ð(()*R,}3ªLÞ@+8l:¥,à! /:N-‰·KÎ60QU‚Ø ø19k&„*«0Ö&-.+\ˆ+¨0Ô7'=Je°4Ï8-=-k<™7Ö* +9 7e  /½ +í !2!0O!>€!L9JB*5= N+1'47 - MAC?8D;@,2>H.$#/6I 0E G:F&)%( K3!"<%s distribution not supported: using normal instead'F.surv' is required but missing'G.surv' is required but missing'K' has been set to %f'K' outside allowable range'R' and 'alpha' have incompatible lengths'R' and 'theta' have incompatible lengths'R' must be positive'alpha' ignored; R[1L] = 0'data' must be a matrix with at least 2 columns'index' must contain 2 elements'simple=TRUE' is only valid for 'sim="ordinary", stype="i", n=0', so ignored'strata' of wrong length'stype' must be "w" for type="inf"'t' and 't0' must be supplied together't' must of length %d'theta' must be supplied if R[1L] = 0'theta' or 'lambda' required'u' must be a function0 elements not allowed in 'q'BCa intervals not defined for time series bootstrapsR[1L] must be positive for frequency smoothingarguments are not all the same type of "boot" objectarray cannot be found for parametric bootstrapboot.array not implemented for this objectbootstrap object needed for type="reg"bootstrap output matrix missingbootstrap output object or 't0' requiredbootstrap replicates must be suppliedbootstrap variances needed for studentized intervalscontrol methods undefined when 'boot.out' has weightsdimensions of 'R' and 'weights' do not matcheither 'A' and 'u' or 'K.adj' and 'K2' must be suppliedeither 'boot.out' or 'w' must be specified.estimated adjustment 'a' is NAestimated adjustment 'w' is infiniteextreme order statistics used as endpointsextreme values used for quantilesfunction 'u' missingindex array not defined for model-based resamplingindex out of bounds; minimum index only used.indices are incompatible with 'ncol(data)'influence values cannot be found from a parametric bootstrapinput 't' ignored; type="inf"input 't' 
ignored; type="jack"input 't' ignored; type="pos"input 't0' ignored: neither 't' nor 'L' suppliedinvalid value of 'l'invalid value of 'sim' suppliedlength of 'm' incompatible with 'strata'likelihood exceeds %f at only one pointlikelihood never exceeds %fmissing values not allowed in 'data'multivariate time series not allowednegative value of 'm' suppliedneither 'data' nor bootstrap object specifiedneither 'statistic' nor bootstrap object specifiedno coefficients in Cox model -- model ignoredno data in call to 'boot'number of columns of 'A' (%d) not equal to length of 'u' (%d)one of 't' or 't0' requiredonly columns %s and %s of 'data' usedonly first 2 elements of 'index' usedonly first column of 't' usedonly first element of 'index' usedonly first element of 'index' used in 'abc.ci'sim = "weird" cannot be used with a "coxph" objectthis type not implemented for Binarythis type not implemented for Poissonunable to achieve requested overall error rateunable to calculate 'var.t0'unable to find multiplier for %funable to find rangeunknown value of 'sim'unrecognized value of 'sim'use 'boot.ci' for scalar parametersvariance required for studentized intervalsProject-Id-Version: boot 1.2-23 Report-Msgid-Bugs-To: bugs@r-project.org POT-Creation-Date: 2012-10-11 15:21 PO-Revision-Date: 2012-10-03 15:35+0100 Last-Translator: Philippe Grosjean Language-Team: French Language: fr MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 8bit Plural-Forms: nplurals=2; plural=(n > 1); X-Generator: Poedit 1.5.3 %s distribution non supportée, utilisation d'une distribution normale à la place'F.surv' est requis mais manquant'G.surv' est requis mais manquant'K' est fixé à %f'K' en dehors de la plage admise'R' et 'alpha' ont des longueurs non conformes'R' et 'theta' ont des longueurs non conformes'R' doit être positif'alpha' ignoré ; R[1L] = 0'data' doit être une matrice contenant au moins 2 colonnes'index' doit contenir 2 éléments'simple=TRUE' n'est seulement valable que pour 'sim="ordinary", stype="i", n=0' ; il est donc ignoré'strata' de mauvaise longueur'stype' doit être "w" pour type="inf"'t' et 't0' doivent être fixés simultanément't' doit être de longueur %d'theta' doit être fourni si R[1L] = 0'theta' ou 'lambda' requis'u' doit être une fonction0 éléments non permis pour 'q'les intervalles BCa ne sont pas définis pour les bootstraps sur les séries temporellesR[1L] doit être positif pour un lissage des fréquencesles arguments ne sont pas tous du même type pour l'objet "boot"tableau non trouvé pour un bootstrap pamétriqueboot.array non implémenté pour cet objetobjet 'bootstrap' requis pour type="reg"matrice manquante dans la sortie bootstrapobjet résultat d'un bootstrap ou 't0' requisles réplications de bootstrap doivent être fourniesles variances de bootstrap sont nécessaires pour les intervalles studentisésméthodes de contrôle non définies lorsque 'boot.out' est pondéréles dimensions de 'R' et 'weights' ne sont pas conformessoit 'A' et 'u', soit 'K.adj' et 'K2' doivent être fournissoit 'boot.out', soit 'w' doit être spécifiél'ajustement de 'a' estimé est NAl'ajustement de 'w' est infinistatistiques d'ordre extrême utilisées comme points finauxvaleurs extrêmes utilisées pour les quantilesfonction 'u' manquanteindiçage de tableau non défini pour un rééchantillonnage basé sur un modèleindice hors plage ; l'indice le plus petit est utiliséles indices sont incompatibles avec 'ncol(data)'les valeurs d'influence ne peuvent être trouvées à partir d'un bootstrap paramétriqueentrée 
't' ignorée ; type="inf"entrée 't' ignorée ; type="jack"entrée 't' ignorée ; type="pos"entrée 't0' ignorée : ni 't', ni 'L' n'est fournivaleur de 'l' incorrectevaleur incorrecte spécifiée pour 'sim'longueur de 'm' incompatible avec 'strata'la vraissemblance excède %f a seulement un pointla vraissemblance n'a jamais excédé %fvaleurs manquantes non autorisées dans 'data'séries temporelles multivariées non admisesvaleur négative donnée pour 'm'pas de 'data' ou d'objet bootstrap spécifiépas de 'statistic' ou d'objet bootstrap spécifiépas de coefficients dans le modèle Cox -- modèle ignorépas de données lors de l'appel à 'boot'le nombre de colonnes de 'A' (%d) n'est pas égal à la longueur de 'u' (%d)soit 't', soit 't0' est requisseule les colonnes %s et %s de 'data' sont utiliséesseuls les deux premiers éléments d''index' sont utilisésseule la première colonne de 't' est utiliséeseul le premier élément d''index' est utiliséseul le premier élément de 'index' est utilisé dans 'abc.ci'sim="weird" ne peut être utilisé avec un object "coxph"ce type n'est pas implémenté pour 'Binary'ce type n'est pas implémenté pour 'Poisson'impossible d'atteindre le taux global d'erreur spécifiéimpossible de calculer 'var.t0'impossible de trouver un multiplicateur pour %fimpossible de trouver l'étendue des valeursvaleur inconnue de 'sim'valeur de 'sim' non reconnueutilisez 'boot.ci' pour des paramètres scalairesvariance requise pour les intervalles de confiance studentisésboot/inst/po/ko/0000755000076600000240000000000012121561347013257 5ustar00ripleystaffboot/inst/po/ko/LC_MESSAGES/0000755000076600000240000000000012121561347015044 5ustar00ripleystaffboot/inst/po/ko/LC_MESSAGES/R-boot.mo0000644000076600000240000001606112122262107016540 0ustar00ripleystaffÞ•:ìO¼ø3ù - No‹/ ÐLð="V&y %¶Üù4.E.t*£&Î(õ%4D,y+¦Ò$ñ* !A c 2x <« è ý ( $F $k  -¯ 2Ý - > =X – %² %Ø þ " .? 
2n ¡ ¾ ß ô  #' +K ×w MO22Ð-*1P\@­fî,UE‚<È0E6*|*§AÒPAe2§>Ú?8Y>’:ÑG 'T5|<²1ï !?BO‚'Ò5ú60:g0¢&Ó>úC9G}0ÅZö-Q5?µ-õ4#CXPœ&í2!G$i*Ž9¹=ó 1 %/*"+' :!&527 8-9(,).0 6$#43 %s distribution not supported: using normal instead'F.surv' is required but missing'G.surv' is required but missing'K' outside allowable range'R' must be positive'data' must be a matrix with at least 2 columns'index' must contain 2 elements'simple=TRUE' is only valid for 'sim="ordinary", stype="i", n=0', so ignored'strata' of wrong length'stype' must be "w" for type="inf"'t' and 't0' must be supplied together't' must of length %d'theta' must be supplied if R[1L] = 0'theta' or 'lambda' required'u' must be a functionBCa intervals not defined for time series bootstrapsR[1L] must be positive for frequency smoothingarray cannot be found for parametric bootstrapboot.array not implemented for this objectbootstrap object needed for type="reg"bootstrap output object or 't0' requiredbootstrap replicates must be suppliedbootstrap variances needed for studentized intervalsdimensions of 'R' and 'weights' do not matcheither 'boot.out' or 'w' must be specified.estimated adjustment 'a' is NAestimated adjustment 'w' is infiniteextreme order statistics used as endpointsextreme values used for quantilesfunction 'u' missingindex array not defined for model-based resamplinginfluence values cannot be found from a parametric bootstrapinvalid value of 'l'invalid value of 'sim' suppliedlength of 'm' incompatible with 'strata'missing values not allowed in 'data'multivariate time series not allowednegative value of 'm' suppliedneither 'data' nor bootstrap object specifiedneither 'statistic' nor bootstrap object specifiedno coefficients in Cox model -- model ignoredno data in call to 'boot'number of columns of 'A' (%d) not equal to length of 'u' (%d)one of 't' or 't0' requiredonly columns %s and %s of 'data' usedonly first 2 elements of 'index' usedonly first column of 't' usedonly first element of 'index' usedonly first element of 'index' used in 'abc.ci'sim = "weird" cannot be used with a "coxph" objectunable to calculate 'var.t0'unable to find multiplier for %funable to find rangeunknown value of 'sim'unrecognized value of 'sim'use 'boot.ci' for scalar parametersvariance required for studentized intervalsProject-Id-Version: boot 1.3-6 POT-Creation-Date: 2012-10-11 15:21 PO-Revision-Date: 2013-03-11 13:41-0600 Last-Translator: Chel Hee Lee Language-Team: R Development Translation Teams (Korean) Language: ko MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Plural-Forms: nplurals=1; plural=0; X-Poedit-Language: Korean X-Poedit-Country: KOREA, REPUBLIC OF X-Poedit-SourceCharset: utf-8 %s ë¶„í¬ëŠ” ì§€ì›ë˜ì§€ 않으므로 정규분í¬ê°€ 대신 사용ë©ë‹ˆë‹¤'F.surv'ê°€ í•„ìš”í•œë° ëˆ„ë½ë˜ì–´ 있습니다'G.surv'ê°€ í•„ìš”í•œë° ëˆ„ë½ë˜ì–´ 있습니다'K'는 허용하는 ë²”ìœ„ì™¸ì— ìžˆìŠµë‹ˆë‹¤'R'ì€ ë°˜ë“œì‹œ 양수ì´ì–´ì•¼ 합니다'data'는 반드시 ì ì–´ë„ 2ê°œì˜ ì—´ì„ ê°€ì§€ëŠ” 행렬ì´ì–´ì•¼ 합니다'index'는 반드시 2ê°œì˜ ìš”ì†Œë“¤ì„ í¬í•¨í•´ì•¼ 합니다'simple=TRUE'ì€ 'sim="ordinary", stype="i", n=0'ì¸ ê²½ìš°ì—ë§Œ 유효하므로무시ë˜ì—ˆìŠµë‹ˆë‹¤'strata'ì˜ ê¸¸ì´ê°€ 잘 못ë˜ì—ˆìŠµë‹ˆë‹¤typeì´ "inf"경우ì—는 'stype'ì´ ë°˜ë“œì‹œ "w"ì´ì–´ì•¼ 합니다't'와 't0'는 반드시 함께 제공ë˜ì–´ì ¸ì•¼ 합니다't'ì˜ ê¸¸ì´ëŠ” 반드시 %dì´ì–´ì•¼ 합니다만약 R[1L] = 0ì´ë¼ë©´ 'theta'는 반드시 주어져야 합니다'theta' ë˜ëŠ” 'lambda'ê°€ 필요합니다'u'는 반드시 함수ì´ì–´ì•¼ 합니다time series bootstrapsì— ì •ì˜ëœ BCa intervalsê°€ 아닙니다frequency smoothingì„ ìœ„í•´ì„œëŠ” 반드시 R[1L]ê°€ 양수ì´ì–´ì•¼ 합니다parameteric bootstrapì„ ìœ„í•œ ë°°ì—´ì„ ì°¾ì„ ìˆ˜ ì—†ìŠµë‹ˆë‹¤ì´ ê°ì²´ì— êµ¬í˜„ëœ boot.arrayê°€ 
아닙니다typeì´ "reg"ì¸ ê²½ìš°ì— í•„ìš”í•œ bootstrap ê°ì²´ìž…니다bootstrap로부터 나온 ê°ì²´ ë˜ëŠ” 't0'ê°€ 필요합니다bootstrap replicates는 반드시 주어져야 합니다studentized intervalsì— í•„ìš”í•œ boostrap variances입니다'R'ê³¼ 'weights'ì˜ dimensionì´ ì¼ì¹˜í•˜ì§€ 않습니다'boot.out' ë˜ëŠ” 'w' 중 하나는 반드시 지정ë˜ì–´ì•¼ í•©ë‹ˆë‹¤ì¶”ì •ëœ adjustment 'a'ê°€ NAìž…ë‹ˆë‹¤ì¶”ì •ëœ adjustment 'w'ê°€ ë¬´í•œê°’ì„ ê°€ì§‘ë‹ˆë‹¤endpoints 처럼 ì‚¬ìš©ëœ extreme order statistics입니다quantilesì— ì‚¬ìš©ëœ extreme values들입니다함수 'u'ê°€ 빠져있습니다model-based resamplingì— ì •ì˜ëœ index arrayê°€ 아닙니다influence valuesë“¤ì„ parametric bootstrap으로부터 ì°¾ì„ ìˆ˜ 없습니다유효하지 ì•Šì€ 'l'ì˜ ê°’ìž…ë‹ˆë‹¤ìœ íš¨í•˜ì§€ ì•Šì€ 'sim'ê°’ì´ ì œê³µë˜ì—ˆìŠµë‹ˆë‹¤'m'ì˜ ê¸¸ì´ê°€ 'strata'와 부합하지 않습니다'data'ì— í—ˆìš©ë˜ì§€ 않는 ê²°ì¸¡ì¹˜ë“¤ì´ ìžˆìŠµë‹ˆë‹¤í—ˆìš©ë˜ì§€ ì•Šì€ ë‹¤ë³€ëŸ‰ 시계열입니다'm'ì— ìŒìˆ˜ê°€ 제공ë˜ì—ˆìŠµë‹ˆë‹¤ì§€ì •ëœ 'data'ë„ ì•„ë‹ˆê³  bootstrap ê°ì²´ë„ ì•„ë‹™ë‹ˆë‹¤ì§€ì •ëœ 'statistic'ë„ ì•„ë‹ˆê³  bootstrap ê°ì²´ë„ 아닙니다Cox 모ë¸ì— ê³„ìˆ˜ë“¤ì´ ì—†ìœ¼ë¯€ë¡œ 모ë¸ì´ 무시ë˜ì—ˆìŠµë‹ˆë‹¤'boot'ì— í˜¸ì¶œì¤‘ì¸ ë°ì´í„°ê°€ 없습니다'A'ê°€ 가지는 ì—´ì˜ ê°œìˆ˜ (%d)는 'u'ê°€ 가지는 ê¸¸ì´ (%d)와 같지 않습니다't' ë˜ëŠ” 't0' 중 하나가 필요합니다'data'ì˜ %s와 %s ì—´ë“¤ë§Œì´ ì‚¬ìš©ë˜ì—ˆìŠµë‹ˆë‹¤'index'ì˜ ì²«ë²ˆì§¸ 2ê°œ ìš”ì†Œë“¤ë§Œì´ ì‚¬ìš©ë˜ì—ˆìŠµë‹ˆë‹¤'t'ì˜ ì²«ë²ˆì§¸ ì—´ë§Œ 사용ë˜ì—ˆìŠµë‹ˆë‹¤'index'ì˜ ì²«ë²ˆì§¸ ìš”ì†Œë§Œì„ ì‚¬ìš©í–ˆìŠµë‹ˆë‹¤'index'ì˜ ì²«ë²ˆì§¸ ìš”ì†Œë§Œì´ 'abc.ci'ì— ì‚¬ìš©ë˜ì—ˆìŠµë‹ˆë‹¤sim ì¸ìžì— "weird" ê°’ì€ "coxph" ê°ì²´ì™€ 함께 ì‚¬ìš©ë  ìˆ˜ 없습니다'var.t0'를 계산할 수 없습니다%fì— ëŒ€í•œ multiplier를 ì°¾ì„ ìˆ˜ 없습니다범위를 구할 수 없습니다알 수 없는 'sim'ì˜ ê°’ìž…ë‹ˆë‹¤ì¸ì‹í•  수 없는 'sim'ì˜ ê°’ìž…ë‹ˆë‹¤ìŠ¤ì¹¼ë¼ íŒŒë¼ë¯¸í„°ì¼ë•Œ 'boot.ci'를 사용하세요studentized intervalsì— ìš”êµ¬ë˜ì–´ì§€ëŠ” variance입니다boot/inst/po/pl/0000755000076600000240000000000011772542457013275 5ustar00ripleystaffboot/inst/po/pl/LC_MESSAGES/0000755000076600000240000000000011772542457015062 5ustar00ripleystaffboot/inst/po/pl/LC_MESSAGES/R-boot.mo0000644000076600000240000001624112122262110016534 0ustar00ripleystaffÞ•B,Y<  ¡ Âãú))@jš"³&Öý%9Vm.‹4º.ï*&Ip(%¹4ß5 ,J 7w +¯ Û $ú * !J l 2 -´ *â < J h ‡ 0¥ Ö ë ( '4 \ $x $  -á  ) %E k "‰ .¬ $Û %& Cdy#¬%Ð*ö*!Lh1…1·é&"'$J.ož1º!ì%%:K;†;Â:þ/9'i4‘1ÆGø@@-D¯4ô")/LB|)¿éKý1I${K .ì/-KDy¾(Ø.;0.l/›0Ë!üA`"€1£*Õ-8.<g9¤Þ%ü"?X*vA ;:!36#%@1 /9B (?&+=8>2 5 0$'".,<47-)* 'F.surv' is required but missing'G.surv' is required but missing'K' has been set to %f'K' outside allowable range'R' and 'alpha' have incompatible lengths'R' and 'theta' have incompatible lengths'R' must be positive'alpha' ignored; R[1L] = 0'strata' of wrong length'stype' must be "w" for type="inf"'t' and 't0' must be supplied together't' must of length %d'theta' must be supplied if R[1L] = 0'theta' or 'lambda' required'u' must be a function0 elements not allowed in 'q'R[1L] must be positive for frequency smoothingarguments are not all the same type of "boot" objectarray cannot be found for parametric bootstrapboot.array not implemented for this objectbootstrap object needed for type="reg"bootstrap output matrix missingbootstrap output object or 't0' requiredbootstrap replicates must be suppliedbootstrap variances needed for studentized intervalscontrol methods undefined when 'boot.out' has weightsdimensions of 'R' and 'weights' do not matcheither 'A' and 'u' or 'K.adj' and 'K2' must be suppliedeither 'boot.out' or 'w' must be specified.estimated adjustment 'a' is NAestimated adjustment 'w' is infiniteextreme order statistics used as endpointsextreme values used for quantilesfunction 'u' missingindex array not defined for model-based resamplingindex out of bounds; minimum index only used.indices are 
incompatible with 'ncol(data)'influence values cannot be found from a parametric bootstrapinput 't' ignored; type="inf"input 't' ignored; type="jack"input 't' ignored; type="pos"input 't0' ignored: neither 't' nor 'L' suppliedinvalid value of 'l'invalid value of 'sim' suppliedlength of 'm' incompatible with 'strata'likelihood exceeds %f at only one pointlikelihood never exceeds %fmissing values not allowed in 'data'multivariate time series not allowednegative value of 'm' suppliedno coefficients in Cox model -- model ignoredno data in call to 'boot'one of 't' or 't0' requiredonly first 2 elements of 'index' usedonly first column of 't' usedonly first element of 'index' usedonly first element of 'index' used in 'abc.ci'this type not implemented for Binarythis type not implemented for Poissonunable to calculate 'var.t0'unable to find multiplier for %funable to find rangeunknown value of 'sim'unrecognized value of 'sim'use 'boot.ci' for scalar parametersProject-Id-Version: boot 1.3-5 Report-Msgid-Bugs-To: bugs@r-project.org POT-Creation-Date: 2012-10-11 15:21 PO-Revision-Date: Last-Translator: Åukasz Daniel Language-Team: Åukasz Daniel Language: pl_PL MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit na-Revision-Date: 2012-05-29 07:55+0100 Plural-Forms: nplurals=3; plural=(n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2); X-Poedit-SourceCharset: iso-8859-1 X-Generator: Poedit 1.5.3 'F.surv' jest wymagany, ale jest nieobecny'G.surv' jest wymagany, ale jest nieobecny'K' zostaÅ‚ ustawiony na %f'K' poza dozwolonym zakresem'R' oraz 'alpha' majÄ… niekompatybilne dÅ‚ugoÅ›ci'R' oraz 'theta' majÄ… niekompatybilne dÅ‚ugoÅ›ci'R' musi być dodatnie'alpha' zostaÅ‚o zignornowane; R[1L]=0'strata' o niepoprawnej dÅ‚ugoÅ›ci'stype' musi być "w" dla type="inf"'t' oraz 't0' muszÄ… zostać dostarczone razem't' musi być dÅ‚ugoÅ›ci %d'theta' musi zostać dostarczona jeÅ›li R[1L] = 0'theta' lub 'lambda' sÄ… wymagane'u' musi być funkcjÄ…0 elementów nie jest dozwolone w 'q'R[1L] musi być dodatnia dla wygÅ‚adzania czÄ™stotliwoÅ›ciargumenty nie sÄ… wszystkie tego samego typu obiektu 'boot'nie można znaleźć tablicy dla parametrycznego bootstrapu'boot.array' nie zostaÅ‚ zaimplementowany dla tego obiektuobiekt bootstrapu jest potrzebny dla type="reg"brakuje wyjÅ›ciowej macierzy bootstrapuwymagany jest obiekt wyjÅ›ciowy bootstrapu albo 't0'bootstrapowane repliki muszÄ… zostać dostarczonepotrzebne sÄ… bootstrapowe wariancje dla studentyzowanych przedziałówmetody kontroli nie sÄ… zdefiniowane gdy 'boot.out' posiada wagiwymiary 'R' oraz 'weights' nie zgadzajÄ… siÄ™albo 'A' oraz 'u', albo 'K.adj' oraz 'K2' muszÄ… zostać dostarczonejedno z 'boot.out' lub 'w' musi zostać dostarczone.oszacowana korekta 'a' wynosi 'NA'oszacowana korekta 'w' wynosi nieskoÅ„czonośćekstremalnie uporzÄ…dkowana statystyka użyta jako punkty koÅ„coweekstremalne wartoÅ›ci użyte dla kwantylibrakuje funkcji 'u'tablica indeksów nie jest zdefiniowana dla próbkowania opartego na modeluindeks poza zakresem; użyto minimalnego indeksu.indeksy sÄ… niezgodne z 'ncol(data)'wartoÅ›ci wpÅ‚ywu nie mogÄ… zostać znalezione z parametrycznego bootstrapuwejÅ›cie 't' zostaÅ‚o zignornowane; type="inf"wejÅ›cie 't' zostaÅ‚o zignornowane; type="jack"wejÅ›cie 't' zostaÅ‚o zignorowane; type="pos"wejÅ›cie 't0' zostaÅ‚o zignornowane: nie dostarczono ani 't' ani 'L'niepoprawna wartość 'l'dostarczono niepoprawnÄ… wartość 'sim'dÅ‚ugość 'm' jest niekompatybilna z 'strata'funkcja wiarygodnoÅ›ci przekracza %f tylko w jednym 
punkciefunkcja wiarygodnoÅ›ci nigdy nie przekracza %fbrakujÄ…ce wartoÅ›ci nie sÄ… dozwolone w 'data'wielowymiarowe szeregi czasowe nie sÄ… dozwolonedostarczono ujemnÄ… wartość 'm'brak współczynników w modelu Coxa -- model zostaÅ‚ zignorowanybrak danych w wywoÅ‚aniu 'boot'jeden z 't' lub 't0' jest wymaganytylko pierwsze 2 elementy 'index' zostaÅ‚y użytetylko pierwsza kolumna 't' zostaÅ‚a użytatylko pierwszy element 'index' zostaÅ‚ użytytylko pierwszy element 'index' zostaÅ‚ użyty w 'abc.ci'ten typ nie jest zaimplementowany dla rozkÅ‚adu Bernoulliegoten typ nie jest zaimplementowany dla rozkÅ‚adu Poisson'anie można wyliczyć 'var.t0'nie można znaleźć mnożnika dla %fnie można znaleźć zakresunieznana wartość 'sim'nierozpoznana wartość 'sim'użyj 'boot.ci' dla skalarnych parametrówboot/inst/po/ru/0000755000076600000240000000000011663151666013305 5ustar00ripleystaffboot/inst/po/ru/LC_MESSAGES/0000755000076600000240000000000012032132427015053 5ustar00ripleystaffboot/inst/po/ru/LC_MESSAGES/R-boot.mo0000644000076600000240000002020112122262110016536 0ustar00ripleystaffÞ•NŒkü¨3© Ý þ6)R)|¦»/ÖL&s"Œ&¯Ö%ì / F 4d .™ 4È .ý *, &W ~ (ž %Ç 4í 5" ,X 7… +½ é $ *- !X z 2 - *ð < X v • 0³ ä ù ('Bj$†$«Ð-ï2-P~=˜Ö%ò%>"\.2®$á%.,[ x™®Å#á+Ù18 Dd„˜,·&ä )8C"|XŸø&%;a${ ¿Ø9÷:1+l4˜,Í$ú!+Am4CÂ*01'b*Š,µ9â1NDd7©#áCIg†-¤Ò#ì"/3%c*‰'´"Ü(ÿ*(4Sˆ3£×*ó+&J'q2™9Ì2194k »Úï ,( +U L9JB*5= N+1'47 - MAC?8D;@,2>H.$#/6I 0E G:F&)%( K3!"<%s distribution not supported: using normal instead'F.surv' is required but missing'G.surv' is required but missing'K' has been set to %f'K' outside allowable range'R' and 'alpha' have incompatible lengths'R' and 'theta' have incompatible lengths'R' must be positive'alpha' ignored; R[1L] = 0'data' must be a matrix with at least 2 columns'index' must contain 2 elements'simple=TRUE' is only valid for 'sim="ordinary", stype="i", n=0', so ignored'strata' of wrong length'stype' must be "w" for type="inf"'t' and 't0' must be supplied together't' must of length %d'theta' must be supplied if R[1L] = 0'theta' or 'lambda' required'u' must be a function0 elements not allowed in 'q'BCa intervals not defined for time series bootstrapsR[1L] must be positive for frequency smoothingarguments are not all the same type of "boot" objectarray cannot be found for parametric bootstrapboot.array not implemented for this objectbootstrap object needed for type="reg"bootstrap output matrix missingbootstrap output object or 't0' requiredbootstrap replicates must be suppliedbootstrap variances needed for studentized intervalscontrol methods undefined when 'boot.out' has weightsdimensions of 'R' and 'weights' do not matcheither 'A' and 'u' or 'K.adj' and 'K2' must be suppliedeither 'boot.out' or 'w' must be specified.estimated adjustment 'a' is NAestimated adjustment 'w' is infiniteextreme order statistics used as endpointsextreme values used for quantilesfunction 'u' missingindex array not defined for model-based resamplingindex out of bounds; minimum index only used.indices are incompatible with 'ncol(data)'influence values cannot be found from a parametric bootstrapinput 't' ignored; type="inf"input 't' ignored; type="jack"input 't' ignored; type="pos"input 't0' ignored: neither 't' nor 'L' suppliedinvalid value of 'l'invalid value of 'sim' suppliedlength of 'm' incompatible with 'strata'likelihood exceeds %f at only one pointlikelihood never exceeds %fmissing values not allowed in 'data'multivariate time series not allowednegative value of 'm' suppliedneither 'data' nor bootstrap object specifiedneither 'statistic' nor bootstrap object 
specifiedno coefficients in Cox model -- model ignoredno data in call to 'boot'number of columns of 'A' (%d) not equal to length of 'u' (%d)one of 't' or 't0' requiredonly columns %s and %s of 'data' usedonly first 2 elements of 'index' usedonly first column of 't' usedonly first element of 'index' usedonly first element of 'index' used in 'abc.ci'sim = "weird" cannot be used with a "coxph" objectthis type not implemented for Binarythis type not implemented for Poissonunable to achieve requested overall error rateunable to calculate 'var.t0'unable to find multiplier for %funable to find rangeunknown value of 'sim'unrecognized value of 'sim'use 'boot.ci' for scalar parametersvariance required for studentized intervalsProject-Id-Version: R 2.10.0 Report-Msgid-Bugs-To: bugs@r-project.org POT-Creation-Date: 2012-10-11 15:21 PO-Revision-Date: 2013-03-19 14:42-0600 Last-Translator: Alexey Shipunov Language-Team: Russian Language: MIME-Version: 1.0 Content-Type: text/plain; charset=KOI8-R Content-Transfer-Encoding: 8bit X-Poedit-Language: Russian Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2); ÒÁÓÐÒÅÄÅÌÅÎÉÅ %s ÎÅ ÐÏÄÄÅÒÖÉ×ÁÅÔÓÑ, ÉÓÐÏÌØÚÕÀ ÎÏÒÍÁÌØÎÏÅ'F.surv' ÔÒÅÂÕÅÔÓÑ, ÎÏ ÐÒÏÐÕÝÅÎ'G.surv' ÔÒÅÂÕÅÔÓÑ, ÎÏ ÐÒÏÐÕÝÅÎ'K' ÕÓÔÁÎÏ×ÌÅÎ × %f'K' ×ÎÅ ÄÏÐÕÓÔÉÍÏÇÏ ÐÒÏÍÅÖÕÔËÁ'R' É 'alpha' ÄÏÌÖÎÙ ÉÍÅÔØ ÓÏ×ÍÅÓÔÉÍÙÅ ÄÌÉÎÙÕ 'R' É 'theta' -- ÎÅÓÏ×ÍÅÓÔÉÍÙÅ ÄÌÉÎÙ'R' ÄÏÌÖÅÎ ÂÙÔØ ÐÏÌÏÖÉÔÅÌØÎÙÍ'alpha' ÐÒÏÐÕÝÅÎ; R[1L]=0ÄÁÎÎÙÅ ÄÏÌÖÎÙ ÂÙÔØ ÍÁÔÒÉÃÅÊ ÐÏ ÍÅÎØÛÅÊ ÍÅÒÅ ÉÚ 2 ËÏÌÏÎÏËÉÎÄÅËÓ ÄÏÌÖÅÎ ÓÏÄÅÒÖÁÔØ 2 ÜÌÅÍÅÎÔÁ'simple=TRUE' ÐÒÁ×ÉÌØÎÏ ÔÏÌØËÏ ÄÌÑ 'sim="ordinary", stype="i", n=0, ÐÏÜÔÏÍÕ ÐÒÏÐÕÓËÁÅÔÓÑ'strata' ÎÅÐÒÁ×ÉÌØÎÏÊ ÄÌÉÎÙ'stype' ÄÏÌÖÅÎ ÂÙÔØ "w" ÄÌÑ type="inf"'t' É 't0' ÄÏÌÖÎÙ ÂÙÔØ ÕËÁÚÁÎÙ ×ÍÅÓÔÅ't' ÄÏÌÖÅÎ ÂÙÔØ ÄÌÉÎÏÊ %dÎÁÄÏ ÕËÁÚÁÔØ 'theta', ÅÓÌÉ R[1L] = 0ÔÒÅÂÕÅÔÓÑ 'theta' ÉÌÉ 'lambda''u' ÄÏÌÖÎÁ ÂÙÔØ ÆÕÎËÃÉÅÊ0 ÜÌÅÍÅÎÔÏ× × 'q' ÎÅ ÒÁÚÒÅÛÅÎÏBCa ÉÎÔÅÒ×ÁÌÙ ÎÅ ÏÐÒÅÄÅÌÅÎÙ ÄÌÑ ÂÕÔÓÔÒÅÐÁ ×ÒÅÍÅÎÎÙÈ ÒÑÄÏ×R[1L] ÄÏÌÖÅÎ ÂÙÔØ ÐÏÌÏÖÉÔÅÌØÎÙÍ ÄÌÑ ÞÁÓÔÏÔÎÏÇÏ ÓÇÌÁÖÉ×ÁÎÉÑÎÅ ×ÓÅ ÁÒÇÕÍÅÎÔÙ ÏÂßÅËÔÁ "boot" ÏÄÎÏÇÏ ÔÉÐÁÎÅ ÍÏÇÕ ÎÁÊÔÉ ÍÁÔÒÉÃÕ ÄÌÑ ÐÁÒÁÍÅÔÒÉÞÅÓËÏÇÏ ÂÕÔÓÔÒÅÐÁ'boot.array' ÄÌÑ ÜÔÏÇÏ ÏÂßÅËÔÁ ÎÅ ÒÁÚÒÁÂÏÔÁÎÄÌÑ type="reg" ÎÕÖÅÎ ÂÕÔÓÔÒÅÐ-ÏÂßÅËÔÐÒÏÐÕÝÅÎÁ ÍÁÔÒÉÃÁ ÂÕÔÓÔÒÅÐ-×Ù×ÏÄÁÔÒÅÂÕÅÔÓÑ ÏÂßÅËÔ ×Ù×ÏÄÁ ÂÕÔÓÔÒÅÐÁ ÌÉÂÏ 't0'ÎÁÄÏ ÕËÁÚÁÔØ ÂÕÔÓÔÒÅÐ-ÒÅÐÌÉËÁÔÙÂÕÔÓÔÒÅÐ-×ÁÒÉÁÎÓÙ ÎÕÖÎÙ ÄÌÑ ÉÎÔÅÒ×ÁÌÏ× óÔØÀÄÅÎÔ-ÔÉÐÁÍÅÔÏÄÙ ËÏÎÔÒÏÌÑ ÎÅ ÏÐÒÅÄÅÌÅÎÙ × ÔÏ ×ÒÅÍÑ ËÁË Õ 'boot.out' ÅÓÔØ ×ÅÓÁÉÚÍÅÒÅÎÉÑ 'R' É 'weights' ÎÅ ÓÏÏÔ×ÅÔÓÔ×ÕÀÔÎÁÄÏ ÕËÁÚÁÔØ ÌÉÂÏ 'A' É 'u', ÌÉÂÏ 'K.adj' É 'K2'ÎÁÄÏ ÕËÁÚÁÔØ ÌÉÂÏ 'boot.out', ÌÉÂÏ 'w'.ÐÒÅÄÐÏÌÁÇÁÅÍÁÑ ËÏÒÒÅËÔÉÒÏ×ËÁ 'a' -- ÜÔÏ NAÐÒÅÄÐÏÌÁÇÁÅÍÁÑ ËÏÒÒÅËÔÉÒÏ×ËÁ 'w' -- infinite'extreme order statistics' ÉÓÐÏÌØÚÏ×ÁÎÁ × ËÏÎÅÞÎÙÈ ÔÏÞËÁÈÜËÓÔÒÅÍÁÌØÎÙÅ ÚÎÁÞÅÎÉÑ ÉÓÐÏÌØÚÏ×ÁÎÙ ÄÌÑ Ë×ÁÎÔÉÌÅÊÆÕÎËÃÉÑ 'u' ÐÒÏÐÕÝÅÎÁÄÌÑ ÏÓÎÏ×ÁÎÎÏÇÏ ÎÁ ÍÏÄÅÌÉ ÒÅÓÜÍÐÌÉÎÇÁ ÎÅ ÏÐÒÅÄÅÌÅÎÁ ÍÁÔÒÉÃÁ ÉÎÄÅËÓÏ×ÉÎÄÅËÓ ×ÎÅ ÇÒÁÎÉÃ; ÉÓÐÏÌØÚÏ×ÁÎ ÌÉÛØ ÍÉÎÉÍÁÌØÎÙÊ ÉÎÄÅËÓ.ÉÎÄÅËÓÙ ÎÅÓÏ×ÍÅÓÔÉÍÙ Ó 'ncol(data)'ÚÎÁÞÅÎÉÑ ×ÌÉÑÎÉÑ ÎÅÌØÚÑ ÎÁÊÔÉ ÐÒÉ ÐÏÍÏÝÉ ÐÁÒÁÍÅÔÒÉÞÅÓËÏÇÏ ÂÕÔÓÔÒÅÐÁ××ÏÄ 't' ÐÒÏÐÕÝÅÎ; type="inf"××ÏÄ 't' ÐÒÏÐÕÝÅÎ; type="jack"××ÏÄ 't' ÐÒÏÐÕÝÅÎ; type="pos"××ÏÄ 't0' ÐÒÏÐÕÝÅÎ: ÎÅ ÕËÁÚÁÎÏ ÎÉ 't', ÎÉ 'L'ÎÅÐÒÁ×ÉÌØÎÏÅ ÚÎÁÞÅÎÉÅ 'l'ÕËÁÚÁÎÏ ÎÅÐÒÁ×ÉÌØÎÏÅ ÚÎÁÞÅÎÉÅ 'sim'ÄÌÉÎÁ 'm' ÎÅÓÏ×ÍÅÓÔÉÍÁ ÓÏ 'strata'ÐÒÁ×ÄÏÐÏÄÏÂÉÅ ÐÒÅ×ÙÛÁÅÔ %f ÔÏÌØËÏ × ÏÄÎÏÊ ÔÏÞËÅÐÒÁ×ÄÏÐÏÄÏÂÉÅ ÎÉËÏÇÄÁ ÎÅ ÐÒÅ×ÙÛÁÅÔ %fÐÒÏÐÕÝÅÎÎÙÅ ÚÎÁÞÅÎÉÑ × ÄÁÎÎÙÈ ÎÅ ÒÁÚÒÅÛÅÎÙÍÎÏÇÏÍÅÒÎÙÅ ×ÒÅÍÅÎÎÙÅ ÒÑÄÙ ÎÅ ÒÁÚÒÅÛÅÎÙÕËÁÚÁÎÏ ÏÔÒÉÃÁÔÅÌØÎÏÅ ÚÎÁÞÅÎÉÅ 'm'ÎÅÔ ÄÁÎÎÙÈ ÉÌÉ ÎÅ ÕËÁÚÁÎ ÂÕÔÓÔÒÅÐ-ÏÂßÅËÔÎÅÔ 'statistic' ÉÌÉ ÕËÁÚÁÎ ÂÕÔÓÔÒÅÐ-ÏÂßÅËÔ× ÍÏÄÅÌÉ 'Cox' ÎÅÔ ËÏÜÆÆÉÃÉÅÎÔÏ× -- ÍÏÄÅÌØ ÐÒÏÐÕÝÅÎÁÎÅÔ ÄÁÎÎÙÈ 
× ×ÙÚÏ×Å 'boot'ËÏÌÉÞÅÓÔ×Ï ËÏÌÏÎÏË 'A' (%d) ÎÅ ÒÁ×ÎÏ ÄÌÉÎÅ 'u' (%d)ÔÒÅÂÕÅÔÓÑ ÏÄÎÏ 't' ÉÌÉ 't0'ÉÓÐÏÌØÚÏ×ÁÎÙ ÔÏÌØËÏ ËÏÌÏÎËÉ %s É %s ÄÁÎÎÙÈÌÉÛØ ÐÅÒ×ÙÅ 2 ÜÌÅÍÅÎÔÁ ÉÎÄÅËÓÁ ÉÓÐÏÌØÚÏ×ÁÎÙÉÓÐÏÌØÚÏ×ÁÎÁ ÔÏÌØËÏ ÐÅÒ×ÁÑ ËÏÌÏÎËÁ 't'ÉÓÐÏÌØÚÏ×ÁÎ ÌÉÛØ ÐÅÒ×ÙÊ ÜÌÅÍÅÎÔ ÉÎÄÅËÓÁÌÉÛØ ÐÅÒ×ÙÊ ÜÌÅÍÅÎÔ ÉÎÄÅËÓÁ ÉÓÐÏÌØÚÏ×ÁÎ × 'abc.ci'sim="weird" ÎÅ ÍÏÖÅÔ ÂÙÔØ ÉÓÐÏÌØÚÏ×ÁÎ ÄÌÑ ÏÂßÅËÔÁ "coxph"ÜÔÏÔ ÔÉÐ ÎÅ ÒÁÚÒÁÂÏÔÁÎ ÄÌÑ ÂÉÎÁÒÎÏÇÏ ÒÁÓÐÒÅÄÅÌÅÎÉÑÜÔÏÔ ÔÉÐ ÎÅ ÒÁÚÒÁÂÏÔÁÎ ÄÌÑ ÒÁÓÐÒÅÄÅÌÅÎÉÑ ðÕÁÓÓÏÎÁÎÅ ÍÏÇÕ ÄÏÓÔÉÞØ ÔÒÅÂÕÅÍÏÇÏ ÏÂÝÅÇÏ ÓÏÏÔÎÏÛÅÎÉÑ ÏÛÉÂÏËÎÅ ÍÏÇÕ ÐÏÓÞÉÔÁÔØ 'var.t0'ÎÅ ÍÏÇÕ ÎÁÊÔÉ ÍÎÏÖÉÔÅÌØ ÄÌÑ %fÎÅ ÍÏÇÕ ÎÁÊÔÉ ÒÁÚÍÁÈÎÅÉÚ×ÅÓÔÎÏÅ ÚÎÁÞÅÎÉÅ 'sim'ÎÅÒÁÓÐÏÚÎÁÎÎÏÅ ÚÎÁÞÅÎÉÅ 'sim'ÉÓÐÏÌØÚÕÀ 'boot.ci' ÄÌÑ ÓËÁÌÑÒÎÙÈ ÐÁÒÁÍÅÔÒÏ×ÄÌÑ ÉÎÔÅÒ×ÁÌÏ× óÔØÀÄÅÎÔ-ÔÉÐÁ ÎÕÖÎÁ ×ÁÒÉÁÎÓÁboot/man/0000755000076600000240000000000011663151666012037 5ustar00ripleystaffboot/man/EEF.profile.Rd0000644000076600000240000000423511566473062014367 0ustar00ripleystaff\name{EEF.profile} \alias{EEF.profile} \alias{EL.profile} \title{ Empirical Likelihoods} \description{ Construct the empirical log likelihood or empirical exponential family log likelihood for a mean.} \usage{ EEF.profile(y, tmin = min(y) + 0.1, tmax = max(y) - 0.1, n.t = 25, u = function(y, t) y - t) EL.profile(y, tmin = min(y) + 0.1, tmax = max(y) - 0.1, n.t = 25, u = function(y, t) y - t) } \arguments{ \item{y}{A vector or matrix of data} \item{tmin}{ The minimum value of the range over which the likelihood should be computed. This must be larger than \code{min(y)}.} \item{tmax}{ The maximum value of the range over which the likelihood should be computed. This must be smaller than \code{max(y)}.} \item{n.t}{ The number of points between \code{tmin} and \code{tmax} at which the value of the log-likelihood should be computed.} \item{u}{A function of the data and the parameter.} } \details{ These functions calculate the log likelihood for a mean using either an empirical likelihood or an empirical exponential family likelihood. They are supplied as part of the package \code{boot} for demonstration purposes with the practicals in chapter 10 of Davison and Hinkley (1997). The functions are not intended for general use and are not supported as part of the \code{boot}package. For more general and more robust code to calculate empirical likelihoods see Professor A. B. Owen's empirical likelihood home page at the URL \url{http://www-stat.stanford.edu/~owen/empirical/}.} \value{ A matrix with \code{n.t} rows. The first column contains the values of the parameter used. The second column of the output of \code{EL.profile} contains the values of the empirical log likelihood. The second and third columns of the output of \code{EEF.profile} contain two versions of the empirical exponential family log-likelihood. The final column of the output matrix contains the values of the Lagrange multiplier used in the optimization procedure. } \references{ Davison, A. C. and Hinkley, D. V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \author{Angelo J. Canty} \keyword{htest} boot/man/Imp.Estimates.Rd0000644000076600000240000001471711566414741015020 0ustar00ripleystaff\name{Imp.Estimates} \alias{Imp.Estimates} \alias{imp.moments} \alias{imp.prob} \alias{imp.quantile} \alias{imp.reg} \title{ Importance Sampling Estimates } \description{ Central moment, tail probability, and quantile estimates for a statistic under importance resampling. 
} \usage{ imp.moments(boot.out = NULL, index = 1, t = boot.out$t[, index], w = NULL, def = TRUE, q = NULL) imp.prob(boot.out = NULL, index = 1, t0 = boot.out$t0[index], t = boot.out$t[, index], w = NULL, def = TRUE, q = NULL) imp.quantile(boot.out = NULL, alpha = NULL, index = 1, t = boot.out$t[, index], w = NULL, def = TRUE, q = NULL) } \arguments{ \item{boot.out}{ A object of class \code{"boot"} generated by a call to \code{boot} or \code{tilt.boot}. Use of these functions makes sense only when the bootstrap resampling used unequal weights for the observations. If the importance weights \code{w} are not supplied then \code{boot.out} is a required argument. It is also required if \code{t} is not supplied. } \item{alpha}{ The alpha levels for the required quantiles. The default is to calculate the 1\%, 2.5\%, 5\%, 10\%, 90\%, 95\%, 97.5\% and 99\% quantiles. } \item{index}{ The index of the variable of interest in the output of \code{boot.out$statistic}. This is not used if the argument \code{t} is supplied. } \item{t0}{ The values at which tail probability estimates are required. For each value \code{t0[i]} the function will estimate the bootstrap cdf evaluated at \code{t0[i]}. If \code{imp.prob} is called without the argument \code{t0} then the bootstrap cdf evaluated at the observed value of the statistic is found. } \item{t}{ The bootstrap replicates of a statistic. By default these are taken from the bootstrap output object \code{boot.out} but they can be supplied separately if required (e.g. when the statistic of interest is a function of the calculated values in \code{boot.out}). Either \code{boot.out} or \code{t} must be supplied. } \item{w}{ The importance resampling weights for the bootstrap replicates. If they are not supplied then \code{boot.out} must be supplied, in which case the importance weights are calculated by a call to \code{imp.weights}. } \item{def}{ A logical value indicating whether a defensive mixture is to be used for weight calculation. This is used only if \code{w} is missing and it is passed unchanged to \code{imp.weights} to calculate \code{w}. } \item{q}{ A vector of probabilities specifying the resampling distribution from which any estimates should be found. In general this would correspond to the usual bootstrap resampling distribution which gives equal weight to each of the original observations. The estimates depend on this distribution only through the importance weights \code{w} so this argument is ignored if \code{w} is supplied. If \code{w} is missing then \code{q} is passed as an argument to \code{imp.weights} and used to find \code{w}. } } \value{ A list with the following components : \item{alpha}{ The \code{alpha} levels used for the quantiles, if \code{imp.quantile} is used. } \item{t0}{ The values at which the tail probabilities are estimated, if \code{imp.prob} is used. } \item{raw}{ The raw importance resampling estimates. For \code{imp.moments} this has length 2, the first component being the estimate of the mean and the second being the variance estimate. For \code{imp.prob}, \code{raw} is of the same length as \code{t0}, and for \code{imp.quantile} it is of the same length as \code{alpha}. } \item{rat}{ The ratio importance resampling estimates. In this method the weights \code{w} are rescaled to have average value one before they are used. The format of this vector is the same as \code{raw}. } \item{reg}{ The regression importance resampling estimates. 
In this method the weights which are used are derived from a regression of \code{t*w} on \code{w}. This choice of weights can be shown to minimize the variance of the weights and also the Euclidean distance of the weights from the uniform weights. The format of this vector is the same as \code{raw}. }} \references{ Davison, A. C. and Hinkley, D. V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Hesterberg, T. (1995) Weighted average importance sampling and defensive mixture distributions. \emph{Technometrics}, \bold{37}, 185--194. Johns, M.V. (1988) Importance sampling for bootstrap confidence intervals. \emph{Journal of the American Statistical Association}, \bold{83}, 709--714. } \seealso{ \code{\link{boot}}, \code{\link{exp.tilt}}, \code{\link{imp.weights}}, \code{\link{smooth.f}}, \code{\link{tilt.boot}} } \examples{ # Example 9.8 of Davison and Hinkley (1997) requires tilting the # resampling distribution of the studentized statistic to be centred # at the observed value of the test statistic, 1.84. In this example # we show how certain estimates can be found using resamples taken from # the tilted distribution. grav1 <- gravity[as.numeric(gravity[,2]) >= 7, ] grav.fun <- function(dat, w, orig) { strata <- tapply(dat[, 2], as.numeric(dat[, 2])) d <- dat[, 1] ns <- tabulate(strata) w <- w/tapply(w, strata, sum)[strata] mns <- as.vector(tapply(d * w, strata, sum)) # drop names mn2 <- tapply(d * d * w, strata, sum) s2hat <- sum((mn2 - mns^2)/ns) c(mns[2] - mns[1], s2hat, (mns[2] - mns[1] - orig)/sqrt(s2hat)) } grav.z0 <- grav.fun(grav1, rep(1, 26), 0) grav.L <- empinf(data = grav1, statistic = grav.fun, stype = "w", strata = grav1[,2], index = 3, orig = grav.z0[1]) grav.tilt <- exp.tilt(grav.L, grav.z0[3], strata = grav1[, 2]) grav.tilt.boot <- boot(grav1, grav.fun, R = 199, stype = "w", strata = grav1[, 2], weights = grav.tilt$p, orig = grav.z0[1]) # Since the weights are needed for all calculations, we shall calculate # them once only. grav.w <- imp.weights(grav.tilt.boot) grav.mom <- imp.moments(grav.tilt.boot, w = grav.w, index = 3) grav.p <- imp.prob(grav.tilt.boot, w = grav.w, index = 3, t0 = grav.z0[3]) unlist(grav.p) grav.q <- imp.quantile(grav.tilt.boot, w = grav.w, index = 3, alpha = c(0.9, 0.95, 0.975, 0.99)) as.data.frame(grav.q) } \keyword{htest} \keyword{nonparametric} % Converted by Sd2Rd version 1.15. boot/man/abc.ci.Rd0000644000076600000240000000650011566130145013435 0ustar00ripleystaff\name{abc.ci} \alias{abc.ci} \title{ Nonparametric ABC Confidence Intervals } \description{ Calculate equi-tailed two-sided nonparametric approximate bootstrap confidence intervals for a parameter, given a set of data and an estimator of the parameter, using numerical differentiation. } \usage{ abc.ci(data, statistic, index=1, strata=rep(1, n), conf=0.95, eps=0.001/n, \dots) } \arguments{ \item{data}{ A data set expressed as a vector, matrix or data frame. } \item{statistic}{ A function which returns the statistic of interest. The function must take at least 2 arguments; the first argument should be the data and the second a vector of weights. The weights passed to \code{statistic} will be normalized to sum to 1 within each stratum. Any other arguments should be passed to \code{abc.ci} as part of the \code{\dots{}} argument. } \item{index}{ If \code{statistic} returns a vector of length greater than 1, then this indicates the position of the variable of interest within that vector. 
} \item{strata}{ A factor or numerical vector indicating to which sample each observation belongs in multiple sample problems. The default is the one-sample case. } \item{conf}{ A scalar or vector containing the confidence level(s) of the required interval(s). } \item{eps}{ The value of epsilon to be used for the numerical differentiation. } \item{...}{ Any other arguments for \code{statistic}. These will be passed unchanged to \code{statistic} each time it is called within \code{abc.ci}. }} \value{ A \code{length(conf)} by 3 matrix where each row contains the confidence level followed by the lower and upper end-points of the ABC interval at that level. } \details{ This function is based on the function \code{abcnon} written by R. Tibshirani. A listing of the original function is available in DiCiccio and Efron (1996). The function uses numerical differentiation for the first and second derivatives of the statistic and then uses these values to approximate the bootstrap BCa intervals. The total number of evaluations of the statistic is \code{2*n+2+2*length(conf)} where \code{n} is the number of data points (plus calculation of the original value of the statistic). The function works for the multiple sample case without the need to rewrite the statistic in an artificial form since the stratified normalization is done internally by the function. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}, Chapter 5. Cambridge University Press. DiCiccio, T. J. and Efron B. (1992) More accurate confidence intervals in exponential families. \emph{Biometrika}, \bold{79}, 231--245. DiCiccio, T. J. and Efron B. (1996) Bootstrap confidence intervals (with Discussion). \emph{Statistical Science}, \bold{11}, 189--228. } \seealso{ \code{\link{boot.ci}} } \examples{ # 90\% and 95\% confidence intervals for the correlation # coefficient between the columns of the bigcity data abc.ci(bigcity, corr, conf=c(0.90,0.95)) # A 95\% confidence interval for the difference between the means of # the last two samples in gravity mean.diff <- function(y, w) { gp1 <- 1:table(as.numeric(y$series))[1] sum(y[gp1, 1] * w[gp1]) - sum(y[-gp1, 1] * w[-gp1]) } grav1 <- gravity[as.numeric(gravity[, 2]) >= 7, ] abc.ci(grav1, mean.diff, strata = grav1$series) } \keyword{nonparametric} \keyword{htest} boot/man/acme.Rd0000644000076600000240000000217311110552530013214 0ustar00ripleystaff\name{acme} \alias{acme} \title{ Monthly Excess Returns } \description{ The \code{acme} data frame has 60 rows and 3 columns. The excess return for the Acme Cleveland Corporation are recorded along with those for all stocks listed on the New York and American Stock Exchanges were recorded over a five year period. These excess returns are relative to the return on a risk-less investment such a U.S. Treasury bills. } \usage{ acme } \format{ This data frame contains the following columns: \describe{ \item{\code{month}}{ A character string representing the month of the observation. } \item{\code{market}}{ The excess return of the market as a whole. } \item{\code{acme}}{ The excess return for the Acme Cleveland Corporation. }}} \source{ The data were obtained from Simonoff, J.S. and Tsai, C.-L. (1994) Use of modified profile likelihood for improved tests of constancy of variance in regression. \emph{Applied Statistics}, \bold{43}, 353--370. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. 
} \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/aids.Rd0000644000076600000240000000377311566130164013250 0ustar00ripleystaff\name{aids} \alias{aids} \title{ Delay in AIDS Reporting in England and Wales } \description{ The \code{aids} data frame has 570 rows and 6 columns. Although all cases of AIDS in England and Wales must be reported to the Communicable Disease Surveillance Centre, there is often a considerable delay between the time of diagnosis and the time that it is reported. In estimating the prevalence of AIDS, account must be taken of the unknown number of cases which have been diagnosed but not reported. The data set here records the reported cases of AIDS diagnosed from July 1983 and until the end of 1992. The data are cross-classified by the date of diagnosis and the time delay in the reporting of the cases. } \usage{ aids } \format{ This data frame contains the following columns: \describe{ \item{\code{year}}{ The year of the diagnosis. } \item{\code{quarter}}{ The quarter of the year in which diagnosis was made. } \item{\code{delay}}{ The time delay (in months) between diagnosis and reporting. 0 means that the case was reported within one month. Longer delays are grouped in 3 month intervals and the value of \code{delay} is the midpoint of the interval (therefore a value of \code{2} indicates that reporting was delayed for between 1 and 3 months). } \item{\code{dud}}{ An indicator of censoring. These are categories for which full information is not yet available and the number recorded is a lower bound only. } \item{\code{time}}{ The time interval of the diagnosis. That is the number of quarters from July 1983 until the end of the quarter in which these cases were diagnosed. } \item{\code{y}}{ The number of AIDS cases reported. }}} \source{ The data were obtained from De Angelis, D. and Gilks, W.R. (1994) Estimating acquired immune deficiency syndrome accounting for reporting delay. \emph{Journal of the Royal Statistical Society, A}, \bold{157}, 31--40. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} boot/man/aircondit.Rd0000644000076600000240000000216011110552530014257 0ustar00ripleystaff\name{aircondit} \alias{aircondit} \alias{aircondit7} \title{ Failures of Air-conditioning Equipment } \description{ Proschan (1963) reported on the times between failures of the air-conditioning equipment in 10 Boeing 720 aircraft. The \code{aircondit} data frame contains the intervals for the ninth aircraft while \code{aircondit7} contains those for the seventh aircraft. Both data frames have just one column. Note that the data have been sorted into increasing order. } \usage{ aircondit } \format{ The data frames contain the following column: \describe{ \item{\code{hours}}{ The time interval in hours between successive failures of the air-conditioning equipment }}} \source{ The data were taken from Cox, D.R. and Snell, E.J. (1981) \emph{Applied Statistics: Principles and Examples}. Chapman and Hall. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Proschan, F. (1963) Theoretical explanation of observed decreasing failure rate. \emph{Technometrics}, \bold{5}, 375-383. } \keyword{datasets} % Converted by Sd2Rd version 1.15. 
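The failure-time data in aircondit lend themselves to a small worked illustration of the nonparametric bootstrap provided by this package. The sketch below is not part of the package's .Rd examples; the choice of statistic (the mean time between failures) and R = 999 replicates are assumptions made here purely for demonstration.

## Minimal sketch (illustrative only): nonparametric bootstrap of the mean
## failure interval. With the default stype = "i", the statistic's second
## argument is a vector of resampled row indices.
library(boot)
mean.fun <- function(d, i) mean(d$hours[i])                  # 'hours' is the data column
air.boot <- boot(aircondit, statistic = mean.fun, R = 999)   # R = 999 chosen for illustration
air.boot                                                     # bootstrap bias and standard error
boot.ci(air.boot, type = c("norm", "basic", "perc"))         # three common interval types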
boot/man/amis.Rd0000644000076600000240000000364111110552530013241 0ustar00ripleystaff\name{amis} \alias{amis} \title{ Car Speeding and Warning Signs } \description{ The \code{amis} data frame has 8437 rows and 4 columns. In a study into the effect that warning signs have on speeding patterns, Cambridgeshire County Council considered 14 pairs of locations. The locations were paired to account for factors such as traffic volume and type of road. One site in each pair had a sign erected warning of the dangers of speeding and asking drivers to slow down. No action was taken at the second site. Three sets of measurements were taken at each site. Each set of measurements was nominally of the speeds of 100 cars but not all sites have exactly 100 measurements. These speed measurements were taken before the erection of the sign, shortly after the erection of the sign, and again after the sign had been in place for some time. } \usage{ amis } \format{ This data frame contains the following columns: \describe{ \item{\code{speed}}{ Speeds of cars (in miles per hour). } \item{\code{period}}{ A numeric column indicating the time that the reading was taken. A value of 1 indicates a reading taken before the sign was erected, a 2 indicates a reading taken shortly after erection of the sign and a 3 indicates a reading taken after the sign had been in place for some time. } \item{\code{warning}}{ A numeric column indicating whether the location of the reading was chosen to have a warning sign erected. A value of 1 indicates presence of a sign and a value of 2 indicates that no sign was erected. } \item{\code{pair}}{ A numeric column giving the pair number at which the reading was taken. Pairs were numbered from 1 to 14. }}} \source{ The data were kindly made available by Mr. Graham Amis, Cambridgeshire County Council, U.K. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/aml.Rd0000644000076600000240000000362111110552530013057 0ustar00ripleystaff\name{aml} \alias{aml} \title{ Remission Times for Acute Myelogenous Leukaemia } \description{ The \code{aml} data frame has 23 rows and 3 columns. A clinical trial to evaluate the efficacy of maintenance chemotherapy for acute myelogenous leukaemia was conducted by Embury et al. (1977) at Stanford University. After reaching a stage of remission through treatment by chemotherapy, patients were randomized into two groups. The first group received maintenance chemotherapy and the second group did not. The aim of the study was to see if maintenance chemotherapy increased the length of the remission. The data here formed a preliminary analysis which was conducted in October 1974. } \usage{ aml } \format{ This data frame contains the following columns: \describe{ \item{\code{time}}{ The length of the complete remission (in weeks). } \item{\code{cens}}{ An indicator of right censoring. 1 indicates that the patient had a relapse and so \code{time} is the length of the remission. 0 indicates that the patient had left the study or was still in remission in October 1974, that is the length of remission is right-censored. } \item{\code{group}}{ The group into which the patient was randomized. Group 1 received maintenance chemotherapy, group 2 did not. }}} \source{ The data were obtained from Miller, R.G. (1981) \emph{Survival Analysis}. John Wiley. } \references{ Davison, A.C. and Hinkley, D.V. 
(1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Embury, S.H, Elias, L., Heller, P.H., Hood, C.E., Greenberg, P.L. and Schrier, S.L. (1977) Remission maintenance therapy in acute myelogenous leukaemia. \emph{Western Journal of Medicine}, \bold{126}, 267-272. } \note{ Package \pkg{survival} also has a dataset \code{aml}. It is the same data with different names and with \code{group} replaced by a factor \code{x}. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/beaver.Rd0000644000076600000240000000326011110552530013551 0ustar00ripleystaff\name{beaver} \alias{beaver} \title{ Beaver Body Temperature Data } \description{ The \code{beaver} data frame has 100 rows and 4 columns. It is a multivariate time series of class \code{"ts"} and also inherits from class \code{"data.frame"}. This data set is part of a long study into body temperature regulation in beavers. Four adult female beavers were live-trapped and had a temperature-sensitive radio transmitter surgically implanted. Readings were taken every 10 minutes. The location of the beaver was also recorded and her activity level was dichotomized by whether she was in the retreat or outside of it since high-intensity activities only occur outside of the retreat. The data in this data frame are those readings for one of the beavers on a day in autumn. } \usage{ beaver } \format{ This data frame contains the following columns: \describe{ \item{\code{day}}{ The day number. The data includes only data from day 307 and early 308. } \item{\code{time}}{ The time of day formatted as hour-minute. } \item{\code{temp}}{ The body temperature in degrees Celsius. } \item{\code{activ}}{ The dichotomized activity indicator. \code{1} indicates that the beaver is outside of the retreat and therefore engaged in high-intensity activity. }}} \source{ The data were obtained from Reynolds, P.S. (1994) Time-series analyses of beaver body temperatures. In \emph{Case Studies in Biometry}. N. Lange, L. Ryan, L. Billard, D. Brillinger, L. Conquest and J. Greenhouse (editors), 211--228. John Wiley. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/bigcity.Rd0000644000076600000240000000164611110552530013745 0ustar00ripleystaff\name{bigcity} \alias{bigcity} \alias{city} \title{ Population of U.S. Cities } \description{ The \code{bigcity} data frame has 49 rows and 2 columns. The \code{city} data frame has 10 rows and 2 columns. The measurements are the population (in 1000's) of 49 U.S. cities in 1920 and 1930. The 49 cities are a random sample taken from the 196 largest cities in 1920. The \code{city} data frame consists of the first 10 observations in \code{bigcity}. } \usage{ bigcity } \format{ This data frame contains the following columns: \describe{ \item{\code{u}}{ The 1920 population. } \item{\code{x}}{ The 1930 population. }}} \source{ The data were obtained from Cochran, W.G. (1977) \emph{Sampling Techniques}. Third edition. John Wiley } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. 
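Because city and bigcity pair 1920 and 1930 populations, the ratio of the two population totals is a natural statistic to resample. The sketch below is a hedged illustration rather than text from the package documentation: the weighted form of the statistic (stype = "w") and R = 999 are choices made here, following the style of the other examples in these help files.

## Minimal sketch (illustrative only): bootstrap of the ratio of 1930 to
## 1920 population totals. With stype = "w" the statistic's second argument
## is a vector of resampling weights, so weighted sums replace plain sums.
library(boot)
ratio.fun <- function(d, w) sum(d$x * w) / sum(d$u * w)      # x = 1930, u = 1920 populations
city.boot <- boot(city, statistic = ratio.fun, R = 999, stype = "w")
city.boot
boot.ci(city.boot, type = c("norm", "basic", "perc"))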
boot/man/boot-practicals.Rd0000644000076600000240000000133511573362240015406 0ustar00ripleystaff\name{boot-practicals} \alias{nested.corr} \alias{lik.CI} \title{ Functions for Bootstrap Practicals} \description{ Functions for use with the practicals in Davison and Hinkley (1997). } \usage{ nested.corr(data, w, t0, M) lik.CI(like, lim) } \details{ \code{nested.corr} is meant for use with the double bootstrap in practical 5.5 of Davison and Hinkley (1997). \code{lik.CI} is meant for use with practicals 10.1 and 10.2 of Davison and Hinkley (1997). } \references{ Davison, A. C. and Hinkley, D. V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \author{ Angelo J. Canty. Faster version of \code{nested.corr} for \pkg{boot} 1.3-1 by Brian Ripley. } \keyword{internal} boot/man/boot.Rd0000644000076600000240000004350611643310242013262 0ustar00ripleystaff\name{boot} \alias{boot} \alias{boot.return} \alias{c.boot} \title{ Bootstrap Resampling } \description{ Generate \code{R} bootstrap replicates of a statistic applied to data. Both parametric and nonparametric resampling are possible. For the nonparametric bootstrap, possible resampling methods are the ordinary bootstrap, the balanced bootstrap, antithetic resampling, and permutation. For nonparametric multi-sample problems stratified resampling is used: this is specified by including a vector of strata in the call to boot. Importance resampling weights may be specified. } \usage{ boot(data, statistic, R, sim = "ordinary", stype = c("i", "f", "w"), strata = rep(1,n), L = NULL, m = 0, weights = NULL, ran.gen = function(d, p) d, mle = NULL, simple = FALSE, ..., parallel = c("no", "multicore", "snow"), ncpus = getOption("boot.ncpus", 1L), cl = NULL) } \arguments{ \item{data}{ The data as a vector, matrix or data frame. If it is a matrix or data frame then each row is considered as one multivariate observation. } \item{statistic}{ A function which when applied to data returns a vector containing the statistic(s) of interest. When \code{sim = "parametric"}, the first argument to \code{statistic} must be the data. For each replicate a simulated dataset returned by \code{ran.gen} will be passed. In all other cases \code{statistic} must take at least two arguments. The first argument passed will always be the original data. The second will be a vector of indices, frequencies or weights which define the bootstrap sample. Further, if predictions are required, then a third argument is required which would be a vector of the random indices used to generate the bootstrap predictions. Any further arguments can be passed to \code{statistic} through the \code{\dots} argument. } \item{R}{ The number of bootstrap replicates. Usually this will be a single positive integer. For importance resampling, some resamples may use one set of weights and others use a different set of weights. In this case \code{R} would be a vector of integers where each component gives the number of resamples from each of the rows of weights. } \item{sim}{ A character string indicating the type of simulation required. Possible values are \code{"ordinary"} (the default), \code{"parametric"}, \code{"balanced"}, \code{"permutation"}, or \code{"antithetic"}. Importance resampling is specified by including importance weights; the type of importance resampling must still be specified but may only be \code{"ordinary"} or \code{"balanced"} in this case. } \item{stype}{ A character string indicating what the second argument of \code{statistic} represents. 
Possible values of stype are \code{"i"} (indices - the default), \code{"f"} (frequencies), or \code{"w"} (weights). Not used for \code{sim = "parametric"}. } \item{strata}{ An integer vector or factor specifying the strata for multi-sample problems. This may be specified for any simulation, but is ignored when \code{sim = "parametric"}. When \code{strata} is supplied for a nonparametric bootstrap, the simulations are done within the specified strata. } \item{L}{ Vector of influence values evaluated at the observations. This is used only when \code{sim} is \code{"antithetic"}. If not supplied, they are calculated through a call to \code{empinf}. This will use the infinitesimal jackknife provided that \code{stype} is \code{"w"}, otherwise the usual jackknife is used. } \item{m}{ The number of predictions which are to be made at each bootstrap replicate. This is most useful for (generalized) linear models. This can only be used when \code{sim} is \code{"ordinary"}. \code{m} will usually be a single integer but, if there are strata, it may be a vector with length equal to the number of strata, specifying how many of the errors for prediction should come from each strata. The actual predictions should be returned as the final part of the output of \code{statistic}, which should also take an argument giving the vector of indices of the errors to be used for the predictions. } \item{weights}{ Vector or matrix of importance weights. If a vector then it should have as many elements as there are observations in \code{data}. When simulation from more than one set of weights is required, \code{weights} should be a matrix where each row of the matrix is one set of importance weights. If \code{weights} is a matrix then \code{R} must be a vector of length \code{nrow(weights)}. This parameter is ignored if \code{sim} is not \code{"ordinary"} or \code{"balanced"}. } \item{ran.gen}{ This function is used only when \code{sim = "parametric"} when it describes how random values are to be generated. It should be a function of two arguments. The first argument should be the observed data and the second argument consists of any other information needed (e.g. parameter estimates). The second argument may be a list, allowing any number of items to be passed to \code{ran.gen}. The returned value should be a simulated data set of the same form as the observed data which will be passed to \code{statistic} to get a bootstrap replicate. It is important that the returned value be of the same shape and type as the original dataset. If \code{ran.gen} is not specified, the default is a function which returns the original \code{data} in which case all simulation should be included as part of \code{statistic}. Use of \code{sim = "parametric"} with a suitable \code{ran.gen} allows the user to implement any types of nonparametric resampling which are not supported directly. } \item{mle}{ The second argument to be passed to \code{ran.gen}. Typically these will be maximum likelihood estimates of the parameters. For efficiency \code{mle} is often a list containing all of the objects needed by \code{ran.gen} which can be calculated using the original data set only. } \item{simple}{logical, only allowed to be \code{TRUE} for \code{sim = "ordinary", stype = "i", n = 0} (otherwise ignored with a warning). By default a \code{n} by \code{R} index array is created: this can be large and if \code{simple = TRUE} this is avoided by sampling separately for each replication, which is slower but uses less memory. 
} \item{\dots}{ Other named arguments for \code{statistic} which are passed unchanged each time it is called. Any such arguments to \code{statistic} should follow the arguments which \code{statistic} is required to have for the simulation. Beware of partial matching to arguments of \code{boot} listed above, and that arguments named \code{X} and \code{FUN} cause conflicts in some versions of \pkg{boot} (but not this one). } \item{parallel}{ The type of parallel operation to be used (if any). If missing, the default is taken from the option \code{"boot.parallel"} (and if that is not set, \code{"no"}). } \item{ncpus}{ integer: number of processes to be used in parallel operation: typically one would chose this to the number of available CPUs. } \item{cl}{ An optional \pkg{parallel} or \pkg{snow} cluster for use if \code{parallel = "snow"}. If not supplied, a cluster on the local machine is created for the duration of the \code{boot} call. } } \value{ The returned value is an object of class \code{"boot"}, containing the following components: \item{t0}{ The observed value of \code{statistic} applied to \code{data}. } \item{t}{ A matrix with \code{sum(R)} rows each of which is a bootstrap replicate of the result of calling \code{statistic}. } \item{R}{ The value of \code{R} as passed to \code{boot}. } \item{data}{ The \code{data} as passed to \code{boot}. } \item{seed}{ The value of \code{.Random.seed} when \code{boot} was called. } \item{statistic}{ The function \code{statistic} as passed to \code{boot}. } \item{sim}{ Simulation type used. } \item{stype}{ Statistic type as passed to \code{boot}. } \item{call}{ The original call to \code{boot}. } \item{strata}{ The strata used. This is the vector passed to \code{boot}, if it was supplied or a vector of ones if there were no strata. It is not returned if \code{sim} is \code{"parametric"}. } \item{weights}{ The importance sampling weights as passed to \code{boot} or the empirical distribution function weights if no importance sampling weights were specified. It is omitted if \code{sim} is not one of \code{"ordinary"} or \code{"balanced"}. } \item{pred.i}{ If predictions are required (\code{m > 0}) this is the matrix of indices at which predictions were calculated as they were passed to statistic. Omitted if \code{m} is \code{0} or \code{sim} is not \code{"ordinary"}. } \item{L}{ The influence values used when \code{sim} is \code{"antithetic"}. If no such values were specified and \code{stype} is not \code{"w"} then \code{L} is returned as consecutive integers corresponding to the assumption that data is ordered by influence values. This component is omitted when \code{sim} is not \code{"antithetic"}. } \item{ran.gen}{ The random generator function used if \code{sim} is \code{"parametric"}. This component is omitted for any other value of \code{sim}. } \item{mle}{ The parameter estimates passed to \code{boot} when \code{sim} is \code{"parametric"}. It is omitted for all other values of \code{sim}. } There are \code{c}, \code{plot} and \code{print} methods for this class. } \details{ The statistic to be bootstrapped can be as simple or complicated as desired as long as its arguments correspond to the dataset and (for a nonparametric bootstrap) a vector of indices, frequencies or weights. \code{statistic} is treated as a black box by the \code{boot} function and is not checked to ensure that these conditions are met. The first order balanced bootstrap is described in Davison, Hinkley and Schechtman (1986). 
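For instance, a minimal sketch of the commonest case, in which \code{statistic} takes the data and a vector of resampled indices (the default \code{stype = "i"}), might look like the following; this is an editorial illustration rather than one of the package's own examples, and the name \code{mean.hours} is purely hypothetical:

# Sketch only: bootstrap the mean failure time in the aircondit data
# using the default index interface (stype = "i").
mean.hours <- function(d, i) mean(d$hours[i])
boot(aircondit, mean.hours, R = 999)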
The antithetic bootstrap is described by Hall (1989) and is experimental, particularly when used with strata. The other non-parametric simulation types are the ordinary bootstrap (possibly with unequal probabilities), and permutation which returns random permutations of cases. All of these methods work independently within strata if that argument is supplied. For the parametric bootstrap it is necessary for the user to specify how the resampling is to be conducted. The best way of accomplishing this is to specify the function \code{ran.gen} which will return a simulated data set from the observed data set and a set of parameter estimates specified in \code{mle}. } \section{Parallel operation}{ When \code{parallel = "multicore"} is used (not available on Windows), each worker process inherits the environment of the current session, including the workspace and the loaded namespaces and attached packages (but not the random number seed: see below). More work is needed when \code{parallel = "snow"} is used: the worker processes are newly created \R processes, and \code{statistic} needs to arrange to set up the environment it needs: often a good way to do that is to make use of lexical scoping since when \code{statistic} is sent to the worker processes its enclosing environment is also sent. (E.g. see the example for \code{\link{jack.after.boot}} where ancillary functions are nested inside the \code{statistic} function.) \code{parallel = "snow"} is primarily intended to be used on multi-core Windows machines where \code{parallel = "multicore"} is not available. For most of the \code{boot} methods the resampling is done in the master process, but not if \code{simple = TRUE} nor \code{sim = "parametric"}. In those cases (or where \code{statistic} itself uses random numbers), more care is needed if the results need to be reproducible. Resampling is done in the worker processes by \code{\link{censboot}(sim = "weird")} and by most of the schemes in \code{\link{tsboot}} (the exceptions being \code{sim == "fixed"} and \code{sim == "geom"} with the default \code{ran.gen}). Where random-number generation is done in the worker processes, the default behaviour is that each worker chooses a separate seed, non-reproducibly. However, with \code{parallel = "multicore"} or \code{parallel = "snow"} using the default cluster, a second approach is used if \code{\link{RNGkind}("L'Ecuyer-CMRG")} has been selected. In that approach each worker gets a different subsequence of the RNG stream based on the seed at the time the worker is spawned and so the results will be reproducible if \code{ncpus} is unchanged, and for \code{parallel = "multicore"} if \code{parallel::\link{mc.reset.stream}()} is called: see the examples for \code{\link{mclapply}}. } \references{ There are many references explaining the bootstrap and its variations. Among them are: Booth, J.G., Hall, P. and Wood, A.T.A. (1993) Balanced importance resampling for the bootstrap. \emph{Annals of Statistics}, \bold{21}, 286--298. Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Davison, A.C., Hinkley, D.V. and Schechtman, E. (1986) Efficient bootstrap simulation. \emph{Biometrika}, \bold{73}, 555--566. Efron, B. and Tibshirani, R. (1993) \emph{An Introduction to the Bootstrap}. Chapman & Hall. Gleason, J.R. (1988) Algorithms for balanced bootstrap simulations. \emph{American Statistician}, \bold{42}, 263--266. Hall, P. (1989) Antithetic resampling for the bootstrap.
\emph{Biometrika}, \bold{73}, 713--724. Hinkley, D.V. (1988) Bootstrap methods (with Discussion). \emph{Journal of the Royal Statistical Society, B}, \bold{50}, 312--337, 355--370. Hinkley, D.V. and Shi, S. (1989) Importance sampling and the nested bootstrap. \emph{Biometrika}, \bold{76}, 435--446. Johns M.V. (1988) Importance sampling for bootstrap confidence intervals. \emph{Journal of the American Statistical Association}, \bold{83}, 709--714. Noreen, E.W. (1989) \emph{Computer Intensive Methods for Testing Hypotheses}. John Wiley & Sons. } \seealso{ \code{\link{boot.array}}, \code{\link{boot.ci}}, \code{\link{censboot}}, \code{\link{empinf}}, \code{\link{jack.after.boot}}, \code{\link{tilt.boot}}, \code{\link{tsboot}}. } \examples{ # Usual bootstrap of the ratio of means using the city data ratio <- function(d, w) sum(d$x * w)/sum(d$u * w) boot(city, ratio, R = 999, stype = "w") # Stratified resampling for the difference of means. In this # example we will look at the difference of means between the final # two series in the gravity data. diff.means <- function(d, f) { n <- nrow(d) gp1 <- 1:table(as.numeric(d$series))[1] m1 <- sum(d[gp1,1] * f[gp1])/sum(f[gp1]) m2 <- sum(d[-gp1,1] * f[-gp1])/sum(f[-gp1]) ss1 <- sum(d[gp1,1]^2 * f[gp1]) - (m1 * m1 * sum(f[gp1])) ss2 <- sum(d[-gp1,1]^2 * f[-gp1]) - (m2 * m2 * sum(f[-gp1])) c(m1 - m2, (ss1 + ss2)/(sum(f) - 2)) } grav1 <- gravity[as.numeric(gravity[,2]) >= 7,] boot(grav1, diff.means, R = 999, stype = "f", strata = grav1[,2]) # In this example we show the use of boot in a prediction from # regression based on the nuclear data. This example is taken # from Example 6.8 of Davison and Hinkley (1997). Notice also # that two extra arguments to 'statistic' are passed through boot. nuke <- nuclear[, c(1, 2, 5, 7, 8, 10, 11)] nuke.lm <- glm(log(cost) ~ date+log(cap)+ne+ct+log(cum.n)+pt, data = nuke) nuke.diag <- glm.diag(nuke.lm) nuke.res <- nuke.diag$res * nuke.diag$sd nuke.res <- nuke.res - mean(nuke.res) # We set up a new data frame with the data, the standardized # residuals and the fitted values for use in the bootstrap. nuke.data <- data.frame(nuke, resid = nuke.res, fit = fitted(nuke.lm)) # Now we want a prediction of plant number 32 but at date 73.00 new.data <- data.frame(cost = 1, date = 73.00, cap = 886, ne = 0, ct = 0, cum.n = 11, pt = 1) new.fit <- predict(nuke.lm, new.data) nuke.fun <- function(dat, inds, i.pred, fit.pred, x.pred) { lm.b <- glm(fit+resid[inds] ~ date+log(cap)+ne+ct+log(cum.n)+pt, data = dat) pred.b <- predict(lm.b, x.pred) c(coef(lm.b), pred.b - (fit.pred + dat$resid[i.pred])) } nuke.boot <- boot(nuke.data, nuke.fun, R = 999, m = 1, fit.pred = new.fit, x.pred = new.data) # The bootstrap prediction squared error would then be found by mean(nuke.boot$t[, 8]^2) # Basic bootstrap prediction limits would be new.fit - sort(nuke.boot$t[, 8])[c(975, 25)] # Finally a parametric bootstrap. For this example we shall look # at the air-conditioning data. In this example our aim is to test # the hypothesis that the true value of the index is 1 (i.e. that # the data come from an exponential distribution) against the # alternative that the data come from a gamma distribution with # index not equal to 1. air.fun <- function(data) { ybar <- mean(data$hours) para <- c(log(ybar), mean(log(data$hours))) ll <- function(k) { if (k <= 0) 1e200 else lgamma(k)-k*(log(k)-1-para[1]+para[2]) } khat <- nlm(ll, ybar^2/var(data$hours))$estimate c(ybar, khat) } air.rg <- function(data, mle) { # Function to generate random exponential variates. 
# mle will contain the mean of the original data out <- data out$hours <- rexp(nrow(out), 1/mle) out } air.boot <- boot(aircondit, air.fun, R = 999, sim = "parametric", ran.gen = air.rg, mle = mean(aircondit$hours)) # The bootstrap p-value can then be approximated by sum(abs(air.boot$t[,2]-1) > abs(air.boot$t0[2]-1))/(1+air.boot$R) } \keyword{nonparametric} \keyword{htest} boot/man/boot.array.Rd0000644000076600000240000000562411566472516014417 0ustar00ripleystaff\name{boot.array} \alias{boot.array} \title{ Bootstrap Resampling Arrays } \description{ This function takes a bootstrap object calculated by one of the functions \code{boot}, \code{censboot}, or \code{tilt.boot} and returns the frequency (or index) array for the bootstrap resamples. } \usage{ boot.array(boot.out, indices) } \arguments{ \item{boot.out}{ An object of class \code{"boot"} returned by one of the generation functions for such an object. } \item{indices}{ A logical argument which specifies whether to return the frequency array or the raw index array. The default is \code{indices=FALSE} unless \code{boot.out} was created by \code{tsboot} in which case the default is \code{indices=TRUE}. }} \value{ A matrix with \code{boot.out$R} rows and \code{n} columns where \code{n} is the number of observations in \code{boot.out$data}. If \code{indices} is \code{FALSE} then this will give the frequency of each of the original observations in each bootstrap resample. If \code{indices} is \code{TRUE} it will give the indices of the bootstrap resamples in the order in which they would have been passed to the statistic. } \section{Side Effects}{ This function temporarily resets \code{.Random.seed} to the value in \code{boot.out$seed} and then returns it to its original value at the end of the function. } \details{ The process by which the original index array was generated is repeated with the same value of \code{.Random.seed}. If the frequency array is required then \code{freq.array} is called to convert the index array to a frequency array. A resampling array can only be returned when such a concept makes sense. In particular it cannot be found for any parametric or model-based resampling schemes. Hence for objects generated by \code{censboot} the only resampling scheme for which such an array can be found is ordinary case resampling. Similarly if \code{boot.out$sim} is \code{"parametric"} in the case of \code{boot} or \code{"model"} in the case of \code{tsboot} the array cannot be found. Note also that for post-blackened bootstraps from \code{tsboot} the indices found will relate to those prior to any post-blackening and so will not be useful. Frequency arrays are used in many post-bootstrap calculations such as the jackknife-after-bootstrap and finding importance sampling weights. They are also used to find empirical influence values through the regression method. } \seealso{ \code{\link{boot}}, \code{\link{censboot}}, \code{\link{freq.array}}, \code{\link{tilt.boot}}, \code{\link{tsboot}} } \examples{ # A frequency array for a nonparametric bootstrap city.boot <- boot(city, corr, R = 40, stype = "w") boot.array(city.boot) perm.cor <- function(d,i) cor(d$x,d$u[i]) city.perm <- boot(city, perm.cor, R = 40, sim = "permutation") boot.array(city.perm, indices = TRUE) } \keyword{nonparametric} % Converted by Sd2Rd version 1.15. 
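As a rough sketch of how the returned array relates to the resamples (assuming the examples above have just been run), each row of the frequency array counts how often every original observation appears in one resample, so for ordinary case resampling the rows sum to the number of observations:

# Sketch only, continuing the boot.array examples above.
f <- boot.array(city.boot)                  # 40 x nrow(city) frequency matrix
all(rowSums(f) == nrow(city))               # TRUE: each resample contains n cases
i <- boot.array(city.perm, indices = TRUE)  # raw index array for the permutations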
boot/man/boot.ci.Rd0000644000076600000240000002451011566472552013667 0ustar00ripleystaff\name{boot.ci} \alias{boot.ci} \title{ Nonparametric Bootstrap Confidence Intervals } \description{ This function generates 5 different types of equi-tailed two-sided nonparametric confidence intervals. These are the first order normal approximation, the basic bootstrap interval, the studentized bootstrap interval, the bootstrap percentile interval, and the adjusted bootstrap percentile (BCa) interval. All or a subset of these intervals can be generated. } \usage{ boot.ci(boot.out, conf = 0.95, type = "all", index = 1:min(2,length(boot.out$t0)), var.t0 = NULL, var.t = NULL, t0 = NULL, t = NULL, L = NULL, h = function(t) t, hdot = function(t) rep(1,length(t)), hinv = function(t) t, \dots) } \arguments{ \item{boot.out}{ An object of class \code{"boot"} containing the output of a bootstrap calculation. } \item{conf}{ A scalar or vector containing the confidence level(s) of the required interval(s). } \item{type}{ A vector of character strings representing the type of intervals required. The value should be any subset of the values \code{c("norm","basic", "stud", "perc", "bca")} or simply \code{"all"} which will compute all five types of intervals. } \item{index}{ This should be a vector of length 1 or 2. The first element of \code{index} indicates the position of the variable of interest in \code{boot.out$t0} and the relevant column in \code{boot.out$t}. The second element indicates the position of the variance of the variable of interest. If both \code{var.t0} and \code{var.t} are supplied then the second element of \code{index} (if present) is ignored. The default is that the variable of interest is in position 1 and its variance is in position 2 (as long as there are 2 positions in \code{boot.out$t0}). } \item{var.t0}{ If supplied, a value to be used as an estimate of the variance of the statistic for the normal approximation and studentized intervals. If it is not supplied and \code{length(index)} is 2 then \code{var.t0} defaults to \code{boot.out$t0[index[2]]} otherwise \code{var.t0} is undefined. For studentized intervals \code{var.t0} must be defined. For the normal approximation, if \code{var.t0} is undefined it defaults to \code{var(t)}. If a transformation is supplied through the argument \code{h} then \code{var.t0} should be the variance of the untransformed statistic. } \item{var.t}{ This is a vector (of length \code{boot.out$R}) of variances of the bootstrap replicates of the variable of interest. It is used only for studentized intervals. If it is not supplied and \code{length(index)} is 2 then \code{var.t} defaults to \code{boot.out$t[,index[2]]}, otherwise its value is undefined which will cause an error for studentized intervals. If a transformation is supplied through the argument \code{h} then \code{var.t} should be the variance of the untransformed bootstrap statistics. } \item{t0}{ The observed value of the statistic of interest. The default value is \code{boot.out$t0[index[1]]}. Specification of \code{t0} and \code{t} allows the user to get intervals for a transformed statistic which may not be in the bootstrap output object. See the second example below. An alternative way of achieving this would be to supply the functions \code{h}, \code{hdot}, and \code{hinv} below. } \item{t}{ The bootstrap replicates of the statistic of interest. It must be a vector of length \code{boot.out$R}. It is an error to supply one of \code{t0} or \code{t} but not the other. 
Also if studentized intervals are required and \code{t0} and \code{t} are supplied then so should be \code{var.t0} and \code{var.t}. The default value is \code{boot.out$t[,index]}. } \item{L}{ The empirical influence values of the statistic of interest for the observed data. These are used only for BCa intervals. If a transformation is supplied through the parameter \code{h} then \code{L} should be the influence values for \code{t}; the values for \code{h(t)} are derived from these and \code{hdot} within the function. If \code{L} is not supplied then the values are calculated using \code{empinf} if they are needed. } \item{h}{ A function defining a transformation. The intervals are calculated on the scale of \code{h(t)} and the inverse function \code{hinv} applied to the resulting intervals. It must be a function of one variable only and for a vector argument, it must return a vector of the same length, i.e. \code{h(c(t1,t2,t3))} should return \code{c(h(t1),h(t2),h(t3))}. The default is the identity function. } \item{hdot}{ A function of one argument returning the derivative of \code{h}. It is a required argument if \code{h} is supplied and normal, studentized or BCa intervals are required. The function is used for approximating the variances of \code{h(t0)} and \code{h(t)} using the delta method, and also for finding the empirical influence values for BCa intervals. Like \code{h} it should be able to take a vector argument and return a vector of the same length. The default is the constant function 1. } \item{hinv}{ A function, like \code{h}, which returns the inverse of \code{h}. It is used to transform the intervals calculated on the scale of \code{h(t)} back to the original scale. The default is the identity function. If \code{h} is supplied but \code{hinv} is not, then the intervals returned will be on the transformed scale. } \item{\dots}{ Any extra arguments that \code{boot.out$statistic} is expecting. These arguments are needed only if BCa intervals are required and \code{L} is not supplied since in that case \code{L} is calculated through a call to \code{empinf} which calls \code{boot.out$statistic}. } } \details{ The formulae on which the calculations are based can be found in Chapter 5 of Davison and Hinkley (1997). Function \code{boot} must be run prior to running this function to create the object to be passed as \code{boot.out}. Variance estimates are required for studentized intervals. The variance of the observed statistic is optional for normal theory intervals. If it is not supplied then the bootstrap estimate of variance is used. The normal intervals also use the bootstrap bias correction. Interpolation on the normal quantile scale is used when a non-integer order statistic is required. If the order statistic used is the smallest or largest of the R values in boot.out a warning is generated and such intervals should not be considered reliable. } \value{ An object of type \code{"bootci"} which contains the intervals. It has components \item{R}{ The number of bootstrap replicates on which the intervals were based. } \item{t0}{ The observed value of the statistic on the same scale as the intervals. } \item{call}{ The call to \code{boot.ci} which generated the object. It will also contain one or more of the following components depending on the value of \code{type} used in the call to \code{bootci}. } \item{normal}{ A matrix of intervals calculated using the normal approximation. 
It will have 3 columns, the first being the level and the other two being the upper and lower endpoints of the intervals. } \item{basic}{ The intervals calculated using the basic bootstrap method. } \item{student}{ The intervals calculated using the studentized bootstrap method. } \item{percent}{ The intervals calculated using the bootstrap percentile method. } \item{bca}{ The intervals calculated using the adjusted bootstrap percentile (BCa) method. These latter four components will be matrices with 5 columns, the first column containing the level, the next two containing the indices of the order statistics used in the calculations and the final two the calculated endpoints themselves. } } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}, Chapter 5. Cambridge University Press. DiCiccio, T.J. and Efron B. (1996) Bootstrap confidence intervals (with Discussion). \emph{Statistical Science}, \bold{11}, 189--228. Efron, B. (1987) Better bootstrap confidence intervals (with Discussion). \emph{Journal of the American Statistical Association}, \bold{82}, 171--200. } \seealso{ \code{\link{abc.ci}}, \code{\link{boot}}, \code{\link{empinf}}, \code{\link{norm.ci}} } \examples{ # confidence intervals for the city data ratio <- function(d, w) sum(d$x * w)/sum(d$u * w) city.boot <- boot(city, ratio, R = 999, stype = "w", sim = "ordinary") boot.ci(city.boot, conf = c(0.90, 0.95), type = c("norm", "basic", "perc", "bca")) # studentized confidence interval for the two sample # difference of means problem using the final two series # of the gravity data. diff.means <- function(d, f) { n <- nrow(d) gp1 <- 1:table(as.numeric(d$series))[1] m1 <- sum(d[gp1,1] * f[gp1])/sum(f[gp1]) m2 <- sum(d[-gp1,1] * f[-gp1])/sum(f[-gp1]) ss1 <- sum(d[gp1,1]^2 * f[gp1]) - (m1 * m1 * sum(f[gp1])) ss2 <- sum(d[-gp1,1]^2 * f[-gp1]) - (m2 * m2 * sum(f[-gp1])) c(m1 - m2, (ss1 + ss2)/(sum(f) - 2)) } grav1 <- gravity[as.numeric(gravity[,2]) >= 7, ] grav1.boot <- boot(grav1, diff.means, R = 999, stype = "f", strata = grav1[ ,2]) boot.ci(grav1.boot, type = c("stud", "norm")) # Nonparametric confidence intervals for mean failure time # of the air-conditioning data as in Example 5.4 of Davison # and Hinkley (1997) mean.fun <- function(d, i) { m <- mean(d$hours[i]) n <- length(i) v <- (n-1)*var(d$hours[i])/n^2 c(m, v) } air.boot <- boot(aircondit, mean.fun, R = 999) boot.ci(air.boot, type = c("norm", "basic", "perc", "stud")) # Now using the log transformation # There are two ways of doing this and they both give the # same intervals. # Method 1 boot.ci(air.boot, type = c("norm", "basic", "perc", "stud"), h = log, hdot = function(x) 1/x) # Method 2 vt0 <- air.boot$t0[2]/air.boot$t0[1]^2 vt <- air.boot$t[, 2]/air.boot$t[ ,1]^2 boot.ci(air.boot, type = c("norm", "basic", "perc", "stud"), t0 = log(air.boot$t0[1]), t = log(air.boot$t[,1]), var.t0 = vt0, var.t = vt) } \keyword{nonparametric} \keyword{htest} boot/man/brambles.Rd0000644000076600000240000000213211566130465014107 0ustar00ripleystaff\name{brambles} \alias{brambles} \title{ Spatial Location of Bramble Canes } \description{ The \code{brambles} data frame has 823 rows and 3 columns. The location of living bramble canes in a 9m square plot was recorded. We take 9m to be the unit of distance so that the plot can be thought of as a unit square. The bramble canes were also classified by their age. 
} \usage{ brambles } \format{ This data frame contains the following columns: \describe{ \item{\code{x}}{ The x coordinate of the position of the cane in the plot. } \item{\code{y}}{ The y coordinate of the position of the cane in the plot. } \item{\code{age}}{ The age classification of the canes; \code{0} indicates a newly emerged cane, \code{1} indicates a one year old cane and \code{2} indicates a two year old cane. }}} \source{ The data were obtained from Diggle, P.J. (1983) \emph{Statistical Analysis of Spatial Point Patterns}. Academic Press. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/breslow.Rd0000644000076600000240000000316511110552530013766 0ustar00ripleystaff\name{breslow} \alias{breslow} \title{ Smoking Deaths Among Doctors } \description{ The \code{breslow} data frame has 10 rows and 5 columns. In 1961 Doll and Hill sent out a questionnaire to all men on the British Medical Register enquiring about their smoking habits. Almost 70\% of such men replied. Death certificates were obtained for medical practitioners and causes of death were assigned on the basis of these certificates. The \code{breslow} data set contains the person-years of observations and deaths from coronary artery disease accumulated during the first ten years of the study. } \usage{ breslow } \format{ This data frame contains the following columns: \describe{ \item{\code{age}}{ The mid-point of the 10 year age-group for the doctors. } \item{\code{smoke}}{ An indicator of whether the doctors smoked (1) or not (0). } \item{\code{n}}{ The number of person-years in the category. } \item{\code{y}}{ The number of deaths attributed to coronary artery disease. } \item{\code{ns}}{ The number of smoker years in the category (\code{smoke*n}). }}} \source{ The data were obtained from Breslow, N.E. (1985) Cohort Analysis in Epidemiology. In \emph{A Celebration of Statistics} A.C. Atkinson and S.E. Fienberg (editors), 109--143. Springer-Verlag. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Doll, R. and Hill, A.B. (1966) Mortality of British doctors in relation to smoking: Observations on coronary thrombosis. \emph{National Cancer Institute Monograph}, \bold{19}, 205-268. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/calcium.Rd0000644000076600000240000000226311110552530013724 0ustar00ripleystaff\name{calcium} \alias{calcium} \title{ Calcium Uptake Data } \description{ The \code{calcium} data frame has 27 rows and 2 columns. Howard Grimes from the Botany Department, North Carolina State University, conducted an experiment for biochemical analysis of intracellular storage and transport of calcium across plasma membrane. Cells were suspended in a solution of radioactive calcium for a certain length of time and then the amount of radioactive calcium that was absorbed by the cells was measured. The experiment was repeated independently with 9 different times of suspension each replicated 3 times. } \usage{ calcium } \format{ This data frame contains the following columns: \describe{ \item{\code{time}}{ The time (in minutes) that the cells were suspended in the solution. } \item{\code{cal}}{ The amount of calcium uptake (nmoles/mg). }}} \source{ The data were obtained from Rawlings, J.O. (1988) \emph{Applied Regression Analysis}. 
Wadsworth and Brooks/Cole Statistics/Probability Series. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/cane.Rd0000644000076600000240000000274111110552530013216 0ustar00ripleystaff\name{cane} \alias{cane} \title{ Sugar-cane Disease Data } \description{ The \code{cane} data frame has 180 rows and 5 columns. The data frame represents a randomized block design with 45 varieties of sugar-cane and 4 blocks.} \details{ The aim of the experiment was to classify the varieties into resistant, intermediate and susceptible to a disease called "coal of sugar-cane" (carvao da cana-de-acucar). This is a disease that is common in sugar-cane plantations in certain areas of Brazil. For each plot, fifty pieces of sugar-cane stem were put in a solution containing the disease agent and then some were planted in the plot. After a fixed period of time, the total number of shoots and the number of diseased shoots were recorded. } \usage{ cane } \format{ This data frame contains the following columns: \describe{ \item{\code{n}}{ The total number of shoots in each plot. } \item{\code{r}}{ The number of diseased shoots. } \item{\code{x}}{ The number of pieces of the stems, out of 50, planted in each plot. } \item{\code{var}}{ A factor indicating the variety of sugar-cane in each plot. } \item{\code{block}}{ A factor for the blocks. }} } \source{ The data were kindly supplied by Dr. C.G.B. Demetrio of Escola Superior de Agricultura, Universidade de Sao Paolo, Brazil. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 0.3-1. boot/man/capability.Rd0000644000076600000240000000175011110552530014430 0ustar00ripleystaff\name{capability} \alias{capability} \title{ Simulated Manufacturing Process Data } \description{ The \code{capability} data frame has 75 rows and 1 columns. The data are simulated successive observations from a process in equilibrium. The process is assumed to have specification limits (5.49, 5.79). } \usage{ capability } \format{ This data frame contains the following column: \describe{ \item{\code{y}}{ The simulated measurements. }}} \source{ The data were obtained from Bissell, A.F. (1990) How reliable is your capability index? \emph{Applied Statistics}, \bold{39}, 331--340. } \references{ Canty, A.J. and Davison, A.C. (1996) Implementation of saddlepoint approximations to resampling distributions. To appear in \emph{Computing Science and Statistics; Proceedings of the 28th Symposium on the Interface}. Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/catsM.Rd0000644000076600000240000000226611247245020013365 0ustar00ripleystaff\name{catsM} \alias{catsM} \title{ Weight Data for Domestic Cats } \description{ The \code{catsM} data frame has 97 rows and 3 columns. 144 adult (over 2kg in weight) cats used for experiments with the drug digitalis had their heart and body weight recorded. 47 of the cats were female and 97 were male. The \code{catsM} data frame consists of the data for the male cats. The full data are in dataset \code{\link{cats}} in package \code{MASS}. 
} \usage{ catsM } \format{ This data frame contains the following columns: \describe{ \item{\code{Sex}}{ A factor for the sex of the cat (levels are \code{F} and \code{M}). } \item{\code{Bwt}}{ Body weight in kg. } \item{\code{Hwt}}{ Heart weight in g. }}} \seealso{ \code{\link{cats}} } \source{ The data were obtained from Fisher, R.A. (1947) The analysis of covariance method for the relation between a part and the whole. \emph{Biometrics}, \bold{3}, 65--68. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Venables, W.N. and Ripley, B.D. (1994) \emph{Modern Applied Statistics with S-Plus}. Springer-Verlag. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/cav.Rd0000644000076600000240000000216411566472672013077 0ustar00ripleystaff\name{cav} \alias{cav} \title{ Position of Muscle Caveolae } \description{ The \code{cav} data frame has 138 rows and 2 columns. The data give the positions of the individual caveolae in a square region with sides of length 500 units. This grid was originally on a 2.65 µm square of muscle fibre. The data are those points falling in the lower left hand quarter of the region used for the dataset \code{caveolae.dat} in the \pkg{spatial} package by B.D. Ripley (1994). } \usage{ cav } \format{ This data frame contains the following columns: \describe{ \item{\code{x}}{ The x coordinate of the caveola's position in the region. } \item{\code{y}}{ The y coordinate of the caveola's position in the region. }}} \references{ Appleyard, S.T., Witkowski, J.A., Ripley, B.D., Shotton, D.M. and Dubowicz, V. (1985) A novel procedure for pattern analysis of features present on freeze fractured plasma membranes. \emph{Journal of Cell Science}, \bold{74}, 105--117. Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/cd4.Rd0000644000076600000240000000256211110552530012763 0ustar00ripleystaff\name{cd4} \alias{cd4} \title{ CD4 Counts for HIV-Positive Patients } \description{ The \code{cd4} data frame has 20 rows and 2 columns. CD4 cells are carried in the blood as part of the human immune system. One of the effects of the HIV virus is that these cells die. The count of CD4 cells is used in determining the onset of full-blown AIDS in a patient. In this study of the effectiveness of a new anti-viral drug on HIV, 20 HIV-positive patients had their CD4 counts recorded and then were put on a course of treatment with this drug. After using the drug for one year, their CD4 counts were again recorded. The aim of the experiment was to show that patients taking the drug had increased CD4 counts, which is not generally seen in HIV-positive patients. } \usage{ cd4 } \format{ This data frame contains the following columns: \describe{ \item{\code{baseline}}{ The CD4 counts (in 100's) on admission to the trial. } \item{\code{oneyear}}{ The CD4 counts (in 100's) after one year of treatment with the new drug. }}} \source{ The data were obtained from DiCiccio, T.J. and Efron, B. (1996) Bootstrap confidence intervals (with Discussion). \emph{Statistical Science}, \bold{11}, 189--228. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15.
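A minimal sketch of the usual analysis of these data (not part of the original file): the Chapter 5 practicals of Davison and Hinkley (1997) bootstrap the correlation between the two columns (see also \code{cd4.nested} below), which can be done with the \code{corr} function supplied by this package:

# Illustrative sketch only: nonparametric bootstrap of the correlation
# between baseline and one-year CD4 counts, with percentile and BCa intervals.
library(boot)
cd4.boot <- boot(cd4, corr, R = 999, stype = "w")
boot.ci(cd4.boot, type = c("perc", "bca"))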
boot/man/cd4.nested.Rd0000644000076600000240000000074211110552530014242 0ustar00ripleystaff\name{cd4.nested} \alias{cd4.nested} \title{ Nested Bootstrap of cd4 data } \description{ This is an example of a nested bootstrap for the correlation coefficient of the \code{cd4} data frame. It is used in a practical in Chapter 5 of Davison and Hinkley (1997). } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \seealso{ \code{\link{cd4}} } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/censboot.Rd0000644000076600000240000003223611573362630014143 0ustar00ripleystaff\name{censboot} \alias{censboot} \alias{cens.return} \title{ Bootstrap for Censored Data } \description{ This function applies types of bootstrap resampling which have been suggested to deal with right-censored data. It can also do model-based resampling using a Cox regression model. } \usage{ censboot(data, statistic, R, F.surv, G.surv, strata = matrix(1,n,2), sim = "ordinary", cox = NULL, index = c(1, 2), \dots, parallel = c("no", "multicore", "snow"), ncpus = getOption("boot.ncpus", 1L), cl = NULL) } \arguments{ \item{data}{ The data frame or matrix containing the data. It must have at least two columns, one of which contains the times and the other the censoring indicators. It is allowed to have as many other columns as desired (although efficiency is reduced for large numbers of columns) except for \code{sim = "weird"} when it should only have two columns - the times and censoring indicators. The columns of \code{data} referenced by the components of \code{index} are taken to be the times and censoring indicators. } \item{statistic}{ A function which operates on the data frame and returns the required statistic. Its first argument must be the data. Any other arguments that it requires can be passed using the \code{\dots} argument. In the case of \code{sim = "weird"}, the data passed to \code{statistic} only contains the times and censoring indicator regardless of the actual number of columns in \code{data}. In all other cases the data passed to statistic will be of the same form as the original data. When \code{sim = "weird"}, the actual number of observations in the resampled data sets may not be the same as the number in \code{data}. For this reason, if \code{sim = "weird"} and \code{strata} is supplied, \code{statistic} should also take a numeric vector indicating the strata. This allows the statistic to depend on the strata if required. } \item{R}{ The number of bootstrap replicates. } \item{F.surv}{ An object returned from a call to \code{survfit} giving the survivor function for the data. This is a required argument unless \code{sim = "ordinary"} or \code{sim = "model"} and \code{cox} is missing. } \item{G.surv}{ Another object returned from a call to \code{survfit} but with the censoring indicators reversed to give the product-limit estimate of the censoring distribution. Note that for consistency the uncensored times should be reduced by a small amount in the call to \code{survfit}. This is a required argument whenever \code{sim = "cond"} or when \code{sim = "model"} and \code{cox} is supplied. } \item{strata}{ The strata used in the calls to \code{survfit}. It can be a vector or a matrix with 2 columns. If it is a vector then it is assumed to be the strata for the survival distribution, and the censoring distribution is assumed to be the same for all observations. 
If it is a matrix then the first column is the strata for the survival distribution and the second is the strata for the censoring distribution. When \code{sim = "weird"} only the strata for the survival distribution are used since the censoring times are considered fixed. When \code{sim = "ordinary"}, only one set of strata is used to stratify the observations, this is taken to be the first column of \code{strata} when it is a matrix. } \item{sim}{ The simulation type. Possible types are \code{"ordinary"} (case resampling), \code{"model"} (equivalent to \code{"ordinary"} if \code{cox} is missing, otherwise it is model-based resampling), \code{"weird"} (the weird bootstrap - this cannot be used if \code{cox} is supplied), and \code{"cond"} (the conditional bootstrap, in which censoring times are resampled from the conditional censoring distribution). } \item{cox}{ An object returned from \code{coxph}. If it is supplied, then \code{F.surv} should have been generated by a call of the form \code{survfit(cox)}. } \item{index}{ A vector of length two giving the positions of the columns in \code{data} which correspond to the times and censoring indicators respectively. } \item{\dots}{ Other named arguments which are passed unchanged to \code{statistic} each time it is called. Any such arguments to \code{statistic} must follow the arguments which \code{statistic} is required to have for the simulation. Beware of partial matching to arguments of \code{censboot} listed above, and that arguments named \code{X} and \code{FUN} cause conflicts in some versions of \pkg{boot} (but not this one). } \item{parallel, ncpus, cl}{ See the help for \code{\link{boot}}. } } \value{ An object of class \code{"boot"} containing the following components: \item{t0}{ The value of \code{statistic} when applied to the original data. } \item{t}{ A matrix of bootstrap replicates of the values of \code{statistic}. } \item{R}{ The number of bootstrap replicates performed. } \item{sim}{ The simulation type used. This will usually be the input value of \code{sim} unless that was \code{"model"} but \code{cox} was not supplied, in which case it will be \code{"ordinary"}. } \item{data}{ The data used for the bootstrap. This will generally be the input value of \code{data} unless \code{sim = "weird"}, in which case it will just be the columns containing the times and the censoring indicators. } \item{seed}{ The value of \code{.Random.seed} when \code{censboot} was called. } \item{statistic}{ The input value of \code{statistic}. } \item{strata}{ The strata used in the resampling. When \code{sim = "ordinary"} this will be a vector which stratifies the observations, when \code{sim = "weird"} it is the strata for the survival distribution and in all other cases it is a matrix containing the strata for the survival distribution and the censoring distribution. } \item{call}{ The original call to \code{censboot}. } } \details{ The various types of resampling are described in Davison and Hinkley (1997) in sections 3.5 and 7.3. The simplest is case resampling which simply resamples with replacement from the observations. The conditional bootstrap simulates failure times from the estimate of the survival distribution. Then, for each observation its simulated censoring time is equal to the observed censoring time if the observation was censored and generated from the estimated censoring distribution conditional on being greater than the observed failure time if the observation was uncensored. 
If the largest value is censored then it is given a nominal failure time of \code{Inf} and conversely if it is uncensored it is given a nominal censoring time of \code{Inf}. This is necessary to allow the largest observation to be in the resamples. If a Cox regression model is fitted to the data and supplied, then the failure times are generated from the survival distribution using that model. In this case the censoring times can either be simulated from the estimated censoring distribution (\code{sim = "model"}) or from the conditional censoring distribution as in the previous paragraph (\code{sim = "cond"}). The weird bootstrap holds the censored observations as fixed and also the observed failure times. It then generates the number of events at each failure time using a binomial distribution with mean 1 and denominator the number of failures that could have occurred at that time in the original data set. In our implementation we insist that there is a least one simulated event in each stratum for every bootstrap dataset. When there are strata involved and \code{sim} is either \code{"model"} or \code{"cond"} the situation becomes more difficult. Since the strata for the survival and censoring distributions are not the same it is possible that for some observations both the simulated failure time and the simulated censoring time are infinite. To see this consider an observation in stratum 1F for the survival distribution and stratum 1G for the censoring distribution. Now if the largest value in stratum 1F is censored it is given a nominal failure time of \code{Inf}, also if the largest value in stratum 1G is uncensored it is given a nominal censoring time of \code{Inf} and so both the simulated failure and censoring times could be infinite. When this happens the simulated value is considered to be a failure at the time of the largest observed failure time in the stratum for the survival distribution. When \code{parallel = "snow"} and \code{cl} is not supplied, \code{library(survival)} is run in each of the worker processes. } \references{ Andersen, P.K., Borgan, O., Gill, R.D. and Keiding, N. (1993) \emph{Statistical Models Based on Counting Processes}. Springer-Verlag. Burr, D. (1994) A comparison of certain bootstrap confidence intervals in the Cox model. \emph{Journal of the American Statistical Association}, \bold{89}, 1290--1302. Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Efron, B. (1981) Censored data and the bootstrap. \emph{Journal of the American Statistical Association}, \bold{76}, 312--319. Hjort, N.L. (1985) Bootstrapping Cox's regression model. Technical report NSF-241, Dept. of Statistics, Stanford University. } \seealso{ \code{\link{boot}}, \code{\link{coxph}}, \code{\link{survfit}} } \examples{ library(survival) # Example 3.9 of Davison and Hinkley (1997) does a bootstrap on some # remission times for patients with a type of leukaemia. The patients # were divided into those who received maintenance chemotherapy and # those who did not. Here we are interested in the median remission # time for the two groups. data(aml, package = "boot") # not the version in survival. 
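# aml.fun below fits a Kaplan-Meier curve to each group with survfit() and
# then extracts, for each stratum, the smallest time at which the estimated
# survivor function has dropped to 0.5 or less, i.e. the median remission time.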
aml.fun <- function(data) { surv <- survfit(Surv(time, cens) ~ group, data = data) out <- NULL st <- 1 for (s in 1:length(surv$strata)) { inds <- st:(st + surv$strata[s]-1) md <- min(surv$time[inds[1-surv$surv[inds] >= 0.5]]) st <- st + surv$strata[s] out <- c(out, md) } out } aml.case <- censboot(aml, aml.fun, R = 499, strata = aml$group) # Now we will look at the same statistic using the conditional # bootstrap and the weird bootstrap. For the conditional bootstrap # the survival distribution is stratified but the censoring # distribution is not. aml.s1 <- survfit(Surv(time, cens) ~ group, data = aml) aml.s2 <- survfit(Surv(time-0.001*cens, 1-cens) ~ 1, data = aml) aml.cond <- censboot(aml, aml.fun, R = 499, strata = aml$group, F.surv = aml.s1, G.surv = aml.s2, sim = "cond") # For the weird bootstrap we must redefine our function slightly since # the data will not contain the group number. aml.fun1 <- function(data, str) { surv <- survfit(Surv(data[, 1], data[, 2]) ~ str) out <- NULL st <- 1 for (s in 1:length(surv$strata)) { inds <- st:(st + surv$strata[s] - 1) md <- min(surv$time[inds[1-surv$surv[inds] >= 0.5]]) st <- st + surv$strata[s] out <- c(out, md) } out } aml.wei <- censboot(cbind(aml$time, aml$cens), aml.fun1, R = 499, strata = aml$group, F.surv = aml.s1, sim = "weird") # Now for an example where a cox regression model has been fitted # the data we will look at the melanoma data of Example 7.6 from # Davison and Hinkley (1997). The fitted model assumes that there # is a different survival distribution for the ulcerated and # non-ulcerated groups but that the thickness of the tumour has a # common effect. We will also assume that the censoring distribution # is different in different age groups. The statistic of interest # is the linear predictor. This is returned as the values at a # number of equally spaced points in the range of interest. data(melanoma, package = "boot") library(splines)# for ns mel.cox <- coxph(Surv(time, status == 1) ~ ns(thickness, df=4) + strata(ulcer), data = melanoma) mel.surv <- survfit(mel.cox) agec <- cut(melanoma$age, c(0, 39, 49, 59, 69, 100)) mel.cens <- survfit(Surv(time - 0.001*(status == 1), status != 1) ~ strata(agec), data = melanoma) mel.fun <- function(d) { t1 <- ns(d$thickness, df=4) cox <- coxph(Surv(d$time, d$status == 1) ~ t1+strata(d$ulcer)) ind <- !duplicated(d$thickness) u <- d$thickness[!ind] eta <- cox$linear.predictors[!ind] sp <- smooth.spline(u, eta, df=20) th <- seq(from = 0.25, to = 10, by = 0.25) predict(sp, th)$y } mel.str <- cbind(melanoma$ulcer, agec) # this is slow! mel.mod <- censboot(melanoma, mel.fun, R = 499, F.surv = mel.surv, G.surv = mel.cens, cox = mel.cox, strata = mel.str, sim = "model") # To plot the original predictor and a 95\% pointwise envelope for it mel.env <- envelope(mel.mod)$point th <- seq(0.25, 10, by = 0.25) plot(th, mel.env[1, ], ylim = c(-2, 2), xlab = "thickness (mm)", ylab = "linear predictor", type = "n") lines(th, mel.mod$t0, lty = 1) matlines(th, t(mel.env), lty = 2) } \author{Angelo J. Canty. Parallel extensions by Brian Ripley} \keyword{survival} boot/man/channing.Rd0000644000076600000240000000360611110552530014076 0ustar00ripleystaff\name{channing} \alias{channing} \title{ Channing House Data } \description{ The \code{channing} data frame has 462 rows and 5 columns. Channing House is a retirement centre in Palo Alto, California. These data were collected between the opening of the house in 1964 until July 1, 1975. In that time 97 men and 365 women passed through the centre. 
For each of these, their age on entry and also on leaving or death was recorded. A large number of the observations were censored mainly due to the resident being alive on July 1, 1975 when the data were collected. Over the time of the study 130 women and 46 men died at Channing House. Differences between the survival of the sexes, taking age into account, were one of the primary concerns of this study. } \usage{ channing } \format{ This data frame contains the following columns: \describe{ \item{\code{sex}}{ A factor for the sex of each resident (\code{"Male"} or \code{"Female"}). } \item{\code{entry}}{ The resident's age (in months) on entry to the centre. } \item{\code{exit}}{ The age (in months) of the resident on death, leaving the centre or July 1, 1975, whichever event occurred first. } \item{\code{time}}{ The length of time (in months) that the resident spent at Channing House. (\code{time=exit-entry}) } \item{\code{cens}}{ The indicator of right censoring. 1 indicates that the resident died at Channing House, 0 indicates that they left the house prior to July 1, 1975 or that they were still alive and living in the centre at that date. }}} \source{ The data were obtained from Hyde, J. (1980) Testing survival with incomplete observations. \emph{Biostatistics Casebook}. R.G. Miller, B. Efron, B.W. Brown and L.E. Moses (editors), 31--46. John Wiley. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/claridge.Rd0000644000076600000240000000321711110552530014061 0ustar00ripleystaff\name{claridge} \alias{claridge} \title{ Genetic Links to Left-handedness } \description{ The \code{claridge} data frame has 37 rows and 2 columns. The data are from an experiment which was designed to look for a relationship between a certain genetic characteristic and handedness. The 37 subjects were women who had a son with mental retardation due to inheriting a defective X-chromosome. For each such mother a genetic measurement of their DNA was made. Larger values of this measurement are known to be linked to the defective gene and it was hypothesized that larger values might also be linked to a progressive shift away from right-handedness. Each woman also filled in a questionnaire regarding which hand they used for various tasks. From these questionnaires a measure of hand preference was found for each mother. The scale of this measure goes from 1, indicating someone who always favours their right hand, to 8, indicating someone who always favours their left hand. Between these two extremes are people who favour one hand for some tasks and the other for other tasks. } \usage{ claridge } \format{ This data frame contains the following columns: \describe{ \item{\code{dnan}}{ The genetic measurement on each woman's DNA. } \item{\code{hand}}{ The measure of left-handedness on an integer scale from 1 to 8. }}} \source{ The data were kindly made available by Dr. Gordon S. Claridge from the Department of Experimental Psychology, University of Oxford. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/cloth.Rd0000644000076600000240000000123511110552530013416 0ustar00ripleystaff\name{cloth} \alias{cloth} \title{ Number of Flaws in Cloth } \description{ The \code{cloth} data frame has 32 rows and 2 columns.
} \usage{ cloth } \format{ This data frame contains the following columns: \describe{ \item{\code{x}}{ The length of the roll of cloth. } \item{\code{y}}{ The number of flaws found in the roll. }}} \source{ The data were obtained from Bissell, A.F. (1972) A negative binomial model with varying element size. \emph{Biometrika}, \bold{59}, 435--441. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/co.transfer.Rd0000644000076600000240000000222611110552530014532 0ustar00ripleystaff\name{co.transfer} \alias{co.transfer} \title{ Carbon Monoxide Transfer } \description{ The \code{co.transfer} data frame has 7 rows and 2 columns. Seven smokers with chickenpox had their levels of carbon monoxide transfer measured on entry to hospital and then again after 1 week. The main question is whether one week of hospitalization changed the carbon monoxide transfer factor. } \usage{ co.transfer } \format{ This data frame contains the following columns: \describe{ \item{\code{entry}}{ Carbon monoxide transfer factor on entry to hospital. } \item{\code{week}}{ Carbon monoxide transfer one week after admittance to hospital. }}} \source{ The data were obtained from Hand, D.J., Daly, F., Lunn, A.D., McConway, K.J. and Ostrowski, E. (1994) \emph{A Handbook of Small Data Sets}. Chapman and Hall. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Ellis, M.E., Neal, K.R. and Webb, A.K. (1987) Is smoking a risk factor for pneumonia in patients with chickenpox? \emph{British Medical Journal}, \bold{294}, 1002. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/coal.Rd0000644000076600000240000000204011110552530013226 0ustar00ripleystaff\name{coal} \alias{coal} \title{ Dates of Coal Mining Disasters } \description{ The \code{coal} data frame has 191 rows and 1 column. This data frame gives the dates of 191 explosions in coal mines which resulted in 10 or more fatalities. The time span of the data is from March 15, 1851 until March 22, 1962. } \usage{ coal } \format{ This data frame contains the following column: \describe{ \item{\code{date}}{ The date of the disaster. The integer part of \code{date} gives the year. The day is represented as the fraction of the year that had elapsed on that day. }}} \source{ The data were obtained from Hand, D.J., Daly, F., Lunn, A.D., McConway, K.J. and Ostrowski, E. (1994) \emph{A Handbook of Small Data Sets}, Chapman and Hall. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Jarrett, R.G. (1979) A note on the intervals between coal-mining disasters. \emph{Biometrika}, \bold{66}, 191--193. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/control.Rd0000644000076600000240000001347711566131371014003 0ustar00ripleystaff\name{control} \alias{control} \title{ Control Variate Calculations } \description{ This function will find control variate estimates from a bootstrap output object. It can either find the adjusted bias estimate using post-simulation balancing or it can estimate the bias, variance, third cumulant and quantiles, using the linear approximation as a control variate.
} \usage{ control(boot.out, L = NULL, distn = NULL, index = 1, t0 = NULL, t = NULL, bias.adj = FALSE, alpha = NULL, \dots) } \arguments{ \item{boot.out}{ A bootstrap output object returned from \code{boot}. The bootstrap replicates must have been generated using the usual nonparametric bootstrap. } \item{L}{ The empirical influence values for the statistic of interest. If \code{L} is not supplied then \code{empinf} is called to calculate them from \code{boot.out}. } \item{distn}{ If present this must be the output from \code{smooth.spline} giving the distribution function of the linear approximation. This is used only if \code{bias.adj} is \code{FALSE}. Normally this would be found using a saddlepoint approximation. If it is not supplied in that case then it is calculated by \code{saddle.distn}. } \item{index}{ The index of the variable of interest in the output of \code{boot.out$statistic}. } \item{t0}{ The observed value of the statistic of interest on the original data set \code{boot.out$data}. This argument is used only if \code{bias.adj} is \code{FALSE}. The input value is ignored if \code{t} is not also supplied. The default value is \code{boot.out$t0[index]}. } \item{t}{ The bootstrap replicate values of the statistic of interest. This argument is used only if \code{bias.adj} is \code{FALSE}. The input is ignored if \code{t0} is not also supplied. The default value is \code{boot.out$t[,index]}. } \item{bias.adj}{ A logical variable which if \code{TRUE} specifies that the adjusted bias estimate using post-simulation balance is all that is required. If \code{bias.adj} is \code{FALSE} (default) then the linear approximation to the statistic is calculated and used as a control variate in estimates of the bias, variance and third cumulant as well as quantiles. } \item{alpha}{ The alpha levels for the required quantiles if \code{bias.adj} is \code{FALSE}. } \item{\dots}{ Any additional arguments that \code{boot.out$statistic} requires. These are passed unchanged every time \code{boot.out$statistic} is called. \code{boot.out$statistic} is called once if \code{bias.adj} is \code{TRUE}, otherwise it may be called by \code{empinf} for empirical influence calculations if \code{L} is not supplied. } } \value{ If \code{bias.adj} is \code{TRUE} then the returned value is the adjusted bias estimate. If \code{bias.adj} is \code{FALSE} then the returned value is a list with the following components: \item{L}{ The empirical influence values used. These are the input values if supplied, and otherwise they are the values calculated by \code{empinf}. } \item{tL}{ The linear approximations to the bootstrap replicates \code{t} of the statistic of interest. } \item{bias}{ The control estimate of bias using the linear approximation to \code{t} as a control variate. } \item{var}{ The control estimate of variance using the linear approximation to \code{t} as a control variate. } \item{k3}{ The control estimate of the third cumulant using the linear approximation to \code{t} as a control variate. } \item{quantiles}{ A matrix with two columns; the first column gives the alpha levels used for the quantiles and the second column gives the corresponding control estimates of the quantiles using the linear approximation to \code{t} as a control variate. } \item{distn}{ An output object from \code{smooth.spline} describing the saddlepoint approximation to the bootstrap distribution of the linear approximation to \code{t}.
If \code{distn} was supplied on input then this is the same as the input otherwise it is calculated by a call to \code{saddle.distn}. } } \details{ If \code{bias.adj} is \code{FALSE} then the linear approximation to the statistic is found and evaluated at each bootstrap replicate. Then using the equation \emph{T* = Tl*+(T*-Tl*)}, moment estimates can be found. For quantile estimation the distribution of the linear approximation to \code{t} is approximated very accurately by saddlepoint methods, this is then combined with the bootstrap replicates to approximate the bootstrap distribution of \code{t} and hence to estimate the bootstrap quantiles of \code{t}. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Davison, A.C., Hinkley, D.V. and Schechtman, E. (1986) Efficient bootstrap simulation. \emph{Biometrika}, \bold{73}, 555--566. Efron, B. (1990) More efficient bootstrap computations. \emph{Journal of the American Statistical Association}, \bold{55}, 79--89. } \seealso{ \code{\link{boot}}, \code{\link{empinf}}, \code{\link{k3.linear}}, \code{\link{linear.approx}}, \code{\link{saddle.distn}}, \code{\link{smooth.spline}}, \code{\link{var.linear}} } \examples{ # Use of control variates for the variance of the air-conditioning data mean.fun <- function(d, i) { m <- mean(d$hours[i]) n <- nrow(d) v <- (n-1)*var(d$hours[i])/n^2 c(m, v) } air.boot <- boot(aircondit, mean.fun, R = 999) control(air.boot, index = 2, bias.adj = TRUE) air.cont <- control(air.boot, index = 2) # Now let us try the variance on the log scale. air.cont1 <- control(air.boot, t0 = log(air.boot$t0[2]), t = log(air.boot$t[, 2])) } \keyword{nonparametric} boot/man/corr.Rd0000644000076600000240000000174111566131413013264 0ustar00ripleystaff\name{corr} \alias{corr} \title{ Correlation Coefficient } \description{ Calculates the weighted correlation given a data set and a set of weights. } \usage{ corr(d, w = rep(1, nrow(d))/nrow(d)) } \arguments{ \item{d}{ A matrix with two columns corresponding to the two variables whose correlation we wish to calculate. } \item{w}{ A vector of weights to be applied to each pair of observations. The default is equal weights for each pair. Normalization takes place within the function so \code{sum(w)} need not equal 1. }} \value{ The correlation coefficient between \code{d[,1]} and \code{d[,2]}. } \details{ This function finds the correlation coefficient in weighted form. This is often useful in bootstrap methods since it allows for numerical differentiation to get the empirical influence values. It is also necessary to have the statistic in this form to find ABC intervals. } \seealso{ \code{\link{cor}} } \keyword{math} \keyword{multivariate} % Converted by Sd2Rd version 1.15. boot/man/cum3.Rd0000644000076600000240000000216711566471164013203 0ustar00ripleystaff\name{cum3} \alias{cum3} \title{ Calculate Third Order Cumulants } \description{ Calculates an estimate of the third cumulant, or skewness, of a vector. Also, if more than one vector is specified, a product-moment of order 3 is estimated. } \usage{ cum3(a, b = a, c = a, unbiased = TRUE) } \arguments{ \item{a}{ A vector of observations. } \item{b}{ Another vector of observations, if not supplied it is set to the value of \code{a}. If supplied then it must be the same length as \code{a}. } \item{c}{ Another vector of observations, if not supplied it is set to the value of \code{a}. If supplied then it must be the same length as \code{a}. 
} \item{unbiased}{ A logical value indicating whether the unbiased estimator should be used. }} \value{ The required estimate. } \details{ The unbiased estimator uses a multiplier of \code{n/((n-1)*(n-2))} where \code{n} is the sample size, if \code{unbiased} is \code{FALSE} then a multiplier of \code{1/n} is used. This is multiplied by \code{sum((a-mean(a))*(b-mean(b))*(c-mean(c)))} to give the required estimate. } \keyword{math} \keyword{multivariate} % Converted by Sd2Rd version 1.15. boot/man/cv.glm.Rd0000644000076600000240000001135011566414220013502 0ustar00ripleystaff\name{cv.glm} \alias{cv.glm} \title{ Cross-validation for Generalized Linear Models } \description{ This function calculates the estimated K-fold cross-validation prediction error for generalized linear models. } \usage{ cv.glm(data, glmfit, cost, K) } \arguments{ \item{data}{ A matrix or data frame containing the data. The rows should be cases and the columns correspond to variables, one of which is the response. } \item{glmfit}{ An object of class \code{"glm"} containing the results of a generalized linear model fitted to \code{data}. } \item{cost}{ A function of two vector arguments specifying the cost function for the cross-validation. The first argument to \code{cost} should correspond to the observed responses and the second argument should correspond to the predicted or fitted responses from the generalized linear model. \code{cost} must return a non-negative scalar value. The default is the average squared error function. } \item{K}{ The number of groups into which the data should be split to estimate the cross-validation prediction error. The value of \code{K} must be such that all groups are of approximately equal size. If the supplied value of \code{K} does not satisfy this criterion then it will be set to the closest integer which does and a warning is generated specifying the value of \code{K} used. The default is to set \code{K} equal to the number of observations in \code{data} which gives the usual leave-one-out cross-validation. }} \value{ The returned value is a list with the following components. \item{call}{ The original call to \code{cv.glm}. } \item{K}{ The value of \code{K} used for the K-fold cross validation. } \item{delta}{ A vector of length two. The first component is the raw cross-validation estimate of prediction error. The second component is the adjusted cross-validation estimate. The adjustment is designed to compensate for the bias introduced by not using leave-one-out cross-validation. } \item{seed}{ The value of \code{.Random.seed} when \code{cv.glm} was called. }} \section{Side Effects}{ The value of \code{.Random.seed} is updated. } \details{ The data is divided randomly into \code{K} groups. For each group the generalized linear model is fit to \code{data} omitting that group, then the function \code{cost} is applied to the observed responses in the group that was omitted from the fit and the prediction made by the fitted models for those observations. When \code{K} is the number of observations leave-one-out cross-validation is used and all the possible splits of the data are used. When \code{K} is less than the number of observations the \code{K} splits to be used are found by randomly partitioning the data into \code{K} groups of approximately equal size. In this latter case a certain amount of bias is introduced. This can be reduced by using a simple adjustment (see equation 6.48 in Davison and Hinkley, 1997). 
The second value returned in \code{delta} is the estimate adjusted by this method. } \references{ Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984) \emph{Classification and Regression Trees}. Wadsworth. Burman, P. (1989) A comparative study of ordinary cross-validation, \emph{v}-fold cross-validation and repeated learning-testing methods. \emph{Biometrika}, \bold{76}, 503--514. Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Efron, B. (1986) How biased is the apparent error rate of a prediction rule? \emph{Journal of the American Statistical Association}, \bold{81}, 461--470. Stone, M. (1974) Cross-validation choice and assessment of statistical predictions (with Discussion). \emph{Journal of the Royal Statistical Society, B}, \bold{36}, 111--147. } \seealso{ \code{\link{glm}}, \code{\link{glm.diag}}, \code{\link{predict}} } \examples{ # leave-one-out and 6-fold cross-validation prediction error for # the mammals data set. data(mammals, package="MASS") mammals.glm <- glm(log(brain) ~ log(body), data = mammals) (cv.err <- cv.glm(mammals, mammals.glm)$delta) (cv.err.6 <- cv.glm(mammals, mammals.glm, K = 6)$delta) # As this is a linear model we could calculate the leave-one-out # cross-validation estimate without any extra model-fitting. muhat <- fitted(mammals.glm) mammals.diag <- glm.diag(mammals.glm) (cv.err <- mean((mammals.glm$y - muhat)^2/(1 - mammals.diag$h)^2)) # leave-one-out and 11-fold cross-validation prediction error for # the nodal data set. Since the response is a binary variable an # appropriate cost function is cost <- function(r, pi = 0) mean(abs(r-pi) > 0.5) nodal.glm <- glm(r ~ stage+xray+acid, binomial, data = nodal) (cv.err <- cv.glm(nodal, nodal.glm, cost, K = nrow(nodal))$delta) (cv.11.err <- cv.glm(nodal, nodal.glm, cost, K = 11)$delta) } \keyword{regression} boot/man/darwin.Rd0000644000076600000240000000225711110552530013576 0ustar00ripleystaff\name{darwin} \alias{darwin} \title{ Darwin's Plant Height Differences } \description{ The \code{darwin} data frame has 15 rows and 1 column. Charles Darwin conducted an experiment to examine the superiority of cross-fertilized plants over self-fertilized plants. 15 pairs of plants were used. Each pair consisted of one cross-fertilized plant and one self-fertilized plant which germinated at the same time and grew in the same pot. The plants were measured at a fixed time after planting and the difference in heights between the cross- and self-fertilized plants is recorded in eighths of an inch. } \usage{ darwin } \format{ This data frame contains the following column: \describe{ \item{\code{y}}{ The difference in heights for the pairs of plants (in units of 0.125 inches). }}} \source{ The data were obtained from Fisher, R.A. (1935) \emph{Design of Experiments}. Oliver and Boyd. } \references{ Darwin, C. (1876) \emph{The Effects of Cross- and Self-fertilisation in the Vegetable Kingdom}. John Murray. Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/dogs.Rd0000644000076600000240000000106611110552530013243 0ustar00ripleystaff\name{dogs} \alias{dogs} \title{ Cardiac Data for Domestic Dogs } \usage{dogs} \description{ The \code{dogs} data frame has 7 rows and 2 columns. Data on the cardiac oxygen consumption and left ventricular pressure were gathered on 7 domestic dogs.
} \format{ This data frame contains the following columns: \describe{ \item{mvo}{Cardiac Oxygen Consumption} \item{lvp}{Left Ventricular Pressure} } } \references{ Davison, A. C. and Hinkley, D. V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} boot/man/downs.bc.Rd0000644000076600000240000000307711110552530014030 0ustar00ripleystaff\name{downs.bc} \alias{downs.bc} \title{ Incidence of Down's Syndrome in British Columbia } \description{ The \code{downs.bc} data frame has 30 rows and 3 columns. Down's syndrome is a genetic disorder caused by an extra chromosome 21 or a part of chromosome 21 being translocated to another chromosome. The incidence of Down's syndrome is highly dependent on the mother's age and rises sharply after age 30. In the 1960s a large-scale study of the effect of maternal age on the incidence of Down's syndrome was conducted at the British Columbia Health Surveillance Registry. These are the data which were collected in that study. Mothers were classified by age. Most groups correspond to the age in years but the first group comprises all mothers with ages in the range 15-17 and the last is those with ages 46-49. No data for mothers over 50 or below 15 were collected. } \usage{ downs.bc } \format{ This data frame contains the following columns: \describe{ \item{\code{age}}{ The average age of all mothers in the age category. } \item{\code{m}}{ The total number of live births to mothers in the age category. } \item{\code{r}}{ The number of cases of Down's syndrome. }}} \source{ The data were obtained from Geyer, C.J. (1991) Constrained maximum likelihood exemplified by isotonic convex logistic regression. \emph{Journal of the American Statistical Association}, \bold{86}, 717--724. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/ducks.Rd0000644000076600000240000000312211110552530013413 0ustar00ripleystaff\name{ducks} \alias{ducks} \title{ Behavioral and Plumage Characteristics of Hybrid Ducks } \description{ The \code{ducks} data frame has 11 rows and 2 columns. Each row of the data frame represents a male duck which is a second-generation cross of mallard and pintail ducks. For 11 such ducks a behavioural index and a plumage index were calculated. These were measured on scales devised for this experiment, which was designed to examine whether there was any link between which species the ducks resembled physically and which they resembled in behaviour. The scale for the physical appearance ranged from 0 (identical in appearance to a mallard) to 20 (identical to a pintail). The behavioural traits of the ducks were on a scale from 0 to 15 with lower numbers indicating behaviour closer to that of a mallard. } \usage{ ducks } \format{ This data frame contains the following columns: \describe{ \item{\code{plumage}}{ The index of physical appearance based on the plumage of individual ducks. } \item{\code{behaviour}}{ The index of behavioural characteristics of the ducks. }}} \source{ The data were obtained from Larsen, R.J. and Marx, M.L. (1986) \emph{An Introduction to Mathematical Statistics and its Applications} (Second Edition). Prentice-Hall. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Sharpe, R.S., and Johnsgard, P.A.
(1966) Inheritance of behavioral characters in \eqn{F_2}{F2} mallard x pintail (\emph{Anas Platyrhynchos L. x Anas Acuta L.}) hybrids. \emph{Behaviour}, \bold{27}, 259--272. } \keyword{datasets} boot/man/empinf.Rd0000644000076600000240000002010711566415032013574 0ustar00ripleystaff\name{empinf} \alias{empinf} \title{ Empirical Influence Values } \description{ This function calculates the empirical influence values for a statistic applied to a data set. It allows four types of calculation, namely the infinitesimal jackknife (using numerical differentiation), the usual jackknife estimates, the \sQuote{positive} jackknife estimates and a method which estimates the empirical influence values using regression of bootstrap replicates of the statistic. All methods can be used with one or more samples. } \usage{ empinf(boot.out = NULL, data = NULL, statistic = NULL, type = NULL, stype = NULL, index = 1, t = NULL, strata = rep(1, n), eps = 0.001, ...) } \arguments{ \item{boot.out}{ A bootstrap object created by the function \code{boot}. If \code{type} is \code{"reg"} then this argument is required. For any of the other types it is an optional argument. If it is included when optional then the values of \code{data}, \code{statistic}, \code{stype}, and \code{strata} are taken from the components of \code{boot.out} and any values passed to \code{empinf} directly are ignored. } \item{data}{ A vector, matrix or data frame containing the data for which empirical influence values are required. It is a required argument if \code{boot.out} is not supplied. If \code{boot.out} is supplied then \code{data} is set to \code{boot.out$data} and any value supplied is ignored. } \item{statistic}{ The statistic for which empirical influence values are required. It must be a function of at least two arguments, the data set and a vector of weights, frequencies or indices. The nature of the second argument is given by the value of \code{stype}. Any other arguments that it takes must be supplied to \code{empinf} and will be passed to \code{statistic} unchanged. This is a required argument if \code{boot.out} is not supplied, otherwise its value is taken from \code{boot.out} and any value supplied here will be ignored. } \item{type}{ The calculation type to be used for the empirical influence values. Possible values of \code{type} are \code{"inf"} (infinitesimal jackknife), \code{"jack"} (usual jackknife), \code{"pos"} (positive jackknife), and \code{"reg"} (regression estimation). The default value depends on the other arguments. If \code{t} is supplied then the default value of \code{type} is \code{"reg"} and \code{boot.out} should be present so that its frequency array can be found. If \code{t} is not supplied then, if \code{stype} is \code{"w"}, the default value of \code{type} is \code{"inf"}; otherwise, if \code{boot.out} is present the default is \code{"reg"}. If none of these conditions apply then the default is \code{"jack"}. Note that it is an error for \code{type} to be \code{"reg"} if \code{boot.out} is missing or to be \code{"inf"} if \code{stype} is not \code{"w"}. } \item{stype}{ A character variable giving the nature of the second argument to \code{statistic}. It can take on three values: \code{"w"} (weights), \code{"f"} (frequencies), or \code{"i"} (indices). If \code{boot.out} is supplied the value of \code{stype} is set to \code{boot.out$stype} and any value supplied here is ignored. Otherwise it is an optional argument which defaults to \code{"w"}.
If \code{type} is \code{"inf"} then \code{stype} MUST be \code{"w"}. } \item{index}{ An integer giving the position of the variable of interest in the output of \code{statistic}. } \item{t}{ A vector of length \code{boot.out$R} which gives the bootstrap replicates of the statistic of interest. \code{t} is used only when \code{type} is \code{reg} and it defaults to \code{boot.out$t[,index]}. } \item{strata}{ An integer vector or a factor specifying the strata for multi-sample problems. If \code{boot.out} is supplied the value of \code{strata} is set to \code{boot.out$strata}. Otherwise it is an optional argument which has default corresponding to the single sample situation. } \item{eps}{ This argument is used only if \code{type} is \code{"inf"}. In that case the value of epsilon to be used for numerical differentiation will be \code{eps} divided by the number of observations in \code{data}. } \item{\dots}{ Any other arguments that \code{statistic} takes. They will be passed unchanged to \code{statistic} every time that it is called. } } \section{Warning}{ All arguments to \code{empinf} must be passed using the \code{name = value} convention. If this is not followed then unpredictable errors can occur. } \value{ A vector of the empirical influence values of \code{statistic} applied to \code{data}. The values will be in the same order as the observations in data. } \details{ If \code{type} is \code{"inf"} then numerical differentiation is used to approximate the empirical influence values. This makes sense only for statistics which are written in weighted form (i.e. \code{stype} is \code{"w"}). If \code{type} is \code{"jack"} then the usual leave-one-out jackknife estimates of the empirical influence are returned. If \code{type} is \code{"pos"} then the positive (include-one-twice) jackknife values are used. If \code{type} is \code{"reg"} then a bootstrap object must be supplied. The regression method then works by regressing the bootstrap replicates of \code{statistic} on the frequency array from which they were derived. The bootstrap frequency array is obtained through a call to \code{boot.array}. Further details of the methods are given in Section 2.7 of Davison and Hinkley (1997). Empirical influence values are often used frequently in nonparametric bootstrap applications. For this reason many other functions call \code{empinf} when they are required. Some examples of their use are for nonparametric delta estimates of variance, BCa intervals and finding linear approximations to statistics for use as control variates. They are also used for antithetic bootstrap resampling. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Efron, B. (1982) \emph{The Jackknife, the Bootstrap and Other Resampling Plans}. CBMS-NSF Regional Conference Series in Applied Mathematics, \bold{38}, SIAM. Fernholtz, L.T. (1983) \emph{von Mises Calculus for Statistical Functionals}. Lecture Notes in Statistics, \bold{19}, Springer-Verlag. } \seealso{ \code{\link{boot}}, \code{\link{boot.array}}, \code{\link{boot.ci}}, \code{\link{control}}, \code{\link{jack.after.boot}}, \code{\link{linear.approx}}, \code{\link{var.linear}} } \examples{ # The empirical influence values for the ratio of means in # the city data. 
ratio <- function(d, w) sum(d$x * w)/sum(d$u * w) empinf(data = city, statistic = ratio) city.boot <- boot(city, ratio, 499, stype="w") empinf(boot.out = city.boot, type = "reg") # A statistic that may be of interest in the difference of means # problem is the t-statistic for testing equality of means. In # the bootstrap we get replicates of the difference of means and # the variance of that statistic and then want to use this output # to get the empirical influence values of the t-statistic. grav1 <- gravity[as.numeric(gravity[,2]) >= 7,] grav.fun <- function(dat, w) { strata <- tapply(dat[, 2], as.numeric(dat[, 2])) d <- dat[, 1] ns <- tabulate(strata) w <- w/tapply(w, strata, sum)[strata] mns <- as.vector(tapply(d * w, strata, sum)) # drop names mn2 <- tapply(d * d * w, strata, sum) s2hat <- sum((mn2 - mns^2)/ns) c(mns[2] - mns[1], s2hat) } grav.boot <- boot(grav1, grav.fun, R = 499, stype = "w", strata = grav1[, 2]) # Since the statistic of interest is a function of the bootstrap # statistics, we must calculate the bootstrap replicates and pass # them to empinf using the t argument. grav.z <- (grav.boot$t[,1]-grav.boot$t0[1])/sqrt(grav.boot$t[,2]) empinf(boot.out = grav.boot, t = grav.z) } \keyword{nonparametric} \keyword{math} boot/man/envelope.Rd0000644000076600000240000000741711566471143014145 0ustar00ripleystaff\name{envelope} \alias{envelope} \title{ Confidence Envelopes for Curves } \description{ This function calculates overall and pointwise confidence envelopes for a curve based on bootstrap replicates of the curve evaluated at a number of fixed points. } \usage{ envelope(boot.out = NULL, mat = NULL, level = 0.95, index = 1:ncol(mat)) } \arguments{ \item{boot.out}{ An object of class \code{"boot"} for which \code{boot.out$t} contains the replicates of the curve at a number of fixed points. } \item{mat}{ A matrix of bootstrap replicates of the values of the curve at a number of fixed points. This is a required argument if \code{boot.out} is not supplied and is set to \code{boot.out$t} otherwise. } \item{level}{ The confidence level of the envelopes required. The default is to find 95\% confidence envelopes. It can be a scalar or a vector of length 2. If it is scalar then both the pointwise and the overall envelopes are found at that level. If it is a vector then the first element gives the level for the pointwise envelope and the second gives the level for the overall envelope. } \item{index}{ The numbers of the columns of \code{mat} which contain the bootstrap replicates. This can be used to ensure that other statistics which may have been calculated in the bootstrap are not considered as values of the function. }} \value{ A list with the following components: \item{point}{ A matrix with two rows corresponding to the values of the upper and lower pointwise confidence envelope at the same points as the bootstrap replicates were calculated. } \item{overall}{ A matrix similar to \code{point} but containing the envelope which controls the overall error. } \item{k.pt}{ The quantiles used for the pointwise envelope. } \item{err.pt}{ A vector with two components, the first gives the pointwise error rate for the pointwise envelope, and the second the overall error rate for that envelope. } \item{k.ov}{ The quantiles used for the overall envelope. } \item{err.ov}{ A vector with two components, the first gives the pointwise error rate for the overall envelope, and the second the overall error rate for that envelope.
} \item{err.nom}{ A vector of length 2 giving the nominal error rates for the pointwise and the overall envelopes. }} \details{ The pointwise envelope is found by simply looking at the quantiles of the replicates at each point. The overall error for that envelope is then calculated using equation (4.17) of Davison and Hinkley (1997). A sequence of pointwise envelopes is then found until one of them has overall error approximately equal to the level required. If no such envelope can be found then the envelope returned will just contain the extreme values of each column of \code{mat}. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \seealso{ \code{\link{boot}}, \code{\link{boot.ci}} } \examples{ # Testing whether the final series of measurements of the gravity data # may come from a normal distribution. This is done in Examples 4.7 # and 4.8 of Davison and Hinkley (1997). grav1 <- gravity$g[gravity$series == 8] grav.z <- (grav1 - mean(grav1))/sqrt(var(grav1)) grav.gen <- function(dat, mle) rnorm(length(dat)) grav.qqboot <- boot(grav.z, sort, R = 999, sim = "parametric", ran.gen = grav.gen) grav.qq <- qqnorm(grav.z, plot.it = FALSE) grav.qq <- lapply(grav.qq, sort) plot(grav.qq, ylim = c(-3.5, 3.5), ylab = "Studentized Order Statistics", xlab = "Normal Quantiles") grav.env <- envelope(grav.qqboot, level = 0.9) lines(grav.qq$x, grav.env$point[1, ], lty = 4) lines(grav.qq$x, grav.env$point[2, ], lty = 4) lines(grav.qq$x, grav.env$overall[1, ], lty = 1) lines(grav.qq$x, grav.env$overall[2, ], lty = 1) } \keyword{dplot} \keyword{htest} % Converted by Sd2Rd version 1.15. boot/man/exp.tilt.Rd0000644000076600000240000001253411566473720014102 0ustar00ripleystaff\name{exp.tilt} \alias{exp.tilt} \title{ Exponential Tilting } \description{ This function calculates exponentially tilted multinomial distributions such that the resampling distributions of the linear approximation to a statistic have the required means. } \usage{ exp.tilt(L, theta = NULL, t0 = 0, lambda = NULL, strata = rep(1, length(L))) } \arguments{ \item{L}{ The empirical influence values for the statistic of interest based on the observed data. The length of \code{L} should be the same as the size of the original data set. Typically \code{L} will be calculated by a call to \code{empinf}. } \item{theta}{ The value at which the tilted distribution is to be centred. This is not required if \code{lambda} is supplied but is needed otherwise. } \item{t0}{ The current value of the statistic. The default is that the statistic equals 0. } \item{lambda}{ The Lagrange multiplier(s). For each value of \code{lambda} a multinomial distribution is found with probabilities proportional to \code{exp(lambda * L)}. In general \code{lambda} is not known and so \code{theta} would be supplied, and the corresponding value of \code{lambda} found. If both \code{lambda} and \code{theta} are supplied then \code{lambda} is ignored and the multipliers for tilting to \code{theta} are found. } \item{strata}{ A vector or factor of the same length as \code{L} giving the strata for the observed data and the empirical influence values \code{L}. }} \value{ A list with the following components : \item{p}{ The tilted probabilities. There will be \code{m} distributions where \code{m} is the length of \code{theta} (or \code{lambda} if supplied). If \code{m} is 1 then \code{p} is a vector of \code{length(L)} probabilities. 
If \code{m} is greater than 1 then \code{p} is a matrix with \code{m} rows, each of which contain \code{length(L)} probabilities. In this case the vector \code{p[i,]} is the distribution tilted to \code{theta[i]}. \code{p} is in the form required by the argument \code{weights} of the function \code{boot} for importance resampling. } \item{lambda}{ The Lagrange multiplier used in the equation to determine the tilted probabilities. \code{lambda} is a vector of the same length as \code{theta}. } \item{theta}{ The values of \code{theta} to which the distributions have been tilted. In general this will be the input value of \code{theta} but if \code{lambda} was supplied then this is the vector of the corresponding \code{theta} values. }} \details{ Exponential tilting involves finding a set of weights for a data set to ensure that the bootstrap distribution of the linear approximation to a statistic of interest has mean \code{theta}. The weights chosen to achieve this are given by \code{p[j]} proportional to \code{exp(lambda*L[j]/n)}, where \code{n} is the number of data points. \code{lambda} is then chosen to make the mean of the bootstrap distribution, of the linear approximation to the statistic of interest, equal to the required value \code{theta}. Thus \code{lambda} is defined as the solution of a nonlinear equation. The equation is solved by minimizing the Euclidean distance between the left and right hand sides of the equation using the function \code{nlmin}. If this minimum is not equal to zero then the method fails. Typically exponential tilting is used to find suitable weights for importance resampling. If a small tail probability or quantile of the distribution of the statistic of interest is required then a more efficient simulation is to centre the resampling distribution close to the point of interest and then use the functions \code{imp.prob} or \code{imp.quantile} to estimate the required quantity. Another method of achieving a similar shifting of the distribution is through the use of \code{smooth.f}. The function \code{tilt.boot} uses \code{exp.tilt} or \code{smooth.f} to find the weights for a tilted bootstrap. } \references{ Davison, A. C. and Hinkley, D. V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Efron, B. (1981) Nonparametric standard errors and confidence intervals (with Discussion). \emph{Canadian Journal of Statistics}, \bold{9}, 139--172. } \seealso{ \code{\link{empinf}}, \code{\link{imp.prob}}, \code{\link{imp.quantile}}, \code{\link{optim}}, \code{\link{smooth.f}}, \code{\link{tilt.boot}} } \examples{ # Example 9.8 of Davison and Hinkley (1997) requires tilting the resampling # distribution of the studentized statistic to be centred at the observed # value of the test statistic 1.84. This can be achieved as follows. 
grav1 <- gravity[as.numeric(gravity[,2]) >=7 , ] grav.fun <- function(dat, w, orig) { strata <- tapply(dat[, 2], as.numeric(dat[, 2])) d <- dat[, 1] ns <- tabulate(strata) w <- w/tapply(w, strata, sum)[strata] mns <- as.vector(tapply(d * w, strata, sum)) # drop names mn2 <- tapply(d * d * w, strata, sum) s2hat <- sum((mn2 - mns^2)/ns) c(mns[2]-mns[1], s2hat, (mns[2]-mns[1]-orig)/sqrt(s2hat)) } grav.z0 <- grav.fun(grav1, rep(1, 26), 0) grav.L <- empinf(data = grav1, statistic = grav.fun, stype = "w", strata = grav1[,2], index = 3, orig = grav.z0[1]) grav.tilt <- exp.tilt(grav.L, grav.z0[3], strata = grav1[,2]) boot(grav1, grav.fun, R = 499, stype = "w", weights = grav.tilt$p, strata = grav1[,2], orig = grav.z0[1]) } \keyword{nonparametric} \keyword{smooth} % Converted by Sd2Rd version 1.15. boot/man/fir.Rd0000644000076600000240000000135211110552530013065 0ustar00ripleystaff\name{fir} \alias{fir} \title{ Counts of Balsam-fir Seedlings } \description{ The \code{fir} data frame has 50 rows and 3 columns. The number of balsam-fir seedlings in each quadrant of a grid of 50 five foot square quadrants were counted. The grid consisted of 5 rows of 10 quadrants in each row. } \usage{ fir } \format{ This data frame contains the following columns: \describe{ \item{\code{count}}{ The number of seedlings in the quadrant. } \item{\code{row}}{ The row number of the quadrant. } \item{\code{col}}{ The quadrant number within the row. }}} \source{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/freq.array.Rd0000644000076600000240000000251011110552530014354 0ustar00ripleystaff\name{freq.array} \alias{freq.array} \title{ Bootstrap Frequency Arrays } \description{ Take a matrix of indices for nonparametric bootstrap resamples and return the frequencies of the original observations in each resample. } \usage{ freq.array(i.array) } \arguments{ \item{i.array}{ This will be an matrix of integers between 1 and n, where n is the number of observations in a data set. The matrix will have n columns and R rows where R is the number of bootstrap resamples. Such matrices are found by \code{boot} when doing nonparametric bootstraps. They can also be found after a bootstrap has been run through the function \code{boot.array}. }} \value{ A matrix of the same dimensions as the input matrix. Each row of the matrix corresponds to a single bootstrap resample. Each column of the matrix corresponds to one of the original observations and specifies its frequency in each bootstrap resample. Thus the first column tells us how often the first observation appeared in each bootstrap resample. Such frequency arrays are often useful for diagnostic purposes such as the jackknife-after-bootstrap plot. They are also necessary for the regression estimates of empirical influence values and for finding importance sampling weights. } \seealso{ \code{\link{boot.array}} } \keyword{nonparametric} % Converted by Sd2Rd version 1.15. boot/man/frets.Rd0000644000076600000240000000200711110552530013426 0ustar00ripleystaff\name{frets} \alias{frets} \title{ Head Dimensions in Brothers } \description{ The \code{frets} data frame has 25 rows and 4 columns. The data consist of measurements of the length and breadth of the heads of pairs of adult brothers in 25 randomly sampled families. All measurements are expressed in millimetres. 
} \usage{ frets } \format{ This data frame contains the following columns: \describe{ \item{\code{l1}}{ The head length of the eldest son. } \item{\code{b1}}{ The head breadth of the eldest son. } \item{\code{l2}}{ The head length of the second son. } \item{\code{b2}}{ The head breadth of the second son. }}} \source{ The data were obtained from Frets, G.P. (1921) Heredity of head form in man. \emph{Genetica}, \bold{3}, 193. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Whittaker, J. (1990) \emph{Graphical Models in Applied Multivariate Statistics}. John Wiley. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/glm.diag.Rd0000644000076600000240000000271211110552530013770 0ustar00ripleystaff\name{glm.diag} \alias{glm.diag} \title{ Generalized Linear Model Diagnostics } \description{ Calculates jackknife deviance residuals, standardized deviance residuals, standardized Pearson residuals, approximate Cook statistic, leverage and estimated dispersion. } \usage{ glm.diag(glmfit) } \arguments{ \item{glmfit}{ \code{glmfit} is a \code{glm.object} - the result of a call to \code{glm()} }} \value{ Returns a list with the following components \item{res}{ The vector of jackknife deviance residuals. } \item{rd}{ The vector of standardized deviance residuals. } \item{rp}{ The vector of standardized Pearson residuals. } \item{cook}{ The vector of approximate Cook statistics. } \item{h}{ The vector of leverages of the observations. } \item{sd}{ The value used to standardize the residuals. This is the estimate of residual standard deviation in the Gaussian family and is the square root of the estimated shape parameter in the Gamma family. In all other cases it is 1. }} \references{ Davison, A.C. and Snell, E.J. (1991) Residuals and diagnostics. In \emph{Statistical Theory and Modelling: In Honour of Sir David Cox}. D.V. Hinkley, N. Reid and E.J. Snell (editors), 83--106. Chapman and Hall. } \seealso{ \code{\link{glm}}, \code{\link{glm.diag.plots}}, \code{\link{summary.glm}} } \note{ See the help for \code{\link{glm.diag.plots}} for an example of the use of \code{glm.diag}. } \keyword{regression} \keyword{dplot} % Converted by Sd2Rd version 1.15. boot/man/glm.diag.plots.Rd0000644000076600000240000001042711566471075015155 0ustar00ripleystaff\name{glm.diag.plots} \alias{glm.diag.plots} \title{ Diagnostics plots for generalized linear models } \description{ Makes plot of jackknife deviance residuals against linear predictor, normal scores plots of standardized deviance residuals, plot of approximate Cook statistics against leverage/(1-leverage), and case plot of Cook statistic. } \usage{ glm.diag.plots(glmfit, glmdiag = glm.diag(glmfit), subset = NULL, iden = FALSE, labels = NULL, ret = FALSE) } \arguments{ \item{glmfit}{ \code{glm.object} : the result of a call to \code{glm()} } \item{glmdiag}{ Diagnostics of \code{glmfit} obtained from a call to \code{glm.diag}. If it is not supplied then it is calculated. } \item{subset}{ Subset of \code{data} for which \code{glm} fitting performed: should be the same as the \code{subset} option used in the call to \code{glm()} which generated \code{glmfit}. Needed only if the \code{subset=} option was used in the call to \code{glm}. } \item{iden}{ A logical argument. If \code{TRUE} then, after the plots are drawn, the user will be prompted for an integer between 0 and 4. A positive integer will select a plot and invoke \code{identify()} on that plot. 
After exiting \code{identify()}, the user is again prompted, this loop continuing until the user responds to the prompt with 0. If \code{iden} is \code{FALSE} (default) the user cannot interact with the plots. } \item{labels}{ A vector of labels for use with \code{identify()} if \code{iden} is \code{TRUE}. If it is not supplied then the labels are derived from \code{glmfit}. } \item{ret}{ A logical argument indicating if \code{glmdiag} should be returned. The default is \code{FALSE}. }} \value{ If \code{ret} is \code{TRUE} then the value of \code{glmdiag} is returned otherwise there is no returned value. } \details{ The diagnostics required for the plots are calculated by \code{glm.diag}. These are then used to produce the four plots on the current graphics device. The plot on the top left is a plot of the jackknife deviance residuals against the fitted values. The plot on the top right is a normal QQ plot of the standardized deviance residuals. The dotted line is the expected line if the standardized residuals are normally distributed, i.e. it is the line with intercept 0 and slope 1. The bottom two panels are plots of the Cook statistics. On the left is a plot of the Cook statistics against the standardized leverages. In general there will be two dotted lines on this plot. The horizontal line is at 8/(n-2p) where n is the number of observations and p is the number of parameters estimated. Points above this line may be points with high influence on the model. The vertical line is at 2p/(n-2p) and points to the right of this line have high leverage compared to the variance of the raw residual at that point. If all points are below the horizontal line or to the left of the vertical line then the line is not shown. The final plot again shows the Cook statistic this time plotted against case number enabling us to find which observations are influential. Use of \code{iden=T} is encouraged for proper exploration of these four plots as a guide to how well the model fits the data and whether certain observations have an unduly large effect on parameter estimates. } \section{Side Effects}{ The current device is cleared and four plots are plotted by use of \code{split.screen(c(2,2))}. If \code{iden} is \code{TRUE}, interactive identification of points is enabled. All screens are closed, but not cleared, on termination of the function. } \references{ Davison, A. C. and Hinkley, D. V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Davison, A.C. and Snell, E.J. (1991) Residuals and diagnostics. In \emph{Statistical Theory and Modelling: In Honour of Sir David Cox} D.V. Hinkley, N. Reid, and E.J. Snell (editors), 83--106. Chapman and Hall. } \seealso{ \code{\link{glm}}, \code{\link{glm.diag}}, \code{\link{identify}} } \examples{ # In this example we look at the leukaemia data which was looked at in # Example 7.1 of Davison and Hinkley (1997) data(leuk, package = "MASS") leuk.mod <- glm(time ~ ag-1+log10(wbc), family = Gamma(log), data = leuk) leuk.diag <- glm.diag(leuk.mod) glm.diag.plots(leuk.mod, leuk.diag) } \keyword{regression} \keyword{dplot} \keyword{hplot} boot/man/gravity.Rd0000644000076600000240000000274011110552530013774 0ustar00ripleystaff\name{gravity} \alias{gravity} \alias{grav} \title{ Acceleration Due to Gravity } \description{ The \code{gravity} data frame has 81 rows and 2 columns. The \code{grav} data set has 26 rows and 2 columns. Between May 1934 and July 1935, the National Bureau of Standards in Washington D.C. 
conducted a series of experiments to estimate the acceleration due to gravity, \emph{g}, at Washington. Each experiment produced a number of replicate estimates of \emph{g} using the same methodology. Although the basic method remained the same for all experiments, that of the reversible pendulum, there were changes in configuration. The \code{gravity} data frame contains the data from all eight experiments. The \code{grav} data frame contains the data from experiments 7 and 8. The data are expressed as deviations from 980.000 in centimetres per second squared. } \usage{ gravity } \format{ This data frame contains the following columns: \describe{ \item{\code{g}}{ The deviation of the estimate from 980.000 centimetres per second squared. } \item{\code{series}}{ A factor describing from which experiment the estimate was derived. }}} \source{ The data were obtained from Cressie, N. (1982) Playing safe with misweighted means. \emph{Journal of the American Statistical Association}, \bold{77}, 754--759. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/hirose.Rd0000644000076600000240000000167611110552530013607 0ustar00ripleystaff\name{hirose} \alias{hirose} \title{ Failure Time of PET Film } \description{ The \code{hirose} data frame has 44 rows and 3 columns. PET film is used in electrical insulation. In this accelerated life test the failure times of 44 samples of PET film used in gas insulated transformers were recorded. Four different voltage levels were used. } \usage{ hirose } \format{ This data frame contains the following columns: \describe{ \item{\code{volt}}{ The voltage (in kV). } \item{\code{time}}{ The failure or censoring time in hours. } \item{\code{cens}}{ The censoring indicator; \code{1} means right-censored data. }}} \source{ The data were obtained from Hirose, H. (1993) Estimation of threshold stress in accelerated life-testing. \emph{IEEE Transactions on Reliability}, \bold{42}, 650--657. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/imp.weights.Rd0000644000076600000240000000563111566471042014564 0ustar00ripleystaff\name{imp.weights} \alias{imp.weights} \title{ Importance Sampling Weights } \description{ This function calculates the importance sampling weight required to correct for simulation from a distribution with probabilities \code{p} when estimates are required assuming that simulation was from an alternative distribution with probabilities \code{q}. } \usage{ imp.weights(boot.out, def = TRUE, q = NULL) } \arguments{ \item{boot.out}{ An object of class \code{"boot"} generated by \code{boot} or \code{tilt.boot}. Typically the bootstrap simulations would have been done using importance resampling and we wish to do our calculations under the assumption of sampling with equal probabilities. } \item{def}{ A logical variable indicating whether the defensive mixture distribution weights should be calculated. This makes sense only in the case where the replicates in \code{boot.out} were simulated under a number of different distributions. If this is the case then the defensive mixture weights use a mixture of the distributions used in the bootstrap. The alternative is to calculate the weights for each replicate using knowledge of the distribution from which the bootstrap resample was generated.
} \item{q}{ A vector of probabilities specifying the resampling distribution from which we require inferences to be made. In general this would correspond to the usual bootstrap resampling distribution which gives equal weight to each of the original observations and this is the default. \code{q} must have length equal to the number of observations in \code{boot.out$data} and all elements of \code{q} must be positive. }} \value{ A vector of importance weights of the same length as \code{boot.out$t}. These weights can then be used to reweight \code{boot.out$t} so that estimates can be found as if the simulations were from a distribution with probabilities \code{q}. } \details{ The importance sampling weight for a bootstrap replicate with frequency vector \code{f} is given by \code{prod((q/p)^f)}. This reweights the replicates so that estimates can be found as if the bootstrap resamples were generated according to the probabilities \code{q} even though, in fact, they came from the distribution \code{p}. } \references{ Davison, A. C. and Hinkley, D. V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Hesterberg, T. (1995) Weighted average importance sampling and defensive mixture distributions. \emph{Technometrics}, \bold{37}, 185--194. Johns, M.V. (1988) Importance sampling for bootstrap confidence intervals. \emph{Journal of the American Statistical Association}, \bold{83}, 709--714. } \seealso{ \code{\link{boot}}, \code{\link{exp.tilt}}, \code{\link{imp.moments}}, \code{\link{smooth.f}}, \code{\link{tilt.boot}} } \note{ See the example in the help for \code{imp.moments} for an example of using \code{imp.weights}. } \keyword{nonparametric} % Converted by Sd2Rd version 1.15. boot/man/inv.logit.Rd0000644000076600000240000000125011565746177014235 0ustar00ripleystaff\name{inv.logit} \alias{inv.logit} \title{ Inverse Logit Function } \description{ Given a numeric object return the inverse logit of the values. } \usage{ inv.logit(x) } \arguments{ \item{x}{ A numeric object. Missing values (\code{NA}s) are allowed. }} \value{ An object of the same type as \code{x} containing the inverse logits of the input values. } \details{ The inverse logit is defined by \code{exp(x)/(1+exp(x))}. Values in \code{x} of \code{-Inf} or \code{Inf} return 0 or 1 respectively. Any \code{NA}s in the input will also be \code{NA}s in the output. } \seealso{ \code{\link{logit}}, \code{\link{plogis}} for which this is a wrapper. } \keyword{math} boot/man/islay.Rd0000644000076600000240000000151711110552530013431 0ustar00ripleystaff\name{islay} \alias{islay} \title{ Jura Quartzite Azimuths on Islay } \description{ The \code{islay} data frame has 18 rows and 1 column. Measurements were taken of paleocurrent azimuths from the Jura Quartzite on the Scottish island of Islay. } \usage{ islay } \format{ This data frame contains the following column: \describe{ \item{\code{theta}}{ The angle of the azimuth in degrees East of North. }}} \source{ The data were obtained from Hand, D.J., Daly, F., Lunn, A.D., McConway, K.J. and Ostrowski, E. (1994) \emph{A Handbook of Small Data Sets}, Chapman and Hall. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Till, R. (1974) \emph{Statistical Methods for the Earth Scientist}. Macmillan. } \keyword{datasets} % Converted by Sd2Rd version 1.15.
boot/man/jack.after.boot.Rd0000644000076600000240000001261211573145742015300 0ustar00ripleystaff\name{jack.after.boot} \alias{jack.after.boot} \title{ Jackknife-after-Bootstrap Plots } \description{ This function calculates the jackknife influence values from a bootstrap output object and plots the corresponding jackknife-after-bootstrap plot. } \usage{ jack.after.boot(boot.out, index = 1, t = NULL, L = NULL, useJ = TRUE, stinf = TRUE, alpha = NULL, main = "", ylab = NULL, \dots) } \arguments{ \item{boot.out}{ An object of class \code{"boot"} which would normally be created by a call to \code{\link{boot}}. It should represent a nonparametric bootstrap. For reliable results \code{boot.out$R} should be reasonably large. } \item{index}{ The index of the statistic of interest in the output of \code{boot.out$statistic}. } \item{t}{ A vector of length \code{boot.out$R} giving the bootstrap replicates of the statistic of interest. This is useful if the statistic of interest is a function of the calculated bootstrap output. If it is not supplied then the default is \code{boot.out$t[,index]}. } \item{L}{ The empirical influence values for the statistic of interest. These are used only if \code{useJ} is \code{FALSE}. If they are not supplied and are needed, they are calculated by a call to \code{empinf}. If \code{L} is supplied then it is assumed that they are the infinitesimal jackknife values. } \item{useJ}{ A logical variable indicating if the jackknife influence values calculated from the bootstrap replicates should be used. If \code{FALSE} the empirical influence values are used. The default is \code{TRUE}. } \item{stinf}{ A logical variable indicating whether to standardize the jackknife values before plotting them. If \code{TRUE} then the jackknife values used are divided by their standard error. } \item{alpha}{ The quantiles at which the plots are required. The default is \code{c(0.05, 0.1, 0.16, 0.5, 0.84, 0.9, 0.95)}. } \item{main}{ A character string giving the main title for the plot. } \item{ylab}{ The label for the Y axis. If the default values of \code{alpha} are used and \code{ylab} is not supplied then a label indicating which percentiles are plotted is used. If \code{alpha} is supplied then the default label will not say which percentiles were used. } \item{...}{ Any extra arguments required by \code{boot.out$statistic}. These are required only if \code{useJ} is \code{FALSE} and \code{L} is not supplied, in which case they are passed to \code{empinf} for use in calculation of the empirical influence values. }} \value{ There is no returned value but a plot is generated on the current graphics display. } \section{Side Effects}{ A plot is created on the current graphics device. } \details{ The centred jackknife quantiles for each observation are estimated from those bootstrap samples in which the particular observation did not appear. These are then plotted against the influence values. If \code{useJ} is \code{TRUE} then the influence values are found in the same way as the difference between the mean of the statistic in the samples excluding the observations and the mean in all samples. If \code{useJ} is \code{FALSE} then empirical influence values are calculated by calling \code{empinf}. The resulting plots are useful diagnostic tools for looking at the way individual observations affect the bootstrap output. The plot will consist of a number of horizontal dotted lines which correspond to the quantiles of the centred bootstrap distribution. 
For each data point the quantiles of the bootstrap distribution calculated by omitting that point are plotted against the (possibly standardized) jackknife values. The observation number is printed below the plots. To make it easier to see the effect of omitting points on quantiles, the plotted quantiles are joined by line segments. These plots provide a useful diagnostic tool in establishing the effect of individual observations on the bootstrap distribution. See the references below for some guidelines on the interpretation of the plots. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Efron, B. (1992) Jackknife-after-bootstrap standard errors and influence functions (with Discussion). \emph{Journal of the Royal Statistical Society, B}, \bold{54}, 83--127. } \seealso{ \code{\link{boot}}, \code{\link{empinf}} } \examples{ # To draw the jackknife-after-bootstrap plot for the head size data as in # Example 3.24 of Davison and Hinkley (1997) frets.fun <- function(data, i) { pcorr <- function(x) { # Function to find the correlations and partial correlations between # the four measurements. v <- cor(x) v.d <- diag(var(x)) iv <- solve(v) iv.d <- sqrt(diag(iv)) iv <- - diag(1/iv.d) \%*\% iv \%*\% diag(1/iv.d) q <- NULL n <- nrow(v) for (i in 1:(n-1)) q <- rbind( q, c(v[i, 1:i], iv[i,(i+1):n]) ) q <- rbind( q, v[n, ] ) diag(q) <- round(diag(q)) q } d <- data[i, ] v <- pcorr(d) c(v[1,], v[2,], v[3,], v[4,]) } frets.boot <- boot(log(as.matrix(frets)), frets.fun, R = 999) # we will concentrate on the partial correlation between head breadth # for the first son and head length for the second. This is the 7th # element in the output of frets.fun so we set index = 7 jack.after.boot(frets.boot, useJ = FALSE, stinf = FALSE, index = 7) } \keyword{hplot} \keyword{nonparametric} boot/man/k3.linear.Rd0000644000076600000240000000175211566471002014110 0ustar00ripleystaff\name{k3.linear} \alias{k3.linear} \title{ Linear Skewness Estimate } \description{ Estimates the skewness of a statistic from its empirical influence values. } \usage{ k3.linear(L, strata = NULL) } \arguments{ \item{L}{ Vector of the empirical influence values of a statistic. These will usually be calculated by a call to \code{empinf}. } \item{strata}{ A numeric vector or factor specifying which observations (and hence which components of \code{L}) come from which strata. }} \value{ The skewness estimate calculated from \code{L}. } \references{ Davison, A. C. and Hinkley, D. V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \seealso{ \code{\link{empinf}}, \code{\link{linear.approx}}, \code{\link{var.linear}} } \examples{ # To estimate the skewness of the ratio of means for the city data. ratio <- function(d, w) sum(d$x * w)/sum(d$u * w) k3.linear(empinf(data = city, statistic = ratio)) } \keyword{nonparametric} % Converted by Sd2Rd version 1.15. boot/man/linear.approx.Rd0000644000076600000240000001005211566474072015107 0ustar00ripleystaff\name{linear.approx} \alias{linear.approx} \title{ Linear Approximation of Bootstrap Replicates } \description{ This function takes a bootstrap object and for each bootstrap replicate it calculates the linear approximation to the statistic of interest for that bootstrap sample. } \usage{ linear.approx(boot.out, L = NULL, index = 1, type = NULL, t0 = NULL, t = NULL, \dots) } \arguments{ \item{boot.out}{ An object of class \code{"boot"} representing a nonparametric bootstrap. 
It will usually be created by the function \code{boot}. } \item{L}{ A vector containing the empirical influence values for the statistic of interest. If it is not supplied then \code{L} is calculated through a call to \code{empinf}. } \item{index}{ The index of the variable of interest within the output of \code{boot.out$statistic}. } \item{type}{ This gives the type of empirical influence values to be calculated. It is not used if \code{L} is supplied. The possible types of empirical influence values are described in the help for \code{\link{empinf}}. } \item{t0}{ The observed value of the statistic of interest. The input value is used only if one of \code{t} or \code{L} is also supplied. The default value is \code{boot.out$t0[index]}. If \code{t0} is supplied but neither \code{t} nor \code{L} are supplied then \code{t0} is set to \code{boot.out$t0[index]} and a warning is generated. } \item{t}{ A vector of bootstrap replicates of the statistic of interest. If \code{t0} is missing then \code{t} is not used, otherwise it is used to calculate the empirical influence values (if they are not supplied in \code{L}). } \item{...}{ Any extra arguments required by \code{boot.out$statistic}. These are needed if \code{L} is not supplied as they are used by \code{empinf} to calculate empirical influence values. }} \value{ A vector of length \code{boot.out$R} with the linear approximations to the statistic of interest for each of the bootstrap samples. } \details{ The linear approximation to a bootstrap replicate with frequency vector \code{f} is given by \code{t0 + sum(L * f)/n} in the one sample with an easy extension to the stratified case. The frequencies are found by calling \code{boot.array}. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \seealso{ \code{\link{boot}}, \code{\link{empinf}}, \code{\link{control}} } \examples{ # Using the city data let us look at the linear approximation to the # ratio statistic and its logarithm. We compare these with the # corresponding plots for the bigcity data ratio <- function(d, w) sum(d$x * w)/sum(d$u * w) city.boot <- boot(city, ratio, R = 499, stype = "w") bigcity.boot <- boot(bigcity, ratio, R = 499, stype = "w") op <- par(pty = "s", mfrow = c(2, 2)) # The first plot is for the city data ratio statistic. city.lin1 <- linear.approx(city.boot) lim <- range(c(city.boot$t,city.lin1)) plot(city.boot$t, city.lin1, xlim = lim, ylim = lim, main = "Ratio; n=10", xlab = "t*", ylab = "tL*") abline(0, 1) # Now for the log of the ratio statistic for the city data. city.lin2 <- linear.approx(city.boot,t0 = log(city.boot$t0), t = log(city.boot$t)) lim <- range(c(log(city.boot$t),city.lin2)) plot(log(city.boot$t), city.lin2, xlim = lim, ylim = lim, main = "Log(Ratio); n=10", xlab = "t*", ylab = "tL*") abline(0, 1) # The ratio statistic for the bigcity data. bigcity.lin1 <- linear.approx(bigcity.boot) lim <- range(c(bigcity.boot$t,bigcity.lin1)) plot(bigcity.lin1, bigcity.boot$t, xlim = lim, ylim = lim, main = "Ratio; n=49", xlab = "t*", ylab = "tL*") abline(0, 1) # Finally the log of the ratio statistic for the bigcity data. bigcity.lin2 <- linear.approx(bigcity.boot,t0 = log(bigcity.boot$t0), t = log(bigcity.boot$t)) lim <- range(c(log(bigcity.boot$t),bigcity.lin2)) plot(bigcity.lin2, log(bigcity.boot$t), xlim = lim, ylim = lim, main = "Log(Ratio); n=49", xlab = "t*", ylab = "tL*") abline(0, 1) par(op) } \keyword{nonparametric} % Converted by Sd2Rd version 1.15. 
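As a hedged complement to the help page above (not taken from the package itself), the sketch below checks the stated formula \code{t0 + sum(L * f)/n} directly against \code{linear.approx()} for the unstratified city ratio statistic; the two computations should agree.

library(boot)
ratio <- function(d, w) sum(d$x * w)/sum(d$u * w)
city.boot <- boot(city, ratio, R = 99, stype = "w")
L <- empinf(city.boot)            # empirical influence values
f <- boot.array(city.boot)        # R x n matrix of bootstrap frequencies
by.hand <- city.boot$t0 + colSums(t(f) * L)/nrow(city)
all.equal(as.vector(by.hand), linear.approx(city.boot, L = L))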
boot/man/lines.saddle.distn.Rd0000644000076600000240000000561411566474117016022 0ustar00ripleystaff\name{lines.saddle.distn} \alias{lines.saddle.distn} \title{ Add a Saddlepoint Approximation to a Plot } \description{ This function adds a line corresponding to a saddlepoint density or distribution function approximation to the current plot. } \usage{ \method{lines}{saddle.distn}(x, dens = TRUE, h = function(u) u, J = function(u) 1, npts = 50, lty = 1, \dots) } \arguments{ \item{x}{ An object of class \code{"saddle.distn"} (see \code{\link{saddle.distn.object}} representing a saddlepoint approximation to a distribution. } \item{dens}{ A logical variable indicating whether the saddlepoint density (\code{TRUE}; the default) or the saddlepoint distribution function (\code{FALSE}) should be plotted. } \item{h}{ Any transformation of the variable that is required. Its first argument must be the value at which the approximation is being performed and the function must be vectorized. } \item{J}{ When \code{dens=TRUE} this function specifies the Jacobian for any transformation that may be necessary. The first argument of \code{J} must the value at which the approximation is being performed and the function must be vectorized. If \code{h} is supplied \code{J} must also be supplied and both must have the same argument list. } \item{npts}{ The number of points to be used for the plot. These points will be evenly spaced over the range of points used in finding the saddlepoint approximation. } \item{lty}{ The line type to be used. } \item{\dots}{ Any additional arguments to \code{h} and \code{J}. } } \value{ \code{sad.d} is returned invisibly. } \section{Side Effects}{ A line is added to the current plot. } \details{ The function uses \code{smooth.spline} to produce the saddlepoint curve. When \code{dens=TRUE} the spline is on the log scale and when \code{dens=FALSE} it is on the probit scale. } \seealso{ \code{\link{saddle.distn}} } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \examples{ # In this example we show how a plot such as that in Figure 9.9 of # Davison and Hinkley (1997) may be produced. Note the large number of # bootstrap replicates required in this example. expdata <- rexp(12) vfun <- function(d, i) { n <- length(d) (n-1)/n*var(d[i]) } exp.boot <- boot(expdata,vfun, R = 9999) exp.L <- (expdata - mean(expdata))^2 - exp.boot$t0 exp.tL <- linear.approx(exp.boot, L = exp.L) hist(exp.tL, nclass = 50, probability = TRUE) exp.t0 <- c(0, sqrt(var(exp.boot$t))) exp.sp <- saddle.distn(A = exp.L/12,wdist = "m", t0 = exp.t0) # The saddlepoint approximation in this case is to the density of # t-t0 and so t0 must be added for the plot. lines(exp.sp, h = function(u, t0) u+t0, J = function(u, t0) 1, t0 = exp.boot$t0) } \keyword{aplot} \keyword{smooth} \keyword{nonparametric} boot/man/logit.Rd0000644000076600000240000000140411565746140013441 0ustar00ripleystaff\name{logit} \alias{logit} \title{ Logit of Proportions } \description{ This function calculates the logit of proportions. } \usage{ logit(p) } \arguments{ \item{p}{ A numeric Splus object, all of whose values are in the range [0,1]. Missing values (\code{NA}s) are allowed. }} \value{ A numeric object of the same type as \code{p} containing the logits of the input values. } \details{ If any elements of \code{p} are outside the unit interval then an error message is generated. 
Values of \code{p} equal to 0 or 1 (to within machine precision) will return \code{-Inf} or \code{Inf} respectively. Any \code{NA}s in the input will also be \code{NA}s in the output. } \seealso{ \code{\link{inv.logit}}, \code{\link{qlogis}} for which this is a wrapper. } \keyword{math} boot/man/manaus.Rd0000644000076600000240000000377311110552530013602 0ustar00ripleystaff\name{manaus} \alias{manaus} \title{ Average Heights of the Rio Negro river at Manaus } \description{ The \code{manaus} time series is of class \code{"ts"} and has 1080 observations on one variable. The data values are monthly averages of the daily stages (heights) of the Rio Negro at Manaus. Manaus is 18km upstream from the confluence of the Rio Negro with the Amazon but because of the tiny slope of the water surface and the lower courses of its flatland affluents, they may be regarded as a good approximation of the water level in the Amazon at the confluence. The data here cover 90 years from January 1903 until December 1992. The Manaus gauge is tied in with an arbitrary bench mark of 100m set in the steps of the Municipal Prefecture; gauge readings are usually referred to sea level, on the basis of a mark on the steps leading to the Parish Church (Matriz), which is assumed to lie at an altitude of 35.874 m according to observations made many years ago under the direction of Samuel Pereira, an engineer in charge of the Manaus Sanitation Committee Whereas such an altitude cannot, by any means, be considered to be a precise datum point, observations have been provisionally referred to it. The measurements are in metres. } \source{ The data were kindly made available by Professors H. O'Reilly Sternberg and D. R. Brillinger of the University of California at Berkeley. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Sternberg, H. O'R. (1987) Aggravation of floods in the Amazon river as a consequence of deforestation? \emph{Geografiska Annaler}, \bold{69A}, 201-219. Sternberg, H. O'R. (1995) Waters and wetlands of Brazilian Amazonia: An uncertain future. In \emph{The Fragile Tropics of Latin America: Sustainable Management of Changing Environments}, Nishizawa, T. and Uitto, J.I. (editors), United Nations University Press, 113-179. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/melanoma.Rd0000644000076600000240000000400111110552530014070 0ustar00ripleystaff\name{melanoma} \alias{melanoma} \title{ Survival from Malignant Melanoma } \description{ The \code{melanoma} data frame has 205 rows and 7 columns. The data consist of measurements made on patients with malignant melanoma. Each patient had their tumour removed by surgery at the Department of Plastic Surgery, University Hospital of Odense, Denmark during the period 1962 to 1977. The surgery consisted of complete removal of the tumour together with about 2.5cm of the surrounding skin. Among the measurements taken were the thickness of the tumour and whether it was ulcerated or not. These are thought to be important prognostic variables in that patients with a thick and/or ulcerated tumour have an increased chance of death from melanoma. Patients were followed until the end of 1977. } \usage{ melanoma } \format{ This data frame contains the following columns: \describe{ \item{\code{time}}{ Survival time in days since the operation, possibly censored. } \item{\code{status}}{ The patients status at the end of the study. 
1 indicates that they had died from melanoma, 2 indicates that they were still alive and 3 indicates that they had died from causes unrelated to their melanoma. } \item{\code{sex}}{ The patients sex; 1=male, 0=female. } \item{\code{age}}{ Age in years at the time of the operation. } \item{\code{year}}{ Year of operation. } \item{\code{thickness}}{ Tumour thickness in mm. } \item{\code{ulcer}}{ Indicator of ulceration; 1=present, 0=absent. }}} \note{ This dataset is not related to the dataset in the \pkg{lattice} package with the same name. } \source{ The data were obtained from Andersen, P.K., Borgan, O., Gill, R.D. and Keiding, N. (1993) \emph{Statistical Models Based on Counting Processes}. Springer-Verlag. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Venables, W.N. and Ripley, B.D. (1994) \emph{Modern Applied Statistics with S-Plus}. Springer-Verlag. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/motor.Rd0000644000076600000240000000260511247245074013464 0ustar00ripleystaff\name{motor} \alias{motor} \title{ Data from a Simulated Motorcycle Accident } \description{ The \code{motor} data frame has 94 rows and 4 columns. The rows are obtained by removing replicate values of \code{time} from the dataset \code{\link{mcycle}}. Two extra columns are added to allow for strata with a different residual variance in each stratum. } \usage{ motor } \format{ This data frame contains the following columns: \describe{ \item{\code{times}}{ The time in milliseconds since impact. } \item{\code{accel}}{ The recorded head acceleration (in g). } \item{\code{strata}}{ A numeric column indicating to which of the three strata (numbered 1, 2 and 3) the observations belong. } \item{\code{v}}{ An estimate of the residual variance for the observation. \code{v} is constant within the strata but a different estimate is used for each of the three strata. }}} \source{ The data were obtained from Silverman, B.W. (1985) Some aspects of the spline smoothing approach to non-parametric curve fitting. \emph{Journal of the Royal Statistical Society, B}, \bold{47}, 1--52. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Venables, W.N. and Ripley, B.D. (1994) \emph{Modern Applied Statistics with S-Plus}. Springer-Verlag. } \seealso{ \code{\link{mcycle}} } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/neuro.Rd0000644000076600000240000000202411110552530013432 0ustar00ripleystaff\name{neuro} \alias{neuro} \title{ Neurophysiological Point Process Data } \description{ \code{neuro} is a matrix containing times of observed firing of a neuron in windows of 250ms either side of the application of a stimulus to a human subject. Each row of the matrix is a replication of the experiment and there were a total of 469 replicates. } \note{ There are a lot of missing values in the matrix as different numbers of firings were observed in different replicates. The number of firings observed varied from 2 to 6. } \source{ The data were collected and kindly made available by Dr. S.J. Boniface of the Neurophysiology Unit at the Radcliffe Infirmary, Oxford. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Ventura, V., Davison, A.C. and Boniface, S.J. (1997) A stochastic model for the effect of magnetic brain stimulation on a motorneurone. 
To appear in \emph{Applied Statistics}. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/nitrofen.Rd0000644000076600000240000000352611226131035014137 0ustar00ripleystaff\name{nitrofen} \alias{nitrofen} \title{ Toxicity of Nitrofen in Aquatic Systems } \description{ The \code{nitrofen} data frame has 50 rows and 5 columns. Nitrofen is a herbicide that was used extensively for the control of broad-leaved and grass weeds in cereals and rice. Although it is relatively non-toxic to adult mammals, nitrofen is a significant teratogen and mutagen. It is also acutely toxic and reproductively toxic to cladoceran zooplankton. Nitrofen is no longer in commercial use in the U.S., having been the first pesticide to be withdrawn due to teratogenic effects. The data here come from an experiment to measure the reproductive toxicity of nitrofen on a species of zooplankton (\emph{Ceriodaphnia dubia}). 50 animals were randomized into batches of 10 and each batch was put in a solution with a measured concentration of nitrofen. Then the number of live offspring in each of the three broods of each animal was recorded. } \usage{ nitrofen } \format{ This data frame contains the following columns: \describe{ \item{\code{conc}}{ The nitrofen concentration in the solution (µg/litre). } \item{\code{brood1}}{ The number of live offspring in the first brood. } \item{\code{brood2}}{ The number of live offspring in the second brood. } \item{\code{brood3}}{ The number of live offspring in the third brood. } \item{\code{total}}{ The total number of live offspring in the first three broods. }}} \source{ The data were obtained from Bailer, A.J. and Oris, J.T. (1994) Assessing toxicity of pollutants in aquatic systems. In \emph{Case Studies in Biometry}. N. Lange, L. Ryan, L. Billard, D. Brillinger, L. Conquest and J. Greenhouse (editors), 25--40. John Wiley. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/nodal.Rd0000644000076600000240000000423011110552530013400 0ustar00ripleystaff\name{nodal} \alias{nodal} \title{ Nodal Involvement in Prostate Cancer } \description{ The \code{nodal} data frame has 53 rows and 7 columns. The treatment strategy for a patient diagnosed with cancer of the prostate depends highly on whether the cancer has spread to the surrounding lymph nodes. It is common to operate on the patient to get samples from the nodes which can then be analysed under a microscope but clearly it would be preferable if an accurate assessment of nodal involvement could be made without surgery. For a sample of 53 prostate cancer patients, a number of possible predictor variables were measured before surgery. The patients then had surgery to determine nodal involvement. It was required to see if nodal involvement could be accurately predicted from the predictor variables and which ones were most important. } \usage{ nodal } \format{ This data frame contains the following columns: \describe{ \item{\code{m}}{ A column of ones. } \item{\code{r}}{ An indicator of nodal involvement. } \item{\code{aged}}{ The patient's age dichotomized into less than 60 (\code{0}) and 60 or over (\code{1}). } \item{\code{stage}}{ A measurement of the size and position of the tumour observed by palpation with the fingers via the rectum. A value of \code{1} indicates a more serious case of the cancer.
} \item{\code{grade}}{ Another indicator of the seriousness of the cancer, this one is determined by a pathology reading of a biopsy taken by needle before surgery. A value of \code{1} indicates a more serious case of the cancer. } \item{\code{xray}}{ A third measure of the seriousness of the cancer taken from an X-ray reading. A value of \code{1} indicates a more serious case of the cancer. } \item{\code{acid}}{ The level of acid phosphatase in the blood serum. }}} \source{ The data were obtained from Brown, B.W. (1980) Prediction analysis for binary data. In \emph{Biostatistics Casebook}. R.G. Miller, B. Efron, B.W. Brown and L.E. Moses (editors), 3--18. John Wiley. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/norm.ci.Rd0000644000076600000240000001050211566470733013672 0ustar00ripleystaff\name{norm.ci} \alias{norm.ci} \title{ Normal Approximation Confidence Intervals } \description{ Using the normal approximation to a statistic, calculate equi-tailed two-sided confidence intervals. } \usage{ norm.ci(boot.out = NULL, conf = 0.95, index = 1, var.t0 = NULL, t0 = NULL, t = NULL, L = NULL, h = function(t) t, hdot = function(t) 1, hinv = function(t) t) } \arguments{ \item{boot.out}{ A bootstrap output object returned from a call to \code{boot}. If \code{t0} is missing then \code{boot.out} is a required argument. It is also required if both \code{var.t0} and \code{t} are missing. } \item{conf}{ A scalar or vector containing the confidence level(s) of the required interval(s). } \item{index}{ The index of the statistic of interest within the output of a call to \code{boot.out$statistic}. It is not used if \code{boot.out} is missing, in which case \code{t0} must be supplied. } \item{var.t0}{ The variance of the statistic of interest. If it is not supplied then \code{var(t)} is used. } \item{t0}{ The observed value of the statistic of interest. If it is missing then it is taken from \code{boot.out} which is required in that case. } \item{t}{ Bootstrap replicates of the variable of interest. These are used to estimate the variance of the statistic of interest if \code{var.t0} is not supplied. The default value is \code{boot.out$t[,index]}. } \item{L}{ The empirical influence values for the statistic of interest. These are used to calculate \code{var.t0} if neither \code{var.t0} nor \code{boot.out} are supplied. If a transformation is supplied through \code{h} then the influence values must be for the untransformed statistic \code{t0}. } \item{h}{ A function defining a monotonic transformation, the intervals are calculated on the scale of \code{h(t)} and the inverse function \code{hinv} is applied to the resulting intervals. \code{h} must be a function of one variable only and must be vectorized. The default is the identity function. } \item{hdot}{ A function of one argument returning the derivative of \code{h}. It is a required argument if \code{h} is supplied and is used for approximating the variance of \code{h(t0)}. The default is the constant function 1. } \item{hinv}{ A function, like \code{h}, which returns the inverse of \code{h}. It is used to transform the intervals calculated on the scale of \code{h(t)} back to the original scale. The default is the identity function. If \code{h} is supplied but \code{hinv} is not, then the intervals returned will be on the transformed scale. 
} } \value{ If \code{length(conf)} is 1 then a vector containing the confidence level and the endpoints of the interval is returned. Otherwise, the returned value is a matrix where each row corresponds to a different confidence level. } \details{ It is assumed that the statistic of interest has an approximately normal distribution with variance \code{var.t0} and so a confidence interval of length \code{2*qnorm((1+conf)/2)*sqrt(var.t0)} is found. If \code{boot.out} or \code{t} are supplied then the interval is bias-corrected using the bootstrap bias estimate, and so the interval would be centred at \code{2*t0-mean(t)}. Otherwise the interval is centred at \code{t0}. } \note{ This function is primarily designed to be called by \code{boot.ci} to calculate the normal approximation after a bootstrap but it can also be used without doing any bootstrap calculations as long as \code{t0} and \code{var.t0} can be supplied. See the examples below. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \seealso{ \code{\link{boot.ci}} } \examples{ # In Example 5.1 of Davison and Hinkley (1997), normal approximation # confidence intervals are found for the air-conditioning data. air.mean <- mean(aircondit$hours) air.n <- nrow(aircondit) air.v <- air.mean^2/air.n norm.ci(t0 = air.mean, var.t0 = air.v) exp(norm.ci(t0 = log(air.mean), var.t0 = 1/air.n)[2:3]) # Now a more complicated example - the ratio estimate for the city data. ratio <- function(d, w) sum(d$x * w)/sum(d$u *w) city.v <- var.linear(empinf(data = city, statistic = ratio)) norm.ci(t0 = ratio(city,rep(0.1,10)), var.t0 = city.v) } \keyword{htest} % Converted by Sd2Rd version 1.15. boot/man/nuclear.Rd0000644000076600000240000000420511110552530013736 0ustar00ripleystaff\name{nuclear} \alias{nuclear} \title{ Nuclear Power Station Construction Data } \description{ The \code{nuclear} data frame has 32 rows and 11 columns. The data relate to the construction of 32 light water reactor (LWR) plants constructed in the U.S.A in the late 1960's and early 1970's. The data was collected with the aim of predicting the cost of construction of further LWR plants. 6 of the power plants had partial turnkey guarantees and it is possible that, for these plants, some manufacturers' subsidies may be hidden in the quoted capital costs. } \usage{ nuclear } \format{ This data frame contains the following columns: \describe{ \item{\code{cost}}{ The capital cost of construction in millions of dollars adjusted to 1976 base. } \item{\code{date}}{ The date on which the construction permit was issued. The data are measured in years since January 1 1990 to the nearest month. } \item{\code{t1}}{ The time between application for and issue of the construction permit. } \item{\code{t2}}{ The time between issue of operating license and construction permit. } \item{\code{cap}}{ The net capacity of the power plant (MWe). } \item{\code{pr}}{ A binary variable where \code{1} indicates the prior existence of a LWR plant at the same site. } \item{\code{ne}}{ A binary variable where \code{1} indicates that the plant was constructed in the north-east region of the U.S.A. } \item{\code{ct}}{ A binary variable where \code{1} indicates the use of a cooling tower in the plant. } \item{\code{bw}}{ A binary variable where \code{1} indicates that the nuclear steam supply system was manufactured by Babcock-Wilcox. } \item{\code{cum.n}}{ The cumulative number of power plants constructed by each architect-engineer. 
} \item{\code{pt}}{ A binary variable where \code{1} indicates those plants with partial turnkey guarantees. }}} \source{ The data were obtained from Cox, D.R. and Snell, E.J. (1981) \emph{Applied Statistics: Principles and Examples}. Chapman and Hall. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/paulsen.Rd0000644000076600000240000000273611110552530013763 0ustar00ripleystaff\name{paulsen} \alias{paulsen} \title{ Neurotransmission in Guinea Pig Brains } \description{ The \code{paulsen} data frame has 346 rows and 1 columns. Sections were prepared from the brain of adult guinea pigs. Spontaneous currents that flowed into individual brain cells were then recorded and the peak amplitude of each current measured. The aim of the experiment was to see if the current flow was quantal in nature (i.e. that it is not a single burst but instead is built up of many smaller bursts of current). If the current was indeed quantal then it would be expected that the distribution of the current amplitude would be multimodal with modes at regular intervals. The modes would be expected to decrease in magnitude for higher current amplitudes. } \usage{ paulsen } \format{ This data frame contains the following column: \describe{ \item{\code{y}}{ The current flowing into individual brain cells. The currents are measured in pico-amperes. }}} \source{ The data were kindly made available by Dr. O. Paulsen from the Department of Pharmacology at the University of Oxford. Paulsen, O. and Heggelund, P. (1994) The quantal size at retinogeniculate synapses determined from spontaneous and evoked EPSCs in guinea-pig thalamic slices. \emph{Journal of Physiology}, \bold{480}, 505--511. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/plot.boot.Rd0000644000076600000240000001304011573142613014234 0ustar00ripleystaff\name{plot.boot} \alias{plot.boot} \title{ Plots of the Output of a Bootstrap Simulation } \description{ This takes a bootstrap object and produces plots for the bootstrap replicates of the variable of interest. } \usage{ \method{plot}{boot}(x, index = 1, t0 = NULL, t = NULL, jack = FALSE, qdist = "norm", nclass = NULL, df, \dots) } \arguments{ \item{x}{ An object of class \code{"boot"} returned from one of the bootstrap generation functions. } \item{index}{ The index of the variable of interest within the output of \code{boot.out}. This is ignored if \code{t} and \code{t0} are supplied. } \item{t0}{ The original value of the statistic. This defaults to \code{boot.out$t0[index]} unless \code{t} is supplied when it defaults to \code{NULL}. In that case no vertical line is drawn on the histogram. } \item{t}{ The bootstrap replicates of the statistic. Usually this will take on its default value of \code{boot.out$t[,index]}, however it may be useful sometimes to supply a different set of values which are a function of \code{boot.out$t}. } \item{jack}{ A logical value indicating whether a jackknife-after-bootstrap plot is required. The default is not to produce such a plot. } \item{qdist}{ The distribution against which the Q-Q plot should be drawn. At present \code{"norm"} (normal distribution - the default) and \code{"chisq"} (chi-squared distribution) are the only possible values. 
} \item{nclass}{ An integer giving the number of classes to be used in the bootstrap histogram. The default is the integer between 10 and 100 closest to \code{ceiling(length(t)/25)}. } \item{df}{ If \code{qdist} is \code{"chisq"} then this is the degrees of freedom for the chi-squared distribution to be used. It is a required argument in that case. } \item{...}{ When \code{jack} is \code{TRUE} additional parameters to \code{jack.after.boot} can be supplied. See the help file for \code{jack.after.boot} for details of the possible parameters. } } \value{ \code{boot.out} is returned invisibly. } \section{Side Effects}{ All screens are closed and cleared and a number of plots are produced on the current graphics device. Screens are closed but not cleared at termination of this function. } \details{ This function will generally produce two side-by-side plots. The left plot will be a histogram of the bootstrap replicates. Usually the breaks of the histogram will be chosen so that \code{t0} is at a breakpoint and all intervals are of equal length. A vertical dotted line indicates the position of \code{t0}. This cannot be done if \code{t} is supplied but \code{t0} is not and so, in that case, the breakpoints are computed by \code{hist} using the \code{nclass} argument and no vertical line is drawn. The second plot is a Q-Q plot of the bootstrap replicates. The order statistics of the replicates can be plotted against normal or chi-squared quantiles. In either case the expected line is also plotted. For the normal, this will have intercept \code{mean(t)} and slope \code{sqrt(var(t))} while for the chi-squared it has intercept 0 and slope 1. If \code{jack} is \code{TRUE} a third plot is produced beneath these two. That plot is the jackknife-after-bootstrap plot. This plot may only be requested when nonparametric simulation has been used. See \code{jack.after.boot} for further details of this plot. } \seealso{ \code{\link{boot}}, \code{\link{jack.after.boot}}, \code{\link{print.boot}} } \examples{ # We fit an exponential model to the air-conditioning data and use # that for a parametric bootstrap. Then we look at plots of the # resampled means. air.rg <- function(data, mle) rexp(length(data), 1/mle) air.boot <- boot(aircondit$hours, mean, R = 999, sim = "parametric", ran.gen = air.rg, mle = mean(aircondit$hours)) plot(air.boot) # In the difference of means example for the last two series of the # gravity data grav1 <- gravity[as.numeric(gravity[, 2]) >= 7, ] grav.fun <- function(dat, w) { strata <- tapply(dat[, 2], as.numeric(dat[, 2])) d <- dat[, 1] ns <- tabulate(strata) w <- w/tapply(w, strata, sum)[strata] mns <- as.vector(tapply(d * w, strata, sum)) # drop names mn2 <- tapply(d * d * w, strata, sum) s2hat <- sum((mn2 - mns^2)/ns) c(mns[2] - mns[1], s2hat) } grav.boot <- boot(grav1, grav.fun, R = 499, stype = "w", strata = grav1[, 2]) plot(grav.boot) # now suppose we want to look at the studentized differences. grav.z <- (grav.boot$t[, 1]-grav.boot$t0[1])/sqrt(grav.boot$t[, 2]) plot(grav.boot, t = grav.z, t0 = 0) # In this example we look at the one of the partial correlations for the # head dimensions in the dataset frets. frets.fun <- function(data, i) { pcorr <- function(x) { # Function to find the correlations and partial correlations between # the four measurements. 
v <- cor(x) v.d <- diag(var(x)) iv <- solve(v) iv.d <- sqrt(diag(iv)) iv <- - diag(1/iv.d) \%*\% iv \%*\% diag(1/iv.d) q <- NULL n <- nrow(v) for (i in 1:(n-1)) q <- rbind( q, c(v[i, 1:i], iv[i,(i+1):n]) ) q <- rbind( q, v[n, ] ) diag(q) <- round(diag(q)) q } d <- data[i, ] v <- pcorr(d) c(v[1,], v[2,], v[3,], v[4,]) } frets.boot <- boot(log(as.matrix(frets)), frets.fun, R = 999) plot(frets.boot, index = 7, jack = TRUE, stinf = FALSE, useJ = FALSE) } \keyword{hplot} \keyword{nonparametric} boot/man/poisons.Rd0000644000076600000240000000214111110552530013774 0ustar00ripleystaff\name{poisons} \alias{poisons} \title{ Animal Survival Times } \description{ The \code{poisons} data frame has 48 rows and 3 columns. The data form a 3x4 factorial experiment, the factors being three poisons and four treatments. Each combination of the two factors was used for four animals, the allocation to animals having been completely randomized. } \usage{ poisons } \format{ This data frame contains the following columns: \describe{ \item{\code{time}}{ The survival time of the animal in units of 10 hours. } \item{\code{poison}}{ A factor with levels \code{1}, \code{2} and \code{3} giving the type of poison used. } \item{\code{treat}}{ A factor with levels \code{A}, \code{B}, \code{C} and \code{D} giving the treatment. }}} \source{ The data were obtained from Box, G.E.P. and Cox, D.R. (1964) An analysis of transformations (with Discussion). \emph{ Journal of the Royal Statistical Society, B}, \bold{26}, 211--252. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/polar.Rd0000644000076600000240000000162111110552530013421 0ustar00ripleystaff\name{polar} \alias{polar} \title{ Pole Positions of New Caledonian Laterites } \description{ The \code{polar} data frame has 50 rows and 2 columns. The data are the pole positions from a paleomagnetic study of New Caledonian laterites. } \usage{ polar } \format{ This data frame contains the following columns: \describe{ \item{\code{lat}}{ The latitude (in degrees) of the pole position. Note that all latitudes are negative as the axis is taken to be in the lower hemisphere. } \item{\code{long}}{ The longitude (in degrees) of the pole position. }}} \source{ The data were obtained from Fisher, N.I., Lewis, T. and Embleton, B.J.J. (1987) \emph{Statistical Analysis of Spherical Data}. Cambridge University Press. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/print.boot.Rd0000644000076600000240000000320711110552530014404 0ustar00ripleystaff\name{print.boot} \alias{print.boot} \title{ Print a Summary of a Bootstrap Object } \description{ This is a method for the function print for objects of the class \code{"boot"}. } \usage{ \method{print}{boot}(x, digits = getOption("digits"), index = 1:ncol(boot.out$t), \dots) } \arguments{ \item{x}{ A bootstrap output object of class \code{"boot"} generated by one of the bootstrap functions. } \item{digits}{ The number of digits to be printed in the summary statistics. } \item{index}{ Indices indicating for which elements of the bootstrap output summary statistics are required. } \item{\dots}{further arguments passed to or from other methods.} } \value{ The bootstrap object is returned invisibly. 
} \details{ For each statistic calculated in the bootstrap the original value and the bootstrap estimates of its bias and standard error are printed. If \code{boot.out$t0} is missing (such as when it was created by a call to \code{tsboot} with \code{orig.t=FALSE}) the bootstrap mean and standard error are printed. If resampling was done using importance resampling weights, then the bootstrap estimates are reweighted as if uniform resampling had been done. The ratio importance sampling estimates are used and if there were a number of distributions then defensive mixture distributions are used. In this case an extra column with the mean of the observed bootstrap statistics is also printed. } \seealso{ \code{\link{boot}}, \code{\link{censboot}}, \code{\link{imp.moments}}, \code{\link{plot.boot}}, \code{\link{tilt.boot}}, \code{\link{tsboot}} } \keyword{nonparametric} \keyword{htest} boot/man/print.bootci.Rd0000644000076600000240000000276311110552530014726 0ustar00ripleystaff\name{print.bootci} \alias{print.bootci} \title{ Print Bootstrap Confidence Intervals } \description{ This is a method for the function \code{print()} to print objects of the class \code{"bootci"}. } \usage{ \method{print}{bootci}(x, hinv = NULL, ...) } \arguments{ \item{x}{ The output from a call to \code{boot.ci}. } \item{hinv}{ A transformation to be made to the interval end-points before they are printed. } \item{\dots}{further arguments passed to or from other methods.} } \value{ The object \code{ci.out} is returned invisibly. } \details{ This function prints out the results from \code{boot.ci} in a "nice" format. It also notes whether the scale of the intervals is the original scale of the input to \code{boot.ci} or a different scale and whether the calculations were done on a transformed scale. It also looks at the order statistics that were used in calculating the intervals. If the smallest or largest values were used then it prints a message \code{Warning : Intervals used Extreme Quantiles} Such intervals should be considered very unstable and not relied upon for inferences. Even if the extreme values are not used, it is possible that the intervals are unstable if they used quantiles close to the extreme values. The function alerts the user to intervals which use the upper or lower 10 order statistics with the message \code{Some intervals may be unstable} } \seealso{ \code{\link{boot.ci}} } \keyword{print} \keyword{htest} % Converted by Sd2Rd version 0.3-1. boot/man/print.saddle.distn.Rd0000644000076600000240000000147211110552530016017 0ustar00ripleystaff\name{print.saddle.distn} \alias{print.saddle.distn} \title{ Print Quantiles of Saddlepoint Approximations } \description{ This is a method for the function \code{print()} to print objects of class \code{"saddle.distn"}. } \usage{ \method{print}{saddle.distn}(x, \dots) } \arguments{ \item{x}{ An object of class \code{"saddle.distn"} created by a call to \code{saddle.distn}. } \item{\dots}{further arguments passed to or from other methods.} } \value{ The input is returned invisibly. } \details{ The quantiles of the saddlepoint approximation to the distribution are printed along with the original call and some other useful information. } \seealso{ \code{\link{lines.saddle.distn}}, \code{\link{saddle.distn}} } \keyword{print} \keyword{smooth} \keyword{nonparametric} % Converted by Sd2Rd version 0.3-1. 
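A brief illustration (not part of these help pages) of the print methods just described: \code{print.boot} reports the original value of each statistic together with its bootstrap bias and standard error, and \code{print.bootci} formats the intervals returned by \code{boot.ci}.

library(boot)
ratio <- function(d, w) sum(d$x * w)/sum(d$u * w)
city.boot <- boot(city, ratio, R = 999, stype = "w")
print(city.boot, digits = 4)                   # method for class "boot"
city.ci <- boot.ci(city.boot, type = c("norm", "basic", "perc"))
print(city.ci)                                 # method for class "bootci"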
boot/man/print.simplex.Rd0000644000076600000240000000207511110552530015124 0ustar00ripleystaff\name{print.simplex} \alias{print.simplex} \title{ Print Solution to Linear Programming Problem } \description{ This is a method for the function \code{print()} to print objects of class \code{"simplex"}. } \usage{ \method{print}{simplex}(x, \dots) } \arguments{ \item{x}{ An object of class \code{"simplex"} created by calling the function \code{simplex} to solve a linear programming problem. } \item{\dots}{further arguments passed to or from other methods.} } \value{ \code{x} is returned silently. } \details{ The coefficients of the objective function are printed. If a solution to the linear programming problem was found then the solution and the optimal value of the objective function are printed. If a feasible solution was found but the maximum number of iterations was exceeded then the last feasible solution and the objective function value at that point are printed. If no feasible solution could be found then a message stating that is printed. } \seealso{ \code{\link{simplex}} } \keyword{print} \keyword{optimize} % Converted by Sd2Rd version 0.3-1. boot/man/remission.Rd0000644000076600000240000000140611110552530014315 0ustar00ripleystaff\name{remission} \alias{remission} \title{ Cancer Remission and Cell Activity } \description{ The \code{remission} data frame has 27 rows and 3 columns. } \usage{ remission } \format{ This data frame contains the following columns: \describe{ \item{\code{LI}}{ A measure of cell activity. } \item{\code{m}}{ The number of patients in each group (all values are actually 1 here). } \item{\code{r}}{ The number of patients (out of \code{m}) who went into remission. }}} \source{ The data were obtained from Freeman, D.H. (1987) \emph{Applied Categorical Data Analysis}. Marcel Dekker. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/saddle.Rd0000644000076600000240000001631511566474326013573 0ustar00ripleystaff\name{saddle} \alias{saddle} \title{ Saddlepoint Approximations for Bootstrap Statistics } \description{ This function calculates a saddlepoint approximation to the distribution of a linear combination of \bold{W} at a particular point \code{u}, where \bold{W} is a vector of random variables. The distribution of \bold{W} may be multinomial (default), Poisson or binary. Other distributions are possible also if the adjusted cumulant generating function and its second derivative are given. Conditional saddlepoint approximations to the distribution of one linear combination given the values of other linear combinations of \bold{W} can be calculated for \bold{W} having binary or Poisson distributions. } \usage{ saddle(A = NULL, u = NULL, wdist = "m", type = "simp", d = NULL, d1 = 1, init = rep(0.1, d), mu = rep(0.5, n), LR = FALSE, strata = NULL, K.adj = NULL, K2 = NULL) } \arguments{ \item{A}{ A vector or matrix of known coefficients of the linear combinations of \bold{W}. It is a required argument unless \code{K.adj} and \code{K2} are supplied, in which case it is ignored. } \item{u}{ The value at which it is desired to calculate the saddlepoint approximation to the distribution of the linear combination of \bold{W}. It is a required argument unless \code{K.adj} and \code{K2} are supplied, in which case it is ignored. } \item{wdist}{ The distribution of \bold{W}. 
This can be one of \code{"m"} (multinomial), \code{"p"} (Poisson), \code{"b"} (binary) or \code{"o"} (other). If \code{K.adj} and \code{K2} are given \code{wdist} is set to \code{"o"}. } \item{type}{ The type of saddlepoint approximation. Possible types are \code{"simp"} for simple saddlepoint and \code{"cond"} for the conditional saddlepoint. When \code{wdist} is \code{"o"} or \code{"m"}, \code{type} is automatically set to \code{"simp"}, which is the only type of saddlepoint currently implemented for those distributions. } \item{d}{ This specifies the dimension of the whole statistic. This argument is required only when \code{wdist = "o"} and defaults to 1 if not supplied in that case. For other distributions it is set to \code{ncol(A)}. } \item{d1}{ When \code{type} is \code{"cond"} this is the dimension of the statistic of interest which must be less than \code{length(u)}. Then the saddlepoint approximation to the conditional distribution of the first \code{d1} linear combinations given the values of the remaining combinations is found. Conditional distribution function approximations can only be found if the value of \code{d1} is 1. } \item{init}{ Used if \code{wdist} is either \code{"m"} or \code{"o"}, this gives initial values to \code{nlmin} which is used to solve the saddlepoint equation. } \item{mu}{ The values of the parameters of the distribution of \bold{W} when \code{wdist} is \code{"m"}, \code{"p"} \code{"b"}. \code{mu} must be of the same length as W (i.e. \code{nrow(A)}). The default is that all values of \code{mu} are equal and so the elements of \bold{W} are identically distributed. } \item{LR}{ If \code{TRUE} then the Lugananni-Rice approximation to the cdf is used, otherwise the approximation used is based on Barndorff-Nielsen's r*. } \item{strata}{ The strata for stratified data. } \item{K.adj}{ The adjusted cumulant generating function used when \code{wdist} is \code{"o"}. This is a function of a single parameter, \code{zeta}, which calculates \code{K(zeta)-u\%*\%zeta}, where \code{K(zeta)} is the cumulant generating function of \bold{W}. } \item{K2}{ This is a function of a single parameter \code{zeta} which returns the matrix of second derivatives of \code{K(zeta)} for use when \code{wdist} is \code{"o"}. If \code{K.adj} is given then this must be given also. It is called only once with the calculated solution to the saddlepoint equation being passed as the argument. This argument is ignored if \code{K.adj} is not supplied. } } \value{ A list consisting of the following components \item{spa}{ The saddlepoint approximations. The first value is the density approximation and the second value is the distribution function approximation. } \item{zeta.hat}{ The solution to the saddlepoint equation. For the conditional saddlepoint this is the solution to the saddlepoint equation for the numerator. } \item{zeta2.hat}{ If \code{type} is \code{"cond"} this is the solution to the saddlepoint equation for the denominator. This component is not returned for any other value of \code{type}. } } \details{ If \code{wdist} is \code{"o"} or \code{"m"}, the saddlepoint equations are solved using \code{nlmin} to minimize \code{K.adj} with respect to its parameter \code{zeta}. For the Poisson and binary cases, a generalized linear model is fitted such that the parameter estimates solve the saddlepoint equations. The response variable 'y' for the \code{glm} must satisfy the equation \code{t(A)\%*\%y = u} (\code{t()} being the transpose function). 
Such a vector can be found as a feasible solution to a linear programming problem. This is done by a call to \code{simplex}. The covariate matrix for the \code{glm} is given by \code{A}. } \references{ Booth, J.G. and Butler, R.W. (1990) Randomization distributions and saddlepoint approximations in generalized linear models. \emph{Biometrika}, \bold{77}, 787--796. Canty, A.J. and Davison, A.C. (1997) Implementation of saddlepoint approximations to resampling distributions. \emph{Computing Science and Statistics; Proceedings of the 28th Symposium on the Interface}, 248--253. Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and their Application}. Cambridge University Press. Jensen, J.L. (1995) \emph{Saddlepoint Approximations}. Oxford University Press. } \seealso{ \code{\link{saddle.distn}}, \code{\link{simplex}} } \examples{ # To evaluate the bootstrap distribution of the mean failure time of # air-conditioning equipment at 80 hours saddle(A = aircondit$hours/12, u = 80) # Alternatively this can be done using a conditional poisson saddle(A = cbind(aircondit$hours/12,1), u = c(80, 12), wdist = "p", type = "cond") # To use the Lugananni-Rice approximation to this saddle(A = cbind(aircondit$hours/12,1), u = c(80, 12), wdist = "p", type = "cond", LR = TRUE) # Example 9.16 of Davison and Hinkley (1997) calculates saddlepoint # approximations to the distribution of the ratio statistic for the # city data. Since the statistic is not in itself a linear combination # of random Variables, its distribution cannot be found directly. # Instead the statistic is expressed as the solution to a linear # estimating equation and hence its distribution can be found. We # get the saddlepoint approximation to the pdf and cdf evaluated at # t = 1.25 as follows. jacobian <- function(dat,t,zeta) { p <- exp(zeta*(dat$x-t*dat$u)) abs(sum(dat$u*p)/sum(p)) } city.sp1 <- saddle(A = city$x-1.25*city$u, u = 0) city.sp1$spa[1] <- jacobian(city, 1.25, city.sp1$zeta.hat) * city.sp1$spa[1] city.sp1 } \keyword{smooth} \keyword{nonparametric} % Converted by Sd2Rd version 1.15. boot/man/saddle.distn.Rd0000644000076600000240000002216211566470461014704 0ustar00ripleystaff\name{saddle.distn} \alias{saddle.distn} \title{ Saddlepoint Distribution Approximations for Bootstrap Statistics } \description{ Approximate an entire distribution using saddlepoint methods. This function can calculate simple and conditional saddlepoint distribution approximations for a univariate quantity of interest. For the simple saddlepoint the quantity of interest is a linear combination of \bold{W} where \bold{W} is a vector of random variables. For the conditional saddlepoint we require the distribution of one linear combination given the values of any number of other linear combinations. The distribution of \bold{W} must be one of multinomial, Poisson or binary. The primary use of this function is to calculate quantiles of bootstrap distributions using saddlepoint approximations. Such quantiles are required by the function \code{\link{control}} to approximate the distribution of the linear approximation to a statistic. } \usage{ saddle.distn(A, u = NULL, alpha = NULL, wdist = "m", type = "simp", npts = 20, t = NULL, t0 = NULL, init = rep(0.1, d), mu = rep(0.5, n), LR = FALSE, strata = NULL, \dots) } \arguments{ \item{A}{ This is a matrix of known coefficients or a function which returns such a matrix. If a function then its first argument must be the point \code{t} at which a saddlepoint is required. 
The most common reason for A being a function would be if the statistic is not itself a linear combination of the \bold{W} but is the solution to a linear estimating equation. } \item{u}{ If \code{A} is a function then \code{u} must also be a function returning a vector with length equal to the number of columns of the matrix returned by \code{A}. Usually all components other than the first will be constants as the other components are the values of the conditioning variables. If \code{A} is a matrix with more than one column (such as when \code{wdist = "cond"}) then \code{u} should be a vector with length one less than \code{ncol(A)}. In this case \code{u} specifies the values of the conditioning variables. If \code{A} is a matrix with one column or a vector then \code{u} is not used. } \item{alpha}{ The alpha levels for the quantiles of the distribution which should be returned. By default the 0.1, 0.5, 1, 2.5, 5, 10, 20, 50, 80, 90, 95, 97.5, 99, 99.5 and 99.9 percentiles are calculated. } \item{wdist}{ The distribution of \bold{W}. Possible values are \code{"m"} (multinomial), \code{"p"} (Poisson), or \code{"b"} (binary). } \item{type}{ The type of saddlepoint to be used. Possible values are \code{"simp"} (simple saddlepoint) and \code{"cond"} (conditional). If \code{wdist} is \code{"m"}, \code{type} is set to \code{"simp"}. } \item{npts}{ The number of points at which the saddlepoint approximation should be calculated and then used to fit the spline. } \item{t}{ A vector of points at which the saddlepoint approximations are calculated. These points should extend beyond the extreme quantiles required but still be in the possible range of the bootstrap distribution. The observed value of the statistic should not be included in \code{t} as the distribution function approximation breaks down at that point. The points should, however cover the entire effective range of the distribution including close to the centre. If \code{t} is supplied then \code{npts} is set to \code{length(t)}. When \code{t} is not supplied, the function attempts to find the effective range of the distribution and then selects points to cover this range. } \item{t0}{ If \code{t} is not supplied then a vector of length 2 should be passed as \code{t0}. The first component of \code{t0} should be the centre of the distribution and the second should be an estimate of spread (such as a standard error). These two are then used to find the effective range of the distribution. The range finding mechanism does rely on an accurate estimate of location in \code{t0[1]}. } \item{init}{ When \code{wdist} is \code{"m"}, this vector should contain the initial values to be passed to \code{nlmin} when it is called to solve the saddlepoint equations. } \item{mu}{ The vector of parameter values for the distribution. The default is that the components of \bold{W} are identically distributed. } \item{LR}{ A logical flag. When \code{LR} is \code{TRUE} the Lugananni-Rice cdf approximations are calculated and used to fit the spline. Otherwise the cdf approximations used are based on Barndorff-Nielsen's r*. } \item{strata}{ A vector giving the strata when the rows of A relate to stratified data. This is used only when \code{wdist} is \code{"m"}. } \item{\dots}{ When \code{A} and \code{u} are functions any additional arguments are passed unchanged each time one of them is called. } } \value{ The returned value is an object of class \code{"saddle.distn"}. 
See the help file for \code{\link{saddle.distn.object}} for a description of such an object. } \details{ The range at which the saddlepoint is used is such that the cdf approximation at the endpoints is more extreme than required by the extreme values of \code{alpha}. The lower endpoint is found by evaluating the saddlepoint at the points \code{t0[1]-2*t0[2]}, \code{t0[1]-4*t0[2]}, \code{t0[1]-8*t0[2]} etc. until a point is found with a cdf approximation less than \code{min(alpha)/10}, then a bisection method is used to find the endpoint which has cdf approximation in the range (\code{min(alpha)/1000}, \code{min(alpha)/10}). Then a number of, equally spaced, points are chosen between the lower endpoint and \code{t0[1]} until a total of \code{npts/2} approximations have been made. The remaining \code{npts/2} points are chosen to the right of \code{t0[1]} in a similar manner. Any points which are very close to the centre of the distribution are then omitted as the cdf approximations are not reliable at the centre. A smoothing spline is then fitted to the probit of the saddlepoint distribution function approximations at the remaining points and the required quantiles are predicted from the spline. Sometimes the function will terminate with the message \code{"Unable to find range"}. There are two main reasons why this may occur. One is that the distribution is too discrete and/or the required quantiles too extreme, this can cause the function to be unable to find a point within the allowable range which is beyond the extreme quantiles. Another possibility is that the value of \code{t0[2]} is too small and so too many steps are required to find the range. The first problem cannot be solved except by asking for less extreme quantiles, although for very discrete distributions the approximations may not be very good. In the second case using a larger value of \code{t0[2]} will usually solve the problem. } \references{ Booth, J.G. and Butler, R.W. (1990) Randomization distributions and saddlepoint approximations in generalized linear models. \emph{Biometrika}, \bold{77}, 787--796. Canty, A.J. and Davison, A.C. (1997) Implementation of saddlepoint approximations to resampling distributions. \emph{Computing Science and Statistics; Proceedings of the 28th Symposium on the Interface} 248--253. Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and their Application}. Cambridge University Press. Jensen, J.L. (1995) \emph{Saddlepoint Approximations}. Oxford University Press. } \seealso{ \code{\link{lines.saddle.distn}}, \code{\link{saddle}}, \code{\link{saddle.distn.object}}, \code{\link{smooth.spline}} } \examples{ # The bootstrap distribution of the mean of the air-conditioning # failure data: fails to find value on R (and probably on S too) air.t0 <- c(mean(aircondit$hours), sqrt(var(aircondit$hours)/12)) \dontrun{saddle.distn(A = aircondit$hours/12, t0 = air.t0)} # alternatively using the conditional poisson saddle.distn(A = cbind(aircondit$hours/12, 1), u = 12, wdist = "p", type = "cond", t0 = air.t0) # Distribution of the ratio of a sample of size 10 from the bigcity # data, taken from Example 9.16 of Davison and Hinkley (1997). 
ratio <- function(d, w) sum(d$x *w)/sum(d$u * w) city.v <- var.linear(empinf(data = city, statistic = ratio)) bigcity.t0 <- c(mean(bigcity$x)/mean(bigcity$u), sqrt(city.v)) Afn <- function(t, data) cbind(data$x - t*data$u, 1) ufn <- function(t, data) c(0,10) saddle.distn(A = Afn, u = ufn, wdist = "b", type = "cond", t0 = bigcity.t0, data = bigcity) # From Example 9.16 of Davison and Hinkley (1997) again, we find the # conditional distribution of the ratio given the sum of city$u. Afn <- function(t, data) cbind(data$x-t*data$u, data$u, 1) ufn <- function(t, data) c(0, sum(data$u), 10) city.t0 <- c(mean(city$x)/mean(city$u), sqrt(city.v)) saddle.distn(A = Afn, u = ufn, wdist = "p", type = "cond", t0 = city.t0, data = city) } \keyword{nonparametric} \keyword{smooth} \keyword{dplot} % Converted by Sd2Rd version 1.15. boot/man/saddle.distn.object.Rd0000644000076600000240000000431011123744306016133 0ustar00ripleystaff\name{saddle.distn.object} \alias{saddle.distn.object} \title{ Saddlepoint Distribution Approximation Objects } \description{ Class of objects that result from calculating saddlepoint distribution approximations by a call to \code{saddle.distn}. } \section{Generation}{ This class of objects is returned from calls to the function \code{\link{saddle.distn}}. } \section{Methods}{ The class \code{"saddle.distn"} has methods for the functions \code{\link{lines}} and \code{\link{print}}. } \section{Structure}{ Objects of class \code{"saddle.distn"} are implemented as a list with the following components. \describe{ \item{quantiles}{ A matrix with 2 columns. The first column contains the probabilities \code{alpha} and the second column contains the estimated quantiles of the distribution at those probabilities derived from the spline. } \item{points}{ A matrix of evaluations of the saddlepoint approximation. The first column contains the values of \code{t} which were used, the second and third contain the density and cdf approximations at those points and the rest of the columns contain the solutions to the saddlepoint equations. When \code{type} is \code{"simp"}, there is only one of those. When \code{type} is \code{"cond"} there are \code{2*d-1} where \code{d} is the number of columns in \code{A} or the output of \code{A(t,\dots{})}. The first \code{d} of these correspond to the numerator and the remainder correspond to the denominator. } \item{distn}{ An object of class \code{smooth.spline}. This corresponds to the spline fitted to the saddlepoint cdf approximations in points in order to approximate the entire distribution. For the structure of the object see \code{smooth.spline}. } \item{call}{ The original call to \code{saddle.distn} which generated the object. } \item{LR}{ A logical variable indicating whether the Lugananni-Rice approximations were used. } } } \seealso{ \code{\link{lines.saddle.distn}}, \code{\link{saddle.distn}}, \code{\link{print.saddle.distn}} } \keyword{nonparametric} \keyword{methods} \keyword{smooth} boot/man/salinity.Rd0000644000076600000240000000265011110552530014143 0ustar00ripleystaff\name{salinity} \alias{salinity} \title{ Water Salinity and River Discharge } \description{ The \code{salinity} data frame has 28 rows and 4 columns. Biweekly averages of the water salinity and river discharge in Pamlico Sound, North Carolina were recorded between the years 1972 and 1977. The data in this set consists only of those measurements in March, April and May. 
} \usage{ salinity } \format{ This data frame contains the following columns: \describe{ \item{\code{sal}}{ The average salinity of the water over two weeks. } \item{\code{lag}}{ The average salinity of the water lagged two weeks. Since only spring is used, the value of \code{lag} is not always equal to the previous value of \code{sal}. } \item{\code{trend}}{ A factor indicating in which of the 6 biweekly periods between March and May, the observations were taken. The levels of the factor are from 0 to 5 with 0 being the first two weeks in March. } \item{\code{dis}}{ The amount of river discharge during the two weeks for which \code{sal} is the average salinity. }}} \source{ The data were obtained from Ruppert, D. and Carroll, R.J. (1980) Trimmed least squares estimation in the linear model. \emph{Journal of the American Statistical Association}, \bold{75}, 828--838. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/simplex.Rd0000644000076600000240000001054611566412136014007 0ustar00ripleystaff\name{simplex} \alias{simplex} \title{ Simplex Method for Linear Programming Problems } \description{ This function will optimize the linear function \code{a\%*\%x} subject to the constraints \code{A1\%*\%x <= b1}, \code{A2\%*\%x >= b2}, \code{A3\%*\%x = b3} and \code{x >= 0}. Either maximization or minimization is possible but the default is minimization. } \usage{ simplex(a, A1 = NULL, b1 = NULL, A2 = NULL, b2 = NULL, A3 = NULL, b3 = NULL, maxi = FALSE, n.iter = n + 2 * m, eps = 1e-10) } \arguments{ \item{a}{ A vector of length \code{n} which gives the coefficients of the objective function. } \item{A1}{ An \code{m1} by \code{n} matrix of coefficients for the \eqn{\leq}{<=} type of constraints. } \item{b1}{ A vector of length \code{m1} giving the right hand side of the \eqn{\leq}{<=} constraints. This argument is required if \code{A1} is given and ignored otherwise. All values in \code{b1} must be non-negative. } \item{A2}{ An \code{m2} by \code{n} matrix of coefficients for the \eqn{\geq}{>=} type of constraints. } \item{b2}{ A vector of length \code{m2} giving the right hand side of the \eqn{\geq}{>=} constraints. This argument is required if \code{A2} is given and ignored otherwise. All values in \code{b2} must be non-negative. Note that the constraints \code{x >= 0} are included automatically and so should not be repeated here. } \item{A3}{ An \code{m3} by \code{n} matrix of coefficients for the equality constraints. } \item{b3}{ A vector of length \code{m3} giving the right hand side of equality constraints. This argument is required if \code{A3} is given and ignored otherwise. All values in \code{b3} must be non-negative. } \item{maxi}{ A logical flag which specifies minimization if \code{FALSE} (default) and maximization otherwise. If \code{maxi} is \code{TRUE} then the maximization problem is recast as a minimization problem by changing the objective function coefficients to their negatives. } \item{n.iter}{ The maximum number of iterations to be conducted in each phase of the simplex method. The default is \code{n+2*(m1+m2+m3)}. } \item{eps}{ The floating point tolerance to be used in tests of equality. } } \value{ An object of class \code{"simplex"}: see \code{\link{simplex.object}}. } \details{ The method employed by this function is the two phase tableau simplex method. 
If there are \eqn{\geq}{>=} or equality constraints, an initial feasible solution is not easy to find. To find a feasible solution, an artificial variable is introduced into each \eqn{\geq}{>=} or equality constraint and an auxiliary objective function is defined as the sum of these artificial variables. If a feasible solution to the set of constraints exists then the auxiliary objective will be minimized when all of the artificial variables are 0. These are then discarded and the original problem is solved starting at the solution to the auxiliary problem. If the only constraints are of the \eqn{\leq}{<=} form, the origin is a feasible solution and so the first stage can be omitted. } \note{ The method employed here is suitable only for relatively small systems. Also, if possible, the number of constraints should be reduced to a minimum in order to speed up the execution time, which is approximately proportional to the cube of the number of constraints. In particular, if there are any constraints of the form \code{x[i] >= b2[i]}, they should be omitted by setting \code{x[i] = x[i]-b2[i]}, changing all the constraints and the objective function accordingly and then transforming back after the solution has been found. } \references{ Gill, P.E., Murray, W. and Wright, M.H. (1991) \emph{Numerical Linear Algebra and Optimization Vol. 1}. Addison-Wesley. Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. (1992) \emph{Numerical Recipes: The Art of Scientific Computing (Second Edition)}. Cambridge University Press. } \examples{ # This example is taken from Exercise 7.5 of Gill, Murray and Wright (1991). enj <- c(200, 6000, 3000, -200) fat <- c(800, 6000, 1000, 400) vitx <- c(50, 3, 150, 100) vity <- c(10, 10, 75, 100) vitz <- c(150, 35, 75, 5) simplex(a = enj, A1 = fat, b1 = 13800, A2 = rbind(vitx, vity, vitz), b2 = c(600, 300, 550), maxi = TRUE) } \keyword{optimize} boot/man/simplex.object.Rd0000644000076600000240000000732011134046017015240 0ustar00ripleystaff\name{simplex.object} \alias{simplex.object} \title{ Linear Programming Solution Objects } \description{ Class of objects that result from solving a linear programming problem using \code{simplex}. } \section{Generation}{ This class of objects is returned from calls to the function \code{simplex}. } \section{Methods}{ The class \code{"simplex"} has a method for the function \code{print}. } \section{Structure}{ Objects of class \code{"simplex"} are implemented as a list with the following components. \describe{ \item{soln}{ The values of \code{x} which optimize the objective function under the specified constraints, provided those constraints are jointly feasible. } \item{solved}{ This indicates whether the problem was solved. A value of \code{-1} indicates that no feasible solution could be found. A value of \code{0} indicates that the maximum number of iterations was reached without termination of the second stage. This may indicate an unbounded function or simply that more iterations are needed. A value of \code{1} indicates that an optimal solution has been found. } \item{value}{ The value of the objective function at \code{soln}. } \item{val.aux}{ This is \code{NULL} if a feasible solution is found. Otherwise it is a positive value giving the value of the auxiliary objective function when it was minimized. } \item{obj}{ The original coefficients of the objective function. } \item{a}{ The objective function coefficients re-expressed such that the basic variables have coefficient zero. 
} \item{a.aux}{ This is \code{NULL} if a feasible solution is found. Otherwise it is the re-expressed auxiliary objective function at the termination of the first phase of the simplex method. } \item{A}{ The final constraint matrix, which is expressed in terms of the non-basic variables. If a feasible solution is found then this will have dimensions \code{m1+m2+m3} by \code{n+m1+m2}, where the final \code{m1+m2} columns correspond to slack and surplus variables. If no feasible solution is found there will be an additional \code{m1+m2+m3} columns for the artificial variables introduced to solve the first phase of the problem. } \item{basic}{ The indices of the basic (non-zero) variables in the solution. Indices between \code{n+1} and \code{n+m1} correspond to slack variables, those between \code{n+m1+1} and \code{n+m1+m2} correspond to surplus variables and those greater than \code{n+m1+m2} are artificial variables. Indices greater than \code{n+m1+m2} should occur only if \code{solved} is \code{-1} as the artificial variables are discarded in the second stage of the simplex method. } \item{slack}{ The final values of the \code{m1} slack variables which arise when the "<=" constraints are re-expressed as the equalities \code{A1\%*\%x + slack = b1}. } \item{surplus}{ The final values of the \code{m2} surplus variables which arise when the ">=" constraints are re-expressed as the equalities \code{A2\%*\%x - surplus = b2}. } \item{artificial}{ This is \code{NULL} if a feasible solution can be found. If no solution can be found then this contains the values of the \code{m1+m2+m3} artificial variables which minimize their sum subject to the original constraints. A feasible solution exists only if all of the artificial variables can be made 0 simultaneously. } } } \seealso{ \code{\link{print.simplex}}, \code{\link{simplex}} } \keyword{optimize} \keyword{methods} % Converted by Sd2Rd version 0.3-1. boot/man/smooth.f.Rd0000644000076600000240000001102511566474411014060 0ustar00ripleystaff\name{smooth.f} \alias{smooth.f} \title{ Smooth Distributions on Data Points } \description{ This function uses the method of frequency smoothing to find a distribution on a data set which has a required value, \code{theta}, of the statistic of interest. The method results in distributions which vary smoothly with \code{theta}. } \usage{ smooth.f(theta, boot.out, index = 1, t = boot.out$t[, index], width = 0.5) } \arguments{ \item{theta}{ The required value for the statistic of interest. If \code{theta} is a vector, a separate distribution will be found for each element of \code{theta}. } \item{boot.out}{ A bootstrap output object returned by a call to \code{boot}. } \item{index}{ The index of the variable of interest in the output of \code{boot.out$statistic}. This argument is ignored if \code{t} is supplied. \code{index} must be a scalar. } \item{t}{ The bootstrap values of the statistic of interest. This must be a vector of length \code{boot.out$R} and the values must be in the same order as the bootstrap replicates in \code{boot.out}. } \item{width}{ The standardized width for the kernel smoothing. The smoothing uses a value of \code{width*s} for epsilon, where \code{s} is the bootstrap estimate of the standard error of the statistic of interest. \code{width} should take a value in the range (0.2, 1) to produce a reasonable smoothed distribution. If \code{width} is too large then the distribution becomes closer to uniform. 
}} \value{ If \code{length(theta)} is 1 then a vector with the same length as the data set \code{boot.out$data} is returned. The value in position \code{i} is the probability to be given to the data point in position \code{i} so that the distribution has parameter value approximately equal to \code{theta}. If \code{length(theta)} is bigger than 1 then the returned value is a matrix with \code{length(theta)} rows each of which corresponds to a distribution with the parameter value approximately equal to the corresponding value of \code{theta}. } \details{ The new distributional weights are found by applying a normal kernel smoother to the observed values of \code{t} weighted by the observed frequencies in the bootstrap simulation. The resulting distribution may not have parameter value exactly equal to the required value \code{theta} but it will typically have a value which is close to \code{theta}. The details of how this method works can be found in Davison, Hinkley and Worton (1995) and Section 3.9.2 of Davison and Hinkley (1997). } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Davison, A.C., Hinkley, D.V. and Worton, B.J. (1995) Accurate and efficient construction of bootstrap likelihoods. \emph{Statistics and Computing}, \bold{5}, 257--264. } \seealso{ \code{\link{boot}}, \code{\link{exp.tilt}}, \code{\link{tilt.boot}} } \examples{ # Example 9.8 of Davison and Hinkley (1997) requires tilting the resampling # distribution of the studentized statistic to be centred at the observed # value of the test statistic 1.84. In the book exponential tilting was used # but it is also possible to use smooth.f. grav1 <- gravity[as.numeric(gravity[, 2]) >= 7, ] grav.fun <- function(dat, w, orig) { strata <- tapply(dat[, 2], as.numeric(dat[, 2])) d <- dat[, 1] ns <- tabulate(strata) w <- w/tapply(w, strata, sum)[strata] mns <- as.vector(tapply(d * w, strata, sum)) # drop names mn2 <- tapply(d * d * w, strata, sum) s2hat <- sum((mn2 - mns^2)/ns) c(mns[2] - mns[1], s2hat, (mns[2]-mns[1]-orig)/sqrt(s2hat)) } grav.z0 <- grav.fun(grav1, rep(1, 26), 0) grav.boot <- boot(grav1, grav.fun, R = 499, stype = "w", strata = grav1[, 2], orig = grav.z0[1]) grav.sm <- smooth.f(grav.z0[3], grav.boot, index = 3) # Now we can run another bootstrap using these weights grav.boot2 <- boot(grav1, grav.fun, R = 499, stype = "w", strata = grav1[, 2], orig = grav.z0[1], weights = grav.sm) # Estimated p-values can be found from these as follows mean(grav.boot$t[, 3] >= grav.z0[3]) imp.prob(grav.boot2, t0 = -grav.z0[3], t = -grav.boot2$t[, 3]) # Note that for the importance sampling probability we must # multiply everything by -1 to ensure that we find the correct # probability. Raw resampling is not reliable for probabilities # greater than 0.5. Thus 1 - imp.prob(grav.boot2, index = 3, t0 = grav.z0[3])$raw # can give very strange results (negative probabilities). } \keyword{smooth} \keyword{nonparametric} % Converted by Sd2Rd version 1.15. boot/man/sunspot.Rd0000644000076600000240000000254711120174562014035 0ustar00ripleystaff\name{sunspot} \alias{sunspot} \title{ Annual Mean Sunspot Numbers } \description{ \code{sunspot} is a time series and contains 289 observations. The Zurich sunspot numbers have been analyzed in almost all books on time series analysis as well as numerous papers. The data set, usually attributed to Rudolf Wolf, consists of means of daily relative numbers of sunspot sightings. 
The relative number for a day is given by k(f+10g) where g is the number of sunspot groups observed, f is the total number of spots within the groups and k is a scaling factor relating the observer and telescope to a baseline. The relative numbers are then averaged to give an annual figure. See Inzenman (1983) for a discussion of the relative numbers. The figures are for the years 1700-1988. } \source{ The data were obtained from Tong, H. (1990) \emph{Nonlinear Time Series: A Dynamical System Approach}. Oxford University Press } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Inzenman, A.J. (1983) J.R. Wolf and H.A. Wolfer: An historical note on the Zurich sunspot relative numbers. \emph{Journal of the Royal Statistical Society, A}, \bold{146}, 311-318. Waldmeir, M. (1961) \emph{The Sunspot Activity in the Years 1610-1960}. Schulthess and Co. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/survival.Rd0000644000076600000240000000162411110552530014162 0ustar00ripleystaff\name{survival} \alias{survival} \title{ Survival of Rats after Radiation Doses } \description{ The \code{survival} data frame has 14 rows and 2 columns. The data measured the survival percentages of batches of rats who were given varying doses of radiation. At each of 6 doses there were two or three replications of the experiment. } \usage{ survival } \format{ This data frame contains the following columns: \describe{ \item{\code{dose}}{ The dose of radiation administered (rads). } \item{\code{surv}}{ The survival rate of the batches expressed as a percentage. }}} \source{ The data were obtained from Efron, B. (1988) Computer-intensive methods in statistical regression. \emph{SIAM Review}, \bold{30}, 421--449. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/tau.Rd0000644000076600000240000000402411110552530013075 0ustar00ripleystaff\name{tau} \alias{tau} \title{ Tau Particle Decay Modes } \description{ The \code{tau} data frame has 60 rows and 2 columns. The tau particle is a heavy electron-like particle discovered in the 1970's by Martin Perl at the Stanford Linear Accelerator Center. Soon after its production the tau particle decays into various collections of more stable particles. About 86\% of the time the decay involves just one charged particle. This rate has been measured independently 13 times. The one-charged-particle event is made up of four major modes of decay as well as a collection of other events. The four main types of decay are denoted rho, pi, e and mu. These rates have been measured independently 6, 7, 14 and 19 times respectively. Due to physical constraints each experiment can only estimate the composite one-charged-particle decay rate or the rate of one of the major modes of decay. Each experiment consists of a major research project involving many years work. One of the goals of the experiments was to estimate the rate of decay due to events other than the four main modes of decay. These are uncertain events and so cannot themselves be observed directly. } \usage{ tau } \format{ This data frame contains the following columns: \describe{ \item{\code{rate}}{ The decay rate expressed as a percentage. } \item{\code{decay}}{ The type of decay measured in the experiment. It is a factor with levels \code{1}, \code{rho}, \code{pi}, \code{e} and \code{mu}. 
}}} \source{ The data were obtained from Efron, B. (1992) Jackknife-after-bootstrap standard errors and influence functions (with Discussion). \emph{Journal of the Royal Statistical Society, B}, \bold{54}, 83--127. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Hayes, K.G., Perl, M.L. and Efron, B. (1989) Application of the bootstrap statistical method to the tau-decay-mode problem. \emph{Physical Review, D}, \bold{39}, 274-279. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/tilt.boot.Rd0000644000076600000240000002006311573143357014243 0ustar00ripleystaff\name{tilt.boot} \alias{tilt.boot} \title{ Non-parametric Tilted Bootstrap } \description{ This function will run an initial bootstrap with equal resampling probabilities (if required) and will use the output of the initial run to find resampling probabilities which put the value of the statistic at required values. It then runs an importance resampling bootstrap using the calculated probabilities as the resampling distribution. } \usage{ tilt.boot(data, statistic, R, sim = "ordinary", stype = "i", strata = rep(1, n), L = NULL, theta = NULL, alpha = c(0.025, 0.975), tilt = TRUE, width = 0.5, index = 1, \dots) } \arguments{ \item{data}{ The data as a vector, matrix or data frame. If it is a matrix or data frame then each row is considered as one (multivariate) observation. } \item{statistic}{ A function which when applied to data returns a vector containing the statistic(s) of interest. It must take at least two arguments. The first argument will always be \code{data} and the second should be a vector of indices, weights or frequencies describing the bootstrap sample. Any other arguments must be supplied to \code{tilt.boot} and will be passed unchanged to statistic each time it is called. } \item{R}{ The number of bootstrap replicates required. This will generally be a vector, the first value stating how many uniform bootstrap simulations are to be performed at the initial stage. The remaining values of \code{R} are the number of simulations to be performed resampling from each reweighted distribution. The first value of \code{R} must always be present, a value of 0 implying that no uniform resampling is to be carried out. Thus \code{length(R)} should always equal \code{1+length(theta)}. } \item{sim}{ This is a character string indicating the type of bootstrap simulation required. There are only two possible values that this can take: \code{"ordinary"} and \code{"balanced"}. If other simulation types are required for the initial un-weighted bootstrap then it will be necessary to run \code{boot}, calculate the weights appropriately, and run \code{boot} again using the calculated weights. } \item{stype}{ A character string indicating the type of second argument expected by \code{statistic}. The possible values that \code{stype} can take are \code{"i"} (indices), \code{"w"} (weights) and \code{"f"} (frequencies). } \item{strata}{ An integer vector or factor representing the strata for multi-sample problems. } \item{L}{ The empirical influence values for the statistic of interest. They are used only for exponential tilting when \code{tilt} is \code{TRUE}. If \code{tilt} is \code{TRUE} and they are not supplied then \code{tilt.boot} uses \code{empinf} to calculate them. } \item{theta}{ The required parameter value(s) for the tilted distribution(s). There should be one value of \code{theta} for each of the non-uniform distributions. 
If \code{R[1]} is 0, \code{theta} is a required argument. Otherwise \code{theta} values can be estimated from the initial uniform bootstrap and the values in \code{alpha}. } \item{alpha}{ The alpha level to which tilting is required. This parameter is ignored if \code{R[1]} is 0 or if \code{theta} is supplied, otherwise it is used to find the values of \code{theta} as quantiles of the initial uniform bootstrap. In this case \code{R[1]} should be large enough that \code{min(c(alpha, 1-alpha))*R[1] > 5}; if this is not the case then a warning is generated to the effect that the \code{theta} are extreme values and so the tilted output may be unreliable. } \item{tilt}{ A logical variable which, if \code{TRUE} (the default), indicates that exponential tilting should be used, otherwise local frequency smoothing (\code{smooth.f}) is used. If \code{tilt} is \code{FALSE} then \code{R[1]} must be positive. In fact, in this case, the value of \code{R[1]} should be fairly large (in the region of 500 or more). } \item{width}{ This argument is used only if \code{tilt} is \code{FALSE}, in which case it is passed unchanged to \code{smooth.f} as the standardized bandwidth for the smoothing operation. The value should generally be in the range (0.2, 1). See \code{smooth.f} for more details. } \item{index}{ The index of the statistic of interest in the output from \code{statistic}. By default the first element of the output of \code{statistic} is used. } \item{\dots}{ Any additional arguments required by \code{statistic}. These are passed unchanged to \code{statistic} each time it is called. } } \value{ An object of class \code{"boot"} with the following components \item{t0}{ The observed value of the statistic on the original data. } \item{t}{ The values of the bootstrap replicates of the statistic. There will be \code{sum(R)} of these, the first \code{R[1]} corresponding to the uniform bootstrap and the remainder to the tilted bootstrap(s). } \item{R}{ The input vector of the number of bootstrap replicates. } \item{data}{ The original data as supplied. } \item{statistic}{ The \code{statistic} function as supplied. } \item{sim}{ The simulation type used in the bootstrap(s); it can be either \code{"ordinary"} or \code{"balanced"}. } \item{stype}{ The type of statistic supplied; it is the same as the input value \code{stype}. } \item{call}{ A copy of the original call to \code{tilt.boot}. } \item{strata}{ The strata as supplied. } \item{weights}{ The matrix of weights used. If \code{R[1]} is greater than 0 then the first row will be the uniform weights and each subsequent row the tilted weights. If \code{R[1]} equals 0 then the uniform weights are omitted and only the tilted weights are output. } \item{theta}{ The values of \code{theta} used for the tilted distributions. These are either the input values or the values derived from the uniform bootstrap and \code{alpha}. } } \references{ Booth, J.G., Hall, P. and Wood, A.T.A. (1993) Balanced importance resampling for the bootstrap. \emph{Annals of Statistics}, \bold{21}, 286--298. Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Hinkley, D.V. and Shi, S. (1989) Importance sampling and the nested bootstrap. \emph{Biometrika}, \bold{76}, 435--446. } \seealso{ \code{\link{boot}}, \code{\link{exp.tilt}}, \code{\link{Imp.Estimates}}, \code{\link{imp.weights}}, \code{\link{smooth.f}} } \examples{ # Note that these examples can take a while to run. # Example 9.9 of Davison and Hinkley (1997). 
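# A small illustrative check (not part of the book example): the first
# tilt.boot() call below relies on the default alpha = c(0.025, 0.975) and
# estimates 'theta' from the R[1] = 249 uniform replicates.  The rule of
# thumb given for the 'alpha' argument, min(c(alpha, 1 - alpha)) * R[1] > 5,
# is then satisfied, since 0.025 * 249 is about 6.2:
min(c(0.025, 0.975, 1 - c(0.025, 0.975))) * 249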
grav1 <- gravity[as.numeric(gravity[,2]) >= 7, ] grav.fun <- function(dat, w, orig) { strata <- tapply(dat[, 2], as.numeric(dat[, 2])) d <- dat[, 1] ns <- tabulate(strata) w <- w/tapply(w, strata, sum)[strata] mns <- as.vector(tapply(d * w, strata, sum)) # drop names mn2 <- tapply(d * d * w, strata, sum) s2hat <- sum((mn2 - mns^2)/ns) c(mns[2]-mns[1],s2hat,(mns[2]-mns[1]-orig)/sqrt(s2hat)) } grav.z0 <- grav.fun(grav1, rep(1, 26), 0) tilt.boot(grav1, grav.fun, R = c(249, 375, 375), stype = "w", strata = grav1[,2], tilt = TRUE, index = 3, orig = grav.z0[1]) # Example 9.10 of Davison and Hinkley (1997) requires a balanced # importance resampling bootstrap to be run. In this example we # show how this might be run. acme.fun <- function(data, i, bhat) { d <- data[i,] n <- nrow(d) d.lm <- glm(d$acme~d$market) beta.b <- coef(d.lm)[2] d.diag <- boot::glm.diag(d.lm) SSx <- (n-1)*var(d$market) tmp <- (d$market-mean(d$market))*d.diag$res*d.diag$sd sr <- sqrt(sum(tmp^2))/SSx c(beta.b, sr, (beta.b-bhat)/sr) } acme.b <- acme.fun(acme, 1:nrow(acme), 0) acme.boot1 <- tilt.boot(acme, acme.fun, R = c(499, 250, 250), stype = "i", sim = "balanced", alpha = c(0.05, 0.95), tilt = TRUE, index = 3, bhat = acme.b[1]) } \keyword{nonparametric} boot/man/tsboot.Rd0000644000076600000240000002477111573362666013657 0ustar00ripleystaff\name{tsboot} \alias{tsboot} \alias{ts.return} \title{ Bootstrapping of Time Series } \description{ Generate \code{R} bootstrap replicates of a statistic applied to a time series. The replicate time series can be generated using fixed or random block lengths or can be model based replicates. } \usage{ tsboot(tseries, statistic, R, l = NULL, sim = "model", endcorr = TRUE, n.sim = NROW(tseries), orig.t = TRUE, ran.gen, ran.args = NULL, norm = TRUE, \dots, parallel = c("no", "multicore", "snow"), ncpus = getOption("boot.ncpus", 1L), cl = NULL) } \arguments{ \item{tseries}{ A univariate or multivariate time series. } \item{statistic}{ A function which when applied to \code{tseries} returns a vector containing the statistic(s) of interest. Each time \code{statistic} is called it is passed a time series of length \code{n.sim} which is of the same class as the original \code{tseries}. Any other arguments which \code{statistic} takes must remain constant for each bootstrap replicate and should be supplied through the \dots{} argument to \code{tsboot}. } \item{R}{ A positive integer giving the number of bootstrap replicates required. } \item{sim}{ The type of simulation required to generate the replicate time series. The possible input values are \code{"model"} (model based resampling), \code{"fixed"} (block resampling with fixed block lengths of \code{l}), \code{"geom"} (block resampling with block lengths having a geometric distribution with mean \code{l}) or \code{"scramble"} (phase scrambling). } \item{l}{ If \code{sim} is \code{"fixed"} then \code{l} is the fixed block length used in generating the replicate time series. If \code{sim} is \code{"geom"} then \code{l} is the mean of the geometric distribution used to generate the block lengths. \code{l} should be a positive integer less than the length of \code{tseries}. This argument is not required when \code{sim} is \code{"model"} but it is required for all other simulation types. } \item{endcorr}{ A logical variable indicating whether end corrections are to be applied when \code{sim} is \code{"fixed"}. 
When \code{sim} is \code{"geom"}, \code{endcorr} is automatically set to \code{TRUE}; \code{endcorr} is not used when \code{sim} is \code{"model"} or \code{"scramble"}. } \item{n.sim}{ The length of the simulated time series. Typically this will be equal to the length of the original time series but there are situations when it will be larger. One obvious situation is if prediction is required. Another situation in which \code{n.sim} is larger than the original length is if \code{tseries} is a residual time series from fitting some model to the original time series. In this case, \code{n.sim} would usually be the length of the original time series. } \item{orig.t}{ A logical variable which indicates whether \code{statistic} should be applied to \code{tseries} itself as well as the bootstrap replicate series. If \code{statistic} is expecting a longer time series than \code{tseries} or if applying \code{statistic} to \code{tseries} will not yield any useful information then \code{orig.t} should be set to \code{FALSE}. } \item{ran.gen}{ This is a function of three arguments. The first argument is a time series. If \code{sim} is \code{"model"} then it will always be \code{tseries} that is passed. For other simulation types it is the result of selecting \code{n.sim} observations from \code{tseries} by some scheme and converting the result back into a time series of the same form as \code{tseries} (although of length \code{n.sim}). The second argument to \code{ran.gen} is always the value \code{n.sim}, and the third argument is \code{ran.args}, which is used to supply any other objects needed by \code{ran.gen}. If \code{sim} is \code{"model"} then the generation of the replicate time series will be done in \code{ran.gen} (for example through use of \code{\link{arima.sim}}). For the other simulation types \code{ran.gen} is used for \sQuote{post-blackening}. The default is that the function simply returns the time series passed to it. } \item{ran.args}{ This will be supplied to \code{ran.gen} each time it is called. If \code{ran.gen} needs any extra arguments then they should be supplied as components of \code{ran.args}. Multiple arguments may be passed by making \code{ran.args} a list. If \code{ran.args} is \code{NULL} then it should not be used within \code{ran.gen} but note that \code{ran.gen} must still have its third argument. } \item{norm}{ A logical argument indicating whether normal margins should be used for phase scrambling. If \code{norm} is \code{FALSE} then margins corresponding to the exact empirical margins are used. } \item{...}{ Extra named arguments to \code{statistic} may be supplied here. Beware of partial matching to the arguments of \code{tsboot} listed above. } \item{parallel, ncpus, cl}{ See the help for \code{\link{boot}}. } } \value{ An object of class \code{"boot"} with the following components. \item{t0}{ If \code{orig.t} is \code{TRUE} then \code{t0} is the result of \code{statistic(tseries,\dots{})} otherwise it is \code{NULL}. } \item{t}{ The results of applying \code{statistic} to the replicate time series. } \item{R}{ The value of \code{R} as supplied to \code{tsboot}. } \item{tseries}{ The original time series. } \item{statistic}{ The function \code{statistic} as supplied. } \item{sim}{ The simulation type used in generating the replicates. } \item{endcorr}{ The value of \code{endcorr} used. 
The value is meaningful only when \code{sim} is \code{"fixed"}; it is ignored for model based simulation or phase scrambling and is always set to \code{TRUE} if \code{sim} is \code{"geom"}. } \item{n.sim}{ The value of \code{n.sim} used. } \item{l}{ The value of \code{l} used for block based resampling. This will be \code{NULL} if block based resampling was not used. } \item{ran.gen}{ The \code{ran.gen} function used for generating the series or for \sQuote{post-blackening}. } \item{ran.args}{ The extra arguments passed to \code{ran.gen}. } \item{call}{ The original call to \code{tsboot}. } } \details{ If \code{sim} is \code{"fixed"} then each replicate time series is found by taking blocks of length \code{l}, from the original time series and putting them end-to-end until a new series of length \code{n.sim} is created. When \code{sim} is \code{"geom"} a similar approach is taken except that now the block lengths are generated from a geometric distribution with mean \code{l}. Post-blackening can be carried out on these replicate time series by including the function \code{ran.gen} in the call to \code{tsboot} and having \code{tseries} as a time series of residuals. Model based resampling is very similar to the parametric bootstrap and all simulation must be in one of the user specified functions. This avoids the complicated problem of choosing the block length but relies on an accurate model choice being made. Phase scrambling is described in Section 8.2.4 of Davison and Hinkley (1997). The types of statistic for which this method produces reasonable results is very limited and the other methods seem to do better in most situations. Other types of resampling in the frequency domain can be accomplished using the function \code{boot} with the argument \code{sim = "parametric"}. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. Kunsch, H.R. (1989) The jackknife and the bootstrap for general stationary observations. \emph{Annals of Statistics}, \bold{17}, 1217--1241. Politis, D.N. and Romano, J.P. (1994) The stationary bootstrap. \emph{Journal of the American Statistical Association}, \bold{89}, 1303--1313. } \seealso{ \code{\link{boot}}, \code{\link{arima.sim}} } \examples{ lynx.fun <- function(tsb) { ar.fit <- ar(tsb, order.max = 25) c(ar.fit$order, mean(tsb), tsb) } # the stationary bootstrap with mean block length 20 lynx.1 <- tsboot(log(lynx), lynx.fun, R = 99, l = 20, sim = "geom") # the fixed block bootstrap with length 20 lynx.2 <- tsboot(log(lynx), lynx.fun, R = 99, l = 20, sim = "fixed") # Now for model based resampling we need the original model # Note that for all of the bootstraps which use the residuals as their # data, we set orig.t to FALSE since the function applied to the residual # time series will be meaningless. 
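# A hedged sketch (not part of the original example): this is roughly what
# sim = "fixed" does internally -- blocks of l consecutive observations are
# drawn with replacement and laid end-to-end until n.sim values are obtained.
# The end correction controlled by 'endcorr' is ignored here, and tsboot()
# should be used in practice; the helper below is purely illustrative.
manual.block <- function(x, l, n.sim) {
    starts <- sample(seq_len(length(x) - l + 1), ceiling(n.sim / l),
                     replace = TRUE)
    idx <- as.vector(outer(seq_len(l) - 1L, starts, "+"))
    ts(x[idx][seq_len(n.sim)])
}
# e.g. one replicate series: manual.block(log(lynx), 20, 114)
# Returning to the model-based resampling described in the comments above: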
lynx.ar <- ar(log(lynx)) lynx.model <- list(order = c(lynx.ar$order, 0, 0), ar = lynx.ar$ar) lynx.res <- lynx.ar$resid[!is.na(lynx.ar$resid)] lynx.res <- lynx.res - mean(lynx.res) lynx.sim <- function(res,n.sim, ran.args) { # random generation of replicate series using arima.sim rg1 <- function(n, res) sample(res, n, replace = TRUE) ts.orig <- ran.args$ts ts.mod <- ran.args$model mean(ts.orig)+ts(arima.sim(model = ts.mod, n = n.sim, rand.gen = rg1, res = as.vector(res))) } lynx.3 <- tsboot(lynx.res, lynx.fun, R = 99, sim = "model", n.sim = 114, orig.t = FALSE, ran.gen = lynx.sim, ran.args = list(ts = log(lynx), model = lynx.model)) # For "post-blackening" we need to define another function lynx.black <- function(res, n.sim, ran.args) { ts.orig <- ran.args$ts ts.mod <- ran.args$model mean(ts.orig) + ts(arima.sim(model = ts.mod,n = n.sim,innov = res)) } # Now we can run apply the two types of block resampling again but this # time applying post-blackening. lynx.1b <- tsboot(lynx.res, lynx.fun, R = 99, l = 20, sim = "fixed", n.sim = 114, orig.t = FALSE, ran.gen = lynx.black, ran.args = list(ts = log(lynx), model = lynx.model)) lynx.2b <- tsboot(lynx.res, lynx.fun, R = 99, l = 20, sim = "geom", n.sim = 114, orig.t = FALSE, ran.gen = lynx.black, ran.args = list(ts = log(lynx), model = lynx.model)) # To compare the observed order of the bootstrap replicates we # proceed as follows. table(lynx.1$t[, 1]) table(lynx.1b$t[, 1]) table(lynx.2$t[, 1]) table(lynx.2b$t[, 1]) table(lynx.3$t[, 1]) # Notice that the post-blackened and model-based bootstraps preserve # the true order of the model (11) in many more cases than the others. } \keyword{nonparametric} \keyword{ts} boot/man/tuna.Rd0000644000076600000240000000204311110552530013252 0ustar00ripleystaff\name{tuna} \alias{tuna} \title{ Tuna Sighting Data } \description{ The \code{tuna} data frame has 64 rows and 1 columns. The data come from an aerial line transect survey of Southern Bluefin Tuna in the Great Australian Bight. An aircraft with two spotters on board flies randomly allocated line transects. Each school of tuna sighted is counted and its perpendicular distance from the transect measured. The survey was conducted in summer when tuna tend to stay on the surface. } \usage{ tuna } \format{ This data frame contains the following column: \describe{ \item{\code{y}}{ The perpendicular distance, in miles, from the transect for 64 independent sightings of tuna schools. }}} \source{ The data were obtained from Chen, S.X. (1996) Empirical likelihood confidence intervals for nonparametric density estimation. \emph{Biometrika}, \bold{83}, 329--341. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/urine.Rd0000644000076600000240000000251011110552530013424 0ustar00ripleystaff\name{urine} \alias{urine} \title{ Urine Analysis Data } \description{ The \code{urine} data frame has 79 rows and 7 columns. 79 urine specimens were analyzed in an effort to determine if certain physical characteristics of the urine might be related to the formation of calcium oxalate crystals. } \usage{ urine } \format{ This data frame contains the following columns: \describe{ \item{\code{r}}{ Indicator of the presence of calcium oxalate crystals. } \item{\code{gravity}}{ The specific gravity of the urine. } \item{\code{ph}}{ The pH reading of the urine. } \item{\code{osmo}}{ The osmolarity of the urine. 
Osmolarity is proportional to the concentration of molecules in solution. } \item{\code{cond}}{ The conductivity of the urine. Conductivity is proportional to the concentration of charged ions in solution. } \item{\code{urea}}{ The urea concentration in millimoles per litre. } \item{\code{calc}}{ The calcium concentration in millimoles per litre. }}} \source{ The data were obtained from Andrews, D.F. and Herzberg, A.M. (1985) \emph{Data: A Collection of Problems from Many Fields for the Student and Research Worker}. Springer-Verlag. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. boot/man/var.linear.Rd0000644000076600000240000000175211566474536014402 0ustar00ripleystaff\name{var.linear} \alias{var.linear} \title{ Linear Variance Estimate } \description{ Estimates the variance of a statistic from its empirical influence values. } \usage{ var.linear(L, strata = NULL) } \arguments{ \item{L}{ Vector of the empirical influence values of a statistic. These will usually be calculated by a call to \code{empinf}. } \item{strata}{ A numeric vector or factor specifying which observations (and hence empirical influence values) come from which strata. }} \value{ The variance estimate calculated from \code{L}. } \references{ Davison, A. C. and Hinkley, D. V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \seealso{ \code{\link{empinf}}, \code{\link{linear.approx}}, \code{\link{k3.linear}} } \examples{ # To estimate the variance of the ratio of means for the city data. ratio <- function(d,w) sum(d$x * w)/sum(d$u * w) var.linear(empinf(data = city, statistic = ratio)) } \keyword{nonparametric} % Converted by Sd2Rd version 1.15. boot/man/wool.Rd0000644000076600000240000000162411110552530013267 0ustar00ripleystaff\name{wool} \alias{wool} \title{ Australian Relative Wool Prices } \description{ \code{wool} is a time series of class \code{"ts"} and contains 309 observations. Each week that the market is open the Australian Wool Corporation set a floor price which determines their policy on intervention and is therefore a reflection of the overall price of wool for the week in question. Actual prices paid can vary considerably about the floor price. The series here is the log of the ratio between the price for fine grade wool and the floor price, each market week between July 1976 and Jun 1984. } \source{ The data were obtained from Diggle, P.J. (1990) \emph{Time Series: A Biostatistical Introduction}. Oxford University Press. } \references{ Davison, A.C. and Hinkley, D.V. (1997) \emph{Bootstrap Methods and Their Application}. Cambridge University Press. } \keyword{datasets} % Converted by Sd2Rd version 1.15. 
boot/po/0000755000076600000240000000000012121561250011662 5ustar00ripleystaffboot/po/R-boot.pot0000644000076600000240000001123712122262107013554 0ustar00ripleystaffmsgid "" msgstr "" "Project-Id-Version: boot 1.3-9\n" "POT-Creation-Date: 2013-03-20 07:24\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=CHARSET\n" "Content-Transfer-Encoding: 8bit\n" msgid "'simple=TRUE' is only valid for 'sim=\"ordinary\", stype=\"i\", n=0', so ignored" msgstr "" msgid "no data in call to 'boot'" msgstr "" msgid "negative value of 'm' supplied" msgstr "" msgid "length of 'm' incompatible with 'strata'" msgstr "" msgid "dimensions of 'R' and 'weights' do not match" msgstr "" msgid "arguments are not all the same type of \"boot\" object" msgstr "" msgid "index array not defined for model-based resampling" msgstr "" msgid "boot.array not implemented for this object" msgstr "" msgid "array cannot be found for parametric bootstrap" msgstr "" msgid "%s distribution not supported: using normal instead" msgstr "" msgid "only first element of 'index' used" msgstr "" msgid "'K' outside allowable range" msgstr "" msgid "'K' has been set to %f" msgstr "" msgid "'t' and 't0' must be supplied together" msgstr "" msgid "index out of bounds; minimum index only used." msgstr "" msgid "'t' must of length %d" msgstr "" msgid "bootstrap variances needed for studentized intervals" msgstr "" msgid "BCa intervals not defined for time series bootstraps" msgstr "" msgid "bootstrap output object or 't0' required" msgstr "" msgid "unable to calculate 'var.t0'" msgstr "" msgid "extreme order statistics used as endpoints" msgstr "" msgid "variance required for studentized intervals" msgstr "" msgid "estimated adjustment 'w' is infinite" msgstr "" msgid "estimated adjustment 'a' is NA" msgstr "" msgid "only first element of 'index' used in 'abc.ci'" msgstr "" msgid "missing values not allowed in 'data'" msgstr "" msgid "unknown value of 'sim'" msgstr "" msgid "'data' must be a matrix with at least 2 columns" msgstr "" msgid "'index' must contain 2 elements" msgstr "" msgid "only first 2 elements of 'index' used" msgstr "" msgid "indices are incompatible with 'ncol(data)'" msgstr "" msgid "sim = \"weird\" cannot be used with a \"coxph\" object" msgstr "" msgid "only columns %s and %s of 'data' used" msgstr "" msgid "no coefficients in Cox model -- model ignored" msgstr "" msgid "'F.surv' is required but missing" msgstr "" msgid "'G.surv' is required but missing" msgstr "" msgid "'strata' of wrong length" msgstr "" msgid "influence values cannot be found from a parametric bootstrap" msgstr "" msgid "neither 'data' nor bootstrap object specified" msgstr "" msgid "neither 'statistic' nor bootstrap object specified" msgstr "" msgid "'stype' must be \"w\" for type=\"inf\"" msgstr "" msgid "input 't' ignored; type=\"inf\"" msgstr "" msgid "bootstrap object needed for type=\"reg\"" msgstr "" msgid "input 't' ignored; type=\"jack\"" msgstr "" msgid "input 't' ignored; type=\"pos\"" msgstr "" msgid "input 't0' ignored: neither 't' nor 'L' supplied" msgstr "" msgid "bootstrap output matrix missing" msgstr "" msgid "use 'boot.ci' for scalar parameters" msgstr "" msgid "unable to achieve requested overall error rate" msgstr "" msgid "unable to find multiplier for %f" msgstr "" msgid "'theta' or 'lambda' required" msgstr "" msgid "0 elements not allowed in 'q'" msgstr "" msgid "bootstrap replicates must be supplied" msgstr "" msgid 
"either 'boot.out' or 'w' must be specified." msgstr "" msgid "only first column of 't' used" msgstr "" msgid "invalid value of 'sim' supplied" msgstr "" msgid "'R' and 'theta' have incompatible lengths" msgstr "" msgid "R[1L] must be positive for frequency smoothing" msgstr "" msgid "'R' and 'alpha' have incompatible lengths" msgstr "" msgid "extreme values used for quantiles" msgstr "" msgid "'theta' must be supplied if R[1L] = 0" msgstr "" msgid "'alpha' ignored; R[1L] = 0" msgstr "" msgid "control methods undefined when 'boot.out' has weights" msgstr "" msgid "number of columns of 'A' (%d) not equal to length of 'u' (%d)" msgstr "" msgid "either 'A' and 'u' or 'K.adj' and 'K2' must be supplied" msgstr "" msgid "this type not implemented for Poisson" msgstr "" msgid "this type not implemented for Binary" msgstr "" msgid "one of 't' or 't0' required" msgstr "" msgid "function 'u' missing" msgstr "" msgid "'u' must be a function" msgstr "" msgid "unable to find range" msgstr "" msgid "'R' must be positive" msgstr "" msgid "invalid value of 'l'" msgstr "" msgid "unrecognized value of 'sim'" msgstr "" msgid "multivariate time series not allowed" msgstr "" msgid "likelihood never exceeds %f" msgstr "" msgid "likelihood exceeds %f at only one point" msgstr "" boot/po/R-de.po0000644000076600000240000002126212035553062013022 0ustar00ripleystaff# Translation of boot to German # Copyright (C) 2005 The R Foundation # This file is distributed under the same license as the boot package. # Copyright (C) of this file 2009-2012 Chris Leick . # 2012 Detlef Steuer msgid "" msgstr "" "Project-Id-Version: R 2.15.2 / boot 1.3-6-\n" "Report-Msgid-Bugs-To: bugs@r-project.org\n" "POT-Creation-Date: 2012-10-11 15:21\n" "PO-Revision-Date: 2012-10-11 16:01+0200\n" "Last-Translator: Chris Leick \n" "Language-Team: German \n" "Language: de\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "Plural-Forms: nplurals=2; plural=(n != 1);\n" msgid "" "'simple=TRUE' is only valid for 'sim=\"ordinary\", stype=\"i\", n=0', so " "ignored" msgstr "" "'simple=TRUE' gilt nur für 'sim=\"ordinary\", stype=\"i\", n=0' und wird " "daher hier ignoriert" msgid "no data in call to 'boot'" msgstr "keine Daten im Aufruf von 'boot'" msgid "negative value of 'm' supplied" msgstr "negativer Wert von 'm' angegeben" msgid "length of 'm' incompatible with 'strata'" msgstr "Länge von 'm' inkompatibel mit 'strata'" msgid "dimensions of 'R' and 'weights' do not match" msgstr "Dimensionen von 'R' und 'weights' stimmen nicht überein" msgid "arguments are not all the same type of \"boot\" object" msgstr "Argumente waren nicht all vom selben Typ des 'boot'-Objekts" # http://de.wikipedia.org/wiki/Resampling msgid "index array not defined for model-based resampling" msgstr "Index-Array nicht für Modell-basiertes Resampling definiert" msgid "boot.array not implemented for this object" msgstr "'boot.array' nicht für dieses Objekt implementiert" # http://de.wikipedia.org/wiki/Bootstrapping_(Statistik) msgid "array cannot be found for parametric bootstrap" msgstr "Array kann nicht für parametrisches Bootstrapping gefunden werden" # R/bootfuns.q msgid "%s distribution not supported: using normal instead" msgstr "" "%s Verteilung nicht unterstützt, stattdessen wird Normalverteilung benutzt" msgid "only first element of 'index' used" msgstr "nur erstes Element von 'index' benutzt" msgid "'K' outside allowable range" msgstr "'K' außerhalb des erlaubbaren Bereichs" msgid "'K' has been set to %f" msgstr "'K' 
wurde auf %f gesetzt" msgid "'t' and 't0' must be supplied together" msgstr "'t' und 't0' müssen zusammen angegeben werden" msgid "index out of bounds; minimum index only used." msgstr "Index außerhalb des Rands. Minimalindex wird benutzt." msgid "'t' must of length %d" msgstr "'t' muss die Länge %d haben" # http://de.wikipedia.org/wiki/Studentisierung msgid "bootstrap variances needed for studentized intervals" msgstr "Bootstrap-Varianzen für studentisierte Intervalle benötigt" msgid "BCa intervals not defined for time series bootstraps" msgstr "BCa Intervalle nicht für Zeitreihenbootstrap definiert." msgid "bootstrap output object or 't0' required" msgstr "Bootstrap-Ausgabeobjekt oder 't0' benötigt" msgid "unable to calculate 'var.t0'" msgstr "'var.t0' kann nicht berechnet werden" # http://xtremes.stat.math.uni-siegen.de/xtremes_old/history.pdf msgid "extreme order statistics used as endpoints" msgstr "Extremwertstatistiken werden als Endpunkte benutzt" msgid "variance required for studentized intervals" msgstr "Varianz für studentisierte Intervalle benötigt" msgid "estimated adjustment 'w' is infinite" msgstr "geschätzte Anpassung 'w' ist unendlich" msgid "estimated adjustment 'a' is NA" msgstr "geschätzte Einstellung 'a' ist NA" msgid "only first element of 'index' used in 'abc.ci'" msgstr "nur erstes Element von 'index' wird in 'abc.ci' benutzt" msgid "missing values not allowed in 'data'" msgstr "fehlende Werte in 'data' nicht erlaubt" msgid "unknown value of 'sim'" msgstr "unbekannter Wert von 'sim'" msgid "'data' must be a matrix with at least 2 columns" msgstr "'data' muss eine Matrix mit mindestens 2 Spalten sein" msgid "'index' must contain 2 elements" msgstr "'index' muss 2 Elemente enthalten" msgid "only first 2 elements of 'index' used" msgstr "nur die beiden ersten Elemente von 'index' werden benutzt" msgid "indices are incompatible with 'ncol(data)'" msgstr "Indizes sind inkompatibel mit 'ncol(data)'" msgid "sim = \"weird\" cannot be used with a \"coxph\" object" msgstr "sim = \"weird\" kann nicht mit einem \"coxph\" Objekt benutzt werden" msgid "only columns %s and %s of 'data' used" msgstr "nur die Spalten %s und %s von 'data' werden benutzt" msgid "no coefficients in Cox model -- model ignored" msgstr "keine Koeffizienten im Cox-Modell -- Modell ignoriert" msgid "'F.surv' is required but missing" msgstr "'F.surv' wird benötigt, fehlt jedoch" msgid "'G.surv' is required but missing" msgstr "'G.surv' wird benötigt, fehlt jedoch" msgid "'strata' of wrong length" msgstr "'strata' hat falsche Länge" msgid "influence values cannot be found from a parametric bootstrap" msgstr "" "es können keine beeinflussenden Werte von einem parametrischen Bootstrap " "gefunden werden" msgid "neither 'data' nor bootstrap object specified" msgstr "weder 'data' noch Bootstrap-Objekt angegeben" msgid "neither 'statistic' nor bootstrap object specified" msgstr "weder 'statistic' noch Bootstrap-Objekt angegeben" msgid "'stype' must be \"w\" for type=\"inf\"" msgstr "'stype' muss für type=\"inf\" 'w' sein" msgid "input 't' ignored; type=\"inf\"" msgstr "Eingabe 't' ignoriert; type=\"inf\"" msgid "bootstrap object needed for type=\"reg\"" msgstr "Bootstrap-Objekt für type=\"reg\" benötigt" msgid "input 't' ignored; type=\"jack\"" msgstr "Eingabe 't' ignoriert; type=\"jack\"" msgid "input 't' ignored; type=\"pos\"" msgstr "Eingabe 't' ignoriert; type=\"pos\"" msgid "input 't0' ignored: neither 't' nor 'L' supplied" msgstr "Eingabe 't0' ignoriert: weder 't' noch 'L' angegeben" msgid "bootstrap output 
matrix missing" msgstr "Bootstrap-Ausgabematrix fehlt" msgid "use 'boot.ci' for scalar parameters" msgstr "benutzen Sie 'boot.ci' für skalare Parameter" msgid "unable to achieve requested overall error rate" msgstr "geforderte overall Fehlerquote kann nicht erreicht werden" msgid "unable to find multiplier for %f" msgstr "Es kann kein Multiplikator für %f gefunden werden" msgid "'theta' or 'lambda' required" msgstr "'theta' oder 'lambda' benötigt" msgid "0 elements not allowed in 'q'" msgstr "0 Elemente nicht in 'q' erlaubt" msgid "bootstrap replicates must be supplied" msgstr "Bootstrap-Kopien müssen angegeben werden" msgid "either 'boot.out' or 'w' must be specified." msgstr "Entweder 'boot.out' oder 'w' muss angegeben werden." msgid "only first column of 't' used" msgstr "Nur erste Spalte von 't' wird benutzt." msgid "invalid value of 'sim' supplied" msgstr "ungültiger Wert von 'sim' angegeben" msgid "'R' and 'theta' have incompatible lengths" msgstr "'R' und 'theta' haben inkompatible Längen" msgid "R[1L] must be positive for frequency smoothing" msgstr "R[1L] muss für Frequenz-Glättung positiv sein" msgid "'R' and 'alpha' have incompatible lengths" msgstr "'R' und 'alpha' haben inkompatible Längen" msgid "extreme values used for quantiles" msgstr "Extremwerte werden für Quantile benutzt" msgid "'theta' must be supplied if R[1L] = 0" msgstr "'theta' muss angegeben werden, falls R[1L] = 0 ist" msgid "'alpha' ignored; R[1L] = 0" msgstr "'alpha' ignoriert; R[1L]=0" msgid "control methods undefined when 'boot.out' has weights" msgstr "Kontrollmethoden undefiniert, wenn 'boot.out' Gewichte hat" msgid "number of columns of 'A' (%d) not equal to length of 'u' (%d)" msgstr "" "Anzahl der Spalten von 'A' (%d) ist nicht gleich der Länge von 'u' (%d)" msgid "either 'A' and 'u' or 'K.adj' and 'K2' must be supplied" msgstr "entweder 'A' und 'u' oder 'K.adj' und 'K2' müssen angegeben werden" msgid "this type not implemented for Poisson" msgstr "dieser Typ ist nicht für Poisson implementiert" msgid "this type not implemented for Binary" msgstr "dieser Typ ist nicht für Binary implementiert" msgid "one of 't' or 't0' required" msgstr "eins von 't' oder 't0' benötigt" msgid "function 'u' missing" msgstr "Funktion 'u' fehlt" msgid "'u' must be a function" msgstr "'u' muss eine Funktion sein" msgid "unable to find range" msgstr "Bereich kann nicht gefunden werden" msgid "'R' must be positive" msgstr "'R' muss psitiv sein" msgid "invalid value of 'l'" msgstr "ungültiger Wert von 'l'" msgid "unrecognized value of 'sim'" msgstr "unbekannter Wert von 'sim'" # http://de.wikipedia.org/wiki/Multivariat msgid "multivariate time series not allowed" msgstr "multivariate Zeitserien nicht erlaubt" msgid "likelihood never exceeds %f" msgstr "Wahrscheinlichkeit überschreitet niemals %f" msgid "likelihood exceeds %f at only one point" msgstr "Wahrscheinlichkeit überschreitet %f an einem Punkt" boot/po/R-fr.po0000644000076600000240000002167512035553135013052 0ustar00ripleystaff# Translation of R-boot.pot to French # Copyright (C) 2005 The R Foundation # This file is distributed under the same license as the boot R package. # Philippe Grosjean , 2005. 
# msgid "" msgstr "" "Project-Id-Version: boot 1.2-23\n" "Report-Msgid-Bugs-To: bugs@r-project.org\n" "POT-Creation-Date: 2012-10-11 15:21\n" "PO-Revision-Date: 2012-10-03 15:35+0100\n" "Last-Translator: Philippe Grosjean \n" "Language-Team: French \n" "Language: fr\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=ISO-8859-1\n" "Content-Transfer-Encoding: 8bit\n" "Plural-Forms: nplurals=2; plural=(n > 1);\n" "X-Generator: Poedit 1.5.3\n" msgid "" "'simple=TRUE' is only valid for 'sim=\"ordinary\", stype=\"i\", n=0', so " "ignored" msgstr "" "'simple=TRUE' n'est seulement valable que pour 'sim=\"ordinary\", stype=\"i" "\", n=0' ; il est donc ignoré" msgid "no data in call to 'boot'" msgstr "pas de données lors de l'appel à 'boot'" msgid "negative value of 'm' supplied" msgstr "valeur négative donnée pour 'm'" msgid "length of 'm' incompatible with 'strata'" msgstr "longueur de 'm' incompatible avec 'strata'" msgid "dimensions of 'R' and 'weights' do not match" msgstr "les dimensions de 'R' et 'weights' ne sont pas conformes" msgid "arguments are not all the same type of \"boot\" object" msgstr "les arguments ne sont pas tous du même type pour l'objet \"boot\"" msgid "index array not defined for model-based resampling" msgstr "" "indiçage de tableau non défini pour un rééchantillonnage basé sur un modèle" msgid "boot.array not implemented for this object" msgstr "boot.array non implémenté pour cet objet" msgid "array cannot be found for parametric bootstrap" msgstr "tableau non trouvé pour un bootstrap pamétrique" msgid "%s distribution not supported: using normal instead" msgstr "" "%s distribution non supportée, utilisation d'une distribution normale à la " "place" msgid "only first element of 'index' used" msgstr "seul le premier élément d''index' est utilisé" msgid "'K' outside allowable range" msgstr "'K' en dehors de la plage admise" msgid "'K' has been set to %f" msgstr "'K' est fixé à %f" msgid "'t' and 't0' must be supplied together" msgstr "'t' et 't0' doivent être fixés simultanément" msgid "index out of bounds; minimum index only used." 
msgstr "indice hors plage ; l'indice le plus petit est utilisé" msgid "'t' must of length %d" msgstr "'t' doit être de longueur %d" msgid "bootstrap variances needed for studentized intervals" msgstr "" "les variances de bootstrap sont nécessaires pour les intervalles studentisés" msgid "BCa intervals not defined for time series bootstraps" msgstr "" "les intervalles BCa ne sont pas définis pour les bootstraps sur les séries " "temporelles" msgid "bootstrap output object or 't0' required" msgstr "objet résultat d'un bootstrap ou 't0' requis" msgid "unable to calculate 'var.t0'" msgstr "impossible de calculer 'var.t0'" msgid "extreme order statistics used as endpoints" msgstr "statistiques d'ordre extrême utilisées comme points finaux" msgid "variance required for studentized intervals" msgstr "variance requise pour les intervalles de confiance studentisés" msgid "estimated adjustment 'w' is infinite" msgstr "l'ajustement de 'w' est infini" msgid "estimated adjustment 'a' is NA" msgstr "l'ajustement de 'a' estimé est NA" msgid "only first element of 'index' used in 'abc.ci'" msgstr "seul le premier élément de 'index' est utilisé dans 'abc.ci'" msgid "missing values not allowed in 'data'" msgstr "valeurs manquantes non autorisées dans 'data'" msgid "unknown value of 'sim'" msgstr "valeur inconnue de 'sim'" msgid "'data' must be a matrix with at least 2 columns" msgstr "'data' doit être une matrice contenant au moins 2 colonnes" msgid "'index' must contain 2 elements" msgstr "'index' doit contenir 2 éléments" msgid "only first 2 elements of 'index' used" msgstr "seuls les deux premiers éléments d''index' sont utilisés" msgid "indices are incompatible with 'ncol(data)'" msgstr "les indices sont incompatibles avec 'ncol(data)'" msgid "sim = \"weird\" cannot be used with a \"coxph\" object" msgstr "sim=\"weird\" ne peut être utilisé avec un object \"coxph\"" msgid "only columns %s and %s of 'data' used" msgstr "seule les colonnes %s et %s de 'data' sont utilisées" msgid "no coefficients in Cox model -- model ignored" msgstr "pas de coefficients dans le modèle Cox -- modèle ignoré" msgid "'F.surv' is required but missing" msgstr "'F.surv' est requis mais manquant" msgid "'G.surv' is required but missing" msgstr "'G.surv' est requis mais manquant" msgid "'strata' of wrong length" msgstr "'strata' de mauvaise longueur" msgid "influence values cannot be found from a parametric bootstrap" msgstr "" "les valeurs d'influence ne peuvent être trouvées à partir d'un bootstrap " "paramétrique" msgid "neither 'data' nor bootstrap object specified" msgstr "pas de 'data' ou d'objet bootstrap spécifié" msgid "neither 'statistic' nor bootstrap object specified" msgstr "pas de 'statistic' ou d'objet bootstrap spécifié" msgid "'stype' must be \"w\" for type=\"inf\"" msgstr "'stype' doit être \"w\" pour type=\"inf\"" msgid "input 't' ignored; type=\"inf\"" msgstr "entrée 't' ignorée ; type=\"inf\"" msgid "bootstrap object needed for type=\"reg\"" msgstr "objet 'bootstrap' requis pour type=\"reg\"" msgid "input 't' ignored; type=\"jack\"" msgstr "entrée 't' ignorée ; type=\"jack\"" msgid "input 't' ignored; type=\"pos\"" msgstr "entrée 't' ignorée ; type=\"pos\"" msgid "input 't0' ignored: neither 't' nor 'L' supplied" msgstr "entrée 't0' ignorée : ni 't', ni 'L' n'est fourni" msgid "bootstrap output matrix missing" msgstr "matrice manquante dans la sortie bootstrap" msgid "use 'boot.ci' for scalar parameters" msgstr "utilisez 'boot.ci' pour des paramètres scalaires" msgid "unable to achieve requested overall error 
rate" msgstr "impossible d'atteindre le taux global d'erreur spécifié" msgid "unable to find multiplier for %f" msgstr "impossible de trouver un multiplicateur pour %f" msgid "'theta' or 'lambda' required" msgstr "'theta' ou 'lambda' requis" msgid "0 elements not allowed in 'q'" msgstr "0 éléments non permis pour 'q'" msgid "bootstrap replicates must be supplied" msgstr "les réplications de bootstrap doivent être fournies" msgid "either 'boot.out' or 'w' must be specified." msgstr "soit 'boot.out', soit 'w' doit être spécifié" msgid "only first column of 't' used" msgstr "seule la première colonne de 't' est utilisée" msgid "invalid value of 'sim' supplied" msgstr "valeur incorrecte spécifiée pour 'sim'" msgid "'R' and 'theta' have incompatible lengths" msgstr "'R' et 'theta' ont des longueurs non conformes" msgid "R[1L] must be positive for frequency smoothing" msgstr "R[1L] doit être positif pour un lissage des fréquences" msgid "'R' and 'alpha' have incompatible lengths" msgstr "'R' et 'alpha' ont des longueurs non conformes" msgid "extreme values used for quantiles" msgstr "valeurs extrêmes utilisées pour les quantiles" msgid "'theta' must be supplied if R[1L] = 0" msgstr "'theta' doit être fourni si R[1L] = 0" msgid "'alpha' ignored; R[1L] = 0" msgstr "'alpha' ignoré ; R[1L] = 0" msgid "control methods undefined when 'boot.out' has weights" msgstr "méthodes de contrôle non définies lorsque 'boot.out' est pondéré" msgid "number of columns of 'A' (%d) not equal to length of 'u' (%d)" msgstr "" "le nombre de colonnes de 'A' (%d) n'est pas égal à la longueur de 'u' (%d)" msgid "either 'A' and 'u' or 'K.adj' and 'K2' must be supplied" msgstr "soit 'A' et 'u', soit 'K.adj' et 'K2' doivent être fournis" msgid "this type not implemented for Poisson" msgstr "ce type n'est pas implémenté pour 'Poisson'" msgid "this type not implemented for Binary" msgstr "ce type n'est pas implémenté pour 'Binary'" msgid "one of 't' or 't0' required" msgstr "soit 't', soit 't0' est requis" msgid "function 'u' missing" msgstr "fonction 'u' manquante" msgid "'u' must be a function" msgstr "'u' doit être une fonction" msgid "unable to find range" msgstr "impossible de trouver l'étendue des valeurs" msgid "'R' must be positive" msgstr "'R' doit être positif" msgid "invalid value of 'l'" msgstr "valeur de 'l' incorrecte" msgid "unrecognized value of 'sim'" msgstr "valeur de 'sim' non reconnue" msgid "multivariate time series not allowed" msgstr "séries temporelles multivariées non admises" msgid "likelihood never exceeds %f" msgstr "la vraissemblance n'a jamais excédé %f" msgid "likelihood exceeds %f at only one point" msgstr "la vraissemblance excède %f a seulement un point" #~ msgid "only columns" #~ msgstr "seulement des colonnes" #~ msgid "and" #~ msgstr "et" #~ msgid "of data used" #~ msgstr "des données utilisées" #~ msgid "number of columns of A (" #~ msgstr "le nombre de colonnes de A (" #~ msgid "at only one point" #~ msgstr "à seulement un point" #~ msgid "invalid proportions input" #~ msgstr "proportions d'entrée incorrectes" #~ msgid "irregular time series not allowed" #~ msgstr "séries temporelles irrégulières non admises" boot/po/R-ko.po0000644000076600000240000002065312117521045013043 0ustar00ripleystaff# Korean translation for R boot package # Recommended/boot/po/R-ko.po # Maintainer: Brian Ripley # Copyright (C) 1995-2013 The R Core Team # This file is distributed under the same license as the R boot package. # R Development Translation Team - Korean # Chel Hee Lee , 2013. # Chel Hee Lee , 2013. 
# msgid "" msgstr "" "Project-Id-Version: boot 1.3-6\n" "POT-Creation-Date: 2012-10-11 15:21\n" "PO-Revision-Date: 2013-03-11 13:41-0600\n" "Last-Translator: Chel Hee Lee \n" "Language-Team: R Development Translation Teams (Korean) \n" "Language: ko\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "Plural-Forms: nplurals=1; plural=0;\n" "X-Poedit-Language: Korean\n" "X-Poedit-Country: KOREA, REPUBLIC OF\n" "X-Poedit-SourceCharset: utf-8\n" msgid "" "'simple=TRUE' is only valid for 'sim=\"ordinary\", stype=\"i\", n=0', so " "ignored" msgstr "" "'simple=TRUE'ì€ 'sim=\"ordinary\", stype=\"i\", n=0'ì¸ ê²½ìš°ì—ë§Œ 유효하므로" "무시ë˜ì—ˆìŠµë‹ˆë‹¤" msgid "no data in call to 'boot'" msgstr "'boot'ì— í˜¸ì¶œì¤‘ì¸ ë°ì´í„°ê°€ 없습니다" msgid "negative value of 'm' supplied" msgstr "'m'ì— ìŒìˆ˜ê°€ 제공ë˜ì—ˆìŠµë‹ˆë‹¤" msgid "length of 'm' incompatible with 'strata'" msgstr "'m'ì˜ ê¸¸ì´ê°€ 'strata'와 부합하지 않습니다" msgid "dimensions of 'R' and 'weights' do not match" msgstr "'R'ê³¼ 'weights'ì˜ dimensionì´ ì¼ì¹˜í•˜ì§€ 않습니다" msgid "arguments are not all the same type of \"boot\" object" msgstr "" msgid "index array not defined for model-based resampling" msgstr "model-based resamplingì— ì •ì˜ëœ index arrayê°€ 아닙니다" msgid "boot.array not implemented for this object" msgstr "ì´ ê°ì²´ì— êµ¬í˜„ëœ boot.arrayê°€ 아닙니다" msgid "array cannot be found for parametric bootstrap" msgstr "parameteric bootstrapì„ ìœ„í•œ ë°°ì—´ì„ ì°¾ì„ ìˆ˜ 없습니다" msgid "%s distribution not supported: using normal instead" msgstr "%s ë¶„í¬ëŠ” ì§€ì›ë˜ì§€ 않으므로 정규분í¬ê°€ 대신 사용ë©ë‹ˆë‹¤" msgid "only first element of 'index' used" msgstr "'index'ì˜ ì²«ë²ˆì§¸ ìš”ì†Œë§Œì„ ì‚¬ìš©í–ˆìŠµë‹ˆë‹¤" msgid "'K' outside allowable range" msgstr "'K'는 허용하는 ë²”ìœ„ì™¸ì— ìžˆìŠµë‹ˆë‹¤" msgid "'K' has been set to %f" msgstr "" msgid "'t' and 't0' must be supplied together" msgstr "'t'와 't0'는 반드시 함께 제공ë˜ì–´ì ¸ì•¼ 합니다" msgid "index out of bounds; minimum index only used." 
msgstr "" msgid "'t' must of length %d" msgstr "'t'ì˜ ê¸¸ì´ëŠ” 반드시 %dì´ì–´ì•¼ 합니다" msgid "bootstrap variances needed for studentized intervals" msgstr "studentized intervalsì— í•„ìš”í•œ boostrap variances입니다" msgid "BCa intervals not defined for time series bootstraps" msgstr "time series bootstrapsì— ì •ì˜ëœ BCa intervalsê°€ 아닙니다" msgid "bootstrap output object or 't0' required" msgstr "bootstrap로부터 나온 ê°ì²´ ë˜ëŠ” 't0'ê°€ 필요합니다" msgid "unable to calculate 'var.t0'" msgstr "'var.t0'를 계산할 수 없습니다" msgid "extreme order statistics used as endpoints" msgstr "endpoints 처럼 ì‚¬ìš©ëœ extreme order statistics입니다" msgid "variance required for studentized intervals" msgstr "studentized intervalsì— ìš”êµ¬ë˜ì–´ì§€ëŠ” variance입니다" msgid "estimated adjustment 'w' is infinite" msgstr "ì¶”ì •ëœ adjustment 'w'ê°€ ë¬´í•œê°’ì„ ê°€ì§‘ë‹ˆë‹¤" msgid "estimated adjustment 'a' is NA" msgstr "ì¶”ì •ëœ adjustment 'a'ê°€ NA입니다" msgid "only first element of 'index' used in 'abc.ci'" msgstr "'index'ì˜ ì²«ë²ˆì§¸ ìš”ì†Œë§Œì´ 'abc.ci'ì— ì‚¬ìš©ë˜ì—ˆìŠµë‹ˆë‹¤" msgid "missing values not allowed in 'data'" msgstr "'data'ì— í—ˆìš©ë˜ì§€ 않는 ê²°ì¸¡ì¹˜ë“¤ì´ ìžˆìŠµë‹ˆë‹¤" msgid "unknown value of 'sim'" msgstr "알 수 없는 'sim'ì˜ ê°’ìž…ë‹ˆë‹¤" msgid "'data' must be a matrix with at least 2 columns" msgstr "'data'는 반드시 ì ì–´ë„ 2ê°œì˜ ì—´ì„ ê°€ì§€ëŠ” 행렬ì´ì–´ì•¼ 합니다" msgid "'index' must contain 2 elements" msgstr "'index'는 반드시 2ê°œì˜ ìš”ì†Œë“¤ì„ í¬í•¨í•´ì•¼ 합니다" msgid "only first 2 elements of 'index' used" msgstr "'index'ì˜ ì²«ë²ˆì§¸ 2ê°œ ìš”ì†Œë“¤ë§Œì´ ì‚¬ìš©ë˜ì—ˆìŠµë‹ˆë‹¤" msgid "indices are incompatible with 'ncol(data)'" msgstr "" msgid "sim = \"weird\" cannot be used with a \"coxph\" object" msgstr "sim ì¸ìžì— \"weird\" ê°’ì€ \"coxph\" ê°ì²´ì™€ 함께 ì‚¬ìš©ë  ìˆ˜ 없습니다" msgid "only columns %s and %s of 'data' used" msgstr "'data'ì˜ %s와 %s ì—´ë“¤ë§Œì´ ì‚¬ìš©ë˜ì—ˆìŠµë‹ˆë‹¤" msgid "no coefficients in Cox model -- model ignored" msgstr "Cox 모ë¸ì— ê³„ìˆ˜ë“¤ì´ ì—†ìœ¼ë¯€ë¡œ 모ë¸ì´ 무시ë˜ì—ˆìŠµë‹ˆë‹¤" msgid "'F.surv' is required but missing" msgstr "'F.surv'ê°€ í•„ìš”í•œë° ëˆ„ë½ë˜ì–´ 있습니다" msgid "'G.surv' is required but missing" msgstr "'G.surv'ê°€ í•„ìš”í•œë° ëˆ„ë½ë˜ì–´ 있습니다" msgid "'strata' of wrong length" msgstr "'strata'ì˜ ê¸¸ì´ê°€ 잘 못ë˜ì—ˆìŠµë‹ˆë‹¤" msgid "influence values cannot be found from a parametric bootstrap" msgstr "influence valuesë“¤ì„ parametric bootstrap으로부터 ì°¾ì„ ìˆ˜ 없습니다" msgid "neither 'data' nor bootstrap object specified" msgstr "ì§€ì •ëœ 'data'ë„ ì•„ë‹ˆê³  bootstrap ê°ì²´ë„ 아닙니다" msgid "neither 'statistic' nor bootstrap object specified" msgstr "ì§€ì •ëœ 'statistic'ë„ ì•„ë‹ˆê³  bootstrap ê°ì²´ë„ 아닙니다" msgid "'stype' must be \"w\" for type=\"inf\"" msgstr "typeì´ \"inf\"경우ì—는 'stype'ì´ ë°˜ë“œì‹œ \"w\"ì´ì–´ì•¼ 합니다" msgid "input 't' ignored; type=\"inf\"" msgstr "" msgid "bootstrap object needed for type=\"reg\"" msgstr "typeì´ \"reg\"ì¸ ê²½ìš°ì— í•„ìš”í•œ bootstrap ê°ì²´ìž…니다" msgid "input 't' ignored; type=\"jack\"" msgstr "" msgid "input 't' ignored; type=\"pos\"" msgstr "" msgid "input 't0' ignored: neither 't' nor 'L' supplied" msgstr "" msgid "bootstrap output matrix missing" msgstr "" msgid "use 'boot.ci' for scalar parameters" msgstr "ìŠ¤ì¹¼ë¼ íŒŒë¼ë¯¸í„°ì¼ë•Œ 'boot.ci'를 사용하세요" msgid "unable to achieve requested overall error rate" msgstr "" msgid "unable to find multiplier for %f" msgstr "%fì— ëŒ€í•œ multiplier를 ì°¾ì„ ìˆ˜ 없습니다" msgid "'theta' or 'lambda' required" msgstr "'theta' ë˜ëŠ” 'lambda'ê°€ 필요합니다" msgid "0 elements not allowed in 'q'" msgstr "" msgid "bootstrap replicates must be supplied" msgstr "bootstrap 
replicates는 반드시 주어져야 합니다" msgid "either 'boot.out' or 'w' must be specified." msgstr "'boot.out' ë˜ëŠ” 'w' 중 하나는 반드시 지정ë˜ì–´ì•¼ 합니다" msgid "only first column of 't' used" msgstr "'t'ì˜ ì²«ë²ˆì§¸ ì—´ë§Œ 사용ë˜ì—ˆìŠµë‹ˆë‹¤" msgid "invalid value of 'sim' supplied" msgstr "유효하지 ì•Šì€ 'sim'ê°’ì´ ì œê³µë˜ì—ˆìŠµë‹ˆë‹¤" msgid "'R' and 'theta' have incompatible lengths" msgstr "" msgid "R[1L] must be positive for frequency smoothing" msgstr "frequency smoothingì„ ìœ„í•´ì„œëŠ” 반드시 R[1L]ê°€ 양수ì´ì–´ì•¼ 합니다" msgid "'R' and 'alpha' have incompatible lengths" msgstr "" msgid "extreme values used for quantiles" msgstr "quantilesì— ì‚¬ìš©ëœ extreme values들입니다" msgid "'theta' must be supplied if R[1L] = 0" msgstr "만약 R[1L] = 0ì´ë¼ë©´ 'theta'는 반드시 주어져야 합니다" msgid "'alpha' ignored; R[1L] = 0" msgstr "" msgid "control methods undefined when 'boot.out' has weights" msgstr "" msgid "number of columns of 'A' (%d) not equal to length of 'u' (%d)" msgstr "'A'ê°€ 가지는 ì—´ì˜ ê°œìˆ˜ (%d)는 'u'ê°€ 가지는 ê¸¸ì´ (%d)와 같지 않습니다" msgid "either 'A' and 'u' or 'K.adj' and 'K2' must be supplied" msgstr "" msgid "this type not implemented for Poisson" msgstr "" msgid "this type not implemented for Binary" msgstr "" msgid "one of 't' or 't0' required" msgstr "'t' ë˜ëŠ” 't0' 중 하나가 필요합니다" msgid "function 'u' missing" msgstr "함수 'u'ê°€ 빠져있습니다" msgid "'u' must be a function" msgstr "'u'는 반드시 함수ì´ì–´ì•¼ 합니다" msgid "unable to find range" msgstr "범위를 구할 수 없습니다" msgid "'R' must be positive" msgstr "'R'ì€ ë°˜ë“œì‹œ 양수ì´ì–´ì•¼ 합니다" msgid "invalid value of 'l'" msgstr "유효하지 ì•Šì€ 'l'ì˜ ê°’ìž…ë‹ˆë‹¤" msgid "unrecognized value of 'sim'" msgstr "ì¸ì‹í•  수 없는 'sim'ì˜ ê°’ìž…ë‹ˆë‹¤" msgid "multivariate time series not allowed" msgstr "허용ë˜ì§€ ì•Šì€ ë‹¤ë³€ëŸ‰ 시계열입니다" msgid "likelihood never exceeds %f" msgstr "" msgid "likelihood exceeds %f at only one point" msgstr "" boot/po/R-pl.po0000644000076600000240000003776512035552772013073 0ustar00ripleystaffmsgid "" msgstr "" "Project-Id-Version: boot 1.3-5\n" "Report-Msgid-Bugs-To: bugs@r-project.org\n" "POT-Creation-Date: 2012-10-11 15:21\n" "PO-Revision-Date: \n" "Last-Translator: Åukasz Daniel \n" "Language-Team: Åukasz Daniel \n" "Language: pl_PL\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "na-Revision-Date: 2012-05-29 07:55+0100\n" "Plural-Forms: nplurals=3; plural=(n==1 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 " "|| n%100>=20) ? 
1 : 2);\n" "X-Poedit-SourceCharset: iso-8859-1\n" "X-Generator: Poedit 1.5.3\n" # boot/R/bootfuns.q: 111 # warning("'simple=TRUE' is only valid for 'sim=\"ordinary\", stype=\"i\", n=0, so ignored") #, fuzzy msgid "" "'simple=TRUE' is only valid for 'sim=\"ordinary\", stype=\"i\", n=0', so " "ignored" msgstr "" "'simple=TRUE' jest poprawne jedynie dla 'sim=\"ordinary\", stype=\"i\", n=0, " "tak wiÄ™c zignorowano" # boot/R/bootfuns.q: 118 # stop("no data in call to 'boot'") msgid "no data in call to 'boot'" msgstr "brak danych w wywoÅ‚aniu 'boot'" # boot/R/bootfuns.q: 126 # stop("negative value of 'm' supplied") msgid "negative value of 'm' supplied" msgstr "dostarczono ujemnÄ… wartość 'm'" # boot/R/bootfuns.q: 128 # stop("length of 'm' incompatible with 'strata'") msgid "length of 'm' incompatible with 'strata'" msgstr "dÅ‚ugość 'm' jest niekompatybilna z 'strata'" # boot/R/bootfuns.q: 131 # stop("dimensions of 'R' and 'weights' do not match") msgid "dimensions of 'R' and 'weights' do not match" msgstr "wymiary 'R' oraz 'weights' nie zgadzajÄ… siÄ™" # boot/R/bootfuns.q: 249 # stop("arguments are not all the same type of \"boot\" object") msgid "arguments are not all the same type of \"boot\" object" msgstr "argumenty nie sÄ… wszystkie tego samego typu obiektu 'boot'" # boot/R/bootfuns.q: 275 # stop("index array not defined for model-based resampling") msgid "index array not defined for model-based resampling" msgstr "" "tablica indeksów nie jest zdefiniowana dla próbkowania opartego na modelu" # boot/R/bootfuns.q: 299 # stop("boot.array not implemented for this object") msgid "boot.array not implemented for this object" msgstr "'boot.array' nie zostaÅ‚ zaimplementowany dla tego obiektu" # boot/R/bootfuns.q: 304 # stop("array cannot be found for parametric bootstrap") msgid "array cannot be found for parametric bootstrap" msgstr "nie można znaleźć tablicy dla parametrycznego bootstrapu" # boot/R/bootfuns.q: 359 # warning(gettextf("%s distribution not supported using normal instead", sQuote(qdist)), domain = NA) #, fuzzy msgid "%s distribution not supported: using normal instead" msgstr "rozkÅ‚ad %s nie jest wspierany, w zamian używanie rozkÅ‚adu normalnego" # boot/R/bootfuns.q: 699 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 1636 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 1648 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 1658 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 1666 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 1821 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 2147 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 2203 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 2244 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 2272 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 2401 # warning("only first element of 'index' used") # boot/R/bootfuns.q: 2422 # warning("only first element of 'index' used") msgid "only first element of 'index' used" msgstr "tylko pierwszy element 'index' zostaÅ‚ użyty" # boot/R/bootfuns.q: 822 # stop("'K' outside allowable range") msgid "'K' outside allowable range" msgstr "'K' poza dozwolonym zakresem" # boot/R/bootfuns.q: 829 # warning(gettextf("'K' has been set to %f", K), domain = NA) msgid "'K' has been set to %f" msgstr "'K' zostaÅ‚ ustawiony na %f" # boot/R/bootfuns.q: 876 # stop("'t' and 't0' must be supplied together") msgid "'t' 
and 't0' must be supplied together" msgstr "'t' oraz 't0' muszÄ… zostać dostarczone razem" # boot/R/bootfuns.q: 886 # warning("index out of bounds; minimum index only used.") msgid "index out of bounds; minimum index only used." msgstr "indeks poza zakresem; użyto minimalnego indeksu." # boot/R/bootfuns.q: 904 # stop(gettextf("'t' must of length %d", boot.out$R), domain = NA) msgid "'t' must of length %d" msgstr "'t' musi być dÅ‚ugoÅ›ci %d" # boot/R/bootfuns.q: 928 # warning("bootstrap variances needed for studentized intervals") msgid "bootstrap variances needed for studentized intervals" msgstr "potrzebne sÄ… bootstrapowe wariancje dla studentyzowanych przedziałów" # boot/R/bootfuns.q: 937 # warning("BCa intervals not defined for time series bootstraps.") #, fuzzy msgid "BCa intervals not defined for time series bootstraps" msgstr "" "przedziaÅ‚y BC nie sÄ… zdefiniowane dla bootstrapowych szeregów czasowych" # boot/R/bootfuns.q: 1079 # stop("bootstrap output object or 't0' required") msgid "bootstrap output object or 't0' required" msgstr "wymagany jest obiekt wyjÅ›ciowy bootstrapu albo 't0'" # boot/R/bootfuns.q: 1090 # stop("unable to calculate 'var.t0'") msgid "unable to calculate 'var.t0'" msgstr "nie można wyliczyć 'var.t0'" # boot/R/bootfuns.q: 1117 # warning("extreme order statistics used as endpoints") msgid "extreme order statistics used as endpoints" msgstr "ekstremalnie uporzÄ…dkowana statystyka użyta jako punkty koÅ„cowe" # boot/R/bootfuns.q: 1153 # warning("variance required for Studentized CI's") #, fuzzy msgid "variance required for studentized intervals" msgstr "wariancja jest wymagana dla studentyzowanych przedziałów ufnoÅ›ci" # boot/R/bootfuns.q: 1192 # stop("estimated adjustment 'w' is infinite") msgid "estimated adjustment 'w' is infinite" msgstr "oszacowana korekta 'w' wynosi nieskoÅ„czoność" # boot/R/bootfuns.q: 1198 # stop("estimated adjustment 'a' is NA") msgid "estimated adjustment 'a' is NA" msgstr "oszacowana korekta 'a' wynosi 'NA'" # boot/R/bootfuns.q: 1216 # warning("only first element of 'index' used in 'abc.ci'") msgid "only first element of 'index' used in 'abc.ci'" msgstr "tylko pierwszy element 'index' zostaÅ‚ użyty w 'abc.ci'" # boot/R/bootfuns.q: 1281 # stop("missing values not allowed in 'data'") msgid "missing values not allowed in 'data'" msgstr "brakujÄ…ce wartoÅ›ci nie sÄ… dozwolone w 'data'" # boot/R/bootfuns.q: 1283 # stop("unknown value of 'sim'") msgid "unknown value of 'sim'" msgstr "nieznana wartość 'sim'" # boot/R/bootfuns.q: 1297 # stop("data must be a matrix with at least 2 columns") # boot/R/bootfuns.q: 1299 # stop("data must be a matrix with at least 2 columns") #, fuzzy msgid "'data' must be a matrix with at least 2 columns" msgstr "dane muszÄ… być macierzÄ… o co najmniej 2 kolumnach" # boot/R/bootfuns.q: 1301 # stop("index must contain 2 elements") #, fuzzy msgid "'index' must contain 2 elements" msgstr "indeks musi zawierać 2 elementy" # boot/R/bootfuns.q: 1303 # warning("only first 2 elements of 'index' used") msgid "only first 2 elements of 'index' used" msgstr "tylko pierwsze 2 elementy 'index' zostaÅ‚y użyte" # boot/R/bootfuns.q: 1307 # stop("indices are incompatible with 'ncol(data)'") msgid "indices are incompatible with 'ncol(data)'" msgstr "indeksy sÄ… niezgodne z 'ncol(data)'" # boot/R/bootfuns.q: 1310 # stop("sim = \"weird\" cannot be used with a 'coxph' object") #, fuzzy msgid "sim = \"weird\" cannot be used with a \"coxph\" object" msgstr "'sim=\"weird\"' nie może być użyte z obiektem 'coxph'" # boot/R/bootfuns.q: 1312 # 
warning(gettextf("only columns %s and %s of data used", # index[1L], index[2L]), domain = NA) #, fuzzy msgid "only columns %s and %s of 'data' used" msgstr "tylko kolumny %s oraz %s danych zostaÅ‚y użyte" # boot/R/bootfuns.q: 1318 # warning("no coefficients in Cox model -- model ignored") msgid "no coefficients in Cox model -- model ignored" msgstr "brak współczynników w modelu Coxa -- model zostaÅ‚ zignorowany" # boot/R/bootfuns.q: 1322 # stop("'F.surv' is required but missing") msgid "'F.surv' is required but missing" msgstr "'F.surv' jest wymagany, ale jest nieobecny" # boot/R/bootfuns.q: 1324 # stop("'G.surv' is required but missing") msgid "'G.surv' is required but missing" msgstr "'G.surv' jest wymagany, ale jest nieobecny" # boot/R/bootfuns.q: 1325 # stop("'strata' of wrong length") msgid "'strata' of wrong length" msgstr "'strata' o niepoprawnej dÅ‚ugoÅ›ci" # boot/R/bootfuns.q: 1605 # stop("influence values cannot be found from a parametric bootstrap") msgid "influence values cannot be found from a parametric bootstrap" msgstr "" "wartoÅ›ci wpÅ‚ywu nie mogÄ… zostać znalezione z parametrycznego bootstrapu" # boot/R/bootfuns.q: 1617 # stop("no data or bootstrap object specified") #, fuzzy msgid "neither 'data' nor bootstrap object specified" msgstr "brak danych lub okreÅ›lonego obiektu bootstrapu" # boot/R/bootfuns.q: 1619 # stop("no statistic or bootstrap object specified") #, fuzzy msgid "neither 'statistic' nor bootstrap object specified" msgstr "brak statystyki lub okreÅ›lonego obiektu bootstrapu" # boot/R/bootfuns.q: 1634 # stop("'stype' must be \"w\" for type=\"inf\"") msgid "'stype' must be \"w\" for type=\"inf\"" msgstr "'stype' musi być \"w\" dla type=\"inf\"" # boot/R/bootfuns.q: 1640 # warning("input 't' ignored; type=\"inf\"") msgid "input 't' ignored; type=\"inf\"" msgstr "wejÅ›cie 't' zostaÅ‚o zignornowane; type=\"inf\"" # boot/R/bootfuns.q: 1645 # stop("bootstrap object needed for type=\"reg\"") msgid "bootstrap object needed for type=\"reg\"" msgstr "obiekt bootstrapu jest potrzebny dla type=\"reg\"" # boot/R/bootfuns.q: 1656 # warning("input 't' ignored; type=\"jack\"") msgid "input 't' ignored; type=\"jack\"" msgstr "wejÅ›cie 't' zostaÅ‚o zignornowane; type=\"jack\"" # boot/R/bootfuns.q: 1664 # warning("input 't' ignored; type=\"pos\"") msgid "input 't' ignored; type=\"pos\"" msgstr "wejÅ›cie 't' zostaÅ‚o zignorowane; type=\"pos\"" # boot/R/bootfuns.q: 1829 # warning("input 't0' ignored: neither 't' nor 'L' supplied") msgid "input 't0' ignored: neither 't' nor 'L' supplied" msgstr "wejÅ›cie 't0' zostaÅ‚o zignornowane: nie dostarczono ani 't' ani 'L'" # boot/R/bootfuns.q: 1871 # stop("bootstrap output matrix missing") msgid "bootstrap output matrix missing" msgstr "brakuje wyjÅ›ciowej macierzy bootstrapu" # boot/R/bootfuns.q: 1873 # stop("use 'boot.ci' for scalar parameters") msgid "use 'boot.ci' for scalar parameters" msgstr "użyj 'boot.ci' dla skalarnych parametrów" # boot/R/bootfuns.q: 1885 # warning("unable to achieve requested overall error rate.") #, fuzzy msgid "unable to achieve requested overall error rate" msgstr "nie można uzyskać zażądanego ogólnego wskaźnika błędu." 
# boot/R/bootfuns.q: 2073 # stop(gettextf("unable to find multiplier for %f", theta[i]), # domain = NA) msgid "unable to find multiplier for %f" msgstr "nie można znaleźć mnożnika dla %f" # boot/R/bootfuns.q: 2078 # stop("'theta' or 'lambda' required") msgid "'theta' or 'lambda' required" msgstr "'theta' lub 'lambda' sÄ… wymagane" # boot/R/bootfuns.q: 2106 # stop("0 elements not allowed in 'q'") msgid "0 elements not allowed in 'q'" msgstr "0 elementów nie jest dozwolone w 'q'" # boot/R/bootfuns.q: 2141 # stop("bootstrap replicates must be supplied") # boot/R/bootfuns.q: 2196 # stop("bootstrap replicates must be supplied") # boot/R/bootfuns.q: 2238 # stop("bootstrap replicates must be supplied") msgid "bootstrap replicates must be supplied" msgstr "bootstrapowane repliki muszÄ… zostać dostarczone" # boot/R/bootfuns.q: 2145 # stop("either 'boot.out' or 'w' must be specified.") # boot/R/bootfuns.q: 2201 # stop("either 'boot.out' or 'w' must be specified.") # boot/R/bootfuns.q: 2242 # stop("either 'boot.out' or 'w' must be specified.") msgid "either 'boot.out' or 'w' must be specified." msgstr "jedno z 'boot.out' lub 'w' musi zostać dostarczone." # boot/R/bootfuns.q: 2276 # warning("only first column of 't' used") msgid "only first column of 't' used" msgstr "tylko pierwsza kolumna 't' zostaÅ‚a użyta" # boot/R/bootfuns.q: 2324 # stop("invalid value of 'sim' supplied") msgid "invalid value of 'sim' supplied" msgstr "dostarczono niepoprawnÄ… wartość 'sim'" # boot/R/bootfuns.q: 2326 # stop("'R' and 'theta' have incompatible lengths") msgid "'R' and 'theta' have incompatible lengths" msgstr "'R' oraz 'theta' majÄ… niekompatybilne dÅ‚ugoÅ›ci" # boot/R/bootfuns.q: 2328 # stop("R[1L] must be positive for frequency smoothing") msgid "R[1L] must be positive for frequency smoothing" msgstr "R[1L] musi być dodatnia dla wygÅ‚adzania czÄ™stotliwoÅ›ci" # boot/R/bootfuns.q: 2334 # stop("'R' and 'alpha' have incompatible lengths") msgid "'R' and 'alpha' have incompatible lengths" msgstr "'R' oraz 'alpha' majÄ… niekompatybilne dÅ‚ugoÅ›ci" # boot/R/bootfuns.q: 2339 # warning("extreme values used for quantiles") msgid "extreme values used for quantiles" msgstr "ekstremalne wartoÅ›ci użyte dla kwantyli" # boot/R/bootfuns.q: 2348 # stop("'theta' must be supplied if R[1L] = 0") msgid "'theta' must be supplied if R[1L] = 0" msgstr "'theta' musi zostać dostarczona jeÅ›li R[1L] = 0" # boot/R/bootfuns.q: 2350 # warning("'alpha' ignored; R[1L] = 0") msgid "'alpha' ignored; R[1L] = 0" msgstr "'alpha' zostaÅ‚o zignornowane; R[1L]=0" # boot/R/bootfuns.q: 2389 # stop("control methods undefined when 'boot.out' has weights") msgid "control methods undefined when 'boot.out' has weights" msgstr "metody kontroli nie sÄ… zdefiniowane gdy 'boot.out' posiada wagi" # boot/R/bootfuns.q: 2804 # stop(gettextf("number of columns of A (%d) not equal to length of u (%d)", # d, length(u)), domain = NA) #, fuzzy msgid "number of columns of 'A' (%d) not equal to length of 'u' (%d)" msgstr "liczba kolumn 'A' (%d) nie równa siÄ™ dÅ‚ugoÅ›ci 'u' (%d)" # boot/R/bootfuns.q: 2808 # stop("either 'A' and 'u' or 'K.adj' and 'K2' must be supplied") msgid "either 'A' and 'u' or 'K.adj' and 'K2' must be supplied" msgstr "albo 'A' oraz 'u', albo 'K.adj' oraz 'K2' muszÄ… zostać dostarczone" # boot/R/bootfuns.q: 2920 # stop("this type not implemented for Poisson") msgid "this type not implemented for Poisson" msgstr "ten typ nie jest zaimplementowany dla rozkÅ‚adu Poisson'a" # boot/R/bootfuns.q: 2954 # stop("this type not implemented for Binary") msgid 
"this type not implemented for Binary" msgstr "ten typ nie jest zaimplementowany dla rozkÅ‚adu Bernoulliego" # boot/R/bootfuns.q: 2982 # stop("one of 't' or 't0' required") msgid "one of 't' or 't0' required" msgstr "jeden z 't' lub 't0' jest wymagany" # boot/R/bootfuns.q: 2996 # stop("function 'u' missing") msgid "function 'u' missing" msgstr "brakuje funkcji 'u'" # boot/R/bootfuns.q: 2997 # stop("'u' must be a function") msgid "'u' must be a function" msgstr "'u' musi być funkcjÄ…" # boot/R/bootfuns.q: 3017 # stop("unable to find range") # boot/R/bootfuns.q: 3063 # stop("unable to find range") # boot/R/bootfuns.q: 3136 # stop("unable to find range") # boot/R/bootfuns.q: 3178 # stop("unable to find range") msgid "unable to find range" msgstr "nie można znaleźć zakresu" # boot/R/bootfuns.q: 3388 # stop("'R' must be positive") msgid "'R' must be positive" msgstr "'R' musi być dodatnie" # boot/R/bootfuns.q: 3400 # stop("invalid value of 'l'") msgid "invalid value of 'l'" msgstr "niepoprawna wartość 'l'" # boot/R/bootfuns.q: 3430 # stop("unrecognized value of 'sim'") msgid "unrecognized value of 'sim'" msgstr "nierozpoznana wartość 'sim'" # boot/R/bootfuns.q: 3464 # stop("multivariate time series not allowed") msgid "multivariate time series not allowed" msgstr "wielowymiarowe szeregi czasowe nie sÄ… dozwolone" # boot/R/bootpracs.q: 70 # stop(gettextf("likelihood never exceeds %f", lim), # domain = NA) msgid "likelihood never exceeds %f" msgstr "funkcja wiarygodnoÅ›ci nigdy nie przekracza %f" # boot/R/bootpracs.q: 74 # stop(gettextf("likelihood exceeds %f at only one point", lim), # domain = NA) msgid "likelihood exceeds %f at only one point" msgstr "funkcja wiarygodnoÅ›ci przekracza %f tylko w jednym punkcie" boot/po/R-ru.po0000644000076600000240000002113412122137701013051 0ustar00ripleystaff# Russian translations for R # òÕÓÓËÉÊ ÐÅÒÅ×ÏÄ ÄÌÑ R # # Copyright (C) 2007 The R Foundation # This file is distributed under the same license as the R package. # Alexey Shipunov 2009 # msgid "" msgstr "" "Project-Id-Version: R 2.10.0\n" "Report-Msgid-Bugs-To: bugs@r-project.org\n" "POT-Creation-Date: 2012-10-11 15:21\n" "PO-Revision-Date: 2013-03-19 14:42-0600\n" "Last-Translator: Alexey Shipunov \n" "Language-Team: Russian\n" "Language: \n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=KOI8-R\n" "Content-Transfer-Encoding: 8bit\n" "X-Poedit-Language: Russian\n" "Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 
1 : 2);\n" msgid "'simple=TRUE' is only valid for 'sim=\"ordinary\", stype=\"i\", n=0', so ignored" msgstr "'simple=TRUE' ÐÒÁ×ÉÌØÎÏ ÔÏÌØËÏ ÄÌÑ 'sim=\"ordinary\", stype=\"i\", n=0, ÐÏÜÔÏÍÕ ÐÒÏÐÕÓËÁÅÔÓÑ" msgid "no data in call to 'boot'" msgstr "ÎÅÔ ÄÁÎÎÙÈ × ×ÙÚÏ×Å 'boot'" msgid "negative value of 'm' supplied" msgstr "ÕËÁÚÁÎÏ ÏÔÒÉÃÁÔÅÌØÎÏÅ ÚÎÁÞÅÎÉÅ 'm'" msgid "length of 'm' incompatible with 'strata'" msgstr "ÄÌÉÎÁ 'm' ÎÅÓÏ×ÍÅÓÔÉÍÁ ÓÏ 'strata'" msgid "dimensions of 'R' and 'weights' do not match" msgstr "ÉÚÍÅÒÅÎÉÑ 'R' É 'weights' ÎÅ ÓÏÏÔ×ÅÔÓÔ×ÕÀÔ" msgid "arguments are not all the same type of \"boot\" object" msgstr "ÎÅ ×ÓÅ ÁÒÇÕÍÅÎÔÙ ÏÂßÅËÔÁ \"boot\" ÏÄÎÏÇÏ ÔÉÐÁ" msgid "index array not defined for model-based resampling" msgstr "ÄÌÑ ÏÓÎÏ×ÁÎÎÏÇÏ ÎÁ ÍÏÄÅÌÉ ÒÅÓÜÍÐÌÉÎÇÁ ÎÅ ÏÐÒÅÄÅÌÅÎÁ ÍÁÔÒÉÃÁ ÉÎÄÅËÓÏ×" msgid "boot.array not implemented for this object" msgstr "'boot.array' ÄÌÑ ÜÔÏÇÏ ÏÂßÅËÔÁ ÎÅ ÒÁÚÒÁÂÏÔÁÎ" msgid "array cannot be found for parametric bootstrap" msgstr "ÎÅ ÍÏÇÕ ÎÁÊÔÉ ÍÁÔÒÉÃÕ ÄÌÑ ÐÁÒÁÍÅÔÒÉÞÅÓËÏÇÏ ÂÕÔÓÔÒÅÐÁ" msgid "%s distribution not supported: using normal instead" msgstr "ÒÁÓÐÒÅÄÅÌÅÎÉÅ %s ÎÅ ÐÏÄÄÅÒÖÉ×ÁÅÔÓÑ, ÉÓÐÏÌØÚÕÀ ÎÏÒÍÁÌØÎÏÅ" msgid "only first element of 'index' used" msgstr "ÉÓÐÏÌØÚÏ×ÁÎ ÌÉÛØ ÐÅÒ×ÙÊ ÜÌÅÍÅÎÔ ÉÎÄÅËÓÁ" msgid "'K' outside allowable range" msgstr "'K' ×ÎÅ ÄÏÐÕÓÔÉÍÏÇÏ ÐÒÏÍÅÖÕÔËÁ" msgid "'K' has been set to %f" msgstr "'K' ÕÓÔÁÎÏ×ÌÅÎ × %f" msgid "'t' and 't0' must be supplied together" msgstr "'t' É 't0' ÄÏÌÖÎÙ ÂÙÔØ ÕËÁÚÁÎÙ ×ÍÅÓÔÅ" msgid "index out of bounds; minimum index only used." msgstr "ÉÎÄÅËÓ ×ÎÅ ÇÒÁÎÉÃ; ÉÓÐÏÌØÚÏ×ÁÎ ÌÉÛØ ÍÉÎÉÍÁÌØÎÙÊ ÉÎÄÅËÓ." msgid "'t' must of length %d" msgstr "'t' ÄÏÌÖÅÎ ÂÙÔØ ÄÌÉÎÏÊ %d" msgid "bootstrap variances needed for studentized intervals" msgstr "ÂÕÔÓÔÒÅÐ-×ÁÒÉÁÎÓÙ ÎÕÖÎÙ ÄÌÑ ÉÎÔÅÒ×ÁÌÏ× óÔØÀÄÅÎÔ-ÔÉÐÁ" msgid "BCa intervals not defined for time series bootstraps" msgstr "BCa ÉÎÔÅÒ×ÁÌÙ ÎÅ ÏÐÒÅÄÅÌÅÎÙ ÄÌÑ ÂÕÔÓÔÒÅÐÁ ×ÒÅÍÅÎÎÙÈ ÒÑÄÏ×" msgid "bootstrap output object or 't0' required" msgstr "ÔÒÅÂÕÅÔÓÑ ÏÂßÅËÔ ×Ù×ÏÄÁ ÂÕÔÓÔÒÅÐÁ ÌÉÂÏ 't0'" msgid "unable to calculate 'var.t0'" msgstr "ÎÅ ÍÏÇÕ ÐÏÓÞÉÔÁÔØ 'var.t0'" msgid "extreme order statistics used as endpoints" msgstr "'extreme order statistics' ÉÓÐÏÌØÚÏ×ÁÎÁ × ËÏÎÅÞÎÙÈ ÔÏÞËÁÈ" msgid "variance required for studentized intervals" msgstr "ÄÌÑ ÉÎÔÅÒ×ÁÌÏ× óÔØÀÄÅÎÔ-ÔÉÐÁ ÎÕÖÎÁ ×ÁÒÉÁÎÓÁ" msgid "estimated adjustment 'w' is infinite" msgstr "ÐÒÅÄÐÏÌÁÇÁÅÍÁÑ ËÏÒÒÅËÔÉÒÏ×ËÁ 'w' -- infinite" msgid "estimated adjustment 'a' is NA" msgstr "ÐÒÅÄÐÏÌÁÇÁÅÍÁÑ ËÏÒÒÅËÔÉÒÏ×ËÁ 'a' -- ÜÔÏ NA" msgid "only first element of 'index' used in 'abc.ci'" msgstr "ÌÉÛØ ÐÅÒ×ÙÊ ÜÌÅÍÅÎÔ ÉÎÄÅËÓÁ ÉÓÐÏÌØÚÏ×ÁÎ × 'abc.ci'" msgid "missing values not allowed in 'data'" msgstr "ÐÒÏÐÕÝÅÎÎÙÅ ÚÎÁÞÅÎÉÑ × ÄÁÎÎÙÈ ÎÅ ÒÁÚÒÅÛÅÎÙ" msgid "unknown value of 'sim'" msgstr "ÎÅÉÚ×ÅÓÔÎÏÅ ÚÎÁÞÅÎÉÅ 'sim'" msgid "'data' must be a matrix with at least 2 columns" msgstr "ÄÁÎÎÙÅ ÄÏÌÖÎÙ ÂÙÔØ ÍÁÔÒÉÃÅÊ ÐÏ ÍÅÎØÛÅÊ ÍÅÒÅ ÉÚ 2 ËÏÌÏÎÏË" msgid "'index' must contain 2 elements" msgstr "ÉÎÄÅËÓ ÄÏÌÖÅÎ ÓÏÄÅÒÖÁÔØ 2 ÜÌÅÍÅÎÔÁ" msgid "only first 2 elements of 'index' used" msgstr "ÌÉÛØ ÐÅÒ×ÙÅ 2 ÜÌÅÍÅÎÔÁ ÉÎÄÅËÓÁ ÉÓÐÏÌØÚÏ×ÁÎÙ" msgid "indices are incompatible with 'ncol(data)'" msgstr "ÉÎÄÅËÓÙ ÎÅÓÏ×ÍÅÓÔÉÍÙ Ó 'ncol(data)'" msgid "sim = \"weird\" cannot be used with a \"coxph\" object" msgstr "sim=\"weird\" ÎÅ ÍÏÖÅÔ ÂÙÔØ ÉÓÐÏÌØÚÏ×ÁÎ ÄÌÑ ÏÂßÅËÔÁ \"coxph\"" msgid "only columns %s and %s of 'data' used" msgstr "ÉÓÐÏÌØÚÏ×ÁÎÙ ÔÏÌØËÏ ËÏÌÏÎËÉ %s É %s ÄÁÎÎÙÈ" msgid "no coefficients in Cox model -- model ignored" msgstr "× ÍÏÄÅÌÉ 'Cox' ÎÅÔ ËÏÜÆÆÉÃÉÅÎÔÏ× -- ÍÏÄÅÌØ ÐÒÏÐÕÝÅÎÁ" msgid "'F.surv' is 
required but missing" msgstr "'F.surv' ÔÒÅÂÕÅÔÓÑ, ÎÏ ÐÒÏÐÕÝÅÎ" msgid "'G.surv' is required but missing" msgstr "'G.surv' ÔÒÅÂÕÅÔÓÑ, ÎÏ ÐÒÏÐÕÝÅÎ" msgid "'strata' of wrong length" msgstr "'strata' ÎÅÐÒÁ×ÉÌØÎÏÊ ÄÌÉÎÙ" msgid "influence values cannot be found from a parametric bootstrap" msgstr "ÚÎÁÞÅÎÉÑ ×ÌÉÑÎÉÑ ÎÅÌØÚÑ ÎÁÊÔÉ ÐÒÉ ÐÏÍÏÝÉ ÐÁÒÁÍÅÔÒÉÞÅÓËÏÇÏ ÂÕÔÓÔÒÅÐÁ" msgid "neither 'data' nor bootstrap object specified" msgstr "ÎÅÔ ÄÁÎÎÙÈ ÉÌÉ ÎÅ ÕËÁÚÁÎ ÂÕÔÓÔÒÅÐ-ÏÂßÅËÔ" msgid "neither 'statistic' nor bootstrap object specified" msgstr "ÎÅÔ 'statistic' ÉÌÉ ÕËÁÚÁÎ ÂÕÔÓÔÒÅÐ-ÏÂßÅËÔ" msgid "'stype' must be \"w\" for type=\"inf\"" msgstr "'stype' ÄÏÌÖÅÎ ÂÙÔØ \"w\" ÄÌÑ type=\"inf\"" msgid "input 't' ignored; type=\"inf\"" msgstr "××ÏÄ 't' ÐÒÏÐÕÝÅÎ; type=\"inf\"" msgid "bootstrap object needed for type=\"reg\"" msgstr "ÄÌÑ type=\"reg\" ÎÕÖÅÎ ÂÕÔÓÔÒÅÐ-ÏÂßÅËÔ" msgid "input 't' ignored; type=\"jack\"" msgstr "××ÏÄ 't' ÐÒÏÐÕÝÅÎ; type=\"jack\"" msgid "input 't' ignored; type=\"pos\"" msgstr "××ÏÄ 't' ÐÒÏÐÕÝÅÎ; type=\"pos\"" msgid "input 't0' ignored: neither 't' nor 'L' supplied" msgstr "××ÏÄ 't0' ÐÒÏÐÕÝÅÎ: ÎÅ ÕËÁÚÁÎÏ ÎÉ 't', ÎÉ 'L'" msgid "bootstrap output matrix missing" msgstr "ÐÒÏÐÕÝÅÎÁ ÍÁÔÒÉÃÁ ÂÕÔÓÔÒÅÐ-×Ù×ÏÄÁ" msgid "use 'boot.ci' for scalar parameters" msgstr "ÉÓÐÏÌØÚÕÀ 'boot.ci' ÄÌÑ ÓËÁÌÑÒÎÙÈ ÐÁÒÁÍÅÔÒÏ×" msgid "unable to achieve requested overall error rate" msgstr "ÎÅ ÍÏÇÕ ÄÏÓÔÉÞØ ÔÒÅÂÕÅÍÏÇÏ ÏÂÝÅÇÏ ÓÏÏÔÎÏÛÅÎÉÑ ÏÛÉÂÏË" msgid "unable to find multiplier for %f" msgstr "ÎÅ ÍÏÇÕ ÎÁÊÔÉ ÍÎÏÖÉÔÅÌØ ÄÌÑ %f" msgid "'theta' or 'lambda' required" msgstr "ÔÒÅÂÕÅÔÓÑ 'theta' ÉÌÉ 'lambda'" msgid "0 elements not allowed in 'q'" msgstr "0 ÜÌÅÍÅÎÔÏ× × 'q' ÎÅ ÒÁÚÒÅÛÅÎÏ" msgid "bootstrap replicates must be supplied" msgstr "ÎÁÄÏ ÕËÁÚÁÔØ ÂÕÔÓÔÒÅÐ-ÒÅÐÌÉËÁÔÙ" msgid "either 'boot.out' or 'w' must be specified." msgstr "ÎÁÄÏ ÕËÁÚÁÔØ ÌÉÂÏ 'boot.out', ÌÉÂÏ 'w'." 
msgid "only first column of 't' used" msgstr "ÉÓÐÏÌØÚÏ×ÁÎÁ ÔÏÌØËÏ ÐÅÒ×ÁÑ ËÏÌÏÎËÁ 't'" msgid "invalid value of 'sim' supplied" msgstr "ÕËÁÚÁÎÏ ÎÅÐÒÁ×ÉÌØÎÏÅ ÚÎÁÞÅÎÉÅ 'sim'" msgid "'R' and 'theta' have incompatible lengths" msgstr "Õ 'R' É 'theta' -- ÎÅÓÏ×ÍÅÓÔÉÍÙÅ ÄÌÉÎÙ" msgid "R[1L] must be positive for frequency smoothing" msgstr "R[1L] ÄÏÌÖÅÎ ÂÙÔØ ÐÏÌÏÖÉÔÅÌØÎÙÍ ÄÌÑ ÞÁÓÔÏÔÎÏÇÏ ÓÇÌÁÖÉ×ÁÎÉÑ" msgid "'R' and 'alpha' have incompatible lengths" msgstr "'R' É 'alpha' ÄÏÌÖÎÙ ÉÍÅÔØ ÓÏ×ÍÅÓÔÉÍÙÅ ÄÌÉÎÙ" msgid "extreme values used for quantiles" msgstr "ÜËÓÔÒÅÍÁÌØÎÙÅ ÚÎÁÞÅÎÉÑ ÉÓÐÏÌØÚÏ×ÁÎÙ ÄÌÑ Ë×ÁÎÔÉÌÅÊ" msgid "'theta' must be supplied if R[1L] = 0" msgstr "ÎÁÄÏ ÕËÁÚÁÔØ 'theta', ÅÓÌÉ R[1L] = 0" msgid "'alpha' ignored; R[1L] = 0" msgstr "'alpha' ÐÒÏÐÕÝÅÎ; R[1L]=0" msgid "control methods undefined when 'boot.out' has weights" msgstr "ÍÅÔÏÄÙ ËÏÎÔÒÏÌÑ ÎÅ ÏÐÒÅÄÅÌÅÎÙ × ÔÏ ×ÒÅÍÑ ËÁË Õ 'boot.out' ÅÓÔØ ×ÅÓÁ" msgid "number of columns of 'A' (%d) not equal to length of 'u' (%d)" msgstr "ËÏÌÉÞÅÓÔ×Ï ËÏÌÏÎÏË 'A' (%d) ÎÅ ÒÁ×ÎÏ ÄÌÉÎÅ 'u' (%d)" msgid "either 'A' and 'u' or 'K.adj' and 'K2' must be supplied" msgstr "ÎÁÄÏ ÕËÁÚÁÔØ ÌÉÂÏ 'A' É 'u', ÌÉÂÏ 'K.adj' É 'K2'" msgid "this type not implemented for Poisson" msgstr "ÜÔÏÔ ÔÉÐ ÎÅ ÒÁÚÒÁÂÏÔÁÎ ÄÌÑ ÒÁÓÐÒÅÄÅÌÅÎÉÑ ðÕÁÓÓÏÎÁ" msgid "this type not implemented for Binary" msgstr "ÜÔÏÔ ÔÉÐ ÎÅ ÒÁÚÒÁÂÏÔÁÎ ÄÌÑ ÂÉÎÁÒÎÏÇÏ ÒÁÓÐÒÅÄÅÌÅÎÉÑ" msgid "one of 't' or 't0' required" msgstr "ÔÒÅÂÕÅÔÓÑ ÏÄÎÏ 't' ÉÌÉ 't0'" msgid "function 'u' missing" msgstr "ÆÕÎËÃÉÑ 'u' ÐÒÏÐÕÝÅÎÁ" msgid "'u' must be a function" msgstr "'u' ÄÏÌÖÎÁ ÂÙÔØ ÆÕÎËÃÉÅÊ" msgid "unable to find range" msgstr "ÎÅ ÍÏÇÕ ÎÁÊÔÉ ÒÁÚÍÁÈ" msgid "'R' must be positive" msgstr "'R' ÄÏÌÖÅÎ ÂÙÔØ ÐÏÌÏÖÉÔÅÌØÎÙÍ" msgid "invalid value of 'l'" msgstr "ÎÅÐÒÁ×ÉÌØÎÏÅ ÚÎÁÞÅÎÉÅ 'l'" msgid "unrecognized value of 'sim'" msgstr "ÎÅÒÁÓÐÏÚÎÁÎÎÏÅ ÚÎÁÞÅÎÉÅ 'sim'" msgid "multivariate time series not allowed" msgstr "ÍÎÏÇÏÍÅÒÎÙÅ ×ÒÅÍÅÎÎÙÅ ÒÑÄÙ ÎÅ ÒÁÚÒÅÛÅÎÙ" msgid "likelihood never exceeds %f" msgstr "ÐÒÁ×ÄÏÐÏÄÏÂÉÅ ÎÉËÏÇÄÁ ÎÅ ÐÒÅ×ÙÛÁÅÔ %f" msgid "likelihood exceeds %f at only one point" msgstr "ÐÒÁ×ÄÏÐÏÄÏÂÉÅ ÐÒÅ×ÙÛÁÅÔ %f ÔÏÌØËÏ × ÏÄÎÏÊ ÔÏÞËÅ" #~ msgid "only columns" #~ msgstr "ÔÏÌØËÏ ËÏÌÏÎËÉ" #~ msgid "and" #~ msgstr "É" #~ msgid "of data used" #~ msgstr "ÄÁÎÎÙÈ ÉÓÐÏÌØÚÏ×ÁÎÙ" #~ msgid "number of columns of A (" #~ msgstr "ËÏÌÉÞÅÓÔ×Ï ËÏÌÏÎÏË A (" #~ msgid ")" #~ msgstr ")" #~ msgid "at only one point" #~ msgstr "ÔÏÌØËÏ × ÏÄÎÏÊ ÔÏÞËÅ" #~ msgid "invalid proportions input" #~ msgstr "ÎÅÐÒÁ×ÉÌØÎÙÊ ××ÏÄ ÐÒÏÐÏÒÃÉÊ" #~ msgid "irregular time series not allowed" #~ msgstr "ÎÅÒÅÇÕÌÑÒÎÙÅ ×ÒÅÍÅÎÎÙÅ ÒÑÄÙ ÎÅ ÒÁÚÒÅÛÅÎÙ" boot/tests/0000755000076600000240000000000011663151666012426 5ustar00ripleystaffboot/tests/Examples/0000755000076600000240000000000012105463455014176 5ustar00ripleystaffboot/tests/Examples/boot-Ex.Rout.save0000644000076600000240000022677612105463455017347 0ustar00ripleystaff R Under development (unstable) (2013-02-09 r61878) -- "Unsuffered Consequences" Copyright (C) 2013 The R Foundation for Statistical Computing ISBN 3-900051-07-0 Platform: x86_64-unknown-linux-gnu (64-bit) R is free software and comes with ABSOLUTELY NO WARRANTY. You are welcome to redistribute it under certain conditions. Type 'license()' or 'licence()' for distribution details. Natural language support but running in an English locale R is a collaborative project with many contributors. Type 'contributors()' for more information and 'citation()' on how to cite R or R packages in publications. 
Type 'demo()' for some demos, 'help()' for on-line help, or 'help.start()' for an HTML browser interface to help. Type 'q()' to quit R. > pkgname <- "boot" > source(file.path(R.home("share"), "R", "examples-header.R")) > options(warn = 1) > library('boot') > > base::assign(".oldSearch", base::search(), pos = 'CheckExEnv') > cleanEx() > nameEx("Imp.Estimates") > ### * Imp.Estimates > > flush(stderr()); flush(stdout()) > > ### Name: Imp.Estimates > ### Title: Importance Sampling Estimates > ### Aliases: Imp.Estimates imp.moments imp.prob imp.quantile imp.reg > ### Keywords: htest nonparametric > > ### ** Examples > > # Example 9.8 of Davison and Hinkley (1997) requires tilting the > # resampling distribution of the studentized statistic to be centred > # at the observed value of the test statistic, 1.84. In this example > # we show how certain estimates can be found using resamples taken from > # the tilted distribution. > grav1 <- gravity[as.numeric(gravity[,2]) >= 7, ] > grav.fun <- function(dat, w, orig) { + strata <- tapply(dat[, 2], as.numeric(dat[, 2])) + d <- dat[, 1] + ns <- tabulate(strata) + w <- w/tapply(w, strata, sum)[strata] + mns <- as.vector(tapply(d * w, strata, sum)) # drop names + mn2 <- tapply(d * d * w, strata, sum) + s2hat <- sum((mn2 - mns^2)/ns) + c(mns[2] - mns[1], s2hat, (mns[2] - mns[1] - orig)/sqrt(s2hat)) + } > grav.z0 <- grav.fun(grav1, rep(1, 26), 0) > grav.L <- empinf(data = grav1, statistic = grav.fun, stype = "w", + strata = grav1[,2], index = 3, orig = grav.z0[1]) > grav.tilt <- exp.tilt(grav.L, grav.z0[3], strata = grav1[, 2]) > grav.tilt.boot <- boot(grav1, grav.fun, R = 199, stype = "w", + strata = grav1[, 2], weights = grav.tilt$p, + orig = grav.z0[1]) > # Since the weights are needed for all calculations, we shall calculate > # them once only. 
> grav.w <- imp.weights(grav.tilt.boot) > grav.mom <- imp.moments(grav.tilt.boot, w = grav.w, index = 3) > grav.p <- imp.prob(grav.tilt.boot, w = grav.w, index = 3, t0 = grav.z0[3]) > unlist(grav.p) t0 raw rat reg 1.8401182 1.0222447 0.9778246 0.9767292 > grav.q <- imp.quantile(grav.tilt.boot, w = grav.w, index = 3, + alpha = c(0.9, 0.95, 0.975, 0.99)) > as.data.frame(grav.q) alpha raw rat reg 1 0.900 3.048484 1.170707 5.056004 2 0.950 3.237935 1.565928 5.056004 3 0.975 3.629448 1.895876 5.056004 4 0.990 3.629448 2.258157 5.056004 > > > > cleanEx() > nameEx("abc.ci") > ### * abc.ci > > flush(stderr()); flush(stdout()) > > ### Name: abc.ci > ### Title: Nonparametric ABC Confidence Intervals > ### Aliases: abc.ci > ### Keywords: nonparametric htest > > ### ** Examples > > # 90% and 95% confidence intervals for the correlation > # coefficient between the columns of the bigcity data > > abc.ci(bigcity, corr, conf=c(0.90,0.95)) conf [1,] 0.90 0.9581503 0.9917271 [2,] 0.95 0.9493699 0.9930713 > > # A 95% confidence interval for the difference between the means of > # the last two samples in gravity > mean.diff <- function(y, w) + { gp1 <- 1:table(as.numeric(y$series))[1] + sum(y[gp1, 1] * w[gp1]) - sum(y[-gp1, 1] * w[-gp1]) + } > grav1 <- gravity[as.numeric(gravity[, 2]) >= 7, ] > abc.ci(grav1, mean.diff, strata = grav1$series) [1] 0.9500000 -6.7075791 -0.3939377 > > > > cleanEx() > nameEx("boot") > ### * boot > > flush(stderr()); flush(stdout()) > > ### Name: boot > ### Title: Bootstrap Resampling > ### Aliases: boot boot.return c.boot > ### Keywords: nonparametric htest > > ### ** Examples > > # Usual bootstrap of the ratio of means using the city data > ratio <- function(d, w) sum(d$x * w)/sum(d$u * w) > boot(city, ratio, R = 999, stype = "w") ORDINARY NONPARAMETRIC BOOTSTRAP Call: boot(data = city, statistic = ratio, R = 999, stype = "w") Bootstrap Statistics : original bias std. error t1* 1.520313 0.03959996 0.2167146 > > > # Stratified resampling for the difference of means. In this > # example we will look at the difference of means between the final > # two series in the gravity data. > diff.means <- function(d, f) + { n <- nrow(d) + gp1 <- 1:table(as.numeric(d$series))[1] + m1 <- sum(d[gp1,1] * f[gp1])/sum(f[gp1]) + m2 <- sum(d[-gp1,1] * f[-gp1])/sum(f[-gp1]) + ss1 <- sum(d[gp1,1]^2 * f[gp1]) - (m1 * m1 * sum(f[gp1])) + ss2 <- sum(d[-gp1,1]^2 * f[-gp1]) - (m2 * m2 * sum(f[-gp1])) + c(m1 - m2, (ss1 + ss2)/(sum(f) - 2)) + } > grav1 <- gravity[as.numeric(gravity[,2]) >= 7,] > boot(grav1, diff.means, R = 999, stype = "f", strata = grav1[,2]) STRATIFIED BOOTSTRAP Call: boot(data = grav1, statistic = diff.means, R = 999, stype = "f", strata = grav1[, 2]) Bootstrap Statistics : original bias std. error t1* -2.846154 0.002541003 1.546664 t2* 16.846154 -1.457662791 6.759103 > > # In this example we show the use of boot in a prediction from > # regression based on the nuclear data. This example is taken > # from Example 6.8 of Davison and Hinkley (1997). Notice also > # that two extra arguments to 'statistic' are passed through boot. > nuke <- nuclear[, c(1, 2, 5, 7, 8, 10, 11)] > nuke.lm <- glm(log(cost) ~ date+log(cap)+ne+ct+log(cum.n)+pt, data = nuke) > nuke.diag <- glm.diag(nuke.lm) > nuke.res <- nuke.diag$res * nuke.diag$sd > nuke.res <- nuke.res - mean(nuke.res) > > # We set up a new data frame with the data, the standardized > # residuals and the fitted values for use in the bootstrap. 
> nuke.data <- data.frame(nuke, resid = nuke.res, fit = fitted(nuke.lm)) > > # Now we want a prediction of plant number 32 but at date 73.00 > new.data <- data.frame(cost = 1, date = 73.00, cap = 886, ne = 0, + ct = 0, cum.n = 11, pt = 1) > new.fit <- predict(nuke.lm, new.data) > > nuke.fun <- function(dat, inds, i.pred, fit.pred, x.pred) + { + lm.b <- glm(fit+resid[inds] ~ date+log(cap)+ne+ct+log(cum.n)+pt, + data = dat) + pred.b <- predict(lm.b, x.pred) + c(coef(lm.b), pred.b - (fit.pred + dat$resid[i.pred])) + } > > nuke.boot <- boot(nuke.data, nuke.fun, R = 999, m = 1, + fit.pred = new.fit, x.pred = new.data) > # The bootstrap prediction squared error would then be found by > mean(nuke.boot$t[, 8]^2) [1] 0.08815734 > # Basic bootstrap prediction limits would be > new.fit - sort(nuke.boot$t[, 8])[c(975, 25)] [1] 6.160255 7.298819 > > > # Finally a parametric bootstrap. For this example we shall look > # at the air-conditioning data. In this example our aim is to test > # the hypothesis that the true value of the index is 1 (i.e. that > # the data come from an exponential distribution) against the > # alternative that the data come from a gamma distribution with > # index not equal to 1. > air.fun <- function(data) { + ybar <- mean(data$hours) + para <- c(log(ybar), mean(log(data$hours))) + ll <- function(k) { + if (k <= 0) 1e200 else lgamma(k)-k*(log(k)-1-para[1]+para[2]) + } + khat <- nlm(ll, ybar^2/var(data$hours))$estimate + c(ybar, khat) + } > > air.rg <- function(data, mle) { + # Function to generate random exponential variates. + # mle will contain the mean of the original data + out <- data + out$hours <- rexp(nrow(out), 1/mle) + out + } > > air.boot <- boot(aircondit, air.fun, R = 999, sim = "parametric", + ran.gen = air.rg, mle = mean(aircondit$hours)) > > # The bootstrap p-value can then be approximated by > sum(abs(air.boot$t[,2]-1) > abs(air.boot$t0[2]-1))/(1+air.boot$R) [1] 0.461 > > > > cleanEx() > nameEx("boot.array") > ### * boot.array > > flush(stderr()); flush(stdout()) > > ### Name: boot.array > ### Title: Bootstrap Resampling Arrays > ### Aliases: boot.array > ### Keywords: nonparametric > > ### ** Examples > > # A frequency array for a nonparametric bootstrap > city.boot <- boot(city, corr, R = 40, stype = "w") > boot.array(city.boot) [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [1,] 1 1 3 1 1 0 1 0 1 1 [2,] 0 0 1 3 1 2 1 1 1 0 [3,] 0 0 0 1 3 2 0 1 2 1 [4,] 0 2 2 2 0 1 0 1 0 2 [5,] 1 1 1 0 1 2 0 2 2 0 [6,] 0 1 2 0 2 1 0 1 2 1 [7,] 3 0 0 1 1 2 0 1 1 1 [8,] 0 4 1 1 1 0 1 2 0 0 [9,] 0 1 4 0 1 0 2 2 0 0 [10,] 1 1 1 1 1 1 1 2 0 1 [11,] 0 0 3 0 2 1 1 1 0 2 [12,] 2 3 1 0 0 1 0 0 2 1 [13,] 1 0 0 1 2 0 2 1 1 2 [14,] 0 0 2 3 0 0 2 0 2 1 [15,] 2 0 1 1 1 1 0 2 1 1 [16,] 1 1 0 2 2 1 1 1 1 0 [17,] 1 0 1 2 2 1 2 1 0 0 [18,] 0 2 0 0 2 2 0 1 1 2 [19,] 1 1 0 2 0 1 2 0 1 2 [20,] 0 0 0 1 2 1 1 1 0 4 [21,] 0 0 2 0 1 1 3 0 0 3 [22,] 1 3 2 2 0 1 1 0 0 0 [23,] 0 0 3 1 2 1 2 0 1 0 [24,] 0 1 1 2 1 2 0 1 0 2 [25,] 0 0 2 1 0 0 3 2 1 1 [26,] 0 2 4 1 1 1 0 0 0 1 [27,] 2 3 1 0 2 0 0 2 0 0 [28,] 1 1 0 1 1 0 0 4 2 0 [29,] 2 2 0 1 1 0 1 0 1 2 [30,] 0 1 0 1 1 2 0 1 3 1 [31,] 1 2 0 2 2 0 1 1 0 1 [32,] 1 4 0 1 0 2 0 1 1 0 [33,] 0 1 0 5 1 0 0 1 1 1 [34,] 0 2 0 1 3 1 1 1 0 1 [35,] 0 2 3 1 1 1 0 0 1 1 [36,] 1 2 1 1 1 1 2 0 1 0 [37,] 1 1 1 0 0 1 2 2 2 0 [38,] 2 3 0 3 0 1 0 0 1 0 [39,] 0 0 1 2 3 0 0 2 1 1 [40,] 0 1 1 0 3 0 2 2 0 1 > > perm.cor <- function(d,i) cor(d$x,d$u[i]) > city.perm <- boot(city, perm.cor, R = 40, sim = "permutation") > boot.array(city.perm, indices = TRUE) [,1] [,2] [,3] [,4] 
[,5] [,6] [,7] [,8] [,9] [,10] [1,] 7 2 8 10 6 4 9 3 1 5 [2,] 10 4 6 8 5 3 2 1 9 7 [3,] 5 10 6 2 3 8 1 9 7 4 [4,] 10 1 4 8 9 5 3 6 7 2 [5,] 1 3 5 8 2 9 6 7 10 4 [6,] 10 4 3 2 1 9 8 5 7 6 [7,] 2 5 1 4 10 9 3 8 7 6 [8,] 5 9 6 3 1 2 4 10 8 7 [9,] 8 2 7 10 4 3 1 6 9 5 [10,] 1 9 3 2 8 6 7 4 5 10 [11,] 6 7 10 5 3 9 2 8 4 1 [12,] 9 5 1 6 10 8 3 2 7 4 [13,] 9 7 3 1 8 5 4 6 2 10 [14,] 7 1 3 2 9 6 10 4 5 8 [15,] 4 9 6 3 2 1 10 8 7 5 [16,] 9 6 1 7 5 2 8 4 10 3 [17,] 7 2 3 6 10 4 1 9 5 8 [18,] 2 4 3 1 5 8 6 10 9 7 [19,] 4 9 6 1 10 3 7 2 5 8 [20,] 6 3 7 5 8 4 2 10 1 9 [21,] 9 10 2 6 7 5 8 4 3 1 [22,] 10 4 5 9 8 3 1 2 6 7 [23,] 9 2 5 1 10 4 6 8 7 3 [24,] 9 4 3 6 8 2 1 10 5 7 [25,] 10 3 8 2 5 7 1 9 6 4 [26,] 7 8 3 9 4 1 5 2 10 6 [27,] 4 8 1 5 3 6 10 9 7 2 [28,] 8 5 7 4 10 3 2 1 9 6 [29,] 8 10 2 4 7 3 9 6 1 5 [30,] 1 5 7 9 3 6 4 10 2 8 [31,] 10 9 7 5 4 1 2 6 3 8 [32,] 8 5 1 6 3 7 10 4 9 2 [33,] 1 6 8 5 2 7 9 4 10 3 [34,] 8 5 7 1 9 2 6 3 10 4 [35,] 4 5 9 2 7 6 8 3 1 10 [36,] 5 9 10 1 3 7 2 8 4 6 [37,] 8 2 7 9 10 1 3 4 6 5 [38,] 5 8 1 7 4 3 9 10 6 2 [39,] 10 7 6 4 8 1 3 5 9 2 [40,] 2 1 8 3 4 6 9 10 7 5 > > > > cleanEx() > nameEx("boot.ci") > ### * boot.ci > > flush(stderr()); flush(stdout()) > > ### Name: boot.ci > ### Title: Nonparametric Bootstrap Confidence Intervals > ### Aliases: boot.ci > ### Keywords: nonparametric htest > > ### ** Examples > > # confidence intervals for the city data > ratio <- function(d, w) sum(d$x * w)/sum(d$u * w) > city.boot <- boot(city, ratio, R = 999, stype = "w", sim = "ordinary") > boot.ci(city.boot, conf = c(0.90, 0.95), + type = c("norm", "basic", "perc", "bca")) BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 999 bootstrap replicates CALL : boot.ci(boot.out = city.boot, conf = c(0.9, 0.95), type = c("norm", "basic", "perc", "bca")) Intervals : Level Normal Basic 90% ( 1.124, 1.837 ) ( 1.059, 1.740 ) 95% ( 1.056, 1.905 ) ( 0.932, 1.799 ) Level Percentile BCa 90% ( 1.301, 1.982 ) ( 1.301, 1.984 ) 95% ( 1.242, 2.109 ) ( 1.243, 2.110 ) Calculations and Intervals on Original Scale > > # studentized confidence interval for the two sample > # difference of means problem using the final two series > # of the gravity data. 
> diff.means <- function(d, f) + { n <- nrow(d) + gp1 <- 1:table(as.numeric(d$series))[1] + m1 <- sum(d[gp1,1] * f[gp1])/sum(f[gp1]) + m2 <- sum(d[-gp1,1] * f[-gp1])/sum(f[-gp1]) + ss1 <- sum(d[gp1,1]^2 * f[gp1]) - (m1 * m1 * sum(f[gp1])) + ss2 <- sum(d[-gp1,1]^2 * f[-gp1]) - (m2 * m2 * sum(f[-gp1])) + c(m1 - m2, (ss1 + ss2)/(sum(f) - 2)) + } > grav1 <- gravity[as.numeric(gravity[,2]) >= 7, ] > grav1.boot <- boot(grav1, diff.means, R = 999, stype = "f", + strata = grav1[ ,2]) > boot.ci(grav1.boot, type = c("stud", "norm")) BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 999 bootstrap replicates CALL : boot.ci(boot.out = grav1.boot, type = c("stud", "norm")) Intervals : Level Normal Studentized 95% (-5.880, 0.183 ) (-7.059, -0.101 ) Calculations and Intervals on Original Scale > > # Nonparametric confidence intervals for mean failure time > # of the air-conditioning data as in Example 5.4 of Davison > # and Hinkley (1997) > mean.fun <- function(d, i) + { m <- mean(d$hours[i]) + n <- length(i) + v <- (n-1)*var(d$hours[i])/n^2 + c(m, v) + } > air.boot <- boot(aircondit, mean.fun, R = 999) > boot.ci(air.boot, type = c("norm", "basic", "perc", "stud")) BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 999 bootstrap replicates CALL : boot.ci(boot.out = air.boot, type = c("norm", "basic", "perc", "stud")) Intervals : Level Normal Basic 95% ( 35.5, 181.9 ) ( 26.0, 170.6 ) Level Studentized Percentile 95% ( 47.9, 294.5 ) ( 45.6, 190.2 ) Calculations and Intervals on Original Scale > > # Now using the log transformation > # There are two ways of doing this and they both give the > # same intervals. > > # Method 1 > boot.ci(air.boot, type = c("norm", "basic", "perc", "stud"), + h = log, hdot = function(x) 1/x) BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 999 bootstrap replicates CALL : boot.ci(boot.out = air.boot, type = c("norm", "basic", "perc", "stud"), h = log, hdot = function(x) 1/x) Intervals : Level Normal Basic 95% ( 4.035, 5.469 ) ( 4.118, 5.546 ) Level Studentized Percentile 95% ( 3.959, 5.808 ) ( 3.820, 5.248 ) Calculations and Intervals on Transformed Scale > > # Method 2 > vt0 <- air.boot$t0[2]/air.boot$t0[1]^2 > vt <- air.boot$t[, 2]/air.boot$t[ ,1]^2 > boot.ci(air.boot, type = c("norm", "basic", "perc", "stud"), + t0 = log(air.boot$t0[1]), t = log(air.boot$t[,1]), + var.t0 = vt0, var.t = vt) BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS Based on 999 bootstrap replicates CALL : boot.ci(boot.out = air.boot, type = c("norm", "basic", "perc", "stud"), var.t0 = vt0, var.t = vt, t0 = log(air.boot$t0[1]), t = log(air.boot$t[, 1])) Intervals : Level Normal Basic 95% ( 4.069, 5.435 ) ( 4.118, 5.546 ) Level Studentized Percentile 95% ( 3.959, 5.808 ) ( 3.820, 5.248 ) Calculations and Intervals on Original Scale > > > > cleanEx() > nameEx("censboot") > ### * censboot > > flush(stderr()); flush(stdout()) > > ### Name: censboot > ### Title: Bootstrap for Censored Data > ### Aliases: censboot cens.return > ### Keywords: survival > > ### ** Examples > > library(survival) Loading required package: splines Attaching package: 'survival' The following object is masked from 'package:boot': aml > # Example 3.9 of Davison and Hinkley (1997) does a bootstrap on some > # remission times for patients with a type of leukaemia. The patients > # were divided into those who received maintenance chemotherapy and > # those who did not. Here we are interested in the median remission > # time for the two groups. > data(aml, package = "boot") # not the version in survival. 
> aml.fun <- function(data) { + surv <- survfit(Surv(time, cens) ~ group, data = data) + out <- NULL + st <- 1 + for (s in 1:length(surv$strata)) { + inds <- st:(st + surv$strata[s]-1) + md <- min(surv$time[inds[1-surv$surv[inds] >= 0.5]]) + st <- st + surv$strata[s] + out <- c(out, md) + } + out + } > aml.case <- censboot(aml, aml.fun, R = 499, strata = aml$group) Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf > > # Now we will look at the same statistic using the conditional > # bootstrap and the weird bootstrap. For the conditional bootstrap > # the survival distribution is stratified but the censoring > # distribution is not. 
> 
> aml.s1 <- survfit(Surv(time, cens) ~ group, data = aml)
> aml.s2 <- survfit(Surv(time-0.001*cens, 1-cens) ~ 1, data = aml)
> aml.cond <- censboot(aml, aml.fun, R = 499, strata = aml$group,
+ F.surv = aml.s1, G.surv = aml.s2, sim = "cond")
Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf
(as above, this warning recurs whenever a resampled stratum's survival curve never falls to 0.5; the repeats are omitted)
> 
> 
> # For the weird bootstrap we must redefine our function slightly since
> # the data will not contain the group number.
> aml.fun1 <- function(data, str) {
+ surv <- survfit(Surv(data[, 1], data[, 2]) ~ str)
+ out <- NULL
+ st <- 1
+ for (s in 1:length(surv$strata)) {
+ inds <- st:(st + surv$strata[s] - 1)
+ md <- min(surv$time[inds[1-surv$surv[inds] >= 0.5]])
+ st <- st + surv$strata[s]
+ out <- c(out, md)
+ }
+ out
+ }
> aml.wei <- censboot(cbind(aml$time, aml$cens), aml.fun1, R = 499,
+ strata = aml$group, F.surv = aml.s1, sim = "weird")
Warning in min(surv$time[inds[1 - surv$surv[inds] >= 0.5]]) : no non-missing arguments to min; returning Inf
(the warning recurs a few times, for the same reason as above)
> 
> # Now for an example where a Cox regression model has been fitted to the
> # data.  We will look at the melanoma data of Example 7.6 from
> # Davison and Hinkley (1997).  The fitted model assumes that there
> # is a different survival distribution for the ulcerated and
> # non-ulcerated groups but that the thickness of the tumour has a
> # common effect.  We will also assume that the censoring distribution
> # is different in different age groups.  The statistic of interest
> # is the linear predictor.  This is returned as the values at a
> # number of equally spaced points in the range of interest.
> data(melanoma, package = "boot")
> library(splines)# for ns
> mel.cox <- coxph(Surv(time, status == 1) ~ ns(thickness, df=4) + strata(ulcer),
+ data = melanoma)
> mel.surv <- survfit(mel.cox)
> agec <- cut(melanoma$age, c(0, 39, 49, 59, 69, 100))
> mel.cens <- survfit(Surv(time - 0.001*(status == 1), status != 1) ~
+ strata(agec), data = melanoma)
> mel.fun <- function(d) {
+ t1 <- ns(d$thickness, df=4)
+ cox <- coxph(Surv(d$time, d$status == 1) ~ t1+strata(d$ulcer))
+ ind <- !duplicated(d$thickness)
+ u <- d$thickness[!ind]
+ eta <- cox$linear.predictors[!ind]
+ sp <- smooth.spline(u, eta, df=20)
+ th <- seq(from = 0.25, to = 10, by = 0.25)
+ predict(sp, th)$y
+ }
> mel.str <- cbind(melanoma$ulcer, agec)
> 
> # this is slow!
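> # (A sketch, not part of the reference output: current versions of the boot
> # package also accept 'parallel' and 'ncpus' arguments in censboot(), so the
> # slow model-based run below could in principle be spread over several cores
> # on a suitable machine.  The availability of those arguments in the
> # installed version is assumed here; the call is not run.)
> #   mel.mod <- censboot(melanoma, mel.fun, R = 499, F.surv = mel.surv,
> #        G.surv = mel.cens, cox = mel.cox, strata = mel.str, sim = "model",
> #        parallel = "multicore", ncpus = 2)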
> mel.mod <- censboot(melanoma, mel.fun, R = 499, F.surv = mel.surv, + G.surv = mel.cens, cox = mel.cox, strata = mel.str, sim = "model") > # To plot the original predictor and a 95% pointwise envelope for it > mel.env <- envelope(mel.mod)$point > th <- seq(0.25, 10, by = 0.25) > plot(th, mel.env[1, ], ylim = c(-2, 2), + xlab = "thickness (mm)", ylab = "linear predictor", type = "n") > lines(th, mel.mod$t0, lty = 1) > matlines(th, t(mel.env), lty = 2) > > > > cleanEx() detaching 'package:survival', 'package:splines' > nameEx("control") > ### * control > > flush(stderr()); flush(stdout()) > > ### Name: control > ### Title: Control Variate Calculations > ### Aliases: control > ### Keywords: nonparametric > > ### ** Examples > > # Use of control variates for the variance of the air-conditioning data > mean.fun <- function(d, i) + { m <- mean(d$hours[i]) + n <- nrow(d) + v <- (n-1)*var(d$hours[i])/n^2 + c(m, v) + } > air.boot <- boot(aircondit, mean.fun, R = 999) > control(air.boot, index = 2, bias.adj = TRUE) [1] -6.298101 > air.cont <- control(air.boot, index = 2) > # Now let us try the variance on the log scale. > air.cont1 <- control(air.boot, t0 = log(air.boot$t0[2]), + t = log(air.boot$t[, 2])) > > > > cleanEx() > nameEx("cv.glm") > ### * cv.glm > > flush(stderr()); flush(stdout()) > > ### Name: cv.glm > ### Title: Cross-validation for Generalized Linear Models > ### Aliases: cv.glm > ### Keywords: regression > > ### ** Examples > > # leave-one-out and 6-fold cross-validation prediction error for > # the mammals data set. > data(mammals, package="MASS") > mammals.glm <- glm(log(brain) ~ log(body), data = mammals) > (cv.err <- cv.glm(mammals, mammals.glm)$delta) [1] 0.4918650 0.4916571 > (cv.err.6 <- cv.glm(mammals, mammals.glm, K = 6)$delta) [1] 0.4851491 0.4834641 > > # As this is a linear model we could calculate the leave-one-out > # cross-validation estimate without any extra model-fitting. > muhat <- fitted(mammals.glm) > mammals.diag <- glm.diag(mammals.glm) > (cv.err <- mean((mammals.glm$y - muhat)^2/(1 - mammals.diag$h)^2)) [1] 0.491865 > > > # leave-one-out and 11-fold cross-validation prediction error for > # the nodal data set. Since the response is a binary variable an > # appropriate cost function is > cost <- function(r, pi = 0) mean(abs(r-pi) > 0.5) > > nodal.glm <- glm(r ~ stage+xray+acid, binomial, data = nodal) > (cv.err <- cv.glm(nodal, nodal.glm, cost, K = nrow(nodal))$delta) [1] 0.1886792 0.1886792 > (cv.11.err <- cv.glm(nodal, nodal.glm, cost, K = 11)$delta) [1] 0.2264151 0.2217871 > > > > cleanEx() > nameEx("empinf") > ### * empinf > > flush(stderr()); flush(stdout()) > > ### Name: empinf > ### Title: Empirical Influence Values > ### Aliases: empinf > ### Keywords: nonparametric math > > ### ** Examples > > # The empirical influence values for the ratio of means in > # the city data. > ratio <- function(d, w) sum(d$x *w)/sum(d$u*w) > empinf(data = city, statistic = ratio) [1] -1.04367815 -0.58417763 -0.37092459 -0.18958996 0.03164142 0.10544878 [7] 0.09236345 0.20365074 1.02178280 0.73381132 > city.boot <- boot(city, ratio, 499, stype="w") > empinf(boot.out = city.boot, type = "reg") 1 1 1 1 1 1 -1.13619987 -0.69728210 -0.45301061 -0.27615882 0.02108999 0.14896336 1 1 1 1 0.09746429 0.20622340 1.18798956 0.90092079 > > # A statistic that may be of interest in the difference of means > # problem is the t-statistic for testing equality of means. 
In > # the bootstrap we get replicates of the difference of means and > # the variance of that statistic and then want to use this output > # to get the empirical influence values of the t-statistic. > grav1 <- gravity[as.numeric(gravity[,2]) >= 7,] > grav.fun <- function(dat, w) { + strata <- tapply(dat[, 2], as.numeric(dat[, 2])) + d <- dat[, 1] + ns <- tabulate(strata) + w <- w/tapply(w, strata, sum)[strata] + mns <- as.vector(tapply(d * w, strata, sum)) # drop names + mn2 <- tapply(d * d * w, strata, sum) + s2hat <- sum((mn2 - mns^2)/ns) + c(mns[2] - mns[1], s2hat) + } > > grav.boot <- boot(grav1, grav.fun, R = 499, stype = "w", + strata = grav1[, 2]) > > # Since the statistic of interest is a function of the bootstrap > # statistics, we must calculate the bootstrap replicates and pass > # them to empinf using the t argument. > grav.z <- (grav.boot$t[,1]-grav.boot$t0[1])/sqrt(grav.boot$t[,2]) > empinf(boot.out = grav.boot, t = grav.z) 1 1 1 1 1 1 1 -2.9326019 -1.3760327 -2.4400720 -1.2175846 0.2795352 -0.8258764 -0.8156286 1 1 1 1 1 1 2 -0.5573332 -1.1275252 -3.1603140 1.2840693 3.5434781 9.3458860 2.6692589 2 2 2 2 2 2 2 4.4496570 3.6948000 0.9929002 -3.0100985 -3.2237464 -2.5493305 -0.6551745 2 2 2 2 2 1.9065308 0.4980530 -1.6219628 -1.6980508 -1.4528364 > > > > cleanEx() > nameEx("envelope") > ### * envelope > > flush(stderr()); flush(stdout()) > > ### Name: envelope > ### Title: Confidence Envelopes for Curves > ### Aliases: envelope > ### Keywords: dplot htest > > ### ** Examples > > # Testing whether the final series of measurements of the gravity data > # may come from a normal distribution. This is done in Examples 4.7 > # and 4.8 of Davison and Hinkley (1997). > grav1 <- gravity$g[gravity$series == 8] > grav.z <- (grav1 - mean(grav1))/sqrt(var(grav1)) > grav.gen <- function(dat, mle) rnorm(length(dat)) > grav.qqboot <- boot(grav.z, sort, R = 999, sim = "parametric", + ran.gen = grav.gen) > grav.qq <- qqnorm(grav.z, plot.it = FALSE) > grav.qq <- lapply(grav.qq, sort) > plot(grav.qq, ylim = c(-3.5, 3.5), ylab = "Studentized Order Statistics", + xlab = "Normal Quantiles") > grav.env <- envelope(grav.qqboot, level = 0.9) > lines(grav.qq$x, grav.env$point[1, ], lty = 4) > lines(grav.qq$x, grav.env$point[2, ], lty = 4) > lines(grav.qq$x, grav.env$overall[1, ], lty = 1) > lines(grav.qq$x, grav.env$overall[2, ], lty = 1) > > > > cleanEx() > nameEx("exp.tilt") > ### * exp.tilt > > flush(stderr()); flush(stdout()) > > ### Name: exp.tilt > ### Title: Exponential Tilting > ### Aliases: exp.tilt > ### Keywords: nonparametric smooth > > ### ** Examples > > # Example 9.8 of Davison and Hinkley (1997) requires tilting the resampling > # distribution of the studentized statistic to be centred at the observed > # value of the test statistic 1.84. This can be achieved as follows. 
> grav1 <- gravity[as.numeric(gravity[,2]) >=7 , ] > grav.fun <- function(dat, w, orig) { + strata <- tapply(dat[, 2], as.numeric(dat[, 2])) + d <- dat[, 1] + ns <- tabulate(strata) + w <- w/tapply(w, strata, sum)[strata] + mns <- as.vector(tapply(d * w, strata, sum)) # drop names + mn2 <- tapply(d * d * w, strata, sum) + s2hat <- sum((mn2 - mns^2)/ns) + c(mns[2]-mns[1], s2hat, (mns[2]-mns[1]-orig)/sqrt(s2hat)) + } > grav.z0 <- grav.fun(grav1, rep(1, 26), 0) > grav.L <- empinf(data = grav1, statistic = grav.fun, stype = "w", + strata = grav1[,2], index = 3, orig = grav.z0[1]) > grav.tilt <- exp.tilt(grav.L, grav.z0[3], strata = grav1[,2]) > boot(grav1, grav.fun, R = 499, stype = "w", weights = grav.tilt$p, + strata = grav1[,2], orig = grav.z0[1]) STRATIFIED WEIGHTED BOOTSTRAP Call: boot(data = grav1, statistic = grav.fun, R = 499, stype = "w", strata = grav1[, 2], weights = grav.tilt$p, orig = grav.z0[1]) Bootstrap Statistics : original bias std. error mean(t*) t1* 2.846154 -0.3661063 1.705171 5.702944 t2* 2.392353 -0.3538294 1.002889 3.444050 t3* 0.000000 -0.5160619 1.314298 1.473456 > > > > cleanEx() > nameEx("glm.diag.plots") > ### * glm.diag.plots > > flush(stderr()); flush(stdout()) > > ### Name: glm.diag.plots > ### Title: Diagnostics plots for generalized linear models > ### Aliases: glm.diag.plots > ### Keywords: regression dplot hplot > > ### ** Examples > > # In this example we look at the leukaemia data which was looked at in > # Example 7.1 of Davison and Hinkley (1997) > data(leuk, package = "MASS") > leuk.mod <- glm(time ~ ag-1+log10(wbc), family = Gamma(log), data = leuk) > leuk.diag <- glm.diag(leuk.mod) > glm.diag.plots(leuk.mod, leuk.diag) > > > > cleanEx() > nameEx("jack.after.boot") > ### * jack.after.boot > > flush(stderr()); flush(stdout()) > > ### Name: jack.after.boot > ### Title: Jackknife-after-Bootstrap Plots > ### Aliases: jack.after.boot > ### Keywords: hplot nonparametric > > ### ** Examples > > # To draw the jackknife-after-bootstrap plot for the head size data as in > # Example 3.24 of Davison and Hinkley (1997) > frets.fun <- function(data, i) { + pcorr <- function(x) { + # Function to find the correlations and partial correlations between + # the four measurements. + v <- cor(x) + v.d <- diag(var(x)) + iv <- solve(v) + iv.d <- sqrt(diag(iv)) + iv <- - diag(1/iv.d) %*% iv %*% diag(1/iv.d) + q <- NULL + n <- nrow(v) + for (i in 1:(n-1)) + q <- rbind( q, c(v[i, 1:i], iv[i,(i+1):n]) ) + q <- rbind( q, v[n, ] ) + diag(q) <- round(diag(q)) + q + } + d <- data[i, ] + v <- pcorr(d) + c(v[1,], v[2,], v[3,], v[4,]) + } > frets.boot <- boot(log(as.matrix(frets)), frets.fun, R = 999) > # we will concentrate on the partial correlation between head breadth > # for the first son and head length for the second. This is the 7th > # element in the output of frets.fun so we set index = 7 > jack.after.boot(frets.boot, useJ = FALSE, stinf = FALSE, index = 7) > > > > cleanEx() > nameEx("k3.linear") > ### * k3.linear > > flush(stderr()); flush(stdout()) > > ### Name: k3.linear > ### Title: Linear Skewness Estimate > ### Aliases: k3.linear > ### Keywords: nonparametric > > ### ** Examples > > # To estimate the skewness of the ratio of means for the city data. 
> ratio <- function(d, w) sum(d$x * w)/sum(d$u * w)
> k3.linear(empinf(data = city, statistic = ratio))
[1] 7.831452e-05
> 
> 
> 
> cleanEx()
> nameEx("linear.approx")
> ### * linear.approx
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: linear.approx
> ### Title: Linear Approximation of Bootstrap Replicates
> ### Aliases: linear.approx
> ### Keywords: nonparametric
> 
> ### ** Examples
> 
> # Using the city data let us look at the linear approximation to the
> # ratio statistic and its logarithm.  We compare these with the
> # corresponding plots for the bigcity data.
> 
> ratio <- function(d, w) sum(d$x * w)/sum(d$u * w)
> city.boot <- boot(city, ratio, R = 499, stype = "w")
> bigcity.boot <- boot(bigcity, ratio, R = 499, stype = "w")
> op <- par(pty = "s", mfrow = c(2, 2))
> 
> # The first plot is for the city data ratio statistic.
> city.lin1 <- linear.approx(city.boot)
> lim <- range(c(city.boot$t,city.lin1))
> plot(city.boot$t, city.lin1, xlim = lim, ylim = lim,
+ main = "Ratio; n=10", xlab = "t*", ylab = "tL*")
> abline(0, 1)
> 
> # Now for the log of the ratio statistic for the city data.
> city.lin2 <- linear.approx(city.boot,t0 = log(city.boot$t0),
+ t = log(city.boot$t))
> lim <- range(c(log(city.boot$t),city.lin2))
> plot(log(city.boot$t), city.lin2, xlim = lim, ylim = lim,
+ main = "Log(Ratio); n=10", xlab = "t*", ylab = "tL*")
> abline(0, 1)
> 
> # The ratio statistic for the bigcity data.
> bigcity.lin1 <- linear.approx(bigcity.boot)
> lim <- range(c(bigcity.boot$t,bigcity.lin1))
> plot(bigcity.boot$t, bigcity.lin1, xlim = lim, ylim = lim,
+ main = "Ratio; n=49", xlab = "t*", ylab = "tL*")
> abline(0, 1)
> 
> # Finally the log of the ratio statistic for the bigcity data.
> bigcity.lin2 <- linear.approx(bigcity.boot,t0 = log(bigcity.boot$t0),
+ t = log(bigcity.boot$t))
> lim <- range(c(log(bigcity.boot$t),bigcity.lin2))
> plot(log(bigcity.boot$t), bigcity.lin2, xlim = lim, ylim = lim,
+ main = "Log(Ratio); n=49", xlab = "t*", ylab = "tL*")
> abline(0, 1)
> 
> par(op)
> 
> 
> 
> graphics::par(get("par.postscript", pos = 'CheckExEnv'))
> cleanEx()
> nameEx("lines.saddle.distn")
> ### * lines.saddle.distn
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: lines.saddle.distn
> ### Title: Add a Saddlepoint Approximation to a Plot
> ### Aliases: lines.saddle.distn
> ### Keywords: aplot smooth nonparametric
> 
> ### ** Examples
> 
> # In this example we show how a plot such as that in Figure 9.9 of
> # Davison and Hinkley (1997) may be produced.  Note the large number of
> # bootstrap replicates required in this example.
> expdata <- rexp(12)
> vfun <- function(d, i) {
+ n <- length(d)
+ (n-1)/n*var(d[i])
+ }
> exp.boot <- boot(expdata,vfun, R = 9999)
> exp.L <- (expdata - mean(expdata))^2 - exp.boot$t0
> exp.tL <- linear.approx(exp.boot, L = exp.L)
> hist(exp.tL, nclass = 50, probability = TRUE)
> exp.t0 <- c(0, sqrt(var(exp.boot$t)))
> exp.sp <- saddle.distn(A = exp.L/12,wdist = "m", t0 = exp.t0)
> 
> # The saddlepoint approximation in this case is to the density of
> # t-t0 and so t0 must be added for the plot.
> lines(exp.sp, h = function(u, t0) u+t0, J = function(u, t0) 1,
+ t0 = exp.boot$t0)
> 
> 
> 
> cleanEx()
> nameEx("norm.ci")
> ### * norm.ci
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: norm.ci
> ### Title: Normal Approximation Confidence Intervals
> ### Aliases: norm.ci
> ### Keywords: htest
> 
> ### ** Examples
> 
> # In Example 5.1 of Davison and Hinkley (1997), normal approximation
> # confidence intervals are found for the air-conditioning data.
> air.mean <- mean(aircondit$hours)
> air.n <- nrow(aircondit)
> air.v <- air.mean^2/air.n
> norm.ci(t0 = air.mean, var.t0 = air.v)
     conf
[1,] 0.95 46.93055 169.2361
> exp(norm.ci(t0 = log(air.mean), var.t0 = 1/air.n)[2:3])
[1] 61.38157 190.31782
> 
> # Now a more complicated example - the ratio estimate for the city data.
> ratio <- function(d, w)
+ sum(d$x * w)/sum(d$u *w)
> city.v <- var.linear(empinf(data = city, statistic = ratio))
> norm.ci(t0 = ratio(city,rep(0.1,10)), var.t0 = city.v)
     conf
[1,] 0.95 1.167046 1.873579
> 
> 
> 
> cleanEx()
> nameEx("plot.boot")
> ### * plot.boot
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: plot.boot
> ### Title: Plots of the Output of a Bootstrap Simulation
> ### Aliases: plot.boot
> ### Keywords: hplot nonparametric
> 
> ### ** Examples
> 
> # We fit an exponential model to the air-conditioning data and use
> # that for a parametric bootstrap.  Then we look at plots of the
> # resampled means.
> air.rg <- function(data, mle) rexp(length(data), 1/mle)
> 
> air.boot <- boot(aircondit$hours, mean, R = 999, sim = "parametric",
+ ran.gen = air.rg, mle = mean(aircondit$hours))
> plot(air.boot)
> 
> # In the difference of means example for the last two series of the
> # gravity data
> grav1 <- gravity[as.numeric(gravity[, 2]) >= 7, ]
> grav.fun <- function(dat, w) {
+ strata <- tapply(dat[, 2], as.numeric(dat[, 2]))
+ d <- dat[, 1]
+ ns <- tabulate(strata)
+ w <- w/tapply(w, strata, sum)[strata]
+ mns <- as.vector(tapply(d * w, strata, sum)) # drop names
+ mn2 <- tapply(d * d * w, strata, sum)
+ s2hat <- sum((mn2 - mns^2)/ns)
+ c(mns[2] - mns[1], s2hat)
+ }
> 
> grav.boot <- boot(grav1, grav.fun, R = 499, stype = "w", strata = grav1[, 2])
> plot(grav.boot)
> # now suppose we want to look at the studentized differences.
> grav.z <- (grav.boot$t[, 1]-grav.boot$t0[1])/sqrt(grav.boot$t[, 2])
> plot(grav.boot, t = grav.z, t0 = 0)
> 
> # In this example we look at one of the partial correlations for the
> # head dimensions in the dataset frets.
> frets.fun <- function(data, i) {
+ pcorr <- function(x) {
+ # Function to find the correlations and partial correlations between
+ # the four measurements.
+ v <- cor(x)
+ v.d <- diag(var(x))
+ iv <- solve(v)
+ iv.d <- sqrt(diag(iv))
+ iv <- - diag(1/iv.d) %*% iv %*% diag(1/iv.d)
+ q <- NULL
+ n <- nrow(v)
+ for (i in 1:(n-1))
+ q <- rbind( q, c(v[i, 1:i], iv[i,(i+1):n]) )
+ q <- rbind( q, v[n, ] )
+ diag(q) <- round(diag(q))
+ q
+ }
+ d <- data[i, ]
+ v <- pcorr(d)
+ c(v[1,], v[2,], v[3,], v[4,])
+ }
> frets.boot <- boot(log(as.matrix(frets)), frets.fun, R = 999)
> plot(frets.boot, index = 7, jack = TRUE, stinf = FALSE, useJ = FALSE)
> 
> 
> 
> cleanEx()
> nameEx("saddle")
> ### * saddle
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: saddle
> ### Title: Saddlepoint Approximations for Bootstrap Statistics
> ### Aliases: saddle
> ### Keywords: smooth nonparametric
> 
> ### ** Examples
> 
> # To evaluate the bootstrap distribution of the mean failure time of
> # air-conditioning equipment at 80 hours
> saddle(A = aircondit$hours/12, u = 80)
$spa
       pdf        cdf 
0.01005866 0.24446677 

$zeta.hat
[1] -0.02580078

> 
> # Alternatively this can be done using a conditional Poisson
> saddle(A = cbind(aircondit$hours/12,1), u = c(80, 12),
+ wdist = "p", type = "cond")
Warning in dpois(y, mu, log = TRUE) : non-integer x = 10.090909
Warning in dpois(y, mu, log = TRUE) : non-integer x = 1.909091
Warning in dpois(y, mu, log = TRUE) : non-integer x = 10.090909
Warning in dpois(y, mu, log = TRUE) : non-integer x = 1.909091
$spa
       pdf        cdf 
0.01005943 0.24438736 

$zeta.hat
         A1          A2 
-0.02580805  0.89261577 

$zeta2.hat
[1] 0.6931472

> 
> # To use the Lugannani-Rice approximation to this
> saddle(A = cbind(aircondit$hours/12,1), u = c(80, 12),
+ wdist = "p", type = "cond",
+ LR = TRUE)
Warning in dpois(y, mu, log = TRUE) : non-integer x = 10.090909
Warning in dpois(y, mu, log = TRUE) : non-integer x = 1.909091
Warning in dpois(y, mu, log = TRUE) : non-integer x = 10.090909
Warning in dpois(y, mu, log = TRUE) : non-integer x = 1.909091
$spa
       pdf        cdf 
0.01005943 0.24447362 

$zeta.hat
         A1          A2 
-0.02580805  0.89261577 

$zeta2.hat
[1] 0.6931472

> 
> # Example 9.16 of Davison and Hinkley (1997) calculates saddlepoint
> # approximations to the distribution of the ratio statistic for the
> # city data.  Since the statistic is not in itself a linear combination
> # of random variables, its distribution cannot be found directly.
> # Instead the statistic is expressed as the solution to a linear
> # estimating equation and hence its distribution can be found.  We
> # get the saddlepoint approximation to the pdf and cdf evaluated at
> # t = 1.25 as follows.
> jacobian <- function(dat,t,zeta)
+ {
+ p <- exp(zeta*(dat$x-t*dat$u))
+ abs(sum(dat$u*p)/sum(p))
+ }
> city.sp1 <- saddle(A = city$x-1.25*city$u, u = 0)
> city.sp1$spa[1] <- jacobian(city, 1.25, city.sp1$zeta.hat) * city.sp1$spa[1]
> city.sp1
$spa
       pdf        cdf 
0.05565040 0.02436306 

$zeta.hat
[1] -0.02435547

> 
> 
> 
> cleanEx()
> nameEx("saddle.distn")
> ### * saddle.distn
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: saddle.distn
> ### Title: Saddlepoint Distribution Approximations for Bootstrap Statistics
> ### Aliases: saddle.distn
> ### Keywords: nonparametric smooth dplot
> 
> ### ** Examples
> 
> # The bootstrap distribution of the mean of the air-conditioning
> # failure data: fails to find value on R (and probably on S too)
> air.t0 <- c(mean(aircondit$hours), sqrt(var(aircondit$hours)/12))
> ## Not run: saddle.distn(A = aircondit$hours/12, t0 = air.t0)
> 
> # alternatively using the conditional Poisson
> saddle.distn(A = cbind(aircondit$hours/12, 1), u = 12, wdist = "p",
+ type = "cond", t0 = air.t0)
Warning in dpois(y, mu, log = TRUE) : non-integer x = 11.344718
(dpois() warnings of this form, with varying non-integer values of x, are repeated many times while the distribution is fitted; the repeats are omitted here)

Saddlepoint Distribution Approximations

Call : saddle.distn(A = cbind(aircondit$hours/12, 1), u = 12, wdist = "p", type = "cond", t0 = air.t0)

Quantiles of the Distribution

 0.1%   27.4
 0.5%   35.4
 1.0%   39.7
 2.5%   46.7
 5.0%   53.5
10.0%   62.5
20.0%   75.3
50.0%  104.5
80.0%  139.0
90.0%  158.8
95.0%  175.9
97.5%  191.2
99.0%  209.6
99.5%  222.4
99.9%  249.5

Smoothing spline used 20 points in the range 9.8 to 304.7.
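> # (A sketch, not part of the reference output: a rough check of the
> # saddlepoint quantiles above is an ordinary nonparametric bootstrap of the
> # mean; the object name air.chk is arbitrary and the calls are not run here.)
> #   air.chk <- boot(aircondit$hours, function(d, i) mean(d[i]), R = 999)
> #   quantile(air.chk$t[, 1], c(0.025, 0.05, 0.5, 0.95, 0.975))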
> 
> # Distribution of the ratio of a sample of size 10 from the bigcity
> # data, taken from Example 9.16 of Davison and Hinkley (1997).
> ratio <- function(d, w) sum(d$x *w)/sum(d$u * w)
> city.v <- var.linear(empinf(data = city, statistic = ratio))
> bigcity.t0 <- c(mean(bigcity$x)/mean(bigcity$u), sqrt(city.v))
> Afn <- function(t, data) cbind(data$x - t*data$u, 1)
> ufn <- function(t, data) c(0,10)
> saddle.distn(A = Afn, u = ufn, wdist = "b", type = "cond",
+ t0 = bigcity.t0, data = bigcity)
Warning in eval(expr, envir, enclos) : non-integer counts in a binomial glm!
Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
(the non-integer-counts warning is repeated many times as the approximation is fitted; the repeats are omitted here)

Saddlepoint Distribution Approximations

Call : saddle.distn(A = Afn, u = ufn, wdist = "b", type = "cond", t0 = bigcity.t0, data = bigcity)

Quantiles of the Distribution

 0.1%  1.070
 0.5%  1.092
 1.0%  1.104
 2.5%  1.122
 5.0%  1.139
10.0%  1.158
20.0%  1.184
50.0%  1.237
80.0%  1.304
90.0%  1.348
95.0%  1.392
97.5%  1.436
99.0%  1.494
99.5%  1.537
99.9%  1.636

Smoothing spline used 20 points in the range 1.014 to 1.96.
> 
> # From Example 9.16 of Davison and Hinkley (1997) again, we find the
> # conditional distribution of the ratio given the sum of city$u.
> Afn <- function(t, data) cbind(data$x-t*data$u, data$u, 1)
> ufn <- function(t, data) c(0, sum(data$u), 10)
> city.t0 <- c(mean(city$x)/mean(city$u), sqrt(city.v))
> saddle.distn(A = Afn, u = ufn, wdist = "p", type = "cond", t0 = city.t0,
+ data = city)
Warning in dpois(y, mu, log = TRUE) : non-integer x = 0.866400
(dpois() warnings of this form, with varying non-integer values of x, are again repeated many times; the repeats are omitted here)

Saddlepoint Distribution Approximations

Call : saddle.distn(A = Afn, u = ufn, wdist = "p", type = "cond", t0 = city.t0, data = city)

Quantiles of the Distribution

 0.1%  1.216
 0.5%  1.236
 1.0%  1.248
 2.5%  1.272
 5.0%  1.301
10.0%  1.340
20.0%  1.393
50.0%  1.502
80.0%  1.618
90.0%  1.680
95.0%  1.732
97.5%  1.777
99.0%  1.830
99.5%  1.866
99.9%  1.938

Smoothing spline used 20 points in the range 1.182 to 2.061.
> 
> 
> 
> cleanEx()
> nameEx("simplex")
> ### * simplex
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: simplex
> ### Title: Simplex Method for Linear Programming Problems
> ### Aliases: simplex
> ### Keywords: optimize
> 
> ### ** Examples
> 
> # This example is taken from Exercise 7.5 of Gill, Murray and Wright (1991).
> enj <- c(200, 6000, 3000, -200)
> fat <- c(800, 6000, 1000, 400)
> vitx <- c(50, 3, 150, 100)
> vity <- c(10, 10, 75, 100)
> vitz <- c(150, 35, 75, 5)
> simplex(a = enj, A1 = fat, b1 = 13800, A2 = rbind(vitx, vity, vitz),
+ b2 = c(600, 300, 550), maxi = TRUE)

Linear Programming Results

Call : simplex(a = enj, A1 = fat, b1 = 13800, A2 = rbind(vitx, vity, vitz), b2 = c(600, 300, 550), maxi = TRUE)

Maximization Problem with Objective Function Coefficients
  x1   x2   x3   x4 
 200 6000 3000 -200 

Optimal solution has the following values
  x1   x2   x3   x4 
 0.0  0.0 13.8  0.0 
The optimal value of the objective function is 41400.
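> # (A sketch, not part of the reference output: the reported optimum can be
> # checked by hand against the objective and the constraints; not run here.)
> #   x.opt <- c(0, 0, 13.8, 0)
> #   sum(enj * x.opt)                   # objective value, 41400
> #   sum(fat * x.opt)                   # must not exceed b1 = 13800
> #   rbind(vitx, vity, vitz) %*% x.opt  # must be at least c(600, 300, 550)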
> > > > cleanEx() > nameEx("smooth.f") > ### * smooth.f > > flush(stderr()); flush(stdout()) > > ### Name: smooth.f > ### Title: Smooth Distributions on Data Points > ### Aliases: smooth.f > ### Keywords: smooth nonparametric > > ### ** Examples > > # Example 9.8 of Davison and Hinkley (1997) requires tilting the resampling > # distribution of the studentized statistic to be centred at the observed > # value of the test statistic 1.84. In the book exponential tilting was used > # but it is also possible to use smooth.f. > grav1 <- gravity[as.numeric(gravity[, 2]) >= 7, ] > grav.fun <- function(dat, w, orig) { + strata <- tapply(dat[, 2], as.numeric(dat[, 2])) + d <- dat[, 1] + ns <- tabulate(strata) + w <- w/tapply(w, strata, sum)[strata] + mns <- as.vector(tapply(d * w, strata, sum)) # drop names + mn2 <- tapply(d * d * w, strata, sum) + s2hat <- sum((mn2 - mns^2)/ns) + c(mns[2] - mns[1], s2hat, (mns[2]-mns[1]-orig)/sqrt(s2hat)) + } > grav.z0 <- grav.fun(grav1, rep(1, 26), 0) > grav.boot <- boot(grav1, grav.fun, R = 499, stype = "w", + strata = grav1[, 2], orig = grav.z0[1]) > grav.sm <- smooth.f(grav.z0[3], grav.boot, index = 3) > > # Now we can run another bootstrap using these weights > grav.boot2 <- boot(grav1, grav.fun, R = 499, stype = "w", + strata = grav1[, 2], orig = grav.z0[1], + weights = grav.sm) > > # Estimated p-values can be found from these as follows > mean(grav.boot$t[, 3] >= grav.z0[3]) [1] 0.01402806 > imp.prob(grav.boot2, t0 = -grav.z0[3], t = -grav.boot2$t[, 3]) $t0 [1] -1.840118 $raw [1] 0.02163715 $rat [1] 0.02099078 $reg [1] 0.02174393 > > > # Note that for the importance sampling probability we must > # multiply everything by -1 to ensure that we find the correct > # probability. Raw resampling is not reliable for probabilities > # greater than 0.5. Thus > 1 - imp.prob(grav.boot2, index = 3, t0 = grav.z0[3])$raw [1] -0.009155757 > # can give very strange results (negative probabilities). > > > > cleanEx() > nameEx("tilt.boot") > ### * tilt.boot > > flush(stderr()); flush(stdout()) > > ### Name: tilt.boot > ### Title: Non-parametric Tilted Bootstrap > ### Aliases: tilt.boot > ### Keywords: nonparametric > > ### ** Examples > > # Note that these examples can take a while to run. > > # Example 9.9 of Davison and Hinkley (1997). > grav1 <- gravity[as.numeric(gravity[,2]) >= 7, ] > grav.fun <- function(dat, w, orig) { + strata <- tapply(dat[, 2], as.numeric(dat[, 2])) + d <- dat[, 1] + ns <- tabulate(strata) + w <- w/tapply(w, strata, sum)[strata] + mns <- as.vector(tapply(d * w, strata, sum)) # drop names + mn2 <- tapply(d * d * w, strata, sum) + s2hat <- sum((mn2 - mns^2)/ns) + c(mns[2]-mns[1],s2hat,(mns[2]-mns[1]-orig)/sqrt(s2hat)) + } > grav.z0 <- grav.fun(grav1, rep(1, 26), 0) > tilt.boot(grav1, grav.fun, R = c(249, 375, 375), stype = "w", + strata = grav1[,2], tilt = TRUE, index = 3, orig = grav.z0[1]) TILTED BOOTSTRAP Exponential tilting used First 249 replicates untilted, Next 375 replicates tilted to -2.821, Next 375 replicates tilted to 1.636. Call: tilt.boot(data = grav1, statistic = grav.fun, R = c(249, 375, 375), stype = "w", strata = grav1[, 2], tilt = TRUE, index = 3, orig = grav.z0[1]) Bootstrap Statistics : original bias std. error t1* 2.846154 -0.4487564 2.500644 t2* 2.392353 -0.3221155 1.187574 t3* 0.000000 -0.8862944 2.208945 > > > # Example 9.10 of Davison and Hinkley (1997) requires a balanced > # importance resampling bootstrap to be run. In this example we > # show how this might be run. 
> acme.fun <- function(data, i, bhat) {
+ d <- data[i,]
+ n <- nrow(d)
+ d.lm <- glm(d$acme~d$market)
+ beta.b <- coef(d.lm)[2]
+ d.diag <- boot::glm.diag(d.lm)
+ SSx <- (n-1)*var(d$market)
+ tmp <- (d$market-mean(d$market))*d.diag$res*d.diag$sd
+ sr <- sqrt(sum(tmp^2))/SSx
+ c(beta.b, sr, (beta.b-bhat)/sr)
+ }
> acme.b <- acme.fun(acme, 1:nrow(acme), 0)
> acme.boot1 <- tilt.boot(acme, acme.fun, R = c(499, 250, 250),
+ stype = "i", sim = "balanced", alpha = c(0.05, 0.95),
+ tilt = TRUE, index = 3, bhat = acme.b[1])
> 
> 
> 
> cleanEx()
> nameEx("tsboot")
> ### * tsboot
> 
> flush(stderr()); flush(stdout())
> 
> ### Name: tsboot
> ### Title: Bootstrapping of Time Series
> ### Aliases: tsboot ts.return
> ### Keywords: nonparametric ts
> 
> ### ** Examples
> 
> lynx.fun <- function(tsb) {
+ ar.fit <- ar(tsb, order.max = 25)
+ c(ar.fit$order, mean(tsb), tsb)
+ }
> 
> # the stationary bootstrap with mean block length 20
> lynx.1 <- tsboot(log(lynx), lynx.fun, R = 99, l = 20, sim = "geom")
> 
> # the fixed block bootstrap with length 20
> lynx.2 <- tsboot(log(lynx), lynx.fun, R = 99, l = 20, sim = "fixed")
> 
> # Now for model based resampling we need the original model
> # Note that for all of the bootstraps which use the residuals as their
> # data, we set orig.t to FALSE since the function applied to the residual
> # time series will be meaningless.
> lynx.ar <- ar(log(lynx))
> lynx.model <- list(order = c(lynx.ar$order, 0, 0), ar = lynx.ar$ar)
> lynx.res <- lynx.ar$resid[!is.na(lynx.ar$resid)]
> lynx.res <- lynx.res - mean(lynx.res)
> 
> lynx.sim <- function(res,n.sim, ran.args) {
+ # random generation of replicate series using arima.sim
+ rg1 <- function(n, res) sample(res, n, replace = TRUE)
+ ts.orig <- ran.args$ts
+ ts.mod <- ran.args$model
+ mean(ts.orig)+ts(arima.sim(model = ts.mod, n = n.sim,
+ rand.gen = rg1, res = as.vector(res)))
+ }
> 
> lynx.3 <- tsboot(lynx.res, lynx.fun, R = 99, sim = "model", n.sim = 114,
+ orig.t = FALSE, ran.gen = lynx.sim,
+ ran.args = list(ts = log(lynx), model = lynx.model))
> 
> # For "post-blackening" we need to define another function
> lynx.black <- function(res, n.sim, ran.args) {
+ ts.orig <- ran.args$ts
+ ts.mod <- ran.args$model
+ mean(ts.orig) + ts(arima.sim(model = ts.mod,n = n.sim,innov = res))
+ }
> 
> # Now we can apply the two types of block resampling again, but this
> # time with post-blackening.
> lynx.1b <- tsboot(lynx.res, lynx.fun, R = 99, l = 20, sim = "fixed",
+ n.sim = 114, orig.t = FALSE, ran.gen = lynx.black,
+ ran.args = list(ts = log(lynx), model = lynx.model))
> 
> lynx.2b <- tsboot(lynx.res, lynx.fun, R = 99, l = 20, sim = "geom",
+ n.sim = 114, orig.t = FALSE, ran.gen = lynx.black,
+ ran.args = list(ts = log(lynx), model = lynx.model))
> 
> # To compare the observed order of the bootstrap replicates we
> # proceed as follows.
> table(lynx.1$t[, 1])

 2  3  4  5  7  8 10 11 12 13 14 
16 19 38  4  6  3  1  9  1  1  1 
> table(lynx.1b$t[, 1])

 2  3  4  5  6  7  8 11 12 14 15 
 6  2 22  6  4  6  3 40  7  1  2 
> table(lynx.2$t[, 1])

 2  3  4  5  6  7  8 10 11 13 
12 18 51  5  2  3  1  2  4  1 
> table(lynx.2b$t[, 1])

 2  3  4  5  6  7  8  9 10 11 12 13 15 21 
 2  1 21  4  1 10  4  1  3 45  3  1  2  1 
> table(lynx.3$t[, 1])

 2  3  4  5  6  7  8  9 10 11 12 13 14 15 
 4  8 11  2  1  4  2  2  2 54  6  1  1  1 
> # Notice that the post-blackened and model-based bootstraps preserve
> # the true order of the model (11) in many more cases than the others.
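> # (A sketch, not part of the reference output: the comparison above can be
> # summarised as the proportion of replicates recovering order 11; not run.)
> #   sapply(list(lynx.1, lynx.2, lynx.1b, lynx.2b, lynx.3),
> #          function(b) mean(b$t[, 1] == 11))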
> > > > cleanEx() > nameEx("var.linear") > ### * var.linear > > flush(stderr()); flush(stdout()) > > ### Name: var.linear > ### Title: Linear Variance Estimate > ### Aliases: var.linear > ### Keywords: nonparametric > > ### ** Examples > > # To estimate the variance of the ratio of means for the city data. > ratio <- function(d,w) sum(d$x * w)/sum(d$u * w) > var.linear(empinf(data = city, statistic = ratio)) [1] 0.03248701 > > > > ### *