gmodels/0000755000176200001440000000000014572111402011701 5ustar liggesusersgmodels/NAMESPACE0000644000176200001440000000224314571426730013135 0ustar liggesusers# Generated by roxygen2: do not edit by hand S3method(base::print,fit_contrast) S3method(base::print,glh.test) S3method(base::summary,glh.test) S3method(gmodels::ci,binom) S3method(gmodels::ci,estimable) S3method(gmodels::ci,fit_contrast) S3method(gmodels::ci,lm) S3method(gmodels::ci,lme) S3method(gmodels::ci,numeric) S3method(gmodels::est_p_ci,fit_contrast) S3method(gmodels::est_p_ci,lm) S3method(gmodels::estimable,default) S3method(gmodels::estimable,mlm) S3method(gmodels::fit.contrast,lm) S3method(gmodels::fit.contrast,lme) S3method(stats::coef,fit_contrast) export(.to.est) export(CrossTable) export(ci) export(ci.binom) export(coefFrame) export(est_p_ci) export(estimable) export(fast.prcomp) export(fast.svd) export(fit.contrast) export(glh.test) export(make.contrasts) importFrom(MASS,ginv) importFrom(gdata,frameApply) importFrom(gdata,nobs) importFrom(stats,chisq.test) importFrom(stats,coef) importFrom(stats,family) importFrom(stats,fisher.test) importFrom(stats,mcnemar.test) importFrom(stats,pchisq) importFrom(stats,pf) importFrom(stats,pt) importFrom(stats,qbeta) importFrom(stats,qt) importFrom(stats,sd) importFrom(stats,summary.glm) importFrom(stats,summary.lm) gmodels/ChangeLog0000644000176200001440000004372514570413077013501 0ustar liggesusers2018-06-25 14:53 warnes * ChangeLog, NEWS, inst/ChangeLog, inst/NEWS: Move NEWS and ChangeLog to top level. 2018-06-25 14:52 warnes * ChangeLog, NEWS: Remove NEWS and ChangeLog soft links 2018-06-20 21:06 warnes * inst/ChangeLog: Update ChangeLog for gmodels 2.18.0. 2018-06-20 21:04 warnes * DESCRIPTION, inst/NEWS: Update DESCRIPTION and NEWS for gmodels 2.18.0. 2018-06-20 21:02 warnes * NAMESPACE: Add imports from stats package. 2018-06-20 20:51 warnes * DESCRIPTION, inst/ChangeLog: Update DESCRIPTION and ChangeLog for gmodels 2.16.1 2018-06-20 20:50 warnes * NAMESPACE: Remove obsolete functions. 2018-06-20 20:46 warnes * R/fit.contrast.R, man/ci.Rd, man/estimable.Rd, man/fit.contrast.Rd: Fix R CMD check issues. 2018-06-20 20:42 warnes * R/ci.R: Remove trailing whitespace 2018-06-20 20:40 warnes * R/est_p_ci.R: Use base::trimws() instead of gdata::trim() 2018-06-20 20:35 warnes * test: Remove duplicated/incorrect 'test' directory. 2017-06-12 23:53 warnes * DESCRIPTION: Update imports 2017-06-12 23:53 warnes * NAMESPACE: Add imports for base R packages to NAMESPACE 2017-06-05 21:27 warnes * inst/NEWS: Update NEWS and ChangeLog files for 2.18.0. 2016-08-15 19:11 warnes * test, test/lme-test.R, test/test_estimable_mlm.R: Add tests for mlm and (obsolete) lme 2016-08-15 19:05 warnes * R/est_p_ci.R: Add est_p_ci generic and lm method 2016-08-12 17:15 warnes * DESCRIPTION, NAMESPACE, R/ci.R, R/fit.contrast.R, inst/ChangeLog, inst/NEWS, man/ci.Rd, man/estimable.Rd, man/fit.contrast.Rd: Updates... 2015-07-22 00:53 warnes * DESCRIPTION: Update gmodels DESCRIPTION 2015-07-22 00:48 warnes * test, tests, tests/lme-test.R: Renamed 'test' directory to 'tests', commented out tests for lme4 which has a changed API 2015-07-20 23:38 warnes * DESCRIPTION, NAMESPACE: Changs to squash new R CMD check warnings 2015-07-19 03:22 warnes * DESCRIPTION, NAMESPACE, R/ci.R, R/est.mer.R, R/estimable.R, R/fit.contrast.R, R/to.est.R, inst/NEWS, man/ci.Rd, man/estimable.Rd, man/fit.contrast.Rd: - Removed references to 'mer' objects, sincel the nlme4 update is not backwards compatible with my code. 
- Removed 'require' calls. 2015-07-19 02:34 warnes * DESCRIPTION, inst/ChangeLog, inst/NEWS: Update DESCRIPTION, ChangeLog, and NEWS for gmodels 2.16.1 2015-07-19 02:30 warnes * R/ci.R, man/ci.Rd: ci.binom() was using an incorrect method for calculating binomial confidence interval. The revised code calculates the Clopper-Pearson 'exect' interval, which is *conservative* due to the discrete nature of the binomial distribution. 2015-05-02 17:38 warnes * Rename 'trunk' to 'pkg' for compatibility with R-forge 2015-04-06 21:52 warnes * Add ChangeLog files to repository 2014-07-24 15:18 warnes * Update NEWS for gmodels 2.16.0 2014-07-24 15:14 warnes * - Estimable now adds the class 'estimable' to returned objects. - New ci() method for estimable objects. - Minor improvemets to man page formatting. 2013-07-18 14:09 warnes * Looks like Brian Ripley repackaged for R 3.0.0 and bumped version number, so change it to 2.15.5 2013-07-18 13:57 warnes * Update for gmodels 2.15.4 2013-07-18 13:54 warnes * Update to current Rd syntax 2013-07-18 13:46 warnes * Correct bug in estimable.mlm 2013-07-15 18:13 warnes * Remove unused argument to ci.mer 2012-06-28 00:49 warnes * Update for gmodels version 2.15.3. 2012-06-28 00:47 warnes * Move percentile() function to a separate file. 2012-06-28 00:41 warnes * Update est.mer() to support new S4 "mer" class. 2012-06-28 00:40 warnes * Make lme4 example executable. 2012-06-27 22:42 warnes * Add test code submitted by Ariel.Muldoon@oregonstate.edu. 2012-04-19 22:09 warnes * Update for release 2.15.2 2012-04-19 22:07 warnes * Update version and date. 2012-04-19 22:06 warnes * The 'Design' package has been replaced my 'rms', so update man page references. 2012-04-19 22:05 warnes * More fixes for support of S4 'mer' class from lme4 package. 2012-04-19 21:13 warnes * Split long line. 2012-04-19 17:50 warnes * Changes to pass R CMD check 2011-12-14 18:17 warnes * Improve formatting of ci.mer(). 2011-12-14 18:14 warnes * Modify est.mer to work with recent lme4 'mer' S4 objects. 2011-01-16 22:17 warnes * Fix warnings reported by R CMD check. Update version number to 2.15.1. 2009-05-09 05:29 warnes * Add tests for lme4 'mer' objects 2009-05-09 05:04 warnes * Update for 2.15.0 2009-05-09 05:02 warnes * Update description for 2.15.0 2009-05-09 05:01 warnes * Add support for lme4's 'mer' objects 2009-05-09 05:00 warnes * Add support for lme4's 'mer' objects 2009-05-09 04:53 warnes * Fix .Rd syntax error 2009-05-09 04:37 warnes * Add softlinks for ChangeLog and NEWS to top level dir for convenience 2009-05-09 04:36 warnes * Move ChangeLog and NEWS files into inst directory 2009-05-09 04:00 warnes * Update Greg's email address 2008-04-10 14:05 warnes * Improve languages a bit 2008-01-02 16:56 warnes * Update Marc's email address 2007-12-12 21:16 warnes * Move copyright notice for Randall's contributions from License section to Author section of the DESCRIPTION file. 2007-12-07 22:21 warnes * Update DESCRIPTION and NEWS for release 2.14.1 2007-12-07 22:10 warnes * Correct minor typos in man page for estimable() 2007-12-07 22:09 warnes * Add support for lme models to estimable() 2007-12-07 22:07 warnes * Replace non-ascii characters in Soren's name with (equivalent?) ascii character to avoid character encoding issues. 2007-10-22 02:24 warnes * Clarify GPL version 2007-07-26 00:20 warnes * Add support for mlm to estimable(). 2007-07-26 00:10 warnes * Add estimable method for mlm objects 2007-03-09 22:35 warnes * Remove stray character 2007-03-09 20:10 warnes * Update NEWS file. 
2007-03-09 20:07 warnes * Update version number 2007-03-09 20:06 warnes * Minor code formatting changes 2007-03-09 20:06 warnes * Flip lower and upper interval in ci.lmer(). Add example to man page. 2007-03-09 19:43 warnes * Fix some old email addressses that got missed 2006-11-29 00:11 warnes * Update for 2.13.1 2006-11-29 00:05 warnes * Correct declartion of S3 methods for estimable() 2006-11-29 00:05 warnes * Add additional suggested packages 2006-11-29 00:04 warnes * - Add generic - Fix code vs. doc inconsistiencies 2006-11-28 22:38 warnes * Remove extraneous comma that causes errors in R 2.5.0 2006-11-27 20:45 warnes * Update for 2.13.1 2006-11-27 20:36 warnes * Add missing export of methods for estimable() 2006-11-14 22:25 ggorjan * Removed executable property 2006-08-02 22:21 warnes * Update my email address 2006-06-06 19:17 nj7w * Updated ci, estimable and fit.contrast as per Randall Johnson 2006-06-05 21:00 nj7w * Additions as per Randall C Johnson 2006-06-05 20:59 nj7w * Additions as per Randall C Johnson 2006-06-05 20:57 nj7w * - New function to estimate CI's and p-values using mcmcsamp() from the Matrix package 2006-05-05 18:29 nj7w * Fixed an error: According to Marc Schwartz - there was an error when a matrix without dimnames(or names(dimnames)) was passed as x argument 2005-12-13 16:03 nj7w * Removed ChangeLog 2005-12-13 16:02 nj7w * Updated NEWS 2005-12-12 21:57 nj7w * Updated version number for CRAN 2005-12-04 06:27 warnes * Update for 2.11.0 2005-12-04 06:12 warnes * Integration of code changes suggested by Randall C Johnson to add support for lmer (lme version 4) objects to ci(), estimable(), and fit.contrast(). Addition of simplified coefficient specificaiton for estimable() based on a function provided by Randall C Johnson. It is now possible to do things like: estimable(reg, c("xB"=1,"xD"=-1) ) instead of: estimable(reg, c( 0, 1, 0, -1) ) which should make estimable much easier to use for large models. 2005-12-01 16:54 nj7w * Updated Greg's email address 2005-10-27 11:21 warnes * Update version number. Bump minor version since we added functionality. 2005-10-27 10:33 warnes * Add ci.binom() to NAMESPACE, bump version 2005-10-26 13:39 warnes * Add ci.binom 2005-10-25 21:18 warnes * Add gdata::nobs to import list. Needed by ci() 2005-09-12 15:44 nj7w * Updated Greg's email 2005-09-07 15:31 nj7w * Fixed man page 2005-09-06 21:34 nj7w * Updated DESCRIPTION 2005-09-06 21:34 nj7w * Added NEWS 2005-09-06 16:21 nj7w * Fixed the Package name 2005-09-02 23:10 nj7w * Added ChangeLog 2005-08-31 16:28 nj7w * Added DESCRIPTION file 2005-08-31 16:27 nj7w * removed DESCRIPTION.in 2005-07-11 21:35 nj7w * Revision based on Marc Schwartz's suggestions: 1) Added 'dnn' argument to enable specification of dimnames as per table() 2) Corrected bug in SPSS output for 1d table, where proportions were being printed and not percentages ('%' output) 2005-06-09 14:20 nj7w * Updating the version number, and various help files to synchronize splitting of gregmisc bundle in 4 individual components. 2005-06-09 14:13 nj7w * Updates by Marc Schwartz: CrossTable: # Revision 2.0 2005/04/27 # Added 'format = "d"' to all table count output # so that large integers do not print in # scientific notation 2005-05-13 18:59 nj7w * 1) Using dQuote.ascii function in read.xls as the new version of dQuote doesn't work proprly with UTF-8 locale. 2) Modified CrossTable.Rd usage in gmodels 3) Modified heatmap.2 usage in gplots. 2005-05-11 13:51 warnes * Add dependency on gdata::frameApply. 
2005-03-31 20:32 warnes * Add ceofFrame function to NAMESPACE 2005-03-31 19:05 warnes * coefFrame example needs to properly load ELISA data from gtools package 2005-03-31 18:31 warnes * Ensure that each file has $Id$ header, and no $Log$ 2005-03-31 18:30 warnes * Add coefFrame() function contributed by Jim Rogers 2005-01-18 19:53 warnes * Removed Windows Line Endings 2005-01-14 21:40 nj7w * Updated the manual to reflect prop.chisq change in its R file. 2005-01-14 19:14 warnes * Nitin added display of the Chisquare contribution of each cell, as suggested by Greg Snow. 2005-01-12 20:50 warnes * Add dependency on R 1.9.0+ to prevent poeple from installing on old versions of R which don't support namespaces. 2004-12-23 19:32 nj7w * Split the function print.CrossTable.vector in two parts - for SAS behaiour and SPSS behaviour. Also put the code of printing statistics in a function 'print.statistics' 2004-12-21 22:38 warnes * Added & extended changes made by Nitin to implement 'SPSS' format, as suggested by Dirk Enzmann . 2004-09-30 21:03 warneg * Fix typos. 2004-09-27 21:01 warneg * Updated to pass R CMD check. 2004-09-03 22:44 warneg * Add explicit package to call to quantcut in example. 2004-09-03 17:27 warneg * initial bundle checkin 2004-09-02 17:14 warneg * Initial revision 2004-05-25 02:57 warnes * Updates from Mark Schwartz. 2004-04-13 11:41 warnes * Fix latex warning: it doesn't like double subscripts. 2004-03-26 22:28 warnes * Reflect movement of code from 'mva' package to 'stats' in R 1.9.0. 2004-03-25 20:09 warnes * - Estimable was reporting sqrt(X^2) rather than X^2 in the output. - Provide latex math markup for linear algebra expressions in help text. - Other clarifications in help text 2004-03-25 18:17 warnes * Add enhancements to estimable() provided by S�ren H�jsgaard \email{sorenh@agrsci.dk}: I have made a modified version of the function [..] which 1) also works on geese and gee objects and 2) can test hypotheses af the forb L * beta = beta0 both as a single Wald test and row-wise for each row in L. 2003-11-17 21:40 warnes * - Fix incorrect handling of glm objects by fit.contrast, as reported by Ulrich Halekoh, Phd . - Add regression test code to for this bug. 2003-08-07 03:49 warnes * - Fixed incorrect denominator in standard error for mean in ci.default. 2003-04-22 17:24 warnes * - the variable 'df' was used within the lme code section overwriting the argument 'df'. 2003-03-12 17:58 warnes * - Fixed a typo in the example - Added to lme example 2003-03-07 15:48 warnes * - Minor changes to code to allow the package to be provided as an S-Plus chapter. 2003-01-30 21:53 warnes * - Renamed 'contrast.lm' to 'fit.contrast'. This new name is more descriptive and makes it easier to create and use methods for other classes, eg lme. - Enabled fit.contrast for lme object now that Doug Bates has provided the necessary support for contrasts in the nlme package. - New contrast.lm function which generates a 'depreciated' warning and calls fit.contrast - Updated help text to match changes. 2003-01-30 21:41 warnes * - Removed argument 'correct' and now print separate corrected values for 2 x 2 tables. - Added arguments 'prop.r', 'prop.c' and 'prop.t' to toggle printing of row, col and table percentages. Default is TRUE. - Added argument 'fisher' to toggle fisher exact test. Default is FALSE. - Added McNemar test to statistics and argument 'mcnemar' to toggle test. Default is FALSE. 
- Added code to generate an invisible return list containing table counts, proportions and the results of the appropriate statistical tests. 2003-01-30 14:58 warnes * - Added explicit check to ensure that the number of specified contrasts is less than or equal to the ncol - 1. Previously, this failed with an obtuse error message when the contrast matrix had row names, and silently dropped contrasts over ncol-1. 2002-11-04 14:13 warnes * - Moved fisher.test() to after table is printed, so that table is still printed in the event that fisher.test() results in errors. 2002-10-29 23:06 warnes * - Fixes to fast.svd to make it actually work. - Updates to man page to fix mistmatches between code and docs and to fix warnings. 2002-10-29 23:00 warnes * - Moved make.contrasts to a separate file. - Enhanced make contrasts to better label contrast matrix, to give how.many a default value, and to coerce vectors into row matrixes. - Added help page for make.contrasts. - Added link from contrasts.lm seealso to make.contrasts. 2002-10-29 19:29 warnes * Initial checkin for fast.prcomp() and fast.svd(). 2002-09-26 12:11 warnes * - Added note and example code to illustrate how to properly compute contrasts for the first factor in the model. 2002-09-24 19:12 warnes * - Fixed a typo. 2002-09-23 14:27 warnes * - Fixed syntax errors in barplot2.Rd and CrossTable.Rd - Fixed incorrect translation of 'F' (distribution) to 'FALSE' in glh.test.Rd 2002-09-23 13:59 warnes * - Modified all files to include CVS Id and Log tags. 2002-09-23 13:38 warnes * - Added CrossTable() and barplot2() code and docs contributed by Marc Schwartz. - Permit combinations() to be used when r>n provided repeat.allowed=TRUE - Bumped up version number 2002-08-01 19:37 warnes * - Corrected documentation mismatch for ci, ci.default. - Replaced all occurences of '_' for assignment with '<-'. - Replaced all occurences of 'T' or 'F' for 'TRUE' and 'FALSE' with the spelled out version. - Updaded version number and date. 2002-04-09 00:51 warneg * Checkin for version 0.5.3 2002-03-26 21:22 warneg * - Changed methods to include '...' to match the generic. - Updated for version 0.5.1 2002-03-26 15:30 warneg * Removed incorrect link to 'contrast' from seealso. 2002-02-20 20:09 warneg * Minor changes, typo and formatting fixes. 2002-01-17 23:51 warneg * - Fixed errror in last example by adding 'conf.int' parameter to 'estimable' call. 2002-01-17 23:42 warneg * - Fixed typo in code that resulted in an syntax error. 2002-01-10 17:35 warneg * - print.glh.test() was using cat() to printing the call. This didn't work and generated an error. 2001-12-19 20:06 warneg * - Fixed display of formulae. - Added description of return value 2001-12-19 20:05 warneg * - Removed extra element of return object. 2001-12-18 22:14 warneg * - Updated documentation to reflect change of parameters from 'alpha' to 'conf.int', including the new optional status of the confidence intervals. 2001-12-18 22:12 warneg * - Modified to make confidence intervals optional. Changed 'alpha' parameter giving significance level to 'conf.int' giving confidence level. 2001-12-18 21:36 warneg * - Added summary.glh.test to alias, usage, and example sections. 2001-12-18 21:34 warneg * - Modified to work correctly when obj is of class 'aov' by specifying summary.lm instead of summary. This ensures that the summary object has the fields we need. 
- Moved detailed reporting of results from 'print' to 'summary' function and added a simpler report to 'print' 2001-12-18 21:27 warneg * - Modified to work correctly when obj is of class 'aov' by specifying summary.lm instead of summary. This ensures that the summary object has the fields we need. 2001-12-18 00:45 warneg * Initial checkin. 2001-12-17 18:59 warneg * - Fixed spelling errors. 2001-12-17 18:52 warneg * - Fixed the link to contrasts.lm. - Rephrased title/description to be more clear. 2001-12-10 19:35 warneg * Renamed 'contrsts.coeff.Rd' to 'estimable.Rd' corresponding to function rename. 2001-12-10 19:26 warneg * renamed from contrast.coeff.R to estimable.R (incorrectly via contrast.lm.R) 2001-12-07 19:50 warneg * - Added text noting that lme is now supported. 2001-12-07 19:19 warneg * - Fixed typo: DF column was being filled in with p-value. 2001-12-07 18:49 warneg * - Added ci.lme method to handle lme objects. 2001-10-16 23:15 warneg * Fixed unbalanced brace. 2001-08-25 05:52 warneg * - Added CVS header. - Added my email address. 2001-05-30 13:23 warneg * Initial revision gmodels/README.md0000644000176200001440000000263514571747424013210 0ustar liggesusers # gmodels [![CRAN status](https://www.r-pkg.org/badges/version/gmodels)](https://CRAN.R-project.org/package=gmodels) [![Lifecycle: stable](https://img.shields.io/badge/lifecycle-stable-brightgreen.svg)](https://lifecycle.r-lib.org/articles/stages.html#stable) [![R-CMD-check](https://github.com/r-gregmisc/gmodels/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/r-gregmisc/gmodels/actions/workflows/R-CMD-check.yaml) Tools for fitting linear models that complement those in base `R`. Provided functions include: * `ci` - Compute Confidence Intervals * `coefFrame` - Return model parameters in a data frame * `CrossTable` - Cross Tabulation with Tests for Factor Independence * `estimable` - Compute contrasts and estimable linear functions * `fast.prcomp` - Efficient computation of principal components and singular value decomposition * `fit.contrast` - Compute and test arbitrary contrasts for regression objects * `glh.test` - Test a General Linear Hypothesis for a Regression Model * `make.contrasts` - Construct a User-Specified Contrast Matrix ## Installation Install the released version of gmodels from [CRAN](https://cran.r-project.org) with: ```r install.packages('gmodels') ``` Install the development version of gmodels from [GitHub](https://github.com/) with: ``` r # install.packages("devtools") devtools::install_github("r-gregmisc/gmodels") ``` gmodels/man/0000755000176200001440000000000014571376347012500 5ustar liggesusersgmodels/man/ci.Rd0000644000176200001440000000337414571436122013355 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/ci.R \name{ci} \alias{ci} \alias{ci.numeric} \alias{ci.binom} \alias{ci.lm} \alias{ci.lme} \alias{ci.estimable} \alias{ci.fit_contrast} \title{Compute Confidence Intervals} \usage{ ci(x, confidence = 0.95, alpha = 1 - confidence, ...) \method{ci}{numeric}(x, confidence = 0.95, alpha = 1 - confidence, na.rm = FALSE, ...) } \arguments{ \item{x}{object from which to compute confidence intervals.} \item{confidence}{confidence level. Defaults to 0.95.} \item{alpha}{type one error rate. 
Defaults to 1.0-\code{confidence}} \item{\dots}{Arguments for methods} \item{na.rm}{\code{logical} indicating whether missing values should be removed.} } \value{ vector or matrix with one row per model parameter and elements/columns \code{Estimate}, \verb{CI lower}, \verb{CI upper}, \verb{Std. Error}, \code{DF} (for lme objects only), and \code{p-value}. } \description{ Compute and display confidence intervals for model estimates. Methods are provided for the mean of a numeric vector \code{ci.default}, the probability of a binomial vector \code{ci.binom}, and for \code{lm}, \code{lme}, and \code{mer} objects are provided. } \examples{ # mean and confidence interval ci( rnorm(10) ) # binomial proportion and exact confidence interval b <- rbinom( prob=0.75, size=1, n=20 ) ci.binom(b) # direct call class(b) <- 'binom' ci(b) # indirect call # confidence intervals for regression parameteres data(state) reg <- lm(Area ~ Population, data=as.data.frame(state.x77)) ci(reg) } \seealso{ \code{\link[stats:confint]{stats::confint()}}, \code{\link[stats:lm]{stats::lm()}}, \code{\link[stats:summary.lm]{stats::summary.lm()}} } \author{ Gregory R. Warnes \email{greg@warnes.net} } \keyword{regression} gmodels/man/estimable.Rd0000644000176200001440000001260014571436122014717 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/estimable.R \name{estimable} \alias{estimable} \alias{estimable.default} \alias{estimable.mlm} \title{Contrasts and estimable linear functions of model coefficients} \usage{ estimable(obj, cm, beta0, conf.int = NULL, show.beta0, ...) \method{estimable}{default}(obj, cm, beta0, conf.int = NULL, show.beta0, joint.test = FALSE, ...) } \arguments{ \item{obj}{Regression (lm, glm, lme, mer, mlm) object.} \item{cm}{Vector, List, or Matrix specifying estimable linear functions or contrasts. See below for details.} \item{beta0}{Vector of null hypothesis values} \item{conf.int}{Confidence level. If provided, confidence intervals will be computed.} \item{show.beta0}{Logical value. If TRUE a column for beta0 will be included in the output table. Defaults to TRUE when beta0 is specified, FALSE otherwise.} \item{...}{ignored} \item{joint.test}{Logical value. If TRUE a 'joint' Wald test for the hypothesis \eqn{L \beta=\beta_0} is performed. Otherwise 'row-wise' tests are performed, i.e. \eqn{(L \beta)[i]=\beta_0[i]}.} } \value{ Returns a matrix with one row per linear function. Columns contain the beta0 value (optional, see \code{show.beta0} above), estimated coefficients, standard errors, t values, degrees of freedom, two-sided p-values, and the lower and upper endpoints of the 1-alpha confidence intervals. } \description{ Compute and test contrasts and other estimable linear functions of model coefficients for for lm, glm, lme, mer, and geese objects } \details{ \code{estimable} computes an estimate, test statitic, significance test, and (optional) confidence interval for each linear functions of the model coefficients specified by \code{cm}. The estimable function(s) may be specified via a vector, list, or matrix. If \code{cm} is a vector, it should contained named elements each of which gives the coefficient to be applied to the corresponding parameter. These coefficients will be used to construct the contrast matrix, with unspecified model parameters assigned zero coefficients. If \code{cm} is a list, it should contain one or more coefficient vectors, which will be used to construct rows of the contrast matrix. 
If \code{cm} is a matrix, column names must match (a subset of) the model parameters, and each row should contain the corresponding coefficient to be applied. Model parameters which are not present in the set of column names of \code{cm} will be set to zero. The estimates and their variances are obtained by applying the contrast matrix (generated from) \code{cm} to the model estimates variance-covariance matrix. Degrees of freedom are obtained from the appropriate model terms. The user is responsible for ensuring that the specified linear functions are meaningful. For computing contrasts among levels of a single factor, \code{fit.contrast} may be more convenient. For computing contrasts between two specific combinations of model parameters, the \code{contrast} function in Frank Harrell's 'rms' library (formerly 'Design') may be more convenient. \%The \code{.wald} function is called internally by \code{estimable} and \%is not intended for direct use. } \note{ The estimated fixed effect parameters of \code{lme} objects may have different degrees of freedom. If a specified contrast includes nonzero coefficients for parameters with differing degrees of freedom, the smallest number of degrees of freedom is used and a warning message is issued. } \examples{ # setup example data y <- rnorm(100) x <- cut(rnorm(100, mean=y, sd=0.25),c(-4,-1.5,0,1.5,4)) levels(x) <- c("A","B","C","D") x2 <- rnorm(100, mean=y, sd=0.5) # simple contrast and confidence interval reg <- lm(y ~ x) estimable(reg, c( 0, 1, 0, -1) ) # full coefficient vector estimable(reg, c("xB"=1,"xD"=-1) ) # just the nonzero terms # Fit a spline with a single knot at 0.5 and plot the *pointwise* # confidence intervals library(gplots) pm <- pmax(x2-0.5, 0) # knot at 0.5 reg2 <- lm(y ~ x + x2 + pm ) range <- seq(-2, 2, , 50) tmp <- estimable(reg2, cm=cbind( '(Intercept)'=1, 'xC'=1, 'x2'=range, 'pm'=pmax(range-0.5, 0) ), conf.int=0.95) plotCI(x=range, y=tmp[, 1], li=tmp[, 6], ui=tmp[, 7]) # Fit both linear and quasi-Poisson models to iris data, then compute # joint confidence intervals on contrasts for the Species and # Sepal.Width by Species interaction terms. data(iris) lm1 <- lm (Sepal.Length ~ Sepal.Width + Species + Sepal.Width:Species, data=iris) glm1 <- glm(Sepal.Length ~ Sepal.Width + Species + Sepal.Width:Species, data=iris, family=quasipoisson("identity")) cm <- rbind( 'Setosa vs. Versicolor' = c(0, 0, 1, 0, 1, 0), 'Setosa vs. Virginica' = c(0, 0, 0, 1, 0, 1), 'Versicolor vs. Virginica'= c(0, 0, 1,-1, 1,-1) ) estimable(lm1, cm) estimable(glm1, cm) } \seealso{ \code{\link[=fit.contrast]{fit.contrast()}}, \code{\link[stats:lm]{stats::lm()}}, \code{\link[nlme:lme]{nlme::lme()}}, \code{\link[stats:contrasts]{stats::contrasts()}}, \code{\link[rms:contrast]{rms::contrast()}} } \author{ BXC (Bendix Carstensen) \email{b@bxc.dk}, Gregory R. Warnes \email{greg@warnes.net}, Soren Hojsgaard \email{sorenh@agrsci.dk}, and Randall C Johnson \email{rjohnson@ncifcrf.gov} } \keyword{models} \keyword{regression} gmodels/man/est_p_ci.Rd0000644000176200001440000000221114571434253014537 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/est_p_ci.R \name{est_p_ci} \alias{est_p_ci} \title{Display estimate, confidence interval and p-value for one model term} \usage{ est_p_ci(model, term, mult = 1, digits = 2, ...) 
} \arguments{ \item{model}{model object} \item{term}{model term} \item{mult}{scale (multiply) the parameter by this factor} \item{digits}{number of significant digits to display} \item{...}{optional arguments} } \description{ Display estimate, confidence interval and p-value for one model term } \examples{ set.seed(42) # fit an example model with 3 groups y <- rnorm(100) x <- cut(rnorm(100, mean=y, sd=0.25),c(-4,-1.5,0,1.5,4)) reg <- lm(y ~ x) reg # show model estimate, p-value, and confidence interval # for the first group est_p_ci(reg, 2) # estimate some group contrasts cmat <- rbind( "1 vs 4" =c(-1, 0, 0, 1), "1+2 vs 3+4"=c(-1/2,-1/2, 1/2, 1/2), "1 vs 2+3+4"=c(-3/3, 1/3, 1/3, 1/3)) cont <- fit.contrast(reg, x, cmat, conf.int = 0.95) cont # show the contrast estimate, p-value, and confidence interval # for the first contrast est_p_ci(cont, 2:3) } gmodels/man/dot-to.est.Rd0000644000176200001440000000076114571374710014763 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/to.est.R \name{.to.est} \alias{.to.est} \title{Return a vector for cm in estimable()} \usage{ .to.est(obj, params) } \arguments{ \item{obj}{estimable object} \item{params}{character vector of names or logical vector with one element per model parameter selecting desrired parameter(s).} } \description{ Return a vector for cm in estimable() } \author{ Randy Johnson, Laboratory of Genomic Diversity at NCI-Frederick } gmodels/man/glh.test.Rd0000644000176200001440000000606514571437325014520 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/glh.test.R \name{glh.test} \alias{glh.test} \alias{print.glh.test} \alias{summary.glh.test} \title{Test a General Linear Hypothesis for a Regression Model} \usage{ glh.test(reg, cm, d = rep(0, nrow(cm))) } \arguments{ \item{reg}{Regression model} \item{cm}{contrast matrix \code{C} . Each row specifies a linear combination of the coefficients} \item{d}{vector \code{d} specifying the null hypothesis values for each linear combination} } \value{ Object of class \code{c("glh.test","htest")} with elements: \item{call }{Function call that created the object} \item{statistic }{F statistic} \item{parameter}{vector containing the numerator (r) and denominator (n-p) degrees of freedom} \item{p.value}{p-value} \item{estimate}{computed estimate for each row of \code{cm}} \item{null.value}{d} \item{method}{description of the method} \item{data.name}{name of the model given for \code{reg}} \item{matrix}{matrix specifying the general linear hypotheis (\code{cm})} } \description{ Test, print, or summarize a general linear hypothesis for a regression model } \details{ Test the general linear hypothesis \eqn{C \hat{\beta} = d } for the regression model \code{reg}. The test statistic is obtained from the formula: \deqn{ f = \frac{(C \hat{\beta} - d)' ( C (X'X)^{-1} C' ) (C \hat{\beta} - d) / r }{ SSE / (n-p) } } where \itemize{ \item \code{r} is the number of contrasts contained in \code{C}, and \item \code{n-p} is the model degrees of freedom. } Under the null hypothesis, \code{f} will follow a F-distribution with \code{r} and \code{n-p} degrees of freedom } \note{ When using treatment contrasts (the default) the first level of the factors are subsumed into the intercept term. The estimated model coefficients are then contrasts versus the first level. This should be taken into account when forming contrast matrixes, particularly when computing contrasts that include 'baseline' level of factors. 
See the comparison with \code{fit.contrast} in the examples below. } \examples{ # fit a simple model y <- rnorm(100) x <- cut(rnorm(100, mean=y, sd=0.25),c(-4,-1.5,0,1.5,4)) reg <- lm(y ~ x) summary(reg) # test both group 1 = group 2 and group 3 = group 4 # *Note the 0 in the column for the intercept term* C <- rbind( c(0,-1,0,0), c(0,0,-1,1) ) ret <- glh.test(reg, C) ret # same as 'print(ret) ' summary(ret) # To compute a contrast between the first and second level of the factor # 'x' using 'fit.contrast' gives: fit.contrast( reg, x,c(1,-1,0,0) ) # To test this same contrast using 'glh.test', use a contrast matrix # with a zero coefficient for the intercept term. See the Note section, # above, for an explanation. C <- rbind( c(0,-1,0,0) ) glh.test( reg, C ) } \references{ R.H. Myers, Classical and Modern Regression with Applications, 2nd Ed, 1990, p. 105 } \seealso{ \code{\link[=fit.contrast]{fit.contrast()}}, \code{\link[=estimable]{estimable()}}, \code{\link[stats:contrasts]{stats::contrasts()}} } \author{ Gregory R. Warnes \email{greg@warnes.net} } \keyword{models} \keyword{regression} gmodels/man/make.contrasts.Rd0000644000176200001440000000744714571437325015731 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/make.contrasts.R \name{make.contrasts} \alias{make.contrasts} \title{Construct a User-Specified Contrast Matrix} \usage{ make.contrasts(contr, how.many = ncol(contr)) } \arguments{ \item{contr}{vector or matrix specifying contrasts (one per row).} \item{how.many}{dimensions of the desired contrast matrix. This must equal the number of levels of the target factor variable.} } \value{ \code{make.contrasts} returns a matrix with dimensions (\code{how.many}, \code{how.many}) containing the specified contrasts augmented (if necessary) with orthogonal "filler" contrasts. This matrix can then be used as the argument to \code{\link[=contrasts]{contrasts()}} or to the \code{contrasts} argument of model functions (eg, \code{\link[=lm]{lm()}}). } \description{ This function converts human-readable contrasts into the form that R requires for computation. } \details{ Specifying a contrast row of the form \code{c(1,0,0,-1)} creates a contrast that will compare the mean of the first group with the mean of the fourth group. 
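For instance, a small sketch (using the same four-level factor as the examples
below) passes two named contrast rows and lets \code{make.contrasts} supply the
remaining orthogonal "filler" contrast:

\preformatted{
cmat <- rbind("1 vs 4"     = c(-1,    0,   0,   1),
              "1+2 vs 3+4" = c(-1/2, -1/2, 1/2, 1/2))
make.contrasts(cmat)  # two specified rows plus one filler column
}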
} \examples{ set.seed(4684) y <- rnorm(100) x.true <- rnorm(100, mean=y, sd=0.25) x <- factor(cut(x.true,c(-4,-1.5,0,1.5,4))) reg <- lm(y ~ x) summary(reg) # Mirror default treatment contrasts test <- make.contrasts(rbind( c(-1,1,0,0), c(-1,0,1,0), c(-1,0,0,1) )) lm( y ~ x, contrasts=list(x = test )) # Specify some more complicated contrasts # - mean of 1st group vs mean of 4th group # - mean of 1st and 2nd groups vs mean of 3rd and 4th groups # - mean of 1st group vs mean of 2nd, 3rd and 4th groups cmat <- rbind( "1 vs 4" =c(-1, 0, 0, 1), "1+2 vs 3+4"=c(-1/2,-1/2, 1/2, 1/2), "1 vs 2+3+4"=c(-3/3, 1/3, 1/3, 1/3)) summary(lm( y ~ x, contrasts=list(x=make.contrasts(cmat) ))) # or contrasts(x) <- make.contrasts(cmat) summary(lm( y ~ x ) ) # or use contrasts.lm reg <- lm(y ~ x) fit.contrast( reg, "x", cmat ) # compare with values computed directly using group means gm <- sapply(split(y,x),mean) gm \%*\% t(cmat) # # Example for Analysis of Variance # set.seed(03215) Genotype <- sample(c("WT","KO"), 1000, replace=TRUE) Time <- factor(sample(1:3, 1000, replace=TRUE)) data <- data.frame(y, Genotype, Time) y <- rnorm(1000) data <- data.frame(y, Genotype, as.factor(Time)) # Compute Contrasts & obtain 95\% confidence intervals model <- aov( y ~ Genotype + Time + Genotype:Time, data=data ) fit.contrast( model, "Genotype", rbind("KO vs WT"=c(-1,1) ), conf=0.95 ) fit.contrast( model, "Time", rbind("1 vs 2"=c(-1,1,0), "2 vs 3"=c(0,-1,1) ), conf=0.95 ) cm.G <- rbind("KO vs WT"=c(-1,1) ) cm.T <- rbind("1 vs 2"=c(-1,1,0), "2 vs 3"=c(0,-1,1) ) # Compute contrasts and show SSQ decompositions model <- model <- aov( y ~ Genotype + Time + Genotype:Time, data=data, contrasts=list(Genotype=make.contrasts(cm.G), Time=make.contrasts(cm.T) ) ) summary(model, split=list( Genotype=list( "KO vs WT"=1 ), Time = list( "1 vs 2" = 1, "2 vs 3" = 2 ) ) ) } \seealso{ \itemize{ \item \code{\link[stats:lm]{stats::lm()}}, \code{\link[stats:contrasts]{stats::contrasts()}}, \code{\link[stats:contrast]{stats::contr.treatment()}}, \code{\link[stats:contrast]{stats::contr.poly()}}, \item Computation and testing of General Linear Hypothesis: \code{\link[=glh.test]{glh.test()}}, \item Computation and testing of estimable functions of model coefficients: \code{\link[=estimable]{estimable()}}, \item Estimate and Test Contrasts for a previously fit linear model: \code{\link[=fit.contrast]{fit.contrast()}} } } \author{ Gregory R. Warnes \email{greg@warnes.net} } \keyword{models} \keyword{regression} gmodels/man/fit.contrast.Rd0000644000176200001440000001206114571437035015375 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/fit.contrast.R \name{fit.contrast} \alias{fit.contrast} \alias{fit.contrast.lm} \alias{fit.contrast.lme} \title{Compute and test arbitrary contrasts for regression objects} \usage{ fit.contrast(model, varname, coeff, showall, conf.int, df, ...) } \arguments{ \item{model}{regression (lm,glm,aov,lme) object for which the contrast(s) will be computed.} \item{varname}{variable name} \item{coeff}{vector or matrix specifying contrasts (one per row).} \item{showall}{return all regression coefficients. If \code{TRUE}, all model cofficients will be returned. If \code{FALSE} (the default), only the coefficients corresponding to the specified contrast will be returned.} \item{conf.int}{numeric value on (0,1) or NULL. If a numeric value is specified, confidence intervals with nominal coverage probability \code{conf.int} will be computed. 
If \code{NULL}, confidence intervals will not be computed.} \item{df}{boolean indicating whether to return a column containing the degrees of freedom.} \item{\dots}{optional arguments provided by methods.} } \value{ Returns a matrix containing estimated coefficients, standard errors, t values, two-sided p-values. If \code{df} is TRUE, an additional column containing the degrees of freedom is included. If \code{conf.int} is specified lower and upper confidence limits are also returned. } \description{ Compute and test arbitrary contrasts for regression objects. } \details{ Computes the specified contrast(s) by re-fitting the model with the appropriate arguments. A contrast of the form \code{c(1,0,0,-1)} would compare the mean of the first group with the mean of the fourth group. } \examples{ set.seed(42) y <- rnorm(100) x <- cut(rnorm(100, mean=y, sd=0.25),c(-4,-1.5,0,1.5,4)) reg <- lm(y ~ x) summary(reg) # look at the group means gm <- sapply(split(y,x),mean) gm # mean of 1st group vs mean of 4th group fit.contrast(reg, x, c( 1, 0, 0, -1) ) # estimate should be equal to: gm[1] - gm[4] # mean of 1st and 2nd groups vs mean of 3rd and 4th groups fit.contrast(reg, x, c( -1/2, -1/2, 1/2, 1/2) ) # estimate should be equal to: sum(-1/2*gm[1], -1/2*gm[2], 1/2*gm[3], 1/2*gm[4]) # mean of 1st group vs mean of 2nd, 3rd and 4th groups fit.contrast(reg, x, c( -3/3, 1/3, 1/3, 1/3) ) # estimate should be equal to: sum(-3/3*gm[1], 1/3*gm[2], 1/3*gm[3], 1/3*gm[4]) # all at once cmat <- rbind( "1 vs 4" =c(-1, 0, 0, 1), "1+2 vs 3+4"=c(-1/2,-1/2, 1/2, 1/2), "1 vs 2+3+4"=c(-3/3, 1/3, 1/3, 1/3)) fit.contrast(reg,x,cmat) # x2 <- rnorm(100,mean=y,sd=0.5) reg2 <- lm(y ~ x + x2 ) fit.contrast(reg2,x,c(-1,0,0,1)) # # Example for Analysis of Variance # set.seed(03215) Genotype <- sample(c("WT","KO"), 1000, replace=TRUE) Time <- factor(sample(1:3, 1000, replace=TRUE)) y <- rnorm(1000) data <- data.frame(y, Genotype, Time) # Compute Contrasts & obtain 95\% confidence intervals model <- aov( y ~ Genotype + Time + Genotype:Time, data=data ) fit.contrast( model, "Genotype", rbind("KO vs WT"=c(-1,1) ), conf=0.95 ) fit.contrast( model, "Time", rbind("1 vs 2"=c(-1,1,0), "2 vs 3"=c(0,-1,1) ), conf=0.95 ) cm.G <- rbind("KO vs WT"=c(-1,1) ) cm.T <- rbind("1 vs 2"=c(-1,1,0), "2 vs 3"=c(0,-1,1) ) # Compute contrasts and show SSQ decompositions model <- aov( y ~ Genotype + Time + Genotype:Time, data=data, contrasts=list(Genotype=make.contrasts(cm.G), Time=make.contrasts(cm.T) ) ) summary(model, split=list( Genotype=list( "KO vs WT"=1 ), Time = list( "1 vs 2" = 1, "2 vs 3" = 2 ) ) ) # example for lme library(nlme) data(Orthodont) fm1 <- lme(distance ~ Sex, data = Orthodont,random=~1|Subject) # Contrast for sex. This example is equivalent to standard treatment # contrast. 
# fit.contrast(fm1, "Sex", c(-1,1), conf.int=0.95 ) # # and identical results can be obtained using lme built-in 'intervals' # intervals(fm1) # Cut age into quantile groups & compute some contrasts Orthodont$AgeGroup <- gtools::quantcut(Orthodont$age) fm2 <- lme(distance ~ Sex + AgeGroup, data = Orthodont,random=~1|Subject) # fit.contrast(fm2, "AgeGroup", rbind("Linear"=c(-2,-1,1,2), "U-Shaped"=c(-1,1,1,-1), "Change-Point at 11"=c(-1,-1,1,1)), conf.int=0.95) } \references{ Venables & Ripley, Section 6.2 } \seealso{ \itemize{ \item \code{\link[stats:lm]{stats::lm()}}, \code{\link[stats:contrasts]{stats::contrasts()}}, \code{\link[stats:contrast]{stats::contr.treatment()}}, \code{\link[stats:contrast]{stats::contr.poly()}}, \item Computation and testing of General Linear Hypothesis: \code{\link[=glh.test]{glh.test()}}, \item Computation and testing of estimable functions of model coefficients: \code{\link[=estimable]{estimable()}}, \code{\link[=make.contrasts]{make.contrasts()}} } } \author{ Gregory R. Warnes \email{greg@warnes.net} } \keyword{models} \keyword{regression} gmodels/man/coefFrame.Rd0000644000176200001440000000535114571371315014650 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/coefFrame.R \name{coefFrame} \alias{coefFrame} \title{Return model parameters in a data frame} \usage{ coefFrame( mod, data, by = NULL, fit.on = TRUE, fitfun, keep.unused.levels = TRUE, byvar.sep = "\\001", ... ) } \arguments{ \item{mod}{a model formula, to be passed to by \code{fitfun}.} \item{data}{a data frame, row subsets of which will be used as the \code{data} argument to \code{fitfun}.} \item{by}{names of columns in \code{x} that will be used to define the subgroups.} \item{fit.on}{a logical vector indicating which rows of \code{x} are to be used to fit the model (like the \code{subset} argument in a lot of other functions). Can be given in terms of variables in \code{x}} \item{fitfun}{a model fitting function (e.g. lm, nls). More specifically, a function that expects at least a formula object (as the first argument) and a data.frame object (passed as an argument named \code{data}) and returns a model object for which a \code{coef} method has been defined (e.g. coef.lm, coef.nls) to extract fit values of model parameters.} \item{keep.unused.levels}{Include rows in output for all unique values of \code{by}, even those which were excluded by \code{fit.on}. The default value \code{TRUE} should be left alone if you are going to go on to pass the result to \code{backFit}.} \item{byvar.sep}{passed to frameApply, used to form the subsets of the data.} \item{...}{other arguments to pass to \code{fitfun}.} } \value{ a data frame with a row for each unique row of \code{x[by]}, and column for each model paramter, as well as columns specified in \code{by}. } \description{ Fits a model to each subgroup defined by \code{by}, then returns a data frame with one row for each fit and one column for each parameter. 
} \examples{ # load example data library(gtools) data(ELISA) # Coefficients for four parameter logistic fits: coefFrame(log(Signal) ~ SSfpl(log(Concentration), A, B, xmid, scal), data = ELISA, fitfun = nls, by = c("PlateDay", "Read"), fit.on = Description == "Standard" & Concentration != 0) # Coefficients for linear fits: coefFrame(log(Signal) ~ log(Concentration), data = ELISA, fitfun = lm, by = c("PlateDay", "Read"), fit.on = Description == "Standard" & Concentration != 0 ) # Example passing arguments to fitfun, and example of # error handling during model fitting: ELISA$Signal[1] <- NA coefFrame(log(Signal) ~ log(Concentration), data = ELISA, fitfun = lm, na.action = na.fail, by = c("PlateDay", "Read"), fit.on = Description == "Standard" & Concentration != 0 ) } \author{ Jim Rogers \email{james.a.rogers@pfizer.com} } \keyword{models} gmodels/man/fast.prcomp.Rd0000644000176200001440000001016114571437035015212 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/fast.prcomp.R \name{fast.prcomp} \alias{fast.prcomp} \alias{fast.svd} \title{Efficient computation of principal components and singular value decompositions.} \usage{ fast.prcomp(x, retx = TRUE, center = TRUE, scale. = FALSE, tol = NULL) } \arguments{ \item{x}{data matrix} \item{retx}{a logical value indicating whether the rotated variables should be returned.} \item{center}{a logical value indicating whether the variables should be shifted to be zero centered. Alternately, a vector of length equal the number of columns of \code{x} can be supplied. The value is passed to \code{scale}.} \item{scale.}{a logical value indicating whether the variables should be scaled to have unit variance before the analysis takes place. The default is \code{FALSE} for consistency with S, but in general scaling is advisable. Alternatively, a vector of length equal the number of columns of \code{x} can be supplied. The value is passed to \code{\link{scale}}.} \item{tol}{a value indicating the magnitude below which components should be omitted. (Components are omitted if their standard deviations are less than or equal to \code{tol} times the standard deviation of the first component.) With the default null setting, no components are omitted (unless \code{rank.} is specified less than \code{min(dim(x))}.). Other settings for tol could be \code{tol = 0} or \code{tol = sqrt(.Machine$double.eps)}, which would omit essentially constant components.} } \value{ See the documetation for \code{\link[stats:prcomp]{stats::prcomp()}} or \code{\link[=svd]{svd()}} . } \description{ The standard \code{\link[stats:prcomp]{stats::prcomp()}} and \code{\link[=svd]{svd()}} function are very inefficient for wide matrixes. \code{fast.prcomp} and \code{fast.svd} are modified versions which are efficient even for matrixes that are very wide. } \details{ The current implementation of the function \code{\link[=svd]{svd()}} in S-Plus and R is much slower when operating on a matrix with a large number of columns than on the transpose of this matrix, which has a large number of rows. As a consequence, \code{\link[stats:prcomp]{stats::prcomp()}}, which uses \code{\link[=svd]{svd()}}, is also very slow when applied to matrixes with a large number of rows. The simple solution is to use \code{\link[=La.svd]{La.svd()}} instead of \code{\link[=svd]{svd()}}. A suitable patch to \code{\link[stats:prcomp]{stats::prcomp()}} has been submitted. In the mean time, the function \code{fast.prcomp} has been provided as a short-term work-around. 
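As a short illustrative sketch (the full timing comparison appears in the
examples below), \code{fast.prcomp} accepts the core \code{prcomp} arguments
and can typically be used in its place on a wide matrix:

\preformatted{
x  <- matrix(rnorm(50 * 2000), nrow = 50)  # 50 rows, 2000 columns
pr <- fast.prcomp(x)                       # same call pattern as prcomp(x)
length(pr$sdev)                            # standard deviations of components
dim(pr$rotation)                           # loadings, as returned by prcomp
}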
\describe{ \item{list("fast.prcomp")}{is a modified versiom of \code{\link[stats:prcomp]{stats::prcomp()}} that calls \code{\link[=La.svd]{La.svd()}} instead of \code{\link[=svd]{svd()}} } \item{list("fast.svd")}{is simply a wrapper around \code{\link[=La.svd]{La.svd()}}. } } } \examples{ # create test matrix set.seed(4943546) nr <- 50 nc <- 2000 x <- matrix( rnorm( nr*nc), nrow=nr, ncol=nc ) tx <- t(x) # SVD directly on matrix is SLOW: system.time( val.x <- svd(x)$u ) # SVD on t(matrix) is FAST: system.time( val.tx <- svd(tx)$v ) # and the results are equivalent: max( abs(val.x) - abs(val.tx) ) # Time gap dissapears using fast.svd: system.time( val.x <- fast.svd(x)$u ) system.time( val.tx <- fast.svd(tx)$v ) max( abs(val.x) - abs(val.tx) ) library(stats) # prcomp directly on matrix is SLOW: system.time( pr.x <- prcomp(x) ) # prcomp.fast is much faster system.time( fast.pr.x <- fast.prcomp(x) ) # and the results are equivalent max( pr.x$sdev - fast.pr.x$sdev ) max( abs(pr.x$rotation[,1:49]) - abs(fast.pr.x$rotation[,1:49]) ) max( abs(pr.x$x) - abs(fast.pr.x$x) ) # (except for the last and least significant component): max( abs(pr.x$rotation[,50]) - abs(fast.pr.x$rotation[,50]) ) } \seealso{ \code{\link[stats:prcomp]{stats::prcomp()}}, \code{\link[base:svd]{base::svd()}}, \code{\link[base:svd]{base::La.svd()}} } \author{ Modifications by Gregory R. Warnes \email{greg@warnes.net} } \keyword{algebra} \keyword{array} \keyword{multivariate} gmodels/man/CrossTable.Rd0000644000176200001440000001233414571437325015025 0ustar liggesusers% Generated by roxygen2: do not edit by hand % Please edit documentation in R/CrossTable.R \name{CrossTable} \alias{CrossTable} \title{Cross Tabulation with Tests for Factor Independence} \usage{ CrossTable( x, y, digits = 3, max.width = 5, expected = FALSE, prop.r = TRUE, prop.c = TRUE, prop.t = TRUE, prop.chisq = TRUE, chisq = FALSE, fisher = FALSE, mcnemar = FALSE, resid = FALSE, sresid = FALSE, asresid = FALSE, missing.include = FALSE, format = c("SAS", "SPSS"), dnn = NULL, ... ) } \arguments{ \item{x}{A vector or a matrix. If y is specified, x must be a vector} \item{y}{A vector in a matrix or a dataframe} \item{digits}{Number of digits after the decimal point for cell proportions} \item{max.width}{In the case of a 1 x n table, the default will be to print the output horizontally. If the number of columns exceeds max.width, the table will be wrapped for each successive increment of max.width columns. 
If you want a single column vertical table, set max.width to 1} \item{expected}{If \code{TRUE}, chisq will be set to \code{TRUE} and expected cell counts from the \eqn{\chi^2}{Chi-Square} will be included} \item{prop.r}{If \code{TRUE}, row proportions will be included} \item{prop.c}{If \code{TRUE}, column proportions will be included} \item{prop.t}{If \code{TRUE}, table proportions will be included} \item{prop.chisq}{If \code{TRUE}, chi-square contribution of each cell will be included} \item{chisq}{If \code{TRUE}, the results of a chi-square test will be included} \item{fisher}{If \code{TRUE}, the results of a Fisher Exact test will be included} \item{mcnemar}{If \code{TRUE}, the results of a McNemar test will be included} \item{resid}{If \code{TRUE}, residual (Pearson) will be included} \item{sresid}{If \code{TRUE}, standardized residual will be included} \item{asresid}{If \code{TRUE}, adjusted standardized residual will be included} \item{missing.include}{If \code{TRUE}, then remove any unused factor levels} \item{format}{Either SAS (default) or SPSS, depending on the type of output desired.} \item{dnn}{the names to be given to the dimensions in the result (the dimnames names).} \item{\dots}{optional arguments} } \value{ A list with multiple components including key table data and statistical test results, where performed. t: An n by m matrix containing table cell counts prop.col: An n by m matrix containing cell column proportions prop.row: An n by m matrix containing cell row proportions prop.tbl: An n by m matrix containing cell table proportions chisq: Results from the Chi-Square test. A list with class 'htest'. See ?chisq.test for details chisq.corr: Results from the corrected Chi-Square test. A list with class 'htest'. See ?chisq.test for details. ONLY included in the case of a 2 x 2 table. fisher.ts: Results from the two-sided Fisher Exact test. A list with class 'htest'. See ?fisher.test for details. ONLY included if 'fisher' = TRUE. fisher.lt: Results from the Fisher Exact test with HA = "less". A list with class 'htest'. See ?fisher.test for details. ONLY included if 'fisher' = TRUE and in the case of a 2 x 2 table. fisher.gt: Results from the Fisher Exact test with HA = "greater". A list with class 'htest'. See ?fisher.test for details. ONLY included if 'fisher' = TRUE and in the case of a 2 x 2 table. mcnemar: Results from the McNemar test. A list with class 'htest'. See ?mcnemar.test for details. ONLY included if 'mcnemar' = TRUE. mcnemar.corr: Results from the corrected McNemar test. A list with class 'htest'. See ?mcnemar.test for details. ONLY included if 'mcnemar' = TRUE and in the case of a 2 x 2 table. resid/sresid/asresid: Pearson Residuals (from chi-square tests). } \description{ An implementation of a cross-tabulation function with output similar to S-Plus crosstabs() and SAS Proc Freq (or SPSS format) with Chi-square, Fisher and McNemar tests of the independence of all table factors. } \details{ A summary table will be generated with cell row, column and table proportions and marginal totals and proportions. Expected cell counts can be printed if desired (if 'chisq = TRUE'). In the case of a 2 x 2 table, both corrected and uncorrected values will be included for appropriate tests. In the case of tabulating a single vector, cell counts and table proportions will be printed. Note: If 'x' is a vector and 'y' is not specified, no statistical tests will be performed, even if any are set to \code{TRUE}. 
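For example, here is a small sketch of the one-dimensional case (using the same
\code{infert} data as the examples below); only cell counts and table
proportions are reported:

\preformatted{
data(infert, package = "datasets")
CrossTable(infert$education)  # 1-D table: counts and proportions, no tests
}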
} \examples{ # Simple cross tabulation of education versus prior induced abortions # using infertility data data(infert, package = "datasets") CrossTable(infert$education, infert$induced, expected = TRUE) CrossTable(infert$education, infert$induced, expected = TRUE, format="SAS") CrossTable(infert$education, infert$induced, expected = TRUE, format="SPSS") CrossTable(warpbreaks$wool, warpbreaks$tension, dnn = c("Wool", "Tension")) } \seealso{ \code{\link[stats:xtabs]{stats::xtabs()}}, \code{\link[base:table]{base::table()}}, \code{\link[base:proportions]{base::prop.table()}} } \author{ Marc Schwartz \email{marc_schwartz@comcast.net}. Original version posted to r-devel on Jul 27, 2002. SPSS format modifications added by Nitin Jain based upon code provided by Dirk Enzmann \email{dirk.enzmann@jura.uni-hamburg.de} } \keyword{category} \keyword{univar} gmodels/DESCRIPTION0000644000176200001440000000366414572111402013420 0ustar liggesusersPackage: gmodels Version: 2.19.1 Date: 2024-03-05 Title: Various R Programming Tools for Model Fitting URL: https://github.com/r-gregmisc/gmodels BugReports: https://github.com/r-gregmisc/gmodels/issues Authors@R: c( person(c('Gregory', 'R.'), 'Warnes', email='greg@warnes.net', role=c('aut', 'cre')), person('Ben', 'Bolker', role='aut'), person('Thomas', 'Lumley', role='aut'), person(c('Randall', 'C.'), 'Johnson', role='aut', comment='Contributions from Randall C. Johnson are Copyright (2005) SAIC-Frederick, Inc. Funded by the Intramural Research Program, of the NIH, National Cancer Institute, Center for Cancer Research under NCI Contract NO1-CO-12400'), person('Airel', 'Muldoon', role='ctb', email='Ariel.Muldoon@oregonstate.edu'), person('Nitin', 'Jain', role='aut'), person('Dirk', 'Enzmann', role='ctb', email='dirk.enzmann@jura.uni-hamburg.de'), person('Søren', 'Højsgaards', role='ctb', email='sorenh@agrsci.dk'), person('Ulrich', 'Halekoh', role='ctb', email='ulrich.halekoh@agrsci.dk'), person('Mark', 'Schwartz', role='aut'), person('Jim', 'Rogers', role='aut') ) Maintainer: Gregory R. Warnes Description: Various R programming tools for model fitting. Suggests: gplots, gtools, Matrix, nlme, lme4 Imports: MASS, gdata, stats License: GPL-2 NeedsCompilation: no RoxygenNote: 7.3.1 Encoding: UTF-8 Packaged: 2024-03-06 15:32:38 UTC; warnes Author: Gregory R. Warnes [aut, cre], Ben Bolker [aut], Thomas Lumley [aut], Randall C. Johnson [aut] (Contributions from Randall C. Johnson are Copyright (2005) SAIC-Frederick, Inc. 
Funded by the Intramural Research Program, of the NIH, National Cancer Institute, Center for Cancer Research under NCI Contract NO1-CO-12400), Airel Muldoon [ctb], Nitin Jain [aut], Dirk Enzmann [ctb], Søren Højsgaards [ctb], Ulrich Halekoh [ctb], Mark Schwartz [aut], Jim Rogers [aut] Repository: CRAN Date/Publication: 2024-03-06 16:00:02 UTC gmodels/tests/0000755000176200001440000000000014570413077013056 5ustar liggesusersgmodels/tests/test_estimable_mlm.R0000644000176200001440000000037614570413077017060 0ustar liggesuserslibrary(gmodels) y <- cbind(rnorm(100), rnorm(100)) x1 <- rnorm(100) x2 <- rnorm(100) cm <- t(matrix(c(0, 1,-1))) lm.1 <- lm(y ~ x1 + x2) estimable(lm.1, cm) ## >> Error in coef(object) : object 'object' not found gmodels:::estimable.mlm(lm.1, cm) gmodels/tests/lme-test.R0000644000176200001440000000431314570413077014734 0ustar liggesusers## library(gmodels) ## library(lme4) ## set.seed(314159) ## sleepstudy$dayGroup <- cut(sleepstudy$Days, seq(-1,9,by=2), include=T) ## # ci example ## fm2 <- lmer(Reaction ~ dayGroup + (1|Subject) + (0+Days|Subject), sleepstudy) ## ci(fm2) ## # estimable examples ## estimable(fm2, c( 0, -1, 1, 0, 0 ) ) # list all terms ## estimable(fm2, c("dayGroup(1,3]"=-1, "dayGroup(3,5]"=1)) # just the nonzero terms ## estimable(fm2, c("dayGroup(1,3]"=-1, "dayGroup(3,5]"=1), n.sim=5000 ) # more simulations... ## # fit.contrast example ## fit.contrast( fm2, "dayGroup", ## rbind("0-1 vs 3-4"=c(-1,0,1,0,0), ## "3-4 vs 5-6"=c(0,0,-1,1,0) ## ), ## conf=0.95 ) ## # Example from Ariel.Muldoon@oregonstate.edu ## homerange=c( ## "male","1","fall","0.1", ## "male","1","winter","0.3", ## "male","1","spring","5.2", ## "male","1","summer","3.1", ## "male","2","fall","3.4", ## "male","2","winter","1.3", ## "male","2","spring","4.8", ## "male","2","summer","4.3", ## "male","3","fall","3.9", ## "male","3","winter","3.8", ## "male","3","spring","5.7", ## "male","3","summer","2.0", ## "male","4","fall","3.7", ## "male","4","winter","4.3", ## "male","4","spring","6.0", ## "male","4","summer","1.8", ## "female","5","fall","4.3", ## "female","5","winter","1.9", ## "female","5","spring","7.2", ## "female","5","summer","6.9", ## "female","6","fall","5.3", ## "female","6","winter","4.3", ## "female","6","spring","6.2", ## "female","6","summer","4.8", ## "female","7","fall","7.1", ## "female","7","winter","4.9", ## "female","7","spring","8.3", ## "female","7","summer","7.7" ## ) ## homerange <- data.frame(matrix(homerange,ncol=4, byrow=T)) ## names(homerange) <- c("sex", "animal", "season", "area") ## homerange$area = as.numeric(as.character(homerange$area)) ## fit1 <- lmer(area ~ sex*season + (1|animal), data=homerange) ## summary(fit1) ## anova(fit1) ## #matrix to give estimable for making estimates ## spr <- rbind(c(1,0,1,0,0,0,0,0), ## c(1,1,0,0,0,1,0,0), ## c(1,0,0,1,0,0,0,0), ## c(1,1,0,0,0,0,1,0), ## c(1,0,0,0,1,0,0,0), ## c(1,1,0,0,0,0,0,1)) ## estimable(fit1, spr) gmodels/R/0000755000176200001440000000000014570413077012115 5ustar liggesusersgmodels/R/make.contrasts.R0000644000176200001440000001124714571437302015177 0ustar liggesusers#' Construct a User-Specified Contrast Matrix #' #' This function converts human-readable contrasts into the form that R #' requires for computation. #' #' Specifying a contrast row of the form `c(1,0,0,-1)` creates a contrast #' that will compare the mean of the first group with the mean of the fourth #' group. #' #' @param contr vector or matrix specifying contrasts (one per row). 
#' @param how.many dimensions of the desired contrast matrix. This must equal #' the number of levels of the target factor variable. #' @return `make.contrasts` returns a matrix with dimensions #' (`how.many`, `how.many`) containing the specified contrasts #' augmented (if necessary) with orthogonal "filler" contrasts. #' #' This matrix can then be used as the argument to [contrasts()] or #' to the `contrasts` argument of model functions (eg, [lm()]). #' @author Gregory R. Warnes \email{greg@@warnes.net} #' @seealso #' * [stats::lm()], [stats::contrasts()], [stats::contr.treatment()], #' [stats::contr.poly()], #' * Computation and testing of General Linear Hypothesis: [glh.test()], #' * Computation and testing of estimable functions of model coefficients: #' [estimable()], #' * Estimate and Test Contrasts for a previously fit linear model: #' [fit.contrast()] #' #' @keywords models regression #' #' @examples #' #' set.seed(4684) #' y <- rnorm(100) #' x.true <- rnorm(100, mean=y, sd=0.25) #' x <- factor(cut(x.true,c(-4,-1.5,0,1.5,4))) #' reg <- lm(y ~ x) #' summary(reg) #' #' # Mirror default treatment contrasts #' test <- make.contrasts(rbind( c(-1,1,0,0), c(-1,0,1,0), c(-1,0,0,1) )) #' lm( y ~ x, contrasts=list(x = test )) #' #' # Specify some more complicated contrasts #' # - mean of 1st group vs mean of 4th group #' # - mean of 1st and 2nd groups vs mean of 3rd and 4th groups #' # - mean of 1st group vs mean of 2nd, 3rd and 4th groups #' cmat <- rbind( "1 vs 4" =c(-1, 0, 0, 1), #' "1+2 vs 3+4"=c(-1/2,-1/2, 1/2, 1/2), #' "1 vs 2+3+4"=c(-3/3, 1/3, 1/3, 1/3)) #' #' summary(lm( y ~ x, contrasts=list(x=make.contrasts(cmat) ))) #' # or #' contrasts(x) <- make.contrasts(cmat) #' summary(lm( y ~ x ) ) #' #' # or use contrasts.lm #' reg <- lm(y ~ x) #' fit.contrast( reg, "x", cmat ) #' #' # compare with values computed directly using group means #' gm <- sapply(split(y,x),mean) #' gm %*% t(cmat) #' #' #' # #' # Example for Analysis of Variance #' # #' #' set.seed(03215) #' Genotype <- sample(c("WT","KO"), 1000, replace=TRUE) #' Time <- factor(sample(1:3, 1000, replace=TRUE)) #' data <- data.frame(y, Genotype, Time) #' y <- rnorm(1000) #' #' data <- data.frame(y, Genotype, as.factor(Time)) #' #' # Compute Contrasts & obtain 95% confidence intervals #' #' model <- aov( y ~ Genotype + Time + Genotype:Time, data=data ) #' #' fit.contrast( model, "Genotype", rbind("KO vs WT"=c(-1,1) ), conf=0.95 ) #' #' fit.contrast( model, "Time", #' rbind("1 vs 2"=c(-1,1,0), #' "2 vs 3"=c(0,-1,1) #' ), #' conf=0.95 ) #' #' #' cm.G <- rbind("KO vs WT"=c(-1,1) ) #' cm.T <- rbind("1 vs 2"=c(-1,1,0), #' "2 vs 3"=c(0,-1,1) ) #' #' # Compute contrasts and show SSQ decompositions #' #' model <- model <- aov( y ~ Genotype + Time + Genotype:Time, data=data, #' contrasts=list(Genotype=make.contrasts(cm.G), #' Time=make.contrasts(cm.T) ) #' ) #' #' summary(model, split=list( Genotype=list( "KO vs WT"=1 ), #' Time = list( "1 vs 2" = 1, #' "2 vs 3" = 2 ) ) ) #' #' @importFrom MASS ginv #' #' @export make.contrasts <- function ( contr, how.many=ncol(contr) ) { if(!is.matrix(contr)) contr <- matrix(contr,ncol=length(contr)) if(nrow(contr)+1 > how.many) stop("Too many contrasts specified. 
Must be less than the number of factor levels (columns).") value <- as.matrix(ginv(contr)) # requires library(MASS) if (nrow(value) != how.many) stop("wrong number of contrast matrix rows") n1 <- if (missing(how.many)) how.many - 1 else how.many nc <- ncol(value) if (nc < n1) { cm <- qr(cbind(1, value)) if (cm$rank != nc + 1) stop("singular contrast matrix") cm <- qr.qy(cm, diag(how.many))[, 2:how.many, drop=FALSE] cm[, 1:nc] <- value } else cm <- value[, 1:n1, drop = FALSE] colnames(cm) <- paste( "C", 1:ncol(cm), sep="") rownames(cm) <- paste( "V", 1:nrow(cm), sep="") if(!is.null(rownames(contr))) { namelist <- rownames(contr) colnames(cm)[1:length(namelist)] <- namelist } if(!is.null(colnames(contr))) rownames(cm) <- colnames(contr) cm } gmodels/R/fit.contrast.R0000644000176200001440000002530114571440701014652 0ustar liggesusers#' Compute and test arbitrary contrasts for regression objects #' #' Compute and test arbitrary contrasts for regression objects. #' #' Computes the specified contrast(s) by re-fitting the model with the #' appropriate arguments. A contrast of the form `c(1,0,0,-1)` would #' compare the mean of the first group with the mean of the fourth group. #' #' @aliases fit.contrast fit.contrast.lm fit.contrast.lme #' @param model regression (lm,glm,aov,lme) object for which the contrast(s) #' will be computed. #' @param varname variable name #' @param coeff vector or matrix specifying contrasts (one per row). #' @param showall return all regression coefficients. If `TRUE`, all model #' cofficients will be returned. If `FALSE` (the default), only the #' coefficients corresponding to the specified contrast will be returned. #' @param conf.int numeric value on (0,1) or NULL. If a numeric value is #' specified, confidence intervals with nominal coverage probability #' `conf.int` will be computed. If `NULL`, confidence intervals will #' not be computed. #' @param df boolean indicating whether to return a column containing the #' degrees of freedom. #' @param \dots optional arguments provided by methods. #' @return Returns a matrix containing estimated coefficients, standard errors, #' t values, two-sided p-values. If `df` is TRUE, an additional column #' containing the degrees of freedom is included. If `conf.int` is #' specified lower and upper confidence limits are also returned. #' @author Gregory R. 
Warnes \email{greg@@warnes.net} #' @seealso #' * [stats::lm()], [stats::contrasts()], [stats::contr.treatment()], [stats::contr.poly()], #' * Computation and testing of General Linear Hypothesis: [glh.test()], #' * Computation and testing of estimable functions of model coefficients: [estimable()], [make.contrasts()] #' @references Venables & Ripley, Section 6.2 #' @keywords models regression #' @examples #' #' set.seed(42) #' #' y <- rnorm(100) #' x <- cut(rnorm(100, mean=y, sd=0.25),c(-4,-1.5,0,1.5,4)) #' reg <- lm(y ~ x) #' summary(reg) #' #' # look at the group means #' gm <- sapply(split(y,x),mean) #' gm #' #' #' # mean of 1st group vs mean of 4th group #' fit.contrast(reg, x, c( 1, 0, 0, -1) ) #' # estimate should be equal to: #' gm[1] - gm[4] #' #' # mean of 1st and 2nd groups vs mean of 3rd and 4th groups #' fit.contrast(reg, x, c( -1/2, -1/2, 1/2, 1/2) ) #' # estimate should be equal to: #' sum(-1/2*gm[1], -1/2*gm[2], 1/2*gm[3], 1/2*gm[4]) #' #' # mean of 1st group vs mean of 2nd, 3rd and 4th groups #' fit.contrast(reg, x, c( -3/3, 1/3, 1/3, 1/3) ) #' # estimate should be equal to: #' sum(-3/3*gm[1], 1/3*gm[2], 1/3*gm[3], 1/3*gm[4]) #' #' # all at once #' cmat <- rbind( "1 vs 4" =c(-1, 0, 0, 1), #' "1+2 vs 3+4"=c(-1/2,-1/2, 1/2, 1/2), #' "1 vs 2+3+4"=c(-3/3, 1/3, 1/3, 1/3)) #' fit.contrast(reg,x,cmat) #' #' # #' x2 <- rnorm(100,mean=y,sd=0.5) #' reg2 <- lm(y ~ x + x2 ) #' fit.contrast(reg2,x,c(-1,0,0,1)) #' #' # #' # Example for Analysis of Variance #' # #' #' set.seed(03215) #' Genotype <- sample(c("WT","KO"), 1000, replace=TRUE) #' Time <- factor(sample(1:3, 1000, replace=TRUE)) #' y <- rnorm(1000) #' data <- data.frame(y, Genotype, Time) #' #' #' # Compute Contrasts & obtain 95% confidence intervals #' #' model <- aov( y ~ Genotype + Time + Genotype:Time, data=data ) #' #' fit.contrast( model, "Genotype", rbind("KO vs WT"=c(-1,1) ), conf=0.95 ) #' #' fit.contrast( model, "Time", #' rbind("1 vs 2"=c(-1,1,0), #' "2 vs 3"=c(0,-1,1) #' ), #' conf=0.95 ) #' #' #' cm.G <- rbind("KO vs WT"=c(-1,1) ) #' cm.T <- rbind("1 vs 2"=c(-1,1,0), #' "2 vs 3"=c(0,-1,1) ) #' #' # Compute contrasts and show SSQ decompositions #' #' model <- aov( y ~ Genotype + Time + Genotype:Time, data=data, #' contrasts=list(Genotype=make.contrasts(cm.G), #' Time=make.contrasts(cm.T) ) #' ) #' #' summary(model, split=list( Genotype=list( "KO vs WT"=1 ), #' Time = list( "1 vs 2" = 1, #' "2 vs 3" = 2 ) ) ) #' #' #' # example for lme #' library(nlme) #' data(Orthodont) #' fm1 <- lme(distance ~ Sex, data = Orthodont,random=~1|Subject) #' #' # Contrast for sex. This example is equivalent to standard treatment #' # contrast. #' # #' fit.contrast(fm1, "Sex", c(-1,1), conf.int=0.95 ) #' # #' # and identical results can be obtained using lme built-in 'intervals' #' # #' intervals(fm1) #' #' # Cut age into quantile groups & compute some contrasts #' Orthodont$AgeGroup <- gtools::quantcut(Orthodont$age) #' fm2 <- lme(distance ~ Sex + AgeGroup, data = Orthodont,random=~1|Subject) #' # #' fit.contrast(fm2, "AgeGroup", rbind("Linear"=c(-2,-1,1,2), #' "U-Shaped"=c(-1,1,1,-1), #' "Change-Point at 11"=c(-1,-1,1,1)), #' conf.int=0.95) #' #' #' @export fit.contrast <- function(model, varname, coeff, showall, conf.int, df, ...) UseMethod("fit.contrast") #' @exportS3Method stats::coef coef.fit_contrast <- function(object, ...) object #' @exportS3Method base::print print.fit_contrast <- function(x, ...) 
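  # Print method for fit_contrast objects: drop the class attribute and
  # print the underlying coefficient matrix as-is.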
print(unclass(x)) #' @exportS3Method gmodels::fit.contrast #' @importFrom stats coef #' @importFrom stats pt #' @importFrom stats qt #' @importFrom stats summary.glm #' @importFrom stats summary.lm fit.contrast.lm <- function(model, varname, coeff, showall=FALSE, conf.int=NULL, df=FALSE, ...) { # check class of model if( !(any(class(model) %in% c("lm", "aov", "lme") ) )) stop("contrast.lm can only be applied to objects inheriting from 'lm'", "and 'lme' (eg: lm,glm,aov,lme).") # make sure we have the NAME of the variable if(!is.character(varname)) varname <- deparse(substitute(varname)) # make coeff into a matrix if(!is.matrix(coeff)) { coeff <- matrix(coeff, nrow=1) } # make sure columns are labeled if (is.null(rownames(coeff))) { rn <- vector(length=nrow(coeff)) for(i in 1:nrow(coeff)) rn[i] <- paste(" c=(",paste(coeff[i,],collapse=" "), ")") rownames(coeff) <- rn } # now convert into the proper form for the contrast matrix cmat <- make.contrasts(coeff, ncol(coeff) ) cn <- paste(" C",1:ncol(cmat),sep="") cn[1:nrow(coeff)] <- rownames(coeff) colnames(cmat) <- cn # recall fitting method with the specified contrast m <- model$call if(is.null(m$contrasts)) m$contrasts <- list() m$contrasts[[varname]] <- cmat r <- eval(m, parent.frame()) # now return the correct elements .... if( 'lme' %in% class(model) ) { est <- r$coefficients$fixed se <- sqrt(diag(r$varFix)) tval <- est/se df.lme <- r$fixDF$X retval <- cbind( "Estimate"= est, "Std. Error"= se, "t-value"= tval, "Pr(>|t|)"= 2 * (1 - pt(abs(tval), df.lme)), "DF"=df.lme ) } else if('glm' %in% class(model)) { smodel <- summary.glm(r) retval <- cbind(coef(smodel), "DF"=smodel$df[2]) } else # lm, aov { smodel <- summary.lm(r) retval <- cbind(coef(smodel), "DF"=smodel$df[2]) } if( !showall ) { rn <- paste(varname,rownames(coeff),sep="") ind <- match(rn,rownames(retval)) retval <- retval[ind,,drop=FALSE] } if(!missing(conf.int) && !is.null(conf.int)) # add confidence intervals { alpha <- 1-conf.int retval <- cbind( retval, "lower CI"=retval[,1] - qt(1-alpha/2,retval[,5])*retval[,2], "upper CI"=retval[,1] + qt(1-alpha/2,retval[,5])*retval[,2] ) } if(!df) retval <- retval[,-5,drop=FALSE] class(retval) <- "fit_contrast" retval } # fit.contrast.lme and fit.contrast.mer are necessary because # 'lme' and 'mer' objects do not inherit from 'lm'. # # **Make sure that the argument list *exactly* matches the one # for fit.contrast.lm() above.** # #' @exportS3Method gmodels::fit.contrast fit.contrast.lme <- function(model, varname, coeff, showall=FALSE, conf.int=NULL, df=FALSE, ...) { fit.contrast.lm(model, varname, coeff, showall, conf.int, df) } ## # I made rather dramatic changes here and do all calculations in fit.contrast.mer rather than ## # fit.contrast.lm because of the simulation extras ... added sim.mer and n.sim to the parameter list ## fit.contrast.mer <- function(model, varname, coeff, showall=FALSE, ## conf.int=NULL, sim.mer=TRUE, n.sim=1000, ...) 
## { ## require(lme4) ## # make sure we have the NAME of the variable ## if(!is.character(varname)) ## varname <- deparse(substitute(varname)) ## # make coeff into a matrix ## if(!is.matrix(coeff)) ## { ## coeff <- matrix(coeff, nrow=1) ## } ## # make sure columns are labeled ## if (is.null(rownames(coeff))) ## { ## rn <- vector(length=nrow(coeff)) ## for(i in 1:nrow(coeff)) ## rn[i] <- paste(" c=(",paste(coeff[i,],collapse=" "), ")") ## rownames(coeff) <- rn ## } ## # now convert into the proper form for the contrast matrix ## cmat <- make.contrasts(coeff, ncol(coeff) ) ## cn <- paste(" C",1:ncol(cmat),sep="") ## cn[1:nrow(coeff)] <- rownames(coeff) ## colnames(cmat) <- cn ## m <- model@call ## if(is.null(m$contrasts)) ## m$contrasts <- list() ## m$contrasts[[varname]] <- cmat ## if(is.R()) ## r <- eval(m, parent.frame()) ## else ## r <- eval(m) ## # now return the correct elements .... ## r.effects <- fixef(r) ## n <- length(r.effects) ## if(sim.mer) ## { ## retval <- est.mer(obj = r, cm = diag(n), beta0 = rep(0, n), ## conf.int = conf.int, show.beta0 = FALSE, ## n.sim=n.sim) ## rownames(retval) <- names(r.effects) ## }else{ ## if(!is.null(conf.int)) ## warning("Confidence interval calculation for mer objects requires simulation -- use sim.mer = TRUE") ## est <- fixef(r) ## se <- sqrt(diag(as.matrix(vcov(r)))) ## tval <- est/se ## retval <- cbind( ## "Estimate"= est, ## "Std. Error"= se, ## "t-value"= tval ## ) ## } ## if( !showall ) ## { ## if( !is.R() && ncol(cmat)==1 ) ## { ## retval <- retval[varname,,drop=FALSE] ## rownames(retval) <- rn ## }else{ ## rn <- paste(varname,rownames(coeff),sep="") ## ind <- match(rn,rownames(retval)) ## retval <- retval[ind,,drop=FALSE] ## } ## } ## return(retval) ## } gmodels/R/est.mer.R0000644000176200001440000000320314570413077013613 0ustar liggesusers# est.mer.R # generate estimable output for mer objects using mcmcsamp() # Randall Johnson # Laboratory of Genomic Diversity at NCI Frederick # SAIC Frederick, Inc # Created April 25, 2006 # Updated 2012-04-19 for S4 version of lmer object ## est.mer <- function(obj, cm, beta0, conf.int, show.beta0, n.sim) ## { ## samp <- lme4:::mcmcsamp(obj, n.sim) ## ## samp.summ <- summary(samp) ## samp.cm <- t(cm %*% samp@fixef) ## # calculate requested statistics ## est <- apply(samp.cm, 2, mean) ## stderr <- apply(samp.cm, 2, sd) ## pval <- sapply(1:length(beta0), ## function(i){percentile(beta0[i], samp.cm[,i])}) ## pval <- ifelse(pval <= .5, 2*pval, 2*(1-pval)) ## if(is.null(conf.int)) ## { ## lower.ci <- NULL ## upper.ci <- NULL ## } ## else ## { ## alpha <- 1-conf.int ## samp.ci <- sapply(1:length(beta0), ## function(i) ## { ## quantile(samp.cm[,i], probs=c(alpha/2, 1-alpha/2)) ## } ## ) ## lower.ci <- samp.ci[1,] ## upper.ci <- samp.ci[2,] ## } ## # return results ## if(!show.beta0) ## beta0 <- NULL ## samp.stats <- cbind('beta0' = beta0, ## 'Estimate' = est, ## 'Std. 
Error' = stderr, ## 'p value' = pval, ## 'Lower.CI' = lower.ci, ## 'Upper.CI' = upper.ci) ## row.names(samp.stats) <- paste('(', apply(cm, 1, paste, collapse=" "), ## ')', sep='') ## return(samp.stats) ## } gmodels/R/estimable.R0000644000176200001440000003257614571436120014215 0ustar liggesusers#' Contrasts and estimable linear functions of model coefficients #' #' Compute and test contrasts and other estimable linear functions of model #' coefficients for for lm, glm, lme, mer, and geese objects #' #' `estimable` computes an estimate, test statitic, significance test, and #' (optional) confidence interval for each linear functions of the model #' coefficients specified by `cm`. #' #' The estimable function(s) may be specified via a vector, list, or matrix. #' If `cm` is a vector, it should contained named elements each of which #' gives the coefficient to be applied to the corresponding parameter. These #' coefficients will be used to construct the contrast matrix, with unspecified #' model parameters assigned zero coefficients. If `cm` is a list, it #' should contain one or more coefficient vectors, which will be used to #' construct rows of the contrast matrix. If `cm` is a matrix, column #' names must match (a subset of) the model parameters, and each row should #' contain the corresponding coefficient to be applied. Model parameters which #' are not present in the set of column names of `cm` will be set to zero. #' #' The estimates and their variances are obtained by applying the contrast #' matrix (generated from) `cm` to the model estimates variance-covariance #' matrix. Degrees of freedom are obtained from the appropriate model terms. #' #' The user is responsible for ensuring that the specified linear functions are #' meaningful. #' #' For computing contrasts among levels of a single factor, `fit.contrast` #' may be more convenient. For computing contrasts between two specific #' combinations of model parameters, the `contrast` function in Frank #' Harrell's 'rms' library (formerly 'Design') may be more convenient. #' #' %The `.wald` function is called internally by `estimable` and %is #' not intended for direct use. #' #' @aliases estimable estimable.default estimable.mlm #' @param obj Regression (lm, glm, lme, mer, mlm) object. #' @param cm Vector, List, or Matrix specifying estimable linear functions or #' contrasts. See below for details. #' @param beta0 Vector of null hypothesis values #' @param conf.int Confidence level. If provided, confidence intervals will be #' computed. #' @param show.beta0 Logical value. If TRUE a column for beta0 will be included #' in the output table. Defaults to TRUE when beta0 is specified, FALSE #' otherwise. #' @param ... ignored #' @return Returns a matrix with one row per linear function. Columns contain #' the beta0 value (optional, see `show.beta0` above), estimated #' coefficients, standard errors, t values, degrees of freedom, two-sided #' p-values, and the lower and upper endpoints of the 1-alpha confidence #' intervals. #' @note The estimated fixed effect parameters of `lme` objects may have #' different degrees of freedom. If a specified contrast includes nonzero #' coefficients for parameters with differing degrees of freedom, the smallest #' number of degrees of freedom is used and a warning message is issued. #' @author BXC (Bendix Carstensen) \email{b@@bxc.dk}, Gregory R. 
Warnes #' \email{greg@@warnes.net}, Soren Hojsgaard \email{sorenh@@agrsci.dk}, and #' Randall C Johnson \email{rjohnson@@ncifcrf.gov} #' @seealso [fit.contrast()], [stats::lm()], #' [nlme::lme()], [stats::contrasts()], #' [rms::contrast()] #' @keywords models regression #' @examples #' #' # setup example data #' y <- rnorm(100) #' x <- cut(rnorm(100, mean=y, sd=0.25),c(-4,-1.5,0,1.5,4)) #' levels(x) <- c("A","B","C","D") #' x2 <- rnorm(100, mean=y, sd=0.5) #' #' # simple contrast and confidence interval #' reg <- lm(y ~ x) #' estimable(reg, c( 0, 1, 0, -1) ) # full coefficient vector #' estimable(reg, c("xB"=1,"xD"=-1) ) # just the nonzero terms #' #' #' # Fit a spline with a single knot at 0.5 and plot the *pointwise* #' # confidence intervals #' library(gplots) #' pm <- pmax(x2-0.5, 0) # knot at 0.5 #' reg2 <- lm(y ~ x + x2 + pm ) #' #' range <- seq(-2, 2, , 50) #' tmp <- estimable(reg2, #' cm=cbind( #' '(Intercept)'=1, #' 'xC'=1, #' 'x2'=range, #' 'pm'=pmax(range-0.5, 0) #' ), #' conf.int=0.95) #' plotCI(x=range, y=tmp[, 1], li=tmp[, 6], ui=tmp[, 7]) #' #' # Fit both linear and quasi-Poisson models to iris data, then compute #' # joint confidence intervals on contrasts for the Species and #' # Sepal.Width by Species interaction terms. #' data(iris) #' lm1 <- lm (Sepal.Length ~ Sepal.Width + Species + Sepal.Width:Species, data=iris) #' glm1 <- glm(Sepal.Length ~ Sepal.Width + Species + Sepal.Width:Species, data=iris, #' family=quasipoisson("identity")) #' #' cm <- rbind( #' 'Setosa vs. Versicolor' = c(0, 0, 1, 0, 1, 0), #' 'Setosa vs. Virginica' = c(0, 0, 0, 1, 0, 1), #' 'Versicolor vs. Virginica'= c(0, 0, 1,-1, 1,-1) #' ) #' estimable(lm1, cm) #' estimable(glm1, cm) #' #' @export estimable <- function (obj, cm, beta0, conf.int=NULL, show.beta0, ...) { UseMethod("estimable") } #' @rdname estimable #' @param joint.test Logical value. If TRUE a 'joint' Wald test for the #' hypothesis \eqn{L \beta=\beta_0} is performed. #' Otherwise 'row-wise' tests are performed, i.e. \eqn{(L \beta)[i]=\beta_0[i]}. #' @exportS3Method gmodels::estimable #' @importFrom stats coef #' @importFrom stats family #' @importFrom stats pchisq #' @importFrom stats pt #' @importFrom stats qt #' @importFrom stats summary.lm #' estimable.default <- function (obj, cm, beta0, conf.int=NULL, show.beta0, joint.test=FALSE, ...) { if (is.matrix(cm) || is.data.frame(cm)) { cm <- t(apply(cm, 1, .to.est, obj=obj)) } else if(is.list(cm)) { cm <- matrix(.to.est(obj, cm), nrow=1) } else if(is.vector(cm)) { cm <- matrix(.to.est(obj, cm), nrow=1) } else { stop("`cm' argument must be of type vector, list, or matrix.") } if(missing(show.beta0)) { if(!missing(beta0)) show.beta0=TRUE else show.beta0=FALSE } if (missing(beta0)) { beta0 = rep(0, ifelse(is.null(nrow(cm)), 1, nrow(cm))) } if (joint.test==TRUE) { .wald(obj, cm, beta0) } else { if ("lme" %in% class(obj)) { stat.name <- "t.stat" cf <- summary(obj)$tTable rho <- summary(obj)$cor vcv <- rho * outer(cf[, 2], cf[, 2]) tmp <- cm tmp[tmp==0] <- NA df.all <- t(abs(t(tmp) * obj$fixDF$X)) df <- apply(df.all, 1, min, na.rm=TRUE) problem <- apply(df.all !=df, 1, any, na.rm=TRUE) if (any(problem)) warning(paste("Degrees of freedom vary among parameters used to ", "construct linear contrast(s): ", paste((1:nrow(tmp))[problem], collapse=", "), ". 
Using the smallest df among the set of parameters.", sep="")) } else if ("lm" %in% class(obj)) { stat.name <- "t.stat" cf <- summary.lm(obj)$coefficients vcv <- summary.lm(obj)$cov.unscaled * summary.lm(obj)$sigma^2 df <- obj$df.residual if ("glm" %in% class(obj)) { vcv <- summary(obj)$cov.scaled if(family(obj)[1] %in% c("poisson", "binomial")) { stat.name <- "X2.stat" df <- 1 } else { stat.name <- "t.stat" df <- obj$df.residual } } } else if ("geese" %in% class(obj)) { stat.name <- "X2.stat" cf <- summary(obj)$mean vcv <- obj$vbeta df <- 1 } else if ("gee" %in% class(obj)) { stat.name <- "X2.stat" cf <- summary(obj)$coef vcv <- obj$robust.variance df <- 1 } else { stop("obj must be of class 'lm', 'glm', 'aov', 'lme', 'gee', 'geese' or 'nlme'") } if (is.null(cm)) cm <- diag(dim(cf)[1]) if (!dim(cm)[2]==dim(cf)[1]) stop(paste("\n Dimension of ", deparse(substitute(cm)), ": ", paste(dim(cm), collapse="x"), ", not compatible with no of parameters in ", deparse(substitute(obj)), ": ", dim(cf)[1], sep="")) ct <- cm %*% cf[, 1] ct.diff <- cm %*% cf[, 1] - beta0 vc <- sqrt(diag(cm %*% vcv %*% t(cm))) if (is.null(rownames(cm))) rn <- paste("(", apply(cm, 1, paste, collapse=" "), ")", sep="") else rn <- rownames(cm) switch(stat.name, t.stat={ prob <- 2 * (1 - pt(abs(ct.diff/vc), df)) }, X2.stat={ prob <- 1 - pchisq((ct.diff/vc)^2, df=1) }) if (stat.name=="X2.stat") { retval <- cbind(hyp=beta0, est=ct, stderr=vc, "X^2 value"=(ct.diff/vc)^2, df=df, prob=1 - pchisq((ct.diff/vc)^2, df=1)) dimnames(retval) <- list(rn, c("beta0", "Estimate", "Std. Error", "X^2 value", "DF", "Pr(>|X^2|)")) } else if (stat.name=="t.stat") { retval <- cbind(hyp=beta0, est=ct, stderr=vc, "t value"=ct.diff/vc, df=df, prob=2 * (1 - pt(abs(ct.diff/vc), df))) dimnames(retval) <- list(rn, c("beta0", "Estimate", "Std. Error", "t value", "DF", "Pr(>|t|)")) } if (!is.null(conf.int)) { if (conf.int <=0 || conf.int >=1) stop("conf.int should be between 0 and 1. 
Usual values are 0.95, 0.90") alpha <- 1 - conf.int switch(stat.name, t.stat={ quant <- qt(1 - alpha/2, df) }, X2.stat={ quant <- qt(1 - alpha/2, 100) }) nm <- c(colnames(retval), "Lower.CI", "Upper.CI") retval <- cbind(retval, lower=ct.diff - vc * quant, upper=ct.diff + vc * quant) colnames(retval) <- nm } rownames(retval) <- make.unique(rownames(retval)) retval <- as.data.frame(retval) if(!show.beta0) retval$beta0 <- NULL class(retval) <- c("estimable", class(retval)) return(retval) } } #' @importFrom stats coef .wald <- function (obj, cm, beta0=rep(0, ifelse(is.null(nrow(cm)), 1, nrow(cm)))) { if (!is.matrix(cm) && !is.data.frame(cm)) cm <- matrix(cm, nrow=1) df <- nrow(cm) if ("geese" %in% class(obj)) { cf <- obj$beta vcv <- obj$vbeta } else if ("gee" %in% class(obj)) { cf <- summary(obj)$coef vcv <- obj$robust.variance } else if ("lm" %in% class(obj)) { cf <- summary.lm(obj)$coefficients[, 1] if ("glm" %in% class(obj)) vcv <- summary(obj)$cov.scaled else vcv <- summary.lm(obj)$cov.unscaled * summary.lm(obj)$sigma^2 } else if ("lme" %in% class(obj)) { s.o <- summary(obj) cf <- s.o$tTable[,1] se <- s.o$tTable[, 2] rho <- s.o$cor vcv <- rho * outer(se, se) } else stop("obj must be of class 'lm', 'glm', 'aov', 'gee', 'geese', or 'lme'.") u <- (cm %*% cf)-beta0 vcv.u <- cm %*% vcv %*% t(cm) W <- t(u) %*% solve(vcv.u) %*% u prob <- 1 - pchisq(W, df=df) retval <- as.data.frame(cbind(W, df, prob)) names(retval) <- c("X2.stat", "DF", "Pr(>|X^2|)") print(as.data.frame(retval)) } ## estimable.mer <- function (obj, cm, beta0, conf.int=NULL, show.beta0, ## sim.mer=TRUE, n.sim=1000, ...) ## { ## if (is.matrix(cm) || is.data.frame(cm)) ## { ## cm <- t(apply(cm, 1, .to.est, obj=obj)) ## } ## else if(is.list(cm)) ## { ## cm <- matrix(.to.est(obj, cm), nrow=1) ## } ## else if(is.vector(cm)) ## { ## cm <- matrix(.to.est(obj, cm), nrow=1) ## } ## else ## { ## stop("'cm' argument must be of type vector, list, or matrix.") ## } ## if(missing(show.beta0)) ## { ## if(!missing(beta0)) ## show.beta0=TRUE ## else ## show.beta0=FALSE ## } ## if (missing(beta0)) ## { ## beta0 = rep(0, ifelse(is.null(nrow(cm)), 1, nrow(cm))) ## } ## if ("mer" %in% class(obj)) { ## if(sim.mer) ## return(est.mer(obj=obj, cm=cm, beta0=beta0, conf.int=conf.int, ## show.beta0=show.beta0, n.sim=n.sim)) ## stat.name <- "mer" ## cf <- as.matrix(fixef(obj)) ## vcv <- as.matrix(vcov(obj)) ## df <- NA ## } ## else { ## stop("obj is not of class mer") ## } ## if (is.null(rownames(cm))) ## rn <- paste("(", apply(cm, 1, paste, collapse=" "), ## ")", sep="") ## else rn <- rownames(cm) ## ct <- cm %*% cf[, 1] ## ct.diff <- cm %*% cf[, 1] - beta0 ## vc <- sqrt(diag(cm %*% vcv %*% t(cm))) ## retval <- cbind(hyp=beta0, est=ct, stderr=vc, "t value"=ct.diff/vc) ## dimnames(retval) <- list(rn, c("beta0", "Estimate", "Std. Error", ## "t value")) ## rownames(retval) <- make.unique(rownames(retval)) ## retval <- as.data.frame(retval) ## if(!show.beta0) retval$beta0 <- NULL ## class(retval) <- c("estimable", class(retval)) ## return(retval) ## } gmodels/R/glh.test.R0000644000176200001440000001374314571437156014005 0ustar liggesusers#' Test a General Linear Hypothesis for a Regression Model #' #' Test, print, or summarize a general linear hypothesis for a regression model #' #' Test the general linear hypothesis \eqn{C \hat{\beta} = d } for the regression model `reg`. 
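#'
#' In terms of the fitted coefficients and their estimated covariance matrix, the
#' test amounts to the following computation (a minimal sketch, assuming a fitted
#' `lm` object `reg`, a contrast matrix `C`, and a vector `d` of null values):
#'
#' ```
#' bhat  <- coef(reg)
#' V     <- vcov(reg)   # sigma^2 * (X'X)^{-1}
#' r     <- nrow(C)
#' Fstat <- t(C %*% bhat - d) %*% solve(C %*% V %*% t(C)) %*% (C %*% bhat - d) / r
#' pf(Fstat, r, df.residual(reg), lower.tail = FALSE)   # p-value
#' ```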
#' #' The test statistic is obtained from the formula: #' \deqn{ #' f = \frac{(C \hat{\beta} - d)' ( C (X'X)^{-1} C' ) (C \hat{\beta} - d) / r }{ #' SSE / (n-p) } #' } #' where #' * `r` is the number of contrasts contained in `C`, and #' * `n-p` is the model degrees of freedom. #' #' Under the null hypothesis, `f` will follow a F-distribution with `r` and `n-p` #' degrees of freedom #' #' @aliases glh.test print.glh.test summary.glh.test #' @param reg Regression model #' @param cm contrast matrix `C` . Each row specifies a linear combination of the #' coefficients #' @param d vector `d` specifying the null hypothesis values for each linear #' combination #' #' @return Object of class `c("glh.test","htest")` with elements: #' \item{call }{Function call that created the object} #' \item{statistic }{F statistic} #' \item{parameter}{vector containing the numerator (r) and #' denominator (n-p) degrees of freedom} #' \item{p.value}{p-value} #' \item{estimate}{computed estimate for each row of `cm`} #' \item{null.value}{d} #' \item{method}{description of the method} #' \item{data.name}{name of the model given for `reg`} #' \item{matrix}{matrix specifying the general linear hypotheis (`cm`)} #' #' @note When using treatment contrasts (the default) the first level of the #' factors are subsumed into the intercept term. The estimated model #' coefficients are then contrasts versus the first level. This should be taken #' into account when forming contrast matrixes, particularly when computing #' contrasts that include 'baseline' level of factors. #' #' See the comparison with `fit.contrast` in the examples below. #' @author Gregory R. Warnes \email{greg@@warnes.net} #' @seealso [fit.contrast()], [estimable()], [stats::contrasts()] #' @references R.H. Myers, Classical and Modern Regression with Applications, #' 2nd Ed, 1990, p. 105 #' @keywords models regression #' @examples #' #' #' # fit a simple model #' y <- rnorm(100) #' x <- cut(rnorm(100, mean=y, sd=0.25),c(-4,-1.5,0,1.5,4)) #' reg <- lm(y ~ x) #' summary(reg) #' #' # test both group 1 = group 2 and group 3 = group 4 #' # *Note the 0 in the column for the intercept term* #' #' C <- rbind( c(0,-1,0,0), c(0,0,-1,1) ) #' ret <- glh.test(reg, C) #' ret # same as 'print(ret) ' #' summary(ret) #' #' # To compute a contrast between the first and second level of the factor #' # 'x' using 'fit.contrast' gives: #' #' fit.contrast( reg, x,c(1,-1,0,0) ) #' #' # To test this same contrast using 'glh.test', use a contrast matrix #' # with a zero coefficient for the intercept term. See the Note section, #' # above, for an explanation. 
#' #' C <- rbind( c(0,-1,0,0) ) #' glh.test( reg, C ) #' #' @importFrom stats pf #' @importFrom stats coef #' @importFrom stats summary.lm #' #' @export glh.test <- function(reg, cm, d=rep(0, nrow(cm)) ) { if( !is.matrix(cm) && !is.data.frame(cm) ) cm <- matrix(cm, nrow=1) if ( !( "lm" %in% class(reg) ) ) stop("Only defined for lm,glm objects") bhat <- summary.lm(reg)$coefficients[,1,drop=FALSE] XpX <- summary.lm(reg)$cov.unscaled df <- reg$df.residual msr <- summary.lm(reg)$sigma # == SSE / (n-p) r <- nrow(cm) if ( ncol(cm) != length(bhat) ) stop( paste( "\n Dimension of ", deparse( substitute( cm ) ), ": ", paste( dim(cm), collapse="x" ), ", not compatible with no of parameters in ", deparse( substitute( reg ) ), ": ", length(bhat), sep="" ) ) # -1 # (Cb - d)' ( C (X'X) C' ) (Cb - d) / r # F = --------------------------------------- # SSE / (n-p) # Fstat <- t(cm %*% bhat - d) %*% solve((cm %*% XpX %*% t(cm))) %*% (cm %*% bhat - d) / r / msr^2 p <- 1-pf(Fstat,r,df) retval <- list() retval$call <- match.call() retval$statistic <- c(F=Fstat) retval$parameter <- c(df1=r,df2=df) retval$p.value <- p retval$conf.int <- NULL retval$estimate <- cm%*%bhat retval$null.value <- d retval$method <- "Test of General Linear Hypothesis" retval$data.name <- deparse(substitute(reg)) retval$matrix <- cm colnames(retval$matrix) <- names(reg$coef) class(retval) <- c("glh.test","htest") retval } #' @exportS3Method base::print #' @inheritParams base::print print.glh.test <- function(x, digits = 4, ... ) { cat("\n") cat("\t",x$method, prefix = "\t") cat("\n") cat("Call:\n") print(x$call) if (!is.null(x$statistic)) cat(names(x$statistic), " = ", format(round(x$statistic, 4)), ", ", sep = "") if (!is.null(x$parameter)) cat(paste(names(x$parameter), " = ", format(round(x$parameter, 3)), ",", sep = ""), "") cat("p-value =", format.pval(x$p.value, digits = digits), "\n") cat("\n") } #' @exportS3Method base::summary #' @inheritParams base::summary summary.glh.test <- function(object, digits = 4, ... ) { cat("\n") cat("\t",object$method, prefiobject = "\t") cat("\n") cat("Regression: ", object$data.name, "\n") cat("\n") cat("Null Hypothesis: C %*% Beta-hat = d \n") cat("\n") cat("C matrix: \n") print(object$matrix, digits=digits) cat("\n") cat("d vector: \n") print(object$null.value, digits=digits) cat("\n") cat("C %*% Beta-hat: \n") print(c(object$estimate)) cat("\n") if (!is.null(object$statistic)) cat(names(object$statistic), " = ", format(round(object$statistic, 4)), ", ", sep = "") if (!is.null(object$parameter)) cat(paste(names(object$parameter), " = ", format(round(object$parameter, 3)), ",", sep = ""), "") cat("p-value =", format.pval(object$p.value, digits = digits), "\n") cat("\n") } gmodels/R/estimable.mlm.R0000644000176200001440000000171214571374223014772 0ustar liggesusers#' @exportS3Method gmodels::estimable #' @importFrom stats coef `estimable.mlm` <- function (obj, cm, beta0, conf.int=NULL, show.beta0, ...) 
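# Method for multivariate linear models ('mlm'): apply estimable() to each
# response in turn, substituting that response's coefficients, residuals, and
# fitted values, and return the per-response tables as a 'listof' object.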
{ coef <- coef(obj) ny <- ncol(coef) effects <- obj$effects resid <- obj$residuals fitted <- obj$fitted ynames <- colnames(coef) if (is.null(ynames)) { lhs <- obj$terms[[2]] if (mode(lhs) == "call" && lhs[[1]] == "cbind") ynames <- as.character(lhs)[-1] else ynames <- paste("Y", seq(ny), sep = "") } value <- vector("list", ny) names(value) <- paste("Response", ynames) cl <- oldClass(obj) class(obj) <- cl[match("mlm", cl):length(cl)][-1] for (i in seq(ny)) { obj$coefficients <- coef[, i] obj$residuals <- resid[, i] obj$fitted.values <- fitted[, i] obj$effects <- effects[, i] obj$call$formula[[2]] <- obj$terms[[2]] <- as.name(ynames[i]) value[[i]] <- estimable(obj, cm, beta0, conf.int=NULL, show.beta0, ...) } class(value) <- "listof" value } gmodels/R/to.est.R0000644000176200001440000000301414571374703013455 0ustar liggesusers#' Return a vector for cm in estimable() #' @param obj estimable object #' @param params character vector of names or logical vector with one element per model parameter selecting desrired parameter(s). #' @author Randy Johnson, Laboratory of Genomic Diversity at NCI-Frederick #' @importFrom stats coef #' @export .to.est <- function(obj, params) { ## if('lme' %in% class(obj) | 'mer' %in% class(obj)) ## { ## eff.obj <- fixef(obj) ## } ## else if('geese' %in% class(obj)) { eff.obj <- obj$beta } else { eff.obj <- coef(obj) } if(is.null(obj)) stop("Error obtaining model coefficients") est <- rep(0, length(eff.obj)) names(est) <- names(eff.obj) if(!missing(params)) { if(is.null(names(params))) if(length(params)==length(est)) names(params) <- names(est) else stop("'param' has no names and does not match number of coefficients of model. Unable to construct coefficient vector") else { matches <- names(params) %in% names(est) if(!(all(matches))) stop( '\n\t', 'Invalid parameter name(s): ', paste(names(params)[!matches], collapse=', '), '\n\t', 'Valid names are: ', paste(names(est), collapse=', ') ) } if(is.list(params)) est[names(params)] <- unlist(params) else est[names(params)] <- params } return(est) } gmodels/R/est_p_ci.R0000644000176200001440000000472314571433124014027 0ustar liggesusers#' Display estimate, confidence interval and p-value for one model term #' #' @param model model object #' @param term model term #' @param mult scale (multiply) the parameter by this factor #' @param digits number of significant digits to display #' @param ... optional arguments #' #' @examples #' #' set.seed(42) #' #' # fit an example model with 3 groups #' y <- rnorm(100) #' x <- cut(rnorm(100, mean=y, sd=0.25),c(-4,-1.5,0,1.5,4)) #' reg <- lm(y ~ x) #' reg #' #' # show model estimate, p-value, and confidence interval #' # for the first group #' est_p_ci(reg, 2) #' #' # estimate some group contrasts #' cmat <- rbind( "1 vs 4" =c(-1, 0, 0, 1), #' "1+2 vs 3+4"=c(-1/2,-1/2, 1/2, 1/2), #' "1 vs 2+3+4"=c(-3/3, 1/3, 1/3, 1/3)) #' cont <- fit.contrast(reg, x, cmat, conf.int = 0.95) #' cont #' #' # show the contrast estimate, p-value, and confidence interval #' # for the first contrast #' est_p_ci(cont, 2:3) #' #' @export est_p_ci <- function(model, term, mult=1, digits=2, ...) UseMethod("est_p_ci") #' @exportS3Method gmodels::est_p_ci est_p_ci.lm <- function(model, term, mult=1, digits=2, ...) 
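# lm method: pull the row for 'term' from ci(model) and format the estimate,
# p-value, and confidence limits as a single "Est=... p=... 95% CI: ..." string.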
{ info <- ci(model) if(is.character(term) && !(term %in% rownames(info))) stop(term, " is not a coefficient in model.") info <- info[term,,drop=FALSE] info.ci <- trimws(format( round(mult * info[,1:3, drop=FALSE], digits=digits) )) if(mult < 0) colnames(info.ci)[2:3] <- colnames(info.ci)[3:2] paste("Est=", info.ci[,'Estimate'], " ", "p=",format.pval(info[,'p-value'], digits=digits), " ", "95% CI: ", info.ci[,'CI lower'], " to ", info.ci[,'CI upper'], sep="" ) } #' @exportS3Method gmodels::est_p_ci est_p_ci.fit_contrast <- function(model, term, mult=1, digits=2, ...) { if( !all(c("lower CI", "upper CI") %in% colnames(model) ) ) stop("object does not contain confidence interval information.") if(is.character(term) && !(term %in% rownames(model))) stop(term, " is not a coefficient in model.") info.ci <- trimws(format( round(mult * model[term, c("Estimate", "lower CI", "upper CI")], digits=digits) ) ) if(mult < 0) colnames(info.ci)[2:3] <- colnames(info.ci)[3:2] paste("Est=", info.ci[,"Estimate"], " ", "p=",format.pval(model[term, 'Pr(>|t|)'], digits=digits), " ", "95% CI: ", info.ci[,"lower CI"], " to ", info.ci[,"upper CI"], sep="" ) } gmodels/R/coefFrame.R0000644000176200001440000000642414571436120014130 0ustar liggesusers#' Return model parameters in a data frame #' #' Fits a model to each subgroup defined by `by`, then returns a data #' frame with one row for each fit and one column for each parameter. #' #' #' @param mod a model formula, to be passed to by `fitfun`. #' @param data a data frame, row subsets of which will be used as the #' `data` argument to `fitfun`. #' @param by names of columns in `x` that will be used to define the #' subgroups. #' @param fit.on a logical vector indicating which rows of `x` are to be #' used to fit the model (like the `subset` argument in a lot of other #' functions). Can be given in terms of variables in `x` #' @param fitfun a model fitting function (e.g. lm, nls). More specifically, a #' function that expects at least a formula object (as the first argument) and #' a data.frame object (passed as an argument named `data`) and returns a #' model object for which a `coef` method has been defined (e.g. coef.lm, #' coef.nls) to extract fit values of model parameters. #' @param keep.unused.levels Include rows in output for all unique values of #' `by`, even those which were excluded by `fit.on`. The default #' value `TRUE` should be left alone if you are going to go on to pass the #' result to `backFit`. #' @param byvar.sep passed to frameApply, used to form the subsets of the data. #' @param ... other arguments to pass to `fitfun`. #' @return a data frame with a row for each unique row of `x[by]`, and #' column for each model paramter, as well as columns specified in `by`. 
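#'
#' Conceptually (a rough sketch that ignores `fit.on`, `keep.unused.levels`, and
#' the error handling around each fit), the result for a single grouping column
#' `"g"` resembles
#' `do.call(rbind, lapply(split(data, data$g), function(d) coef(fitfun(mod, data = d))))`.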
#' @author Jim Rogers \email{james.a.rogers@@pfizer.com} #' @keywords models #' @examples #' #' # load example data #' library(gtools) #' data(ELISA) #' #' # Coefficients for four parameter logistic fits: #' coefFrame(log(Signal) ~ SSfpl(log(Concentration), A, B, xmid, scal), #' data = ELISA, fitfun = nls, #' by = c("PlateDay", "Read"), #' fit.on = Description == "Standard" & Concentration != 0) #' #' # Coefficients for linear fits: #' coefFrame(log(Signal) ~ log(Concentration), #' data = ELISA, fitfun = lm, #' by = c("PlateDay", "Read"), #' fit.on = Description == "Standard" & Concentration != 0 ) #' #' # Example passing arguments to fitfun, and example of #' # error handling during model fitting: #' ELISA$Signal[1] <- NA #' coefFrame(log(Signal) ~ log(Concentration), #' data = ELISA, fitfun = lm, na.action = na.fail, #' by = c("PlateDay", "Read"), #' fit.on = Description == "Standard" & Concentration != 0 ) #' #' #' #' #' @importFrom gdata frameApply #' @importFrom stats coef #' #' @export coefFrame <- function ( mod, data, by = NULL, fit.on = TRUE, fitfun, keep.unused.levels = TRUE, byvar.sep = "\001" , ... ) { fit.on <- eval(substitute(fit.on), data, parent.frame()) out <- frameApply(data, on = intersect(all.vars(mod), names(data)), by = by, subset = fit.on, byvar.sep = byvar.sep, fun = function(sub.dat, ...) { fit <- try(fitfun(mod, data = sub.dat, ...), silent = TRUE) if (inherits(fit, "try-error")) return(fit) outi <- coef(fit) outi }, ...) if (keep.unused.levels) out <- unique(merge(data[by], out, all.x = TRUE)) out } gmodels/R/CrossTable.R0000644000176200001440000006462014571436565014321 0ustar liggesusers#' Cross Tabulation with Tests for Factor Independence #' #' An implementation of a cross-tabulation function with output similar to #' S-Plus crosstabs() and SAS Proc Freq (or SPSS format) with Chi-square, #' Fisher and McNemar tests of the independence of all table factors. #' #' A summary table will be generated with cell row, column and table #' proportions and marginal totals and proportions. Expected cell counts can be #' printed if desired (if 'chisq = TRUE'). In the case of a 2 x 2 table, both #' corrected and uncorrected values will be included for appropriate tests. In #' the case of tabulating a single vector, cell counts and table proportions #' will be printed. #' #' Note: If 'x' is a vector and 'y' is not specified, no statistical tests will #' be performed, even if any are set to `TRUE`. #' #' @param x A vector or a matrix. If y is specified, x must be a vector #' @param y A vector in a matrix or a dataframe #' @param digits Number of digits after the decimal point for cell proportions #' @param max.width In the case of a 1 x n table, the default will be to print #' the output horizontally. If the number of columns exceeds max.width, the #' table will be wrapped for each successive increment of max.width columns. 
If #' you want a single column vertical table, set max.width to 1 #' @param expected If `TRUE`, chisq will be set to `TRUE` and #' expected cell counts from the \eqn{\chi^2}{Chi-Square} will be included #' @param prop.r If `TRUE`, row proportions will be included #' @param prop.c If `TRUE`, column proportions will be included #' @param prop.t If `TRUE`, table proportions will be included #' @param prop.chisq If `TRUE`, chi-square contribution of each cell will #' be included #' @param chisq If `TRUE`, the results of a chi-square test will be #' included #' @param fisher If `TRUE`, the results of a Fisher Exact test will be #' included #' @param mcnemar If `TRUE`, the results of a McNemar test will be #' included #' @param resid If `TRUE`, residual (Pearson) will be included #' @param sresid If `TRUE`, standardized residual will be included #' @param asresid If `TRUE`, adjusted standardized residual will be #' included #' @param missing.include If `TRUE`, then remove any unused factor levels #' @param format Either SAS (default) or SPSS, depending on the type of output #' desired. #' @param dnn the names to be given to the dimensions in the result (the #' dimnames names). #' @param \dots optional arguments #' @return A list with multiple components including key table data and #' statistical test results, where performed. #' #' t: An n by m matrix containing table cell counts #' #' prop.col: An n by m matrix containing cell column proportions #' #' prop.row: An n by m matrix containing cell row proportions #' #' prop.tbl: An n by m matrix containing cell table proportions #' #' chisq: Results from the Chi-Square test. A list with class 'htest'. See #' ?chisq.test for details #' #' chisq.corr: Results from the corrected Chi-Square test. A list with class #' 'htest'. See ?chisq.test for details. ONLY included in the case of a 2 x 2 #' table. #' #' fisher.ts: Results from the two-sided Fisher Exact test. A list with class #' 'htest'. See ?fisher.test for details. ONLY included if 'fisher' = TRUE. #' #' fisher.lt: Results from the Fisher Exact test with HA = "less". A list with #' class 'htest'. See ?fisher.test for details. ONLY included if 'fisher' = #' TRUE and in the case of a 2 x 2 table. #' #' fisher.gt: Results from the Fisher Exact test with HA = "greater". A list #' with class 'htest'. See ?fisher.test for details. ONLY included if 'fisher' #' = TRUE and in the case of a 2 x 2 table. #' #' mcnemar: Results from the McNemar test. A list with class 'htest'. See #' ?mcnemar.test for details. ONLY included if 'mcnemar' = TRUE. #' #' mcnemar.corr: Results from the corrected McNemar test. A list with class #' 'htest'. See ?mcnemar.test for details. ONLY included if 'mcnemar' = TRUE #' and in the case of a 2 x 2 table. #' #' resid/sresid/asresid: Pearson Residuals (from chi-square tests). #' @author Marc Schwartz \email{marc_schwartz@@comcast.net}. Original version #' posted to r-devel on Jul 27, 2002. 
SPSS format modifications added by Nitin #' Jain based upon code provided by Dirk Enzmann #' \email{dirk.enzmann@@jura.uni-hamburg.de} #' @seealso [stats::xtabs()], [base::table()], [base::prop.table()] #' @keywords category univar #' @examples #' #' #' # Simple cross tabulation of education versus prior induced abortions #' # using infertility data #' data(infert, package = "datasets") #' CrossTable(infert$education, infert$induced, expected = TRUE) #' CrossTable(infert$education, infert$induced, expected = TRUE, format="SAS") #' CrossTable(infert$education, infert$induced, expected = TRUE, format="SPSS") #' CrossTable(warpbreaks$wool, warpbreaks$tension, dnn = c("Wool", "Tension")) #' #' @importFrom stats chisq.test #' @importFrom stats fisher.test #' @importFrom stats mcnemar.test #' #' @export CrossTable <- function (x, y, digits = 3, max.width = 5, expected = FALSE, prop.r = TRUE, prop.c = TRUE, prop.t = TRUE, prop.chisq = TRUE, chisq = FALSE, fisher = FALSE, mcnemar = FALSE, resid = FALSE, sresid = FALSE, asresid = FALSE, missing.include = FALSE, format = c("SAS", "SPSS"), dnn = NULL, ... ) { format = match.arg(format) RowData <- deparse(substitute(x)) if (!missing(y)) ColData <- deparse(substitute(y)) ## Ensure that max.width >= 1 if (max.width < 1) stop("max.width must be >= 1") ## Set 'x' vector flag vector.x <- FALSE ## Ensure that if (expected), a chisq is done if (expected) chisq <- TRUE if (missing(y)) { ## is x a vector? if (is.null(dim(x))) { if (missing.include) x <- factor(x,exclude=NULL) else ## Remove any unused factor levels x <- factor(x) t <- t(as.matrix(table(x))) vector.x <- TRUE } ## is x a matrix? else if (length(dim(x) == 2)) { if(any(x < 0) || any(is.na(x))) stop("all entries of x must be nonnegative and finite") ## Check to see if x has names(dimnames) defined. If yes, use these for ## 'RowData' and 'ColData' labels, else create blank ones ## This can be overridden by setting 'dnn' values if (is.null(names(dimnames(x)))) { RowData <- "" ColData <- "" } else { RowData <- names(dimnames(x))[1] ColData <- names(dimnames(x))[2] } ## Add generic column and rownames if required ## check each separately, in case user has defined one or the other if (is.null(rownames(x))) rownames(x) <- paste("[", 1:nrow(x), ",]", sep = "") if (is.null(colnames(x))) colnames(x) <- paste("[,", 1:ncol(x), "]", sep = "") t <- x } else stop("x must be either a vector or a 2 dimensional matrix, if y is not given") } else { if(length(x) != length(y)) stop("x and y must have the same length") if (missing.include) { x <- factor(x, exclude=c()) y <- factor(y, exclude=c()) } else { ## Remove unused factor levels from vectors x <- factor(x) y <- factor(y) } ## Generate table t <- table(x, y) } ## Create Titles for Table From Vector Names ## At least 2 x 2 table only (for now) if (all(dim(t) >= 2)) { if (!is.null(dnn)) { if (length(dnn) != 2) stop("dnn must have length of 2, one element for each table dimension") else { RowData <- dnn[1] ColData <- dnn[2] } } } ## if t is not at least a 2 x 2, do not do stats ## even if any set to TRUE. 
Do not do col/table props if (any(dim(t) < 2)) { prop.c <- prop.r <- prop.chisq <- chisq <- expected <- fisher <- mcnemar <- FALSE } ## Generate cell proportion of row CPR <- prop.table(t, 1) ## Generate cell proportion of col CPC <- prop.table(t, 2) ## Generate cell proportion of total CPT <- prop.table(t) ## Generate summary counts GT <- sum(t) RS <- rowSums(t) CS <- colSums(t) if (length(dim(x) == 2)) TotalN <- GT else TotalN <- length(x) ## Column and Row Total Headings ColTotal <- "Column Total" RowTotal <- "Row Total" ## Set consistent column widths based upon dimnames and table values CWidth <- max(digits + 2, c(nchar(t), nchar(dimnames(t)[[2]]), nchar(RS), nchar(CS), nchar(RowTotal))) RWidth <- max(c(nchar(dimnames(t)[[1]]), nchar(ColTotal))) ## Adjust first column width if Data Titles present if (exists("RowData")) RWidth <- max(RWidth, nchar(RowData)) ## Create row separators RowSep <- paste(rep("-", CWidth + 2), collapse = "") RowSep1 <- paste(rep("-", RWidth + 1), collapse = "") SpaceSep1 <- paste(rep(" ", RWidth), collapse = "") SpaceSep2 <- paste(rep(" ", CWidth), collapse = "") ## Create formatted Names FirstCol <- formatC(dimnames(t)[[1]], width = RWidth, format = "s") ColTotal <- formatC(ColTotal, width = RWidth, format = "s") RowTotal <- formatC(RowTotal, width = CWidth, format = "s") ## Perform Chi-Square Tests ## Needs to be before the table output, in case (expected = TRUE) if (chisq) { if (all(dim(t) == 2)) CSTc <- chisq.test(t, correct = TRUE, ...) CST <- chisq.test(t, correct = FALSE, ...) } else CST <- suppressWarnings(chisq.test(t, correct = FALSE)) if (asresid & !vector.x) ASR <- (CST$observed-CST$expected)/sqrt(CST$expected*((1-RS/GT) %*% t(1-CS/GT))) print.CrossTable.SAS <- function() { if (exists("RowData")) { cat(SpaceSep1, "|", ColData, "\n") cat(formatC(RowData, width = RWidth, format= "s"), formatC(dimnames(t)[[2]], width = CWidth, format = "s"), RowTotal, sep = " | ", collapse = "\n") } else cat(SpaceSep1, formatC(dimnames(t)[[2]], width = CWidth, format = "s"), RowTotal, sep = " | ", collapse = "\n") cat(RowSep1, rep(RowSep, ncol(t) + 1), sep = "|", collapse = "\n") ## Print table cells for (i in 1:nrow(t)) { cat(FirstCol[i], formatC(c(t[i, ], RS[i]), width = CWidth, format = "d"), sep = " | ", collapse = "\n") if (expected) cat(SpaceSep1, formatC(CST$expected[i, ], digits = digits, format = "f", width = CWidth), SpaceSep2, sep = " | ", collapse = "\n") if (prop.chisq) cat(SpaceSep1, formatC((((CST$expected[i, ]-t[i, ])^2)/CST$expected[i, ]), width = CWidth, digits = digits, format = "f"), SpaceSep2, sep = " | ", collapse = "\n") if (prop.r) cat(SpaceSep1, formatC(c(CPR[i, ], RS[i]/GT), width = CWidth, digits = digits, format = "f"), sep = " | ", collapse = "\n") if (prop.c) cat(SpaceSep1, formatC(CPC[i, ], width = CWidth, digits = digits, format = "f"), SpaceSep2, sep = " | ", collapse = "\n") if (prop.t) cat(SpaceSep1, formatC(CPT[i, ], width = CWidth, digits = digits, format = "f"), SpaceSep2, sep = " | ", collapse = "\n") cat(RowSep1, rep(RowSep, ncol(t) + 1), sep = "|", collapse = "\n") } ## Print Column Totals cat(ColTotal, formatC(c(CS, GT), width = CWidth, format = "d"), sep = " | ", collapse = "\n") if (prop.c) cat(SpaceSep1, formatC(CS/GT, width = CWidth, digits = digits, format = "f"), SpaceSep2, sep = " | ", collapse = "\n") cat(RowSep1, rep(RowSep, ncol(t) + 1), sep = "|", collapse = "\n") } ## End Of print.Crosstable.SAS function print.CrossTable.SPSS <- function() { ## similar to SPSS behaviour ## Print Column headings if 
(exists("RowData")) { cat(SpaceSep1, "|", ColData, "\n") cat(cat(formatC(RowData, width = RWidth, format = "s"),sep=" | ", collapse=""), cat(formatC(dimnames(t)[[2]], width = CWidth-1, format = "s"), sep=" | ", collapse=""), cat(RowTotal, sep = " | ", collapse = "\n"), sep="", collapse="") } else cat(SpaceSep1, formatC(dimnames(t)[[2]], width = CWidth, format = "s"), RowTotal, sep = " | ", collapse = "\n") cat(RowSep1, rep(RowSep, ncol(t) + 1), sep = "|", collapse = "\n") ## Print table cells for (i in 1:nrow(t)) { cat(cat(FirstCol[i], sep=" | ", collapse=""), cat(formatC(c(t[i, ], RS[i]), width = CWidth-1, format = "d"), sep = " | ", collapse = "\n"), sep="", collapse="") if (expected) cat(cat(SpaceSep1, sep=" | ", collapse=""), cat(formatC(CST$expected[i, ], digits = digits, format = "f", width = CWidth-1), sep=" | ", collapse=""), cat(SpaceSep2, sep = " | ", collapse = "\n"), sep="", collapse="") if (prop.chisq) cat(cat(SpaceSep1, sep=" | ", collapse=""), cat(formatC((((CST$expected[i, ]-t[i, ])^2)/CST$expected[i, ]), digits = digits, format = "f", width = CWidth-1), sep=" | ", collapse=""), cat(SpaceSep2, sep = " | ", collapse = "\n"), sep="", collapse="") if (prop.r) cat(cat(SpaceSep1, sep=" | ", collapse=""), cat(formatC(c(CPR[i, ]*100, 100*RS[i] / GT), width = CWidth-1, digits = digits, format = "f"), sep = "% | ", collapse = "\n"), sep="", collapse="") if (prop.c) cat(cat(SpaceSep1, sep=" | ", collapse=""), cat(formatC(CPC[i, ]*100, width = CWidth-1, digits = digits, format = "f"), sep="% | ", collapse=""), cat(SpaceSep2, sep = " | ", collapse = "\n"), sep="", collapse="") if (prop.t) cat(cat(SpaceSep1, sep=" | ", collapse=""), cat(formatC(CPT[i, ]*100, width = CWidth-1, digits = digits, format = "f"), sep="% | ", collapse=""), cat(SpaceSep2, sep = " | ", collapse = "\n"), sep="", collapse="") if (resid) cat(cat(SpaceSep1,sep=" | ",collapse = ""), cat(formatC(CST$observed[i, ]-CST$expected[i, ], digits = digits, format = "f", width = CWidth-1), sep = " | ", collapse = ""), cat(SpaceSep2,sep = " | ", collapse = "\n"),sep="",collapse="") if (sresid) cat(cat(SpaceSep1,sep=" | ",collapse = ""), cat(formatC(CST$residual[i, ], digits = digits, format = "f", width = CWidth-1), sep = " | ", collapse = ""), cat(SpaceSep2,sep = " | ", collapse = "\n"),sep="",collapse="") if (asresid) cat(cat(SpaceSep1,sep=" | ",collapse = ""), cat(formatC(ASR[i, ], digits = digits, format = "f", width = CWidth-1), sep = " | ", collapse = ""), cat(SpaceSep2,sep = " | ", collapse = "\n"),sep="",collapse="") cat(RowSep1, rep(RowSep, ncol(t) + 1), sep = "|", collapse = "\n") } ## Print Column Totals cat(cat(ColTotal,sep=" | ",collapse=""), cat(formatC(c(CS, GT), width = CWidth-1, format = "d"), sep = " | ", collapse = "\n"),sep="",collapse="") if (prop.c) cat(cat(SpaceSep1,sep=" | ",collapse=""), cat(formatC(100*CS/GT, width = CWidth-1, digits = digits, format = "f"),sep = "% | ", collapse = ""), cat(SpaceSep2,sep = " | ", collapse = "\n"),sep="",collapes="") cat(RowSep1, rep(RowSep, ncol(t) + 1), sep = "|", collapse = "\n") } ## End of print.CrossTable.SPSS function ## Print Function For 1 X N Vector In SAS Format print.CrossTable.vector.SAS <- function() { if (length(t) > max.width) { ## set breakpoints for output based upon max.width final.row <- length(t) %% max.width max <- length(t) - final.row ## Define breakpoint indices for each row start <- seq(1, max, max.width) end <- start + (max.width - 1) ## Add final.row if required if (final.row > 0) { start <- c(start, end[length(end)] + 1) end <- c(end, 
end[length(end)] + final.row) } } else { ## Each value printed horizontally in a single row start <- 1 end <- length(t) } SpaceSep3 <- paste(SpaceSep2, " ", sep = "") for (i in 1:length(start)) { ## print column labels cat(SpaceSep2, formatC(dimnames(t)[[2]][start[i]:end[i]], width = CWidth, format = "s"), sep = " | ", collapse = "\n") cat(SpaceSep3, rep(RowSep, (end[i] - start[i]) + 1), sep = "|", collapse = "\n") cat(SpaceSep2, formatC(t[, start[i]:end[i]], width = CWidth, format = "d"), sep = " | ", collapse = "\n") cat(SpaceSep2, formatC(CPT[, start[i]:end[i]], width = CWidth, digits = digits, format = "f"), sep = " | ", collapse = "\n") cat(SpaceSep3, rep(RowSep, (end[i] - start[i]) + 1), sep = "|", collapse = "\n") cat("\n\n") } } ## End of print.Crosstable.vector.SAS function ## Print function for 1 X N vector in SPSS format print.CrossTable.vector.SPSS <- function() { if (length(t) > max.width) { ## set breakpoints for output based upon max.width final.row <- length(t) %% max.width max <- length(t) - final.row ## Define breakpoint indices for each row start <- seq(1, max, max.width) end <- start + (max.width - 1) ## Add final.row if required if (final.row > 0) { start <- c(start, end[length(end)] + 1) end <- c(end, end[length(end)] + final.row) } } else { ## Each value printed horizontally in a single row start <- 1 end <- length(t) } SpaceSep3 <- paste(SpaceSep2, " ", sep = "") for (i in 1:length(start)) { cat(cat(SpaceSep2,sep=" | ",collapse=""), cat(formatC(dimnames(t)[[2]][start[i]:end[i]], width = CWidth-1, format = "s"), sep = " | ", collapse = "\n"), sep="",collapse="") cat(SpaceSep3, rep(RowSep, (end[i] - start[i]) + 1), sep = "|", collapse = "\n") cat(cat(SpaceSep2,sep=" | ",collapse=""), cat(formatC(t[, start[i]:end[i]], width = CWidth-1, format = "d"), sep = " | ", collapse = "\n"), sep="",collapse="") cat(cat(SpaceSep2, sep=" | ",collapse=""), cat(formatC(CPT[, start[i]:end[i]] * 100, width = CWidth-1, digits = digits, format = "f"), sep = "% | ", collapse = ""),sep="",collapse="\n") cat(SpaceSep3, rep(RowSep, (end[i] - start[i]) + 1), sep = "|", collapse = "\n") } ## End of for (i in 1:length(start)) if (GT < TotalN) cat("\nNumber of Missing Observations: ",TotalN-GT," (",100*(TotalN-GT)/TotalN,"%)\n",sep="") } ## End of print.CrossTable.vector.SPSS Function print.statistics <- function() { ## Print Statistics if (chisq) { cat(rep("\n", 2)) cat("Statistics for All Table Factors\n\n\n") cat(CST$method,"\n") cat("------------------------------------------------------------\n") cat("Chi^2 = ", CST$statistic, " d.f. = ", CST$parameter, " p = ", CST$p.value, "\n\n") if (all(dim(t) == 2)) { cat(CSTc$method,"\n") cat("------------------------------------------------------------\n") cat("Chi^2 = ", CSTc$statistic, " d.f. = ", CSTc$parameter, " p = ", CSTc$p.value, "\n") } } ## Perform McNemar tests if (mcnemar) { McN <- mcnemar.test(t, correct = FALSE) cat(rep("\n", 2)) cat(McN$method,"\n") cat("------------------------------------------------------------\n") cat("Chi^2 = ", McN$statistic, " d.f. = ", McN$parameter, " p = ", McN$p.value, "\n\n") if (all(dim(t) == 2)) { McNc <- mcnemar.test(t, correct = TRUE) cat(McNc$method,"\n") cat("------------------------------------------------------------\n") cat("Chi^2 = ", McNc$statistic, " d.f. 
= ", McNc$parameter, " p = ", McNc$p.value, "\n") } } ## Perform Fisher Tests if (fisher) { cat(rep("\n", 2)) FTt <- fisher.test(t, alternative = "two.sided") if (all(dim(t) == 2)) { FTl <- fisher.test(t, alternative = "less") FTg <- fisher.test(t, alternative = "greater") } cat("Fisher's Exact Test for Count Data\n") cat("------------------------------------------------------------\n") if (all(dim(t) == 2)) { cat("Sample estimate odds ratio: ", FTt$estimate, "\n\n") cat("Alternative hypothesis: true odds ratio is not equal to 1\n") cat("p = ", FTt$p.value, "\n") cat("95% confidence interval: ", FTt$conf.int, "\n\n") cat("Alternative hypothesis: true odds ratio is less than 1\n") cat("p = ", FTl$p.value, "\n") cat("95% confidence interval: ", FTl$conf.int, "\n\n") cat("Alternative hypothesis: true odds ratio is greater than 1\n") cat("p = ", FTg$p.value, "\n") cat("95% confidence interval: ", FTg$conf.int, "\n\n") } else { cat("Alternative hypothesis: two.sided\n") cat("p = ", FTt$p.value, "\n") } } ## End Of If(Fisher) Loop cat(rep("\n", 2)) ## Create list of results for invisible() CT <- list(t = t, prop.row = CPR, prop.col = CPC, prop.tbl = CPT) if (any(chisq, fisher, mcnemar)) { if (all(dim(t) == 2)) { if (chisq) CT <- c(CT, list(chisq = CST, chisq.corr = CSTc)) if (fisher) CT <- c(CT, list(fisher.ts = FTt, fisher.tl = FTl, fisher.gt = FTg)) if (mcnemar) CT <- c(CT, list(mcnemar = McN, mcnemar.corr = McNc)) } else { if (chisq) CT <- c(CT, list(chisq = CST)) if (fisher) CT <- c(CT, list(fisher.ts = FTt)) if (mcnemar) CT <- c(CT, list(mcnemar = McN)) } } ## End of if(any(chisq, fisher, mcnemar)) loop ## return list(CT) invisible(CT) } ## End of print.statistics function ## Printing the tables if (format=="SAS") { ## Print Cell Layout cat(rep("\n", 2)) cat(" Cell Contents\n") cat("|-------------------------|\n") cat("| N |\n") if (expected) cat("| Expected N |\n") if (prop.chisq) cat("| Chi-square contribution |\n") if (prop.r) cat("| N / Row Total |\n") if (prop.c) cat("| N / Col Total |\n") if (prop.t) cat("| N / Table Total |\n") cat("|-------------------------|\n") cat(rep("\n", 2)) cat("Total Observations in Table: ", GT, "\n") cat(rep("\n", 2)) if (!vector.x) print.CrossTable.SAS() else print.CrossTable.vector.SAS() print.statistics() } else if (format == "SPSS") { ## Print Cell Layout cat("\n") cat(" Cell Contents\n") cat("|-------------------------|\n") cat("| Count |\n") if (!vector.x) { if (expected) cat("| Expected Values |\n") if (prop.chisq) cat("| Chi-square contribution |\n") if (prop.r) cat("| Row Percent |\n") if (prop.c) cat("| Column Percent |\n") if (prop.t) cat("| Total Percent |\n") if (resid) cat("| Residual |\n") if (sresid) cat("| Std Residual |\n") if (asresid) cat("| Adj Std Resid |\n") } else cat("| Row Percent |\n") cat("|-------------------------|\n") cat("\n") cat("Total Observations in Table: ", GT, "\n") cat("\n") if (!vector.x) print.CrossTable.SPSS() else print.CrossTable.vector.SPSS() print.statistics() if (any(dim(t) >= 2) & any(chisq,mcnemar,fisher)) { MinExpF = min(CST$expected) cat(' Minimum expected frequency:',MinExpF,"\n") NMinExpF = length(CST$expected[which(CST$expected<5)]) if (NMinExpF > 0) { NCells = length(CST$expected) cat('Cells with Expected Frequency < 5: ',NMinExpF,' of ',NCells," (",100*NMinExpF/NCells,"%)\n",sep="") } cat("\n") } ## End of if (any(dim(t)... 
  } ## End of if(format=="SPSS") loop
  else stop("unknown format")
} ## End of the main function Crosstable.R
gmodels/R/percentile.R0000644000176200001440000000027514571426722014400 0ustar liggesuserspercentile <- function(x, distn, include.observed=FALSE) {
  if(include.observed) distn <- c(x, distn)
  n <- length(distn)
  return(findInterval(x, distn[order(distn)]) / n)
}
gmodels/R/ci.R0000644000176200001440000001313114571436120012625 0ustar liggesusers#' Compute Confidence Intervals
#'
#' Compute and display confidence intervals for model estimates.  Methods are
#' provided for the mean of a numeric vector (`ci.numeric`), the probability
#' of a binomial vector (`ci.binom`), and for `lm` and `lme` model objects.
#'
#'
#' @aliases ci ci.numeric ci.binom ci.lm ci.lme ci.estimable ci.fit_contrast
#' @param x object from which to compute confidence intervals.
#' @param confidence confidence level. Defaults to 0.95.
#' @param alpha type one error rate. Defaults to 1.0-`confidence`
#' @param \dots Arguments for methods
#' @return vector or matrix with one row per model parameter and
#'   elements/columns `Estimate`, `CI lower`, `CI upper`,
#'   `Std. Error`, `DF` (for lme objects only), and `p-value`.
#' @author Gregory R. Warnes \email{greg@@warnes.net}
#' @seealso [stats::confint()], [stats::lm()],
#'   [stats::summary.lm()]
#' @keywords regression
#'
#' @examples
#'
#'
#' # mean and confidence interval
#' ci( rnorm(10) )
#'
#' # binomial proportion and exact confidence interval
#' b <- rbinom( prob=0.75, size=1, n=20 )
#' ci.binom(b) # direct call
#' class(b) <- 'binom'
#' ci(b)       # indirect call
#'
#' # confidence intervals for regression parameters
#' data(state)
#' reg <- lm(Area ~ Population, data=as.data.frame(state.x77))
#' ci(reg)
#'
#' @export
ci <- function(x, confidence=0.95, alpha=1-confidence, ...)
  UseMethod("ci")

#' @rdname ci
#' @param na.rm `logical` indicating whether missing values should be removed.
#' @exportS3Method gmodels::ci
#' @importFrom stats qt
#' @importFrom stats sd
ci.numeric <- function(x, confidence=0.95, alpha=1-confidence, na.rm=FALSE, ...)
{
  warning("No class or unknown class. Using default calculation.")

  est     <- mean(x, na.rm=na.rm)
  stderr  <- sd(x, na.rm=na.rm)/sqrt(nobs(x));
  ci.low  <- est + qt(alpha/2, nobs(x)-1) * stderr
  ci.high <- est - qt(alpha/2, nobs(x)-1) * stderr

  retval <- c(Estimate=est,
              "CI lower"=ci.low,
              "CI upper"=ci.high,
              "Std. Error"=stderr)
  retval
}

#' @export ci.binom
#' @exportS3Method gmodels::ci
#' @importFrom gdata nobs
#' @importFrom stats qbeta
#' @importFrom stats qt
ci.binom <- function(x, confidence=0.95, alpha=1-confidence, ...)
{
  if( !(all(x %in% c(0,1))) )
    stop("Binomial values must be either 0 or 1.")

  if( all(x==0) || all(x==1) )
    warning("All observed values are ", as.numeric(x[1]),
            ", so estimated Std. Error is 0.")

  est    <- mean(x, na.rm=TRUE)
  n      <- nobs(x)
  x      <- sum(x)
  stderr <- sqrt(est*(1-est)/n)

  ci.low  <- qbeta(    alpha/2, x,   n + 1 - x)
  ci.high <- qbeta(1 - alpha/2, x+1, n - x    )

  retval <- cbind(Estimate=est,
                  "CI lower"=ci.low,
                  "CI upper"=ci.high,
                  "Std. Error"=stderr)
  retval
}

#' @exportS3Method gmodels::ci
#' @importFrom stats coef
ci.lm <- function(x, confidence=0.95, alpha=1-confidence, ...)
{
  x <- summary(x)
  est     <- coef(x)[,1] ;
  ci.low  <- est + qt(alpha/2, x$df[2]) * coef(x)[,2] ;
  ci.high <- est - qt(alpha/2, x$df[2]) * coef(x)[,2] ;

  retval <- cbind(Estimate=est,
                  "CI lower"=ci.low,
                  "CI upper"=ci.high,
                  "Std. Error"=coef(x)[,2],
                  "p-value"=coef(x)[,4])
  retval
}

#' @exportS3Method gmodels::ci
#' @importFrom stats qt
ci.lme <- function(x, confidence=0.95, alpha=1-confidence, ...)
{
  x <- summary(x)
  est     <- x$tTable[,"Value"] ;
  ci.low  <- est + qt(alpha/2, x$tTable[,"DF"]) * x$tTable[,"Std.Error"] ;
  ci.high <- est - qt(alpha/2, x$tTable[,"DF"]) * x$tTable[,"Std.Error"] ;

  retval <- cbind(Estimate=est,
                  "CI lower"=ci.low,
                  "CI upper"=ci.high,
                  "Std. Error"=x$tTable[,"Std.Error"],
                  "DF"=x$tTable[,"DF"],
                  "p-value"=x$tTable[,"p-value"])
  rownames(retval) <- rownames(x$tTable)
  retval
}

## ci.mer <- function (x,
##                     confidence = 0.95,
##                     alpha = 1 - confidence,
##                     n.sim = 1e4,
##                     ...)
## {
##   x.effects <- x@fixef
##   n <- length(x.effects)
##   retval <- gmodels::est.mer(obj = x,
##                              cm = diag(n),
##                              beta0 = rep(0, n),
##                              conf.int = confidence,
##                              show.beta0 = FALSE,
##                              n.sim = n.sim)
##   retval <- retval[,
##                    c("Estimate", "Lower.CI", "Upper.CI", "Std. Error", "p value"),
##                    drop=FALSE
##                    ]
##   colnames(retval)[c(2:3, 5)] <- c("CI lower", "CI upper", "p-value")
##   rownames(retval) <- names(x.effects)
##   retval
## }

#' @exportS3Method gmodels::ci
#' @importFrom stats qt
ci.estimable <- function(x, confidence=0.95, alpha=1-confidence, ...)
{
  ci.low  <- x$Estimate + qt(alpha/2, x$DF) * x$"Std. Error"
  ci.high <- x$Estimate - qt(alpha/2, x$DF) * x$"Std. Error"
  retval <- cbind(Estimate=x$Estimate,
                  "CI lower"=ci.low,
                  "CI upper"=ci.high,
                  "Std. Error"=x$"Std. Error",
                  "p-value"=x$"Pr(>|t|)")
  rownames(retval) <- rownames(x)
  retval
}

#' @exportS3Method gmodels::ci
ci.fit_contrast <- function(x, confidence = 0.95, alpha = 1 - confidence, ...)
{
  if( !all(c("lower CI", "upper CI") %in% colnames(x)) )
    stop("object does not contain confidence interval information.")
  colnames(x) <- c("Estimate", "Std. Error", "Delete", "p-value",
                   "CI lower", "CI upper")
  x[, c(1, 5:6, 2, 4), drop=FALSE]
}
gmodels/R/fast.prcomp.R0000644000176200001440000000660714571436625014502 0ustar liggesusers#' Efficient computation of principal components and singular value
#' decompositions.
#'
#' The standard [stats::prcomp()] and [svd()] functions are
#' very inefficient for wide matrices. `fast.prcomp` and `fast.svd`
#' are modified versions which are efficient even for matrices that are very
#' wide.
#'
#' The current implementation of the function [svd()] in S-Plus and R
#' is much slower when operating on a matrix with a large number of columns
#' than on the transpose of this matrix, which has a large number of rows. As a
#' consequence, [stats::prcomp()], which uses [svd()], is
#' also very slow when applied to matrices with a large number of columns.
#'
#' The simple solution is to use [La.svd()] instead of
#' [svd()]. A suitable patch to [stats::prcomp()] has been
#' submitted. In the meantime, the function `fast.prcomp` has been
#' provided as a short-term work-around.
#'
#' \describe{
#'   \item{\code{fast.prcomp}}{is a modified version of
#'     [stats::prcomp()] that calls [La.svd()] instead of
#'     [svd()] }
#'   \item{\code{fast.svd}}{is simply a wrapper around
#'     [La.svd()]. }
#' }
#'
#' @aliases fast.prcomp fast.svd
#'
#' @param x data matrix
#' @inheritParams stats::prcomp
#' @inheritParams base::svd
#' @return See the documentation for [stats::prcomp()] or
#'   [svd()].
#' @author Modifications by Gregory R. Warnes \email{greg@@warnes.net}
#' @seealso [stats::prcomp()], [base::svd()], [base::La.svd()]
#' @keywords multivariate algebra array
#' @examples
#'
#'
#' # create test matrix
#' set.seed(4943546)
#' nr <- 50
#' nc <- 2000
#' x  <- matrix( rnorm( nr*nc), nrow=nr, ncol=nc )
#' tx <- t(x)
#'
#' # SVD directly on matrix is SLOW:
#' system.time( val.x <- svd(x)$u )
#'
#' # SVD on t(matrix) is FAST:
#' system.time( val.tx <- svd(tx)$v )
#'
#' # and the results are equivalent:
#' max( abs(val.x) - abs(val.tx) )
#'
#' # Time gap disappears using fast.svd:
#' system.time( val.x <- fast.svd(x)$u )
#' system.time( val.tx <- fast.svd(tx)$v )
#' max( abs(val.x) - abs(val.tx) )
#'
#'
#' library(stats)
#'
#' # prcomp directly on matrix is SLOW:
#' system.time( pr.x <- prcomp(x) )
#'
#' # fast.prcomp is much faster
#' system.time( fast.pr.x <- fast.prcomp(x) )
#'
#' # and the results are equivalent
#' max( pr.x$sdev - fast.pr.x$sdev )
#' max( abs(pr.x$rotation[,1:49]) - abs(fast.pr.x$rotation[,1:49]) )
#' max( abs(pr.x$x) - abs(fast.pr.x$x) )
#'
#' # (except for the last and least significant component):
#' max( abs(pr.x$rotation[,50]) - abs(fast.pr.x$rotation[,50]) )
#'
#' @export
fast.prcomp <- function (x, retx = TRUE, center = TRUE, scale. = FALSE, tol = NULL)
{
  x <- as.matrix(x)
  x <- scale(x, center = center, scale = scale.)
  s <- La.svd(x, nu = 0)

  if (!is.null(tol)) {
    rank <- sum(s$d > (s$d[1] * tol))
    if (rank < ncol(x))
      s$vt <- s$vt[, 1:rank, drop = FALSE]
  }

  s$d <- s$d/sqrt(max(1, nrow(x) - 1))
  dimnames(s$vt) <- list(paste("PC", seq(len = nrow(s$vt)), sep = ""),
                         colnames(x) )

  r <- list(sdev = s$d, rotation = t(s$vt) )
  if (retx)
    r$x <- x %*% t(s$vt)
  class(r) <- "prcomp"
  r
}

#' @export
fast.svd <- function( x, nu = min(n, p), nv = min(n, p), ...)
{
  x  <- as.matrix(x)
  dx <- dim(x)
  n  <- dx[1]
  p  <- dx[2]

  retval <- La.svd(x, nu=nu, nv=nv, ... )
  retval$v  <- t(retval$vt)
  retval$vt <- NULL

  retval
}
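
## Illustrative sketch (kept as comments; `m` is a hypothetical wide matrix):
## fast.svd() returns `u`, `d`, and `v` (rather than `vt`), so the input can
## be reconstructed as u %*% diag(d) %*% t(v) -- a quick sanity check against
## base::svd().
##
##   m <- matrix(rnorm(50 * 2000), nrow = 50)
##   s <- fast.svd(m)
##   max(abs(m - s$u %*% diag(s$d) %*% t(s$v)))   # should be ~1e-12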
gmodels/NEWS.md0000644000176200001440000000747214571441201013013 0ustar liggesusers# gmodels Version 2.19.0 - 2024-04-04

New features:

- Expose (and fix) `est_p_ci` which formats the estimate, p-value, and
  confidence interval into text.

Other Changes:

- Upgrade to current package development tooling, including:
  - Use `roxygen2` for documentation and NAMESPACE
  - Use `NEWS.md` instead of `NEWS`
- Add additional contributors to authors list and use new format.

# gmodels Version 2.18.1 - 2018-06-25

Other Changes:

- Remove soft-links for NEWS and ChangeLog files for platform portability.

# gmodels Version 2.18.0 - 2018-06-19

Bug fixes:

- ci.binom() was using an incorrect method for calculating binomial
  confidence intervals.  It now calculates the Clopper-Pearson 'exact'
  interval, which is *conservative* due to the discrete nature of the
  binomial distribution.

Other Changes:

- Support for lme4 objects has been removed due to incompatible changes
  to the lme4 package.

# gmodels Version 2.16.0 - 2014-07-24

New features:

- The estimable() function now returns objects that are of class 'estimable'.
- The confidence interval function ci() now has a method for 'estimable'
  objects, with the same layout as for 'lm' objects, making it easier to
  combine confidence information about model parameters and estimable
  functions into a single table.

# gmodels Version 2.15.5 - 2013-07-18

Bug fixes:

- Correct error in estimable.mlm() that caused it to always fail.  Added
  test code to prevent future issues.

Other Changes:

- Update man page file for ci() to current Rd syntax.
- Remove unused argument to ci.mer()

# gmodels Version 2.15.3 - 2012-06-27

Bug fixes:

- Update est.mer() to work with "mer" object changes introduced in lme4
  version 0.999999-0.

# gmodels Version 2.15.2 - 2012-04-19

Bug fixes:

- Update est.mer() to work with recent versions of lme4 which changed
  'mer' objects from S3 to S4 class
- Changes to pass new R CMD check tests
- The 'Design' package has been replaced by 'rms', so update man page
  references.

# gmodels Version 2.15.1 - 2011-01-16

Bug fixes:

- Fix warnings reported by new versions of R CMD check.

# gmodels Version 2.15.0

New features:

- Add support for 'mer' model from lme4.

Bug fixes:

- Correct several minor .Rd syntax errors
- Move extra copyright text to Author field instead of License field.

# gmodels Version 2.14.1

New features:

- Add support for 'lme' objects to estimable().

Other:

- Fix minor typos in manual page for estimable().

# gmodels Version 2.14.0

New Features:

- Add support for 'mlm' objects to estimable

# gmodels Version 2.13.2

Bug Fixes:

- Lower and upper end of confidence interval for lmer objects were
  reversed in est.lmer().
- Correct Greg's email address in two help files.

# gmodels Version 2.13.1

Bug Fixes:

- Problem: R CMD check errors under development version of R 2.5.0

  Solution:
  - Add additional packages to 'Suggests' list in DESCRIPTION
  - Remove extra trailing comma in function calls
  - fix various code/doc inconsistencies

- Problem: estimable() was failing for lmer objects.

  Solution:
  - Create a generic estimable()
  - Move old function to estimable.default()
  - Add estimable.lmer() to the exported methods list in NAMESPACE

# gmodels Version 2.12.0

- Updated Greg's email address.
- Add support for lmer (lme version 4) objects to ci(), estimable(), and
  fit.contrast() via code contributed by Randall C Johnson.
- Add simplified coefficient specification to estimable() based on a
  function provided by Randall C Johnson.  It is now possible to do
  things like:

      estimable(reg, c("xB"=1,"xD"=-1))

  instead of:

      estimable(reg, c( 0, 1, 0, -1))

  which should make estimable() much easier to use for large models.
# gmodels Version 2.0.8 - Added DESCRIPTION and removed DESCRIPTION.in - Updated CrossTable.R - Updated NAMESPACE file gmodels/MD50000644000176200001440000000267714572111402012225 0ustar liggesusers6c30613b05ff46b1df9f94bd953eeea3 *ChangeLog 18bde372d731280eff18fc76db743d13 *DESCRIPTION 57856b3a715cf54a8235789d4fa3d28d *NAMESPACE 36aa96007fa06e6df8fc7eb423c9ae0e *NEWS.md da5e06a5433435bde04fc4617c619c3a *R/CrossTable.R 1f35927c6b533e164b388d25265c4c7e *R/ci.R e5fbb79926ded9f7221e0ae8c004eaf6 *R/coefFrame.R 0a3fd5f6f75dc0e5fdb488543f51eb34 *R/est.mer.R ea799ca87043ba43af1b59c08b59ac53 *R/est_p_ci.R e057d914c20db6bdddd04659a33e38b6 *R/estimable.R 6b5def6ee03ddb8f613314b1a0b549b4 *R/estimable.mlm.R 16f143fd507ea5784d1bad7262560b49 *R/fast.prcomp.R 55db18fabe348b14bef5a96afc9c6c22 *R/fit.contrast.R 0aeb67c11a5eeaf1df67fd693907593a *R/glh.test.R cc3813c3198d2ded279896553fc78409 *R/make.contrasts.R 1ebfe9012f37f1bdd775aa40a32dea12 *R/percentile.R f5555ccdfaeed202d401f60e3ce9a92b *R/to.est.R aa2b2fc110d232ae5b49f2a13a89164a *README.md 8331acea54c7822bfde0013ae064ee5e *man/CrossTable.Rd bd59ea7b7d51d139f4c651a17e280a32 *man/ci.Rd 910a6f6723ec8a0740d4464491b55951 *man/coefFrame.Rd ec1d0dc5579ff99d1d39be693389135b *man/dot-to.est.Rd 79c81ccfd888bdcdfacd03250d43b89a *man/est_p_ci.Rd 1b01d30294a6558cbbd1139b35297716 *man/estimable.Rd bf6c44617a5f39586b8f68ec1f4438e9 *man/fast.prcomp.Rd b5e61fcbfc125b9a5d040ac85f9adb7a *man/fit.contrast.Rd 71c20fdcd615270369176a62e24055c3 *man/glh.test.Rd 67713e056eaca60cd8f49eb2d338c1c7 *man/make.contrasts.Rd f955cb62ce81d1888caf1bba99d9494c *tests/lme-test.R 12fda76476e1514590cd3e03a9547fc9 *tests/test_estimable_mlm.R