Mixed-Effects Models in S and S-PLUS


To Elisa and Laura. To Mary Ellen, Barbara, and Michael.

Preface: Mixed-effects models provide a flexible and powerful tool for the analysis of grouped data. The book covers linear mixed-effects models, including theory and computational methods, and nonlinear mixed-effects models, starting from basic concepts and motivating examples.



The nlme library we developed for analyzing mixed-effects models in implementations of the S language, including S-PLUS and R, provides the underlying software. It comes with a number of online manuals in PDF format. In the current version of S-PLUS, both linear and nonlinear mixed-effects models can be fitted.

In our own experience we encountered the need for these models when analyzing HIV viral load data, where observations may fall below (left-censored) or above (right-censored) the limit of quantitation of the assay; see Saitoh et al. Their MCEM approach improves the simulation at the E-step and the numeric implementation at the M-step, and includes automatic monitoring and stopping of the algorithm.

However, by its nature MCEM is an expensive proposition, since it combines Monte Carlo simulation with an iterative procedure (Ruppert). Even when it computes estimates of satisfactory precision in tens of seconds, this is still too slow for routine use, as in simulations, or as part of more complex statistical procedures.

We show that the E-step reduces to computing the first two moments of certain truncated multivariate normal distributions. The general formulas for these moments were derived by Tallis and Finney. The likelihood function is easily computed as a by-product of the E-step and is used for monitoring convergence and for model selection (AIC, likelihood ratio tests).
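The moment computations at the heart of the E-step can be illustrated in the univariate case. The sketch below is in Python rather than S, and the helper name `truncated_moments` is hypothetical; scipy supplies the truncated-normal distribution directly:

```python
from scipy.stats import truncnorm, norm

# First two moments of a normal variate truncated to (a, b), as needed
# in the E-step for a censored observation. mu and sigma are the
# (conditional) mean and standard deviation; a and b the censoring bounds.
def truncated_moments(mu, sigma, a, b):
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    dist = truncnorm(alpha, beta, loc=mu, scale=sigma)
    return dist.mean(), dist.var() + dist.mean() ** 2  # E[X], E[X^2]

# Left-censored at a limit of quantitation of 1.0: the law of X | X < 1
m1, m2 = truncated_moments(mu=2.0, sigma=1.0, a=-float("inf"), b=1.0)
```

For left censoring the mean agrees with the classical closed form mu - sigma * phi(beta) / Phi(beta); the multivariate formulas of Tallis generalize this recursion.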

In contrast with the existing literature, we give here explicit derivations for a wide class of mixed-effects models with censored response, including the Laird–Ware model and extensions to different structures for the variance components, heteroscedastic and autocorrelated errors, and multilevel models.

Estimation methods for LME models. A general formulation of LME models is presented and illustrated with examples. Thorough theoretical treatments can be found, for example, in Searle, Casella, and McCulloch, or in Vonesh and Chinchilli; it is not the purpose of this chapter to present a complete theoretical description of LME models.

When computing with the model it is more convenient to express the variance–covariance matrix of the random effects in terms of a relative precision factor. For the LME model this factor, which we denote Δ, satisfies Δ'Δ = σ²Ψ⁻¹; one convenient choice is a Cholesky factor (Thisted). We use some of the examples in Chapter 1 to illustrate the general LME model formulation; in these examples the response vector y_ij has length n_ij. Those familiar with the multilevel modeling literature (Bryk and Raudenbush) will recognize this formulation.
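The relative precision factor can be computed numerically. Below is a minimal Python sketch rather than S; the toy values of `Psi` and `sigma2` are hypothetical, not taken from the book:

```python
import numpy as np

# The relative precision factor Delta satisfies
#   Delta' Delta = sigma^2 * Psi^{-1},
# where Psi is the random-effects covariance and sigma^2 the error
# variance. One convenient choice is a Cholesky factor.
sigma2 = 0.25
Psi = np.array([[2.0, 0.5],
                [0.5, 1.0]])            # must be positive definite

precision = sigma2 * np.linalg.inv(Psi)
Delta = np.linalg.cholesky(precision).T  # upper-triangular factor

# check: Delta' Delta reconstructs sigma^2 * Psi^{-1}
assert np.allclose(Delta.T @ Delta, precision)
```

Any matrix square root would do; the triangular Cholesky factor is convenient because its determinant is the product of its diagonal elements.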

In that literature this model is known as a hierarchical linear model. We will concentrate on two general estimation methods: maximum likelihood (ML) and restricted maximum likelihood (REML). The likelihood function is that of the N-dimensional response vector y, with the model matrices for groups i = 1, ..., M forming part of the model specification. Descriptions and comparisons of the various estimation methods used for LME models can be found in the references above.

Substituting this into the likelihood, the exponent in the integral decomposes into two additive terms. Integrating the exponential of the second term yields a closed form, and substituting the resulting conditional estimates back into the expression profiles out the fixed effects. The augmented model matrix Xe is sparse and can be very large.

If possible we want to take advantage of the sparsity and avoid working directly with Xe; we only need to know the norm of the residual from the augmented least-squares problem.
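One way to exploit the sparsity is an iterative sparse solver that reports the residual norm directly, never forming a dense matrix. A hedged Python sketch using scipy's `lsqr` (the toy matrices are made up for illustration):

```python
import numpy as np
from scipy.sparse import csr_matrix, vstack
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = rng.standard_normal(50)
Delta = np.eye(3)                      # toy relative precision factor

# Augmented ("pseudo-data") least squares: append the Delta rows with
# zero responses, stored sparsely.
Xe = vstack([csr_matrix(X), csr_matrix(Delta)])
ye = np.concatenate([y, np.zeros(3)])

sol = lsqr(Xe, ye)
beta, resid_norm = sol[0], sol[3]      # lsqr returns ||residual|| directly

# Same norm from a dense solve, for comparison
beta_d, *_ = np.linalg.lstsq(np.vstack([X, Delta]), ye, rcond=None)
dense_norm = np.linalg.norm(ye - np.vstack([X, Delta]) @ beta_d)
assert abs(resid_norm - dense_norm) < 1e-6
```

The point of the sketch is that the residual norm, which is all the profiled log-likelihood needs, comes out of the solver for free.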

Because these are sums of two independent multivariate normal random vectors, their marginal distribution is again multivariate normal. We present these expressions for completeness only; we prefer to use the expressions from the pseudo-data representation for computation, although we will not use this representation extensively in this chapter (Kennedy and Gentle). It then follows that the augmented least-squares problem can be solved through an orthogonal-triangular (QR) decomposition. The S function qr is used to create a QR decomposition from a matrix; see Dongarra, Bunch, Moler, and Stewart (1979, Chapter 9) for details. An important property of orthogonal matrices is that they preserve norms of vectors under multiplication, either by Q or by Q'.

The components of the decomposition are extracted with qr.R(Xqr) and qr.Q(Xqr). Because X has rank p, the triangular factor R11 is nonsingular, so the determinants appearing in the likelihood can be evaluated directly. Returning to the integral, if we determine the least-squares solution this way then, because Qe is orthogonal, the norm of the residual is preserved. In the next section we examine these terms in detail: the decomposition shows that the profiled log-likelihood consists of three additive components, and plots of these components reveal distinct patterns.
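The norm-preservation property and the QR route to the residual norm can be checked numerically. A short Python sketch in which numpy stands in for the S functions qr, qr.Q, and qr.R (toy data):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 4))
y = rng.standard_normal(20)

# Full QR: Q is 20x20 orthogonal, R is 20x4 with upper-triangular top
# block R11 (the analogue of qr.Q(Xqr) and qr.R(Xqr) in S).
Q, R = np.linalg.qr(X, mode="complete")

# Orthogonal matrices preserve norms: ||Q'y|| = ||y||
assert np.isclose(np.linalg.norm(Q.T @ y), np.linalg.norm(y))

# Least squares via QR: solve R11 beta = c1; the residual norm is ||c2||
c = Q.T @ y
beta = np.linalg.solve(R[:4, :], c[:4])
resid_norm = np.linalg.norm(c[4:])
assert np.isclose(resid_norm, np.linalg.norm(y - X @ beta))
```

Because R11 is triangular, its log-determinant is just the sum of the logs of its diagonal, which is why the decomposition makes the determinant terms of the log-likelihood cheap.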

In the general case this is the sum of the logarithms of the ratios of determinants, while the log of the norm of the residual is an increasing sigmoidal function.

Two of the components of the log-likelihood are examined below.


In some cases this does not occur: both the log of the ratio of determinants term and the log of the norm of the penalized residual term depend on the variance parameters, and for some data sets one of them dominates. We use the two-level LME model to illustrate the basic steps in the derivation of the multilevel likelihood function: the decomposition allows us to evaluate the inner integrals, and to evaluate the outer integrals we iterate this process.

The arrays in the second row of the decomposition, R11(ij) among them, are combined using the matrices and vectors produced at the first level, and the outer integral is then evaluated in the same way. If A is the matrix exponential of B, then a symmetric B yields a symmetric positive-definite A, giving an unconstrained parameterization of a variance–covariance matrix. The Newton–Raphson algorithm (Thisted) requires, at each iteration, the calculation of the score function and its derivative. Because the calculation of the Hessian matrix at each iteration may be computationally expensive, the EM algorithm (Dempster, Laird, and Rubin; see also Lindstrom and Bates) is an attractive alternative.
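The matrix-exponential parameterization can be sketched numerically. The Python example below uses scipy's `expm` and `logm`; the toy matrix B is hypothetical:

```python
import numpy as np
from scipy.linalg import expm, logm

# Any symmetric B maps to a symmetric positive-definite A = expm(B),
# giving an unconstrained parameterization of a variance-covariance
# matrix: the optimizer can vary the entries of B freely.
B = np.array([[0.3, 0.4],
              [0.4, -0.2]])            # symmetric, otherwise unrestricted
A = expm(B)

assert np.allclose(A, A.T)                        # symmetric
assert np.all(np.linalg.eigvalsh(A) > 0)          # positive definite
assert np.allclose(np.real(logm(A)), B, atol=1e-8)  # the map is invertible
```

The inverse map (the matrix logarithm) recovers B exactly, which is what makes the parameterization nonredundant.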

This gives a nonredundant parameterization. At iteration w we use the current variance–covariance parameter vector to compute the conditional expectation; because we are taking an expectation, each iteration of the EM algorithm results in an increase in the log-likelihood function. By default, 25 EM iterations are performed before switching to Newton–Raphson iterations.

This is done by including an optional control argument in the call to lme.

Although the EM iterations generally bring the parameters into the region of the optimum very quickly, final convergence is faster with Newton–Raphson iterations. The lme function implements such a hybrid optimization scheme: individual iterations of the EM algorithm are quickly and easily computed, while the Newton–Raphson steps refine the estimates near the optimum. The approximate variance–covariance matrix for the maximum likelihood estimates is given by the inverse of the information matrix (Cox). Because most optimization algorithms are designed to minimize rather than maximize a function of the parameters, the negative log-likelihood is used in practice.
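The Newton–Raphson step itself is easy to illustrate on a one-parameter problem. A toy Python sketch, assuming an iid normal sample with known mean as a stand-in for the variance-parameter update (not the actual lme implementation):

```python
import numpy as np

# Newton-Raphson on a log-likelihood: estimate theta = log(sigma^2) for
# an iid N(0, sigma^2) sample. The log-likelihood, up to a constant, is
#   l(theta) = -n/2 * theta - S / (2 * exp(theta)),  S = sum(y^2),
# which is concave in theta, so Newton-Raphson converges reliably.
rng = np.random.default_rng(2)
y = rng.standard_normal(200) * 1.5
n, S = len(y), float(np.sum(y ** 2))

theta = 0.0                             # starting value
for _ in range(50):
    score = -n / 2 + S / (2 * np.exp(theta))
    hess = -S / (2 * np.exp(theta))     # derivative of the score
    step = score / hess
    theta -= step
    if abs(step) < 1e-10:
        break

sigma2_hat = np.exp(theta)
assert np.isclose(sigma2_hat, S / n)    # matches the closed-form MLE
```

Working on the log scale keeps the variance positive without constraints, mirroring the unconstrained parameterizations discussed above.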

If we eliminate the EM iterations altogether with another control argument, the optimization proceeds by Newton–Raphson alone. As shown by Pinheiro, the empirical information matrix, denoted I, can be used in place of the expected information. One statistical model is said to be nested within another model if it represents a special case of the other model.

The column labelled df is the number of parameters in each model. If k_i is the number of parameters to be estimated in model i, and L2 is the likelihood of the more general model, the likelihood ratio test statistic is twice the difference in log-likelihoods, referred to a chi-squared distribution with k2 - k1 degrees of freedom. In Chapter 1 we show several examples of likelihood ratio tests performed with the anova function.

The asymptotic results for likelihood ratio tests have to be adjusted when the parameter being tested lies on the boundary of the parameter space, as Stram and Lee explain.

Simulating Likelihood Ratio Test Statistics

One way to check on the distribution of the likelihood ratio test statistic under the null hypothesis is through simulation.

The simulate.lme function can be used for this. The two models may be given as lme objects. To simulate the likelihood ratio test statistic comparing model fm1OrthF to model fm2OrthF, we generate data according to the null model using the parameter values from fm1OrthF and fit both models to each simulated data set. By doing this we obtain an empirical distribution of the likelihood ratio test statistic under the null hypothesis, against which the nominal p-values for the simulated LRT statistics can be compared.
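The idea of building an empirical null distribution can be sketched without fitting mixed models. The Python toy below simulates an LRT statistic whose null distribution is known exactly (a test of a normal mean with known variance), mimicking by brute force what simulate.lme does:

```python
import numpy as np
from scipy.stats import chi2

# Empirical null distribution of an LRT statistic: test H0: mu = 0 for
# iid N(mu, 1) data. The LRT statistic is n * ybar^2, which is exactly
# chi^2 with 1 df under the null, so the nominal p-values computed from
# the chi^2_1 reference should look uniform.
rng = np.random.default_rng(3)
n, nsim = 25, 2000
stats = np.empty(nsim)
for k in range(nsim):
    y = rng.standard_normal(n)          # data generated under the null
    stats[k] = n * y.mean() ** 2        # 2 * (loglik_alt - loglik_null)

pvals = chi2.sf(stats, df=1)
assert abs(pvals.mean() - 0.5) < 0.05   # roughly uniform p-values
```

For variance-component tests the reference distribution is not a single chi-squared, which is exactly the situation the simulation-based check is designed to reveal.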

Notice that one of these entries corresponds to the boundary case. This is repeated for nsim cases. Although there are three distinct entries in this row and column, Stram and Lee suggest a 0.5:0.5 mixture of chi-squared distributions as the reference distribution when the alternative model adds a single variance component.
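The suggested mixture reference distribution is simple to compute. A hedged Python sketch; the function name `lrt_pvalues` is hypothetical:

```python
from scipy.stats import chi2

# Stram and Lee's boundary adjustment: when testing whether a single
# variance component is zero, the LRT statistic follows a 50:50 mixture
# of chi^2_0 (a point mass at zero) and chi^2_1, rather than chi^2_1.
def lrt_pvalues(stat):
    naive = chi2.sf(stat, df=1)          # standard chi^2_1 reference
    mixture = 0.5 * chi2.sf(stat, df=1)  # chi^2_0 contributes 0 for stat > 0
    return naive, mixture

naive, adjusted = lrt_pvalues(3.84)
# the adjusted p-value is half the naive one: the naive test is conservative
assert abs(adjusted - naive / 2) < 1e-12
```

For a test of q versus q + 1 random effects the analogous mixture is of chi-squared distributions with q and q + 1 degrees of freedom.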


In each panel the nominal p-values are plotted against the empirical p-values; the null model was simulated repeatedly. According to this adjustment, the reference distribution fits well here, although the adjustment suggested by Stram and Lee is not always this successful. (Figures: plots of the nominal versus empirical p-values for the likelihood ratio test statistic comparing two models for the orthodontic data, and likewise for the Machines data.) One should be aware that these p-values may be conservative.

This is the reference distribution we use to calculate the p-values quoted in the multiargument form of anova. As an example, suppose we compare fm1Stool to a more general model.

In this case the slight anticonservative nature of the reported p-values may not be too alarming. (Figure: plots of the nominal versus empirical p-values for the likelihood ratio test statistic comparing two models for the ergoStool data.)

The important point with regard to the likelihood ratio tests is that there are 15 levels of the Treatment factor and only 60 observations in total; the blocking factor also has 15 levels. (Figure: plots of the nominal versus empirical p-values for the likelihood ratio test statistic comparing two models for the PBIB data.)

The conditional t-tests are included in the output of the summary method applied to lme objects. The intercept, as a parameter, is regarded as being estimated at level 0 because it is outer to all the grouping factors. A term is called inner relative to a grouping factor if its value can change within a given level of the grouping factor.

If a term is inner to all Q grouping factors in a model, it is estimated at the deepest level of grouping. A term is said to be estimated at level i according to the grouping factors to which it is inner and outer; in the case of the conditional F-tests, the denominator degrees of freedom are determined by the grouping level at which the term is estimated. A term is outer to a grouping factor if its value does not change within levels of the grouping factor. Another example is provided by the analysis of the Oats data. This parameterization, which we call the natural parameterization, uses the logarithm of the standard deviations and the generalized logits of the correlations.

Therefore, the natural parameterization cannot be used for optimization. Population-level predictions estimate the marginal expected value of the response; this extends naturally to an arbitrary level of nesting. For example, the BLUPs correspond to the expected values of the random effects in the Laird–Ware formulation. The degrees of freedom for a t-test, or the denominator degrees of freedom for an F-test, depend on whether the factor being considered is inner to the grouping factor (changes within levels of the grouping factor) or outer to the grouping factor (is invariant within levels of the grouping factor).

The simulation results presented in the figure raise a question: are the conclusions from this simulation similar to those from the simulation shown earlier? Note that simulate.lme can be slow; you may wish to set a lower value of nsim if the default number of simulations will take too long. Repeated measures data, longitudinal data, and growth curve data are examples of this general class of grouped data. A common and versatile way of organizing data in S is as data.frame objects.

These are in the form of tables where each row corresponds to an observation and each column corresponds to one of the variables being observed. We extend the data.frame class by attaching a formula that gives some of the variables special roles. In this chapter we describe creating, summarizing, and displaying groupedData objects with a single level of grouping or with multiple levels of grouping. The most important of the special roles is that of a grouping factor that divides the observations into distinct groups.

The formula also designates a response and, when available, a primary covariate. Most often these expressions are simply the name of a variable in the data frame, but they could also be functions of one or more variables.

For example, log(conc) would be a legitimate expression for the response if conc is one of the variables in the data frame. The formula function extracts the formula from a grouped data object. Notice that there is no primary covariate in the Rail data, so we use the constant expression 1 in that position in the formula.

The formula of a grouped data object has the same pattern as the formula used in a call to a trellis graphics function, such as xyplot. This is intentional. Because such a formula is available with the data, the plot method for objects in the groupedData class can produce an informative trellis display from the object alone.

It may, in fact, be best to think of the formula stored with the data as a display formula for the data because it provides a meaningful default graphical display method for the data. The formula function shown above is an example of an extractor function for this class.

It returns some property of the object—the display formula in this case—without requiring the user to be aware of how that property is stored. We provide other extractor functions for each of the components of the display formula.

The getGroups extractor returns the value of the grouping factor. A companion function, getGroupsFormula, returns the formula that is evaluated to produce the grouping factor. The extractors for the other components of the display formula are getResponse and getCovariate. (Figure: heights of 26 boys from Oxford, England, each measured on nine occasions; the ages have been centered and are in an arbitrary unit.) It is safer to use these extractor functions than to inspect the display formula for the object and extract a variable from the object yourself.

For example, suppose we wish to check for balance in the Oxboys data, shown in Figure 3.

Mixed-Effects Methods and Classes for S and S-PLUS

To do this, we must know that the grouping factor for the Oxboys data is named Subject. Because there are exactly nine observations for each subject, the data are balanced with respect to the number of observations. Further checking reveals that there are 16 unique values of the covariate age. The boys are measured at approximately the same ages, but not exactly the same ages.

The isBalanced function in the nlme library can be used to check a groupedData object for balance with respect to the grouping factor(s), or with respect to the groups and the covariate. It is built from calls to getGroups and table like those above. When applied to data with multiple, nested grouping factors, the getGroups extractor takes an optional argument, level. If we extract the groups for multiple levels, the result is returned as a data frame with one column for each level.
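The balance check is essentially a tabulation of group sizes. A rough pandas analogue of isBalanced; the toy data frame is hypothetical, loosely modeled on the Oxboys layout:

```python
import pandas as pd

# Data are balanced with respect to the grouping factor when every
# group has the same number of observations (here, 3 per subject).
df = pd.DataFrame({
    "Subject": ["A"] * 3 + ["B"] * 3 + ["C"] * 3,
    "age": [-1.0, 0.0, 1.0] * 3,
    "height": [140, 142, 145, 150, 153, 155, 138, 139, 141],
})

counts = df.groupby("Subject").size()
balanced = counts.nunique() == 1        # same count in every group
assert balanced and counts.iloc[0] == 3
```

A balance check with respect to the covariate as well would additionally tabulate the (Subject, age) combinations, mirroring the two modes described above.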

Any inner grouping factors are preserved in their original form in this frame rather than being coerced to an ordered factor with distinct levels as above. These are, however, only the default values.

They can be overridden with an explicit model formula. For example, during the course of model building we may wish to change our idea of what constitutes the response, say by transforming from a measure of concentration to the logarithm of the concentration. It is not necessary to change the formula stored with the data when doing this.

If we do decide to make a permanent change in the formula of a groupedData object, we can use the update function to do this. There are several ways that data can be imported into S and S-PLUS; one of the simplest is the read.table function, whose result is a data.frame with two dimensions, rows and columns. A function that creates objects of a given class is called a constructor for that class.

The primary constructor function for a class is often given the same name as the class itself. Thus the default constructor for the groupedData class is the groupedData function. Its required arguments are a formula and a data frame. Optional arguments include labels, where display labels for the response and the primary covariate can be given, and units, where the units of these variables can be given. The default axis labels for data plots are constructed by pasting together components of labels and units.

When the grouping factor has been converted to an ordered factor, the panels in data plots are arranged in the corresponding order. Most of the data plots in this book are ordered in this way, with the order of the panels running from left to right across the rows.

The labels and units arguments are optional. The order of the panels is determined by applying a summary function to the response within each group; the default summary function is max. Conversion of the grouping factor to an ordered factor is done only if the expression for the grouping factor is simply the name of a variable in the data frame.
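The ordering rule can be mimicked outside S. A pandas sketch with toy data loosely modeled on the Rail example; the variable names are assumptions:

```python
import pandas as pd

# Mimic groupedData's panel ordering: order the levels of the grouping
# factor by a summary (here max, the default) of the response.
df = pd.DataFrame({
    "Rail": ["one", "one", "two", "two", "three", "three"],
    "travel": [55, 53, 26, 37, 78, 91],
})

order = df.groupby("Rail")["travel"].max().sort_values().index.tolist()
df["Rail"] = pd.Categorical(df["Rail"], categories=order, ordered=True)
```

With per-group maxima of 37, 55, and 91, the resulting level order is "two", "one", "three"; as in S, the rows themselves are not reordered, only the factor levels.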

We recommend using these arguments because they make it easier to create informative trellis plots. The conversion does not cause the rows of the data frame themselves to be reordered.


In the default ordering described above, the maximum value of the response in each panel will increase across the rows starting from the lower-left panel. Many experiments impose additional structure on the data. A factor that is invariant within the groups determined by the grouping factor is said to be outer to the grouping factor. When outer factors are present they can be used in arranging the plot.

Eight of the rats were given a control diet, and there were four rats each on the two experimental diets. Sex is a characteristic of the subject.

Outer factors can be characteristics of the subject more generally, or experimental factors applied at this level of experimental unit. Grouping the panels by outer factors ensures that groups at the same levels of all outer factors will be plotted in a set of adjacent panels. When the outer argument is used, the panels are determined by the factor or combination of factors given in the outer formula. Reordering of the groups is permitted only within the same level of an outer factor, or within the same combination of levels of outer factors.

The Diet factor is an experimental factor that is outer to the grouping factor Rat. The plot method for the groupedData class allows an optional argument outer that can be either a logical value or a formula. When there is more than one outer factor in the data the arrangement of the panels depends on the order in which the factors are listed in the outer formula for the plot.

Points in the same group are joined by lines when each panel is a scatter plot of the response versus a continuous covariate. When specifying outer factors in the constructor or in a plot call, we should ensure that they are indeed constant within each level of the grouping factor.

Describing the Structure of Grouped Data

Group summaries are returned as a data frame with one row for each level of the grouping factor.

In this example the grouping factor is Plot and the outer factors are Variety and Year. The summary returns the values of only those variables that are invariant within each level of the grouping factor. If there is no primary covariate, the data will be plotted as a dot plot in which points in the same group are rendered with the same symbol and color.

An example of an inner factor is provided by the phenylbiguanide (PBG) data, from a cross-over trial in which each experimental animal was exposed to increasing doses of PBG under two treatments. Each rabbit was exposed to increasing doses of PBG on each occasion. An outer factor is a characteristic of the experimental unit, or an indicator of a treatment applied to the entire unit; notice that the grouping factor itself plays neither role.

If such an inner factor is distinct from the primary covariate, it can be displayed by distinguishing the curves within a panel. The change in blood pressure was measured at each dose on each occasion. In both these data sets there is a continuous response, and the choice of one structure or the other is mostly an indication of how we think the inner factor should be modeled. It can be convenient to represent a balanced experiment in tabular form.

The PBG data is similar in structure to the Pixel data; in fact, the two structures are quite similar, and these default choices can be overridden when constructing models.

We will discuss methods for modeling this in Chapter 7. In the case of the Pixel data we used nested grouping factors to represent this structure.

In the case of the PBG data we used a single grouping factor with an inner treatment factor. The balancedGrouped function converts data from a table like this to a groupedData object.

Often the data from a balanced experiment are provided in the form of a table in which each row contains the data from one group. The dimnames of the matrix give the unique levels of the grouping factor and of the covariate. If the column names can all be converted to numeric values, the covariate is stored as a numeric variable; the process of checking for numeric values in these names is what generates the warning message in the previous example.

This table provides a compact representation of balanced data. The data values are extracted from the matrix itself, and the covariate and grouping values from its dimnames attribute. If the covariate values cannot be interpreted as numbers, the covariate is left as a factor. It is always a good idea to check that the variables in a groupedData object have the expected classes. Later manipulations of the object, or plots created from it, do not rely on its having been generated from balanced data.
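The wide-to-long conversion performed by balancedGrouped has a direct pandas analogue. In the toy table below (hypothetical values), the column handling mirrors the numeric-name check described above:

```python
import pandas as pd

# balancedGrouped-style conversion: a table with one row per group and
# one column per covariate value becomes a long data frame. Column
# names that look numeric are converted to a numeric covariate.
wide = pd.DataFrame(
    [[1.2, 1.5, 1.9], [0.8, 1.1, 1.6]],
    index=pd.Index(["g1", "g2"], name="group"),
    columns=["0", "1", "2"],
)

long = wide.reset_index().melt(id_vars="group",
                               var_name="time", value_name="resp")
long["time"] = pd.to_numeric(long["time"])

assert len(long) == 6                     # 2 groups x 3 covariate values
```

If `pd.to_numeric` failed here, the natural fallback would be to leave `time` as a categorical variable, just as balancedGrouped leaves a non-numeric covariate as a factor.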

Unless this is detected and the numeric variable is explicitly converted to a factor, it will be treated as a continuous covariate in the default data plot for a groupedData object. The defaults chosen in the trellis graphics library and in the plot method for groupedData objects will usually provide an informative and visually appealing plot, because such displays are effective at illustrating both within-group and between-group behavior. Many methods for the analysis of longitudinal data or repeated measurements depend on having balanced data.

For a full discussion of the trellis graphics parameters and controls see Becker, Cleveland, and Shyu, or the online documentation for the trellis library. Although there is a certain amount of redundancy in storing multiple copies of the same covariate or grouping factor values, this long format is the one the modeling functions work with.

The analysis techniques described in this book do not require balanced data.

It is a common practice to label the levels of a factor like the Type factor as 1, 2, and so on. The matrix of response values is converted to a vector, and both the primary covariate and the grouping factor are expanded to have the same length as the response vector.

In this section we describe some of the trellis parameters that are helpful in enhancing plots of grouped data.

Trellis displays also allow comparisons between groups. The optional labels and units arguments can be used with balancedGrouped just as in the groupedData constructor. As seen in the example, the plants themselves come from one of two types and have been subjected to one of two treatments. If the primary covariate is a factor, the panels can be separated into blocks; we can do this by specifying a list as the optional between argument.

For some combinations of grass type and treatment the panels are spread across more than one row in the default layout. An alternative is an arrangement of the CO2 data in two rows of six panels, with a gap between the third and fourth columns. For numeric covariates, the aspect ratio of each panel is chosen automatically.

Generally a small gap is sufficient. Because the aspect ratio chosen by the banking rule creates panels that are taller than they are wide, some care is needed when forming a between argument for the rows. If you wish to override this choice of aspect ratio, you can supply an explicit value. We have found that this rule produces appealing and informative aspect ratios in a wide variety of cases.

If there is a large number of groups, the plot may need to span several pages. In a dot plot the response is on the horizontal axis; otherwise the horizontal axis in each panel is the primary covariate. A value of aspect greater than 1 produces panels that are taller than they are wide. The gaps in the between argument are given in units of character heights. It would be better to keep combinations of treatment and grass type on the same row so we can more easily compare treatments and grass types.

This does not always create a good arrangement for comparing patterns across outer factors. In these cases a third component can be added to the layout argument, causing the plot to be spread over several pages. (Figure: carbon dioxide uptake versus ambient CO2 concentration for Echinochloa crus-galli plants; half the plants of each type were chilled overnight before the measurements were taken.) There were four groves of trees.

One plot shows a default layout of the panels; another shows an alternative layout. These data are described in more detail in Appendix A.

The axis annotations can be at powers of 10 or at powers of 2. (Figure: optical density versus DNase concentration.) You can consider skipping this section unless you want to modify the way the data are presented within each panel. This change is incorporated in the corresponding figure. If no primary covariate is available, a dot plot is used; when the primary covariate is numeric, a scatter plot is used. The presentation of the data within each panel is controlled by a panel function.

The actual symbol that is drawn is determined by the trellis device that is active when the plot is displayed; it is usually an open circle. Some care needs to be taken if you override the default panel function with an explicit panel argument to the plot call. In the DNase assay data (optical density versus DNase concentration for eleven runs of an assay), the concentration is shown on a logarithmic scale.

The last four lines of the panel function add a line through the data values. Another optional argument, collapse, takes a summary function; this summary function should take a numeric vector as its argument and return a single numeric value. Using collapse can help to reduce clutter in a plot, which is a common occurrence. The default action is to preserve the separate curves, but it is often more informative to examine the mean curve for each wafer and the standard deviation about this mean curve.

Displaying the data at the Wafer level with separate curves for each Site within each Wafer; the default summary is the mean function.
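The effect of a collapse-style summary can be mimicked with a groupby. A pandas sketch with toy Wafer/Site data (the values are hypothetical):

```python
import pandas as pd

# Collapse the curves for the individual Sites within each Wafer into a
# single summary curve (the default summary is the mean) at each value
# of the covariate.
df = pd.DataFrame({
    "Wafer": [1, 1, 1, 1, 2, 2, 2, 2],
    "Site":  [1, 2, 1, 2, 1, 2, 1, 2],
    "day":   [1, 1, 2, 2, 1, 1, 2, 2],
    "y":     [10.0, 12.0, 11.0, 13.0, 9.0, 9.5, 10.0, 11.0],
})

collapsed = df.groupby(["Wafer", "day"], as_index=False)["y"].mean()
```

Each (Wafer, day) pair now contributes one point, so plotting `collapsed` gives one mean curve per Wafer; passing a different summary (e.g. the median) in place of mean mirrors supplying a non-default collapse function.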

(Figure: mean pixel intensity of lymph nodes in the axillary region versus time, by Side within Dog.) The model-building approach we propose is described and illustrated in detail in the context of LME models in Chapter 4. Extensions of the basic LME model to include variance functions and correlation structures for the within-group errors are considered in Chapter 5.

Chapter 6 provides an overview of NLME models and some of the analysis tools available for them in nlme. Even though the material covered in the book is, for the most part, self-contained, we assume that the reader has some familiarity with linear regression models, say at the level of Draper and Smith. For those who are new to, or less familiar with, S, we suggest using in conjunction with this book the by-now classic reference Venables and Ripley, which provides an overview of S and an introduction to a wide variety of statistical models to be used with S.

Typographical Conventions: The S language objects and commands referenced throughout the book are printed in a monospaced typewriter font like this, while the S classes are printed in a sans-serif font like this. To save space, some of the S output has been edited.

Nesting of one grouping factor within another is denoted by the / operator in the grouping formula, which takes several optional arguments. In the case of ophthalmic tests, for example, data are grouped by eye within patient.