lavaan: An R Package for Structural Equation Modeling

Structural equation modeling (SEM) is a vast field, widely used by applied researchers in the social and behavioral sciences. Over the years, many software packages for structural equation modeling have been developed, both free and commercial. However, perhaps the best state-of-the-art software packages in this field are still closed-source and/or commercial. The R package lavaan has been developed to provide applied researchers, teachers, and statisticians a free, fully open-source, but commercial-quality package for latent variable modeling. This paper explains the aims behind the development of the package, gives an overview of its most important features, and provides some examples to illustrate how lavaan works in practice.


Introduction
This paper describes package lavaan, a package for structural equation modeling implemented in the R system for statistical computing (R Development Core Team 2012). The package is available from the Comprehensive R Archive Network (CRAN) at http://CRAN.R-project.org/package=lavaan and supported by the website http://lavaan.org/. lavaan is an acronym for latent variable analysis, and its name reveals the long-term goal: to provide a collection of tools that can be used to explore, estimate, and understand a wide family of latent variable models, including factor analysis, structural equation, longitudinal, multilevel, latent class, item response, and missing data models (Lee 2007; Muthén 2002).
However, the development of lavaan has only begun and much remains to be done to reach this ambitious goal. To date, the development of lavaan has focused on structural equation modeling (SEM) with continuous observed variables (Bollen 1989), which is the focus of this paper. Structural equation models encompass a wide range of multivariate statistical techniques. The history of the field traces back to three different traditions: (1) path analysis, originally developed by the geneticist Sewall Wright (Wright 1921) and later picked up in sociology (Duncan 1966), (2) simultaneous-equation models, as developed in economics (Haavelmo 1943; Koopmans 1945), and (3) factor analysis, from psychology (Spearman 1904; Lawley 1940; Anderson and Rubin 1956). The three traditions were ultimately merged in the early 1970s, and although many different researchers made significant contributions (Jöreskog 1970; Hauser and Goldberger 1971; Zellner 1970; Keesling 1973; Wiley 1973; Browne 1974), it was the work of Karl Jöreskog (Jöreskog 1973) that came to dominate the field, not least because he (together with Dag Sörbom) developed a computer program called LISREL (for LInear Structural RELations), providing many applied researchers access to this new and exciting field of structural equation modeling. From 1974 onwards, LISREL was distributed commercially by Scientific Software International. In the following decades, the wide availability of LISREL initiated a methodological revolution in the social and behavioral sciences. Today, almost four decades later, LISREL 8 (Jöreskog and Sörbom 1997) is still one of the most widely used software packages for structural equation modeling.
In the years after the birth of LISREL, many technical advances were made and several new software packages for structural equation modeling were developed. Some of the more popular ones that are still in wide use today are EQS (Bentler 2004), AMOS (Arbuckle 2011) and Mplus (Muthén and Muthén 2010), all of which are commercial. The few non-commercial SEM programs outside the R environment are Mx (Neale, Boker, Xie, and Maes 2003) (free, but closed-source), and gllamm, which is implemented in Stata (Rabe-Hesketh, Skrondal, and Pickles 2004).
Within the R environment, there are two approaches to estimate structural equation models. The first approach is to connect R with external commercial SEM programs. This is often useful in simulation studies where fitting a model with SEM software is one part of the simulation pipeline (see, for example, van de Schoot, Hoijtink, and Deković 2010). During one run of the simulation, syntax is written to a file in a format that can be read by the external SEM program (say, Mplus or EQS); the model is fitted by the external SEM program and the resulting output is written to a file that needs to be parsed manually to extract the relevant information for the study at hand. Depending on the SEM program, the connection protocols can be tedious to set up. Fortunately, two R packages have been developed to ease this process: MplusAutomation (Hallquist 2012) and REQS (Mair, Wu, and Bentler 2010), to communicate with the Mplus and EQS programs, respectively. The second approach is to use a dedicated R package for structural equation modeling. At the time of writing, apart from lavaan, there are two alternative packages available. The sem package, developed by John Fox, has been around since 2001 (Fox, Nie, and Byrnes 2012; Fox 2006) and for a long time, it was the only package for SEM in the R environment. The second package is OpenMx (Boker, Neale, Maes, Wilde, Spiegel, Brick, Spies, Estabrook, Kenny, Bates, Mehta, and Fox 2011), available from http://openmx.psyc.virginia.edu/. As the name of the package suggests, OpenMx is a complete rewrite of the Mx program, consisting of three parts: a front end in R, a back end written in C, and a third-party commercial optimizer (NPSOL). All parts of OpenMx are open-source, except of course the NPSOL optimizer, which is closed-source.
The rest of the paper is organized as follows. First, I describe why I began developing lavaan and how my initial objectives impacted the software design. Next, I illustrate the most characteristic feature of lavaan: the 'lavaan model syntax'. In the sections that follow, I present two well-known examples from the SEM literature (a CFA example, and a SEM example) to illustrate the use of lavaan in practice. Next, I discuss the use of multiple groups, and in the last section before the conclusion, I provide a brief summary of features included in lavaan that may be of interest to applied researchers.
As described above, many SEM software packages are available, both free and commercial, including a couple of packages that run in the R environment. Why then is there a need for yet another SEM package? The answer to this question is threefold:
1. lavaan aims to appeal to the large group of applied researchers who need SEM software to answer their substantive questions. Many applied researchers have not previously used R and are accustomed to commercial SEM programs. Applied researchers often value software that is intuitive and rich with modeling features, and lavaan tries to fulfill both of these objectives.
2. lavaan aims to appeal to those who teach SEM classes or SEM workshops; ideally, teachers should have access to an easy-to-use, but complete, SEM program that is inexpensive to install in a computer classroom.
3. lavaan aims to appeal to statisticians working in the field of SEM. To implement a new methodological idea, it is advantageous to have access to an open-source SEM program that enables direct access to the SEM code.
The first aim is arguably the most difficult one to achieve. If we wish to convince users of commercial SEM programs to use lavaan, there must be compelling reasons to switch. That lavaan is free is often irrelevant. What matters most to many applied researchers is that (1) the software is easy and intuitive to use, (2) the software has all the features they want, and (3) the results of lavaan are very close, if not identical, to those reported by their current commercial program. To ensure that the software is easy and intuitive to use, I developed the 'lavaan model syntax' which provides a concise approach to fitting structural equation models. Two features that many applied researchers often request are support for non-normal (but continuous) data, and handling of missing data. Both features have received careful attention in lavaan. And lastly, to ensure that the results reported by lavaan are comparable to the output of commercial programs, all fitting functions in lavaan contain a mimic option. If mimic = "Mplus", lavaan makes an effort to produce output that resembles the output of Mplus, both numerically and visually. If mimic = "EQS", lavaan produces output that approaches the output of EQS, at least numerically (not visually). In future releases of lavaan, we plan to add mimic = "LISREL" and mimic = "AMOS" (but users of those programs can currently use mimic = "EQS" as a proxy for those).
The second aim targets those of us that teach SEM techniques in classes or workshops. For teachers, the fact that lavaan is free is important. If the software is free, there is no longer a need to install limited 'student-versions' of the commercial programs to accompany the SEM course. Of course, teachers will also appreciate an easy and intuitive user experience, so that they can spend more time discussing and interpreting the substantive results of a SEM analysis, instead of expending time explaining the awkward model syntax of a specific program.
Finally, the mimic option makes a smooth transition possible from lavaan to one of the major commercial programs, and back. Therefore, students who received initial instruction in SEM with lavaan should have little difficulty using other (commercial) SEM programs in the future.
The third aim targets professional statisticians working in the field of structural equation modeling. For too long, this field has been dominated by closed-source commercial software. In practice, this meant that many of the technical contributions in the field were realized by those research groups (and their collaborators) that had access to the source code of one of the commercial programs. They could use the infrastructure that was already there to implement and evaluate their newest ideas. Outsiders were forced to write their own software. Some of them, faced with the enormous time-investment that is needed for writing SEM software from scratch, may have given up, and changed their research objectives altogether. Indeed, it seems unfortunate that new developments in this field have potentially been hindered by the lack of open-source software that researchers can use, and reuse, to bring computational and statistical advances to the field. This is in sharp contrast to other fields such as statistical genetics or neuroimaging, where nearly exponential progress has been made in part because both fields rely heavily on, and are driven by, open-source packages. Therefore, I chose to keep lavaan fully open-source, without any dependencies on commercial and/or closed-source components. In addition, the design of lavaan is extremely modular. Adding a new function for computing standard errors, for example, would require just two steps: (1) adding the new function to the source file containing all the other functions for computing various types of standard errors, and (2) adding an option to the se argument in the fitting functions of lavaan, allowing the user to request this new type of standard errors.

From model to syntax
Path diagrams are often a starting point for applied researchers seeking to fit a SEM model (see Figure 2 for an example). Informally, a path diagram is a schematic drawing that represents a concise overview of the model the researcher aims to fit. It includes all the relevant observed variables (typically represented by square boxes) and the latent variables (represented by circles), with arrows that illustrate the (hypothesized) relationships among these variables. A direct effect of one variable on another is represented by a single-headed arrow, while (unexplained) correlations between variables are represented by double-headed arrows. The main problem for the applied researcher is typically to convert this diagram into the appropriate input expected by the SEM program. In addition, the researcher has to take extra care to ensure the model is identifiable and estimable.

Specifying models in commercial SEM programs
In the early days of SEM, the only way to specify a model was by setting up the model matrices directly. This was the case for LISREL, and many generations of SEM users (including the author of this paper) have come to associate the practice of SEM modeling with setting up a LISREL syntax file. This required a good grasp of the underlying theory, and, for some, an incentive to review the Greek alphabet once more. For many first-time users, the translation of their diagram directly to LISREL syntax was an unpleasant experience, and it added to the still widespread belief that SEM modeling is something that should be left to experts, well versed in matrix algebra (and the Greek alphabet).
In the mid-1980s, EQS was the first program to offer a matrix-free model specification. The EQS model syntax distinguishes among four fundamental variable types: (1) measured variables, (2) latent variables or factors, (3) measured-variable residuals or errors, and (4) latent-variable residuals or disturbances. The four types are labeled V, F, E and D, respectively. Rather than providing a full model matrix specification, users needed only to identify these four types of variables and their relations. For many applied researchers, this was a giant leap forward, and the EQS program quickly became successful. Soon after, this regression-oriented approach was adopted by many other programs (including LISREL, which introduced the SIMPLIS language with LISREL 8).
In the 1990s, the rise of operating systems with a graphical user interface led to a new evolution in the SEM world. The AMOS program, originally developed by James L. Arbuckle, offered a comprehensive graphical interface that allowed users to specify their model by drawing its path diagram. There is no doubt that this approach was very appealing to many SEM users, and again, many commercial SEM packages (including EQS and LISREL) added similar capabilities to their programs.
But a pure graphical approach is not without its limitations. Sometimes, it can be very tedious to draw each and every element of a path diagram, especially for large models. In addition, many (advanced) features do not translate easily in a graphical environment. For example, how do you specify nonlinear inequality constraints without relying on additional syntax? Although a graphical interface may be excellent as a teaching tool, or as an entry point for first-time users, an accessible text-based syntax may ultimately be more convenient. This is the approach used by Mplus. In the Mplus program, no graphical interface is available to specify the model, yet many models can be specified in a very concise and compact way. Only the core measurement and structural parameters of a model need to be specified. For example, in Mplus, there is no need to list all the residual variances that are part of the model. Mplus will add these parameters automatically, keeping the syntax short and easy to understand.

Specifying models in lavaan
In the lavaan package, models are specified by means of a powerful, easy-to-use text-based syntax describing the model, referred to as the 'lavaan model syntax'. Consider a simple regression model with a continuous dependent variable y, and four independent variables x1, x2, x3 and x4. The usual regression model can be written as follows:

yi = β0 + β1*x1i + β2*x2i + β3*x3i + β4*x4i + εi

where β0 is called the intercept, β1 to β4 are the regression coefficients for each of the four variables, and εi is the residual error for observation i. One of the attractive features of the R environment is the compact way we can express a regression formula like the one above:

y ~ x1 + x2 + x3 + x4

In this formula, the tilde sign ('~') is the regression operator. On the left-hand side of the operator, we have the dependent variable (y), and on the right-hand side, we have the independent variables, separated by the '+' operator. Note that the intercept is not explicitly included in the formula. Nor is the residual error term. But when this model is fitted (say, using the lm() function), both the intercept and the variance of the residual error will be estimated. The underlying logic, of course, is that an intercept and residual error term are (almost) always part of a (linear) regression model, and there is no need to mention them in the regression formula. Only the structural part (the dependent variable, and the independent variables) needs to be specified, and the lm() function takes care of the rest.
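As a minimal sketch of this behavior (the data here are simulated, so the variable names y and x1 to x4 are just placeholders mirroring the example above):

```r
# simulate a small dataset; names mirror the regression example
set.seed(1234)
Dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100),
                  x3 = rnorm(100), x4 = rnorm(100))
Dat$y <- 1 + 0.5 * Dat$x1 - 0.3 * Dat$x2 + rnorm(100)

# the intercept and residual error variance are estimated
# automatically by lm(), even though the formula omits them
fit <- lm(y ~ x1 + x2 + x3 + x4, data = Dat)
coef(fit)  # intercept plus the four regression coefficients
```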
One way to look at SEM models is that they are simply an extension of linear regression. A first extension is that you can have several regression equations at the same time. A second extension is that a variable that is an independent (exogenous) variable in one equation can be a dependent (endogenous) variable in another equation. It seems natural to specify these regression equations using the same syntax as used for a single equation in R; we simply have more than one of them. For example, we could have a set of three regression equations:

y1 ~ x1 + x2 + x3 + x4
y2 ~ x5 + x6 + x7 + x8
y3 ~ y1 + y2

This is the approach taken by lavaan. Multiple regression equations are simply a set of regression formulas, using the typical syntax of an R formula.
A third extension of SEM models is that they include continuous latent variables. In lavaan, any regression formula can contain latent variables, both as dependent and as independent variables. For example, in the syntax shown below, the variables starting with an 'f' are latent variables:

y1 + y2 ~ f1 + f2 + x1 + x2
f1 ~ f2 + f3
f2 ~ f3 + x1 + x2

This part of the model syntax would correspond with the 'structural part' of a SEM model. To describe the 'measurement part' of the model, we need to specify the (observed or latent) indicators for each of the latent variables. In lavaan, this is done with the special operator '=~', which can be read as is manifested by. The left-hand side of this formula contains the name of the latent variable. The right-hand side contains the indicators of this latent variable, separated by the '+' operator. For example:

f1 =~ item1 + item2 + item3
f2 =~ item4 + item5 + item6 + item7
f3 =~ f1 + f2

In this example, the variables item1 to item7 are observed variables. Therefore, the latent variables f1 and f2 are first-order factors. The latent variable f3 is a second-order factor, since all of its indicators are latent variables themselves.
To specify (residual) variances and covariances in the model syntax, lavaan provides the '~~' operator. If the variable names on the left-hand side and the right-hand side are the same, it is a variance. If the names differ, it is a covariance. The distinction between residual (co)variances and non-residual (co)variances is made automatically. For example:

y1 ~~ y1   # variance
y1 ~~ y2   # covariance
f1 ~~ f2   # covariance

Finally, intercepts for observed and latent variables are simple regression formulas (using the '~' operator) with only an intercept (explicitly denoted by the number '1') as predictor:

item1 ~ 1   # intercept of an observed variable
f1 ~ 1      # intercept of a latent variable
Using these four formula types, a large variety of latent variable models can be described. For reference, we summarize the four formula types in the top panel of Table 1.
A typical model syntax describing a SEM model will contain multiple formula types. In lavaan, to glue them together, they must be specified as a literal string. In the R environment, this can be done by enclosing the formula expressions in (single) quotes. For example:

myModel <- '# regressions
            y1 + y2 ~ f1 + f2 + x1 + x2
            f1 ~ f2 + f3
            f2 ~ f3 + x1 + x2

            # latent variable definitions
            f1 =~ y1 + y2 + y3
            f2 =~ y4 + y5 + y6
            f3 =~ y7 + y8 + y9 + y10

            # variances and covariances
            y1 ~~ y1
            y1 ~~ y2
            f1 ~~ f2

            # intercepts
            y1 ~ 1
            f1 ~ 1'

This piece of code will produce a model syntax object called myModel that can be used later when calling a function that estimates this model given a dataset, and it illustrates several features of the lavaan model syntax. Formulas can be split over multiple lines, and you can use comments (starting with the '#' character) and blank lines within the single quotes to improve the readability of the model syntax. The order in which the formulas are specified does not matter. Therefore, you can use the latent variables in the regression formulas even before they are defined by the '=~' operator. And finally, since this model syntax is nothing more than a literal string, you can type the syntax in a separate text file and use a function like readLines() to read it in. Alternatively, the text processing infrastructure of R may be used to generate the syntax for a variety of models, perhaps when running a large simulation study.

A first example: Confirmatory factor analysis
The lavaan package contains a built-in dataset called HolzingerSwineford1939. We therefore start with loading the lavaan package:

R> library("lavaan")
The Holzinger & Swineford 1939 dataset is a 'classic' dataset that has been used in many papers and books on structural equation modeling, including some manuals of commercial SEM software packages. The data consist of mental ability test scores of seventh- and eighth-grade children from two different schools (Pasteur and Grant-White). In our version of the dataset, only 9 out of the original 26 tests are included. A CFA model that is often proposed for these 9 variables consists of three correlated latent variables (or factors), each with three indicators: a visual factor measured by 3 variables (x1, x2 and x3), a textual factor measured by 3 variables (x4, x5 and x6), and a speed factor measured by 3 variables (x7, x8 and x9). In what follows, we will refer to this 3-factor model as the 'H&S model', graphically represented in Figure 1. Note that the path diagram in the figure is simplified: it does not indicate the residual variances of the observed variables or the variances of the exogenous latent variables. Still, it captures the essence of the model. Before discussing the lavaan model syntax for this model, it is worthwhile first to identify the free parameters in this model. There are three latent variables (factors) in this model, each with three indicators, resulting in nine factor loadings that need to be estimated. There are also three covariances among the latent variables: another three parameters. These 12 parameters are represented in the path diagram as single-headed and double-headed arrows, respectively. In addition, however, we need to estimate the residual variances of the nine observed variables and the variances of the three latent variables, resulting in 12 additional free parameters. In total we have 24 parameters. But the model is not yet identified, because we need to set the metric of the latent variables.
There are typically two ways to do this: (1) for each latent variable, fix the factor loading of one of the indicators (typically the first) to a constant (conventionally, 1.0), or (2) standardize the variances of the latent variables. Either way, we fix three of these parameters, and 21 parameters remain free. Table 2, produced by the parTable() method, contains an overview of all the relevant parameters for this model, including the three fixed factor loadings. Each row in the table corresponds to a single parameter. The 'lhs', 'op' and 'rhs' columns uniquely define the parameters of the model. All parameters with the '=~' operator are factor loadings, whereas all parameters with the '~~' operator are variances or covariances. The nonzero elements in the 'free' column are the free parameters of the model. The zero elements in the 'free' column correspond to fixed parameters, whose value is found in the 'ustart' column. The meaning of the 'user' column will be explained below.
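Such a parameter table can also be generated directly from a model syntax, without fitting the model. A sketch using lavaan's lavaanify() function (the model syntax string here anticipates the H&S specification discussed in the next section):

```r
library(lavaan)

# the H&S measurement model: three factors, three indicators each
HS.model <- 'visual  =~ x1 + x2 + x3
             textual =~ x4 + x5 + x6
             speed   =~ x7 + x8 + x9'

# lavaanify() expands the syntax into the full parameter table,
# including the automatically added variances and covariances
partable <- lavaanify(HS.model, auto = TRUE, model.type = "cfa")
partable[, c("id", "lhs", "op", "rhs", "user", "free", "ustart")]
```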

Specifying a model using the lavaan model syntax
There are three approaches in lavaan to specify a model. In the first approach, a minimal description of the model is given by the user and the remaining elements are added automatically by the program. This 'user-friendly' approach is implemented in the fitting functions cfa() and sem(). In the second approach, a complete explication of all model parameters must be provided by the user; nothing is added automatically. This is the 'power-user' approach, implemented in the function lavaan(). Finally, in a third approach, the minimalist and complete approaches are blended by providing an incomplete description of the model in the model syntax, but adding selected groups of parameters using the auto.* arguments of the lavaan() function. We illustrate and discuss each of these approaches in turn.
Method 1: Using the cfa() and sem() functions

In the first approach, the idea is that the model syntax provided by the user should be as concise and intelligible as possible. To accomplish this, typically only the latent variables (using the '=~' operator) and regressions (using the '~' operator) are included in the model syntax. The other model parameters (for this model: the residual variances of the observed variables, the variances of the factors, and the covariances among the factors) are added automatically. Since the H&S example contains three latent variables, but no regressions, the minimalist syntax is very short:

R> HS.model <- 'visual  =~ x1 + x2 + x3
+               textual =~ x4 + x5 + x6
+               speed   =~ x7 + x8 + x9'

We can now fit the model as follows:

R> fit <- cfa(HS.model, data = HolzingerSwineford1939)

The function cfa() is a dedicated function for fitting confirmatory factor analysis (CFA) models. The first argument is the object containing the lavaan model syntax. The second argument is the dataset that contains the observed variables. The 'user' column in Table 2 shows which parameters were explicitly contained in the user-specified model syntax (= 1), and which parameters were added by the cfa() function (= 0). If a model has been fitted, it is always possible (and highly informative) to inspect this parameter table by using the following command:

parTable(fit)
When using the cfa() (or sem()) function, several sets of parameters are included by default. A complete list of these parameter sets is provided in the top panel of Table 3. In addition, several steps are taken in an attempt to fulfill the minimum requirements for an identifiable model. These steps are listed in the bottom panel of Table 3. In our example, only the first action (fixing the factor loadings of the first indicator) is used. The second one (auto.fix.single) is only needed if the model contains a latent variable that is manifested by a single indicator. The third and fourth actions (int.ov.free and int.lv.free, respectively) are only needed if a mean structure is added to the model.
Before we move on to the next method, it is important to stress that all of these 'automatic' actions can be overridden. The model syntax always has precedence over the automatically generated actions. If, for example, one wishes not to fix the factor loadings of the first indicator, but instead to fix the variances of the latent variables, the model syntax would be adapted as follows:

R> HS.model.bis <- 'visual  =~ NA*x1 + x2 + x3
+                   textual =~ NA*x4 + x5 + x6
+                   speed   =~ NA*x7 + x8 + x9
+                   visual  ~~ 1*visual
+                   textual ~~ 1*textual
+                   speed   ~~ 1*speed'

As illustrated above, model parameters are fixed by pre-multiplying them with a numeric value, and otherwise fixed parameters are freed by pre-multiplying them with 'NA'. The model syntax above overrides the default behavior of fixing the first factor loading to one and freely estimating the factor variances. In practice, however, a much more convenient way to use this parameterization is to keep the original syntax, but to add the std.lv = TRUE argument to the cfa() function call:

R> fit <- cfa(HS.model, data = HolzingerSwineford1939, std.lv = TRUE)

Method 2: Using the lavaan() function

In many situations, using the concise model syntax in combination with the cfa() and sem() functions is extremely convenient, particularly for many conventional models. But sometimes, these automatic actions may get in the way, especially when non-standard models need to be specified. For these situations, users may prefer to use the lavaan() function instead.

Method 3: Using the lavaan() function with the auto.* arguments

When using the lavaan() function, the user has full control, but the model syntax may become long and contain many formulas that could easily be added automatically.
To compromise between a complete model specification using lavaan syntax and the automatic addition of certain parameters, the lavaan() function provides several optional arguments that can be used to add a particular set of parameters to the model, or to fix a particular set of parameters (see Table 3). For example, in the model syntax below, the first factor loadings are explicitly fixed to one, and the covariances among the factors are added manually. It would be more convenient and concise, however, to omit the residual variances and factor variances from the model syntax.
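A sketch of this mixed approach (the object name HS.model.mix is illustrative): the first factor loadings are fixed manually and the factor covariances are listed explicitly, while the auto.var argument of lavaan() adds all residual and factor variances automatically:

```r
library(lavaan)

# 'mixed' syntax: first loadings fixed to one, factor covariances
# listed explicitly, all variances omitted from the syntax
HS.model.mix <- 'visual  =~ 1*x1 + x2 + x3
                 textual =~ 1*x4 + x5 + x6
                 speed   =~ 1*x7 + x8 + x9
                 visual  ~~ textual + speed
                 textual ~~ speed'

# auto.var = TRUE adds the (residual) variances automatically
fit <- lavaan(HS.model.mix, data = HolzingerSwineford1939,
              auto.var = TRUE)
```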

Examining the results
All three methods described above fit the same model. The cfa(), sem() and lavaan() fitting functions all return an object of class "lavaan", for which several methods are available to examine model fit statistics and parameter estimates. Table 4 contains an overview of some of these methods.

The summary() method
Perhaps the most useful method to view results from a SEM fitted with lavaan is summary():

R> summary(fit, fit.measures = TRUE)

The output consists of three sections. The first section (the first 6 lines) contains the package version number, an indication whether the model has converged (and in how many iterations), and the effective number of observations used in the analysis. Next, the model χ2 test statistic, degrees of freedom, and a p value are printed. If fit.measures = TRUE, a second section is printed containing the test statistic of the baseline model (where all observed variables are assumed to be uncorrelated) and several popular fit indices. If maximum likelihood estimation is used, this section will also contain information about the loglikelihood, the AIC, and the BIC. The third section provides an overview of the parameter estimates, including the type of standard errors used and whether the observed or expected information matrix was used to compute standard errors. Then, for each model parameter, the estimate and the standard error are displayed, and if appropriate, a z value based on the Wald test and a corresponding two-sided p value are also shown. To ease the reading of the parameter estimates, they are grouped into three blocks: (1) factor loadings, (2) factor covariances, and (3) (residual) variances of both observed variables and factors.
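Beyond the printed summary, individual fit measures can also be extracted programmatically with lavaan's fitMeasures() function; a short sketch, reusing the H&S model (the selection of indices here is just an illustration):

```r
library(lavaan)

HS.model <- 'visual  =~ x1 + x2 + x3
             textual =~ x4 + x5 + x6
             speed   =~ x7 + x8 + x9'
fit <- cfa(HS.model, data = HolzingerSwineford1939)

# extract selected fit indices as a named numeric vector
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "rmsea"))
```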

The parameterEstimates() method
Although the summary() method provides a nice summary of the model results, it is useful for display only. An alternative is the parameterEstimates() method, which returns the parameter estimates as a data.frame, making the information easily accessible for further processing. By default, the parameterEstimates() method includes the estimates, standard errors, z value, p value, and 95% confidence intervals for all the model parameters.
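Because the result is an ordinary data.frame, it is easy to post-process; a sketch, again using the H&S model:

```r
library(lavaan)

HS.model <- 'visual  =~ x1 + x2 + x3
             textual =~ x4 + x5 + x6
             speed   =~ x7 + x8 + x9'
fit <- cfa(HS.model, data = HolzingerSwineford1939)

# estimates, standard errors, z/p values and confidence intervals
pe <- parameterEstimates(fit)

# keep only the factor loadings (rows with the '=~' operator)
subset(pe, op == "=~")
```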

The modificationIndices() method
If the model fit is not superb, it may be informative to inspect the modification indices (MIs) and their corresponding expected parameter changes (EPCs). In essence, modification indices provide a rough estimate of how much the χ2 test statistic of the model would improve if a particular (currently fixed or constrained) parameter were freed. The expected parameter change is the value this parameter would take if it were included as a free parameter. The modificationIndices() method (or the alias with the shorter name, modindices()) returns a long list of parameters as a data.frame. In the output below, we only show those parameters for which the modification index is 10 or higher. The last three columns contain the standardized EPCs, using the same standardization conventions as for ordinary parameter estimates.
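Since modindices() also returns a data.frame, the filtering described above can be done with ordinary R subsetting; a sketch using the H&S model:

```r
library(lavaan)

HS.model <- 'visual  =~ x1 + x2 + x3
             textual =~ x4 + x5 + x6
             speed   =~ x7 + x8 + x9'
fit <- cfa(HS.model, data = HolzingerSwineford1939)

# modification indices and expected parameter changes
mi <- modindices(fit)

# show only parameters with a modification index of 10 or higher
mi[mi$mi >= 10, ]
```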

A second example: Structural equation modeling
In our second example, we will explore the 'Industrialization and Political Democracy' dataset previously used by Bollen in his 1989 book on structural equation modeling (Bollen 1989), and included with lavaan in the PoliticalDemocracy data.frame. The dataset contains various measures of political democracy and industrialization in developing countries. In the model, three latent variables are defined. The focus of the analysis is on the structural part of the model (i.e., the regressions among the latent variables). A graphical representation of the model is depicted in Figure 2.

Specifying and fitting the model, examining the results
For this example, we will only use the user-friendly sem() function to keep the model syntax as concise as possible. To describe the model, we need three different formula types: latent variables, regression formulas, and (co)variance formulas for the residual covariances among the observed variables. After the model has been fitted, we request a summary with no fit measures, but with standardized parameter estimates.

Parameter labels and simple equality constraints
In lavaan, each parameter has a name, referred to as the 'parameter label'. The naming scheme is automatic and follows a simple set of rules. Each label consists of three components that describe the relevant formula defining the parameter. The first part is the variable name that appears on the left-hand side of the formula operator. The second part is the operator type of the formula, and the third part is the variable on the right-hand side of the operator that corresponds with the parameter. To see the naming mechanism in action, we can use the coef() function, which returns the (estimated) values of the free parameters together with their parameter labels:

R> coef(fit)

Custom labels may be provided by the user in the model syntax, by pre-multiplying a variable name with that label. Consider, for example, the following regression formula:

y ~ b1*x1 + b2*x2 + b3*x3 + b4*x4

Here we have 'named' or 'labeled' the four regression coefficients as b1, b2, b3 and b4. Custom labels are convenient because you can refer to them in other places of the model syntax. In particular, labels can be used to impose equality constraints on certain parameters. If two parameters have the same label, they are considered to be the same, and only one value is estimated for them (i.e., a simple equality constraint). To illustrate this, we will respecify the model syntax for the Political Democracy data. In the original example in Bollen's book, the factor loadings of the dem60 factor are constrained to be equal to the factor loadings of the dem65 factor. This makes sense, since the same construct is measured at two time points. To enforce these equality constraints, we label the factor loadings of the dem60 factor (arbitrarily) as d1, d2, and d3. Note that we do not label the first factor loading, since it is a fixed parameter (equal to 1.0). Next, we use the same labels for the factor loadings of the dem65 factor, effectively imposing three equality constraints.
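A sketch of the relabeled syntax (only the d1-d3 labels are new relative to the model described above; the rest follows Bollen's specification):

```r
library(lavaan)

model.equal <- '
  # measurement model, with equality-constrained loadings
    ind60 =~ x1 + x2 + x3
    dem60 =~ y1 + d1*y2 + d2*y3 + d3*y4
    dem65 =~ y5 + d1*y6 + d2*y7 + d3*y8
  # regressions
    dem60 ~ ind60
    dem65 ~ ind60 + dem60
  # residual covariances
    y1 ~~ y5
    y2 ~~ y4 + y6
    y3 ~~ y7
    y4 ~~ y8
    y6 ~~ y8
'
fit.equal <- sem(model.equal, data = PoliticalDemocracy)

# d1, d2 and d3 each appear once: one value per constrained pair
coef(fit.equal)
```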

Extracting fit measures
The summary() method with the argument fit.measures = TRUE will output a number of fit measures. If fit statistics are needed for further processing, the fitMeasures() method is preferred. The first argument to fitMeasures() is the fitted object and the second argument is a character vector containing the names of the fit measures one wishes to extract. For example, if we only need the CFI and RMSEA values, we can use:

R> fitMeasures(fit, c("cfi", "rmsea"))

  cfi rmsea
0.995 0.035

Omitting the second argument will return all the fit measures computed by lavaan.

R> inspect(fit)

By default, the inspect() method returns a list of the model matrices, showing where the free parameters are located. Many more inspect options are described in the help page for the lavaan class.

Multiple groups
The lavaan package has full support for multiple-groups SEM. To request a multiple-groups analysis, the variable defining group membership in the dataset can be passed to the group argument of the cfa(), sem(), or lavaan() function calls. By default, the same model is fitted in all groups without any equality constraints on the model parameters. In the following example, we fit the H&S model for the two schools (Pasteur and Grant-White).
R> HS.model <- 'visual  =~ x1 + x2 + x3
+               textual =~ x4 + x5 + x6
+               speed   =~ x7 + x8 + x9'
R> fit <- cfa(HS.model, data = HolzingerSwineford1939, group = "school")

The summary() output is rather long and not shown here. Essentially, it shows a set of parameter estimates for the Pasteur group, followed by another set of parameter estimates for the Grant-White group. If we wish to impose equality constraints on model parameters across groups, we can use the group.equal argument. For example, group.equal = c("loadings", "intercepts") will constrain both the factor loadings and the intercepts of the observed variables to be equal across groups. Other options that can be included in the group.equal argument are described in the help pages of the fitting functions. As a short example, we will fit the H&S model for the two schools, but constrain the factor loadings and intercepts to be equal. The anova() function can be used to compare the two model fits.
R> fit.metric <- cfa(HS.model, data = HolzingerSwineford1939,
+                    group = "school", group.equal = c("loadings", "intercepts"))
R> anova(fit, fit.metric)

If the group.equal argument is used to constrain the factor loadings across groups, all factor loadings are affected. If some exceptions are needed, one can use the group.partial argument, which takes a vector of parameter labels specifying which parameters will remain free across groups. Therefore, the combination of the group.equal and group.partial arguments gives the user a flexible mechanism to specify across-group equality constraints.
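A brief illustration of group.partial (the particular parameters freed here, the loading of x2 and the intercept of x7, are chosen arbitrarily for the example):

```r
library(lavaan)

HS.model <- 'visual  =~ x1 + x2 + x3
             textual =~ x4 + x5 + x6
             speed   =~ x7 + x8 + x9'

# loadings and intercepts equal across groups, except the loading of x2
# and the intercept of x7, which remain free in both groups
fit.partial <- cfa(HS.model, data = HolzingerSwineford1939,
                   group = "school",
                   group.equal = c("loadings", "intercepts"),
                   group.partial = c("visual =~ x2", "x7 ~ 1"))
```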

Can lavaan do this? A short feature list
The lavaan package has many features, and we foresee that the feature list will keep growing in the next couple of years. To give the reader a flavor of the current capabilities of lavaan, I will use this section to briefly mention a variety of topics that are often of interest to users of SEM software.

Non-normal and categorical data
The 0.4 version of the lavaan package only supports continuous data. Support for categorical data is expected in the 0.5 release. In the current release, however, I have devoted considerable attention to dealing with non-normal continuous data. In the SEM literature, the topic of dealing with non-normal data is well studied (see Finney and DiStefano 2006, for an overview). Three popular strategies to deal with non-normal data have been implemented in the lavaan package: asymptotically distribution-free estimation; scaled test statistics and robust standard errors; and bootstrapping.

Asymptotically distribution-free (ADF) estimation
The asymptotically distribution-free (ADF) estimator (Browne 1984) makes no assumption of normality. Therefore, variables that are skewed or kurtotic do not distort the ADF-based standard errors and test statistic. The ADF estimator is part of a larger family of estimators called weighted least squares (WLS) estimators. The discrepancy function of these estimators can be written as follows:

F_WLS = (s − σ)' W⁻¹ (s − σ),

where s is a vector of the non-duplicated elements of the sample covariance matrix (S), σ is a vector of the non-duplicated elements of the model-implied covariance matrix (Σ), and W is a weight matrix. The weight matrix used with the ADF estimator is the asymptotic covariance matrix: a matrix of the covariances of the observed sample variances and covariances. In theory, the ADF estimator seems to be a perfect strategy for dealing with non-normal data. Unfortunately, empirical research (e.g., Olsson, Foss, Troye, and Howell 2000) has shown that the ADF method breaks down unless the sample size is huge (e.g., N > 5000).
In lavaan, the estimator can be set by using the estimator argument in one of the fitting functions. The default is maximum likelihood estimation, or estimator = "ML". To switch to the ADF estimator, you can set estimator = "WLS".
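For example (for illustration only; with N = 301 the H&S dataset is far below the sample sizes at which ADF estimation is recommended):

```r
library(lavaan)

HS.model <- 'visual  =~ x1 + x2 + x3
             textual =~ x4 + x5 + x6
             speed   =~ x7 + x8 + x9'

# default: maximum likelihood
fit.ml  <- cfa(HS.model, data = HolzingerSwineford1939)

# switch to the ADF estimator
fit.wls <- cfa(HS.model, data = HolzingerSwineford1939, estimator = "WLS")
```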

Satorra-Bentler scaled test statistic and robust standard errors
An alternative strategy is to use maximum likelihood (ML) for estimating the model parameters, even if the data are known to be non-normal. In this case, the parameter estimates are still consistent (if the model is identified and correctly specified), but the standard errors tend to be too small (by as much as 25-50%), meaning that we may reject the null hypothesis (that a parameter is zero) too often. In addition, the model (χ²) test statistic tends to be too large, meaning that we may reject the model too often.
In the SEM literature, several authors have extended the ML methodology to produce standard errors that are asymptotically correct for arbitrary distributions (with finite fourth-order moments), and where a rescaled test statistic is used for overall model evaluation.
Standard errors of the maximum likelihood estimators are based on the covariance matrix that is obtained by inverting the associated information matrix. Robust standard errors replace this covariance matrix by a sandwich-type covariance matrix, first proposed by Huber (1967) and introduced in the SEM literature by Bentler (1983); Browne (1984); Browne and Arminger (1995) and many others. In lavaan, the se argument can be used to switch between different types of standard errors. Setting se = "robust" will produce robust standard errors based on a sandwich-type covariance matrix.
The best-known method for correcting the model test statistic is the so-called Satorra-Bentler scaled test statistic (Satorra and Bentler 1988, 1994). Their method rescales the value of the ML-based χ² test statistic by an amount that reflects the degree of kurtosis. Several simulation studies have shown that this correction is effective with non-normal data (Chou, Bentler, and Satorra 1991; Curran, West, and Finch 1996), even in small to moderate samples. And because applying the correction often results in a better model fit, it is not surprising that the Satorra-Bentler rescaling method has become popular among SEM users.
In lavaan, the test argument can be used to switch between different test statistics. Setting test = "satorra.bentler" supplements the standard χ² model test with the scaled version. In the output produced by the summary() method, both scaled and unscaled model tests (and corresponding fit indices) are displayed in adjacent columns. Because one typically wants both robust standard errors and a scaled test statistic, specifying estimator = "MLM" fits the model using standard maximum likelihood to estimate the model parameters, but with robust standard errors and a Satorra-Bentler scaled test statistic.
R> fit <- cfa(HS.model, data = HolzingerSwineford1939, missing = "listwise",
+             estimator = "MLM", mimic = "Mplus")
R> summary(fit, estimates = FALSE, fit.measures = TRUE)

If two models are nested, but the model test statistics are scaled, the usual χ² difference test can no longer be used. Instead, a special procedure is needed, known as the scaled χ² difference test (Satorra and Bentler 2001). The anova() function in lavaan will automatically detect this and compute a scaled χ² difference test if appropriate.

Bootstrapping: The naïve bootstrap and the Bollen-Stine bootstrap
The third strategy for dealing with non-normal data is bootstrapping. For standard errors, we can use the standard nonparametric bootstrap to obtain bootstrap standard errors. However, to bootstrap the test statistic (and its p value), the standard (naïve) bootstrap is incorrect because it reflects not only non-normality and sampling variability, but also model misfit. Therefore, the original sample must first be transformed so that the sample covariance matrix corresponds with the model-implied covariance. In the SEM literature, this model-based bootstrap procedure is known as the Bollen-Stine bootstrap (Bollen and Stine 1993).
In lavaan, bootstrap standard errors can be obtained by setting se = "bootstrap". In this case, the confidence intervals produced by the parameterEstimates() method will be bootstrap-based confidence intervals. If test = "bootstrap" or test = "bollen.stine", the data are first transformed to perform a model-based 'Bollen-Stine' bootstrap. The bootstrap standard errors are also based on these model-based bootstrap draws, and the standard p value of the χ² test statistic is supplemented with a bootstrap probability value, obtained by computing the proportion of test statistics from the bootstrap samples that exceed the value of the test statistic from the original (parent) sample.
By default, lavaan generates R = 1000 bootstrap draws, but this number can be changed by setting the bootstrap argument. It may be informative to set verbose = TRUE to monitor the progress of bootstrapping.
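A minimal sketch combining these options (the number of draws is set to 50 here only to keep the illustration fast; in practice one would use the default of 1000 or more):

```r
library(lavaan)

HS.model <- 'visual  =~ x1 + x2 + x3
             textual =~ x4 + x5 + x6
             speed   =~ x7 + x8 + x9'

# bootstrap standard errors and a Bollen-Stine bootstrap p value
fit.boot <- cfa(HS.model, data = HolzingerSwineford1939,
                se = "bootstrap", test = "bollen.stine", bootstrap = 50)

# the confidence intervals are now bootstrap-based
parameterEstimates(fit.boot)
```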

Missing data
If the data contain missing values, the default behavior in lavaan is listwise deletion. If the missing mechanism is MCAR (missing completely at random) or MAR (missing at random), the lavaan package provides case-wise (or 'full information') maximum likelihood (FIML) estimation (Arbuckle 1996). FIML estimation can be enabled by specifying the argument missing = "ml" (or its alias missing = "fiml") when calling the fitting function. An unrestricted (h1) model will automatically be estimated, so that all common fit indices are available. Robust standard errors are also available, as is a scaled test statistic (Yuan and Bentler 2000) if the data are both incomplete and non-normal.
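Since the H&S dataset is complete, the sketch below first introduces some artificial MCAR missingness (for illustration only) before enabling FIML estimation:

```r
library(lavaan)

HS.model <- 'visual  =~ x1 + x2 + x3
             textual =~ x4 + x5 + x6
             speed   =~ x7 + x8 + x9'

# introduce artificial MCAR missingness in x5, purely for illustration
set.seed(123)
HS.miss <- HolzingerSwineford1939
HS.miss$x5[sample(nrow(HS.miss), 30)] <- NA

# case-wise (full information) maximum likelihood instead of listwise deletion
fit.fiml <- cfa(HS.model, data = HS.miss, missing = "fiml")

# the unrestricted (h1) model is estimated automatically,
# so the common fit indices remain available
fitMeasures(fit.fiml, c("cfi", "rmsea"))
```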

Linear and nonlinear equality and inequality constraints
In many applications, it is necessary to impose constraints on some of the model parameters. For example, one may wish to enforce that a variance parameter is strictly positive. For some models, it is important to specify that a parameter is equal to some (linear or nonlinear) function of other parameters. The aim of the lavaan package is to make such constraints easy to specify using the lavaan model syntax. A short example will illustrate constraint syntax in lavaan. Consider the following regression:

y ~ b1*x1 + b2*x2 + b3*x3

where we have explicitly labeled the regression coefficients as b1, b2 and b3. We create a toy dataset containing these four variables and fit the regression model:

R> set.seed(1234)
R> Data <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100),
+                     x3 = rnorm(100))
R> model <- 'y ~ b1*x1 + b2*x2 + b3*x3'
R> fit <- sem(model, data = Data)
R> coef(fit)

    b1     b2     b3   y~~y
-0.052  0.084  0.139  0.970

Suppose that we wish to impose two (nonlinear) constraints on b1: b1 = (b2 + b3)² and b1 ≥ exp(b2 + b3). The first constraint is an equality constraint, whereas the second is an inequality constraint. In lavaan, this is accomplished by adding the constraints to the model syntax:

R> model.constr <- 'y ~ b1*x1 + b2*x2 + b3*x3
+                   b1 == (b2 + b3)^2
+                   b1 > exp(b2 + b3)'
R> fit <- sem(model.constr, data = Data)

The summary() output now contains a constraints section reporting the 'slack' for each constraint:

Constraints:                             Slack
    b1 - (exp(b2+b3))                    0.000
    b1 - ((b2+b3)^2)                     0.000

The reader can verify that the constraints are indeed respected. The equality constraint holds exactly. The inequality constraint has resulted in an equality between the left-hand side (b1) and the right-hand side (exp(b2 + b3)). Since in both cases the left-hand side equals the right-hand side, the 'slack' (the difference between the two sides) is zero.

Indirect effects and mediation analysis
Once a model has been fitted, we may be interested in values that are functions of the original estimated model parameters. One example is an indirect effect, which is a product of two (or more) regression coefficients. Consider a classical mediation setup with three variables: Y is the dependent variable, X is the predictor, and M is a mediator. For illustration, we again create a toy dataset containing these three variables, and fit a model in which the indirect effect is defined as the product of the two relevant regression coefficients. The example illustrates the use of the ':=' operator in the lavaan model syntax. This operator 'defines' new parameters which take on values that are an arbitrary function of the original model parameters. The function, however, must be specified in terms of the parameter labels that are explicitly mentioned in the model syntax. By default, the standard errors for these defined parameters are computed using the delta method (Sobel 1982). As with other models, bootstrap standard errors can be requested simply by specifying se = "bootstrap" in the fitting function.
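A small simulated example in the spirit of this description (the variable names, labels, and effect sizes below are arbitrary choices for illustration):

```r
library(lavaan)

# toy data: X predicts M, and M predicts Y
set.seed(1234)
X <- rnorm(100)
M <- 0.5 * X + rnorm(100)
Y <- 0.7 * M + rnorm(100)
Data <- data.frame(X = X, Y = Y, M = M)

model <- '# direct effect
            Y ~ c*X
          # mediator
            M ~ a*X
            Y ~ b*M
          # indirect effect (a*b), defined via the := operator
            ab := a*b
          # total effect
            total := c + (a*b)'
fit <- sem(model, data = Data)

# the output includes rows for the defined parameters ab and total,
# with delta-method standard errors
parameterEstimates(fit)
```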

Concluding remarks
This paper described the R package lavaan. Despite its name, the current version (0.4) of lavaan should be regarded as a package for structural equation modeling with continuous data. One of the main attractions of lavaan is its intuitive and easy-to-use model syntax. lavaan is also fairly complete, and contains most of the features that applied researchers are looking for in a modern SEM package. So when will lavaan become a package for latent variable analysis? In due time.