spikeSlabGAM: Bayesian Variable Selection, Model Choice and Regularization for Generalized Additive Mixed Models in R

The R package spikeSlabGAM implements Bayesian variable selection, model choice, and regularized estimation in (geo-)additive mixed models for Gaussian, binomial, and Poisson responses. Its purpose is (1) to choose an appropriate subset of potential covariates and their interactions, (2) to determine whether linear or more flexible functional forms are required to model the effects of the respective covariates, and (3) to estimate their shapes. Selection and regularization of the model terms are based on a novel spike-and-slab-type prior on coefficient groups associated with parametric and semi-parametric effects.


Introduction
In data sets with many potential predictors, choosing an appropriate subset of covariates and their interactions at the same time as determining whether linear or more flexible functional forms are required to model the relationships between covariates and the response is a challenging and important task. From a Bayesian perspective, it can be translated into a question of estimating marginal posterior probabilities of whether a variable should be in the model and in what form (i.e., linear or smooth; as a main effect and/or as an effect modifier).
We introduce the R (R Development Core Team 2011) package spikeSlabGAM which is available from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=spikeSlabGAM. The package implements fully Bayesian variable selection and model choice with a spike-and-slab prior structure that expands the approach in Ishwaran and Rao (2005) to select or deselect single coefficients as well as blocks of coefficients associated with specific model terms. The spike-and-slab priors we use are bimodal priors for the hyper-variances of the regression coefficients which result in a two-component mixture of a narrow spike around zero and a slab with wide support for the marginal prior of the coefficients themselves. The posterior mixture weights for the spike component for a specific coefficient or coefficient batch can be interpreted as the posterior probability of its exclusion from the model. The coefficient batches selected or deselected in this fashion can be associated with a wide variety of model terms such as simple linear terms, factor variables, basis expansions for the modeling of smooth curves or surfaces, intrinsically Gaussian Markov random fields (IGMRF), random effects, and all their interactions. spikeSlabGAM is able to deal with Gaussian, binomial and Poisson responses, and can be used to fit piecewise exponential models for time-to-event data. For these response types, the package presented here implements regularized estimation, term selection, model choice, and model averaging for a similarly broad class of models as that available in mboost (Hothorn et al. 2011) or BayesX (Brezger et al. 2005).
To the best of our knowledge, it is the first implementation of a Bayesian model term selection method that: (1) is able to fit models for non-Gaussian responses from the exponential family; (2) selects and estimates many types of regularized effects with a (conditionally) Gaussian prior such as simple covariates (both metric and categorical), penalized splines (uni-or multivariate), random effects, spatial effects (kriging, IGMRF) and their interactions; (3) and can distinguish between smooth nonlinear and linear effects. The approach scales reasonably well to datasets with thousands of observations and a few hundred coefficients and is available in documented open source software.
Bayesian function selection, similar to the frequentist COSSO (component selection and smoothing operator) procedure (Lin and Zhang 2006), is usually based on decomposing the additive model in the spirit of a smoothing spline analysis of variance (smoothing spline ANOVA, Wahba et al. 1995). Wood et al. (2002) and Yau et al. (2003) describe procedures for Gaussian and latent Gaussian models using a data-based prior that requires two Markov chain Monte Carlo (MCMC) runs, a pilot run to obtain a data-based prior for the slab part and a second one to estimate parameters and select model components. A more general approach that also allows for flexible modeling of the dispersion in double exponential regression models is described in Cottet et al. (2008), but no implementation is available. Reich et al. (2009) also use the smoothing spline ANOVA framework and perform variable and function selection via stochastic search variable selection (SSVS) for Gaussian responses. Frühwirth-Schnatter and Wagner (2010) discuss various spike-and-slab prior variants for the selection of random intercepts for Gaussian and latent Gaussian models.
The remainder of this paper is structured as follows: Section 2 gives some background on the two main ideas used in spikeSlabGAM: Section 2.1 introduces the necessary notation for the generalized additive mixed model and Section 2.2 fills in some details on the spike-and-slab prior. Section 3 describes details of the implementation: how the design matrices for the model terms are constructed (Section 3.1) and how the MCMC sampler works (Section 3.2). Section 4 explains how to specify, visualize and interpret models fitted with spikeSlabGAM and contains an application to the Pima Indian Diabetes dataset.

Generalized additive mixed models
The generalized additive mixed model (GAMM) is a broad model class that forms a subset of structured additive regression (Fahrmeir et al. 2004). In a GAMM, the distribution of the responses y given a set of covariates x_j (j = 1, ..., p) belongs to an exponential family, i.e.,

p(y | x_1, ..., x_p) = c(y, φ) exp( (yθ − b(θ)) / φ ),

with θ, φ, b(·) and c(·) determined by the type of distribution. The conditional expected value of the response E(y | x_1, ..., x_p) = h(η) is determined by the additive predictor η and a fixed response function h(·).
The additive predictor has three parts: a fixed and known offset η_o, a linear predictor X_u β_u for model terms that are not under selection, with coefficients β_u associated with a very flat Gaussian prior (this will typically include at least a global intercept term), and the model terms f_j(x) = (f_j(x_1), ..., f_j(x_n))' (j = 1, ..., p) that are each represented as linear combinations of d_j basis functions B_j(·), so that

η = η_o + X_u β_u + Σ_{j=1}^p f_j(x),  with f_j(x) = B_j β_j,   (1)

and β_j ~ peNMIG(v_0, w, a_τ, b_τ) for j = 1, ..., p. (2) The peNMIG prior structure is explained in detail in Section 2.2.
Components f_j(x) of the additive predictor represent a wide variety of model terms, such as (1) linear terms (f_j(x) = β_j x_j), (2) nominal or ordinal covariates (f_j(x_ji) = β_jk if x_ji = k, i.e., if entry i in x_j is k), (3) smooth functions of (one or more) continuous covariates (splines, kriging effects, tensor product splines or varying coefficient terms, e.g., Wood (2006)), (4) Markov random fields for discrete spatial covariates (e.g., Rue and Held 2005), (5) random effects (subject-specific intercepts or slopes), and (6) interactions between the different terms (varying-coefficient models, effect modifiers, factor interactions). Estimates for semiparametric model terms and random effects are regularized in order to avoid overfitting and modeled with appropriate shrinkage priors. These shrinkage or regularization priors are usually Gaussian or can be parameterized as scale mixtures of Gaussians (e.g., Fahrmeir et al. 2010). The peNMIG variable selection prior used in spikeSlabGAM can also be viewed as a scale mixture of Gaussians.

Stochastic search variable selection and spike-and-slab priors
While analyses can benefit immensely from a flexible and versatile array of potential model terms, the large number of possible models in any given data situation calls for a principled procedure that is able to select the covariates that are relevant for the modeling effort (i.e., variable selection) as well as to determine the shapes of their effects (e.g., smooth vs. linear) and which interaction effects or effect modifiers need to be considered (i.e., model choice). SSVS and spike-and-slab priors are Bayesian methods for these tasks that do not rely on the often very difficult calculation of marginal likelihoods for large collections of complex models (e.g., Han and Carlin 2001).
The basic idea of the SSVS approach (George and McCulloch 1993) is to introduce a binary latent variable γ_j associated with the coefficients β_j of each model term so that the contribution of a model term to the predictor is forced to be zero (or at least negligibly small) if γ_j is in one state and left unchanged if γ_j is in the other state. The posterior distribution of γ_j can be interpreted as the marginal posterior probability for exclusion or inclusion of the respective model term. The posterior distribution of the vector γ = (γ_1, ..., γ_p) can be interpreted as posterior probabilities for the different models represented by the configurations of γ. Another way to express this basic idea is to assume a spike-and-slab mixture prior for each β_j, with one component being a narrow spike around the origin that imposes very strong shrinkage on the coefficients and the other component being a wide slab that imposes very little shrinkage on the coefficients. The posterior weights for the spike and the slab can then be interpreted analogously.
The flavor of spike-and-slab prior used in spikeSlabGAM is a further development based on Ishwaran and Rao (2005): The basic prior structure, which we call a Normal-mixture of inverse Gammas (NMIG) prior, uses a bimodal prior on the variance v^2 of the coefficients that results in a spike-and-slab type prior on the coefficients themselves. For a scalar β, the prior structure is given by:

β | γ, τ^2 ~ N(0, v^2) with v^2 = γ τ^2,
γ | w ~ w I_1(γ) + (1 − w) I_{v_0}(γ),
τ^2 ~ Γ^{-1}(a_τ, b_τ), and
w ~ Beta(a_w, b_w).   (3)

I_x(y) denotes a function that is 1 if y = x and 0 everywhere else, and v_0 is some small positive constant, so that the indicator γ is 1 with probability w and close to zero with probability 1 − w. This means that the effective prior variance v^2 is very small if γ = v_0; this is the spike part of the prior. The variance τ^2 is sampled from an informative inverse Gamma (Γ^{-1}) prior with density p(x | a, b) = (b^a / Γ(a)) x^{-(a+1)} exp(−b/x). This prior hierarchy has some advantages for selection of model terms for non-Gaussian data since the selection (i.e., the sampling of indicator variables γ) occurs on the level of the coefficient variance. This means that the likelihood itself is not in the Markov blanket of γ and consequently does not occur in the full conditional densities (FCD) for the indicator variables, so that the FCD for γ is available in closed form regardless of the likelihood. However, since only the regression coefficients and not the data itself occur in the Markov blanket of γ, inclusion or exclusion of model terms is based on the magnitude of the coefficients β and not on the magnitude of the effect Bβ itself. This means that design matrices have to be scaled similarly across all model terms for the magnitude of the coefficients to be a good proxy for the importance of the associated effect. In spikeSlabGAM, each term's design matrix is scaled to have a Frobenius norm of 0.5 to achieve this.
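The hierarchy in (3) is straightforward to simulate. The following minimal sketch (Python/NumPy is used here purely for illustration; the package itself is written in R) draws from the marginal NMIG prior using the default hyperparameters reported in Section 2 and shows the two clearly separated scales of spike and slab:

```python
import numpy as np

rng = np.random.default_rng(1)

def rnmig(n, v0=2.5e-4, a_w=1.0, b_w=1.0, a_tau=5.0, b_tau=25.0):
    """Draw n samples from the marginal NMIG prior for a scalar coefficient."""
    w = rng.beta(a_w, b_w, size=n)                        # mixture weight w ~ Beta(a_w, b_w)
    gamma = np.where(rng.uniform(size=n) < w, 1.0, v0)    # indicator: slab (1) or spike (v0)
    tau2 = 1.0 / rng.gamma(a_tau, 1.0 / b_tau, size=n)    # tau^2 ~ InvGamma(a_tau, b_tau)
    beta = rng.normal(0.0, np.sqrt(gamma * tau2))         # beta | gamma, tau^2 ~ N(0, gamma * tau2)
    return beta, gamma

beta, gamma = rnmig(20000)
spike = beta[gamma != 1.0]   # coefficients drawn from the spike component
slab = beta[gamma == 1.0]    # coefficients drawn from the slab component
```

With the defaults, the spike draws concentrate tightly around zero while the slab draws spread over a much wider range, which is exactly the bimodal behavior the prior is designed to produce.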

A parameter-expanded NMIG prior
While the conventional NMIG prior (3) works well for the selection of single coefficients, it is unsuited for the simultaneous selection or deselection of coefficient vectors, such as coefficients associated with spline basis functions or with the levels of a random intercept. In a nutshell, the problem is that a small variance for a batch of coefficients implies small coefficient values and small coefficient values in turn imply a small variance, so that blockwise MCMC samplers are unlikely to exit a basin of attraction around the origin. Gelman et al. (2008) analyze this issue in the context of hierarchical models, where it is framed as a problematically strong dependence between a block of coefficients and their associated hypervariance. A bimodal prior for the variance, such as the NMIG prior, obviously exacerbates these difficulties as the chain has to be able to switch between the different components of the mixture prior. The problem is much less acute for coefficient batches with only a single or few entries since a small batch contributes much less information to the full conditional of its variance parameter. The sampler is then better able to switch between the less clearly separated basins of attraction around the two modes corresponding to the spike and the slab (Scheipl 2010, Section 3.2). In our context, "switching modes" means that entries in γ change their state from 1 to v_0 or vice versa. The practical importance for our aim is clear: Without fast and reliable mixing of γ for coefficient batches with more than a few entries, the posterior distribution cannot be used to define marginal probabilities of models or term inclusion. In previous approaches, this problem has been circumvented either by relying on very low dimensional bases with only a handful of basis functions (Reich et al. 2009; Cottet et al. 2008) or by sampling the indicators from a partial conditional density, with coefficients and their hypervariances integrated out (Yau et al. 2003).
A promising strategy to reduce the dependence between coefficient batches and their variance parameter that neither limits the dimension of the basis nor relies on repeated integration of multivariate functions is the introduction of working parameters that are only partially identifiable, along the lines of parameter expansion or marginal augmentation (Meng and van Dyk 1997; Gelman et al. 2008). The central idea implemented in spikeSlabGAM is a multiplicative parameter expansion that improves the shrinkage properties of the resulting marginal prior compared to NMIG (Scheipl 2011, Section 4.1.1) and enables simultaneous selection or deselection of large coefficient batches. Figure 1 shows the peNMIG prior hierarchy for a model with p model terms: We set β_j = α_j ξ_j with mutually independent α_j and ξ_j for a coefficient batch β_j with length d_j and use a scalar parameter α_j ~ NMIG(v_0, w, a_τ, b_τ), where NMIG denotes the prior hierarchy given in (3).
We write β_j ~ peNMIG(v_0, w, a_τ, b_τ) as shorthand for this parameter-expanded NMIG prior. The effective dimension of each coefficient batch associated with a specific γ_j and τ_j^2 is then just one, since the Markov blankets of both γ_j and τ_j now only contain the scalar parameter α_j instead of the vector β_j. This solves the mixing problems for γ described above. The long vector ξ = (ξ_1', ..., ξ_p')' is decomposed into subvectors ξ_j associated with the different coefficient batches and their respective entries α_j (j = 1, ..., p) in α. The parameter w is a global parameter that influences all model terms; it can be interpreted as the prior probability of a term being included in the model. The parameter α_j parameterizes the "importance" of the j-th model term, while ξ_j "distributes" α_j across the entries in the coefficient batch β_j. Setting the conditional expectation E(ξ_jk | m_jk) = ±1 shrinks |ξ_jk| towards 1, the multiplicative identity, so that the interpretation of α_j as the "importance" of the j-th coefficient batch can be maintained.
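A single draw from the parameter-expanded prior can be sketched in a few lines (Python/NumPy for illustration only; following Scheipl (2011), we assume independent N(m_jk, 1) priors for the entries of ξ_j with m_jk = ±1 with equal probability, and the hyperparameter values are the package defaults):

```python
import numpy as np

rng = np.random.default_rng(2)

def rpenmig(d, v0=2.5e-4, w=0.5, a_tau=5.0, b_tau=25.0):
    """One draw of a d-dimensional coefficient batch beta_j = alpha_j * xi_j."""
    gamma = 1.0 if rng.uniform() < w else v0          # one scalar indicator for the whole batch
    tau2 = 1.0 / rng.gamma(a_tau, 1.0 / b_tau)        # tau^2 ~ InvGamma(a_tau, b_tau)
    alpha = rng.normal(0.0, np.sqrt(gamma * tau2))    # scalar "importance" alpha_j ~ NMIG
    m = rng.choice([-1.0, 1.0], size=d)               # prior means m_jk = +/-1
    xi = rng.normal(m, 1.0)                           # xi_jk ~ N(m_jk, 1), shrunk towards +/-1
    return alpha * xi                                 # beta_j = alpha_j * xi_j

beta_j = rpenmig(8)
```

Note how the scalar γ either switches the entire batch into the spike or leaves it in the slab, which is what makes blockwise selection of whole coefficient vectors possible.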
The marginal peNMIG prior, i.e., the prior for β integrated over the intermediate quantities α, ξ, τ^2, γ and w, combines an infinite spike at zero with heavy tails. This desirable combination is similar to the properties of other recently proposed shrinkage priors such as the horseshoe prior (Carvalho et al. 2010) and the normal-Jeffreys prior (Bae and Mallick 2004), for which both robustness for large values of β and very efficient estimation of sparse coefficient vectors have been shown. The shape of the marginal peNMIG prior is fairly close to the original spike-and-slab prior suggested by Mitchell and Beauchamp (1988), which used a mixture of a point mass in zero and a uniform distribution on a finite interval, but it has the benefit of (partially) conjugate and proper priors. A detailed derivation of the properties of the peNMIG prior and an investigation of its sensitivity to hyperparameter settings is in Scheipl (2011), along with performance comparisons against mboost and other approaches with regard to term selection, sparsity recovery, and estimation error for Gaussian, binomial and Poisson responses on real and simulated data sets. The default settings for the hyperparameters, validated through many simulations and data examples, are a_τ = 5, b_τ = 25, v_0 = 2.5 · 10^{-4}. By default, we use a uniform prior on w, i.e., a_w = b_w = 1, and a very flat Γ^{-1}(10^{-4}, 10^{-4}) prior for the error variance in Gaussian models.

Setting up the design
All of the terms implemented in spikeSlabGAM have the following structure: First, their contribution to the predictor η is represented as a linear combination of basis functions, i.e., the term associated with covariate x is represented as f̃(x) = Σ_{k=1}^K δ_k B̃_k(x) = B̃δ, where δ is a vector of coefficients associated with the basis functions B̃_k(·) (k = 1, ..., K) evaluated in x. Second, δ has a (conditionally) multivariate Gaussian prior, i.e., δ | v^2 ~ N(0, v^2 P^-), with a fixed scaled precision matrix P that is often positive semi-definite. Table 1 gives an overview of the model terms available in spikeSlabGAM and how they fit into this framework.

Table 1: Term types in spikeSlabGAM. The semiparametric terms (sm(), srf(), mrf()) only parameterize the proper part of their respective regularization priors (see Section 3.1). Unpenalized terms not associated with a peNMIG prior (i.e., the columns in X_u in (1)) are specified with term type u().
Formula (2) glosses over the fact that every coefficient batch associated with a specific term will have some kind of prior dependency structure determined by P. Moreover, if P is only positive semi-definite, the prior is partially improper. For example, the precision matrix for a B-spline with second order difference penalty implies an improper flat prior on the linear and constant components of the estimated function (Lang and Brezger 2004). The precision matrix for an IGMRF of first order puts an improper flat prior on the mean level of the IGMRF (Rue and Held 2005, Chapter 3). These partially improper priors for splines and IGMRFs are problematic for spikeSlabGAM's purpose for two reasons: First, coefficient vectors in the nullspace of the prior precision matrix, e.g., those that parameterize linear functions, are estimated entirely unpenalized. This means that they are unaffected by the variable selection property of the peNMIG prior and thus always remain included in the model, but we need to be able not only to remove the entire effect of a covariate (i.e., both its penalized and unpenalized parts) from the model, but also to select or deselect its penalized and unpenalized parts separately. The second issue is that, since the nullspaces of these precision matrices usually also contain coefficient vectors that parameterize constant effects, terms in multivariate models are not identifiable, since adding a constant to one term and subtracting it from another does not affect the posterior.
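The nullspace structure described above is easy to verify numerically. The following NumPy check (a sketch; the difference matrix D is a hypothetical stand-in for the package's internal penalty construction) shows that, for a second-order difference penalty, both constant and linear coefficient vectors are annihilated by the precision matrix and are therefore left unpenalized:

```python
import numpy as np

K = 10
# Second-order difference matrix: each row applies the (1, -2, 1) stencil
D = np.diff(np.eye(K), n=2, axis=0)   # shape (K-2, K)
P = D.T @ D                           # penalty/precision matrix, rank K-2

const = np.ones(K)                    # coefficients of a constant function
linear = np.arange(K, dtype=float)    # coefficients of a linear function

# Both vectors lie in the nullspace of P, i.e., P @ v = 0:
in_nullspace = np.allclose(P @ const, 0) and np.allclose(P @ linear, 0)
```

Here `in_nullspace` is True: the quadratic penalty δ'Pδ vanishes for constant and linear δ, which is precisely why those components need to be split off and handled separately.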
Two strategies to resolve these issues are implemented in spikeSlabGAM. Both involve two steps: (1) splitting terms with partially improper priors into two parts, one associated with the improper/unpenalized part of the prior and one associated with the proper/penalized part of the prior; and (2) absorbing the fixed prior correlation structure of the coefficients implied by P into a transformed design matrix B associated with a priori independent coefficients β for the penalized part. Constant functions contained in the unpenalized part of a term are subsumed into a global intercept. This removes the identifiability issue. The remainder of the unpenalized component enters the model in a separate term, e.g., P-splines (term type sm(), see Table 1) leave polynomial functions of a certain order unpenalized and these enter the model in a separate lin() term.

Orthogonal decomposition
The first strategy, used by default, employs a reduced rank approximation of the implied covariance of f̃(x) to construct B, similar to the approaches used in Reich et al. (2009) and Cottet et al. (2008): Since f̃(x) = B̃δ ~ N(0, v^2 B̃P^-B̃'), we can use the spectral decomposition B̃P^-B̃' = UDU' with orthonormal U and diagonal D with entries ≥ 0 to find an orthogonal basis representation for Cov(f̃(x)). For B̃ with d̃ columns and full column rank and P with rank d̃ − n_P, where n_P is the dimension of the nullspace of P, all eigenvalues of Cov(f̃(x)) except the first d̃ − n_P are zero. Now write

Cov(f̃(x)) = [U_+ U_0] [ D_+ 0 ; 0 0 ] [U_+ U_0]',

where U_+ is the matrix of eigenvectors associated with the positive eigenvalues in D_+, and U_0 are the eigenvectors associated with the zero eigenvalues. With B = U_+ D_+^{1/2} and β ~ N(0, v^2 I), f(x) = Bβ has a proper Gaussian distribution that is proportional to that of the partially improper prior of f̃(x) (Rue and Held 2005, Equation 3.16) and parameterizes only the penalized/proper part of f̃(x), while the unpenalized part of the function is represented by U_0.
In practice, it is unnecessary and impractically slow to compute all n eigenvectors and eigenvalues for a full spectral decomposition UDU'. Only the first d̃ − n_P are needed for B, and of those the first few typically represent most of the variability in f(x). spikeSlabGAM makes use of a fast truncated bidiagonalization algorithm (Baglama and Reichel 2006) implemented in irlba (Baglama et al. 2011) to compute only the largest d̃ − n_P eigenvalues of Cov(f̃(x)) and their associated eigenvectors. Only the first d eigenvalues and eigenvectors whose sum represents at least 99.5% of the sum of all eigenvalues are used to construct the reduced rank orthogonal basis B with d columns; e.g., for a cubic P-spline with second order difference penalty and 20 basis functions (i.e., d̃ = 20 columns in B̃ and n_P = 2), B will typically have only 8 to 12 columns.
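The construction of B from the leading eigenpairs can be sketched as follows (NumPy for illustration; a random matrix stands in for the spline design B̃, and a dense eigendecomposition replaces the truncated irlba computation):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 50, 10
Btilde = rng.normal(size=(n, K))             # hypothetical stand-in for a spline design matrix
D2 = np.diff(np.eye(K), n=2, axis=0)
P = D2.T @ D2                                # second-difference penalty, nullspace dim n_P = 2

cov = Btilde @ np.linalg.pinv(P) @ Btilde.T  # implied prior covariance of f(x), up to v^2
evals, evecs = np.linalg.eigh(cov)
evals, evecs = evals[::-1], evecs[:, ::-1]   # sort eigenpairs in descending order
pos = evals > 1e-8 * evals[0]                # the K - n_P numerically positive eigenvalues

# keep the leading eigenpairs covering at least 99.5% of the total eigenvalue sum
frac = np.cumsum(evals[pos]) / evals[pos].sum()
d = int(np.searchsorted(frac, 0.995)) + 1
B = evecs[:, :d] * np.sqrt(evals[:d])        # B = U_+ D_+^(1/2), reduced-rank design
```

By construction B B' reproduces Cov(f̃(x)) up to the discarded 0.5% of the eigenvalue mass, so the reduced-rank basis loses almost none of the prior variability.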

"Mixed model" decomposition
The second strategy reparameterizes via a decomposition of the coefficient vector δ into an unpenalized part and a penalized part: δ = X_u β_u + X_p β, where X_u is a basis of the n_P-dimensional nullspace of P and X_p is a basis of its complement. spikeSlabGAM uses a spectral decomposition of P with

P = [Λ_+ Λ_0] [ Γ_+ 0 ; 0 0 ] [Λ_+ Λ_0]',

where Λ_+ is the matrix of eigenvectors associated with the positive eigenvalues in Γ_+, and Λ_0 are the eigenvectors associated with the zero eigenvalues. This decomposition yields X_u = Λ_0 and X_p = L(L'L)^{-1} with L = Λ_+ Γ_+^{1/2}. The model term can then be expressed as B̃δ = B̃(X_u β_u + X_p β) = B_u β_u + Bβ, with B_u as the design matrix associated with the unpenalized part and B as the design matrix associated with the penalized part of the term. The prior for the coefficients associated with the penalized part after reparameterization is then β ~ N(0, v^2 I), while β_u has a flat prior (cf. Kneib 2006, Chapter 5.1).
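The key properties of this reparameterization, namely that X_u spans the nullspace of P and that the penalty reduces to an identity on the new coefficients β, can be checked in a few lines of NumPy (a sketch; the second-difference penalty stands in for any positive semi-definite P):

```python
import numpy as np

K = 8
D2 = np.diff(np.eye(K), n=2, axis=0)
P = D2.T @ D2                                # precision matrix with a 2-dimensional nullspace

evals, evecs = np.linalg.eigh(P)
zero = evals < 1e-8 * evals.max()
X_u = evecs[:, zero]                         # basis of the nullspace of P (unpenalized part)
L = evecs[:, ~zero] * np.sqrt(evals[~zero])  # L = Lambda_+ Gamma_+^(1/2), so P = L L'
X_p = L @ np.linalg.inv(L.T @ L)             # X_p = L (L'L)^(-1)

# After delta = X_u beta_u + X_p beta, the penalty acts as delta' P delta = beta' beta,
# i.e., an i.i.d. N(0, v^2) prior on beta:
M = X_p.T @ P @ X_p                          # should equal the identity of size K - 2
```

Here M is (numerically) the identity and P X_u = 0, confirming that the penalized part carries an i.i.d. Gaussian prior while β_u is unpenalized.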

Interactions
Design matrices for interaction effects are constructed from tensor products (i.e., columnwise Kronecker products) of the bases for the respective main effect terms. For example, the complete interaction between two numeric covariates x_1 and x_2 with smooth effects modeled as P-splines with second order difference penalty consists of the interactions of their unpenalized parts (i.e., linear x_1 × linear x_2), two varying-coefficient terms (i.e., smooth x_1 × linear x_2, linear x_1 × smooth x_2) and a 2-D nonlinear effect (i.e., smooth x_1 × smooth x_2). By default, spikeSlabGAM uses a reduced rank representation of these tensor product bases derived from their partial singular value decomposition as described above for the "orthogonal" decomposition.
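The columnwise Kronecker product itself is a one-liner: row i of the interaction design is the Kronecker product of row i of each marginal basis. A NumPy sketch with small hypothetical bases:

```python
import numpy as np

def row_tensor(A, B):
    """Columnwise Kronecker (row-wise tensor) product:
    row i of the result is kron(A[i], B[i])."""
    n = A.shape[0]
    return (A[:, :, None] * B[:, None, :]).reshape(n, -1)

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 3))   # e.g., basis evaluations for smooth x1 (3 basis functions)
B = rng.normal(size=(5, 4))   # e.g., basis evaluations for smooth x2 (4 basis functions)
T = row_tensor(A, B)          # interaction design with 3 * 4 = 12 columns
```

The resulting design has one column per pair of marginal basis functions, which is why the reduced-rank representation described above is important to keep interaction terms manageable.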
"Centering" the effects By default, spikeSlabGAM makes the estimated effects of all terms orthogonal to the nullspace of their associated penalty and, for interaction terms, against the corresponding main effects as in Yau et al. (2003). Every B is transformed via B → B I − Z(Z Z) −1 Z . For simple terms (i.e., fct(), lin(), rnd()), Z = 1 and the projection above simply enforces a sum-to-zero constraint on the estimated effect. For semi-parametric terms, Z is a basis of the nullspace of the implied prior on the effect. For interactions between d main effects, where B 1 , . . . , B d are the design matrices of the involved main effects. This centering improves separability between main effects and their interactions by removing any overlap of their respective column spaces. All uncertainty about the mean response level is shifted into the global intercept. The projection uses the QR decomposition of Z for speed and stability.

Markov chain Monte Carlo implementation
spikeSlabGAM uses the blockwise Gibbs sampler summarized in Table 2 for MCMC inference. The sampler cyclically updates the nodes in Figure 1. The FCD for α is based on the "collapsed" design matrix X_α = X blockdiag(ξ_1, ..., ξ_p), while ξ is sampled based on a "rescaled" design matrix X_ξ = X diag(blockdiag(1_{d_1}, ..., 1_{d_p}) α), where 1_d is a d × 1 vector of ones and X = [X_u B_1 ... B_p] is the concatenation of the designs for the different model terms (see (1)). The full conditionals for α and ξ for Gaussian responses are given by

α | · ~ N(μ_α, Σ_α) with Σ_α = (X_α'X_α / φ + D_α^{-1})^{-1} and μ_α = Σ_α X_α' ỹ / φ, (4)
ξ | · ~ N(μ_ξ, Σ_ξ) with Σ_ξ = (X_ξ'X_ξ / φ + I)^{-1} and μ_ξ = Σ_ξ (X_ξ' ỹ / φ + m), (5)

where φ is the error variance, ỹ denotes the response minus all predictor components not updated in the current block, D_α = diag(γ_1 τ_1^2, ..., γ_p τ_p^2), and m is the vector of prior means m_jk. For non-Gaussian responses, we use penalized iteratively re-weighted least squares (P-IWLS) proposals (Lang and Brezger 2004) in a Metropolis-Hastings step to sample α and ξ, i.e., Gaussian proposals are drawn from the quadratic Taylor approximation of the logarithm of the intractable FCD. Because of the prohibitive computational cost for large q and p (and low acceptance rates for non-Gaussian response for high-dimensional IWLS proposals), neither α nor ξ is updated all at once. Rather, both α and ξ are split into b_α (b_ξ) update blocks that are updated sequentially conditional on the states of all other parameters.
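The "collapsed" and "rescaled" design matrices leave the linear predictor unchanged, since X_α α = X_ξ ξ = Xβ with β = αξ. A small NumPy check of this invariant (batch sizes and dimensions are made up; X_u is omitted for simplicity):

```python
import numpy as np

rng = np.random.default_rng(6)
d = [2, 3]                                  # batch sizes d_1, d_2
n = 10
X = rng.normal(size=(n, sum(d)))            # concatenated design [B_1 B_2]
alpha = rng.normal(size=len(d))             # scalar importances alpha_j
xi = rng.normal(size=sum(d))                # per-coefficient parameters xi

def blockdiag_cols(vecs):
    """blockdiag(v_1, ..., v_p): column j holds v_j, zeros elsewhere."""
    out = np.zeros((sum(len(v) for v in vecs), len(vecs)))
    start = 0
    for j, v in enumerate(vecs):
        out[start:start + len(v), j] = v
        start += len(v)
    return out

xi_blocks = np.split(xi, np.cumsum(d)[:-1])
X_alpha = X @ blockdiag_cols(xi_blocks)     # collapsed design used when updating alpha
X_xi = X * np.repeat(alpha, d)              # rescaled design used when updating xi
beta = np.repeat(alpha, d) * xi             # beta_j = alpha_j * xi_j
```

Both X_alpha @ alpha and X_xi @ xi reproduce X @ beta exactly, which is what allows the sampler to alternate between low-dimensional updates of α and conditionally Gaussian updates of ξ.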
By default, starting values β^(0) are drawn randomly in three steps: First, 5 Fisher scoring steps with fixed, large hypervariances are performed to reach a viable region of the parameter space. Second, for each chain run in parallel, Gaussian noise is added to this preliminary β^(0), and third, its constituent p subvectors are scaled with variance parameters γ_j τ_j^2 (j = 1, ..., p) drawn from their priors. This means that, for each of the parallel chains, some of the p model terms are set close to zero initially, and the remainder is in the vicinity of their respective ridge-penalized MLEs. Starting values for α^(0) and ξ^(0) are then computed from β^(0); Scheipl (2011) contains more details on the sampler.

Using spikeSlabGAM
Model specification and post-processing

spikeSlabGAM uses the standard R formula syntax to specify models, with a slight twist: Every term in the model has to belong to one of the term types given in Table 1. If a model formula contains "raw" terms not wrapped in one of these term type functions, the package will try to guess appropriate term types: For example, the formula y ~ x + f with a numeric x and a factor f is expanded into y ~ lin(x) + sm(x) + fct(f), since the default is to model any numeric covariate as a smooth effect, with a lin() term parameterizing functions from the nullspace of its penalty and an sm() term parameterizing the penalized part. The model formula defines the candidate set of model terms that comprise the model of maximal complexity under consideration. As of now, indicators γ are sampled without hierarchical constraints, i.e., an interaction effect can be included in the model even if the associated main effects or lower order interactions are not.
We generate some artificial data for a didactic example, drawing n = 200 observations from a data generating process with smooth effects of sm1 and sm2, linear effects of lin1, lin2 and lin3, a factor f, and noise variables noise1, noise2, noise3 without influence on the response (see Figures 3 and 4 for the shapes of the nonlinear effects f(sm1) and f(sm2, f)). The response vector y = η + sd(η)/snr · ε is generated under signal-to-noise ratio snr = 3 with i.i.d. t_5-distributed errors ε_i (i = 1, ..., n).

R> summary(m)
Spike-and-Slab STAR for Gaussian data

Model:
y ~ ((lin(sm1) + sm(sm1)) + (lin(sm2) + sm(sm2)) + fct(f) +
    (lin(lin1) + sm(lin1)))^2 + (lin(lin2) + sm(lin2)) +
    (lin(lin3) + sm(lin3)) + (lin(noise1) + sm(noise1)) +
    (lin(noise2) + sm(noise2)) + (lin(noise3) + sm(noise3)) + fct(

In most applications, the primary focus will be on the marginal posterior inclusion probabilities P(gamma = 1), given along with a measure of term importance pi and the size of the associated coefficient batch dim. pi is defined as π_j = η̄_j'η̄_{-1} / (η̄_{-1}'η̄_{-1}), where η̄_j is the posterior expectation of the linear predictor associated with the j-th term, and η̄_{-1} is the linear predictor minus the intercept. Since Σ_{j=1}^p π_j = 1, the pi values provide a rough percentage decomposition of the sum of squares of the (non-constant) linear predictor (Gu 1992). Note that they can assume negative values as well for terms whose contributions to the linear predictor η̄_j are negatively correlated with the remainder of the (non-constant) linear predictor η̄_{-1} − η̄_j. The summary shows that almost all true effects have a high posterior inclusion probability (i.e., lin() for lin2, lin3; lin(), sm() for sm1, sm2; fct(f); and the interaction terms between sm2 and f). All the terms associated with noise variables and the superfluous smooth terms for lin1, lin2, lin3 as well as the superfluous interaction terms have a very low posterior inclusion probability. The small linear influence of lin1 has not been recovered. Table 3 shows an excerpt from the second part of the summary output, which summarizes the posterior of the vector of inclusion indicators γ. The table shows the different configurations of P(γ_j = 1) > .5, j = 1, ..., p sorted by relative frequency, i.e., the models visited by the sampler sorted by decreasing posterior support. For this simulated data, the posterior is concentrated strongly on the (almost) true model missing the small linear effect of lin1.
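The π_j decomposition always sums to one by construction, since the term contributions add up to the non-constant linear predictor, Σ_j η̄_j = η̄_{-1}. A quick NumPy illustration with arbitrary made-up term contributions:

```python
import numpy as np

rng = np.random.default_rng(7)
p, n = 4, 100
eta_j = rng.normal(size=(p, n))          # posterior-mean contribution of each of p terms
eta_m1 = eta_j.sum(axis=0)               # eta_{-1}: linear predictor minus the intercept

# pi_j = eta_j' eta_{-1} / (eta_{-1}' eta_{-1})
pi = eta_j @ eta_m1 / (eta_m1 @ eta_m1)
```

Individual π_j can be negative (when a term's contribution is negatively correlated with the rest of the predictor), but the total is always exactly 1, which is what justifies reading the values as a rough percentage decomposition.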

Visualization
spikeSlabGAM offers automated visualizations for model terms and their interactions, implemented with ggplot2 (Wickham 2009). By default, the posterior mean of the linear predictor associated with each covariate (or combination of covariates if the model contains interactions) along with (pointwise) 80% credible intervals is shown. Figure 2 shows the estimated effects for m as generated by plot(m).
Plots for specific terms can be requested with the label argument, e.g., for f(sm1) (see Figure 3) via

R> plot(m, labels = c("lin(sm1)", "sm(sm1)"), cumulative = FALSE)
R> trueFsm1 <- data.frame(truth = fsm1 - mean(fsm1), sm1 = sm1)
R> plot(m, labels = "sm(sm1)", ggElems = list(geom_line(aes(x = sm1,
+    y = truth), data = trueFsm1, linetype = 2)))

and similarly for f(sm2, f) (see Figure 4) via

R> trueFsm2f <- data.frame(truth = fsm2f - mean(fsm2f), sm2 = sm2, f = f)
R> plot(m, labels = "sm(sm2):fct(f)", ggElems = list(geom_line(aes(x = sm2,
+    y = truth, colour = f), data = trueFsm2f, linetype = 2)))

Table 3: Excerpt of the second part of the output returned by the summary method, which tabulates the configurations of P(γ_j = 1) > .5 with highest posterior probability. In the example, the posterior is very concentrated in the true model without lin1, which has a posterior probability of 0.27. The correct model that additionally includes lin1 (column 12) has a posterior probability of 0.01.

Figure 2: Posterior means and pointwise 80% credible intervals for m. Interaction surfaces of two numerical covariates are displayed as color-coded contour plots, with regions in which the credible interval does not overlap zero marked in blue (η < 0) or red (η > 0). Each panel contains a marginal rug plot that shows where the observations are located. Note that the default behavior of the plot method is to cumulate all terms associated with a covariate or covariate combination. In this example, the joint effects of the first 4 covariates sm1, sm2, f and lin1 and the sums of the lin and sm terms associated with lin2, lin3, noise1, noise2 and noise3 are displayed. All effects of the noise variables are ≈ 0; note the different scales on the vertical axes. Vertical axes can be forced to the same range by setting option commonEtaScale.
The fits are quite close to the truth despite the heavy-tailed errors and the many noise terms included in the model. Full disclosure: The code used to render Figures 3 and 4 is a little more intricate than the code snippets we show, but the additional code only affects details (font and margin sizes and the arrangement of the panels).

Assessing convergence
spikeSlabGAM uses the convergence diagnostics implemented in R2WinBUGS (Sturtz et al. 2005). The function ssGAM2Bugs() converts the posterior samples for a spikeSlabGAM object into a bugs object, for which graphical and numerical convergence diagnostics are available via plot and print. Note that not all cases of non-convergence should be considered problematic, e.g., if one of the chains samples from a different part of the model space than the others but has converged on that part of the parameter space.

Summary
We have presented a novel approach for Bayesian variable selection, model choice, and regularized estimation in (geo-)additive mixed models for Gaussian, binomial, and Poisson responses implemented in spikeSlabGAM. The package uses the established R formula syntax so that complex models can be specified very concisely. It features powerful and user-friendly visualizations of the fitted models. Major features of the software have been demonstrated on an example with artificial data with t-distributed errors and on the Pima Indians Diabetes data set.
In future work, the author plans to add capabilities for "always included" semiparametric terms and for sampling the inclusion indicators under hierarchical constraints, i.e., never including an interaction if the associated main effects are excluded. Updated versions of this article will be made available as vignettes to the spikeSlabGAM package.