https://www.jstatsoft.org/index.php/jss/issue/feed Journal of Statistical Software 2021-11-30T16:56:14+00:00 Editorial Office editor@jstatsoft.org Open Journal Systems The Journal of Statistical Software publishes articles on statistical software along with the source code of the software itself and replication code for all empirical results. https://www.jstatsoft.org/index.php/jss/article/view/v100i01 Software for Bayesian Statistics 2021-11-28T22:22:03+00:00 Michela Cameletti michela.cameletti@unibg.it Virgilio Gómez-Rubio virgilio.gomez@uclm.es <p>In this summary we introduce the papers published in the special issue on Bayesian statistics. This special issue comprises 20 papers on different topics in Bayesian statistics and Bayesian inference, such as general packages for hierarchical linear model fitting, survival models, clinical trials, missing values, time series, hypothesis testing, priors, approximate Bayesian computation, and others.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Michela Cameletti, Virgilio Gómez-Rubio https://www.jstatsoft.org/index.php/jss/article/view/v100i02 New Frontiers in Bayesian Modeling Using the INLA Package in R 2021-06-08T10:38:43+00:00 Janet Van Niekerk janet.vanniekerk@kaust.edu.sa Haakon Bakka Haakon.Bakka@kaust.edu.sa Håvard Rue Haavard.Rue@kaust.edu.sa Olaf Schenk olaf.schenk@usi.ch <p>The INLA package provides a tool for computationally efficient Bayesian modeling and inference for various widely used models, more formally the class of latent Gaussian models. It is a non-sampling-based framework that provides approximate results for Bayesian inference, using sparse matrices. The swift uptake of this framework for Bayesian modeling is rooted in the computational efficiency of the approach and catalyzed by the demand presented by the big data era. In this paper, we present new developments within the INLA package with the aim of providing a computationally efficient mechanism for Bayesian inference in relevant, challenging situations.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Janet van Niekerk, Haakon Bakka, Håvard Rue, Olaf Schenk https://www.jstatsoft.org/index.php/jss/article/view/v100i03 Sequential Monte Carlo Methods in the nimble and nimbleSMC R Packages 2020-02-25T12:45:37+00:00 Nicholas Michaud nicholas.michaud@gmail.com Perry de Valpine pdevalpine@berkeley.edu Daniel Turek dbt1@williams.edu Christopher J. Paciorek paciorek@stat.berkeley.edu Dao Nguyen dxnguyen@olemiss.edu <p>nimble is an R package for constructing algorithms and conducting inference on hierarchical models. The nimble package provides a unique combination of flexible model specification and the ability to program model-generic algorithms. Specifically, the package allows users to code models in the BUGS language, and it allows users to write algorithms that can be applied to any appropriate model. In this paper, we introduce the nimbleSMC R package. nimbleSMC contains algorithms for state-space model analysis using sequential Monte Carlo (SMC) techniques that are built using nimble. We first provide an overview of state-space models and commonly used SMC algorithms. We then describe how to build a state-space model in nimble and conduct inference using existing SMC algorithms within nimbleSMC. SMC algorithms within nimbleSMC currently include the bootstrap filter, auxiliary particle filter, ensemble Kalman filter, IF2 method of iterated filtering, and a particle Markov chain Monte Carlo (MCMC) sampler.
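As a hedged illustration of the workflow this entry describes, the sketch below specifies a toy linear-Gaussian state-space model in nimble and runs the nimbleSMC bootstrap filter on simulated data; the model, the placeholder observations, and the particle count are assumptions chosen purely for the example.

library(nimble)
library(nimbleSMC)

# Toy linear-Gaussian state-space model (assumed for illustration)
ssmCode <- nimbleCode({
  x[1] ~ dnorm(0, sd = 1)
  y[1] ~ dnorm(x[1], sd = 0.5)
  for (t in 2:Tmax) {
    x[t] ~ dnorm(0.8 * x[t - 1], sd = 1)
    y[t] ~ dnorm(x[t], sd = 0.5)
  }
})

set.seed(1)
yObs <- rnorm(50)                               # placeholder observations
ssm  <- nimbleModel(ssmCode, constants = list(Tmax = 50),
                    data = list(y = yObs), inits = list(x = rep(0, 50)))
bf   <- buildBootstrapFilter(ssm, nodes = "x")  # bootstrap particle filter for the latent states
cSsm <- compileNimble(ssm)
cBf  <- compileNimble(bf, project = ssm)
cBf$run(10000)                                  # log-likelihood estimate using 10000 particles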
These algorithms can be run in R or compiled into C++ for more efficient execution. Examples of applying SMC algorithms to linear autoregressive models and a stochastic volatility model are provided. Finally, we give an overview of how model-generic algorithms are coded within nimble by providing code for a simple SMC algorithm. This illustrates how users can easily extend nimble's SMC methods in high-level code.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Nicholas Michaud, Perry de Valpine, Daniel Turek, Christopher J. Paciorek, Dao Nguyen https://www.jstatsoft.org/index.php/jss/article/view/v100i04 bamlss: A Lego Toolbox for Flexible Bayesian Regression (and Beyond) 2021-06-07T04:18:21+00:00 Nikolaus Umlauf Nikolaus.Umlauf@uibk.ac.at Nadja Klein nadja.klein@hu-berlin.de Thorsten Simon Thorsten.Simon@uibk.ac.at Achim Zeileis Achim.Zeileis@R-project.org <p>Over the last decades, the challenges in applied regression and in predictive modeling have been changing considerably: (1) More flexible regression model specifications are needed as data sizes and available information are steadily increasing, consequently demanding more powerful computing infrastructure. (2) Full probabilistic models by means of distributional regression - rather than predicting only some underlying individual quantities from the distributions such as means or expectations - are crucial in many applications. (3) The availability of Bayesian inference has gained in importance, both as an appealing framework for regularizing or penalizing complex models and estimating them, and as a natural alternative to classical frequentist inference. However, while there has been a lot of research on all three challenges and the development of corresponding software packages, a modular software implementation that allows all three aspects to be combined easily has not yet been available for the general framework of distributional regression. To fill this gap, the R package bamlss is introduced for Bayesian additive models for location, scale, and shape (and beyond) - with the name reflecting the most important distributional quantities (among others) that can be modeled with the software. At the core of the package are algorithms for highly efficient Bayesian estimation and inference that can be applied to generalized additive models or generalized additive models for location, scale, and shape, or more general distributional regression models. However, its building blocks are designed as "Lego bricks" encompassing various distributions (exponential family, Cox, joint models, etc.), regression terms (linear, splines, random effects, tensor products, spatial fields, etc.), and estimators (MCMC, backfitting, gradient boosting, lasso, etc.). It is demonstrated how these can be easily combined to make classical models more flexible or to create new custom models for specific modeling challenges.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Nikolaus Umlauf, Nadja Klein, Thorsten Simon, Achim Zeileis https://www.jstatsoft.org/index.php/jss/article/view/v100i05 Bayesian Item Response Modeling in R with brms and Stan 2020-08-30T10:13:12+00:00 Paul-Christian Bürkner paul.buerkner@gmail.com <p>Item response theory (IRT) is widely applied in the human sciences to model persons' responses on a set of items measuring one or more latent constructs. While several R packages have been developed that implement IRT models, they tend to be restricted to their respective pre-specified classes of models.
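As a hedged illustration of the kind of model specification this entry describes, the sketch below fits a 1PL (Rasch-type) model in brms with partially pooled person and item parameters; the simulated data frame and its column names (person, item, resp) are assumptions made for the example.

library(brms)

# Simulated long-format item response data (assumed structure: person, item, binary resp)
set.seed(1)
irt_long <- expand.grid(person = factor(1:50), item = factor(1:10))
irt_long$resp <- rbinom(nrow(irt_long), 1, 0.6)

# 1PL model: varying intercepts for items (easiness) and persons (ability)
fit_1pl <- brm(resp ~ 1 + (1 | item) + (1 | person),
               data = irt_long, family = bernoulli())
summary(fit_1pl)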
Further, most implementations are frequentist, while the availability of Bayesian methods remains comparatively limited. I demonstrate how to use the R package brms together with the probabilistic programming language Stan to specify and fit a wide range of Bayesian IRT models using flexible and intuitive multilevel formula syntax. Further, item and person parameters can be related in either a linear or a non-linear manner. Various distributions for categorical, ordinal, and continuous responses are supported. Users may even define their own custom response distribution for use in the presented framework. Common IRT model classes that can be specified natively in the presented framework include 1PL and 2PL logistic models (optionally with guessing parameters), graded response and partial credit ordinal models, as well as drift diffusion models of response times coupled with binary decisions. Posterior distributions of item and person parameters can be conveniently extracted and postprocessed. Model fit can be evaluated and compared using Bayes factors and efficient cross-validation procedures.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Paul-Christian Bürkner https://www.jstatsoft.org/index.php/jss/article/view/v100i06 Efficient Bayesian Structural Equation Modeling in Stan 2021-02-23T17:18:51+00:00 Edgar C. Merkle merklee@missouri.edu Ellen Fitzsimmons eafbym@mail.missouri.edu James Uanhoro no@e-mail.provided Ben Goodrich benjamin.goodrich@columbia.edu <p>Structural equation models comprise a large class of popular statistical models, including factor analysis models, certain mixed models, and extensions thereof. Model estimation is complicated by the fact that we typically have multiple interdependent response variables and multiple latent variables (which may also be called random effects or hidden variables), often leading to slow and inefficient posterior sampling. In this paper, we describe and illustrate a general, efficient approach to Bayesian SEM estimation in Stan, contrasting it with previous implementations in the R package blavaan (Merkle and Rosseel 2018). After describing the approaches in detail, we conduct a practical comparison under multiple scenarios. The comparisons show that the new approach is clearly better. We also discuss ways that the approach may be extended to other models that are of interest to psychometricians.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Edgar C. Merkle, Ellen Fitzsimmons, James Uanhoro, Ben Goodrich https://www.jstatsoft.org/index.php/jss/article/view/v100i07 ABCpy: A High-Performance Computing Perspective to Approximate Bayesian Computation 2021-02-04T12:09:15+00:00 Ritabrata Dutta Ritabrata.Dutta@warwick.ac.uk Marcel Schoengens no@e-mail.provided Lorenzo Pacchiardi no@e-mail.provided Avinash Ummadisingu no@e-mail.provided Nicole Widmer no@e-mail.provided Pierre Künzli no@e-mail.provided Jukka-Pekka Onnela no@e-mail.provided Antonietta Mira no@e-mail.provided <p>ABCpy is a highly modular scientific library for approximate Bayesian computation (ABC) written in Python. The main contribution of this paper is to document a software engineering effort that enables domain scientists to easily apply ABC to their research without being ABC experts; using ABCpy they can easily run large parallel simulations without much knowledge about parallelization. Further, ABCpy enables ABC experts to easily develop new inference schemes and evaluate them in a standardized environment, and to extend the library with new algorithms.
These benefits come mainly from the modularity of ABCpy. We give an overview of the design of ABCpy and provide a performance evaluation concentrating on parallelization. This points us towards the inherent imbalance in some of the ABC algorithms. We develop a dynamic scheduling MPI implementation to mitigate this issue and evaluate the various ABC algorithms according to their adaptability towards high-performance computing.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Ritabrata Dutta, Marcel Schoengens, Lorenzo Pacchiardi, Avinash Ummadisingu, Nicole Widmer, Pierre Künzli, Jukka-Pekka Onnela, Antonietta Mira https://www.jstatsoft.org/index.php/jss/article/view/v100i08 pexm: A JAGS Module for Applications Involving the Piecewise Exponential Distribution 2020-08-30T13:05:31+00:00 Vinícius D. Mayrink vdinizm@gmail.com João Daniel N. Duarte jdanielnd@gmail.com Fábio N. Demarqui fndemarqui@id.uff.br <p>In this study, we present a new module built for users interested in a programming language similar to BUGS to fit a Bayesian model based on the piecewise exponential (PE) distribution. The module is an extension to the open-source program JAGS by which a Gibbs sampler can be applied without requiring the derivation of complete conditionals and the subsequent implementation of strategies to draw samples from unknown distributions. The PE distribution is widely used in the fields of survival analysis and reliability. Currently, it can only be implemented in JAGS through methods to indirectly specify the likelihood based on the Poisson or Bernoulli probabilities. Our module provides a more straightforward implementation and is thus more attractive to the researchers aiming to spend more time exploring the results from the Bayesian inference rather than implementing the Markov Chain Monte Carlo algorithm. For those interested in extending JAGS, this work can be seen as a tutorial including important information not well investigated or organized in other materials. Here, we describe how to use the module taking advantage of the interface between R and JAGS. A short simulation study is developed to ensure that the module behaves well and a real illustration, involving two PE models, exhibits a context where the module can be used in practice.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Vinícius D. Mayrink, João Daniel N. Duarte, Fábio N. Demarqui https://www.jstatsoft.org/index.php/jss/article/view/v100i09 qgam: Bayesian Nonparametric Quantile Regression Modeling in R 2021-02-03T17:43:26+00:00 Matteo Fasiolo matteo.fasiolo@gmail.com Simon N. Wood simon.wood@bristol.ac.uk Margaux Zaffran margaux.zaffran@ensta-paristech.fr Raphaël Nedellec raphael.nedellec@edf.fr Yannig Goude yannig.goude@edf.fr <p>Generalized additive models (GAMs) are flexible non-linear regression models, which can be fitted efficiently using the approximate Bayesian methods provided by the mgcv R package. While the GAM methods provided by mgcv are based on the assumption that the response distribution is modeled parametrically, here we discuss more flexible methods that do not entail any parametric assumption. In particular, this article introduces the qgam package, which is an extension of mgcv providing fast calibrated Bayesian methods for fitting quantile GAMs (QGAMs) in R. 
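A hedged sketch of a basic qgam fit for the conditional median follows; the simulated data and the choice qu = 0.5 are assumptions made for illustration, not an example taken from the article.

library(qgam)

# Simulated data, assumed purely for illustration
set.seed(1)
d <- data.frame(x = runif(500))
d$y <- sin(2 * pi * d$x) + rnorm(500, sd = 0.3)

# Quantile GAM for the conditional median (qu = 0.5) with a smooth effect of x
fit_med <- qgam(y ~ s(x), data = d, qu = 0.5)
summary(fit_med)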
QGAMs are based on a smooth version of the pinball loss of Koenker (2005), rather than on a likelihood function, hence jointly achieving satisfactory accuracy of the quantile point estimates and coverage of the corresponding credible intervals requires adopting the specialized Bayesian fitting framework of Fasiolo, Wood, Zaffran, Nedellec, and Goude (2021b). Here we detail how this framework is implemented in qgam and we provide examples illustrating how the package should be used in practice.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Matteo Fasiolo, Simon N. Wood, Margaux Zaffran, Raphaël Nedellec, Yannig Goude https://www.jstatsoft.org/index.php/jss/article/view/v100i10 dalmatian: A Package for Fitting Double Hierarchical Linear Models in R via JAGS and nimble 2021-02-02T16:18:00+00:00 Simon Bonner sbonner6@uwo.ca Han-Na Kim hkim787@uwo.ca David Westneat david.westneat@uky.edu Ariane Mutzel ariane.mutzel@uky.edu Jonathan Wright jonathan.wright@ntnu.no Matthew Schofield matthew.schofield@otago.ac.nz <p>Traditional regression models, including generalized linear mixed models, focus on understanding the deterministic factors that affect the mean of a response variable. Many biological studies seek to understand non-deterministic patterns in the variance or dispersion of a phenotypic or ecological response variable. We describe a new R package, dalmatian, that provides methods for fitting double hierarchical generalized linear models incorporating fixed and random predictors of both the mean and variance. Models are fit via Markov chain Monte Carlo sampling implemented in either JAGS or nimble and the package provides simple functions for monitoring the sampler and summarizing the results. We illustrate these functions through an application to data on food delivery by breeding pied flycatchers (Ficedula hypoleuca). Our intent is that this package makes it easier for practitioners to implement these models without having to learn the intricacies of Markov chain Monte Carlo methods.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Simon Bonner, Han-Na Kim, David Westneat, Ariane Mutzel, Jonathan Wright, Matthew Schofield https://www.jstatsoft.org/index.php/jss/article/view/v100i11 BayesSUR: An R Package for High-Dimensional Multivariate Bayesian Variable and Covariance Selection in Linear Regression 2021-02-04T17:33:49+00:00 Zhi Zhao zhi.zhao@medisin.uio.no Marco Banterle marco.banterle@gmail.com Leonardo Bottolo lb664@cam.ac.uk Sylvia Richardson sylvia.richardson@mrc-bsu.cam.ac.uk Alex Lewin alex.lewin@lshtm.ac.uk Manuela Zucknick manuela.zucknick@medisin.uio.no <p>In molecular biology, advances in high-throughput technologies have made it possible to study complex multivariate phenotypes and their simultaneous associations with high-dimensional genomic and other omics data, a problem that can be studied with high-dimensional multi-response regression, where the response variables are potentially highly correlated. To this purpose, we recently introduced several multivariate Bayesian variable and covariance selection models, e.g., Bayesian estimation methods for sparse seemingly unrelated regression for variable and covariance selection. Several variable selection priors have been implemented in this context, in particular the hotspot detection prior for latent variable inclusion indicators, which results in sparse variable selection for associations between predictors and multiple phenotypes. 
We also propose an alternative, which uses a Markov random field (MRF) prior for incorporating prior knowledge about the dependence structure of the inclusion indicators. Inference of Bayesian seemingly unrelated regression (SUR) by Markov chain Monte Carlo methods is made computationally feasible by factorization of the covariance matrix amongst the response variables. In this paper we present BayesSUR, an R package, which allows the user to easily specify and run a range of different Bayesian SUR models, which have been implemented in C++ for computational efficiency. The R package allows the specification of the models in a modular way, where the user chooses the priors for variable selection and for covariance selection separately. We demonstrate the performance of sparse SUR models with the hotspot prior and spike-and-slab MRF prior on synthetic and real data sets representing eQTL or mQTL studies and in vitro anti-cancer drug screening studies as examples for typical applications.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Zhi Zhao, Marco Banterle, Leonardo Bottolo, Sylvia Richardson, Alex Lewin, Manuela Zucknick https://www.jstatsoft.org/index.php/jss/article/view/v100i12 Modeling Univariate and Multivariate Stochastic Volatility in R with stochvol and factorstochvol 2021-02-05T18:54:20+00:00 Darjus Hosszejni darjus.hosszejni@wu.ac.at Gregor Kastner gregor.kastner@aau.at <p>Stochastic volatility (SV) models are nonlinear state-space models that enjoy increasing popularity for fitting and predicting heteroskedastic time series. However, due to the large number of latent quantities, their efficient estimation is non-trivial and software that allows to easily fit SV models to data is rare. We aim to alleviate this issue by presenting novel implementations of five SV models delivered in two R packages. Several unique features are included and documented. As opposed to previous versions, stochvol is now capable of handling linear mean models, conditionally heavy tails, and the leverage effect in combination with SV. Moreover, we newly introduce factorstochvol which caters for multivariate SV. Both packages offer a user-friendly interface through the conventional R generics and a range of tailor-made methods. Computational efficiency is achieved via interfacing R to C++ and doing the heavy work in the latter. In the paper at hand, we provide a detailed discussion on Bayesian SV estimation and showcase the use of the new software through various examples.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Darjus Hosszejni, Gregor Kastner https://www.jstatsoft.org/index.php/jss/article/view/v100i13 Shrinkage in the Time-Varying Parameter Model Framework Using the R Package shrinkTVP 2021-02-23T10:23:50+00:00 Peter Knaus peter.knaus@wu.ac.at Angela Bitto-Nemling angela.bitto-nemling@wu.ac.at Annalisa Cadonna annalisa.cadonna@crayon.com Sylvia Frühwirth-Schnatter sfruehwi@wu.ac.at <p>Time-varying parameter (TVP) models are widely used in time series analysis to flexibly deal with processes which gradually change over time. However, the risk of overfitting in TVP models is well known. This issue can be dealt with using appropriate global-local shrinkage priors, which pull time-varying parameters towards static ones. 
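A hedged sketch of fitting such a time-varying parameter regression with the shrinkTVP package named in this entry's title; the simulated data, the formula, and the number of draws are assumptions chosen for the example.

library(shrinkTVP)

# Simulated regression data, assumed for illustration
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- 0.5 * d$x1 + rnorm(200)

# TVP regression with global-local shrinkage priors on the time-varying coefficients
res <- shrinkTVP(y ~ x1 + x2, data = d, niter = 5000)
summary(res)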
In this paper, we introduce the R package shrinkTVP (Knaus, Bitto-Nemling, Cadonna, and Frühwirth-Schnatter 2021), which provides a fully Bayesian implementation of shrinkage priors for TVP models, taking advantage of recent developments in the literature, in particular those of Bitto and Frühwirth-Schnatter (2019) and Cadonna, Frühwirth-Schnatter, and Knaus (2020). The package shrinkTVP allows for posterior simulation of the parameters through an efficient Markov chain Monte Carlo scheme. Moreover, summary and visualization methods, as well as the possibility of assessing predictive performance through log-predictive density scores, are provided. The computationally intensive tasks have been implemented in C++ and interfaced with R. The paper includes a brief overview of the models and shrinkage priors implemented in the package. Furthermore, core functionalities are illustrated with both simulated and real data.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Peter Knaus, Angela Bitto-Nemling, Annalisa Cadonna, Sylvia Frühwirth-Schnatter https://www.jstatsoft.org/index.php/jss/article/view/v100i14 BVAR: Bayesian Vector Autoregressions with Hierarchical Prior Selection in R 2020-10-05T12:40:47+00:00 Nikolas Kuschnig nkuschni@wu.ac.at Lukas Vashold lvashold@wu.ac.at <p>Vector autoregression (VAR) models are widely used for multivariate time series analysis in macroeconomics, finance, and related fields. Bayesian methods are often employed to deal with their dense parameterization, imposing structure on model coefficients via prior information. The optimal choice of the degree of informativeness implied by these priors is the subject of much debate and can be approached via hierarchical modeling. This paper introduces BVAR, an R package dedicated to the estimation of Bayesian VAR models with hierarchical prior selection. It implements functionalities and options that permit addressing a wide range of research problems, while retaining an easy-to-use and transparent interface. Features include structural analysis of impulse responses, forecasts, the most commonly used conjugate priors, as well as a framework for defining custom dummy-observation priors. BVAR makes Bayesian VAR models user-friendly and provides an accessible reference implementation.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Nikolas Kuschnig, Lukas Vashold https://www.jstatsoft.org/index.php/jss/article/view/v100i15 BNPmix: An R Package for Bayesian Nonparametric Modeling via Pitman-Yor Mixtures 2020-09-09T07:55:45+00:00 Riccardo Corradin riccardo.corradin@unimib.it Antonio Canale canale@stat.unipd.it Bernardo Nipoti bernardo.nipoti@unimib.it <p>BNPmix is an R package for Bayesian nonparametric multivariate density estimation, clustering, and regression, using Pitman-Yor mixture models, a flexible and robust generalization of the popular class of Dirichlet process mixture models. A variety of model specifications and state-of-the-art posterior samplers are implemented. In order to achieve computational efficiency, all sampling methods are written in C++ and seamlessly integrated into R by means of the Rcpp and RcppArmadillo packages.
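A hedged sketch of univariate Pitman-Yor mixture density estimation with BNPmix; the simulated bimodal sample and the MCMC settings are assumptions chosen for illustration.

library(BNPmix)

# Simulated bimodal sample, assumed for illustration
set.seed(1)
y <- c(rnorm(100, -2), rnorm(100, 2))

# Pitman-Yor mixture density estimation
est <- PYdensity(y, mcmc = list(niter = 2000, nburn = 500))
print(est)
plot(est)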
BNPmix exploits the ggplot2 capabilities and implements a series of generic functions to plot and print summaries of posterior densities and induced clustering of the data.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Riccardo Corradin, Antonio Canale, Bernardo Nipoti https://www.jstatsoft.org/index.php/jss/article/view/v100i16 A Bayesian Approach for Model-Based Clustering of Several Binary Dissimilarity Matrices: The dmbc Package in R 2021-03-01T05:46:28+00:00 Sergio Venturini sergio.venturini@gmail.com Raffaella Piccarreta raffaella.piccarreta@unibocconi.it <p>We introduce the new package dmbc that implements a Bayesian algorithm for clustering a set of binary dissimilarity matrices within a model-based framework. Specifically, we consider the case when S matrices are available, each describing the dissimilarities among the same n objects, possibly expressed by S subjects (judges), or measured under different experimental conditions, or with reference to different characteristics of the objects themselves. In particular, we focus on binary dissimilarities, taking values 0 or 1 depending on whether or not two objects are deemed as dissimilar. We are interested in analyzing such data using multidimensional scaling (MDS). Differently from standard MDS algorithms, our goal is to cluster the dissimilarity matrices and, simultaneously, to extract an MDS configuration specific for each cluster. To this end, we develop a fully Bayesian three-way MDS approach, where the elements of each dissimilarity matrix are modeled as a mixture of Bernoulli random vectors. The parameter estimates and the MDS configurations are derived using a hybrid Metropolis-Gibbs Markov Chain Monte Carlo algorithm. We also propose a BIC-like criterion for jointly selecting the optimal number of clusters and latent space dimensions. We illustrate our approach referring both to synthetic data and to a publicly available data set taken from the literature. For the sake of efficiency, the core computations in the package are implemented in C/C++. The package also allows the simulation of multiple chains through the support of the parallel package.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Sergio Venturini https://www.jstatsoft.org/index.php/jss/article/view/v100i17 Informed Bayesian Inference for the A/B Test 2020-11-19T10:01:47+00:00 Quentin F. Gronau Quentin.F.Gronau@gmail.com Akash Raj K. N. akashrajkn@gmail.com Eric-Jan Wagenmakers ej.wagenmakers@gmail.com <p>Booming in business and a staple analysis in medical trials, the A/B test assesses the effect of an intervention or treatment by comparing its success rate with that of a control condition. Across many practical applications, it is desirable that (1) evidence can be obtained in favor of the null hypothesis that the treatment is ineffective; (2) evidence can be monitored as the data accumulate; (3) expert prior knowledge can be taken into account. Most existing approaches do not fulfill these desiderata. Here we describe a Bayesian A/B procedure based on Kass and Vaidyanathan (1992) that allows one to monitor the evidence for the hypotheses that the treatment has either a positive effect, a negative effect, or, crucially, no effect. Furthermore, this approach enables one to incorporate expert knowledge about the relative prior plausibility of the rival hypotheses and about the expected size of the effect, given that it is non-zero. To facilitate the wider adoption of this Bayesian procedure we developed the abtest package in R. 
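A hedged sketch of a basic abtest call on made-up conversion counts; the data values and the y/n list format (successes and totals for the control and experimental conditions) are assumptions based on our reading of the package interface, and default priors are used.

library(abtest)

# Made-up conversion counts, assumed for illustration:
# y = successes, n = total observations in control (1) and treatment (2)
ab_data <- list(y1 = 55, n1 = 1000, y2 = 80, n2 = 1000)

fit <- ab_test(data = ab_data)   # Bayes factors for no, positive, and negative effect
print(fit)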
We illustrate the package options and the associated statistical results with a fictitious business example and a real-data medical example.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Quentin F. Gronau, Akash Raj K. N., Eric-Jan Wagenmakers https://www.jstatsoft.org/index.php/jss/article/view/v100i18 BFpack: Flexible Bayes Factor Testing of Scientific Theories in R 2021-04-13T09:28:11+00:00 Joris Mulder j.mulder3@tilburguniversity.edu Donald R. Williams drwwilliams@ucdavis.edu Xin Gu guxin57@hotmail.com Andrew Tomarken andrew.j.tomarken@vanderbilt.edu Florian Böing-Messing F.Boeing-Messing@uvt.nl Anton Olsson-Collentine J.A.E.OlssonCollentine@uvt.nl Marlyne Meijerink-Bosman M.L.Meijerink@uvt.nl Janosch Menke j.menke@uu.nl Robbie van Aert R.C.M.vanAert@tilburguniversity.edu Jean-Paul Fox g.j.a.fox@utwente.nl Herbert Hoijtink h.hoijtink@uu.nl Yves Rosseel Yves.Rosseel@ugent.be Eric-Jan Wagenmakers ej.wagenmakers@gmail.com Caspar van Lissa C.J.vanLissa@uu.nl <p>There have been considerable methodological developments of Bayes factors for hypothesis testing in the social and behavioral sciences, and related fields. This development is due to the flexibility of the Bayes factor for testing multiple hypotheses simultaneously, the ability to test complex hypotheses involving equality as well as order constraints on the parameters of interest, and the interpretability of the outcome as the weight of evidence provided by the data in support of competing scientific theories. The available software tools for Bayesian hypothesis testing are still limited, however. In this paper we present a new R package called BFpack that contains functions for Bayes factor hypothesis testing for many common testing problems. The software includes novel tools for (i) Bayesian exploratory testing (e.g., zero vs positive vs negative effects) and (ii) Bayesian confirmatory testing (competing hypotheses with equality and/or order constraints), and (iii) covers common statistical analyses, such as linear regression, generalized linear models, (multivariate) analysis of (co)variance, correlation analysis, and random intercept models; the tests (iv) use default priors and (v) allow the data to contain observations that are missing at random.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Joris Mulder, Donald R. Williams, Xin Gu, Andrew Tomarken, Florian Böing-Messing, Anton Olsson-Collentine, Marlyne Meijerink, Janosch Menke, Robbie van Aert, Jean-Paul Fox, Herbert Hoijtink, Yves Rosseel, Eric-Jan Wagenmakers, Caspar van Lissa https://www.jstatsoft.org/index.php/jss/article/view/v100i19 Applying Meta-Analytic-Predictive Priors with the R Bayesian Evidence Synthesis Tools 2021-01-14T09:51:28+00:00 Sebastian Weber sebastian.weber@novartis.com Yue Li yue-1.li@novartis.com John W. Seaman III john.seaman@novartis.com Tomoyuki Kakizume tomoyuki.kakizume@novartis.com Heinz Schmidli heinz.schmidli@novartis.com <p>Use of historical data in clinical trial design and analysis has shown various advantages, such as a reduction in the number of subjects and an increase in study power. The meta-analytic-predictive (MAP) approach uses a hierarchical model to account for between-trial heterogeneity in order to derive an informative prior from historical data. In this paper, we introduce the package RBesT (R Bayesian evidence synthesis tools), which implements the MAP approach with normal (known sampling standard deviation), binomial and Poisson endpoints. The hierarchical MAP model is evaluated by Markov chain Monte Carlo (MCMC).
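A hedged sketch of deriving a MAP prior with RBesT from binomial historical control data and approximating it by a parametric mixture; the study counts, the half-normal heterogeneity prior, and the other settings are assumptions chosen for the example.

library(RBesT)

# Assumed historical control data: r responders out of n per study
hist_dat <- data.frame(study = c("A", "B", "C"),
                       r = c(14, 20, 8),
                       n = c(100, 120, 60))

set.seed(1)
map_mcmc <- gMAP(cbind(r, n - r) ~ 1 | study, data = hist_dat,
                 family = binomial,
                 tau.dist = "HalfNormal", tau.prior = 0.5,
                 beta.prior = 2)
map_prior <- automixfit(map_mcmc)   # EM-based parametric mixture approximation of the MAP prior
map_prior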
The MCMC samples representing the MAP prior are approximated with parametric mixture densities, which are obtained with the expectation-maximization algorithm. The parametric mixture density representation facilitates easy communication of the MAP prior and enables fast and accurate analytical procedures to evaluate properties of trial designs with informative MAP priors. The paper first introduces the framework of robust Bayesian evidence synthesis in this setting and then explains how RBesT facilitates the derivation and evaluation of an informative MAP prior from historical control data. In addition, we describe how the meta-analytic framework relates to further applications, including probability of success calculations.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Sebastian Weber, Yue Li, John W. Seaman III, Tomoyuki Kakizume, Heinz Schmidli https://www.jstatsoft.org/index.php/jss/article/view/v100i20 JointAI: Joint Analysis and Imputation of Incomplete Data in R 2020-11-17T10:13:49+00:00 Nicole S. Erler n.erler@erasmusmc.nl Dimitris Rizopoulos d.rizopoulos@erasmusmc.nl Emmanuel M. E. H. Lesaffre emmanuel.lesaffre@kuleuven.be <p>Missing data occur in many types of studies and typically complicate the analysis. Multiple imputation, either using joint modeling or the more flexible fully conditional specification approach, is popular and works well in standard settings. In settings involving nonlinear associations or interactions, however, incompatibility of the imputation model with the analysis model is an issue often resulting in bias. Similarly, complex outcomes such as longitudinal or survival outcomes cannot be adequately handled by standard implementations. In this paper, we introduce the R package JointAI, which utilizes the Bayesian framework to perform simultaneous analysis and imputation in regression models with incomplete covariates. Using a fully Bayesian joint modeling approach, it overcomes the issue of uncongeniality while retaining the attractive flexibility of fully conditional specification multiple imputation by specifying the joint distribution of analysis and imputation models as a sequence of univariate models that can be adapted to the type of variable. JointAI provides functions for Bayesian inference with generalized linear and generalized linear mixed models and extensions thereof, as well as survival models and joint models for longitudinal and survival data; these functions take arguments analogous to the corresponding well-known functions for the analysis of complete data from base R and other packages. Usage and features of JointAI are described and illustrated using various examples, and the theoretical background is outlined.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Nicole S. Erler, Dimitris Rizopoulos, Emmanuel M. E. H. Lesaffre https://www.jstatsoft.org/index.php/jss/article/view/v100i21 BayesCTDesign: An R Package for Bayesian Trial Design Using Historical Control Data 2020-09-05T02:34:19+00:00 Barry S. Eggleston beggleston@rti.org Joseph G. Ibrahim jibrahim@email.unc.edu Becky McNeil rmcneil@rti.org Diane Catellier dcatellier@rti.org <p>This article introduces the R package BayesCTDesign for two-arm randomized Bayesian trial design using historical control data when available, and for simple two-arm randomized Bayesian trial design when historical control data is not available.
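Referring back to the JointAI entry above, a hedged sketch of its lm_imp() interface for a linear regression with an incomplete covariate; the simulated data, the formula, and the iteration count are assumptions chosen for illustration.

library(JointAI)

# Simulated data with an incomplete covariate (assumed for illustration)
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- 1 + 0.5 * d$x1 - 0.3 * d$x2 + rnorm(200)
d$x2[sample(200, 40)] <- NA   # 20% of x2 set to missing

# Simultaneous analysis and imputation for the linear regression
fit <- lm_imp(y ~ x1 + x2, data = d, n.iter = 500)
summary(fit)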
The package BayesCTDesign, which is available from the Comprehensive R Archive Network, has two simulation functions, historic_sim() and simple_sim(), for studying trial characteristics under user-defined scenarios, and two methods, print() and plot(), for displaying summaries of the simulated trial characteristics. The package works with two-arm trials with equal sample sizes per arm and allows a user to study Gaussian, Poisson, Bernoulli, Weibull, lognormal, and piecewise exponential outcomes. Power for two-sided hypothesis tests at a user-defined α is estimated via simulation; within each simulation replication, a 95% credible interval for the outcome-specific treatment effect measure is compared to the null case value. If the 95% credible interval excludes the null case value, the null hypothesis is rejected; otherwise, it is accepted. In the article, the idea of including historical control data in a Bayesian analysis is reviewed, the estimation process of BayesCTDesign is explained, and the user interface is described. Finally, BayesCTDesign is illustrated via several examples.</p> 2021-11-30T00:00:00+00:00 Copyright (c) 2021 Barry S. Eggleston, Joseph G. Ibrahim, Becky McNeil, Diane Catellier
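The credible-interval decision rule described above can be sketched in a few lines of base R. The following is a hedged toy illustration using conjugate Beta-Bernoulli posteriors, with made-up response rates, sample size, and priors; it is not a call to BayesCTDesign itself.

# Toy power estimate for a two-arm Bernoulli trial, mimicking the credible-interval
# test described above (assumed rates, sample size, and flat Beta(1, 1) priors)
set.seed(1)
power_sim <- function(p_ctrl = 0.30, p_trt = 0.45, n_per_arm = 100,
                      reps = 1000, alpha = 0.05) {
  reject <- logical(reps)
  for (i in seq_len(reps)) {
    y_c <- rbinom(1, n_per_arm, p_ctrl)
    y_t <- rbinom(1, n_per_arm, p_trt)
    # Posterior draws of the risk difference under Beta(1, 1) priors
    diff <- rbeta(4000, y_t + 1, n_per_arm - y_t + 1) -
            rbeta(4000, y_c + 1, n_per_arm - y_c + 1)
    ci <- quantile(diff, c(alpha / 2, 1 - alpha / 2))
    reject[i] <- ci[1] > 0 | ci[2] < 0   # credible interval excludes the null value 0
  }
  mean(reject)   # estimated power
}
power_sim()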