synthpop : Bespoke Creation of Synthetic Data in R

In many contexts, confidentiality constraints severely restrict access to unique and valuable microdata. Synthetic data which mimic the original observed data and preserve the relationships between variables but do not contain any disclosive records are one possible solution to this problem. The synthpop package for R, introduced in this paper, provides routines to generate synthetic versions of original data sets. We describe the methodology and its consequences for the data characteristics. We illustrate the package features using a survey data example.


Synthetic data for disclosure control
National statistical agencies and other institutions gather large amounts of information about individuals and organisations. Such data can be used to understand population processes so as to inform policy and planning. The cost of such data can be considerable, both for the collectors and the subjects who provide their data. Because of confidentiality constraints and guarantees issued to data subjects, full access to such data is often restricted to the staff of the collection agencies. Traditionally, data collectors have used anonymisation along with simple perturbation methods such as aggregation, recoding, record-swapping, suppression of sensitive values or adding random noise to prevent the identification of data subjects. Advances in computer technology have shown that such measures may not prevent disclosure (Ohm 2010) and in addition they may compromise the conclusions one can draw from such data (Elliot and Purdam 2007; Winkler 2007).
In response to these limitations there have been several initiatives, most of them centred around the U.S. Census Bureau, to generate synthetic data which can be released to users outside the setting where the original data are held. The basic idea of synthetic data is to replace some or all of the observed values by sampling from appropriate probability distributions so that the essential statistical features of the original data are preserved. The approach has been developed along lines similar to recent practical experience with multiple imputation methods, although synthesis is not the same as imputation. Imputation replaces data which are missing with modelled values and adjusts the inference for the additional uncertainty due to this process. For synthesis, when some data are missing, two approaches are possible: one is to impute the missing values prior to synthesis, and the other is to synthesise the observed patterns of missing data without estimating the missing values.
In both cases all data to be synthesised are treated as known and they are used to create the synthetic data, which are then used for inference. The data collection agency generates multiple synthetic data sets and inferences are obtained by combining the results of models fitted to each of them. The formulae for the variance of estimates from synthetic data are different from those used for imputed data.
The synthetic data methods were first proposed by Rubin (1993) and Little (1993) and have been developed by Raghunathan, Reiter, and Rubin (2003), Reiter (2003) and Reiter and Raghunathan (2007). They have been discussed and exemplified in a further series of papers (Abowd and Lane 2004; Abowd and Woodcock 2004; Reiter 2002, 2005a; Drechsler and Reiter 2010; Kinney, Reiter, and Berger 2010; Kinney, Reiter, Reznek, Miranda, Jarmin, and Abowd 2011). Non-parametric synthesising methods were introduced by Reiter (2005b), who first suggested using classification and regression trees (CART; Breiman, Friedman, Olshen, and Stone 1984) to generate synthetic data. CART was then compared with more powerful machine learning procedures such as random forests, bagging and support vector machines (Caiola and Reiter 2010; Drechsler and Reiter 2011). The monograph by Drechsler (2011) summarises some of the theoretical, practical and policy developments and provides an excellent introduction to synthetic data for those new to the field.
The original aim of producing synthetic data has been to provide publicly available datasets that can be used for inference in place of the actual data. However, such inferences will only be valid if the model used to construct the synthetic data is the true mechanism that has generated the observed data, which is very difficult, if at all possible, to achieve. Our aim in writing the synthpop package (Nowok, Raab, Snoke, and Dibben 2016) for R (R Core Team 2016) is the more modest one of providing test data for users of confidential datasets. Note that currently all values of variables chosen for synthesis are replaced, but this will be relaxed in future versions of the package. These test data should resemble the actual data as closely as possible, but would never be used in any final analyses. The users carry out exploratory analyses and test models on the synthetic data, but they, or perhaps staff of the data collection agencies, would use the code developed on the synthetic data to run their final analyses on the original data. This approach recognises the limitations of synthetic data produced by these methods. It is interesting to note that a similar approach is currently being used for both of the synthetic products made available by the U.S. Census Bureau (see https://www.census.gov/ces/dataproducts/synlbd/ and http://www.census.gov/programs-surveys/sipp/guidance/sipp-synthetic-beta-data-product.html), where results obtained from the synthetic data are validated on the original data ("gold standard files").

Motivation for the development of synthpop
The England and Wales Longitudinal Study (ONS LS; Hattersley and Cresser 1995), the Scottish Longitudinal Study (SLS; Boyle, Feijten, Feng, Hattersley, Huang, Nolan, and Raab 2012) and the Northern Ireland Longitudinal Study (NILS; O'Reilly, Rosato, Catney, Johnston, and Brolly 2011) are rich micro-datasets linking samples from the national census in each country to administrative data (births, deaths, marriages, cancer registrations and other sources) for individuals and their immediate families across several decades. Whilst unique and valuable resources, the sensitive nature of the information they contain means that access to the microdata is restricted to approved researchers and longitudinal study (LS) support staff, who can only view and work with the data in safe settings controlled by the national statistical agencies. Consequently, compared to other census data products such as the aggregate statistics or samples of anonymised records, the three longitudinal studies (LSs) are used by a small number of researchers, a situation which limits their potential impact. Given that confidentiality constraints and legal restrictions mean that open access is not possible with the original microdata, alternative options are needed to allow academics and other users to carry out their research more freely. To address this, the SYLLS (Synthetic Data Estimation for UK Longitudinal Studies) project (see http://www.lscs.ac.uk/projects/synthetic-data-estimation-for-uk-longitudinal-studies/) has been funded by the Economic and Social Research Council to develop techniques to produce synthetic data which mimic the observed data and preserve the relationships between variables and transitions of individuals over time, but can be made available to accredited researchers to analyse on their own computers. The synthpop package for R has been written as part of the SYLLS project to allow LS support staff to produce synthetic data for users of the LSs that are tailored to the needs of each individual user. Hereinafter, we will use the term "synthesiser" for someone like an LS support officer who is producing the synthetic data from the observed data and hence has access to both. The term "analyst" will refer to someone like an LS user who has no access to the observed data and will be using the synthetic data for exploratory analyses. After the exploratory analysis the analyst will develop confirmatory models and can send the code to a synthesiser to run the gold standard analyses. As well as providing routines to generate the synthetic data, the synthpop package contains routines that can be used by the analyst to summarise synthetic data and models fitted to synthetic data, and routines that can be used by the synthesiser to compare gold standard analyses with those from the synthetic data.
Although primarily targeted at the data from the LSs, the synthpop package is written in a form that makes it applicable to other confidential data where the resource of synthetic data would be valuable. By providing a comprehensive and flexible framework with parametric and non-parametric methods, it fills a gap in tools for generating synthetic versions of original data sets. The R package simPop (Meindl, Templ, Alfons, and Kowarik 2016), which is a successor to the simPopulation package (Alfons, Kraft, Templ, and Filzmoser 2011; Alfons and Kraft 2013), implements model-based methods to simulate synthetic populations based on household survey data and auxiliary information. The approach used concentrates on simulation of close-to-reality populations and is similar to microsimulation rather than multiple imputation. The software IVEware for SAS (SAS Institute Inc. 2013) and its stand-alone version SRCware (Raghunathan, Solenberger, and Van Hoewyk 2002; Survey Methodology Program 2011), originally developed for multiple imputation, include the SYNTHESIZE module that makes it possible to produce synthetic data. IVEware uses conditionally specified parametric models with proper imputation, and these can be adjusted for clustered, weighted or stratified samples. All item missing values are imputed when generating synthetic data sets. No analysis methods are available in this software because only the formulae for imputation are available, which are not appropriate for synthetic data.

Structure of this paper
The structure of this paper is as follows. The next section introduces the notation, terminology and the main theoretical results needed for the simplest and, we expect, the most common use of the package. More details of the theoretical results for the general case can be found in Raab, Nowok, and Dibben (2016). Readers not interested in the theoretical details can proceed directly to Section 3, which presents the package and its basic functionality. Section 4 provides some illustrative examples. The concluding Section 5 indicates directions for future developments.

Overview of method
Observed data from a survey or a sample from a census or population register are available to the synthesiser. They consist of a sample of n units (x_obs, y_obs), where x_obs, which may be null, is a matrix of data that can be released unchanged to the analyst, and y_obs is an n × p matrix of p variables that require to be synthesised. We consider here the simple case when the synthetic data sets (syntheses) will each have the same number of records as the original data and the method of generating the synthetic sample (e.g., simple random sampling or a complex sample design) matches that of the observed data. This condition allows inferences to be made from synthetic data generated from distributions with parameters fitted to the observed data, without sampling the parameters from their posterior distributions. We refer to such synthesis as "simple synthesis". When synthetic data are generated from distributions with parameters sampled from their posterior distributions we refer to this as "proper synthesis".

Generating synthetic data
The observed data are assumed to be a sample from a population with parameters that can be estimated by the synthesiser; specifically, y_obs is assumed to be a sample from f(Y | x_obs, θ), where θ is a vector of parameters. This could be a hypothetical infinite super-population or a finite population which is large enough for finite population corrections to be ignored. The synthesiser fits the data to the assumed distribution and obtains estimates of its parameters. In most implementations of synthetic data generation, including synthpop, the joint distribution is defined in terms of a series of conditional distributions. A column of y_obs is selected and the distribution of this variable, conditional on x_obs, is estimated. Then the next column is selected and its distribution is estimated conditional on x_obs and the column of y_obs already selected. The distributions of subsequent columns of y_obs are estimated conditional on x_obs and all previous columns of y_obs.
The generation of the synthetic data sets proceeds in parallel with the fitting of each conditional distribution. Each column of the synthetic data is generated from the assumed distribution, conditional on x_obs, the fitted parameters of the conditional distribution (simple synthesis) and the synthesised values of all the previous columns of y_obs. Alternatively, the synthetic values can be generated from the posterior distribution of the parameters (proper synthesis). In both cases, a total of m synthetic data sets are generated.

Inference from the synthetic data
An analyst who wants to estimate a model from the synthetic data will fit the model to each of the m synthetic data sets, obtaining estimates (β̂_1, ..., β̂_m) of its vector of parameters β. If the model for the data is correct, the m estimates from the synthetic data will be centred around the estimate β̂ that would have been obtained from the observed data. We assume that the goal of the analyst is to use the synthetic data to estimate β̂ and its variance-covariance matrix V_β. If the method of inference used to fit the model provides consistent estimates of the parameters, and the same is true for analyses of the synthetic data, then the mean of the m synthetic estimates, β̄ = Σ_i β̂_i / m, provides a consistent estimate of β̂. Provided the observed and synthetic data are generated by a common sampling scheme, the variance-covariance matrix of β̄, conditional on β̂ and V_β, becomes V_β / m, which can be estimated by V̂_β / m. Thus the stochastic error in the mean of the synthetic estimates about the values from the observed data can be reduced to a negligible quantity by increasing m.
It must be remembered, however, that the consistency of β̄ only applies when the observed data are a sample from the distribution used for synthesis. In practical applications, differences between the analyses on the observed data and those from the mean of the syntheses will be found because the data do not conform to the model used for synthesis. Such differences will not be reduced by increasing m. The synthesiser, with access to the observed data, can estimate β̄ − β̂ and compare it to its standard error in order to judge the extent to which this model mismatch affects the estimates.
Note that this result is different from the literature cited above, which aims to use the results of the synthetic data to make inference about the population from which the original gold standard data have been generated. But our aim, in the simplest case we describe above, is only to make inferences to the results that would have been obtained by the gold standard analysis, with the expectation that the analyst will run final models on the observed data. Also, unlike most of the literature above, in the simplest case we do not sample from the predictive distribution of the parameters to create the synthetic data, but an option to do so is available in synthpop. This approach was proposed recently by Reiter and Kinney (2012) for partially synthetic data. The justification for this approach for completely synthesised data is given in Raab et al. (2016), along with the details of how the synthpop package can be used to make inferences to the population.

Obtaining the software
The synthpop package is an add-on package to the statistical software R. It is freely available from the Comprehensive R Archive Network (CRAN) at http://CRAN.R-project.org/package=synthpop. It utilises the structure and some functions of the mice multiple imputation package (Van Buuren and Groothuis-Oudshoorn 2011) but adapts and extends them for the specific purpose of generating and analysing synthetic data.

Basic functionality
The synthpop package aims to provide the user with an easy way of generating synthetic versions of original observed data sets. Via the function syn() a synthetic data set is produced using a single command. The only required argument is data, which is a data frame or a matrix containing the data to be synthesised. By default, a single synthetic data set is produced using simple synthesis, i.e., without sampling from the posterior distribution of the parameters of the synthesising models. Multiple data sets can be obtained by setting the parameter m to a desired number. Proper synthesis, with synthetic data sampled from the posterior predictive distribution of the observed data, is conducted when the argument proper is set to TRUE. Data synthesis can be further customised with other optional parameters. Below, we only present the salient features of the syn() function. See the examples in Section 4 and the R documentation for the function syn() for more details (command ?syn at the R console).

Choice of synthesising method
The synthesising models are defined by a parameter method, which can be a single string or a vector of strings. Providing a single method name assumes the same synthesising method for each variable, unless a variable's data type precludes it. Note that a variable to be synthesised first that has no predictors is a special case, and its synthetic values are by default generated by random sampling with replacement from the original data ("sample" method). In general, a user can choose between parametric and non-parametric methods. The latter are based on classification and regression trees (CART) that can handle any type of data. By default the "cart" method is used for all variables that have predictors. It utilises the function rpart() available in package rpart (Therneau, Atkinson, and Ripley 2015). An alternative implementation of the CART technique from package party (Hothorn, Hornik, and Zeileis 2006) can be used by selecting the "ctree" method. Setting the parameter method to "parametric" assigns default parametric methods to the variables to be synthesised based on their types. The default parametric methods for the numeric, binary, unordered factor and ordered factor data types are specified in the vector default.method, which may be customised if desired. Alternatively, a method can be chosen out of the available methods for each variable separately. The methods currently implemented are listed in Table 1. Their default settings can be modified via additional parameters of the syn() function that have to be named using a period-separated method and parameter name (method.parameter). For instance, in order to set minbucket (the minimum number of observations in any terminal node of a CART model) for the "cart" synthesising method, cart.minbucket has to be specified. These arguments are method-specific and are used for all variables to be synthesised using that method. For variables to be left unchanged an empty method ("") should be used. A new synthesising method can be easily introduced by writing a function named syn.newmethod() and then specifying the method parameter of syn() as "newmethod".
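As a minimal sketch of these options (assuming a data frame ods like the one used in the examples of Section 4; the minbucket value is an arbitrary illustration, not a package default):

```r
library(synthpop)
ods <- SD2011[, c("sex", "age", "edu", "marital", "income", "ls", "wkabint")]

# Default parametric methods assigned by variable type
sds.param <- syn(ods, method = "parametric", seed = 17914709)

# CART for all variables with predictors, with a method-specific
# parameter passed as method.parameter (here cart.minbucket)
sds.cart <- syn(ods, method = "cart", cart.minbucket = 10, seed = 17914709)
```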

Controlling the predictions
The synthetic values of the variables are generated sequentially from their conditional distributions given the variables already synthesised, with parameters from the same distributions fitted to the observed data. Next to choosing model types, a user may determine the order in which variables should be synthesised (visit.sequence parameter) and also the set of variables to include as predictors in the synthesising model (predictor.matrix parameter). As mentioned above, the choice of explanatory variables is restricted by the synthesis sequence, and variables that have not yet been synthesised cannot be used in prediction models. It is possible, however, to include predictor variables in the synthesis that will not be synthesised themselves.
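A short sketch of inspecting and editing these settings (the variable names assume the seven-variable example data set used later; the particular predictor removed is an arbitrary illustration):

```r
library(synthpop)
ods <- SD2011[, c("sex", "age", "edu", "marital", "income", "ls", "wkabint")]

# A "dry run" with m = 0 returns the default settings without synthesising
sds.dry <- syn(ods, m = 0)
pred <- sds.dry$predictor.matrix

# Remove marital as a predictor of income, then synthesise with the
# customised predictor selection matrix
pred["income", "marital"] <- 0
sds.pred <- syn(ods, predictor.matrix = pred, seed = 17914709)
```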

Handling data with missing or restricted values
The aim of producing a synthetic version of observed data here is to mimic their characteristics in all possible ways, which may include missing and restricted values. Values representing missing data in categorical variables are treated as additional categories and reproducing them is straightforward. Continuous variables with missing data are modelled in two steps. In the first step, we synthesise an auxiliary binary variable specifying whether a value is missing or not. Depending on the method specified by the user for the original variable, a logit or CART model is used for synthesis. If there are different types of missing values, an auxiliary categorical variable is created to reflect this and an appropriate model is used for synthesis (a polytomous or CART model). In the second step, a synthesising model is fitted to the non-missing values of the original variable and then used to generate synthetic values for the records in the non-missing category of the auxiliary variable. The auxiliary variable and a variable with the non-missing values and zeros for the remaining records are used instead of the original variable for the prediction of other variables. The missing data codes have to be specified by the user in the cont.na parameter of the syn() function if they differ from the R missing data code NA. The cont.na argument has to be provided as a named list with the names of its elements corresponding to the names of the variables for which the missing data codes need to be specified.
Restricted values are those where the values for some cases are determined explicitly by those of other variables. In such cases the rules and the corresponding values should be specified using the rules and rvalues parameters. They are supplied in the form of named lists in the same manner as the missing data codes parameter. The variables used in rules have to be synthesised prior to the variable they refer to. In the synthesis process the restricted values are assigned first and then only the records with unrestricted values are synthesised.

Disclosure control
Completely synthesised data such as those generated by the syn() function with default settings do not by definition include real units, so disclosure of a real person is acknowledged to be unlikely. This has been confirmed by Elliot (2015) in his report on the disclosure risk associated with synthetic data produced using the synthpop package. Nonetheless, there are some options that are designed to further protect the data and limit the perceived disclosure.
For the CART models ("ctree" or "cart" method), the final leaves to be sampled from may include only a very small number of individuals, which elevates the risk of replicating real persons. To avoid this, a user can specify, for instance, a minimum size of a final node that a CART model can produce. This can be done using the cart.minbucket and ctree.minbucket parameters for the "cart" and "ctree" methods respectively. However, the right balance needs to be found between disclosure risk and synthetic data quality. For the "ctree", "cart", "normrank" and "sample" methods there is also the risk of releasing real unusual values of continuous variables, and therefore use of a smoothing option is essential for protecting confidentiality. If the smoothing parameter is set to "density", Gaussian kernel density smoothing is applied to the synthesised values.
There are also additional precautionary options built into the package, which can be applied using the sdc() function (sdc stands for statistical disclosure control). The function allows top and bottom coding; adding labels to the synthetic data sets to make it clear that the data are fake, so that no one mistakenly believes them to be real; and removing from the synthetic data set any unique cases with variable sequences that are identical to those of unique individuals in the real data set. The last tool reduces the chances of a person who is in the real data believing that their actual data are in the synthetic data.
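A hedged sketch of applying these precautions to a previously generated synthesis (the label text is an arbitrary choice; sds.default and ods are assumed to exist as in the examples of Section 4):

```r
library(synthpop)
ods <- SD2011[, c("sex", "age", "edu", "marital", "income", "ls", "wkabint")]
sds.default <- syn(ods, seed = 17914709)

# Post-process the synthesis: stamp every record as synthetic and drop
# any unique synthetic cases that replicate unique real individuals
sds.safe <- sdc(sds.default, ods,
                label = "FAKE_DATA",
                rm.replicated.uniques = TRUE)
```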

Data
The synthpop package includes a data frame SD2011 with individual microdata that will be used for illustration. The data set is a subset of survey data collected in 2011 within the Social Diagnosis project (Council for Social Monitoring 2011), which aims to investigate objective and subjective quality of life in Poland. The complete data set is freely available at http://www.diagnoza.com/index-en.html along with detailed documentation. The SD2011 subset contains 35 selected variables of various types for a sample of 5,000 individuals aged 16 and over.

Simple example
To get access to synthpop functions and the SD2011 data set we need to load the package via

R> library("synthpop")
For our illustrative examples of the syn() function we use seven variables of various data types which are listed in Table 2.
Although the function syn() allows synthesis of a subset of variables (see Section 4.3), for ease of presentation here we extract the variables of interest from the SD2011 data set and store them in a data frame called ods, which stands for "observed data set". The structure of the ods data can be investigated using the head() function, which prints the first rows of a data frame.
R> vars <- c("sex", "age", "edu", "marital", "income", "ls", "wkabint")
R> ods <- SD2011[, vars]

To run a default synthesis only the data to be synthesised have to be provided as a function argument. Here, an additional parameter seed is used to fix the pseudo random number generator seed and make the results reproducible. To monitor the progress of the synthesising process the function syn() prints to the console the current synthesis number and the name of the variable that is being synthesised. This output can be suppressed by setting the argument print.flag to FALSE.
R> my.seed <- 17914709
R> sds.default <- syn(ods, seed = my.seed)

The resulting object of class 'synds', called here sds.default, where sds stands for "synthesised data set", is a list. The print method displays its selected components (see below). An element syn contains a synthesised data set which can be accessed using standard list referencing (sds.default$syn).

R> sds.default
The remaining (undisplayed) list elements include other syn() function parameters used in the synthesis. Their names can be listed via the names() function. For a complete description see the syn() function help page (?syn).
The default predictor selection matrix (predictor.matrix) is defined by the visit sequence.All variables that are earlier in the visit sequence are used as predictors.A value of 1 in a predictor selection matrix means that the column variable is used as a predictor for the target variable in the row.Since the order of variables is exactly the same as in the original data, for the default visit sequence the default predictor selection matrix has values of 1 in the lower triangle.
Synthesising data with the default parametric methods is run with the methods listed below. Values of the other syn() arguments remain the same as for the default synthesis.

Extended example
To extend the simple example presented in Section 4.2 we change the order of synthesis, synthesise only selected variables, customise selection of predictors, handle missing values in a continuous variable and apply some rules that a variable has to follow.

Sequence and scope of synthesis
The default algorithm synthesises variables in columns from left to right; this can be changed via the visit.sequence argument. The vector visit.sequence should include indices of columns in the order desired by the user. Alternatively, names of variables can be used. If we do not want to synthesise some variables we can exclude them from the visit sequence. By default those variables are not used to predict other variables, but they are saved in the synthesised data. In order to remove their original values from the resulting synthetic data sets the argument drop.not.used has to be set to TRUE. To synthesise the variables sex, age, ls, marital and edu in this order we run the syn() function with the following specification

R> sds.selection <- syn(ods, visit.sequence = c(1, 2, 6, 4, 3))

Note that a user-defined method vector (setting a method for each variable separately) and a specified predictor.matrix both have to include information for all variables present in the original observed data set, regardless of whether they are in visit.sequence or not. This allows changes in visit.sequence without adjustments to other arguments. For variables not to be synthesised but still to be used as predictors, which needs to be reflected in the predictor.matrix, an empty method ("") should be set. By default the original observed values of those variables are included in the synthesised data sets, but this can be changed using the argument drop.pred.only.

Selection of predictors
The most important rule when selecting predictors is that independent variables in a prediction model have to be already synthesised. The only exception is when a variable is used only as a predictor and is not going to be synthesised at all. Assume we want to synthesise all variables except wkabint and:

• exclude life satisfaction (ls) from the predictors of marital status (marital);
• use monthly income (income) as a predictor of life satisfaction (ls), education (edu) and marital status (marital) but do not synthesise the income variable itself;
• use polytomous logistic regression (polyreg) to generate marital status (marital) instead of the default ctree method.
In order to build an adequate predictor selection matrix, instead of doing it from scratch we can define an initial visit.sequence and corresponding method vector and run the syn() function with the parameter drop.not.used set to FALSE (otherwise method and predictor.matrix will miss information on wkabint), the parameter m indicating the number of syntheses set to zero, and other arguments left as defaults. Then we can adjust the predictor selection matrix used in this synthesis and rerun the function with the new parameters.
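The two-step approach just described can be sketched as follows. This is a reconstruction under stated assumptions, not the paper's verbatim code: the column positions in visit.sequence.ini and the exact method vector are illustrative, and the particular predictor.matrix edit implements the first bullet above.

```r
library(synthpop)
ods <- SD2011[, c("sex", "age", "edu", "marital", "income", "ls", "wkabint")]

# Step 1: dry run (m = 0) to obtain the default settings, keeping all
# variables so method and predictor.matrix cover every column.
# income ("") is used as a predictor only; wkabint ("") is left out entirely.
visit.sequence.ini <- c(1, 2, 5, 6, 4, 3)
method.ini <- c("sample", "ctree", "ctree", "polyreg", "", "ctree", "")
sds.ini <- syn(ods, visit.sequence = visit.sequence.ini,
               method = method.ini, m = 0, drop.not.used = FALSE)

# Step 2: adjust the predictor selection matrix and rerun
pred <- sds.ini$predictor.matrix
pred["marital", "ls"] <- 0   # exclude ls from the predictors of marital
sds.final <- syn(ods, visit.sequence = visit.sequence.ini,
                 method = method.ini, predictor.matrix = pred,
                 seed = 17914709)
```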

Handling missing values in continuous variables
Data can be missing for a number of reasons (e.g., refusal, inapplicability, lack of knowledge) and multiple missing data codes are used to represent this variety. By default, numeric missing data codes for a continuous variable are treated as non-missing values. This may lead to erroneous synthetic values, especially when standard parametric models are used or when synthetic values are smoothed to decrease disclosure risk. The problem affects not only the variable in question, but also variables predicted from it. The parameter cont.na of the syn() function allows missing-data codes to be defined for continuous variables so that they can be modelled separately (see Section 3.2). In our simple example the continuous variable income has two types of missing values (NA and -8) and they should be provided in a list element named "income". The following code shows the recommended settings for the synthesis of the income variable, which include smoothing and separate synthesis of missing values

R> sds.income <- syn(ods, cont.na = list(income = c(NA, -8)),
+                    smoothing = list(income = "density"), seed = NA)

Rules for restricted values
To illustrate the application of rules for restricted values, consider marital status. According to Polish law, males have to be at least 18 to get married.
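A hedged sketch of encoding this restriction (the rule string follows the documented rules/rvalues interface; the category label "SINGLE" is an assumption about the levels of the marital factor in SD2011):

```r
library(synthpop)
ods <- SD2011[, c("sex", "age", "edu", "marital", "income", "ls", "wkabint")]

# Check how the restriction holds in the observed data
table(subset(ods, age < 18 & sex == "MALE")$marital)

# Force marital status to SINGLE for males under 18 in the synthetic data;
# age and sex must be synthesised before marital for the rule to apply
sds.rmarital <- syn(ods,
                    rules   = list(marital = "age < 18 & sex == 'MALE'"),
                    rvalues = list(marital = "SINGLE"),
                    seed    = 17914709)
```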

Synthetic data analysis
Ideally, if the models used for synthesis truly represent the process that generated the original observed data, an analysis based on the synthesised data should lead to the same statistical inferences as an analysis based on the actual data. For illustration we estimate here a simple logistic regression model where our dependent variable is the probability of intention to work abroad. We use the wkabint variable, which specifies the intention of work migration, but we adjust it to disregard the destination country group. Besides, we recode the current missing data code of the variable income (-8) into the R missing data code NA.
R> sds <- syn(ods, method = "ctree", m = 5, seed = my.seed)

Before running the models let us compare some descriptive statistics of the observed and synthetic data sets. A very useful function in R for this purpose is summary(). When a data frame is provided as an argument, here our original data set ods, it produces summary statistics for each variable. The summary() function with the synds object as an argument gives summary statistics of the variables in the synthesised data set. If more than one synthetic data set has been generated, by default summaries are calculated by averaging the summary values over all synthetic data copies.

R> compare(sds, ods, vars = "income")
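The descriptive comparison described above can be sketched as follows (a minimal illustration, assuming the objects ods and sds from the call above):

```r
# Summary statistics of the original data, variable by variable
summary(ods)

# Averaged summary statistics across the m = 5 synthetic data sets
summary(sds)

# Summary of a single synthetic data set, selected with msel
summary(sds, msel = 2)
```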

Comparing percentages observed with synthetic

Selected utility measures:
            pMSE  S_pMSE  df
income   8.8e-05   1.416   5

An argument msel can be used to compare the observed data with single or multiple individual synthetic data sets, as illustrated below and in Figure 2 for the life satisfaction factor variable (ls). The summary() function applied to a 'fit.synds' object can be used by the analyst to combine estimates based on all the synthesised data sets. By default inference is made to original data quantities. In order to make inference to population quantities the parameter population.inference has to be set to TRUE. The function's result provides point estimates of coefficients (B.syn), their standard errors (se(B.syn)) and Z scores (Z.syn) for population and observed data quantities respectively. For inference to original data quantities it contains in addition estimates of the actual standard errors based on synthetic data (se(Beta).syn) and standard errors of Z scores (se(Z.syn)). Note that not all these quantities are printed automatically.
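As a hedged sketch of the two inference targets, assuming fit.sds is a 'fit.synds' object previously returned by glm.synds():

```r
# Combined estimates with inference to original data quantities (the default)
summary(fit.sds)

# Inference to population quantities instead
summary(fit.sds, population.inference = TRUE)
```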
The mean of the estimates from each of the m synthetic data sets yields unbiased estimates of the coefficients if the data conform to the model used for synthesis. The variance is estimated differently depending on whether inference is made to the original data quantities or to the population parameters, and on whether the synthetic data were produced using simple or proper synthesis (for details see Raab et al. 2016). The function compare() allows the synthesiser to compare the estimates based on the synthesised data sets with those based on the original data, and presents the results in both tabular and graphical form (see Figure 3).
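A minimal sketch of this model-level comparison, assuming the objects ods and sds from the earlier synthesis; the model formula is illustrative rather than the paper's exact specification:

```r
# Fit the logistic regression to each of the m synthetic data sets
fit.sds <- glm.synds(wkabint ~ sex + age + edu,
                     family = "binomial", data = sds)

# Compare the combined synthetic-data estimates with those obtained
# from the same model fitted to the original data
compare(fit.sds, ods)
```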

Concluding remarks
In this paper we presented the basic functionality of the R package synthpop for generating synthetic versions of microdata containing confidential information, so that they are safe to be released to users for exploratory analysis. Interested readers can consult the package documentation for additional features currently implemented, which can be used to influence the disclosure risk and the utility of the synthesised data. Note that synthpop is under continual development and future versions will include, among other things, appropriate procedures for synthesising multiple-event data, conducting stratified synthesis and replacing only selected cases from selected variables. The ultimate aim of synthpop is to provide a comprehensive, flexible and easy-to-use tool for generating bespoke synthetic data that can be safely released to interested data users. Since there are many different options for synthesising data, developing general guidelines for best practice remains an open issue to be addressed in our future research.

Figure 1: Relative frequency distribution of non-missing values and missing data categories for the income variable for observed and synthetic data.

Figure 2: Relative frequency distribution of life satisfaction (ls) for observed and synthetic data.

Table 2: Variables to be synthesised.
An appropriate predictor matrix is created automatically. To avoid having to alter other parameters when the visit sequence is changed, and to ensure the synthetic data have the same structure as the original data, the variables in sds.selection$predictor.matrix are arranged in the same order as in the original data. The same applies to sds.selection$method and the synthesised data set sds.selection$syn. As noted above, if the parameter drop.not.used is set to TRUE and there are variables that are not used in the synthesis, they are not included in the output. In this case the column indices in the visit sequence, which refer to the synthetic data columns, may differ from those in the original data.
Thus, in our synthesised data set all male individuals younger than 18 should have marital status SINGLE, which is the case in the observed data set. Running the synthesis without rules gives incorrect results, with some of the males under 18 classified as MARRIED (see the summary output table below).
Returning to the logistic regression model for wkabint, we estimate the original data model using the glm() function for generalised linear models in R. The synthpop function glm.synds() is an equivalent function for estimating the model for each of the m synthesised data sets. A similar function, lm.synds(), is available for a standard linear regression model. The outcome of the glm.synds() and lm.synds() functions is an object of class 'fit.synds'. If m > 1, printing a 'fit.synds' object gives the combined (average) coefficient estimates. Results for coefficient estimates based on individual synthetic data sets can be displayed using the msel argument of the print method. Note: to get more details of the fit see the vignette on inference.
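A hedged sketch of the parallel original-data and synthetic-data fits; the model formula is illustrative, assuming wkabint has been recoded to a binary factor as described above:

```r
# Model estimated from the original data with R's glm()
fit.ods <- glm(wkabint ~ sex + age + edu,
               family = "binomial", data = ods)

# The same model estimated from each of the m synthetic data sets
fit.sds <- glm.synds(wkabint ~ sex + age + edu,
                     family = "binomial", data = sds)

# Combined (average) coefficient estimates across the syntheses
print(fit.sds)

# Estimates from the individual synthetic data sets
print(fit.sds, msel = 1:5)
```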
The expressions used to calculate the variance for the different cases are presented in Table 1. By default a simple synthesis is conducted and inference is made to original data quantities; the header of the summary output then reads: "Fit to synthetic data set with 5 syntheses. Inference to coefficients and standard errors that would be obtained from the original data."