nprobust: Nonparametric Kernel-Based Estimation and Robust Bias-Corrected Inference

Nonparametric kernel density and local polynomial regression estimators are very popular in Statistics, Economics, and many other disciplines. They are routinely employed in applied work, either as part of the main empirical analysis or as a preliminary ingredient entering some other estimation or inference procedure. This article describes the main methodological and numerical features of the software package nprobust, which offers an array of estimation and inference procedures for nonparametric kernel-based density and local polynomial regression methods, implemented in both the R and Stata statistical platforms. The package includes not only classical bandwidth selection, estimation, and inference methods (Wand and Jones, 1995; Fan and Gijbels, 1996), but also other recent developments in the statistics and econometrics literatures such as robust bias-corrected inference and coverage error optimal bandwidth selection (Calonico, Cattaneo and Farrell, 2018, 2019). Furthermore, this article also proposes a simple way of estimating optimal bandwidths in practice that always delivers the optimal mean square error convergence rate regardless of the specific evaluation point, that is, no matter whether it is implemented at a boundary or interior point. Numerical performance is illustrated using an empirical application and simulated data, where a detailed numerical comparison with other R packages is given.


Introduction
Nonparametric kernel-based density and local polynomial regression methods are popular in Statistics, Economics, and many other disciplines, as they are routinely employed in empirical work. Correct implementation of these nonparametric procedures, however, often requires subtle (but important) methodological and numerical decisions. In this article, we give a comprehensive discussion of the software package nprobust, which is available for both R and Stata, and offers modern statistical methods for bandwidth selection, estimation, and inference in the context of kernel density and local polynomial regression fitting. Furthermore, this article also presents a novel bandwidth selection approach, which always delivers the optimal rate of convergence no matter the evaluation point under consideration (i.e., the bandwidth methodology is "boundary rate adaptive"), and therefore is the default method used in the package. All the methods are fully data-driven and are implemented paying particular attention to finite sample properties as well as to recent theoretical developments in the statistics and econometrics literatures.
The package nprobust implements three types of statistical procedures for both density and local polynomial kernel smoothing. First, it provides several data-driven optimal bandwidth selection methods, specifically tailored to both (i) point estimation, based on both mean square error (MSE) and integrated MSE (IMSE) approximations, and (ii) confidence interval estimation or, equivalently, two-sided hypothesis testing, based on coverage error (CE) approximations via valid higher-order Edgeworth expansions. Second, the package implements point and variance estimators for density estimation at an interior point, and regression function estimation at both interior and boundary points. Finally, the package offers conventional and robust bias-corrected inference for all settings considered: density at an interior point and regression functions at both interior and boundary points. Some of these implementations build on classical results in the nonparametrics literature such as those reviewed in Wand and Jones (1995) and Fan and Gijbels (1996), while others build on more recent developments on nonparametrics (Calonico, Cattaneo, and Farrell 2018, 2019, and references therein) and/or new practical ideas explained below. See also Ruppert, Wand, and Carroll (2009) for semi/nonparametric applications in statistics, and Li and Racine (2007) for semi/nonparametric applications in econometrics.
Section 2 below gives a heuristic introduction to the main methodological ideas and results underlying the package implementation, avoiding technical details as much as possible and focusing instead on a big-picture overview of the methods available. For a technical discussion of the methods, see Calonico, Cattaneo, and Farrell (2018, 2019) and their supplemental appendices. Related software implementing partitioning-based nonparametric methods, including binscatter, splines, and wavelets, is discussed in Cattaneo, Farrell, and Feng (2019b) and Cattaneo, Crump, Farrell, and Feng (2019a).
Unlike most other kernel smoothing implementations available in R and Stata, the package nprobust has two distinctive features, in addition to offering several statistical procedures currently unavailable in other packages. First, all objects entering the different statistical procedures are computed using pre-asymptotic formulas, an approach that has been shown to deliver superior estimation and inference. For example, when implemented in pre-asymptotic form, local polynomial regression inference at a boundary point exhibits higher-order boundary carpentry, while this important distributional feature is lost when asymptotic approximations are used instead for Studentization purposes. Second, also motivated by pre-asymptotic approximations and better finite sample properties, all inference procedures are constructed using both heteroskedasticity consistent (HC) variance estimators and cluster robust variance estimators.
This approach departs from those employing homoskedasticity consistent variance estimators, which are implemented in most (if not all) packages currently available, because it has been found theoretically and in simulations to deliver more accurate inference procedures. The package nprobust implements HC variance estimators in two distinct ways: (i) using plug-in estimated residuals as usually done in least squares estimation (see Long and Ervin (2000), MacKinnon (2012), and references therein), and (ii) using nearest neighbor (NN) residuals (motivated by Müller and Stadtmüller (1987) and Abadie and Imbens (2008)). In particular, both the HC3 and NN variance estimators have been found to perform quite well in simulations. Cluster robust variance estimation is implemented using either plug-in estimated residuals with a degrees-of-freedom adjustment or NN residuals.
In this article, we focus the discussion on the R version of the package nprobust, which makes use of the packages ggplot2 (Wickham 2016) and Rcpp (Eddelbuettel and Balamuta 2017).
Nevertheless, we remark that the exact same functionalities are available in its Stata version.
The package nprobust is organized through five general purpose functions (or commands in Stata):
• lprobust(): this function implements estimation and inference procedures for kernel-based local polynomial regression at both interior and boundary points in a unified way.
This function takes the bandwidth choices as given, and implements point and variance estimation, bias correction, conventional and robust bias-corrected confidence intervals, and related inference procedures. All polynomial orders are allowed and the implementation automatically adapts to boundary points, given the choice of bandwidth(s). When the bandwidth(s) are not specified, the companion function lpbwselect() is used to select the necessary bandwidth(s) in a fully data-driven, automatic fashion, while taking into account whether the evaluation point is an interior or boundary point.
• lpbwselect(): this function implements bandwidth selection for kernel-based local polynomial regression methods at both interior and boundary points. Six distinct bandwidth selectors are available: direct plug-in (DPI) and rule-of-thumb (ROT) implementations of the MSE-optimal and IMSE-optimal choices (four alternatives), and DPI and ROT implementations of the CE-optimal choice (two alternatives).
• kdrobust(): this function implements estimation and inference procedures for kernel density estimation at an interior point. Like the function lprobust(), this function also takes the bandwidth choices as given and implements point and variance estimation, bias correction, conventional and robust bias-corrected confidence intervals, and related inference procedures. When the bandwidth(s) are not specified, the companion function kdbwselect() is used. This function cannot be used for density estimation at boundary points; see Cattaneo, Jansson, and Ma (2019c) for boundary-adaptive kernel density estimation methods.
• kdbwselect(): this function implements bandwidth selection for kernel density estimation at an interior point. Mimicking the structure and options of the function lpbwselect(), the same type of bandwidth selectors are available.
• nprobust.plot: this function is used as a wrapper for plotting results, and is only available in R because it builds on the package ggplot2. Analogous plotting capabilities are available in Stata, as we illustrate in our companion replication files for that platform.
We also present detailed numerical evidence on the performance of the package nprobust.
First, in Section 3, we illustrate several of its main functionalities using the canonical dataset of Efron and Feldman (1991). Second, in Section 4, we investigate the finite sample performance of the package using a simulation study. We also compare its performance to four other popular R packages implementing similar nonparametric methods: see Table 1 for details.
We find that the package nprobust outperforms the alternatives in most cases, and never underperforms in terms of bandwidth selection, point estimation, and inference. Finally, for comparability, the Appendix discusses the R syntax for all the packages considered.
Installation details, scripts in R and Stata replicating all numerical results, links to software repositories, and other companion information concerning the package nprobust can be found at the website: https://sites.google.com/site/nppackages/nprobust/.

Overview of Methods and Implementation
This section offers a brief, self-contained review of the main methods implemented in the package nprobust for the case of local polynomial regression. Whenever possible, we employ exactly the same notation as in Calonico, Cattaneo, and Farrell (2018, CCF hereafter) to facilitate comparison and cross-reference with their lengthy supplemental appendix. See also Calonico, Cattaneo, and Farrell (2019) for extensions to derivative estimation and other theoretical and practical results. The case of kernel density estimation at interior points is not discussed here to conserve space, but is nonetheless implemented in the package nprobust, as illustrated below. Further details can be found in the classical textbooks such as Wand and Jones (1995) and Fan and Gijbels (1996), as well as in CCF and references therein.
We assume that (Y_i, X_i), with i = 1, 2, . . . , n, is a random sample, where the regression function m(x) = E[Y_i | X_i = x], or its derivative, is the object of interest. The evaluation point x can be an "interior" or a "boundary" point, and we will discuss this issue when relevant. Regularity conditions such as smoothness or existence of moments are omitted herein, but can be found in the references already given. We discuss how the point estimator, its associated bias-corrected point estimator, and their corresponding variance estimators are all constructed. We employ two bandwidths, h and b, where h is used to construct the original point estimator and b is used to construct the bias correction (robust bias-corrected inference allows for h = b). Finally, in our notation any object indexed by p is computed with bandwidth h, and any object indexed by q is computed with bandwidth b, with q = p + 1. We set ρ = h/b for future reference.
When appealing to asymptotic approximations, limits are taken as h → 0 and b → 0 when n → ∞, unless explicitly stated otherwise. Finally, for any smooth function g(x), we use g^(ν)(x) = d^ν g(x)/dx^ν to denote its ν-th derivative, with g(x) = g^(0)(x) to save notation.

Point Estimation
The package nprobust implements fully data-driven, automatic kernel-based estimation and inference methods for the regression function m(x) and its derivatives. Since kernel-based local polynomial regression estimators are consistent at all evaluation points, we consider both interior and boundary points simultaneously.
The classical local polynomial estimator of m^(ν)(x), 0 ≤ ν ≤ p, is

m̂^(ν)(x) = ν! e_ν′ β̂_p(x),   β̂_p(x) = arg min_{b ∈ R^{p+1}} Σ_{i=1}^{n} (Y_i − r_p(X_i − x)′b)² K_h(X_i − x),   K_h(u) = K(u/h)/h,

where, for a non-negative integer p, e_ν is the (ν+1)-th unit vector of dimension p+1, r_p(u) = (1, u, u², . . . , u^p)′, and K(·) denotes a symmetric kernel with bounded support. See Fan and Gijbels (1996) for further discussion and basic methodological issues. While CCF restrict attention to p odd, as is standard in the local polynomial literature, the package nprobust allows for any p ≥ 0. In particular, p = 0 corresponds to the classical Nadaraya-Watson kernel regression estimator, while p = 1 is the local linear estimator with the celebrated boundary carpentry feature. Some of the higher-order results reported in CCF for local polynomial estimators may change when p is even, as we discuss further below, but all the first-order properties remain valid and hence the package nprobust allows for any p ≥ 0. The package uses p = 1 as default, in part because all the first- and higher-order results reported in Fan and Gijbels (1996), CCF, and references therein apply. The package offers three different kernel choices: Epanechnikov, Triangular, and Uniform.
For any fixed sample size, the local polynomial estimator m̂^(ν)(x) is a weighted least squares estimator, and hence can be written in matrix form as follows:

m̂^(ν)(x) = (ν!/h^ν) e_ν′ Γ_p⁻¹ (R_p′ W_p Y / n),

where Y = (Y_1, . . . , Y_n)′, R_p = [r_p((X_1 − x)/h), . . . , r_p((X_n − x)/h)]′, W_p = diag(K_h(X_i − x) : i = 1, . . . , n), and Γ_p = R_p′ W_p R_p / n; here diag(a_i : i = 1, . . . , n) denotes the n × n diagonal matrix constructed using a_1, a_2, · · · , a_n. This representation is convenient for developing pre-asymptotic approximations for estimation and inference, as we discuss below.
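To fix ideas, the weighted least squares representation can be coded directly for the local linear case (p = 1, ν = 0). The following pure-Python sketch is ours and only illustrates the algebra; it is not the package's implementation (nprobust itself is written for R and Stata), and the function names and toy data are hypothetical.

```python
def epanechnikov(u):
    """Epanechnikov kernel, one of the three kernels offered by the package."""
    return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0

def loclin_fit(xs, ys, x, h):
    """Local linear (p = 1) weighted least squares fit at the point x.

    Returns (beta0, beta1): beta0 estimates m(x) and beta1 estimates m'(x).
    """
    s0 = s1 = s2 = t0 = t1 = 0.0
    for xi, yi in zip(xs, ys):
        k = epanechnikov((xi - x) / h) / h   # K_h(X_i - x) = K((X_i - x)/h)/h
        d = xi - x
        s0 += k; s1 += k * d; s2 += k * d * d
        t0 += k * yi; t1 += k * d * yi
    det = s0 * s2 - s1 * s1                  # determinant of the 2 x 2 system
    beta0 = (s2 * t0 - s1 * t1) / det
    beta1 = (s0 * t1 - s1 * t0) / det
    return beta0, beta1

# Local linear fits reproduce exactly linear regression functions.
xs = [i / 20 for i in range(21)]
ys = [2.0 + 3.0 * xi for xi in xs]
mhat, dmhat = loclin_fit(xs, ys, x=0.5, h=0.25)
```

The noise-free check at the end exploits a known exact property: a local linear fit reproduces any globally linear regression function without error, at interior and boundary points alike.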

Bias and Variance
The basic statistical properties of the nonparametric estimator m̂^(ν)(x) are captured by its bias and variance. In pre-asymptotic form, that is, for a fixed sample size and bandwidth choice, the (conditional) bias and variance of m̂^(ν)(x) can be represented as

Bias[m̂^(ν)(x)] ≈ h^{p+1−ν} { B_1[m̂^(ν)(x)] + h B_2[m̂^(ν)(x)] },   Var[m̂^(ν)(x)] = V[m̂^(ν)(x)],

where the terms B_1[m̂^(ν)(x)], B_2[m̂^(ν)(x)], and V[m̂^(ν)(x)] depend on observable quantities such as n, h and K(·), and each depends on only one unknown quantity: m^(p+1)(x), m^(p+2)(x), and the heteroskedasticity function σ²(X_i) = V[Y_i|X_i], respectively. Throughout the paper, ≈ means that the approximation holds for large samples in probability.
To be more specific, the pre-asymptotic variance V[m̂^(ν)(x)] takes the form:

V[m̂^(ν)(x)] = (ν!² / h^{2ν}) e_ν′ Γ_p⁻¹ (R_p′ W_p Σ W_p R_p / n²) Γ_p⁻¹ e_ν,   Σ = diag(σ²(X_i) : i = 1, . . . , n),

which is valid for any ν and p and for interior and boundary points x, as this formula follows straightforwardly from (weighted) linear least squares algebra.
In practice, Σ is replaced by some "estimator", leading to heteroskedasticity consistent (HC) or cluster robust variance estimation for (weighted) linear least squares fitting.
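Because m̂^(ν)(x) is a linear smoother, the sandwich variance can equivalently be computed from the effective weights ω_i that satisfy m̂(x) = Σ_i ω_i Y_i, giving V̂ = Σ_i ω_i² ε̂_i² once Σ is replaced by estimated residuals. A minimal HC0-style sketch for the local linear case (pure Python; the names are ours, and this is an illustration of the formula, not the package's exact implementation):

```python
def epanechnikov(u):
    return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0

def loclin_hc0(xs, ys, x, h):
    """Local linear fit at x and its HC0 sandwich variance via effective weights."""
    s0 = s1 = s2 = t0 = t1 = 0.0
    kd = []
    for xi, yi in zip(xs, ys):
        k = epanechnikov((xi - x) / h) / h
        d = xi - x
        kd.append((k, d, yi))
        s0 += k; s1 += k * d; s2 += k * d * d
        t0 += k * yi; t1 += k * d * yi
    det = s0 * s2 - s1 * s1
    b0 = (s2 * t0 - s1 * t1) / det        # mhat(x)
    b1 = (s0 * t1 - s1 * t0) / det        # mhat'(x)
    var = 0.0
    for k, d, yi in kd:
        w = k * (s2 - s1 * d) / det       # effective weight: mhat(x) = sum_i w_i y_i
        eps = yi - b0 - b1 * d            # plug-in (HC0-style) residual
        var += (w * eps) ** 2             # Sigma-hat = diag(eps_i^2)
    return b0, var

# Noise-free linear data: residuals vanish, so the HC0 variance is (numerically) zero.
xs = [i / 20 for i in range(21)]
ys = [1.0 + 2.0 * xi for xi in xs]
mhat, vhat = loclin_hc0(xs, ys, x=0.5, h=0.25)
# Perturbing one observation inside the window makes the variance strictly positive.
ys2 = ys[:10] + [ys[10] + 1.0] + ys[11:]
m2, v2 = loclin_hc0(xs, ys2, x=0.5, h=0.25)
```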
The pre-asymptotic bias, on the other hand, is more complicated, as it depends on whether x is an interior or a boundary point, and on whether p − ν is odd or even. To be concrete, define

B_1[m̂^(ν)(x)] = ν! e_ν′ Γ_p⁻¹ Λ_p · m^(p+1)(x)/(p+1)!,   Λ_p = R_p′ W_p [((X_i − x)/h)^{p+1}]_{i=1,...,n} / n,

and let B_2[m̂^(ν)(x)] denote the analogous quantity based on ((X_i − x)/h)^{p+2} and m^(p+2)(x). If p − ν is odd, B_1[m̂^(ν)(x)] is the leading term of the bias (i.e., regardless of the evaluation point when p − ν is odd), and hB_2[m̂^(ν)(x)] is negligible in large samples. Alternatively, if p − ν is even and x is an interior point, then B_1[m̂^(ν)(x)] = hB_11[m̂^(ν)(x)] + o_P(h), where the quantity B_11[m̂^(ν)(x)] is non-zero, and relatively easy to estimate using a pilot bandwidth and pre-asymptotic approximations. (It is, however, notationally cumbersome to state explicitly.) Therefore, when p − ν is even and x is an interior point, both hB_11[m̂^(ν)(x)] and hB_2[m̂^(ν)(x)] are part of the leading bias. This distinction is important when it comes to (automatic) implementation, as the form of the bias changes depending on the evaluation point and the regression estimator considered.
At a conceptual level, putting aside technical specifics on approximation and implementation, the bias and variance quantities given above play a crucial role in bandwidth selection, estimation, and inference. The package nprobust pays special attention to constructing robust estimates of these quantities, with good finite sample and boundary properties.

Mean Square Error and Bandwidth Selection
In order to implement the point estimatorm (ν) (x), a choice of bandwidth h is needed. Following the classical nonparametric literature, the package nprobust employs pointwise and integrated MSE approximations for the point estimator to compute the corresponding MSEoptimal or IMSE-optimal bandwidth for point estimation.
The pointwise MSE approximation is

MSE[m̂^(ν)(x)] ≈ h^{2(p+1−ν)} B[m̂^(ν)(x)]² + V[m̂^(ν)(x)],

while the integrated MSE approximation is

IMSE[m̂^(ν)(·)] ≈ ∫ { h^{2(p+1−ν)} B[m̂^(ν)(x)]² + V[m̂^(ν)(x)] } w(x) dx,

with w(x) a weighting function.

Case 1: p − ν odd
Employing the approximations above, the MSE-optimal bandwidth is

h_mse = arg min_{h>0} MSE[m̂^(ν)(x)],

which, in this case, has the closed form solution

h_mse = [ (1 + 2ν) V[m̂^(ν)(x)] / (2(p + 1 − ν) n B[m̂^(ν)(x)]²) ]^{1/(2p+3)}.

Here we abuse notation slightly: the quantities V[m̂^(ν)(x)] and B[m̂^(ν)(x)] are taken as fixed and independent of n and h, that is, they denote (suitably rescaled) probability limits as opposed to the pre-asymptotic quantities originally defined. This abuse of notation is harmless because, in the end, these quantities are approximated by their pre-asymptotic counterparts evaluated at a preliminary/pilot bandwidth.
Similarly, the integrated MSE approximation when p − ν is odd leads to the IMSE-optimal bandwidth

h_imse = arg min_{h>0} IMSE[m̂^(ν)(·)],

which also has a closed form solution given by

h_imse = [ (1 + 2ν) ∫ V[m̂^(ν)(x)] w(x) dx / (2(p + 1 − ν) n ∫ B[m̂^(ν)(x)]² w(x) dx) ]^{1/(2p+3)}.

These MSE-optimal and IMSE-optimal bandwidth selectors are valid for p − ν odd, regardless of whether x is a boundary or an interior point. Implementation of these selectors is not difficult, as the bias and variance quantities are taken to be pre-asymptotic and hence most parts are already in estimable form. For example, we have the following easy-to-implement bias estimator:

B̂[m̂^(ν)(x)] = ν! e_ν′ Γ_p⁻¹ Λ_p · m̂^(p+1)(x)/(p+1)!,

where Γ_p and Λ_p are pre-asymptotic and hence directly estimable once a preliminary/pilot bandwidth is set, and m̂^(p+1)(x) is simply another local polynomial fit of order q = p + 1.
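The odd-case closed form h_mse = ((1 + 2ν)V / (2(p + 1 − ν)B²n))^(1/(2p+3)) can be coded in a few lines and sanity-checked against a direct numerical minimization of the MSE objective. In the sketch below the constants B and V are hypothetical placeholders (in practice they would be replaced by pre-asymptotic estimates), and all names are ours:

```python
def h_mse(B, V, n, p, nu=0):
    """Closed-form MSE-optimal bandwidth for p - nu odd, given constants B and V."""
    return ((1 + 2 * nu) * V / (2 * (p + 1 - nu) * n * B ** 2)) ** (1.0 / (2 * p + 3))

def mse(h, B, V, n, p, nu=0):
    """MSE objective: squared-bias term plus variance term."""
    return h ** (2 * (p + 1 - nu)) * B ** 2 + V / (n * h ** (1 + 2 * nu))

# Sanity check: the closed form should match the argmin over a fine grid.
B, V, n, p = 0.8, 1.5, 500, 1
h_star = h_mse(B, V, n, p)
grid = [0.001 * j for j in range(1, 2000)]
h_grid = min(grid, key=lambda h: mse(h, B, V, n, p))
```

As a design note, the closed form makes the well-known n^(−1/(2p+3)) rate explicit: doubling the exponent's denominator requires increasing p, not n.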
Similarly, the fixed-n variance estimator is given by:

V̂[m̂^(ν)(x)] = (ν!² / h^{2ν}) e_ν′ Γ_p⁻¹ (R_p′ W_p Σ̂ W_p R_p / n²) Γ_p⁻¹ e_ν,

where all the objects are directly computable from the data and valid for all p − ν (odd or even) and all evaluation points (interior or boundary), provided that Σ̂ is chosen appropriately. Following CCF, the package nprobust allows for five different Σ̂ under unknown conditional heteroskedasticity: HC0, HC1, HC2, HC3, and NN. The first four choices employ weighted estimated residuals, motivated by a weighted least squares interpretation of the local polynomial estimator, while the last choice uses a nearest neighbor approach to constructing residuals. Details underlying these choices, and their properties in finite and large samples, are discussed in CCF and its supplemental appendix. In addition, the package nprobust allows for two cluster robust choices of Σ̂: plug-in residuals with a degrees-of-freedom adjustment (similar to HC1) and NN-cluster residuals (similar to NN).
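The NN choice replaces plug-in residuals with nearest-neighbor ones. One common construction, in the spirit of the Müller-Stadtmüller and Abadie-Imbens ideas cited above, estimates σ²(X_i) from the J nearest neighbors of observation i; the sketch below is a hedged illustration of that construction (function name and the J default are ours, not the package's):

```python
def nn_sigma2(xs, ys, i, J=3):
    """Nearest-neighbor variance estimate at observation i:
    sigma2_i = J/(J+1) * (Y_i - mean of the J nearest neighbors' Y)^2."""
    # Sort the other observations by distance to x_i (ties broken by index).
    order = sorted((abs(xs[j] - xs[i]), j) for j in range(len(xs)) if j != i)
    nbrs = [ys[j] for _, j in order[:J]]
    ybar = sum(nbrs) / len(nbrs)
    return J / (J + 1) * (ys[i] - ybar) ** 2

# Tiny worked example: for i = 3 with J = 1, the nearest neighbor is x = 2
# (y = 0), so sigma2 = (1/2) * (10 - 0)^2 = 50.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 0.0, 10.0]
```

The J/(J+1) factor corrects for the fact that the neighbor average is itself noisy, so the squared contrast would otherwise overstate σ²(X_i).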
As mentioned in the introduction, the outlined pre-asymptotic approach for bias and variance estimation is quite different from employing the usual asymptotic approximations, which is the way most implementations of kernel smoothing are done in both R and Stata. Importantly, using valid higher-order Edgeworth expansions, CCF established formally that using preasymptotic approximations for bias and variance quantities delivers demonstrably superior distributional approximations, and hence better finite sample performance.

Case 2: p − ν even
This case requires additional care. If x is a boundary point, then the above formulas are valid and the same MSE-optimal and IMSE-optimal bandwidth selectors can be used. However, when x is an interior point and p − ν is even, the leading bias formulas change and so do the corresponding bandwidth selectors. Therefore, to account for this issue in an automatic way, since in any given application it is not possible to cleanly distinguish between interior and boundary evaluation points, the default bandwidth selector implemented in the package nprobust optimizes the pre-asymptotic full bias approximation, leading to the MSE-optimal and IMSE-optimal bandwidth selectors

h_mse = arg min_{h>0} [ h^{2(p+1−ν)} (B_1[m̂^(ν)(x)] + h B_2[m̂^(ν)(x)])² + V[m̂^(ν)(x)] ]

and

h_imse = arg min_{h>0} ∫ [ h^{2(p+1−ν)} (B_1[m̂^(ν)(x)] + h B_2[m̂^(ν)(x)])² + V[m̂^(ν)(x)] ] w(x) dx,

respectively.
These bandwidth selectors for local polynomial regression with p − ν even automatically adapt to the evaluation point in terms of the resulting convergence rate; that is, the bandwidth choice will deliver point estimates with the MSE- or IMSE-optimal convergence rate, but it cannot be given in closed form. Furthermore, the package also offers the possibility of explicitly treating all points x as interior points when p − ν is even, in which case the estimator of the term B_1[m̂^(ν)(x)] is replaced by an estimator of the term hB_11[m̂^(ν)(x)], leading to the standard bandwidth selectors, which are more cumbersome to state but can be computed in closed form.
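In the even case the objective h ↦ h^(2(p+1−ν))(B_1 + hB_2)² + V/(nh^(1+2ν)) must be minimized numerically. A simple golden-section sketch (pure Python; all constants hypothetical and all names ours), with the consistency check that setting B_2 = 0 recovers the odd-case closed form:

```python
def mse_even(h, B1, B2, V, n, p, nu=0):
    """Full pre-asymptotic MSE approximation used when p - nu is even."""
    return h ** (2 * (p + 1 - nu)) * (B1 + h * B2) ** 2 + V / (n * h ** (1 + 2 * nu))

def argmin_golden(f, lo, hi, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal function on [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# With B2 = 0 (and nu = 0) the numeric minimizer must match the closed form
# (V / (2 (p + 1) B1^2 n))^(1 / (2p + 3)) from the odd case.
B1, V, n, p = 0.8, 1.5, 500, 2
h_num = argmin_golden(lambda h: mse_even(h, B1, 0.0, V, n, p), 1e-4, 2.0)
h_cf = (V / (2 * (p + 1) * B1 ** 2 * n)) ** (1.0 / (2 * p + 3))
```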

Boundary Adaptive Bandwidth Selection
The function lpbwselect() implements a DPI estimate of the MSE-optimal and IMSE-optimal bandwidth selectors discussed previously, taking into account whether p − ν is even or odd and whether x is an interior or a boundary point. These bandwidth choices depend on p, ν, K(·), and preliminary bias and variance estimators. For the bias quantity, the preliminary estimator is computed using nonparametric point estimators of the unknown quantities m^(p+1), m^(p+2), and so on, depending on the case of local polynomial regression considered; in all cases, these preliminary quantities are estimated using MSE-optimal bandwidth choices. For the variance quantity, a choice of "estimated" residual is used, as already discussed, and a preliminary rule-of-thumb bandwidth is employed to construct the remaining pre-asymptotic objects. The integrated versions are computed by taking a sample average of the pointwise bias and variance quantities over the evaluation points selected by the user. Finally, the function lpbwselect() also implements ROT versions of the MSE-optimal and IMSE-optimal bandwidth selectors for completeness.
All the proposed and implemented bandwidth selection methods outlined above are rate adaptive, in the sense that the selected bandwidth always exhibits the (I)MSE-optimal convergence rate, regardless of the polynomial order (p), the derivative order (ν), and the evaluation point (interior or boundary).

Optimal Point Estimation
When implemented using the (pointwise) MSE-optimal bandwidth, the regression function estimator m̂^(ν)(x) is an MSE-optimal point estimator, in the sense that it minimizes the pointwise asymptotic mean square error objective function. Sometimes, researchers (and other software packages) implementing kernel smoothing methods prefer to optimize the bandwidth choice in a global sense and thus focus on optimal bandwidths targeting the IMSE, as presented above, or some form of cross-validation objective function. These alternative bandwidth selectors also lead to optimal point estimators in some sense. Regardless of the specific objective function considered, and hence the bandwidth rule used, the resulting bandwidth selectors exhibit the same rate of decay as n → ∞ (albeit with different constants): for example, compare h_mse and h_imse, given above in closed form for the case of p − ν odd.
The package nprobust implements only plug-in bandwidth selectors for both local (pointwise) and global (integrated) bandwidth choices, because such rules tend to have better finite sample properties. Cross-validation methods are not implemented due to their potential numerical instability and the slower convergence rates of the resulting bandwidth choices. Nevertheless, the package can of course be used to construct cross-validation bandwidth choices manually.

Conventional and Robust Bias-Corrected Inference
Given a choice of bandwidth, conventional inference in nonparametric kernel-based estimation employs a Gaussian distributional approximation for the usual Wald-type statistic. To be more precise, standard asymptotic results give

T_us(x) = (m̂^(ν)(x) − m^(ν)(x)) / √(V̂[m̂^(ν)(x)]) →_d N(0, 1),

with V̂[m̂^(ν)(x)] already discussed above, and where →_d denotes convergence in distribution. Using this classical result, we obtain the nominal (1 − α)-percent conventional symmetric confidence intervals:

I_us(x) = [ m̂^(ν)(x) − Φ_{1−α/2} √(V̂[m̂^(ν)(x)]) , m̂^(ν)(x) + Φ_{1−α/2} √(V̂[m̂^(ν)(x)]) ],

where Φ_u = Φ⁻¹(u), with Φ(u) denoting the standard normal cumulative distribution function.
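Given a point estimate and its Studentization variance, the symmetric interval is immediate; a short sketch using only the Python standard library (the function name and inputs are ours):

```python
from statistics import NormalDist

def conventional_ci(est, var, alpha=0.05):
    """Symmetric (1 - alpha) interval: est +/- Phi^{-1}(1 - alpha/2) * sqrt(var)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # standard normal quantile
    half = z * var ** 0.5
    return est - half, est + half

# Example: estimate 2.0 with variance 0.25 gives 2.0 +/- 1.96 * 0.5 at the 95% level.
lo, hi = conventional_ci(2.0, 0.25, alpha=0.05)
```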
However, this standard distributional approximation crucially requires undersmoothing (us) of the bandwidth employed: the confidence interval I_us(x) will not have correct coverage when any of the mean square error optimal bandwidths discussed previously is used. The main reason is that, when h_mse or h_imse (or a cross-validation bandwidth) is employed, the statistic T_us(x) is no longer centered at zero in large samples, because its limiting distribution carries a non-vanishing asymptotic bias. Furthermore, if a bandwidth larger than an MSE-optimal one is used, then the distributional approximation fails altogether.
This observation is very important because most (if not all) available kernel smoothing implementations in R and Stata employ a mean square error optimal bandwidth for both point estimation and inference, which means that the corresponding inference results are incorrect.
One solution to this problem is to undersmooth the bandwidth, that is, to employ a smaller bandwidth for conducting inference and constructing confidence intervals. However, while theoretically valid in large samples, this approach does not typically perform well in applications, leads to power losses, and requires employing different observations for estimation and inference purposes, all important drawbacks for empirical work.
Motivated in part by the above issues, CCF developed new inference methods based on nonparametric bias correction, which they termed Robust Bias Correction (RBC). This approach employs traditional plug-in bias correction, as an alternative to undersmoothing, but with a different variance estimator for Studentization purposes (when compared to those discussed in textbooks). The key underlying idea is that, because the bias is estimated when conducting bias correction, the variance estimator should be changed to account for the additional variability introduced. This leads to a new test statistic in which both the centering (bias) and the scale (variance) have been adjusted, but the same bandwidth can be used. To be more specific, we have

T_rbc(x) = (m̂_bc^(ν)(x) − m^(ν)(x)) / √(V̂[m̂_bc^(ν)(x)]),   m̂_bc^(ν)(x) = m̂^(ν)(x) − h^{p+1−ν} B̂[m̂^(ν)(x)].

As explained in CCF, this formula emerges from employing a q-th order local polynomial fit with bandwidth b to estimate the leading higher-order derivative featuring in the fixed-n bias approximation for the estimator m̂^(ν)(x). Finally, in this case, Σ̂ is also computed using any of the five HC variance estimators (HC0, HC1, HC2, HC3, NN) or the two cluster-robust versions mentioned above. The estimator B̂[m̂^(ν)(x)] was already introduced above for bandwidth selection, but the estimator V̂[m̂_bc^(ν)(x)] is new because it accounts for the variance of m̂^(ν)(x), the variance of B̂[m̂^(ν)(x)], and the covariance between these terms.
The RBC test statistic can be constructed with any of the (I)MSE optimal bandwidths, including the ones introduced above, and the standard Gaussian approximation remains valid.
This result leads to the nominal (1 − α)-percent robust bias-corrected symmetric confidence intervals:

I_rbc(x) = [ m̂_bc^(ν)(x) − Φ_{1−α/2} √(V̂[m̂_bc^(ν)(x)]) , m̂_bc^(ν)(x) + Φ_{1−α/2} √(V̂[m̂_bc^(ν)(x)]) ].

The package nprobust implements both conventional and RBC inference for local polynomial estimators, at interior and boundary points. CCF also show that the same ideas and results apply to density estimation at an interior point, and the package implements those results as well.
The function lprobust() implements the point estimators, variance estimators, conventional inference, and robust bias-corrected inference discussed above, taking into account whether the evaluation point x is near the boundary or not, and employing all pre-asymptotic approximations for estimation of bias and variance. Two bandwidths are used: h for constructing the main point estimator and related quantities, and b for constructing the higher-order derivative entering the bias correction term. As discussed in CCF, setting h = b is allowed by the theory, and leads to a simple method for bias correction. This is the default in nprobust.
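A useful way to see the h = b default at work: for the point estimate with ν = 0, straightforward least squares algebra shows that bias-correcting the order-p fit with b = h reproduces the order-(p+1) local polynomial point estimate exactly. The pure-Python sketch below (p = 1; all names ours, not the package's) verifies this identity numerically:

```python
def epanechnikov(u):
    return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0

def solve(A, rhs):
    """Solve A x = rhs by Gaussian elimination with partial pivoting."""
    n = len(rhs)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def locpoly_coefs(xs, ys, x, h, p):
    """Unnormalized normal equations and coefficients of the order-p local fit at x."""
    S = [[0.0] * (p + 1) for _ in range(p + 1)]
    t = [0.0] * (p + 1)
    for xi, yi in zip(xs, ys):
        k = epanechnikov((xi - x) / h) / h
        d = xi - x
        r = [d ** j for j in range(p + 1)]
        for a in range(p + 1):
            t[a] += k * r[a] * yi
            for c in range(p + 1):
                S[a][c] += k * r[a] * r[c]
    return S, solve(S, t)

# Deterministic toy data; same kernel and bandwidth for both fits (rho = 1).
xs = [i / 30 for i in range(31)]
ys = [xi ** 3 - xi for xi in xs]
x, h = 0.5, 0.3
S1, beta1 = locpoly_coefs(xs, ys, x, h, p=1)     # local linear (p = 1)
S2, beta2 = locpoly_coefs(xs, ys, x, h, p=2)     # local quadratic (q = 2)
# Bias estimate: e_0' (R'WR)^{-1} [sum_i k_i r_1(d_i) d_i^2] * (curvature coefficient).
lam = [0.0, 0.0]
for xi in xs:
    k = epanechnikov((xi - x) / h) / h
    d = xi - x
    lam[0] += k * d ** 2
    lam[1] += k * d ** 3
g = solve(S1, lam)                    # normalization factors cancel in S1^{-1} lam
m_bc = beta1[0] - g[0] * beta2[2]     # bias-corrected local linear point estimate
m_q = beta2[0]                        # local quadratic point estimate
```

The identity m_bc = m_q follows from the first normal equations of the quadratic fit, which is one way to understand why h = b yields a particularly simple bias correction.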

Coverage Error and Bandwidth Selection
An (I)MSE-optimal bandwidth can be used to construct optimal point estimators but cannot be used directly to conduct inference or form confidence intervals, in the conventional way, since a first-order bias renders the standard Gaussian approximation invalid as discussed above. Robust bias-corrected inference allows the use of an (I)MSE-optimal bandwidth to form a test statistic with a standard Gaussian distribution in large samples, thereby leading to valid confidence intervals and hypothesis tests.
However, while valid, employing an (I)MSE-optimal bandwidth to form the robust biascorrected statistic may not be optimal from a distributional or inference perspective. CCF (for ν = 0) and Calonico, Cattaneo, and Farrell (2019) (for ν ≥ 0) study this problem formally using valid Edgeworth expansions and show that indeed the (I)MSE-optimal bandwidth is suboptimal in general, even after robust bias-correction, because it is too "large" (i.e., it decays to zero too slowly) relative to the optimal choice. A very important exception is p = 1, ν = 0, with x an interior point, in which case the MSE-optimal bandwidth does have the CE-optimal rate of convergence.
Consider first the case of local polynomial regression with p − ν odd and robust bias-correction inference. The asymptotic coverage of the robust bias-corrected confidence interval I_rbc(x) satisfies

P[ m^(ν)(x) ∈ I_rbc(x) ] = 1 − α + CE(h),

where the higher-order coverage error CE(h) is a known function of h and 1/(nh), with coefficients E_{k,ν}(x), k = 1, 2, . . . , 5, that depend on the data generating process. This representation follows from Corollary 5 in CCF and results in Calonico, Cattaneo, and Farrell (2019) (see also their supplemental appendices), and is written in a way that is adaptive to the evaluation point x, that is, it is asymptotically valid for both interior and boundary points. Thus, a CE-optimal bandwidth for implementing the confidence intervals I_rbc(x), with p odd, is:

h_ce = arg min_{h>0} |CE(h)|.

As in the case of (I)MSE-optimal bandwidth selection, a plug-in CE-optimal bandwidth estimator is obtained by replacing the quantities E_{k,ν}(x) with consistent estimators Ê_{k,ν}(x), k = 1, 2, 3, 4, 5.
The bandwidth selector h ce , and its data-driven implementation, can be used to construct robust bias-corrected confidence intervals I rbc (x) with CE-optimal properties, that is, with minimal coverage error in large samples. Because the constants entering the coverage error are estimated consistently, the resulting CE-optimal bandwidth choice corresponds to a DPI bandwidth selector, which is available only for p odd (and, of course, robust bias-corrected inference). For p even, only the ROT CE-optimal bandwidth selector discussed below is currently available.
As discussed in the supplemental appendix of CCF, it is always possible to construct a simple and intuitive ROT implementation of the CE-optimal bandwidth selector by rescaling any MSE-optimal bandwidth choice:

ĥ_rot = ĥ_mse × n^{−p/((p+3)(2p+3))},

which converts the MSE-optimal rate n^{−1/(2p+3)} into the CE-optimal rate n^{−1/(p+3)}.
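A ROT rescaling of this kind amounts to multiplying the MSE bandwidth by a power of n chosen to convert the n^(−1/(2p+3)) rate into the CE-optimal n^(−1/(p+3)) rate; the exponent −p/((p+3)(2p+3)) below is derived from that rate conversion, and the function name is ours:

```python
def h_ce_rot(h_mse, n, p):
    """Rule-of-thumb CE bandwidth: rescale an MSE-optimal bandwidth so that the
    n^(-1/(2p+3)) MSE rate becomes the CE-optimal n^(-1/(p+3)) rate."""
    return h_mse * n ** (-p / ((p + 3) * (2 * p + 3)))

# Rate check for p = 1: an MSE bandwidth proportional to n^(-1/5)
# is mapped to one proportional to n^(-1/4).
n = 1000
out = h_ce_rot(n ** (-1 / 5), n, 1)
```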

Empirical Illustration
This section showcases some of the features of the package nprobust employing the canonical dataset of Efron and Feldman (1991), which corresponds to a randomized clinical trial evaluating the efficacy of cholestyramine for reducing cholesterol levels. The package help manual can also be consulted from within each platform in the usual way. Once the package has been installed, it can be loaded by typing:

> library(nprobust)

We first employ kdrobust() with its default options to depict the distribution of the pre-intervention and post-intervention variables. Recall that this command is only valid for interior points, because of the well-known boundary bias in standard density estimation, so the grid of evaluation points needs to be restricted accordingly. As a first illustration, and to display the output in a parsimonious way, we use the function kdrobust() to estimate the density function of the pre-treatment variable chol1 among control units over only seven evaluation points. Naturally, as the number of evaluation points increases, the output produced by the R function summary() grows as well. (In Stata there is an option to suppress output, as commands on that platform typically issue detailed output by default.) Notice that the number of evaluation points was specified, via the option neval = 7, but not their location. The alternative option eval controls both the number and the location of evaluation points: for example, specifying eval = 3 would result in estimation at x = 3 only, while specifying eval = seq(1,2,.1) would result in estimation at the eleven evaluation points x ∈ {1.0, 1.1, · · · , 1.9, 2.0}. Unless the option h is specified, kdrobust() employs the companion function kdbwselect(), with its default option bwselect = "IMSE-DPI", to choose a data-driven bandwidth; that is, unless the user specifies the bandwidth(s) manually, the default choice is an automatic DPI implementation of the IMSE-optimal bandwidth.
As mentioned previously, the default data-driven bandwidth implementation is IMSE-DPI.
However, the package nprobust includes other bandwidth selectors, including CE-optimal ones as explained above. These alternative bandwidth choices can be selected directly for estimation and inference via the option bwselect in the function lprobust(), or they can all be obtained using the companion bandwidth selection function lpbwselect().

This output gives the MSE, IMSE, and CE bandwidth selectors, implemented using either the DPI or the ROT approach. The above bandwidth selection execution included the (default) option bwcheck = 21. This option forces the command to choose the minimum bandwidth such that at least 21 observations are used at each evaluation point, which is useful to mitigate issues related to sparse data at certain evaluation points. We combined this option with the IMSE-DPI bandwidth because the latter employs a larger grid of evaluation points to approximate the outer integrals entering its construction, and some of these grid points exhibited (local) sparsity issues in this particular application.
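The bwcheck logic is simple to describe: at each evaluation point, the selected bandwidth is enlarged, if necessary, until at least the required number of observations fall within the local window. A minimal base-R sketch of this rule, with placeholder data and a hypothetical preliminary bandwidth (the internal nprobust implementation may differ in details), is:

```r
# Enforce a minimum bandwidth so that at least nmin observations
# enter the local fit at evaluation point x.
min_bw <- function(x, data, nmin = 21) {
  # distance from x to its nmin-th nearest observation
  sort(abs(data - x))[nmin]
}
enforce_bwcheck <- function(h, x, data, nmin = 21) {
  max(h, min_bw(x, data, nmin))
}

set.seed(1)
X <- rexp(500)      # right-skewed placeholder data: sparse in the right tail
h_sel <- 0.1        # hypothetical preliminary data-driven bandwidth
h_used <- enforce_bwcheck(h_sel, x = 5, data = X, nmin = 21)
sum(abs(X - 5) <= h_used)   # at least 21 observations in the window
```

At evaluation points with plenty of nearby data the rule leaves the preliminary bandwidth unchanged; only in sparse regions does it widen the window.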
In the upcoming section we illustrate how these bandwidth selection methods, together with the estimation and inference procedures discussed above, perform in simulations. We also show how they compare to other available R packages.

Simulations and Comparisons with Other R Packages
In this section we present results from a simulation study assessing the finite-sample performance of nprobust in comparison with other R packages implementing kernel smoothing regression methods. We consider four other packages: locfit (Loader 2013), locpol, lpridge, and np (Hayfield and Racine 2008); Table 1 gives a brief comparison of the main features available in each of the packages. Among the main differences summarized there, the most important one concerns inference: none of the other packages account for the first-order asymptotic bias present in the distributional approximation when automatic MSE-optimal or similar bandwidth selectors are employed. In addition, none of the other packages employ pre-asymptotic quantities when constructing the estimators and test statistics. As we show below, these implementation differences lead to substantial empirical coverage distortions of confidence intervals when contrasted with the performance of nprobust. See the Appendix for a comparison of R syntax across these packages.
We compare the finite-sample performance of the five packages described in Table 1 using a simulation study. The data generating process, previously used by Fan and Gijbels (1996) and Cattaneo and Farrell (2013), is given as follows: We consider 5,000 simulations, where for each replication the data is generated as i.i.d. draws of size n = 500. We investigate several performance measures for point estimation and inference, whenever available in each package. Specifically, we consider bias, variance, MSE, and the empirical coverage and average interval length of nominal 95% confidence intervals, over five equally-spaced evaluation points x ∈ {0, 0.25, 0.5, 0.75, 1}. The regression function and evaluation points are plotted in Figure 3, together with one realization of the data and the corresponding regression function estimate (with confidence intervals) obtained using nprobust. Whenever possible we employ the Epanechnikov kernel, and otherwise the default kernel in each package.
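The two inference performance measures just described can be computed as in the following self-contained sketch. The regression function, estimator, and bandwidth below are placeholders, not the paper's actual DGP or the estimators under comparison: a simple local constant (kernel-weighted mean) fit with a naive plug-in standard error stands in for the procedures being evaluated.

```r
# Empirical coverage and average length of nominal 95% CIs
# over Monte Carlo replications, for a placeholder estimator.
set.seed(123)
m   <- function(x) sin(2 * pi * x)   # placeholder regression function
epa <- function(u) ifelse(abs(u) <= 1, 0.75 * (1 - u^2), 0)

one_rep <- function(n = 500, x0 = 0.5, h = 0.1) {
  X <- runif(n); Y <- m(X) + rnorm(n, sd = 0.5)
  w    <- epa((X - x0) / h)
  mhat <- sum(w * Y) / sum(w)                 # local constant fit at x0
  s2   <- sum(w * (Y - mhat)^2) / sum(w)      # local residual variance
  se   <- sqrt(s2 * sum(w^2)) / sum(w)        # plug-in standard error
  ci   <- mhat + c(-1, 1) * 1.96 * se
  c(covered = ci[1] <= m(x0) && m(x0) <= ci[2],
    length  = ci[2] - ci[1])
}

res <- replicate(200, one_rep())
coverage <- mean(res["covered", ])   # empirical coverage
avg_len  <- mean(res["length", ])    # average interval length
```

In the study itself, these two quantities are computed for every package, evaluation point, and bandwidth method over the 5,000 replications.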
We focus on two types of comparisons across packages in terms of point estimation and inference performance: (i) using a fixed, population MSE-optimal bandwidth choice, and (ii) using an estimated bandwidth. This allows us to disentangle the effects of how point estimators and inference procedures are constructed (given a fixed, common bandwidth across packages) from the combined effects of different point estimators and inference procedures coupled with different data-driven bandwidth selectors.
We organize the presentation of the results by choice of polynomial order (p) and derivative being estimated (ν) in five tables as follows:

• Table 2: local linear estimator (p = 1) for the level of the regression function (ν = 0);
• Table 3: local constant estimator (p = 0) for the level of the regression function (ν = 0);
• Table 4: local quadratic estimator (p = 2) for the level of the regression function (ν = 0);
• Table 5: local linear estimator (p = 1) for the derivative of the regression function (ν = 1);
• Table 6: local quadratic estimator (p = 2) for the derivative of the regression function (ν = 1).
Recall that p = 1 and ν = 0 is the default in the package nprobust, and corresponds to the recommended p − ν odd case for estimating m(x); similarly, Table 6 presents results for the recommended p − ν odd case for estimating m^(1)(x). Each of these tables reports the performance of the four other R packages in addition to the package nprobust, for each of the five evaluation points mentioned previously. For the package nprobust, we consider all three bandwidth methods: MSE-optimal, IMSE-optimal, and CE-optimal. All this information is organized by rows in each table.
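For concreteness, the generic local polynomial point estimator underlying all five tables, namely a kernel-weighted polynomial fit of order p around the evaluation point with the estimate of m^(ν)(x) given by ν! times the ν-th estimated coefficient, can be sketched in base R as follows. This is only the point estimator applied to simulated placeholder data; it does not reproduce the pre-asymptotic variance construction used by lprobust().

```r
# Local polynomial estimator of the nu-th derivative of m at x0:
# weighted least squares fit of order p with Epanechnikov weights.
epa <- function(u) ifelse(abs(u) <= 1, 0.75 * (1 - u^2), 0)

lp_fit <- function(x0, X, Y, h, p = 1, nu = 0) {
  w    <- epa((X - x0) / h)
  keep <- w > 0
  fit  <- lm(Y ~ poly(X - x0, degree = p, raw = TRUE),
             weights = w, subset = keep)
  # m_hat^(nu)(x0) = nu! * (nu-th polynomial coefficient)
  unname(factorial(nu) * coef(fit)[nu + 1])
}

set.seed(7)
X <- runif(500)
Y <- 1 + 2 * X + rnorm(500, sd = 0.1)      # placeholder linear DGP
lp_fit(0.5, X, Y, h = 0.25, p = 1, nu = 0)  # approximately m(0.5) = 2
lp_fit(0.5, X, Y, h = 0.25, p = 1, nu = 1)  # approximately m'(0.5) = 2
```

Setting (p, nu) to (1, 0), (0, 0), (2, 0), (1, 1), and (2, 1) reproduces the five estimators compared in Tables 2 through 6.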
Each of Tables 2-6 has two sets of columns, labeled "Population Bandwidth" and "Estimated Bandwidth", corresponding to the two comparisons described above. For brevity, here we discuss only the default case p = 1 and ν = 0 (Table 2), but the results are qualitatively consistent across all tables. We find that nprobust offers a point estimator with MSE properties similar to those of the other available packages, but with much better inference performance.
In most cases, the empirical coverage of nominal 95% confidence intervals is approximately correct for the package nprobust (across evaluation points and bandwidth selection methods), while the other packages exhibit important coverage distortions. For example, at the evaluation point x = 0, which exhibits substantial local curvature in the regression function, the package np delivers nominal 95% confidence intervals with an empirical coverage of 0% when the population MSE-optimal bandwidth is used, and of 76.2% when an estimated (cross-validated) bandwidth is used.
In contrast, the empirical coverage when using the package nprobust is about 94% in the same setting, regardless of which of the population or estimated bandwidth procedures available in the package is used. These empirical findings are in line with the underlying theory and illustrate the advantages of employing robust bias-correction methods for inference (Calonico et al. 2018).
Tables 3-6 show qualitatively similar empirical findings. Finally, to offer a more complete set of results, we include Table 7, which reports the performance of all the bandwidth selectors available in the package nprobust. Some of these results were already reported in the previous tables, but Table 7 allows for an easier comparison across models and methods, in addition to including the performance of the ROT bandwidth selectors (not reported previously). The simulation results show that these bandwidth selectors perform well on average relative to their corresponding population targets.

Conclusion
We gave an introduction to the software package nprobust, which offers an array of estimation and inference procedures for nonparametric kernel-based density and local polynomial methods. The package is available on both the R and Stata statistical platforms, and further installation details and replication scripts can be found at https://sites.google.com/site/nprobust/.
We also offered a comprehensive simulation study comparing the finite-sample performance of the package with that of other R packages for kernel-based nonparametric estimation and inference. In particular, we found that the package nprobust offers point estimation methods on par with existing alternatives, together with superior inference performance.

Acknowledgments
We