**NEWS.md**

WeightIt News and Updates
* Fixed a bug in which the output of `bread()` was off by a factor of -1. This doesn't affect its use in `sandwich::sandwich()`.
* Typo fixes in vignettes and documentation.
# `WeightIt` 1.4.0
* Entropy balancing works slightly differently when sampling weights are supplied. The negative entropy between the estimated weights and the product of the sampling weights and base weights (if any) is now the quantity minimized in the optimization. Previously, the negative entropy between the product of the sampling weights and estimated weights and the base weights was minimized. The new behavior ensures entropy balancing is consistent with mathematically equivalent methods when it ought to be (i.e., CBPS and IPT for the ATT) and prevents counter-intuitive results, such as the ESS after weighting being larger than the ESS before weighting. Note this will cause results to differ between this and previous versions of `WeightIt`.
**README.Rmd**
For a complete vignette, see the [website](https://ngreifer.github.io/WeightIt/articles/WeightIt.html) for *WeightIt* or `vignette("WeightIt")`.
To install and load *WeightIt*, use the code below:
```{r, eval = FALSE}
#CRAN version
install.packages("WeightIt")
library("WeightIt")
```
For the second goal, qualities of the distributions of weights can be assessed using `summary()`:

```{r}
summary(W)
```
Large effective sample sizes imply low variability in the weights, and therefore increased precision in estimating the treatment effect.
Finally, we can estimate the effect of the treatment using a weighted outcome model, accounting for estimation of the weights in the standard error of the effect estimate:
```{r}
fit <- lm_weightit(re78 ~ treat, data = lalonde,
                   weightit = W) #pass the weightit object so the SEs account for estimation of the weights

summary(fit, ci = TRUE)
```
The tables below contain the available methods in *WeightIt* for estimating weights for binary, multi-category, and continuous treatments. Some of these methods require installing other packages to use; see `vignette("installing-packages")` for information on how to install them.
#### Binary Treatments
| Method | `method` |
|---|---|
| Bayesian additive regression trees GPS | [`"bart"`](https://ngreifer.github.io/WeightIt/reference/method_bart.html) |
In addition, *WeightIt* implements the subgroup balancing propensity score using the function `sbps()`. Several other tools and utilities are available, including `trim()` to trim or truncate weights, `calibrate()` to calibrate propensity scores, and `get_w_from_ps()` to compute weights from propensity scores.
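For example, a quick sketch of trimming (assuming a `weightit` object `W` like the one summarized above; `at = .9` truncates weights at the 90th percentile):

```{r, eval = FALSE}
#Truncate the largest weights at the 90th percentile
W.trim <- trim(W, at = .9)
```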
*WeightIt* provides functions to fit weighted models that account for the uncertainty in estimating the weights. These include `glm_weightit()` for fitting generalized linear models, `ordinal_weightit()` for ordinal regression models, `multinom_weightit()` for multinomial regression models, and `coxph_weightit()` for Cox proportional hazards models. Several methods are available for computing the parameter variances, including asymptotically correct M-estimation-based variances, robust variances that treat the weights as fixed, and traditional and fractional weighted bootstrap variances. Clustered variances are supported. See `vignette("estimating-effects")` for information on how to use these after weighting to estimate treatment effects.
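As a sketch of selecting one of these variance options (assuming `vcov = "BS"` selects the traditional bootstrap and `R` sets the number of replications, per the `glm_weightit()` documentation), reusing the `W` object from above:

```{r, eval = FALSE}
#Request a bootstrap variance instead of the default
#M-estimation-based variance
fit <- lm_weightit(re78 ~ treat, data = lalonde,
                   weightit = W, vcov = "BS", R = 500)
summary(fit)
```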
**vignettes/WeightIt.Rmd**
## Introduction
*WeightIt* contains several functions for estimating and assessing balancing weights for observational studies. These weights can be used to estimate the causal parameters of marginal structural models. I will not go into the basics of causal inference methods here. For good introductory articles, see @austinIntroductionPropensityScore2011, @austinMovingBestPractice2015, @robinsMarginalStructuralModels2000, or @thoemmesPrimerInverseProbability2016.
Typically, the analysis of an observational study might proceed as follows: identify the covariates for which balance is required; assess the quality of the data available, including missingness and measurement error; estimate weights that balance the covariates adequately; and estimate a treatment effect and corresponding standard error or confidence interval. This guide will go through these steps for two observational studies: estimating the causal effect of a point treatment on an outcome, and estimating the causal parameters of a marginal structural model with multiple treatment periods. This is not meant to be a definitive guide, but rather an introduction to the relevant issues.
## Balancing Weights for a Point Treatment
First we will use the Lalonde dataset to estimate the effect of a point treatment. We'll use the version of the data set that comes with the *cobalt* package, which we will use later on as well. Here, we are interested in the average treatment effect on the treated (ATT).
```{r}
library("cobalt")
data("lalonde", package = "cobalt")
head(lalonde)
```
We have our outcome (`re78`), our treatment (`treat`), and the covariates for which balance is desired (`age`, `educ`, `race`, `married`, `nodegree`, `re74`, and `re75`). Using *cobalt*, we can examine the initial imbalance on the covariates:
```{r}
bal.tab(treat ~ age + educ + race + married + nodegree + re74 + re75,
        data = lalonde, estimand = "ATT",
        thresholds = c(m = .05)) #flag differences greater than .05
```
Based on this output, we can see that all variables are imbalanced in the sense that the standardized mean differences (for continuous variables) and differences in proportion (for binary variables) are greater than .05. In particular, `re74` and `re75` are quite imbalanced, which is troubling given that they are likely strong predictors of the outcome. We will estimate weights using `weightit()` to try to attain balance on these covariates.
First, we'll start simple, and use inverse probability weights from propensity scores generated through logistic regression. We need to supply `weightit()` with the formula for the model, the data set, the estimand (ATT), and the method of estimation (`"glm"` for generalized linear model propensity score weights).
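Putting those pieces together, a minimal sketch of that call, storing the result in `W.out` and printing it:

```{r}
W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
                  data = lalonde, estimand = "ATT", method = "glm")
W.out
```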
Printing the output of `weightit()` displays a summary of how the weights were estimated. Let's examine the quality of the weights using `summary()`. Weights with low variability are desirable because they improve the precision of the estimator. This variability is presented in several ways, but the most important is the effective sample size (ESS) computed from the weights, which we hope is as close to the original sample size as possible. What constitutes a "large enough" ESS is mostly relative, though, and must be considered with respect to other constraints, including covariate balance.
```{r}
summary(W.out)
```

These weights have quite high variability, and yield an ESS of close to 100 in the control group.
For nearly all the covariates, these weights yielded very good balance. Only `age` remained imbalanced, with a standardized mean difference greater than .05 and a variance ratio greater than 2. Let's see if we can do better. We'll choose a different method: entropy balancing [@hainmuellerEntropyBalancingCausal2012], which guarantees perfect balance on specified moments of the covariates while minimizing the negative entropy (a measure of dispersion) of the weights.
```{r}
W.out <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
                  data = lalonde, estimand = "ATT",
                  method = "ebal")
```
Indeed, we have achieved perfect balance on the means of the covariates. However, the variance ratio of `age` is still quite high. We could continue to try to adjust for this imbalance, but if there is reason to believe it is unlikely to affect the outcome, it may be best to leave it as is. (You can try adding `I(age^2)` to the formula and see what changes this causes.)
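That experiment might look like the following sketch; with entropy balancing, adding the squared term forces the second moment of `age` to match across groups, which also constrains its variance ratio:

```{r, eval = FALSE}
W.out2 <- weightit(treat ~ age + I(age^2) + educ + race + married +
                     nodegree + re74 + re75,
                   data = lalonde, estimand = "ATT", method = "ebal")
```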
Now that we have our weights stored in `W.out`, let's estimate our treatment effect in the weighted sample. The functions `lm_weightit()`, `glm_weightit()`, and friends make it easy to fit (generalized) linear models that account for estimation of the weights in their standard errors. We can then use functions in *marginaleffects* to perform g-computation to extract a treatment effect estimate from the outcome model.
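One way this might look (a sketch, not necessarily the exact code used here; `newdata` restricts the g-computation to the treated units, matching the ATT):

```{r, eval = FALSE}
fit <- lm_weightit(re78 ~ treat, data = lalonde,
                   weightit = W.out)

library("marginaleffects")
avg_comparisons(fit, variables = "treat",
                newdata = subset(lalonde, treat == 1))
```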
Our confidence interval for `treat` contains 0, so there isn't evidence that `treat` has an effect on `re78`. Several types of standard errors are available in *WeightIt*, including analytical standard errors that account for estimation of the weights using M-estimation, robust standard errors that treat the weights as fixed, and bootstrapping. All types are described in detail in `vignette("estimating-effects")`.
## Balancing Weights for a Longitudinal Treatment
*WeightIt* can estimate weights for marginal structural models with longitudinal treatments as well. This time, we'll use the sample data set `msmdata` to estimate our weights. Data must be in "wide" format, with one row per unit.
```{r}
data("msmdata")
head(msmdata)
```
We have a binary outcome variable (`Y_B`), pre-treatment time-varying variables (`X1_0` and `X2_0`, measured before the first treatment; `X1_1` and `X2_1`, measured between the first and second treatments; and `X1_2` and `X2_2`, measured between the second and third treatments), and three time-varying binary treatment variables (`A_1`, `A_2`, and `A_3`). We are interested in the joint, unique, causal effects of each treatment period on the outcome. At each treatment time point, we need to achieve balance on all variables measured prior to that treatment, including previous treatments.
Using *cobalt*, we can examine the initial imbalance at each time point and overall:
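A sketch of that check, requesting balance for each treatment on everything measured before it (the formulas mirror the variable descriptions above):

```{r, eval = FALSE}
bal.tab(list(A_1 ~ X1_0 + X2_0,
             A_2 ~ X1_1 + X2_1 + A_1 + X1_0 + X2_0,
             A_3 ~ X1_2 + X2_2 + A_2 + X1_1 + X2_1 + A_1 + X1_0 + X2_0),
        data = msmdata)
```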
By setting `which.time = .none` in `bal.tab()`, we can focus on the overall balance assessment, which displays the greatest imbalance for each covariate across time points. We can see that our estimated weights balance all covariates at all time points with respect to means and KS statistics. Now we can estimate our treatment effects.
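For reference, that overall check might look like the sketch below, assuming the longitudinal weights were estimated with `weightitMSM()` and stored in `Wmsm.out` (a hypothetical name for the output of that step):

```{r, eval = FALSE}
bal.tab(Wmsm.out, stats = c("m", "ks"), which.time = .none)
```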
First, we fit a marginal structural model for the outcome using `glm_weightit()` with the `weightit` object supplied:
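A sketch of that model, using the same assumed `Wmsm.out` object and a binomial family for the binary outcome `Y_B`:

```{r, eval = FALSE}
fit <- glm_weightit(Y_B ~ A_1 * A_2 * A_3,
                    data = msmdata, weightit = Wmsm.out,
                    family = binomial)
summary(fit)
```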