---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-",
out.width = "100%",
dpi = 300,
warning = FALSE,
message = FALSE
)
```
# aorsf <a href="https://docs.ropensci.org/aorsf/"><img src="man/figures/logo.png" align="right" height="138" /></a>
<!-- badges: start -->
[![Project Status: Active – The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)
[![Codecov test coverage](https://codecov.io/gh/bcjaeger/aorsf/branch/master/graph/badge.svg)](https://app.codecov.io/gh/bcjaeger/aorsf?branch=master)
[![R-CMD-check](https://github.com/bcjaeger/aorsf/workflows/R-CMD-check/badge.svg)](https://github.com/ropensci/aorsf/actions/)
[![Status at rOpenSci Software Peer Review](https://badges.ropensci.org/532_status.svg)](https://github.com/ropensci/software-review/issues/532/)
<a href="https://joss.theoj.org/papers/10.21105/joss.04705"><img src="https://joss.theoj.org/papers/10.21105/joss.04705/status.svg"></a>
[![CRAN status](https://www.r-pkg.org/badges/version/aorsf)](https://CRAN.R-project.org/package=aorsf)
[![DOI](https://zenodo.org/badge/394311897.svg)](https://zenodo.org/doi/10.5281/zenodo.7116854)
<!-- badges: end -->
Fit, interpret, and make predictions with oblique random forests (RFs).
## Why aorsf?
- Fast and versatile tools for oblique RFs.^1^
- Accurate predictions.^2^
- Intuitive design with formula-based interface.
- Extensive input checks and informative error messages.
- Compatible with `tidymodels` and `mlr3`.
## Installation
You can install `aorsf` from CRAN using
``` r
install.packages("aorsf")
```
You can install the development version of aorsf from [GitHub](https://github.com/) with:
``` r
# install.packages("remotes")
remotes::install_github("ropensci/aorsf")
```
## Get started
```{r}
library(aorsf)
library(tidyverse)
```
`aorsf` fits several types of oblique RFs with the `orsf()` function, including classification, regression, and survival RFs.
```{r, child='Rmd/orsf-fit-intro.Rmd'}
```
## What does "oblique" mean?
Decision trees are grown by splitting a set of training data into non-overlapping subsets, with the goal of having more similarity within the new subsets than between them. When subsets are created with a single predictor, the decision tree is *axis-based* because the subset boundaries are perpendicular to the axis of the predictor. When linear combinations (i.e., a weighted sum) of variables are used instead of a single variable, the tree is *oblique* because the boundaries are neither parallel nor perpendicular to the axis.
**Figure**: Decision trees for classification with axis-based splitting (left) and oblique splitting (right). Cases are orange squares; controls are purple circles. Both trees partition the predictor space defined by variables X1 and X2, but the oblique splits do a better job of separating the two classes.
```{r fig_oblique_v_axis, out.width='100%', echo = FALSE}
knitr::include_graphics('man/figures/tree_axis_v_oblique.png')
```
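To make the distinction concrete, here is a minimal sketch of the two split rules for a single observation with two predictors. The weights and cutpoints below are made up for illustration, not fitted values:

``` r
# hypothetical observation with two predictors
bill_length_mm    <- 41
flipper_length_mm <- 195

# axis-based split: one predictor compared to a cutpoint
bill_length_mm <= 43

# oblique split: a linear combination (weighted sum) of predictors
# compared to a cutpoint; the weights here are made up for illustration
0.8 * bill_length_mm - 0.1 * flipper_length_mm <= 14
```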
So, how does this difference translate to real data, and how does it impact random forests comprising hundreds of axis-based or oblique trees? We will demonstrate this using the `penguins_orsf` data.^3^ We will also use the function below to make several plots:
```{r}
plot_decision_surface <- function(predictions, title, grid){
# this is not a general function for plotting
# decision surfaces. It just helps to minimize
# copying and pasting of code.
class_preds <- bind_cols(grid, predictions) %>%
pivot_longer(cols = c(Adelie,
Chinstrap,
Gentoo)) %>%
group_by(flipper_length_mm, bill_length_mm) %>%
arrange(desc(value)) %>%
slice(1)
cols <- c("darkorange", "purple", "cyan4")
ggplot(class_preds, aes(bill_length_mm, flipper_length_mm)) +
geom_contour_filled(aes(z = value, fill = name),
alpha = .25) +
geom_point(data = penguins_orsf,
aes(color = species, shape = species),
alpha = 0.5) +
scale_color_manual(values = cols) +
scale_fill_manual(values = cols) +
labs(x = "Bill length, mm",
y = "Flipper length, mm") +
theme_minimal() +
scale_x_continuous(expand = c(0,0)) +
scale_y_continuous(expand = c(0,0)) +
theme(panel.grid = element_blank(),
panel.border = element_rect(fill = NA),
legend.position = 'none') +
labs(title = title)
}
```
We also use a grid of points for plotting decision surfaces:
```{r}
grid <- expand_grid(
flipper_length_mm = seq(min(penguins_orsf$flipper_length_mm),
max(penguins_orsf$flipper_length_mm),
len = 200),
bill_length_mm = seq(min(penguins_orsf$bill_length_mm),
max(penguins_orsf$bill_length_mm),
len = 200)
)
```
We use `orsf()` with `mtry = 1` to fit a single axis-based tree:
```{r}
fit_axis_tree <- penguins_orsf %>%
orsf(species ~ bill_length_mm + flipper_length_mm,
n_tree = 1,
mtry = 1,
tree_seeds = 106760)
```
Next we use `orsf_update()` to copy and modify the original model, switching from `mtry = 1` to `mtry = 2` to grow oblique trees instead of axis-based ones, and from 1 tree to 500 trees to grow forests instead of single trees:
```{r}
fit_axis_forest <- fit_axis_tree %>%
orsf_update(n_tree = 500)
fit_oblique_tree <- fit_axis_tree %>%
orsf_update(mtry = 2)
fit_oblique_forest <- fit_oblique_tree %>%
orsf_update(n_tree = 500)
```
And now we have all we need to visualize decision surfaces using predictions from these four fits:
```{r}
preds <- list(fit_axis_tree,
fit_axis_forest,
fit_oblique_tree,
fit_oblique_forest) %>%
map(predict, new_data = grid, pred_type = 'prob')
titles <- c("Axis-based tree",
"Axis-based forest",
"Oblique tree",
"Oblique forest")
plots <- map2(preds, titles, plot_decision_surface, grid = grid)
```
**Figure**: Axis-based and oblique decision surfaces from a single tree and an ensemble of 500 trees. Axis-based trees have boundaries perpendicular to predictor axes, whereas oblique trees can have boundaries that are neither parallel nor perpendicular to predictor axes. Axis-based forests tend to have 'step-function' decision boundaries, while oblique forests tend to have smooth decision boundaries.
```{r, echo=FALSE}
cowplot::plot_grid(plotlist = plots)
```
## Variable importance
`aorsf` can estimate the importance of individual predictor variables in three ways, and each method can be used with any type of oblique RF. Variable importance functions always return a named numeric vector:
- **negation**^2^: `r aorsf:::roxy_vi_describe('negate')`
```{r}
orsf_vi_negate(pbc_fit)
```
- **permutation**: `r aorsf:::roxy_vi_describe('permute')`
```{r}
orsf_vi_permute(penguin_fit)
```
- **analysis of variance (ANOVA)**^4^: `r aorsf:::roxy_vi_describe('anova')`
```{r}
orsf_vi_anova(bill_fit)
```
You can supply your own R function to estimate out-of-bag error (see the [oob vignette](https://docs.ropensci.org/aorsf/articles/oobag.html)) or to estimate out-of-bag variable importance (see the [orsf_vi examples](https://docs.ropensci.org/aorsf/reference/orsf_vi.html#examples)).
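As a rough sketch of the first option, a user-supplied out-of-bag error function for a survival fit might look like the following. This assumes the three-argument `y_mat`, `w_vec`, `s_vec` interface described in the oob vignette; check the vignette for the exact requirements in your version of `aorsf`:

``` r
# sketch of a custom out-of-bag error function, assuming oobag_fun receives
# y_mat (outcome matrix with time and status), w_vec (case weights), and
# s_vec (out-of-bag predicted survival)
oobag_fun_sketch <- function(y_mat, w_vec, s_vec){
  status <- y_mat[, 2L]
  # weighted agreement between predicted survival and observed outcome;
  # higher values indicate better predictions in this sketch
  stats::weighted.mean(ifelse(status == 1, 1 - s_vec, s_vec), w = w_vec)
}

# hypothetical usage with the pbc data used elsewhere in this README:
# fit <- orsf(pbc_orsf, Surv(time, status) ~ . - id, oobag_fun = oobag_fun_sketch)
```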
## Partial dependence (PD)
`r aorsf:::roxy_pd_explain()`. You can use specific values for a predictor to compute PD or let `aorsf` pick reasonable values for you if you use `pred_spec_auto()`:
```{r}
# pick your own values
orsf_pd_oob(bill_fit, pred_spec = list(species = c("Adelie", "Gentoo")))
# let aorsf pick reasonable values for you:
orsf_pd_oob(bill_fit, pred_spec = pred_spec_auto(bill_depth_mm, island))
```
The summary function, `orsf_summarize_uni()`, computes PD for as many variables as you ask it to, using sensible values.
```{r}
orsf_summarize_uni(pbc_fit, n_variables = 2)
```
For more on PD, see the [vignette](https://docs.ropensci.org/aorsf/articles/pd.html).
## Individual conditional expectations (ICE)
`r aorsf:::roxy_ice_explain()`
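As with PD, ICE values can be computed from out-of-bag predictions. A brief sketch, reusing `bill_fit` and `pred_spec_auto()` from the PD example above:

``` r
# out-of-bag ICE: one row per observation per value of the predictor,
# rather than one aggregated row per value as in PD
orsf_ice_oob(bill_fit, pred_spec = pred_spec_auto(bill_depth_mm))
```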
For more on ICE, see the [vignette](https://docs.ropensci.org/aorsf/articles/pd.html#individual-conditional-expectations-ice).
## Interaction scores
The `orsf_vint()` function computes a score for each possible interaction in a model based on PD, using the method described in Greenwell et al., 2018.^5^ It can be slow for larger datasets, but you can speed it up substantially by using multi-threading and restricting the search to a smaller set of predictors.
```{r}
preds_interaction <- c("albumin", "protime", "bili", "spiders", "trt")
# While it is tempting to speed up `orsf_vint()` by growing a smaller
# number of trees, results may become unstable with this shortcut.
pbc_interactions <- pbc_fit %>%
orsf_update(n_tree = 500, tree_seeds = 329) %>%
orsf_vint(n_thread = 0, predictors = preds_interaction)
pbc_interactions
```
What do the values in `score` mean? These values are the average of the standard deviation of the standard deviation of PD in one variable conditional on the other variable. They should be interpreted relative to one another, i.e., a higher scoring interaction is more likely to reflect a real interaction between two variables than a lower scoring one.
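To make that definition concrete, here is a small sketch of the calculation on a hypothetical grid of PD values for two predictors. This is purely illustrative; `orsf_vint()` computes PD from the fitted forest and handles these steps, including any centering, internally:

``` r
# hypothetical PD values on a 3 x 3 grid of two predictors, x1 and x2
pd_grid <- expand.grid(x1 = 1:3, x2 = 1:3)
pd_grid$pd <- c(0.20, 0.30, 0.50, 0.25, 0.40, 0.70, 0.30, 0.55, 0.90)

# standard deviation of PD across x2, conditional on each value of x1, and vice versa
sd_pd_given_x1 <- tapply(pd_grid$pd, pd_grid$x1, sd)
sd_pd_given_x2 <- tapply(pd_grid$pd, pd_grid$x2, sd)

# the score averages the standard deviation of those conditional standard
# deviations across the two directions
mean(c(sd(sd_pd_given_x1), sd(sd_pd_given_x2)))
```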
Do these interaction scores make sense? Let's test the top scoring and lowest scoring interactions using `coxph()`.
```{r}
library(survival)
# the top scoring interaction should get a lower p-value
anova(coxph(Surv(time, status) ~ protime * albumin, data = pbc_orsf))
# the bottom scoring interaction should get a higher p-value
anova(coxph(Surv(time, status) ~ spiders * trt, data = pbc_orsf))
```
Note: this is exploratory and not a true null hypothesis test. Why? Because we used the same data both to generate and to test the null hypothesis. We are not so much conducting statistical inference when we test these interactions with `coxph` as we are demonstrating that the interaction scores `orsf_vint()` provides are consistent with tests from other models.
## Comparison to existing software
For survival analysis, comparisons between `aorsf` and existing software are presented in our [JCGS paper](https://doi.org/10.1080/10618600.2023.2231048). The paper:
- describes `aorsf` in detail, with a summary of the procedures used in the tree fitting algorithm
- runs a general benchmark comparing `aorsf` with `obliqueRSF` and several other learners
- reports prediction accuracy and computational efficiency of all learners
- runs a simulation study comparing variable importance techniques with oblique survival RFs, axis-based survival RFs, and boosted trees
- reports the probability that each variable importance technique will rank a relevant variable with higher importance than an irrelevant variable
<!-- A more hands-on comparison of `aorsf` and other R packages is provided in [orsf examples](https://docs.ropensci.org/aorsf/reference/orsf.html#tidymodels) -->
## References
1. `r aorsf:::cite("jaeger_2019")`
1. `r aorsf:::cite("jaeger_2022")`
1. `r aorsf:::cite("penguins_2020")`
1. `r aorsf:::cite("menze_2011")`
1. `r aorsf:::cite("greenwell_2018")`
## Funding
The developers of `aorsf` received financial support from the Center for Biomedical Informatics, Wake Forest University School of Medicine. We also received support from the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR001420.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.