---
title: <center>Bayesian analysis template</center>
author: <center>Phelan, C., Hullman, J., Kay, M. & Resnick, P.</center>
output:
html_document:
theme: flatly
highlight: pygments
---
<br>
<center><span style="color:#3875d8;font-size:1.5em">*Template 5:*</span>
![](images/generic_2line_chart.png)
<span style="color:#3875d8;font-size:2em">**Interaction of one categorical & one ordinal independent variable (line graph)**</span></center>
## Introduction
Welcome! This template will guide you through a Bayesian analysis in R, even if you have never done Bayesian analysis before. There is a set of templates, each for a different type of analysis. This template is for data with **two interacting independent variables, one categorical and one ordinal**, and will produce a **line graph**. If your analysis includes a **two-way ANOVA**, this might be the right template for you. Note, however, that in most cases we *do not recommend* using line charts for this type of analysis; a bar chart is usually the better option.
This template assumes you have basic familiarity with R. Once complete, this template will produce a summary of the analysis, complete with parameter estimates and credible intervals, and two animated HOPs (Hypothetical Outcome Plots; see Hullman, Resnick, and Adar 2015, DOI: 10.1371/journal.pone.0142444, and Kale, Nguyen, Kay, and Hullman, VIS 2018, for more information), one each for your prior and posterior estimates.
This Bayesian analysis focuses on producing results in a form that is easily interpretable, even to nonexperts. The credible intervals produced by Bayesian analysis are the analogue of confidence intervals in traditional null hypothesis significance testing (NHST). A weakness of NHST confidence intervals is that they are easily misinterpreted. Many people naturally interpret an NHST 95% confidence interval to mean that there is a 95% chance that the true parameter value lies somewhere in that interval; in fact, it means that if the experiment were repeated 100 times, 95 of the resulting confidence intervals would include the true parameter value. The Bayesian credible interval sidesteps this complication by actually having the intuitive meaning: there is a 95% chance that the true parameter value lies somewhere in that interval. To further support intuitive interpretation of your results, this template also produces animated HOPs, a type of plot that is more effective than static visualizations such as error bars in helping people make accurate judgments about probability distributions.
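To make this concrete, here is a minimal sketch (not part of the template workflow) of how a central 95% credible interval can be read off a set of posterior draws. The vector `posterior_draws` is hypothetical:
```{r, eval=FALSE}
# Hypothetical example: posterior_draws is a numeric vector of posterior
# draws for a single parameter. The central 95% credible interval is just
# the 2.5th and 97.5th percentiles of the draws, an interval with a 95%
# chance of containing the true parameter value.
quantile(posterior_draws, probs = c(0.025, 0.975))
```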
This set of templates supports a few types of statistical analysis. (In future work, this list of supported statistical analyses will be expanded.) For clarity, each type has been broken out into a separate template, so be sure to select the right template before you start! A productive way to choose which template to use is to think about what type of chart you would like to produce to summarize your data. Currently, the templates support the following:
*One independent variable:*
1. Categorical; bar graph (e.g. t-tests, one-way ANOVA)
2. Ordinal; line graph (e.g. t-tests, one-way ANOVA)
3. Continuous; line graph (e.g. linear regression)
*Two interacting independent variables:*
4. Two categorical; bar graph (e.g. two-way ANOVA)
5. **One categorical, one ordinal; line graph (e.g. two-way ANOVA)**
6. One categorical, one continuous; line graph (e.g. linear regression with multiple lines)
Note that this template fits your data to a model that assumes normally distributed error terms. (This is the same assumption underlying t-tests, ANOVA, etc.) This template requires you to have already run diagnostics to determine that your data is consistent with this assumption; if you have not, the results may not be valid.
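If you still need to run that check, one simple approach is to fit an ordinary linear model and inspect the residuals. The sketch below is optional and assumes `mydata`, `x1`, `x2`, and `y` are already set up as in the "Specify model" section later in this template:
```{r, eval=FALSE}
# Optional sketch of a normality check, assuming mydata, x1, x2, and y are
# defined as in the "Specify model" chunk below. Fit an ordinary linear
# model and inspect the residuals, which should look roughly normal.
fit_check = lm(y ~ x1 * x2, data = mydata)
hist(residuals(fit_check), main = "Residuals", xlab = "Residual")
qqnorm(residuals(fit_check))
qqline(residuals(fit_check))
```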
Once you have selected your template, follow along with it to complete the analysis. For each code chunk, you may need to make changes to customize the code for your own analysis. In those places, the code chunk is preceded by a list of things you need to change (under the heading <span style="color:red">"What to change"</span>), and each line that needs to be customized also includes the comment `#CHANGE ME` within the code chunk itself. You can run each code chunk independently during debugging; when you're finished, you can knit the file to produce the complete document.
Good luck!
### Tips before you start
1. Make sure you have picked the right template! (See above.)
2. Use the pre-knitted HTML version of this template as a reference as you work (we've included all the HTML files in the folder `html_outputs`); the formatting makes the template easier to follow. You can also knit this document as you work, once you have completed setup.
3. Make sure you are using the most recent version of the templates. Updates can be found at https://github.com/cdphelan/bayesian-template.
### Sample dataset
This template comes prefilled with an example dataset from Moser et al. (DOI: 10.1145/3025453.3025778), which examines choice overload in the context of e-commerce. The study examined the relationship between choice satisfaction (measured on a 7-point Likert scale), the number of product choices presented on a webpage, and whether the participant is a decision "maximizer" (a person who examines all options and tries to choose the best) or a "satisficer" (a person who selects the first option that is satisfactory). We analyze the relationship between choice set size, which we treat as an ordinal variable with possible values [12, 24, 40, 50, 60, 72]; type of decision-making (maximizer or satisficer), a two-level categorical variable; and choice satisfaction, which we treat as a continuous variable with values that can fall in the range [1, 7].
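If you'd like a quick look at the example data before starting, the sketch below reads the file and summarizes the three columns used in this template (the column names come from the "Specify model" chunk later on):
```{r, eval=FALSE}
# Optional: peek at the example dataset. The columns num_products_displayed,
# sat_max, and satis_Q1 are the ones selected in the "specify_model" chunk.
example_data = read.csv('datasets/choc_cleaned_data.csv')
head(example_data[, c("num_products_displayed", "sat_max", "satis_Q1")])
table(example_data$num_products_displayed, example_data$sat_max)
```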
## Set up
### Requirements
To run this template, we assume that you are using RStudio, and you have the most recent version of R installed. (This template was built with R version 3.5.1.)
This template works best if you first open the file `bayesian-template.Rproj` from the code repository as a project in RStudio to get started, and then open the individual `.Rmd` template files after this.
### Libraries
<span style="color:red">**Installation:**</span>
If this is your first time using the template, you may need to install libraries.
1. **If you are using Windows,** first you will need to manually install RStan and Rtools. Follow the instructions [here](https://github.com/stan-dev/rstan/wiki/Installing-RStan-on-Windows) to install both.
2. On both Mac and Windows, uncomment the line with `install.packages()` to install the required packages. This only needs to be done once.
<span style="color:red">**Troubleshooting:**</span>
You may have some trouble installing the packages, especially if you are on Windows. Regardless of OS, if you have any issues installing these packages, try one or more of the following troubleshooting options:
1. Restart R.
2. Make sure you are running the most recent version of R (3.5.1, as of the writing of this template).
3. Manually install RStan and Rtools, following the instructions [here](https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started).
4. If you have tried the above and you are still getting error messages like `there is no package called [X]`, try installing the missing package(s) manually using the RStudio interface under Tools > Install Packages...
```{r libraries, message=FALSE, warning=FALSE}
knitr::opts_chunk$set(fig.align="center")
# install.packages(c("ggplot2", "rstanarm", "tidyverse", "tidybayes", "modelr", "gganimate"))
library(rstanarm) #bayesian analysis package
library(tidyverse) #tidy data science commands
library(tidybayes) #tidy data + ggplot workflow
library(modelr) #tidy pipelines for modeling
library(ggplot2) #plotting package
library(gganimate) #animate ggplots
# We import all of our plotting functions from this separate R file to keep the code in
# this template easier to read. You can edit this file to customize aesthetics of the plots
# if desired. Just be sure to run this line again after you make edits!
source('plotting_functions.R')
theme_set(theme_light()) # set the ggplot theme for all plots
```
### Read in data
<span style="color:red">**What to change**</span>
1. mydata: Read in your data.
```{r data_prep}
mydata = read.csv('datasets/choc_cleaned_data.csv') #CHANGE ME 1
```
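Before moving on, it can help to confirm that the data loaded as expected. This optional check just prints the column types and value ranges:
```{r, eval=FALSE}
# Optional sanity check: confirm the columns and value ranges look right
# before specifying the model.
str(mydata)
summary(mydata)
```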
## Specify model
We'll fit the following model: `stan_glm(y ~ x1 * x2)`, where $x_1$ is an ordinal variable and $x_2$ is a categorical variable. This specifies a linear regression with dummy variables for each non-baseline level of $x_1$ and $x_2$, plus an interaction term (with its own coefficient) for each combination of those levels. **This is equivalent to a two-way ANOVA.** So for example, for a regression where $x_1$ has three levels and $x_2$ has two levels, each $y_i$ is drawn from a normal distribution with mean equal to the intercept $a$ plus the applicable dummy and interaction terms, and standard deviation `sigma` ($\sigma$):
$$
\begin{aligned}
y_i \sim Normal(&a + b_{x1a}dummy_{x1a} + b_{x1b}dummy_{x1b} + b_{x2}dummy_{x2}\ + \\
&b_{x1a:x2}dummy_{x1a}dummy_{x2} + b_{x1b:x2}dummy_{x1b}dummy_{x2},\ \sigma)
\end{aligned}
$$
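If you want to see exactly which dummy and interaction columns `stan_glm()` constructs for your data, `model.matrix()` shows the design matrix for the same formula. This optional sketch assumes `mydata$x1`, `mydata$x2`, and `mydata$y` have been set as in the next code chunk:
```{r, eval=FALSE}
# Optional: inspect the dummy coding used for y ~ x1 * x2. Run this after
# the "specify_model" chunk below. There is one column per non-baseline
# level of x1 and x2, plus one column per interaction of those levels.
head(model.matrix(y ~ x1 * x2, data = mydata))
```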
Choose your independent and dependent variables. These are the variables that will correspond to the x- and y-axes on the final plots.
<span style="color:red">**What to change**</span>
2. mydata\$x1: Select which variable will appear on the x-axis of your plots. This is your ordinal variable.
3. mydata\$x2: Select the second independent variable, the categorical variable. You will have one line in the output graph for each level of this variable.
4. mydata\$y: Select which variable will appear on the y-axis of your plots.
5. x_lab: Label your plots' x-axes.
6. y_lab: Label your plots' y-axes.
```{r specify_model}
#select your independent and dependent variables
mydata$x1 = as.factor(mydata$num_products_displayed) #CHANGE ME 2
mydata$x2 = mydata$sat_max #CHANGE ME 3
mydata$y = mydata$satis_Q1 #CHANGE ME 4
# label the axes on the plots
x_lab = "Choices" #CHANGE ME 5
y_lab = "Satisfaction" #CHANGE ME 6
```
### Set priors
In this section, you will set priors for your model. Setting priors thoughtfully is important to any Bayesian analysis, especially if you have only a small sample of data for fitting your model. The priors express your best belief, *before seeing any data*, about reasonable values for the model parameters.
Ideally, you will have previous literature from which to draw these prior beliefs. If no previous studies exist, you can instead assign "weakly informative priors" that only minimally restrict the model, excluding only values that are implausible or impossible. We have provided examples of how to set both weak and strong priors below.
To check the plausibility of your priors, use the code section after this one, which generates a graph of five sample draws from your priors so you can see whether the values they produce are reasonable.
Our model has the following parameters:
a. the mean y-value of the control condition (the intercept)
b. the effect sizes: the differences in mean y-value between the control condition and the other conditions
c. the standard deviation of the normally distributed error term
To simplify things, we limit the number of different prior beliefs you can have. Think of the first level of the ordinal variable as specifying the control condition of an experiment, and all of the other levels as treatment conditions. We let you specify a prior belief about the plausible values of the mean in the control condition (a), and then a prior belief about the plausible effect size (b). You have to specify the same plausible effect size for all conditions, unless you dig deeper into our code.
To simplify things further, we only let you specify beliefs about these parameters in the form of a normal distribution. You will specify what you think is the most likely value for the parameter (the mean) and a standard deviation. This expresses a belief that you are 95% certain, before looking at any data, that the true value of the parameter is within two standard deviations of the mean.
Finally, our modeling system, `stan_glm()`, will automatically set priors for the last parameter, the standard deviation of the normally distributed error term for the model overall (c).
To explore more about priors, you can experiment with different values for these parameters and use the following section, *Checking priors with visualizations*, to see how different parameter values change the prior distribution.
Want more examples? Check your understanding of how to set priors in this [quizlet](https://cdphelan.shinyapps.io/check_understanding_priors/), which includes several more examples of how to set both strong and weak priors.
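To build intuition for what a normal prior expresses, you can also plot its density directly. This optional sketch uses the example values from the weakly informative prior chunk below (`a_prior = 4`, `a_prior_max = 7`, so `a_sd = 1.5`):
```{r, eval=FALSE}
# Optional: visualize the prior on the control condition mean. With
# a_prior = 4 and a_sd = (7 - 4) / 2 = 1.5, about 95% of the prior mass
# lies between 1 and 7, the possible range of the example data.
curve(dnorm(x, mean = 4, sd = 1.5), from = 0, to = 8,
      xlab = "Control condition mean", ylab = "Prior density")
```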
<span style="color:red">**What to change**</span>
**If you are using weakly informative priors (i.e. priors not informed by previous literature):**
*Remember: **do not** use any of your data from the current study to inform prior values.*
7. a_prior: Select the control condition mean.
8. a_prior_max: Select the maximum plausible value of the control condition data. (We will use this to calculate the sd of `a`.)
9. b1_prior: Select the effect size mean.
10. b1_sd: Select the effect size standard deviation.
11. You should also change the comments in the code below to explain your choice of priors.
**If you are using strong priors (i.e. priors from previous literature):**
Skip this code chunk and set your priors in the next code chunk. For clarity, comment out everything in this code chunk.
```{r}
# CHANGE THIS COMMENT EXPLAINING YOUR CHOICE OF PRIORS (11)
# In our example dataset, y-axis scores can be in the range [1, 7].
# In the absence of other information, we set the parameter mean as 4
# (the mean of the range [1,7]) and the maximum possible value as 7.
# From exploratory analysis, we know the mean score and sd for y in our
# dataset but we *DO NOT* use this information because priors *CANNOT*
# include any information from the current study.
a_prior = 4 # CHANGE ME 7
a_prior_max = 7 # CHANGE ME 8
# With a normal distribution, we can't completely rule out
# impossible values, but we choose an sd that assigns less than
# 5% probability to those impossible values. Remember that in a normal
# distribution, 95% of the data lies within 2 sds of the mean. Therefore,
# we calculate the value of 1 sd by finding the maximum amount our data
# can vary from the mean (a_prior_max - a_prior) and dividing that in half.
a_sd = (a_prior_max - a_prior) / 2 # do not change
# CHANGE THIS COMMENT EXPLAINING YOUR CHOICE OF PRIORS (11)
# In our example dataset, we do not have a strong hypothesis that the treatment
# conditions will be higher or lower than the control, so we set the mean of
# the effect size parameters to be 0. In the absence of other information, we
# set the sd to be the same as for the control condition.
b1_prior = 0 # CHANGE ME 9
b1_sd = a_sd # CHANGE ME 10
```
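As a quick check on the comment above (that the prior assigns less than 5% probability to impossible values), you can compute the tail probabilities directly. This optional line assumes the chunk above has been run:
```{r, eval=FALSE}
# Optional check: the prior probability that the control condition mean
# falls outside the possible range [1, 7]. With a_prior = 4 and a_sd = 1.5,
# this is about 0.046, i.e. under 5%.
pnorm(1, mean = a_prior, sd = a_sd) + pnorm(7, mean = a_prior, sd = a_sd, lower.tail = FALSE)
```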
<span style="color:red">**What to change**</span>
**If you are using weakly informative priors:**
Do not use this code chunk; use the code chunk above to set your priors instead. Make sure everything in this code chunk is commented out so that your priors are not overwritten.
**If you are using strong priors (i.e. priors from previous literature):**
*Remember: **do not** use any of your data from the current study to set prior values.*
First, make sure to uncomment all four variables set in this code chunk.
7. a_prior: Select the control condition mean.
8. a_sd: Select the control condition standard deviation.
9. b1_prior: Select the effect size mean.
10. b1_sd: Select the effect size standard deviation.
11. You should also change the comments in the code below to explain your choice of priors.
```{r}
# CHANGE THIS COMMENT EXPLAINING YOUR CHOICE OF PRIORS (11)
# In our example dataset, y-axis scores can be in the range [1, 7].
# To choose our priors, we use the results from a previous study
# where participants completed an identical task (choosing between
# different chocolate bars). For our overall prior mean, we pool the mean
# satisfaction scores from all conditions in the previous study to get
# an overall mean of 5.86. We set a_sd so that 5.86 +/- 2 sds encompasses
# the 95% confidence intervals from the previous study results.
# a_prior = 5.86 # CHANGE ME 7
# a_sd = 0.6 # CHANGE ME 8
# CHANGE THIS COMMENT EXPLAINING YOUR CHOICE OF PRIORS (11)
# In our example dataset, we do not have guidance from previous literature
# to set an exact effect size, but we do know that satisficers (the "treatment"
# condition) are likely to have higher mean satisfaction than the maximizers
# (the "control" condition), so we set an effect size parameter mean that
# results in a 1 point increase in satisfaction for satisficers. To reflect
# the uncertainty in this effect size, we select a broad sd so that there is
# a ~20% chance that the effect size will be negative.
# b1_prior = 1 # CHANGE ME 9
# b1_sd = 1 # CHANGE ME 10
```
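You can sanity-check a strong prior the same way. For example, the comment above claims a ~20% chance that the effect size will be negative; the optional line below computes that probability for `b1_prior = 1` and `b1_sd = 1`:
```{r, eval=FALSE}
# Optional check on the strong effect size prior: with b1_prior = 1 and
# b1_sd = 1, the prior probability of a negative effect is about 0.16,
# in the ballpark of the ~20% described in the comment above.
pnorm(0, mean = 1, sd = 1)
```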
### Checking priors with visualizations
Next, you'll want to check your priors by running this code chunk. It will produce a set of five sample plots drawn from the priors you set in the previous section, so you can check to see if the values generated are reasonable.
You'll also want to run the code chunk after this one, `HOPs_priors`, which presents plots of sample prior draws in an animated format called HOPs (Hypothetical Outcome Plots). HOPs are a type of plot that visualizes uncertainty as sets of draws from a distribution, and they have been demonstrated to improve multivariate probability estimates (Hullman et al. 2015) and increase sensitivity to the underlying trend in data (Kale et al. 2018) compared to static representations of uncertainty like error bars.
#### Static visualization of priors
<span style="color:red">**What to change**</span>
Nothing! Just run this code to check your priors, adjusting prior values above as needed until you find reasonable ones. Note that you may get a couple of very implausible or even impossible values, because our assumption of normally distributed priors assigns a small probability even to very extreme values. If you are concerned by the outcome, try rerunning the chunk a few more times to make sure that any implausible values you see don't come up very often.
<span style="color:red">**Troubleshooting**</span>
* In rare cases, you may get a warning that the Markov chains have failed to converge. Chains that fail to converge are a sign that your model is not a good fit to the data. If you get this warning, you should adjust your priors: your prior distribution may be too narrow, and/or your prior mean may be very far from the data.
* If you get any other errors, first double-check the values you have changed in the code chunks above (i.e. `mydata`, `mydata$x1`, `mydata$x2`, `mydata$y`, and prior values). Problems with these values can cause confusing errors downstream.
```{r check_priors, results="hide"}
# generate the prior distribution
m_prior = stan_glm(y ~ x1*x2, data = mydata,
prior_intercept = normal(a_prior, a_sd, autoscale = FALSE),
prior = normal(b1_prior, b1_sd, autoscale = FALSE),
prior_PD = TRUE
)
# Create the dataframe with fitted draws
prior_draws = mydata %>% #pipe mydata to data_grid()
  data_grid(x1, x2) %>% #create a fit grid with each combination of x1 and x2, and pipe it to add_fitted_draws()
  add_fitted_draws(m_prior, n = 5, seed = 12345) #add 5 fitted draws from the model to the fit grid
# the seed argument is for reproducibility: it ensures the pseudo-random
# number generator used to pick draws has the same seed on every run,
# so that someone else can re-run this code and verify their output matches
# Plot the five sample draws
# this function is defined in 'plotting_functions.R', if you wish to customize the aesthetics.
static_prior_plot_5(prior_draws)
```
#### Animated visualization of priors
The five static draws above give us some idea of what the prior distribution might look like. Even better, we can animate this graph using HOPs, which are better for visualizing uncertainty and identifying underlying trends. HOPs visualize the same information as the static plot generated above. However, with HOPs we can show more draws: with the static plot, we run out of room after only about five draws!
In this code chunk, we add more draws to the `prior_draws` dataframe, so we have a total of 50 draws to visualize, and then create the animated plot. Each frame of the animation shows a different draw from the prior, starting with the same five draws as the static image above.
<span style="color:red">**What to change:**</span> Nothing! Just run the code to check your priors.
```{r HOPs_priors}
# Animation parameters
n_draws = 50 # the number of draws to visualize in the HOPs
frames_per_second = 2.5 # the speed of the HOPs
# 2.5 frames per second (400ms) is the recommended speed for the HOPs visualization.
# Faster speeds (100ms) have been demonstrated to not work as well.
# See Kale et al. VIS 2018 for more info.
# Add more prior draws to the data frame for the visualization
more_prior_draws = prior_draws %>%
rbind(
mydata %>%
data_grid(x1, x2) %>%
add_fitted_draws(m_prior, n = n_draws - 5, seed = 12345))
# Animate the prior draws with HOPs
# this function is defined in 'plotting_functions.R', if you wish to customize the aesthetics.
prior_HOPs = animate(HOPS_plot_5(more_prior_draws), nframes = n_draws * 2, fps = frames_per_second)
prior_HOPs
```
In most cases, your prior HOPs will show a lot of uncertainty: the lines will jump around to a lot of different possible values. At the end of the template, you'll see how this uncertainty is affected when study data is added to the estimates.
Even when you see a lot of uncertainty in the graph, the individual HOPs frames should mostly show plausible values. You will see some implausible values (usually represented as empty graphs, or lines that reach or exceed the plot's maximum y-value), but if you see many implausible values, it may be a sign that you should adjust your priors in the "Set priors" section.
### Run the model
There's nothing you have to change here. Just run the model.
<span style="color:red">**Troubleshooting:**</span> If this code produces errors, check the troubleshooting section under the "Check priors" heading above for a few troubleshooting options.
```{r results = "hide", message = FALSE, warning = FALSE}
m = stan_glm(y ~ x1*x2, data = mydata,
prior_intercept = normal(a_prior, a_sd, autoscale = FALSE),
prior = normal(b1_prior, b1_sd, autoscale = FALSE)
)
```
## Model summary
Here is a summary of the model fit.
The summary reports diagnostic values that can help you evaluate whether your model is a good fit for the data. For this template, we can keep diagnostics simple: check that your `Rhat` values are very close to 1.0. Larger values mean that your model is not a good fit for the data. This is usually only a problem if the `Rhat` values are greater than 1.1, which is a warning sign that the Markov chains have failed to converge. If this happens, Stan will warn you about the failure, and you should adjust your priors.
```{r}
summary(m, digits=3)
```
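If you prefer a programmatic version of the `Rhat` check, the summary is a matrix that includes an `Rhat` column, so you can flag any parameters above the 1.1 threshold directly. This optional check assumes the model `m` has been fit:
```{r, eval=FALSE}
# Optional: flag any parameter whose Rhat exceeds 1.1.
# Ideally this prints an empty vector.
rhats = summary(m)[, "Rhat"]
rhats[rhats > 1.1]
```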
## Visualizing results
To plot the results, we again create a fit grid using `data_grid()`, just as we did when we created the HOPs for the prior. Given this fit grid, we can then create any number of visualizations of the results. One way we might want to visualize the results is a static graph with error bars that represent a 95% credible interval. For each x position in the fit grid, we can get the posterior mean estimates and 95% credible intervals from the model:
```{r static_graph}
# Create the dataframe with fitted draws
fit = mydata %>% #pipe mydata to data_grid()
  data_grid(x1, x2) %>% #create a fit grid with each combination of x1 and x2, and pipe it to add_fitted_draws()
  add_fitted_draws(m) %>% #add all fitted draws from the model to the fit grid
  mean_qi(.width = .95) #compute posterior means and 95% credible intervals
# Plot the posterior draws
# this function is defined in 'plotting_functions.R', if you wish to customize the aesthetics.
static_post_plot_5(fit)
```
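If you also want the numbers behind the static plot, the `fit` dataframe built above contains the posterior mean (`.value`) and the credible interval bounds (`.lower`, `.upper`) for each combination of `x1` and `x2`:
```{r, eval=FALSE}
# Optional: view the estimates behind the static plot, one row per
# combination of x1 and x2.
fit %>% select(x1, x2, .value, .lower, .upper)
```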
#### Animated HOPs visualization
To get a better visualization of the uncertainty remaining in the posterior results, we can use animated HOPs for this graph as well. The code to generate the posterior plots is identical to the HOPs code for the priors, except we replace `m_prior` with `m`:
```{r}
p = mydata %>% #pipe mydata to data_grid()
  data_grid(x1, x2) %>% #create a fit grid with each combination of x1 and x2, and pipe it to add_fitted_draws()
  add_fitted_draws(m, n = n_draws, seed = 12345) #add n_draws fitted draws from the model to the fit grid
# the seed argument is for reproducibility: it ensures the pseudo-random
# number generator used to pick draws has the same seed on every run,
# so that someone else can re-run this code and verify their output matches
#animate the posterior draws in p with HOPs
# this function is defined in 'plotting_functions.R', if you wish to customize the aesthetics.
post_HOPs = animate(HOPS_plot_5(p), nframes = n_draws * 2, fps = frames_per_second)
post_HOPs
```
### Comparing the prior and posterior
If we look at our two HOPs plots together, one of the prior distribution and one of the posterior, we can see how adding information to the model (i.e. the study data) adds more certainty to our estimates, and produces a posterior graph that is more "settled" than the prior graph.
<center><span style="font-size:1.5em">**Prior draws**</span></center>
```{r echo=F}
prior_HOPs
```
<center><span style="font-size:1.5em">**Posterior draws**</span></center>
```{r echo=F}
post_HOPs
```
## Finishing up
**Congratulations!** You made it through your first Bayesian analysis. We hope our templates helped demystify the process.
If you're interested in learning more about Bayesian statistics, we suggest the following textbooks:
- Statistical Rethinking, by Richard McElreath. (Website: https://xcelab.net/rm/statistical-rethinking/, including links to YouTube lectures.)
- Doing Bayesian Data Analysis, by John K. Kruschke. (Website: https://sites.google.com/site/doingbayesiandataanalysis/, including R code templates.)
The citation for the paper reporting the development and user testing of these templates is below:
Chanda Phelan, Jessica Hullman, Matthew Kay, and Paul Resnick. 2019. Some Prior(s) Experience Necessary: Templates for Getting Started with Bayesian Analysis. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland UK. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3290605.3300709