Comparing probabilistic accounts of probability judgments

Pre-print: https://psyarxiv.com/2bk6f/
OSF: https://osf.io/bpkjf/

Abstract

Bayesian theories of cognitive science hold that cognition is fundamentally probabilistic, but people’s explicit probability judgments often violate the laws of probability. Two recent proposals, the "Probability Theory plus Noise" (Costello & Watts, 2014) and "Bayesian Sampler" (Zhu et al., 2020) theories of probability judgments, both seek to account for these biases while maintaining that mental credences are fundamentally probabilistic. These models differ in their averaged predictions about people's conditional probability judgments and in their distributional predictions about their overall patterns of judgments. In particular, the Bayesian Sampler's Bayesian adjustment process predicts a truncated range of responses as well as a correlation between the average degree of bias and trial-to-trial variability. However, exploring these distributional predictions with participants' raw responses requires a careful treatment of rounding errors and exogenous response processes. Here, I cast these theories into a Bayesian data analysis framework that supports the treatment of these issues along with principled model comparison using information criteria. Comparing the fits of both models on data collected by Zhu and colleagues (2020), I find the data are best explained by an account of biases based on "noise" in the sample-reading process.
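To make the contrast concrete, here is a minimal illustrative sketch of the mean judgments the two accounts predict for a single query with underlying subjective probability p. This is not code from this repository; parameter names follow the cited papers: a noise rate d for Probability Theory plus Noise, and N mental samples with a symmetric Beta(β, β) prior for the Bayesian Sampler.

```python
# Illustrative sketch of the two accounts' expected judgments for a query
# with underlying subjective probability p. Parameter names follow the
# cited papers; this is not the repository's models.py.

def ptn_mean(p, d):
    """Probability Theory plus Noise: each mental sample is misread with rate d."""
    return (1 - 2 * d) * p + d

def bayesian_sampler_mean(p, N, beta):
    """Bayesian Sampler: posterior mean after N samples under a Beta(beta, beta) prior."""
    return (N * p + beta) / (N + 2 * beta)

# Both accounts pull judgments toward 0.5, but the Bayesian adjustment also
# truncates responses to the range [beta/(N + 2*beta), (N + beta)/(N + 2*beta)].
print(ptn_mean(0.9, d=0.1))                      # 0.82
print(bayesian_sampler_mean(0.9, N=10, beta=1))  # ~0.83
```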

Supplementary materials

This repository contains all supplementary materials for this project, including the code to reproduce the analyses reported in the manuscript, the manuscript itself, and supplemental analyses.

Repository organization

  • Manuscript
    • paper-rmd/: folder with reproducible APA-style Rmarkdown document
    • create-paper-figures.Rmd: Notebook for moving results from Python into R for plotting with the reticulate package
  • Models of participant-level query-averaged data
  • Models of trial-level data
    • fit-trial-models.ipynb: Jupyter notebook for fitting trial-level models. Saved outputs can be downloaded from OSF and placed in the local/ folder. If refitting from scratch, using a GPU is strongly recommended.
  • lib/: Library folder for python functions
    • models.py: implementations of all models
    • simdata.py: data simulation functions
    • icc.py: functions for reloo
    • helpers.py: data loading and plotting functions
  • Simulation and model validation studies (see the sketch after this list)
  • Other analyses
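As a rough orientation to the simulation studies, the sketch below shows one way trial-level responses could be generated under each account; the function names here are hypothetical and are not the API of lib/simdata.py. Under the Bayesian Sampler, N samples are drawn faithfully from the underlying credence and then adjusted toward the prior; under Probability Theory plus Noise, each sample may be misread with probability d and the raw proportion is reported.

```python
# Rough sketch of trial-level response generation under each account.
# Function names are hypothetical; see lib/simdata.py for the repository's
# actual simulation code.
import numpy as np

def simulate_bayesian_sampler(p, N, beta, rng):
    """Draw N samples from credence p, then apply the Beta(beta, beta) adjustment."""
    successes = rng.binomial(N, p)
    return (successes + beta) / (N + 2 * beta)

def simulate_ptn(p, N, d, rng):
    """Draw N samples, each misread with probability d, and report the raw proportion."""
    successes = rng.binomial(N, (1 - d) * p + d * (1 - p))
    return successes / N

rng = np.random.default_rng(0)
judgments_bs = [simulate_bayesian_sampler(0.9, N=10, beta=1, rng=rng) for _ in range(5)]
judgments_ptn = [simulate_ptn(0.9, N=10, d=0.1, rng=rng) for _ in range(5)]
```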

Reproducing the manuscript

  1. From https://osf.io/mgcxj/files/, download the original data files as a .zip archive and extract them into an osfstorage-archive folder in the repository directory (a minimal extraction sketch follows this list).
  2. Download the saved SVI results for the trial-level models from this project's OSF repository and unzip them into the local/ directory.
  3. Use environment.yml to create the Conda environment.
  4. Run the fitting notebooks for the query-averaged and trial-level models first, then the model comparison notebook, and finally knit the Rmarkdown manuscript.
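For step 1, the sketch below assumes the OSF archive has already been downloaded manually and saved as osfstorage-archive.zip in the repository root; the actual filename of the downloaded archive may differ.

```python
# Minimal sketch for step 1, assuming the OSF data archive was saved as
# "osfstorage-archive.zip" in the repository root (hypothetical filename).
from pathlib import Path
import zipfile

archive = Path("osfstorage-archive.zip")   # manually downloaded data archive
target = Path("osfstorage-archive")        # folder the analysis code expects

target.mkdir(exist_ok=True)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)                  # unpack the original data files
```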
