% bibliography.bib (generated from kogpsy/neuroscicomplabFS22)
@incollection{alexanderReciprocalInteractionsComputational2015,
title = {Reciprocal {{Interactions}} of {{Computational Modeling}} and {{Empirical Investigation}}},
booktitle = {An {{Introduction}} to {{Model-Based Cognitive Neuroscience}}},
author = {Alexander, William H. and Brown, Joshua W.},
editor = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
date = {2015},
pages = {321--338},
publisher = {{Springer New York}},
location = {{New York, NY}},
doi = {10.1007/978-1-4939-2236-9_16},
url = {https://doi.org/10.1007/978-1-4939-2236-9_16},
urldate = {2019-03-20},
abstract = {Models in general, and computational neural models in particular, are useful to the extent they fulfill three aims, which roughly constitute a life cycle of a model. First, at birth, models must account for existing phenomena, and with mechanisms that are no more complicated than necessary. Second, at maturity, models must make strong, falsifiable predictions that can guide future experiments. Third, all models are by definition incomplete, simplified representations of the mechanisms in question, so they should provide a basis of inspiration to guide the next generation of model development, as new data challenge and force the field to move beyond the existing models. Thus the final part of the model life cycle is a dialectic of model properties and empirical challenge. In this phase, new experimental data test and refine the model, leading either to a revised model or perhaps the birth of a new model. In what follows, we provide an outline of how this life cycle has played out in a particular series of models of the dorsal anterior cingulate cortex (ACC).},
isbn = {978-1-4939-2236-9},
langid = {english},
keywords = {Anterior cingulate cortex,Cognitive control,Computational neural model,Dialectic,Error likelihood,Performance monitoring,Reinforcement learning}
}
@article{amrheinScientistsRiseStatistical2019,
title = {Scientists Rise up against Statistical Significance},
author = {Amrhein, Valentin and Greenland, Sander and McShane, Blake},
date = {2019-03},
journaltitle = {Nature},
volume = {567},
number = {7748},
pages = {305--307},
publisher = {{Nature Publishing Group}},
doi = {10.1038/d41586-019-00857-9},
url = {https://www.nature.com/articles/d41586-019-00857-9},
urldate = {2022-05-11},
abstract = {Valentin Amrhein, Sander Greenland, Blake McShane and more than 800 signatories call for an end to hyped claims and the dismissal of possibly crucial effects.},
issue = {7748},
langid = {english},
keywords = {Research data,Research management},
file = {/Users/andrew/Zotero/storage/J6EUKI9Y/Amrhein et al. - 2019 - Scientists rise up against statistical significanc.pdf}
}
@article{andersonTeachingSignalDetection2015,
title = {Teaching Signal Detection Theory with Pseudoscience},
author = {Anderson, Nicole D.},
date = {2015},
journaltitle = {Frontiers in Psychology},
volume = {6},
issn = {1664-1078},
url = {https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00762},
urldate = {2023-04-30},
file = {/Users/andrew/Zotero/storage/3HJNDT56/Anderson_2015_Teaching signal detection theory with pseudoscience.pdf}
}
@article{andraszewiczIntroductionBayesianHypothesis,
title = {An {{Introduction}} to {{Bayesian Hypothesis Testing}} for {{Management Research}}},
author = {Andraszewicz, Sandra and Scheibehenne, Benjamin and Rieskamp, Jörg and Grasman, Raoul and Verhagen, Josine and Wagenmakers, Eric-Jan},
date = {2015},
journaltitle = {Journal of Management},
volume = {41},
number = {2},
pages = {521--543},
doi = {10.1177/0149206314560412},
langid = {english},
file = {/Users/andrew/Zotero/storage/8WNPTHPP/Andraszewicz et al. - An Introduction to Bayesian Hypothesis Testing for.pdf}
}
@article{antonenkoTDCSinducedEpisodicMemory2019,
title = {{{tDCS-induced}} Episodic Memory Enhancement and Its Association with Functional Network Coupling in Older Adults},
author = {Antonenko, Daria and Hayek, Dayana and Netzband, Justus and Grittner, Ulrike and Flöel, Agnes},
date = {2019-02-19},
journaltitle = {Scientific Reports},
volume = {9},
number = {1},
pages = {2273},
publisher = {{Nature Publishing Group}},
issn = {2045-2322},
doi = {10.1038/s41598-019-38630-7},
url = {https://www.nature.com/articles/s41598-019-38630-7},
urldate = {2021-04-12},
abstract = {Transcranial direct current stimulation (tDCS) augments training-induced cognitive gains, an issue of particular relevance in the aging population. However, negative outcomes have been reported as well, and few studies so far have evaluated the impact of tDCS on episodic memory formation in elderly cohorts. The heterogeneity of previous findings highlights the importance of elucidating neuronal underpinnings of tDCS-induced modulations, and of determining individual predictors of a positive response. In the present study, we aimed to modulate episodic memory formation in 34 older adults with anodal tDCS (1\,mA, 20\,min) over left temporoparietal cortex. Participants were asked to learn novel associations between pictures and pseudowords, and episodic memory performance was subsequently assessed during immediate retrieval. Prior to experimental sessions, participants underwent resting-state functional magnetic resonance imaging. tDCS led to better retrieval performance and augmented learning curves. Hippocampo-temporoparietal functional connectivity was positively related to initial memory performance, and was positively associated with the magnitude of individual tDCS-induced enhancement. In sum, we provide evidence for brain stimulation-induced plasticity of episodic memory processes in older adults, corroborating and extending previous findings. Our results demonstrate that intrinsic network coupling may determine individual responsiveness to brain stimulation, and thus help to further explain variability of tDCS responsiveness in older adults.},
issue = {1},
langid = {english},
file = {/Users/andrew/Zotero/storage/LH8KFIF3/s41598-019-38630-7.html}
}
@incollection{ashbyIntroductionFMRI2015,
title = {An {{Introduction}} to {{fMRI}}},
booktitle = {An {{Introduction}} to {{Model-Based Cognitive Neuroscience}}},
author = {Ashby, F. Gregory},
editor = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
date = {2015},
pages = {91--112},
publisher = {{Springer New York}},
location = {{New York, NY}},
doi = {10.1007/978-1-4939-2236-9_5},
url = {https://doi.org/10.1007/978-1-4939-2236-9_5},
urldate = {2019-03-20},
abstract = {Functional magnetic resonance imaging (fMRI) provides an opportunity to indirectly observe neural activity noninvasively in the human brain as it changes in near real time. Most fMRI experiments measure the blood oxygen-level dependent (BOLD) signal, which rises to a peak several seconds after a brain area becomes active. Several experimental designs are common in fMRI research. Block designs alternate periods in which subjects perform some task with periods of rest, whereas event-related designs present the subject with a set of discrete trials. After the fMRI experiment is complete, pre-processing analyses prepare the data for task-related analyses. The most popular task-related analysis uses the General Linear Model to correlate a predicted BOLD response with the observed activity in each brain region. Regions where this correlation is high are identified as task related. Connectivity analysis then tries to identify active regions that belong to the same functional network. In contrast, multivariate methods, such as independent component analysis and multi-voxel pattern analysis identify networks of event-related regions, rather than single regions, so they simultaneously address questions of functional connectivity.},
isbn = {978-1-4939-2236-9},
langid = {english},
keywords = {BOLD response,fMRI,Functional connectivity analysis,General Linear Model,Hemodynamic response function,Multiple comparisons problem,Preprocessing}
}
@article{balotaMovingMeanStudies2011,
title = {Moving {{Beyond}} the {{Mean}} in {{Studies}} of {{Mental Chronometry}}: {{The Power}} of {{Response Time Distributional Analyses}}},
shorttitle = {Moving {{Beyond}} the {{Mean}} in {{Studies}} of {{Mental Chronometry}}},
author = {Balota, David A. and Yap, Melvin J.},
date = {2011-06},
journaltitle = {Current Directions in Psychological Science},
shortjournal = {Curr Dir Psychol Sci},
volume = {20},
number = {3},
pages = {160--166},
issn = {0963-7214, 1467-8721},
doi = {10.1177/0963721411408885},
url = {http://journals.sagepub.com/doi/10.1177/0963721411408885},
urldate = {2022-04-11},
abstract = {Although it is widely recognized that response time (RT) distributions are almost always positively skewed and that mathematical psychologists have developed straightforward procedures for capturing characteristics of RT distributions, researchers continue to rely primarily on mean performance, which can be misleading for such data. We review simple procedures for capturing characteristics of underlying RT distributions and show how such procedures have recently been useful to better understand effects from standard cognitive experimental paradigms and individual differences in performance. These well-studied procedures for understanding RT distributions indicate that effects in means can be produced by (a) shifts of RT distributions, (b) stretching of slow tails of RT distributions, or (c) some combination. Importantly, effects in means can actually be obscured by opposing influences on the modal and tail portions of RT distributions. Such disparate patterns demand novel theoretical interpretations.},
langid = {english},
file = {/Users/andrew/Zotero/storage/9ACHQFLA/Balota and Yap - 2011 - Moving Beyond the Mean in Studies of Mental Chrono.pdf}
}
@incollection{beanReyAuditoryVerbal2011,
title = {Rey {{Auditory Verbal Learning Test}}, {{Rey AVLT}}},
booktitle = {Encyclopedia of {{Clinical Neuropsychology}}},
author = {Bean, Jessica},
editor = {Kreutzer, Jeffrey S. and DeLuca, John and Caplan, Bruce},
date = {2011},
pages = {2174--2175},
publisher = {{Springer}},
location = {{New York, NY}},
doi = {10.1007/978-0-387-79948-3_1153},
url = {https://doi.org/10.1007/978-0-387-79948-3_1153},
urldate = {2023-04-30},
isbn = {978-0-387-79948-3},
langid = {english}
}
@article{berghTutorialConductingInterpreting2020,
title = {A Tutorial on Conducting and Interpreting a {{Bayesian ANOVA}} in {{JASP}}},
author = {family=Bergh, given=Don, prefix=van den, useprefix=false and family=Doorn, given=Johnny, prefix=van, useprefix=false and Marsman, Maarten and Draws, Tim and family=Kesteren, given=Erik-Jan, prefix=van, useprefix=false and Derks, Koen and Dablander, Fabian and Gronau, Quentin F. and Kucharský, Šimon and Gupta, Akash R. Komarlu Narendra and Sarafoglou, Alexandra and Voelkel, Jan G. and Stefan, Angelika and Ly, Alexander and Hinne, Max and Matzke, Dora and Wagenmakers, Eric-Jan},
date = {2020-03-18},
journaltitle = {L'Année psychologique},
volume = {120},
number = {1},
pages = {73--96},
publisher = {{P.U.F.}},
issn = {0003-5033},
doi = {10.3917/anpsy1.201.0073},
url = {https://www.cairn-int.info/article.php?ID_ARTICLE=E_ANPSY1_201_0073},
urldate = {2022-05-31},
abstract = {Analysis of variance (ANOVA) is the standard procedure for statistical inference in factorial designs. Typically, ANOVAs are executed using frequentist statistics, where p-values determine statistical significance in an all-or-none fashion. In recent years, the Bayesian approach to statistics is increasingly viewed as a legitimate alternative to the p-value. However, the broad adoption of Bayesian statistics—and Bayesian ANOVA in particular—is frustrated by the fact that Bayesian concepts are rarely taught in applied statistics courses. Consequently, practitioners may be unsure how to conduct a Bayesian ANOVA and interpret the results. Here we provide a guide for executing and interpreting a Bayesian ANOVA with JASP, an open-source statistical software program with a graphical user interface. We explain the key concepts of the Bayesian ANOVA using two empirical examples.},
langid = {english},
file = {/Users/andrew/Zotero/storage/5JXPQN2D/Bergh et al. - 2020 - A tutorial on conducting and interpreting a Bayesi.pdf;/Users/andrew/Zotero/storage/9SBDSKRW/article.html}
}
@incollection{bogaczOptimalDecisionMaking2015,
title = {Optimal {{Decision Making}} in the {{Cortico-Basal-Ganglia Circuit}}},
booktitle = {An {{Introduction}} to {{Model-Based Cognitive Neuroscience}}},
author = {Bogacz, Rafal},
editor = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
date = {2015},
pages = {291--302},
publisher = {{Springer New York}},
location = {{New York, NY}},
doi = {10.1007/978-1-4939-2236-9_14},
url = {https://doi.org/10.1007/978-1-4939-2236-9_14},
urldate = {2019-03-20},
abstract = {This chapter presents a model assuming that during decision making the cortico-basal-ganglia circuit computes probabilities that considered alternatives are correct, according to Bayes’ theorem. The model suggests how the equation of Bayes’ theorem is mapped onto the functional anatomy of a circuit involving the cortex, basal ganglia and thalamus. The chapter also describes the relationship of the model to other models of decision making and experimental data.},
isbn = {978-1-4939-2236-9},
langid = {english},
keywords = {Action selection,Basal ganglia,Decision making}
}
@incollection{borstUsingACTRCognitive2015,
title = {Using the {{ACT-R Cognitive Architecture}} in {{Combination With fMRI Data}}},
booktitle = {An {{Introduction}} to {{Model-Based Cognitive Neuroscience}}},
author = {Borst, Jelmer P. and Anderson, John R.},
editor = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
date = {2015},
pages = {339--352},
publisher = {{Springer New York}},
location = {{New York, NY}},
doi = {10.1007/978-1-4939-2236-9_17},
url = {https://doi.org/10.1007/978-1-4939-2236-9_17},
urldate = {2019-03-20},
abstract = {In this chapter we discuss how the ACT-R cognitive architecture can be used in combination with fMRI data. ACT-R is a cognitive architecture that can provide a description of the processes from perception through to action for a wide range of cognitive tasks. It has a computational implementation that can be used to create models of specific tasks, which yield exact predictions in the form of response times and accuracy measures. In the last decade, researchers have extended the predictive capabilities of ACT-R to fMRI data. Since ACT-R provides a model of all the components in task performance it can address brain-wide activation patterns. fMRI data can now be used to inform and constrain the architecture, and, on the other hand, the architecture can be used to interpret fMRI data in a principled manner. In the following sections we first introduce cognitive architectures, and ACT-R in particular. Then, on the basis of an example dataset, we explain how ACT-R can be used to create fMRI predictions. In the third and fourth section of this chapter we discuss two ways in which these predictions can be used: region-of-interest and model-based fMRI analysis, and how the results can be used to inform the architecture and to interpret fMRI data.},
isbn = {978-1-4939-2236-9},
langid = {english},
keywords = {ACT-R,Cognitive Architecture,fMRI,Model-based fMRI,ROI analysis}
}
@article{burknerBrmsPackageBayesian2017,
title = {Brms: {{An R Package}} for {{Bayesian Multilevel Models Using Stan}}},
shorttitle = {Brms},
author = {Bürkner, Paul-Christian},
date = {2017-08-29},
journaltitle = {Journal of Statistical Software},
volume = {80},
number = {1},
pages = {1--28},
issn = {1548-7660},
doi = {10.18637/jss.v080.i01},
url = {https://www.jstatsoft.org/index.php/jss/article/view/v080i01},
urldate = {2019-01-28},
langid = {english},
keywords = {Bayesian inference,MCMC,multilevel model,ordinal data,R,Stan},
file = {/Users/andrew/Zotero/storage/J9A47ZDN/v080i01.html}
}
@article{buttonPowerFailureWhy2013,
title = {Power Failure: Why Small Sample Size Undermines the Reliability of Neuroscience},
shorttitle = {Power Failure},
author = {Button, Katherine S. and Ioannidis, John P. A. and Mokrysz, Claire and Nosek, Brian A. and Flint, Jonathan and Robinson, Emma S. J. and Munafò, Marcus R.},
date = {2013-05},
journaltitle = {Nature Reviews Neuroscience},
shortjournal = {Nat Rev Neurosci},
volume = {14},
number = {5},
pages = {365--376},
issn = {1471-003X, 1471-0048},
doi = {10.1038/nrn3475},
url = {http://www.nature.com/articles/nrn3475},
urldate = {2022-05-10},
abstract = {A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.},
langid = {english},
file = {/Users/andrew/Zotero/storage/PXHLK739/Button et al. - 2013 - Power failure why small sample size undermines th.pdf}
}
@article{buttonPowerFailureWhy2013a,
title = {Power Failure: Why Small Sample Size Undermines the Reliability of Neuroscience},
shorttitle = {Power Failure},
author = {Button, Katherine S. and Ioannidis, John P. A. and Mokrysz, Claire and Nosek, Brian A. and Flint, Jonathan and Robinson, Emma S. J. and Munafò, Marcus R.},
date = {2013-05},
journaltitle = {Nature Reviews Neuroscience},
shortjournal = {Nat Rev Neurosci},
volume = {14},
number = {5},
pages = {365--376},
publisher = {{Nature Publishing Group}},
issn = {1471-0048},
doi = {10.1038/nrn3475},
url = {https://www.nature.com/articles/nrn3475},
urldate = {2023-05-07},
abstract = {Low statistical power undermines the purpose of scientific research; it reduces the chance of detecting a true effect. Perhaps less intuitively, low power also reduces the likelihood that a statistically significant result reflects a true effect. Empirically, we estimate the median statistical power of studies in the neurosciences is between ∼8\% and ∼31\%. We discuss the consequences of such low statistical power, which include overestimates of effect size and low reproducibility of results. There are ethical dimensions to the problem of low power; unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established, but often ignored, methodological principles. We discuss how problems associated with low power can be addressed by adopting current best-practice and make clear recommendations for how to achieve this.},
issue = {5},
langid = {english},
keywords = {Molecular neuroscience},
file = {/Users/andrew/Zotero/storage/6BER9RNR/Button et al. - 2013 - Power failure why small sample size undermines th.pdf;/Users/andrew/Zotero/storage/FB62K8ZL/Button et al_2013_Power failure.pdf}
}
@article{chambersPoliciesKnowledgePriors2019,
title = {Policies or Knowledge: Priors Differ between a Perceptual and Sensorimotor Task},
shorttitle = {Policies or Knowledge},
author = {Chambers, Claire and Fernandes, Hugo and Kording, Konrad Paul},
date = {2019-06-01},
journaltitle = {Journal of Neurophysiology},
shortjournal = {Journal of Neurophysiology},
volume = {121},
number = {6},
pages = {2267--2275},
issn = {0022-3077, 1522-1598},
doi = {10.1152/jn.00035.2018},
url = {https://www.physiology.org/doi/10.1152/jn.00035.2018},
urldate = {2022-02-17},
abstract = {If the brain abstractly represents probability distributions as knowledge, then the modality of a decision, e.g., movement vs. perception, should not matter. If, on the other hand, learned representations are policies, they may be specific to the task where learning takes place. Here, we test this by asking whether a learned spatial prior generalizes from a sensorimotor estimation task to a two-alternative-forced choice (2-AFC) perceptual comparison task. A model and simulation-based analysis revealed that while participants learn prior distribution in the sensorimotor estimation task, measured priors are consistently broader than sensorimotor priors in the 2-AFC task. That the prior does not fully generalize suggests that sensorimotor priors are more like policies than knowledge. In disagreement with standard Bayesian thought, the modality of the decision has a strong influence on the implied prior distributions. NEW \& NOTEWORTHY We do not know whether the brain represents abstract and generalizable knowledge or task-specific policies that map internal states to actions. We find that learning in a sensorimotor task does not generalize strongly to a perceptual task, suggesting that humans learned policies and did not truly acquire knowledge. Priors differ across tasks, thus casting doubt on the central tenet of many Bayesian models, that the brain's representation of the world is built on generalizable knowledge.},
langid = {english},
file = {/Users/andrew/Zotero/storage/SP9ESDR9/Chambers et al. - 2019 - Policies or knowledge priors differ between a per.pdf}
}
@article{chaterProbabilisticBiasesMeet2020,
title = {Probabilistic {{Biases Meet}} the {{Bayesian Brain}}},
author = {Chater, Nick and Zhu, Jian-Qiao and Spicer, Jake and Sundh, Joakim and León-Villagrá, Pablo and Sanborn, Adam},
date = {2020-10-01},
journaltitle = {Current Directions in Psychological Science},
shortjournal = {Curr Dir Psychol Sci},
volume = {29},
number = {5},
pages = {506--512},
publisher = {{SAGE Publications Inc}},
issn = {0963-7214},
doi = {10.1177/0963721420954801},
url = {https://doi.org/10.1177/0963721420954801},
urldate = {2021-03-04},
abstract = {In Bayesian cognitive science, the mind is seen as a spectacular probabilistic-inference machine. But judgment and decision-making (JDM) researchers have spent half a century uncovering how dramatically and systematically people depart from rational norms. In this article, we outline recent research that opens up the possibility of an unexpected reconciliation. The key hypothesis is that the brain neither represents nor calculates with probabilities but approximates probabilistic calculations by drawing samples from memory or mental simulation. Sampling models diverge from perfect probabilistic calculations in ways that capture many classic JDM findings, which offers the hope of an integrated explanation of classic heuristics and biases, including availability, representativeness, and anchoring and adjustment.},
langid = {english},
keywords = {Bayesian inference,heuristics and biases,judgment and decision-making,probability,sampling},
file = {/Users/andrew/Zotero/storage/8C3FY2LM/Chater et al. - 2020 - Probabilistic Biases Meet the Bayesian Brain.pdf}
}
@article{colomboBayesBrainBayesian2012,
title = {Bayes in the {{Brain}}—{{On Bayesian Modelling}} in {{Neuroscience}}},
author = {Colombo, Matteo and Seriès, Peggy},
date = {2012-09-01},
journaltitle = {The British Journal for the Philosophy of Science},
shortjournal = {The British Journal for the Philosophy of Science},
volume = {63},
number = {3},
pages = {697--723},
issn = {0007-0882, 1464-3537},
doi = {10.1093/bjps/axr043},
url = {https://www.journals.uchicago.edu/doi/10.1093/bjps/axr043},
urldate = {2022-04-04},
abstract = {According to a growing trend in theoretical neuroscience, the human perceptual system is akin to a Bayesian machine. The aim of this article is to clearly articulate the claims that perception can be considered Bayesian inference and that the brain can be considered a Bayesian machine, some of the epistemological challenges to these claims; and some of the implications of these claims. We address two questions: (i) How are Bayesian models used in theoretical neuroscience? (ii) From the use of Bayesian models in theoretical neuroscience, have we learned or can we hope to learn that perception is Bayesian inference or that the brain is a Bayesian machine? From actual practice in theoretical neuroscience, we argue for three claims. First, currently Bayesian models do not provide mechanistic explanations; instead they are useful devices for predicting and systematizing observational statements about people’s performances in a variety of perceptual tasks. That is, currently we should have an instrumentalist attitude towards Bayesian models in neuroscience. Second, the inference typically drawn from Bayesian behavioural performance in a variety of perceptual tasks to underlying Bayesian mechanisms should be understood within the three-level framework laid out by David Marr ([1982]). Third, we can hope to learn that perception is Bayesian inference or that the brain is a Bayesian machine to the extent that Bayesian models will prove successful in yielding secure and informative predictions of both subjects’ perceptual performance and features of the underlying neural mechanisms.},
langid = {english},
file = {/Users/andrew/Zotero/storage/YQU399RQ/Colombo and Seriès - 2012 - Bayes in the Brain—On Bayesian Modelling in Neuros.pdf}
}
@article{cousineauConfidenceIntervalsWithinsubject2005,
title = {Confidence Intervals in Within-Subject Designs: {{A}} Simpler Solution to {{Loftus}} and {{Masson}}'s Method},
shorttitle = {Confidence Intervals in Within-Subject Designs},
author = {Cousineau, Denis},
date = {2005-09-01},
journaltitle = {Tutorials in Quantitative Methods for Psychology},
shortjournal = {TQMP},
volume = {1},
number = {1},
pages = {42--45},
issn = {1913-4126},
doi = {10.20982/tqmp.01.1.p042},
url = {http://www.tqmp.org/RegularArticles/vol01-1/p042},
urldate = {2023-03-26},
langid = {english},
file = {/Users/andrew/Zotero/storage/8PJCDZU6/Cousineau - 2005 - Confidence intervals in within-subject designs A .pdf}
}
@article{cousineauConfidenceIntervalsWithinsubject2005a,
title = {Confidence Intervals in Within-Subject Designs: {{A}} Simpler Solution to {{Loftus}} and {{Masson}}'s Method},
shorttitle = {Confidence Intervals in Within-Subject Designs},
author = {Cousineau, Denis},
date = {2005-09-01},
journaltitle = {Tutorials in Quantitative Methods for Psychology},
shortjournal = {TQMP},
volume = {1},
number = {1},
pages = {42--45},
issn = {1913-4126},
doi = {10.20982/tqmp.01.1.p042},
url = {http://www.tqmp.org/RegularArticles/vol01-1/p042},
urldate = {2023-03-26},
langid = {english},
file = {/Users/andrew/Zotero/storage/4CKJI9YF/Cousineau - 2005 - Confidence intervals in within-subject designs A .pdf}
}
@article{cousineauSummaryPlotsAdjusted2021,
title = {Summary {{Plots With Adjusted Error Bars}}: {{The}} Superb {{Framework With}} an {{Implementation}} in {{R}}},
shorttitle = {Summary {{Plots With Adjusted Error Bars}}},
author = {Cousineau, Denis and Goulet, Marc-André and Harding, Bradley},
date = {2021-07-01},
journaltitle = {Advances in Methods and Practices in Psychological Science},
volume = {4},
number = {3},
pages = {25152459211035109},
publisher = {{SAGE Publications Inc}},
issn = {2515-2459},
doi = {10.1177/25152459211035109},
url = {https://doi.org/10.1177/25152459211035109},
urldate = {2023-03-26},
abstract = {Plotting the data of an experiment allows researchers to illustrate the main results of a study, show effect sizes, compare conditions, and guide interpretations. To achieve all this, it is necessary to show point estimates of the results and their precision using error bars. Often, and potentially unbeknownst to them, researchers use a type of error bars—the confidence intervals—that convey limited information. For instance, confidence intervals do not allow comparing results (a) between groups, (b) between repeated measures, (c) when participants are sampled in clusters, and (d) when the population size is finite. The use of such stand-alone error bars can lead to discrepancies between the plot's display and the conclusions derived from statistical tests. To overcome this problem, we propose to generalize the precision of the results (the confidence intervals) by adjusting them so that they take into account the experimental design and the sampling methodology. Unfortunately, most software dedicated to statistical analyses do not offer options to adjust error bars. As a solution, we developed an open-access, open-source library for R—superb—that allows users to create summary plots with easily adjusted error bars.},
langid = {english},
keywords = {/unread},
file = {/Users/andrew/Zotero/storage/ERJAQYKN/Cousineau et al_2021_Summary Plots With Adjusted Error Bars.pdf}
}
@article{debruineUnderstandingMixedEffects2019a,
title = {Understanding Mixed Effects Models through Data Simulation},
author = {DeBruine, Lisa and Barr, Dale J.},
date = {2019-06-01},
publisher = {{OSF}},
url = {https://osf.io/3cz2e/},
urldate = {2021-03-22},
abstract = {Experimental designs that sample both subjects and stimuli from a larger population need to account for random effects of both subjects and stimuli using mixed effects models. However, much of this research is analyzed using ANOVA on aggregated responses because researchers are not confident specifying and interpreting mixed effects models. The tutorial will explain how to simulate data with random effects structure and analyse the data using linear mixed effects regression (with the lme4 R package). The focus will be on interpreting the LMER output in light of the simulated parameters, using this method for power calculations. Data simulation can not only enhance understanding of how these models work, but also enables researchers to perform power calculations for complex designs. Hosted on the Open Science Framework},
langid = {english},
file = {/Users/andrew/Zotero/storage/CUGXMBE8/3cz2e.html}
}
@article{decarloSignalDetectionTheory,
title = {Signal {{Detection Theory}} and {{Generalized Linear Models}}},
author = {DeCarlo, Lawrence T},
pages = {20},
langid = {english},
file = {/Users/andrew/Zotero/storage/3W9MG262/DeCarlo_Signal Detection Theory and Generalized Linear Models.pdf}
}
@article{decarloStatisticalTheoreticalBasis2010,
title = {On the Statistical and Theoretical Basis of Signal Detection Theory and Extensions: {{Unequal}} Variance, Random Coefficient, and Mixture Models},
shorttitle = {On the Statistical and Theoretical Basis of Signal Detection Theory and Extensions},
author = {DeCarlo, Lawrence T.},
date = {2010-06-01},
journaltitle = {Journal of Mathematical Psychology},
shortjournal = {Journal of Mathematical Psychology},
volume = {54},
number = {3},
pages = {304--313},
issn = {0022-2496},
doi = {10.1016/j.jmp.2010.01.001},
url = {https://www.sciencedirect.com/science/article/pii/S0022249610000027},
urldate = {2021-10-28},
abstract = {Basic results for conditional means and variances, as well as distributional results, are used to clarify the similarities and differences between various extensions of signal detection theory (SDT). It is shown that a previously presented motivation for the unequal variance SDT model (varying strength) actually leads to a related, yet distinct, model. The distinction has implications for other extensions of SDT, such as models with criteria that vary over trials. It is shown that a mixture extension of SDT is also consistent with unequal variances, but provides a different interpretation of the results; mixture SDT also offers a way to unify results found across several types of studies.},
langid = {english},
keywords = {Generalized linear mixed model,Mixture model,Random intercept,Random slope,Signal detection theory,Unequal variance model,Variable criterion,Variable strength}
}
@report{devezerCaseFormalMethodology2020,
type = {preprint},
title = {The Case for Formal Methodology in Scientific Reform},
author = {Devezer, Berna and Navarro, Danielle J. and Vandekerckhove, Joachim and Buzbas, Erkan Ozge},
date = {2020-04-28},
institution = {{Scientific Communication and Education}},
doi = {10.1101/2020.04.26.048306},
url = {http://biorxiv.org/lookup/doi/10.1101/2020.04.26.048306},
urldate = {2021-03-08},
abstract = {Abstract Current attempts at methodological reform in sciences come in response to an overall lack of rigor in methodological and scientific practices in experimental sciences. However, most methodological reform attempts suffer from similar mistakes and over-generalizations to the ones they aim to address. We argue that this can be attributed in part to lack of formalism and first principles. Considering the costs of allowing false claims to become canonized, we argue for formal statistical rigor and scientific nuance in methodological reform. To attain this rigor and nuance, we propose a five-step formal approach for solving methodological problems. To illustrate the use and benefits of such formalism, we present a formal statistical analysis of three popular claims in the metascientific literature: (a) that reproducibility is the cornerstone of science; (b) that data must not be used twice in any analysis; and (c) that exploratory projects imply poor statistical practice. We show how our formal approach can inform and shape debates about such methodological claims.},
langid = {english},
file = {/Users/andrew/Zotero/storage/P54G956I/Devezer et al. - 2020 - The case for formal methodology in scientific refo.pdf}
}
@article{dijkstraImageryAddsStimulusspecific2022,
title = {Imagery Adds Stimulus-Specific Sensory Evidence to Perceptual Detection},
author = {Dijkstra, Nadine and Kok, Peter and Fleming, Stephen M.},
date = {2022-02-17},
journaltitle = {Journal of Vision},
shortjournal = {Journal of Vision},
volume = {22},
number = {2},
pages = {11},
issn = {1534-7362},
doi = {10.1167/jov.22.2.11},
url = {https://doi.org/10.1167/jov.22.2.11},
urldate = {2022-02-18},
abstract = {Internally generated imagery and externally triggered perception rely on overlapping sensory processes. This overlap poses a challenge for perceptual reality monitoring: determining whether sensory signals reflect reality or imagination. In this study, we used psychophysics to investigate how imagery and perception interact to determine visual experience. Participants were instructed to detect oriented gratings that gradually appeared in noise while simultaneously either imagining the same grating, a grating perpendicular to the to-be-detected grating, or nothing. We found that, compared to both incongruent imagery and no imagery, congruent imagery caused a leftward shift of the psychometric function relating stimulus contrast to perceptual threshold. We discuss how this effect can best be explained by a model in which imagery adds sensory signal to the perceptual input, thereby increasing the visibility of perceived stimuli. These results suggest that, in contrast to changes in sensory signals caused by self-generated movement, the brain does not discount the influence of self-generated sensory signals on perception.},
file = {/Users/andrew/Zotero/storage/5IT4BQ8B/Dijkstra et al. - 2022 - Imagery adds stimulus-specific sensory evidence to.pdf;/Users/andrew/Zotero/storage/J3QNHLGS/article.html}
}
@incollection{ditterichDistinguishingModelsPerceptual2015,
title = {Distinguishing {{Between Models}} of {{Perceptual Decision Making}}},
booktitle = {An {{Introduction}} to {{Model-Based Cognitive Neuroscience}}},
author = {Ditterich, Jochen},
editor = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
date = {2015},
pages = {277--290},
publisher = {{Springer New York}},
location = {{New York, NY}},
doi = {10.1007/978-1-4939-2236-9_13},
url = {https://doi.org/10.1007/978-1-4939-2236-9_13},
urldate = {2019-03-20},
abstract = {Mathematical models are a useful tool for gaining insight into mechanisms of decision making. However, like other scientific methods, its application is not without pitfalls. This chapter demonstrates that it can be difficult to distinguish between alternative models and it illustrates that a model-based approach benefits from the availability of a rich dataset that provides sufficient constraints. Ideally, the dataset is not only comprised of behavioral data, but also contains neural data that provide information about the internal processing. The chapter focuses on two examples taken from perceptual decision making. In one case, information about response time distributions is used to reject a model that is otherwise consistent with accuracy data and mean response times. In the other case, only the availability of neural data allows a distinction between two alternative models that are both consistent with the behavioral data.},
isbn = {978-1-4939-2236-9},
langid = {english},
keywords = {Choice,Feedback inhibition,Feedforward inhibition,Parietal cortex,Perceptual decision making,Response time,Stochastic integration,Time-variant}
}
@article{doornBayesFactorsMixed2021,
title = {Bayes {{Factors}} for {{Mixed Models}}},
author = {family=Doorn, given=Johnny, prefix=van, useprefix=false and Aust, Frederik and Haaf, Julia M. and Stefan, Angelika and Wagenmakers, Eric-Jan},
date = {2021-02-22T12:02:03},
publisher = {{PsyArXiv}},
doi = {10.31234/osf.io/y65h8},
url = {https://psyarxiv.com/y65h8/},
urldate = {2021-02-24},
abstract = {Although Bayesian mixed models are increasingly popular for data analysis in psychology and other fields, there remains considerable ambiguity on the most appropriate Bayes factor hypothesis test to quantify the degree to which the data support the presence or absence of an experimental effect. Specifically, different choices for both the null model and the alternative model are possible, and each choice constitutes a different definition of an effect resulting in a different test outcome. We outline the common approaches and focus on the impact of aggregation, the effect of measurement error, the choice of prior distribution, and the detection of interactions. For concreteness, three example scenarios showcase how seemingly innocuous choices can lead to dramatic differences in statistical evidence. We hope this work will facilitate a more explicit discussion about best practices in Bayes factor hypothesis testing in mixed models.},
keywords = {Bayes factors,Mixed effects,Mixed models,Quantitative Methods,Random effects,Social and Behavioral Sciences,Statistical Methods}
}
@article{dutilhQualityResponseTime2019,
title = {The {{Quality}} of {{Response Time Data Inference}}: {{A Blinded}}, {{Collaborative Assessment}} of the {{Validity}} of {{Cognitive Models}}},
shorttitle = {The {{Quality}} of {{Response Time Data Inference}}},
author = {Dutilh, Gilles and Annis, Jeffrey and Brown, Scott D. and Cassey, Peter and Evans, Nathan J. and Grasman, Raoul P. P. P. and Hawkins, Guy E. and Heathcote, Andrew and Holmes, William R. and Krypotos, Angelos-Miltiadis and Kupitz, Colin N. and Leite, Fábio P. and Lerche, Veronika and Lin, Yi-Shin and Logan, Gordon D. and Palmeri, Thomas J. and Starns, Jeffrey J. and Trueblood, Jennifer S. and family=Maanen, given=Leendert, prefix=van, useprefix=true and family=Ravenzwaaij, given=Don, prefix=van, useprefix=true and Vandekerckhove, Joachim and Visser, Ingmar and Voss, Andreas and White, Corey N. and Wiecki, Thomas V. and Rieskamp, Jörg and Donkin, Chris},
date = {2019-08-01},
journaltitle = {Psychonomic Bulletin \& Review},
shortjournal = {Psychon Bull Rev},
volume = {26},
number = {4},
pages = {1051--1069},
issn = {1531-5320},
doi = {10.3758/s13423-017-1417-2},
url = {https://doi.org/10.3758/s13423-017-1417-2},
urldate = {2021-05-10},
abstract = {Most data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables including 1) ease of processing, 2) response caution, 3) response bias, and 4) non-decision time. Inferences about these psychological factors, hinge upon the validity of the models’ parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants’ behavior in a two-alternative forced choice task. The contributing teams were blind to the manipulations, and had to infer what aspect of behavior was changed using their method of choice. The contributors chose to employ a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across different methods, these "modeler’s degrees of freedom" did affect their inferences. Interestingly, many of the simpler approaches yielded as robust and accurate inferences as the more complex methods. We recommend that, in general, cognitive models become a typical analysis tool for response time data. In particular, we argue that the simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and discuss potential pitfalls when interpreting the output from response time models.},
langid = {english}
}
@article{etzHowBecomeBayesian2016,
title = {How to Become a {{Bayesian}} in Eight Easy Steps: {{An}} Annotated Reading List},
shorttitle = {How to Become a {{Bayesian}} in Eight Easy Steps},
author = {Etz, Alexander and Gronau, Quentin Frederik and Dablander, Fabian and Edelsbrunner, Peter and Baribault, Beth},
date = {2016-08-15T20:41:08},
publisher = {{PsyArXiv}},
doi = {10.31234/osf.io/ph6sw},
url = {https://psyarxiv.com/ph6sw/},
urldate = {2021-03-01},
abstract = {In this guide, we present a reading list to serve as a concise introduction to Bayesian data analysis. The introduction is geared toward reviewers, editors, and interested researchers who are new to Bayesian statistics. We provide commentary for eight recommended sources, which together cover the theoretical and practical cornerstones of Bayesian statistics in psychology and related sciences.},
keywords = {Bayes Factor,Bayesian Inference,Bayesian Statistics,Posterior Probability,psyarxiv,Quantitative Methods,Social and Behavioral Sciences,Theory and Philosophy of Science}
}
@article{etzIntroductionBayesianInference2018,
title = {Introduction to {{Bayesian Inference}} for {{Psychology}}},
author = {Etz, Alexander and Vandekerckhove, Joachim},
date = {2018-02-01},
journaltitle = {Psychonomic Bulletin \& Review},
shortjournal = {Psychon Bull Rev},
volume = {25},
number = {1},
pages = {5--34},
issn = {1531-5320},
doi = {10.3758/s13423-017-1262-3},
url = {https://doi.org/10.3758/s13423-017-1262-3},
urldate = {2021-03-01},
abstract = {We introduce the fundamental tenets of Bayesian inference, which derive from two basic laws of probability theory. We cover the interpretation of probabilities, discrete and continuous versions of Bayes’ rule, parameter estimation, and model comparison. Using seven worked examples, we illustrate these principles and set up some of the technical background for the rest of this special issue of Psychonomic Bulletin \& Review. Supplemental material is available via https://osf.io/wskex/.},
langid = {english}
}
@article{falconerBalancingMindVestibular2012,
title = {Balancing the Mind: {{Vestibular}} Induced Facilitation of Egocentric Mental Transformations},
shorttitle = {Balancing the Mind},
author = {Falconer, Caroline J. and Mast, Fred W.},
date = {2012},
journaltitle = {Experimental Psychology},
volume = {59},
number = {6},
pages = {332--339},
publisher = {{Hogrefe Publishing}},
location = {{Germany}},
issn = {2190-5142(Electronic),1618-3169(Print)},
doi = {10.1027/1618-3169/a000161},
abstract = {The body schema is a key component in accomplishing egocentric mental transformations, which rely on bodily reference frames. These reference frames are based on a plurality of different cognitive and sensory cues among which the vestibular system plays a prominent role. We investigated whether a bottom-up influence of vestibular stimulation modulates the ability to perform egocentric mental transformations. Participants were significantly faster to make correct spatial judgments during vestibular stimulation as compared to sham stimulation. Interestingly, no such effects were found for mental transformation of hand stimuli or during mental transformations of letters, thus showing a selective influence of vestibular stimulation on the rotation of whole-body reference frames. Furthermore, we found an interaction with the angle of rotation and vestibular stimulation demonstrating an increase in facilitation during mental body rotations in a direction congruent with rightward vestibular afferents. We propose that facilitation reflects a convergence in shared brain areas that process bottom-up vestibular signals and top-down imagined whole-body rotations, including the precuneus and tempero-parietal junction. Ultimately, our results show that vestibular information can influence higher-order cognitive processes, such as the body schema and mental imagery. (PsycINFO Database Record (c) 2016 APA, all rights reserved)},
keywords = {Egocentrism,Mental Rotation,Schema,Somesthetic Stimulation,Spatial Ability},
file = {/Users/andrew/Zotero/storage/N7FR6UBS/2012-31133-003.html}
}
@article{fardBayesianReformulationExtended2017,
title = {A {{Bayesian Reformulation}} of the {{Extended Drift-Diffusion Model}} in {{Perceptual Decision Making}}},
author = {Fard, Pouyan R. and Park, Hame and Warkentin, Andrej and Kiebel, Stefan J. and Bitzer, Sebastian},
date = {2017},
journaltitle = {Frontiers in Computational Neuroscience},
shortjournal = {Front. Comput. Neurosci.},
volume = {11},
publisher = {{Frontiers}},
issn = {1662-5188},
doi = {10.3389/fncom.2017.00029},
url = {https://www.frontiersin.org/articles/10.3389/fncom.2017.00029/full},
urldate = {2021-03-31},
abstract = {Perceptual decision making can be described as a process of accumulating evidence to a bound which has been formalized within drift-diffusion models. Recently, an equivalent Bayesian model has been proposed. In contrast to standard drift-diffusion models, this Bayesian model directly links information in the stimulus to the decision process. Here, we extend this Bayesian model further and allow inter-trial variability of two parameters following the extended version of the drift-diffusion model. We derive parameter distributions for the Bayesian model and show that they lead to predictions that are qualitatively equivalent to those made by the extended drift-diffusion model. Further, we demonstrate the usefulness of the extended Bayesian model for the analysis of concrete behavioral data. Specifically, using Bayesian model selection, we find evidence that including additional inter-trial parameter variability provides for a better model, when the model is constrained by trial-wise stimulus features. This result is remarkable because it was derived using just 200 trials per condition, which is typically thought to be insufficient for identifying variability parameters in drift-diffusion models. In sum, we present a Bayesian analysis, which provides for a novel and promising analysis of perceptual decision making experiments.},
langid = {english},
keywords = {Bayesian Models,drift-diffusion model,exact input modeling,Model Comparison,parameter fitting,perceptual decision making,single-trial models}
}
@incollection{farrellIntroductionCognitiveModeling2015,
title = {An {{Introduction}} to {{Cognitive Modeling}}},
booktitle = {An {{Introduction}} to {{Model-Based Cognitive Neuroscience}}},
author = {Farrell, Simon and Lewandowsky, Stephan},
editor = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
date = {2015},
pages = {3--24},
publisher = {{Springer New York}},
location = {{New York, NY}},
doi = {10.1007/978-1-4939-2236-9_1},
url = {https://doi.org/10.1007/978-1-4939-2236-9_1},
urldate = {2019-03-20},
abstract = {We provide a tutorial on the basic attributes of computational cognitive models—models that are formulated as a set of mathematical equations or as a computer simulation. We first show how models can generate complex behavior and novel insights from very simple underlying assumptions about human cognition. We survey the different classes of models, from description to explanation, and present examples of each class. We then illustrate the reasons why computational models are preferable to purely verbal means of theorizing. For example, we show that computational models help theoreticians overcome the limitations of human cognition, thereby enabling us to create coherent and plausible accounts of how we think or remember and guard against subtle theoretical errors. Models can also measure latent constructs and link them to individual differences, which would escape detection if only the raw data were considered. We conclude by reviewing some open challenges.},
isbn = {978-1-4939-2236-9},
langid = {english},
keywords = {Agent-based modelling,Computational models,Model comparison,Necessity,Parameter interpretation,Practice,Scientific reasoning}
}
@article{faulkenberryBayesianInferenceNumerical2020,
title = {Bayesian {{Inference}} in {{Numerical Cognition}}: {{A Tutorial Using JASP}}},
shorttitle = {Bayesian {{Inference}} in {{Numerical Cognition}}},
author = {Faulkenberry, Thomas J. and Ly, Alexander and Wagenmakers, Eric-Jan},
date = {2020-09-09},
journaltitle = {Journal of Numerical Cognition},
volume = {6},
number = {2},
pages = {231--259},
issn = {2363-8761},
doi = {10.5964/jnc.v6i2.288},
url = {https://jnc.psychopen.eu/index.php/jnc/article/view/5903},
urldate = {2022-05-10},
langid = {american},
keywords = {Bayes factors,Bayesian inference,JASP,numerical cognition,tutorial},
file = {/Users/andrew/Zotero/storage/XERKKEC7/Faulkenberry et al. - 2020 - Bayesian Inference in Numerical Cognition A Tutor.pdf}
}
@article{feldmanNewSpinSpatial2020,
title = {A {{New Spin}} on {{Spatial Cognition}} in {{ADHD}}: {{A Diffusion Model Decomposition}} of {{Mental Rotation}}},
shorttitle = {A {{New Spin}} on {{Spatial Cognition}} in {{ADHD}}},
author = {Feldman, Jason S. and Huang-Pollock, Cynthia},
date = {2020},
journaltitle = {Journal of the International Neuropsychological Society},
pages = {1--12},
publisher = {{Cambridge University Press}},
issn = {1355-6177, 1469-7661},
doi = {10.1017/S1355617720001198},
url = {https://www.cambridge.org/core/journals/journal-of-the-international-neuropsychological-society/article/abs/new-spin-on-spatial-cognition-in-adhd-a-diffusion-model-decomposition-of-mental-rotation/DB35A44AF99DF3D2AB08EBA6E655B650},
urldate = {2021-05-17},
abstract = {Objectives: Multiple studies have found evidence of task non-specific slow drift rate in ADHD, and slow drift rate has rapidly become one of the most visible cognitive hallmarks of the disorder. In this study, we use the diffusion model to determine whether atypicalities in visuospatial cognitive processing exist independently of slow drift rate. Methods: Eight- to twelve-year-old children with (n = 207) and without ADHD (n = 99) completed a 144-trial mental rotation task. Results: Performance of children with ADHD was less accurate and more variable than non-ADHD controls, but there were no group differences in mean response time. Drift rate was slower, but nondecision time was faster for children with ADHD. A Rotation × ADHD interaction for boundary separation was also found in which children with ADHD did not strategically adjust their response thresholds to the same degree as non-ADHD controls. However, the Rotation × ADHD interaction was not significant for nondecision time, which would have been the primary indicator of a specific deficit in mental rotation per se. Conclusions: Poorer performance on the mental rotation task was due to slow rate of evidence accumulation, as well as relative inflexibility in adjusting boundary separation, but not to impaired visuospatial processing specifically. We discuss the implications of these findings for future cognitive research in ADHD.},
langid = {english},
keywords = {ADHD,Boundary separation,Children,Drift rate,Neuropsychology,Visuospatial reasoning}
}
@article{fenglerLikelihoodApproximationNetworks2020,
title = {Likelihood {{Approximation Networks}} ({{LANs}}) for {{Fast Inference}} of {{Simulation Models}} in {{Cognitive Neuroscience}}},
author = {Fengler, Alexander and Govindarajan, Lakshmi N. and Chen, Tony and Frank, Michael J.},
date = {2020-12-02},
journaltitle = {bioRxiv},
pages = {2020.11.20.392274},
publisher = {{Cold Spring Harbor Laboratory}},
doi = {10.1101/2020.11.20.392274},
url = {https://www.biorxiv.org/content/10.1101/2020.11.20.392274v2},
urldate = {2021-03-18},
abstract = {In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral/brain data. Pragmatically, however, the space of plausible generative models considered is dramatically limited by the set of models with known likelihood functions. For many models, the lack of a closed-form likelihood typically impedes Bayesian inference methods. As a result, standard models are evaluated for convenience, even when other models might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, allowing fast posterior sampling with only a one-off cost for model simulations that is amortized for future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training.},
langid = {english},
file = {/Users/andrew/Zotero/storage/FC6TIR27/2020.11.20.html}
}
@article{feuerriegelPredictiveActivationSensory2021,
title = {Predictive Activation of Sensory Representations as a Source of Evidence in Perceptual Decision-Making},
author = {Feuerriegel, Daniel and Blom, Tessel and Hogendoorn, Hinze},
date = {2021-03-01},
journaltitle = {Cortex},
shortjournal = {Cortex},
volume = {136},
pages = {140--146},
issn = {0010-9452},
doi = {10.1016/j.cortex.2020.12.008},
url = {https://www.sciencedirect.com/science/article/pii/S0010945220304494},
urldate = {2022-02-23},
abstract = {Our brains can represent expected future states of our sensory environment. Recent work has shown that, when we expect a specific stimulus to appear at a specific time, we can predictively generate neural representations of that stimulus even before it is physically presented. These observations raise two exciting questions: Are pre-activated sensory representations used for perceptual decision-making? And, do we transiently perceive an expected stimulus that does not actually appear? To address these questions, we propose that pre-activated neural representations provide sensory evidence that is used for perceptual decision-making. This can be understood within the framework of the Diffusion Decision Model as an early accumulation of decision evidence in favour of the expected percept. Our proposal makes novel predictions relating to expectation effects on neural markers of decision evidence accumulation, and also provides an explanation for why we sometimes perceive stimuli that are expected, but do not appear.},
langid = {english},
keywords = {Decision-making,Expectation,MVPA,Perception,Prediction},
file = {/Users/andrew/Zotero/storage/W3QBJZB8/Feuerriegel et al. - 2021 - Predictive activation of sensory representations a.pdf;/Users/andrew/Zotero/storage/SUAAWVII/S0010945220304494.html}
}
@online{FirstLessonBayesian,
title = {A {{First Lesson}} in {{Bayesian Inference}}},
url = {http://lmpp10e-mucesm.srv.mwn.de:3838/felix/BayesLessons/BayesianLesson1.Rmd},
urldate = {2021-03-01},
file = {/Users/andrew/Zotero/storage/FHZCVJDM/BayesianLesson1.html}
}
@incollection{forstmannIntroductionHumanBrain2015,
title = {An {{Introduction}} to {{Human Brain Anatomy}}},
booktitle = {An {{Introduction}} to {{Model-Based Cognitive Neuroscience}}},
author = {Forstmann, Birte U. and Keuken, Max C. and Alkemade, Anneke},
editor = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
date = {2015},
pages = {71--89},
publisher = {{Springer New York}},
location = {{New York, NY}},
doi = {10.1007/978-1-4939-2236-9_4},
url = {https://doi.org/10.1007/978-1-4939-2236-9_4},
urldate = {2019-03-20},
abstract = {This tutorial chapter provides an overview of the human brain anatomy. Knowledge of brain anatomy is fundamental to our understanding of cognitive processes in health and disease; moreover, anatomical constraints are vital for neurocomputational models and can be important for psychological theorizing as well. The main challenge in understanding brain anatomy is to integrate the different levels of description ranging from molecules to macroscopic brain networks. This chapter contains three main sections. The first section provides a brief introduction to the neuroanatomical nomenclature. The second section provides an introduction to the different levels of brain anatomy and describes commonly used atlases for the visualization of functional imaging data. The third section provides a concrete example of how human brain structure relates to performance.},
isbn = {978-1-4939-2236-9},
langid = {english},
keywords = {Connectional neuroanatomy,functional MRI,Neuroanatomical atlases,Sectional neuroanatomy,Structural MRI,Structure-function relationships,Ultra high resolution MRI}
}
@incollection{forstmannModelBasedCognitiveNeuroscience2015,
title = {Model-{{Based Cognitive Neuroscience}}: {{A Conceptual Introduction}}},
shorttitle = {Model-{{Based Cognitive Neuroscience}}},
booktitle = {An {{Introduction}} to {{Model-Based Cognitive Neuroscience}}},
author = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
editor = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
date = {2015},
pages = {139--156},
publisher = {{Springer New York}},
location = {{New York, NY}},
doi = {10.1007/978-1-4939-2236-9_7},
url = {https://doi.org/10.1007/978-1-4939-2236-9_7},
urldate = {2019-03-20},
abstract = {This tutorial chapter shows how the separate fields of mathematical psychology and cognitive neuroscience can interact to their mutual benefit. Historically, the field of mathematical psychology is mostly concerned with formal theories of behavior, whereas cognitive neuroscience is mostly concerned with empirical measurements of brain activity. Despite these superficial differences in method, the ultimate goal of both disciplines is the same: to understand the workings of human cognition. In recognition of this common purpose, mathematical psychologists have recently started to apply their models in cognitive neuroscience, and cognitive neuroscientists have borrowed and extended key ideas that originated from mathematical psychology. This chapter consists of three main sections: the first describes the field of mathematical psychology, the second describes the field of cognitive neuroscience, and the third describes their recent combination: model-based cognitive neuroscience.},
isbn = {978-1-4939-2236-9},
langid = {english},
keywords = {Blood Oxygenation Level Dependent,Blood Oxygenation Level Dependent Signal,Cognitive Neuroscience,Drift Rate,Mathematical Psychology}
}
@article{forstmannSequentialSamplingModels2016,
title = {Sequential {{Sampling Models}} in {{Cognitive Neuroscience}}: {{Advantages}}, {{Applications}}, and {{Extensions}}},
shorttitle = {Sequential {{Sampling Models}} in {{Cognitive Neuroscience}}},
author = {Forstmann, B.U. and Ratcliff, R. and Wagenmakers, E.-J.},
date = {2016-01-04},
journaltitle = {Annual Review of Psychology},
volume = {67},
number = {1},
pages = {641--666},
issn = {0066-4308, 1545-2085},
doi = {10.1146/annurev-psych-122414-033645},
url = {http://www.annualreviews.org/doi/10.1146/annurev-psych-122414-033645},
urldate = {2020-04-29},
abstract = {Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model—the diffusion decision model—is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences.},
langid = {english}
}
@article{forstmannSequentialSamplingModels2016a,
title = {Sequential {{Sampling Models}} in {{Cognitive Neuroscience}}: {{Advantages}}, {{Applications}}, and {{Extensions}}},
shorttitle = {Sequential {{Sampling Models}} in {{Cognitive Neuroscience}}},
author = {Forstmann, B.U. and Ratcliff, R. and Wagenmakers, E.-J.},
date = {2016},
journaltitle = {Annual review of psychology},
shortjournal = {Annu Rev Psychol},
volume = {67},
eprint = {26393872},
eprinttype = {pmid},
pages = {641--666},
issn = {0066-4308},
doi = {10.1146/annurev-psych-122414-033645},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5112760/},
urldate = {2022-04-04},
abstract = {Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model—the diffusion decision model—is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences.},
pmcid = {PMC5112760},
file = {/Users/andrew/Zotero/storage/2V2LTLT3/Forstmann et al. - 2016 - Sequential Sampling Models in Cognitive Neuroscien.pdf}
}
@article{frankeBayesianRegressionModeling2019,
title = {Bayesian Regression Modeling (for Factorial Designs): {{A}} Tutorial},
shorttitle = {Bayesian Regression Modeling (for Factorial Designs)},
author = {Franke, Michael and Roettger, Timo B.},
date = {2019-07-13T18:36:33},
publisher = {{PsyArXiv}},
doi = {10.31234/osf.io/cdxv3},
url = {https://psyarxiv.com/cdxv3/},
urldate = {2022-02-23},
abstract = {Generalized linear mixed models are handy tools for statistical inference, and Bayesian approaches to applying these become increasingly popular. This tutorial provides an accessible, non-technical introduction to the use and feel of Bayesian mixed effects regression models. The focus is on data from a factorial-design experiment.},
langid = {american},
keywords = {bayesian,factorial design,multilevel regression,parameter estimation,R,Social and Behavioral Sciences}
}
@article{frankeBayesianRegressionModeling2019a,
title = {Bayesian Regression Modeling (for Factorial Designs): {{A}} Tutorial},
shorttitle = {Bayesian Regression Modeling (for Factorial Designs)},
author = {Franke, Michael and Roettger, Timo B.},
date = {2019-07-13T18:36:33},
publisher = {{PsyArXiv}},
doi = {10.31234/osf.io/cdxv3},
url = {https://psyarxiv.com/cdxv3/},
urldate = {2022-02-14},
abstract = {Generalized linear mixed models are handy tools for statistical inference, and Bayesian approaches to applying these become increasingly popular. This tutorial provides an accessible, non-technical introduction to the use and feel of Bayesian mixed effects regression models. The focus is on data from a factorial-design experiment.},
langid = {american},
keywords = {bayesian,factorial design,multilevel regression,parameter estimation,R,Social and Behavioral Sciences}
}
@incollection{frankLinkingLevelsComputation2015,
title = {Linking {{Across Levels}} of {{Computation}} in {{Model-Based Cognitive Neuroscience}}},
booktitle = {An {{Introduction}} to {{Model-Based Cognitive Neuroscience}}},
author = {Frank, Michael J.},
editor = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
date = {2015},
pages = {159--177},
publisher = {{Springer New York}},
location = {{New York, NY}},
doi = {10.1007/978-1-4939-2236-9_8},
url = {https://doi.org/10.1007/978-1-4939-2236-9_8},
urldate = {2019-03-20},
abstract = {Computational approaches to cognitive neuroscience encompass multiple levels of analysis, from detailed biophysical models of neural activity to abstract algorithmic or normative models of cognition, with several levels in between. Despite often strong opinions on the ‘right’ level of modeling, there is no single panacea: attempts to link biological with higher level cognitive processes require a multitude of approaches. Here I argue that these disparate approaches should not be viewed as competitive, nor should they be accessible to only other researchers already endorsing the particular level of modeling. Rather, insights gained from one level of modeling should inform modeling endeavors at the level above and below it. One way to achieve this synergism is to link levels of modeling by quantitatively fitting the behavioral outputs of detailed mechanistic models with higher level descriptions. If the fits are reasonable (e.g., similar to those achieved when applying high level models to human behavior), one can then derive plausible links between mechanism and computation. Model-based cognitive neuroscience approaches can then be employed to manipulate or measure neural function motivated by the candidate mechanisms, and to test whether these are related to high level model parameters. I describe several examples of this approach in the domain of reward-based learning, cognitive control, and decision making and show how neural and algorithmic models have each informed or refined the other.},
isbn = {978-1-4939-2236-9},
langid = {english},
keywords = {Algorithms,Basal ganglia,Computational models,Decision making,Dopamine,Neural networks,Prefrontal cortex,Reinforcement learning}
}
@article{ganisNewSetThreeDimensional2015,
title = {A {{New Set}} of {{Three-Dimensional Shapes}} for {{Investigating Mental Rotation Processes}}: {{Validation Data}} and {{Stimulus Set}}},
shorttitle = {A {{New Set}} of {{Three-Dimensional Shapes}} for {{Investigating Mental Rotation Processes}}},
author = {Ganis, Giorgio and Kievit, Rogier},
date = {2015-03-13},
journaltitle = {Journal of Open Psychology Data},
volume = {3},
number = {1},
pages = {e3},
publisher = {{Ubiquity Press}},
issn = {2050-9863},
doi = {10.5334/jopd.ai},
url = {http://openpsychologydata.metajnl.com/articles/10.5334/jopd.ai/},
urldate = {2021-03-16},
abstract = {Mental rotation is one of the most influential paradigms in the history of cognitive psychology. In this paper, we present a new set of validated mental rotation stimuli to be used freely by the scientific community. Three-dimensional visual rendering software was employed to generate a total of 384 realistic-looking mental rotation stimuli with shading and foreshortening depth cues. Each stimulus was composed of two pictures: a baseline object and a target object, placed side by side, which can be aligned by means of a rotation around the vertical axis in half of the stimuli but not in the other half. Behavioral data (N=54, freely available) based on these stimuli exhibited the typical linear increase in response times and error rates with angular disparity, validating the stimulus set. This set of stimuli is especially useful for studies where it is necessary to avoid stimulus repetition, such as training studies.},
issue = {1},
langid = {english},
keywords = {Mental rotation,{visual spatial skills, generalization}},
file = {/Users/andrew/Zotero/storage/52TGEV4U/jopd.ai.html}
}
@book{gelmanBayesianDataAnalysis2014,
title = {Bayesian Data Analysis},
author = {Gelman, Andrew and Carlin, John B. and Stern, Hal S. and Dunson, David B. and Vehtari, Aki and Rubin, Donald B.},
date = {2014},
series = {Chapman \& {{Hall}}/{{CRC}} Texts in Statistical Science},
edition = {Third edition},
publisher = {{CRC Press}},
location = {{Boca Raton}},
abstract = {This book is intended to have three roles and to serve three associated audiences: an introductory text on Bayesian inference starting from first principles, a graduate text on effective current approaches to Bayesian modeling and computation in statistics and related fields, and a handbook of Bayesian methods in applied statistics for general users of and researchers in applied statistics. Although introductory in its early sections, the book is definitely not elementary in the sense of a first text in statistics. The mathematics used in our book is basic probability and statistics, elementary calculus, and linear algebra. A review of probability notation is given in Chapter 1 along with a more detailed list of topics assumed to have been studied. The practical orientation of the book means that the reader's previous experience in probability, statistics, and linear algebra should ideally have included strong computational components. To write an introductory text alone would leave many readers with only a taste of the conceptual elements but no guidance for venturing into genuine practical applications, beyond those where Bayesian methods agree essentially with standard non-Bayesian analyses. On the other hand, we feel it would be a mistake to present the advanced methods without first introducing the basic concepts from our data-analytic perspective. Furthermore, due to the nature of applied statistics, a text on current Bayesian methodology would be incomplete without a variety of worked examples drawn from real applications. To avoid cluttering the main narrative, there are bibliographic notes at the end of each chapter and references at the end of the book.},
isbn = {978-1-4398-4095-5},
pagetotal = {661},
keywords = {Bayesian statistical decision theory,MATHEMATICS / Probability \& Statistics / General}
}
@article{gelmanRsquaredBayesianRegression2019c,
title = {R-Squared for {{Bayesian Regression Models}}},
author = {Gelman, Andrew and Goodrich, Ben and Gabry, Jonah and Vehtari, Aki},
date = {2019-07-03},
journaltitle = {The American Statistician},
volume = {73},
number = {3},
pages = {307--309},
publisher = {{Taylor \& Francis}},
issn = {0003-1305},
doi = {10.1080/00031305.2018.1549100},
url = {https://doi.org/10.1080/00031305.2018.1549100},
urldate = {2021-05-28},
abstract = {The usual definition of R2 (variance of the predicted values divided by the variance of the data) has a problem for Bayesian fits, as the numerator can be larger than the denominator. We propose an alternative definition similar to one that has appeared in the survival analysis literature: the variance of the predicted values divided by the variance of predicted values plus the expected variance of the errors.},
keywords = {Bayesian methods,R-squared,Regression},
file = {/Users/andrew/Zotero/storage/MRCS9Q3Y/00031305.2018.html}
}
@article{gigerenzerMindlessStatistics2004,
title = {Mindless Statistics},
author = {Gigerenzer, Gerd},
date = {2004-11},
journaltitle = {The Journal of Socio-Economics},
volume = {33},
number = {5},
pages = {587--606},
issn = {10535357},
doi = {10.1016/j.socec.2004.09.033},
url = {https://linkinghub.elsevier.com/retrieve/pii/S1053535704000927},
urldate = {2019-02-11},
abstract = {Statistical rituals largely eliminate statistical thinking in the social sciences. Rituals are indispensable for identification with social groups, but they should be the subject rather than the procedure of science. What I call the “null ritual” consists of three steps: (1) set up a statistical null hypothesis, but do not specify your own hypothesis nor any alternative hypothesis, (2) use the 5\% significance level for rejecting the null and accepting your hypothesis, and (3) always perform this procedure. I report evidence of the resulting collective confusion and fears about sanctions on the part of students and teachers, researchers and editors, as well as textbook writers.},
langid = {english}
}
@article{gigerenzerStatisticalRitualsReplication2018a,
title = {Statistical {{Rituals}}: {{The Replication Delusion}} and {{How We Got There}}},
shorttitle = {Statistical {{Rituals}}},
author = {Gigerenzer, Gerd},
date = {2018-06-01},
journaltitle = {Advances in Methods and Practices in Psychological Science},
shortjournal = {Advances in Methods and Practices in Psychological Science},
volume = {1},
number = {2},
pages = {198--218},
publisher = {{SAGE Publications Inc}},
issn = {2515-2459},
doi = {10.1177/2515245918771329},
url = {https://doi.org/10.1177/2515245918771329},
urldate = {2021-03-01},
abstract = {The “replication crisis” has been attributed to misguided external incentives gamed by researchers (the strategic-game hypothesis). Here, I want to draw attention to a complementary internal factor, namely, researchers’ widespread faith in a statistical ritual and associated delusions (the statistical-ritual hypothesis). The “null ritual,” unknown in statistics proper, eliminates judgment precisely at points where statistical theories demand it. The crucial delusion is that the p value specifies the probability of a successful replication (i.e., 1 – p), which makes replication studies appear to be superfluous. A review of studies with 839 academic psychologists and 991 students shows that the replication delusion existed among 20\% of the faculty teaching statistics in psychology, 39\% of the professors and lecturers, and 66\% of the students. Two further beliefs, the illusion of certainty (e.g., that statistical significance proves that an effect exists) and Bayesian wishful thinking (e.g., that the probability of the alternative hypothesis being true is 1 – p), also make successful replication appear to be certain or almost certain, respectively. In every study reviewed, the majority of researchers (56\%–97\%) exhibited one or more of these delusions. Psychology departments need to begin teaching statistical thinking, not rituals, and journal editors should no longer accept manuscripts that report results as “significant” or “not significant.”},
langid = {english},
keywords = {illusion of certainty,null ritual,p value,p-hacking,replication}
}
@article{goldNeuralBasisDecision2007a,
title = {The {{Neural Basis}} of {{Decision Making}}},
author = {Gold, Joshua I. and Shadlen, Michael N.},
date = {2007-07-01},
journaltitle = {Annual Review of Neuroscience},
shortjournal = {Annu. Rev. Neurosci.},
volume = {30},
number = {1},
pages = {535--574},
issn = {0147-006X, 1545-4126},
doi = {10.1146/annurev.neuro.29.051605.113038},
url = {https://www.annualreviews.org/doi/10.1146/annurev.neuro.29.051605.113038},
urldate = {2023-04-21},
abstract = {The study of decision making spans such varied fields as neuroscience, psychology, economics, statistics, political science, and computer science. Despite this diversity of applications, most decisions share common elements including deliberation and commitment. Here we evaluate recent progress in understanding how these basic elements of decision formation are implemented in the brain. We focus on simple decisions that can be studied in the laboratory but emphasize general principles likely to extend to other settings.},
langid = {english},
file = {/Users/andrew/Zotero/storage/9DNSZWHA/Gold and Shadlen - 2007 - The Neural Basis of Decision Making.pdf}
}
@article{grabherrMentalTransformationAbilities2011,
title = {Mental Transformation Abilities in Patients with Unilateral and Bilateral Vestibular Loss},
author = {Grabherr, Luzia and Cuffel, Cyril and Guyot, Jean-Philippe and Mast, Fred W.},
date = {2011-03},
journaltitle = {Experimental Brain Research},
shortjournal = {Exp Brain Res},
volume = {209},
number = {2},
eprint = {21287158},
eprinttype = {pmid},
pages = {205--214},
issn = {1432-1106},
doi = {10.1007/s00221-011-2535-0},
abstract = {Vestibular information helps to establish a reliable gravitational frame of reference and contributes to the adequate perception of the location of one's own body in space. This information is likely to be required in spatial cognitive tasks. Indeed, previous studies suggest that the processing of vestibular information is involved in mental transformation tasks in healthy participants. In this study, we investigate whether patients with bilateral or unilateral vestibular loss show impaired ability to mentally transform images of bodies and body parts compared to a healthy, age-matched control group. An egocentric and an object-based mental transformation task were used. Moreover, spatial perception was assessed using a computerized version of the subjective visual vertical and the rod and frame test. Participants with bilateral vestibular loss showed impaired performance in mental transformation, especially in egocentric mental transformation, compared to participants with unilateral vestibular lesions and the control group. Performance of participants with unilateral vestibular lesions and the control group are comparable, and no differences were found between right- and left-sided labyrinthectomized patients. A control task showed no differences between the three groups. The findings from this study substantiate that central vestibular processes are involved in imagined spatial body transformations; but interestingly, only participants with bilateral vestibular loss are affected, whereas unilateral vestibular loss does not lead to a decline in spatial imagery.},
langid = {english},
keywords = {Adult,Aged,Analysis of Variance,Female,Humans,Imagination,Male,Middle Aged,Orientation,Psychomotor Performance,Reaction Time,Space Perception,Surveys and Questionnaires,Vestibular Diseases}
}
@article{gronauTutorialBridgeSampling2017a,
title = {A Tutorial on Bridge Sampling},
author = {Gronau, Quentin F. and Sarafoglou, Alexandra and Matzke, Dora and Ly, Alexander and Boehm, Udo and Marsman, Maarten and Leslie, David S. and Forster, Jonathan J. and Wagenmakers, Eric-Jan and Steingroever, Helen},
date = {2017-12-01},
journaltitle = {Journal of Mathematical Psychology},
shortjournal = {Journal of Mathematical Psychology},
volume = {81},
pages = {80--97},
issn = {0022-2496},
doi = {10.1016/j.jmp.2017.09.005},
url = {https://www.sciencedirect.com/science/article/pii/S0022249617300640},
urldate = {2021-05-03},
abstract = {The marginal likelihood plays an important role in many areas of Bayesian statistics such as parameter estimation, model comparison, and model averaging. In most applications, however, the marginal likelihood is not analytically tractable and must be approximated using numerical methods. Here we provide a tutorial on bridge sampling (Bennett, 1976; Meng \& Wong, 1996), a reliable and relatively straightforward sampling method that allows researchers to obtain the marginal likelihood for models of varying complexity. First, we introduce bridge sampling and three related sampling methods using the beta-binomial model as a running example. We then apply bridge sampling to estimate the marginal likelihood for the Expectancy Valence (EV) model—a popular model for reinforcement learning. Our results indicate that bridge sampling provides accurate estimates for both a single participant and a hierarchical version of the EV model. We conclude that bridge sampling is an attractive method for mathematical psychologists who typically aim to approximate the marginal likelihood for a limited set of possibly high-dimensional models.},
langid = {english},
keywords = {Bayes factor,Hierarchical model,Marginal likelihood,Normalizing constant,Predictive accuracy,Reinforcement learning},
file = {/Users/andrew/Zotero/storage/8MGSE8Q5/S0022249617300640.html}
}
@article{guestHowComputationalModeling2021,
title = {How {{Computational Modeling Can Force Theory Building}} in {{Psychological Science}}},
author = {Guest, Olivia and Martin, Andrea E.},
date = {2021-01-22},
journaltitle = {Perspectives on Psychological Science},
shortjournal = {Perspect Psychol Sci},
pages = {1745691620970585},
publisher = {{SAGE Publications Inc}},
issn = {1745-6916},
doi = {10.1177/1745691620970585},
url = {https://doi.org/10.1177/1745691620970585},
urldate = {2021-02-22},
abstract = {Psychology endeavors to develop theories of human capacities and behaviors on the basis of a variety of methodologies and dependent measures. We argue that one of the most divisive factors in psychological science is whether researchers choose to use computational modeling of theories (over and above data) during the scientific-inference process. Modeling is undervalued yet holds promise for advancing psychological science. The inherent demands of computational modeling guide us toward better science by forcing us to conceptually analyze, specify, and formalize intuitions that otherwise remain unexamined—what we dub open theory. Constraining our inference process through modeling enables us to build explanatory and predictive theories. Here, we present scientific inference in psychology as a path function in which each step shapes the next. Computational modeling can constrain these steps, thus advancing scientific inference over and above the stewardship of experimental practice (e.g., preregistration). If psychology continues to eschew computational modeling, we predict more replicability crises and persistent failure at coherent theory building. This is because without formal modeling we lack open and transparent theorizing. We also explain how to formalize, specify, and implement a computational model, emphasizing that the advantages of modeling can be achieved by anyone with benefit to all.},
langid = {english},
keywords = {computational model,open science,scientific inference,theoretical psychology},
file = {/Users/andrew/Zotero/storage/SVPEKCYE/Guest and Martin - 2021 - How Computational Modeling Can Force Theory Buildi.pdf}
}
@article{guestHowComputationalModeling2021a,
title = {How {{Computational Modeling Can Force Theory Building}} in {{Psychological Science}}},
author = {Guest, Olivia and Martin, Andrea E.},
date = {2021-07-01},
journaltitle = {Perspectives on Psychological Science},
shortjournal = {Perspect Psychol Sci},
volume = {16},
number = {4},
pages = {789--802},
publisher = {{SAGE Publications Inc}},
issn = {1745-6916},
doi = {10.1177/1745691620970585},
url = {https://doi.org/10.1177/1745691620970585},
urldate = {2022-02-14},
abstract = {Psychology endeavors to develop theories of human capacities and behaviors on the basis of a variety of methodologies and dependent measures. We argue that one of the most divisive factors in psychological science is whether researchers choose to use computational modeling of theories (over and above data) during the scientific-inference process. Modeling is undervalued yet holds promise for advancing psychological science. The inherent demands of computational modeling guide us toward better science by forcing us to conceptually analyze, specify, and formalize intuitions that otherwise remain unexamined—what we dub open theory. Constraining our inference process through modeling enables us to build explanatory and predictive theories. Here, we present scientific inference in psychology as a path function in which each step shapes the next. Computational modeling can constrain these steps, thus advancing scientific inference over and above the stewardship of experimental practice (e.g., preregistration). If psychology continues to eschew computational modeling, we predict more replicability crises and persistent failure at coherent theory building. This is because without formal modeling we lack open and transparent theorizing. We also explain how to formalize, specify, and implement a computational model, emphasizing that the advantages of modeling can be achieved by anyone with benefit to all.},
langid = {english},
keywords = {computational model,open science,scientific inference,theoretical psychology}
}
@article{hainesLearningReliabilityParadox2020,
title = {Learning from the {{Reliability Paradox}}: {{How Theoretically Informed Generative Models Can Advance}} the {{Social}}, {{Behavioral}}, and {{Brain Sciences}}},
shorttitle = {Learning from the {{Reliability Paradox}}},
author = {Haines, Nathaniel and Kvam, Peter D. and Irving, Louis H. and Smith, Colin and Beauchaine, Theodore P. and Pitt, Mark A. and Ahn, Woo-Young and Turner, Brandon},
date = {2020-08-24T13:56:49},
publisher = {{PsyArXiv}},
doi = {10.31234/osf.io/xr7y3},
url = {https://psyarxiv.com/xr7y3/},
urldate = {2021-03-08},
abstract = {Behavioral tasks (e.g., Stroop task) that produce replicable group-level effects (e.g., Stroop effect) often fail to reliably capture individual differences between participants (e.g., low test-retest reliability). This “reliability paradox” has led many researchers to conclude that most behavioral tasks cannot be used to develop and advance theories of individual differences. However, these conclusions are derived from statistical models that provide only superficial summary descriptions of behavioral data, thereby ignoring theoretically-relevant data-generating mechanisms that underly individual-level behavior. More generally, such descriptive methods lack the flexibility to test and develop increasingly complex theories of individual differences. To resolve this theory-description gap, we present generative modeling approaches, which involve using background knowledge to specify how behavior is generated at the individual level, and in turn how the distributions of individual-level mechanisms are characterized at the group level—all in a single joint model. Generative modeling shifts our focus away from estimating descriptive statistical “effects” toward estimating psychologically meaningful parameters, while simultaneously accounting for measurement error that would otherwise attenuate individual difference correlations. Using simulations and empirical data from the Implicit Association Test and Stroop, Flanker, Posner Cueing, and Delay Discounting tasks, we demonstrate how generative models yield (1) higher test-retest reliability estimates, and (2) more theoretically informative parameter estimates relative to traditional statistical approaches. Our results reclaim optimism regarding the utility of behavioral paradigms for testing and advancing theories of individual differences, and emphasize the importance of formally specifying and checking model assumptions to reduce theory-description gaps and facilitate principled theory development.},
keywords = {Bayesian analysis,Clinical Psychology,Cognitive Psychology,Generative modeling,Implicit attitudes,Impulsivity,Individual differences,Measurement error,Meta-science,Quantitative Methods,Reliability,Self-control,Social and Behavioral Sciences,Social and Personality Psychology,Theory and Philosophy of Science,Theory development}
}
@incollection{heathcoteIntroductionGoodPractices2015,
title = {An {{Introduction}} to {{Good Practices}} in {{Cognitive Modeling}}},
booktitle = {An {{Introduction}} to {{Model-Based Cognitive Neuroscience}}},
author = {Heathcote, Andrew and Brown, Scott D. and Wagenmakers, Eric-Jan},
editor = {Forstmann, Birte U. and Wagenmakers, Eric-Jan},
date = {2015},
pages = {25--48},
publisher = {{Springer New York}},
location = {{New York, NY}},
doi = {10.1007/978-1-4939-2236-9_2},
url = {https://doi.org/10.1007/978-1-4939-2236-9_2},
urldate = {2019-03-20},
abstract = {Cognitive modeling can provide important insights into the underlying causes of behavior, but the validity of those insights rests on careful model development and checking. We provide guidelines on five important aspects of the practice of cognitive modeling: parameter recovery, testing selective influence of experimental manipulations on model parameters, quantifying uncertainty in parameter estimates, testing and displaying model fit, and selecting among different model parameterizations and types of models. Each aspect is illustrated with examples.},
isbn = {978-1-4939-2236-9},
langid = {english},
keywords = {Cognition,Model,Model selection,Parameter estimation,Quantitative,Simulation study,Theory}
}
@article{heathcotePowerLawRepealed2000,
title = {The Power Law Repealed: {{The}} Case for an Exponential Law of Practice},
shorttitle = {The Power Law Repealed},
author = {Heathcote, Andrew and Brown, Scott and Mewhort, D. J. K.},
date = {2000-06-01},
journaltitle = {Psychonomic Bulletin \& Review},
shortjournal = {Psychonomic Bulletin \& Review},
volume = {7},
number = {2},
pages = {185--207},
issn = {1531-5320},
doi = {10.3758/BF03212979},
url = {https://doi.org/10.3758/BF03212979},
urldate = {2022-05-11},
abstract = {The power function is treated as the law relating response time to practice trials. However, the evidence for a power law is flawed, because it is based on averaged data. We report a survey that assessed the form of the practice function for individual learners and learning conditions in paradigms that have shaped theories of skill acquisition. We fit power and exponential functions to 40 sets of data representing 7,910 learning series from 475 subjects in 24 experiments. The exponential function fit better than the power function in all the unaveraged data sets. Averaging produced a bias in favor of the power function. A new practice function based on the exponential, the APEX function, fit better than a power function with an extra, preexperimental practice parameter. Clearly, the best candidate for the law of practice is the exponential or APEX function, not the generally accepted power function. The theoretical implications are discussed.},
langid = {english},
keywords = {Exponential Function,Journal of Experimental Psychology,Learning Rate,Mental Rotation,Power Function},
file = {/Users/andrew/Zotero/storage/FCUWCAZG/Heathcote et al. - 2000 - The power law repealed The case for an exponentia.pdf}
}
@inproceedings{heitAreThereTwo2005,
title = {Are {{There Two Kinds}} of {{Reasoning}}?},
author = {Heit, Evan and Rotello, Caren M.},
date = {2005},
booktitle = {Proceedings of the 27th {{Annual Conference}} of the {{Cognitive Science Society}}},
url = {https://www.semanticscholar.org/paper/Are-There-Two-Kinds-of-Reasoning-Heit-Rotello/44e5bc66e2d1e137eecfa64a3fc23f8e02d141f0},
urldate = {2022-05-10},
abstract = {Two experiments addressed the issue of how deductive reasoning and inductive reasoning are related. According to the criterion-shift account, these two kinds of reasoning assess arguments along a common scale of strength, however there is a stricter criterion for saying an argument is deductively correct as opposed to just inductively strong. The method, adapted from Rips (2001), was to give two groups of participants the same set of written arguments but with either deduction or induction instructions. Signal detection and receiver operating characteristic analyses showed that the difference between conditions could not be explained in terms of a criterion shift. Instead, the deduction condition showed greater sensitivity to argument strength than did the induction condition. Implications for two-process and one-process accounts of reasoning, and relations to memory research, are discussed.},
langid = {english},
keywords = {deduction,detection theory,induction,memory,modeling,reasoning},
file = {/Users/andrew/Zotero/storage/JWBUAVI6/44e5bc66e2d1e137eecfa64a3fc23f8e02d141f0.html}
}