---
@article{Parker2008-PARFHA,
  volume = {22},
  journal = {International Studies in the Philosophy of Science},
  publisher = {Taylor \& Francis},
  number = {2},
  year = {2008},
  title = {Franklin, Holmes, and the Epistemology of Computer Simulation},
  doi = {10.1080/02698590802496722},
  author = {Wendy S. Parker},
  pages = {165--183}
}
---

Parker, Wendy S. (2008). Franklin, Holmes, and the epistemology of computer simulation. International Studies in the Philosophy of Science 22 (2): 165–183.

Allan Franklin has identified a number of strategies that scientists use to build confidence in experimental results. This paper shows that Franklin’s strategies have direct analogues in the context of computer simulation and then suggests that one of his strategies—the so‐called ‘Sherlock Holmes’ strategy—deserves a privileged place within the epistemologies of experiment and simulation. In particular, it is argued that while the successful application of even several of Franklin’s other strategies (or their analogues in simulation) may not be sufficient for justified belief in results, the successful application of a slightly elaborated version of the Sherlock Holmes strategy is sufficient.

Allan Franklin (1986, 1989, 2002) has argued that the results of experiments in physics usually come to be accepted primarily on rational evidential grounds, contrary to the suggestions of some scholars in science studies, who have instead emphasized the importance of social factors.

At least two authors (Weissert 1997; Winsberg 1999a, 1999b, 2003) have claimed that many, if not all, of Franklin’s strategies have analogues in the context of computer simulation. (p165)

Often problems require numerical rather than analytical solutions

Not infrequently, it is difficult or impossible to find exact solutions to the sets of equations associated with these mathematical models. This often happens, for instance, when the equations of interest are nonlinear partial differential equations. In such cases, scientists may have little choice but to transform the equations of interest in various ways—some of the terms in the equations may need to be combined, simplified, given alternative mathematical expression or omitted entirely—until they are in a form such that approximate, local solutions can be found using brute-force numerical methods. (p166)
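To make this concrete, a toy sketch (mine, not the paper's; the equation choice, grid sizes, and scheme are all illustrative): the viscous Burgers equation is a nonlinear PDE with no general closed-form solution, so its derivatives get replaced with finite differences and an approximate, local solution is ground out step by step.

```python
import numpy as np

# Illustrative only: the viscous Burgers equation
#   du/dt + u * du/dx = nu * d2u/dx2
# is nonlinear, so we discretize it and compute an approximate,
# local solution with a brute-force finite-difference scheme.

nx, nt = 101, 500                 # spatial grid points, time steps
dx, dt = 2.0 / (nx - 1), 0.001
nu = 0.07                         # viscosity

x = np.linspace(0.0, 2.0, nx)
u = np.sin(np.pi * x) + 1.0       # initial condition; u = 1 at both ends

for _ in range(nt):
    un = u.copy()
    # upwind difference for the nonlinear advection term,
    # central difference for diffusion; boundary values held fixed
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
```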

Defines computer simulation ...

When actually implemented on a digital computer, this program is a computer simulation model—a physical implementation of a set of instructions for repeatedly solving a set of equations in order to produce a representation of the temporal evolution (if any) of particular properties of a target system. The execution or ‘running’ of the computer simulation model with specified initial and/or boundary conditions is a computer simulation. (p166)
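The distinction can be rendered in code (my sketch; the model and numbers are hypothetical): the function below plays the role of the computer simulation model, and each call to it with specified initial conditions is a computer simulation.

```python
import numpy as np

def oscillator_model(x0, v0, k=1.0, c=0.1, dt=0.01, steps=1000):
    """The 'computer simulation model': once running on hardware, a
    physical implementation of instructions for repeatedly solving the
    discretized equations of a damped oscillator, x'' = -k*x - c*x'."""
    xs = np.empty(steps)
    x, v = x0, v0
    for i in range(steps):
        # forward Euler step; the right-hand side uses the old (x, v)
        x, v = x + dt * v, v + dt * (-k * x - c * v)
        xs[i] = x
    return xs

# Executing the model with specified initial conditions is
# 'a computer simulation' in Parker's sense.
trajectory = oscillator_model(x0=1.0, v0=0.0)
```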

Validation ...

Consequently, the question of whether the computer simulation model is an adequate representation of the target system, relative to the goals of the modelling study, is of utmost importance. The activity of model evaluation (also sometimes known as ‘validation’) aims to collect evidence regarding precisely this question. (p166)

Footnote

It is worth reiterating that model evaluation should be understood as an investigation of a model’s adequacy-for-purpose, not an investigation of its truth or falsity, whatever that might mean.
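A toy illustration of adequacy-for-purpose (data and thresholds are hypothetical): the very same output can count as adequate for one purpose and inadequate for another, because the purpose fixes the tolerance.

```python
import numpy as np

def adequate_for_purpose(simulated, observed, tolerance):
    """Model evaluation in miniature: not 'is the model true?' but
    'is its error small enough for the purpose at hand?'"""
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return rmse <= tolerance

sim = np.array([20.1, 21.3, 22.8, 24.0])   # hypothetical model output
obs = np.array([19.8, 21.9, 22.1, 24.6])   # hypothetical observations

print(adequate_for_purpose(sim, obs, tolerance=1.0))  # True: rough trend-spotting
print(adequate_for_purpose(sim, obs, tolerance=0.1))  # False: high-precision use
```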

Verification ...

Investigation of the latter—of the adequacy of the process by which solutions to the continuous model equations are estimated—is often considered an important activity in its own right and will be referred to here as code evaluation (also sometimes known as ‘verification’) (p167)
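Code evaluation in this sense is easy to sketch (my example, not Parker's): dy/dt = -k*y has the exact solution y0 * exp(-k*t), so the solver's output can be checked against it, and for a first-order scheme the error should shrink roughly in proportion to the step size.

```python
import math

def euler_decay(y0, k, dt, t_end):
    """Estimate y(t_end) for dy/dt = -k*y with forward Euler."""
    y = y0
    for _ in range(round(t_end / dt)):
        y += dt * (-k * y)
    return y

exact = 1.0 * math.exp(-0.5 * 2.0)          # analytic solution at t = 2

# Verification: error should fall roughly 10x per 10x reduction in dt
# for this first-order scheme; if it does not, the code is suspect.
for dt in (0.1, 0.01, 0.001):
    print(dt, abs(euler_decay(1.0, 0.5, dt, 2.0) - exact))
```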

To say that an experimental result is internally valid is to say that its associated result statement is true.

An experimental result is externally valid when what its associated result statement says about the experimental system is also (or would also be) true of other specified entities under specified conditions

With computer simulation, internal validity is generally unproblematic: the result statement can be checked directly against the recorded output (logs, local print-outs, etc.)

It is their external validity—their being indicative of what is (or would be) true of a specified target system—that is the primary focus of the epistemology of computer simulation; both code evaluation and model evaluation are best understood as activities concerned with external validity. (p168)

Franklin's 5 strategies:

(1) Apparatus gives results that match known results

(2) Apparatus responds as expected after intervention on the experimental system

(3) Capacities of apparatus are underwritten by well- confirmed theory

(4) Experimental results are replicated in other experiments

(5) Plausible sources of significant experimental error can be ruled out

The 5th is dubbed the 'Sherlock Holmes' strategy ...

When the focus is on code evaluation, an analogous confidence-building strategy would involve showing that one can rule out (or bound the magnitude of) all of the plausible sources of error that would prevent the simulation code from delivering accurate-enough solutions of interest to the continuous model equations. (p173)
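One way to see what ruling out or bounding error sources might look like here (an illustrative sketch, not Parker's own): enumerate the plausible error sources and bound each one, e.g., discretization error via a two-resolution (Richardson-style) comparison and round-off error via a reduced-precision rerun.

```python
import numpy as np

def solve(dt, dtype=np.float64):
    """Toy solver: forward Euler for dy/dt = -y, returning y(1)."""
    y = dtype(1.0)
    for _ in range(round(1.0 / dt)):
        y = y + dtype(dt) * (-y)
    return y

# 'Sherlock Holmes' style code evaluation: bound each plausible source
# of error that could keep the code from delivering accurate-enough
# solutions of the continuous equations.

# (a) Discretization error: two step sizes; for a first-order scheme the
#     difference approximates the error remaining at the finer step.
discretization_bound = abs(solve(0.005) - solve(0.01))

# (b) Round-off error: rerun at lower precision and compare.
roundoff_bound = abs(float(solve(0.005)) - float(solve(0.005, np.float32)))

print(discretization_bound, roundoff_bound)
```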

On how validation is actually done ...

Model and code evaluation activities are often determined largely by convenience—by how much computing power is available, which analytic solutions and/or observational data one has in hand, which visualization tools are available, etc.—and frequently are not accompanied by any explicit argumentation concerning what the evaluation activities that are undertaken indicate, if anything, about the adequacy of the model for the purposes for which it is to be used. (p176)