The no miracles argument and the base rate fallacy

Synthese, 2015, Vol. 194(4), pp. 1295-1302
Author(s): Leah Henderson

The no miracles argument without the base rate fallacy
Synthese, 2017, Vol. 195(9), pp. 4063-4079
Author(s): Richard Dawid, Stephan Hartmann

2006, Vol. 23(1), pp. 45-51
Author(s): Mike Allen, Raymond W. Preiss, Barbara Mae Gayle

1997, Vol. 15(4), pp. 292-307
Author(s): Mary Lynne Kennedy, W. Grant Willis, David Faust

The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges
1996, Vol. 19(1), pp. 1-17
Author(s): Jonathan J. Koehler

Abstract: We have been oversold on the base rate fallacy in probabilistic judgment from an empirical, normative, and methodological standpoint. At the empirical level, a thorough examination of the base rate literature (including the famous lawyer–engineer problem) does not support the conventional wisdom that people routinely ignore base rates. Quite the contrary, the literature shows that base rates are almost always used and that their degree of use depends on task structure and representation. Specifically, base rates play a relatively larger role in tasks where base rates are implicitly learned or can be represented in frequentist terms. Base rates are also used more when they are reliable and relatively more diagnostic than available individuating information. At the normative level, the base rate fallacy should be rejected because few tasks map unambiguously into the narrow framework that is held up as the standard of good decision making. Mechanical applications of Bayes's theorem to identify performance errors are inappropriate when (1) key assumptions of the model are either unchecked or grossly violated, and (2) no attempt is made to identify the decision maker's goals, values, and task assumptions. Methodologically, the current approach is criticized for its failure to consider how the ambiguous, unreliable, and unstable base rates of the real world are and should be used. Where decision makers' assumptions and goals vary, and where performance criteria are complex, the traditional Bayesian standard is insufficient. Even where predictive accuracy is the goal in commonly defined problems, there may be situations (e.g., informationally redundant environments) in which base rates can be ignored with impunity. A more ecologically valid research program is called for. This program should emphasize the development of prescriptive theory in rich, realistic decision environments.
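
The "mechanical application of Bayes's theorem" that Koehler questions is easy to see in a worked example. The sketch below is a minimal illustration, not from the article: the 30/70 split echoes the classic lawyer–engineer setup, while both likelihoods are hypothetical numbers chosen only to show how a base rate combines with individuating evidence.

    # Minimal sketch (not from the article): Bayes's theorem applied to a
    # lawyer-engineer style judgment. The 30/70 base rate echoes the classic
    # setup; both likelihoods are hypothetical, chosen to show the mechanics.

    def posterior(prior_a, likelihood_a, likelihood_b):
        """P(A | evidence) for two exhaustive hypotheses A and B."""
        prior_b = 1.0 - prior_a
        joint_a = prior_a * likelihood_a
        joint_b = prior_b * likelihood_b
        return joint_a / (joint_a + joint_b)

    p_engineer = 0.30              # base rate: 30 engineers per 100 people
    p_sketch_if_engineer = 0.90    # hypothetical: sketch fits most engineers
    p_sketch_if_lawyer = 0.30      # hypothetical: sketch fits few lawyers

    print(posterior(p_engineer, p_sketch_if_engineer, p_sketch_if_lawyer))
    # 0.5625: strongly "engineer-like" evidence still yields only ~56%,
    # because the 0.30 base rate pulls the posterior down; reporting ~90%
    # here is what the literature calls base rate neglect.

On this narrow standard, any answer far from 0.5625 counts as an error; Koehler's argument is that the standard itself presupposes base rates that are reliable, relevant, and correctly specified.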


2001, Vol. 18(1), pp. 63-86
Author(s): Matthew C. Scheider

2018, Vol. 56(3), pp. 333-348
Author(s): James Henry Collin

Abstract: Michael Tooley has developed a sophisticated evidential version of the argument from evil that aims to circumvent sceptical theist responses. Evidential arguments from evil depend on the plausibility of inductive inferences from premises about our inability to see morally sufficient reasons for God to permit evils to conclusions about there being no morally sufficient reasons for God to permit evils. Tooley's defence of this inductive step depends on the idea that the existence of unknown rightmaking properties is no more likely, a priori, than the existence of unknown wrongmaking properties. I argue that Tooley's argument begs the question against the theist, and, in doing so, commits an analogue of the base rate fallacy. I conclude with some reflections on what a successful argument from evil would have to establish.
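
The base rate analogue here can be pictured numerically. The toy model below is not Collin's or Tooley's formalism; it is a hypothetical sketch of why the inductive step is hostage to the prior (the "base rate") assigned to unknown rightmaking reasons, which is the very point in dispute. All numbers are illustrative.

    # Hypothetical toy model (not Collin's or Tooley's formalism): how the
    # posterior for "no morally sufficient reason exists" depends on the
    # prior for an unknown rightmaking reason, the analogue of a base rate.

    def p_no_reason(prior_reason, p_miss):
        """P(no reason | none detected). p_miss is the chance we would fail
        to detect a reason even if one existed (the sceptical-theist worry);
        if no reason exists, we detect none with certainty."""
        p_none = 1.0 - prior_reason
        evidence = p_none * 1.0 + prior_reason * p_miss
        return p_none / evidence

    for prior in (0.1, 0.5, 0.9):   # rival priors over unknown reasons
        print(prior, round(p_no_reason(prior, p_miss=0.8), 3))
    # 0.1 -> 0.918, 0.5 -> 0.556, 0.9 -> 0.122: identical evidence supports
    # very different verdicts under different priors, so stipulating the
    # prior a priori is where the argumentative work gets done.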

