October 24-25, 2014, Philosophy Department, University of Utah
Keynote Speakers
- Leah Henderson (Fellow, Center for Formal Epistemology, Carnegie Mellon University)
- Tania Lombrozo (Associate Professor, Psychology, UC Berkeley)
- Elliott Sober (Hans Reichenbach Professor and William F. Vilas Research Professor, Philosophy, UW Madison)
Schedule
- Friday, October 24
- 8:45–9:30, Coffee and Welcome
- 9:30–10:15, Justin Dallman, USC, "When Obstinacy is a Better (Epistemic) Policy" [Abstract]
- 10:15–11:00, Sean Walsh, Irvine LPS, and Xiaodong Pan, Southwest Jiaotong U, "On the Probabilistic Liar" [Abstract]
- 11:00–11:15, Coffee break
- 11:15–12:00, Marta Sznajder, LMU Munich, "Geometrical Representations of Concepts in Inductive Logic" [Abstract]
- 12:00–12:45, Jonathan Livengood, Illinois, "On Goodness of Fit" [Abstract]
- 12:45–2:00, Lunch
- 2:00–2:45, Felipe Romero, WashU, "The Fragility of Scientific Self-Correction" [Abstract]
- 2:45–3:30, Paul Weirich, Missouri, "The Foundations of Probabilism" [Abstract]
- 3:30–3:45, Coffee break
- 3:45–5:15, Tania Lombrozo, UC Berkeley, "Explanation: The Good, The Bad, and the Beautiful" [Abstract]
- 6:30–10:30, Conference banquet
For epistemic subjects like us, updating our credences incurs epistemic costs. Properly updating one’s credences on a subset of the available information expends limited processing power and working memory, which can come at the cost of not responding to other available information. It is thus desirable to flesh out and compare alternative ways of taking information into account in light of cognitive shortcomings like our own. This paper is a preliminary attempt to do so.
A queue-theoretic framework for learning with limited cognitive resources is developed. Within this framework it is shown that it is better, in a range of “normal” circumstances and from the point of view of expected credal accuracy, not to update on available information that bears on propositions for which substantial evidence has been gathered than it is to update on information as it presents itself. Finally, two applications of this result to recent work on the relationship between outright belief and credence are considered.
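A rough illustration of the kind of comparison at issue (a toy sketch only, not the paper's queue-theoretic model; the parameter values, the 0.95 "settled" threshold, and the Brier-style scoring are arbitrary choices made here for illustration):

    import random

    def simulate(policy, n_props=20, rounds=60, arrivals=6, budget=2,
                 reliability=0.75, settled=0.95, seed=0):
        # An agent tracks credences in n_props propositions but can process
        # only `budget` of the `arrivals` evidence items arriving each round.
        rng = random.Random(seed)
        truth = [rng.random() < 0.5 for _ in range(n_props)]
        cred = [0.5] * n_props
        queue = []
        for _ in range(rounds):
            for _ in range(arrivals):
                i = rng.randrange(n_props)
                signal = truth[i] if rng.random() < reliability else not truth[i]
                queue.append((i, signal))
            if policy == "fifo":
                # Update on information in the order it presents itself.
                todo, queue = queue[:budget], queue[budget:]
            else:
                # "Obstinate": skip items bearing on already-settled propositions.
                todo = [x for x in queue
                        if max(cred[x[0]], 1 - cred[x[0]]) < settled][:budget]
                for x in todo:
                    queue.remove(x)
            for i, signal in todo:
                # Bayesian update on a signal of known reliability.
                like = reliability if signal else 1 - reliability
                num = cred[i] * like
                cred[i] = num / (num + (1 - cred[i]) * (1 - like))
        # Brier-style inaccuracy: lower is better.
        return sum((cred[i] - truth[i]) ** 2 for i in range(n_props)) / n_props

    for policy in ("fifo", "obstinate"):
        avg = sum(simulate(policy, seed=s) for s in range(200)) / 200
        print(policy, "mean inaccuracy:", round(avg, 3))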
Just as the liar is a sentence which says of itself that it is not true, so the probabilistic liar is a sentence which says of itself that it is improbable (cf. [Wal14] pp. 29-30). The goal of this paper is to contribute to our understanding of the probabilistic liar by (i) explaining why this kind of self-referential probability seems infrequent in probability theory as ordinarily practiced and applied, and (ii) exploring the extent to which formal responses to the liar paradox may generate satisfactory solutions to the probabilistic liar.
References: [Wal14] Sean Walsh. Empiricism, Probability, and Knowledge of Arithmetic. Forthcoming in: Journal of Applied Logic.
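Schematically, and with the threshold 1/2 chosen here purely for illustration, the probabilistic liar can be rendered as a sentence λ satisfying

    λ ↔ Pr('λ') < 1/2.

If the agent's probability for λ falls below 1/2, the biconditional makes λ true; if it does not, the biconditional makes λ false. So an agent who can tell what probability she assigns to λ seems unable to assign it any probability that sits well with what she is thereby in a position to know about its truth value.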
In his late work on inductive logic, published partly posthumously as “A Basic System of Inductive Logic”, Rudolf Carnap introduces attribute spaces as a new element of the conceptual framework. The spaces are abstract, geometric structures that provide a representation for concepts from the framework and for the relations between those concepts. Furthermore, Carnap postulates two rules – the γ and the η rule – that tie certain values of the confirmation function to features of the relevant attribute space.
Carnap does not give much background on the reasons for his choice of the specific form of the two rules. In my talk I aim to fill that gap by supplying a philosophical analysis of the two rules, their epistemological status, and their presuppositions. I show that the two geometrical constraints on the values of confirmation functions can be thought of as additional rationality requirements on initial credence functions.
I produce and reflect on a simple argument that pragmatic encroachment in epistemology is unavoidable -- at least in the context of much ordinary scientific practice. In order to evaluate a model, one needs a way of measuring how well the model fits the data against which it is to be tested. Provided there are no purely epistemic (non-pragmatic) standards for selecting a measure of fit, pragmatic encroachment is unavoidable. All of the standards that have actually been used for selecting a measure of fit are pragmatic, not epistemic, having to do with computational complexity, statistical efficiency, robustness, and/or the modeler's loss function. Hence, we have some defeasible reason to think that pragmatic encroachment is unavoidable.
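A toy numerical example of the point about measures of fit (not taken from the talk): two standard measures can rank the same pair of models differently on the same data, so the choice between them does real work.

    # Hypothetical data and two hypothetical models' predictions.
    data    = [1.0, 2.0, 3.0, 4.0]
    model_a = [1.0, 2.0, 3.0, 9.0]   # fits most points exactly, misses one badly
    model_b = [2.5, 3.5, 1.5, 5.5]   # moderately off everywhere

    def sse(pred, obs):
        # Sum of squared errors.
        return sum((p - o) ** 2 for p, o in zip(pred, obs))

    def sae(pred, obs):
        # Sum of absolute errors.
        return sum(abs(p - o) for p, o in zip(pred, obs))

    print("SSE:", sse(model_a, data), "vs", sse(model_b, data))  # 25.0 vs 9.0: B wins
    print("SAE:", sae(model_a, data), "vs", sae(model_b, data))  # 5.0 vs 6.0: A wins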
Can science correct its mistakes? Defenders of the self-corrective thesis answer affirmatively, arguing that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a plausible interpretation of this thesis that philosophers have defended in terms of frequentist statistics. Using computer simulations, I argue that such an interpretation is true only under idealized conditions that are hard to satisfy in scientific practice. In particular, I show how some features of the social organization of contemporary science make the long-run performance of frequentist statistics fragile. I suggest that we have to pay attention to the relation between inference methods and the social structure of science in our theorizing about scientific self-correction.
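For a sense of the kind of fragility at issue, here is a toy simulation (my own illustration, not the talk's model) of one social feature often discussed in this connection, publication bias: when only statistically significant results enter the record, the published estimates need not converge on the true effect no matter how many studies are run.

    import math, random, statistics

    def study(true_effect, rng, n=20):
        # One study: n observations from N(true_effect, 1), z-test of mean = 0.
        xs = [rng.gauss(true_effect, 1.0) for _ in range(n)]
        est = statistics.mean(xs)
        z = est * math.sqrt(n)
        return est, abs(z) > 1.96      # two-sided test at alpha = 0.05

    rng = random.Random(1)
    true_effect = 0.1                  # a small but real effect
    published, everything = [], []
    for _ in range(5000):
        est, significant = study(true_effect, rng)
        everything.append(est)
        if significant:                # only "positive" findings get published
            published.append(est)

    print("true effect:               ", true_effect)
    print("mean estimate, all studies:", round(statistics.mean(everything), 3))
    print("mean estimate, published:  ", round(statistics.mean(published), 3))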
Probabilism claims that rational degrees of belief in an ideal agent satisfy the axioms of probability (with respect to a finite, Archimedean probability model). It takes degrees of belief as representations of propositional attitudes without using the axioms to define degrees of belief. Principles of comparative probability, assuming that degrees of belief generate probability comparisons, may ground compliance with the axioms. A typical representation theorem for probability shows that meeting certain principles of comparative probability suffices for the existence of a probability function (obeying the probability axioms) that represents probability comparisons. That degrees of belief satisfy the axioms entails the existence of such a representation. So principles of comparative probability ground a necessary condition of probabilistic degrees of belief. How much grounding for probabilism can they provide? This paper investigates the prospects of a thorough grounding.
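Schematically, a representation theorem of the kind mentioned here says: if a comparative relation ⪰ ("at least as probable as") over propositions satisfies certain axioms, then there is a probability function P such that, for all propositions A and B in the algebra, A ⪰ B if and only if P(A) ≥ P(B).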
Like scientists, children and adults are often motivated to explain the world around them, including why people behave in particular ways, why objects have some properties rather than others, and why events unfold as they do. Moreover, people have strong and systematic intuitions about what makes something a good (or beautiful) explanation. Why are we so driven to explain? And what accounts for our explanatory preferences? In this talk I’ll present evidence that both children and adults prefer explanations that are simple and have broad scope, consistent with many accounts of explanation from philosophy of science. The good news is that a preference for simple and broad explanations can sometimes improve learning and support effective inference to the best explanation. The bad news is that under some conditions, these preferences can systematically lead children and adults astray. An important take-home lesson is that seeking, generating, and evaluating explanations plays an important role in human judgment and serves as a valuable window onto core cognitive processes such as learning and inference.
- Saturday, October 25
- 8:30–9:00, Coffee and Welcome
- 9:00–9:45, Greg Gandenberger, Pitt HPS, "Why Frequentist Violations of the Likelihood Principle Are at Best Permissible and Far from Mandatory" [Abstract]
- 9:45–10:30, Gregory Wheeler, LMU Munich, "Fast, Frugal, and Focused" [Abstract]
- 10:30–11:15, Aaron Kenna, University of Utah, "The Epistemic Merits of Reichenbach's Pragmatic Defense of Induction" [Abstract]
- 11:15–11:30, Coffee break
- 11:30–1:00, Leah Henderson, Carnegie Mellon University, "Bayesianism and Inference to the Best Explanation" [Abstract]
- 1:00–2:15, Lunch
- 2:15–3:00, Matt Haber, University of Utah, "Positively Misleading Errors" [Abstract]
- 3:00–3:45, Bengt Autzen, University of Bristol, "Interpreting the Principle of Total Evidence" [Abstract]
- 3:45–4:00, Coffee break
- 4:00–5:30, Elliott Sober, UW Madison, "Epistemological Questions about Darwin's Theory" [Abstract]
- 6:00–, Evening festivities
The standard frequentist approach to testing simple statistical hypotheses against each other is not fully justified by considerations of long-run error, evidential meaning, or expected utility. One could think of it as appealing to objective considerations to "break the tie" among subjective perspectives over which the relevant agent is indifferent. This perspective on frequentist practice helps address some objections to it. However, there does not seem to be any compelling reason to prefer it to other methods of tie-breaking, some of which are better integrated into an overarching Bayesian approach.
People frequently do not abide by the total evidence norm of classical Bayesian rationality but instead use just a few items of information among the many available to them. Gerd Gigerenzer and colleagues have famously shown that decision-making with less information often leads to objectively better outcomes, which raises an intriguing normative question: if we could say precisely under what circumstances this "less is more" effect occurs, we conceivably could say when people should reason the Fast and Frugal way rather than the classical Bayesian way.
In this talk I report on results from joint work with Konstantinos Katsikopoulos (Max Planck Institute) that resolves a puzzle in the mathematical psychology literature over attempts to explain the conditions responsible for this "less is more" effect. What is more, there is a surprisingly deep connection between the "less is more" effect and coherentist justification. In short, the conditions that are good for coherentism are lousy for single-reason strategies, and vice versa.
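A minimal sketch of the contrast (my illustration, not taken from the talk), in the spirit of Gigerenzer's Take The Best: a single-reason strategy decides on the first cue, in order of validity, that discriminates between the options, while a compensatory strategy such as tallying uses all the cues.

    def take_the_best(cues_a, cues_b, validity_order):
        # Single-reason strategy: stop at the first discriminating cue.
        for i in validity_order:
            if cues_a[i] != cues_b[i]:
                return "A" if cues_a[i] > cues_b[i] else "B"
        return "tie"

    def tally(cues_a, cues_b):
        # Compensatory strategy: count the cues favoring each option.
        a, b = sum(cues_a), sum(cues_b)
        return "A" if a > b else "B" if b > a else "tie"

    # Which of two cities is larger? 1 = cue present (e.g. has an airport).
    city_a = [1, 0, 0, 0]
    city_b = [0, 1, 1, 1]
    order  = [0, 1, 2, 3]                          # cue 0 is the most valid
    print(take_the_best(city_a, city_b, order))    # "A": one good reason decides
    print(tally(city_a, city_b))                   # "B": the majority of cues decides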
Herein I advance an epistemic interpretation of Hans Reichenbach's pragmatic defense of induction. In brief, I argue that, though induction – understood as an asymptotic sampling method – dominates any other predictive method and is thereby prudentially justified on decision-theoretic grounds, we may further justify the defense on epistemic grounds. In order to show this, I situate my argument against two traditional criticisms of Reichenbach: (a) BonJour's 'no good grounds' criticism and (b) the underdetermination criticism as exposited in Salmon. To (a) I argue that any reason advanced in favor of any predictive method constitutes a reason in favor of induction, and that the straight rule is therefore a priori more probable. To (b) I contend that the straight rule possesses various statistical properties (e.g. the observed sample proportion is the maximum likelihood estimate) that permit pruning away competing inductive alternatives.
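For instance, in the simplest Bernoulli case the statistical property cited in parentheses is easy to verify: with k successes in n independent trials, the likelihood of a hypothesized proportion p is

    L(p) = p^k (1 - p)^(n - k),

and setting d/dp log L(p) = k/p - (n - k)/(1 - p) = 0 yields p = k/n. The straight rule's posit, the observed relative frequency, is thus the maximum likelihood estimate.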
Two of the most influential theories about scientific inference are Inference to the Best Explanation (IBE) and Bayesianism. How are they related? Bas van Fraassen has claimed that IBE and Bayesianism are incompatible, since any probabilistic version of IBE would violate Bayesian conditionalisation. In response, several authors have defended the view that IBE is compatible with Bayesian updating, on the grounds that the explanatory considerations in IBE either do or should constrain the Bayesian assignment of priors and/or likelihoods. I propose a new argument for the compatibility of IBE and Bayesianism, which does not require that IBE acts as an external constraint on the Bayesian probabilities. Rather, I argue that explanatory considerations emerge naturally in a hierarchical Bayesian account. I will illustrate with two case studies: one based on the explanations of planetary retrograde motion by the Copernican and Ptolemaic theories, and the other based on the individual vs group selection controversy in biology.
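One schematic way to render the hierarchical idea (an illustration, not necessarily the talk's own formulation): a high-level theory T confers probability on the evidence e only via the more specific models m it makes available,

    P(e | T) = Σ_m P(e | m, T) P(m | T),

so that considerations about how naturally a theory accommodates the data can surface in the distribution P(m | T) over its specific versions, rather than entering as an external constraint on the Bayesian probabilities.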
Positively Misleading Errors (PMEs) are cases in which adding data to an analysis will systematically and reliably strengthen support for an erroneous hypothesis over a correct one. This pattern distinguishes them from other errors of inference and pattern recognition. Here I provide a general account of PMEs by describing both exemplar and candidate cases. Though well known in biology (phylogenetic systematics, to be precise), PMEs are likely widespread and deserve to be brought to the attention of the wider research community. Doing so will facilitate a better understanding of them, sharpen our ability to assess methods used to extract patterns from large sets of data, help identify the conditions under which such fallacies may occur, and enhance our reasoning about complex systems.
The Principle of Total Evidence (PTE) is invoked in a number of philosophical arguments. Given its prominent role in philosophical discussions of inductive inference, an unambiguous interpretation of PTE is necessary. This paper is a step towards a clearer understanding of PTE. I will first assess Sober’s claim that significance testing violates PTE. I will argue that Sober’s argument against significance testing presupposes an inter-theoretic reading of PTE, which should be rejected. In contrast, I will call for an intra-theoretic reading of PTE, which I will further develop in the second part of the paper. In a final step, I will apply the intra-theoretic reading to Stegenga’s discussion of meta-analysis in evidence-based medicine.
Darwin's theory of evolution presents epistemologists with a number of interesting topics. In my talk I'll discuss three: (i) the inference that all of present-day life traces back to one or a few original progenitors; (ii) Darwin's thesis that adaptive similarities provide scant evidence for common ancestry; (iii) the question of whether Darwin's views on natural selection are supported by his ideas about common ancestry. In all three, the Law of Likelihood will get a workout.
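In its standard formulation, the Law of Likelihood says that an observation O favors hypothesis H1 over hypothesis H2 just in case P(O | H1) > P(O | H2), with the likelihood ratio P(O | H1) / P(O | H2) measuring the degree of favoring.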