(in alphabetical order by author last name)

Daniel Fogal, “Which Reasons? Which Rationality?”

Philosophers often say, or assume, that rationality is about responding to reasons. But these days, many draw distinctions between different kinds of reasons and different kinds of rationality. We focus on two (purported) distinctions: subjective vs. objective reasons and structural vs. substantive rationality. Roughly put, subjective reasons are constrained by the perspective of the agent for whom they are reasons, whereas objective reasons are not; and structural rationality is about whether sets of attitudes cohere, whereas substantive rationality is about whether attitudes (paradigmatically, taken individually) are reasonable. Two interlocking questions thus arise. First: what’s the relationship between these two distinctions? Second: where does drawing both distinctions leave the slogan that rationality is about responding to reasons? These questions are complicated by the fact that (as we argue) the bipartite division between ‘objective’ and ‘subjective’ reasons is inadequate. We favor a tripartite distinction: fact-relative, evidence-relative, and belief-relative reasons. Given our distinctions, there are six possible interpretations of the slogan ‘rationality is about responsiveness to reasons’. The only interpretation with any plausibility, we argue, is that substantive rationality is about responsiveness to evidence-relative reasons. But hard work remains in fleshing this interpretation out, especially in precisifying (and rendering informative) the notion of an evidence-relative reason. To illustrate the pitfalls, we consider two recent accounts (Kiesewetter 2017; Lord 2018), arguing that existing versions of the evidence-relative account of (substantive) rationality are too weak. (Note: The talk will be based on joint work with Alex Worsnip.)

Zoë A. Johnson King, “Don’t Know, Don’t Care?”

I argue that moral ignorance does not always involve, and does not imply, a failure to care adequately about what is in fact morally significant. I offer proof of concept in the form of three real-life cases: one in which someone is ignorant of the precise nature of what she cares about, one in which she does not notice its relevance in her particular context, and one in which she cares deeply about two morally significant considerations but is mistaken about their relative significance. I argue that these agents all clearly care adequately about everything morally significant. This creates theoretical room for a way of thinking about culpable moral ignorance that respects the main concerns of those in the voluntarist tradition, who have held that moral ignorance is often blameless, within an approach to thinking about moral responsibility according to which we are blameworthy for that which manifests poor quality of will. It also suggests a fruitful change of direction for quality-of-will theorists: we should focus on articulating the standards for adequate caring, i.e. the moral standards that specify what it is to care “adequately”, what it is to care inadequately, and what (if anything) constitutes supererogatory caring. I close by sketching a structure that the standards for adequate caring might have, and identifying five promising avenues for future research on this topic.

Errol Lord, “Suspension of Judgment, Rationality's Competition, and the Reach of the Epistemic”

It's orthodoxy to think that there are three different reactions governed by epistemic rationality: Belief, disbelief, and suspension of judgment. These reactions are governed by epistemic rationality in a special sense: They are epistemic competitors. The case for believing p competes with the case for disbelieving p and the case for suspending judgment about p. The first task of this talk is to complicate this picture. The picture is complicated by the fact that there are many different ways to be committed to neutrality about p and thus many different candidates for suspension of judgment. After showing this, I will offer an answer to the question of which of these states compete against each other. On the view I will sketch, there are at least four participants in epistemic rationality's competition. The final section will argue that theorizing about the participants in epistemic rationality's competition is a good way of figuring out the reach of the epistemic. I will show that pragmatism about epistemic rationality is naturally motivated by my framework, pace a common reaction of evidentialists.

Kate Nolfi, “An Action-Oriented Ethics of Belief”

What should I believe, given my circumstances? And how should I form and/or revise my beliefs as my circumstances change? These are, of course, some of the questions that epistemological theorizing aims to answer. But our ordinary, everyday evaluative practices also routinely engage and settle these questions, albeit in a manner that exposes a now-familiar tension in our pre-theoretical commitments. On the one hand, our evaluative practices seem to presuppose that one’s beliefs must answer to one’s evidence. On the other, our evaluative practices also seem to allow that, at least sometimes, features of one’s circumstances that do not bear on the truth of one’s beliefs do, nevertheless, bear on what or how one ought to believe. The epistemologist who aspires to bring epistemological theory into reflective equilibrium with our evaluative practices might reasonably hope to supply a unified account of the norms governing belief and belief regulation that can explain (in one breath, as it were) both how and why one’s beliefs must answer to one’s evidence, and yet how features of one’s circumstances that do not bear on the truth of one’s beliefs can, nevertheless, bear on what/how one ought to believe. This paper aims to show that an action-oriented approach—i.e. an approach that takes the idea that belief subserves action as a kind of starting point for epistemological theorizing—is especially well-positioned to supply just such an account.

Nate Sharadin, “Worrying about Merely Formal Requirements”

Sometimes, people are indifferent to normative requirements. This possibility is typically thought to be worrying. Why is it supposed to be worrying? I argue it’s only plausibly worrying for two reasons: people who are indifferent to normative requirements fail to realize certain non-instrumental values (they have bad characters), or they contribute negatively to instrumental value (they make life terrible for the rest of us). How worrying is this possibility? I present a framework for answering this question and then go on to answer it. Long story short: it’s probably not all that worrying in the case of indifference to moral requirements, and it’s definitely not very worrying in the case of indifference to epistemic requirements. This won’t convince any hardcore moralists or epistemicists to actually stop their worrying, but I hope it’ll convince their opponents to stop trying to talk them down.

Jonathan Way, “A Puzzle about Enkratic Reasoning”

Enkratic reasoning – reasoning from the belief that you ought to act in a certain way to an intention to act in that way – seems like good reasoning. But there is a puzzle about how this could be so. It is plausible that good reasoning must transmit correctness: that is, that good reasoning from correct attitudes must lead you to a correct attitude, at least other things equal. But it is not clear that enkratic reasoning transmits correctness. It is widely thought that what you ought to do depends on your epistemic position. But the correctness of an attitude seems not to depend on your epistemic position. Thus correctly – and so, truly – believing that you ought to φ does not imply that it is correct to intend to φ. In this paper, I explain this puzzle and discuss some possible solutions. I argue that the most promising solution rejects the assumption that the correctness of attitudes – in the sense relevant to good reasoning – must be entirely independent of your epistemic position.