Practical ethics given moral uncertainty
Consider the classic trolley problem: a runaway trolley is headed towards five people, and you can pull a lever to divert it onto a side track where it will kill one person instead. But how often are we so certain about the outcomes of our actions? Perhaps there's a chance that the five people might escape, or that the one person might do the same. And are all six individuals equally virtuous? Are any of them terminally ill?
Naturally, such possibilities would affect our attitude towards pulling the lever. Concerns like these are often raised by first-time respondents to the problem, and they must be set aside before the question can properly be answered. A lot has been written about moral decision-making under factual uncertainty; Michael Zimmerman, for example, has written an excellent book on how such ignorance affects morality. The point of most ethical thought experiments, though, is to eliminate precisely this sort of uncertainty. Ethicists are interested in finding out things like whether, once we know all the facts of the situation, and all other things being equal, it's okay to engage in certain actions.
If we're still not sure of the rightness or wrongness of such actions, or of the underlying moral theories themselves, then we face moral uncertainty. As surveys of professional philosophers suggest, many of them still face such fundamental indecision. The trolley problem — especially the fat man variant — is used to test our fundamental moral commitment to deontology or consequentialism.
I'm pretty sure I'd never push a fat bystander off a bridge onto a train track in order to save five people, but what if a million people and my mother were at stake? Should I torture an innocent person for one hour if I knew it would save the population of China? Even though I'd like to think of myself as pretty committed to human rights, the truth is that I simply don't know. So, what's the best thing to do when we're faced with moral uncertainty?
Unless one thinks that anything goes once uncertainty enters the picture, doing nothing by default is not a good strategy. As the trolley case demonstrates, inaction often has major consequences. Failure to act also comes with moral ramifications: Peter Singer famously argued that inaction is clearly immoral in many circumstances, such as refusing to save a child drowning in a shallow pond. Nor is it plausible to deliberate until we are completely morally certain; by the time we're done deliberating, it's often too late.
Suppose I'm faced with the choice between saving one baby on a quickly sinking raft and saving an elderly couple on a quickly sinking canoe. If I take too long to convince myself of the right decision, all three will drown. Ted Lockhart, professor of philosophy at Michigan Technological University, arguably kicked off the conversation with his book Moral Uncertainty and Its Consequences. Lockhart considers a scenario along the following lines: Gary must choose between two alternatives, x and y. Gary has more credence in a theory T1, which favours x, than in a rival theory T2, on which x is seriously wrong and y is required. There are at least two ways that Gary could make his decision. First, Gary might simply pick the theory he has the most credence in. Following such an approach, Gary should stick with T1 and choose x. But Lockhart thinks that this 'my-favourite-theory' approach is mistaken.
Instead, Lockhart argues that it is more rational to maximize the probability of being morally right. On this way of reckoning, the probability that y would be morally right, summed across the theories that permit it, exceeds the probability that x would be. Under this approach, Gary should choose y. This seems reasonable so far, but it isn't the end of the story.
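To make the contrast concrete, here is a minimal sketch of the 'maximize the probability of being morally right' calculation. The credences and verdicts are stipulated for illustration only; they are my assumptions, not numbers from Lockhart's book.

```python
# Hypothetical credences Gary assigns to each moral theory.
credences = {"T1": 0.7, "T2": 0.3}

# permissible[theory][option]: True if that theory deems the option right.
# Assumed verdicts: T1 permits both options; T2 permits only y.
permissible = {
    "T1": {"x": True, "y": True},
    "T2": {"x": False, "y": True},
}

def prob_right(option):
    """Probability the option is morally right: sum the credences of
    every theory under which the option is permissible."""
    return sum(c for t, c in credences.items() if permissible[t][option])

print(prob_right("x"))             # 0.7 (only T1 permits x)
print(round(prob_right("y"), 10))  # 1.0 (every theory permits y)
```

On these stipulated numbers, the favourite theory (T1) permits x, but y is permitted by every theory Gary gives any credence to, so y maximizes the probability of acting rightly.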
Consider the following scenario described by Andrew Sepielli (professor of philosophy at the University of Toronto, who has written extensively about moral uncertainty and hedging in recent years): Suppose my credence is divided between the hypothesis that killing animals for food is permissible and the hypothesis that it is wrong. But suppose I believe that, if killing animals is better, it is only slightly better; I also believe that, if killing animals is worse, it is substantially worse — tantamount to murder, even. Then it seems … that I have most subjective reason not to kill animals for food.
The small gains to be realized if the first hypothesis is right do not compensate for the significant chance that, if you kill animals for food, you are doing something normatively equivalent to murder. Both Lockhart and Sepielli agree that it isn't enough for us to maximize the probability of being morally right.
The value of outcomes under each theory should be factored into our decision-making process as well. Moral hedging seems like a promising strategy, but it's plagued by some substantial problems.
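The stakes-sensitive reasoning behind Sepielli's verdict can be sketched with stipulated numbers. The figures below are illustrative assumptions of mine, and they presuppose that the two hypotheses' values can be placed on a common scale, which is exactly what the intertheoretic-comparison problem calls into question.

```python
# Credence evenly split between the two moral hypotheses (an assumption).
credences = {"killing_ok": 0.5, "killing_wrong": 0.5}

# value[hypothesis][option]: how good each option is if that hypothesis
# is true. Assumed scale: a slight gain from killing if it is permissible,
# a murder-sized loss if it is wrong.
value = {
    "killing_ok":    {"kill": 1,    "abstain": 0},
    "killing_wrong": {"kill": -100, "abstain": 0},
}

def expected_value(option):
    """Credence-weighted moral value of an option across hypotheses."""
    return sum(c * value[h][option] for h, c in credences.items())

print(expected_value("kill"))     # -49.5
print(expected_value("abstain"))  # 0.0
```

Even with credence evenly split, the small upside of killing if it is permissible is swamped by the large downside if it amounts to murder, so abstaining comes out with the higher expected moral value.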
How are we supposed to compare values across moral theories that disagree with each other? The idea of intertheoretic comparison is at least intuitively intelligible, but on closer inspection, values from different moral theories seem fundamentally incommensurable: given that different theories assign different values, how could it be otherwise? Sepielli, for his part, proposes that we use our existing beliefs about the 'cardinal ranking' of values to make the comparison.
However, this method is open to objections of its own, and it also depends heavily on facts about practical psychology that are messy and have yet to be worked out. Whatever the case, there isn't any consensus on how to solve the problem of intertheoretic comparisons (PIC). And PIC has serious consequences: if the problem turns out to be insurmountable, moral hedging will be impossible.
This lack of consensus relates to another problem for moral hedging, and indeed for moral uncertainty in general. In addition to being uncertain about morality, we can also be uncertain about the best way to resolve moral uncertainty. Following that, we can be uncertain about the best way to resolve being uncertain about the best way to resolve moral uncertainty … and so on. How should we resolve this seemingly infinite regress of moral uncertainty? One last and related question is whether, practically speaking, calculated moral hedging is a plausible strategy for the average person.