My main research interests are in ethics. I focus on two questions.

  • When is one state of the world better than another?
  • What ought we to do?

I am especially interested in how these relate to uncertainty. This includes uncertainty about the future; uncertainty about the beliefs and actions of other people; and uncertainty about what is good for us.

In one sense, these are the most important questions there are, in any discipline. They deserve the most powerful tools available to answer them. Many of those tools come from areas of philosophy such as decision theory, formal epistemology, and the philosophy of probability, and from disciplines such as computer science, economics, and mathematics. My work draws heavily on these resources. I enjoy collaborating with people who are experts in these areas, and supervising students interested in them.

My current projects relate to the first question, under the assumption that what ultimately matters is how well people’s lives (or those of sentient beings) go. These projects have led to others in epistemology and the philosophy of probability. But in one way or another, they all connect with utilitarianism.

Research Themes


Utilitarianism

Crudely put, utilitarianism is the thesis that one world is better than another if it contains greater total welfare. This theory of distribution has been much criticised, but it received a major boost from Harsanyi’s utilitarian theorem of 1955.
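Stated schematically (an illustrative formalisation in my own notation, glossing over many complications about populations and the measurement of welfare):

```latex
% Total utilitarianism, schematically: world $x$ is better than world $y$
% just in case total welfare is greater in $x$, where $w_i(x)$ is the
% welfare of individual $i$ in world $x$.
x \succ y \iff \sum_{i} w_i(x) > \sum_{i} w_i(y)
```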

Harsanyi’s theorem is, on its face, mainly a result about expected utility theory. But this reliance makes it subject to versions of some of the traditional criticisms, particularly in the assumptions it makes about welfare comparisons. Joint work with Kalle Mikkola and Teru Thomas provides a reply.
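In rough outline (this is a paraphrase, not a full statement of the 1955 result): if each individual’s preferences and the social preferences all satisfy the expected utility axioms, and the social preferences respect a Pareto condition, then social utility must be an affine combination of individual utilities:

```latex
% Schematic conclusion of Harsanyi's 1955 aggregation theorem:
% there exist weights $a_i \ge 0$ and a constant $b$ such that,
% for every prospect $x$,
U(x) = \sum_{i} a_i\, u_i(x) + b
```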


Alternatives to Utilitarianism

Moral philosophers commonly describe their own views about distribution by contrast with what they see as the problems of utilitarianism. For this strategy to work well, it must engage with the most plausible version of utilitarianism. But philosophers often choose straw-man versions instead.

I have been using my favored account of utilitarianism to provide such a contrastive account of the alternatives. This work focuses on egalitarianism, the priority view, contractualism, personal and impersonal value, and various kinds of threshold views.


Uncertainty

My account of utilitarianism, and therefore of the alternatives to it, is centrally concerned with uncertainty. For simplicity, it is mostly stated in terms of risk, but work in progress explains how to make sense of it for a much wider range of ways of representing uncertainty.
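To a first approximation, a version stated in terms of risk ranks lotteries over worlds by expected total welfare (again in my own schematic notation, not a full statement of the account):

```latex
% Expected total welfare of a lottery $L$, where $p_L(x)$ is the
% probability that $L$ yields world $x$:
V(L) = \sum_{x} p_L(x) \sum_{i} w_i(x)
```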

A curiously related project with Branden Fitelson examines a new way of thinking about the foundations of comparative likelihood. We argue that it supports Dempster-Shafer belief functions, of which probability functions are a special case.
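For readers unfamiliar with them: a Dempster-Shafer belief function assigns degrees of belief to propositions while weakening the additivity required of a probability function:

```latex
% Core conditions on a belief function $\mathrm{Bel}$ over an algebra of
% propositions (full belief functions also satisfy the $n$-ary analogues
% of the third condition for every $n$):
\mathrm{Bel}(\varnothing) = 0, \qquad \mathrm{Bel}(\Omega) = 1,
\qquad \mathrm{Bel}(A \lor B) \ge \mathrm{Bel}(A) + \mathrm{Bel}(B) - \mathrm{Bel}(A \land B).
```

Probability functions are exactly the belief functions for which the third condition holds with equality for all propositions A and B.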

Epistemic Value

Epistemic utility theory uses norms from decision theory to examine problems in epistemology. Our work on comparative likelihood is in this vein, but I have recently been trying to develop a more general theory of epistemic value.

I am optimistic that this approach will shed light on norms governing consistency, uncertainty, and inference. I believe that it provides an interesting alternative to more popular ways of developing accuracy-first epistemology.


Mathematical Methods

Ethics involves complicated problems that overlap with formal epistemology, the philosophy of probability, decision theory, and game theory. No one could deny the relevance of mathematics to those subjects, so it would be remarkable if mathematical methods were not central to the study of ethics.

My own work especially involves axiomatic methods. Ordinary language is not well suited to the precise statement of axioms, and it is almost impossible to work out the consequences of even small sets of simple axioms without some mathematics.
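To illustrate the point about precision, compare the English gloss “if everyone is at least as well off in one world as in another, the first is at least as good” with a fully explicit statement of the Strong Pareto axiom (a standard axiom, chosen here purely for illustration):

```latex
% Strong Pareto, stated precisely in terms of individual welfare levels:
\text{if } w_i(x) \ge w_i(y) \text{ for all } i, \text{ then } x \succeq y;
\text{ if, in addition, } w_j(x) > w_j(y) \text{ for some } j, \text{ then } x \succ y.
```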

Future Projects

My early work in normative ethics took a broadly rule-utilitarian or contractualist approach. I am now skeptical about rule-based approaches. But I see the topic as one aspect of a vast range of problems at the intersection of ethics and social epistemology that I wish to explore further.

One of the main difficulties is that different people have different beliefs, information, and evidence, along with perhaps faulty beliefs about each other’s beliefs, information, and evidence. Rule utilitarians seem to solve this challenging problem by pretending it isn’t one; what should the rest of us think?