Todd Karhu

I’m a lecturer at King’s College London, where I’m based in the Yeoh Tiong Lay Centre for Politics, Philosophy and Law. I’m also the director of the Politics, Philosophy and Law (PPL) programme. Most of my research is in moral, legal, and political philosophy.

Before coming to King’s, I was a postdoctoral fellow at the McCoy Family Center for Ethics in Society at Stanford University. Before that, I received a PhD in philosophy from the London School of Economics. I also have an MPhil in political theory from the University of Oxford and a BA from University College Maastricht.

You can read copies of my published work below.

My email address is todd.karhu@kcl.ac.uk.

Here is my CV.

Click here if you’d rather see my dog.




What Justifies Our Bias Toward the Future?

Australasian Journal of Philosophy (2023). Online first. DOI: 10.1080/00048402.2022.2047747

A person is biased toward the future when she prefers, other things being equal, bad events to be in her past rather than her future or good ones to be in her future rather than her past. In this paper, I explain why both critics and defenders of future bias have failed to consider the best version of the view. I distinguish external time from personal time, and argue that future bias is best construed in terms of the latter. This conception of future bias avoids several standard objections. I then consider a justification of future bias which is consistent with that construal. My discussion points to a new position regarding the basic relation that grounds rational egoistic concern over time, according to which that relation is asymmetric between person-stages. I also explain how this way of justifying future bias would resolve the apparent tension between the future bias we display in our own case and our relative indifference to the timing of good and bad things that happen to other people.




Getting Machines to Do Your Dirty Work (with Tomi Francis)

Philosophical Studies (2023). Online first. DOI: 10.1007/s11098-023-02027-0

Autonomous systems are machines that can alter their behavior without direct human oversight or control. How ought we to program them to behave? A plausible starting point is given by the Reduction to Acts Thesis, according to which we ought to program autonomous systems to do whatever a human agent ought to do in the same circumstances. Although the Reduction to Acts Thesis is initially appealing, we argue that it is false: it is sometimes permissible to program a machine to do something that it would be wrong for a human to do. We advance two main arguments for this claim. First, the way an autonomous system will behave can be known in advance, and this knowledge can indirectly affect the behavior of other agents in ways that would not be possible at the time the system actually executes its programming. Second, a lack of knowledge of the identities of the victims and beneficiaries can provide a justification during the programming phase that would be unavailable to an agent at the time the system executes its programming.




Proportionality in the Liability to Compensate

Law and Philosophy 41 (5) (2022), pp. 583–600.

There is widely thought to be a proportionality constraint on harming others in self-defense, such that an act of defensive force can be impermissible because the harm it would inflict on an attacker is too great relative to the harm to the victim it would prevent. But little attention has been given to whether a corresponding constraint exists in the ethics of compensation, and, if so, what the nature of that constraint is. This article explores the issue of proportionality as it applies to the liability to compensate. I clarify and reject the view that some perpetrators are not liable to pay full compensation because doing so would be disproportionately burdensome, and ask what view we should adopt instead. A key step in that enquiry is an argument that someone is liable to bear the cost of compensating for an injury if and only if she would have been liable to bear that same cost in defense against that same injury ex ante.




Non-Compensable Harms

Analysis 79 (2) (2019), pp. 222–230.

It is more or less uncontroversial that when we harm someone through wrongful conduct we incur an obligation to compensate her. But sometimes compensation is impossible: when the victim is killed, for example. Other times, only partial compensation is possible. In this article, I take some initial steps toward exploring this largely ignored issue. I argue that the perpetrator of a wrongful harm incurs a duty to promote the impartial good in proportion to the amount of harm that cannot be made up for by compensating her victim.




Not All Killings Are Equally Wrong

Utilitas 31 (4) (2019), pp. 378–394.

Many people believe that the wrongness of killing a person does not depend on factors like her age, condition, or how much she has to lose by dying—a view Jeff McMahan has called the ‘Equal Wrongness Thesis’. This paper defends an argument that we should reject the Equal Wrongness Thesis on the basis of the moral equivalence between killing a person and knocking her unconscious.