Friday, September 21, 2012

Utilitarianism and Evolution


When talking about the “trolley problem” in class, most people took a utilitarian stance.  It seemed instinctive and obvious that we should sacrifice the life of one person to save the lives of ten others.  Why does this come so naturally to us, though? One can make a solid argument that taking no action is more blameless than actively killing a man.  By choosing to enter the situation, you’ve assumed a responsibility that may not be yours to take on.

I think the answer lies in the evolution of our species.  Throughout the early stages of our development, it was advantageous to look out for the good of the whole.  Early humans who could predict the outcomes of their actions gained an advantage in both individual and group survival.  Put yourself back in such a situation: if there are only a few hundred or thousand members of your species, the 10:1 utilitarian calculus seems much more urgent.  Over the course of our history, we’ve been put through ordeals in which it paid to look out for the good of our group/tribe/species.  The Internet Encyclopedia of Philosophy has a good page on Evolutionary Ethics, if you’re interested: http://www.iep.utm.edu/evol-eth/


While that might explain why we naturally gravitate toward utilitarian behavior today, it doesn’t address whether it makes sense to keep utilitarianism as a primary driving force behind our actions.  What do you think?  I believe utilitarianism definitely has merits, and it’s integral to the way most of the world’s political systems function.  In fact, I think every philosopher we’ve studied so far agrees with utilitarian logic to a degree.  Plato’s argument for why injustice isn’t profitable rests on the disruption of cooperation and a common sense of purpose; justice is achieved when everyone does what is best for the whole.  Aristotle believes that justice can only arise from the interactions between people and that we should aim for a virtuous mean in all things; rational, equitable interactions produce an effect that benefits society as a whole.  I don’t think Kant would agree that “killing a person is okay” could be willed as a universal law, but I believe he might face a “conflict of obligations” that would guide him to save the ten people.  What do you think?

That being said, I think there are definitely merits to the idea that we can’t always accurately predict the consequences of our actions. If we consider a more realistic version of the trolley problem, we run into a whole host of variables.  How do you know that the ten people on the active track couldn’t get out of the way?  The single person on a track that doesn’t normally get traffic at that time certainly won’t be expecting it. How do you know those people aren’t suicidal and won’t simply kill themselves anyway (why else would they be loitering on a trolley track)?  When we examine real-life situations, there are a lot of variables at play that utilitarian logic cannot take into account.  I realize that our discussion intentionally made many variables into constants, but we don’t necessarily have that luxury when applying it in everyday life.

2 comments:


  1. As you so astutely observed, the problem with utilitarian logic is in the application. Although the logic behind utilitarian calculus is very sound, it becomes impractical beyond everyday decisions. Making choices based on our calculations of the outcomes works for small matters, but it doesn’t hold water when making increasingly complex and significant decisions. When we begin to use utilitarian thought to attach a value to a human life, we should probably step back and consider on what objective basis we are determining another sentient being’s right to existence. It is critical to our understanding of many fundamental principles that all life is created equal. We are all equal in our worth in that we are all capable of exerting free will. When we begin to attach different values to different human beings, we are spiraling down the road toward immoral action. At what point do we decide that a chair is worth more than some human beings, seeing as the chair serves a better function? My example is ridiculous, but using utilitarian logic, we begin to dictate the value of things based not on inherent value but on perceived value. Even the most objective individual in the world will still make mistakes as to the true value of individuals and actions. It is truly foolish to determine morality from an equation that includes a variable as enormous as free will.

  2. Laudermilk, I'm so glad you opened the floor to the variables. Sitting in class, I'm thinkin' the same thing: "Why are all these folks loitering on trolley tracks!?" And I've also played with the idea of whether or not they were tied down to the tracks. I am intrigued by your question on whether or not these folks are suicidal; I laughed a little (seriously, who spends time like that on trolley tracks!).

    I LOVE that you brought up our past authors. I honestly didn't make the connection with all of them (Plato was a little more obvious, so I could get that one). I think Kant would treat this as a subjective reaction, as there is NO way killing a person could become a maxim. Since every *real* maxim is supposed to encourage a happy life, ending one life could never become a universal law, even to spare ten others. Since an individual would have to respond to the situation at hand, there is no law that can dictate the proper, moral action. One would have to use reason to decide what to do, and in this case, I do think Kant would agree with utilitarian calculus and would pull the switch to save the ten.

    Professor J called this situation a moral dilemma: a situation in which neither choice is truly morally sound, so we must choose the lesser of two evils, and most agree the lesser evil here is killing the one.

    Also, just gonna throw this in here, the evolution-based utilitarianism you started with is brilliant.
