In Wednesday’s symposium we further
discussed the many complex facets of the trolley problem. In our conversation
we attempted to differentiate between the beliefs of Kant and Mill. While they
both seek the best possible solution to the trolley problem, they
differ in their reasoning about whether or not to pull the lever.
Under Kant’s deontology, we could
argue both sides of the trolley problem. Kant states that in this scenario a
moral decision cannot be claimed and that decisions can only be based on
rationality, in accordance with the universal law. Kant could argue that
the most rational decision would be to act on principles of necessity, which
would allow him to kill one individual in order to save ten. Or Kant could make
the rational decision not to pull the lever, because pulling it would mean
acting against the will of another human being.
Conversely, under Mill’s utilitarian
beliefs we concluded that Mill would have no choice but to pull the lever
or push the fat man in order to save the ten people. Based on Mill’s understanding
of morals, his central claim is that the greatest amount of happiness for the
greatest number of people should be the most important rule in all decision
making. So under this felicific (happiness) calculus, the possible happiness of
the ten people would always trump the one individual’s desires.
Nevertheless, when considering
these ideologies we have to ask ourselves which decision we would make. To put
the trolley problem in a different perspective, I would like to use an example
from the movie “I, Robot”. In one particular scene Will Smith’s character explains why he
has a hatred for robots. He recounts that one day, while he was
driving alone in the rain, he lost control of his car and crashed into another
vehicle carrying a young girl and her father, sending both cars into the river. As
the cars began to sink, a robot jumped into the water to save them. The father
died on impact, so the only two left were Will and the girl. While in the
water, the robot made the decision to save Will instead of the girl
because he had a greater probability of surviving. Will’s hatred stems
from this incident because the robot chose the best rational decision instead
of the best moral decision.
So my question to you all is
this: in this instance, would you, Kant, or Mill choose to save Will or
the young girl?
Although it has been quite a while since I have watched this movie, I believe that it would be simpler to argue the robot’s case using Mill’s utilitarian logic. The mere fact that Will was more likely to survive supports the decision to let the girl die. The robot made a purely rational choice. In the movie, Will’s hatred for robots stems from survivor’s guilt. It is difficult for him to understand the robot’s point of view because, unlike the robot, humans have emotions that play a part in our rationality.