Humans, Utilitarians, etc.

a human. jk shes not. this will be relevant later

I have been reading a bit of fellow Yale alum Aaron Gertler’s blog here. Recently, it has featured a very persuasive argument for the idea that while empathy is nice, we also need to take morality beyond that and start helping people “even when we have no personal connection or emotional investment in the outcome.” A lot of what he mentions is combating the bystander effect, meaning intervening when kids are bullied, shot, etc., instead of just standing there with our teeth in our mouths, as my grandpa would say, cause we don’t feel like helping. This I appreciated, because being brave is a cool thing that probably doesn’t get enough mention in morality-world.

The problem is that Aaron’s article reminded me of my archenemy, which is rationality. And also helping people. And worst of all, rationality AND helping people. This is my archenemy because people like Peter Singer are always telling me that if I were rationally helping people, I would be giving $10 to the most effective charity to save the lives of people I don’t know, because rationally this would produce EVEN MORE GOOD than spending the same amount of money on going to the Indian buffet by myself, which is, of course, what I did. Except actually, when Singer tells me this I don’t feel bad at all. I don’t know why. Maybe I have a character flaw.

jk i dont have flaws

Aaron reminds me of this by quoting from Against Empathy, by the infamous Yale psychology professor Paul Bloom. I actually took “Against Empathy: The Class” (aka PSYC 423/CGSC 423/PSYC 623: Cognitive Science of Good and Evil) last fall as an undergrad, which solidified my hatred of rational helping. I think Professor Bloom (who is a great professor, btw) would agree that the agenda for the class consisted mainly of him trying to persuade us to be against empathy, which inevitably pushed me to become quite a fan of the “emotion” (empathy probably isn’t an emotion) (empathy is an ill-defined construct). His preferred alternative to having empathy was to be a nice psychopath, or in other words, basically a utilitarian.

where I learned everything i know

Which is fine, I guess, if that’s like, your thing. But utilitarianism is honestly kind of boring. You just pick a value, and then you compute things. Traditionally the value would be happiness, which seems like a terrible idea to me, but that’s kind of a separate issue, so let’s just say the value is happiness. The real problem is that I’m kind of not down with any kind of morality where it would be preferable for me to turn into a computer. I’ma just bring back this quote from my last post (ironically, by J.S. Mill, who was a huge utilitarian):

Supposing it were possible to get houses built, corn grown, battles fought, causes tried, and even churches erected and prayers said by machinery–by automatons in human form–it would be a considerable loss to exchange for these automatons even the men and women who at present inhabit the more civilized part of the world, who are assuredly but starved specimens of what humanity can and will produce.

It seems like utilitarianism kind of runs into the automaton issue. Mixing other thoughts and feelings with our empathy, as in Aaron’s proposal, seems good. But eliminating empathy and other human emotions in the name of morality seems like a problem. I do not consider myself an especially emotional person. Someone once told me I don’t have emotions, which was rude. But it WAS rude because it seemed like they were denying my humanity. I wouldn’t take that as a compliment, even if, according to Bloom et al., maybe I should, because overall most emotions are morally detrimental: they make us play favorites, punch people, spend money on buffets, etc. Morality made out of “pure reason” is fine I GUESS, but since I’M not made out of pure reason, I can’t help but feel like being turned into an emotionless automaton in human form wouldn’t be the best thing for me. I have a bias towards being the person that I am, cause I think individuality is cool and I don’t want to be an indistinguishable rational robot-person. So maybe that’s not THE BEST argument for not helping starving children, but it’s where I’m at right now.

helps starving children (automaton)

does cool shit (me)

2 responses to “Humans, Utilitarians, etc.”

  1. Aaron Gertler

    Thanks for writing, Bo!

    Morality made out of “pure reason” is fine I GUESS but since I’M not made out of pure reason I can’t help but feel like being turned into an emotionless automaton in human form wouldn’t be the best thing for me.

    I think I may have given the wrong impression of what heroic responsibility is supposed to feel like from the inside. It’s certainly an emotional feeling. Imagine if I told you that sometime in the next year, you were going to have to perform CPR in an attempt to save someone’s life. How would you react?

    I’m guessing you wouldn’t start making robot noises and optimizing yourself as a CPR-bot. Instead, you’d take a class, practice on a regular basis, but not let it take over your life — most of the time, you wouldn’t even be thinking about CPR. If the thought did cross your mind, you’d feel something like: “I’m ready for this, I’m going to do my best!” or “Oh, no, am I really ready? I’d better review my compression rate!” Whenever you were in a large crowd of people, you might feel a mild sense of anticipation: “Is today the day? Will I be called upon to help? If someone collapses, what’s my next move?”

    Feelings are behind just about every action a person takes. Something really big, like trying to save a life or break up a fight, will involve lots of feelings. Becoming a robot is wholly unnecessary in the pursuit of a morally better life. 🙂

    The fact that rationalist utilitarians use words like “rationalist utilitarians” does tend to give off a robotic impression. But there are all sorts of emotional people in the community — cheerful go-getters, neurotic worriers, calm calculators, fierce advocates, and so on.

    During the 23.75 hours per day that the average EA isn’t thinking about anything EA-related, they’re doing what anyone else would do: Working at a job, hanging out with friends, and so on. The only difference is in some of the trappings of these activities: They spend their salaries somewhat differently, they choose cheaper restaurants for nights out on the town, and so on. Even people like Peter Singer and Will MacAskill, the leading philosophers of the movement, eat out and buy movie tickets and live almost indistinguishably from other philosophers — other than in the way they spend their money and the topics they write about.

    One common misconception of EA/rationalism is that everyone is intent on reaching some kind of “maximum”. A few people would apply that description to themselves, but most people are focused on doing “better”: Living a bit more ethically, eating somewhat less meat, donating a bit more, forming a slightly more accurate understanding of the world, and so on. (Less Wrong, the internet’s main rationalist hub, prefers the term “aspiring rationalist” to just “rationalist”.)

    • Wow, thanks for writing back so extensively. Actually, I think your explanation of heroic responsibility was cool, and it doesn’t at all seem overly rational. That criticism wasn’t directed at your posts on this topic, which I feel are a pretty even-handed take in terms of rationality vs. emotion. They just reminded me of other things I don’t like.

      I understand that most EA people are focused on just doing “better,” even Peter Singer etc. Which I appreciate, because it makes them more relatable than if they were really going off the deep end in terms of maximization. I’m not really sure why they don’t try to maximize, but that’s ok. In any case, whether you’re trying to maximize or not is different from the question of what you would maximize if you were maximizing; in other words, what you value. It seems to me like having “doing the most good” be your moral value doesn’t leave much room for the individuality of people, because there is always one right moral answer. And then we can say it’s fine for people not to maximize, not to choose that right answer, as you are saying, but if that really is fine, doesn’t it just mean doing the most good isn’t the right answer? I’m sure there’s a utilitarian/EA answer to this, but I feel like if I can go out to dinner with my ten dollars rather than saving a life, why can’t I go out to a more expensive place and not save another life, etc.?
