
The shrimp welfare project

(benthams.substack.com)
81 points by 0xDEAFBEAD | 1 comment | source
ajkjk ◴[] No.42173485[source]
This person seems to think they have engaged with the counterarguments against their way of looking at the world, but to my eye they are clueless; they just have no idea how most other people think at all. They've rationalized their morality into some kind of pseudo-quantitative ethical maximization problem and then failed to notice that most people's moralities don't and aren't going to work like that.

Indeed, people will resist being "tricked" into this framework: debating on these terms will feel like having their morals twisted into justifying things they don't believe in. And although they may not have the patience or rhetorical skill to put into words exactly why they resist it, their intuitions won't lead them astray, and they'll react according to their true-but-hard-to-verbalize beliefs (usually by gradually getting frustrated and angry with you).

A person who believes in rationalizing everything will then think that someone who resists this kind of argument is just too dumb, or irrational, or stubborn, or actually evil, to see that they are wrong. But it seems to me that the very idea that you can rationalize morality, that you can compute the right thing to do at a personal-ethics level, is itself a moral belief, which those people simply do not agree with, and their resistance is in accordance with that: you'd be trying to convince them to replace their moral beliefs with yours in order to win an argument by tricking them with logic. No wonder they resist! People do not release control over their moral beliefs lightly. Rather, I think it's the people who are very insecure in their own beliefs who are susceptible to giving them up to someone who runs rhetorical circles around them.

I've come to think that a lot of 21st century discord (cf. American political polarization) is due to this basic conflict. People who believe in rationalizing everything think they can't be wrong because the only way to evaluate anything is rationally--a lens through which, of course, rationality looks better than anything else. Meanwhile, everyone who trusts in their own moral intuitions feels tricked and betrayed and exploited and sold out when it happens. Sure, they can't always find the words to defend themselves. But it's the rationalizers who are in the wrong: pressuring someone into changing their mind is not okay; it's a basic act of disrespect. Getting someone on your side for real means appealing to their moral intuition, not making them doubt theirs until they give up and reluctantly agree with yours. Anyway it's a temporary and false victory: theirs will re-emerge years later, twisted and deformed from years of imprisonment, and often set on vengeance. At that point they may well be "wrong", but there's no convincing them otherwise: their moral goal has been replaced with a singular need to get to make their own decisions instead of being subjugated by yours.

Anyway.

IMO to justify animal welfare utilitarianism to people who don't care about it at all, you need to take one of two stances:

1. We (the animal-empathizers) live in a society with you, and we care a lot about this, but you don't. But we're in community with each other, so we ought to support each other's causes even if they're not personally relevant to us. So how about you support what we care about and we support what you care about, so everyone benefits? In this case it's very cheap to help.

2. We all live in a society together which should, by now, have largely solved for our basic needs (except for our basic incompetence at it, which, yeah, we need to keep working on). The basic job of morality is to guarantee the safety of everyone in our community. As we start checking off basic needs at the local scale we naturally start expanding our definition of "community" to more and more beings that we can empathize with: other nations and peoples, the natural world around us, people in the far future who suffer from our carelessness, pets, and then, yes, the animals that we use for food. Even though we're still working on the "nearby" hard stuff, like protecting our local ecosystems, we can also start with the low-hanging fruit on the far-away stuff, including alleviating the needless suffering of shrimp. Long-term we hope to live in harmony with everything on earth in a way that has us all looking out for each other, and this is a small step towards that.

"(suffering per death) * (discount rate for shrimp being 3% of a human) * (dollar to alleviate) = best charity" just doesn't work at all. I notice that the natural human moral intuition (the non-rational version) is necessarily local: it's focused on protecting whatever you regard as your community. So to get someone to extend it to far-away less-sentient creatures, you have to convince the person to change their definition of the "community"--and I think that's what happens naturally when they feel like their local community is safe enough that they can start extending protection at a wider radius.

replies(2): >>42173641 #>>42173664 #
sodality2 ◴[] No.42173641[source]
> They've rationalized their morality into some kind of pseudo-quantitative ethical maximization problem and then failed to notice that most people's moralities don't and aren't going to work like that.

To me, the point of this argument (along with similar ones) is to expose these deeper asymmetries that exist in most people's moral systems - to make people question their moral beliefs instead of accepting their instinct. Not to say "You're all wrong, terrible people for not donating your money to this shrimp charity which I have calculated to be a moral imperative".

replies(2): >>42174251 #>>42176428 #
sixo ◴[] No.42174251[source]
> to make people question their moral beliefs instead of accepting their instinct

Yes, every genius 20-year-old wants to break down other people's moral beliefs, because it's the most validating feeling in the world to change someone's mind. From the other side, this looks like, quoting OP:

> you'd be trying to convince them to replace their moral beliefs with yours in order to win an argument by tricking them with logic.

And feels like:

> pressuring someone into changing their mind is not okay; it's a basic act of disrespect.

And doesn't work, instead:

> Anyway it's a temporary and false victory: theirs will re-emerge years later, twisted and deformed from years of imprisonment, and often set on vengeance.

replies(1): >>42174307 #
sodality2 ◴[] No.42174307[source]
> Yes, every genius 20-year-old wants to break down other people's moral beliefs, because it's the most validating feeling in the world to change someone's mind

I may be putting my hands up in surrender, as a 20-year-old (decidedly not a genius, though). But I'm defending this belief rather than trying to convince others of it. Also, I don't think it's the worst thing in the world to have people question their preconceived moral notions. I've taken ethics classes in college and I personally loved having mine challenged.

replies(1): >>42174391 #
sixo ◴[] No.42174391[source]
ha, got one. Yes, it is pretty fun if you're in the right mental state for it. I've just seen so many EA-type rationalists out on the internet proliferating this worldview, often pushing it on people who a) don't enjoy it, b) are threatened by it, and c) are underequipped to defend themselves rationally against it, that I find myself jumping to defend against it. EA-type utilitarianism, I think, proliferates widely on the internet specifically by "survival bias": it is easily argued in text; it looks good on paper. Whereas the "innate" morality of most humans is based more on ground-truth emotional reality; see my other comment for the character of that: https://news.ycombinator.com/item?id=42174022
replies(1): >>42174883 #
sodality2 ◴[] No.42174883[source]
I see, and I wholly agree. I'm looking at this from essentially the academic perspective (aka, when I was required to at least question my innate morality). When I saw this blog post, I looked at it in the same way. If you read it as "this charity is more useful than every other charity, we should stop offering soup kitchens, and redirect the funding to the SWP", then I disagree with that interpretation. I don't need or want to rationalize that decision to an EA. But it is a fun thought experiment to discuss.