
The shrimp welfare project

(benthams.substack.com)
81 points by 0xDEAFBEAD | 6 comments
ajkjk ◴[] No.42173485[source]
This person seems to think they have engaged with the counterarguments against their way of looking at the world, but to my eye they are clueless; they just have no idea how most other people think at all. They've rationalized their morality into some kind of pseudo-quantitative ethical maximization problem and then failed to notice that most people's moralities don't and aren't going to work like that.

Indeed, people will resist being "tricked" into this framework: debating on these terms will feel like having their morals twisted into justifying things they don't believe in. And although they may not have the patience or rhetorical skill to put into words exactly why they resist it, their intuitions won't lead them astray, and they'll react according to their true-but-hard-to-verbalize beliefs (usually by gradually getting frustrated and angry with you).

A person who believes in rationalizing everything will then think that someone who resists this kind of argument is just dumb, or irrational, or stubborn, or actually evil, for failing to see that they are wrong. But it seems to me that the very idea that you can rationalize morality, that you can compute the right thing to do at a personal-ethics level, is itself a moral belief, which those people simply do not agree with, and their resistance is in accordance with that: you'd be trying to convince them to replace their moral beliefs with yours in order to win an argument by tricking them with logic. No wonder they resist! People do not release control over their moral beliefs lightly. Rather, I think it's the people who are very insecure in their own beliefs who are susceptible to giving them up to someone who runs rhetorical circles around them.

I've come to think that a lot of 21st century discord (cf. American political polarization) is due to this basic conflict. People who believe in rationalizing everything think they can't be wrong because the only way to evaluate anything is rationally--a lens through which, of course, rationality looks better than anything else. Meanwhile everyone who trusts in their own moral intuitions feels tricked and betrayed and exploited and sold out when it happens. Sure, they can't always find the words to defend themselves. But it's the rationalizers who are in the wrong: pressuring someone into changing their mind is not okay; it's a basic act of disrespect. Getting someone on your side for real means appealing to their moral intuition, not making them doubt theirs until they give up and reluctantly agree with yours. Anyway it's a temporary and false victory: theirs will re-emerge years later, twisted and deformed from years of imprisonment, and often set on vengeance. At that point they may well be "wrong", but there's no convincing them otherwise: their moral goal has been replaced with a singular need to get to make their own decisions instead of being subjugated by yours.

Anyway.

IMO to justify animal welfare utilitarianism to people who don't care about it at all, you need to take one of two stances:

1. We (the animal-empathizers) live in a society with you, and we care a lot about this, but you don't. But we're in community with each other, so we ought to support each other's causes even if they're not personally relevant to us. So how about you support what we care about and we'll support what you care about, so everyone benefits? In this case it's very cheap to help.

2. We all live in a society together which should, by now, have largely solved for our basic needs (except for our basic incompetence at it, which, yeah, we need to keep working on). The basic job of morality is to guarantee the safety of everyone in our community. As we start checking off basic needs at the local scale we naturally start expanding our definition of "community" to more and more beings that we can empathize with: other nations and peoples, the natural world around us, people in the far future who suffer from our carelessness, pets, and then, yes, the animals that we use for food. Even though we're still working on the "nearby" hard stuff, like protecting our local ecosystems, we can also start with the low-hanging fruit on the far-away stuff, including alleviating the needless suffering of shrimp. Long-term we hope to live in harmony with everything on earth in a way that has us all looking out for each other, and this is a small step towards that.

"(suffering per death) * (discount rate for shrimp being 3% of a human) * (dollar to alleviate) = best charity" just doesn't work at all. I notice that the natural human moral intuition (the non-rational version) is necessarily local: it's focused on protecting whatever you regard as your community. So to get someone to extend it to far-away less-sentient creatures, you have to convince the person to change their definition of the "community"--and I think that's what happens naturally when they feel like their local community is safe enough that they can start extending protection at a wider radius.

replies(2): >>42173641 #>>42173664 #
1. sodality2 ◴[] No.42173641[source]
> They've rationalized their morality into some kind of pseudo-quantitative ethical maximization problem and then failed to notice that most people's moralities don't and aren't going to work like that.

To me, the point of this argument (along with similar ones) is to expose these deeper asymmetries that exist in most people's moral systems - to make people question their moral beliefs instead of accepting their instinct. Not to say "You're all wrong, terrible people for not donating your money to this shrimp charity which I have calculated to be a moral imperative".

replies(2): >>42174251 #>>42176428 #
2. sixo ◴[] No.42174251[source]
> to make people question their moral beliefs instead of accepting their instinct

Yes, every genius 20-year-old wants to break down other people's moral beliefs, because it's the most validating feeling in the world to change someone's mind. From the other side, this looks like, quoting OP:

> you'd be trying to convince them to replace their moral beliefs with yours in order to win an argument by tricking them with logic.

And feels like:

> pressuring someone into changing their mind is not okay; it's a basic act of disrespect.

And doesn't work, instead:

> Anyway it's a temporary and false victory: theirs will re-emerge years later, twisted and deformed from years of imprisonment, and often set on vengeance.

replies(1): >>42174307 #
3. sodality2 ◴[] No.42174307[source]
> Yes, every genius 20-year-old wants to break down other people's moral beliefs, because it's the most validating feeling in the world to change someone's mind

I may be putting my hands up in surrender, as a 20-year-old (decidedly not a genius, though). But I'm defending this belief here, not trying to convince others of it. Also, I don't think it's the worst thing in the world to have people question their preconceived moral notions. I've taken ethics classes in college and I personally loved having mine challenged.

replies(1): >>42174391 #
4. sixo ◴[] No.42174391{3}[source]
Ha, got one. Yes, it is pretty fun if you're in the right mental state for it. I've just seen so many EA-type rationalists out on the internet proliferating this worldview, often pushing it on people who a) don't enjoy it, b) are threatened by it, and c) are under-equipped to defend themselves rationally against it, that I find myself jumping in to defend against it. EA-type utilitarianism, I think, proliferates widely on the internet specifically by "survivorship bias": it is easily argued in text; it looks good on paper. Whereas the "innate" morality of most humans is based more on ground-truth emotional reality; see my other comment for the character of that: https://news.ycombinator.com/item?id=42174022
replies(1): >>42174883 #
5. sodality2 ◴[] No.42174883{4}[source]
I see, and I wholly agree. I'm looking at this from essentially the academic perspective (i.e., when I was required to at least question my innate morality). When I saw this blog post, I looked at it in the same way. If you read it as "this charity is more useful than every other charity, we should stop offering soup kitchens, and redirect the funding to the SWP", then I disagree with that interpretation. I don't need or want to rationalize that decision to an EA. But it is a fun thought experiment to discuss.
6. ajkjk ◴[] No.42176428[source]
IMO, the idea that "this kind of argument exposes deeper asymmetries..." is itself fallacious for the same reason: it presupposes that a person's morality answers to logic.

Were morality a logical system, then yes, finding apparent contradictions would seem to invalidate it. But somehow that's backwards. At some level moral intuitions can't be wrong: they're moral intuitions, not logic. They obey different rules; they operate at the level of emotion, safety, and power. A person basically cannot be convinced with logic to stop caring about the safety of someone or something whose safety they care about. Even if they submit to an argument of that form, they're doing it because they're conceding power to the arguer, not because they've changed their mind (although they may actually say that they changed their opinion as part of their concession).

This isn't cut-and-dried; I think I have seen people genuinely change their moral stances on something from a logical argument. But I suspect that it's incredibly rare, and when it happens it feels genuinely surprising and bizarre. Most of the time when it seems like it's happening, there's actually something else going on. A common one is a person changing their professed moral stance because they realize they win some social cachet for doing so. But that's a switch at the level of power, not morality.

Anyway it's easy to claim to hold a moral stance when it takes very little investment to do so. To identify a person's actual moral opinions you have to see how they act when pressure is put on them (for instance, do they resist someone trying to change their mind on an issue like the one in the OP?). People are incredibly good at extrapolating from a moral claim to the moral implications that affect them (if you claim that we should prioritize saving the lives of shrimp, what else does that argument justify? And what things that I care about does that argument then invalidate? Can I still justify spending money on the things I care about in a world where I'm supposed to spend it on saving animals?), and they will treat an argument as a threat if it seems to imply things that would upset their personal morality.

The sorts of arguments that do regularly change a person's opinion on the level of moral intuitions are of the form:

* information that you didn't notice how you were hurting/failing to help someone

* or, information that you thought you were helping or avoiding hurting someone, but you were wrong.

* corrective actions, like shame from someone you respect or depend on ("you hurt this person and you're wrong to not care")

* other one-on-one emotional actions, like a person genuinely apologizing, or acting selflessly towards you, or asserting a boundary

(Granted, this stance seems to invalidate the entire subject of ethics. And it kinda does: what I'm describing is phenomenological, not ethical; I'm claiming that this is how people actually work, even if you would like them to follow ethics. It seems like ethics is what you get when you try to extend ground-level moralities to an institutional level. When you abstract morality from individuals to collectives, you have to distill it into actual rules that obey some internal logic, and that's where ethics comes in.)