
The shrimp welfare project

(benthams.substack.com)
81 points by 0xDEAFBEAD | 1 comment
n4r9 ◴[] No.42173011[source]
Apologies for focusing on just one sentence of this article, but I feel like it's crucial to the overall argument:

> ... if [shrimp] suffer only 3% as intensely as we do ...

Does this proposition make sense? It's not obvious to me that we can assign percentage values to suffering, or compare it to human suffering, or treat the values in a linear fashion.

It reminds me of that vaguely absurd thought experiment where you compare one person undergoing a lifetime of intense torture vs billions upon billions of humans getting a fleck of dust in their eyes. I just cannot square choosing the former with my conscience. Maybe I'm too unimaginative to comprehend so many billions of bits of dust.

0xDEAFBEAD ◴[] No.42173441[source]
The way I think about it is that we're already making decisions like this in our own lives. Imagine a teenager who gets a summer job so they can save for a PS5. The teenager is making an implicit moral judgement, with themselves as the only moral patient. They're judging that the negative utility from working the job is lower in magnitude than the positive utility that the PS5 would generate.

If the teenager gets a job offer, but the job only pays minimum wage, they may judge that the disutility for so many hours of work actually exceeds the positive utility from the PS5. There seems to be a capability to estimate the disutility from a single hour of work, and multiply it across all the hours which will be required to save enough.
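The per-hour multiplication described above can be sketched in a few lines of Python. All the numbers here are hypothetical, chosen only to illustrate how a lower wage can flip the comparison:

```python
def worth_taking_job(utility_of_ps5, disutility_per_hour, wage, price):
    """Toy model: is the one-time utility of the purchase worth the
    accumulated disutility of the hours needed to afford it?"""
    hours_needed = price / wage
    total_disutility = disutility_per_hour * hours_needed
    return utility_of_ps5 > total_disutility

# At a decent wage, fewer hours are needed, so the job looks worthwhile:
print(worth_taking_job(utility_of_ps5=100, disutility_per_hour=2.0,
                       wage=15.0, price=500))   # True

# At minimum wage, the same purchase demands far more hours of disutility:
print(worth_taking_job(utility_of_ps5=100, disutility_per_hour=2.0,
                       wage=7.25, price=500))   # False
```

Nothing hinges on the specific numbers; the point is just that people routinely perform this kind of multiply-and-compare over their own utilities without worrying about incomparability.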

It would be plausible for the teenager to argue that the disutility from the job exceeds the utility from the PS5, or vice versa. But I doubt many teenagers would tell you "I can't figure out if I want to get a job, because the utilities simply aren't comparable!" Incomparability just doesn't seem to be an issue in practice for people making decisions about their own lives.

Here's another thought experiment. Imagine you get laid off from your job. Times are tough, and your budget is tight. Christmas is coming up. You have two children and a pet. You could get a fancy present for Child A, or a fancy present for Child B, but not both. If you do buy a fancy present, the only way to make room in the budget is to switch to a less tasty food brand for your pet.

This might be a tough decision if the utilities are really close. But if you think your children will mostly ignore their presents in order to play on their phones, and your pet gets incredibly excited every time you feed them the more expensive food brand, I doubt you'll hesitate on the basis of cross-species incomparability.

I would argue that the shrimp situation sits closer to these sorts of everyday "common sense" utility judgments than to an exotic limiting case such as torture vs dust specks. I'm not sure dust specks have any negative utility at all, actually. Maybe they're even positive utility, if they trigger a blink which is infinitesimally pleasant. If I change it from specks to bee stings, it seems more intuitive that there's some astronomically large number of bee stings such that torture would be preferable.

It's also not clear to me what I should do when my intuitions and mathematical common sense come into conflict. As you suggest, maybe if I spent more time really trying to wrap my head around how astronomically large a number can get, my intuition would line up better with math.

Here's a question on the incomparability of excruciating pain. Back to the "moral judgements for oneself" theme... How many people would agree to get branded with a hot branding iron in exchange for a billion dollars? I'll bet at least a few would agree.

sixo ◴[] No.42174022[source]
> The teenager is making an implicit moral judgement, with themselves as the only moral patient.

No they're not! You have made a claim of the form "these things are the same thing"—but it only seems that way if you can't think of a single plausible alternative. Here's one:

* Humans are motivated by two competing drives. The first drive we can call "fear", which aims to avoid suffering, either personally or in people you care about or identify with. This derives from our natural empathic instinct, but can be extended by a social construction of group identity. So, the shrimp argument is saying "your avoiding-suffering instinct can and should be applied to crustaceans too", which is contrary to how most people feel. Fear also includes "fear of ostracization", this being equivalent to death in a prehistoric context.

* The second drive is "thriving" or "growing" or "becoming yourself", and leads you to glimpse the person you could be, things you could do, identities you could hold, etc, and to strive to transform yourself into those things. The teenager ultimately wants the PS5 because they've identified with it in some way—they see it as a way to express themself. Their "utilitarian" actions in this context are instrumental, not moral—towards the attainment of what-they-want. I think, in this simple model, I'd also broaden this drive to include "eating meat"—you don't do this for the animal or to abate suffering, you do it because you want to: your body's hungry, you desire the pleasure of satiation, and you act to realize that desire.

* The two drives are not the same, and in the case of eating meat are directly opposed. (You could perhaps devise a way to see either as, ultimately, an expression of the other.) Human nature, then, basically undertakes the "thriving" drive except when there's a threat of suffering, in which case we switch gears to "fear" until it's handled.

* Much utilitarian discourse seems to exist in a universe where the apparently-selfish "thriving" drive doesn't exist, or has been moralized out of existence—because it doesn't look good on paper. But, however it sounds, it in fact exists, and you will find that almost all living humans will defend their right to express themselves, sometimes to the death. This is at some level the essence of life, and the rejection of it leads many people to view EA-type utilitarianism as antithetical to life itself.

* One reason for this is that "fear-mode thinking" is cognitively expensive, and while people will maintain it for a while, they will eventually balk against it, no matter how reasonable it seems (probably this explains the last decade of American politics).

0xDEAFBEAD ◴[] No.42174453[source]
I find myself motivated to alleviate suffering in other beings. It feels good that a quarter million shrimp are better off because I donated a few hundred bucks. It makes me feel like my existence on this planet is worthwhile. I did my good deed for the day.

There was a time when my good deeds were more motivated by fear. I found that fear wasn't a good motivator. This has become the consensus view in the EA community. EAs generally think it's important to avoid burnout. After reworking my motivations, doing good now feels like a way to thrive, not a way to avoid fear. The part of me which was afraid feels good about this development, because my new motivational structure is more sustainable.

If you're not motivated to alleviate suffering in other beings, it is what it is. I'm not going to insult you or anything. However, if I notice you insulting others over moral trifles, I might privately think to myself that you are being hyperbolic. When I put my EA-type utilitarian hat on, almost all internet fighting seems to lack perspective.

I support your ability to express yourself. (I'm a little skeptical that's the main driver of the typical PS5 purchase, but that's beside the point.) I want you to thrive! I consume meat, so I can't condemn you for consuming meat. I did try going vegan for a bit, but a vegan diet was causing fatigue. I now make a mild effort to eat a low-suffering diet. I also donate to https://gfi.org/ to support research into alternative meats. (I think it's plausible that the utilitarian impact of my diet+donations is net positive, since the invention of viable alternative meats could have such a large impact.) And whenever I get the chance, I rant about the state of vegan nutrition online, in the hope that vegans will notice my rants and improve things.

(Note that I'm not a member of the EA community, but I agree with aspects of the philosophy. My issues with the community can go in another thread.)

(I appreciate you writing this reply. Specifically, I find myself wondering if utilitarian advocacy would be more effective if what I just wrote, about the value of rejecting fear-style motivation, was made explicit from the beginning. It could make utilitarianism both more appealing and more sustainable.)

jhanschoo ◴[] No.42180156[source]
Hi, I'm a flexitarian who finds any diet more restrictive than my current one unsustainable given my habits, so I'm interested in information on the topic. Can you direct me to what you've discussed regarding vegan nutrition?
0xDEAFBEAD ◴[] No.42180388[source]
https://news.ycombinator.com/item?id=41820828

https://forum.effectivealtruism.org/posts/dbw2mgSGSAKB45fAk/...

Really I want to see vegans do a comprehensive investigation of every last nutrient that's disproportionately found in animal products, including random stuff like beta-alanine, creatine, choline, etc., and take a "better safe than sorry" approach of inventing a veggie burger that contains all that stuff in abundance, and is palatable.

I suspect you could make a lot of money by inventing such a burger. Vegans are currently fixated on improving taste, and they seem to have a bit of a blind spot around nutrition. I expect a veggie burger which mysteriously makes vegans feel good, gives them energy, and makes them feel like they should eat more of it will tend to sell well.

jhanschoo ◴[] No.42181101{3}[source]
Thanks!