Most active commenters
  • sodality2(18)
  • BenthamsBulldog(16)
  • 0xDEAFBEAD(14)
  • dfedbeef(8)
  • mistercow(7)
  • n4r9(6)
  • NoMoreNicksLeft(5)
  • dartos(4)
  • bhelkey(4)

The shrimp welfare project

(benthams.substack.com)
81 points by 0xDEAFBEAD | 175 comments
1. sodality2 ◴[] No.42172993[source]
I read this article after it was linked on Amos Wollen's substack this weekend. Thoroughly convinced. I have been preaching shrimp rights to everyone in my life. 15,000 shrimp will be given a death with dignity by my donation.
2. thinkingtoilet ◴[] No.42172998[source]
I love articles like this that challenge the status quo of morality. I find myself asking the question, "Would I rather save a billion shrimp or one human?" and I honestly think I'm siding with the human. I'm not saying that answer is "correct". It's always good to think about these things, and the point the author makes, that shrimp are a test of our morality because they're so different, is a good one.
replies(6): >>42173031 #>>42173080 #>>42173385 #>>42175436 #>>42175641 #>>42180356 #
3. n4r9 ◴[] No.42173011[source]
Apologies for focusing on just one sentence of this article, but I feel like it's crucial to the overall argument:

> ... if [shrimp] suffer only 3% as intensely as we do ...

Does this proposition make sense? It's not obvious to me that we can assign percentage values to suffering, or compare it to human suffering, or treat the values in a linear fashion.

It reminds me of that vaguely absurd thought experiment where you compare one person undergoing a lifetime of intense torture vs billions upon billions of humans getting a fleck of dust in their eyes. I just cannot square choosing the former with my conscience. Maybe I'm too unimaginative to comprehend so many billions of bits of dust.

replies(10): >>42173107 #>>42173149 #>>42173164 #>>42173244 #>>42173255 #>>42173304 #>>42173441 #>>42175565 #>>42175936 #>>42177306 #
4. barefoot ◴[] No.42173031[source]
How about one million kittens or one human?
replies(5): >>42173059 #>>42173272 #>>42173347 #>>42173388 #>>42175917 #
5. dfedbeef ◴[] No.42173041[source]
Waiting for the funny reveal that this is a prank
replies(3): >>42173328 #>>42173705 #>>42173874 #
6. dartos ◴[] No.42173059{3}[source]
Where did the kittens come from?

If they were spawned into existence for this thought experiment, then the human, probably.

But if even one of those kittens were mine, entire cities could be leveled before I let anyone hurt my kitten.

replies(3): >>42173282 #>>42173742 #>>42194099 #
7. himinlomax ◴[] No.42173080[source]
The best way to save even more shrimps would be to campaign for and subsidize whaling. They are shrimp-mass-murdering machines. What's a few whales versus billions of shrimps?
replies(3): >>42173120 #>>42175538 #>>42175779 #
8. VyseofArcadia ◴[] No.42173084[source]
This is an intensely weird read. I kept waiting for the satire to become more obvious. Maybe throw in a reference or two to the Futurama episode "The Problem with Popplers". But by the end I can only conclude that it is sincere.

I guess what strikes me as most odd is that not eating shrimp is never suggested as an alternative. It starts from the premise that, well, we're going to eat shrimp anyway, so the least we could do is give them a painless death first. If you follow this logic to its extremes, you get things like, "well, it's expensive to actually feed these starving children, but for just pennies a day you can make sure they at least die painlessly".

replies(1): >>42173201 #
9. sodality2 ◴[] No.42173107[source]
Have you read the linked paper by Norcross? "Great harms from small benefits grow: how death can be outweighed by headaches" [0].

[0]: https://www.jstor.org/stable/3328486

replies(2): >>42173211 #>>42174422 #
10. xipho ◴[] No.42173120{3}[source]
You didn't read the article or follow the argument; you just jumped in. It's about reducing shrimp suffering, not saving shrimp lives.
replies(2): >>42173234 #>>42176931 #
11. InsideOutSanta ◴[] No.42173149[source]
The article mentions that issue in passing ("I reject the claim that no number of mild bads can add up to be as bad as a single thing that’s very bad, as do many philosophers"), but I don't understand the actual argument behind this assertion.

Personally, I believe that you can't just add up mildly bad things and create a very bad thing. For example, I'd rather get my finger pricked by a needle once a day for the rest of my life than have somebody amputate my legs without anesthesia just once, even though the "cumulative pain" of the former choice might be higher than that of the latter.

Having said that, I also believe that there is sufficient evidence that shrimp suffer greatly when they are killed in the manner described in the article, and that it is worthwhile to prevent that suffering.

replies(1): >>42173260 #
12. dfedbeef ◴[] No.42173164[source]
It is regular absurd.
13. InsideOutSanta ◴[] No.42173201[source]
You have no control over other people's eating habits, but you do have control over your own charitable spending.

If you're considering how to best spend your money, it doesn't matter that not eating shrimp would be an even better solution than preventing pain when they are killed. It only matters what the most effective way of spending your money is.

replies(1): >>42175516 #
14. n4r9 ◴[] No.42173211{3}[source]
No; thanks for bringing it to my attention. The first page is intriguing... I'll see if I can locate a free copy somewhere.
replies(1): >>42173263 #
15. erostrate ◴[] No.42173218[source]
Did the author factor in the impact of this kind of article on the external perception of the rationalist / utilitarian / EA community when weighing the utility of publishing this?

Should you push arguments that seem ridiculously unacceptable to the vast majority of people, thereby reducing the weight of more acceptable arguments you could possibly make?

replies(4): >>42173285 #>>42173338 #>>42173753 #>>42177536 #
16. some_random ◴[] No.42173234{4}[source]
Do shrimp not suffer from being consumed by whales?
replies(1): >>42173265 #
17. aithrowawaycomm ◴[] No.42173244[source]
Yeah this (along with the "billion headaches" inanity) rests on a fallacy: insisting an abstraction can be measured as a quantity when it clearly cannot. This trick is usually done by blindly averaging together some concrete quantities and claiming it represents the abstraction. The illusion is fostered by "local continuity" of these abstractions - if pulling your earlobe causes suffering, pulling harder causes more suffering. And of course the "mathiness" gives an aura of rigor and rationality. But a terrible error in quantitative reasoning occurs when you break locality: going from pulled earlobe to emotional loss, or pulled earlobe to pulled antennae, etc. The very nature of the abstraction - "suffering," "badness," - changes between entities and situations, so the one formula cannot possibly apply.

ETA: see also the McNamara fallacy https://en.wikipedia.org/wiki/McNamara_fallacy

replies(1): >>42177744 #
18. ◴[] No.42173255[source]
19. aithrowawaycomm ◴[] No.42173260{3}[source]
Their point isn't that it's merely "worthwhile," but that donating to Sudanese refugees is a waste of money because 1 starving child = 80 starving shrimp, or whatever their ghoulish and horrific math says.
replies(2): >>42173577 #>>42175592 #
20. sodality2 ◴[] No.42173263{4}[source]
Here's a copy I found: https://philosophysmith.com/wp-content/uploads/2018/07/alist...

It's pretty short, I liked it. Was surprised to find myself agreeing with it at the end of my first read.

replies(1): >>42187128 #
21. eightysixfour ◴[] No.42173265{5}[source]
In comparison to having their eyeballs crushed but left alive, or slowly frozen to death?
replies(1): >>42173862 #
22. saalweachter ◴[] No.42173272{3}[source]
Collectively we kill and eat around a billion rabbits a year, around 8 million in the US. They aren't kittens, but they do have a similar level of fluffy cuteness.

It's not quite "one million to one"; the meat from 1 million rabbits meets the caloric needs of around 2750 people for 1 year.
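
A quick back-of-the-envelope check of that figure; the per-rabbit meat yield and calorie density below are assumptions for illustration, not numbers from the comment:

    # Sanity check of the "around 2750 people for 1 year" figure (Python).
    # Assumed inputs: ~1.2 kg edible meat per rabbit at ~1,700 kcal/kg,
    # and a 2,000 kcal/day human diet. Both are rough assumptions.
    rabbits = 1_000_000
    kcal_per_rabbit = 1.2 * 1_700           # ~2,040 kcal of meat per rabbit
    kcal_per_person_year = 2_000 * 365      # 730,000 kcal per person-year

    print(round(rabbits * kcal_per_rabbit / kcal_per_person_year))  # ~2,795

which lands in the same ballpark as the comment's estimate.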

23. some_random ◴[] No.42173282{4}[source]
This brings up an interesting point: our view of morality is heavily skewed. If you made me choose between something bad happening to my partner or 10 random people, I would save my partner every time, and I expect every normal person in the world to choose the same.
replies(1): >>42173324 #
24. delichon ◴[] No.42173283[source]
By this logic, someone who kills a person and lets them decay in a swamp, such that billions or trillions of microbes benefit, should be hailed as a paragon of charity. I hope this point of view doesn't catch on.
replies(1): >>42173484 #
25. sodality2 ◴[] No.42173285[source]
Should we stop making logically sound but unpalatable arguments?
replies(1): >>42173361 #
26. mistercow ◴[] No.42173304[source]
I don’t really doubt that it’s in principle possible to assign percentage values to suffering intensity, but the 3% value (which the source admits is a “placeholder”) seems completely unhinged for an animal with 0.05% as many neurons as a chicken, and the source’s justification for largely discounting neuron counts seems pretty arbitrary, at least as presented in their FAQ.
replies(3): >>42173750 #>>42173861 #>>42175438 #
27. dartos ◴[] No.42173324{5}[source]
Well humans aren’t perfectly rational.

I wouldn’t think it moral to save my kitten over a random non-evil person, but I’d still do it.

replies(1): >>42175091 #
28. bhelkey ◴[] No.42173325[source]
This two-thousand-word article boils down to: 1) every dollar donated saves ~1,500 shrimp per year from agony, in perpetuity, and 2) saving 32 shrimp from agony is morally equivalent to saving 1 human from agony.

Neither of these points is well supported by the article. Nor are they well supported by the copious links scattered through the blog post.

For example, "they worked with Tesco to get an extra 1.6 billion shrimp stunned before slaughter every year" links to a summary about the charity, NOT to any source for the claimed 1.6 billion shrimp saved.
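
For scale, a minimal sketch composing the two claims exactly as stated (both figures are the article's placeholders, not established facts):

    # Composing the article's two claims at face value (Python).
    shrimp_spared_per_dollar_year = 1_500   # claim 1: shrimp spared per $1 per year
    shrimp_per_human_equivalent = 32        # claim 2: 32 shrimp-agonies ~ 1 human-agony

    per_dollar = shrimp_spared_per_dollar_year / shrimp_per_human_equivalent
    print(round(per_dollar))                # ~47 human-equivalents of agony per $1/yr

which is exactly the kind of extraordinary conclusion the comment is questioning.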

replies(1): >>42173344 #
29. niek_pas ◴[] No.42173328[source]
Why do you think this is a prank?
replies(1): >>42173625 #
30. slothtrop ◴[] No.42173338[source]
That may be part of the intent.
31. sodality2 ◴[] No.42173344[source]
> For example, "they worked with Tesco to get an extra 1.6 billion shrimp stunned before slaughter every year" links to a summary about the charity NOT to any source for 1.6 billion shrimp saved.

It's in the exact webpage linked there. You just didn't scroll down enough.

> Tesco and Sainsbury’s published shrimp welfare commitments, citing collaboration with SWP (among others), and signed 9 further memoranda of understanding with producers, in total committing to stunning a further ~1.6B shrimps per annum.

https://animalcharityevaluators.org/charity-review/shrimp-we...

replies(1): >>42173912 #
32. mihaic ◴[] No.42173347{3}[source]
In that case I actually ask "Who's the human?", and about 80% of the time I'd pick the human.
33. erostrate ◴[] No.42173361{3}[source]
How palatable an argument is determines its actual impact. It's not logical to spend effort making arguments that are so unpalatable that they will just make people ignore you.
replies(1): >>42173415 #
34. tengbretson ◴[] No.42173364[source]
Gotta hand it to the author: no one here is arguing over whether ChatGPT wrote the article.
35. anothername12 ◴[] No.42173385[source]
It’s the trolley problem https://neal.fun/absurd-trolley-problems/
36. HansardExpert ◴[] No.42173388{3}[source]
Still the human.

How about one million humans or one kitten?

Where is the cut-off point for you?

37. vasco ◴[] No.42173414[source]
It seems weird to genetically engineer, force reproduce and grow other species just for us to eat them but worry about the specific aspect of how they die. Their whole existence is for our exploitation. I know it's still a good thing to not cause extra suffering if we can avoid it for cheap, and I support this kinda thing, but it's always weird.
replies(1): >>42175682 #
38. sodality2 ◴[] No.42173415{4}[source]
Deontologically, maybe it's principally better to make an argument you know you can't refute. Maybe even just to try to convince yourself otherwise.

I know the person making this argument isn't necessarily aligned with deontology. Maybe that was your original point.

39. 0xDEAFBEAD ◴[] No.42173441[source]
The way I think about it is that we're already making decisions like this in our own lives. Imagine a teenager who gets a summer job so they can save for a PS5. The teenager is making an implicit moral judgement, with themselves as the only moral patient. They're judging that the negative utility from working the job is lower in magnitude than the positive utility that the PS5 would generate.

If the teenager gets a job offer, but the job only pays minimum wage, they may judge that the disutility for so many hours of work actually exceeds the positive utility from the PS5. There seems to be a capability to estimate the disutility from a single hour of work, and multiply it across all the hours which will be required to save enough.

It would be plausible for the teenager to argue that the disutility from the job exceeds the utility from the PS5, or vice versa. But I doubt many teenagers would tell you "I can't figure out if I want to get a job, because the utilities simply aren't comparable!" Incomparability just doesn't seem to be an issue in practice for people making decisions about their own lives.

Here's another thought experiment. Imagine you get laid off from your job. Times are tough, and your budget is tight. Christmas is coming up. You have two children and a pet. You could get a fancy present for Child A, or a fancy present for Child B, but not both. If you do buy a fancy present, the only way to make room in the budget is to switch to a less tasty food brand for your pet.

This might be a tough decision if the utilities are really close. But if you think your children will mostly ignore their presents in order to play on their phones, and your pet gets incredibly excited every time you feed them the more expensive food brand, I doubt you'll hesitate on the basis of cross-species incomparability.

I would argue that the shrimp situation sits closer to these sorts of everyday "common sense" utility judgments than to an exotic limiting case such as torture vs dust specks. I'm not sure dust specks have any negative utility at all, actually. Maybe they're even positive utility, if they trigger a blink which is infinitesimally pleasant. If I change it from specks to bee stings, it seems more intuitive that there's some astronomically large number of bee stings such that torture would be preferable.

It's also not clear to me what I should do when my intuitions and mathematical common sense come into conflict. As you suggest, maybe if I spent more time really trying to wrap my head around how astronomically large a number can get, my intuition would line up better with math.

Here's a question on the incomparability of excruciating pain. Back to the "moral judgements for oneself" theme... How many people would agree to get branded with a hot branding iron in exchange for a billion dollars? I'll bet at least a few would agree.

replies(2): >>42173720 #>>42174022 #
40. goda90 ◴[] No.42173464[source]
A purely selfish, human-centric argument for this initiative might be that electrical stunning before freezing might improve the taste and texture of the shrimp. I know some animals reportedly have worse tasting meat if their death was stressful.
41. sodality2 ◴[] No.42173484[source]
To draw this parallel situation, you're stipulating a few things: microbes feel pain, X good is as good as X bad is bad, and that actively bringing about a good thing is equivalent to avoiding a harmful thing. I don't think any of these are true, so I disagree.
replies(1): >>42182568 #
42. ajkjk ◴[] No.42173485[source]
This person seems to think they have engaged with the counterarguments against their way of looking at the world, but to my eye they are clueless; they just have no idea how most other people think at all. They've rationalized their morality into some kind of pseudo-quantitative ethical maximization problem and then failed to notice that most people's moralities don't and aren't going to work like that.

Indeed, people will resist being "tricked" into this framework: debating on these terms will feel like having their morals twisted into justifying things they don't believe in. And although they may not have the patience or rhetorical skill to put into words exactly why they resist it, their intuitions won't lead them astray, and they'll react according to their true-but-hard-to-verbalize beliefs (usually by gradually getting frustrated and angry with you).

A person who believes in rationalizing everything will then think that someone who resists this kind of argument is just too dumb, irrational, or stubborn, or actually evil, to see that they are wrong. But it seems to me that the very idea that you can rationalize morality, that you can compute the right thing to do at a personal-ethics level, is itself a moral belief, which those people simply do not agree with, and their resistance is in accordance with that: you'd be trying to convince them to replace their moral beliefs with yours in order to win an argument by tricking them with logic. No wonder they resist! People do not release control over their moral beliefs lightly. Rather I think it's the people who are very insecure in their own beliefs who are susceptible to giving them up to someone who runs rhetorical circles around them.

I've come to think that a lot of 21st century discord (c.f. American political polarization) is due to this basic conflict. People who believe in rationalizing everything think they can't be wrong because the only way to evaluate anything is rationally--a lens through which, of course rationality looks better than anything else. Meanwhile everyone who trusts in their own moral intuitions feels tricked and betrayed and exploited and sold out when it happens. Sure, they can't always find the words to defend themselves. But it's the rationalizers who are in the wrong: pressuring someone into changing their mind is not okay; it's a basic act of disrespect. Getting someone on your side for real means appealing to their moral intuition, not making them doubt theirs until they give up and reluctantly agree with yours. Anyway it's a temporary and false victory: theirs will re-emerge years later, twisted and deformed from years of imprisonment, and often set on vengeance. At that point they may well be "wrong", but there's no convincing them otherwise: their moral goal has been replaced with a singular need to get to make their own decisions instead of being subjugated by yours.

Anyway.

IMO to justify animal welfare utilitarianism to people who don't care about it at all, you need to take one of two stances:

1. We (the animal-empathizers) live in a society with you, and we care a lot about this, but you don't. But we're in community with each other, so we ought to support each other's causes even if they're not personally relevant to us. So how about you support what we care about and we'll support what you care about, so everyone benefits? In this case it's very cheap to help.

2. We all live in a society together which should, by now, have largely solved for our basic needs (except for our basic incompetence at it, which, yeah, we need to keep working on). The basic job of morality is to guarantee the safety of everyone in our community. As we start checking off basic needs at the local scale we naturally start expanding our definition of "community" to more and more beings that we can empathize with: other nations and peoples, the natural world around us, people in the far future who suffer from our carelessness, pets, and then, yes, animals that we use for food. Even though we're still working on the "nearby" hard stuff, like protecting our local ecosystems, we can also start with the low-hanging-fruit on the far-away stuff, including alleviating the needless suffering of shrimp. Long-term we hope to live in harmony with everything on earth in a way that has us all looking out for each other, and this is a small step towards that.

"(suffering per death) * (discount rate for shrimp being 3% of a human) * (dollar to alleviate) = best charity" just doesn't work at all. I notice that the natural human moral intuition (the non-rational version) is necessarily local: it's focused on protecting whatever you regard as your community. So to get someone to extend it to far-away less-sentient creatures, you have to convince the person to change their definition of the "community"--and I think that's what happens naturally when they feel like their local community is safe enough that they can start extending protection at a wider radius.

replies(2): >>42173641 #>>42173664 #
43. sys32768 ◴[] No.42173486[source]
How does the author know that shrimp experience "extreme agony" the way humans experience it?

Trees and bushes and vegetables might experience extreme agony too when dying.

replies(1): >>42173779 #
44. addicted ◴[] No.42173488[source]
Or more simply, don’t kill animals when we don’t need to.
45. burnt-resistor ◴[] No.42173511[source]
Okay, plausible, I guess. The problem boils (no pun intended) down to an anthropocentric one: it's impossible to ask a shrimp how much it hurts. Perhaps it hurts a lot or only a little, or it varies based on other factors. It's not necessarily unknowable, but it's unknowable in human-relatable terms, because (human) intelligence and theory-of-mind frame-taking require a prerequisite of linguistic understanding and compatibility. (Has not almost every conquering civilization deemed every indigenous group it encountered "dumb" or "subhuman" simply for not being able to converse? And I'll take it one further: "intelligence" is a purely qualitative property inferred through performative interaction, judged by strategy signals, complexity of response, or academic fashions... all requiring a shared language. This leaves out all other species, because humans haven't yet evolved the intelligence or tools to communicate with them.)

Also, why not endeavor to replace meat grown by slaughtering animals with other alternatives? The optimization of such would reduce the energy, costs, biothreats, and suffering that eating other living beings creates.

replies(2): >>42173536 #>>42173653 #
46. sixo ◴[] No.42173514[source]
This is one of those arguments that you reach when you go in a certain direction for long enough, and you can divide people into two camps at this point by whether:

* they triumphally declare victory--ethics is solved! We can finally Do The Most Good!

* or, it's so ridiculous that it occurs to them that they're missing something--must have taken a wrong turn somewhere earlier on.

By my tone you can probably tell I take the latter position, roughly because "suffering", or "moral value", is not rightly seen as measurable, calculable, or commensurable, even between humans. It's occasionally a useful view for institutions to hold, but imo not the right one for a human.

replies(1): >>42173667 #
47. RodgerTheGreat ◴[] No.42173519[source]
If you find this line of argument compelling, consider another alternative: engineering an organism which is much smaller and consumes far fewer resources than shrimp but which exists in a neurologically stable state of perpetual bliss. The survival and replication of this species of biological prayer-wheels would rapidly become a far stronger moral imperative (within the logic of the article) than any consideration for shrimp, or indeed humans.
replies(4): >>42173605 #>>42173693 #>>42173812 #>>42179008 #
48. IncreasePosts ◴[] No.42173536[source]
Yes, if individual suffering is a metric, then we should be shutting down shrimp farms and only eating large animals that provide a whole lot of calories per individual - like cows or elephants.
replies(1): >>42175468 #
49. 0xDEAFBEAD ◴[] No.42173577{4}[source]
>donating to Sudanese refugees is a waste of money

Donating to Sudanese refugees sounds like a great use of money. Certainly not a waste.

Suboptimal isn't the same as wasteful. Suppose you sit down to eat a great meal at a restaurant. As you walk out, you realize that you could have gotten an even better meal for the same price at the restaurant next door. That doesn't mean you just wasted your money.

>ghoulish and horrific math

It's not the math that's horrific, it's the world we live in that's horrific. The math just helps us alleviate the horror better.

Researcher: "Here's my study which shows that a new medication reduces the incidence of incredibly painful kidney stones by 50%." Journal editorial board: "We refuse to publish this ghoulish and horrific math."

50. sodality2 ◴[] No.42173605[source]
I don't think this is the same at all. Creation of good is not necessarily as good as avoidance of harm is bad.
replies(1): >>42174168 #
51. dfedbeef ◴[] No.42173625{3}[source]
Because it's funny enough and seems like an absolute S-tier performance artist critique of the effective altruism movement. Like who gives a shit about whether shrimp freeze to death or are electrocuted and then freeze to death.

But this blog post uses a little BS math (.3 seconds IS shorter than 20 minutes! By an order of magnitude! Take my money!)

and some hand wavey citations (Did you know shrimp MIGHT be conscious based on a very loose definition of consciousness? Now you too are very smart! You can talk about this with your sort-of friends (coworkers) from the job where you spend 80 hours a week now!)

to convince some people that this is indeed an important and worthy thing. Because people who can be talked into this don't really interact with the real world, for the most part. So they don't know that lots of actual people need actual help that doesn't involve them dying anyway and being eaten en masse afterwards.

replies(3): >>42173632 #>>42174131 #>>42182222 #
52. dfedbeef ◴[] No.42173632{4}[source]
and it stimulates interesting conversations like this. Watch this comment section it's going to be great
53. Melonotromo ◴[] No.42173634[source]
Shrimps don't suffer. No one 'suffers'.

Suffering is an expression/concept we humans have because we gave a certain state that name. Suffering is something an organism presents if that organism can't survive or struggles with survival.

Now, I'm a human, and my empathy is a lot stronger for humans, a lot, than for shrimps.

Btw, I do believe that if we really cared and made sure fellow humans did not need to suffer (they need to suffer because of capitalism), a lot of other suffering would stop too.

We would be able to actually think about shrimps and other animals.

54. KevinMS ◴[] No.42173640[source]
> They were going to be thrown onto ice where slowly, agonizingly, over the course of 20 minutes, they’d suffocate and freeze to death at the same time, a bit like suffocating in a suitcase in the middle of Antarctica. Imagine them struggling, gasping, without enough air, fighting for their lives, but it’s no use.

How do we know this isn't just fiction?

55. sodality2 ◴[] No.42173641[source]
> They've rationalized their morality into some kind of pseudo-quantitative ethical maximization problem and then failed to notice that most people's moralities don't and aren't going to work like that.

To me, the point of this argument (along with similar ones) is to expose these deeper asymmetries that exist in most people's moral systems - to make people question their moral beliefs instead of accepting their instinct. Not to say "You're all wrong, terrible people for not donating your money to this shrimp charity which I have calculated to be a moral imperative".

replies(2): >>42174251 #>>42176428 #
56. 0xDEAFBEAD ◴[] No.42173653[source]
>why not endeavor to replace meat grown by slaughtering animals with other alternatives?

Utilitarians tend to be very interested in this, too. I've been giving to this group: https://gfi.org/

57. dfedbeef ◴[] No.42173664[source]
This is a good comment.
58. 0xDEAFBEAD ◴[] No.42173667[source]
You can read and respond to my reply here if you like: https://news.ycombinator.com/item?id=42173441
59. 0xDEAFBEAD ◴[] No.42173693[source]
If you start that charity, I think there's a decent chance the substack author will write a blog post about it. Seems like a great idea to me.
60. qwertygnu ◴[] No.42173696[source]
Responses to animal welfare articles are sad. There are mountains of evidence[0] that many animals experience emotions (including suffering) in much the same way that we do. It's tough seeing people say, without hesitation, they'd kill millions of animals over a single human.

> Shrimp are a test of our empathy. Shrimp don’t look normal, caring about them isn’t popular, but basic ethical principles entail that they matter.

I think we'll be looking back in the not-so-far future with disgust about how we treated animals.

[0] RTFA

replies(2): >>42175725 #>>42187791 #
61. dfedbeef ◴[] No.42173705[source]
I guess I've spent my whole life waiting for the funny reveal that this whole thing is a funny prank. I like when things are funny.
62. hansvm ◴[] No.42173720{3}[source]
> How many people would agree to get branded with a hot branding iron in exchange for a billion dollars?

Temporary pain without any meaningful lasting injuries? I do worse long-term damage than that at my actual job, between neck and wrist strain and not being sufficiently active (on a good day I get 1-2 hrs, but that doesn't leave much time for other things), and I'm definitely not getting paid a billion for it.

replies(1): >>42174000 #
63. hansvm ◴[] No.42173742{4}[source]
Also, where did the human come from? Are they already on their deathbed, prolonged in this thought experiment for only a few fleeting moments? Were they themselves a murderer?
replies(1): >>42175197 #
64. adrian_b ◴[] No.42173750{3}[source]
The ratio of the neuron numbers may be somewhat meaningful when comparing vertebrates with vertebrates and arthropods with arthropods, but it is almost completely meaningless when comparing vertebrates with arthropods.

The reason is that the structure of the nervous systems of arthropods is quite different from that of the vertebrates. Comparing them is like comparing analog circuits and digital circuits that implement the same function, e.g. a number multiplier. The analog circuit may have a dozen transistors and the digital circuit may have hundreds of transistors, but they do the same thing (with different performance characteristics).

The analogy with comparing analog and digital circuits is quite appropriate, because parts of the nervous systems that have the same function, e.g. controlling a leg muscle, may have hundreds or thousands of neurons in a vertebrate, which function in an all-or-nothing manner, while in an arthropod the equivalent part may have only a few neurons that function in a much more complex manner in order to achieve fine control of the leg movement.

So typically one arthropod neuron is equivalent to many vertebrate neurons, e.g. hundreds or even thousands.

This does not mean that the nervous system of arthropods is better than that of vertebrates. They are optimized for different criteria. A vertebrate cannot become as small as the smallest arthropods, nor can an arthropod become as big as the bigger vertebrates: the systems that integrate the organs of a body into a single living organism, i.e. the nervous system and the circulatory and respiratory systems, are optimized for small size in arthropods and for large size in vertebrates.

replies(2): >>42173967 #>>42174406 #
65. 0xDEAFBEAD ◴[] No.42173753[source]
The article seems to have been well-received: https://benthams.substack.com/p/you-all-helped-hundreds-of-m...

I think this is a tough call in general. Current morality would be considered "ridiculously unacceptable" by 1800s standards, but I see it as a good thing that we've moved away from 1800s morality. I'm glad people were willing to challenge the 1800s status quo. At the same time, my sense is that the environmentalists who are ruining art in museums are probably challenging the status quo in a way that's unproductive.

To some degree, I suspect the rationalist / EA crowd has decided that weird contrarians tend to be the people who have the greatest impact in the long run, so it's OK to filter for those people.

66. sodality2 ◴[] No.42173779[source]
> How does the author know that shrimp experience "extreme agony" the way humans experience it?

https://rethinkpriorities.org/research-area/welfare-range-es...

67. VyseofArcadia ◴[] No.42173812[source]
Prayer wheels is an excellent example of where this kind of logic leads.

Kudos to you for making the connection.

68. sodality2 ◴[] No.42173861{3}[source]
> “Shouldn’t you give neuron counts more weight in your estimates?”

Rethink Priorities [0] has a FAQ entry on this [1].

[0]: https://rethinkpriorities.org/research-area/welfare-range-es...

[1]: https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/...

replies(1): >>42174342 #
69. some_random ◴[] No.42173862{6}[source]
The trade off here is eliminating mutilation, going from Pain + Death to just Death, or in the case of the whales, going from Death to normal, beautiful shrimp Life. I don't really have any interest in doing shrimp quality of life math this morning but there's clearly something there.
70. dfedbeef ◴[] No.42173874[source]
I guess it is not a prank, maybe just a perfect encapsulation of life in tech in the 2020's.
71. bhelkey ◴[] No.42173912{3}[source]
The purpose of a citation is to provide further evidence supporting the claim. This instead links to a ~thousand-word article, a single sentence of which is relevant. Rather than supporting the claim, it restates it.
replies(1): >>42174249 #
72. leephillips ◴[] No.42173956[source]
Shrimp do not have experience. There is no place within the shrimp, no anatomical structure, where experience can reside. The article, and many of the articles it links to, confuse the existence of pain receptors, complexity of behavior, memory, aversion, intelligence, learning capacity, and other measures for experience.

Since they don’t have experience, they can’t suffer, in the morally relevant sense for this argument.

replies(1): >>42182200 #
73. 0xDEAFBEAD ◴[] No.42173967{4}[source]
Interesting.

I'm fairly puzzled by sensation/qualia. The idea that there's some chemical reaction in my brain which produces sensation as a side effect is very weird. In principle it seems like you ought to be able to pare things down in order to produce a "minimal chemical reaction" for suffering, and do "suffering chemistry" in a beaker (if you were feeling unethical). That's really trippy.

People often talk about suffering in conjunction with consciousness, but in my mind information processing and suffering are just different phenomena:

* Children aren't as good at information processing, but they are even more capable of suffering.

* I wouldn't like to be kicked if I was sleeping, or blackout drunk, even if I was incapable of information processing at the time, and had no memory of the event.

So intuitively it seems like more neurons = more "suffering chemistry" = greater moral weight. However, I imagine that perhaps the amount of "suffering chemistry" required to motivate an organism is actually fairly constant regardless of its size. Same way a gigantic cargo ship and a small children's toy could in principle be controlled by the same tiny microchip. That could explain the moral weight result.

Interested to hear any thoughts.

replies(2): >>42174304 #>>42183853 #
74. 0xDEAFBEAD ◴[] No.42174000{4}[source]
Sorry to hear about your neck and wrist. I like this site:

https://www.painscience.com/

This article was especially helpful:

https://www.painscience.com/tutorials/trigger-points.php

I suspect the damage you're concerned about is reversible, if you're sufficiently persistent with research and experimentation. That's been my experience with chronic pain.

75. sixo ◴[] No.42174022{3}[source]
> The teenager is making an implicit moral judgement, with themselves as the only moral patient.

No they're not! You have made a claim of the form "these things are the same thing"—but it only seems that way if you can't think of a single plausible alternative. Here's one:

* Humans are motivated by two competing drives. The first drive we can call "fear", which aims to avoid suffering, either personally or in people you care about or identify with. This derives from our natural empathic instinct, but it can be extended by a socially constructed group identity. So, the shrimp argument is saying "your avoiding-suffering instinct can and should be applied to crustaceans too", which is contrary to how most people feel. Fear also includes "fear of ostracization", this being equivalent to death in a prehistoric context.

* The second drive is "thriving" or "growing" or "becoming yourself", and leads you to glimpse the person you could be, things you could do, identities you could hold, etc, and to strive to transform yourself into those things. The teenager ultimately wants the PS5 because they've identified with it in some way—they see it as a way to express themself. Their "utilitarian" actions in this context are instrumental, not moral—towards the attainment of what-they-want. I think, in this simple model, I'd also broaden this drive to include "eating meat"—you don't do this for the animal or to abate suffering, you do it because you want to: your body's hungry, you desire the pleasure of satiation, and you act to realize that desire.

* The two drives are not the same, and in the case of eating meat are directly opposed. (You could perhaps devise a way to see either as, ultimately, an expression of the other.) Human nature, then, basically undertakes the "thriving" drive except when there's a threat of suffering, in which case we switch gears to "fear" until it's handled.

* Much utilitarian discourse seems to exist in a universe where the apparently-selfish "thriving" drive doesn't exist, or has been moralized out of existence—because it doesn't look good on paper. But, however it sounds, it in fact exists, and you will find that almost all living humans will defend their right to express themselves, sometimes to the death. This is at some level the essence of life, and the rejection of it leads many people to view EA-type utilitarianism as antithetical to life itself.

* One reason for this is that "fear-mode thinking" is cognitively expensive, and while people will maintain it for a while, they will eventually balk against it, no matter how reasonable it seems (probably this explains the last decade of American politics).

replies(1): >>42174453 #
76. telharmonium ◴[] No.42174107[source]
I'm reminded of the novel "Venomous Lumpsucker" by Ned Beauman, a deeply weird satire about the perverse incentives and behaviors engendered by the union of industrial consumption, market-based conservation, and the abstract calculus of ethics at scale.

In particular, one portion features an autonomous bioreactor which produces enormous clouds of "yayflies"; mayflies whose nervous systems have been engineered to experience constant, maximal pleasure. The system's designer asserts that, given the sheer volume of yayflies produced, they have done more than anyone in history to increase the absolute quantity of happiness in the universe.

77. dfedbeef ◴[] No.42174131{4}[source]
It's also just such a perfect half-measure. You're not asking people to not eat these little guys. They're not even confirmed to be fully conscious. This is a speculative fix for a theoretical problem. Plus like, there's some company making shrimp zappers. So by donating you're also kind of paying two people to kill the shrimp?
replies(2): >>42174266 #>>42182208 #
78. VyseofArcadia ◴[] No.42174168{3}[source]
It depends on your goals, right?

If your goal is "maximize total happiness", then engineering blisshrimp is obviously the winning play. If your goal is "minimize total suffering", then the play is to engineer something that 1. experiences no suffering, 2. is delicious, and 3. outcompetes existing shrimp so we don't have to worry about their suffering anymore.

Ideally we'd engineer something that is in a state of perpetual bliss and wants to be eaten, not unlike the cows in Restaurant at the End of the Universe.

replies(1): >>42174215 #
79. sodality2 ◴[] No.42174215{4}[source]
> If your goal is "minimize total suffering", than the play is to engineer something that 1. experiences no suffering, 2. is delicious, and 3. outcompetes existing shrimp so we don't have to worry about their suffering anymore.

Eh, only if you're minimizing suffering per living being. Not total suffering. Having more happy creatures doesn't cancel out the sad ones. But I see what you mean.

replies(1): >>42176615 #
80. sodality2 ◴[] No.42174249{4}[source]
It's a primary source from the organization doing the partnership with Tesco. Why would they cite anything? Who would they cite?

https://www.globenewswire.com/en/news-release/2024/08/17/293...

replies(1): >>42175645 #
81. sixo ◴[] No.42174251{3}[source]
> to make people question their moral beliefs instead of accepting their instinct

Yes, every genius 20-year-old wants to break down other people's moral beliefs, because it's the most validating feeling in the world to change someone's mind. From the other side, this looks like, quoting OP:

> you'd be trying to convince them to replace their moral beliefs with yours in order to win an argument by tricking them with logic.

And feels like:

> pressuring someone into changing their mind is not okay; it's a basic act of disrespect.

And doesn't work, instead:

> Anyway it's a temporary and false victory: theirs will re-emerge years later, twisted and deformed from years of imprisonment, and often set on vengeance.

replies(1): >>42174307 #
82. sodality2 ◴[] No.42174266{5}[source]
> They're not even confirmed to be fully conscious

Please read the cited Rethink Priorities research: https://rethinkpriorities.org/research-area/welfare-range-es...

Notably the FAQ and responses.

replies(1): >>42175667 #
83. adrian_b ◴[] No.42174304{5}[source]
While in animals with complex nervous systems like humans and also many mammals and birds there may be psychological reasons for suffering, like the absence or death of someone beloved, suffering from physical pain is present in most, if not all animals.

The sensation of pain is provided by dedicated sensory neurons, like other sensory neurons are specialized for sensing light, sound, smell, taste, temperature, tactile pressure, gravity, force in the muscles/tendons, electric currents, magnetic fields, radiant heat a.k.a. infrared light and so on (some of these sensors exist only in some non-human animals).

The pain-sensing neurons, a.k.a. nociceptors, can be identified anatomically in some of the better studied animals, including humans, but it is likely that they also exist in most other animals, with the possible exception of some parasitic or sedentary animals, where all the sense organs are strongly reduced.

So all animals with such sensory neurons that cause pain are certain to suffer.

The nociceptors are activated by various stimuli, e.g. either by otherwise normal stimuli that exceed some pain threshold, e.g. too intense light or noise, or by substances generated by damaged cells from their neighborhood.

replies(1): >>42174507 #
84. sodality2 ◴[] No.42174307{4}[source]
> Yes every genius 20 year old wants to break down other peoples' moral beliefs, because it's the most validating feeling in the world to change someone's mind

I may be putting my hands up in surrender, as a 20 year old (decidedly not genius though). But I'm instead defending this belief, not trying to convince others. Also, I don't think it's the worst thing in the world to have people question their preconceived moral notions. I've taken ethics classes in college and I personally loved having them challenged.

replies(1): >>42174391 #
85. mistercow ◴[] No.42174342{4}[source]
Which I referenced and called arbitrary.
replies(1): >>42174793 #
86. sixo ◴[] No.42174391{5}[source]
ha, got one. Yes it is pretty fun if you're in the right mental state for it, I've just seen so many EA-type rationalists out on the internet proliferating this worldview, and often pushing it on people who a) don't enjoy it, b) are threatened by it, c) are underequipped to defend themselves rationally against it, that I find myself jumping to defend against it. EA-type utilitarianism, I think, proliferates widely on the internet specifically by "survival bias"—it is easily-argued in text; it looks good on paper. Whereas the "innate" morality of most humans is more based on ground-truth emotional reality; see my other comment for the character of that https://news.ycombinator.com/item?id=42174022
replies(1): >>42174883 #
87. mistercow ◴[] No.42174406{4}[source]
I totally agree that you can’t just do a 1:1 comparison. My point is not to say that a shrimp suffers .05% as much as a chicken, but to use a chicken as a point of reference to illustrate just how simple the nervous system of a shrimp is.

We’re talking about a scale here where we have to question whether the notion of suffering is applicable at all before we try to put it on any kind of spectrum.

88. probably_wrong ◴[] No.42174422{3}[source]
I read the paper and I believe the same objection applies: the reasoning only works if you assume "pain" to be a constant number subject to the additive property.

If we have to use math, I'd say: the headaches are temporal - the effect of all the good you've done today is effectively gone tomorrow one way or another. But killing a person means, to quote "Unforgiven", that "you take away everything he's got and everything he's ever gonna have". So the calculation needs at least a temporal discount factor.

I also believe that the examples are too contrived to be actually useful. Comparing a room with one person to another with five million is like comparing the fine for a person traveling at twice the speed limit with that of someone traveling at 10% the speed of light - the results of such an analysis are entertaining to think about, but not actually useful.

replies(1): >>42175614 #
89. 0xDEAFBEAD ◴[] No.42174453{4}[source]
I find myself motivated to alleviate suffering in other beings. It feels good that a quarter million shrimp are better off because I donated a few hundred bucks. It makes me feel like my existence on this planet is worthwhile. I did my good deed for the day.

There was a time when my good deeds were more motivated by fear. I found that fear wasn't a good motivator. This has become the consensus view in the EA community. EAs generally think it's important to avoid burnout. After reworking my motivations, doing good now feels like a way to thrive, not a way to avoid fear. The part of me which was afraid feels good about this development, because my new motivational structure is more sustainable.

If you're not motivated to alleviate suffering in other beings, it is what it is. I'm not going to insult you or anything. However, if I notice you insulting others over moral trifles, I might privately think to myself that you are being hyperbolic. When I put my EA-type utilitarian hat on, almost all internet fighting seems to lack perspective.

I support your ability to express yourself. (I'm a little skeptical that's the main driver of the typical PS5 purchase, but that's beside the point.) I want you to thrive! I consume meat, so I can't condemn you for consuming meat. I did try going vegan for a bit, but a vegan diet was causing fatigue. I now make a mild effort to eat a low-suffering diet. I also donate to https://gfi.org/ to support research into alternative meats. (I think it's plausible that the utilitarian impact of my diet+donations is net positive, since the invention of viable alternative meats could have such a large impact.) And whenever I get the chance, I rant about the state of vegan nutrition online, in the hope that vegans will notice my rants and improve things.

(Note that I'm not a member of the EA community, but I agree with aspects of the philosophy. My issues with the community can go in another thread.)

(I appreciate you writing this reply. Specifically, I find myself wondering if utilitarian advocacy would be more effective if what I just wrote, about the value of rejecting fear-style motivation, was made explicit from the beginning. It could make utilitarianism both more appealing and more sustainable.)

replies(1): >>42180156 #
90. 0xDEAFBEAD ◴[] No.42174507{6}[source]
Interesting. So how about counting nociceptors for moral weight?

What specifically makes it so the pain neurons cause pain and the pleasure neurons cause pleasure? Supposing I invented a sort of hybrid neuron, with some features of a pain neuron and some features of a pleasure neuron -- is there any way a neuroscientist could look at its structure+chemistry and predict whether it will produce pleasure vs pain?

replies(1): >>42176204 #
91. sodality2 ◴[] No.42174793{5}[source]
Your claim that it's arbitrary doesn't really have much weight without further reasoning.
replies(1): >>42175504 #
92. sodality2 ◴[] No.42174883{6}[source]
I see, and I wholly agree. I'm looking at this from essentially the academic perspective (aka, when I was required to at least question my innate morality). When I saw this blog post, I looked at it in the same way. If you read it as "this charity is more useful than every other charity, we should stop offering soup kitchens, and redirect the funding to the SWP", then I disagree with that interpretation. I don't need or want to rationalize that decision to an EA. But it is a fun thought experiment to discuss.
93. Iulioh ◴[] No.42175091{6}[source]
It is a rational choice tho.

It wouldn't just hurt your partner, it would hurt you.

We know that following an "objective morality" the 10 people would be the better choice, but it would (indirectly) hurt you.

replies(1): >>42175202 #
94. dartos ◴[] No.42175197{5}[source]
None of that matters if my kitten is in danger!
95. dartos ◴[] No.42175202{7}[source]
You’re right. Maybe rational was the wrong word.

Humans aren’t perfectly objective.

96. AlexandrB ◴[] No.42175419[source]
Oh, boy. Site links to some "Effective Altruism" math on the topic[1]. This both reinforces my existing (negative) opinion of EA and makes me question the validity of this whole thing.

[1] https://forum.effectivealtruism.org/posts/EbQysXxofbSqkbAiT/...

replies(1): >>42182194 #
97. ◴[] No.42175436[source]
98. NoMoreNicksLeft ◴[] No.42175438{3}[source]
> I don’t really doubt that it’s in principle possible to assign percentage values to suffering intensity, but the 3% value (which the source admits is a “placeholder”) seems completely unhinged for an animal with 0.05% as many neurons as a chicken,

There is a simple explanation for the confusion that this causes you and the other people in this thread: suffering's not real. It's a dumb gobbledygook term that in the most generous interpretation refers to a completely subjective experience that is not empirical or measurable.

The author uses the word "imagine" three times in the first two paragraphs for a reason. Then he follows up with a fake picture of anthropomorphic shrimp. This is some sort of con game. And you're all falling for it. He's not scamming money out of you, instead he wants to convert you to his religious-dietary-code-that-is-trying-to-become-a-religion.

Shrimp are food. They have zero moral weight.

replies(2): >>42175771 #>>42177293 #
99. ◴[] No.42175468{3}[source]
100. mistercow ◴[] No.42175504{6}[source]
The problem is that the reasoning they give is so vague that there isn’t really anything to argue against. At best, they convincingly argue, in an extremely non-information-dense way, that neuron count isn’t everything, which is obviously true. They do not manage to argue convincingly that a 100k neuron system is something that we can even apply the word “suffering” to meaningfully.
replies(1): >>42178970 #
101. AlexandrB ◴[] No.42175516{3}[source]
If we're talking ethical giving, I'd rather give that money to a panhandler where there's a chance it relieves even a little bit of human suffering.
replies(1): >>42176121 #
102. 0xDEAFBEAD ◴[] No.42175538{3}[source]
I'm almost certain this organization is focused on farmed shrimp. https://forum.effectivealtruism.org/posts/z79ycP5jCDks4LPxA/...
103. BenthamsBulldog ◴[] No.42175565[source]
Seems possible in principle. Experiences can cause one to feel more or less pain--what's wrong with quantifying it? Sure it will be a bit handwavy and vague, but the alternative of doing no comparisons and just going based on vibes is worse https://www.goodthoughts.blog/p/refusing-to-quantify-is-refu.... But as I argue, given high uncertainty, you don't need any fine grained estimates to think giving to shrimp welfare is valuable. Like, if there was a dollar in front of you and you could use it to save 16,000 shrimp, seems like that's a good use of it.
replies(1): >>42175658 #
104. BenthamsBulldog ◴[] No.42175592{4}[source]
It's not a waste as another commenter noted, just probably not the best use of money.

I agree this is unintuitive, but I submit that's because of speciesism. What about shrimp makes it so that tens of millions of them painfully dying is less bad than a single human death? It doesn't seem like the fact that they aren't smart makes their extreme agony less bad (the badness of a headache doesn't depend on how smart you are).

replies(1): >>42176515 #
105. l1n ◴[] No.42175610[source]
I made these, proceeds go to the SWF folks https://www.etsy.com/listing/1371574690/shrimp-want-me-unali...
106. BenthamsBulldog ◴[] No.42175614{4}[source]
No, that isn't true. We can consider some metric like being at some temperature for an hour. Start with some truly torturous pain like being at 500 degrees for an hour (you'd die quickly, ofc). One person being at 500 degrees is less bad than 10 at 499 degrees which is less bad than 100 at 498 degrees...which is less bad than some number at 85 degrees (not torture, just a bit unpleasant).
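
To make the implied progression explicit (the tenfold multiplier per degree is from the example above; the closed form is only an illustration of where the chain leads, and whether such chains are transitive is exactly what is disputed upthread):

    n(T) = 10^{500 - T} \quad\Rightarrow\quad n(85) = 10^{415}

i.e. by the time the chain bottoms out at "a bit unpleasant," it is trading one person at 500 degrees against roughly 10^415 people at 85 degrees.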
replies(1): >>42176288 #
107. BenthamsBulldog ◴[] No.42175641[source]
Thanks for the kind words! I agree lots of people would value a human more than any number of shrimp. Now, in the article, I'm talking about which is worse--extreme suffering for one human or extreme suffering for millions of shrimp. So then the question is: can the common sense verdict be defended? What about shrimp is it that makes it so that their pain is of negligible importance compared to humans? Sure they aren't smart, but being dumb doesn't seem to make your pain less bad (hurting babies and mentally disabled people is still very very bad).
replies(1): >>42175842 #
108. RandallBrown ◴[] No.42175644[source]
I wonder if these donations would be better spent on lobbying for shrimp-stunning regulations rather than just buying shrimp stunners for the shrimp farms.
109. bhelkey ◴[] No.42175645{5}[source]
> It's a primary source

It's not a primary source. It's a one sentence summary of a secondary source. This[1] is the primary source of the Tesco commitment.

[1] https://www.tescoplc.com/sustainability/documents/policies/t...

replies(1): >>42176807 #
110. kaashif ◴[] No.42175658{3}[source]
> Like, if there was a dollar in front of you and you could use it to save 16,000 shrimp, seems like that's a good use of it.

Uhh, that's totally unintuitive and surely almost all people would disagree, right?

If not in words, people disagree in actions. Even within effective altruism there are a lot of people only giving to human centred causes.

111. AlexandrB ◴[] No.42175667{6}[source]
I think Dennis Prager is a hack, but this quote looms larger in my mind as I get older.

> The foolishness of that comment is so deep, I can only ascribe it to higher education. You have to have gone to college to say something that stupid.

The entire effort to quantify morality rests on the shakiest of foundations but makes confident claims about its own validity based on layers and layers of mathematical obfuscation and abstraction.

112. BenthamsBulldog ◴[] No.42175682[source]
I'm also against exploiting them.
113. theonething ◴[] No.42175725[source]
I find it not only sad, but horrifying that there are people that would actually consider sacrificing a human over animals.
114. mistercow ◴[] No.42175771{4}[source]
Denying the existence of something that you and everyone else have experienced is certainly an approach.

Look, I’m not going to defend the author here. The linked report reads to me like the output of a group of people who have become so insulated in their thinking on this subject that they’ve totally lost perspective. They give an 11% prior probability of earthworm sentience based on proxies like “avoiding noxious stimuli”, which is… really something.

But I’m not so confused by a bad set of arguments that I think suffering doesn’t exist.

replies(1): >>42176760 #
115. DangitBobby ◴[] No.42175779{3}[source]
You see this type of argument used against animal welfare all the time. At the end of the day, I dismiss them all as "we can't be perfect so we might as well do nothing".

As the article suggests, imagine you must live out the lifetimes of 1 million factory-farmed shrimp. Would you rather people quibble over whether we should hunt whales to extinction and ultimately do nothing (never actually hunting whales to extinction to save you, because they don't actually care about you), or would you rather they try to reduce your suffering across those million deaths as much as possible?

116. theonething ◴[] No.42175842{3}[source]
For babies and mentally disabled people, we absolutely know beyond any doubt that they are capable of feeling pain--intense, blood-curdling pain.

I don't think we can say the same of shrimp.

That's why humane killing of cattle (with piston guns to the head) is widely practiced, but nothing of the sort exists for crabs, oysters, etc. We know for sure that cattle feel pain, so we do something about it.

replies(1): >>42178994 #
117. JumpCrisscross ◴[] No.42175917{3}[source]
One cat versus many humans. My spending on my cat makes the answer clear.
118. jjcm ◴[] No.42175936[source]
No, the proposition doesn't make sense. The 3% number comes from this: https://rethinkpriorities.org/research-area/welfare-range-es...

The page gives 3% to shrimp because their lifespan is 3% that of humans. It's a terrible basis for the estimate. By the same logic, giant tortoises would be less ethical to kill than humans, and the heavens alone could judge you for the war crimes you'd be committing by killing a Turritopsis dohrnii.

Number of neurons is the least-bad objective measurement in my eyes. Arthropods famously have very few neurons, <100k compared to 86b in humans. That's roughly a 1:1,000,000 neuron ratio, which feels like a more appropriate basis for suffering than a lifespan-based one, though both are terrible.
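(Spelling out the arithmetic behind that ratio:

    \frac{86 \times 10^{9}}{10^{5}} = 8.6 \times 10^{5} \approx 10^{6}

i.e. closer to 1:860,000, which rounds to the 1:1,000,000 figure above.)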

replies(2): >>42176353 #>>42178958 #
119. mrguyorama ◴[] No.42175979[source]
Pulling completely unsubstantiated numbers out of your ass is not an argument. No, calling it "an estimate" does not actually make a number you've pulled out of your ass an actual estimate. No, composing a bunch of """estimates""" doesn't make an argument, and it doesn't matter what kind of "error ranges" you give your made up numbers, the error range of the composed value is basically infinite.

>The way it works is simple and common sense

Claiming "common sense" in any argument is red flag number 1 that you don't actually have a self supporting argument. Common sense doesn't actually exist, and anyone leaning on it is just trying to compel you through embarrassment to support their cause without argument. There's a reason proving 1+1=2 takes a hundred pages.

Randomly inserting numbers that "seem" right so that you can pretend to be a rigorous field is cargo cultism and pseudoscience. Numbers without data and justification is not rigor.

replies(1): >>42177908 #
120. DangitBobby ◴[] No.42176121{4}[source]
TFA addresses this. Many humans believe that no amount of animal suffering is as bad as any amount of human suffering, which is just a failure of humans to empathize. Human suffering is not all that matters, and people who can't be convinced otherwise probably aren't the target audience.
121. adrian_b ◴[] No.42176204{7}[source]
Even if this is not well understood, it is likely that any differences between the pain neurons and any other sensory neurons are not essential.

It is likely that it only matters where they are connected in the sensory paths that carry the information about sensations towards the central nervous system. Probably any signal coming into the central nervous system on those paths dedicated for pain is interpreted as pain, like a signal coming through the optical nerves would be interpreted as light, even when it would be caused by an impact on the head.

https://en.wikipedia.org/wiki/Nociception

122. n4r9 ◴[] No.42176288{5}[source]
I think OP's objection is that - even granting that a "badness value" can be assigned to headaches and that 3 people with headaches is worse than 2 - there's no clear reason to suppose that 3 is exactly half again as bad as 2. It may be that the function mapping headaches to badness is logarithmic, or even that it asymptotes towards some limit. In mathematical terms it can be both monotonic and bounded.

Thus, when comparing headaches to a man being tortured, there's no clear reason to suppose that there is a number of headaches that is worse than the torture.
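To make that concrete with a toy (purely hypothetical) badness function: take

    f(n) = B_{\max}\,(1 - e^{-n/k}), \qquad B_{\max}, k > 0

for the badness of n headaches. This f is strictly increasing, so more headaches is always worse, yet it is bounded above by B_{\max}; if the badness of the torture exceeds B_{\max}, then no finite number of headaches is ever worse than the torture.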

replies(2): >>42177503 #>>42178980 #
123. aziaziazi ◴[] No.42176353{3}[source]
Not only lifespan. From the link you quote:

> Capacity for welfare = welfare range × lifespan. An individual’s welfare range is the difference between the best and worst welfare states the individual can realize.

> we rely on indirect measures even in humans: behavior, physiological changes, and verbal reports. We can observe behavior and physiological changes in nonhumans, but most of them aren’t verbal. So, we have to rely on other indirect proxies, piecing together an understanding from animals’ cognitive and affective traits or capabilities.

First time I've seen this "welfare range" notion, and it seems quite clever to me.

Also, the original article says 3.1% is the median while the mean is 19%. I guess that may be caused by individuals having different experiences from each other.

replies(1): >>42178954 #
124. ajkjk ◴[] No.42176428{3}[source]
IMO: the idea that "this kind of argument exposes deeper asymmetries..." is itself fallacious for the same reason: it presupposes that a person's morality answers to logic.

Were morality a logical system, then yes, finding apparent contradictions would seem to invalidate it. But somehow that's backwards. At some level moral intuitions can't be wrong: they're moral intuitions, not logic. They obey different rules; they operate at the level of emotion, safety, and power. A person basically cannot be convinced with logic to no longer care about the safety of someone/something that they care about the safety of. Even if they submit to an argument of that form, they're doing it because they're conceding power to the arguer, not because they've changed their mind (although they may actually say that they changed their opinion as part of their concession).

This isn't cut-and-dry; I think I have seen people genuinely change their moral stances on something from a logical argument. But I suspect that it's incredibly rare, and when it happens it feels genuinely surprising and bizarre. Most of the time when it seems like it's happening, there's actually something else going on. A common one is a person changing their professed moral stance because they realize they win some social cachet for doing so. But that's a switch at the level of power, not morality.

Anyway it's easy to claim to hold a moral stance when it takes very little investment to do so. To identify a person's actual moral opinions you have to see how they act when pressure is put on them (for instance, do they resist someone trying to change their mind on an issue like the one in the OP?). People are incredibly good at extrapolating from a moral claim to its moral implications that affect them (if you claim that we should prioritize saving the lives of shrimp, what else does that argument justify? And what things that I care about does that argument then invalidate? Can I still justify spending money on the things I care about in a world where I'm supposed to spend it on saving animals?), and they will treat an argument as a threat if it seems to imply things that would upset their personal morality.

The sorts of arguments that do regularly change a person's opinion on the level of moral intuitions are of the form:

* information that you didn't notice how you were hurting/failing to help someone

* or, information that you thought you were helping or avoiding hurting someone, but you were wrong.

* corrective actions like shame from someone they respect or depend on ("you hurt this person and you're wrong to not care")

* other one-on-one emotional actions, like a person genuinely apologizing, or acting selfless towards you, or asserting a boundary

(Granted, this stance seems to invalidate the entire subject of ethics. And it kinda does: what I'm describing is phenomenological, not ethical; I'm claiming that this is how people actually work, even if you would like them to follow ethics. It seems like ethics is what you get when you try to extend ground-level moralities to an institutional level. When you abstract morality from individuals to collectives, you have to distill it into actual rules that obey some internal logic, and that's where ethics comes in.)

125. Vecr ◴[] No.42176515{5}[source]
How much of your posting is sophistry? I assume this isn't (I doubt it improves perceptions of EA), but the God stuff makes very close to no sense at all.

If it's sophistry anyway, can't you take Eliezer's position and say God doesn't exist, and that some CEV-like system is better than Bentham-style utilitarianism because there's no objective morality?

I don't think CEV makes much sense, but I think you're scoring far fewer points than you think you are, even relative to something like that.

126. Vecr ◴[] No.42176615{5}[source]
> Eh, only if you're minimizing suffering per living being. Not total suffering. Having more happy creatures doesn't cancel out the sad ones. But I see what you mean.

According to this guy it does.

127. NoMoreNicksLeft ◴[] No.42176760{5}[source]
> Denying the existence of something that you and everyone else has experienced is certainly an approach.

You've experienced this mystical thing, and so you know it's true?

> They give an 11% prior probability of earthworm sentience

I'm having trouble holding in the laughter. But you don't seem to understand how dangerously deranged these people are. They'll convert you to their religion by hook or crook.

replies(2): >>42179635 #>>42180467 #
128. sodality2 ◴[] No.42176807{6}[source]
Given that two organizations make an agreement, I'd say a statement by either organization is considered a primary source of said agreement.
replies(1): >>42178121 #
129. himinlomax ◴[] No.42176931{4}[source]
I did. It was a whole lot of nothing.

In any case, I just wanted to point out that if you care about the welfare of damn arthropods, you're going nowhere really fast.

Consider this: the quickest, surest, most efficient, and ONLY way to reduce all suffering on earth to nothing, forever and ever, is a good ole nuclear holocaust.

replies(1): >>42178856 #
130. abemiller ◴[] No.42177293{4}[source]
Using some italics with an edgy claim doesn't let you cut through centuries of philosophy. It's almost as if, when philosophers coin a term like "subjective experience" and thousands of people use it coherently in discussion, it actually has semantic value. It exists in the intersubjective space between people who communicate with shared concepts.

I don't have much to say about the shrimp, but I find it deeply sad when people convince themselves that they don't really exist as a thinking, feeling thing. It's self-repression to the maximum, and it carries the implication that you and all humans have no value.

If you don't have certain, measurable proof either way, why would you choose to align with the grimmest possible skeptical beliefs? Listen to some music or something--don't you hear the sounds?

replies(1): >>42177437 #
131. hazbot ◴[] No.42177306[source]
I'm willing to run with the 3% figure... But I take issue with the linearity assumption that torturing 34 shrimp is thus worse than torturing a human!
132. NoMoreNicksLeft ◴[] No.42177437{5}[source]
> Using some italics with an edgy claim

There is nothing edgy about it. You can't detect it, you can't measure it, and if the word had any applicability (to, say, humans), then you're also misapplying it. If it is your contention that suffering is something other than subjective, then you're the one trying to be edgy. Not I.

The way sane, reasonable people describe subjective phenomena that we can't detect or measure is "not real". When we're talking about decapods, it can't even be self-reported.

> but I find it deeply sad when people convince themselves that they don't really exist as a thinking, feeling thing. It's self repression to the maximum,

Says the guy agreeing with a faction that seeks to convince people shrimp are anything other than food. That if for some reason we need to euthanize them, they must be laid down on a velvet pillow to listen to symphonic music and watch films of the beautiful Swiss mountain countryside until their last gasp.

"Sad" is letting yourself be manipulated so that some other religion can enforce its noodle-brained dietary laws on you.

> If you don't have certain measurable proof either way

I'm not obligated to prove the negative.

replies(2): >>42179987 #>>42180220 #
133. sdwr ◴[] No.42177503{6}[source]
That's reversed. The number of people can be mapped linearly, but not the intensity of the pain.

(Intuitively, it's hard to say saving 100 people is 100x as good as saving 1, because we can't have 100 best friends, but it doesn't affect the math at all)

134. freejazz ◴[] No.42177536[source]
It remains to be seen that anyone besides the EA community takes the EA community seriously, I wouldn't worry about it.
135. sdwr ◴[] No.42177744{3}[source]
It's not about the numbers. The core argument is:

- they suffer

- we are good people who care about reducing suffering

- so we spend our resources to reduce their suffering

And some (most!) people balk at one of those steps

But seriously, pain is the abstraction already. It's damage to the body represented as a feeling.

136. BenthamsBulldog ◴[] No.42177908[source]
But the numbers came not from me but from the RP report.
137. bhelkey ◴[] No.42178121{7}[source]
This is the quote:

> Tesco and Sainsbury’s published shrimp welfare commitments, citing collaboration with SWP (among others), and signed 9 further memoranda of understanding with producers, in total committing to stunning a further ~1.6B shrimps per annum.

It is a secondary source. It does not present firsthand information. It describes commitments made by Tesco, Sainsbury's, and others.

Setting this aside, the point I made is simple. This article argues for a radical change in morality; folks generally view a human life as worth much much more than 32 shrimp lives.

A well-written radical argument should understand how it differs from mainstream thought and focus on the premise(s) that underlies this difference. As I am unimpressed with the premises, I find the article unconvincing.

138. xipho ◴[] No.42178856{5}[source]
I said nothing about caring about arthropods. Side note: I do, though. They are immensely important to our well-being; without them we're not around, and with them we have trillions of dollars of mess to deal with. Again, note that I said nothing about their feelings.

I feel you're still missing the point. I get that you might be coming from a binary perspective (as evidenced by jumping straight to a nuclear argument, i.e. why bother talking about anything else), but I highly doubt that's the goal of the author. They are trying to make you imagine, and think, about how things fit together. YRMV.

139. BenthamsBulldog ◴[] No.42178954{4}[source]
This is to model uncertainty, not differences across species. The 50th-percentile guess is that shrimp feel pain 3.1% as intensely as humans, while the mean is 19%.
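That median/mean gap is what a right-skewed uncertainty distribution produces. A minimal sketch (an illustrative lognormal I fit to those two numbers; the RP report aggregates its own models and does not use this distribution):

    import numpy as np

    # Hypothetical lognormal with median ~3.1% and mean ~19%.
    mu = np.log(0.031)                        # log of the median
    sigma = np.sqrt(2 * (np.log(0.19) - mu))  # from mean = exp(mu + sigma^2 / 2)

    samples = np.random.lognormal(mu, sigma, size=1_000_000)
    print(np.median(samples))  # ~0.031
    print(samples.mean())      # ~0.19

The heavy upper tail (a small chance that shrimp pain is near human-level) pulls the mean far above the median, which is why the expected-value argument doesn't hinge on the 3.1% point estimate.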
140. BenthamsBulldog ◴[] No.42178958{3}[source]
No, it's not about lifespan. They have a very complicated report that tries to quantify intensity of suffering using a variety of proxies.
141. BenthamsBulldog ◴[] No.42178970{7}[source]
But if neuron counts, as they show, have no reliable correlation (and in some cases an inverse correlation) with valenced experience, why would we use them to rule out intense experiences?
replies(1): >>42180518 #
142. BenthamsBulldog ◴[] No.42178980{6}[source]
Right, but Norcross gives an argument against that. You either have to think that slightly reducing pain while making it 1000 times as prevalent doesn't always make for a worse state of affairs, or deny that "worse than" is transitive.
replies(1): >>42187038 #
143. BenthamsBulldog ◴[] No.42178994{4}[source]
Sure, but if we're not sure, then the possible infliction of huge amounts of horrendous suffering on trillions of them is quite serious. If there's even a 5% chance that shrimp feel pain as intensely as humans, the SWP is an excellent bet.
144. BenthamsBulldog ◴[] No.42179008[source]
But thinking that extreme suffering is bad doesn't require thinking that wireheading for tons of pleasure is good.
145. bulletsvshumans ◴[] No.42179635{6}[source]
Setting aside the shrimp, are you denying that any humans, including yourself, experience suffering?
replies(1): >>42184142 #
146. bulletsvshumans ◴[] No.42179854[source]
Whenever I see the backlash to conversations about animal suffering, I wonder if one reason is that it is a sort of "inconvenient truth", a la global warming, where we are doing grave damage but prefer to collectively ignore it because that damage aligns with our material incentives. That animals of many kinds are capable of substantial suffering seems honestly obvious to me from first-hand observation, and I suspect it is obvious to most others too, at least when we are children. But I think we intentionally "unlearn" this culturally, because for most of human history it was a necessary evil to sustain ourselves. To bring this truth up begins to break a spell we cast on ourselves that we don't want broken. I look forward to the day when more people feel that their material needs are satisfied enough that we can collectively approach this topic with uncompromised intellectual honesty.
147. jhanschoo ◴[] No.42179987{6}[source]
> If it is your contention that suffering is something-other-than-subjective, then you're the one trying to be edgy.

You do feel pain and hunger, at least to the extent that you experience touch. You can in fact be more certain of that than of anything conventionally thought to be objective, such as physical models of the world, for it is only through your perception that you receive those models, or the evidence to build them.

The notion of suffering used in the paper is primarily with respect to pain and pleasure.

Now, you may deny that shrimp feel pain and pleasure. It's also possible to deny that other people feel pain and pleasure. But you do feel pain and pleasure, and you always engage in behaviors in response to these sensations; your senses also inform you secondarily that many other people abide by similar rules.

Many animals like us are fundamentally sympathetic to pain and pleasure. That is, observing behavior related to pain and pleasure impels a related feeling in ourselves, in certain contexts, not necessarily exact. This mechanism is quite obvious when you observe parents caring for their young, herd behavior, etc. With this established, some people are in a context where they are sympathetic to the observed pain and pleasure of nonhuman animals--in this case shrimp rather than cats and dogs--and such a study helps one figure out this relationship in more detail.

148. jhanschoo ◴[] No.42180156{5}[source]
Hi, I'm a flexitarian who finds a more restrictive diet than the one I am having unsustainable in terms of my habits, so I'm interested in information regarding the topic. Can you direct me to what you've discussed regarding vegan nutrition?
replies(1): >>42180388 #
149. holden_nelson ◴[] No.42180220{6}[source]
> You can’t detect it, you can’t measure it

Eh, perhaps we can’t detect it perfectly reliably, but we can absolutely detect it. Go to a funeral and observe a widow in anguish. Just because we haven’t (yet) built a machine to detect or measure it doesn’t mean it doesn’t exist.

replies(1): >>42184182 #
150. joegibbs ◴[] No.42180356[source]
The whole idea of morality is totally arbitrary and only exists because we say it does. If you were an Aztec then sacrificing people to Huitzilopochtli was actually moral, and if you were alive two thousand years ago nobody would have batted an eye at owning slaves.

In reality, nobody would actually choose to save the lives of 34 crustaceans over the life of a human, even if killing the prawns resulted in 102% of the suffering of killing the human.

It's the same with all the EA stuff, like prioritising X trillion potential humans who may exist in the future over actual people who exist now--you can get as granular as you want and mess around with probability to say anything. Maybe it's good to grow brains in vats and feed them heroin--that'll increase total happiness! Maybe we should judge someone who has killed enough flies the same as a murderer! Maybe our goals for the future should be based on the quadrillion future sentient creatures that will evolve from today's coconut crabs!

replies(1): >>42187739 #
151. 0xDEAFBEAD ◴[] No.42180388{6}[source]
https://news.ycombinator.com/item?id=41820828

https://forum.effectivealtruism.org/posts/dbw2mgSGSAKB45fAk/...

Really I want to see vegans do a comprehensive investigation of every last nutrient that's disproportionately found in animal products, including random stuff like beta-alanine, creatine, choline, etc., and take a "better safe than sorry" approach of inventing a veggie burger that contains all that stuff in abundance, and is palatable.

I suspect you could make a lot of money by inventing such a burger. Vegans are currently fixated on improving taste, and they seem to have a bit of a blind spot around nutrition. I expect a veggie burger which mysteriously makes vegans feel good, gives them energy, and makes them feel like they should eat more of it will tend to sell well.

replies(4): >>42181101 #>>42182886 #>>42187548 #>>42192690 #
152. mistercow ◴[] No.42180467{6}[source]
> You've experienced this mystical thing, and so you know it's true?

Suffering is experience, and my own internal experiences are the things that I can be most certain of. So in this case, yes. I don’t know why you’re calling it “mystical” though.

> They'll convert you to their religion by hook or crook.

I have a lot more confidence in my ability to evaluate arguments than you seem to.

153. mistercow ◴[] No.42180518{8}[source]
This feels like a severe motte-and-bailey. The motte is "neuron count is not perfectly linearly correlated with sentience" and the bailey is "neuron count has no reliable correlation with sentience".

The priors referenced in the “technical details” doc (and good lord, why does everything about this require me to dereference three or four layers of pointers to get to basic answers to methodological questions?) appear to be based entirely on proxies like:

> Responses such as moving away, escaping, and avoidance, that seem to account for noxious stimuli intensity and direction.

This is a proxy that applies to slime molds and Roombas, yet I notice that neither of those made the table. Why not?

I suspect that the answer is that at least when it comes to having zero neurons, the correlation suddenly becomes pretty obviously reliable after all.

154. FearNotDaniel ◴[] No.42180574[source]
Oh wow, so now we live in a world that tells us unborn humans are not actually human until a certain arbitrary number of months have passed, and that those non-human living things do not have feelings or suffer in any way if we choose to kill them before this arbitrary date--but at the same time, every single shrimp's intense suffering is suddenly my problem?
155. jhanschoo ◴[] No.42181101{7}[source]
Thanks!
156. BenthamsBulldog ◴[] No.42182194[source]
Seems silly that one post from one person would cause you to rethink a movement.
replies(1): >>42197358 #
157. BenthamsBulldog ◴[] No.42182200[source]
What about the brain? https://link.springer.com/article/10.1007/s00441-017-2607-y#...
replies(1): >>42184058 #
158. BenthamsBulldog ◴[] No.42182208{5}[source]
I also think that people shouldn't eat them, but donating is much more effective. In total, fewer shrimp die painfully, which is good (if a second doctor provides an anesthetic for a patient before death, that isn't made bad by two people now being complicit in the death).
159. BenthamsBulldog ◴[] No.42182222{4}[source]
I'm using the standard definition of consciousness--having subjective experience. Not sure what "BS math" I used.

I agree that lots of people in the real world need help. Helping them is good. But so is averting enormous amounts of pain and suffering. In expectation, even given a low credence in shrimp sentience, giving averts huge amounts of pain and suffering, which is quite good.

160. rep_lodsb ◴[] No.42182568{3}[source]
But what if the person you murder will be responsible for the suffering of billions of shrimp? Or argues convincingly against Effective Altruism?

The reason nobody actually does this, is that EA is a belief system adopted by (at least) comfortably well-off Silicon Valley people to make themselves feel better about their effect on society. If there is a 0.00001% chance they can prevent AI MegaHitler, everything they do to make more money is justified.

161. aziaziazi ◴[] No.42182886{7}[source]
That would be a game changer. I also crave burgers, and the taste/texture of most of them has improved drastically, but nutrients don't seem to be the first objective, apart from protein and sometimes calcium/B12.

I often eat supplemented processed food (~1 serving/day), primarily to turn lazy meals (fries, pizza, sandwiches…) into lazy-healthy meals. When I stopped eating flesh, those became even more useful.

I recommend the bars (no preparation) and the "pots" (salty meals). Shakers are good and cheap. All seem very balanced and nutritionally complete.

http://jimmyjoy.com/ Not affiliated in any way, just a happy customer.

162. dleary ◴[] No.42183853{5}[source]
> Children aren't as good at information processing, but they are even more capable of suffering.

That is too strong a statement to just toss out there like that. And I don’t even think it’s true.

I think children probably feel pain more intensely than adults. But there are many more dimensions to suffering than pain. And many of those dimensions are beyond the ken of children.

Children will not know the suffering that comes from realizing that you have ruined a relationship that you value, and it was entirely your fault.

They will not know the kind of suffering that’s behind Imposter Syndrome after getting into MIT.

Or the suffering that comes from realizing that your heroin addiction will never be as good as the first time you shot up. Or that same heroin addict knowing that they are betraying their family, stealing their mother’s jewelry to pawn, and doing it anyway.

Or the suffering of a once-great athlete coming to terms with the fact that they are washed up and that life is over now.

Or the suffering behind their favorite band splitting up.

Or the suffering behind winning Silver at the Olympics.

Or the agony of childbirth.

Perhaps most importantly, one of the greatest sorrows of all: losing your own child.

Et cetera

163. leephillips ◴[] No.42184058{3}[source]
As is most clear in Figure 8 of the article you refer to (which is good science, but mainly of interest to specialists in the neural/sensory anatomy of certain classes of animals), the shrimp lacks, among other features, a cortex and a thalamus. These are known to be necessary for the generation of conscious experience.
164. NoMoreNicksLeft ◴[] No.42184142{7}[source]
Humans self-report "suffering". Strangely, those who claim it the most enthusiastically don't seem to be experiencing pain from disease or injury.

I would hesitate to use that word myself, though my personal experiences have, at times, been somewhat similar to those of people who do use it.

165. NoMoreNicksLeft ◴[] No.42184182{7}[source]
> Eh, perhaps we can’t detect it perfectly reliably, but we can absolutely detect it. Go to a funeral and observe a widow in anguish.

If your definition of suffering covers both the widow grieving a lost husband and a shrimp slowly going whatever the shrimp equivalent of unconscious is in an ice-water bath... it doesn't seem to be a very useful word.

> Just because we haven’t (yet) built a machine to

Yes, because we haven't built the machine, we can't much tell if the widow is in "anguish" or is putting on a show for the public. Some widows are living their most joyous days, but they can't always show it.

166. n4r9 ◴[] No.42187038{7}[source]
What are the arguments for "worse than" being transitive? It's not immediately clear to me that it ought to be.
167. n4r9 ◴[] No.42187128{5}[source]
Thanks. It's very clever how he uses probabilistic variants to move between scenarios. I've read to the end and whilst I'm not convinced, it's definitely given me food for thought. I'm stuck on two bits so far:

* He slides between talking about personal decisions vs. decisions about someone else. The argument for Headache is couched in terms of whether an average person would drive to the chemist, whilst the argument for shifting from Headache to Many Headaches is couched in terms of decisions made by an external party. This feels problematic to me, though there may be some workaround.

* He describes rejecting transitivity as being overwhelmingly implausible. Is that obvious? Ethical considerations ultimately boil down to subjective evaluations, and there seems no obvious reason why those evaluations would be transitive.

168. 0xDEAFBEAD ◴[] No.42187548{7}[source]
More suggestions, if someone wants to create a product that I personally would be enthusiastic to try:

* No weird yeast or fungal ingredients -- living or dead. I get fatigue after consuming that stuff. My body doesn't respond well to it. I suppose the most ordinary sort of mushrooms are probably OK.

* No eggs, they're pretty suffering-intensive: https://sandhoefner.github.io/animal_suffering_calculator

* Minimally processed ingredients preferred.

* Infuse with iodized salt, healthy fats, and spices for flavor. Overall good taste is much more important than replicating meat flavor/mouthfeel. I assume if you iterate on the spices enough, you can come up with something that tastes incredible.

I'm imagining a heavily fortified, compressed bean patty of some kind. I suppose with beans you'd have to account for phytates interfering with nutrient absorption, though.

169. RestartKernel ◴[] No.42187686[source]
This reads like a thought experiment, but its sincerity makes me uncomfortable in the way good art does. Thanks for sharing.
170. ◴[] No.42187739{3}[source]
171. ◴[] No.42187791[source]
172. n4r9 ◴[] No.42192690{7}[source]
I'm reminded of a video from Olympic weightlifter Clarence Kennedy. He's vegan but has done a lot of research on fuelling himself with the nutrients he needs to maintain physical abilities well beyond most people's. This is a man with insane numbers in both weightlifting and powerlifting. And he doesn't supplement with additional protein or creatine powder. To be fair, he's probably also on steroids, but even so, you need peak nutrition to reach those numbers.

There's an article here which discusses and embeds the video: https://barbend.com/clarence-kennedy-vegan-diet/

173. GJim ◴[] No.42194099{4}[source]
> entire cities could be leveled before I let anyone hurt my kitten

Allowing that kitten to live will cause untold suffering for other small mammals.

You have the power to stop that suffering!

174. ◴[] No.42197358{3}[source]