Indeed, people will resist being "tricked" into this framework: debating on these terms will feel like having their morals twisted into justifying things they don't believe in. And although they may not have the patience or rhetorical skill to put into words exactly why they resist it, their intuitions won't lead them astray, and they'll react according to their true-but-hard-to-verbalize beliefs (usually by gradually getting frustrated and angry with you).
A person who believes in rationalizing everything will then think that someone who resists this kind of argument is just too dumb, or irrational, or stubborn, or actually evil, to see that they are wrong. But it seems to me that the very idea that you can rationalize morality, that you can compute the right thing to do at a personal-ethics level, is itself a moral belief, which those people simply do not agree with, and their resistance is in accordance with that: you'd be trying to convince them to replace their moral beliefs with yours in order to win an argument by tricking them with logic. No wonder they resist! People do not release control over their moral beliefs lightly. Rather, I think it's the people who are very insecure in their own beliefs who are susceptible to giving them up to someone who runs rhetorical circles around them.
I've come to think that a lot of 21st-century discord (cf. American political polarization) is due to this basic conflict. People who believe in rationalizing everything think they can't be wrong, because the only way to evaluate anything is rationally--a lens through which, of course, rationality looks better than anything else. Meanwhile, everyone who trusts in their own moral intuitions feels tricked and betrayed and exploited and sold out when this happens to them. Sure, they can't always find the words to defend themselves. But it's the rationalizers who are in the wrong: pressuring someone into changing their mind is not okay; it's a basic act of disrespect. Getting someone on your side for real means appealing to their moral intuition, not making them doubt it until they give up and reluctantly agree with yours. Anyway, it's a temporary and false victory: their intuition will re-emerge years later, twisted and deformed from its years of imprisonment, and often set on vengeance. At that point they may well be "wrong", but there's no convincing them otherwise: their moral goal has been replaced with a singular need to make their own decisions instead of being subjugated by yours.
Anyway.
IMO, to justify animal welfare utilitarianism to people who don't care about it at all, you need to take one of two stances:
1. We (the animal-empathizers) live in a society with you, and we care a lot about this, but you don't. But we're in community with each other, so we ought to support each other's causes even if they're not personally relevant to us. So how about you support what we care about and we support what you care about, so everyone benefits? In this case it's very cheap to help.
2. We all live in a society together which should, by now, have largely solved for our basic needs (except for our basic incompetence at it, which, yeah, we need to keep working on). The basic job of morality is to guarantee the safety of everyone in our community. As we start checking off basic needs at the local scale, we naturally start expanding our definition of "community" to more and more beings that we can empathize with: other nations and peoples, the natural world around us, people in the far future who suffer from our carelessness, pets, and then, yes, the animals that we use for food. Even though we're still working on the "nearby" hard stuff, like protecting our local ecosystems, we can also start with the low-hanging fruit on the far-away stuff, including alleviating the needless suffering of shrimp. Long-term, we hope to live in harmony with everything on earth in a way that has us all looking out for each other, and this is a small step towards that.
"(suffering per death) * (discount rate for shrimp being 3% of a human) * (dollar to alleviate) = best charity" just doesn't work at all. I notice that the natural human moral intuition (the non-rational version) is necessarily local: it's focused on protecting whatever you regard as your community. So to get someone to extend it to far-away less-sentient creatures, you have to convince the person to change their definition of the "community"--and I think that's what happens naturally when they feel like their local community is safe enough that they can start extending protection at a wider radius.