
1167 points | jbredeche | 1 comment
tuna-piano ◴[] No.44000717[source]
If someone in the year 2050 were to pick out the most important news article from 2025, I wouldn't be surprised if they chose this one.

For those who don't understand this stuff - we are now capable of editing some of a body's DNA in ways that predictably change its attributes. The baby's liver now has different (and better) DNA than the rest of its body.

In most cases we are still struggling with how to deliver the DNA update instructions into the body. But given the pace of change in this space, I expect massive improvements in this delivery process over time.

Combined with AI to better understand the genome, this is going to be a crazy century.

Further reading on related topics:

https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significan...

https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-mak...

https://www.lesswrong.com/posts/yT22RcWrxZcXyGjsA/how-to-hav...

replies(7): >>44000781 #>>44000909 #>>44001300 #>>44001456 #>>44001603 #>>44002146 #>>44010171 #
bglazer ◴[] No.44000781[source]
The “How to make superbabies” article demonstrates a couple of fundamental misunderstandings about genetics that make me think the authors don’t know what they’re talking about at a basic level. Zero mention of linkage disequilibrium. Zero mention of epistasis. Unquestioned assumptions of linear genotype-phenotype relationships for IQ. Seriously, the projections in their graphs into the “danger zone” made me laugh out loud. This is elementary stuff that they’re missing, but the entire essay is so shot through with hubris that I don’t think they’re capable of recognizing that.
replies(3): >>44000801 #>>44002867 #>>44003980 #
cayley_graph ◴[] No.44000801[source]
The EA community is generally incapable of self-awareness. The academic-but-totally-misinformed tone is comparable to reading LLM output. I've stopped trying to correct them; it's too much work on my part and not enough on theirs.
replies(3): >>44000840 #>>44006312 #>>44012322 #
Kuinox ◴[] No.44000840[source]
What does EA mean here?
replies(1): >>44000852 #
cayley_graph ◴[] No.44000852[source]
"Effective Altruism", something I find myself aligned with but not to the extremes taken by others.
replies(3): >>44001930 #>>44002050 #>>44005402 #
morsecodist ◴[] No.44002050[source]
Effective Altruism is such an interesting title. Almost no one views their altruism as ineffective. The differentiator is what makes their flavor of altruism effective, but that's not in the title. It would be like calling the movement "real Altruism" or "good Altruism".

A good name might be "rational Altruism", because in practice these people come from the rationalist movement and are doing altruism, or what they feel is altruism. But the "rationalist" title suffers from similar problems.

replies(4): >>44002376 #>>44002857 #>>44003627 #>>44007765 #
kmmlng ◴[] No.44002376[source]
I suppose in the beginning, it was about finding ways to measure how effective different altruistic approaches actually are and focusing your efforts on the most effective ones. "Effective" then essentially means how much impact you achieve per dollar spent. One of the more convincing ways of doing this is looking at different charitable foundations and determining how much of each dollar you donate actually ends up being used to fix some problem, and how much ends up being absorbed by the foundation itself (salaries, etc.) with nothing to show for it.

They might have lost the plot somewhere along the line, but the effective altruism movement had some good ideas.

replies(3): >>44004275 #>>44004691 #>>44008104 #
sfink ◴[] No.44008104[source]
> One of the more convincing ways of doing this is looking at different charitable foundations and determining how much of each dollar you donate to them actually ends up being used to fix some problem and how much ends up being absorbed by the charitable foundation itself (salaries etc.) with nothing to show for it.

Color me unconvinced. This will work for some situations. But at this point it's well known enough to have become a target, and has therefore ceased to be a good measure (Goodhart's Law).

The usual way to look at this is to look at the percentage of donations spent on administrative costs. This makes two large assumptions: (1) administrative costs have zero benefit, and (2) non-administrative costs have 100% benefit. Both are wildly wrong.

A simple counterexample: you're going to solve hunger. So you take donations, skim 0.0000001% off the top for your time because "I'm maximizing benefit, baby!", and use the rest to purchase bananas. You dump those bananas in a pile in the middle of a homeless encampment.

There are so many problems with this, but I'll stick with the simplest: in 2 weeks, you have a pile of rotten bananas and everyone is starving again. It would have been better to store some of the bananas and give them out over time, which requires space and maybe even cooling to hold inventory, which costs money - and that's money that is not directly fixing the problem.

There are so many examples of feel-good world saving that end up destroying communities and cultures, fostering dependence, promoting corruption, propping up the institutions that caused the problem, etc.

Another analogy: you make a billion dollars and put it in a trust for your grandchild to inherit the full sum when they turn 16. Your efficiency measure is at 100%! What could possibly go wrong? Could someone improve the outcome by, you know, administering the trust for you?

Smart administration can (but does not have to) increase effectiveness. Using this magical "how much of each dollar... ends up being used to fix some problem" metric is going to encourage ineffective charities and deceptive accounting.
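The divergence between the two metrics is easy to sketch numerically. Here's a toy example in Python - charity names and every number are made up purely for illustration - showing how a charity that "wins" on the naive overhead ratio can still deliver less actual impact per donated dollar than one that pays for storage and logistics:

```python
# Two hypothetical food charities. Each tuple is:
#   (fraction of each dollar spent on admin,
#    meals delivered per dollar of program spending)
charities = {
    "NoOverheadBananas": (0.01, 0.5),  # near-zero admin, but food rots in a pile
    "WellRunPantry":     (0.25, 2.0),  # pays for storage, cooling, and staff
}

def overhead_ratio(admin, meals_per_program_dollar):
    """The naive metric: fraction of each dollar 'absorbed' by the charity."""
    return admin

def meals_per_dollar(admin, meals_per_program_dollar):
    """Actual impact: meals delivered per dollar donated."""
    return (1 - admin) * meals_per_program_dollar

for name, (admin, meals) in charities.items():
    print(f"{name}: overhead={overhead_ratio(admin, meals):.2f}, "
          f"meals/dollar={meals_per_dollar(admin, meals):.2f}")
# NoOverheadBananas has the lower overhead (0.01 vs 0.25),
# yet delivers fewer meals per dollar (0.495 vs 1.50).
```

Ranking by overhead ratio picks NoOverheadBananas; ranking by meals actually delivered picks WellRunPantry - which is sfink's point about what the "percentage spent on administration" metric rewards.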