
1168 points by jbredeche | 27 comments
tuna-piano ◴[] No.44000717[source]
If someone in the year 2050 were to pick out the most important news article from 2025, I wouldn't be surprised if they chose this one.

For those who don't understand this stuff - we are now capable of editing some of a body's DNA in ways that predictably change its attributes. The baby's liver now has different (and better) DNA than the rest of its body.

We are still struggling in most cases with how to deliver the DNA update instructions into the body. But given the pace of change in this space, I expect massive improvements in this update process over time.

Combined with AI to better understand the genome, this is going to be a crazy century.

Further reading on related topics:

https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significan...

https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-mak...

https://www.lesswrong.com/posts/yT22RcWrxZcXyGjsA/how-to-hav...

replies(7): >>44000781 #>>44000909 #>>44001300 #>>44001456 #>>44001603 #>>44002146 #>>44010171 #
1. bglazer ◴[] No.44000781[source]
The “How to make superbabies” article demonstrates a couple of fundamental misunderstandings about genetics that make me think the authors don’t know what they’re talking about at a basic level. Zero mention of linkage disequilibrium. Zero mention of epistasis. Unquestioned assumptions of linear genotype-phenotype relationships for IQ. Seriously, the projections in their graphs into the “danger zone” made me laugh out loud. This is elementary stuff that they’re missing, but the entire essay is so shot through with hubris that I don’t think they’re capable of recognizing that.
replies(3): >>44000801 #>>44002867 #>>44003980 #
2. cayley_graph ◴[] No.44000801[source]
The EA community is generally incapable of self-awareness. The academic-but-totally-misinformed tone is comparable to reading LLM output. I've stopped trying to correct them; it's too much work on my part and not enough on theirs.
replies(3): >>44000840 #>>44006312 #>>44012322 #
3. Kuinox ◴[] No.44000840[source]
What does EA mean here?
replies(1): >>44000852 #
4. cayley_graph ◴[] No.44000852{3}[source]
"Effective Altruism", something I find myself aligned with but not to the extremes taken by others.
replies(3): >>44001930 #>>44002050 #>>44005402 #
5. alexey-salmin ◴[] No.44001930{4}[source]
Technically lesswrong is about rationalists, not effective altruists, but you're right in the sense that it's the same breed.

They think that the key to scientific thinking is to forgo moral limitations, not to study and learn. As soon as you're free from the shackles of tradition, you become 100% rational and therefore 100% correct.

replies(5): >>44002894 #>>44003018 #>>44003129 #>>44003317 #>>44004503 #
6. morsecodist ◴[] No.44002050{4}[source]
Effective Altruism is such an interesting title. Almost no one views their altruism as ineffective. The differentiator is what makes their flavor of altruism effective, but that's not in the title. It would be like calling the movement "real Altruism" or "good Altruism".

A good name might be "rational Altruism", because in practice these people come from the rationalist movement and are doing altruism, or what they feel is altruism. But the "rationalist" title suffers from similar problems.

replies(4): >>44002376 #>>44002857 #>>44003627 #>>44007765 #
7. kmmlng ◴[] No.44002376{5}[source]
I suppose in the beginning, it was about finding ways to measure how effective different altruistic approaches actually are and focusing your efforts on the most effective ones. "Effective" then essentially means how much impact you achieve per dollar spent. One of the more convincing ways of doing this is looking at different charitable foundations and determining how much of each dollar you donate to them actually ends up being used to fix some problem and how much ends up being absorbed by the charitable foundation itself (salaries etc.) with nothing to show for it.

They might have lost the plot somewhere along the line, but the effective altruism movement had some good ideas.

replies(3): >>44004275 #>>44004691 #>>44008104 #
8. concordDance ◴[] No.44002857{5}[source]
The vast majority of non-EA charity givers do not expend effort on trying to find the most dollar-efficient charities (or indeed pushing for quantification at all), which makes their altruism ineffectual in a world with strong competition between charities (where the winners are inevitably those who spend the most on acquiring donations).
9. concordDance ◴[] No.44002867[source]
Do you have some further reading where one can understand the basics of the subject?
10. stogot ◴[] No.44002894{5}[source]
Except no one is 100% rational or 100% correct.
11. cnity ◴[] No.44003018{5}[source]
That community is basically the "r/iamverysmart" types bringing their baggage into adulthood. Almost everything I've read in that sphere is basically Dunning–Kruger to the nth degree.
12. jaidhyani ◴[] No.44003129{5}[source]
Approximately no one in the community thinks this. If you can go two days in a rationalist space without hearing about "Chesterton's Fence", I'll be impressed. No one thinks they're 100% rational, nor that this is a reasonable aspiration. Traditions are generally regarded as sufficiently important that a not-small amount of effort has gone into trying to build new ones. Not only is it the case that no one thinks that anyone, including themselves, is 100% correct, but the community norm is to express credences as probabilities and convert those probabilities into bets when possible. People in the rationalist community constantly, loudly, and proudly disagree with each other, to the point that this can make it difficult to coordinate on anything. And everyone is obsessed with studying and learning, and constantly trying to come up with ways to do this more effectively.

Like, I'm sure there are people who approximately match the description you're giving here. But I've spent a lot of time around flesh-and-blood rationalists and EAs, and they violently diverge from the account you give here.

13. winterdeaf ◴[] No.44003317{5}[source]
So much vitriol. I understand it's cool to hate on EA after the SBF fiasco, but this is just smearing.

The key to scientific thinking is empiricism and rationalism. Some people in EA and lesswrong extend this to moral reasoning, but utilitarianism is not a pillar of these communities.

replies(1): >>44006285 #
14. mushi01 ◴[] No.44003627{5}[source]
Do you really think all altruism is effective? Caring about the immediate well-being of others is not as effective as thinking in the long term. The altruism you are describing is misguided altruism, which ultimately hurts more than it helps, while effective altruism goes beyond surface-level help in ways that don't enable self-destructive behaviours or perpetuate the problem.
replies(1): >>44004202 #
15. tuna-piano ◴[] No.44003980[source]
Thanks for the healthy skepticism.

I still think there's a lot to learn from those articles for most folks uninvolved in this area, even if some of their immediate optimism glosses over complications like the ones you mention.

I think what I mostly took away is that a combination of technologies is likely to dramatically change how we have babies in the future.

1. We'll make sperm/egg from skin cells. This has already been done in mice[1], so it is not science fiction to do it in people.

2. When we're able to do this inexpensively, we could create virtually unlimited embryos. We can then select the embryos with the most desirable traits. Initially, this may be simple things like not choosing embryos with certain genes that confer a higher risk of certain diseases.

This may involve selecting traits like intelligence and height (there are already companies that offer this embryo selection capability [2]).

3. Instead of creating a lot of embryos and selecting the best ones, we could instead create just one embryo and edit the DNA of that embryo, which has already been done in humans [3]. Alternatively, we could edit the DNA of the sperm/egg prior to creating the embryo.

The fact that none of this is science fiction is just wild. All of these steps have already been done in animals or people. Buckle up, the future is going to be wild.

[1] https://www.npr.org/sections/health-shots/2023/05/27/1177191...

[2] https://www.theguardian.com/science/2024/oct/18/us-startup-c...

[3] https://www.science.org/content/article/chinese-scientist-wh...

16. morsecodist ◴[] No.44004202{6}[source]
No, I think almost all people doing altruism at least think what they are doing is effective. I totally get that the EA people believe they have found the one true way, but so does everyone else. Even if EA is correct, it just makes talking about it confusing. Imagine if Darwin had called his theory "correct biology".
17. morsecodist ◴[] No.44004275{6}[source]
This is a super fair summary and has shifted my thinking on this a bit, thanks.
19. agos ◴[] No.44004691{6}[source]
“Measurable altruism” would have been a better name
20. JeremyNT ◴[] No.44005402{4}[source]
Note that these people often condescendingly refer to themselves as "rationalists," as if they've unlocked some higher level of intellectual enlightenment which the rest of us are incapable of achieving.

In reality, they're simply lay people who synthesize a lot of garbage they find on the Internet into overly verbose pseudo-intellectual blog posts filled with both the factual inaccuracies of their source material and new factual inaccuracies that they invent from whole cloth.

21. xrhobo ◴[] No.44006285{6}[source]
Empiricism and rationalism both tempered by a heavy dose of skepticism.

On the other hand, maybe that is some kind of fallacy itself. I almost want to say that "scientific thinking" should be called something else, the main issue being the lack of experiment. Using the word "science" without experiment leads to all sorts of nonsense.

A word that means "scientific thinking, as much as possible, without experiment" would at least embed a dose of skepticism in the process.

The Achilles heel of rationalism is the descent into modeling complete nonsense. I suppose I should give lesswrong another chance, because that would sum up my experience so far, empirically.

EA to me seems like obvious self-serving nonsense: hiding something in the obvious to avoid detection.

22. static_void ◴[] No.44006312[source]
I once went into a LessWrong IRC server.

I posted a question where I referred to something by the wrong name.

Someone said I was confused / wrong, so I corrected myself and restated my question.

For some 10 minutes they just kept dogpiling on the use of the wrong term.

Never have I met a stupider bunch of people than LessWrong people.

replies(1): >>44006475 #
23. Workaccount2 ◴[] No.44006475{3}[source]
Reminds me why I learned long ago to never post your code online when looking for help.

50 replies arguing about how you can simplify your for() loop syntax and not one reply with an actual answer.

24. tim333 ◴[] No.44007765{5}[source]
>Almost no one views their Altruism as ineffective

As someone who has occasionally given money to charities for homelessness and the like, I don't really expect it to fix much. It's more the thought that counts.

replies(1): >>44008113 #
25. sfink ◴[] No.44008104{6}[source]
> One of the more convincing ways of doing this is looking at different charitable foundations and determining how much of each dollar you donate to them actually ends up being used to fix some problem and how much ends up being absorbed by the charitable foundation itself (salaries etc.) with nothing to show for it.

Color me unconvinced. This will work for some situations, but at this point it's well known enough that it has become a target and ceased to be a good measure (Goodhart's Law).

The usual way to look at this is to look at the percentage of donations spent on administrative costs. This makes two large assumptions: (1) administrative costs have zero benefit, and (2) non-administrative costs have 100% benefit. Both are wildly wrong.

A simple counterexample: you're going to solve hunger. So you take donations, skim 0.0000001% off the top for your time because "I'm maximizing benefit, baby!", and use the rest to purchase bananas. You dump those bananas in a pile in the middle of a homeless encampment.

There are so many problems with this, but I'll stick with the simplest: in 2 weeks, you have a pile of rotten bananas and everyone is starving again. It would have been better to store some of the bananas and give them out over time, which requires space and maybe even cooling to hold inventory, which costs money, and that's money that is not directly fixing the problem.

There are so many examples of feel-good world-saving that end up destroying communities and cultures, fostering dependence, promoting corruption, propping up the institutions that caused the problem, etc.

Another analogy: you make a billion dollars and put it in a trust for your grandchild to inherit the full sum when they turn 16. Your efficiency measure is at 100%! What could possibly go wrong? Could someone improve the outcome by, you know, administering the trust for you?

Smart administration can (but does not have to) increase effectiveness. Using this magical "how much of each dollar... ends up being used to fix some problem" metric is going to encourage ineffective charities and deceptive accounting.
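
For what it's worth, here's a toy sketch in Python (the charity names and all the numbers are hypothetical, purely to make the point concrete) showing how the naive overhead ratio and the actual impact per donated dollar can rank the same two charities in opposite orders:

    # Two hypothetical charities: (overhead fraction, impact per program dollar).
    charities = {
        "LeanOps": (0.05, 0.40),  # tiny overhead, poorly run programs
        "WellRun": (0.30, 1.50),  # big admin spend, very effective programs
    }

    for name, (overhead, impact_per_program_dollar) in charities.items():
        program_share = 1.0 - overhead
        impact_per_donated_dollar = program_share * impact_per_program_dollar
        print(f"{name}: overhead {overhead:.0%}, "
              f"impact per donated dollar {impact_per_donated_dollar:.2f}")

    # LeanOps: overhead 5%, impact per donated dollar 0.38
    # WellRun: overhead 30%, impact per donated dollar 1.05

The overhead metric prefers LeanOps; impact per dollar prefers WellRun. Any metric that only counts where the money goes, and not what it accomplishes, will make this mistake systematically.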

26. zarathustreal ◴[] No.44008113{6}[source]
I like to call this “lazy altruism”
27. AlexeyBelov ◴[] No.44012322[source]
It's like Mensa: if you really want to be a part of Mensa and be known for that, are you really that smart?