
1162 points jbredeche | 1 comment
tuna-piano ◴[] No.44000717[source]
If someone in the year 2050 were to pick out the most important news article from 2025, I wouldn't be surprised if they chose this one.

For those who don't understand this stuff: we are now capable of editing some of a person's DNA in ways that predictably change their traits. The baby's liver now has different (and better) DNA than the rest of its body.

We are still struggling, in most cases, with how to deliver the DNA update instructions into the body. But given the pace of change in this space, I expect massive improvements in this update process over time.

Combine this with AI that helps us better understand the genome, and it's going to be a crazy century.

Further reading on related topics:

https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significan...

https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-mak...

https://www.lesswrong.com/posts/yT22RcWrxZcXyGjsA/how-to-hav...

replies(7): >>44000781 #>>44000909 #>>44001300 #>>44001456 #>>44001603 #>>44002146 #>>44010171 #
bglazer ◴[] No.44000781[source]
The “How to make superbabies” article demonstrates a couple of fundamental misunderstandings about genetics that make me think the authors don’t know what they’re talking about at a basic level. Zero mention of linkage disequilibrium. Zero mention of epistasis. Unquestioned assumptions of linear genotype-phenotype relationships for IQ. Seriously, the projections in their graphs into the “danger zone” made me laugh out loud. This is elementary stuff that they’re missing, but the entire essay is so shot through with hubris that I don’t think they’re capable of recognizing it.
replies(3): >>44000801 #>>44002867 #>>44003980 #
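To make the epistasis objection above concrete, here is a toy simulation. It is a minimal sketch, with every locus, effect size, and interaction term invented for illustration: an additive model can fit an observed population well and still misjudge an edited genome, because pushing every locus to its "plus" variant extrapolates across interaction structure the fit never saw.

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_loci, w = 10_000, 50, 2.0   # all sizes/effects invented

    # Genotypes: 0/1/2 copies of the trait-increasing allele at each locus.
    G = rng.binomial(2, 0.5, size=(n_people, n_loci))
    beta = rng.normal(0, 0.1, n_loci)       # small additive effects

    def true_signal(G):
        # The "real" genotype-phenotype map: additive effects plus one
        # epistatic interaction between locus 0 and locus 1.
        return G @ beta + w * G[:, 0] * G[:, 1]

    y = true_signal(G) + rng.normal(0, 1, n_people)   # noisy phenotype

    # Fit a purely additive model (intercept + one weight per locus),
    # the assumption behind a standard polygenic score.
    X = np.column_stack([np.ones(n_people), G])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = np.corrcoef(X @ coef, y)[0, 1]
    print(f"in-sample fit looks fine: r = {r:.2f}")

    # Now "edit" everyone to 2 copies at every locus -- the implicit move
    # in the superbabies-style projections -- and extrapolate.
    G_edit = np.full_like(G, 2)
    predicted_gain = (G_edit - G) @ coef[1:]
    actual_gain = true_signal(G_edit) - true_signal(G)
    print(f"additive model predicts mean gain: {predicted_gain.mean():+.2f}")
    print(f"actual mean gain with epistasis:   {actual_gain.mean():+.2f}")

One epistatic pair is enough to open a gap between predicted and realized gains; real traits plausibly involve many. Linkage disequilibrium compounds the problem: an additive fit can credit a tag variant for a causal neighbor that travels with it in the population but would not travel with an edit.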
cayley_graph ◴[] No.44000801[source]
The EA community is generally incapable of self-awareness. The academic-but-totally-misinformed tone is comparable to reading LLM output. I've stopped trying to correct them; it's too much work on my part and not enough on theirs.
replies(2): >>44000840 #>>44006312 #
Kuinox ◴[] No.44000840[source]
What does EA mean here?
replies(1): >>44000852 #
cayley_graph ◴[] No.44000852[source]
"Effective Altruism", something I find myself aligned with but not to the extremes taken by others.
replies(3): >>44001930 #>>44002050 #>>44005402 #
alexey-salmin ◴[] No.44001930[source]
Technically, LessWrong is about rationalists, not effective altruists, but you're right in the sense that it's the same breed.

They think the key to scientific thinking is to forgo moral limitations, not to study and learn. As soon as you're free from the shackles of tradition, you become 100% rational and therefore 100% correct.

replies(5): >>44002894 #>>44003018 #>>44003129 #>>44003317 #>>44004503 #
jaidhyani ◴[] No.44003129[source]
Approximately no one in the community thinks this. If you can go two days in a rationalist space without hearing about "Chesterton's Fence", I'll be impressed. No one thinks they're 100% rational, nor that this is a reasonable aspiration. Traditions are generally regarded as sufficiently important that no small amount of effort has gone into trying to build new ones.

Not only is it the case that no one thinks anyone, including themselves, is 100% correct, but the community norm is to express credence in probabilities and convert those probabilities into bets when possible. People in the rationalist community constantly, loudly, and proudly disagree with each other, to the point that this can make it difficult to coordinate on anything. And everyone is obsessed with studying and learning, and with constantly trying to come up with ways to do this more effectively.

Like, I'm sure there are people who approximately match the description you're giving here. But I've spent a lot of time around flesh-and-blood rationalists and EAs, and they violently diverge from that account.
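For readers unfamiliar with the credence-into-bets norm mentioned above, a small worked sketch (the numbers are invented; this is only the standard fair-odds and Kelly arithmetic): a 70% credence means any bet at better than 7:3 in your favor has positive expected value, and the Kelly criterion converts that edge into a stake size.

    # Illustrative arithmetic only: cashing out a stated credence as bet terms.
    p = 0.70                       # credence that the claim is true
    fair_odds = p / (1 - p)        # ~2.33-to-1: the break-even line

    # Kelly stake for a bet paying b-to-1 on a win (stake lost otherwise):
    b = 3.0                        # counterparty offers 3-to-1
    kelly = (p * (b + 1) - 1) / b  # fraction of bankroll to stake
    print(f"fair odds {fair_odds:.2f}:1, Kelly stake {kelly:.0%}")

In practice people bet well below full Kelly, but the direction of the exercise is the point: a stated probability becomes terms one is actually willing to accept.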