
1165 points | jbredeche | 1 comment
tuna-piano No.44000717
If someone in the year 2050 were to pick out the most important news article from 2025, I wouldn't be surprised if they chose this one.

For those who don't understand this stuff - we are now capable of editing some of a person's DNA in ways that predictably change its attributes. The baby's liver now has different (and better) DNA than the rest of its body.

We are still struggling in most cases with how to deliver the DNA update instructions into the body. But given the pace of change in this space, I expect massive improvements in the delivery process over time.

Combined with AI to better understand the genome, this is going to be a crazy century.

Further reading on related topics:

https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significan...

https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-mak...

https://www.lesswrong.com/posts/yT22RcWrxZcXyGjsA/how-to-hav...

bglazer No.44000781
The “How to make superbabies” article demonstrates a couple of fundamental misunderstandings about genetics that make me think the authors don’t know what they’re talking about at a basic level. Zero mention of linkage disequilibrium. Zero mention of epistasis. Unquestioned assumptions of linear genotype-phenotype relationships for IQ. Seriously, the projections in their graphs into the “danger zone” made me laugh out loud. This is elementary stuff that they’re missing, but the entire essay is so shot through with hubris that I don’t think they’re capable of recognizing that.
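A toy numerical sketch of the epistasis point (my own illustration, not from the article under discussion; loci, effect sizes, and the interaction term are all made up): if a trait has even one non-additive gene-gene interaction, a purely additive model can fit the observed population reasonably well yet mispredict the value of a genotype pushed far outside the observed range, which is exactly what "stack every trait-increasing allele" projections do.

```python
# Simulate a trait with one pairwise epistatic interaction, fit an
# additive-only (linear) model, then "edit" every locus to the allele
# the additive model favors and compare prediction to the true value.
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 10                                       # individuals, loci
G = rng.binomial(2, 0.5, size=(n, m)).astype(float)   # allele counts 0/1/2

beta = rng.normal(0, 1, m)                            # true additive effects
# True phenotype includes an interaction between loci 0 and 1 (epistasis)
phenotype = G @ beta + 0.8 * G[:, 0] * G[:, 1]

# Fit the best purely additive approximation by least squares
coef, *_ = np.linalg.lstsq(G, phenotype, rcond=None)

# Hypothetical "edited" genotype: homozygous for the trait-increasing
# allele at every locus, far outside the fitted data's range
edited = np.where(coef > 0, 2.0, 0.0)
predicted = edited @ coef                             # additive extrapolation
actual = edited @ beta + 0.8 * edited[0] * edited[1]  # true value
print(f"additive prediction: {predicted:.2f}, true value: {actual:.2f}")
```

The fitted additive coefficients absorb part of the interaction (they depend on the allele frequencies in the sample), so the extrapolated prediction and the true value diverge even in this noise-free toy.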
cayley_graph No.44000801
The EA community is generally incapable of self-awareness. The academic-but-totally-misinformed tone is comparable to reading LLM output. I've stopped trying to correct them; it's too much work on my part and not enough on theirs.
Kuinox No.44000840
What does EA mean here?
cayley_graph No.44000852
"Effective Altruism", something I find myself aligned with but not to the extremes taken by others.
alexey-salmin No.44001930
Technically lesswrong is about rationalists, not effective altruists, but you're right in the sense that it's the same breed.

They think that the key to scientific thinking is to forgo moral limitations, not to study and learn. As soon as you're free from the shackles of tradition, you become 100% rational and therefore 100% correct.

winterdeaf No.44003317
So much vitriol. I understand it's cool to hate on EA after the SBF fiasco, but this is just smearing.

The key to scientific thinking is empiricism and rationalism. Some people in EA and lesswrong extend this to moral reasoning, but utilitarianism is not a pillar of these communities.

xrhobo No.44006285
Empiricism and rationalism both tempered by a heavy dose of skepticism.

On the other hand, maybe that is some kind of fallacy itself. I almost want to say that "scientific thinking" should be called something else. The main issue is the lack of experiment. Using the word "science" without experiment leads to all sorts of nonsense.

A word that means "scientific thinking, as much as possible, without experiment" would at least embed a dose of skepticism in the process.

The Achilles' heel of rationalism is the descent into modeling complete nonsense. I should give lesswrong another chance, I suppose, because that would sum up my experience so far, empirically.

EA to me seems like obvious self-serving nonsense. Hiding something in the obvious to avoid detection.