Answering the real question: it's unlikely these techniques will see widespread "recreational" usage any time soon, as they come with a wide range of risks. Further, the scientific community has learned a lot from previous eugenics programs; anything that happens in the future will happen with both social and political regulation.
It's ultimately hard to predict: many science fiction writers have speculated about this for some time, and social opinion can change quickly when people see new developments.
If you want to make your baby smarter, taller, or more handsome, it's not so easy, because these traits involve thousands of genes.
For this reason, I do not think that curing diseases will lead to designer babies.
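To make the scale problem concrete, here's a minimal toy sketch in Python. The gene count and effect sizes are entirely made up for illustration, not drawn from any real genetics study. It models a polygenic trait as an additive sum of thousands of tiny per-gene effects, so even the single most powerful edit barely moves the trait relative to its natural spread in the population:

    import random

    # Toy model only: a polygenic trait as an additive sum of thousands
    # of tiny gene effects. Gene count and effect sizes are invented for
    # illustration, not taken from any real study.
    random.seed(42)
    N_GENES = 5000
    effects = [random.gauss(0, 0.01) for _ in range(N_GENES)]

    def trait_score(variants):
        """Additive polygenic score over 0/1 variant calls."""
        return sum(e for e, v in zip(effects, variants) if v)

    # Estimate the trait's natural spread by simulating random genomes.
    scores = [
        trait_score([random.randint(0, 1) for _ in range(N_GENES)])
        for _ in range(1000)
    ]
    mean = sum(scores) / len(scores)
    sd = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

    # Editing one gene shifts the score by at most that gene's effect
    # size, which is tiny next to the spread of the whole trait.
    print(f"largest single-gene effect: {max(effects):.4f}")
    print(f"population spread (sd) of the trait: {sd:.4f}")

Under these assumed numbers, the strongest possible single edit moves the trait by roughly a tenth of a standard deviation, which is why fixing a one-gene disease and designing a many-gene trait are such different engineering problems.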
It’s perfectly reasonable to say that while a technology has the propensity to be used for evil, it also has positive applications and that the real benefit now outweighs the potential downside in a hypothetical future.
Otherwise you will go down a rabbit hole, at the bottom of which lies a future where we all just kinda dig in the dirt with our hands until we die, because every technological innovation can be used in a variety of ways.
Like, it’s silly to me that I can’t bring a 1.5” blade keychain utility knife on a flight, and then they hand me a metal butter knife in first class. I could do way more damage with that. But they allow the butter knife because its utility has been shown to far outweigh a potential downside that hasn’t manifested.
> I will slaughter a baby if I know for a fact that baby will grow up to be the next Hitler
This is one of those things that is easy to say precisely due to the impossibility of it ever actually becoming a real decision you have to make.
It will be that people just don't have children at all.
It's true. But things like this should be easy to say, right? Like, we may not be able to act logically, but we should be able to think logically, communicate logically, and show that we are aware of what is logical.
My post got flagged, meaning a lot of people can't separate the two things. So, for example, I may not be able to kill the baby in reality, but I can at least see how irrational I am.
The person who flagged me likely not only can't kill the baby; he has to construct an artificial reality to justify why he can't kill the baby and why his decision must be rational.
There could be other babies who could also grow up to be future Hitlers. So let's say 4 such babies exist. By killing one, I've eliminated 1/4 of the futures with grown-up Hitlers.
This whole thread is getting flagged, likely by an irrational parent who can't even compute natural selection, babies, and Hitler all in a single paragraph.
I'll steelman "fixing defects" by sticking to serious hereditary diseases (and yes, only those that correspond to one or a few known genes). As more and more conditions become treatable, the population with access to resources will have lower healthcare costs by being less susceptible to problems. (Which is a good thing, note!) Insurance companies will have more and more proxies for differentiating that don't involve direct genetic information. Societally, "those people" [the poor and therefore untreated] cost more to support medically and are an increasing burden on the system. Eugenics gains a scientific basis. Do you want your daughter marrying someone genetically substandard, if you don't have the resources to correct any issues that might show up? Probably not; you're more likely to want to build a wall between you and them. Then throw anyone who falls behind the bleeding edge of corrections over it.
It'll be the latest form of redlining, but this time "red" refers literally to blood.
It would maybe be easier for a 15-25 y.o. to kill a baby they don't know and whose parents/family they don't know, and maybe even easier if they don't speak the same language or look like them. Of course, the baby wouldn't be the only one they'd have to kill, most likely.
I submit that it would be very, very different if you found out that your 4 year old child was going to go on to be the next Hitler. For a "normal" person, I think they would go to the ends of the earth to try to shape them into the kind of person that wouldn't do it. I think very few people would coldly calculate "welp, guess I gotta execute this adorable little child I've grown so attached to" as it looks up at them saying "I love you so much forever, mommy/daddy" with its little doe eyes.
(ETA: it also brings up side questions about nature vs nurture and free will)
And then consider the lifelong repercussions of the emotional fallout. You can use all the logic in the world to justify the action, but your human brain will still torment you over it. And likely, most of the other human brains that learn about it would torment you as well.
---
So, while I think you can say things like that, i.e. you have the ability and the allowance, I think you should question whether you should. I think saying those kinds of things really doesn't add much to the discussion, because it's really just an uninformed platitude that only someone with a lack of life experience would believe.
For me this all highlights the fact that meaty ethical questions don't have a simple reductive answer. Which ties back into the original problem: OP outright states that this is simply and clearly the wrong path to go down.
(PS the downvoting/flagging could be due to breaking the guidelines around discussing downvotes and flags, and not actually due to the topical content of the posts, and/or assuming bad faith on the part of other users as such: https://news.ycombinator.com/newsguidelines.html)
But, I think that it's misguided to apply the human problem of othering to a given technology. Regardless of technology X, humans are gonna human. So, if X helps some people, we should consider it on that basis. Because without X, we will still have an endless stream of other reasons to draw red lines, as you allude to. Except in addition we'll also still have the problem that X could've helped solve.
If gene editing to cure diseases leads to a future where people want to shunt off the poor that are now the outsized burden of the healthcare system, the answer from where I sit is to find ways to make the gene therapies available to them, not to cart them off to concentration camps while they await extermination. This will require all the trappings of human coordination we've always had.
Preventing X from ever coming to fruition doesn't at all prevent all possible futures where concentration death camps are a possibility. To me they are orthogonal concerns.
Even if you can convince one culture/society not to do it, how do you stop others? Force? Now you have a different manifestation of the same problem to solve. Society needs to learn how to "yes, and..." more when it comes to this stuff. Otherwise, it's just war all the way down.
Well, you're wrong. Where is the line drawn for what constitutes a disease? Retardation? Autism? Eventually every child below, say, 130 IQ will be considered disabled and unable to find work.
Apply this to every other trait: cardiovascular health, strength, height, vision, etc. All forms of weakness can be considered a disease. The end product of eugenics is that mankind will be made into a docile and fragile monoculture.
> If you want to make your baby smarter, taller, or more handsome, it's not so easy, because these traits involve thousands of genes.
And? It's obvious that the technology will eventually be capable of this, just not all at once. It starts with single-gene mutations, then it will be tens of genes, and then hundreds and thousands.
That is the slippery slope: there is absolutely nothing about your reasoning that prevents one step from leading to another.
It's helpful to evaluate claims on this thread in the context of the story. It's possible (though still a very open question) that complex behavioral traits will generally become predictable or maybe even controllable in the future. But those would require breakthroughs (including basic science discoveries breaking in the direction baby-designers want them to) more significant than the announcement on this story.
You should, because many choices in life are not strictly black and white: saving a baby's life versus introducing gene editing to humanity. If there were a baby we knew would grow up to slaughter millions, it's absolutely worth talking about. In the age of AI and gene editing, where things are influencing what it even means to be human, it is wise to stop and pause for a minute to ask the right question rather than charge forward with change that can't be taken back, all because we wanted to save a baby.
There's no inherent metaphysical worth in being at any particular level of strength, height, etc., so we can spread whatever is most convenient. I think the arguments against (that I see being made) ultimately devolve into magical thinking and an a priori "this is bad." (I am glad to be shown otherwise.) In fact, we are already messing with human fertility in possibly unsustainable ways, so maybe more tools are needed as part of the way out.
Of course there are problems of political execution, corruption, etc., but I don't see it as any different from other technological challenges that civilization has dealt with. I.e., we need better politics, but the tech is not at fault. Gene editing consists of isolated interventions, so in that respect it's more manageable than, for example, mass surveillance, which is hidden and continuous.
One more esoteric argument is that we cannot socially agree on what traits are desirable: the ‘Twenty-first Voyage of Ijon Tichy’ scenario. So it's the opposite of "monoculture," in a way. But I don't see people expanding on that.
> This will require all the trappings of human coordination we've always had.
It is also true that we've never had it as quickly as it was needed, nor done as well as it needed to be. We will blunder into things that would be easy to predict in advance if we were willing to look and accept what we see, but we won't.
I absolutely agree that this advance is a great thing and should be pursued further. But I also think that simply categorizing it as good or bad is a way to willfully ignore the unintended consequences. We should at least try to do better.
> Society needs to learn how to "yes, and..." more when it comes to this stuff.
Absolutely. I just think that requires nuance, wide open eyes, and acceptance of uncomfortable truths. Part of the nuance is not boiling it down to a yes/no question of "should this proceed?" (For example, how about: "How can we utilize these new capabilities to maximize benefit and minimize harm, when the only real lever we seem to have to work with is the profit motive? Everything else is getting undermined in its service.")