How can anything be good without an awareness of evil? It's not possible to eliminate "bad things" from a model's training, because then it wouldn't know what to avoid doing.
EDIT: "Waluigi effect"
replies(6):
Is there a way to make this point without both personifying LLMs and assuming some intrinsic natural qualities like good or evil?
An AI in the present lacks the capacity for good and evil, morals, ethics, whatever. Why aren't developers, companies, and integrators directly accountable? We haven't approached full Ghost in the Shell yet.