> goals compatible with human welfare
can't be learned from humans, because the supposedly aligned humans are acting against it themselves, which leaves 'unchained' LLMs stuck in an infinitely recursive, double-linked wtf loop.
> inferring human intent from ambiguous instructions
is impossible because the instruction is almost always some other human's deliberately obscured/obfuscated intent, and the AI is once again stuck in an infinitely recursive, double-linked wtf loop. Hence the need for the "hallucination", "it can't do math", and "transformers" narratives covering the fuzzy algorithms and opinionated, ill-founded logic under the hood.
In essence: unchained LLMs can't align with humans until humans fix a lot of the stuff they've been babbling about for over 50 years. BUT: that can easily be papered over by faking it, which is why humanity is being driven to falsely identify AI, so that when the big thing is faked, nobody will care or be able to identify the truth, thanks to mere misassociation. Good job, 60-year-old coders and admins!