
159 points todsacerdoti | 1 comment | source
boznz ◴[] No.40712390[source]
Humans simply cannot keep the whole stack of a complex system in memory; that is why we abstract layers with APIs etc. and generally specialize in one layer only.

My (sci-fi) book postulated that an AGI (a real AGI; after all, it was sci-fi) would simply discard everything the humans wrote and rewrite the complete stack (including, later on, the hardware and ISA) in machine code, without anything unnecessary for the task and of course totally unreadable to a human. It is an interesting scenario to ponder.

replies(5): >>40712536 #>>40712588 #>>40712846 #>>40714181 #>>40715499 #
layer8 ◴[] No.40712588[source]
While AIs are able to keep more in their “mind” at the same time than humans can, there is still a cost to consider (e.g. the token limit of the context window). If software requires more effort to change (adding a feature, fixing a bug) due to spaghetti architecture, then that will also add to the cost.
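The context-window point can be made concrete with a back-of-the-envelope calculation. A minimal sketch, where the 10-tokens-per-line heuristic and the 128k context window are illustrative assumptions, not figures from the comment:

```python
# Rough estimate: does a codebase fit in a model's context window?
# Both tokens_per_line and context_window are assumed, illustrative values.

def tokens_for_codebase(lines_of_code: int, tokens_per_line: int = 10) -> int:
    """Estimate token count for a codebase; ~10 tokens/line is a crude heuristic."""
    return lines_of_code * tokens_per_line

context_window = 128_000  # assumed token limit; varies widely by model

for loc in (10_000, 100_000, 1_000_000):
    needed = tokens_for_codebase(loc)
    print(f"{loc:>9} LOC -> ~{needed:>9} tokens, fits: {needed <= context_window}")
```

On these assumptions a small project fits comfortably, but a million-line system exceeds the window by nearly two orders of magnitude, which is why a tangled architecture that forces the model to read more of the codebase per change translates directly into cost.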

Secondly, we may want to keep software on a complexity level understandable by humans, in order to not become completely dependent on the software engineering AIs.

Thirdly, the same effect that we observe with software written by humans becoming too complex for any human to understand is likely to also occur with AIs of any given capacity. AIs will have to take care, just as humans do, that the complexity doesn’t grow to exceed their ability to maintain the software. The old adage “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” probably also applies to AIs.

The essential complexity of software doesn’t depend on whether a human or an AI writes it. AIs may be able to handle more accidental complexity, but it’s unclear whether that is an actual benefit, given the cost arguments above. So perhaps their advantage will only be useful for creating and maintaining software with more essential complexity than humans can handle, and the question is whether we want to depend on such software.

replies(1): >>40719984 #
1. ◴[] No.40719984[source]