> Have you actually tried agentic, LLM-based frameworks that use tool calling for long-term memory storage and retrieval, or have you decided that, because these tools do not behave perfectly in a fluid environment where humans do not behave perfectly either, it's "impossible"?
i.e. "Have you tried this vague, unnamed thing that I alude to that seems to be the answer that contradicts your point, but actually doesn't?"
AGI = 90% of software devs, psychotherapists, lawyers, and teachers lose their jobs. We are not there.
Once LLMs can fork themselves, reflect, accumulate domain-specific knowledge, and transfer the whole context back into the model weights; once that knowledge can become more important than the pretrained information; once they can form new neurons related to a project topic, then yes, we will have AGI (probably not that far away).
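The closest thing that exists today is offline and manual: periodically fine-tune a small adapter on the transcripts the agent has accumulated. A hedged sketch of that idea follows; the model name, data, and hyperparameters are placeholders, and this is the generic LoRA recipe, not anyone's production pipeline:

```python
# Sketch: "transferring accumulated context back into the weights" as a
# periodic LoRA fine-tune on project transcripts. Everything concrete
# here (model, data, hyperparameters) is a placeholder.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # stand-in for whatever base model the agent runs on
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains a few small matrices instead of the full network, so the
# project-specific knowledge lands in a cheap, swappable adapter.
model = get_peft_model(model, LoraConfig(
    task_type="CAUSAL_LM", r=8, lora_alpha=16, target_modules=["c_attn"]))

# The "accumulated domain knowledge": transcripts of what the agent did.
transcripts = [
    "Bug #412: race in the retry path; fixed by serializing token refresh.",
    "Deploy note: flag X must stay off while the old scheduler is live.",
]
enc = tok(transcripts, truncation=True, padding=True, return_tensors="pt")

class TranscriptDataset(torch.utils.data.Dataset):
    """Causal-LM dataset: the model learns to reproduce its own history."""
    def __len__(self):
        return enc["input_ids"].shape[0]
    def __getitem__(self, i):
        ids = enc["input_ids"][i]
        return {"input_ids": ids,
                "attention_mask": enc["attention_mask"][i],
                "labels": ids.clone()}

Trainer(
    model=model,
    args=TrainingArguments(output_dir="project_adapter",
                           num_train_epochs=1,
                           per_device_train_batch_size=2,
                           report_to=[]),
    train_dataset=TranscriptDataset(),
).train()
model.save_pretrained("project_adapter")  # the project's "new neurons"
```

Note what's missing: nobody runs this inside the loop. It's an offline batch job a human kicks off, which is precisely the gap between today's agents and the always-learning system described above.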
Once LLMs can keep trying to find a bug for days, weeks, and months, step through the debugger, ask people relevant questions, deploy code with new debugging traces, deploy mitigations, and so on, we will have AGI.
Otherwise, AI is stuck in a Groundhog Day scenario: it's forever the brightest intern any company has ever seen, but forever stuck at day 0 on the job, never that useful, always full of potential.