Another issue is that LLMs can go off on a tangent. As context builds up, they forget what their objective was. One wrong turn, and down the rabbit hole they go, never to recover.
I know this because we started solving these problems a year ago. We aren't done yet, but we've covered a lot of ground.
[Plug]: Try it out at https://nonbios.ai:
- Agentic memory → long-horizon coding
- Full Linux box → real runtime, not just toy demos
- Transparent → see & control every command
- Free beta, no invite needed. Works with a throwaway email (Mailinator, etc.)