It'll do this even in VirtualBox, running about 20x snappier than the native host, which boggles my mind.
This is quite true for LLMs. They can do basic arithmetic, but they can also read problem statements from many diverse areas of mathematics, describe what they are about, and make suggestions (right or wrong) about how they might be solved.
Classic AI systems suffered from the Frame problem, where common-sense reasoning depended on facts that were never stated in the system's logic.
Now, LLMs have largely solved the Frame problem. It turns out the solution was to compress large swathes of human knowledge into a form that can be accessed quickly, so that the relevant parts of all that knowledge are activated when needed. Of course, this kind of flexibility demands a lot of resources.