
1525 points saeedesmaili | 1 comments | | HN request time: 0.263s | source
andai [] No.43653301[source]
Ironically the old horses were faster! Run XP on modern hardware (if you can get it running at all) and you'll see what I mean. Explorer opens fully rendered in the span of a single frame (0.016 seconds). And XP was very slow and bloated for its time!

It'll do this even in VirtualBox, running about 20x snappier than the native host, which boggles my mind.

replies(7): >>43653493 #>>43653839 #>>43655971 #>>43656456 #>>43658105 #>>43664280 #>>43666798 #
svachalek [] No.43653493[source]
It's amazing how fast we can eat up new hardware capabilities. The old 1 MHz 6502 CPUs were capable of running much more sophisticated software than most people today imagine, with a thousandth or even a millionth of the hardware. And now we're asking LLMs to answer math questions, using billions of operations to perform something a single CPU instruction can handle.
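To put a rough number on that last claim, here's a back-of-envelope sketch. The model size, answer length, and the ~2 FLOPs-per-parameter-per-token rule of thumb are all illustrative assumptions, not measurements:

```python
# Rough comparison: a CPU adds two integers in one ADD instruction, while a
# transformer LLM spends roughly 2 * N FLOPs per generated token for a model
# with N parameters (a common rule-of-thumb estimate, not an exact figure).

def llm_flops(params: int, tokens: int) -> int:
    """Approximate forward-pass cost: ~2 FLOPs per parameter per token."""
    return 2 * params * tokens

# Assume a hypothetical 7B-parameter model emitting a 5-token answer to "2+2?"
cost = llm_flops(7_000_000_000, 5)
print(f"~{cost:.1e} FLOPs vs. 1 ADD instruction")  # ~7.0e+10 FLOPs
```

Even if the constants are off by an order of magnitude, the gap to a single-cycle instruction is still around ten orders of magnitude.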
replies(1): >>43655738 #
1. TuringTest [] No.43655738[source]
The classical answer to why more hardware resources are needed for the same tasks is that the new system allows far more flexibility. A system can be thoroughly optimized for a single problem domain, but then it can only be used for that purpose alone.

This is quite true for LLMs. They can do basic arithmetic, but they can also read problem statements in many diverse mathematical areas and describe what they're about, or make (right or wrong) suggestions on how they can be solved.

Classic AIs suffered from the Frame problem, where common-sense reasoning depended on facts not stated in the system's logic.

Now, LLMs have largely solved the Frame problem. It turns out the solution was to compress large swathes of human knowledge in a form that can be accessed fast, so that the relevant parts of all that knowledge are activated when needed. Of course, this approach to flexibility requires lots of resources.