
1311 points msoad | 2 comments
wkat4242 No.35394851
Wow, I continue to be amazed by the progress being made on language models in the scope of weeks. I didn't expect optimisations to move this quickly. Only a few weeks ago we were amazed by ChatGPT, assuming it would never be something to run at home, since it would require $100,000 in hardware (8× A100 cards).
replies(1): >>35394981 #
1. smoldesu No.35394981
Before ChatGPT was in beta, there were already models that fit into 2 GB or less. They were complete shit, but they did exist.
replies(1): >>35395738 #
2. wkat4242 No.35395738
I know, but what's changing is that they aren't shit now. They're not on par with GPT, but they're getting much closer, especially with a little massaging like Stanford has done.
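
For a sense of why expectations shifted, here is a rough back-of-envelope sketch (my own illustration, not from the thread) of the memory needed just to hold model weights at different precisions: a 175B-class model at fp16 is on the order of 350 GB (hence multi-A100 servers), while a 7B model quantized to 4 bits fits in a few GB and can run on a home machine. Parameter counts and bytes-per-weight below are assumptions for illustration, and the estimate ignores activations and the KV cache.

    # Rough VRAM/RAM estimate for holding model weights only (no activations, no KV cache).
    # Hypothetical figures chosen for illustration; not taken from the thread.

    def model_memory_gb(n_params_billions: float, bytes_per_weight: float) -> float:
        """Approximate gigabytes needed to store the weights alone."""
        return n_params_billions * 1e9 * bytes_per_weight / 1e9

    for name, params_b in [("7B model", 7), ("13B model", 13), ("175B-class model", 175)]:
        for precision, bpw in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
            print(f"{name} @ {precision}: ~{model_memory_gb(params_b, bpw):.1f} GB")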