
111 points | galeos | 2 comments
nopelynopington ◴[] No.43716144[source]
I built it at home this morning and tried it. Perhaps my expectations were too high, but I wasn't terribly impressed. I asked it for a list of ten types of data I might show on a home info display panel; it gave me three. When I clarified that I wanted ten, it gave me six. Every request after that just returned the same six things.

I know it's not ChatGPT-4, but I've tried other very small models that run on CPU only and had better results.

replies(2): >>43716331 #>>43720674 #
1. Me1000 ◴[] No.43720674[source]
This is a technology demo, not a model you'd want to use. Because BitNet models average only 1.58 bits per weight, you'd expect to need a much larger parameter count than an fp8/fp16 counterpart to reach comparable quality. On top of that, this is only a 2-billion-parameter model in the first place, and even fp16 2B-parameter models generally perform pretty poorly.
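As a rough sketch of why the bit width matters (my own back-of-the-envelope numbers, not from the thread; weight storage only, ignoring activations, KV cache, and any layers kept at higher precision, and assuming the 2B parameter count mentioned above):

    # Approximate weight-storage footprint for a ~2B-parameter model
    # at different bits per weight. Weights only; runtime memory is higher.
    def weight_storage_gb(num_params: float, bits_per_weight: float) -> float:
        return num_params * bits_per_weight / 8 / 1e9

    PARAMS = 2e9  # ~2B parameters, as in the demo model
    for label, bits in [("BitNet b1.58 (ternary)", 1.58), ("fp8", 8.0), ("fp16", 16.0)]:
        print(f"{label:>24}: ~{weight_storage_gb(PARAMS, bits):.2f} GB")

That works out to roughly 0.4 GB of weights for the 1.58-bit model versus about 2 GB at fp8 and 4 GB at fp16, which is the trade-off: you spend far less memory per parameter, so you'd normally want more parameters to compensate.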
replies(1): >>43720945 #
2. nopelynopington ◴[] No.43720945[source]
OK, that's fair. I still think something was up with my build, though; the online demo worked far better than my local build.