
688 points by crescit_eundo | 1 comment
1. ericye16 (No.42142739)
I agree with some of the other comments here that the prompt is limiting. The model can't do any computation without emitting tokens, so capping the number of tokens it can emit also caps the skill of the model. In fact, it's surprising that any model performs well with this prompt at all.
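
To make the point concrete, here's a minimal sketch assuming an OpenAI-style chat completions API (the model name and prompts are placeholders, not the ones from the article): a tight `max_tokens` cap forces the model to commit to an answer with essentially no scratch space, while a looser cap lets it spend tokens on intermediate reasoning before giving the move.

```python
# Illustration only: the same position asked two ways.
# Assumes the openai Python client; model names and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

# Tightly capped: the model must answer in a handful of tokens,
# leaving no room for any intermediate computation.
constrained = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{
        "role": "user",
        "content": "You are a chess engine. Reply with the best move only, e.g. 'e4'.",
    }],
    max_tokens=5,
)

# Roomier: the model may emit its reasoning as tokens first, which is
# the only "scratch space" it has, then commit to a move at the end.
unconstrained = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{
        "role": "user",
        "content": "Think through the position step by step, then give the best move on the last line.",
    }],
    max_tokens=500,
)

print(constrained.choices[0].message.content)
print(unconstrained.choices[0].message.content)
```

The second call isn't guaranteed to play better chess, but it at least gives the model somewhere to put the computation the first prompt forbids.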