Something weird is happening with LLMs and chess
(dynomight.substack.com)
688 points | crescit_eundo | 1 comment | 14 Nov 24 17:05 UTC
1. kmeisthax | 15 Nov 24 01:51 UTC | No. 42143243
>>42138289 (OP)
If tokenization is such a big problem, then why aren't we training new base models on partially de-tokenized data? E.g., during training, randomly substitute some percentage of the input tokens with the individual letters they encode.
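A minimal sketch of what that could look like as an on-the-fly preprocessing step, assuming a BPE-style tokenizer that exposes `encode`/`decode` (as in tiktoken or Hugging Face tokenizers); the function name, the `p` parameter, and the per-token loop are illustrative, not from the article or this thread:

```python
import random

def randomly_detokenize(token_ids, tokenizer, p=0.1, seed=None):
    """Replace a random fraction `p` of tokens with their individual
    characters, re-encoded one character at a time.

    Hypothetical sketch: `tokenizer` is assumed to expose `decode` and
    `encode` in the style of common BPE tokenizers.
    """
    rng = random.Random(seed)
    out = []
    for tok in token_ids:
        if rng.random() < p:
            # Decode this token back to text, then re-encode each
            # character separately so the model sees letter-level tokens.
            text = tokenizer.decode([tok])
            for ch in text:
                out.extend(tokenizer.encode(ch))
        else:
            out.append(tok)
    return out
```

Applied per batch with a small `p`, the model would see the same text sometimes as whole BPE tokens and sometimes as raw characters, which is presumably what would teach it how its own tokens are spelled.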