
524 points | andy99 | 1 comment
WeirderScience No.44536327
The open training data is a huge differentiator. Is this the first truly open dataset of this scale? Prior efforts like The Pile were valuable, but had limitations. Curious to see how reproducible the training is.
replies(2): >>44536400 #>>44537249 #
layer8 No.44536400
> The model will be fully open: source code and weights will be publicly available, and the training data will be transparent and reproducible

This leads me to believe that the training data won’t be made publicly available in full, but merely be “reproducible”. This might mean that they’ll provide references like a list of URLs of the pages they trained on, but not their contents.

replies(3): >>44536448 #>>44536623 #>>44536818 #
TobTobXX No.44536818
Well, when the actual content is hundreds of terabytes in size, providing URLs may be more practical for them and for others.
replies(1): >>44537342 #
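The announcement doesn't say what a "reproducible" release would look like, but a URL-only release is usually paired with a checksum per document so that anyone re-crawling can verify they recovered the same bytes the model was trained on. A minimal sketch of that idea (the manifest format and all names here are hypothetical, not from the project):

```python
import hashlib

# Hypothetical manifest: each entry pairs a source URL with the SHA-256
# of the page content as it existed at crawl time. Only the manifest is
# distributed; the content itself is re-fetched by whoever reproduces it.
MANIFEST = [
    ("https://example.com/page1",
     hashlib.sha256(b"page one text").hexdigest()),
]

def verify(content: bytes, expected_sha256: str) -> bool:
    """Return True if re-fetched content matches the published checksum."""
    return hashlib.sha256(content).hexdigest() == expected_sha256
```

The checksum is what makes this "reproducible" rather than merely "traceable": a page that was edited or deleted since the crawl fails verification, which is also the practical weakness of URL lists compared to distributing the content itself.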
layer8 No.44537342
The difference between content they are allowed to train on and content they are allowed to distribute copies of is likely at least as relevant.