To determine which rows are in the sample, you need to consider only two columns: the weight and a primary key.
If your population lives in a traditional row-oriented store, you'll have to read every row. But if you index your weights, you only need to scan the index, not the underlying population table, to identify the sample.
If your population lives in a column-oriented store, identifying the sample is fast, again because you only need to read two small columns to do it. Column stores are optimized for this case.
If you pull the highest 5 product IDs, sorted by product ID, you're right that it only reads those 5 entries from disk.
But when you ORDER BY RANDOM(), it's forced to do a full-table scan, because RANDOM() can't be indexed.
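You can see the difference in the query planner itself. A minimal sketch using SQLite from Python (the `products` table and its contents are made up for illustration): an `ORDER BY` on the primary key is satisfied by walking the index, while `ORDER BY RANDOM()` forces a scan plus a temporary sort structure.

```python
import sqlite3

# Toy stand-in for the population table; id is the (indexed) primary key.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO products (name) VALUES (?)",
                [(f"p{i}",) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

# Top 5 by primary key: walks the rowid index backwards, no sort step.
by_id = plan("SELECT id FROM products ORDER BY id DESC LIMIT 5")

# ORDER BY RANDOM(): scans the table and sorts via a temp B-tree.
by_rand = plan("SELECT id FROM products ORDER BY RANDOM() LIMIT 5")
```

On current SQLite versions, `by_rand` includes a `USE TEMP B-TREE FOR ORDER BY` step and `by_id` does not; the exact plan text varies by version, but the scan-plus-sort shape is the point.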
No: even without indexes, you only need O(N) memory to find the top N records under any ordering, including a random sort key. You do have to examine every row, but if you are using a column store or have indexed the weight column, the scan requires only a small fraction of the work of a read-everything table scan.
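The O(N) memory claim is just a bounded min-heap over a stream of rows. A sketch (the function name and toy data are my own): keep the N largest keys seen so far; a fresh random key per row gives a uniform sample, while a key like `random() ** (1/weight)` gives the weighted variant.

```python
import heapq
import random

def top_n(rows, n, key):
    """Stream over rows, keeping only the n largest by key -- O(n) memory."""
    heap = []  # min-heap of (key, row); the smallest kept key sits at heap[0]
    for row in rows:
        k = key(row)
        if len(heap) < n:
            heapq.heappush(heap, (k, row))
        elif k > heap[0][0]:
            # Evict the current smallest and keep this row instead.
            heapq.heapreplace(heap, (k, row))
    return [row for _, row in sorted(heap, reverse=True)]

# A uniform random sample of 5 ids out of a million: the "ordering" is a
# random key per row, but memory never exceeds 5 heap entries.
sample = top_n(range(1_000_000), 5, key=lambda _: random.random())
```

Every row is still visited once, which is exactly the disk-IO point being argued below; the heap only fixes the memory side.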
But that's because memory usage isn't the relevant bottleneck here -- it's disk IO. It's still reading the entire table from disk (or, at best, the entire relevant columns).
That's not something you ever want to do in production. Not even if it's only a single column. That will destroy performance on any reasonably large table.
If a full-table scan of all columns takes 180 seconds, then a "small fraction" to scan a single column might take 10 seconds, but queries running on a production database need to be measured in small numbers of milliseconds.