
184 points by hhs | 2 comments
_flux No.41839545
This must be one of the best applications for LLMs, since you can automatically verify the results and reject them otherwise, right?
replies(5): >>41839668 #>>41839669 #>>41839866 #>>41839977 #>>41840268 #
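The generate-and-verify pattern _flux describes can be sketched as a loop: an untrusted generator proposes candidates, and a cheap deterministic checker accepts or rejects each one. This is a minimal toy illustration, not anything from the thread: the "LLM" is a random guesser, the task is factoring a small integer, and all names (`untrusted_generator`, `verify`, `generate_and_verify`) are hypothetical.

```python
import random

def untrusted_generator(n):
    # Stand-in for an LLM: proposes a candidate factorization of n.
    # It is usually wrong, which exercises the rejection path.
    a = random.randint(2, n - 1)
    return a, n // a

def verify(n, candidate):
    # Cheap, fully automatic check: accept only exact nontrivial factorizations.
    a, b = candidate
    return 1 < a < n and 1 < b < n and a * b == n

def generate_and_verify(n, attempts=10_000):
    # Sample candidates until one passes the checker; give up after a budget.
    for _ in range(attempts):
        candidate = untrusted_generator(n)
        if verify(n, candidate):
            return candidate
    return None

print(generate_and_verify(91))  # 91 = 7 * 13
```

The asymmetry the commenters debate is visible here: verification is one multiplication, while generation may burn many rejected attempts, which is exactly the "externality" the reply below objects to.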
yuhfdr No.41839866
Same with code generation.

But generating useless code or proofs just to discard them is hardly a consequence- and externality-free effort.

replies(1): >>41839910 #
kkzz99 No.41839910
But you can scale LLM generation far more easily than human researchers.
replies(1): >>41839976 #
yuhfdr No.41839976
Sloppyjoes always assume the best case for LLMs and the worst case for their counterexample of choice.

Ten trillion LLMs, powered by a Dyson sphere, that still output slop are still worse than one unappreciated postdoc.

replies(1): >>41841100 #
bluechair No.41841100
That was funny even if I don’t agree with the sentiment.