
346 points by throw0101c | 1 comment
oytis ◴[] No.44609364[source]
I just hope that when (if) the hype is over, we can repurpose the capacity for something useful (e.g. drug discovery).
replies(16): >>44609452 #>>44609461 #>>44609463 #>>44609471 #>>44609489 #>>44609580 #>>44609632 #>>44609635 #>>44609712 #>>44609785 #>>44609958 #>>44609979 #>>44610227 #>>44610522 #>>44610554 #>>44610755 #
charleshn ◴[] No.44610227[source]
I'm always surprised by the number of people posting here who are dismissive of AI and its obvious, unstoppable progress.

Just look at what happened with chess, Go, strategy games, protein folding, etc.: it's clear that pretty much any field or problem that can be formalised and cheaply verified (e.g. mathematics, algorithms) will be solved, and that it's only a matter of time before we have domain-specific ASI.

I strongly encourage everyone to read about the bitter lesson [0] and verifier's law [1]. (A toy sketch of that generate/verify asymmetry follows the links below.)

[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html

[1] https://www.jasonwei.net/blog/asymmetry-of-verification-and-...
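
To make the verification asymmetry concrete, here is a minimal sketch (my own illustration, not taken from either link) using subset-sum as a stand-in problem: checking a proposed answer is cheap (polynomial), while finding one by brute force takes exponential time in the worst case.

    from itertools import combinations

    def verify(nums, target, subset):
        # Cheap check (polynomial): is `subset` drawn from `nums` and does it sum to `target`?
        pool = list(nums)
        for x in subset:
            if x not in pool:
                return False
            pool.remove(x)
        return sum(subset) == target

    def brute_force_solve(nums, target):
        # Expensive search (O(2^n)): try every subset until one passes the verifier.
        for r in range(len(nums) + 1):
            for candidate in combinations(nums, r):
                if verify(nums, target, candidate):
                    return list(candidate)
        return None

    nums, target = [3, 34, 4, 12, 5, 2], 9
    solution = brute_force_solve(nums, target)       # finds [4, 5]
    print(solution, verify(nums, target, solution))  # confirming it is the cheap part

The toy problem itself doesn't matter; the point is that any domain with this shape (expensive to generate, cheap to verify) provides an automated reward signal, which is the premise behind verifier's law.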

replies(8): >>44610262 #>>44610288 #>>44610349 #>>44610664 #>>44610947 #>>44611931 #>>44614230 #>>44614473 #
overgard ◴[] No.44610349[source]
We need to stop calling what we have AI. LLMs can't reliably reason. Until they can, the progress is far from unstoppable.
replies(1): >>44612805 #
kadushka ◴[] No.44612805[source]
I love how people are transitioning from “LLMs can’t reason” to “LLMs can’t reliably reason”.
replies(1): >>44614205 #
charleshn ◴[] No.44614205[source]
Frontier models went from not being able to count the number of 'r's in "strawberry" (see the one-line check below) to getting gold at the IMO in under two years [0], and people keep repeating the same clichés such as "LLMs can't reason" or "they're just next-token predictors".

At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.

[0] https://x.com/alexwei_/status/1946477742855532918
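
For what it's worth, the "strawberry" test is exactly the kind of claim that is trivial to verify mechanically, which is why it became a meme; a one-line check (Python, purely for illustration):

    word = "strawberry"
    # Character-level ground truth that early models famously got wrong,
    # a failure usually attributed to subword tokenization hiding individual letters.
    print(word.count("r"))  # -> 3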

replies(1): >>44616278 #
bwfan123 ◴[] No.44616278[source]
> At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.

Based on past history with FrontierMath and AIME 2025 [1], [2], I would not trust announcements that can't be independently verified. I am excited to try it out, though.

Also, LLM performance on these IMO problems was not even at bronze level [3].

Finally, this paper shows that LLMs were mostly just bluffing [4].

[1] https://www.reddit.com/r/slatestarcodex/comments/1i53ih7/fro...

[2] https://x.com/DimitrisPapail/status/1888325914603516214

[3] https://matharena.ai/imo/

[4] https://arxiv.org/pdf/2503.21934