337 points throw0101c | 23 comments
oytis ◴[] No.44609364[source]
I just hope when (if) the hype is over, we can repurpose the capacity for something useful (e.g. drug discovery).
replies(16): >>44609452 #>>44609461 #>>44609463 #>>44609471 #>>44609489 #>>44609580 #>>44609632 #>>44609635 #>>44609712 #>>44609785 #>>44609958 #>>44609979 #>>44610227 #>>44610522 #>>44610554 #>>44610755 #
1. charleshn ◴[] No.44610227[source]
I'm always surprised by the number of people posting here that are dismissive of AI and the obvious unstoppable progress.

Just looking at what happened with chess, Go, strategy games, protein folding etc., it's obvious that pretty much any field/problem that can be formalised and cheaply verified - e.g. mathematics, algorithms etc. - will be solved, and that it's only a matter of time before we have domain-specific ASI.

I strongly encourage everyone to read about the bitter lesson [0] and verifier's law [1].

[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html

[1] https://www.jasonwei.net/blog/asymmetry-of-verification-and-...

replies(8): >>44610262 #>>44610288 #>>44610349 #>>44610664 #>>44610947 #>>44611931 #>>44614230 #>>44614473 #
2. oytis ◴[] No.44610262[source]
It's very different from chess etc. If we could formalise and "solve" software engineering precisely, it would be really cool, and probably indeed just lift programming to a new level of abstraction.

I don't mind if software jobs move from writing software to verifying software either, if it makes the whole process more efficient and the software becomes better as a result. Again, that's not what is happening here.

What is happening, at least in the minds of AI-optimist CEOs, is "disruption": drop the quality while cutting costs dramatically.

replies(1): >>44610298 #
3. bigyabai ◴[] No.44610288[source]
People assume (rightly so) that the progress in AI should be self-evident. If the whole thing is really working that great, we should expect to see real advances in these fields. Protein-folding AI should lower the prices of drugs and create competitive new treatments at an unprecedented rate. Photo and video AI should be enabling film directors and game directors to release higher-quality content faster than ever before. Text AI should be spitting out Shakespeare-toppling opuses on a monthly basis.

So... where's the kaboom? Where's the giant, earth-shattering kaboom? There are solid applications for AI in computer vision and sentiment analysis right now, but even these are fallible and have limited effectiveness when you do deploy them. The grander ambitions, even for pared-back "ASI" definitions, are just kicking the can further down the road.

replies(1): >>44610445 #
4. charleshn ◴[] No.44610298[source]
I mentioned algorithms, not software engineering, precisely for that reason.

But the next step is obviously increased formalism via formal methods, deterministic simulators etc., basically so that one could define an environment for an RL agent.
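A minimal sketch of what such an RL environment might look like, assuming a toy sorting task with an exact, cheap verifier as the reward signal (all names here - `SortingEnv`, `step`, and so on - are illustrative, not from any real framework):

```python
import random

class SortingEnv:
    """Toy environment: the agent's output is scored by a cheap, deterministic verifier."""
    def __init__(self, n=5):
        self.target = list(range(n))

    def reset(self):
        # A fresh shuffled instance of the problem.
        self.state = random.sample(self.target, len(self.target))
        return list(self.state)

    def step(self, proposed):
        # The verifier is exact and cheap: reward 1.0 iff the proposal
        # is the sorted version of the current state.
        ok = sorted(self.state) == proposed
        return (1.0 if ok else 0.0), True  # (reward, episode_done)

env = SortingEnv()
obs = env.reset()
reward, done = env.step(sorted(obs))  # a perfect 'agent' for this toy task
```

The point is only the shape of the loop: once a problem admits a verifier like this, the reward is unambiguous and the bitter-lesson machinery can grind on it.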

replies(2): >>44610378 #>>44610657 #
5. overgard ◴[] No.44610349[source]
We need to stop calling what we have AI. LLMs can't reliably reason. Until they can, the progress is far from unstoppable.
replies(1): >>44612805 #
6. bigyabai ◴[] No.44610378{3}[source]
I'll bet you $1,000*10^32 that AI never formalizes a novel FFT algorithm worth more than a dime.
7. TheBicPen ◴[] No.44610445[source]
The kaboom already happened on user-generated media platforms. YouTube, Facebook, TikTok, and so on are flooded with AI-generated videos, photos, sounds, and so on. The sheer volume of this low-quality slop exists because AI lowered the barrier to entry for creating content. In this space the progress is not happening by pushing the upper bound of quality higher but by driving the cost of minimal quality down to near zero.
replies(1): >>44610730 #
8. puchatek ◴[] No.44610657{3}[source]
It's unlikely that LLMs are gonna get us there, though. They have ingested all relevant data at this point, and the net effect might very well kill future sources of quality data. How is e.g. Stack Overflow gonna stay alive if the next generation of programmers relies mainly on Copilot and vibe coding? And what will the LLMs scrape once it's gone?
9. mvieira38 ◴[] No.44610664[source]
Your examples are not LLMs, though, and don't really behave like them at all. If we take the chess analogy and design an "LLM-like chess engine", it would behave like an average 1400 London spammer, not like Stockfish, because it would try to play like the average human plays in its database.

It isn't entirely clear what problem LLMs are solving and what they are optimizing towards... They sound humanlike and give some good solutions to stuff, but there are so many glaring holes. How are we so many years and billions of dollars in and I can't reliably play a coherent game of chess with ChatGPT, let alone have it be useful?

replies(2): >>44610739 #>>44613850 #
10. mvieira38 ◴[] No.44610730{3}[source]
Another perspective for the kaboom is search and programming tasks for the average person.

For the average consumer, LLM chatbots are infinitely better than Google at search-like tasks, and in effect solve that problem. Remember when we had to roll our eyes at dad because he asked Google "what are some cool restaurants?" instead of "nice restaurants SF 2018 reddit"? Well, that is over, he can ask that to ChatGPT and it will make the most effective searches for him, aggregate and answer. Remember when a total noob had to familiarize himself with a language by figuring out hello world, then functions, etc? Now it's over, these people can just draft a toy example of what they want to build with Cursor instantly, tell it to make everything nice and simple, and then have ChatGPT guide them through what is happening.

In some industries you just don't need much more code quality than what LLMs give you. A quick .bat script doesn't need you to know the best implementation of anything, and neither does a Python scraper using only the stdlib, but these were locked behind programming knowledge before LLMs.
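The stdlib-only scraper mentioned above might look something like this hypothetical sketch - no third-party packages, just `html.parser` to pull links out of a page (the HTML is inlined here so the example is self-contained; a real script would fetch it with `urllib.request.urlopen(url).read().decode()`):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from every <a> tag in a document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Inlined page stands in for a fetched one.
page = '<html><body><a href="/a">A</a><a href="/b">B</a></body></html>'
parser = LinkCollector()
parser.feed(page)
print(parser.links)  # ['/a', '/b']
```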

11. throw310822 ◴[] No.44610739[source]
Maybe you didn't realise that LLMs have just wiped out an entire class of problems, maybe entire disciplines - do you remember "natural language processing"? What, ehm, happened to it?

Sometimes I have the feeling that what happened with LLMs is so enormous that many researchers and philosophers still haven't had time to gather their thoughts and process it.

I mean, shall we have a nice discussion about the possibility of "philosophical zombies"? On whether the Chinese room understands or not? Or maybe on the feasibility of the mythical Turing test? There's half a century or more of philosophical questions and scenarios that are not theory anymore, maybe they're not even questions anymore- and almost from one day to the other.

replies(2): >>44610990 #>>44611815 #
12. rcpt ◴[] No.44610947[source]
Have you ever seen a company say "welp, we wrote all the code. Now we're done?"
13. jpc0 ◴[] No.44610990{3}[source]
> do you remember "natural language processing"? What, ehm, happened to it

There's this paper [1] you should read; it sparked an entire new AI dawn. It might answer your question.

1. https://arxiv.org/abs/1706.03762
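For reference, the central operation of that paper ("Attention Is All You Need") is scaled dot-product attention:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
```

where $Q$, $K$, $V$ are the query, key, and value matrices and $d_k$ is the key dimension used for scaling.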

14. mvieira38 ◴[] No.44611815{3}[source]
How is NLP solved, exactly? Can LLMs reliably (that is, with high accuracy and high precision) read, say, literary style from a corpus and output tidy data? Maybe if we ask them very nicely it will improve the precision, right? I understand what we have now is a huge leap, but the problems in the field are far from solved, and honestly BERT has more use cases in actual text analysis.

"What happened with LLMs" is what exactly? From some impressive toy examples like chatbots, we as a society decided to throw all our resources into these models, and they still can't fit anywhere in production except for assistant stuff.

replies(1): >>44613617 #
15. bwfan123 ◴[] No.44611931[source]
> I'm always surprised by the number of people posting here that are dismissive of AI and the obvious unstoppable progress

Many of us have been through previous hype-cycles like the dot-com boom, and have learned to be skeptical. Some of that learning has been "reinforced" by layoffs in the ensuing busts (reinforcement learning). A few claims in your note, like "it's only a matter of time before we have domain-specific ASI", are jarring, as you are "assuming the sale". LLMs are great as a tool for some use cases - nobody denies that.

The investment dollars are creating a class of people who are fed by those dollars, and have the incentive to push the agenda. The skeptics in contrast have no ax to grind.

16. kadushka ◴[] No.44612805[source]
I love how people are transitioning from “LLMs can’t reason” to “LLMs can’t reliably reason”.
replies(1): >>44614205 #
17. throw310822 ◴[] No.44613617{4}[source]
> Can LLMs reliably (that is, with high accuracy and high precision) read, say, literary style from a corpus and output tidy data?

I think they have the capability to do it, yes. Maybe it's not the best tool you can use - too expensive, or too flexible to focus with high accuracy on that single task - but yes, you can definitely use LLMs to understand literary style and extract data from it. Depending on the complexity of the text, I'm sure they can do jobs that BERT can't.

> they still can't fit anywhere in production

Not sure what you mean by "production", but there's an enormous number of people using them for work.

18. charcircuit ◴[] No.44613850[source]
>because it would try to play like the average human plays in it's database.

Why would it play like the average? LLMs pick tokens to try to maximize a reward function; they don't just pick the most common word from the training data set.
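A rough sketch of what decoding looks like in practice: the model emits logits, and a sampler draws from the induced distribution - low temperatures approach greedy argmax, while higher ones surface less common tokens. The numbers below are made up purely for illustration:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling over the token distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]                       # token 0 is likeliest, but not certain
token = sample_token(logits, temperature=0.7)  # stochastic choice
greedy = sample_token(logits, temperature=1e-6)  # effectively argmax -> 0
```

So neither view in this exchange is quite "pick the most common word": the base objective learns a distribution over next tokens, and the decoding strategy decides how sharply to commit to its mode.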

19. charleshn ◴[] No.44614205{3}[source]
Frontier models went from not being able to count the number of 'r's in "strawberry" to getting gold at IMO in under 2 years [0], and people keep repeating the same clichés such as "LLMs can't reason" or "they're just next token predictors".

At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.

[0] https://x.com/alexwei_/status/1946477742855532918

replies(1): >>44616278 #
20. charleshn ◴[] No.44614230[source]
You can now add getting gold at IMO [0] to the above list.

[0] https://x.com/alexwei_/status/1946477742855532918

replies(1): >>44616028 #
21. Tainnor ◴[] No.44614473[source]
Mathematics cannot be "solved", that's a consequence of Gödel's First Incompleteness Theorem.

It can already be "cheaply verified" in the sense that if you write a proof in, say, Lean, the compiler will tell if you if it's valid. The hard part is coming up with the proof.

It may be possible that some sort of AI at some stage becomes as good, or even better than, research mathematicians in coming up with novel proofs. But so far it doesn't look like it - LLMs seem to be able to help a little bit with finding theorems (e.g. stuff like https://leansearch.net/), but to my understanding they are rather poor beyond that.
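To make the "cheaply verified" half concrete, here is a tiny Lean 4 example: the compiler either accepts this proof term or rejects a wrong one mechanically, but producing the term is the part that stays hard.

```lean
-- The checker verifies this instantly; finding `Nat.add_comm` is the human's job.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```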

22. bwfan123 ◴[] No.44616028[source]
On the surface this is a great achievement - if it holds. AlphaGeometry required 1) human formalization of the question and 2) a solver for geometry.

If the questions were given as-is (without a human formalizing them), the LLM didn't need domain solvers, and the LLM was not trained on them already (which happened with FrontierMath) - then I would be impressed.

Based on the past history with FrontierMath [1][2], I remain skeptical. The skeptic in me says that this happens prior to big announcements (GPT-5) to create hype.

Finally, this article shows that LLMs were just bluffing on USAMO 2025 [3].

[1] https://www.reddit.com/r/slatestarcodex/comments/1i53ih7/fro...

[2] https://x.com/DimitrisPapail/status/1888325914603516214

[3] https://arxiv.org/pdf/2503.21934

23. bwfan123 ◴[] No.44616278{4}[source]
> At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.

Based on the past history with FrontierMath & AIME 2025 [1][2], I would not trust announcements which can't be independently verified. I am excited to try it out, though.

Also, the performance of LLMs was not even bronze [3].

Finally, this article shows that LLMs were just mostly bluffing [4].

[1] https://www.reddit.com/r/slatestarcodex/comments/1i53ih7/fro...

[2] https://x.com/DimitrisPapail/status/1888325914603516214

[3] https://matharena.ai/imo/

[4] https://arxiv.org/pdf/2503.21934