
Look, Another AI Browser

(manuelmoreale.com)
220 points v3am | 2 comments
codeflo No.45672997
To find out what someone truly believes, don't listen to what they say; observe how they act. I don't see how OpenAI's recent actions make any sense from the perspective of a company that internally believes it's actually close to unlocking super-intelligence.
replies(6): >>45673046 #>>45673380 #>>45673815 #>>45674481 #>>45674718 #>>45675017 #
bloppe No.45674718
I'd go a step further: has OpenAI actually achieved any significant research breakthrough on par with Google's transformers? If not, why does everybody think they'll achieve the next N breakthroughs necessary to get to AGI?
replies(1): >>45674804 #
1. in-silico No.45674804
They basically invented LLMs as we know them (autoregressive transformers trained on web data) with the GPT-1/2/3 series of papers.
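
In case "autoregressive" sounds fancier than it is: the pretraining objective is just next-token prediction over web text. A toy sketch (hypothetical toy model, nowhere near OpenAI's actual code; a single linear layer stands in for the transformer stack):

    # Next-token prediction: shift inputs by one token and minimize cross-entropy.
    import torch
    import torch.nn as nn

    vocab_size, d_model = 100, 32
    embed = nn.Embedding(vocab_size, d_model)   # token embeddings
    head = nn.Linear(d_model, vocab_size)       # stand-in for the transformer + output head

    tokens = torch.randint(0, vocab_size, (1, 16))  # pretend web-text token ids
    hidden = embed(tokens[:, :-1])                  # inputs: positions 0..n-2
    logits = head(hidden)                           # predictions for positions 1..n-1
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
    )
    print(loss.item())

The GPT papers are basically that objective scaled up to a real transformer and web-scale data.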

They also pioneered reasoning models, which are probably the biggest breakthrough since GPT-3 on the AGI tech tree.

replies(1): >>45678274 #
2. bloppe No.45678274
Google invented transformers. OpenAI just released their model to the public first. Good for them, but not exactly impressive research.

Reasoning models are pretty cool, but they just take what everybody was already doing manually ("and please show your work") and make it automatic. The whole agentic shift is also nice, but kinda obvious. And I'm still struggling with hallucinations and context rot all the time; it's becoming increasingly clear that those aren't problems that can be solved incrementally. We need more architectural breakthroughs like the transformer to get to anything like real AGI. Possibly several more.
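
To be concrete about what I mean by "making it automatic", here's a caricature of manual chain-of-thought prompting vs. what a reasoning model does on its own (ask() is a hypothetical stand-in, not any real API):

    # Manual vs. automatic chain-of-thought, in caricature.
    def ask(model: str, prompt: str) -> str:
        # Hypothetical stand-in for a chat-completion call; just echoes here.
        return f"[{model} answering: {prompt!r}]"

    question = "A train going 60 mph leaves at 3pm. When has it covered 90 miles?"

    # Before: the user bolts the step-by-step instruction onto the prompt.
    manual = ask("plain-llm", question + "\nLet's think step by step, then give the answer.")

    # After: a reasoning model generates those intermediate steps on its own
    # before emitting the final answer; the user just asks the question.
    automatic = ask("reasoning-llm", question)

    print(manual)
    print(automatic)

The interesting part is the post-training that gets the model to produce those intermediate steps unprompted, but that's a training recipe, not a new architecture.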