
114 points by cmcconomy | 1 comment
anon291 ◴[] No.42174879[source]
Can we all agree that these models far surpass human intelligence now? I mean, they process hours' worth of audio in less time than it would take a human to even listen. I think the singularity passed and we didn't even notice (which would be expected).
replies(11): >>42174949 #>>42174987 #>>42175002 #>>42175008 #>>42175019 #>>42175095 #>>42175118 #>>42175171 #>>42175223 #>>42175324 #>>42176838 #
futureshock ◴[] No.42175118[source]
I actually do think you have a solid point. These models fall short of AGI, but that might be more of an OODA-loop agentic tweak than anything else.

At their core, the state-of-the-art LLMs can basically do any small-to-medium mental task better than I can, or get so close to my level that I've found myself no longer thinking through things the long way. For example, if I want to run some napkin math on something, like I recently did some solar battery charge time estimates, an LLM can get to a plausible answer in seconds that would have taken me an hour.
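
To make that concrete, here's roughly the kind of napkin math I mean, as a quick Python sketch; every number in it (battery size, panel rating, losses, sun hours) is a made-up placeholder, not a figure from my actual estimate:

```python
# Rough sketch of the napkin math in question: solar battery charge time.
# All numbers below are made-up placeholders, not real system specs.
battery_capacity_wh = 5000     # usable battery capacity, watt-hours (assumed)
panel_rated_w = 400            # panel nameplate rating, watts (assumed)
system_efficiency = 0.85       # controller + wiring losses, rough guess
peak_sun_hours = 4.5           # average peak-sun-hours per day at the site (assumed)

daily_input_wh = panel_rated_w * system_efficiency * peak_sun_hours
days_to_full = battery_capacity_wh / daily_input_wh

print(f"~{daily_input_wh:.0f} Wh harvested per day")
print(f"~{days_to_full:.1f} days to charge from empty")
```

Trivial arithmetic, but looking up sensible values and sanity-checking the units is exactly the part an LLM turns from an hour into seconds.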

So yeah, in many practical ways, LLMs are smarter than most people in most situations. They have not yet far surpassed all humans in all situations, and there are still some classes of reasoning problems they seem to struggle with, but to a first-order approximation we do seem to be mostly there.

replies(2): >>42175190 #>>42175233 #
1. anon291 ◴[] No.42175190[source]
> For example, if I want to run some napkin math on something, like I recently did some solar battery charge time estimates, an LLM can get to a plausible answer in seconds that would have taken me an hour.

Exactly. I've used it to figure out geometry problems for everyday things (carpentry), market-sizing estimates for business ideas, etc. Very fast turnaround. All the doomers in this thread are just ignoring the amazing utility these models provide.
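
For a sense of the carpentry case, the geometry I'm asking for is roughly this simple; the dimensions below are hypothetical, not from a real project:

```python
# The kind of quick geometry check I mean: diagonal length and corner angle
# for a rectangular frame. Dimensions are hypothetical placeholders.
import math

width_in = 36.0    # frame width, inches (made-up)
height_in = 24.0   # frame height, inches (made-up)

diagonal_in = math.hypot(width_in, height_in)               # Pythagorean theorem
angle_deg = math.degrees(math.atan2(height_in, width_in))   # angle between diagonal and long side

print(f"diagonal ≈ {diagonal_in:.2f} in")
print(f"diagonal-to-long-side angle ≈ {angle_deg:.1f}°")
```

Nothing a tape measure and a calculator couldn't do, but being able to describe the problem in plain language and get the answer back immediately is the point.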