197 points baylearn | 2 comments
empiko No.44471933
Observe what the AI companies are doing, not what they are saying. If they expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales, when you will be operating AGI in a few short years? Surely, all resources should go towards that goal, as it is supposed to usher humanity into a new prosperous age (somehow).
replies(9): >>44471988 #>>44471991 #>>44472148 #>>44472874 #>>44473259 #>>44473640 #>>44474131 #>>44475570 #>>44476315 #
imiric No.44473259
Related to your point: if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?

This is the main point that proves to me that these companies are mostly selling us snake oil. Yes, there is a great deal of utility from even the current technology. It can detect patterns in data that no human could; that alone can be revolutionary in some fields. It can generate data that mimics anything humans have produced, and certain permutations of that can be insightful. It can produce fascinating images, audio, and video. Some of these capabilities raise safety concerns, particularly in the wrong hands, and important questions that society needs to address. These hurdles are surmountable, but they require focusing on the reality of what these tools can do, instead of on whatever a group of serial tech entrepreneurs looking for the next cashout opportunity tell us they can do.

The constant anthropomorphization of this technology is dishonest at best, and harmful and dangerous at worst.

replies(4): >>44473413 #>>44474036 #>>44474147 #>>44474204 #
richk449 No.44474147
> if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?

Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.

As far as I can tell, smart engineers are using AI tools, particularly people doing coding, but even those in non-coding roles.

The criticism feels about three years out of date.

replies(10): >>44474186 #>>44474349 #>>44474366 #>>44474767 #>>44475291 #>>44475424 #>>44475442 #>>44475678 #>>44476445 #>>44476449 #
1. imiric No.44474366
Not at all. The reason it's not talked about as much these days is that the prevailing way to work around it is to use "agents", i.e. to continuously prompt the LLM in a loop until it happens to generate the correct response. This brute-force approach is hardly a solution, especially in fields that don't have a quick way of verifying the output. In programming, trying to compile the code can catch many (but definitely not all) issues. In other science and humanities fields this is just not possible, and verifying the output is much more labor intensive.
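
For concreteness, here's a minimal sketch of that generate-and-verify loop, assuming a hypothetical generate_code() wrapper around whatever model API you use, with Python byte-compilation standing in as the cheap verifier:

    import subprocess
    import tempfile

    def generate_code(prompt, feedback=None):
        # Stand-in for the LLM call; a real agent would send the prompt
        # (plus any compiler feedback) to a model API here.
        raise NotImplementedError("wire up a model client")

    def compiles(source):
        # Cheap verifier: try to byte-compile the candidate.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
            path = f.name
        result = subprocess.run(
            ["python", "-m", "py_compile", path],
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stderr

    def agent_loop(prompt, max_attempts=5):
        feedback = None
        for _ in range(max_attempts):
            candidate = generate_code(prompt, feedback)
            ok, errors = compiles(candidate)
            if ok:
                return candidate  # passed the cheap check; may still be wrong
            feedback = errors     # feed the failure back and try again
        return None               # brute force gave up

The compiler only proves the code parses, not that it's correct, which is exactly why fields without even that cheap check are worse off.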

The other reason is that the primary focus of the last 3 years has been scaling up the data and hardware, with a bunch of (much needed) engineering around it. This has produced better results, but it can't sustain the AGI promises for much longer. The industry can only survive on shiny value-added services and smoke and mirrors for so long.

replies(1): >>44475339 #
2. majormajor No.44475339
> In other science and humanities fields this is just not possible, and verifying the output is much more labor intensive.

Even just in industry, I think data functions at companies will have a dicey future.

I haven't seen many places where there's scientific peer review - or even software-engineering-level code review - of findings from data science teams. If the data science team says "we should go after this demographic" and it sounds plausible, it usually gets implemented.

So if the ability to validate was already missing even pre-LLM, what hope is there for validation of the LLM-powered replacement? And so, what hope is there of the person doing the non-LLM version keeping their job (at least until several quarters later, when the strategy either proves itself out or doesn't)?

How many other departments are there where the same lack of rigor already exists? Marketing, sales, HR... yeesh.